SOLVED 30 Disk enclosure 14TB disks - 1 Pool with 3 or 4 RAIDZ and Spare Disks?

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
Good day, I would appreciate some advice on this setup I am doing.
We have a 30-bay JBOD enclosure (14TB Seagate 7200RPM Enterprise disks) connected via 12Gbps SAS to an HP ML350P Gen8 with the following spec:
2x Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
32GB RAM
10Gbps Dual Port Ethernet.

I thought of doing 1 pool with 3x 10-disk RAIDZ1 vdevs and 1x spare disk for the pool.
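Roughly what I have in mind, as a sketch only (the pool name and device names below are placeholders, not our actual devices):

# 3x 10-disk RAIDZ1 vdevs plus a hot spare -- note that this adds up to 31 disks,
# so the spare would occupy a bay beyond the 30 data disks
zpool create tank \
  raidz1 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz1 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz1 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
  spare da30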

I need maximum capacity, with obviously acceptable redundancy.

My questions are:
1. Will the 3x vdevs increase write speed to the pool?
2. With RAIDZ1, each vdev will tolerate 1x disk failure. In the event of a second disk failure in the same vdev, will the spare be able to keep the vdev up?

Any advice will be helpful!

Thanx!
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hi,

A 10-disk RAIDZ1 is asking for trouble. I would rather go RAIDZ2 instead.
With RAIDZ1, if one disk fails you can use the spare to replace the failed disk, but if one or more disks within the same vdev fail while it is resilvering, your entire pool becomes unavailable.
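As a sketch only (same placeholder pool and device names), the RAIDZ2 version of that layout is just a change of vdev type at creation time:

# Same 3-vdev layout and hot spare, but each vdev is raidz2 instead of raidz1
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
  spare da30

You give up one more disk of capacity per vdev, but each vdev can then survive two failed disks.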

I would increase RAM capacity.
 

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
Thank you Apollo,
RAIDZ2 is good for me.
Is there a best practice or calculator for working out the required memory?
This unit will be used to store backup files created by Veeam and will be accessed using iSCSI.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Depending on the use case and the size of your backups, you may need more RAM, or you could max out your system and still never have enough.
RAM is used to cache files as they are accessed.
If newly accessed data is to be cached and RAM is full, the system has to flush older cached data to make room.

If, on the other hand, your most-used backup can be cached entirely in RAM, the system will have no need to go to the pool and you will have top performance.
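If you want a rough sense of how much of that caching is happening on a given box, you can check the ARC size and hit/miss counters from the FreeNAS shell (assuming the standard FreeBSD ZFS sysctls):

# Size of the ARC (the ZFS read cache held in RAM), in bytes
sysctl kstat.zfs.misc.arcstats.size
# Cache hit/miss counters since boot
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses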
30 disks in 3 vdevs of 10 disks each in RAIDZ2 would give you roughly 336TB of raw storage space (3 vdevs x 8 data disks x 14TB).

RAM requirements depend on your use case.
 

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
This does make sense.

Last question:
Because this solution has a 10Gbps network, I would like the write speed to be as close to that as possible.
If I am not mistaken, a 7200RPM disk gets about 150 megabytes per second -- the more hard disks in your vdev, the faster your READ speed, but write speed is stuck at 1x (single-disk) speed.
Am I correct to say that more vdevs in the same pool will increase write speed?

With that, thank you VERY MUCH for your knowledge and insight!
 

Fredda

Guru
Joined
Jul 9, 2019
Messages
608
You might want to check the iXsystems "Six Metrics for Measuring ZFS Pool Performance" blog posts, Part 1 and Part 2.
They cover the topics you ask about pretty well, with a lot of examples.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
For Veeam you will want to format with ReFS 64K for faster merge and synthetic full performance. It's significantly more performant.
For a repository that is strictly handling writes, more RAM isn't going to make much difference; you will want a lot of RAM on your backup proxy instead.

For iSCSI you might want a SLOG for better write performance, and keep in mind the recommendation not to fill an iSCSI pool over 50%. This will likely have less impact for a Veeam repository; however, you will have some fragmentation due to merging backups etc. In any event, do not treat it like an SMB share and fill it to 80%+.
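As a rough sketch of both pieces (the pool, device and dataset names and the size are placeholders, and whether a SLOG pays off depends on how much of the iSCSI traffic ends up as sync writes):

# Dedicated SLOG on a fast, power-loss-protected SSD/NVMe device
zpool add tank log nvd0
# Sparse zvol to export as the iSCSI extent; keep its fill level well below pool capacity
zfs create -s -V 160T tank/veeam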
 

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
Thank you, our current backups are stored on ReFS; we will definitely configure it for this implementation.
If iSCSI does not like more than 50% capacity utilisation, what solution would you recommend?
NFS is slow, SMB does not deal nicely with VMware...
Fibre Channel?
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
iSCSI is your best choice; the issue is that iSCSI doesn't like fragmentation. It wants lots of contiguous space to write to.

For VM storage, 50% is a good target to shoot for. I think for a Veeam repository you can go much higher, as you won't have the same amount of fragmentation, but it's not likely you could go as high as 80% without performance impacts due to fragmentation.

Your next best bet is to scale out rather than up. Are you using scale-out repositories?
 

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
The JBOD we are using is a 60-bay enclosure, but we are only using 30 disks due to current cost constraints.
The idea is to fill the enclosure when we hit capacity issues, as we have the ability to extend it.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379