Dell R720XD build question.

SmoothRunnings

Dabbler
Joined
Aug 18, 2016
Messages
48
I am looking to replace my R510 with an R720XD 26-bay system (24 front bays plus 2 rear). I guess the question is: what's the best way to utilize all 24 front drives and get the best performance?

The server will be connected directly to my ESXi 6.7 host, hopefully over iSCSI at 10Gbit or better.

Thanks
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
For virtualization use you're going to want as many vdevs as possible; the optimum config for performance would be to use the 24 front drives (not counting the 2 rear bays) as 12 two-drive mirrored vdevs. Obviously this configuration sacrifices half of your overall storage capacity.

An alternative that would be almost* as fast would be to create 8 three-drive RAIDZ1 vdevs - this would reduce your usable capacity by about a third (8 of the 24 drives go to parity).

If you're more concerned about overall capacity, go with 2 twelve-drive RAIDZ2 vdevs - this would definitely be a performance hit, but you're only sacrificing 4 drives' worth of space to redundancy. Depending on your VMs' usage pattern and the amount of RAM in the system, this might* work out OK for you.
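If it helps, here's roughly what those two layouts look like at the command line. This is just a sketch to show the vdev groupings - the pool name "tank" and the da0-da23 device names are placeholders, and in FreeNAS you'd normally build the pool through the GUI rather than by hand:

# 12 x 2-way mirrors - best IOPS, 50% usable capacity
zpool create tank \
  mirror da0 da1    mirror da2 da3    mirror da4 da5    mirror da6 da7 \
  mirror da8 da9    mirror da10 da11  mirror da12 da13  mirror da14 da15 \
  mirror da16 da17  mirror da18 da19  mirror da20 da21  mirror da22 da23

# 2 x 12-drive RAIDZ2 - best capacity, noticeably fewer IOPS
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23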

You don't mention what kind of disks you're using, but if you're using any type of spinning media you're going to want to throw as much RAM into the system as possible (for serious virtualization use I'm talking 128GB and up), and most likely use a fast SSD for L2ARC. An SSD for the ZIL won't have any effect over iSCSI unless you set your pool to sync=always.

With more detailed hardware specs and projected usage expectations, we could provide better advice on how to configure the drives.
 

SmoothRunnings

Dabbler
Joined
Aug 18, 2016
Messages
48
Would there be as big of a performance hit if I did the 2 x 12-drive RAIDZ2 configuration with SSDs? I don't think I need 128GB+ if I'm going to plug the FreeNAS directly into ESXi and use iSCSI, do I?

Thanks,
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Would there be as big of a performance hit if I did the 2 x 12-drive RAIDZ2 configuration with SSDs? I don't think I need 128GB+ if I'm going to plug the FreeNAS directly into ESXi and use iSCSI, do I?

Thanks,

ZFS is a copy-on-write (COW) file system, which means it isn't ideally suited to block-level sharing protocols (like iSCSI). With iSCSI you share a block of storage and let the initiating OS manage the file system (in this case the initiating OS would be ESXi). To present this block of storage, you create either a file-based or zvol-based extent within FreeNAS. When data is originally written to these extents it's going to be fairly sequential; however, as data is updated, the COW nature of ZFS means the updated blocks are written to a new location on your drives and the original data is left in place (this is what makes things like snapshots incredibly easy with ZFS). Because you're not changing all of the data at once, fragmentation occurs.

There's absolutely no getting around the fact that using ZFS as the back end for iSCSI storage is going to lead to fragmentation - the severity of the issue will depend entirely on the types of VMs you're running (i.e. mostly static web servers vs. highly active database servers), but the bottom line is it's going to happen. ZFS uses RAM for the primary ARC (read cache), and having large amounts of RAM/ARC means that as your pool starts to fragment, the impact is going to be far less apparent to your VMs.
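As a rough sketch of what that looks like in practice - the pool name, zvol name, and sizes below are just placeholder examples, and in FreeNAS you'd normally create the zvol and extent from the GUI rather than the shell:

# sparse zvol to back an iSCSI extent for ESXi
zfs create -s -V 2T -o volblocksize=16k tank/esxi-extent

# watch the FRAG column grow as the extent gets rewritten over time
zpool list -o name,size,allocated,free,fragmentation,capacity tank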

The fact that you're considering going with all-SSD storage will also help to mitigate the effect of fragmentation, but the more RAM you can put in the system, the better off you're going to be. I would also point out that the cost of going with 24 SSDs is going to be fairly significant - a quick eBay search shows that you can get 128GB RAM kits for your server for under $300, and that is going to pay off in performance far more than a couple more SSD drives...

To directly answer your question: with an all-SSD pool in 2 twelve-drive RAIDZ2 vdevs I would still recommend at least 128GB of RAM, or more if this is going into a production environment. If this is for a home/lab setup (you still haven't provided details of what exactly you'll be doing with this setup) and you'll be running 10 or fewer VMs that aren't highly transactional (no busy database servers), you can definitely get away with less.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
The fact that you're considering going with all-SSD storage will also help to mitigate the effect of fragmentation, but the more RAM you can put in the system, the better off you're going to be. I would also point out that the cost of going with 24 SSDs is going to be fairly significant - a quick eBay search shows that you can get 128GB RAM kits for your server for under $300, and that is going to pay off in performance far more than a couple more SSD drives...

While generally true, it may not hold if the "hot" data cannot fit into RAM.
 

SmoothRunnings

Dabbler
Joined
Aug 18, 2016
Messages
48
RickH

I have a question for you which is a bit off topic but related. I have an R510 right now with 12 x 1TB Dell SAS drives in RAIDZ2 and 64GB of RAM. If I were to upgrade this server to 128GB of RAM now, would there be any benefit? The R720XD is for down the road, but I need to start thinking about how it would play out so I can make sure I have all the right pieces in place; my current config is an R510.

Thanks

Setup:
Dell R510
64GB of RAM
2 x E5645 CPUs
12 x 1TB Dell 7200RPM SAS drives (raidZ2)
2 x Crucial USB thumb drives mirrored in FreeNAS for the OS
1 x PCIe Samsung 970 EVO (L2ARC cache)
Onboard Broadcom used for LAN
Intel dual 10Gbit fiber used for the direct connection to my ESXi host.

The host:
Dell R810
128GB of RAM
4 x E7-4630 CPUs
2 x SDHC cards mirrored with ESXi 6.7
Onboard Broadcom used for ESXi LAN connections
Intel dual 10Gbit fiber (connected to FreeNAS)
 

sokoloff

Dabbler
Joined
Sep 24, 2018
Messages
10
Hot data is the recently used/working set of data that is typically held in cache.
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Hey there,

Adding the extra 64GB of memory will increase your hot-data cache capacity. That's an extra 64GB of really fast read cache, so the question really is: do you want 64GB more of hot cache?
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Personally, I would go half and half, if the capacity is enough for you:
12 bays as mirrors (RAID 10 style): 6 x 2-drive mirror vdevs - 1st pool, for speed
12 bays in RAIDZ2: 2 x 6-drive RAIDZ2 vdevs - 2nd pool, for storage

But if you want IOPS, go 12 x mirror - 1 pool.

128GB RAM.
No ZIL or L2ARC drive - you do not need a ZIL or L2ARC because of the high number of drives, unless you use an M.2 NVMe, but that's not necessary from my point of view; roughly 1.2 GB/s will be your limit anyway (controller - 2 x 600 MB/s, backplane and SATA interface).

The PERC H310 can be flashed to IT mode; here is the link:
https://techmattr.wordpress.com/201...-flashing-to-it-mode-dell-perc-h200-and-h310/

I use a similar setup, but on an HP server: 24 bays as 12 x 2-drive mirrors - 1 pool.
I tried putting one SSD in the 25th bay - no difference in speed; write, IOPS, and read speeds were all unchanged.
128GB RAM in the server.
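If the SSD was going in as an L2ARC device, one way to check whether it is ever actually doing anything (the pool name is just a placeholder):

zpool iostat -v tank 5
# the "cache" section at the bottom shows how much of the SSD is
# allocated and how many reads it is actually serving

sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses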
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
ZFS is a copy-on-write (COW) file system, which means it isn't ideally suited to block-level sharing protocols (like iSCSI).
Fantastic answers. I hope you will contribute to the forum more in the future. Thank you!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have a question for you which is a bit off topic but related. I have an R510 right now with 12 x 1TB Dell SAS drives in RAIDZ2 and 64GB of RAM. If I were to upgrade this server to 128GB of RAM now, would there be any benefit?
Highly dependent on workload. How is the server being used? Have you looked at the ARC hit ratio?
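A quick way to check it from the shell, in case you haven't - these are standard FreeBSD ZFS counters, and the hit ratio is simply hits divided by (hits + misses):

sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# or, if it's installed, a formatted report:
arc_summary.py | less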
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I use a similar setup, but on an HP server: 24 bays as 12 x 2-drive mirrors - 1 pool.
I tried putting one SSD in the 25th bay - no difference in speed; write, IOPS, and read speeds were all unchanged.
128GB RAM in the server.
What kind of drives? How have you measured the performance? This is on FreeNAS?
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
What kind of drives? How have you measured the performance? This is on FreeNAS?

FreeNAS 11.1-U6
24 x SATA 2.5-inch disks, Seagate and WD mixed - it started with 10 Seagates and grew over time, 1 pool.
SAS 9207-8i controller connected with 2 x SFF-8087 to the HP SAS backplane.
Server: HP ProLiant SE326M1, 25-bay SFF model.
4850 IOPS read and 3770 IOPS write @ 4K random.
128GB RAM.
This storage is serving 25 production VMs:
3 are MySQL databases on Ubuntu
6 are MSSQL databases
The others range from domain controllers to other types of VMs, including an Apache 2 server hosting more than 20 websites.
I attached a screenshot.
Connection is by 2 x 4Gbit FC to an EMC 4100 switch. Each host is connected to the switch by 2 x 4Gb/s FC.
Round robin is used in VMware to the datastore.
Hosts are VMware 6.0U3 and 6.7.

When I added an SSD (Samsung EVO 860) I did not see any rise in real-life performance.
Oh, and it has been in production for over 3 years.
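If anyone wants to reproduce that kind of 4K random measurement, here's a sketch with fio, run from a test VM on the datastore (the file path, size, queue depth, and job count are just example values, and a local test like this won't show network-path limits by itself):

fio --name=randrw --filename=/mnt/test/fio-test.bin --size=4G \
    --rw=randrw --rwmixread=60 --bs=4k --ioengine=posixaio \
    --iodepth=16 --numjobs=4 --runtime=60 --time_based --group_reporting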
 

Attachments

  • 1.png (136 KB)
  • 2.png (10.1 KB)

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Drives are:
Seagate BarraCuda® 1TB, 5400rpm, 128MB cache, SATA III - 6 pcs
WD Blue WD10JPVX 1TB, 5400rpm, 8MB cache, SATA III - 8 pcs
The rest are older models.
 