iSCSI target for esxi hosts - best configuration for 4x 1tb 860 evos?

talz13

Dabbler
Joined
Oct 5, 2016
Messages
19
I'm working on a DIY vSAN-type configuration to run on my new physical FreeNAS host. I'm still in the process of setting it up, and trying to decide how to configure my 4x 1TB Samsung 860 EVO SSDs. Server specs are as follows:

* Dell R620 - 10-bay model
* 2x E5-2620 v2
* 96GB DDR3 (ECC, registered)
* 4x Samsung 860 EVO 1TB
* Chelsio T420-CR for 2x 10Gb (1x to each ESXi host)
* H310 flashed to IT mode (20.00.07 firmware) for the internal ports
* LSI 9207-8e for the external ports (to existing Lenovo SA120 12x 2TB array)

That leaves me with 6 internal 2.5" bays for expansion.

So for the 4x SSDs, what would be a recommended configuration for the pool if it's intended to be an iSCSI target for up to 2 ESXi hosts over 10Gb? I was thinking of RAID-Z to get 2.8-ish TB of storage, but I wasn't sure how that would affect expandability in the future.
 

jro

iXsystems
iXsystems
Joined
Jul 16, 2018
Messages
80
Personally, I'd do RAID10 (2x mirrored vdevs). You'll get more IOPS out of the pool that way, which is important for backing VM workloads. Check out these two posts if you want to read more on why 2x mirrored vdevs will give you better performance than a single Z1 vdev:
When you want to expand, just buy drives in increments of your vdev size (2 drives if you do mirrors) and add them to the pool.
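For reference, the command-line equivalent of that layout would look roughly like the sketch below. The pool name and device nodes are made up for illustration; on FreeNAS you'd normally build the pool through the GUI so it can apply its own partitioning and swap conventions:

```shell
# Hypothetical sketch -- pool name (ssdpool) and device nodes (da0-da5)
# are assumptions; substitute your own.

# Create a pool of 2x mirrored vdevs (RAID10-style) from the 4 SSDs:
zpool create ssdpool mirror da0 da1 mirror da2 da3

# Later expansion: add another mirrored pair, striping across 3 vdevs:
zpool add ssdpool mirror da4 da5
```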

If you really want to ball out, you might throw a 280GB Intel 900p in there as a SLOG. You can get one for ~$270, but they come with a code for Star Citizen that you can sell on eBay for $120-$140, making the net price closer to $150. VM workloads (especially chatty VMs) will usually benefit from a really fast SLOG.
 

talz13

Dabbler
Joined
Oct 5, 2016
Messages
19

Background on the ESXi hosts: they're currently running on RAID 10 (2x mirrors of 2x 4TB HGST SATA HDDs, striped), so I'm wondering how the IOPS of the RAID-Z of 4x SSDs would compare to the existing setup. I also have an old Intel S3700 200GB that I was planning to use as SLOG for the 12x 2TB storage array, but since I'm not doing any sync writes to that pool, it's sitting pretty much unused. I could move it over if it would help, or leave it alone if it wouldn't.
 

jro

iXsystems
iXsystems
Joined
Jul 16, 2018
Messages
80
Your current HDD-based array will give you roughly 4x the read IOPS and 2x the write IOPS of a single platter drive. If you do a 4-wide Z1 on flash, you'll get roughly the read and write IOPS of a single SSD. Your SSDs can probably do more than 4x the IOPS of your platter drives, so it might still be a speed upgrade. What kind of VMs are you running? Do you think they're bottlenecked at all by their current (4x HDD) storage system?
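To put rough numbers on that comparison, here's a back-of-envelope calculation using the usual rules of thumb (a striped-mirror pool reads from every disk but writes at one disk per mirror; a RAID-Z vdev does roughly one disk's worth of random IOPS). The per-device IOPS figures are illustrative assumptions, not benchmarks:

```shell
# Illustrative per-device random IOPS (assumptions, not measurements):
HDD_IOPS=150      # typical 7200rpm SATA drive
SSD_IOPS=10000    # conservative figure for a SATA SSD like the 860 EVO

# Current pool: 2x mirrored vdevs of HDDs (4 drives total)
echo "HDD striped mirrors: read ~$((4 * HDD_IOPS)), write ~$((2 * HDD_IOPS))"

# Proposed: 4-wide RAID-Z1 on SSDs -- roughly one SSD's worth of IOPS
echo "SSD 4-wide Z1:       read ~$SSD_IOPS, write ~$SSD_IOPS"

# Alternative: 2x mirrored vdevs of SSDs
echo "SSD striped mirrors: read ~$((4 * SSD_IOPS)), write ~$((2 * SSD_IOPS))"
```

Even with these rough numbers, the single-SSD IOPS of a 4-wide Z1 comfortably beats the whole 4x HDD pool, which is why it might still feel like an upgrade despite leaving performance on the table.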

For the SLOG, if your 12x2TB array doesn't do any sync writes, then the S3700 isn't doing much good there. However, that Intel drive is SATA-based and won't really be any faster (latency-wise) than the 860 EVO drives in your new VM pool. For a SLOG to be worthwhile, it should have significantly lower write latency than the pool disks. The ZFS intent log is kept in the pool by default, so taking it out of the pool and putting it onto a separate log device ("SLOG" device) with specs similar to those of the pool disks won't net you much of a benefit. You'd want something NVMe- or PCIe-based for an all-flash pool.
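A quick way to check whether a pool is actually seeing sync writes is to look at the `sync` property on the relevant datasets or zvols (and, for iSCSI-backed VM storage, some people force it on so acknowledged writes can't be lost on power failure, which is exactly when a fast SLOG pays off). Dataset names here are placeholders:

```shell
# Check the sync setting on a dataset/zvol (names are placeholders):
zfs get sync tank/vmstore

# Optionally force sync writes for iSCSI-backed VM storage --
# this is the setting that makes a low-latency SLOG matter:
zfs set sync=always tank/vmstore
```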

Realistically, you're gonna have great performance just by virtue of the fact that you're on all flash regardless of having a SLOG. Maybe set up a 4-wide Z1 pool with the SSDs and see what performance is like. If it's not fast enough, destroy the pool and remake it with 2x mirror vdevs. If it's still not fast enough, think about picking up a 900p for a SLOG.

As for the S3700, maybe switch it over to L2ARC duty on your 12x2TB pool? If you're truly not generating any sync writes, it's just a small, expensive space heater.
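If you do repurpose the S3700, the swap is straightforward at the command line. The pool name and device node below are assumptions; substitute your own:

```shell
# Pool name (tank) and device node (da10) are placeholders.
# Detach the S3700 from log (SLOG) duty on the HDD pool:
zpool remove tank da10

# Re-add it as an L2ARC (cache) device on the same pool:
zpool add tank cache da10
```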
 

talz13

Dabbler
Joined
Oct 5, 2016
Messages
19

I had a really good deal on 2x of them, so I currently have one as L2ARC and one as SLOG in the SA120 enclosure. I'm not sure I'll need any L2ARC with the new physical host, as it has 3x the RAM of my current FreeNAS VM, so that's one more thing I can eliminate.

Not to mention, getting rid of up to 8x HDDs of local storage from my servers could cut down on my power usage as well. I'm also trying to decide whether it would be worth letting the 12x 2TB drives spin down on idle, or if they would ever even hit that state. This is a home system, mainly running Plex, several Ubuntu VMs, a Win2k12 DC, and a couple other Windows VMs (one for a security-cam DVR, one for a desktop VM), and I'm basically the only user.
 