Hardware Upgrade

Status
Not open for further replies.

Mat Burchett (Cadet, joined May 16, 2017, 8 messages)
Hi Everyone

I have used FreeNAS for a while now, but I'm after some advice.

I am using VMware with 2 hosts, each with direct connections to 2 x FreeNAS boxes over iSCSI (specs below).

I am running 10 VMs across the 2 hosts, split between both storage boxes. I have noticed that the VMs aren't running very well; for example, I can log on to a VM, click Start, and then wait 5 seconds for the Start menu to appear.

When looking at the reporting on FreeNAS, the network cards are only using about 200 Mb/s out of 1,000 Mb/s.

What I am after is this:

Could I use SSD caching on FreeNAS to help?
Is it worth just buying a new box with, say, 64 GB of RAM and using RAID? I would prefer RAID 10, but it hurts to lose half my drives for a performance boost.

Something I have thought of is building a new box with 4 x 480 GB SSDs in RAID 10, placing only my OS disks on these and the data on the other stores.

I'd be grateful for any help, advice, or diagnostics.

HP Micro Server
4 x 3 TB WD Red
16 GB RAM
i3 CPU

HP Micro Server
4 x 3 TB WD Red
8 GB RAM
Celeron CPU

In fairness, they both perform similarly.
 

jgreco (Resident Grinch, joined May 29, 2011, 18,680 messages)
https://forums.freenas.org/index.php?threads/zfs-fragmentation-issues.11818/

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

64GB of RAM and mirrors should be considered the floor, meaning entry level, for any vaguely serious effort to store VMs on FreeNAS and ZFS. I frequently write about the impact and interactions of VMs with ZFS and fragmentation on these forums. I don't have to write anything *new*; I mostly just quote the stuff I already wrote, because this mostly hasn't changed in years.

But to summarize for you:

You mitigate write speed loss due to pool fragmentation by maintaining low pool occupancy, which means that your 4 x 3TB pools create a 6TB pool in mirrors, of which you can use maybe 2TB if you want performance to remain acceptable over time. Fragmentation generally still increases over time, but having lots of free space mitigates the worst effects.

You mitigate read speed loss on a highly fragmented pool by having gobs of ARC and probably lots of L2ARC too. If you have 2TB of data out there and maybe a 200GB working set size, you want that all in ARC/L2ARC. This is a fairly easy target to hit for a 64GB filer.
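
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The 4 x 3TB mirror layout, the ~2TB comfortable-occupancy figure, and the 200GB working set are the numbers from this post; the ARC and L2ARC sizes are just round-number assumptions for illustration, not measurements from any real box:

    # Back-of-the-envelope sizing for a VM block-storage pool (illustrative only).
    TB = 1000 ** 4   # drive vendors count in decimal bytes
    GB = 1000 ** 3

    # 4 x 3TB drives as two 2-way mirrors: half the raw space is usable.
    pool = 4 * 3 * TB / 2                 # ~6TB of pool space
    comfortable = pool / 3                # keep occupancy low: ~2TB, as above
    print(f"pool {pool / TB:.0f} TB, comfortable data ~{comfortable / TB:.0f} TB")

    # Working-set rule of thumb: you want working_set <= ARC + L2ARC.
    working_set = 200 * GB                # example figure from this post
    arc = 48 * GB                         # assumed ARC on a 64GB-RAM filer
    l2arc = 256 * GB                      # assumed SSD read cache
    print("working set fits in ARC + L2ARC:", working_set <= arc + l2arc)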
 

Mat Burchett


Thanks for the advice.

I am currently using RAID 5 as I only lose 1 disk per box.

Option 1: build a NAS that will take 10 x 3 TB drives and maybe RAID 10 them.
Option 2: build a separate box with 4 x 1 TB drives and 16 GB RAM in RAID 10, just for the OS disks.
 

jgreco


Hopefully you're not actually using "raid 5".

https://forums.freenas.org/index.php?resources/terminology-and-abbreviations-primer.37/

With ZFS, you want to be using RAIDZ, because it gives you the ability to correct errors in addition to the ability to survive failures.

If you're going to buy new drives, make sure to buy something larger than 3TB drives. Leaving ZFS lots of free space makes it faster.
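
As a rough illustration of why the larger drives help: the same amount of VM data sits at a much lower occupancy on a bigger pool, and low occupancy is what keeps ZFS fast. A minimal sketch, reusing the 2TB data figure from earlier in the thread (the drive sizes are just examples):

    # Same data, bigger drives -> lower pool occupancy (illustrative only).
    TB = 1000 ** 4
    vm_data = 2 * TB                      # example data set from this thread

    for drive_tb in (3, 6, 8):
        pool = 4 * drive_tb * TB / 2      # 4 drives as two 2-way mirrors
        print(f"4 x {drive_tb}TB mirrors: {pool / TB:.0f} TB pool, "
              f"{vm_data / pool:.0%} full with 2TB of VM data")
    # 4 x 3TB -> 6 TB pool, 33% full
    # 4 x 6TB -> 12 TB pool, 17% full
    # 4 x 8TB -> 16 TB pool, 12% full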
 

Mat Burchett


What's funny is that the storage box with less memory and the slower CPU performs better than the higher-spec box.

Also, that lower-spec box is 96% full but still runs faster. I am doing some more testing.

I am not buying new drives; I have 8 x 3 TB drives, so I would like to fill them before I buy more disks.
 

Mat Burchett

I am using RAIDZ (the RAID 5 equivalent).

Can you not use an SSD for the ZFS transactions or something?
 

jgreco

Please feel free to read the quoted articles above. These explain:

1) Why we do not use RAIDZ for this

2) The role that SSD can play in all this

3) Why 96% full is an incredibly bad thing (and why you are actually experiencing very good performance right now, all things considered)
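
To put a number on point 3, here's a small sketch of how little room ZFS has left to lay down transaction groups at 96% occupancy, compared with the low-occupancy targets discussed above. The pool size is simply 4 x 3TB in RAIDZ1, ignoring formatting and metadata overhead:

    # Free space remaining at different occupancy levels (illustrative only).
    TB = 1000 ** 4
    GB = 1000 ** 3

    pool = 4 * 3 * TB * 3 / 4             # 4 x 3TB RAIDZ1: one drive's worth goes to parity
    for occupancy in (0.50, 0.80, 0.96):
        free = pool * (1 - occupancy)
        print(f"{occupancy:.0%} full -> {free / GB:,.0f} GB free for new writes")
    # 50% full -> 4,500 GB free
    # 80% full -> 1,800 GB free
    # 96% full ->   360 GB free, and badly fragmented free space at that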
 

Mat Burchett


Thanks, I'll have a read.

What's annoying with ZFS is that you lose space from the get-go: 3 TB drives format to roughly 2.8 TB each (x 4), then with RAIDZ you lose 1 drive, and then you're saying you also have to leave free space for ZFS.

So what you're actually getting is 7.4 TB.

Where I work we have QNAPs with SSD caching, and they work really well. I like FreeNAS because if I have a hardware failure I can chuck it all into another system to get it working again; however, there's this whole space thing where you lose so much storage.
 

jgreco

3TB drives are roughly 2.8TB by *anyone's* count. That's a function of drive manufacturers having switched from powers of 2 (2^20 bytes) to powers of 10 (10^6 bytes) for counting megabytes.

The thing about RAID5 (not RAIDZ) is that you get what I call chorus line behaviour. All of your drives have to be involved in most operations, and they are seeking in unison. This is hell on performance. RAIDZ is a little more intelligent about this, sometimes.

But the real thing is that the QNAP is always going to hit a performance limit that is dictated by the ability of its disks to seek. Even if you use SSD for read caching, this will be true for writes. ZFS, on the other hand, allows for some interesting tricks. Because a transaction group tends to want to get laid down as a contiguous group of blocks, if you have lots of free space on the pool, it is likely that even totally random writes will be coalesced into a sequential write to the pool, eliminating seeks. This can make a HDD-based pool feel and perform like it is SSD.
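
Here's a toy illustration of that write-coalescing point; this is a conceptual model only, not how ZFS actually allocates, and the transaction group size is a made-up number:

    # Conceptual seek-count comparison (toy model, not real ZFS behaviour).
    n_writes = 1000                       # random small writes arriving from a VM
    txg_size = 100                        # writes gathered into one transaction group (assumption)

    seeks_in_place = n_writes                        # in-place RAID: roughly 1 seek per random write
    seeks_coalesced = -(-n_writes // txg_size)       # ZFS with ample free space: roughly 1 seek per txg

    print(seeks_in_place, "seeks in place vs", seeks_coalesced, "seeks when coalesced")
    # The advantage only holds while the pool has large contiguous runs of free space.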

If you give ZFS the resources it was designed for, it offers some amazing performance opportunities. You might find a Synology or QNAP with some RAID5 SSD to be a cheaper way to fast-ish performance, though.

If it makes you feel any better, at sites where policy calls for strict redundancy you have to do three-way mirroring, with sparing as well. Once the merry-go-round stops, you can easily find yourself burning up a 24-drive array of 2TB disks (48TB raw) into 7 three-way vdevs for 14TB of pool space, of which you really can't use more than about 6-7TB, and if you really need stellar performance, only around 2-3TB of usable space. From 48TB of raw space. But it'll be faster than anything you can do with the QNAP, and more reliable too.
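
Worked through quickly (the drive counts, mirror width, and usable fractions below are just the figures from the paragraph above):

    # The strict-redundancy example, worked through (figures from this post).
    TB = 1000 ** 4

    drives, drive_tb, spares, mirror_width = 24, 2, 3, 3
    vdevs = (drives - spares) // mirror_width        # 7 three-way mirror vdevs
    raw = drives * drive_tb * TB                     # 48TB raw
    pool = vdevs * drive_tb * TB                     # 14TB of pool space

    print(f"raw {raw / TB:.0f} TB -> pool {pool / TB:.0f} TB")
    print(f"comfortable use ~{0.5 * pool / TB:.0f} TB, "
          f"stellar-performance use ~{0.2 * pool / TB:.0f} TB")
    # raw 48 TB -> pool 14 TB; ~7 TB comfortable, ~3 TB if you need it really fast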
 

Mat Burchett


It does make me feel better, lol. This is a home environment; I just want things to be a bit quicker when logged in to a virtual machine.

Work-wise we have QNAPs, but we're using RAID 10 and 2 x 1 TB SSDs. I can't afford that setup, lol.
 

jgreco

I don't have any idea what kind of QNAP you'd be able to get for that price.

For low-power mid-tier iSCSI storage, we've had good success with the Synology DS416slim. It's a 2.5" unit with 4 bays and dual 1GbE Ethernet, around $280. I loaded up a SanDisk Ultra II 960GB and a PNY CS1311 960GB in RAID1, around $450, and then another two 2TB 2.5" HDDs, around $200, so that's 1TB of redundant SSD and 2TB of redundant HDD for less than $1000.

It's good for about 50MBytes/sec, which isn't exactly snappy, but mostly we don't need speed here so much as we needed to reduce disk contention due to seeks. We have four of these units forming our mid-tier shared storage. They're pretty solid: a year's uptime with no crashes.

I kinda went this way because I was expecting 2.5" SSD prices to continue to fall, but that hasn't happened. We do most of our heavy lifting here, such as image builds, on DAS RAID1, and then the shared storage is a good place to migrate low-I/O loads. As with most things, it is a compromise solution.
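
For what it's worth, the parts above add up like this (prices as quoted, rounded):

    # Rough cost summary of the DS416slim setup described above.
    parts = {
        "DS416slim, 4 bays, dual 1GbE": 280,
        "2 x 960GB SSD in RAID1": 450,
        "2 x 2TB 2.5in HDD in RAID1": 200,
    }
    total = sum(parts.values())
    print(f"~${total} for 1TB redundant SSD + 2TB redundant HDD at ~50 MBytes/sec")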
 