Hi All,
I have a question regarding the IOPS I see for the three drives that make up the primary pool where my data is stored on my FreeNAS rig. Specifically, two of the drives show the same “OPS” values, while the third always shows numbers that are higher by a significant percentage. All three drives are essentially the same (Western Digital RE 3 TB disks), but the one showing higher OPS is a slightly different SKU; I'm not 100% sure what the difference is, but more on that below.
I was wondering if this is normal, or if it's because there is a performance difference between the drives, and if so, whether the drive with the higher OPS is the faster or the slower one (personal curiosity).
These numbers come from the widget shown on the FreeNAS Corral dashboard.
A little background: I was looking to upgrade my home NAS, as the one I had been using was getting long in the tooth and had no redundancy, so I wanted a new NAS before that one failed (have no fear: the NAS was, and is, used as a backup of data, not the primary source). Not wanting to spend a lot of money, but still wanting the ability to run some basic applications off the NAS as I had been doing, I spent some time searching for deals and eventually found a WD DX4000 Sentinel for a very good price. However, it only came with two 3 TB drives, and if I was going to have redundancy I wanted at least RAID5. (I know now that a single parity disk with platter drives over 2 TB probably isn't the best idea due to resilvering times, but I didn't want to break the bank, and the Sentinel runs Windows, so I liked the idea of faster and allegedly more resilient RE drives.)
When the Sentinel was first released there were only a few RE drive SKUs it would work with, but WD later released a patch making all drives compatible. Since I couldn't find the 'allowed' RE drives, at least not for less than what I paid for the whole unit, I bought an RE drive that wasn't one of the approved SKUs, figuring the patch would let it work; it was still an RE drive with the same specs as far as I could tell.
Fast forward some time: the drives worked, but for some reason the WD ISO install didn't include the necessary AFP plugins, so Time Machine wouldn't work unless I used the iSCSI method. I tried that and it worked fine for me, but initiators cost money and other users didn't want the hassle of dealing with one. The Sentinel was also very slow even with the RAM doubled; running even one extra application taxed the system heavily, and the fan didn't cycle well, so it was loud. That led me to FreeNAS and all its wonderful possibilities. Unfortunately, I discovered FreeNAS and switched right as Corral came out; I've become enamored with Docker, and it works OK for now.
If I had started with FreeNAS from the get-go I probably would have bought Red drives or some HGST drives, but seeing as I had about $600 in RE drives that are 'supposed' to be more resilient, and that aren't from the same batch, I figured I'd reuse them as my main pool.
The drive with the higher OPS per the widget is the 'unapproved' drive (WD3000FYYG, I believe, is the full SKU; the FYYG part is the important bit). The other two drives show basically the same OPS, and I believe their SKU is WD3000F9MZ, though I'm not 100% sure.
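If it helps to confirm the exact SKUs rather than going from memory, the model strings can be read straight from the drives at the FreeNAS shell. This is just a sketch; the `ada0` device name is an assumption and will differ per system:

```shell
# List every attached disk with the model string it reports (FreeBSD/FreeNAS)
camcontrol devlist

# Show identity details (model number, firmware, serial) for one drive;
# replace ada0 with the actual device node from the listing above
smartctl -i /dev/ada0
```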
So, long story short (I work in metadata management in an architectural role, so I tend to be very verbose in an effort to be precise and provide proper context; my apologies): is this normal? And if so, I'd appreciate some background on why, out of curiosity.
If it's because of different performance characteristics, what factors would you guys think are the cause, and which drives perform better?
One of the things that made me really curious is that read/write operations seem to start and end at the same time on all the disks (granted, I don't know the widget's refresh rate, and I'm sure there's more to it than that). Since a RAIDZ vdev generally has the performance of a single disk, namely the slowest one, I figured that if the disks' performance were unequal, the OPS changes wouldn't happen in unison: when the faster disks finished, their OPS would drop while the slower one worked to catch up. Or maybe, in scenarios where a disk's full bandwidth isn't being used, a slower disk shows more OPS? (That feels wrong, since the peak OPS of that one disk seems higher by the same consistent margin under high load/traffic.)
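One way to take the Corral widget (and its unknown refresh rate) out of the picture is to watch per-disk operations directly from the shell. A sketch, where `tank` is a placeholder for your actual pool name:

```shell
# Per-vdev and per-disk read/write operations, refreshed every second;
# lets you compare the three drives' ops side by side over time
zpool iostat -v tank 1

# GEOM-level stats for physical disks only: ops/s, throughput, busy %
gstat -p
```

If the third drive still shows consistently higher ops here, it's a real device-level difference rather than an artifact of the dashboard.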
Anyway, thank you very much for any possible answers!
And some hardware/software info if it’s needed:
FreeNAS Corral 10.4 (latest build, I believe)
Pool is running with LZ4 compression and no dedup
Supermicro X11SSL-F
Intel Core i3-6100 (Skylake)
32 GB ECC Micron/Crucial RAM at 2133 MT/s across 2 UDIMMs
1 Supermicro 16 GB SATA DOM for the OS
3 x 3 TB WD RE 'datacenter' drives at 7200 RPM via SATA (1 RAIDZ vdev)
1 NIC used for the OS (i210); I also use the dedicated IPMI NIC
Recently added an Oracle/Sun F20 storage accelerator for use as an SLOG; all 4 FMODs are striped into a 96 GB log device, with nothing else attached to that HBA
I use the NAS for Time Machine backups and a couple of Windows backups.
In addition, I have Emby, Transmission, Nextcloud, and MySQL (for Nextcloud) running via Docker.
PS -
I was also wondering whether it might be worthwhile to use some of the FMODs from the F20 as an L2ARC, since there isn't a whole lot of traffic to the machine except when downloading/watching media, and 96 GB for the SLOG is far more than I need. I thought that having maybe 24 GB, or 48 GB (two FMODs striped), as an L2ARC might be helpful for queuing up large video files while they're being viewed.
Based on the widget, my metadata hit ratio is usually quite high (again, this is from the widget; there's no arcstat in FN10), and the metadata and data misses go up when a movie is being moved around. I also thought an L2ARC might help with transcoding speed when multiple streams are playing.
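For the sizing question, some back-of-envelope arithmetic suggests the 96 GB SLOG is indeed far larger than needed, and that a 24 to 48 GB L2ARC would cost very little RAM. This is only a sketch: the 5-second transaction-group interval, the roughly 70 bytes-per-record ARC header cost, and the default 128 KiB recordsize are all assumptions, not measured values.

```python
# Rough sizing for the SLOG and a prospective L2ARC on this box.
# All constants below are assumptions for illustration, not measured values.

GBE_BYTES_PER_SEC = 125_000_000   # 1 GbE carries at most ~125 MB/s of payload
TXG_INTERVAL_S = 5                # assumed ZFS transaction-group flush interval
SAFETY_FACTOR = 2                 # headroom for two txgs in flight

# The SLOG only has to absorb sync writes between txg flushes,
# so its useful size is bounded by ingest rate, not pool size.
slog_needed_gb = GBE_BYTES_PER_SEC * TXG_INTERVAL_S * SAFETY_FACTOR / 1e9
print(f"SLOG actually needed: ~{slog_needed_gb:.2f} GB (96 GB is provisioned)")

# Every record cached in L2ARC needs a header held in RAM (ARC).
# ~70 bytes per record is a commonly quoted ballpark; treat it as an assumption.
L2ARC_BYTES = 48 * 10**9          # two striped FMODs
RECORDSIZE = 128 * 1024           # assumed default ZFS recordsize
HEADER_BYTES = 70
ram_cost_mb = (L2ARC_BYTES / RECORDSIZE) * HEADER_BYTES / 1e6
print(f"RAM overhead for a 48 GB L2ARC: ~{ram_cost_mb:.0f} MB")
```

If those assumptions hold, even a couple of gigabytes would cover the SLOG's real needs on a 1 GbE link, and carving two FMODs off for L2ARC would cost on the order of tens of megabytes of ARC headers, which is negligible against 32 GB of RAM.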
I haven't used LAGG yet, since a single IP can't use the aggregated bandwidth, and when two separate video files are being watched it seems to be the hardware, specifically the drives, that is the bottleneck.
Cheers!