SAS disks

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
I've been thinking about upgrading the speeds a little bit lately, but SSDs seem to be mostly out of the question at the capacities I have in mind.
My small home server has TrueNAS running virtualized in ESXi (that's not very relevant, just putting it here) with three 4TB WD Reds (the basic old ones) in RAIDz1. I don't need the full 8TB of storage this setup offers, but I cannot go lower than 4TB, and 2TB SSDs would cost quite a bit of money while already being close to max capacity (currently all the data I have is around 3TB).
So after some input from elsewhere, I started thinking about SAS disks. There are plenty of them on eBay, and with some luck it should be possible to get them in good shape (though I'm not sure how many tens of thousands of hours an average SAS disk can do).
My concern is noise, though. The server sits next to my wife's desk in the living room, and the WD Reds are just about fine at 5400rpm. I haven't seen, used, or heard a 7200rpm disk in well over ten years, so I have no idea how much noisier they are. I'm also concerned that SAS disks are server gear, where noise is no concern at all, so they are probably made with no regard for it.
Can anyone comment on the noise part? I'm sure there are plenty of people with SAS disks around here.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well, I think you need to take a hard look at what your goal is here and be very specific. When you say you want to speed things up, what exactly do you mean? Believe it or not, there is a lot to consider. Are you looking for more bandwidth over your network? Faster IOPS? What is your hardware configuration, and what speeds are you currently getting? What is your goal? With more information we could help you design a faster system, but simply swapping your drives for SAS drives doesn't mean you will improve anything. As for buying used hard drives: not something I'd consider. A used drive may work for only a few weeks or months before dying; people often sell drives precisely because they have already been heavily used. It's also possible to just add a single Red to your system, recreate the pool a different way, and get better throughput.

So tell us exactly what you want to achieve, your current hardware (I do understand it's on VMware, like mine), and how much room you have to add drives, and we can offer some advice. Also do some homework: search the internet for keywords like "truenas iops throughput" or similar. You'll find plenty of help, because you are not the first to ask this kind of question. Pool design will likely be the proper solution for you.

Good luck.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
You're right.
Hmm...
The server is running ESXi 6.7 and it's basically all in one box: a router, a NAS, a seedbox, and I will probably add something to serve as a game server at some point (like hosting co-op in whatever games I might play in future).
Hardware-wise: a six-core Coffee Lake Xeon, 32GB RAM, and an HBA based on the LSI 2308 chipset.
There's also an Intel X710 network card that doesn't do much at the moment, because I still haven't upgraded the link to my PC with another 10Gbit card. The rest of the house is perfectly fine with gigabit, because the only traffic there is automated backups at boot time.

The NAS itself really isn't much more than dead storage for films, TV series, and music, plus backups of all the computers in the house.
It works just fine, but I am a genetically impatient person: copying anything from/to the box is just slow, and any random access operations are nothing short of horrendous. I'm not naive, though; I'm well aware the latter can't get much better with spinning disks. BUT I'll take any improvement that doesn't cost too much.
I have a vague idea of what IOPS is, but not much beyond "it's not related to speed in MB/s".
One thing I don't know is what kind of raw speeds to expect once I upgrade to a 10Gbit connection. The basic WD Reds are rated at what, about 150MB/s? I guess I could actually reach that, since gigabit caps out around 100MB/s, but then there's this thing with RAID, and that's something I have absolutely zero knowledge about.
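For a rough sanity check on what the wire itself allows: gigabit Ethernet moves 1000 Mbit/s, i.e. 125MB/s raw, and after protocol overhead you land near the ~115MB/s people typically see from SMB copies. A quick sketch (the 10% overhead figure here is a rule-of-thumb assumption, not a measured value):

```python
# Back-of-envelope Ethernet payload rates. The overhead fraction is
# an assumed rule of thumb covering TCP/IP and SMB framing.

def wire_limit_mb_s(link_gbit: float, overhead: float = 0.10) -> float:
    """Approximate usable payload rate of an Ethernet link in MB/s."""
    raw_mb_s = link_gbit * 1000 / 8      # bits per second -> megabytes per second
    return raw_mb_s * (1 - overhead)

print(wire_limit_mb_s(1))    # 112.5 -- why gigabit tops out around 110-115 MB/s
print(wire_limit_mb_s(10))   # 1125.0 -- far beyond what three spinning disks deliver
```

So on gigabit the network, not the drives, is the ceiling; on 10Gbit the pool becomes the bottleneck.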

I actually reinstalled TrueNAS some time after the last major version was released (12, isn't it? I'm quite oblivious to this stuff once it's up and running) and vaguely remember I redid the pool somewhat; I think I increased the record size to 1MB on the dataset where the large files are stored. There was no specific reason for that, it just seemed like the logical thing to do after reading up on the topic in the documentation.

I wouldn't normally buy anything with moving parts second-hand, but SAS disks might work, since they probably have far higher durability than anything else, and when a seller provides S.M.A.R.T. info it might be OK. Obviously I wouldn't buy a disk manufactured in 2012 or something, and neither would I buy one with 50k hours on the record; but then again, one of my Reds has been running nonstop for 4.5 years and works flawlessly with zero S.M.A.R.T. errors. In any case, they are not expensive even brand new and are easily replaceable. Most of the SAS disks on eBay are, I presume, from servers that were migrated to SSDs and have no further use for them. Most are pretty cheap too, so if I have bad luck, it won't cost much to replace one. I might be wrong in all my assumptions here, though.

I read that SATA is half duplex and SAS full duplex, so that might have something to do with performance too. Mostly, though, the SAS disks that aren't older than about five years are specced at >250MB/s, a lot more than my Reds, which might be a meaningful upgrade with a 10Gbit connection.
What else... I'm not looking to add more disks to the mix. Just to potentially replace what I have with something a little faster, if the gains are meaningful.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
You need to do more research on IOPS and pool configuration, or you are apt to spend money and not get the gain you think you will. How much RAM are you giving the TrueNAS VM? I give mine 16GB and all works great. My machine has 64GB, but I also play around with other VMs.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Give me some pointers then. I can read up on IOPS on Wikipedia all day long and it would still not get me anywhere.
Likewise, WHAT kind of pool configuration? What am I looking for?

I have 10GB allocated and it's totally enough, if the reporting screen is anything to go by. Or, well, no matter what I allocate, it gets eaten by "ZFS cache" anyway.
Edit: reading the hardware recommendations guide, it seems like 12GB+ would be more suitable. That's easy to fix, but I doubt it's the limiting factor for performance. After all, right now I am mostly limited by the gigabit connection. There's still the theoretical question about speeds at 10Gbit.
Edit2: But then I read this. I think memory shouldn't be a problem at all.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Do not overthink the RAM thing; look at the SWAP space to see if it's being used. If it is, you need more RAM to prevent the swapping. But you still haven't been very specific about what you want. How are you testing your NAS? How will you know when you have achieved your goal? Keep reading and you will find lots of information to get you where you need to be. When you create pools in various configurations, you can increase IOPS and throughput. Several people here run 10GbE as well, so there are postings about their adventures you will find educational. You will figure it out. Good luck.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
With 29MB maximum swap usage since July, I guess I can conclude I'm fine with the allocated memory?

I won't know anything until I upgrade the connection to my PC to 10Gbit; until then it's just a guessing game. BUT it doesn't hurt to get some input about SAS vs SATA disks, I guess...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
With 29MB maximum swap usage since July, I guess I can conclude I'm fine with the allocated memory?
For 29MB, I'd just give the VM one more GB of RAM and test it out. If the 29MB was a one-time thing back in July, then you should be okay.
If you run out of RAM, SWAP is used to prevent your system from crashing, and the system gets slow when using swap space. If you are also running VMs/jails within TrueNAS, you are using more RAM, thus you need more RAM. So use the swap space as a guide: give your VM more RAM until the swap used size stays at zero. Little blips of 200K are okay in my book if they happen rarely.

I won't know anything until I upgrade the connection to my PC to 10Gbit; until then it's just a guessing game. BUT it doesn't hurt to get some input about SAS vs SATA disks, I guess...
That is an entire adventure, because you need to upgrade every point in the Ethernet path to 10GbE: the network switch, your cables, and the NIC in each device. If you only care about your computer and the NAS, then it's just a new NIC in each, though I'd get a 10GbE switch for everything to tie into, but that is me. You also need to find a good, supported brand of NIC; TrueNAS does not work with everything out there. Thankfully you have ESXi, which should be able to virtualize the NIC if there is some issue with passing it through. I can't give you the bulletproof answer on 10GbE; I don't have it and have no experience with it, so you will have some legwork to do. Many people here have 10GbE, so search for threads on the topic and you will find quite a few. Then read, and read some more. I would not purchase hardware without knowing it can work in your system.

Good luck.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm sure there are plenty of people with SAS disks around here.

Not really, because SAS disks are generally somewhat stupid to use with ZFS. You use them if you need multipath. Or if there's no way you can get the IOPS using other ZFS tricks like L2ARC. Otherwise, they're hot and hungry and sometimes noisy for very little ROI. You're better off spending the same amount of money on shucked SATA drives that are twice the size and increase performance that way.
 


rvassar

Guru
Joined
May 2, 2018
Messages
972
Not really, because SAS disks are generally somewhat stupid to use with ZFS. You use them if you need multipath. Or if there's no way you can get the IOPS using other ZFS tricks like L2ARC. Otherwise, they're hot and hungry and sometimes noisy for very little ROI. You're better off spending the same amount of money on shucked SATA drives that are twice the size and increase performance that way.

I'm going to second @jgreco here. You go with SAS for the multipath and expander capability, which it doesn't sound like you're going to use; SAS's other advantages are not going to be exposed by ZFS. As for the IOPS problem: think of it as a task list. You're submitting a list of things for the pool to do, and IOPS are analogous to the rate at which the pool can work through that list. It can be significantly affected by the pool's configuration.

Example: your RAIDz1 pool has three disks. To read a single block, it has to issue requests and get responses from two of the three disks. Assuming the pool isn't degraded, it can issue the second item on the list to the third device to get a head start on it, and walk the task list back and forth as it works through it. Writes, on the other hand, don't have this little advantage: the data has to be committed to all three devices before the task is complete, so your three-disk pool has the write performance of the single slowest device in it. Changing the pool geometry allows the task list to complete faster. Adding a fourth drive and configuring a 2 x 2 mirror allows ZFS to issue IOPS in parallel: one task to one mirror vdev, another to the other. For reads, since each drive in a vdev is an identical copy, the pool can now complete four tasks in the same amount of time. Writes round-robin between the vdevs and are double the speed. You can do something similar with your RAIDz1 pool, but you'd have to add three drives and make a two-vdev RAIDz1 pool. Each vdev is independent and can potentially act as an IOPS multiplier, provided the data is balanced between the vdevs. Adding vdevs by pool expansion does not rebalance existing data.
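The task-list picture above can be turned into a toy model. This is purely illustrative arithmetic under the simplifying assumptions that every disk sustains the same number of random operations per second and vdevs split the work evenly; real pools are messier, and the ~120 IOPS per 5400rpm drive is a rough ballpark, not a spec:

```python
# Toy random-IOPS model for ZFS pool layouts (illustrative only).
# Assumptions: identical disks, perfectly balanced vdevs, random I/O.

def pool_iops(disk_iops: int, vdevs: int, kind: str, disks_per_vdev: int = 2):
    """Return a rough (read_iops, write_iops) pair for the pool."""
    if kind == "mirror":
        # Reads can be served by any copy; writes must hit every copy,
        # so a mirror vdev writes no faster than one disk.
        reads = disk_iops * disks_per_vdev * vdevs
        writes = disk_iops * vdevs
    elif kind == "raidz1":
        # Each RAIDZ vdev delivers roughly one disk's worth of random IOPS.
        reads = disk_iops * vdevs
        writes = disk_iops * vdevs
    else:
        raise ValueError(kind)
    return reads, writes

DISK = 120  # assumed ballpark random IOPS for a 5400rpm drive

print(pool_iops(DISK, vdevs=1, kind="raidz1", disks_per_vdev=3))  # 3-disk RAIDz1
print(pool_iops(DISK, vdevs=2, kind="mirror"))                    # 2 x 2 mirrors
```

The point of the sketch is the ratio, not the absolute numbers: the 2 x 2 mirror layout roughly quadruples read IOPS and doubles write IOPS over the single RAIDz1 vdev, from the same class of disk.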
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Ok I am getting lost here, @rvassar. That's way above my (already extremely limited) knowledge.
Also:
1) What are "shucked drives", and what is multipath?
2) How does getting larger disks increase performance?

Re: network. I have a TP-Link T1700G-28TQ switch and an Intel X710 card in the server, so I have that part covered.

So anyway, are you saying SAS disks will cause problems, or just that they're not worth it from a performance point of view?
If the latter, I think they would still be significantly faster than my regular Reds with their ~150MB/s.
I found one eBay seller with a 5-pack of 4TB HGST Ultrastar 7K6000s (4Kn) for $200 (plus shipping). These should be very slightly above 200MB/s, a little lower than what a WD Red Pro can do, but for a quarter of the price, and the reliability should theoretically be higher than the Red's. And I would have two spares anyway.
It's tough to decide!
 
Joined
Oct 22, 2019
Messages
3,641
It's okay; thanks to the Chia-holes, prices went up.

chai-to-the-moon.jpg

Chia coin is flying straight "to the moon!" :tongue:

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"shucked drives"


Not to be confused with "shingled drives" (or "SMR drives")


multipath?


Multiple (well, two) connections to each drive.

How does getting larger disks increase performance?

See points four through six:


I think they would be significantly faster than my regular Reds with their ~150MB/s.

Yes, but under what conditions do you get a reliable 150MBytes/sec? Only sequential reads. Real-world performance is a complex interplay of fragmentation, overwrite, and the fact that a fast 15K RPM SAS HDD doesn't seek THAT much faster than a 5900RPM SATA HDD.
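The seek penalty is easy to put into numbers. A sketch with assumed ballpark figures (~8ms combined seek plus rotational delay, 150MB/s sequential rate; neither is a measured spec for any particular drive):

```python
# Effective throughput of one disk when every transfer of `chunk_kb`
# is preceded by a seek. Ballpark illustration, not drive specs.

SEEK_S = 0.008      # assumed average seek + rotational latency, seconds
SEQ_MB_S = 150.0    # assumed sequential transfer rate, MB/s

def effective_mb_s(chunk_kb: float) -> float:
    """Throughput when each chunk costs one seek plus its transfer time."""
    chunk_mb = chunk_kb / 1024
    return chunk_mb / (SEEK_S + chunk_mb / SEQ_MB_S)

for kb in (4, 128, 1024, 16384):
    print(f"{kb:>6} KiB chunks -> {effective_mb_s(kb):6.1f} MB/s")
```

Small random reads land well under 1MB/s while huge sequential chunks approach the full 150MB/s, which is why random access feels "horrendous" on any spinning disk, SAS or SATA, and why a faster spindle barely moves the needle there.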

It's tough to decide!

Yup. In general, I find new 5900RPM SATA drives to be doing a pretty good job at longevity. Hotter-running 7200RPM+ drives may give you a slightly faster pool, but you may pay a tax in wear and tear, heat, power, etc.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Thanks for the explanation.
Considering the prevalent type of data (large to very large files) I have stored on the NAS, and the fact that the common random reads (typically when torrents actively seed) are limited to 2MB/s (by my damn ISP), I feel like I might actually be able to benefit from the real maximum speed.
I regularly get 115MB/s when copying large files from the NAS to my PC, so I am pretty sure this would go a lot higher once I add a 10Gbit card.
The whole point is, if I can make it slightly faster for not too much money, I'll take it. Everything counts.

I am not worried about temperatures at all. The case I have the server in, a Fractal Node 804, works perfectly, and three drives aren't likely to generate any significant heat. Wear and tear shouldn't be a problem either with server-grade disks, should it? Not with my use case anyway. Most of the time the disks are just spinning with occasional light reads; it's mostly cold storage, remember. It's only when I actively do something with the NAS that I wish I had faster access to it. The whole motivation for all this is my personal comfort :D

The only thing I am worried about is the noise. Some people say SAS disks are loud, some say they are about the same as regular ones.
I'm willing to gamble on the price of this 5-pack.

So unless the warning I got was "SAS drives and ZFS cause problems" rather than "you wouldn't gain much of anything with SAS on ZFS", I'm willing to go ahead with this. I think the drives are resellable too.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
The disks have 440-ish days on the record.
The price comes to $40 apiece.
What do you think?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The disks have 440-ish days on the record.
The price comes to $40 apiece.
What do you think?

440-ish days... meaning around 10.5k power-on hours. FWIW, most quality drives last 40k-60k hours. The question you have to ask: did they reset the counter to make the drives more attractive to a buyer? We can't answer that for you. Most drives at 10k hours will be as quiet as new, with almost no bearing noise at all. If you buy them and they wail like a 20-year-old Ford power steering pump... you'll know. :wink:

BTW - I'm not familiar with the Fractal Node cases, but I'm under the impression it doesn't have a SAS capable backplane. What are you planning to do for cabling? SAS drives do not have the "notch" between the power and data plugs.

Also:
I regularly get 115MB/s when copying large files from the NAS to my PC, so I am pretty sure this would go a lot higher once I add a 10Gbit card.

I went the 10GbE route, just for fun... The best I get is around 250MB/s from a 2x2 7200rpm mirror pool, and it usually loafs around 160MB/s. The drives are likely slightly faster, but I'm on older technology: DDR3 and PCIe 2.0. Keep in mind there may be other bottlenecks; I'm guessing mine is memory speed, and with only 16GB of RAM I need a larger ARC.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@Octopuss
You should settle on some sort of benchmark testing and record your results before your modification, then rerun the exact same tests afterward. Also, upgrade one item at a time: either the hard drives or the 10GbE connection, retest, then upgrade the other item and retest. Figure out your bottleneck. I'd probably upgrade to 10GbE first, as it looks like your network is the current bottleneck.
I found one seller on Ebay who has 5-pack of 4TB HGST Ultrastar 7K6000s (4kn) for $200 (plus shipping)

Be careful what you are buying; there is a lot of misrepresentation out there. I found many 4TB HGST Ultrastar 7K6000s that were not SAS but SATA. Make sure the listing states 12Gb/s transfer speeds. A new drive is about $174, so even used, with only just over a year on it, $40 sounds too good to be true. Better yet, have them send you a photo of the drive labels; then you can verify what you are getting and look up the serial numbers on the WD website to see how old the drives are.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
4TB HGST Ultrastar 7K6000s (4kn) for $200 (plus shipping).

Key disclosure: "4Kn". Those are 4096-byte-sector drives, not the usual 512-byte ones. It's one of those newer standards that hasn't been entirely well received, so you need to research how ZFS handles them.
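For what it's worth, ZFS can cope with 4096-byte sectors as long as the pool's ashift matches the physical sector size; ashift is the base-2 logarithm of the smallest block ZFS will write, so 4Kn drives want ashift=12. A minimal sketch of that arithmetic:

```python
# ashift is the power-of-two exponent of the smallest write ZFS issues.
# If it's below log2(physical sector size), every write becomes a
# read-modify-write cycle inside the drive, which is slow.

def min_ashift(sector_bytes: int) -> int:
    """Smallest acceptable ashift for a given physical sector size."""
    if sector_bytes <= 0 or sector_bytes & (sector_bytes - 1):
        raise ValueError("sector size must be a power of two")
    return sector_bytes.bit_length() - 1

print(min_ashift(512))    # 9  -- classic 512-byte ("512n") drives
print(min_ashift(4096))   # 12 -- 4Kn drives like the 7K6000 in question
```

One practical caveat to research before buying: a pool created with ashift=9 cannot take 4Kn replacement disks, so mixing these into an existing 512-byte pool is where people usually get burned.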
 