adamoconnor
Cadet
Joined: Sep 30, 2021
Messages: 5
Hello everyone! While I'm not exactly new to servers and basic NAS usage (nor anywhere near an expert), I'm planning my first real venture into TrueNAS for a high-density storage server I intend to build; all of my prior experience has been with Windows Server or unRAID. My current environment consists of two Windows Server machines: one hosts Hyper-V appliances for several different server instances, and the other mainly handles NAS duties via a RAID 10 SSD volume and a RAID 5 HDD volume. Up until now these have worked great for us, but we've recently started to need high-capacity video storage: we hired a media manager, and they are managing to fill up the rest of our storage with raw video files.
That said, as a proof-of-concept machine and one that will certainly be upgraded with more serious hardware, I am looking at doing the following "budget-build":
Intel Core i7-3770K (3rd gen)
32 GB DDR3
6x WD Red Plus 4 TB
2x SATA SSDs (type and size not yet decided, as I'm learning that adding SSDs may not actually benefit me)
The above drives attached to an LSI 9207-8i
Intel X520-T2 for a peer-to-peer link between the server and the video editing rig
Intel I340-T4 for LAGG'd access to the rest of the 1 GbE network (yes, my switch supports aggregation)
I understand that the above is partly consumer-grade hardware, particularly the CPU/motherboard/RAM, and probably unsafe in some way, shape, or form, but like I said, this is a proof of concept to make sure we're barking up the right tree.
My original plan was to place the disks into two separate RAIDZ1 vdevs so that data stripes across the two vdevs, hopefully gaining some performance while retaining some safety if a drive fails. I'm quickly learning, though, that this may not be the right way to go about it. There seems to be a consensus that RAIDZ1 is evil and should never be used, and that I should go for Z2 instead. Is there any practical reason for this other than the two-drive failure tolerance? It seems like I can expect to lose some performance by going that route, since I would no longer be striping across two vdevs.
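In case it helps frame the question, here's roughly how I picture the two layouts at the command line. This is just a sketch: the disk names (da0-da5) and pool name (tank) are placeholders, and I'd actually build the pool through the TrueNAS UI.

```
# Option A (my original plan): two 3-wide RAIDZ1 vdevs striped together
#   - one disk can fail per vdev; writes stripe across both vdevs
zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5

# Option B (the consensus I keep reading): a single 6-wide RAIDZ2 vdev
#   - any two disks can fail, but there's only one vdev to stripe across
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```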
Ultimately, what I'm looking to get out of this server is:
1.) High-density storage at a very reasonable $/TB
2.) Fast enough drive access to saturate a 10 GbE link (I don't think I'll quite make it with only 6 drives, but based on some napkin math, sketched below, it seems I should be able to with, say, 8 or 10...)
The server is going to be more about bulk storage and less about actually editing video off of it. I expect our editor to pull multi-gigabyte files off of it at any moment, and the next moment dump a whole 2 TB onto it. The 2 TB dump is what concerns me: if it takes 8 hours, I'll be losing a lot of productivity.
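For what it's worth, here's the napkin math I've been working from. The ~180 MB/s per-drive figure is just my assumption for a 4 TB WD Red Plus, so please correct it if it's off:

```
# Napkin math (assuming ~180 MB/s sustained per drive, which is a guess):
#   6-wide RAIDZ2 -> ~4 data disks -> roughly 4 * 180 = 720 MB/s sequential
#   10 GbE        -> ~1,250 MB/s wire rate, so 6 drives probably won't saturate it
# Time to ingest 2 TB (2,000,000 MB) at ~700 MB/s:
echo "2000000 / 700 / 60" | bc    # ~47 minutes, so nowhere near 8 hours if the math holds
```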
So this is where the opinion/advice part comes into play.
Are my aspirations even practically feasible?
Is there something I am saying/planning in the above that I should be beaten with a short stick over for even thinking of?
What is the meaning of life?
As far as network and drive-array tuning go, I understand I'll need to do some work on both to maximize performance. I'm not necessarily asking for advice on that front, since I realize it's very specific to the particular build, but as I understand it now, I'll need to enable jumbo frames on my 10 GbE network and make sure the sector sizing on TrueNAS (ashift) matches the drives' sector size. Please correct me if I'm wrong on either of those.
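If it helps, this is roughly what I think that tuning amounts to at the shell level. The interface name (ix0) and pool name (tank) are guesses on my part, I haven't tested any of it, and I expect the TrueNAS UI handles most of this anyway:

```
# Rough sketch only; names are placeholders
ifconfig ix0 mtu 9000         # jumbo frames on the 10 GbE link (the editing rig needs MTU 9000 as well)
zdb -C tank | grep ashift     # confirm ashift=12, i.e. 4K sectors; TrueNAS should default to this
```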
Thanks in advance for your advice.
Adam