Specific NAS requirements (24x7 writes at 100-200MB/s and occasional light reads at 1-50MB/s)

Status
Not open for further replies.

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Your RAID issues look like they were related to inadequate hardware, not RAID itself.
-
You can later extend an existing RAID-Z2 with another RAID-Z2, kind of like RAID 60.
No, it isn't like RAID 60; it is just another RAID-Z2 vdev (virtual device) in the pool. ZFS applies writes to the available vdevs in the pool as each vdev is ready. The more vdevs you have, the faster your pool is able to handle transactions.
Because the OP needs a certain transactional speed, and that speed is faster than a single drive can handle, the OP will need to build the zpool out to a size that supports the required transaction speed.
The math on this is purely theoretical, as your mileage may vary depending on the specifics of your hardware implementation, but I figure you need 60 drives, 4TB each, divided into 10 vdevs of 6 drives each, with each vdev being RAID-Z2. This should give you enough space for your total storage requirement and enough speed to support the sustained write activity. You will also need to invest in some fast network gear to carry the traffic. Please review this article for more information on that: http://www.mellanox.com/blog/2016/03/25-is-the-new-10-50-is-the-new-40-100-is-the-new-amazing/
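As a rough sanity check of that layout, here is a back-of-the-envelope sketch in Python; the 150 MB/s per-disk streaming figure is an assumed typical value for a 7200rpm drive, not something measured:

```python
# Back-of-the-envelope check of the proposed layout: 10 RAID-Z2 vdevs,
# 6 x 4TB drives each. The per-disk streaming figure is an assumption.
TB = 10**12      # drive vendors use decimal terabytes
TiB = 2**40      # filesystems report binary tebibytes

vdevs = 10
drives_per_vdev = 6
parity_drives = 2                                          # RAID-Z2
data_drives = vdevs * (drives_per_vdev - parity_drives)    # 40

raw_data_tib = data_drives * 4 * TB / TiB
print(f"raw data capacity: {raw_data_tib:.1f} TiB")        # ~145.5 TiB

per_disk_mb_s = 150                     # assumed sustained write per disk
ceiling = data_drives * per_disk_mb_s   # streaming writes stripe over vdevs
print(f"theoretical write ceiling: {ceiling} MB/s")        # 6000 MB/s
```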
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
Did you cross-post? Why do you need 240TB of drives for 100TB of usable space, and 10 vdevs to support 200MB/s?
Also, the boxes connected to the NAS have 1Gb network interfaces. Why does he need a >10Gb NIC?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
...or alternatively, get in touch with our hosts at iXSystems about a TrueNAS box.
True, and they will also be happy to sell you a FreeNAS certified solution. I have gotten quotes from them for both, and they are pleasant and easy to work with.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Did you cross-post? Why do you need 240TB of drives for 100TB of usable space, and 10 vdevs to support 200MB/s?
Also, the boxes connected to the NAS have 1Gb network interfaces. Why does he need a >10Gb NIC?
I did the math on this; it works out to 111TB of storage the way I have it. I did read the requirements, and while they don't entirely make sense, this should be more than enough to do the job.
 

IcePlanet

Cadet
Joined
Aug 9, 2017
Messages
7
It is a fixed facility; depending on the solution for 'occasional' data download, there might be movement of certain HDDs (if a reconnect is needed).
As I mentioned earlier, I could buy a ready-to-use NAS, but I consider that wasted money, because there are a lot of features I will never use, and I would also miss the thrill of building something myself... That is the reason I joined this forum (I want to build it, but on the other hand I do not have sufficient knowledge, so I was hoping for some hints here).
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
I did the math on this, it works out to 111TB of storage with the way I have it and I read the requirements, not that they entirely make sense, but this should be more than enough to do the job.
The math is 10 vdevs x 4 data disks x 4TB = 160TB, not 111TB.
And why waste disks on 10 vdevs when one is enough to sustain the throughput?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It is a fixed facility; depending on the solution for 'occasional' data download, there might be movement of certain HDDs (if a reconnect is needed).
As I mentioned earlier, I could buy a ready-to-use NAS, but I consider that wasted money, because there are a lot of features I will never use, and I would also miss the thrill of building something myself... That is the reason I joined this forum (I want to build it, but on the other hand I do not have sufficient knowledge, so I was hoping for some hints here).
One of the biggest problems with your proposed solution is this idea that you can pull a drive out and stick it into a regular computer to access the files. That is NOT happening in any kind of RAID solution that I have heard of, certainly not FreeNAS / ZFS. If you need to download data, you need to pull it from the server over the network via an NFS or SMB share.
Also, if you want to grow to 100TB, you will need to start with a chassis that can hold 100TB worth of hard drives. If you use 10TB hard drives and no redundancy at all (NOT RECOMMENDED), that would be 11 drives, because a 10TB drive only actually holds a little more than 9 TiB of data. If you use the more reasonably priced 8TB drives and a drive count that allows for redundancy, you are looking at a minimum of 24 drives, which puts you in rack-mount server territory.
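For reference, a quick sketch of that drive-count arithmetic (reading the 100TB target as 100 TiB, and ignoring the 80% free-space rule, which would push the redundant count higher):

```python
import math

TB, TiB = 10**12, 2**40

# A "10TB" drive as the filesystem sees it:
print(f"10TB drive = {10 * TB / TiB:.2f} TiB")    # ~9.09 TiB

# Reading the 100TB requirement as 100 TiB, with no redundancy:
drives = math.ceil(100 / (10 * TB / TiB))
print(drives)                                      # 11 drives

# With redundancy: 6-wide RAID-Z2 vdevs of 8TB drives, 4 data disks each.
per_vdev_tib = 4 * 8 * TB / TiB                    # ~29.1 TiB per vdev
vdevs = math.ceil(100 / per_vdev_tib)              # 4 vdevs
print(vdevs * 6)                                   # 24 drives minimum
```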

Still, you need to really think about what it is you need to do here because the problem description is broken.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The math is 10 vdevs x 4 data disks x 4TB = 160TB, not 111TB.
And why waste disks on 10 vdevs when one is enough to sustain the throughput?
You are not taking into account that a 4TB drive only stores about 3.6 TiB of data, and that the filesystem uses some of the space too. The maximum usable capacity the way I mapped it is 140 TiB, and allowing for the fact that you are not supposed to fill a zpool past 80%, the most data you should put in it is 111.46 TiB.
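A quick sketch reconciling the two sets of numbers; the difference is decimal TB vs binary TiB plus the 80%-full rule, and the 140 TiB usable figure is taken from the post above:

```python
TB, TiB = 10**12, 2**40

data_drives = 10 * 4                  # 10 RAID-Z2 vdevs, 4 data disks each
raw = data_drives * 4 * TB            # forty 4TB drives of data space

print(raw / TB)                       # 160.0 TB decimal, Thomas's figure
print(raw / TiB)                      # ~145.5 TiB binary

usable_tib = 140                      # after metadata overhead, per the post
print(usable_tib * 0.8)               # 112.0 TiB, close to the 111.46 quoted
```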
I can do math. Don't think you know more than me. You can't even spell throughput.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
When I read this kind of argument instead of answers about the 10 vdevs I feel like I won :)
Why are you hung up on the number of vdevs? You need more than one. You are not supposed to have more than a certain number of drives in a vdev; read the manual. I chose 6 drives per vdev, and that is where the number came from. If you use 6 x 4TB drives in RAID-Z2, you need 10 vdevs to have enough storage. It is not about the number of vdevs.
What do you think you won? The prize for being here?
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Did you cross-post? Why do you need 240TB of drives for 100TB of usable space, and 10 vdevs to support 200MB/s?
Also, the boxes connected to the NAS have 1Gb network interfaces. Why does he need a >10Gb NIC?
He has three devices sending data simultaneously. The OP is the one who stipulated that the devices are very fragile and cannot have any interruption in the data stream. If they are each writing at Gigabit network speed, the server needs to be able to ingest data at three times that speed, so it needs a faster connection than a single Gigabit NIC. A 10Gb NIC might do the job, but I referred the OP to an article to get more information; I did not suggest a specific product for the NIC.
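The line-rate arithmetic behind that, as a sketch (line rate only; protocol overhead reduces the real-world numbers):

```python
line_rate = 10**9 / 8            # one 1Gb link = 125,000,000 bytes/s
sources = 3                      # three blackboxes, each on its own link

aggregate = sources * line_rate / 10**6
print(f"worst-case aggregate ingest: {aggregate:.0f} MB/s")   # 375 MB/s

# The stated requirement is 100-200 MB/s total; a single 1Gb uplink
# (~125 MB/s) cannot guarantee the top of that range, so the server
# side needs bonded 1Gb NICs or a 10Gb NIC for headroom.
```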
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The math is 10 vdevs x 4 data disks x 4TB = 160TB, not 111TB.
And why waste disks on 10 vdevs when one is enough to sustain the throughput?
The reason to "waste" disks is to prevent any interruption in the data flow, as the OP indicated that would not be acceptable. My proposed solution would allow up to 20 disks to fail (at most two per vdev) before the pool became unavailable. That level of resiliency should all but preclude any possibility of failure.
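To be precise about the failure tolerance: the pool survives up to two failed disks per RAID-Z2 vdev, so 20 failures is the best case, not a guarantee. A hypothetical check:

```python
def pool_alive(failed_per_vdev):
    """True while no RAID-Z2 vdev has lost more than 2 disks."""
    return all(f <= 2 for f in failed_per_vdev)

print(pool_alive([2] * 10))         # True:  20 failures, evenly spread
print(pool_alive([3] + [0] * 9))    # False: 3 failures in one vdev kill it
```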
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I want to build it, but on the other hand I do not have sufficient knowledge, so I was hoping for some hints here
  • By your own admission, you really don't know what you're doing here.
  • By your description, you need to support a very touchy system.
  • Also by your description, that very touchy system has pretty high performance requirements.
  • Further, from your description, that very touchy, high-performance system requires absolute availability. Some relatively small amount of data loss is permissible, but the system must be up at all times.
  • The very touchy, high-performance, high-availability system also needs to store quite a lot of data.
  • You have presented requirements that are, I believe, impossible. Not just impossible for FreeNAS, but impossible for any known system.
Yet, with all of that, you want to do it yourself. I don't think this will end well.
 
Joined
Jan 26, 2015
Messages
8
Did you cross-post? Why do you need 240TB of drives for 100TB of usable space, and 10 vdevs to support 200MB/s?
Also, the boxes connected to the NAS have 1Gb network interfaces. Why does he need a >10Gb NIC?
Thomas,
(1) This: "when one is enough to sustain the throughput" is plain wrong. *WRITE* performance of a vdev is typically around the write performance of *one* drive.
To alleviate this, you stripe across vdevs - exactly what Chris recommends. The more vdevs, the merrier.
(2) A 100TB pool filled with 100TB is *DEAD*. Never ever fill a ZFS pool past 80% of its capacity. This has to do with the copy-on-write principle ZFS is based on - please read up; this has been covered ad nauseam in this forum.
(3) If you think a 1G network link will transfer 1Gbit/sec, you're probably mistaken. Depending on the protocol (chatter: request/acknowledge type of stuff), latency can kill bandwidth. With the OP's requested guaranteed speed, you have to go with at least 10G; best is to use SFP+/optical fibre, as latency is even smaller than with copper.

So - yes, I am certain such a system can be built based on FreeNAS. But if you consider the cost of tinkering and testing, one of the commercial alternatives Chris mentions is probably cheaper :D
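As a rough illustration of the overhead point in (3), a sketch using standard Ethernet/IP/TCP frame sizes; SMB/NFS chatter and latency would reduce this further:

```python
line_rate = 10**9 / 8                  # 1 Gb/s = 125,000,000 bytes/s

tcp_payload = 1500 - 20 - 20           # MTU minus IP and TCP headers
wire_frame = 1500 + 14 + 4 + 8 + 12    # + eth header, FCS, preamble, gap

goodput = line_rate * tcp_payload / wire_frame
print(f"best-case TCP goodput: {goodput / 10**6:.1f} MB/s")   # ~118.7
```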
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I am certain, such a system can be built based on FreeNAS
I'd agree with this, as far as the data storage, performance, and availability requirements are concerned. Some of the other requirements, though, render it deeply problematic.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
  • Data written 24x7 at transfer rates of 100 to 200 MB/s (by 3 blackboxes, each with a 1Gb ethernet interface); in parallel, approximately 50 to 100 files are being written (1-2MB/s per file), and file size is 1 to 5 GB (file_size_treshold = 5GB)
  • Data read max 12 hours/day at read rates of 1 to 50 MB/s (by 2 to 5 IPs connected via 1Gb ethernet), so most data will be deleted by the purge without ever being read :)
  • No hot swap/hot plug necessary, however it must be possible to identify a faulty drive
  • For power/cooling dimensioning, it would be great if delayed spin-up could be used (this I can also do at the HW level)
  • Initial capacity will be ~20 TB, growing to ~100 TB by DEC 2017
  • All files to be stored in one directory (all 3 blackboxes need to see one and the same mount point)

These are readily doable with FreeNAS on the proper hardware, though I wouldn't count on delayed spin-up doing much for you (and it wouldn't affect the cooling requirements in any event). I'll leave it to others to suggest what "the proper hardware" might be, though, as that's beyond my experience--but several six-disk RAID-Z2 vdevs sound like a good start. And in that configuration, your pool will withstand the loss of up to two disks in any single vdev with no data loss.

  • One file is recorded on a MAXIMUM of 1 physical HDD (the exception being files greater than the set threshold, where the write continues on another drive) (if free space on an HDD is less than file_size_treshold, no write operation starts on that drive)
  • If one particular HDD is dismounted and connected to a standalone PC, the files recorded on that drive must be readable (any filesystem that can be read by Win or Linux is accepted)
...but these aren't, in combination with the requirements above. If they are non-negotiable requirements, I think your only option will be to write your own solution, as I don't think there's anything out there that handles them in combination with the others.
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
Hi,

Thomas,
(1) This: "when one is enough to sustain the throughput" is plain wrong. *WRITE* performance of a vdev is typically around the write performance of *one* drive.
To alleviate this, you stripe across vdevs - exactly what Chris recommends. The more vdevs, the merrier.

The point is about the requirements, and you are mixing up IOPS and throughput.

200MB/s, writing, big files, no synchronous writes, so the IOPS will be absorbed by the cache anyway.
Where is the IOPS requirement? This is about throughput here.

There is no point in justifying a high number of vdevs with performance. It just results in wasted space.
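To make the IOPS-vs-throughput distinction concrete, a sketch under assumed per-disk figures (the 150 MB/s and 100 IOPS numbers are typical 7200rpm HDD estimates, not from this thread), using the rule of thumb that a RAID-Z vdev's random IOPS track a single disk while streaming throughput scales with its data disks:

```python
# Assumed per-disk figures for a 7200rpm HDD, not measured values:
disk_stream_mb_s = 150      # sequential throughput
disk_iops = 100             # random operations per second

data_disks = 4              # one 6-wide RAID-Z2 vdev

# Rule of thumb: streaming throughput scales with the vdev's data
# disks, while random IOPS stay roughly at one disk per vdev.
vdev_stream = data_disks * disk_stream_mb_s    # ~600 MB/s sequential
vdev_iops = disk_iops                          # ~100 random IOPS

print(vdev_stream >= 200)   # True: one vdev covers 200 MB/s streaming
```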

(2) A 100TB pool filled with 100TB is *DEAD*. Never ever fill a ZFS pool past 80% of its capacity. This has to do with the copy-on-write principle ZFS is based on - please read up; this has been covered ad nauseam in this forum.
Why do you think I am not aware of this rule?

160TB x 0.8 = 128TB, not 100TB.
Would you buy 128TB when you need 100TB?
Added to (1), that is quite a lot of wasted disks.

(3) If you think a 1G network link will transfer 1Gbit/sec, you're probably mistaken. Depending on the protocol (chatter: request/acknowledge type of stuff), latency can kill bandwidth. With the OP's requested guaranteed speed, you have to go with at least 10G; best is to use SFP+/optical fibre, as latency is even smaller than with copper.
I don't think a 1G network link will transfer 1Gbit/sec. Why would I think that?
What is your justification for "with the OP's requested guaranteed speed, you have to go with at least 10G" in this particular case?

The 1Gb is on the blackbox side.
IcePlanet was planning to have a quad 1Gb NIC on the NAS and connect each blackbox to a dedicated port.
So no 10Gb NIC here, and actually this might be the only specification that makes sense in his original post.

So yes, OK, all this is nice, but what is the point of this NAS?
 