Dell R730xd 24-bay with Samsung EVO SSDs

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20
Hi everyone,

I'd like your opinions on the following specification for FreeNAS.

I want to buy a Dell R730xd with 24 x 2.5" SAS bays:
2 x E5-2680 v3
64 GB DDR4 RAM
PERC H730 Mini controller
Drives: 24 x Samsung EVO SSDs for data, with the controller in HBA mode
I want to use Samsung SSDs because they are much cheaper than Dell-branded SSDs.
Will the 2.5" EVO drives work in the R730xd without problems, or would you advise using an LSI controller instead?

2 x 256 GB SSDs in RAID 1 for the FreeNAS OS

This FreeNAS box will connect via iSCSI to another R730xd running Windows Server 2016.

Looking forward to your feedback.

Thanks, everyone.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Consider the HBA330 instead of the H730, since it will be a vanilla HBA without any need for adjustment. An LSI SAS3008 might be even better though. Your boot RAID1 can be handled via FreeNAS ZFS boot mirroring with either HBA; just select both of the SSDs during the install.

Which exact Samsung EVO are you planning to use? The 860 EVO, I assume? Bear in mind that some controllers require the SSD to support "Deterministic Read Zero After Trim", sometimes abbreviated DZAT or RZAT. I believe the 860 supports this but the 850 does not.
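If you want to verify that yourself before committing the drives to a pool, here is a minimal sketch. It assumes a FreeBSD/FreeNAS shell where `camcontrol identify` prints a "DSM - deterministic read" line for ATA devices, and the device names are only examples:

[CODE=python]
# Minimal sketch: check whether an SSD reports deterministic zeroed reads
# after TRIM (DZAT/RZAT) by parsing `camcontrol identify` on FreeBSD/FreeNAS.
# Assumption: the identify output contains a "DSM - deterministic read" line
# ending in "zeroed"; the device names below are hypothetical examples.
import subprocess

def supports_dzat(device: str) -> bool:
    out = subprocess.run(
        ["camcontrol", "identify", device],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    return "deterministic read" in out and "zeroed" in out

if __name__ == "__main__":
    for dev in ("da0", "ada0"):  # adjust to the devices on your system
        try:
            print(dev, "DZAT supported:", supports_dzat(dev))
        except (subprocess.CalledProcessError, FileNotFoundError):
            print(dev, "could not be queried")
[/CODE]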

Consider some refurbished or used-pull Dell drives as well, depending on cost and endurance requirements.

An all-flash array might negate the need for SLOG depending on your performance needs, but having one might reduce write amplification as well. It would need to be quite fast to keep up with the underlying vdevs though.
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20

Thanks for all this information.
So you advise using a SAS3008, but I don't know whether it works with Dell servers or not. And for the RAID 1 boot mirror for the FreeNAS OS, I can set that up during installation?
About the SSDs, I honestly don't know which one is best, so I'll follow your suggestion and go with the 860 EVO.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
A SAS3008 should work fine; you may need to purchase different or longer cables though to reach your backplane.

During your install you will be able to set up the ZFS mirror on your boot devices. Just make sure to select the correct ones as any drive used for boot purposes is not usable as "general storage."

The EVO 860 is OK for a light-duty server; what is the expected workload and write volume? Different SSDs have different endurance ratings, and you might be better off spending a little more now for drives that will handle your workload better. The EVO is rated for 0.3 DWPD (Drive Writes Per Day); in other words, a 500 GB drive is expected to accept no more than 150 GB of writes per day. More expensive drives could get you to 1-3 DWPD; "write intensive" drives for heavy usage will be 10 or 25 DWPD.
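To put those ratings in perspective, here is a small worked example converting DWPD into a daily write budget and an approximate total-bytes-written figure. The 5-year warranty term and the drive sizes are assumptions; check the datasheet for your exact model:

[CODE=python]
# Worked example: translate a DWPD rating into a daily write budget and an
# approximate total TB written over the warranty period (assumed 5 years).
def write_budget(capacity_gb: float, dwpd: float, warranty_years: float = 5.0):
    daily_writes_gb = capacity_gb * dwpd
    total_tbw = daily_writes_gb * 365 * warranty_years / 1000  # TB written
    return daily_writes_gb, total_tbw

for cap_gb, dwpd, label in [(500, 0.3, "consumer class (e.g. 860 EVO)"),
                            (480, 1.0, "mixed-use enterprise"),
                            (400, 10.0, "write-intensive enterprise")]:
    daily, tbw = write_budget(cap_gb, dwpd)
    print(f"{label}: {daily:.0f} GB/day, ~{tbw:.0f} TBW over 5 years")
[/CODE]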

@HoneyBadger Does the SAS3008 support 24 ports? The R730xd has 24 bays.
Your backplane has a SAS expander chip in it - similar to a network switch, it will take your 8 lanes of upstream SAS (2x4-lane ports) and share them among the 24 drives connected to it.
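For a rough sense of what that sharing means, here is a back-of-the-envelope calculation. The per-lane and per-drive throughput figures are nominal assumptions, and in practice random I/O rarely saturates the uplink anyway:

[CODE=python]
# Back-of-the-envelope expander oversubscription check: 2x4-lane SAS3 uplinks
# shared across 24 SATA SSDs. All figures are nominal assumptions.
lanes = 8
sas3_lane_mb_s = 1200            # ~12 Gb/s per lane, roughly 1200 MB/s usable
uplink_mb_s = lanes * sas3_lane_mb_s

drives = 24
sata_ssd_mb_s = 550              # typical sequential read of a SATA SSD
drive_aggregate_mb_s = drives * sata_ssd_mb_s

print(f"uplink ~{uplink_mb_s} MB/s vs drives ~{drive_aggregate_mb_s} MB/s")
print(f"oversubscription ratio ~{drive_aggregate_mb_s / uplink_mb_s:.2f}:1")
[/CODE]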
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20
Many thanks for all this info. So we can say the server specification is good in terms of CPU and RAM.
About the SSDs, though: while reading around the internet, some people say you should stay away from consumer SSDs and go enterprise instead - yes, it's more expensive, but supposedly better than consumer SSDs in many ways. Others say they have tried both and the performance was the same, so I'm still quite confused on this point.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The big difference between consumer and enterprise SSDs is usually write endurance, as mentioned before, but also consistency of performance. Consumer SSDs will often provide good performance for a short period, but "run out of steam" long term. If your workload is very bursty in nature with long idle periods in between, then consumer SSDs may be fine. But if you plan to be hitting this array with reads and writes 24x7x365, then enterprise SSDs make a lot of sense.
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20
Thanks again for the explanation.
The workload is actually more reads than writes: one person will add data to this server daily, but a large number of students will then access that data, possibly all at the same time. It will be like a library of training videos.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
In that case you can likely get away with consumer or "read focused" SSDs, but for video streaming you may even be able to use regular HDDs for much less cost. It would depend on the number of students accessing simultaneously, as well as if they will all be watching the same videos in a few days, in a scheduled fashion, or if it will be randomly accessed across a large amount of data. If they are viewing videos on a "schedule" the videos will be cached in RAM and very little disk I/O will be needed after the first few viewings.
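As a rough illustration of why streaming like this is usually network-bound rather than disk-bound, here is a quick estimate; the viewer count and per-stream bitrate are assumptions purely for the sake of the example:

[CODE=python]
# Rough aggregate-throughput estimate for concurrent lecture playback.
# Viewer count and bitrate are assumptions for illustration only.
viewers = 300                  # hypothetical simultaneous students
stream_mbps = 5                # typical bitrate of a 720p/1080p lecture video
total_gbps = viewers * stream_mbps / 1000
print(f"~{total_gbps:.1f} Gb/s aggregate if all {viewers} viewers stream at once")
# Even a modest HDD pool can sustain this for mostly-sequential reads, and a
# warm RAM cache removes the disks entirely - the network uplink (1 or 10 GbE)
# is often the real ceiling.
[/CODE]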
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20

Really useful information, so let me explain in more detail. One trainer will upload videos, and maybe an hour later a number of students will access them - I don't know exactly how many, but perhaps 200-300 or more. That's why I wanted to use SSDs: I thought SSDs would give me high read IOPS and that SAS or SATA HDDs would be wrong for this project. But your idea sounds very good.
I could use a few SSDs, say 4 x 1 TB, for cache, and 20 x 6 TB (or 4 TB) 7200 RPM SATA drives for data. The first student to open a lecture would pull it from the SATA drives into the cache, and after that the SSDs would serve it to the other students, right? What is your opinion on this, or would you advise using RAM for cache instead? If so, should I go to 128 GB of RAM so I can use something like 64 GB for FreeNAS and 64 GB for cache - which do you think is best? One more thing: how many days will a lecture stay in the cache, and can I adjust that time as I like? After 3 or 4 hours a lecture becomes "old" and hardly anyone opens it any more - maybe 3, 4, or at most 10 students per day, just the ones who missed it when it was added.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Essentially all of the RAM in a machine running FreeNAS (save for 4GB or so) ends up being used for cache, so there's no need to "assign 64G for FreeNAS, 64G for cache" - it will all be used for that.

As far as how long it will stay in the RAM cache - "until something more valuable needs to be placed there" is the answer. ZFS uses both MRU (Most Recently Used) and MFU (Most Frequently Used) logic to decide what goes into the cache and what gets kicked out. A good way to encourage the caching of files is to play a video/presentation back a couple of times as a test after it has been uploaded.
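Here is a minimal sketch of that "play it back to warm the cache" idea, run on the FreeNAS host itself; the dataset path is hypothetical:

[CODE=python]
# Minimal sketch: sequentially read a newly uploaded video so its blocks land
# in the ZFS ARC before students start requesting it. Path is hypothetical.
def prewarm(path: str, chunk_size: int = 1 << 20) -> int:
    bytes_read = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            bytes_read += len(buf)
    return bytes_read

if __name__ == "__main__":
    n = prewarm("/mnt/tank/lectures/lecture-01.mp4")  # example path only
    print(f"read {n / 1e6:.0f} MB into the ARC")
[/CODE]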

How large is each lecture/video?
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20

Thanks again. Regarding the size of the videos: some are around 300 MB, some 500 MB, and some 1.5 GB, depending on their length. So which is better - using RAM for cache, or adding some SSDs for caching? If you advise RAM, then I think it's best to go to 128 GB or more. Another point: if I connect this FreeNAS to my Windows Server 2016 machine via iSCSI, the database is on the Windows server but the lecture data comes from the FreeNAS server. So where will the load (RAM and CPU) land? I think it will be on the Windows Server 2016 machine, right?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Given the smaller size of the videos and the usage pattern (200-300 students will be viewing them over a period of a day, then only sporadically afterwards) you are likely fine with just the 64G of memory - but more RAM never hurts. The usual rule is "add RAM first, until either your system cannot hold any more or you cannot afford to buy any more" - and then add SSD for L2ARC afterwards.
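To see why 64G is likely enough here, a quick working-set estimate; the lecture count and average size are assumptions drawn from the numbers mentioned earlier in the thread:

[CODE=python]
# Rough working-set estimate: will a day's worth of lectures fit in the ARC?
# Lecture count and average size are assumptions based on this thread.
ram_gb = 64
arc_gb = ram_gb - 4                  # FreeNAS keeps only a few GB for the OS itself
lectures_per_day = 10
avg_lecture_gb = 1.0                 # videos were described as roughly 0.3-1.5 GB

working_set_gb = lectures_per_day * avg_lecture_gb
print(f"ARC ~{arc_gb} GB vs daily working set ~{working_set_gb:.0f} GB")
print("fits comfortably in RAM" if working_set_gb < arc_gb
      else "consider more RAM, then L2ARC")
[/CODE]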

The use of iSCSI complicates things. If you present all of the storage from FreeNAS to Windows, and then store files within that Windows installation, ZFS (the file system on FreeNAS) will not be able to easily "see" which files are being accessed most frequently. ZFS will know that "certain blocks of storage" are being opened from the Windows system, and cache those - but if you are able to serve the files directly from FreeNAS instead (perhaps via a mapped drive to the Windows server, or by joining your FreeNAS system to the same domain as the Windows machine, and then redirecting the file-open requests to an SMB share on FreeNAS) then you will likely get better results with fewer resources, since FreeNAS/ZFS will "see" the requests to open the same files over and over.
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20
Yes, you are right. So it's best to connect FreeNAS to the Windows server via SMB and then map the drive, and everything will work without problems? The caching will still work fine?
If the caching works, I don't think I need an all-SSD pool. I think it's best to increase the RAM to 128 GB and go with either 10k/15k SAS or SATA drives. Which is better, large-capacity drives or small ones? And how should the pool and vdevs be divided - say I use 2 TB 2.5" SATA drives, should the pool be 8 x 2, or is there a better layout? Also, about the future: if I want to add storage to this server later, can I attach something like an MD1420 disk shelf to FreeNAS, or should I buy another 24-bay R730xd and set it up the same way?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I don't know enough about your software suite here to say that "everything will work perfectly" but I can say that if it is capable of either redirecting or referencing the video files on an external SMB share or mapped network drive then it should work. If it needs to store the video files as objects in its own proprietary database inside of Windows (which is a poorly thought-out design, in my opinion) then you will have to use iSCSI.

For drive choice, the need for capacity without excessive random I/O seems to strongly indicate either regular SATA or NL-SAS drives. When selecting your drives I would be careful to avoid "shingled" or "SMR" drives - your workload is mostly reads, but rebuilds and rewrites run very poorly on this type of drive. A smaller number of larger 3.5" drives may be a better solution, unless you already own the 2.5" R730XD chassis.

Drive configuration - mirrors are almost always the most performant solution. If you are not using iSCSI, then you could consider using RAIDZ2 for increased capacity at the cost of performance; or even combine the two solutions in a single chassis. E.g.: if you use a 12-bay 3.5" R730XD, you could configure one 8-drive Z2 (roughly 6 drives of usable space) for file storage and a 4-drive mirrored pool (roughly 2 drives of usable space) for the iSCSI/block service. If you have 24 drives, you could do 6 drives of mirrors (3x 2-way) for block, and 18 drives of RAIDZ2 (3x 6-drive Z2) for file.
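A quick sketch of the usable capacity of that 24-drive split, ignoring ZFS overhead, spares, and the usual guidance to keep block-storage pools well below full; the 4 TB drive size is just an example:

[CODE=python]
# Usable-capacity sketch for the suggested 24-bay layout, ignoring ZFS
# overhead, spares, and free-space guidance. Drive size is an example.
def mirror_usable(drives: int, way: int = 2) -> int:
    return drives // way

def raidz2_usable(vdevs: int, drives_per_vdev: int) -> int:
    return vdevs * (drives_per_vdev - 2)

drive_tb = 4
block_tb = mirror_usable(6) * drive_tb       # 3x 2-way mirrors for iSCSI/block
file_tb = raidz2_usable(3, 6) * drive_tb     # 3x 6-drive RAIDZ2 for file shares
print(f"block pool: ~{block_tb} TB usable, file pool: ~{file_tb} TB usable")
[/CODE]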

Additional drive shelves can be added via external SAS HBAs - the same rules apply in that LSI SAS HBAs are preferred but other solutions are possible (the Dell PERC H830 can be switched to an HBA mode in its configuration, just ensure that you update the firmware!)
 

MRBIQ

Dabbler
Joined
Apr 24, 2020
Messages
20
@HoneyBadger Many thanks for all of this, and sorry for the late reply. So we can say FreeNAS will connect to my Windows server via the SMB protocol. Is there any problem with caching over SMB, or will it still give good performance? My plan is to keep the data on the mapped FreeNAS share on my Windows server and have my local site on the Windows server pull all the lecture data from that share. About the drives: with 7200 RPM disks it would be 2 TB or 4 TB 3.5" drives in a 12-bay R730xd, or 2.5" SATA in a 24-bay R730xd. The cost works well for me, because SATA drive prices are reasonable compared to SSDs. But what about I/O, especially read speed? Maybe 7200 RPM drives won't give the right results with many users online at the same time. I think if I use 7200 RPM drives I should have at least 64 GB of RAM for cache, so that online users are served from the 64 GB RAM cache, right? But I need to be sure the caching will work with my plan; if it doesn't, it will put me in a big problem - the data drives will be overwhelmed by this number of online users, and that would cost me a lot of money. That's why I'm asking whether I should add a 1 TB SSD alongside the 64 GB of RAM, just to be on the safe side.
 