What I have, what I want to do, how it's set up now, and my current issues


Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
TL;DR: iSCSI drops the connection and refuses to reconnect without a hard reboot of the box. Also wondering whether I have my box set up correctly and what I can do to make it better. Advice welcome.


Current Setup:

CPU: Intel i5-4690K 3.50GHz
MoBo: ASUS Z97-A
RAM: 16GB Avexir Core Series DDR3 1600 (PC3 12800)
Case: Rosewill 4U Server Chassis (RSV-L4500)
SAS Card (x2): SAS9211-8i 8-port internal 6Gb/s SAS/SATA
Single 8GB flash drive (Micro Center branded)

Samsung 840 EVO 256GB: Not in use
Samsung 850 EVO 1TB: Not in use

10Gb NIC with fiber SFP

Single-mode fiber cable
-----------------------------------------------------------------------------------
Thermals are pretty stable and the CPU has yet to hit even 75% load. I moved 9TB of data to the box and it never got above 35°C.

[Reporting Snapshot: jc5nNVl.png]

One of the WD 3TB's failed when I was working on setting this up, but it is under warranty, so I am waiting on getting that back before I do anything with the 3TB drives.

-------------------------HDD---------------------------

Totals:

6 x 2TB

3 x 3TB

3 x 4TB

3 x 5TB
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
5 x SAMSUNG F3EG HD203WI 2TB 5400 RPM 32MB Cache SATA 3.0Gb/s 3.5"

1 x Toshiba X300 5TB 7200 RPM 128MB Cache SATA 6.0Gb/s

2 x WD Black 3TB 7200 RPM SATA 6Gb/s 64MB - WD3003FZEX

1 x WD WDBSLA0040HNC 4TB 7200 RPM 64MB

1 x WD Se WD2000F9YZ 2TB 7200 RPM 64MB 6.0Gb/s

2 x WD Black 5TB 7200 RPM 6Gb/s 128MB WD5001FZWX

1 x Seagate BarraCuda STBD3000100 3TB 7200 RPM 64MB

1 x Western Digital Red Pro 4TB 7200rpm 64MB

1 x Other drive ( 4 TB ) that I don't have the model number for but can look up later if need be

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Current use: iSCSI - Steam Library and data storage (purely Home use). Data is not super important but I do want some redundancy.

*Planning on setting it up into volumes, one going to the R620 I have at home to host a bunch of VMs, but I can't seem to get the 10Gb NIC inside of it to work.*

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Current Config:

RAIDZ1 - 6 x 2TB Drives

RAIDZ1 - 3 x 4TB Drives

RAIDZ1 - 3 x 5TB Drives

[Volume Stats: MFvgKXK.png]


[Storage Tab: 9opsm28.png]


[iSCSI Setup: yBp1mgo.png]


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

What I wanted to do:

- Have my Steam library, movie, game, TV, anime, and music storage remote (I built a PC that has no 3.5" HDD mounts, so I am stuck with M.2 and SSDs on sticky tape)

- Have semi-redundant storage; nothing crazy, just something to give me peace of mind

- Have high-speed storage

- Work with 10Gb networking

------------------------------------------------------------------------------------------------------------------------------------------------------------------------

What is currently happening:

- Speeds are OK

- iSCSI drops from time to time and I cannot reconnect to it for a while. When I finally do, I cannot move any data to it without the connection dropping again. The only way to fix this is a hard reboot of the NAS.

From what I have researched, it seems to be because I throw too many write requests at the NAS at one time and iSCSI doesn't like that. That, and RAIDZ doesn't pair well with iSCSI? Looking for some advice on the current setup, the current config, and things I might have to change to get everything working the way that I want.
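
In case it helps with troubleshooting, here is a minimal sketch of a connectivity logger I could leave running against the portal; the address below is just a placeholder, and it only checks whether the iSCSI TCP port answers:

```
# Crude iSCSI portal reachability logger: prints a timestamped line whenever
# the target's TCP port changes state, so drops can be matched against what
# the NAS was doing at the time.
import datetime
import socket
import time

PORTAL = ("192.168.1.50", 3260)   # placeholder IP; 3260 is the default iSCSI port
INTERVAL = 5                      # seconds between probes

last_state = None
while True:
    try:
        with socket.create_connection(PORTAL, timeout=2):
            state = "up"
    except OSError:
        state = "DOWN"
    if state != last_state:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"{stamp}  portal {PORTAL[0]}:{PORTAL[1]} is {state}")
        last_state = state
    time.sleep(INTERVAL)
```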

I am sure I forgot some info here so let me know if I need to add anything else.

[Pictures of the current setup](https://imgur.com/a/0iya4)
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Just to get this out of the way: your hardware is definitely not recommended for FreeNAS. You are using a desktop-grade mobo, CPU, and non-ECC RAM. There may be unintended side effects of using this hardware. Obviously that's not a guarantee, but we've seen all sorts of problems on these forums that went away with recommended hardware.

The first thing that jumps out to me is that you're using over 30TB of storage, with only 16GB of memory. This is a huge problem. Especially with block storage. The rule of thumb is 1GB RAM to 1TB storage for a reason. If you were into min-maxing, you might be able to get away with 24GB of memory if you were only using this server for file serving, but I'd strongly recommend 32GB for your use case.
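
Rough numbers, just to make the gap concrete (marketing terabytes, nothing precise):

```
# Back-of-the-envelope check of the 1GB RAM : 1TB storage rule of thumb,
# using the drives currently in the pools (marketing TB, not TiB).
drives_tb = [2] * 6 + [4] * 3 + [5] * 3   # 6x 2TB + 3x 4TB + 3x 5TB
raw_tb = sum(drives_tb)                   # 39 TB raw
print(f"raw pool capacity: {raw_tb} TB")
print(f"rule-of-thumb RAM: ~{raw_tb} GB (vs. the 16GB installed)")
```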

The second thing that jumps out to me is that you're striping across three RAIDZ1 vdevs (effectively RAID50). I would not recommend this configuration if you value your data. A failure of any one of those vdevs will wipe out everything. Since we're on this topic, you have one heck of a grab-bag of hard drives there. Of all the hard drives you have there, only two are really rated for the workload that your NAS will put on them: the WD Red Pro, and the WD Se. Otherwise, I expect you'll see premature failure with the other drives. I would reconfigure your vdevs to: 6x 2TB RAIDZ2, 3x 4TB + 3x 5TB RAIDZ2. The 5TB drives will effectively "lose" 1TB because they're paired with the 4TB drives, but you'll dramatically increase your redundancy, which I feel is important given the less-than-ideal (and I'm assuming previously used) drives.
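
As a rough sketch of what that trade looks like (marketing TB, ignoring ZFS overhead; a mixed-size vdev is limited by its smallest member):

```
# Usable-space comparison: the current three striped RAIDZ1 vdevs vs. the
# suggested two RAIDZ2 vdevs. Marketing TB, no ZFS overhead accounted for.
def raidz_usable(n_drives, size_tb, parity):
    return (n_drives - parity) * size_tb

current = [                     # three RAIDZ1 vdevs striped together
    raidz_usable(6, 2, 1),      # 10 TB
    raidz_usable(3, 4, 1),      #  8 TB
    raidz_usable(3, 5, 1),      # 10 TB
]
proposed = [                    # two RAIDZ2 vdevs
    raidz_usable(6, 2, 2),      #  8 TB
    raidz_usable(6, 4, 2),      # 16 TB: 3x 4TB + 3x 5TB, smallest member 4TB
]
print(f"current  (3x RAIDZ1): ~{sum(current)} TB usable, "
      "a second failure in any vdev loses the whole pool")
print(f"proposed (2x RAIDZ2): ~{sum(proposed)} TB usable, "
      "each vdev survives two failed drives")
```

So the double parity costs roughly 4TB of usable space on these drives, in exchange for surviving any two failures per vdev.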

Moving on to the problems in the post, there are many issues with block storage with ZFS. That's not to say there are bugs, but rather that there are conscious architecture decisions in the name of data security that unfortunately also tank block-level performance. If you must do block storage with ZFS, you want to use striped mirrors (RAID10), and you'll probably want a SLOG (though that would depend on your workload). Also, you'll want to keep your storage utilization below 50%.

Also, I'm not sure your use-case really benefits from using block storage on ZFS, especially with the extra hardware you've got laying around. In particular, since you can re-download your Steam library any time, there's no reason to need redundancy. I would use that 1TB SSD for your desktop's boot drive, and you'll probably have more than enough storage. Worst case, you can add the other SSD. Which brings me to: there is no problem with sticky-taping SSDs in a case. SSDs do not have any vibration or orientation sensitivity, and can happily live wherever.

For the rest of the stuff (media, TV, movies), I would assume you actually don't want block storage, since you would then be prevented from sharing it with other users. I would store all that data file-level on your NAS, and share it out, most likely via SMB.

-------------

So, let's get down to brass tacks: what do I really recommend? Sell your desktop grade hardware, and either buy an enterprise-grade mobo/CPU/RAM to put in your Norco case, or buy a server with enough drive bays. eBay is a great resource for all of the above, no matter what route you choose. If you don't want to replace your HDDs with NAS/server spec'ed drives, make sure to keep a couple extra drives on hand to replace them when they fail. Use FreeNAS as a NAS, and not for block storage.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

Thought as much on the hardware side of things. I used my old gaming hardware as more of a test than anything else. The only things I had to buy were the SAS controllers, SAS-to-SATA cables, the case, and one HDD (the 5TB Toshiba).

I have been looking to change this out with 64GB of ECC, as I plan to replace the 2TB drives with 5TB X300s. I need to put together a parts list for what I should get. It needs at least 3 PCIe slots for the 2 SAS cards and the 10Gb fiber NIC. I would like to keep my CPU if possible, but that point is kinda moot since it seems like it only supports 32GB of RAM and I plan on adding more storage to the NAS anyway.

So, on to the iSCSI vs. datastore issue, the SSD issue, and the redundancy issue.

I didn't have too much of a plan to begin with, so the volume I made was mostly just "cram it all into one place and move everything there." I wanted iSCSI half to test it out and get used to using and troubleshooting it at home (it's used at almost all of our clients at my job, but I had never touched it before), and half because there are a lot of Steam games, all Blizzard games, most Origin games, and applications that will only install to a "local" drive, and an iSCSI drive counts for that. I have been doing as much research as I can and am currently looking into how I want to set up the storage. Here are my thoughts at the moment:

1 iSCSI vol, ~6-8TB: purely used for game and app installs. Not sure if a SLOG would help in this situation. This honestly might not even need redundancy; every game I own can be re-downloaded fairly quickly (120Mbps down / 10Mbps up).
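
For scale, rough re-download times at that line speed (ignoring protocol overhead; the sizes are just example figures):

```
# Rough re-download times at 120 Mbps; sizes are example figures only.
line_mbps = 120
mb_per_s = line_mbps / 8                      # ~15 MB/s

for label, size_gb in [("one 100GB game", 100), ("a full 6TB library", 6000)]:
    hours = size_gb * 1000 / mb_per_s / 3600
    print(f"{label}: ~{hours:.1f} h ({hours / 24:.1f} days)")
```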

1 SMB share, ~10-15TB: this would be for all my media storage. Also thinking about hosting a Plex server off this, but for now I am just worried about correct configuration. This will need some redundancy, as some of this stuff I can't re-download easily or at all. Thinking of replacing the 2TB drives with 5TB X300s, which would put me at 9x 5TB drives. Thinking a RAIDZ2?

The SSDs are actually extras that currently don't have anything on them. I have a 512GB NVMe drive that I use for boot on my main system, and the 1TB and the 256GB are just stuck to the bottom.

https://i.imgur.com/wi0YGD1.jpg

They currently look nice but don't have anything on them.


Beyond all that, I have a Dell R620 (dual Xeon E5-2650 2GHz, 20MB cache, 8 cores / 16 threads each; dual power supplies; 128GB of RAM) sitting on top of a box in the basement, because I can't seem to get the 10Gb NIC working on it to connect it to the NAS. I was thinking of iSCSI --> R620 --> PC, but I can't for the life of me get the 10Gb NIC working on the thing, and I didn't want to stick with multiple 1Gb links. Still not sure what I want to do with it.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I have been looking to change this out with 64GB of ECC, as I plan to replace the 2TB drives with 5TB X300s. I need to put together a parts list for what I should get. It needs at least 3 PCIe slots for the 2 SAS cards and the 10Gb fiber NIC. I would like to keep my CPU if possible, but that point is kinda moot since it seems like it only supports 32GB of RAM and I plan on adding more storage to the NAS anyway.

Your CPU also does not support ECC, so you have that working against you. I would again recommend replacing everything.

1 iSCSI vol, ~6-8TB: purely used for game and app installs. Not sure if a SLOG would help in this situation. This honestly might not even need redundancy; every game I own can be re-downloaded fairly quickly (120Mbps down / 10Mbps up).

If you are definitely set on doing block storage, you need to completely re-architect your storage. Block storage should go on its own pool, which is optimized specifically for this use case. If your goal is simply playing with block storage, you could easily stripe across a few hard drives, and that would give you very good performance with absolutely no redundancy. If you want to add redundancy, you should only consider striped mirrors. If 6TB is sufficient (how many games do you play at once!?), just do 4x 3TB drives in striped mirrors. Once that's working, you can do some benchmarks to see if you really need a SLOG. Again though (and this could be my biases slipping through), I just can't see how nearly 2TB (256GB + 512GB + 1TB) is insufficient for OS + applications.
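
If you want a crude first look at sync-write behaviour before spending anything, something like this sketch works (it is not a real benchmark tool like fio; the path is a placeholder, so point it at the iSCSI-backed disk and compare against a local SSD):

```
# Minimal sync-write probe: measures how many O_SYNC 4 KiB writes per second
# the target path sustains, which is roughly the behaviour a SLOG accelerates.
import os
import time

TEST_PATH = "/mnt/testpool/syncwrite.bin"   # placeholder path -- change me
BLOCK = b"\0" * 4096                        # 4 KiB per write
SECONDS = 10

fd = os.open(TEST_PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    writes = 0
    deadline = time.monotonic() + SECONDS
    while time.monotonic() < deadline:
        os.write(fd, BLOCK)   # O_SYNC forces each write out to stable storage
        writes += 1
finally:
    os.close(fd)
    os.unlink(TEST_PATH)

print(f"{writes / SECONDS:.0f} sync-write IOPS "
      f"({writes * 4 / SECONDS / 1024:.1f} MiB/s)")
```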

Thinking of replacing the 2TB drives with 5TB X300s, which would put me at 9x 5TB drives. Thinking a RAIDZ2?
If I had 9x 5TB drives, I would do a single RAIDZ2 vdev and call it good. That's approximately 35TB of space. However, the X300 is again a desktop class drive, not designed to stand up to the rigors of a NAS system.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
Run Steam locally on your gaming machine and back it up daily to your NAS; why make things complicated and expensive?

Use the Dell R620 as your FreeNAS machine; gigabit networking is just fine for media. The dual Xeon E5-2650s PassMark at over 15,000 (your 4690K PassMarks at about 7,600), it has 128GB of RAM, and you can add an SSD mirror for your VMs/jails if you want.

Have Fun
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Your CPU also does not support ECC, so you have that working against you. I would again recommend replacing everything.

Should have considered that. I will see if I can whip up a parts list for the system when I get a chance. Shouldn't be terrible.

If you are definitely set on doing block storage, you need to completely re-architect your storage. Block storage should go on its own pool, which is optimized specifically for this use case. If your goal is simply playing with block storage, you could easily stripe across a few hard drives, and that would give you very good performance with absolutely no redundancy. If you want to add redundancy, you should only consider striped mirrors. If 6TB is sufficient (how many games do you play at once!?), just do 4x 3TB drives in striped mirrors. Once that's working, you can do some benchmarks to see if you really need a SLOG. Again though (and this could be my biases slipping through), I just can't see how nearly 2TB (256GB + 512GB + 1TB) is insufficient for OS + applications.

Debating if it's even worth it to have redundancy on a drive that is purely going to be installed games/apps. Also:
https://i.imgur.com/mNcgSWU.jpg

And that's just Steam. The Steam library alone takes up around 4-6.5TB depending on what I install. Combine that with MMOs, other game platforms, and standalone games, and it gets big very fast, and I want to leave a lot of breathing room as well.

If I had 9x 5TB drives, I would do a single RAIDZ2 vdev and call it good. That's approximately 35TB of space. However, the X300 is again a desktop class drive, not designed to stand up to the rigors of a NAS system.

From what I see, the price per GB on those drives is crazy. The lowest price I am seeing is $119 shipped, with no tax. The Reds are $200+. Stats seem pretty fair, and there's no real word on reliability issues from what I have researched; they just run a bit hotter than most drives.
http://hdd.userbenchmark.com/Compare/WD-Red-5TB-2014-vs-Toshiba-X300-5TB/3524vs3593
I'm sure the overall failure rate on these is higher, but seeing as it's almost (if not less than) half the price of the Red with better read/write speeds, I'm thinking that is the way to go. Unless there is a reason not to that I am unaware of?
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Run Steam locally on your gaming machine and back it up daily to your NAS; why make things complicated and expensive?

Use the Dell R620 as your FreeNAS machine; gigabit networking is just fine for media. The dual Xeon E5-2650s PassMark at over 15,000 (your 4690K PassMarks at about 7,600), it has 128GB of RAM, and you can add an SSD mirror for your VMs/jails if you want.

Have Fun

I would have to buy purely SSD storage for it, which would cost about as much as the NAS in total. I should just lump it all together as games; I currently have to uninstall some games if I want to install new ones (before I set up the NAS as it is right now). By this point, it's become more of a challenge than an efficient choice.

The R620 is a blade with 2 HDD slots, and everything is a weird custom size. I would like to just keep it as it is and have the NAS box be its storage.
The 10Gb/iSCSI link to the server was mostly for attempting to run some VMs off the R620: game servers, download boxes, a Plex server, and home lab/testing.
I am fairly new to iSCSI and ESXi, so I am not sure if a gigabit connection from the NAS to the server would be enough.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Holy pictures Batman. If you could please resize, or at least thumbnail your pictures, that would be really nice while trying to browse through the forum.

And that's just Steam. The Steam library alone takes up around 4-6.5TB depending on what I install. Combine that with MMOs, other game platforms, and standalone games, and it gets big very fast, and I want to leave a lot of breathing room as well.

My point is that, how many of these games are you actually playing at any given time? Realistically, how many are you even going to play this year? Obviously, you know your preferences better than I do, but I just find it unlikely that you really use multiple TB of game data, and what you're basically doing is providing a local backup to Steam's copy.

Now, I want you to know I'm not trying to argue with you here. My goal is to help you best meet your needs. And if your needs really involve 6TB+ of direct-attached storage for applications, we'll get you there.

From what I see, the price per GB on those drives is crazy.

You're not really comparing things correctly here. The 5TB was a weird size for the Reds, and the price never came down. You can get an 8TB Red right now at about $240, or a 4TB for $130, both of which are largely comparable to the price on those Toshibas, especially when you factor in the better warranty, better design (you're not fighting with the stupid head-parking issues), and better reliability.
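
Just laying out the arithmetic on the prices quoted above:

```
# $/TB for the drives being compared, using the prices quoted in this thread.
drives = {
    "WD Red 8TB":       (240, 8),
    "WD Red 4TB":       (130, 4),
    "Toshiba X300 5TB": (119, 5),
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.2f}/TB")
```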

The R620 is a blade with 2 HDD slots.

The R620 is a rack server; the M620 is the blade form factor. You will probably have a very hard time getting any non-Dell-approved networking card working. Blades are extremely complicated, and the chassis software is usually woefully out of date by the time they end up in the aftermarket.

You could probably sell the M620 and the M1000e chassis and get enough to buy a proper server and have money left over.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Holy pictures Batman. If you could please resize, or at least thumbnail your pictures, that would be really nice while trying to browse through the forum.

Oops. My bad on that one. Had that labeled as "Smaller" for the PC. Fixed them to just links so it doesn't look stupid anymore.


My point is that, how many of these games are you actually playing at any given time? Realistically, how many are you even going to play this year? Obviously, you know your preferences better than I do, but I just find it unlikely that you really use multiple TB of game data, and what you're basically doing is providing a local backup to Steam's copy.

It's honestly hard to say, as the needs change from time to time. I could be playing small indie games for a few months, or AAA games that are 80-100GB a pop. I would like to err on the side of caution, and since I can minimize the amount stored, I could safely say maybe ~6TB usable. I am assuming maybe 4TB tops, but adding headroom to be on the safe side.

Now, I want you to know I'm not trying to argue with you here. My goal is to help you best meet your needs. And if your needs really involve 6TB+ of direct-attached storage for applications, we'll get you there.

I completely understand. Thanks for being a big help so far. I know I already have a very sub-optimal build; I'm just trying to figure out what I want/need and the options I have to get there. I am just trying to balance "I want this because I am curious" against "this will solve my storage issues."


You're not really comparing things correctly here. The 5TB was a weird size for the Reds, and the price never came down. You can get an 8TB Red right now at about $240, or a 4TB for $130, both of which are largely comparable to the price on those Toshibas, especially when you factor in the better warranty, better design (you're not fighting with the stupid head-parking issues), and better reliability.

Good point. I completely missed that. I zoned in on 5TB because that was the highest I already had.

The R620 is a rack server; the M620 is the blade form factor. You will probably have a very hard time getting any non-Dell-approved networking card working. Blades are extremely complicated, and the chassis software is usually woefully out of date by the time they end up in the aftermarket.

You could probably sell the M620 and the M1000e chassis and get enough to buy a proper server and have money left over.

Yeah, I got this after an install project for a client that got all new equipment and handed off the old gear to us.
This is what I got: Dell support page
Someone else grabbed the SAN, so I am just left with the blade. It is a bit overkill for my needs, and if the price is still good (which I hope it is), you're right that it might be a better option to just sell the thing and start from scratch.

Sorry if it seems like I am bouncing back and forth. I'm just trying to decide if I want to deal with what I've got, replace the mobo/CPU/RAM, sell the server, sell the drives I've got and get new ones, or some combination of those.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
Also forgot to add that the NIC I got was a Dell-branded NIC that was labeled as compatible with the server. It lights up and shows up in ESXi, but never shows anything connected. Thinking it might just be an SFP problem, since I got an aftermarket one.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Yeah, I got this after an install project for a client that got all new equipment and handed off the old gear to us.
This is what I got: Dell support page

That is not a blade. That is a full, proper 1U server. R620s should have 6x 2.5" HDD bays in the front. The particular server you have may not have all its trays installed, so it may look like there are only two HDD slots. Since it's not a blade, you probably should just hang on to it. This is more-or-less the kind of server I would recommend purchasing anyway.

Also, while the mobo may be a custom form factor, all the sockets on that server should be standard. If the NIC shows up in ESXi, I would strongly wager it's working correctly, and you just have a non-supported SFP. You can't just throw any SFP module into any SFP cage and expect it to work. There's vendor lock-in all over the place.

Another option for you, if you want to do an AIO FreeNAS on ESXi, is you could get a SAS card and a SAS shelf with more drive space, and connect the two. You can do some research on SAS expanders and SAS shelves on Google to get you started. Then, you'd configure ESXi to passthrough the SAS card to FreeNAS.

EDIT: In fact, you could DIY your Norco case into a SAS shelf with a SAS expander card: https://www.servethehome.com/sas-expanders-diy-cheap-low-cost-jbod-enclosures-raid/

EDIT2: A better guide: https://www.servethehome.com/sas-expanders-build-jbod-das-enclosure-save-iteration-2/
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

That's pretty sweet. I'm going to look into that more tonight. I was looking for something that I didn't have to set up iSCSI for, but couldn't find anything. This is exactly what I am looking for!
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

Looking into it a bit, here are the parts I am looking at:

SAS Expander x2
Dell SAS card: Looks like it is compatible
Looking into the PICMG at the moment and trying to get a feel for the size as well so I can mount it correctly.

I'll keep the PSU the same, but the rest will come out. I will need to pick up a Dell SFP to test the NIC as well.
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

So, from how I understand it: I would put the Dell SAS card into the server, then have the 2 SAS expanders in the "SAS shelf" it connects to, powered by the PICMG board, and then just SATA from the expanders to the HDDs. Unless I am missing something here?

Edit: Thinking of this?

Edit 2: Now that I've had some time to look into it, it looks like the HP card has SAS ports and not SATA ports like I thought.
That would allow me to get only one and be fine for the 15 HDDs that I currently have in the case.

My main question now is whether I need a SAS card for the server, and whether I need more than one to support all the drives' speeds (will one cable provide enough bandwidth for the entire array of 15 drives?).
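
Rough math on the bandwidth question, assuming a 4-lane SAS2 cable and typical 7200 RPM sequential rates (estimates only, not measurements):

```
# Will one 4-lane SAS2 cable feed 15 spinning disks? Rough estimates only.
lanes = 4
lane_gbps = 6                        # SAS2 signalling rate per lane
encoding = 8 / 10                    # 8b/10b line-encoding overhead
cable_mb_s = lanes * lane_gbps * encoding * 1000 / 8   # ~2400 MB/s usable

disks = 15
per_disk_mb_s = 180                  # optimistic sequential rate per 7200 RPM HDD
array_mb_s = disks * per_disk_mb_s   # ~2700 MB/s if every disk streams at once

print(f"one wide-port cable: ~{cable_mb_s:.0f} MB/s")
print(f"15 HDDs flat out:    ~{array_mb_s:.0f} MB/s")
# Only an all-disk sequential workload gets close to the cable limit;
# random and mixed workloads will not saturate a single wide port.
```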

I also looked on the server itself and found what looks like a SAS connector on the board. Anyone know if this is useable?

Edit 3: This might be a better option. Looking into it a bit more to see if it will work but I don't currently see a reason why it would not.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Couple things:
  • A SAS controller is different than a SAS expander. You need a SAS controller in the server, and a SAS expander in the JBOD array.
  • Make sure you don't confuse SAS with SATA. Don't go down the SATA port multiplier route!!!
  • Most of the SAS controllers/expanders you'll be looking at have SAS ports, and will require breakout cables.
  • Given the community built up around it, I wouldn't use anything other than the HP SAS Expander. If you want to blaze new ground, be my guest, but the project you're undertaking is almost universally done in the community (as far as I can see) with the HP SAS Expander.
  • SAS controllers are available as RAID units and as HBAs. You need a controller that can act as an HBA. In other words, your controller needs to make the drives themselves available to the OS. Many RAID controllers do not have this capability, and you're either SOL or you must flash the card with different firmware to make this work.
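
One quick way to sanity-check that last point once a card is installed: every physical disk should show up individually to the OS. A trivial sketch, assuming a FreeBSD/FreeNAS host where camcontrol is available:

```
# List what the OS actually sees behind the controller; in HBA/IT mode each
# physical disk should appear as its own device (da0, da1, ...).
import subprocess

out = subprocess.run(["camcontrol", "devlist"],
                     capture_output=True, text=True, check=True).stdout
print(out)
print(f"{len(out.splitlines())} devices reported")
```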
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

I wanted to thank you again for all the help. Here is what I got so far (current config has the breakout cables already)

Server card: listed as "Non-RAID" on Amazon, and the part number on the Dell site lists it as an HBA card.
HP SAS Expander: if it ain't broke, don't fix it.
PCIe extension: seems to be sold out now, so I might need to look for another brand, but it's for crypto mining so I think I should be able to find something comparable.
Seagate 8TB IronWolf NAS drives: looking into current prices for decent NAS drives, it looks like the 8TB version goes down to $209 on sale, for a cost per TB of $26.125, which is the lowest I can find for NAS drives that don't suck.
External SAS connector: best I could find.
Brocade 10Gb fiber connector: could not find any of the "officially supported" SFPs for less than $200. Just going to try this out and see if it works. Prime shipped, so returns are not an issue. Worth a shot by this point.

Edit: Same board, different color
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Another resource for you with regards to the HP SAS Expander: https://forums.servethehome.com/index.php?threads/hp-sas-expander-wiki.146/

SAS Controller: The card you've got there is indeed an HBA card. However, I'm not familiar with that card. A quick Google search found nothing about the actual chipset on that Dell card, so it's difficult to predict actual compatibility. However, based on the specs, I would think it would work. Worst-case you might have to return the card if it doesn't work.

PCIe extension: that should work. You obviously won't need all the power for it (the HP SAS Expander only draws something like 12W). You'll want to make sure you understand which power plugs you have to plug in. Also, once you confirm that it works, please post back here!

HDDs: Not a bad choice. I myself have Seagates in my personal FreeNAS.

External cable: no problem with that one

Fiber connector: I would check fs.com for fiber stuff. We use their stuff at work with great success. And it's super cheap.

In general, I would also suggest checking eBay for these things. You can probably find much of this stuff cheaper on eBay. Also, I'm not sure what the Norco backplane requires, so don't forget your breakout cables!
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43

Tried to look into the HBA card as much as I could. From what is listed on the site, it says it's compatible with the R620, but you are right: I won't know till it gets here. Worth a shot at the price.

I am also just waiting on a price drop for the IronWolfs.

Also got a cheap rack and rails for them, so they are no longer sitting on top of my holiday decoration storage boxes.

Once I get all the parts in, I'll try to make a guide about the case and setup. I actually really like this case for the price. Should be middle to end of next week, as I'll be out of time for a while when the parts come in.

Thanks again for the help!
 

Poetart

Dabbler
Joined
Jan 4, 2018
Messages
43
HDDs: Not a bad choice. I myself have Seagates in my personal FreeNAS.

Best Buy seems to have a deal right now on an 8TB external drive for $160. The drive inside is a white-label WD Red with 256MB of cache. That's the best price per GB I have seen yet, and it seems pretty easy to shuck them. Just looking into their performance ATM.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
It's up to you, but WD will not honor warranties for drives pulled from external enclosures. Even if you are able to reinstall the drive in its enclosure, and convince WD that you never removed the drive, the warranty is only 2 years instead of 3 years. If having warranty protection is important to you, then I would reconsider, though that is an incredible $/GB.
 
Status
Not open for further replies.