What is going on

Joined
May 7, 2020
Messages
7
Hi all, I'm new to FreeNAS and I'm starting to get extremely frustrated. On the recommendation of a friend I chose to build a new NAS this way instead of purchasing a new Synology, and I'm starting to regret it.... Hopefully you can help.

I bought the following:
Fractal Node 804 case
Gigabyte H370 motherboard
LSI 9215 SAS controller flashed to IT mode
Intel i3-8100
8x 6TB WD Reds
2x 2TB Samsung SSDs
Corsair AX860 PSU

Everything went together great and started fine.
But when I went to create one Z2 pool with the 2 cache drives, everything started going wrong.

I couldn't do it.

All I could get into one pool was the 4 WDs connected to one breakout cable plus the 2 SSDs on the board.... The other 4, which are all attached to a single breakout cable and plugged into a single bank on the LSI card, still won't work. I thought maybe it was a bad cable, so I swapped it out; I thought it was a bad card, so I swapped it out. Still nothing.

Each time it's the same thing. All the drives show up in the UI, but when I go to extend the pool to the other 4 drives my screen just goes to a loading screen and I get the errors that show up in the screenshot.

The drives work fine when plugged in individually, so what could I be missing, or what am I doing wrong?

Any advice would be greatly appreciated.
 

Attachments

  • 8918E6FF-4EDB-425E-8BA0-8BD215646F45.jpeg
    419.1 KB · Views: 198

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Command timeout - you have a hardware issue. So first, don't store any data on anything. Forget that cache idea for now. You'll want to read up more on how ZFS works; trust me on this, please.

Move the cable with the 4 drives to the first connector. Try to create a new pool with just those drives. Does that work?
Then destroy the pool, move the cable to the second connector. How about now?

If the fault follows the drives / cable, then that's where your issue lies.
If it follows the connector, then the LSI card has an issue.
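
If you'd rather test this from a shell than the UI, a rough sketch (the daX names below are just examples - check yours first, and destroy the test pool afterwards since pools made at the command line aren't tracked by the FreeNAS middleware):

[CODE]
# see which disks the system detects on that connector
camcontrol devlist

# throwaway test pool on the 4 drives (example device names)
zpool create testpool raidz2 da0 da1 da2 da3
zpool status testpool

# clean up before moving the cable to the other connector
zpool destroy testpool
[/CODE]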

6TB WD Reds

EFRX or EFAX? If EFAX: Return those post-haste. You will have nothing but toil & trouble & rage with a DM-SMR drive. You want CMR drives. 8TB Red and up are CMR; EFRX 2-6TB are CMR; EFAX 2-6TB are DM-SMR.
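
You can read the model strings straight from a FreeNAS shell; roughly like this (da0 is just an example - repeat for each drive):

[CODE]
# vendor/model of everything on the HBA and SATA ports
camcontrol devlist

# or per-drive via SMART - look for WD60EFRX vs WD60EFAX
smartctl -i /dev/da0 | grep -i model
[/CODE]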

BTW screenshots are hard to deal with - whenever possible, copy and paste the text into a CODE tag; it'll make it easier for forum users to help you.

Okay back to ZFS.
- You likely do not want a cache of any sort. How much RAM do you have?
- Do keep those two SSDs, they'll be an awesome special alloc vdev come TrueNAS 12
- What was your intended layout? You say you have 4 drives in your pool - as what? raidz1? raidz2?
- Understand you can't expand a raidz. If your use case is large file storage - media, backups, the like - a single raidz2 8-wide is a great choice. If your use case is storage for an ESXi VM box or a database, four 2-way mirrors is your best bet (rough sketch of both below).
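
For illustration only - the pool name and daX devices are placeholders, and on FreeNAS you'd build this through the UI so the middleware tracks it - the two layouts look like this at the zpool level:

[CODE]
# single 8-wide raidz2: best capacity, fine for media/backups
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# four striped 2-way mirrors: far better IOPS for VMs/databases
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
[/CODE]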

Be glad you came here first. ZFS has a learning curve. We all happen to think it's awesome, and, if you just start banging around on it and store data on your pool without choosing your pool layout very deliberately, you are quite likely to come to regret your choices.

Read 8) and 10) in this resource to understand why I said "forget cache": https://www.ixsystems.com/community/threads/the-path-to-success-for-block-storage.81165/

Read this resource to understand pools, vdevs, and what you can and can't do with raidz: https://www.ixsystems.com/community/resources/introduction-to-zfs.111/
 
Joined
May 7, 2020
Messages
7

Move the cable with the 4 drives to the first connector. Try to create a new pool with just those drives. Does that work?
Then destroy the pool, move the cable to the second connector. How about now?

Yes, this works on both connectors.

- You likely do not want a cache of any sort. How much RAM do you have?
16GB

- Do keep those two SSDs, they'll be an awesome special alloc vdev come TrueNAS 12
Okay, they are removed.

- What was your intended layout? You say you have 4 drives in your pool - as what? raidz1? raidz2?
Z2

- Understand you can't expand a raidz. If your use case is large file storage - media, backups, the like - a single raidz2 8-wide is a great choice. If your use case is storage for an ESXi VM box or a database, four 2-way mirrors is your best bet.
Z2 with 8 wide was the goal.... this was going to become my primary for ESXi VMs and file storage.

I just tried both sets of 4 with the same cable; it worked and created a pool fine. I also tried to create a pool with the other cable (attaching one set of 4, then the other set) and that worked fine on both SAS connectors as well.

But when I try to create a single Z2 with all 8 drives (2 breakout cables), it dies and I get the errors that were in that screenshot.

This doesn't make sense.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Can you check the model numbers of the 6TB WD drives? I wouldn't think trying to create an eight-drive Z2 would be the thing that pushes DM-SMR over its limit, but you never know.

What about four drives on a breakout cable and four from SATA ports on the motherboard (if you have the ports) - can this form a Z2 pool?

It still seems to lean towards the HBA. Is this a genuine LSI card or an OEM (HP/Dell/IBM) that was reflashed? What process was used to flash it to IT mode, and did you validate that the right firmware was used?
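
If you have shell access, something like this should show what the card reports - assuming it's a SAS2008-family card that sas2flash and the mps driver recognize:

[CODE]
# firmware version and whether the IT or IR image is loaded
sas2flash -list

# the mps driver also logs the firmware it found at boot
dmesg | grep -i mps
[/CODE]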

Z2 with 8 wide was the goal.... this was going to become my primary for ESXi VMs and file storage.

Z2 is fine for files but not recommended for VMs; you'd have far better results just making a mirror volume using those 2x 2TB SSDs as another pool and running VMs there.
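
At the zpool level that second pool is just a two-way SSD mirror - a rough sketch, with placeholder names (in practice you'd create it from the UI):

[CODE]
# mirrored pair of the 2TB SSDs as a separate, fast pool for VM storage
zpool create ssdpool mirror ada0 ada1
zpool status ssdpool
[/CODE]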
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
this was going to become my primary for ESXi VMs and file storage

Read that block storage article a few times. Performance on raidz2 will suffer for VM storage. Which may be okay, if all you have is a gig link you might not care. Also, since you are coming from ESXi, now you need to think about sync writes and a cap-backed Optane SLOG, small capacity. You can do that and NFS or iSCSI with sync forced on; or you can forget the SLOG and do iSCSI with standard sync, which means the ZFS metadata will be fine, but you might have some VM corruption if the power dies on the FreeNAS box.
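
For reference, sync behaviour is just a per-dataset (or per-zvol) property - a minimal sketch, with tank/vmstore as a made-up example name:

[CODE]
# force sync writes on the dataset backing the VMs
zfs set sync=always tank/vmstore

# or leave the default and accept the risk described above
zfs set sync=standard tank/vmstore
zfs get sync tank/vmstore
[/CODE]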

Not sure where your hardware issue is, might be one of the drives? Does it single out the same drive each time, or is the issue across all drives?

Don't take that EFAX question lightly, please, that's still outstanding.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Sorry, what is DMSMR?

The devil. See https://www.ixsystems.com/community/threads/WD-SMR-iX-Statement/

So: EFAX? If that's what they are, you Do Not Want. Seriously. Return, replace, any other path of action will have you even more frustrated than you are now. WD has been known to RMA EFAX for EFRX for people who run ZFS on them, you can try that.

Not to say looking deeply into the soul of that HBA isn't warranted, it is, @HoneyBadger is right about that.
 
Joined
May 7, 2020
Messages
7
Read that block storage article a few times. Performance on raidz2 will suffer for VM storage. Which may be okay, if all you have is a gig link you might not care. Also, since you are coming from ESXi, now you need to think about sync writes and a cap-backed Optane SLOG, small capacity. You can do that and NFS or iSCSI with sync forced on; or you can forget the SLOG and do iSCSI with standard sync, which means the ZFS metadata will be fine, but you might have some VM corruption if the power dies on the FreeNAS box.

Not sure where your hardware issue is, might be one of the drives? Does it single out the same drive each time, or is the issue across all drives?

Don't take that EFAX question lightly, please, that's still outstanding.
I'm not sure... these are older 6TB drives I had left over from an old project... they are from 2014... so my guess is that they are whichever is worse. Before I worry about making it scream, at this point I'm just trying to get it out of the garage... plus the servers that host ESXi have over 4TB each for caching the VMs once they are running, so they should only write back as needed.... which should not be enough to worry about, but that was why I wanted the 2x 2TB SSDs for caching on the NAS, to help offload if possible.
 
Joined
May 7, 2020
Messages
7
The devil. See https://www.ixsystems.com/community/threads/WD-SMR-iX-Statement/

So: EFAX? If that's what they are, you Do Not Want. Seriously. Return, replace, any other path of action will have you even more frustrated than you are now. WD has been known to RMA EFAX for EFRX for people who run ZFS on them, you can try that.

Not to say looking deeply into the soul of that HBA isn't warranted, it is, @HoneyBadger is right about that.
Gotcha, okay, I'll read that... but like I said, at this point I'm just trying to get everything working. Then I can purchase new drives if needed.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
but that was why I wanted the 2x 2TB SSDs for caching on the NAS, to help offload if possible.

There is no write cache. If these are older drives they're likely EFRX and you are fine there.

You got articles to read. Until you do, we're just going to keep repeating stuff and it won't make sense to you. You gotta understand what ZIL and SLOG are, why there isn't a write cache, how raidz2 and mirrors behave with regards to block storage, what sync writes are, and why a SLOG benefits from being ultra-fast and cap-backed. For starters.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
How much RAM do you have?
16GB

Not much room for ARC (read cache), and the general recommendation for iSCSI is 64GB minimum. Someone who knows more about iSCSI and NFS will need to explain "why".
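
You can check how much ARC you actually get on 16GB from the shell - roughly, using the FreeBSD sysctls (values are in bytes):

[CODE]
# current ARC size and the configured ceiling
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max
[/CODE]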

Merits of a consumer board aside - you bought the thing already - you can do up to 64GB of RAM with that i3. I'd say get things running with the 16GB before you run out and buy more. Then you can look at performance, and if it's clear you need more RAM, only then buy. You could consider an X11SCH-F and ECC RAM at that point, just so you don't endanger your data through your RAM. Definitely not a "must", it really depends on how important these VMs are. You can always rebuild one or restore it from backup if its file system gets corrupted.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm not sure... these are older 6TB drives I had left over from an old project... they are from 2014.

You're in luck then; SMR drives hadn't hit the mainstream in 2014 - even if those drives are a few years newer, I'd wager they are all non-SMR. You might want to look at SMART statistics though, and make sure to do a good "burn-in" test once you get this system going (and you will get it going, don't worry!)
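
A minimal sketch of that check and burn-in from the shell - da0 is an example, run it per drive, and the long test takes hours on a 6TB disk:

[CODE]
# kick off an extended self-test, then come back later
smartctl -t long /dev/da0

# once it finishes, look for reallocated or pending sectors
smartctl -a /dev/da0 | grep -E "Reallocated|Current_Pending|Offline_Uncorrectable"
[/CODE]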

plus the servers that host ESXi have over 4TB each for caching the VMs once they are running, so they should only write back as needed

Are you talking about the now-deprecated VMware vFRC? As the name implies (vSphere Flash Read Cache) that will only handle reads, not writes. A full hardware spec dump might be good to determine if your strategy will work.

I'd like to see the results of attempting to build the 8-drive Z2 across the HBA and SATA ports, as well as the information about the origin of the HBA - if it's a "brand new" card that came from an Amazon third-party seller or eBay out of Shenzhen, then I would be suspicious of it as compared to it being a used-pull that came from a working server.

Not much room for ARC (read cache), and the general recommendation for iSCSI is 64GB minimum. Someone who knows more about iSCSI and NFS will need to explain "why".

Points 7 and 8 in the aforementioned "block storage" thread do it pretty well - short version is "random reads on spinning disk suck, ARC is a pretty effective band-aid but it needs to be big."


This is why I'm suggesting the VMs live on the SSDs. Random reads on NAND work great, and you can even do naughty things like push your pool occupancy up.
 