First ZFS FreeNAS build

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
Please see the system in the signature.

I have discovered a newbie boo-boo on my part. The Supermicro X11SSM-F-O motherboard supports exactly 8 SATA devices on board.
I have 8 10TB drives for storage... and 1 120GB SSD for OS. Grin. 9 drives for 8 ports.

I have opted to add an LSI 6Gbps SAS HBA, an LSI 9211-8i (9201-8i) flashed to P20 IT mode, from eBay.

What are folks' thoughts on how to structure this? Should I put all the storage drives on the LSI with the OS on the motherboard? Ideas? I'm very new to SAS (no experience at all), and the "Don't be afraid of SAS-sy" post hasn't fully sunk in yet.

TIA
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
That's why I like the X11SSH-F: you can boot from M.2 and keep all 8 SATA ports.

How you structure this is of little concern. It'll work any which way. I'd personally probably use the HBA for everything, but that's more out of a personal sense of "neatness" than any technical reason.

Side note: If your switch supports it, you can do an active-active LACP LAG (link aggregation group) on the X11SSM. With 15 stations, you'll see some benefit. Monitor that and see whether you even need 10Gbit.
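For reference, an LACP LAG is configured from the FreeNAS GUI under Network, but the underlying FreeBSD setup looks roughly like this sketch (igb0 and igb1 are assumed NIC names; check yours with ifconfig):

```shell
# Create an LACP lagg from the two onboard NICs and move the static
# address onto it. Interface names igb0/igb1 are assumptions.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.254 netmask 255.255.255.0
ifconfig lagg0    # verify: should show laggproto lacp and both ports
```

The switch side must also have the two ports configured as an LACP channel, or the link will not come up active-active.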
Also, a single vdev will give you roughly gigabit throughput from the disks. How do you intend to structure those 8 drives to get beyond gigabit? Adding: If your main working set fits into those 64GB, it'll be read at RAM speed. Writing will be at HDD speed, roughly 100MB/s per vdev, and reading from HDD for data that doesn't fit into ARC is likewise roughly 100MB/s per vdev. Consider turning off sync at the dataset level for those Macs and CIFS.
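Turning sync off at the dataset level is a one-liner from the FreeNAS shell. In this sketch, tank/macshare is a placeholder pool/dataset name:

```shell
# Disable sync writes on the CIFS dataset the Macs use.
# "tank/macshare" is a placeholder; substitute your own pool/dataset.
zfs set sync=disabled tank/macshare
zfs get sync tank/macshare    # verify the property took
```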
 
Last edited:

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
> That's why I like the X11SSH-F, you can boot from M.2 and keep all 8 SATA.
>
> How you structure this is of little concern. It'll work any which way. I'd personally probably use the HBA for everything, but that's more out of a personal sense of "neatness" than any technical reason.
>
> Side note: If your switch supports it, you can do an active-active LACP LAG (Link Aggregated Ethernet) on the X11SSM. With 15 stations, you'll see some benefit. Monitor that and see whether you even need 10Gbit.
> Also, a single vdev will give you gig throughput, roughly. How do you intend to structure those 8 drives to get beyond gig?

Thanks, Yorick! We could not find any X11SSH-F boards at the time we HAD to buy this. I had planned to structure them as one RAIDZ2 vdev of all 8 drives. If you have a better idea, please share! I'm still very much a ZFS noob.

The 10GigE would be for expansion in 3-4 years when we might be able to move to SAS drives.

Staff understands that this is not a work-on-the-server environment. Boss would pass out at the infrastructure cost to make that happen (new switches, some new wiring, etc., plus the cost of a server setup). Compared to the 2012 Mac mini server with attached FireWire external drives it is replacing, I think this is a vast improvement.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Okay. With a single vdev you'll get roughly gig speed; SAS drives wouldn't change that. I recommend raidz2 so the pool can survive another drive failure during a resilver.

Start there, maybe do a LAG, monitor. See how much of your work set fits into ARC. And from there, with some real metrics on throughput and access patterns, you can plan a multi-vdev system for more throughput, if warranted.
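In practice the pool is built from the FreeNAS GUI, but the single-vdev raidz2 layout discussed here corresponds to roughly the following; the pool name "tank" and the da0..da7 device names are placeholders:

```shell
# One 8-wide raidz2 vdev: any two drives can fail, roughly six
# drives' worth of usable space. Device names are placeholders.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool status tank    # shows a single raidz2-0 vdev with 8 members
```

More vdevs (e.g. two 4-wide raidz2) would raise throughput at the cost of capacity, which is the multi-vdev planning mentioned above.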
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
> Okay. With a single vdev you'll get roughly Gig speed. SAS drives wouldn't change that. I recommend raidz2 so you can survive a resilver in failure case.
>
> Start there, maybe do a LAG, monitor. See how much of your work set fits into ARC. And from there, with some real metrics on throughput and access patterns, you can plan a multi-vdev system for more throughput, if warranted.

Thanks again!

Learning new stuff here. Do I need to put in separate read and write caches? We are putting the system on a very nice battery backup and hopefully setting it up to shut down gracefully in the event of power loss.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Read cache: your ARC (RAM) is that. Watch the ARC in production and see whether an L2ARC (SSD) would even make sense; you can always add one later. Keep in mind that you won't get above gig speed with a single link, and your HDDs can supply that, so an L2ARC likely won't do much. There are specific use cases (tons of small files) where it might help.
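Watching the ARC can be done from the FreeNAS shell with standard tools; a sketch (on some FreeNAS versions the summary script is named arc_summary.py rather than arc_summary):

```shell
# Summarize ARC size and hit ratio, then pull the raw counters.
# The sysctl counters work on any FreeBSD/ZFS box.
arc_summary | head -40
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
```

A consistently high hit ratio means the working set fits in RAM and an L2ARC would add little.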

Write cache: not really a thing. ZFS is copy-on-write. There is the SLOG, but that's only for sync writes. That's why I recommend just turning sync off on the dataset your Macs access via CIFS: sync is overkill for CIFS, and it slows your writes to a crawl. I don't think adding a battery-backed, redundant SLOG is worth it just to keep Mac CIFS writes synchronous. Your Windows clients won't sync-write over CIFS anyway.

Edited: If you look out to the future, there is a TON of exciting stuff coming in the next two years: dRAID and fusion pools. You'll be able to have distributed parity (lightning-fast resilvers) and keep metadata and small files on SSD while the bulk of the data is on HDD. Very, very cool. That'll be 12.0, 12.1 and later, and it's a game changer. In the future. For now: just raidz2, maybe a LAG, and gathering metrics.
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
> Read cache: Your ARC, RAM, is that. Watch the ARC in production, see whether an L2ARC (SSD) would even make sense. You can always add one if you like. Keep in mind that you won't get above Gig speed with a single link, and your HDD can supply that, so additional ARC likely won't do much. There are specific (tons of small files) use cases where additional ARC might help.
>
> Write cache: Not really a thing. ZFS is copy-on-write. There is SLOG, but that's for sync writes. That's why I am recommending to just turn sync off on the dataset that your Macs access via CIFS. Sync is overkill for CIFS, and it slows your writes to a crawl. I don't think adding a battery-backed redundant SLOG is worth it just to keep Mac CIFS sync write. Your Windows clients won't sync write on CIFS, anyway.
>
> Edited: If you look out to the future, there is a TON of exciting stuff coming in the next two years. DRAID, fusion pools - you'll be able to have distributed parity (lightning fast resilvers) and keep metadata and small files on SSD, while the bulk of data is on HDD. Very, very cool. That'll be 12.0, 12.1 and later, and, it's a game changer. In the future. For now, just raidz2, maybe a LAG, and gathering metrics.

Very awesome response. I thank you for the education.

JC
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
I'm looking over Jails and Plugins and thinking about installing Nextcloud. When I try, I get networking errors. Digging in, it seems there are lots of issues with plugins. I suppose they are easier than by-hand installs, but they don't seem to fit the normal person's interpretation of a PLUG-IN. Sigh. I'll post some errors and such in a bit. Disappointed in the nomenclature right now.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
> I suppose they are easier than by-hand installs,
I haven't heard that before. The plugins are usually behind in terms of version, so many users try to manually update them to the latest. But the plugins work just fine.

What are the networking errors that you are seeing?
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
> I haven't heard that before. The plug-ins are usually behind in terms of version and so many users try to manually update them to the latest. But the plugins work just fine.
>
> What are the networking errors that you are seeing?
I'm certain that the problem is with me. It really feels like I'm missing important steps in the setup, almost as if there are things EVERYONE knows (about FreeNAS, FreeBSD, setting up plugins) but not me.

I get this error
Code:
pkg.cdn.trueos.org could not be reached via DNS, check nc's network configuration
Partial plugin destroyed


When I select the plugin from the list, name it, and hit Save without changing anything else (NAT is checked).

I get
Code:
Error: nc had a failure
Exception: RuntimeError
Message: Stopped nc due to VNET failure
Partial plugin destroyed


When I choose VNET and vnet0:bridge0 as the interface, assign a valid internal IP and mask, and specify the default router 192.168.1.1.

I think I need to do some networking setup in FreeNAS. May be wrong.

Current network is 1GbE physical with a static 192.168.1.254/24 and router 192.168.1.1.

I cannot get the plugin to get a DHCP address, either.

I'm going to search the community for networking tips. If you know of any links, please share.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'd check the absolute basics first. What DNS server did you specify, and is it reachable? Does your NAT gateway out to "the Internet" (the thing that's on 192.168.1.1) allow the traffic out?

I'm a CLI person at heart, so from FreeNAS CLI, I'd try:

ping 192.168.1.1

If this fails, do an "arp -na" next. If there's no entry for 192.168.1.1, or not the expected entry, you have a L1/L2 issue. Fix that. If arp works but ping doesn't, allow ping for troubleshooting purposes.

ping 8.8.8.8

If this fails, you have a basic routing issue. Fix it.

ping www.google.com

If this fails to resolve, you have a DNS issue. Fix it.
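The layered checks above can be rolled into one small script; a sketch using FreeBSD ping syntax (-t is a reply timeout on FreeBSD, unlike Linux), with the addresses from this thread:

```shell
#!/bin/sh
# Bottom-up connectivity check: L1/L2 -> gateway -> routing -> DNS.
GATEWAY=192.168.1.1    # default router from this thread
PUBLIC=8.8.8.8         # any well-known public IP
NAME=www.google.com    # any resolvable hostname

if ! ping -c 1 -t 2 "$GATEWAY" > /dev/null 2>&1; then
    echo "Gateway unreachable; checking ARP for an L1/L2 issue:"
    arp -na | grep -q "$GATEWAY" || echo "  no ARP entry: check cabling/switch/interface"
elif ! ping -c 1 -t 2 "$PUBLIC" > /dev/null 2>&1; then
    echo "Gateway OK but $PUBLIC unreachable: basic routing issue"
elif ! host "$NAME" > /dev/null 2>&1; then
    echo "Routing OK but $NAME will not resolve: DNS issue"
else
    echo "Basic connectivity looks fine"
fi
```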
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
All those pings (192.168.1.1, 8.8.8.8, www.google.com) got replies back instantly from the FreeNAS shell.

It seems to me the issue is not with the main OS networking but with the networking the plugin/jail is trying to create. That is something I know little about. Do I need to make VLANs or bridges, or is it just supposed to work with NAT?

There are a couple of threads about supposed bugs in 11.3-U1, but they don't match my situation.
Thanks
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
There were reports that the resolv.conf file was not keeping changes made to it, and people were advised to clear it out and enter DNS server info there. Tried that; same issue. It is as if the NAT is not working right. I'm checking out my router as well (an Adtran NetVanta 3140); sadly, it is a brand whose terminology I do not follow easily. There are instructions for setting up port forwarding etc. from the outside, but the ports keep showing up as closed. Sigh. Thanks for listening.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I have never used NAT for plugins or jails, and am unclear what the actual use case is. How does this behave when you don't use NAT?
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
Aha! I thought you HAD to use one of the checkbox options. It never occurred to me to just put in a valid IP and mask. I'm a moron. Wow. I'm missing the forest for the trees! Thanks. I have just successfully installed Nextcloud.... Now for the rest of it... LOL
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'm glad you got that going! As for NAT, there is one use case I've seen discussed in the FreeBSD world: A FreeBSD box with an interface directly to the public Internet, running pfSense, and then NAT the jails behind that external IP.

That use case does not seem relevant to FreeNAS at all. I'd expect FreeNAS to be internal, and some other piece of networking gear handles NAT to the outside. Which means "every jail with its own IP" is an easier way to set it all up. I am firmly in the "no NAT is good NAT" camp - NAT as little as necessary.

Maybe there are people who run FreeNAS as their Internet-facing router. Maybe there are other use cases I'm not thinking of.

Personally, I'll just pretend that NAT checkbox isn't even a thing, I want no part of it.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
For the sake of understanding what's going on, I was re-reading your post. Aha.

> When I choose VNET and vnet0:bridge0 as interface, assign aa valid internal IP and mask, and specify the default router 192.168.1.1

The point of NAT is that you are behind the IP address of the FreeNAS server's interface; interface, address and netmask would be blank. Likely no one tested setting those manually while also choosing NAT, because the two are mutually exclusive. Agreed that the middleware should probably give a better error message when someone unchecks NAT, enters values, and then re-checks NAT, leaving those values in place. IPv4 address, interface and netmask are greyed out for DHCP and NAT for a reason: they aren't specified manually in those modes.

So, my personal distaste for NAT notwithstanding, setting up a plugin with NAT works in my testing. You cannot specify an IP address for the plugin in that case though, it'll use the one that belongs to FreeNAS.
 

binarybcc

Dabbler
Joined
Mar 16, 2020
Messages
20
Good to know. I'm fine with a dedicated IP, so I'll stick with that.

Thanks again.
 