Using the rest of a big boot SSD - Best Practices?

Status
Not open for further replies.

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
So I finally got around to replacing my boot sataDOM with a 120GB SSD. Yay, I guess? I had the vague intention of using some of the leftover space for boot zvols for some sandbox bhyve VM cattle (don't care that it's not redundant; see cattle). The catch is that the freenas-boot zpool isn't really accessible from the GUI for creating zvols. What's the Best Practice here? Partition the SSD? That would be a PITA. Manually create the zvols from the shell? Quit whining and let 112GB of SSD sit idle? :p Suggestions?

The FreeNAS box is a fresh 11.1 install updated to U4, and the boot zpool currently looks like this:

Code:
root@haai:~ # zfs list -r freenas-boot
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                                    3.25G   112G    64K  none
freenas-boot/.system                                            7.99M   112G    35K  legacy
freenas-boot/.system/configs-3bc724bbb18d493585697320a07060e7   123K   112G   123K  legacy
freenas-boot/.system/cores                                       412K   112G   412K  legacy
freenas-boot/.system/rrd-3bc724bbb18d493585697320a07060e7      7.17M   112G  7.17M  legacy
freenas-boot/.system/samba4                                       47K   112G    47K  legacy
freenas-boot/.system/syslog-3bc724bbb18d493585697320a07060e7    224K   112G   224K  legacy
freenas-boot/ROOT                                               3.23G   112G    29K  none
freenas-boot/ROOT/11.1-U4                                       3.23G   112G  2.41G  /
freenas-boot/ROOT/Initial-Install                                  1K   112G   825M  legacy
freenas-boot/ROOT/default                                        169K   112G   825M  legacy
freenas-boot/grub                                               6.84M   112G  6.84M  legacy
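
For reference, the "manually create the zvols from the shell" option would look roughly like the sketch below. This is unsupported and purely illustrative; the zvol name and size are made up, and the GUI/middleware would know nothing about anything created this way.

Code:
# Unsupported, illustrative only: a hand-rolled zvol on the boot pool.
# The name "sandbox-vm0" and the 20G size are hypothetical.
zfs create -V 20G -o volmode=dev freenas-boot/sandbox-vm0
zfs list -t volume -r freenas-boot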


<insert wind whistling noise and graphic of Tumbleweed here>
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Ya, you said it, quit whining and leave the boot disk alone. Use another disk for the zvols.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Ya, you said it, quit whining and leave the boot disk alone. Use another disk for the zvols.

Ya see, I don't *have* another SSD just lying about the place; if I did, it would be in a mirror with the boot disk. I want my 112 GB...
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Now that we have mirrored swap since 11.1-RELEASE, one would need mirrored boot SSDs to not impair the stability of the system when moving swap from the data disks to the boot pool.

Looked into that. If I ever get a mirrored boot disk I might consider it. Although I have scar tissue when it comes to swap killing early SSDs. <twitch> "Wear Leveling? What's that?" :rolleyes:

At this point I'm leaning towards backing up the configuration, partitioning the SSD, and then reinstalling and restoring. Bleh...

I get why the boot zpool is inaccessible if it's a USB stick, but it's 2018, SATA SSDs are cheap, and they're cheaper than DOMs.
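
(If anyone does go down the backup-reinstall-restore road: the supported way to save the configuration is System -> General -> Save Config in the GUI. A rough shell-level equivalent, with a made-up destination path, is sketched below.)

Code:
# Copy the FreeNAS config database somewhere off the boot device before
# repartitioning/reinstalling. The destination path is hypothetical.
cp /data/freenas-v1.db /mnt/tank/backups/freenas-v1-$(date +%Y%m%d).db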
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Ya, they are cheap... so get a second one for whatever else and leave the boot disk alone.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I get why the boot zpool is inaccessible if it's a USB stick,
So you want the system to special-case the treatment of the boot device?

I don't think you really do get the design. FreeNAS is designed so that the boot device is completely separate from your data, and thus can be relatively disposable. Problems with the boot device therefore can't harm your data (unless you're using encryption and aren't managing your keys exactly perfectly). Meanwhile, both the installer and the manual tell you, quite clearly, that you can't use the boot device for anything else (yes, you can hack something together, but it will be completely unsupported). So the "best practices" for what you're trying to do are, "don't do it."
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
So I’m now thinking of using the rest of the space as L2ARC, since the SSD is 500MB/s read/write (at least that’s what it says on the tin). That way the non-redundancy is not an issue, and for my occasional usage that can benefit from some level 2 read cache it’s still useful. SLOG on a non-redundant device is a non-starter and I’m not doing anything that could use it anyway. My main zpool is 2X4X4TB (historical reasons, plus expansion), so I should get at least *some* mileage out of the SSD L2ARC for workloads that can warm up the cache. Especially since I’m now “borrowing” some of the RAM occasionally for VMs and Jails.

There’s still no way I’m getting around the backup-partition-install-restore cycle for this, though, is there?
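
(For the record, attaching a spare partition as a cache device is plain ZFS, but doing it on the boot SSD is outside what the FreeNAS GUI supports. A rough sketch, where the device "ada0", the "l2arc" label, and the pool name "tank" are all assumptions:)

Code:
# Carve the free space into a partition and add it to the data pool as L2ARC.
# Device, label, and pool names are made up; adjust to your own layout.
gpart add -t freebsd-zfs -s 100G -l l2arc ada0
zpool add tank cache gpt/l2arc
zpool iostat -v tank    # confirm the cache vdev shows up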
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Don’t do that either

Seriously? For my specific use case, which is Home media and Lab Server? Not exactly Enterprise Production. From previous experience with ZFS I know that adding L2ARC to a zpool at worst won’t hurt performance, and at best might give a small boost for some workloads, because my vdev configuration is not optimal.

Please expand on your assertion. Under-utilized resources offend my frugal Engineering spirit. ;):p:cool:
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
From previous experience with ZFS I know that adding L2ARC to a zpool at worst won’t hurt performance
How much RAM are you running for the system? Is it at least 32 GB?

Aside from the bad idea (IMHO) of trying to use the boot device for anything other than what is suggested... I will quote this from the v11 ZFS Primer (with some highlights):
If an SSD is dedicated as a cache device, it is known as an L2ARC. Additional read data is cached here, which can increase random read performance. L2ARC does not reduce the need for sufficient RAM. In fact, L2ARC needs RAM to function. If there is not enough RAM for an adequately-sized ARC, adding an L2ARC will not increase performance. Performance actually decreases in most cases, potentially causing system instability. RAM is always faster than disks, so always add as much RAM as possible before considering whether the system can benefit from an L2ARC device. When applications perform large amounts of random reads on a dataset small enough to fit into L2ARC, read performance can be increased by adding a dedicated cache device. SSD cache devices only help if the active data is larger than system RAM but small enough that a significant percentage fits on the SSD. As a general rule, L2ARC should not be added to a system with less than 32 GB of RAM, and the size of an L2ARC should not exceed ten times the amount of RAM.
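
(As a quick sanity check before even considering L2ARC, the current ARC size and hit/miss counters can be read from the FreeBSD kernel; if the hit rate is already high, a cache device buys little. A minimal sketch:)

Code:
# Current ARC size in bytes, plus hit/miss counters (FreeBSD ZFS sysctls).
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# Primer rule of thumb: >= 32 GB RAM before adding L2ARC, and L2ARC <= 10x RAM.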
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
How much RAM are you running for the system? Is it at least 32 GB?

Aside from the bad idea (IMHO) to try and use the boot device for anything other than what is suggested... I will quote this from the v11 ZFS Primer (with some highlights):

16GB, but at most two simultaneous NAS users, and the working set could fit into half the memory. NAS performance off the 2X4X4TB disks is adequate; the limiting factor tends to be the Gb Ethernet uplink. 10Gb is just not in the budget.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I wouldn’t bother with an L2ARC when you only have 16GB of RAM.
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
<vent>
Fine. Two 128GB SSDs in a mirrored pair, and two 32GB sataDOMs ordered for freenas-boot it is. In the meantime a 16GB sataDOM that's on its last legs will do until the 32s get here, along with an extra 16GB of DDR3-1600 ECC RAM. BTW, the whole "boot volume and appliance separate from the data" thing would be a lot more convincing if I hadn't just wasted most of a day troubleshooting an issue that turned out to be the "appliance" not being able to ignore any other previously installed boot volume. Using "freenas-boot" as the hardcoded pool name instead of the disk UUID and whatnot kinda breaks the appliance model, as I had to do much spelunking around in the installer shell to find a workable way to nuke the previous install so GRUB could safely ignore it. Camcontrol is not the new friend I wanted to make. Don't even get me started on the screwball encryption architecture.
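
(For anyone who hits the same stale-boot-pool problem: the general shape of the cleanup from the installer shell is sketched below. The device name "da1" is an assumption, and this destroys whatever is on that disk, so identify the target carefully first.)

Code:
# Find the old boot device, then wipe its partition table and ZFS labels
# so GRUB stops seeing a second "freenas-boot". "da1" is made up.
camcontrol devlist
gpart destroy -F da1
zpool labelclear -f /dev/da1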

Did I mention that I wound up with a bricked install in the first place because that release-that-shall-not-be-named "upgrade" destroyed my setup a while ago? I haven't had the chance to fix it until now. The new UI is a buggy eyesore, and heaven alone knows why cloning a clone of a zvol-based bhyve VM is flaky.

I'll also mention that having to hit the command line to find out anything but the most trivial information on my UPS doesn't exactly scream "appliance" either. That's been on the wish list for so long I think it's got mold, and UPSes are pretty basic stuff for a NAS. Ditto for copying/moving zvols and datasets between pools.
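
(On the UPS point: FreeNAS drives the UPS through NUT, so the command-line workaround is querying the NUT daemon directly. The UPS name "ups" below is the usual default but is an assumption here.)

Code:
# Ask the NUT daemon for everything it knows about the UPS.
# "ups" is the identifier configured under Services -> UPS (assumed default).
upsc ups@localhost
upsc ups@localhost battery.charge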

Color me peeved, as this is 101-level User Acceptance Testing stuff. I *AM* the better idiot as far as testing goes, and why yes, Virginia, I am qualified to bitch about this, with a 20-plus-year career as a Systems Architect and Sales Engineer, not to mention being a ZFS enthusiast since before build 50.

To paraphrase a well-known quote: FreeNAS 11.1-U4 is the worst Free NAS software out there, except for all the others. Bah, humbug. :p

</vent>

Where's Cyberjock when I want to berate him? :rolleyes::cool:
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Ya, if you are complaining on that level, you’ll do best to stick to 9.10...
 

DaPlumber

Patron
Joined
May 21, 2014
Messages
246
Ya, if you are complaining on that level, you’ll do best to stick to 9.10...

No thanks. Some sleep and some caffeine and I’m feeling less grouchy. :cool: But seriously, these are design and development issues rather than break/fix. Will consult for ERs/bugfixes... :rolleyes:

I only complain because I care! :p
 