Sanity Checking a New Build


Magius

Explorer
Greetings! I just registered but I've been watching FreeNAS off and on for nearly a decade. I have a strong storage/NAS background and I believe I'm caught up on the latest stickies and guides, but I wanted to seek real human advice about my upcoming build. Please don't hesitate to call me out if something I'm planning sounds stupid! :p I won't take it personally, and I'd much rather build it right the first time! I also tend to overanalyze things and spec stuff out for future growth, so recommending a little bit of overkill over saving $50 today is no problem.

So, this will be my 3rd home server build, but my first experience with BSD, FreeNAS or ZFS. As a storage geek I've always wanted to run ZFS at home, but when I built my second server in 2008, ZFS on Linux (Fuse?) was nowhere near where I'd want it to be to trust my data to it. FreeNAS existed at the time, but it also wasn't there, IMHO. Needless to say things have come a long way in ~8 years, I'm ready to upgrade my server, and FreeNAS has evolved into exactly what I've always wanted!

My plan is to retire my original home server (~2004, P4, HW RAID, WinXP) currently serving as my backup target and move my current server (~2008, see specs below) into the backup role. Then I'll build a shiny new FreeNAS server (proposed specs below) as the primary.

Current Server specs (future backup target):
  • Intel S3210SHLC Mobo
  • Celeron E1500 (2x 2.2GHz)
    • Can drop in a spare Q6600 (4x 2.4GHz) which would add VT-d support
  • 4GB (2x2GB) DDR2 ECC
    • I could fill the two empty slots to max out the board at 8GB, but I'd have to buy the RAM
  • 2x IBM BR10i flashed to IT mode (yesteryear's M1015, based on 1068e - limited to 2TB drives)
  • 11x 2TB HDD (10x mdadm RAID6 XFS, 1x hot spare)
    • Plan to move 6 of these to the new FreeNAS server and rebuild the other 5 under ZoL as a replication target for the FreeNAS server
    • This chassis has 15 bays, so I figure eventual possible growth to 3x vdevs of 5x2TB drives each.
  • Ubuntu 12.04 LTS (upgraded in place from 8.04 LTS)
    • Plan to rebuild as either Centos 7 (first choice?) or Ubuntu 16.04 LTS
    • Update: If I go ESXi on the primary server and let it handle any future VMs, then the backup machine could be bare-metal FreeNAS

Proposed FreeNAS Server specs:

  • Supermicro X11SSM-F or X11SSH-LN4F Mobo
    • $30 more adds two extra NICs (not too useful unless I go ESXi and VT-d them to FreeNAS) and an M.2 slot.
  • E3-1245v5 or E3-1230v5 or i3-6100
    • All have ECC, VT-x, VT-d and AES-NI. Update: I considered Quicksync as a growth feature, but apparently it's not supported in FreeBSD at all.
  • 32GB (2x16GB) DDR4 ECC (empty slots for growth to 64GB)
  • 2x 16GB SATA DOM for OS (mirrored)
    • or if even possible, could mirror a SATA DOM with an M.2?
    • Update: System boot drives TBD. Options include DOM, SATA SSD, USB and M.2
  • 2x IBM M1015 flashed to IT mode
  • Update: 1x IBM M1015 flashed to IT mode + 1x HP SAS Expander + 1x PCIe Molex power 'mining card'.
    • Allows VT-d passthrough of 24+ SAS ports while consuming only one motherboard PCIe slot
  • 6x 3TB WD Red (plus 6x 2TB from current server) as two RAIDZ2 vdevs
    • 20TB usable pool (~40-50% free)
    • Chassis has 20 bays, growth for one future RAIDZ2 vdev w/ 8 drives
  • 2x (or 3x if using M.2 instead of a 2nd SATA DOM) empty SATA ports in case I need a SLOG or L2ARC down the road (no free bays, but could stick SSDs to internal chassis walls)
  • Update: SLOG: if determined to be required in the future, there are lots of options (rough sketch of adding one after this list):
    • NVME PCIe AOC, ie: Intel 750. Theoretically could be passed through with VT-d.
    • NVME M.2 SSD. Same theory on VT-d. Potentially cheaper/cleaner than AOC, but to my knowledge there are currently none available with power loss protection. (ie: Intel S3500 is SATA M.2, not PCIe). If that changes, it's a viable option.
    • Standard 2.5" SATA SSD on the SAS Expander, ie: Intel S35x0/37x0 - 'Guaranteed' to work, just not as low latency as the NVME options.
  • Planning to build early-ish 2017 with FreeNAS 10-STABLE
    • As a fun side note, I actually wrote a (very) small piece of software that runs under the hood in FreeNAS 10, so it will be neat to have that running on my own server :)
  • Update: Completely forgot to mention that I'll be reusing my APC Smart-UPS 1500RM2U to power both servers.
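
For the SLOG option above, here's the rough idea of what adding one later would involve (FreeNAS would normally do this through the GUI; 'tank' and the device names are placeholders I made up):

    # Add a single SLOG device to an existing pool...
    zpool add tank log nvd0
    # ...or a mirrored pair instead
    # zpool add tank log mirror nvd0 ada6
    # Log vdevs can be removed later if they turn out not to help
    zpool remove tank nvd0
    zpool status tank    # the new device shows up under a 'logs' section
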
Finally, before I start in with the questions, here are my minimum additional use cases for the servers:
  • Plex or Emby or Kodi (jail or VM)
  • Nextcloud or Owncloud (jail or VM)
  • Windows VM with IP Camera NVR software - should record to a dataset on the FreeNAS zpool
I would prefer that the backup server not have to run 24/7 sucking power, in which case all functions above should be either a jail or VM on the FreeNAS server. However, if it makes the most sense, I could host a VM or two on the backup server (hence the CPU & RAM upgrades mentioned above) and run it 24/7... Update: TBD on ESXi vs. bare metal for the primary server, but leaning towards ESXi now for easier hosting of non-FreeNAS VMs. Currently testing both options in my "home lab".

Questions:
  1. Have I said anything outrageously dumb yet? :D
  2. Almost all of my old 2TB drives are 512B sectors, while the new 3TB drives will be 4kB sectors. I'm not proposing to mix these devices within a vdev, but would be combining the respective vdevs into one pool. I understand that this is not "perfect", nor is having vdevs with different disk counts and capacities, or otherwise unbalancing the pool in any way, but as far as I can tell this is not "asking for trouble" either? A small performance hit is OK. An increased risk of data corruption/loss is not. Am I missing anything here?
  3. Similar to the above question, down the road I may want to upgrade the 2TB drives to something larger (one at a time + resilver). For this reason I'd like to set ashift=12 on the vdev at creation, even though it's currently using 512b sector drives. From my research, this will waste some space with overhead/padding, but again shouldn't lead to corruption, etc? Am I missing anything here or is this good planning for the long term?
  4. For my final 20-drive 3xRAIDZ2 vdev configuration, I'm leaning towards 6/6/8 vs. 6/7/7, even though the "2^n+2" rule seems to be moot nowadays, particularly when compression is used. 6 additionally seems to be the "perfect" size for minimizing RAIDZ2 overhead, so I figured I'd capitalize on that for two vdevs. 7 disks vs. 8 is about the same in terms of overhead, both much worse than 6. Does anyone have a counterargument for going 6/7/7 instead?
  5. Speaking of compression... On a pure multimedia dataset dedicated to already compressed data, should I leave the default compression on anyway? I don't care about a "negligible" CPU hit, but are there other possible downsides? possible upsides? For example some compression algorithms make compressed data larger... I have to assume lz4 wouldn't do that since it's the default and most people use their NAS for compressed media, but I hate assumptions and my searching of both this forum and Google turned up very little...
    1. Update: On further research, I found that LZ4 'aborts' quickly if it detects that it can't compress data by a set amount. As such, it might as well be left on, even for datasets of incompressible media.
  6. Plex/Emby/Kodi and Nextcloud/Owncloud should all be easy enough to set up in a jail, but I see three options for the Windows VM:
    1. Host it on the Backup Server (ie: KVM)
      The only argument for this seems to be that the backup server will do literally nothing when it's not receiving replication data (so much so that I'd prefer to power it down), so the Windows VM should be able to get great throughput. Unless there's some limitation in the hypervisor...
    2. Host it in a FreeNAS virtualbox/Bhyve jail
      Most of what I've read suggests FreeNAS isn't the greatest VM host, even with the new-ish Virtualbox jails, but speculation is Bhyve might be better? I wouldn't hang my hat on speculation anyway, but does anyone have opinions on hosting a VM inside FreeNAS and maintaining good GbE throughput?
    3. Virtualize FreeNAS and Windows on top of ESXi.
      I've also read that hosting FreeNAS on ESXi and then sharing back the zpool to host VMs generally gives terrible performance, and the reasons why make sense. My main concern is keeping high GbE throughput to the Windows VM for recording the IP cameras. A side issue is that if I go this way I'd lose access to AES-NI and Quicksync from the CPU, though at this point they're both more for "future possibilities". Does anyone have thoughts on these options (assuming whatever I do, I do it "correctly" [VT-d, etc.])? I'll likely end up testing all three before committing to anything, to satisfy my OCD... :)
    4. Update: It turns out I was grossly over-estimating the bandwidth required by typical 1080p H.264 IP cameras (I'll have 4). I have no concerns now about bandwidth due to hosting the camera recording software on a Windows VM, whether inside FreeNAS or on ESXi.
  7. I feel like I have to be missing something, but I've searched, I swear. Is there a list of all the volume properties that can be set in FreeNAS, or even a list of common ones recommended to be modified at pool creation time? For example 'autoexpand=on' and 'atime=off' come to mind as things I'd want on my pool. The best list I've found is http://docs.oracle.com/cd/E19253-01/819-5461/gazsd/index.html, but I've learned not to assume that Oracle documentation is 100% applicable to FreeNAS (not to mention 'autoexpand' isn't even in that list...). Any pointers would be much appreciated. (I've sketched the properties I have in mind right after this list.)
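
For illustration, here's roughly what I have in mind at the CLI, covering questions 3, 5 and 7 (I know FreeNAS normally handles pool creation and properties through the GUI, and some of these may already be defaults; 'tank' and the dataset names are placeholders):

    # Force 4K alignment (ashift=12) for newly created vdevs
    sysctl vfs.zfs.min_auto_ashift=12
    # Pool-level: grow automatically after whole-vdev disk upgrades
    zpool set autoexpand=on tank
    # Dataset-level: skip access-time updates, keep LZ4 on even for media
    # (LZ4 aborts early on incompressible blocks, so it's essentially free)
    zfs set atime=off tank
    zfs set compression=lz4 tank/media
    # See everything that can actually be tuned, and where each value comes from
    zpool get all tank
    zfs get all tank/media
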
Thank you all, and I apologize for the super long first post. I tend to over-analyze everything!
 

Stux

MVP
If you're planning to replicate to the backup server, the ZoL ZFS implementation needs to support *all* of the "features" enabled on the FreeNAS pool.
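
A quick way to check before committing, roughly (pool and dataset names are placeholders):

    # On the FreeNAS side: which feature flags are enabled/active on the pool
    zpool get all tank | grep feature@
    # On either side: every feature flag that ZFS build knows about
    zpool upgrade -v
    # A dry-run receive on the backup box should also complain early if the stream needs something unsupported
    zfs send tank/data@snap | ssh backup zfs receive -n -v backuppool/data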

Alternatively, upgrade the backup to FreeNAS too, after adding 4GB of RAM.

I'd go for the iohyve VM approach

I think I'd go 6+7+7. Cheaper to buy 7 drives rather than 8 ;)

Also, you might want to consider replacing the 6 smaller drives with the bigger drives later, then buying one more small drive and adding your 3rd RAIDZ2 vdev with the now-spare smaller drives. That just means you get the benefit of capacity expansion without having to buy so many large drives at once. Theoretically, you can do the same trick next time too.
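
In rough CLI terms (FreeNAS does this through the GUI, and all names here are made up), the later upgrade would look like:

    zpool set autoexpand=on tank
    # Swap one 2TB disk for a 3TB disk, wait for the resilver, repeat for the whole vdev
    zpool replace tank gptid/old-2tb-disk gptid/new-3tb-disk
    zpool status tank
    # Once every disk in that vdev is replaced it grows, and the pulled 2TB drives
    # (plus one or two extra) become the new vdev
    zpool add tank raidz2 da10 da11 da12 da13 da14 da15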


Maybe the integrated graphics in the 1245 might be useful one day, but today it will be unused. I think I saw some talk of vt-g or something like that for passing through integrated graphics into vms. If that will be possible one day, and is supported, it might solve your quick sync concerns. You should research it.

Otherwise, the 1245 is only 3% faster than the 1230 and the premium is not worth it.

I think the Core 2 processors will be a bottleneck, that and the whole motherboard, but I don't think it matters.
My backup system is a QX9650 w/ 8GB, and yes, it bottlenecks the transfers and the disk I/O with just 5x 3TB WDs. Still a fine backup replication target.

I did a proof of concept of it: WoL when I needed to back up to it, wait X minutes, start the replication script, have the replication script send a shutdown command (ssh/sudo shutdown magic), and it worked fine.
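
In outline it was something like this (MAC address, hostnames and the replication command are placeholders, not my actual script):

    #!/bin/sh
    # Wake the backup box, give it time to boot, replicate, then power it back off
    wake igb0 00:11:22:33:44:55          # FreeBSD wake(8); wakeonlan on Linux
    sleep 300
    zfs send -R -i tank@prev tank@now | ssh backup zfs receive -F backuppool
    ssh backup sudo shutdown -h now      # needs passwordless sudo for shutdown on the target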

Then I found more uses for the backup and leave it on now, actually.

Good luck
 

Magius

Explorer
If you're planning to replicate to the backup server, the ZoL ZFS implementation needs to support *all* of the "features" enabled on the FreeNAS pool.
Now that is some great information! I had not considered that at all. How does that work with something like rsync.net, which is offering cloud-based ZFS replication targets? Wouldn't they have to run the Oracle implementation with *all* the features, just in case one of their clients is using a feature their cloud doesn't support? Can I ask, is this a real concern, as in, people have seen it not work replicating to ZoL? Or is this more of a "just keep your eyes open when selecting initial versions and before upgrading FreeNAS versions down the road"

Alternatively, upgrade the backup to FreeNAS too.
I knew this would be one of the first replies I got :) I have no rebuttal, other than I "want" to maintain a Linux machine. In a perfect world, I'd "want" that to be Red Hat, which is why I'm currently leaning towards Centos. A lame argument could be made for the value of heterogeneity as well (eg: in case a nasty FreeNAS bug shreds the main pool, the backup on ZoL is less likely to be affected by the same bug), but frankly that's so low probability it's not even on the radar...

I'd go for the iohyve VM approach
Any particular reason? Have you tried it with good results? Or just following good practices and avoiding ESXi?

I think I'd go 6+7+7. Cheaper to buy 7 drives rather than 8 ;)

Also, you might want to consider replacing the 6 smaller drives with the bigger drives later, then buying one more small drive and adding your 3rd RAIDZ2 vdev with the now-spare smaller drives. That just means you get the benefit of capacity expansion without having to buy so many large drives at once. Theoretically, you can do the same trick next time too.
I did mention that spending $50 now ($99 in this case, for a 3TB Red) is perfectly OK as long as I get something "better" in the long run. So short of saving the cost of one drive, any other reason 6+7+7 might be better? Am I wrong in thinking that 6+6+8 has the potential to be (minimally) both higher performing and more efficient in terms of capacity overhead? I did like your recommendation on upgrading later in either case, just resilver a vdev with 6x 3TB drives and use the replaced 2TB drives (with a friend or two) to create the new vdev. Clever suggestion to keep in mind.

Maybe the integrated graphics in the 1245 might be useful one day, but today it will be unused. I think I saw some talk of vt-g or something like that for passing through integrated graphics into vms. If that will be possible one day, and is supported, it might solve your quick sync concerns. You should research it.

Otherwise, the 1245 is only 3% faster than the 1230 and the premium is not worth it.
Agree 100% the 1245 is not worth it, but it's the cheapest E3 w/ 4c/8t, AES-NI and Quicksync. I need to seriously decide whether I want Quicksync or not, and again, it boils down to the "spend $50 more now..." philosophy, where having it in the future for 4K content might be worthwhile. Some day Plex might support it, or I might decide to run Emby, and if I didn't have support in my CPU my OCD would drive me nuts until I bought a new CPU... I'm building a machine that will run for at least a decade, so I try to go "big" without going "crazy"...

I think the Core 2 processors will be a bottleneck, that and the whole motherboard, but I don't think it matters.
My backup system is a QX9650 w/ 8GB, and yes, it bottlenecks the transfers and the disk I/O with just 5x 3TB WDs. Still a fine backup replication target.

I did a proof of concept of it: WoL when I needed to back up to it, wait X minutes, start the replication script, have the replication script send a shutdown command (ssh/sudo shutdown magic), and it worked fine.

Then I found more uses for the backup and leave it on now, actually.
Yeah the old server will never match the new server, but it mostly saturates GbE (85-95 MBps) on large writes to the RAID today, so as a backup target it should be more than sufficient. I agree that eventually I'll probably end up finding something useful for it to do, or otherwise be scared to mess with a perfectly configured FreeNAS, thus allocate the backup server a new task in 5 years :) Until then, I'd like to save a little bit of power, and was contemplating a very similar WoL concept to what you described.

Thank you for all of your thoughts!
 

Stux

MVP
Now that is some great information! I had not considered that at all. How does that work with something like rsync.net, which is offering cloud-based ZFS replication targets? Wouldn't they have to run the Oracle implementation with *all* the features, just in case one of their clients is using a feature their cloud doesn't support? Can I ask, is this a real concern, as in, people have seen it not work replicating to ZoL? Or is this more of a "just keep your eyes open when selecting initial versions and before upgrading FreeNAS versions down the road"

I believe FreeNAS is relatively bleeding edge about enabling features on new pools.

If you test a replication, and it works, then it works. You will have to manually upgrade your pool for future features, so it won't change on you.

Otherwise, you can manually craft the pool without any incompatible features.
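
i.e. something along these lines (pool name, disks and the exact feature list are just illustrative):

    # -d creates the pool with every feature flag disabled, then you opt in to
    # only the features the backup's ZFS build also understands
    zpool create -d \
        -o feature@lz4_compress=enabled \
        -o feature@async_destroy=enabled \
        tank raidz2 da0 da1 da2 da3 da4 da5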

I knew this would be one of the first replies I got :) I have no rebuttal, other than I "want" to maintain a Linux machine. In a perfect world, I'd "want" that to be Red Hat, which is why I'm currently leaning towards Centos. A lame argument could be made for the value of heterogeneity as well (eg: in case a nasty FreeNAS bug shreds the main pool, the backup on ZoL is less likely to be affected by the same bug), but frankly that's so low probability it's not even on the radar...

That's what VMs are for ;)


Any particular reason? Have you tried it with good results? Or just following good practices and avoiding ESXi?

I'm a fan of hypervisors rather than jails, and I'm not necessarily a fan of ESXi. I haven't looked into iohyve yet, but I plan to.

I did mention that spending $50 now ($99 in this case, for a 3TB Red) is perfectly OK as long as I get something "better" in the long run. So short of saving the cost of one drive, any other reason 6+7+7 might be better? Am I wrong in thinking that 6+6+8 has the potential to be (minimally) both higher performing and more efficient in terms of capacity overhead? I did like your recommendation on upgrading later in either case, just resilver a vdev with 6x 3TB drives and use the replaced 2TB drives (with a friend or two) to create the new vdev. Clever suggestion to keep in mind.

Your choice. IOPS will be the same, and sequential performance will probably be the same. I have a 24-bay build, to consist of 3x8. 8 is a lot of drives to purchase at once, that is all :)

I'm still vacillating over whether I should go with 4x6 or 12x2 instead. I think I'll re-evaluate when I get 10GbE going. Luckily I know that when I buy the next set of drives, I'll have room to replicate things around.

Agree 100% the 1245 is not worth it, but it's the cheapest E3 w/ 4c/8t, AES-NI and Quicksync. I need to seriously decide whether I want Quicksync or not, and again, it boils down to the "spend $50 more now..." philosophy, where having it in the future for 4K content might be worthwhile. Some day Plex might support it, or I might decide to run Emby, and if I didn't have support in my CPU my OCD would drive me nuts until I bought a new CPU... I'm building a machine that will run for at least a decade, so I try to go "big" without going "crazy"...

Yep, so you've worked out that you'd be paying for Quicksync, which may or may not ever work for you.

This is a promising reason to consider it:
https://01.org/igvt-g

Intel® Graphics Virtualization Technology (Intel® GVT) allows VMs to have full and/or shared assignment of the graphics processing units (GPU) as well as the video transcode accelerator engines integrated in Intel system-on-chip products. It enables usages such as workstation remoting, desktop-as-a-service, media streaming, and online gaming

Yeah the old server will never match the new server, but it mostly saturates GbE (85-95 MBps) on large writes to the RAID today, so as a backup target it should be more than sufficient. I agree that eventually I'll probably end up finding something useful for it to do, or otherwise be scared to mess with a perfectly configured FreeNAS, thus allocate the backup server a new task in 5 years :) Until then, I'd like to save a little bit of power, and was contemplating a very similar WoL concept to what you described.

Thank you for all of your thoughts!

The WoL worked very well... but then I upgraded my plans from FreeNAS being a backup dump, to actually being a primary... and went big :)

(see signature)
 

Ericloewe

Server Wrangler
Moderator
$30 more adds a second dual-NIC chip (not too useful unless considering ESXi and VT-d one chip to FreeNAS) and an M.2 slot (Meh?).
No, it adds two Intel i210 NICs that use two PCI-e lanes that aren't connected in the X11SSH-F. The M.2 slot is the same in both versions of the board.

(no real use today, especially if I go with Plex :rolleyes:, but for future 4K transcoding maybe?)
Don't count on it, unfortunately.

2x 16GB SATA DOM for OS (mirrored)
  • Or if even possible, could mirror a SATA DOM with an M.2?
Overkill. Stick to a single cheap SSD (or a single DOM, if the neatness factor is worth the extra cash to you).
As for mirroring, you can mirror any devices you want, the question is always "does it make sense?". When my server experienced some trouble before I could bring the backup server online, I made a mirror with a 3TB WD Red and two 1TB USB HDDs to store some emergency backups.
 

Stux

MVP
3-way 1TB mirror?

Ericloewe

Server Wrangler
Moderator
3-way 1TB mirror?
I was panicking because my backups were out of date, anticipating the imminent onlining of the new server, so I grabbed the three drives I could spare at the time.

Fortunately, everything is now fixed.
 

scwst

Explorer
I have no rebuttal, other than I "want" to maintain a Linux machine. In a perfect world, I'd "want" that to be Red Hat, which is why I'm currently leaning towards Centos.
Just to point out a possible further option, Ubuntu 16.04 now has ZFS included - see https://wiki.ubuntu.com/ZFS . I have no idea how solid it runs or how well it plays with others (though my main computer is in fact running Xubuntu 16.04) but in theory you could upgrade the RAM on the backup machine and migrate those drives to ZFS. And then please let us know if it exploded in your face :).
 

Magius

Explorer
Just to point out a possible further option, Ubuntu 16.04 now has ZFS included.
Thank you for pointing that out. In this case I did mention in my OP that CentOS 7 was my 'first choice' and Ubuntu 16.04 was the other option under consideration. I didn't call out the reasons there, but the inclusion of ZFS in the kernel is the main reason I'd consider Ubuntu, along with the (hopefully) painless in-place upgrade from my current 12.04 (heck, I made it from 8.04 to 12.04 with no pain :)).

The reason I'm leaning Centos instead is that in my line of work we're only allowed to use one flavor of Linux, which is Red Hat. I learned Ubuntu back in '08 to build the server, and other than the occasional drive replacement I never have to do anything with it, so the skills rust. If you were to put me on the spot right now, for example, I honestly couldn't tell you how to do something as simple as changing the IP address on the Ubuntu server. I know I'd have to edit a config file and 'ifdown / ifup', but where the heck is that config file? I'd have to Google it. On the other hand, because I occasionally touch Red Hat at work, I know that the config file would be /etc/sysconfig/network-scripts/ifcfg-eth0.
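
Just for comparison's sake (from memory, so treat the exact contents as approximate):

    # Ubuntu (12.04-era): /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1

    # RHEL/CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    ONBOOT=yes

    # then on either distro: ifdown eth0 && ifup eth0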

I figure since I'm a storage geek, not especially a Linux geek, it doesn't make sense trying to keep multiple flavors of Linux in my head just for the 'fun' of it. Going with Centos at home would ease that burden plus give me the occasion to 'keep sharp' or experiment with things that might later apply to my work. Thanks again though for the suggestion!
 

Magius

Explorer
No, it adds two Intel i210 NICs that use two PCI-e lanes that aren't connected in the X11SSH-F. The M.2 slot is the same in both versions of the board.
Actually I said the X11SSM-F, which is the version with 4 PCIe slots, no M.2, and just 2 Intel NICs. You're right that the X11SSH-F removes a PCIe slot from that, trading it for an M.2, and "wasting" 2 lanes. Then the X11SSH-LN4F uses the two wasted lanes to add 2 more Intel NICs, like you said. I see no value in the in-between SSH model, the SSM and SSH-LN4F each seem clearly superior to me, unless there's something else I overlooked.

Don't count on it ... [QS being useful for future 4K transcoding]
My understanding is that in the Skylake architecture, QS added support for 4K transcoding of H.265(HEVC) and VP9, so while SW like Plex doesn't take advantage of QS today, it seems like it could be a decent thing to have with 4K bluray coming, which you might want to transcode to a 720p/1080p client stream. Not knowing what's coming or what media software I'll be running over the ~10 year service life of this server, I'm the type to pay a little up front and make sure I have support for the feature. I'll concede that in the end you're probably right though.

Overkill. Stick to a single cheap SSD (or a single DOM, if the neatness factor is worth the extra cash to you). As for mirroring, you can mirror any devices you want, the question is always "does it make sense?".
Agree DOM/M.2 is overkill, heck most people get by with USB sticks for boot. Since I have no spare bays in my chassis, the appeal of DOM/M.2 (or USB) is that they attach directly to the Mobo, so I don't have to velcro them to the chassis wall :) 16GB is so cheap that mirroring two seems worthwhile to save me the eventual headache if one fails. As far as mirroring DOM to M.2, I suspect that it would work fine, but I wouldn't go as far as saying you can mirror 'anything you want'. There were problems in the early days of M.SATA, M.2, etc. where you couldn't even boot from the things in many boards, forget trying to mirror them together and then boot. Similarly, some of the newest boards are PCIe to the drive instead of SATA, and I don't know if you can mirror a PCIe disk to a SATA disk and then boot from it. I've simply never built that configuration to try it, maybe it works great. Either way, I have lots of options for my boot device, and if I end up with the board w/ an M.2 slot, that's just one more option I could play with.
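
For what it's worth, if mirrored boot does pan out, the underlying ZFS side should be simple (FreeNAS names its boot pool 'freenas-boot'; the device/partition names below are guesses, and the GUI normally handles the partitioning and bootcode for you):

    zpool status freenas-boot                # note the existing boot device, e.g. da0p2
    zpool attach freenas-boot da0p2 da1p2    # turns the single boot device into a mirror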

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Actually I said the X11SSM-F
Bah, coffee hadn't kicked in yet...
My understanding is that in the Skylake architecture, QS added support for 4K transcoding of H.265(HEVC) and VP9, so while SW like Plex doesn't take advantage of QS today, it seems like it could be a decent thing to have with 4K bluray coming, which you might want to transcode to a 720p/1080p client stream. Not knowing what's coming or what media software I'll be running over the ~10 year service life of this server, I'm the type to pay a little up front and make sure I have support for the feature. I'll concede that in the end you're probably right though.
The problem is FreeBSD support for the GPU, including acceleration stuff, which is currently nonexistent, with no signs of changing.
There were problems in the early days of M.SATA, M.2, etc. where you couldn't even boot from the things in many boards, forget trying to mirror them together and then boot.
SATA has been implemented over many sockets, and none of them make a difference. AHCI is AHCI is AHCI.
Some of those sockets (M.2 being the prime example) can also provide PCI-e connectivity in addition to/instead of SATA. If the device uses AHCI, any remotely modern BIOS will boot from it. If it uses NVMe, you need a reasonably modern BIOS.

Similarly, some of the newest boards are PCIe to the drive instead of SATA, and I don't know if you can mirror a PCIe disk to a SATA disk and then boot from it.
The first boot stage is always from a single device, unless someone customizes a BIOS to include GRUB in firmware. Once GRUB takes over, ZFS' features and restrictions apply (with some exceptions to keep things simple).
 

Magius

Explorer
The problem is FreeBSD support for the GPU, including acceleration stuff, which is currently nonexistent, with no signs of changing.
Well darn... It figures among everything I've been digesting to plan this build, there were bound to be some important nuggets like this that I completely overlooked... I appreciate you clarifying that! Now the disappointment sets in :)
 

Magius

Explorer
I spent the morning updating some network diagrams to plan out my new network/server deployment, and it hit me that if I were to go with FreeNAS on ESXi, I can only VT-d a maximum of 16 drives, or else buy a third M1015. The original thought was to buy the better mobo w/ 4 NICs, then VT-d two NICs and two SAS adapters to keep FreeNAS as close to a physical machine as possible. Now I'm seeing that wouldn't give me enough ports to fill my 20 bays (duh! :mad:), nevermind adding any SLOG or L2ARC down the road. The board does have three PCIe slots, so it's doable to VT-d 24 ports, but I'm starting to lean on the hope that FN-10 on bare metal will be able to host a Windows VM for IP Camera software so I can avoid the complication of ESXi...

I've also gotten most of my options stood up as VMs in my "home lab", just to gain familiarity. Right now I have FN9.10, FN10-BETA, Centos7.2 and ESXi6 all running as VMs under Workstation 10. Then under ESXi6 I have FN9.10 running to simulate that environment. Now I need to get a Windows VM stood up under ESXi, FN10-BETA (bhyve) and Centos (KVM), so I can compare all the options. Should be lots of fun over the next couple months as I beat on these things, then I'll get to buy hardware and try it all out again!
 

Stux

MVP
You could use a SAS expander and just one HBA. SAS expanders don't use PCIe slots, even if they look like they do.
 

Magius

Explorer
You could use a sas expander and just one HBA.
You're right of course. Most of the chassis I work with have SAS expanders built in, but not the cheap chassis I'm using at home. I'd never really considered adding an expander to a cheap chassis for home use. Mainly because decent SAS expanders used to cost much more than the extra HBAs it would replace, sometimes even more than the chassis itself... I also have reservations about expanders because it's one more link in the chain which can introduce incompatibilities, timing issues, etc.

While 99% of the time I've had no trouble at all with expanders, I've had at least two instances where the combination of controller + expander + drive model led to some odd timing glitches, causing drives to randomly be dropped from the RAID set. The three components all worked individually, but the interaction of the three of them together was intermittently glitchy. And this was on expensive equipment, dual-redundant controllers, >$25k chassis, in an allegedly tested and supported configuration, not something homebrewed off Newegg. ;)

All that said, I did a quick Google search, and it seems there's a popular HP expander that's about the same cost as an M1015 and known to be compatible. This one does in fact use a PCIe slot for power (no Molex), but that's no problem since it would replace one or more M1015s. This does pique my interest a bit, and for the marginal cost I'll probably just buy one and test it when I get around to ordering hardware. I could start with a 1015 and the expander, and if I'm not satisfied just buy another 1015 or two and re-sell or re-purpose the HP. Thanks for the tip!
 

Ericloewe

Server Wrangler
Moderator
Intel's expander is probably not too much more expensive and does take molex power, if you're interested.
 

Stux

MVP
You're right of course. Most of the chassis I work with have SAS expanders built in, but not the cheap chassis I'm using at home. I'd never really considered adding an expander to a cheap chassis for home use. Mainly because decent SAS expanders used to cost much more than the extra HBAs it would replace, sometimes even more than the chassis itself... I also have reservations about expanders because it's one more link in the chain which can introduce incompatibilities, timing issues, etc.

While 99% of the time I've had no trouble at all with expanders, I've had at least two instances where the combination of controller + expander + drive model led to some odd timing glitches, causing drives to randomly be dropped from the RAID set. The three components all worked individually, but the interaction of the three of them together was intermittently glitchy. And this was on expensive equipment, dual-redundant controllers, >$25k chassis, in an allegedly tested and supported configuration, not something homebrewed off Newegg. ;)

All that said, I did a quick Google search, and it seems there's a popular HP expander that's about the same cost as an M1015 and known to be compatible. This one does in fact use a PCIe slot for power (no Molex), but that's no problem since it would replace one or more M1015s. This does pique my interest a bit, and for the marginal cost I'll probably just buy one and test it when I get around to ordering hardware. I could start with a 1015 and the expander, and if I'm not satisfied just buy another 1015 or two and re-sell or re-purpose the HP. Thanks for the tip!

Also, you can get PCIe extension things which can be used to power the expanders without actually using up a slot
 

Magius

Explorer
Also, you can get PCIe extension things which can be used to power the expanders without actually using up a slot
I came across those shortly after posting, but thank you for mentioning them. Looks like they're used for mining cryptocurrencies, so you can cram more GPUs into a box without using slots on the motherboard. (Hint: that's what you want to search for - "pcie mining card" - if anyone stumbles into this thread and has no idea what we're talking about). Pretty neat, and ~$10. I'll have to add that to my list of things to play with. Seems like my design concept is becoming more of a Frankenstein's monster by the minute :)
 

Magius

Explorer
Intel's expander is probably not too much more expensive and does take molex power, if you're interested.
Generic searching for "Intel SAS Expander" is bringing up cards costing ~$275 or more. Is there a cheaper one that I'm missing, or are these pretty common on Ebay for significantly less? (sorry Ebay is blocked by my proxy so I can't check now myself)

The HP SAS expander looked to be in the ballpark of $60-$70 on Server Supply, and $90-$120 on Amazon, which puts it in the same range as an M1015, which it would replace. At that point it's break-even cost-wise, and you get more SATA ports. With one of those ~$10 mining boards you even save a PCIe slot, like you could with the Intel card's Molex power. All in all, a pretty good-sounding deal!
 

Ericloewe

Server Wrangler
Moderator
The HP SAS expander looked to be in the ballpark of $60-$70 on Server Supply, and $90-$120 on Amazon
That's actually rather cheap...
 