December 2017: current state of Marvell 88SE9230 compatibility?

Status
Not open for further replies.

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
I do not mean to double post, but I find it hard to find conclusive information about the current FreeNAS compatibility of the Marvell 88SE9230 and the severity/impact of any incompatibility it might still have. What I can find is either old or contradictory. It seems the controller is somewhat AHCI-compatible, but with quirks, and at least a couple of years ago the FreeBSD AHCI driver did not yet address those quirks. The controller is not mentioned in the FreeBSD hardware compatibility documentation. On the forums I see builds of people that apparently use it with no problems, as well as issues like this.

So what I am wondering: has anything changed lately with respect to the compatibility of these controllers? Can anyone perhaps shed some light onto this or point me somewhere where I can find more conclusive information about this?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
From my perspective, these controllers generally work without issue for FreeNAS. Some people do have an occasional problem that appears to come from the controller, but without further troubleshooting it's difficult to know with 100% certainty. I prefer true Intel SATA controllers; however, I am using a pair of Marvell controllers and I do recommend these, as they are fully supported by FreeBSD/FreeNAS. If you are by chance looking at an ASRock motherboard, I'd steer clear of those for a while. I have not been impressed by the few design failures they had recently. I think they did fix the issue, but it's your gamble.

But you are correct, it can be difficult to figure out what hardware is compatible or not.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I would ask: why do you ask? If you need more ports to connect drives, the best course of action is a SAS controller, which is well supported and reliable. Some people use the Marvell cards and have no trouble, but it really depends on how you plan to use it.
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
I ask because this controller is onboard on the HPE Gen10 MicroServers. Also see here: the compatibility of this controller is my last remaining concern about whether or not I should get this box. I plan on putting 3x WD Red 2TB into it in RAIDZ1.

It would be my first NAS ever and I need an entry-level machine that keeps my data safe for an affordable price. It's for storing documents and photos in a two person household, not much more. The stuff in the hardware recommendation list is more expensive than what I am willing to spend at this point.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
From what I read, there is a workaround for the Opteron CPU, so it should be fine. This is the same chipset as the SATA controller in several other systems that have been used successfully. It might not be perfect, but it appears to be good enough.
As for your plan, I would suggest using 4 drives in RAIDZ2. It will not give you any more storage, but it will let you suffer a drive fault without losing redundancy. Also, realize that you will only have a little more than 3TB of usable storage: ZFS stores parity and metadata alongside your data, so you can't use all the raw space for your files. How much storage do you need?
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
Well, thanks, it's nice to hear that it should work!

As for drives: someone else suggested something similar, a striped mirror, for extra redundancy and better performance. It is something I will need to think about. My initial idea was to be able to expand the ZFS pool with an additional drive later if I need more space, as this is a feature that has recently been added to ZFS and which I trust will find its way into FreeNAS at some point. Furthermore, an additional drive incurs additional cost, and having ZFS and redundancy at all is already a massive improvement over my current situation, where I just have a Windows 10 PC with a hard disk that is backed up to an external drive and to iDrive. Those are backup options that I intend to continue using once I have a NAS, of course. And perhaps I'll throw in additional cloud backups using the Azure credits that come with my MSDN subscription.

As for needed space: I do not make videos as a hobby, and our media library consists of DVDs and Blu-ray discs sitting on a bookshelf. I just checked the size of the files we would place on the NAS, and currently this amounts to 614GiB, which can probably be reduced quite a bit with some proper cleaning. Photos from holiday or zoo trips will probably form the majority of the data added to this, plus biweekly recordings of sermons in our church (~300MiB each). Somewhere in the future I might want to use the machine as a Git server for some coding projects at home, but I do not think that is likely to happen, as I already spend a lot of time coding on the job.

The ZFS calculator that I found says I get about 3.5TiB worth of storage with my plan. Frankly I think we will struggle to fill this amount of storage. Or are there any obvious use cases that I forgot?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
My initial idea was to be able to expand the ZFS pool with an additional drive later if I need more space, as this is a feature that recently has been added to ZFS and of which I trust that it will find its way to FreeNAS at some point.
That feature has not been added to ZFS yet. A presentation was made within the last couple of months outlining the plan to add it, and the goal is that proof-of-concept code will be shown/demonstrated at next year's conference. I'd be surprised if FreeNAS has this ability within two years. You might want to consider two larger disks (perhaps 4 TB) mirrored, and if you need more space you can buy another pair and add them to the pool.
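The mirror-then-add-a-pair approach danb35 describes can be sketched with plain zpool commands. The pool name `tank` and the device names `ada0`..`ada3` are examples only; on FreeNAS you would normally do all of this through the GUI:

```shell
# Create a pool from a single mirrored vdev of two 4 TB disks.
zpool create tank mirror ada0 ada1

# Later, when more space is needed, stripe in a second mirrored pair.
# Note: vdevs cannot be removed again afterwards, so double-check first.
zpool add tank mirror ada2 ada3

# Confirm the layout: two mirror vdevs striped together.
zpool status tank
```

The pool's capacity grows by the size of the new mirror as soon as the `zpool add` completes; no resilver of existing data is needed.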
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The ZFS calculator that I found says I get about 3.5TiB worth of storage with my plan. Frankly I think we will struggle to fill this amount of storage. Or are there any obvious use cases that I forgot?
The calculator you used is fine, but you didn't take into account that you should not fill the pool to 100%. The usable storage is only about 2.7TB, or 80% of capacity. Because of the way the copy-on-write filesystem works, performance really tanks when the pool reaches 90% full, so you get a warning at 80% to give you time to expand the pool before it hits that soft limit at 90%. Anyhow, I guess it isn't a real issue with the small amount of data you have to store.
You might want to take the advice @danb35 gave and use a mirror of two drives, then you can add a second mirror if you see that you need more capacity.
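The rough arithmetic behind these numbers can be checked in one line. Note the unit mismatch that trips people up: drives are sold in decimal terabytes while ZFS reports binary TiB, and the 80% factor is the usual rule-of-thumb fill limit, not a hard ZFS constant:

```shell
# 3 x 2 TB RAIDZ1: one drive's worth of parity, so two drives of data.
# Convert decimal TB to binary TiB, then take 80% as the practical ceiling.
awk 'BEGIN {
  data_drives = 2                 # 3 disks minus 1 parity
  bytes = data_drives * 2e12      # 2 TB drives, decimal bytes
  tib = bytes / 2^40              # convert to binary TiB
  printf "raw %.2f TiB, practical %.2f TiB\n", tib, tib * 0.8
}'
# prints: raw 3.64 TiB, practical 2.91 TiB
```

This ignores ZFS metadata overhead, which is why real-world usable space lands a bit below the computed figure.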
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
I think that the bottom line of what you are telling me is that, given the current features of ZFS, using 3 drives in a 4 bay system will basically leave me with an empty bay that will never be really useful for anything. Good point, I will need to think about this.
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
I have done some further reading, especially about striped mirrors. If I understand correctly, this boils down to having mirrored vdevs (2 drives each) which are then put together to form a pool (in a "RAID0" fashion), in my case two vdevs. From what I understand, this setup can suffer one drive failure per vdev and does not require any parity calculations. Also, resilvering appears to be a lot quicker and less stressful for the remaining drives compared to RAIDZx. And lastly, the vdevs need not be the same size, so I could upgrade the pool two drives at a time rather than having to replace all drives at once with slow/stressful resilvers. Isn't this basically the kind of setup @danb35 is suggesting, only postponing the actual striping of data?
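The two-drives-at-a-time upgrade path works roughly as sketched below. Pool and device names are illustrative, and FreeNAS would normally drive this from the GUI rather than the shell:

```shell
# Let the pool grow automatically once every disk in a vdev is enlarged.
zpool set autoexpand=on tank

# Replace each disk of one mirror vdev in turn with a larger one,
# waiting for the resilver to finish between replacements.
zpool replace tank ada0 ada4   # ada4 is the new, larger disk
zpool status tank              # wait here until the resilver completes
zpool replace tank ada1 ada5

# With both members replaced, that vdev (and the pool) gains capacity.
zpool list tank
```

Because only the one mirror resilvers, the other vdev's drives are untouched during the whole upgrade.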

So, two things for me to find out/make a decision about:
  1. Does the ability to suffer the failure of any two drives in RAIDZ2 offset the advantages of striped mirrors?
  2. 2x4TB disks are about 25% cheaper for me than 4x2TB. Striping would perform better (although 1Gbit ethernet is probably the bottleneck), but I am not yet sure whether the risk of losing the pool differs between one mirrored vdev and striped mirror vdevs.
I'll look into it, thanks for your input so far!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Isn't this basically the kind of setup @danb35 is suggesting, only with postponing the actual striping of data?
Pretty much.
Does the ability to suffer the failure of any two drives in RAIDZ2 offset the advantages of striped mirrors?
That is an advantage of RAIDZ2. Whether it's enough to outweigh the noted advantages of striped mirrors is something you'd have to decide, but from what you've posted, I'm thinking it wouldn't.
Striping would perform better (although, 1Gbit ethernet is probably the bottleneck)
The network would definitely be the bottleneck.
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
Well, I finally got around to playing around a bit with FreeNAS in a VM. In my opinion it is very straightforward; I'm pretty much sold on it :) .

One thing that bothers me a bit about the MicroServer, however, is that its bootup problems somehow relate to ACPI, which in turn makes me wonder what to expect in terms of power consumption.

And lastly: is there any reason not to use encryption? I mean, given that you really need to take care not to lose your key, passphrase or recovery key, of course. I take it that the key is stored in the configuration database? My main concern is to protect my personal files in the event of theft of my NAS.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
My main concern for wanting to use it is to protect my personal files in the event of theft of my NAS.
There is nothing wrong with encrypting your hard drives provided you are fully aware of how to maintain your pool and the proper steps to replace a failed hard drive, which is where most people get hung up. Another option is to use a product like TrueCrypt or similar to encrypt just the stuff you need encrypted.

If you plan to use encrypted hard drives then I highly recommend that you test this out: build a pool, then fault it and replace a drive, and see if you have the process worked out. And remember that if a drive takes 3+ years to fail, you will still need to remember how to do it.
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
There is nothing wrong with encrypting your hard drives provided you are fully aware of how to maintain your pool and the proper steps to replace a failed hard drive, which is where most people get hung up. Another option is to use a product like TrueCrypt or similar to encrypt just the stuff you need encrypted.

If you plan to use encrypted hard drives then I highly recommend that you test this out: build a pool, then fault it and replace a drive, and see if you have the process worked out. And remember that if a drive takes 3+ years to fail, you will still need to remember how to do it.

I take it that you are referring to the bold-faced comments here? But I will try to replace a failed disk in my VM, I just need to find a way to mess up one of the VirtualBox disk images in Windows 10 :) .
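One way to "mess up" a disk in VirtualBox is simply to detach it while the VM is powered off, which FreeNAS sees as a missing drive. The VM name "freenas-test" and the controller name "SATA" below are examples; check yours with `VBoxManage showvminfo`:

```shell
# Detach the virtual disk on SATA port 2 to simulate a dead drive
# (run with the VM powered off; names are examples).
VBoxManage storageattach "freenas-test" --storagectl "SATA" \
  --port 2 --medium none

# Boot FreeNAS: the pool should show as DEGRADED in zpool status.
# Then create and attach a fresh image as the "replacement drive"
# and practice the replace procedure from the FreeNAS GUI.
VBoxManage createmedium disk --filename replacement.vdi --size 8192
VBoxManage storageattach "freenas-test" --storagectl "SATA" \
  --port 2 --type hdd --medium replacement.vdi
```

This rehearses the same offline/replace/resilver sequence you would follow on real hardware, minus the physical swap.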

But as for knowing how to maintain a pool... I have seen that FreeNAS does weekly scrubs, and I did a rather uneventful manual one. I see that I can configure automatic snapshots. I tried using snapshots: I got deleted files back after rolling back to a snapshot, then destroyed the snapshot ("destroy" sounds prohibitively scary) and the files were still there. I do want to practice this some more, see what happens after rolling back a couple of snapshots, and try out cloning them.
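That snapshot workflow maps onto a handful of zfs commands; the dataset name `tank/docs` is an example:

```shell
# Take a snapshot before experimenting.
zfs snapshot tank/docs@before-cleanup

# ...delete or change some files, then roll the dataset back:
zfs rollback tank/docs@before-cleanup

# A clone gives a writable copy of the snapshot without rolling back:
zfs clone tank/docs@before-cleanup tank/docs-experiment

# "destroy" only removes the snapshot itself, never the live files.
# It will refuse while the clone above still depends on the snapshot,
# so remove the clone first.
zfs destroy tank/docs-experiment
zfs destroy tank/docs@before-cleanup
zfs list -t snapshot
```

The scary-sounding `zfs destroy` on a snapshot just releases the blocks that only that snapshot referenced; the current contents of the dataset stay put.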

I have read about expanding pools with new vdevs. I understand that I need to keep track of disk usage to plan upgrades early. I understand that I can import my pool into another system and that I can use that to get going after reinstalling from scratch. I have seen that I can export and import FreeNAS' configuration. I also understand that I should keep an eye on the disks' SMART values to notice any problem early on.

I also have backups planned: rclone to Backblaze B2 and Azure Blobs, and I want to try sending ZFS snapshots to an external drive and test whether I can mount that drive in a VM. And I have seen that I can enable logs, which sounds like a sensible plan.
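Sending snapshots to a pool on the external drive could look like the sketch below. The `backup` pool, dataset names, and the rclone remote/bucket names are all assumptions for illustration, and the rclone remote must be configured beforehand with `rclone config`:

```shell
# One-off full copy of a snapshot to a pool on the external drive.
zfs snapshot tank/docs@2018-01-01
zfs send tank/docs@2018-01-01 | zfs recv backup/docs

# Later, send only the changes since the last transferred snapshot.
zfs snapshot tank/docs@2018-02-01
zfs send -i tank/docs@2018-01-01 tank/docs@2018-02-01 | zfs recv backup/docs

# Cloud copy of the same data with rclone.
rclone sync /mnt/tank/docs b2:my-bucket/docs
```

The incremental `zfs send -i` only ships blocks changed between the two snapshots, so routine backup runs stay small.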

So... am I missing anything important? In all honesty, most of it looks like just keeping one's cool and sticking to the well documented procedures. Failing at the former is probably the cause of most of the problems that people run into...?
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
I've been running a 3x2TB RaidZ1 pool for a very long time. I had my first disk failure about a year ago. Replacing the failed drive was easy and didn't really take very long. I think that, with 2TB drives, this configuration is OK. Just keep in mind that, when one drive fails, you have no redundancy so you should be diligent about maintaining your system and move quickly to replace a drive when it fails.

I am in the process of upgrading my system and will build a 4 x 4TB RaidZ2 pool. Larger hard disks take longer to resilver, so I decided that the extra redundancy was worthwhile. I chose a 4 x 4TB RaidZ2 over striped mirrors because, with RaidZ2, any two drives can fail.

HOWEVER... remember that redundancy is not a substitute for keeping good backups. I keep good backups, so with 2TB drives, RaidZ1 was acceptable for my purposes. If the pool did fail, I could always restore my data from backup.
 
Last edited:

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
Well, I know that raid != backups. In fact, I have been thinking of the event in which I become incapacitated in such a way that I cannot maintain my NAS and backups anymore. My wife should then still be able to access things. This is a tough one.
 

julklappor

Cadet
Joined
Mar 29, 2018
Messages
2
We purchased two of the HPE MicroServer Gen10 "devices". Boot mode has been UEFI, which might not be optimal for FreeNAS. We used four 10-terabyte WD Red Pro disks, a hefty 40 TB in total. The layout is two mirrored (RAID1) vdevs, with one zvol on each.
In hindsight, far from everything works perfectly.
We have experienced two major things:
- Abysmal write speed (~30 Mbytes/sec) when writing randomly to the disks; sequential writes are 100+ Mbytes/sec.
- Sudden reboots of the system. And I mean totally sudden, with nothing logged anywhere.
FreeNAS 11.1-U1 is the latest we have tested; I may someday retry with an update to the latest release.
Oh, and in addition to those big disks, we also tested with WD Red 2-terabyte disks. No change. And as we have two boxes, we have tried all this on both of them.
Later we also installed and pretty extensively tested CentOS 7 and Windows Storage Server 2016. Both are stable, with no sudden reboots. But the speed is roughly 30 Mbytes/sec total across all disks; apparently the controller saturates.

I totally cannot recommend the Gen10 "devices".
 

Dwarf Cavendish

Contributor
Joined
Dec 19, 2017
Messages
121
Overall I am quite satisfied with my box, as I demand very little of it in terms of performance and it gave me the opportunity to have ECC at a low price point. However, that price point has risen considerably in the last couple of months, so if I needed to buy NAS hardware again I would probably get something else.
 