Atom D525 FreeNAS replication target


Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I acquired some Supermicro X7SPE-HF-D525 1U systems a few years ago for pfSense use, and they've been sitting around ever since. I recently stumbled across a thread where someone had installed 8GB of memory in one and it worked (the specs from both Supermicro and Intel say the D525 only supports 4GB). I was skeptical, but went ahead and bought some memory to try it, and sure enough, all 8GB is usable. Memtest86+ passes cleanly, and stress was able to load all 8GB of memory.

Now that I had 8GB of memory, my mind jumped to the possibility of using FreeNAS on this thing. Of course I realize that this is not ECC memory, but I'm thinking I would use this system as a third backup system (in addition to an external USB drive and a cloud backup) that would actually give me the ability to use ZFS replication (something I've always wanted to try), so I'm not super worried about the data risks. My bigger concern is the D525: I know it's a super weak processor, and I'm not sure it could even manage the basic task of being a replication target.
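
For concreteness, the replication flow I have in mind is just incremental snapshots pushed to the D525 box with zfs send piped into zfs receive over SSH. In practice I'd let a FreeNAS replication task drive this, so the sketch below is only to illustrate the idea; the dataset, snapshot, and host names are made up.

Code:
#!/usr/bin/env python3
"""Rough sketch of push-style incremental replication (all names are placeholders)."""
import subprocess

SOURCE = "tank/data"           # dataset on the main FreeNAS box (hypothetical)
TARGET = "backup/data"         # dataset on the D525 replication target (hypothetical)
HOST = "d525-backup.local"     # hypothetical hostname of the D525 box

def replicate(prev_snap: str, new_snap: str) -> None:
    """Send the delta between two snapshots to the backup box over SSH."""
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{SOURCE}@{prev_snap}", f"{SOURCE}@{new_snap}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", HOST, "zfs", "receive", "-F", TARGET],
        stdin=send.stdout,
        check=True,
    )
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate("auto-2016-08-01", "auto-2016-08-02")   # example snapshot names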

This is where the community comes in: does anyone have experience running a similarly low-horsepower system with FreeNAS/ZFS? I don't have hard drives for it yet (I'm thinking 3x 4TB in RAIDZ1), and before I spend any money on this, I'd love some validation that the idea might work. If no one has any experience, I'll probably go ahead and buy the drives anyway and try it out.

Any thoughts/comments/ideas?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
8GB is barely enough for a stable FreeNAS system, and replication, a process that requires a bunch of metadata retrieval, is going to be slow on it.

The normal suggestion for a system with 12TB of raw disks would be at least 12GB of RAM, and any shortfall will come out as a performance hit. If you're actually intending to load that up with 6TB or more of data on a constant replication schedule, be aware that your initial performance is probably the best that it will ever be, and that performance will degrade until it hits some sort of steady state that might kind of suck.
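
To put rough numbers on that rule of thumb (back-of-the-envelope only, nothing exact about it):

Code:
# Rough reading of the sizing suggestion above: about 1 GB of RAM per TB of
# raw disk, with ~8 GB as the practical floor for FreeNAS itself.
raw_tb = 3 * 4                       # 3x 4TB drives = 12 TB raw
suggested_ram_gb = max(8, raw_tb)    # "at least 12GB of RAM" for this pool
installed_ram_gb = 8                 # all the D525 board will take
print(f"suggested >= {suggested_ram_gb} GB, installed {installed_ram_gb} GB, "
      f"short by {suggested_ram_gb - installed_ram_gb} GB")   # short by 4 GB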

That having been said, it is worthwhile to remember that if your purpose is just to have a copy of the data in case of disaster, it doesn't really matter if you're tormenting some box 24 hours a day 7 days a week, as long as it isn't falling behind too far. If being slow is okay, and it keeps up, and you understand that there's some risk associated with non-ECC memory, then you're all good to give it a shot and see what happens.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm OK with slow, I'm just not OK with too slow, if that makes sense. In a perfect world, I would set up this server, replicate my daily snapshots (about 4TB right now), and as long as they were done before the next day's replication, that'd be perfect. I don't anticipate ever serving data directly from this server, except perhaps to recover my main FreeNAS config to rebuild it if/when the main server goes down.

I did some testing last night, and with an SSD, I was able to get about 70 MB/s write speed. Assuming with spinning rust that I'm still able to hit 35 MB/s, I've calculated that the initial replication will probably go over the 24-hour time budget, but the daily updates should have no problem (the deltas are around 100GB/month).
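
Here's roughly how I penciled that out (decimal units to keep the arithmetic simple):

Code:
# Back-of-the-envelope time budget, assuming ~35 MB/s sustained to spinning rust.
rate_mb_s = 35
initial_tb = 4                 # first full replication: ~4 TB of snapshots
monthly_delta_gb = 100         # ~100 GB of changes per month
daily_delta_gb = monthly_delta_gb / 30

initial_hours = initial_tb * 1e6 / rate_mb_s / 3600
daily_minutes = daily_delta_gb * 1e3 / rate_mb_s / 60

print(f"initial sync: ~{initial_hours:.0f} hours")    # ~32 hours, over the 24 h budget
print(f"daily delta:  ~{daily_minutes:.1f} minutes")  # a minute or two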

I'll probably go ahead and get the drives. I'll do some benchmarks and let you know what I find.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The specific problem I'm referring to is that fragmentation on the pool increases over time, and while you might be thinking "well, that just means a seek or two," it can actually mean substantial searching of the free space, prowling through metadata and doing lots of seeks.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm following you on the fragmentation issue. Based on my understanding of ZFS, if I can keep the pool utilization around 50%, I should not really run into performance issues because of fragmentation, and that was my design goal with the 3x 4TB drives.

Is there an easy metric to look at for the extra seeking, or is it something that has to be inferred from the fragmentation percentage? I'm thinking that if I can add it to my Icinga monitoring system, it should be easy for me to keep an eye on whether performance tanks too much.
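
For reference, the sort of check I have in mind is a small Icinga plugin that scrapes zpool list; a rough sketch (the pool name and thresholds are just guesses on my part):

Code:
#!/usr/bin/env python3
"""Sketch of an Icinga/Nagios-style check for pool fragmentation and capacity.
Pool name and thresholds are placeholders, not recommendations."""
import subprocess
import sys

POOL = "backup"                    # hypothetical pool name on the D525 box
WARN_FRAG, CRIT_FRAG = 30, 60      # percent (free-space fragmentation)
WARN_CAP, CRIT_CAP = 50, 80        # percent pool occupancy

fields = subprocess.check_output(
    ["zpool", "list", "-H", "-o", "fragmentation,capacity", POOL],
    text=True,
).split()
frag = int(fields[0].rstrip("%"))  # may be "-" on old pools; not handled here
cap = int(fields[1].rstrip("%"))

msg = f"{POOL}: fragmentation {frag}%, capacity {cap}%"
if frag >= CRIT_FRAG or cap >= CRIT_CAP:
    print("CRITICAL - " + msg)
    sys.exit(2)
if frag >= WARN_FRAG or cap >= WARN_CAP:
    print("WARNING - " + msg)
    sys.exit(1)
print("OK - " + msg)
sys.exit(0)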
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Nick2253 said:
I'm following you on the fragmentation issue. Based on my understanding of ZFS, if I can keep the pool utilization around 50%, I should not really run into performance issues because of fragmentation, and that was my design goal with the 3x 4TB drives.

Is there an easy metric to look at for the extra seeking, or is it something that has to be inferred from the fragmentation percentage? I'm thinking that if I can add it to my Icinga monitoring system, it should be easy for me to keep an eye on whether performance tanks too much.

No, the fragmentation "percentage" is sort of an inferred metric that doesn't mean what you might expect it to mean: it refers to the fragmentation of the free space!

https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationMeaning

Because replication will continuously write data to the pool, replacing old data, over time your pool will become excessively fragmented. When it has lots of free space, ZFS tends to write all pending data into contiguous free-space regions, without seeks, which is part of why ZFS seems to have SSD-like write speeds. Things like mid-file block rewrites (databases, VM virtual disks) cause fragmentation of the file, but there isn't a good way to measure that other than to note the read speed of the file. If you read a file with lots of mid-file rewrites that has to come off the pool, you'll see the lower performance as the system seeks around for it. The design intent of ZFS is to mitigate those seeks with ARC and L2ARC, which you don't have much, if any, of.
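
About the only crude way to put a number on that per-file effect is to time a long sequential read of an uncached file and watch the throughput; a rough sketch:

Code:
#!/usr/bin/env python3
"""Crude gauge of per-file fragmentation: time a sequential read and report
throughput. Only meaningful if the file isn't already sitting in ARC."""
import sys
import time

CHUNK = 1 << 20   # read in 1 MiB chunks

def sequential_read_mb_s(path: str) -> float:
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    return total / (time.monotonic() - start) / 1e6

if __name__ == "__main__":
    print(f"{sequential_read_mb_s(sys.argv[1]):.1f} MB/s")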

More worrying is that ZFS will have to fragment NEW writes of long sequential data if there isn't a lot of free space available, which sucks. Once this starts happening in earnest, the game is over and you're pretty screwed; you have to free up lots of space on the pool to have a chance at correcting it. Once ZFS can again do contiguous allocation for new writes of long sequential data, pool speeds improve, but that probably requires getting back down to somewhere around 15-35% pool occupancy. Keeping a 50% limit on pool occupancy helps slow the progress of fragmentation, but probably doesn't halt it in the long run.

If you look at this:

[Attached image: delphix-small.png - steady-state pool performance vs. pool occupancy]


this is the performance observed on a pool once it has reached its "fully fragmented" state. You should see that the emptier the pool, the better it is, and that by the time you get to 50% occupancy, things are performing kinda poorly. Again, this is AFTER it has reached a highly fragmented state; performance before that is likely to be substantially better. But you're also in an unhappy position for RAM, so performance will be reduced by that as well.

If none of this sounds particularly happy or ideal, then you get the point.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
It's there. If you're on Chrome, it's kinda ridiculous with caching and does weird things with web pages.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm getting a broken image icon. And I hate the web, come to think of it I hate you all too, and I'm evil, so why should I care! Ha!

Anyways, stupid frickin' build just broke on FreeBSD 11, damn it, now I'm really APO.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The image loads fine for me from your server, on whatever the latest Firefox is.
 