Hi All,
I am not a FreeNAS user per se, as I have been using FreeBSD for 15+ years now - and yes, in case you are wondering, I started playing with FreeBSD when I was 5 (joke).
Anyway, I built a NAS on FreeBSD when ZFS was first released, a long time ago, and have managed to keep upgrading my zpool across different versions of FreeBSD.
I tested FreeNAS but was comfortable enough with FreeBSD that I almost felt frustrated by some limitations of the web GUI. Don't get me wrong, FreeNAS is great, but trailing the FreeBSD release cycle can be frustrating, especially when new FreeBSD features would be convenient.
So, I currently have the following box (built early 2013, currently running FreeBSD 10.0-RELEASE):
Norco RPC4224
Supermicro X9SRL-F
Xeon E5-2665 v1
128GB ECC
3 x M1015 (properly flashed)
6 x 3 TB
12 x 1TB (2.5'')
RAIDZ2 vdevs laid out so that I can lose any one of my 3 controllers and the pool keeps running as if nothing happened.
4 x 1Gb NICs in LACP to my switch
2 x 80 GB SSD for the system as a ZFS mirror
Many FreeBSD jails for various purposes
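To illustrate the controller-resilience claim above, here is a toy sanity check. The disk-to-controller mapping is an assumption (one 6-disk RAIDZ2 vdev per drive size, with each vdev's members spread 2-per-controller across the three M1015s); the point is simply that RAIDZ2 tolerates 2 failures per vdev, so 2 disks per controller per vdev is the maximum that survives a whole controller dying.

```python
# Hypothetical layout: three 6-disk RAIDZ2 vdevs, each vdev's members
# spread 2-per-controller across the three M1015s. Mapping is assumed,
# not the author's actual cabling.
PARITY = 2  # RAIDZ2 tolerates 2 failed disks per vdev

vdevs = [
    {"c0": 2, "c1": 2, "c2": 2},  # 6 x 3TB vdev: disks per controller
    {"c0": 2, "c1": 2, "c2": 2},  # 6 x 1TB vdev
    {"c0": 2, "c1": 2, "c2": 2},  # 6 x 1TB vdev
]

def pool_survives(dead_controller):
    # The pool survives if every vdev loses at most PARITY disks.
    return all(v.get(dead_controller, 0) <= PARITY for v in vdevs)

for ctrl in ("c0", "c1", "c2"):
    print(ctrl, "down ->", "pool OK" if pool_survives(ctrl) else "pool LOST")
```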
I have another, older server - a DL380 G5 - 20GB RAM - dual Xeon E5x00 (can't remember the exact CPU) - with 8 x 1TB 2.5'' drives in a ZFS pool as well - plain RAIDZ1, if I remember correctly.
I have other gear, but it's not relevant to this topic.
As HDDs get bigger, I would love to repurpose my compute power (the E5 and the memory) for playing with OpenStack and/or ESXi at home.
I am also interested in a more compact solution and lower power consumption.
I have done the math for running 24/7/365, and there is no way I will actually save anything (electricity-wise) by buying something new ;-) It's more about getting something compact and dedicated.
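For anyone curious how that math works out, here is a back-of-the-envelope sketch. The wattages and the per-kWh price are assumptions for illustration, not measurements of my boxes:

```python
# Rough 24/7 running-cost comparison. All numbers are assumptions.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # assumed electricity price, adjust for your locale

def yearly_cost(watts):
    """Cost of running a constant load for one year."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

old_box = yearly_cost(250)  # assumed draw of the X9SRL-F build
new_box = yearly_cost(60)   # assumed draw of a C2750D4I build
print(f"old: {old_box:.0f}/yr  new: {new_box:.0f}/yr  saved: {old_box - new_box:.0f}/yr")
```

Even a generous delta like this takes years to pay back the price of two new boards plus RAM, which is why the motivation is compactness, not savings.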
So here is what I am considering -
Main system:
ASRock C2750D4I
64GB ECC
[Reuse the 6 x 3TB from my current system, add the 2 x 3TB drives I have as spares, and connect them all straight to the mobo] = 8 x 3TB = 24TB raw -> 18TB usable in RAIDZ2
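The 18TB figure above comes from RAIDZ2 reserving two disks' worth of space for parity. A minimal helper to sanity-check such layouts (it ignores padding, metadata, and the usual keep-it-under-80%-full guideline):

```python
def raidz_usable_tb(n_disks, tb_per_disk, parity=2):
    """Rough usable space of a single RAIDZ vdev: raw minus parity disks.
    Ignores metadata overhead and fill-rate guidelines."""
    return (n_disks - parity) * tb_per_disk

print(raidz_usable_tb(8, 3))  # 8 x 3TB RAIDZ2 -> 18
```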
Backup system - coming up online once a week or something like that:
ASRock C2750D4I
32GB ECC
[Reuse the 12 x 1TB from my current main system plus 4 from the DL380, connected as follows: 8 on an IBM M1015 and 8 on the mobo] = 16 x 1TB = 16TB raw
Or would you keep a mix of 3TB and 1TB drives so that Main and Backup end up with similar storage space?
Something like [4 x 3TB + 10 x 1TB] in each system (I have 20 x 1TB 2.5'' drives in total)
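To compare the two options on usable space, here is a quick sketch. The vdev splits are my assumptions (two 8-disk RAIDZ2 vdevs for the 16 x 1TB backup rather than one very wide vdev; and for the mixed option, a 4 x 3TB RAIDZ1 plus a 10 x 1TB RAIDZ2 per box, since mixed-size disks in one vdev would all be treated as 1TB):

```python
def usable_tb(n, size, parity):
    # Rough RAIDZ usable space: raw minus parity disks.
    return (n - parity) * size

# Option A: dissimilar systems
main_a = usable_tb(8, 3, 2)            # 8 x 3TB RAIDZ2
backup_a = 2 * usable_tb(8, 1, 2)      # two 8 x 1TB RAIDZ2 vdevs (assumed split)

# Option B: matched systems, assumed two vdevs per box
per_box_b = usable_tb(4, 3, 1) + usable_tb(10, 1, 2)  # 4x3TB RAIDZ1 + 10x1TB RAIDZ2

print(main_a, backup_a, per_box_b)  # 18 12 17
```

So under these assumed layouts, the mixed option gives two matched ~17TB boxes, while the split option gives an 18TB main backed by only 12TB.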
With such an approach, I address the following:
1/ I can reuse my compute power (E5, memory and DL380) only when and as needed
2/ I can reuse my current drives and still grow my storage capacity later by replacing the 1TB drives with 3TB drives as they (hopefully) get cheaper
3/ I still have some headroom, thanks to the C2750D4I, to run a few apps on top of the storage duties
4/ It should not be power hungry (at least the CPU ;-))
5/ I get a dedicated backup server (today my data is copied across my different servers, but their specs differ, and this piggy DL380 G5 draws a lot of power even while shut down and only reachable via IPMI - something like 30W, which I found surprisingly high)
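On point 2/, the drive-by-drive upgrade works cleanly in ZFS if the pool's autoexpand property is on. A sketch of the workflow (device names are placeholders, not my actual devices):

```shell
# Grow a vdev in place by replacing members one at a time.
zpool set autoexpand=on tank
zpool replace tank da1 da8   # swap one 1TB member for a 3TB drive
zpool status tank            # wait for the resilver to finish, then repeat
# Once every member of the vdev has been replaced with a larger drive,
# the extra space appears automatically (or: zpool online -e tank da8)
```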
Now, I might be wrong, and it could be better to stick with fewer HDDs per system; I need to investigate a bit on that front.
That's it for my intro. I will create a thread under Hardware to discuss my ideas/options for evolving my NAS builds.