Zon
Cadet · Joined Aug 2, 2014 · Messages: 8
I'm a FreeNAS noob who's been experimenting with it for about the last six months. I've decided to migrate off a dedicated hardware NAS (four 2TB drives in RAID5) to an AMD A8-6600K quad-core 3.9GHz machine with 16GB RAM running FreeNAS-9.2.1.6-RELEASE-x64. The machine currently has four 2TB drives in a RAIDZ1 volume for testing, with about 4TB of the 6TB used by live data.
I am adding four 4TB drives, and my thought was to configure them as a new RAIDZ2 volume, copy the data off the test RAIDZ1 volume, then copy the data from the hardware NAS, and finally extend the new volume with the eight no-longer-used 2TB drives, also in a RAIDZ2 configuration.
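For concreteness, I think the CLI equivalent of that plan would look roughly like this (device names like ada0..ada11 and the pool name "tank" are just placeholders for my actual setup, and I'd really do this through the FreeNAS volume manager so the GUI stays in sync):

    # create the new volume from the four 4TB drives as one RAIDZ2 vdev
    zpool create tank raidz2 ada0 ada1 ada2 ada3

    # later, after both copies are done and the old drives are wiped,
    # grow the same volume by adding the eight 2TB drives as a second RAIDZ2 vdev
    zpool add tank raidz2 ada4 ada5 ada6 ada7 ada8 ada9 ada10 ada11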
Having a single ZFS volume makes managing the storage easier, but I assume I'm increasing my risk by having 12 drives in one volume, even though they'd be spread across multiple vdevs? If any one vdev fails completely, the entire volume is lost, correct? And by inference, there is no advantage to configuring the eight remaining 2TB drives as two 4-drive RAIDZ2 vdevs rather than a single 8-drive RAIDZ2 vdev? I assume risk, performance, and ease of administration are identical either way, right?
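Back-of-the-envelope, here's how I figure the raw numbers for those eight 2TB drives (please correct me if I'm off):

    one 8-drive RAIDZ2 vdev:   (8 - 2) x 2TB = 12TB usable, survives any two drive failures
    two 4-drive RAIDZ2 vdevs:  2 x (4 - 2) x 2TB = 8TB usable, survives two failures per vdev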
During my experimenting, when I realized that different datasets are presented as separate filesystems by FreeNAS, I decided I would use a single main media dataset. I had originally thought I should use separate datasets for the different kinds of data (large video files, documents, pictures, music, home directories, upload/download directories, etc.) because that gives me granularity of configuration, compression for example.
However, I very commonly move (cut/paste) large files across these domains. Having them in different datasets turns that into a slow byte-by-byte copy rather than the near-instant "just update the pointers" rename it would be within a single dataset, as in the example below. I realize this costs me the per-dataset configuration granularity, but it seems like the speedup of many future operations plus the simpler management makes this a reasonable trade-off. Is there an obvious negative I am missing?
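To make it concrete (paths here are just examples from my layout):

    # within one dataset: mv is a rename(), effectively instant
    mv /mnt/tank/media/incoming/movie.mkv /mnt/tank/media/video/movie.mkv

    # across two datasets: mv falls back to copying every byte, then deleting the original
    mv /mnt/tank/downloads/movie.mkv /mnt/tank/video/movie.mkv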
Finally, percentage-wise, quite a bit of my data (H.264 video, JPEGs, MP3s, etc.) is effectively incompressible. Given the CPU of my system, is it reasonable to leave the compression setting on the single large dataset at the default lz4? The system will just be doing file server duty, so spending cycles trying to compress incompressible data doesn't harm anything, presuming it can keep up with a gigabit LAN.
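My plan was just to leave the GUI default in place and spot-check the ratio later with something like this (pool/dataset names are placeholders):

    zfs get compression,compressratio tank/media

From what I've read, lz4 also bails out early on blocks it can't compress, so the overhead on this kind of data should be small anyway.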
Thanks for any suggestions!