Hello Everyone,
I am new to FreeNAS and ZFS in general, and I apologize in advance if this is the incorrect forum for these questions; I have looked over a few guides but did not get a direct answer to them. Side note - I work as a software developer with a background in electrical engineering, but I have never worked as a systems or network engineer. If I am completely misunderstanding a concept, please let me know and point me toward some solid reading material; I would love to fill in the gaps as much as possible.
Summary:
The box I just built uses the Supermicro 8-core Avoton CPU with 16GB of ECC RAM, 5 x 3TB WD Red drives, and four Intel NICs. I ordered the drives from two different large distributors (Amazon/Newegg) in hopes of getting drives from different batches, although I don't think that really matters for these questions. I should also add that this box is used in my home, where usage is split between family storage and work (I work from home). I have not yet created any volumes, pending the following two questions:
Questions:
1. Can iSCSI & "other storage" be on the same ZFS RAID volume? I would like to use a small part, ~1-2TB, of my total volume as the iSCSI storage volume for my ESXi cluster. The rest of the space I want to use for backups, general storage, media server storage, etc. Can this be done with some sort of logical break-up of the single RAID array, or simply via folders?
2. I am trying to decide between RAID 5 & 6 (RAID-Z1 & RAID-Z2 in ZFS terms). I understand that with RAID 5 I should get ~12TB usable and with RAID 6 ~9TB. I also understand that the extra parity drive adds reliability and lowers the chance of losing the entire volume. However, from my understanding RAID 5 still has single parity, so if I do lose one drive I should be able to swap in a brand new one (which I keep sealed in a box on hand) and regain the original integrity. Is this incorrect, or is that not how it works in practice? If it is true, how long after replacing the drive would the array regain full integrity? Am I mostly protecting against the chance of a second drive failing within the time it takes to replace and rebuild the first?
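To make question 1 concrete, here is roughly what I imagine the layout would look like, if I understand the ZFS docs correctly. The pool name, dataset names, and sizes are all made up for illustration:

```shell
# One pool ("tank"), carved into a zvol (a block device I could export
# as an iSCSI target) plus plain datasets for the file shares.
# All names and sizes below are hypothetical.
zfs create -V 2T tank/esxi-iscsi   # zvol for the ESXi iSCSI storage volume
zfs create tank/backups            # regular datasets for everything else
zfs create tank/general
zfs create tank/media
```

Is this the right mental model, or is the usual approach just folders/shares on a single dataset?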
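For question 2, my capacity expectations come from simple parity arithmetic: usable space is (drives minus parity drives) times drive size. This ignores ZFS metadata and padding overhead, so real usable space will be somewhat less:

```shell
# Rough usable capacity for my 5 x 3TB pool (before ZFS overhead).
DRIVES=5
SIZE_TB=3
echo "RAID-Z1 (1 parity): $(( (DRIVES - 1) * SIZE_TB )) TB"   # 12 TB
echo "RAID-Z2 (2 parity): $(( (DRIVES - 2) * SIZE_TB )) TB"   #  9 TB
```

Please correct me if the real numbers land meaningfully lower than this.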
Lastly, I have a few additional questions, but they may be more "network" related; I will put them here in case anyone has suggestions, but if you believe they are better asked elsewhere just let me know. Both my FreeNAS and ESXi machines use the same family of Supermicro Avoton boards with four Intel NICs. Right now my FreeNAS is set up with LACP across three of the interfaces, which connect to my internal LAN-only subnet. Controlled by my pfSense firewall, this subnet is blocked from all incoming requests from the outside world. The last NIC is on a subnet that could essentially be called the DMZ: the idea is that any service I host on the NAS box that I want to reach from outside my network without a VPN will listen on that last NIC, while everything else listens on the LACP group. Do you see any issues with this?

Thus far it seems to work, but there is something I want to change. I was thinking of removing one of the NICs from the LACP group and connecting it to a dedicated switch that is not connected to my network in any way, but is connected to a single NIC on each ESXi machine. The goal is for all traffic between the ESXi hosts and their iSCSI target to travel on a dedicated network separate from everything else, while incoming traffic to both ESXi and FreeNAS goes over the other interfaces. Do you believe this would increase performance?
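To clarify the dedicated-storage idea above, the addressing I have in mind would look something like this. The interface name and the 10.10.10.0/24 subnet are made up for illustration:

```shell
# FreeNAS side: give the NIC pulled out of the LACP group an address on
# an unrouted subnet shared only with the ESXi hosts' dedicated NICs.
# (Hypothetical interface name and addresses.)
ifconfig igb3 inet 10.10.10.1 netmask 255.255.255.0

# The ESXi hosts would get 10.10.10.11, 10.10.10.12, ... on their
# dedicated NICs, and the iSCSI portal would bind only to 10.10.10.1,
# so storage traffic never touches the LAN or DMZ subnets.
```

Does that match how people usually isolate an iSCSI network?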
Also, if the above would work as I thought, does FreeNAS support any type of "affordable" >1Gbps solution? From a few posts I have seen that 10GbE support is limited to one card at ~$900 each; is this truly the case? Are there any other solutions in the <$300-per-card range?
Thank you for taking the time to read this, if you have any questions or need clarification on anything please let me know!