ZFS - 6 drive raid - iSCSI / Storage share question

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
Hello Everyone,

I am new to FreeNAS and ZFS in general, and I apologize in advance if this is the incorrect forum for these questions; I have looked over a few guides but did not find direct answers. Side note - I work as a software developer with a background in electrical engineering, but I have never worked as a systems or network engineer. If I am completely misunderstanding a concept, please let me know and point me toward some solid reading material; I would love to fill in the gaps as much as possible.

Summary:
The box I just built uses the Supermicro 8-core Avoton CPU with 16GB of ECC RAM, 5 x 3TB WD Red drives, and 4 Intel NICs. I ordered the drives from two different large distributors (Amazon/Newegg) in hopes of getting drives from different batches, although I don't think that really matters for this question. I should also add that this box is used in my home, where usage is split between family storage and work (I work from home). I have not yet created any volumes and won't until I figure out the following two questions:

Questions:
1. Can iSCSI and "other storage" live on the same ZFS RAID volume? I would like to use a small part (~1-2TB) of my total volume as an iSCSI target for my ESXi cluster's storage. The rest of the space I want to use for backups, general storage, media server storage, etc. Can this be done with some sort of logical break-up of the single RAID array, or simply via folders?
2. I am trying to decide between RAID 5 and RAID 6. I understand that with RAID 5 I should get ~12TB and with RAID 6 ~9TB, and that the extra parity drive adds reliability and lowers the chance of losing the entire volume. However, as I understand it, RAID 5 still has single parity, so if I do lose one drive I should be able to swap in a brand new one (which I keep sealed in a box on hand) and regain the original integrity. Is that incorrect, or not how it works in practice? If it is true, how long after replacing the drive would the array regain full integrity? Am I mostly protecting against the chance of a second drive failing within the time it takes to replace the first failed drive?
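
For reference, here is my rough understanding of what a replacement would look like at the command line (pool and device names are placeholders, and I realize FreeNAS normally drives this from the GUI):

Code:
zpool status tank                          # identify the failed disk
zpool offline tank gptid/<failed-disk-id>  # take it offline before pulling it
# physically swap in the spare drive, then:
zpool replace tank gptid/<failed-disk-id> gptid/<new-disk-id>
zpool status tank                          # shows resilver progress and estimated time remaining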

Lastly, I have a few additional questions that may be more network-related; I will put them here in case anyone has suggestions, but if you believe they are better placed elsewhere, just let me know. Both my FreeNAS and ESXi machines use the same family of Supermicro Avoton boards with 4 Intel NICs. Right now my FreeNAS is set up with LACP across 3 of the interfaces, which connects to my internal LAN-only subnet; controlled by my pfSense firewall, this subnet is blocked from all incoming requests from the outside world. The last NIC is part of a subnet that could essentially be called the DMZ. The idea is that any service I host on the NAS box that I want to reach from outside my network without VPN listens on that last NIC, while everything else goes over the LACP group. Do you see any issues with this? So far it seems to work, but there is something I want to change: I was thinking of removing one of the NICs from the LACP group and connecting it to a dedicated switch that is not connected to my network in any way, but is connected to a single NIC on each ESXi machine. The goal is for all traffic between the ESXi hosts and their iSCSI target to run on a dedicated network separate from everything else, while incoming traffic to both ESXi and FreeNAS goes over the other interfaces. Do you believe this would increase performance?

Also, if the above would work as I thought, does FreeNAS support any type of "affordable" >1Gbps solution? From a few posts I have seen that 10G support is limited to one card at ~$900 each; is this truly the case? Are there any other solutions in the <$300-per-card range?

Thank you for taking the time to read this, if you have any questions or need clarification on anything please let me know!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
1. Yes. But do realize that without a beefy system, iSCSI can perform so slowly that you get write timeouts (which means lost data). Even I don't run iSCSI on my server, and mine is beefier than yours.
2. You are correct, but UREs throw all of that out the window. That's why RAID5/RAIDZ1 is dead. Please read up on the topic if you want to know more.
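
On #1, a single pool can hold both a zvol for the iSCSI extent and regular datasets for everything else. A minimal sketch (pool and dataset names are made up, and on FreeNAS you would normally do this through the GUI rather than the shell):

Code:
# One pool ("tank") holding both block and file storage - names are examples only
zfs create -V 2T tank/esxi-iscsi    # zvol exported as the iSCSI extent
zfs create tank/backups             # ordinary datasets for CIFS/NFS shares
zfs create tank/media
zfs list -t all -r tank             # everything draws from the same pool of space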

Your LACP setup is reasonable, but keep in mind that with LACP you will only get the throughput from a single connection when linking between your ESXi box and FreeNAS. So you can have all 3 in an LACP connection and you will still only get 1Gb throughput between ESXi and FreeNAS. If you want more than 1Gb you need to do MPIO, which pretty much means "get rid of LACP". This is covered in the FreeNAS documentation if you want to go this route.
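
If you do go the MPIO route, the rough idea (interface names and addresses here are placeholders) is two standalone interfaces on separate subnets instead of a lagg, with both IPs offered as iSCSI portals:

Code:
# Placeholder interface names/addresses - no lagg, each NIC gets its own subnet
ifconfig igb0 inet 10.10.10.2 netmask 255.255.255.0
ifconfig igb1 inet 10.10.11.2 netmask 255.255.255.0
# Both addresses are then added as iSCSI portal IPs, and the ESXi software iSCSI
# adapter binds one vmkernel port per subnet so it can multipath across both links.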

At present there are no >1Gb options for under $300. The only card that had been recommended lately was the X520, but it's not currently recommended as it has a tendency to randomly disconnect on FreeBSD. AFAIK there is no fix at present and no ETA on one. There is the possibility of more options with 9.3, but I am not going to discuss them until 9.3 is out, as I've learned my lesson about talking about future development before it's actually released.
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
Sorry, accidentally hit reply.

Thank you, that answers a lot of my questions. I will look into what you mentioned regarding RAID. As for iSCSI, is there any solution to this? Do you mean that FreeNAS in general has a hard time with iSCSI, and the recommendation is to run a fully dedicated system if you plan on using it as a target for an ESXi system? Any recommendations on other alternatives would be great.

Thanks again!
 

imanz

Dabbler
Joined
Oct 13, 2014
Messages
11
If it matters, the specific motherboard I am using is the Supermicro A1SAI-2750F-O. If one alternative is increasing memory, I am not against that; I just don't know whether 32GB would resolve the performance issues you mention or whether the bottleneck is the CPU. I was hoping to keep it to a single storage server in the house, both for space and electrical benefits. The ESXi machine is only used for work-related testing and does not really run anything intensive or highly sensitive. Apart from that it will host a MythTV backend server, but that still uses the NAS as its storage target.


I am also not opposed to using something other than iSCSI. As I mentioned before, I typically only deal with this kind of setup at work, where the systems/network guys handle it; my focus is elsewhere. I know that NFS is an option - would that perform better? Are there any downsides compared to iSCSI?

EDIT: I read through your PowerPoint again - great stuff. I just now noticed that a dedicated ZIL (SLOG) device would benefit my specific configuration. Last time I had only skimmed over it with the idea that it might not be necessary.

With that in mind, does anyone have any thoughts on upgrading the system to 32GB of ECC RAM (up from 16GB) and adding 2 SSDs - either Intel Pro 2500 or Intel S3500 - attached to the IBM ServeRAID M1015 for the SLOG? I am not sure if this is overkill; I would usually test this out myself, but I am strapped for time over the next two weeks.
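
From what I gather, attaching the SSDs would boil down to something like this (device names are just my guesses, and I assume FreeNAS exposes the same thing in the GUI):

Code:
# Hypothetical device names (da6/da7) for the two SSDs on the M1015
zpool add tank log mirror da6 da7   # attach them as a mirrored SLOG
zpool status tank                   # the new "logs" vdev shows up in the output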
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
When putting block storage (either iSCSI or NFS) on top of RAIDZ/RAIDZ2, you should be careful to choose an optimal ZVOL or dataset block size to keep reasonable space efficiency and performance. With the default ZVOL block size of 8K and ashift of 12 (4K disk sectors), RAIDZ2 will give you 2x parity overhead. You may need to increase the block size to get better space efficiency (at least 16K for 5-6 disks and 32K for 7-10 disks), but that may result in bigger copy-on-write overhead, depending on your workload. Depending on the number of disks, if block storage performance is important to you, you may prefer RAID10 instead of some form of RAIDZ.
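
To make the arithmetic concrete (ignoring RAIDZ allocation padding, and using made-up pool/zvol names), each zvol block on RAIDZ2 is stored as its data sectors plus two parity sectors, so larger blocks amortize the fixed parity cost:

Code:
# Rough space math for RAIDZ2 with ashift=12 (4K sectors), ignoring allocation padding:
#   volblocksize=8K  -> 2 data + 2 parity sectors, roughly half the space goes to parity (the 2x above)
#   volblocksize=16K -> 4 data + 2 parity sectors, roughly a third to parity
#   volblocksize=32K -> 8 data + 2 parity sectors, roughly a fifth to parity
# Hypothetical names; volblocksize can only be set when the zvol is created:
zfs create -V 2T -o volblocksize=16K tank/esxi-iscsi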
 