ZFS best settings for iSCSI as ESXi Storage with ZIL/L2ARC

Status
Not open for further replies.

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Dudes, we are assembling the machine at this moment, but there's an inconsistency between the /dev/da numbering and the SCSI disk numbers on the controller.

Is there a way to fix this, or should we just ignore it?


Code:
freenas# camcontrol devlist
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 8 lun 0 (pass0,da0)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 9 lun 0 (pass1,da1)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 10 lun 0 (pass2,da2)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 11 lun 0 (pass3,da3)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 12 lun 0 (pass4,da4)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 14 lun 0 (pass5,da5)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 15 lun 0 (pass6,da6)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 16 lun 0 (pass7,da7)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 17 lun 0 (pass8,da8)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 18 lun 0 (pass9,da9)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 19 lun 0 (pass10,da10)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 20 lun 0 (pass11,da11)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 21 lun 0 (pass12,da12)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 22 lun 0 (pass13,da13)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 23 lun 0 (pass14,da14)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 25 lun 0 (pass15,da15)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 26 lun 0 (pass16,da16)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 27 lun 0 (pass17,da17)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 28 lun 0 (pass18,da18)
<LSI SAS2X36 0e0b> at scbus0 target 30 lun 0 (ses0,pass19)
<ATA WDC WD1000DHTZ-0 6A00> at scbus0 target 31 lun 0 (pass20,da19)
<ATA WDC WD1000DHTZ-0 6A00> at scbus0 target 32 lun 0 (pass21,da20)
<ATA WDC WD1000DHTZ-0 6A00> at scbus0 target 33 lun 0 (pass22,da21)
<ATA WDC WD1000DHTZ-0 6A00> at scbus0 target 34 lun 0 (pass23,da22)
<ATA ST3000DM001-1CH1 CC24> at scbus0 target 35 lun 0 (pass24,da23)
<KINGSTON SV300S37A120G 506ABBF0> at scbus1 target 0 lun 0 (pass25,ada0)
<KINGSTON SV300S37A120G 506ABBF0> at scbus2 target 0 lun 0 (pass26,ada1)
<SanDisk Cruzer Blade 1.26> at scbus7 target 0 lun 0 (pass27,da24)

In the controller we have:
Disk 0-3: ATA WDC WD1000DHTZ-0 6A00
Disk 4-23: ATA ST3000DM001-1CH1 CC24
Thanks in advance,
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
I've done some benchmarks with:
Code:
iozone -ao


Here are the results:
Code:
4x WDC Velociraptor 10k RPM 1TB
2x Mirrored vdevs
1x Pool of vdevs
SLOG Device: 1x Kingston SSDNow V300
 
With SLOG Device:

Async:
real    0m18.361s
user    0m1.584s
sys     0m16.719s

Standard:
real    3m44.766s
user    0m1.933s
sys     0m51.236s

Always:
real    4m25.620s
user    0m1.770s
sys     0m53.255s

Without SLOG Device:

Async:
real    0m18.413s
user    0m1.488s
sys     0m16.903s

Standard:
real    56m14.807s
user    0m1.950s
sys     0m30.759s

Always:
real    71m12.395s
user    0m1.625s
sys     0m35.916s
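For anyone reading along: the three cases above correspond to the ZFS `sync` property on the dataset or zvol under test, which is presumably how they were toggled (the dataset name `tank/bench` here is a placeholder, not the actual pool in this build):

```shell
# The three benchmark modes map to the "sync" property:
zfs set sync=disabled tank/bench  # "Async": sync requests are ignored; fast but unsafe
zfs set sync=standard tank/bench  # default: honors fsync()/sync writes from applications
zfs set sync=always tank/bench    # every write is treated as synchronous
zfs get sync tank/bench           # confirm the active setting
```

The huge gap between Standard/Always with and without the SLOG is exactly what you'd expect: without a dedicated log device, every sync write lands on the ZIL inside the main pool.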
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I wouldn't worry about it. I just make sure that if a disk fails I know its serial number, so I can find that exact disk and replace it.
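For reference, one way to record the da-to-serial mapping ahead of time is to loop over the disks with smartctl (from smartmontools, which FreeNAS ships). The device glob is an assumption based on the camcontrol listing above; the USB boot stick (da24) will just show up as unknown:

```shell
# Print "device: serial" for every da disk so a failed drive can later
# be matched to a physical slot by its serial number.
for d in /dev/da*; do
  serial=$(smartctl -i "$d" | awk -F': *' '/Serial Number/ {print $2}')
  echo "$d: ${serial:-unknown}"
done
```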
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
I have another question about zvol creation. Is there a way to set the zvol size to something like the MAX available storage on the zpool? This pool will only serve iSCSI, so the zvol should be the same size as the zpool. Is there any reason why we must specify the size of the zvol instead of something like MAX?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, uh, um, no. You never ever want to do that. ZFS is a copy-on-write filesystem. You CANNOT fill it, or it will stop working. First it will stop working quickly. Then it will stop working slowly. Then you'll be hosed. And with a zvol you will be truly fscked, because you cannot even recover the "traditional" way. You'd have to toast a zvol.

If you do not care about performance, or have other mitigations in place such as ARC/L2ARC, you can approach 80% capacity without too much hazard. If you had an exclusively read-only load, you might even be able to approach 95%. To retain some level of performance on a busy r/w pool without sufficient ARC/L2ARC for the working set, though, 80% is almost certainly "too much" and you might be better off at around 50-60% of capacity, to help keep the allocator playing more nicely.
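For what it's worth, sizing the zvol at roughly half the pool as suggested would look something like this (pool name `tank` and zvol name `esxi-lun0` are made-up placeholders, a sketch rather than the exact commands for this build):

```shell
# 20TB pool, targeting ~50% utilization -> 10TB zvol for the iSCSI extent.
zfs create -V 10T tank/esxi-lun0
zfs list -o name,volsize tank/esxi-lun0
```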
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Understood, jgreco. Another question: is there a way to expand the zvol, or must I create another one? If there is, I can create something with 50% of the capacity and then grow it on demand.

Thanks for your clarification once again.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It is probably safer to create another one rather than expand an existing one, since expanding one requires some very careful changes both within FreeNAS and within ESXi.
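For reference, the ZFS side of growing a zvol is just a property change; it's the extent and ESXi side that needs the careful handling mentioned above (the zvol name is a placeholder, and this is only a sketch):

```shell
# Grow the zvol to 8T. ESXi must then rescan the iSCSI LUN and the
# VMFS datastore must be grown from the vSphere side afterwards.
zfs set volsize=8T tank/esxi-lun0
zfs get volsize tank/esxi-lun0
```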
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Thanks, jgreco. I found that I can create additional iSCSI extents on the same target, so that's better than having only one huge block.

I just created two 5TB blocks in the 20TB pool; if we need more later we will study the situation! :)
 