NFS Performance with VMWare - mega-bad?

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, if you know of any benchmarks I can perform to test the 4k write size let me know.

Probably the most relevant thing would be something like

Code:
root@:/var/tmp # iozone -r 4k -s 1g -i 0 -i 1 -i 2
[...]
        Record Size 4 KB
        File size set to 1048576 KB
        Command line used: iozone -r 4k -s 1g -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000002 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride            
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
         1048576       4  472846  452951  1559111  1548248  341378   56112                                      

iozone test complete.


That's on a VMware guest on an ESXi RAID1 SSD datastore. Now on an ESXi RAID1 spinny rust datastore:

Code:
        Record Size 4 KB
        File size set to 262144 KB
        Command line used: iozone -r 4k -s 256m -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride            
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
          262144       4   91565   70460  1482371  1505632  458744    1606                                      

iozone test complete.


Note that I used a smaller file size to keep the test from taking too long; I have other things to be doing tonight. But the random read/write numbers are really where the interesting stuff is (at least to me). ZIL usage isn't random, though, so there's probably a better test one could devise for the purpose.
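
If you wanted something closer to ZIL-style traffic, a rough sketch would be sequential small writes opened O_SYNC with the flush time counted. The flags below are standard iozone options; the 4k/1g sizing is just an example:

Code:
# -i 0 runs only the sequential write/rewrite tests
# -o opens the file O_SYNC, so every write must reach stable storage
# -e includes fsync/fflush time in the timing
iozone -r 4k -s 1g -i 0 -o -e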
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
This is with my ANS9010B, latest firmware, 6x2GB of ECC RAM.

Code:
# dd if=/dev/zero of=/dev/ada4 bs=4k count=20000
20000+0 records in
20000+0 records out
81920000 bytes transferred in 1.126017 secs (72752004 bytes/sec)
# dd if=/dev/zero of=/dev/ada4 bs=4k count=20000000
dd: /dev/ada4: short write on character device
dd: /dev/ada4: end of device
3145728+0 records in
3145727+1 records out
12884901376 bytes transferred in 176.257721 secs (73102621 bytes/sec)
[root@zuul] /dev# diskinfo -t ada4
ada4
        512             # sectorsize
        12884901376     # mediasize in bytes (12G)
        25165823        # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        24966           # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        5619336 0       # Disk ident.

Seek times:
        Full stroke:      250 iter in   0.008329 sec =    0.033 msec
        Half stroke:      250 iter in   0.008306 sec =    0.033 msec
        Quarter stroke:   500 iter in   0.016576 sec =    0.033 msec
        Short forward:    400 iter in   0.013356 sec =    0.033 msec
        Short backward:   400 iter in   0.013355 sec =    0.033 msec
        Seq outer:       2048 iter in   0.066645 sec =    0.033 msec
        Seq inner:       2048 iter in   0.067029 sec =    0.033 msec
Transfer rates:
        outside:       102400 kbytes in   0.581992 sec =   175947 kbytes/sec
        middle:        102400 kbytes in   0.582340 sec =   175842 kbytes/sec
        inside:        102400 kbytes in   0.582356 sec =   175837 kbytes/sec


So do those numbers blow your hair back? I'd say 72MB/sec of sustained 4k writes, with seek times flat at 0.033 msec, is pretty amazing.


IOZONE: the 256M test took a few seconds; the 1G test will take longer, so I'll post those results later.

Code:
# iozone -r 4k -s 256m -i 0 -i 1 -i 2 -f /dev/ada4
        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
                     Ben England.

        Run began: Mon Apr 22 04:18:59 2013

        Record Size 4 KB
        File size set to 262144 KB
        Command line used: iozone -r 4k -s 256m -i 0 -i 1 -i 2 -f /dev/ada4
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
          262144       4   71835   71761    78835    79133   79000   71655

iozone test complete.


These drives are supposed to perform at the same speed regardless of whether it's a read or a write test, and regardless of whether the access pattern is sequential or random, so I'd expect the 1G test to be almost identical.

Personally, I think the writes are all that should matter. A ZIL is never read from except at system boot, when it still holds unwritten transactions. I don't think you'd care too much if it took 1 minute instead of 58 seconds to sync the ZIL out to the zpool. :P

Reading up on the ZeusRAM, it looks like my Acard box is basically a cheaper version of it, though without the same very high performance numbers. But I think my box would be more than capable of putting some smackdown on performance numbers as a ZIL. My box is cheaper and also gives you the option of buying as much RAM as you can afford for it. Mine has 12GB of ECC, but I have 2 others with 32GB of non-ECC. If you use non-ECC RAM, the box converts it to ECC at the cost of 1/9th of the capacity (so roughly 28GB usable out of 32GB). For a business I'd probably recommend buying the ZeusRAM. As a home user, I'd push more for the Acard ANS-9010 series.

Maybe we should get someone who has performance issues to try one of these Acard devices and see how well it works? I've seriously considered selling the ones I have several times, but I've never done it because they're so cool, and with unlimited write endurance I have a hunch they'll be worth something to me someday. But for now I use one as a spare drive for my pagefile and temp folder. :)

Last edit: values for 1GB test file

Code:
#  iozone -r 4k -s 1G -i 0 -i 1 -i 2 -f /dev/ada4
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.397 $
		Compiled for 64 bit mode.
		Build: freebsd 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
	             Ben England.

	Run began: Mon Apr 22 04:24:59 2013

	Record Size 4 KB
	File size set to 1048576 KB
	Command line used: iozone -r 4k -s 1G -i 0 -i 1 -i 2 -f /dev/ada4
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                   
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
         1048576       4   68574   71759    79197    79214   78641   71679                                                          

iozone test complete.


So it looks like it's around 70MB/sec. If you were doing lots of random writes, this could really be a big deal. Of course, there's the possibility that your zpool write speeds could end up bottlenecked by the speed of this drive. I think further testing would be necessary to really see whether this device can perform.

I found http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/49904 where someone said that a good test would be iozone -m -t 16 -T -O -r 8k -o -s 2G, so I may try to run that later. I'm about to go take a nap.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's actually about what I'd expect, given that there shouldn't be complicating factors in the storage such as seek time or writing to flash (the things that kill spinny rust and flash performance). Flat performance on that test suggests that it is a fairly simple and straightforward device.

The Solaris list suggestion basically turns on threading in iozone, which increases the stress on everything. With single-threaded iozone there can be some latency between the end of one command and the start of the next (from the device, to the controller, to the opsys, to userland, where the program runs and issues the next command, then back to the opsys, to the controller, to the device). With multiple threads you get many workers all queuing things up, so the driver can stuff commands at the device more rapidly, and with technologies such as NCQ and DMA that should keep the back end as busy as reasonably possible.
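
For reference, here's that suggested command broken out; the flag meanings are straight from the iozone documentation, and the 16 threads and 2GB file size are just what the list post used:

Code:
# -m            use multiple internal buffers per worker
# -t 16 -T      sixteen workers, run as POSIX threads
# -O            report results in operations/second instead of KB/sec
# -r 8k -s 2G   8 KB records against a 2 GB working file per worker
# -o            open O_SYNC, so every write has to reach stable storage
iozone -m -t 16 -T -O -r 8k -o -s 2G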

So I suppose I should do a better job on this :smile: Running iozone in /var/tmp introduces too much noise from the FFS cache.

On an ESXi RAID1 SSD datastore (no hardware cache):

Code:
        Record Size 4 KB
        File size set to 262144 KB
        Command line used: iozone -r 4k -s 256m -i 0 -i 1 -i 2 -f /dev/da2
        Output is in Kbytes/sec
        Time Resolution = 0.000002 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride            
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
          262144       4   18814   19183    17943    18075   12859   18647                                      

iozone test complete.


OK, so that's single-threaded, and you'll immediately notice that the transfer rates don't come anywhere near what they were above, when iozone was run on top of the FFS layer (on the same disk subsystem).

Multi-threaded is quite frankly a PITA on a raw disk device. You have to set up individual partitions. On a blank disk, you can do

Code:
root@:/var/tmp # gpart create -s GPT /dev/da2
da2 created
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p1 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p2 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p3 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p4 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p5 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p6 added
root@:/var/tmp # gpart add -t freebsd-ufs -a 4k -s 1G /dev/da2
da2p7 added


Code:
        Record Size 4 KB
        File size set to 1047552 KB
        Multi_buffer. Work area 16777216 bytes
        Command line used: iozone -r 4k -s 1023m -i 0 -i 1 -i 2 -m -t 6 -T -F /dev/da2p1 -F /dev/da2p2 -F /dev/da2p3 -F /dev/da2p4 -F /dev/da2p5 -F /dev/da2p6 -F /dev/da2p7
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 6 threads
        Each thread writes a 1047552 Kbyte file in 4 Kbyte records

        Children see throughput for  6 initial writers  =  332394.33 KB/sec
        Parent sees throughput for  6 initial writers   =   40382.27 KB/sec
        Min throughput per thread                       =    6010.05 KB/sec
        Max throughput per thread                       =  105043.58 KB/sec
        Avg throughput per thread                       =   55399.06 KB/sec
        Min xfer                                        =   60060.00 KB

        Children see throughput for  6 rewriters        =  299854.13 KB/sec
        Parent sees throughput for  6 rewriters         =  298884.15 KB/sec
        Min throughput per thread                       =    9099.35 KB/sec
        Max throughput per thread                       =   90947.98 KB/sec
        Avg throughput per thread                       =   49975.69 KB/sec
        Min xfer                                        =  104808.00 KB

        Children see throughput for  6 readers          = 2339877.93 KB/sec
        Parent sees throughput for  6 readers           = 2307184.12 KB/sec
        Min throughput per thread                       =    9463.69 KB/sec
        Max throughput per thread                       =  777960.56 KB/sec
        Avg throughput per thread                       =  389979.66 KB/sec
        Min xfer                                        =   12748.00 KB

        Children see throughput for 6 re-readers        = 2282112.66 KB/sec
        Parent sees throughput for 6 re-readers         = 2277326.98 KB/sec
        Min throughput per thread                       =   10105.68 KB/sec
        Max throughput per thread                       =  750572.75 KB/sec
        Avg throughput per thread                       =  380352.11 KB/sec
        Min xfer                                        =   14004.00 KB

        Children see throughput for 6 random readers    =  407735.07 KB/sec
        Parent sees throughput for 6 random readers     =  405887.66 KB/sec
        Min throughput per thread                       =    6441.62 KB/sec
        Max throughput per thread                       =  131020.89 KB/sec
        Avg throughput per thread                       =   67955.85 KB/sec
        Min xfer                                        =   51508.00 KB

        Children see throughput for 6 random writers    =   57040.94 KB/sec
        Parent sees throughput for 6 random writers     =   27513.22 KB/sec
        Min throughput per thread                       =    1778.69 KB/sec
        Max throughput per thread                       =   17333.91 KB/sec
        Avg throughput per thread                       =    9506.82 KB/sec
        Min xfer                                        =  107496.00 KB



iozone test complete.


Interestingly enough, this appears to be complete bull ... I wonder what's going on here. Probably operator error of some sort. I know the storage is capable of ~210MB/sec read and ~120MB/sec write, but I wasn't seeing it push north of 40 during that test. Reported I/O was spot on for the nonthreaded test.

But SLOG activity isn't all that likely to resemble a heavily threaded workload anyway. I would think that a "dd if=/dev/zero of=/yourdev bs=4096" would be more likely to resemble the worst of a heavy, stressy ZIL workload, so your existing dd test is probably fine for establishing that your gadget is capable of "at least" 70MB/sec of sustained random workload. I haven't actually tried it or looked at the code with a critical eye towards this (I'm pretty sure the Sun guys did, though).
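
If anyone wants to reproduce that on their own hardware, a bounded version of the same test might look like the sketch below; the device name is only an example (ada4 is the Acard device from the earlier post) and the count just caps the run at 1GiB:

Code:
# WARNING: writes raw to the device and destroys whatever is on it
# 262144 x 4 KiB records = 1 GiB of sustained 4 KiB writes
dd if=/dev/zero of=/dev/ada4 bs=4096 count=262144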
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ok, so, now for some fun.

Got a BBU for this LSI2208. Hooked up a pair of 500GB Momentus XT's in RAID1. Turned on writeback. These drives in RAID1 are capable of around 100MB/sec.

Code:
        File size set to 9216000 KB
        Record Size 4 KB
        Command line used: iozone -f /dev/mfid1 -s 9000m -r 4k -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
         9216000       4   87720   87529    87401    90037     749     883

iozone test complete.


Great, except for random reads/writes, where the controller's BBU-backed cache is too small to be effective; it was running at around 180 tps there. For sequential ops it was running >20K tps.
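
The tps figures come from watching the volume while the test runs; assuming the RAID volume shows up as mfid1 (as it does here), something as simple as this in a second terminal is enough:

Code:
# prints KB per transfer, transactions/sec and MB/s for mfid1 once a second
iostat mfid1 1

That's where the ~180 and >20K numbers above come from. Next, the same test with a 900m file: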

Code:
        File size set to 921600 KB
        Record Size 4 KB
        Command line used: iozone -f /dev/mfid1 -s 900m -r 4k -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
          921600       4   91248   87168    65833    90215   69391    6376

iozone test complete.


It works better with a larger fraction of the data fitting in cache; it was running around 1600 tps on the random portions.

Code:
        File size set to 204800 KB
        Record Size 4 KB
        Command line used: iozone -f /dev/mfid1 -s 200m -r 4k -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
          204800       4   97924   97344    81277    74404   82129   90283

iozone test complete.


But working primarily out of cache? Way cool speed.

Now the thing is, that also seems to be about the max speed that the controller can handle from a single process, latency and all that. The controller seems to peak out around 72000 tps if I flood it with cachework.
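
For the curious, a purely hypothetical way to generate that kind of flood is several small-block writers in parallel, staggered across the volume; this is just a sketch, not what was actually run here:

Code:
# DESTRUCTIVE: hammers the raw RAID volume with concurrent 4 KiB writers
# each dd is offset with oseek so the writers hit different regions
for i in 0 1 2 3; do
    dd if=/dev/zero of=/dev/mfid1 bs=4096 count=100000 oseek=$((i * 200000)) &
done
wait

A single sequential dd, for comparison: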

Code:
[root@freenas] ~# dd if=/dev/zero of=/dev/mfid1 bs=4096
^C4019701+0 records in
4019700+0 records out
16464691200 bytes transferred in 182.622291 secs (90157073 bytes/sec)


A SLOG device should mostly be writing sequentially, so maybe the poor random performance isn't a major issue (and small bursts of random writes perform very well anyway). And I would expect that for many uses, SLOG writes at a rate of 21,000 or 22,000 per second would be pretty acceptable. That does seem to be a controller-dictated limit... with a larger block size, we bang up against the speed of the underlying drives (we were already doing 90MB/sec). Trying these tests with the SSD RAID1 also peaks out around 22,000 tps at 4KB, or 18,000 tps at 16KB.

So.

Assuming generally sequential ZIL writes, it appears that using a hard disk behind a write-back cache could get you a substantial number of ZIL ops per second, possibly approaching the sequential capacity of a moderately fast hard drive.

I haven't looked to determine whether the ZFS write path would allow enough concurrency; if not, then there might be a practical cap (in my case ~21K IOPS) with this technique. Otherwise, it seems likely that using SSD would let you push harder and get past that point.

So I guess now I have to go cram some disks in this and make a pool and do some tests.

And... I'm stunned. Wow.

Code:
       tty           mfid1             cpu
 tin  tout  KB/t tps  MB/s  us ni sy in id
   0    44  0.00   0  0.00   0  0  0  0 100
   0   133  0.00   0  0.00   0  0  0  0 100
   0    45  0.00   0  0.00   0  0  0  0 100
   0    45 35.93 755 26.50   0  0  6  2 92
   0    44 35.90 2975 104.30   0  0 20  7 73
   0    46 35.94 3084 108.24   0  0 19  8 72
   0    46 35.93 3060 107.35   0  0 20  9 71
   0    46 35.93 3041 106.70   0  0 20 10 70
   0    46 35.93 2956 103.71   0  0 25  7 68
   0    46 35.93 3017 105.87   0  0 25  6 69
   0    46 35.96 3140 110.28   0  0 23  8 68
   0    46 35.92 3103 108.84   0  0 22  9 70
   0    46 35.94 3116 109.37   0  0 21 11 68
   0    46 35.96 2981 104.69   0  0 26  9 65
   0    46 35.94 3124 109.62   0  0 21 10 69
   0    46 35.91 2723 95.50   0  0 18  8 75
   0    45 35.92 2446 85.83   0  0 15  7 78
   0    45 35.92 2384 83.61   0  0 14  8 78
   0    45 35.90 2252 78.95   0  0 22  8 70
   0    45 35.93 2358 82.74   0  0 19  9 72
   0    45 35.96 2410 84.61   0  0 17  6 77
       tty           mfid1             cpu
 tin  tout  KB/t tps  MB/s  us ni sy in id
   0    45 35.93 2342 82.17   0  0 16  7 78
   0   133 35.95 2430 85.31   0  0 15  6 78
       tty           mfid1             cpu
 tin  tout  KB/t tps  MB/s  us ni sy in id
   0    46 35.93 2076 72.84   0  0 18  6 76
   0   133 35.94 2226 78.12   0  0 15  7 78
   0    45 35.97 2013 70.70   0  0 13  5 82
   0    45 35.94 2661 93.41   0  0 17  7 76
   0    45 35.93 2215 77.73   0  0 16  5 78


That's the SLOG device. On the ESXi host, I'm ssh'd in and I did

Code:
/vmfs/volumes/256d35e6-3b3b14a0 # dd if=/dev/zero of=file2 bs=1048576 count=2048
2048+0 records in
2048+0 records out
/vmfs/volumes/256d35e6-3b3b14a0 #


Now if you look at the iostat output, you see that the write starts off really aggressively at 100-110MB/sec, but you can see the writeback cache fill up and it drops to 70-85MB/sec after about 10 seconds.

That's ... awesome. It would be unrealistically expensive to go out and buy a BBU RAID controller just for this purpose, but if you can get a system board that already integrates one for a modest price differential over a different server board, this is something to really think about.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Interesting. Personally I think I'd rather use the Acard device because it has persistent storage. If you lose power for longer than the BBU can hold out, you lose data. While rare, it's definitely not unheard of; I've had 3 such outages in my 4 years in this house. Other places may see them far more often, others less.

I know that in many third-world countries, if a power generation station trips offline, you lose power until it comes back online, sometimes for days. Here in the USA we're very lucky that if one plant trips offline there is spare capacity to pick up the load.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
An Adaptec RAID controller might work really nicely for a cheap ZFS setup, since you can set up both JBODs and RAID volumes on the same controller. That's what I'm stuck with in my offsite host, and when I get a chance I'll throw in a drive, set up a RAID 0 for the SLOG, and report back what I get. I wouldn't exactly recommend these controllers at the moment: the built-in driver in FreeNAS is a complete no-go (at least in my virtual SAN setup), and the current driver from Adaptec's website, which I backported to 8.3 and installed into FreeNAS, makes it work but trips what I think is this bug (http://forums.freebsd.org/showthread.php?t=34698). Also, a quick search for Adaptec and FreeBSD typically nets you a bunch of "forget it and buy an LSI card" advice. I've been trying to pressure my hosting provider into swapping an LSI card into my box, but no success yet.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Interesting. Personally I think I'd rather use the Acard device because it has persistent storage. If you lose power for longer than the BBU can hold out, you lose data. While rare, it's definitely not unheard of; I've had 3 such outages in my 4 years in this house. Other places may see them far more often, others less.

I know that in many third-world countries, if a power generation station trips offline, you lose power until it comes back online, sometimes for days. Here in the USA we're very lucky that if one plant trips offline there is spare capacity to pick up the load.

At least for the ESXi scenario, this is more or less irrelevant. The danger is your storage crashing or losing power while the VMs continue to run, unaware, and the uncommitted changes result in unnoticed file system corruption. If your storage loses power for several days but your VM hosts are still running, buy an extension cord and plug them into the same UPS/genset/whatever. In the more likely scenario, the VM host goes all lights out too. In that case, consistency is much less of an issue: the virtual machines know they were terminated abruptly and can take mitigation steps such as fsck. That doesn't make it pretty, but you can design and plan for it.

Basically this devolves into edge cases. If you have an important storage system, it should be on a UPS. If it is on a UPS, it should be set up to shut down cleanly on imminent power loss. The real concern should actually be failure of the UPS, which can result in load drop (siiiigh), or power supply failure (siiiigh). You work around that with redundant supplies and/or redundant UPSes feeding rack automatic transfer switches (RATS). But NAS server hardware failure is still possible, and your BBU is really only saving you from that, except that by the time it happens, storage has probably been down hard for 15 minutes before you localize the problem, and most virtual machines will have panicked or crashed without their underlying storage.

So in the context of ESXi NFS acceleration, I am failing to see a situation where the BBU isn't sufficient.
 

daimi

Dabbler
Joined
Nov 30, 2013
Messages
26
Ok, no points for originality then.

So an LSI MegaRAID SAS 9265-8i is ~$700. The BBU is another ~$200, so figure $900.


Q1) Is this how you would use **RAID with BBU**, as outlined below?
i) Assume ESXi is running from a USB stick.
ii) The FreeNAS VM (vmdk1) + SLOG (vmdk2) are stored on a local datastore (created on **drives** connected to the **RAID with BBU**).
iii) The FreeNAS VM (with a passed-through M1050 flashed to IT mode and connected to the HDDs) creates a zpool, then shares it over NFS to ESXi for hosting all the other VMs.

Q2) The **drives** mentioned above should be of:
~ Type: HDD? / SSD? / supercap SSD? / SLC SSD?
~ Size: 7.5GB? = 3.5GB (ZIL sizing -> half of the 7GB FreeNAS memory -> 6GB baseline + 1GB per 1TB of storage) + 4GB (FreeNAS install)

Q3) Rather than spending $900 on **RAID with BBU**, how about buying a supercap SSD / SLC SSD and connecting it to the M1050 as a SLOG?
 