Local performance worse than over iSCSI?!?

quoxel

Cadet
Joined
Mar 21, 2021
Messages
3
Hello everybody,

I'm seeing some strange behaviour in my test environment and would appreciate any kind of tip!

TrueNAS is installed on a Proxmox hypervisor - the VM has 8 GB of RAM, 4 CPUs, a "normal" HDD and a 40 GB SSD passed through from the Proxmox host.
I set up one pool on each disk, and each pool is exported as an iSCSI volume back to the Proxmox machine.
The goal of this scenario is to benchmark the performance of the storage as seen inside the TrueNAS VM against the same storage accessed via iSCSI.

Testing with fio, I get the strange result that the storage accessed via iSCSI appears to be faster (~18 MB/s) than the same fio test run on the TrueNAS VM itself (~283 kB/s).
The tests run with sync=always and the following fio command:
fio --name=random-write --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

ZFS deduplication is off, atime is off, and compression is set to LZ4. The other options are left at TrueNAS defaults. The network between the VM and its host is 10 Gbit/s via the virtio driver. The VM's CPU type is set to qemu64.
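
For completeness, this is the variant I'd run next to rule out buffering - /mnt/testpool is just a placeholder for the pool's mountpoint, and --fsync=1 forces a flush after every single write instead of relying only on the dataset's sync=always setting:
fio --name=random-write --directory=/mnt/testpool --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --fsync=1 --runtime=60 --time_based --end_fsync=1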

Normally, testing with a 4k block size should result in poor throughput, but I expected it the other way round because of the latency of the iSCSI connection.
Could somebody help me find my mistake? Something here is very weird...
- Does fio behave differently under FreeBSD than under Linux?
- Are there any buffers or caches that could affect the results?
- Am I using the wrong parameters in the fio test command?

Any help is appreciated - I hope I didn't forget any important information...

Thanks in advance!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We don't recommend iSCSI with less than 64GB of RAM. Boost your VM to something like 96 or 128GB of RAM so that you are giving ZFS the resources it needs to make effective use of iSCSI, and make sure the memory is locked. Also, unless your hypervisor has a bunch of CPUs, well past a dozen, try reducing the number of CPUs by one or two and reserving them for FreeNAS.

Small test sizes (--size=4g) are not expected to give meaningful results in my experience. Try something much larger.
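
Something along these lines, for example - the directory is just a placeholder for wherever your test dataset is mounted, and the size is only illustrative, the point being a working set well beyond what the ARC can absorb:
fio --name=random-write --directory=/mnt/yourpool/fiotest --rw=randwrite --bs=4k --size=64g --numjobs=1 --iodepth=1 --runtime=300 --time_based --end_fsync=1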

Suggest you review the following:

https://www.truenas.com/community/r...quires-more-resources-for-the-same-result.41/

Also, disable sync writes, because the way your OS and iSCSI may be coalescing and handling writes could be very different from what's happening directly on the ZFS host. You want an apples-to-apples comparison, which is never going to be fully possible, but you can at least get "closer".
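
Roughly like this from a shell on the TrueNAS side - the pool/dataset name is a placeholder for whichever dataset backs your test - and remember to set it back to always when you're done:
zfs get sync tank/iscsi-test
zfs set sync=disabled tank/iscsi-test
# run the fio comparisons, then:
zfs set sync=always tank/iscsi-test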
 

quoxel

Cadet
Joined
Mar 21, 2021
Messages
3
Okay... sorry, I forgot to mention that I'm talking about a home lab... ;)
My test server does have 128 gigs of RAM, but giving at least half of it to the storage is not my goal...

So, in your opinion, would it be better to use NFS for the VM storage?

But I don't see the exact problem... you say iSCSI should have at least 64 GB of RAM... is that a limitation of ZFS (the filesystem) or of block storage in general? The article you linked describes the read and write process very well, but it conflicts with my understanding, which is based on my experience.
In my company we have some Infortrend SANs with Fibre Channel that have only 4 GB of RAM onboard, for example. Their performance is enough to support 25-30 VMs running on 3 ESXi hosts...
The filesystem there is VMFS, that's one of the differences. But why the gap between 4 gigs and 64 gigs of RAM? What is the performance killer? FC is also block storage, so why can a hardware SAN be so much faster than a software SAN? Because of the different file system? Or because of block- vs. file-based access?
What would be the best shared storage for a virtualization environment like Proxmox? The VM hard disks are single files (which contain a file system), but the container itself is block storage from the hypervisor's point of view, isn't it?
Okay, DAS would be the best storage type for performance, but I'm looking for shared storage like a SAN which can be shared across Proxmox servers for testing HA, failover etc. It doesn't need to have the best performance (which is expensive), just the best affordable performance. Could you recommend something?
 

quoxel

Cadet
Joined
Mar 21, 2021
Messages
3
Just to be clear... in my case, is iSCSI on the same ZFS pool faster because the TrueNAS OS (FreeBSD) takes a different path to the ZFS storage than iSCSI does? Or did I get something wrong? I can't see why iSCSI would be FASTER than direct access to the storage...
 