Slow write speeds ~10-20MB/s

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
Just had my
- DL360 arrive with a Smart Array P440ar
- DL380 arrive with a Smart HBA H240ar

I've set it to HBA mode and it's installing; I'm just not sure if I should load a better firmware.
What's the recommended firmware for these, and what's the method to upgrade?

Will report performance soon
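
Something like this should confirm which mode the controller is actually in, using HPE's ssacli from a Linux shell (the slot number is only an example, adjust for your box):

# list the controllers and which slot they sit in
ssacli ctrl all show config

# show controller details; the HBA-mode state should appear in this output
ssacli ctrl slot=0 show detail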
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Those are not suitable disk adapters. Setting a RAID controller to HBA mode is not the same as having an HBA. Please have a look at

 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
Argh, I think this is the same issue as my current server; I flashed the RAID controller to LSI and that's probably why my performance is so rubbish. I'm going to have to try to find a drop-in replacement.
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
So, I installed TrueNAS with the controller set to HBA mode

Getting 133MiB/s write and 226MiB/s read; I had to disable SYNC for these speeds

Is this acceptable for 10k rpm drives?
Screenshot 2023-02-15 at 12.30.30 PM.png
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
So, I installed TrueNAS with the controller set to HBA mode

Getting 133MiB/s write and 226MiB/s read; I had to disable SYNC for these speeds

[...]
Testing what, and how?
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
On the TrueNAS box, writing and reading directly to the volume:

sync; fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/pve/truenas-proxmox-nfs/testWrite --bs=4k --iodepth=256 --size=2G --readwrite=randwrite --ramp_time=4
Run status group 0 (all jobs):
WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=1463MiB (1534MB), run=10885-10885msec

sync; fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/pve/truenas-proxmox-nfs/testWrite --bs=4k --iodepth=256 --readwrite=randread --ramp_time=4
Run status group 0 (all jobs):
READ: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=1272MiB (1333MB), run=6478-6478msec

Then from an Ubuntu VM on Proxmox with NFS-mounted storage (the 10G link was tested both ways with iperf):
Write, 3 tests:
sync; fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/root/testWrite --bs=4k --iodepth=256 --size=2G --readwrite=randwrite --ramp_time=4

WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=1607MiB (1685MB), run=14883-14883msec
WRITE: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=1824MiB (1912MB), run=29843-29843msec
WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1601MiB (1678MB), run=14609-14609msec

Read, 3 tests:
sync; fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/root/testWrite --bs=4k --iodepth=256 --readwrite=randread --ramp_time=4

READ: bw=179MiB/s (187MB/s), 179MiB/s-179MiB/s (187MB/s-187MB/s), io=1375MiB (1441MB), run=7689-7689msec
READ: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=1359MiB (1425MB), run=7630-7630msec
READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=1263MiB (1324MB), run=6679-6679msec

So much better than the 20-30MB/s the server I'm replacing is getting.
My question is: is it OK to have set SYNC to Disabled (from Standard) on the volume?
I'm waiting on my NVMe drives to arrive for a log device.

Many thanks again
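
For anyone wanting to repeat the link check, something like this with iperf3 covers both directions (the hostname is just a placeholder):

# on the TrueNAS side
iperf3 -s

# on the Proxmox / VM side, forward then reverse direction
iperf3 -c truenas.local
iperf3 -c truenas.local -R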
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
re: SYNC or not

In the end it's up to you to decide. With sync disabled, if the network or the NFS server goes down or becomes unavailable, your most recent data is (possibly) lost. I personally would never disable sync. If data integrity weren't important, I would not have chosen ZFS at all. To me it doesn't make sense to pick what is possibly the most robust and secure (in terms of availability) filesystem, only to strip out those safety features to gain some performance.
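
For reference, the property is per dataset and easy to check or flip back; a minimal sketch, assuming a pool/dataset named Vol1/vmstore (placeholder name):

# check the current value
zfs get sync Vol1/vmstore

# honour sync requests from NFS clients (the default)
zfs set sync=standard Vol1/vmstore

# acknowledge writes before they reach stable storage (data at risk on a crash)
zfs set sync=disabled Vol1/vmstore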
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
I plan to re-enable it once I have the log device.
As mentioned, the majority of the storage is for media that can be re-written or re-downloaded, so even RAIDZ2 is overkill.
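
When the NVMe drives arrive, attaching them as a mirrored log device should look roughly like this (pool and device names are placeholders, not verified on this box):

# add a mirrored SLOG so sync writes land on NVMe first
zpool add Vol1 log mirror /dev/nvme0n1 /dev/nvme1n1

# then switch the dataset back to honouring sync writes
zfs set sync=standard Vol1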

Got everything working nicely, then set up Proxmox for LACP, clicked save, and it froze; now it won't boot up :/ just as I was about to chuck it all in the DC.
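
For the LACP part, the bond on the Proxmox side usually ends up in /etc/network/interfaces looking roughly like this (interface names and addresses are placeholders). Note the switch ports have to be in a matching LACP group, otherwise the node drops off the network, which can look exactly like a freeze:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0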
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Is TrueNAS running virtualized on proxmox? Or is the another instance/box?
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
TrueNAS is a system of its own and unrelated, other than Proxmox using it for NFS.

I'm waiting on an LSI module to replace the RAID controllers I have in HBA mode.

I'd like to understand what performance increase I could see if I did that. I'm currently getting roughly 100MB/s write and 200MB/s read with the 10k RPM SAS disks. Do you think that with a proper HBA I'd see a large or only a small performance boost? I'm also going to add NVMe drives for a log device.

The current controller in the DL380 looks after 22 SAS drives, so an LSI card that supports 22 disks and fits natively in the system is proving difficult to find, even for our hardware vendor.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
@icepicknz I'm sorry if that was intentional or if somebody already mentioned it, but I see that in the fio tests you are using 4KB blocks, and in dd you used the default 512-byte I/Os. Whatever you are trying to measure there likely has little to do with the storage; I/Os that small are usually CPU bound. In the best case ZFS may be able to do a quarter million 4KB I/Os, but the real question is: are you actually trying to optimize for such small I/Os?
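
If the aim is throughput for media and VM images rather than small-block IOPS, a large-block fio run is more representative; a sketch along the same lines as the earlier tests (path, size, and block size are arbitrary choices here):

sync; fio --name=seqwrite --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=/mnt/Vol1/testSeq --bs=1M --iodepth=32 --size=8G --readwrite=write --ramp_time=4
sync; fio --name=seqread --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=/mnt/Vol1/testSeq --bs=1M --iodepth=32 --size=8G --readwrite=read --ramp_time=4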
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
@mav@ Thanks, I did not consider the block size.

I'm just trying to ensure my system is performing as expected. I've seen people post read/write speeds of 400-500MB/s, so I was trying to compare my results against those; please excuse my n00bness when it comes to storage systems, I'm more of an ASIC network engineer :)

[root@truenas ~]# sync; fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/Vol1/testFile --bs=4k --iodepth=256 --size=2G --readwrite=randwrite --ramp_time=4
WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=1483MiB (1555MB), run=8064-8064msec

I'm trying to understand whether my system is at its best performance, or whether I need to do something better before pushing it to production.
It will host around 10 Proxmox VMs on shared storage (a ~1TB dataset at most, where most VMs are 16-30GB in size) and media (~14TB, made up of MKV / MP3 files, etc.).
 

icepicknz

Dabbler
Joined
Feb 11, 2023
Messages
19
So I spun up a Windows VM (haven't used Windows since XP!) and it took ages to install; I suspected slow disk.
This says it all.
 

Attachments

  • Screenshot 2023-02-17 at 6.10.31 PM.png (362.4 KB)