Starting virtual machines causes all TrueNAS services to fail

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
When I start a virtual machine on my TrueNAS Core system, it causes all TrueNAS services to stop working, including web management and SMB services.
I am not sure what could be causing it. I would appreciate any help or advice on how to diagnose and fix this issue.

Code:
root@truenas[~]# lscpu
Architecture:            amd64
Byte Order:              Little Endian
Total CPU(s):            48
Thread(s) per core:      2
Core(s) per socket:      12
Socket(s):               2
Vendor:                  GenuineIntel
CPU family:              6
Model:                   62
Model name:              Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
Stepping:                4
L1d cache:               32K
L1i cache:               32K
L2 cache:                256K
L3 cache:                30M
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 cflsh ds acpi mmx fxsr sse sse2 ss htt tm pbe sse3 pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline aes xsave osxsave avx f16c rdrnd fsgsbase smep erms syscall nx pdpe1gb rdtscp lm lahf_lm

root@truenas[~]# grep VT-x /var/run/dmesg.boot
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Please describe your hardware in detail as per the forum rules (link in red at the top middle of the screen).
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
FreeBSD 13.1-RELEASE-p2 n245412-484f039b1d0 TRUENAS (installed from TrueNAS-13.0-U3.1.iso)
Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz and 256 GB memory
10*16TB TOSHIBA SSD in hardware RAID-0
6 * 2TB TiPlus5000 SSD in RAIDZ2
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
10*16TB TOSHIBA SSD in hardware RAID-0
Are you completely insane? That can barely be expected to work long enough to make a ridiculous youtube video, much less work reliably.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
  1. Hardware RAID is a quick way to lose data.
  2. RAID0 is not in any way reliable, and having a 10-wide RAID volume means losing everything if any of the disks fails.
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
Following your advice, I changed the 10*16TB SSD configuration from hardware RAID-0 to hardware RAID-5, which should address the data-loss issue you mentioned.
During creation of the storage pool, I used the single RAID disk group rather than multiple individual disks.
Now I'm facing one issue: significant bandwidth contention when multiple users read files concurrently. The bottleneck is not the network, as I have already configured a 10-gigabit Ethernet port; it is the disk read speed. I'm considering how to resolve this.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I mean, that's technically better, but it's putting lipstick on a pig. Hardware RAID is not appropriate for use with TrueNAS.
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
> Hardware RAID is not appropriate for use with TrueNAS.

Thank you for your response. Is there any documentation that provides a more detailed explanation of this issue?
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
As mentioned before, I have another pool consisting of 6 * 2TB TiPlus5000 SSDs in a RAIDZ2 configuration. Are there any recommended benchmarks to test the performance of these two pools? In earlier proof-of-concept tests I used the 'du' command-line tool, but I didn't observe significant differences between the two pools.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Thank you for your response. Is there any documentation that provides a more detailed explanation of this issue?
If you want to lose your data, follow the guidance in this section of the documentation (BTW, WTF iXsystems????)

If you care about the data you will put on the system, read this instead:
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Other than the HBA resource linked, you might want to read the following as well.

You can find more in the resource section or in my signature.
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
With the current number of disks I have, how should I configure them to achieve the best concurrent read performance?
Should I split the disks into multiple Mirror vdevs?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
With the current number of disks I have, how should I configure them to achieve the best concurrent read performance?
Should I split the disks into multiple Mirror vdevs?
Define concurrent. But yeah, mirror vdevs.
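
For illustration, a striped-mirror layout for the six 2TB SSDs could look like the sketch below. `tank` and the `da*` device names are placeholders, and on TrueNAS you would normally build the pool through the web UI (Storage > Pools) rather than the shell.

Code:
# three 2-way mirror vdevs striped together; reads can be served
# by either disk in each mirror, so read IOPS scale with disk count
# and write IOPS scale with vdev count
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# verify the layout
zpool status tank

Each added vdev contributes its own IOPS to the pool, which is why mirrors are the usual answer for concurrent small reads.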
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
> https://www.truenas.com/community/resources/zfs-storage-pool-layout.201/

1 single drive:
• Read IOPS: 250
• Write IOPS: 250
• Streaming read speed: 100 MB/s
• Streaming write speed: 100 MB/s
• Storage space efficiency: 100%
• Fault tolerance: 0
1x 2-way mirror:
• Read IOPS: 500
• Write IOPS: 250
• Streaming read speed: 200 MB/s
• Streaming write speed: 100 MB/s
• Storage space efficiency: 50% (6 TB)
• Fault tolerance: 1
6x 2-way mirror:
• Read IOPS: 1200
• Write IOPS: 600
• Streaming read speed: 1200 MB/s
• Streaming write speed: 600 MB/s
• Storage space efficiency: 50% (36 TB)
• Fault tolerance: 1 per vdev, 6 total

Question: for the 6x 2-way mirror, why are the Read IOPS / Write IOPS multiplied by only 2.4 instead of 6 here?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Look at the errata in the resource description.
While the resource is valid there are a few mistakes, namely:

  • page 4, Storage space efficiency formula
    • correct: 1/N
  • page 5, 6x 2-way mirror
    • correct Read IOPS: 3000
    • correct Write IOPS: 1500
  • page 6, 4x 3-way mirror
    • correct Streaming read speed: 1200 MB/s
  • page 11, Scenario 3
    • a SLOG is not a write cache
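
Applying the corrected figures, the per-vdev numbers quoted above scale linearly with the vdev count, which also answers the "2.4 instead of 6" question:

Code:
Per 2-way mirror vdev: 500 read IOPS, 250 write IOPS,
                       200 MB/s streaming read, 100 MB/s streaming write

6x 2-way mirror:
  Read IOPS:             6 x 500 = 3000
  Write IOPS:            6 x 250 = 1500
  Streaming read speed:  6 x 200 = 1200 MB/s
  Streaming write speed: 6 x 100 =  600 MB/s

The 1200/600 IOPS figures in the document were typos; the streaming speeds were already consistent with linear scaling.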
 

adahsuzixin

Dabbler
Joined
Mar 7, 2023
Messages
14
How do I measure the following metrics for a pool?

• Read IOPS
• Write IOPS
• Streaming read speed
• Streaming write speed

I plan to build several pools and select the one I need based on measurements. Is it possible to test with `fio`? Also, should I test on the client or on the TrueNAS server?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
`zpool iostat` or `zpool iostat -v`
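
For example (`tank` is a placeholder pool name), run this on the TrueNAS server while your workload is active:

Code:
# per-vdev bandwidth and IOPS, sampled every 5 seconds
zpool iostat -v tank 5

`zpool iostat` reports what the pool actually serviced, so it only shows useful numbers while something is generating load.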
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
`fio`, or `dd`.
There is also the solnet-array-test script.
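
A minimal `fio` sketch for the four metrics, run directly on the TrueNAS server. The path and size are placeholders; keep the test file larger than RAM (256 GB here), otherwise the ZFS ARC will serve reads from memory and inflate the results.

Code:
# random read IOPS (4k blocks; posixaio is the FreeBSD-native engine)
fio --name=randread --filename=/mnt/tank/fio.test --size=300g \
    --rw=randread --bs=4k --ioengine=posixaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting

# streaming read speed (1M sequential blocks)
fio --name=seqread --filename=/mnt/tank/fio.test --size=300g \
    --rw=read --bs=1m --ioengine=posixaio --iodepth=4 \
    --runtime=60 --time_based --group_reporting

# use --rw=randwrite and --rw=write for the two write-side metrics

Testing on the server measures the pool itself; repeating the same test from a client over SMB adds network and protocol overhead, so doing both tells you which layer is the bottleneck.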
 