Supermicro 1028U-E1CR4+ Pure SSD

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
Hello everyone,
I bought this:
System: Supermicro 1028U-E1CR4+
CPU: 2 x Intel Xeon E5-2620 v4 (8C, 2.1GHz, 20M cache, 8GT/s QPI)
RAM: 4 x 16GB DDR4-2666 2Rx8 ECC RDIMM (64GB total)
Disks: 6 x Samsung SM863a MZ7KM1T9HMJP-00005, 1.92TB, SATA 6Gb/s, V-NAND, 2.5", 7mm (3.6 DWPD)
HBA: AOC-S3008L-L8e HBA in IT mode
LAN: AOC-STG-i4S 4-port 10GbE standard LP with SFP+, Intel XL710-AM1
FreeNAS OS installed on mirrored SanDisk Ultra Fit USB 3.1 flash drives (up to 130MB/s, 2 drives).
No SLOG/ZIL or cache device.

I want to use this as storage for XCP-ng (XenServer) VMs.
I need at least 7TB usable.

It will be direct-attached over SFP+ at 10Gb between the hosts and the storage, via iSCSI.

Which layout gives the best performance? I need to survive at least one disk failure (so the storage keeps running when one disk fails).
I know the best performance comes from striped vdevs. Is five disks striped plus one hot spare a good idea? What do you recommend?

Do you recommend SLOG/ZIL or cache disks? If so, how many GB do I need? Do I need two disks, one for ZIL and one for cache? (I don't think I need them, because all the disks are SSDs already and I don't see the benefit.)
From reading this forum and other tutorials I think I have enough RAM - is that correct?
For the best iSCSI performance, do I need to disable sync, etc.?

Thanks.
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
I know the best performance comes from striped vdevs. Is five disks striped plus one hot spare a good idea? What do you recommend?
If you do 5 vdevs striped together, then you would not need any hot spares; if one disk dies, your pool is already dead.
You need either multiple mirrored vdevs, or RAIDZ1/Z2.
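
To make that concrete, a rough sketch - the pool name "tank" and FreeBSD device names da0-da5 are placeholders, not your actual layout:

# pure stripe: fast, but ZERO redundancy; any single disk failure loses the pool
zpool create tank da0 da1 da2 da3 da4 da5
# RAIDZ2 over the same six disks: survives any two disk failures
zpool create tank raidz2 da0 da1 da2 da3 da4 da5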

For the best iSCSI performance, do I need to disable sync, etc.?
The performance might be better, but the integrity of the data might not be... Just something to think about.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
This design being a preassembled Supermicro system explains the overkill CPU selection. You're unlikely to max out even a single one of those two CPUs, let alone need both. Consider adding more RAM, but not until after you've reviewed the below information.

Your understanding of how the pool configuration will work is a little misguided, as @IQless has pointed out - a pure stripe vdev gives you zero redundancy, and a hot spare will not do anything to help that.

Given that you're interested in maximum performance I would suggest getting two more SM863a drives and setting it up as four two-way mirrors (total usable will be roughly 7.68T). Using a RAIDZ level is a potential alternative but will impact performance (yes, even on all-flash vdevs).
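
Something like this, assuming the eight drives show up as da0 through da7 (device names and the pool name "tank" are placeholders):

# four two-way mirrors striped together into one pool:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7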

Since this will be an all-flash setup you can fill the array more fully without feeling the impact of fragmentation, but 7T would be about the absolute max I would load. Using the default LZ4 compression should help push down the actual physical usage to an acceptable level though. Once I'm not on a phone I'll find the right tunables again for that.
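
For reference, verifying the compression setting and the ratio it actually achieves is just (pool name is a placeholder):

# LZ4 is the FreeNAS default; confirm it and watch the achieved ratio:
zfs get compression,compressratio tank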

But if you're going to put Xen VMs on this datastore, you need to run synchronous writes for safety. This would run acceptably well on all-flash vdevs but an SLOG would be better. Your system gives the option of NVDIMMs; if they can be recognized as block or NVMe devices then FreeNAS can use them, but if there is no FreeBSD driver you may be out of luck there and need to use an Optane or other NVMe device. SAS/SATA SLOG will not be anywhere near fast enough to keep up with 10GbE.
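
If a suitable NVMe device does show up (on FreeBSD you can check with nvmecontrol devlist), attaching it as a log vdev is a one-liner - device and pool names here are placeholders:

# add the NVMe device as a dedicated log (SLOG) vdev:
zpool add tank log nvd0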

It looks like a good base system to build on; let's make sure you get something solid as a result!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
At least the CPUs can be put to use compressing stuff. Gzip might be a stretch, but zstd might be nice when it finally arrives.
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
This design being a preassembled Supermicro system explains the overkill CPU selection. You're unlikely to max out even a single one of those two CPUs, let alone need both. Consider adding more RAM, but not until after you've reviewed the below information.

Your understanding of how the pool configuration will work is a little misguided, as @IQless has pointed out - a pure stripe vdev gives you zero redundancy, and a hot spare will not do anything to help that.

Given that you're interested in maximum performance I would suggest getting two more SM863a drives and setting it up as four two-way mirrors (total usable will be roughly 7.68T). Using a RAIDZ level is a potential alternative but will impact performance (yes, even on all-flash vdevs).

Since this will be an all-flash setup you can fill the array more fully without feeling the impact of fragmentation, but 7T would be about the absolute max I would load. Using the default LZ4 compression should help push down the actual physical usage to an acceptable level though. Once I'm not on a phone I'll find the right tunables again for that.

But if you're going to put Xen VMs on this datastore, you need to run synchronous writes for safety. This would run acceptably well on all-flash vdevs but an SLOG would be better. Your system gives the option of NVDIMMs; if they can be recognized as block or NVMe devices then FreeNAS can use them, but if there is no FreeBSD driver you may be out of luck there and need to use an Optane or other NVMe device. SAS/SATA SLOG will not be anywhere near fast enough to keep up with 10GbE.

It looks like a good base system to build on; let's make sure you get something solid as a result!

Thanks! What are four two-way mirrors, and how safe is that? How many disk failures can it survive?
What do you recommend for the NVMe? And how much space for the SLOG?

8 disks in four two-way mirrors and one NVMe for the SLOG? And set sync=always, yes?

Thanks!
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
Intel Optane SSD 900P 280GB, half-height PCIe x4, 3D XPoint, single pack - is this good for a SLOG? Do I need another one, or is one enough? Do I need another for cache? If so, what do you recommend, and how much space do I need? Thanks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks! What are four two-way mirrors, and how safe is that? How many disk failures can it survive?
What do you recommend for the NVMe? And how much space for the SLOG?

8 disks in four two-way mirrors and one NVMe for the SLOG? And set sync=always, yes?

Thanks!
"Four two-way mirrors" refers to the desired pool configuration; get eight disks in total, configure them in four sets of mirrors. Each "pair" of mirrors could lose one drive, so in the best case scenario you could lose up to four drives. But worst case you can lose only one drive; if its "partner" in the mirror fails then the pool is unavailable.

You will want to set sync=always on your dataset or ZVOL in order to use the SLOG.
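
For example, assuming a zvol called tank/vmstore backs the iSCSI extent (both names are placeholders):

zfs set sync=always tank/vmstore
# confirm it took effect:
zfs get sync tank/vmstore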

The selected NVMe device (Optane 900p) is excellent and only beaten by the Optane P4800X.

Since your vdevs will be all-SSD then you don't necessarily need an L2ARC/cache device; that is intended more for SSDs to speed up spinning drives. NVMe to speed up SATA SSD is technically faster but I would suggest the money would be better spent on increasing your RAM.
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
Do you have any other layout that would be better for my situation - storage for Xen VMs?
Do you think I need to increase to 128GB RAM, or some other amount? Why should I increase it? According to the research I did, 64GB RAM should be enough for 7 terabytes.
So if I understand correctly, I should put 8 disks in mirrors and use sync=always for the best iSCSI VM performance on the 10Gb link, without SLOG/cache enabled, yes?
If I buy another disk as a hot spare, is it safer? Does that solve the problem you described (the mirror partner failing)?

Do you have a different configuration recommendation? With the one we discussed I lose a lot of storage space, which is a pity. If there is another configuration that provides both good performance and good survivability without losing so much space (I lose half with the mirror configuration), I'd love to hear it!
Thank you very much!
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
"Four two-way mirrors" refers to the desired pool configuration; get eight disks in total, configure them in four sets of mirrors. Each "pair" of mirrors could lose one drive, so in the best case scenario you could lose up to four drives. But worst case you can lose only one drive; if its "partner" in the mirror fails then the pool is unavailable.

You will want to set sync=always on your dataset or ZVOL in order to use the SLOG.

The selected NVMe device (Optane 900p) is excellent and only beaten by the Optane P4800X.

Since your vdevs will be all-SSD then you don't necessarily need an L2ARC/cache device; that is intended more for SSDs to speed up spinning drives. NVMe to speed up SATA SSD is technically faster but I would suggest the money would be better spent on increasing your RAM.

Anyone? Can you see what I posted? Thanks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Do you have any other layout that would be better for my situation - storage for Xen VMs?
Do you think I need to increase to 128GB RAM, or some other amount? Why should I increase it? According to the research I did, 64GB RAM should be enough for 7 terabytes.
So if I understand correctly, I should put 8 disks in mirrors and use sync=always for the best iSCSI VM performance on the 10Gb link, without SLOG/cache enabled, yes?
If I buy another disk as a hot spare, is it safer? Does that solve the problem you described (the mirror partner failing)?

Do you have a different configuration recommendation? With the one we discussed I lose a lot of storage space, which is a pity. If there is another configuration that provides both good performance and good survivability without losing so much space (I lose half with the mirror configuration), I'd love to hear it!
Thank you very much!

Mirrors are the best-performing vdev type for block (iSCSI) storage. Other options like RAIDZ are available, but won't perform as well.

In regards to the RAM, you can certainly use the 64GB you've selected, but the more RAM you have the better it will perform. And you will almost certainly get better results from adding more RAM (128GB or even beyond) than you would from adding an L2ARC/cache device.

For best performance, use eight disks in mirrors, use the Optane 900p as an SLOG, and set sync=always. An L2ARC/cache device accelerates reads only, not writes - and your main data disks are already SSDs.
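
As a sketch of the iSCSI backing-store side (the names and the 7T size are placeholders):

# thin-provisioned (sparse) zvol for the iSCSI extent, with sync forced:
zfs create -s -V 7T tank/vmstore
zfs set sync=always tank/vmstore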

RAIDZ2 is an option, and SSDs will help mitigate some of the performance loss, but it's still not as fast as mirrors.
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
Mirrors are the best-performing vdev type for block (iSCSI) storage. Other options like RAIDZ are available, but won't perform as well.

In regards to the RAM, you can certainly use the 64GB you've selected, but the more RAM you have the better it will perform. And you will almost certainly get better results from adding more RAM (128GB or even beyond) than you would from adding an L2ARC/cache device.

For best performance, use eight disks in mirrors, use the Optane 900p as an SLOG, and set sync=always. An L2ARC/cache device accelerates reads only, not writes - and your main data disks are already SSDs.

RAIDZ2 is an option, and SSDs will help mitigate some of the performance loss, but it's still not as fast as mirrors.


Finally, I bought a Corsair NX500 800GB and set it as the SLOG.
But I have another problem, if you can help me:

I have four 10Gb NICs on FreeNAS and two on each host.
I connected each host to FreeNAS with one port.
Only one host can connect to FreeNAS; the others can't.

My addresses on FreeNAS:
nic 1: 172.16.10.10/30 -> goes to Host 1 nic 1 -> 172.16.10.9/30
nic 2: 172.16.10.20/29 -> goes to Host 2 nic 1 -> 172.16.10.19/29
nic 3: 172.16.10.30/30 -> goes to Host 3 nic 1 -> 172.16.10.29/30

Only from the master (Host 1) can I connect to FreeNAS, and the speed is about 100Mbps.
From host 1 I can see the LUN with -> iscsiadm -m discovery --type sendtargets --portal 172.16.10.10
From host 2 I can see the LUN with -> iscsiadm -m discovery --type sendtargets --portal 172.16.10.20
From host 3 I can see the LUN with -> iscsiadm -m discovery --type sendtargets --portal 172.16.10.30

But I get an empty response from multipath -ll on all hosts except host 1, which can connect to FreeNAS.
I also configured iSCSI multipath in XCP-ng and iSCSI MPIO in FreeNAS, but without success.
Does anyone know?

Thanks.
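
For reference, a minimal per-host check sequence looks something like this (the target IQN below is a placeholder - use the one your discovery returns):

# is there an actual logged-in session, or only a discovery record?
iscsiadm -m session
# if not, log in to this host's portal:
iscsiadm -m node --targetname iqn.2005-10.org.freenas.ctl:target0 --portal 172.16.10.20 --login
# does the kernel now see the LUN, and does multipath claim it?
lsblk
multipath -ll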
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Networking, why no switch?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
No, it has unimpressive latency for a consumer NVMe drive and no power loss protection. Even if performance is acceptable, the lack of power loss protection makes it about as safe as disabling sync writes, while still performing worse.
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
No, it has unimpressive latency for a consumer NVMe drive and no power loss protection. Even if performance is acceptable, the lack of power loss protection makes it about as safe as disabling sync writes, while still performing worse.

Oh... I didn't know that.
Do you know why I get a speed of 100Mbps when I have 10Gb SFP+?
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
No, it has unimpressive latency for a consumer NVMe drive and no power loss protection. Even if performance is acceptable, the lack of power loss protection makes it about as safe as disabling sync writes, while still performing worse.

I can't get 10Gb speed. All NICs and the switch support and use MTU 9000.
I get less than 1Gbps. Do you know why?
When I test via iperf I get around 4.5Gbps...
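
One way to confirm jumbo frames really work end-to-end is a don't-fragment ping at full payload size (8972 bytes = 9000 minus IP/ICMP headers; Linux syntax shown, as on XCP-ng hosts):

ping -M do -s 8972 172.16.10.10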
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Iperf measures raw performance of the network components. The lower number reflects what your system can manage in reality, at the moment.

If you need faster performance, consider adding more vdevs to your HDD pools (i.e., a lot more hard drives), a fast SLOG (for faster writes), or even going all-SSD.
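
It's also worth ruling out a single-stream bottleneck; a sketch with iperf3 (iperf2 takes the same -P flag):

# on FreeNAS:
iperf3 -s
# on a host - one stream, then four parallel streams:
iperf3 -c 172.16.10.10
iperf3 -c 172.16.10.10 -P 4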
 

dror

Dabbler
Joined
Feb 18, 2019
Messages
43
Iperf measures raw performance of the network components. The lower number reflects what your system can manage in reality, at the moment.

If you need faster performance, consider adding more vdevs to your HDD pools (i.e., a lot more hard drives), a fast SLOG (for faster writes), or even going all-SSD.

My machine is all-SSD, based on Samsung SM863a 1.92TB disks (x6).
I set a PCIe cache device: Corsair NX500 800GB.
sync=always

I don't have a SLOG device yet.

Do you know why I get ~4Gbps instead of 8-9Gbps (or even 10Gbps)?

Thanks.
 