Add ramdisk as disk option

AlanOne

Dabbler
Joined
Apr 6, 2022
Messages
17
Hello everyone,

Plain and simple, I tried this:


and this:

but still I cannot see the ramdisk in the Storage -> Disks section.

Can someone provide a detailed guide (I have zero experience/knowledge of Unix/Linux) on how to add a ramdisk and see it as a "disk"?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
A few points about this...

What are you hoping to do which requires you to see it in "disks"?

RAM disks won't be ZFS, so they won't show up as a location in any of the GUI/middleware for any of the standard services like SMB shares, etc.

Since you gave no context other than you want a RAM disk, and I assume you now have (or had) one... what's actually the issue?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
RAM disks won't be ZFS,
Well, I guess they could be.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Well, I guess they could be.
All that checksumming (in ECC RAM, which does that also if you're running on proper hardware) just to kill it all at reboot? I guess some people like to do things for fun.
 

AlanOne

Dabbler
Joined
Apr 6, 2022
Messages
17
Hello again! I just want to test the actual speed with iSCSI on 10GbE. I don't mind if the filesystem it uses is ZFS or ext. This is a pure test. Even in a SAN config it's practically a waste to use ramdisks or RAM caching, as it all comes down to sustained reads and writes of HDDs.

RAM disks won't be ZFS, so they won't show up as a location in any of the GUI/middleware for any of the standard services like SMB shares, etc.
Are you sure that there is no way to circumvent this? If nothing else, we are talking about the "mighty" Unix/Linux and not windoze...
(Oh, by the way, when trying to mount a ramdisk on the FreeNAS version, using commands from various places (I can't remember which ones), it didn't make a new "disk" to select, but it increased the ZFS cache. Again, don't mind me if what I am saying is BS, but I am a total noob on Unix/Linux.)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you want to see maximum speed for iSCSI, just set sync=disabled on the Zvol you're using and let ARC take care of it in RAM.
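For example, from a shell (the pool and zvol names here are just placeholders, substitute your own):

zfs set sync=disabled tank/iscsi-test    # let writes complete from RAM instead of waiting on disk
zfs get sync tank/iscsi-test             # confirm the property now reads "disabled"
zfs set sync=standard tank/iscsi-test    # put it back once the test is over

You should also be able to change the same Sync property from the zvol's options in the GUI.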
 

AlanOne

Dabbler
Joined
Apr 6, 2022
Messages
17
Let's say I don't have a spare empty HDD, and I'll use a USB stick as the zvol... to test a 10GbE connection... is it going to work? Still, an option (workaround) to actually mount a ramdisk would be useful.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
is it going to work?
I think in the time you have been looking at this in the forum, you could probably have found yourself a disk if suitably motivated.

Will you be able to create a pool on a USB stick? Technically, yes, that will work (at least for a few minutes... after that, who knows).

If you share a zvol on that pool over iSCSI and mark it as sync=disabled, will you be able to see the potential speed of a 10Gbit connection? Yes, for at least a few seconds until the RAM fills up (but then when the RAM tries to flush out to your USB stick, who knows what will happen to the USB stick in terms of its performance or endurance under that kind of pounding).

In short, your testing plan lacks merit and I don't think it will prove anything of value unless you have an actual pool (of mirrored VDEVs made of real SATA/SCSI disks on a proper controller) behind it.
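If you insist on trying it anyway, the rough shape from a shell would be something like this (the device and names are placeholders; the usual path is to do all of this through the GUI, and a pool created at the CLI has to be imported before the GUI will see it):

zpool create testpool da5                 # throwaway pool on the USB stick (da5 is a placeholder, check the real device name first)
zfs create -s -V 20G testpool/iscsitest   # sparse 20G zvol to use as the iSCSI extent
zfs set sync=disabled testpool/iscsitest  # as above, let the writes land in RAM
zpool destroy testpool                    # clean up when the test is over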
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
actual speed with iSCSI on 10GbE.
If you're interested in testing your pure network line rate, use iperf.
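For example, assuming iperf3 (shipped with TrueNAS CORE) and a made-up address of 192.168.10.5 for the NAS:

iperf3 -s                      # run on the TrueNAS box
iperf3 -c 192.168.10.5 -P 4    # run on the client; -P 4 uses four parallel streams

That takes the disks and the iSCSI stack out of the picture entirely and tells you what the wire itself can do.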

If you want to test specifically your iSCSI configuration (eg: adapter or network limitations, impact of TCP offload, if you've sized your CPU/etc effectively) then you could create a RAMdisk with mdconfig, but you won't learn a terrific amount outside of an academic context as you're far more likely to hit a vdev/disk-based bottleneck first.
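If you do want to go that route, a minimal sketch on TrueNAS CORE / FreeBSD would be something like this (the size and unit number are arbitrary examples):

mdconfig -a -t swap -s 8g -u 1    # create an 8 GiB swap-backed memory disk at /dev/md1
newfs /dev/md1                    # optionally put a UFS filesystem on it
mkdir -p /mnt/ramdisk
mount /dev/md1 /mnt/ramdisk
umount /mnt/ramdisk               # tear-down when finished
mdconfig -d -u 1

In principle you could use /dev/md1 as a raw device behind your iSCSI test (the GUI extent picker may not list it, so that part might need to be done by hand), but as noted above it will never show up under Storage -> Disks, and it vanishes at reboot.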

Even in a SAN config it's practically a waste to use ramdisks or RAM caching, as it all comes down to sustained reads and writes of HDDs.
RAM caching on reads definitely matters as it's the whole purpose behind the ZFS ARC (Adaptive Replacement Cache).

The question here is "what's your desired end goal of performance and use case?" and then we can help guide you towards a config that gives you the best chance to achieve it.
 

AlanOne

Dabbler
Joined
Apr 6, 2022
Messages
17
will you be able to see the potential speed of a 10Gbit connection? Yes, for at least a few seconds until the RAM fills up
That's why I asked the question; it was rhetorical, I thought exactly the same.

In short, your testing plan lacks merit and I don't think it will prove anything of value unless you have an actual pool (of mirrored VDEVs made of real SATA/SCSI disks on a proper controller) behind it.
Wrong. Before putting actual disks in and investing in iSCSI with ZFS and this particular configuration (TrueNAS), I want to know the max potential.

If you're interesting in testing your pure network line rate, use iperf.
I have already tested it; I am getting ~9.8G. But iSCSI from Windows to Windows is only ~600MB.

If you want to test specifically your iSCSI configuration (eg: adapter or network limitations, impact of TCP offload, if you've sized your CPU/etc effectively) then you could create a RAMdisk
THANK YOU! This is exactly why I posted (also answering your last question)!!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
also answering your last question too
Well, not really. You haven't pointed out what you are actually going to put on the iSCSI volumes once they're configured.

A device being used to spool off a stream of sequential I/O from a single machine needs a far different backing configuration than one expected to hold a few dozen VMs coming from a handful of hypervisors.

The theoretical limits of the ctl iSCSI back-end are influenced mostly by single-thread performance. Choose CPUs that have a lower thread count and higher clocks (eg: Xeon E5-1650 v4 or E5-2637 v4) to make sure the ceiling is as high as possible there.

But again, you're significantly more likely to hit a bottleneck at your storage level (individual vdev speed, SLOG ingest rate for sync writes, simultaneous R/W on NAND) first.
 
Last edited:

AlanOne

Dabbler
Joined
Apr 6, 2022
Messages
17
I am planning to use 6x WD Gold 8TB, combined in RAID0, more than enough to saturate the 10Gb connection. The system is going to be (for a start) a G4560, but I have spare i5s and i7s to play with, so neither the system nor the disks are going to be a bottleneck. The SAN is going to be used mostly for video editing.
 