
ESXi Drive Types - Part II

Status
Not open for further replies.

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
Also Windows Cluster Shared Volumes for Hyper-V.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
12,824
As I can't post in the relevant thread, I'm opening a new one. @jgreco, I do have iSCSI targets to Windows servers. I use them with Windows backup. I even use one to share space to a virtual machine. It might not be best practice, but I use it.
Yeah, sorry, closed that since I didn't want to risk another thread getting deleted. And we got a good answer that's now at the end of the thread, so I'm inclined to keep it closed.

We've seen a whole bunch of people trying to create NTFS on top of iSCSI and then maybe sharing it to multiple computers, which is of course ... "bad." It's interesting to see what the non-bad use models might be.

@jgreco MS SQL Server. Typical iSCSI use. You will not use CIFS there.
I guess that makes kinda sense. It could perform much better than a hardware RAID, given sufficient resources...

From my perspective, I've usually viewed iSCSI as "poor man's SAN", which usually implies multiple initiators and a cluster-aware filesystem such as VMFS sitting on it. Relatively lower bandwidth to the array but possibly better IOPS. The whole idea of mounting it for use with NTFS instead of a local RAID is a little foreign, but, it's all about the workload ....

Also Windows Cluster Shared Volumes for Hyper-V.
Interesting. What filesystem are they using for that, anyways?
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
2,558
Interesting. What filesystem are they using for [CSV], anyways?
NTFS or ReFS, the latter being in 2012R2 only. Locking is handled at the cluster level via SMB negotiation between Hyper-V hosts.

Hyper-V should never auto-defragment a CSV because that requires either exclusive mode (CSV offline) or redirected access mode (performance goes to hell) so it's a non-issue there.

For Windows iSCSI access in non-shared mode, then yes, you absolutely don't want to let it think it should defragment the disk.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
12,824
So, as I understand it, showing as SSD is a good thing, right?...
Well, apparently it might be if your initiator is Windows and you're running NTFS on it. I'd have thought that to be a strange edge case, but apparently it is a little more common than I would have expected.

But showing it as SSD is also bad for other cases, as covered elsewhere in this thread. The root problem is that iSCSI doesn't really allow a bitfield of flags like "the underlying datastore is subject to fragmentation so don't do things like defrag" or "the iSCSI disk supports UNMAP" or "the underlying storage is hybrid". Because if you could indicate device capabilities, then you wouldn't need to overload the meaning of a tag like "SSD".
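Just to sketch the idea: if the spec did expose a capability bitfield, an initiator could test individual capabilities instead of inferring everything from a single "is it an SSD?" bit. A hypothetical Python model (none of these flag names exist in any actual SCSI or iSCSI specification):

```python
from enum import Flag, auto

class LunCapability(Flag):
    """Hypothetical capability bitfield -- these flags exist in no spec."""
    NONE = 0
    FRAGMENTS = auto()    # backing store fragments; initiator shouldn't defrag
    UNMAP = auto()        # device accepts UNMAP/TRIM
    HYBRID = auto()       # mixed SSD/HDD backing storage
    NONROTATING = auto()  # pure solid-state

# A ZFS zvol exported over iSCSI: subject to fragmentation, supports UNMAP,
# but not necessarily solid-state.
zvol = LunCapability.FRAGMENTS | LunCapability.UNMAP

# The initiator checks the capabilities it actually cares about:
if LunCapability.FRAGMENTS in zvol:
    print("skip scheduled defragmentation")
if LunCapability.UNMAP in zvol:
    print("issue UNMAP when blocks are freed")
```

With something like this, "SSD" would go back to describing the medium rather than doubling as a bundle of unrelated behavioral hints.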
 

Tywin

Member
Joined
Sep 19, 2014
Messages
163
But showing it as SSD is also bad for other cases, as covered elsewhere in this thread. The root problem is that iSCSI doesn't really allow a bitfield of flags like "the underlying datastore is subject to fragmentation so don't do things like defrag" or "the iSCSI disk supports UNMAP" or "the underlying storage is hybrid". Because if you could indicate device capabilities, then you wouldn't need to overload the meaning of a tag like "SSD".
Ding ding ding. Ideally one shouldn't make decisions based on what the underlying thing is, but rather on what it can('t) do (see: duck typing).
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,078
The root problem is that iSCSI doesn't really allow a bitfield of flags like "the underlying datastore is subject to fragmentation so don't do things like defrag" or "the iSCSI disk supports UNMAP" or "the underlying storage is hybrid".
That is not true with regard to UNMAP: SCSI provides enough information about UNMAP capabilities separately from SSD status, and the specifications define no official dependency between UNMAP and SSD status. The rest is indeed true -- SSD is the only flag controlling request sorting, defragmentation, and probably other not very related things.
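Concretely, the two attributes live in different VPD pages: the medium rotation rate is in Block Device Characteristics (page 0xB1), while the LBPU bit in Logical Block Provisioning (page 0xB2) advertises UNMAP support. A rough Python sketch of decoding both, using field offsets as I understand them from SBC-3 (the buffers here are fabricated for illustration, not captured from a real device):

```python
def rotation_rate(vpd_b1: bytes) -> int:
    """MEDIUM ROTATION RATE from VPD page 0xB1 (Block Device
    Characteristics), bytes 4-5 big-endian:
    0 = not reported, 1 = non-rotating (SSD), else nominal RPM."""
    assert vpd_b1[1] == 0xB1, "not a Block Device Characteristics page"
    return int.from_bytes(vpd_b1[4:6], "big")

def supports_unmap(vpd_b2: bytes) -> bool:
    """LBPU bit (byte 5, bit 7) from VPD page 0xB2 (Logical Block
    Provisioning): set if the device server supports UNMAP."""
    assert vpd_b2[1] == 0xB2, "not a Logical Block Provisioning page"
    return bool(vpd_b2[5] & 0x80)

# Fabricated buffers: a LUN that reports a 7200 RPM medium yet still
# advertises UNMAP -- demonstrating the two attributes are independent.
b1 = bytes([0x00, 0xB1, 0x00, 0x3C]) + (7200).to_bytes(2, "big") + bytes(58)
b2 = bytes([0x00, 0xB2, 0x00, 0x04, 0x00, 0x80, 0x00, 0x00])

print(rotation_rate(b1))   # 7200 -> reported as spinning rust
print(supports_unmap(b2))  # True -> UNMAP supported anyway
```

So an initiator that keys TRIM/UNMAP behavior off the "SSD" status alone is ignoring information the target already gives it.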
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
12,824
Well, yes, that's why I said a bitfield of flags like those -- I wanted people to understand what I was talking about. Feature flags for iSCSI, haha.

Of course, what often happens in the RFC process is that you get distracted with the needs of today while totally failing to predict what's going to happen tomorrow. Suddenly we have hard drives that support shingled recording, so now we'd kinda need a new flag for 2015-era drives: "drags butt during writing." In 2004, when they were grinding out RFC 3720, the storage world was very different. It's been a hell of a decade and we have some great stuff available now. Just thinking back, 4GB was a hell of a lot of RAM then... now I consider 256GB to be a lot of RAM. ;-)
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
2,558
Suddenly now we have hard drives that support shingled recording, so now we'd kinda need a new flag for 2015 era drives, "drags butt during writing."
I can't wait until I have to explain to people why those make for a terrible choice for your zpool.

"Well, if you never write to it, ever, it will do fine ..."
 

Tywin

Member
Joined
Sep 19, 2014
Messages
163
I can't wait until I have to explain to people why those make for a terrible choice for your zpool.

"Well, if you never write to it, ever, it will do fine ..."
Their performance doesn't have to be so terrible; we're just hamstrung by the interfaces. All of a sudden, assumptions we made about hard drives 20 years ago are no longer valid. This happened with SSDs, and it took a couple of years for TRIM support to really propagate through all the OSes, file systems, and controllers. I don't know what the final solution will look like, but it seems like this is going to be the future of spinning rust media; there's a lot of money in that, so the workarounds will come.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,078
Compared to that, SSDs with UNMAP are child's play. SMR breaks the basic concept of the disk as a "direct access" device. Very few existing file systems (and none of the HDD-oriented ones) survive that. I hope that HAMR or some other technology arrives before we have to handle that madness.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
2,558
There was an interesting presentation at USENIX FAST '15 about investigations into SMR drives:

https://www.usenix.org/conference/fast15/technical-sessions/presentation/aghayev

Based on that, it looks like they'll be perfectly fine for situations like a desktop or archival storage that doesn't need sustained writes, where they can band-aid their way over the inherent shingled rewrites with a non-volatile write cache (either a reserved section of disk or NAND). But for a server environment that doesn't have idle time to allow for garbage collection, I can't see them being able to keep up.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
12,824
I can't wait until I have to explain to people why those make for a terrible choice for your zpool.
Well, they might or they might not... our largest pool here is archival in nature. That doesn't actually mean no rewrites, but typically something written will be left alone for years at a time, and lower write performance would be an acceptable price to commit data to the pool. Since the pool doesn't need a large number of disks (FTP server, ISO data storage, etc.), I would rather have somewhat fewer disks and lower power consumption.

Based on that, it looks like they'll be perfectly fine for a situation like a desktop or archival storage that doesn't need to have sustained writes, and can bandaid their way over the inherent shingled-writes with a non-volatile write cache (either a reserved section of disk or NAND) but for a server environment that doesn't have idle time to allow for garbage collection I can't see it being able to keep up.
It seems like it could potentially be a very useful tier of storage, but, yes, bad for many types of workloads. A write cache would only be helpful for bursty write traffic, and then you basically have this big unknown hazard hanging over your head if there's too much write traffic. The ideal workload would be one where commit speed isn't a serious concern. The Seagate Archive 8TBs are reported to write at about 1/3 to 1/4 the speed of a conventional HDD, so I'm not even convinced it's a big deal.
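The burst-versus-sustained distinction is easy to see with a toy model (this has nothing to do with any real drive's firmware; every number is made up for illustration): writes land in a persistent cache, and the drive destages them through the slow shingled-rewrite path at a fixed rate.

```python
def simulate(cache_gib: float, drain_gib_s: float, writes):
    """Toy model of an SMR drive's persistent write cache.

    Incoming writes land in the cache; the drive destages at
    drain_gib_s per second (the slow shingled rewrite path).
    Returns (peak cache occupancy in GiB, total seconds the host
    spent stalled waiting for cache space).
    """
    level = stalled = peak = 0.0
    for incoming in writes:          # GiB arriving this second
        level += incoming
        if level > cache_gib:        # cache full: host must wait
            stalled += (level - cache_gib) / drain_gib_s
            level = cache_gib
        level = max(0.0, level - drain_gib_s)
        peak = max(peak, level)
    return peak, stalled

# Bursty traffic: a 2 GiB burst every 60 s, idle otherwise.
bursty = [2.0 if t % 60 == 0 else 0.0 for t in range(600)]
# Sustained traffic: 0.15 GiB/s continuously, exceeding the drain rate.
sustained = [0.15] * 600

print(simulate(20.0, 0.05, bursty))     # cache absorbs bursts, no stalls
print(simulate(20.0, 0.05, sustained))  # cache fills up, host stalls
```

Bursty traffic never comes close to filling the cache because it drains back to empty between bursts; sustained traffic above the drain rate fills the cache within minutes and then stalls indefinitely -- which is exactly the "big unknown hazard" problem.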
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
2,558
Oh they'll be great for certain use cases. What I'm talking about is when they're just thrown at every situation because someone goes "omg 8TB for $cheap!" and expects them to perform like conventional HDDs. Then they wonder why their pool stalls, performs terribly, or just straight-up offlines a drive because it thinks it's failed due to high latency.

Once the support is there from the software layer to natively understand the "proper" way to do shingled recording, I can see them being phenomenal for disk-to-disk backups. That workload is almost exclusively sequential access in both reads and writes.

To summarize: Don't think of them as "slow hard drives" - think of them as "fast tape drives."
 

mjws00

Neophyte Sage
Joined
Jul 25, 2014
Messages
798
I hope we see some large pools built with these soon. Even if they suck compared to regular HDDs, take a dozen of them mirrored, or two six-disk RAIDZ2 vdevs, and they should still CRUSH 1GbE and likely put a dent in the throughput Samba can manage on 10GbE. The biggest challenge I see is that it's almost trivial to hit ~100TB; even the ~48TB available on a cheap board is on the edge of what you can manage without a box bigger than an E3.

Good problems to have I suppose.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,473
Maybe the manufacturers should resurrect the old 5.25" FH standard. I remember the 5MB & 10MB drives. Back in ~1990 I had a 70MB ESDI drive.


Sent from my phone
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
12,824
Maybe the manufacturers should resurrect the old 5.25" FH standard. I remember the 5MB & 10MB drives. Back in ~1990 I had a 70MB ESDI drive.
I fondly remember interfacing ESDI drives to Sun workstations (SCSI) via an Emulex interface translator. Two drives as a single SCSI target (c0t0d1s0!)

I still have an 8" SMD on display in my office. That'd be interesting adapted to current tech... cram in 50 platters per disk, about 3x the space per platter, so that'd be what, maybe a 200TB HDD? :smile:
 