Help troubleshooting iSCSI bursty write performance.

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
Hello all,

I've been a long-time novice user of FreeNAS, and I need some help running an issue I'm seeing to ground.

Setup:
FreeNAS 11.2U7
i7-3770K
32GB of RAM at DDR3-800 non-ECC
6x2TB 7200 RPM consumer grade HDDs
all drives connected through motherboard SATA ports, no hardware RAID or anything
Pool of 3 RAID1 VDEVs
HP 4x1Gb ethernet card
2TB thin provisioned zvol for iSCSI
Home network use; I'm hosting my Steam games on here.
The iSCSI share is accessed by one computer, my gaming desktop (Win10); everything else only uses SMB (and really nothing else is pulling data on the regular).

Issue:
I'm seeing weird, bursty write performance out of my iSCSI share. I get writes at full 1Gb speed, then it just flatlines to 0 for a while and picks up again after a minute or so. This cycle repeats for the whole length of whatever writes I'm doing. SMB writes do not seem to have this issue. Reads for iSCSI and SMB do not have any issues.

Troubleshooting I've tried:
I've upgraded the hardware. The box used to be based on an old LGA 775 dual core server board with 8GB of RAM. I upgraded it to an old i7-920 with 12GB of RAM first, got the same behavior. Then upgraded to the board and CPU above and still got the same behavior. I also switched from onboard LAN to an old server grade NIC based on Intel hardware, still got the same behavior.

I've reached the limit of my troubleshooting expertise, so I need some help. I searched the forums and couldn't seem to find anyone else reporting this kind of bursty behavior. If anyone can give me some direction on what to do next or other data points I need to gather, I'd greatly appreciate the help.

Extra:
Why don't I just run SMB if that works fine? Lots of games and game launchers just refuse to work on network shares for reasons that I don't understand. An iSCSI-mapped drive doesn't have that problem.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
So I'm thinking that maybe one of my RAID1 disk pairs is not completely pulling its weight. What's the best way to go about investigating that?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
So I'm thinking that maybe one of my RAID1 disk pairs is not completely pulling its weight. What's the best way to go about investigating that?
Checking from a shell on the FreeNAS machine with zpool iostat may help, as well as looking at the results of SMART tests - but in this case I think what you're seeing is simply the normal behavior of sustained async writes to a pool that can't keep up the same speed on the back-end.
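In case it helps, here's a minimal sketch of those checks from the FreeNAS shell. The pool name Pool1 and the ada0 device name are assumptions based on your description, so substitute your own pool and repeat the SMART commands for each disk:

    # per-vdev / per-disk view of the pool, sampled every 2 seconds
    zpool iostat -v Pool1 2

    # kick off a long SMART self-test, then review the attributes and self-test log once it finishes
    smartctl -t long /dev/ada0
    smartctl -a /dev/ada0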
Question rapid-fire time:

Do you see this same behavior from heavy writes to SMB shares?
How full is your pool?
What's the network configuration of the 4x1Gb card?
Are iSCSI and the SMB traffic on separate interfaces/subnets?
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
Checking from a shell on the FreeNAS machine with zpool iostat may help, as well as looking at the results of SMART tests - but in this case I think what you're seeing is simply the normal behavior of sustained async writes to a pool that can't keep up the same speed on the back-end.
I'll check that out when I get back home. If the pool just can't keep up, is my only option more vdevs?
Do you see this same behavior from heavy writes to SMB shares?
How full is your pool?
What's the network configuration of the 4x1Gb card?
Are iSCSI and the SMB traffic on separate interfaces/subnets?
No, heavy writes to the SMB share seem to work just fine. *confused*
The pool is currently at 75%
Just using 1 of the 1Gb ports with an IP on my private network; the 4-port card was just what I had lying around. It's just a single IP for the whole box. iSCSI, SMB, and the web interface all use the same IP. The home network is just 1 big /24, nothing crazy going on there. MTU is the default, NOT set to 9000.

Thanks in advance for the help.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The ZVOL might be heavily fragmented, depending on how long it's been active and how much has been written to/deleted from it (and whether or not space reclamation is working w.r.t. the deletes).

About how much transfer happens before and between the stalls?

Edit: is there any antivirus in play here? iSCSI will probably be considered a "local drive" by those, so it might be hanging up there if you have real-time scanning enabled.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
The zpool iostat results

For iSCSI

root@freenas:~ # zpool iostat Pool1 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Pool1       3.55T  1.89T    122     64  4.22M  1.11M
Pool1       3.55T  1.89T      0    205      0  1.11M
Pool1       3.55T  1.89T      5      0  22.6K  7.53K
Pool1       3.55T  1.89T      1  2.08K  7.98K  31.4M
Pool1       3.55T  1.89T     11  4.93K   128K  98.3M
Pool1       3.55T  1.89T     14    120   194K  15.1M
Pool1       3.55T  1.89T      0     31  16.0K  4.00M
Pool1       3.55T  1.89T      0     36      0  4.52M
Pool1       3.55T  1.89T      0     21      0  2.70M
Pool1       3.55T  1.89T      0  8.26K  7.99K   130M
Pool1       3.55T  1.89T     21  5.85K   647K   129M
Pool1       3.55T  1.89T      0    806  2.00K  73.6M
Pool1       3.55T  1.89T      1    839  15.9K  70.3M
Pool1       3.55T  1.89T      0    807      0  75.5M
Pool1       3.55T  1.89T      0  3.78K      0  95.8M
Pool1       3.55T  1.89T      5  3.05K   102K  69.2M
Pool1       3.55T  1.89T     11    469   142K  33.0M
Pool1       3.55T  1.89T      0  4.98K      0  84.0M
Pool1       3.55T  1.89T      0     54      0  3.41M
Pool1       3.55T  1.89T      0     23      0  2.99M
Pool1       3.55T  1.89T      0     31      0  3.94M
Pool1       3.55T  1.89T      0     38      0  3.81M
Pool1       3.55T  1.89T      0     26      0  3.32M
Pool1       3.55T  1.89T      0     36      0  4.54M
Pool1       3.55T  1.89T      0     39      0  3.09M
Pool1       3.55T  1.89T      0     30      0  3.76M
Pool1       3.55T  1.89T      0     44      0  4.15M
Pool1       3.55T  1.89T      0     26      0  2.89M
Pool1       3.55T  1.89T      0     39      0  3.85M
Pool1       3.55T  1.89T     28     27   780K  3.46M
Pool1       3.55T  1.89T      0     30      0  3.81M
Pool1       3.55T  1.89T      0     34      0  4.29M
Pool1       3.55T  1.89T      0     28      0  3.35M
Pool1       3.55T  1.89T      1     39  35.8K  4.84M
Pool1       3.55T  1.89T     17     71   824K  3.95M
Pool1       3.55T  1.89T      0     27      0  3.38M
Pool1       3.55T  1.89T      0     27      0  3.39M
Pool1       3.55T  1.89T      0     28      0  3.50M
Pool1       3.55T  1.89T      1     30  20.7K  3.76M
Pool1       3.55T  1.89T      0     31      0  3.35M
Pool1       3.55T  1.89T      0     27      0  3.49M
Pool1       3.55T  1.89T      0     35      0  3.90M
Pool1       3.55T  1.89T      0     24      0  2.98M
Pool1       3.55T  1.89T      0     30      0  3.76M
Pool1       3.55T  1.89T      0     33      0  3.91M
Pool1       3.55T  1.89T      0     30      0  3.38M
Pool1       3.55T  1.89T      0     28      0  3.49M
Pool1       3.55T  1.89T      0     27      0  3.50M
Pool1       3.55T  1.89T      0     31      0  3.88M
Pool1       3.55T  1.89T      0     71      0  4.06M
Pool1       3.55T  1.89T      0     30      0  3.77M
Pool1       3.55T  1.89T      0     27      0  3.50M
Pool1       3.55T  1.89T      0     30      0  3.71M
Pool1       3.55T  1.89T      0    638  5.99K  79.2M
Pool1       3.55T  1.89T      0    529      0  65.8M
Pool1       3.55T  1.89T      0    209      0  25.8M
Pool1       3.55T  1.89T      0     26      0  3.01M
Pool1       3.55T  1.89T      0     30      0  3.76M
Pool1       3.55T  1.89T      0     27      0  3.30M
Pool1       3.55T  1.89T      0     34      0  4.24M
Pool1       3.55T  1.89T      0     23      0  2.83M
Pool1       3.55T  1.89T      0     31      0  3.90M
Pool1       3.55T  1.89T      0     27      0  3.30M
Pool1       3.55T  1.89T      0     51      0  3.94M
Pool1       3.55T  1.89T      0     30      0  3.76M
Pool1       3.55T  1.89T      0     27      0  3.41M
Pool1       3.55T  1.89T      0     28      0  3.44M
Pool1       3.55T  1.89T      0     32      0  3.91M
Pool1       3.55T  1.89T      0     27      0  3.42M
Pool1       3.55T  1.89T      0     27      0  3.44M
Pool1       3.55T  1.89T      0     28      0  3.45M
Pool1       3.55T  1.89T      0     38      0  4.76M
Pool1       3.55T  1.89T      0     27      0  3.32M
Pool1       3.55T  1.89T      0     27      0  3.48M
Pool1       3.55T  1.89T      1    140  9.99K  16.8M
Pool1       3.55T  1.89T      0    879  2.00K  45.2M
Pool1       3.55T  1.89T      0  6.98K      0   138M
Pool1       3.55T  1.89T      0  10.8K      0   177M
Pool1       3.55T  1.89T      0  2.69K      0  46.7M
Pool1       3.55T  1.89T      0     41      0  4.75M
Pool1       3.55T  1.89T      0     29      0  3.52M
Pool1       3.55T  1.89T      0     61      0  4.30M
Pool1       3.55T  1.89T      0     27      0  3.42M
Pool1       3.55T  1.89T      0     48      0  4.21M
Pool1       3.55T  1.89T      0     33      0  3.91M
Pool1       3.55T  1.89T      0     38      0  3.79M
Pool1       3.55T  1.89T      0     34      0  3.84M
Pool1       3.55T  1.89T      0     27      0  3.50M
Pool1       3.55T  1.89T      0     60      0  4.23M
Pool1       3.55T  1.89T      0     48      0  4.69M
Pool1       3.55T  1.89T      0     26      0  3.29M
Pool1       3.55T  1.89T      0     31      0  3.88M
Pool1       3.55T  1.89T      1  2.38K  23.9K  54.6M
Pool1       3.55T  1.89T      1  4.14K  7.99K  91.9M
Pool1       3.55T  1.89T      0    445  7.68K  46.7M
Pool1       3.55T  1.89T      0    113      0  4.68M
Pool1       3.55T  1.89T      0     31      0  3.92M
Pool1       3.55T  1.89T      0     48      0  3.82M
Pool1       3.55T  1.89T      0    114      0  4.74M
Pool1       3.55T  1.89T      0    114      0  5.31M
Pool1       3.55T  1.89T      0     27      0  3.45M
Pool1       3.55T  1.89T      0    120      0  5.31M
Pool1       3.55T  1.89T      0    734      0  13.6M
Pool1       3.55T  1.89T      0  14.5K  2.00K   231M
Pool1       3.55T  1.89T      0  13.6K      0   216M
Pool1       3.55T  1.89T      0    100      0  12.6M
Pool1       3.55T  1.89T      0  1.16K      0  17.9M
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0     77  3.83K  9.70M
Pool1       3.55T  1.89T      0  1.01K      0  17.0M
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0    243      0  1.77M
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0     19      0  2.45M
Pool1       3.55T  1.89T      0    413      0  5.15M
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0    165      0   828K
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0    162      0   809K
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0      0      0  3.89K
Pool1       3.55T  1.89T      0    173      0   819K
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0      0      0      0
Pool1       3.55T  1.89T      0    195      0  1.53M
Pool1       3.55T  1.89T      0      0      0      0
^C

The same files written to an SMB share instead.

root@freenas:~ # zpool iostat Pool1 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Pool1       3.56T  1.88T    122     64  4.22M  1.10M
Pool1       3.56T  1.88T      0  1.32K      0   110M
Pool1       3.56T  1.88T      0  1.32K      0   109M
Pool1       3.56T  1.88T      0  1.25K      0  99.1M
Pool1       3.56T  1.88T      0  1.28K      0   103M
Pool1       3.56T  1.88T      0    882  26.0K  68.3M
Pool1       3.56T  1.87T      0  1.27K      0  96.8M
Pool1       3.56T  1.87T      0  1.20K      0  94.8M
Pool1       3.56T  1.87T      0  1.43K      0   113M
Pool1       3.56T  1.87T      0  1.48K      0   130M
Pool1       3.56T  1.87T      1  1.21K  80.0K   109M
Pool1       3.56T  1.87T      0  1.19K      0  73.8M
Pool1       3.56T  1.87T      0    834      0  64.5M
Pool1       3.56T  1.87T      0  1.38K      0   117M
Pool1       3.56T  1.87T      0  1.08K      0  76.6M
Pool1       3.56T  1.87T      0  1.24K      0  97.1M
Pool1       3.56T  1.87T      0  1.30K      0   108M
Pool1       3.56T  1.87T      0  1.06K      0  85.5M
Pool1       3.56T  1.87T      0  1.52K      0   135M
Pool1       3.56T  1.87T      0  1.08K      0  76.0M
Pool1       3.56T  1.87T      0  1.14K      0  86.7M
Pool1       3.57T  1.87T      0  1.19K      0  94.9M
Pool1       3.57T  1.87T      0  1.22K      0  96.8M
Pool1       3.57T  1.87T      0  1.22K      0  96.4M
Pool1       3.57T  1.87T      0  1.29K      0   101M
Pool1       3.57T  1.87T     16  1.21K   783K   109M
Pool1       3.57T  1.87T      0  1.51K      0   125M
Pool1       3.57T  1.87T      0  1.37K      0   103M
^C
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The pool is currently at 75%

The pool is way too full; if it is already fragmented, this will be a big problem for you.

Pool of 3 RAID1 VDEVs

The pool design is incorrect; you should use mirrors for iSCSI.


The behaviour you're seeing suggests difficulty in finding sufficient pool free space, which points to fragmentation and pool occupancy as components of the problem. You may be able to get more consistent write speeds by reducing the transaction group timeout to one second. To be clear, this does not mean "go faster" -- it means "be more consistent", as in almost certainly slower but much more consistent about it. Try clearing off most of your pool (reduce occupancy to maybe 20%) and see if it's better. This would provide some good clues.
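If you want to experiment with that, here's a rough sketch using the stock FreeBSD ZFS tunable (the sysctl name is standard, but treat the value as something to test rather than a recommendation; on FreeNAS you'd normally persist it as a sysctl tunable in the web UI rather than only setting it live):

    # check the current transaction group timeout (defaults to 5 seconds)
    sysctl vfs.zfs.txg.timeout

    # shorten it to 1 second for testing; set it back to 5 to revert
    sysctl vfs.zfs.txg.timeout=1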
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
Also, on the question about SMART data: nothing is jumping out at me. All reallocated sector counts are zero. Anything else I should highlight there?

The ZVOL might be heavily fragmented, depending on how long it's been active, how much has been written/deleted from it (and whether or not space reclamation is working w.r.t the deletes)

About how much transfer happens before and between the stalls?

Edit: is there any antivirus in play here? iSCSI will probably be considered a "local drive" to those so it might be hanging up there if you have real time scanning enabled.
On the question of fragmentation: I've had all my games installed on here, and they get periodic updates.

It seems to be a couple hundred MB between stalls.

There's Windows Defender, which does whatever it wants, not what I want.

The pool is way too full; if it is already fragmented, this will be a big problem for you.

The pool design is incorrect; you should use mirrors for ISCSI.


The behaviour you're seeing suggests difficulty in finding sufficient pool free space which suggests fragmentation and pool occupancy are components to the problem. You may be able to get more consistent write speeds by reducing the transaction group size to one second. To be clear, this does not mean "go faster" -- it means "be more consistent", as in almost certainly slower but much more consistent about it. Try clearing off most of your pool (reduce occupancy to maybe 20%) and see if it's better. This would provide some good clues.
OK, I'll pull data off onto my PC and see how it goes. Will report back in a few days.

It is a pool of mirrors. RAID1 = mirror I thought? Sorry, guess I messed up the lingo and I think you're thinking it's a RAIDZ1. It's a pool of mirrors as is recommended for this application.

It seems like 1 vdev is running behind the others. What I don't understand is why this is only coming up during iSCSI writes. There aren't a whole bunch of VMs writing data, it's just my one PC. Is there any way to defragment the pool once that fragmentation has set in?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It is a pool of mirrors. RAID1 = mirror I thought? Sorry, guess I messed up the lingo and I think you're thinking it's a RAIDZ1. It's a pool of mirrors as is recommended for this application.

Sorry, you're correct. Sometimes I'm just too tired to be writing this stuff. It's definitely preferred that you use the correct lingo, and it's unfortunate that the ZFS designers introduced overlap.


The worst bit is that "RAID1" and "RAIDZ1" are horribly different things. So you got me there. ;-)

It seems like 1 vdev is running behind the others. What I don't understand is why this is only coming up during iSCSI writes.

Did you add one vdev later? ZFS will tend to favor that one vdev because it is so very empty. The ZFS allocation strategy is to look for massive regions of lots of free space. So if you have two 80%-full vdevs and add a new one in order to bring pool occupancy down to around 50%, nearly 100% of writes will end up hitting that new one until the space allocation on each one is similar. The first-order effect ends there, but the second-order effect can be that the fragmentation on the added one is very low while the fragmentation on the old ones remains relatively high, such as if you did lots of file additions and deletions. Over time, assuming you're doing lots of write stuff on the pool, the fragmentation issues will sort themselves out and you will get to a "steady state."
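A quick way to see whether one vdev is sitting much emptier (or more fragmented) than the others is the per-vdev listing; the pool name Pool1 is assumed from earlier in the thread:

    # per-vdev SIZE/ALLOC/FREE plus the FRAG and CAP columns
    zpool list -v Pool1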

There aren't a whole bunch of VMs writing data, it's just my one PC. Is there any way to defragment the pool once that fragmentation has set in?

ZFS lacks tools to do defragmentation. On a conventional filesystem like DOS FAT, it isn't hard to move file blocks and then update the metadata for the file to point to the new blocks. The filesystems are relatively small and there aren't complications. Unfortunately with ZFS, you have all sorts of complications, such as snapshots, which mean there's potentially a huge amount of metadata involved with a given file, and both the file data AND the metadata can be included in or modified by snapshots. Further, because ZFS is a copy-on-write filesystem, you aren't really allowed to overwrite a metadata block; you have to allocate a new one, so if you are "updating" where a block is located, there's a bunch of issues. Further, ZFS scales way past petabytes, so the sheer amount of data you'd need to keep track of can swamp a system.

The historical method to defragment ZFS is to get rid of all snapshots (simplifying the metadata issue), and then move the data. The best way is to move the data off the pool and then back on, but since this is often impractical, many people will find some way to increase the size of their pool, such as adding a vdev or two, and then by copying the data from the pool to the pool, that will cause defragmented writes to happen into the new vdevs.
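As a rough sketch of the copy-it-back-onto-the-pool approach (the dataset/zvol names below are hypothetical, and you want current backups and the iSCSI target offline before trying anything like this):

    # see what snapshots exist, and clear out the old ones first
    zfs list -t snapshot -r Pool1

    # replicate the zvol to a new name; the copy gets written out fresh, and thus less fragmented
    zfs snapshot Pool1/iscsi-zvol@migrate
    zfs send Pool1/iscsi-zvol@migrate | zfs recv Pool1/iscsi-zvol-new

    # after repointing the iSCSI extent at the new zvol and verifying it, destroy the old one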

And I'm not being critical here when I say this, many people take the long way around to learn it, and the information available on the 'net had historically been kinda tragic, so it was hard to learn, so I usually try to step into these threads to lay it all out there... if you run up against a fragmentation issue (common to iSCSI or database uses), the best solution is to give ZFS gobs of free space. ZFS uses compsci tricks to exchange one thing for another. If you give ZFS tons of free space, like maybe ~90% free, it will make all writes feel like SSD all the time. Random, sequential, it doesn't matter, because ZFS will try to lay it down in the pool as a sequential write of the transaction group (txg). So if the txg being written contains random file data blocks, of course those random data blocks are fragmented and likely to cause seeks when read back. ZFS uses ARC and L2ARC to reduce this. So if you use lots of ARC and maybe an SSD for L2ARC and set things up appropriately, you get SSD-like read speeds too. You spend a lot of resources to get there, but ZFS can give near-SSD speeds on all your activities if you resource it generously.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
To your second point, I didn't add a vdev, but I upgraded one. When I first created the pool I had 4x 2TB 7200rpm drives and 2x 1.5TB 5900rpm drives. A couple of months ago I upgraded the 1.5TB drives to new 2TB 7200rpm drives, the cheapest ($50) at Microcenter, as an experiment. That's probably the cause of a lot of my trouble. There's disproportionately more free space on that vdev, but the drives are still mostly full. That causes the thrashing on that one vdev while it tries to find what little free space it has.

So the good news about all this is that this is my home system. We only store a couple GBs of things we really don't want to lose, the rest is video files and games. In the grand scheme of FreeNAS deployments, there's nothing super critical going on, I'm just messing around.

I'm going to slash and burn the current pool and test out iSCSI writes again. Once the pool is empty and is allowed to fill up evenly, I think the problem will go away. If the theory is proven, then I'll get more space and vdevs. I just got some LSI SAS3081 cards for cheap on eBay; that'll let me fill out my 12-bay chassis. Maybe add a couple of SSDs for SLOG duty.

The easy solution here would be to just buy a 2TB SSD for my games locally, but where's the fun in that. :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should really avoid the 3081's. They will not support drives larger than 2.2TB, and the driver support is a lot more questionable than the 6Gbps or 12Gbps stuff, because no one really uses the 3081's.

With ZFS the 7200RPM drives usually aren't anywhere near as much of a win as a 5400/5900RPM drive twice the size.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
I'm aware of the 2TB limitation, but I got 2 of them for less than $20. Is the driver support on the 3081's just questionable because they're old? I've used a couple 3041's on my old motherboard and they seemed to work fine in HBA mode.

Good to know about the low RPM being fine, but since I'm capped at 2TB for the moment, and I have 7200's already, seems better to keep them matchy matchy. The price difference is minor at 2TB. The 10+TB I should get out of 6 2TB mirrors should be fine for me for a good long while. I just need some redundant storage for a couple things and the rest is games or videos that I don't need redundant storage of.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
OK, so I emptied out the pool to 5% and the iSCSI writes from my PC are still behaving in this bursty fashion. The one vdev still seems to be showing much higher IO time than the others.

So I'm pretty well stumped at this point. Is it these cheap drives I bought? And why does it only happen during iSCSI writes? Why are SMB writes fine? Why can I create a Windows Virtual HD on my SMB share and do faster writes than iSCSI?

Perplexed...
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
So to dig further into this issue, I deleted the pool and my 3 mirrored vdevs and made 3 pools, each with 1 mirror vdev.

On the first 2 pools I get iSCSI write performance of around 30MB/sec in zpool iostat and perceived bursty performance on my computer. On the 3rd pool I get iSCSI write performance of around 8MB/sec and, again, perceived bursty performance (slower, though) on my computer. When I cancel the transfer, though, the write speed seems to increase as it clears the cache.

However, when I switch to SMB, all 3 pools have no issue maintaining nearly the full 110MB/sec that my 1Gb ethernet is capable of delivering.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Are the two "slow" drives a different type? They probably have WCE disabled or something like that. Mmm. Interesting.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
The 2 slow drives are the new 2TB Seagates I got for $50 apiece from Microcenter. The other 4 drives are 2TB units I inherited from a friend a while ago: 3 Seagates and a WD.

EDIT: What's WCE?

EDIT EDIT: Write caching? I didn't think that was something that could even be disabled.
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
So I'm trying to run camcontrol to figure out if WCE is enabled, but because the drives currently show up as ada devices, I get no results. They're on the motherboard's SATA ports, so I'll need to switch over to the LSI cards and see what happens.
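For reference, here's a sketch of the checks I'd expect to work; camcontrol's modepage command only talks to SCSI/SAS (da) devices, so for drives on the motherboard (ada) ports the identify output is where the write-cache flag shows up. Device names here are assumptions:

    # SATA drive on a motherboard port: look for the "write cache" line in the feature table
    camcontrol identify ada0

    # drive behind a SAS HBA shows up as daX: the caching mode page includes the WCE bit
    camcontrol modepage da0 -m 8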
 

GeorgePatches

Dabbler
Joined
Dec 10, 2019
Messages
39
So I've taken the 2 problem drives out, and that isolated the issue. These drives are just weird: as they fill up, they start to get bursty with writes. The behavior was recreated on my Windows 10 machine using a Windows storage pool. So these things are just junk.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well! Way to take the ball and run with it with only modest input from the forum.

I normally suggest that people do burn-in testing of their NAS to help spot problems like this up front. If the foundation of the system isn't solid, that throws everything else off and makes it harder to debug. Hard drive problems are the most annoying, aren't they!
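For anyone landing here later, a minimal sketch of what I mean by burn-in; the dd pass is destructive and assumes the drive (ada0 here) holds no data yet:

    # long SMART self-test, then check the results when it finishes
    smartctl -t long /dev/ada0
    smartctl -a /dev/ada0

    # full sequential write pass (DESTROYS everything on ada0), followed by a full read-back
    dd if=/dev/zero of=/dev/ada0 bs=1m
    dd if=/dev/ada0 of=/dev/null bs=1m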
 