SuperMicro X10SL7-F


Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
What is this C1 vs C2 thing? What's wrong?
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
It's the usual Intel cock-up. The original C1 stepping of the PCH is defective: some USB ports fail to re-enumerate on return from a sleep state. It's fixed in the C2 stepping. If you don't run unattended with USB-attached devices, it doesn't affect you.

 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
That would depend on your UPS, but unless your batteries are old or the UPS is undersized, it should give you at least five minutes at full load, and shutting down shouldn't be at full load.



Just been reading about performance when doing a zfs destroy. It looks like you might be able to shut down in the middle of it at the expense of a "slow" restart. How slow, I don't know; the length of the destroy, maybe? If so, hibernation might be a better idea?

 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just been reading about performance when doing a zfs destroy. It looks like you might be able to shut down in the middle of it at the expense of a "slow" restart. How slow, I don't know; the length of the destroy, maybe? If so, hibernation might be a better idea?


?

What does this have to do with the X10SL7-F? Did you post to the wrong thread?
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
?

What does this have to do with the X10SL7-F? Did you post to the wrong thread?


Nope. If you're going to use a USB 3.0 flash drive (as vegaman does) but need to hibernate, you probably need the C2 stepping (Intel isn't clear on exactly when re-enumeration breaks).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nope. If you're going to use a USB 3.0 flash drive (as vegaman does) but need to hibernate, you probably need the C2 stepping (Intel isn't clear on exactly when re-enumeration breaks).

I'm still confused. zfs destroy deletes your zpool. What does a zfs destroy have to do with USB or hibernation? I can't imagine that in the few seconds it takes to destroy a zpool you'd go into hibernation...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm reading that a zfs destroy can take hours or days in the worst case. Is this not true?

Nope. Destroying a pool takes just a few seconds. Basically the pool is unmounted, then the partitions are deleted. I've destroyed 30TB+ pools in seconds.
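
For reference, a whole-pool destroy is a single administrative command (the pool name below is just an example):

Code:
# "tank" is an example pool name; this unmounts the pool and wipes its labels
zpool destroy tank

It completes in seconds regardless of pool size because no user data is overwritten.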
 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
Does anyone else have an issue with network access to FreeNAS 9.1.1 while using IPMI? I lose network connectivity to my FreeNAS server and then get directed to the Supermicro IPMI page to log on. I am not using the dedicated IPMI port on the motherboard, only one of the two Intel LAN ports. If I reboot the server, I can access FreeNAS again.

I am guessing I may just need to disable IPMI unless I am using the standalone IPMI LAN port?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Not a zpool destroy, a zfs destroy. Maybe specifically of a snapshot?

You're right for snapshots. zfs destroy can destroy filesystems too, though.

zfs destroy can, in theory, take a VERY VERY long time (hours or days). But we'd be talking about very large, very busy pools with extremely big snapshots. I've deleted 1TB snapshots in 20-30 seconds. We'd probably be talking 100TB+ pools where the snapshot being deleted is a majority of the pool.
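
To make the distinction concrete, a quick sketch with hypothetical dataset and snapshot names (timing the commands shows how long the space reclaim takes):

Code:
# "tank/data" is a hypothetical filesystem; the @-suffix names one of its snapshots
time zfs destroy tank/data@auto-20130901   # destroy a single snapshot
time zfs destroy -r tank/data              # destroy the filesystem and all its snapshots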

ZFS v5000 is supposed to eventually gain a feature that does this work in the background, cleaning up the zpool after you delete a snapshot. But it hasn't been implemented yet.

As for hibernation, so much hardware dislikes hibernation that I'd never recommend it for a server under any circumstances. For the reliability of the server and the services it provides, a clean shutdown is far better. Besides, if you're talking about a pool big enough for a snapshot deletion to significantly affect a shutdown, you'd also be talking about a server with hundreds of GB of RAM, and saving that to disk for hibernation is far less feasible than doing a shutdown.

Besides, for your average server, what's the likelihood you'd happen to be deleting a snapshot at the moment of power loss? I'm sure it happens, but it's far more likely the server will be idle.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I've had ~800 GB zvols take about an hour to zfs destroy. The free space on the pool slowly climbs, and gstat shows what must be lots of random I/O. I assume it's random because the percent utilization is quite high (~80% or so), but the total drive throughput is only 2-3 MB/sec.
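
That pattern is easy to watch on FreeBSD with gstat's standard flags:

Code:
# show only physical disks, refreshing every second; high %busy with only a
# few MB/s of throughput points to lots of small random I/O
gstat -p -I 1s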
 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
Well, I changed the IP of the IPMI firmware again; hopefully it will stick this time. It seems to have fixed the problem for now.
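
If the address keeps reverting, it can also be set from the OS with ipmitool (assuming the tool is installed; the channel number and addresses below are only examples):

Code:
ipmitool lan print 1              # show current settings for LAN channel 1
ipmitool lan set 1 ipsrc static   # stop taking an address from DHCP
ipmitool lan set 1 ipaddr 192.168.1.50
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1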
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
You're right for snapshots. zfs destroy can destroy filesystems too, though.

zfs destroy can, in theory, take a VERY VERY long time (hours or days). But we'd be talking about very large, very busy pools with extremely big snapshots. I've deleted 1TB snapshots in 20-30 seconds. We'd probably be talking 100TB+ pools where the snapshot being deleted is a majority of the pool.

ZFS v5000 is supposed to eventually gain a feature that does this work in the background, cleaning up the zpool after you delete a snapshot. But it hasn't been implemented yet.

As for hibernation, so much hardware dislikes hibernation that I'd never recommend it for a server under any circumstances. For the reliability of the server and the services it provides, a clean shutdown is far better. Besides, if you're talking about a pool big enough for a snapshot deletion to significantly affect a shutdown, you'd also be talking about a server with hundreds of GB of RAM, and saving that to disk for hibernation is far less feasible than doing a shutdown.

Besides, for your average server, what's the likelihood you'd happen to be deleting a snapshot at the moment of power loss? I'm sure it happens, but it's far more likely the server will be idle.

Hmm. Let me have a think about that. (Wearing my other hat, I'm also putting together a non-NAS server that will use ZFS under Linux with 16GB ramdisks that need saving on a power outage; an orderly shutdown of the software isn't possible within the runtime of even a good UPS battery.)
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Well, I changed the IP of the IPMI firmware again; hopefully it will stick this time. It seems to have fixed the problem for now.

Hi raidflex.

How are you finding the X10SL7-F? Apart from the IPMI/LAN port issue, any other problems?

How come you went for the LSI card rather than reflashing the onboard?

 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
Hi raidflex.

How are you finding the X10SL7-F? Apart from the IPMI/LAN port issue, any other problems?

How come you went for the LSI card rather than reflashing the onboard?


So far it is working without issues aside from the IP problem, which seems to be fixed since I last changed the IPMI IP. I already had the LSI HBA from my previous build and I have 8 HDDs, so I figured I would keep it the same and then add additional drives using the on-board controller. I have about six plugins running on the system, including Plex, and it runs very smoothly.

The only issue I had was that FreeNAS 9.1.1 would not boot the first time after installation. The problem had to do with the internal USB 3.0 header, which is what I was using, so I needed to disable XHCI hand-off and XHCI mode under ADVANCED -> PCH-IO CONFIG in the BIOS. I was not using USB 3.0 on the system for anything else, so this was not an issue.

The USB 3.0 issue should be fixed for 9.1.2 according to this bug report: https://bugs.freenas.org/issues/2369

Also, now that IPMI is working properly, I am really growing to like the ability to fully manage the BIOS/firmware and see what is currently on the screen remotely.
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
My X10SL7 is on the way! Even got it cheaper than the X10SLH sells for here in Norway :D
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
Hope so. I was originally going for the X10SLH, which has 6 native SATA3 ports, but... extra ports might come in handy :)
 