Disks Not Configured in FreeNAS 9.1 Release

Status
Not open for further replies.

Letni

Explorer
Joined
Jan 22, 2012
Messages
63
FlynnVT, I'm assuming you're still creating the RDMs via the CLI (probably from an AHCI or onboard controller) using this command, or something similar?
vmkfstools -r /vmfs/devices/disks/vml.020001000030000000d6f5b656695343534920 /vmfs/volumes/FreeNAS-iSCSI/DSL2/rdm1.vmdk -a lsilogic
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
Almost, but I'm using '-z' for physical RDMs rather than '-r' for virtual ones.

'-z' allows SMART passthrough and spindown, but breaks with FreeNAS 9.
'-r' works with FreeNAS 9, but doesn't allow SMART/spindown and probably fails with disks > 2 TB.

I don't have a true SAS card (yet!), so the GUI RDM option is greyed-out.
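For reference, a sketch of the two vmkfstools invocations being compared here. The device ID and datastore paths are placeholders; substitute your own (and note that older ESXi releases also accepted "-a lsilogic" to set the adapter type, as in the command quoted above):

```shell
# Physical (pass-through) RDM: SMART and spindown work, but FreeNAS 9
# reports the disk as 0 bytes. '-z' maps the raw device directly.
vmkfstools -z /vmfs/devices/disks/<device-id> \
    /vmfs/volumes/<datastore>/FreeNAS/disk1-rdmp.vmdk

# Virtual RDM: works with FreeNAS 9, but SMART/spindown are lost and
# disks over 2 TB may fail. '-r' virtualizes the mapping.
vmkfstools -r /vmfs/devices/disks/<device-id> \
    /vmfs/volumes/<datastore>/FreeNAS/disk1-rdm.vmdk
```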
 

AZweimiller

Cadet
Joined
Jun 14, 2013
Messages
3
FlynnVT: It looks like the problem is fixed in FreeBSD 10? If so, we just have to wait for FreeNAS to be upgraded to that base, right? I've been using FreeNAS for a very, very long time, most recently on the 8.3 branch. Tonight I upgraded to 9.2.0 and experienced this same problem. I run ESXi 4.1 and use the CLI RDM with the -z option. Not a single issue for years, even through power losses, etc. After the upgrade, my disks show as 0 B and no zpools appear. Luckily, I backed up my config, so going back to 8.3.1 will be cake. I was very thankful to find this thread so I could figure out what happened.

I've been very happy with virtual ZFS, but it's only ever been for home use. I use it as a NAS for all of my digital hoarding, and I also have a cloud backup of everything that is on the volume.

This is a forum for user-to-user support. No single user is obligated to offer assistance in any particular thread, and I don't think anyone would blame the anti-virtualization crowd for completely ignoring threads where VT is employed. The anti-VT rants, though, only derail the efforts of users who are trying to solve their own problems. I have no doubt that avoiding virtualization is the best advice and is stated for good reason. That said, not everyone is able to devote bare metal to FreeNAS, for what I am sure is a variety of reasons. I suspect the constant thread crapping is what led budmanxx to request a forum for VT users to help each other, unmolested.

On the flip side, no user should feel entitled to help or answers, especially anyone using RDM. I'm very appreciative of this thread and the help it provided me tonight. Thanks again.
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
FlynnVT: It looks like the problem is fixed in FreeBSD 10? If so we just have to wait for FreeNAS to be upgraded to that base, right?

I haven't looked into the root cause and am not well read up on how closely the FreeNAS kernel tracks FreeBSD's. However, given that both stopped working with physical RDMs in the transition from v8 to v9, I'm hoping that FreeNAS will follow FreeBSD back into functionality in the transition from v9 to v10.

I'm looking forward to being able to enable lz4 compression when the time comes.

I've added a comment to the bug tracker: https://bugs.freenas.org/issues/3197
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
More information, in case anyone else is in this situation.

I recently updated from ESXi 5.1 to 5.5 and saw no change in this behaviour. With identical physical RDM setups using the on-board SATA controller:
  • FreeNAS 8.3 works OK.
  • FreeNAS 9.3 sees 0-byte physical RDMs and fails.
  • FreeBSD 10 works OK.
Then I got a Dell H310 and reflashed it to 9211-8i IT-mode firmware. In contrast to the on-board SATA, this actually allows RDM configuration via the vSphere GUI, and FreeNAS 9.3 then works OK with physical RDMs! (My machine doesn't support PCI passthrough.)

So, if you hit this issue, your options are:
  1. Get a real SAS controller
  2. Use FreeNAS 8.3
  3. Use FreeNAS 9.3, with virtual RDMs (losing SMART, spindown, etc.)
  4. Use FreeBSD 10, or wait for FreeNAS 10
We could theorize all day as to whether Free[NAS/BSD], ESXi or particular hardware is at fault/responsible, but this sensitivity feels like a regression in v9.
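If you're unsure which case you're hitting, the 0-byte symptom can be checked from the FreeNAS shell. The device name below is an example; adjust to your setup:

```shell
# List CAM-attached devices as the guest sees them.
camcontrol devlist

# Report details for a suspect RDM disk; on an affected FreeNAS 9.x
# system a physical RDM shows up with a 0-byte media size.
diskinfo -v da1
```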
 

Letni

Explorer
Joined
Jan 22, 2012
Messages
63
FlynnVT,

Glad to hear you have upgraded to a hardware controller (even in IT mode). This is absolutely a much better solution for virtualized FreeNAS. (I'm on 9.2.1.9 still, nervous about the 9.3 upgrade; there is an ugly HPET bug with FreeBSD 9.3 that causes the VM to hang on soft reboots, so turn off HPET in your VM's advanced settings.) And yes, there is some sort of regression in the 9.3 code that blows up "fake" RDMs presented through VMware to your FreeNAS VM. I have been running FreeNAS with ZFS on physical RDMs, virtualized via a hardware IT-mode controller, for over two years now without really any hiccups, and I throw tons of stuff at it. I even have EMC RecoverPoint (RP4VM) running on some iSCSI LUNs I have carved out.

As far as power management goes, I may try playing around with version 10 VM hardware (it requires ESXi 5.5 and the vCenter Web Client). Version 10 supports an emulated AHCI controller, which lets the VM think the disks are coming off AHCI. This may allow FreeNAS to properly detect power management and spin down drives. It did NOT fix the HPET issue in 9.3, however.
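For anyone chasing the same reboot hang: the usual way to disable HPET for a single VM is an advanced configuration parameter in the VM's .vmx file. This is a sketch; the setting name is the commonly cited VMware one, so verify it against your ESXi version:

```
# In the VM's .vmx file (or via Edit Settings -> Options ->
# Advanced -> Configuration Parameters), with the VM powered off:
hpet0.present = "FALSE"
```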
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
Interesting - I never thought about the advantage of SATA with v10 VM hardware. I always found that camcontrol (e.g. camcontrol stop /dev/da1) would spin down physical RDMs OK. There was an add-on script written around the time of 8.3 that used this and worked well under ESXi: https://forums.freenas.org/index.php?threads/unsure-of-sata-drive-spindown.1053/page-6 ...I actually decided to leave the disks spinning in the end.

Thanks for the pointer on the HPET bug. I figured it out the long/hard way a few hours ago, but it's nice to have confirmation that I've done the right thing!
 

Letni

Explorer
Joined
Jan 22, 2012
Messages
63
Yeah, I have been using Millhouse's script pretty successfully, though I kind of don't care for it, as IMO it's a workaround.

Another simple trick I have learned is to create a separate 10-50 GB virtual disk (on your non-pool devices, obviously; hopefully you have a HW-RAID-protected boot/main datastore) and present it to your FreeNAS VM. Format this as a separate ZFS pool, create a dataset, and move your system dataset to it. This gives the drives more of a chance to spin down (if you are indeed using the script to power them down).
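Roughly, once the extra virtual disk shows up in the VM. Here da2 is an assumed device name and the pool/dataset names are just examples; the final step is done in the FreeNAS GUI rather than the shell:

```shell
# Create a small single-disk pool on the new virtual disk.
zpool create syspool da2

# Create a dataset on it to hold system data.
zfs create syspool/system

# Then, in the FreeNAS GUI (System -> System Dataset), point the
# system dataset at the new pool so .system traffic no longer keeps
# the main pool's drives awake.
```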
 