Citadel - Build Plan and Log

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just schedule the SMART test for "after hours" and you can test them all at once.
Then schedule the scrub for a different day.

 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
Cool, thanks! Now I know :)

The new arrangement is:
- All disks short test every 3 days.
- All disks long test twice a month, starting an hour after short tests.
- Scrubs twice a month, on alternating weeks from long tests.
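For my own reference, here is roughly what that cadence would look like as plain cron + smartctl lines. This is an illustration only; FreeNAS actually drives all of this from the GUI's S.M.A.R.T. test and scrub tasks, and the /dev/ada* device names and the pool name "tank" are placeholders, not my real layout.

Code:
# Sketch of the schedule above in /etc/crontab form (illustration only).
# Short test on every disk, roughly every 3 days at 02:00:
0 2 */3 * * root for d in /dev/ada0 /dev/ada1 /dev/ada2; do smartctl -t short $d; done
# Long tests twice a month at 03:00, an hour after the shorts:
0 3 1,15 * * root for d in /dev/ada0 /dev/ada1 /dev/ada2; do smartctl -t long $d; done
# Scrubs twice a month, on the in-between weeks:
0 3 8,22 * * root zpool scrub tank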
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Short tests usually finish in around 3 minutes or less. I have my system do those every day around 2 AM, except for one day a week, and I set up the schedule for the long test to run on the day of the week that the short test skips. Long test completion time depends on drive size. My 4TB drives take 4 to 6 hours, so I usually kick those off at 1 AM. I have another cron job that sends me an email at 7 AM daily, and I want all the tests done by then.

I have a system at work that uses 6TB drives and they are about half full of data. It takes that system about 80 hours to complete a scrub.
Start it around midday on Friday and I am lucky if it is done by the following Monday afternoon.
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
Short tests usually finish in around 3 minutes or less. I have my system do those every day around 2 AM, except for one day a week, and I set up the schedule for the long test to run on the day of the week that the short test skips. Long test completion time depends on drive size. My 4TB drives take 4 to 6 hours, so I usually kick those off at 1 AM. I have another cron job that sends me an email at 7 AM daily, and I want all the tests done by then.
Cool, yeah these drives take about 12 hours for a long test.

I have a system at work that uses 6TB drives and they are about half full of data. It takes that system about 80 hours to complete a scrub.
Start it around midday on Friday and I am lucky if it is done by the following Monday afternoon.
Yikes!

I'm still looking at setting up rsync backups, but can't decide how to handle the ssh users. Since root is the default user for the web UI, I'm tempted to just use root for ssh and rsync as well... but at the same time, I don't really like the idea of using root for ssh and the like.
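If I do end up making a dedicated user instead of root, I imagine the pull would look something like this. Completely hypothetical sketch; the host name, key path, user, and dataset paths are placeholders, since I haven't settled on any of it yet.

Code:
# Hypothetical: pull the important dataset over ssh as a dedicated non-root
# user instead of root. Host, key path, user, and paths are placeholders.
rsync -avz --delete \
    -e "ssh -i /home/backupuser/.ssh/backup_key" \
    backupuser@freenas.local:/mnt/tank/important/ \
    /mnt/backup/important/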
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
The docs say that the jails toolset is being migrated to iocage. It's hard to tell what the best approach is right now, but I'm considering trying to make the jails from the CLI and hoping that when the UI picks up iocage support everything just works (tm)...
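If I try it from the shell, it would probably be something like the below. This is just a sketch of the iocage CLI; the release, jail name, and network settings are placeholders and not something I've tested yet.

Code:
# Rough iocage sketch (release, jail name, NIC, and address are assumptions).
iocage fetch -r 11.1-RELEASE
iocage create -r 11.1-RELEASE -n testjail ip4_addr="em0|192.168.1.50/24"
iocage start testjail
iocage console testjail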
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The docs say that the jails toolset is being migrated to iocage. It's hard to tell what the best approach is right now, but I'm considering trying to make the jails from the CLI and hoping that when the UI picks up iocage support everything just works (tm)...
No guarantees. Personally, I am staying with the warden jail I am already using (Plex) until the iocage jails are ready for "prime time". Then I will make the move.
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
I've read conflicting takes on using FreeNAS as a VM server. On one hand, it is and should only be a storage provider. On the other hand, it can make home servers more robust by removing some of the storage-layer guesswork. I'd like to try migrating my home servers (old laptops) to jails, so my plan is to start with a jail that runs ddclient and the other services for my domain, and then add another jail that serves things like my RSS reader.

Got my first warden jail up and running just now. I immediately slammed into some issues, but resolved them by uninstalling vim and installing vim-lite.
It's frustrating to see that jails are in a broken limbo right now; I wish whoever's working on this stuff for 11.2 the best of luck.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I have looked at VMs running on FreeNAS. My first and most important need is a file server, and FreeNAS is doing a very good job for me. But I also need some software and services that run on the Windows platform, and a Windows VM can be bothersome on FreeNAS. So after doing a lot of reading on this forum and other places, I decided to turn my FreeNAS server into an ESXi server. It is a bit of an investment when it comes to hardware and the time to gather some knowledge.

At this moment I am running ESXi on one of my lab computers, and so far it looks good. I am running a Windows VM, a FreeNAS VM with a small mirrored pool (with a borrowed HBA for the storage drives), and some Linux VMs, one of which hosts a VPN service. As soon as I am confident in what I am doing, I will start running ESXi on my FreeNAS hardware and run FreeNAS in a VM.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Good read. I learned quite a bit. Thank you to all who contributed.

Ctag and Evertb1,

In considering the added resources needed by ESXi and additional VMs, how did you come to your conclusion on the Mobo, CPU, and RAM choices? Specifically, how much power do you expect to use from the CPU? Is there something that held you back from X11 boards?

Ctag,

I too am looking at using 6x 8TB shucked drives, but am worried that an array of smaller drives with a separate vdev for backups may be better. Is the price of the 8TB just too tempting for us both? Is it simply best to get all the drives in an array the same size? How long should these drives last? I have only had two drives ever poop out on me, and I blame myself for both. With all these tools that FreeNAS and ZFS have, is there really any harm in going for the bigger size if you are not concerned with peak performance? I plan to run my VMs off an SSD, so the FreeNAS array is for storage, backups, and media playback only, nothing too pressing.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
In considering the added resources needed by ESXi and additional VMs, how did you come to your conclusion on the Mobo, CPU, and RAM choices?
My initial choices were based on a lot of reading (the guide on this forum, for example). I bought the things that were needed to run FreeNAS as a file server only. When I decided that I wanted to switch to ESXi, I expanded my memory to the max and upgraded my CPU from an i3 to a Xeon.
Specifically, how much power do you expect to use from the CPU? Is there something that held you back from X11 boards?
I expect the CPU I have to be sufficient for what I want to do with it. I will run FreeNAS on it and, at most, three other VMs. I have no plans for things like a media server.
At the time I bought my current X10 motherboard, it was pretty much mainstream, with no X11 in sight. Nothing but money keeps me away from the X11 boards right now. I do know what my ideal configuration would look like, but that is not in the budget. Besides that, it would be crazy to do away with my current hardware. It's running great.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'd like to replace the ReadyNAS with a FreeNAS box
How did this come out? Are you going to share some progress photos?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
On one hand, it is and should only be a storage provider.
Just a matter of opinion, but I wouldn't want to use it very extensively as a hypervisor because I don't want to risk impacting the performance of my storage by burdening the same system with too many other tasks.
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
Ctag,

I too am looking at using 6x 8TB shucked drives, but am worried that an array of smaller drives with a separate vdev for backups may be better. Is the price of the 8TB just too tempting for us both? Is it simply best to get all the drives in an array the same size? How long should these drives last? I have only had two drives ever poop out on me, and I blame myself for both. With all these tools that FreeNAS and ZFS have, is there really any harm in going for the bigger size if you are not concerned with peak performance? I plan to run my VMs off an SSD, so the FreeNAS array is for storage, backups, and media playback only, nothing too pressing.
I wouldn't trust myself to give you good advice ;) For me, the 8TB disks were largely impulse purchases that seemed like good value. I bought about 8x 2TB WD Red drives in 2014, and the first one just failed a few days ago, so I'd say I'm happy enough with the perceived quality of WD Red drives.

Just a matter of opinion, but I wouldn't want to use it very extensively as a hypervisor because I don't want to risk impacting the performance of my storage by burdening the same system with too many other tasks.
Yeah :( That's a very good point, and I've already seen how the Rsync backups max out all 24gigs of ram on this FreeNAS box. But the temptation of consolidation is pretty big for me. I'm really torn about it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yeah :( That's a very good point, and I've already seen how the Rsync backups max out all 24gigs of ram on this FreeNAS box. But the temptation of consolidation is pretty big for me. I'm really torn about it.
Well, the Precision T7500 can be maxed out to around 192GB of RAM and dual 6 core 3.4 GHz processors, so you have some room to grow.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is the price of the 8TB just too tempting for us both?
The price of the shucked WD 8TB drives is about as good as you can get and the cost to operate (cost of electricity per GB of storage) is about as good as you can get too.
Is it simply best to get all the drives in an array the same size?
All the drives in a single vdev of a pool should be the same size because you can only use the amount of space on each drive that is equal to the smallest drive.
How long should these drives last?
The set of drives I replaced around the middle of last year had been in continuous operation for a little over 5 years. I have 2 vdevs in my main pool, and I replaced all 12 drives (6 in each vdev) last year. In 2016, I had replaced all the drives in vdev-0 because they had reached the 5-year mark where I had originally planned to replace them. Unfortunately, I replaced them with Toshiba 2TB drives simply because the Toshiba drives were inexpensive. I had 3 of those drives fail within the first six months of operation, and for that reason I became concerned that they would not hold up in the long term. In 2017, I replaced the six drives in vdev-1 (they had been 2TB drives) with 4TB drives to expand the capacity of the pool. I selected Seagate Barracuda drives for this pool expansion, and after six months I have had no issues with them. None at all. I got a little lost there. It is late.
I have only had two drives ever poop out on me, and I blame myself for both. With all these tools that FreeNAS and ZFS have, is there really any harm in going for the bigger size if you are not concerned with peak performance? I plan to run my VMs off an SSD, so the FreeNAS array is for storage, backups, and media playback only, nothing too pressing.
I am seriously considering moving my pool to 8TB drives for the lower energy use. The comparisons I have done make me think that is the way to go. The biggest reason I have my pool in the configuration it is in is the speed of access. More vdevs give more IOPS, and more IOPS generally equates to faster access, especially when looking at a bunch of small files.
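Just to illustrate what I mean (the device names and pool name here are made up, not my actual system), a layout like mine is simply two RAIDZ2 vdevs in the same pool. Each vdev can only use as much of each disk as its smallest member, and the two vdevs together give roughly twice the IOPS of a single vdev.

Code:
# Illustration only; "tank" and the da* device names are placeholders.
# Two 6-disk RAIDZ2 vdevs in one pool for more IOPS.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11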
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Rsync backups max out all 24gigs of ram on this FreeNAS box.
ARC (Adaptive Replacement Cache) will use all available memory if you don't limit it with a tunable. If you want to run VMs, go for it, but you do need to adjust the default settings to set aside some memory, and you might want to add more memory depending on how much you want to do with VMs. I use a T7500 at work to run VMs, and I can run 8 Windows VMs on that thing with no trouble, using Windows Server as the host (with Hyper-V) and with some of the VMs being additional Windows Server or Windows 10 instances.
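As a sketch of what I mean, the tunable is vfs.zfs.arc_max, which you would add in the GUI under System -> Tunables as a loader tunable. The 16 GiB cap below is only an example value, not a recommendation for your system.

Code:
# Example: cap ARC at 16 GiB so the VMs keep some memory for themselves.
# Added through System -> Tunables (type "loader") rather than by editing
# /boot/loader.conf directly.
vfs.zfs.arc_max="17179869184"   # 16 GiB = 16 * 1024^3 bytes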
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
How did this come out? Are you going to share some progress photos?
I'm still working out the details. Initially I wanted to have the ReadyNAS become a backup for the most important files on the FreeNAS box, but I feel like that'd be a mess to maintain, given how much I dislike the ReadyNAS. Now I'm considering giving the ReadyNAS to a family member to use.

My "server closet" isn't anything to look at, but here it is :D
[photo: the server closet]


The two laptop motherboards on the shelf are the systems that I'd like to consolidate into jails.
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
Well, using jails is looking more and more shaky :confused: It feels like I'm not proficient enough with FreeBSD yet to tackle the troubleshooting required to get things up and running.

I tried installing PostgreSQL in one of the jails. It installed fine from pkg, but then it wouldn't run. Eventually I found that I needed to put allow.sysvipc=true in the Sysctls box of the jail configuration dialog. And then I couldn't import my old databases; apparently the postgresql package was compiled for 11.1 and wanted a function that wasn't available.
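For reference, I believe the equivalent on a plain FreeBSD box would be the allow.sysvipc parameter in /etc/jail.conf; FreeNAS just exposes it through that Sysctls box. The jail name below is made up.

Code:
# Plain-FreeBSD equivalent (sketch only; "pgjail" is a placeholder name).
pgjail {
    allow.sysvipc;
}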

So I tried compiling from ports. I did the fetch, extract, and update steps to take care of the CVE vulnerability warnings, and then make wouldn't work because "support for your FreeBSD version has ended." So I added ALLOW_UNSUPPORTED_SYSTEM=yes to /etc/make.conf for the time being. That seems to work, but I've run out of time and will have to come back to set up the database and webserver.
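Roughly the steps, noted here so I can retrace them later. The port origin databases/postgresql96-server is my guess at the right version, so double-check it.

Code:
# Rough notes, not a verified recipe; the exact port origin is an assumption.
portsnap fetch extract        # first run; later runs use "portsnap fetch update"
echo 'ALLOW_UNSUPPORTED_SYSTEM=yes' >> /etc/make.conf
cd /usr/ports/databases/postgresql96-server
make install clean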
 

ctag

Patron
Joined
Jun 16, 2017
Messages
225
After the last update (11.1-U4) logging in to the new UI brings up a modal error message:

Code:
Error 22:rrdtool failed: ERROR: opening '/var/db/collectd/rrd/localhost//aggregation-cpu-sum/cpu-user.rrd': No such file or directory

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 150, in call_method
	result = await self.middleware.call_method(self, message)
  File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 109, in __next__
	return self.gen.send(None)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 908, in call_method
	return await self._call(message['method'], serviceobj, methodobj, params, app=app)
  File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 109, in __next__
	return self.gen.send(None)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 876, in _call
	return await methodobj(*args)
  File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 109, in __next__
	return self.gen.send(None)
  File "/usr/local/lib/python3.6/site-packages/middlewared/schema.py", line 491, in nf
	return await f(*args, **kwargs)
  File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 109, in __next__
	return self.gen.send(None)
  File "/usr/local/lib/python3.6/site-packages/middlewared/plugins/stats.py", line 111, in get_data
	raise ValueError('rrdtool failed: {}'.format(err.decode()))
ValueError: rrdtool failed: ERROR: opening '/var/db/collectd/rrd/localhost//aggregation-cpu-sum/cpu-user.rrd': No such file or directory


And when I click through it, there's another error:

Code:
Error 201:[ENOMETHOD] Method "summary" not found in "network.general"

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 893, in _method_lookup
	serviceobj = self.get_service(service)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 822, in get_service
	return self.__services[name]
KeyError: 'network.general'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 150, in call_method
	result = await self.middleware.call_method(self, message)
  File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 109, in __next__
	return self.gen.send(None)
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 902, in call_method
	serviceobj, methodobj = self._method_lookup(message['method'])
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 896, in _method_lookup
	raise CallError(f'Method "{method_name}" not found in "{service}"', CallError.ENOMETHOD)
middlewared.service_exception.CallError: [ENOMETHOD] Method "summary" not found in "network.general"
 