I've not researched the matter thoroughly, so don't take my cautions as law in this regard. Is it that risky to have all these SMR drives?
...now I'm getting nervous about this.
A few searches yield a couple of threads that might be worth reading:
https://forums.freenas.org/index.php?search/3548170/&q=SMR&o=date
https://forums.freenas.org/index.php?search/3548172/&q=shingle&o=date
https://forums.freenas.org/index.php?search/3548173/&q=seagate+archive&o=date
https://forums.freenas.org/index.php?search/3548174/&q=ST8000AS0002&o=date
I am also prospecting 8TB drives, slowly*.
I remember reading somewhere a while back that these were not ideal for ZFS use. At least, that was the conclusion I took away from the reading. A lot of the speculation could have been confirmed or dispelled since then.
Some food for thought on at least getting ANY other drive than the SMRs - if that would be appealing.
Other alternatives cheaper than WD Reds include the Toshiba X300 HDWF180EZSTA (3000 SEK), the Seagate ST8000VN0002 (256MB cache, NAS rated) (3000 SEK), the Seagate Desktop *shrug* ST8000DM002 (3200 SEK), and Seagate Archive drives (2500 SEK).
At 20% cheaper than the Seagate Archive drive comes Intenso's OEM USB3 external drive, which has recently been documented to contain the Seagate Desktop drive mentioned above (2000 SEK).
In my market, all of these drives are cheaper than WD Reds at 3300 SEK.
Ie, the idea of getting different drives stems from a philosophy whose extreme version argues you should never buy more than one drive at a time, each of a different kind. Then there are less purist versions which argue for buying 'a drive once in a while' (much like you are doing) to avoid batch-related issues, since drives operating together see very similar wear.
Since SMR are somewhat uncharted territory over the long term, it keeps me watching out.
I sense you confuse the terminology of "vdev" and "pool". *waves frenetically with the newbie guide by cyberjock*
In an earlier post you've mentioned there is no reason to have multiple volumes... I slept on this one, and I can think of a reason to have many small volumes instead of a big one: later expansion.
As you know, FreeNAS doesn't support adding or removing disks from a volume (increasing or reducing the number of disks). If I were to build a 20-disk volume with 1TB disks, then when it came time to upgrade I would have to swap all 20 disks before seeing any extra available space. That would make it quite expensive.
By having smaller 4-8 disk pools, the pools can be grown gradually instead.
I assume this is not a problem for a big enterprise user, but for a "normal guy", buying 20 disks in one go can be prohibitive.
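To put rough numbers on the expansion argument above, here is a small sketch of the usable-capacity math. It assumes RaidZ2 (two parity drives per vdev) and ignores ZFS metadata overhead and the usual fill-level recommendations, so the figures are approximate.

```python
# Rough usable-capacity math for the expansion scenarios discussed above.
# Assumes raidz2 (two parity drives per vdev); ignores metadata overhead.

def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable TB of one raidz2 vdev: data drives = total minus 2 parity."""
    return (drives - 2) * drive_tb

# One big 20-disk raidz2 vdev of 1TB disks: upgrading means replacing
# all 20 disks before any extra space shows up.
big_vdev = raidz2_usable_tb(20, 1.0)          # 18.0 TB usable

# Three smaller 7-disk raidz2 vdevs: each can be upgraded (7 disks at a
# time) independently, at roughly a third of the up-front cost per step.
small_vdevs = 3 * raidz2_usable_tb(7, 1.0)    # 15.0 TB usable

print(big_vdev, small_vdevs)
```

The trade-off is visible in the numbers: smaller vdevs cost more capacity to parity, but each upgrade step is far cheaper.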
A pool consists of at least one vdev, but can contain any number of vdevs. A set of 11 drives configured in Raidz2 is referred to as a vdev.
Later expansion can be achieved by sizing the vdev - the ZFS stripe width - to fit your needs (see the readings in my signature for an inspirational blog post on the topic).
For example, you could start your big pool with a single 7-drive-wide RaidZ2 vdev, then expand later by adding another 7-drive RaidZ2 vdev. Boom.
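In command form, the expand-by-vdev approach looks roughly like this. This is a sketch only - the pool name `tank` and the device names (`da0`...`da13`) are placeholders, not taken from anyone's actual setup:

```shell
# Sketch only: pool and device names are placeholders.
# Create the pool with one 7-wide raidz2 vdev:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

# Later, expand the same pool by adding a second 7-wide raidz2 vdev:
zpool add tank raidz2 da7 da8 da9 da10 da11 da12 da13

# The pool now shows two raidz2 vdevs, striped together:
zpool status tank
```

Note that `zpool add` is one-way: you cannot remove a raidz vdev from the pool afterwards, which is exactly why sizing the vdev width up front matters.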
I slept on the matter too, and can offer a couple of other reasons, more or less valid, to consider multiple pools (there may be others):
- When you need both high IOPS for VMs (mirrors - low space-utilization efficiency) AND bulk storage where speed matters less (RaidZ3 for larger storage arrays). This motivates two pools.
- When a significant chunk of data is a sort of 'download/temp/scrap' area with such high turnover that there are arguments for avoiding further fragmentation of the main pool and minimizing wear on the main pool's drives. This setup could be anything from RaidZ1 to mirrors; I've tried both.
~ This last point is somewhat of a dodgy argument, and I'm not confident to what extent it applies. The scenario is a significant performance/speed discrepancy between two vdevs joined in a single pool. New incoming data will be 'striped' across the vdevs, so performance should still increase. The advice that the slowest drive determines the speed of a set of drives applies within a vdev, but - as far as I am informed - not across multiple vdevs in the same pool. I'm not 100% confident on this.

In the same vein - and also not necessarily a problem, depending on circumstances - there are drives of different sizes. For example, a 7-drive RaidZ2 of 500GB drives joined in a pool with a 7-drive RaidZ2 of 8TB drives will not allow for perfect striping, due to the difference in size. The anticipated theoretical doubling of performance from adding a second vdev is therefore lost. The same applies to adding a second vdev to a pool that already contains a lot of data: due to copy-on-write, the data will be somewhat rebalanced once edited, but naturally there is no "defragmentation" or "balancing" built into ZFS for users to actively pursue. This motivates doing upgrades before pools are filled to the brim.
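The uneven-striping point above can be illustrated with a toy model. ZFS biases new allocations toward the vdevs with the most free space, so a nearly full vdev joined by a big empty one will not see writes split evenly. This is purely illustrative - a simple proportional-to-free-space weighting, not actual ZFS allocator code:

```python
# Toy model: new writes spread across vdevs in proportion to free space.
# Illustrative only; the real ZFS allocator is more sophisticated.

def allocate(write_gb: float, free_gb: list[float]) -> list[float]:
    """Split one write across vdevs, weighted by each vdev's free space."""
    total_free = sum(free_gb)
    return [write_gb * f / total_free for f in free_gb]

# A mostly full 7x500GB raidz2 vdev (~200 GB free) joined by a fresh
# 7x8TB raidz2 vdev (~40000 GB free):
shares = allocate(100.0, [200.0, 40000.0])
print(shares)  # almost the entire write lands on the new, emptier vdev
```

With the old vdev nearly full, almost everything lands on the new vdev, so reads of that data come from one vdev instead of two - which is why the theoretical performance doubling does not materialize.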
(Also: if you stopped at a ~7-drive RaidZ2 of the 8TBs for the first vdev, you could add the second vdev later on - possibly with different drives - rather than detonating the budget getting all the way up to a stripe width of 11...)
Cheers, Dice