Hey,
I have a server with 24 bays, where 23 of them are populated with 300 GB 10K SAS drives and one with an SSD.
At the moment, I have 2 pools:
- LARGE: 2 x 6-disk RAIDZ2 vdevs (= 12 drives), plus the SSD as cache.
- SMALL: 5 x 2-disk mirror vdevs (= 10 drives)
(I named them SMALL/LARGE for simplicity)
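For reference, a sketch of how these two pools could be created. The pool names are the real ones, but the device names (da0..da21, ada0) are placeholders for whatever your system actually enumerates:

```shell
# Hypothetical device names; substitute your actual disk IDs.
# LARGE: two 6-disk RAIDZ2 vdevs, with the SSD as L2ARC (read cache).
zpool create LARGE \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11 \
    cache ada0

# SMALL: five 2-way mirror vdevs.
zpool create SMALL \
    mirror da12 da13 \
    mirror da14 da15 \
    mirror da16 da17 \
    mirror da18 da19 \
    mirror da20 da21
```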
The reason I have 2 pools is that I host 2 types of XenServer VMs on them (~20 are on at any given moment), some of which are more "mission critical" than others. Another reason is the nice Supermicro Bridge Bay, which actually allows me to have a dedicated host for each pool (each with its own quad-core CPU and 24 GB of RAM, which is plenty).
Both capacity and performance are currently sufficient to host *all* VMs in either pool (especially if the SSD stays on the LARGE pool). As an example, I have tested moving all VMs to the SMALL pool and running a scrub and a backup (snapshot) at the same time without overloading anything, but this is obviously not the desired state.
For those curious, at peak I can get >3000 IOPS and >400 MB/s from the SMALL pool, but that is far above the average (current) usage and was never tested for more than a few seconds of burst.
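If anyone wants to check similar burst numbers on their own pool, something along these lines with fio would do it; the directory, block size, and runtime here are my assumptions, not the exact test I ran:

```shell
# Hypothetical mountpoint and parameters; adjust for your pool.
# posixaio works on both Linux and FreeBSD-based appliances.
fio --name=burst-randread \
    --directory=/SMALL/fio-test \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=posixaio --iodepth=32 --numjobs=4 \
    --size=1g --runtime=10 --time_based --group_reporting
```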
What I would like to know is whether there is a better layout for the pools, taking into consideration future expansions such as:
- more storage, which can be achieved by upgrading to bigger disks.
- adding SSDs for L2ARC (1 bay) and/or a ZIL/SLOG device (2 bays).
- spare drives, though I already have one warm spare (in a bay inside the server) and several cold ones (out of the server, but a short distance away).
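For what it's worth, a sketch of what those expansion steps could look like on the command line, again with hypothetical device names for the new drives:

```shell
# Bigger disks: with autoexpand on, the pool grows once every
# drive in a vdev has been replaced and resilvered.
zpool set autoexpand=on LARGE
zpool replace LARGE da0 da24   # repeat for each disk in the vdev

# Extra SSDs: an L2ARC (cache) device uses one bay,
# a mirrored ZIL/SLOG uses two.
zpool add SMALL cache ada1
zpool add SMALL log mirror ada2 ada3

# Warm spare: register it so ZFS can pull it in automatically.
zpool add LARGE spare da23
```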
Thanks!