Very poor throughput copying a large number of small files.

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
@Andrew Ostrom was your copy problem resolved? I had a similar problem when moving tens of thousands of images around and it was pegging the SMB process on FreeNAS.
No. I just let it run for 2 days and it finally finished. In the future I think I would access the drive from the Unix side via SSH, roll everything up into a zip or tar file, copy that over, and then decompress locally.
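Roughly what I have in mind, as a sketch (the dataset paths, directory name, and hostname below are just placeholders for my own setup):

  # On the FreeNAS box over SSH: roll the whole directory into one tarball so
  # the transfer becomes one large sequential file instead of thousands of tiny ones.
  tar -czf /mnt/tank/scratch/photos.tar.gz -C /mnt/tank/share photos

  # Pull the single file over and unpack it on the client:
  scp user@freenas:/mnt/tank/scratch/photos.tar.gz .
  tar -xzf photos.tar.gz

  # Or skip the intermediate file and stream the tarball straight over SSH:
  ssh user@freenas 'tar -cz -C /mnt/tank/share photos' | tar -xz

The last variant obviously assumes a client with tar available; from a Windows client it would be the two-step copy-then-extract instead.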
 

Andrew Ostrom

Explorer
Joined
Jul 28, 2017
Messages
57
Clearly, I need to measure it, but I'm cautiously optimistic. The CPU has a TDP of 35W, but idle power draw for the whole motherboard is allegedly around 31W. Each drive draws 7W active and 5W at idle. So it's still in the realm of possibility that the whole system idles at less than 100W. But there is no way to know until I hook up a power meter and start measuring.

But first, the air shroud has to go into place. That will let the SAS controller and CPU get a lot more airflow from the Noctua Industrial fans I'm using, which in turn will let me run them at lower-than-"blast" speeds.
I will be interested to see how you make out; please keep us updated. With power supplies that are only 80% to 90% efficient, redundant power supplies, lots of fans, and plenty of other power-consuming parts, I think it's going to be tough. With a full chassis of 24 HDDs I was expecting to burn about 1,200W (just a guess, I didn't calculate anything), so I guess I'm satisfied.
 
Joined
Jan 18, 2017
Messages
524
Try doing a similar transfer with RoboCopy, which is bundled with Windows. If that is much faster, this thread's last post may help you.
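For example, something along these lines (source and destination paths and the thread count are just placeholders):

  robocopy D:\images \\freenas\share\images /E /MT:32 /R:1 /W:1

/E copies subdirectories (including empty ones), /MT:32 runs the copy with 32 threads, which usually helps a lot with piles of small files, and /R:1 /W:1 keeps it from spending forever retrying anything that fails.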
 

GREBULL

Cadet
Joined
Dec 10, 2018
Messages
4
I have a system I support for work that uses a massive number of small files for a kind of database. There is nothing worse for performance than a mass quantity of small files.

Hello Chris, I have a similar problem but with a program called Eplan.
What recommendations can you give me to mitigate the slow performance?

My system is:
SUPERMICRO SYS 5049P-E1CTR36L SERVER
INTEL XEON 6130 PROCESSOR, 16 CORES, 2.1GHz, 22MB CACHE
128GB DDR4 (16GB MODULES), 2666MHz ECC REG, 1.20V
24 HDD 4TB SAS ENTERPRISE ISE 512e SE P3 (Vela)
Intel DC P4510 1TB NVMe PCIe 3.1x4 3D TLC 2.5" 15mm 1DWPD
SUPERMICRO 10Gb ETHERNET CARD, 2 RJ45 PORTS

3 vdevs x 8 HDD RAIDZ2

The Intel DC P4510 1TB NVMe PCIe 3.1x4 3D TLC 2.5" 15mm 1DWPD disk is not yet active as either SLOG or L2ARC; I am waiting to run tests to see whether it is really beneficial.

Thank you
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
We added more drives to form additional vdevs. More vdevs give you more IOPS. Our production server has ten vdevs of six drives each in one of these chassis:
https://www.supermicro.com/en/products/system/4U/6048/SSG-6048R-E1CR60l.cfm
If I can get management to fund it, we will be adding another ten vdevs in an expansion chassis like this:
https://www.westerndigital.com/products/storage-platforms/ultrastar-data60-hybrid-platform
For our purposes, having all those drives is fine because our data is pretty massive. If your files are small but you need high IO on them, you are going to need to use mirrored pairs of disks instead of RAIDZ2 vdevs. With a 24-bay chassis, you could get 12 vdevs using mirror pairs, and it is the vdev count that makes the difference with the random IO of accessing small files. Each file means a new head seek for the drives, which is just horrible for performance.
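Just to illustrate the layout (device names below are placeholders, and on FreeNAS you would normally build the pool through the GUI rather than from the shell), a 24-bay chassis set up as striped mirrors would look roughly like this:

  # 12 two-way mirror vdevs: roughly 12x the random IOPS of a single disk,
  # at the cost of half the raw capacity.
  zpool create tank \
    mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
    mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
    mirror da16 da17 mirror da18 da19 mirror da20 da21 mirror da22 da23

A pool like that can also be grown two disks at a time later, simply by adding more mirror vdevs.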
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I will be interested to see how you make out; please keep us updated. With power supplies that are only 80% to 90% efficient, redundant power supplies, lots of fans, and plenty of other power-consuming parts, I think it's going to be tough. With a full chassis of 24 HDDs I was expecting to burn about 1,200W (just a guess, I didn't calculate anything), so I guess I'm satisfied.
I used a Kill A Watt today to look at power consumption throughout the boot cycle. I know folks here don't like them much due to their averaging (i.e. you might miss a peak) and/or their perhaps less-than-stellar performance with very distorted loads, but I'm not motivated enough to bring a Yokogawa home, sorry. Anyhow, I recorded a video of the Kill A Watt screen during the entire boot process until the machine was idling with the FreeNAS server up.
  • IPMI only: 9-10W
  • Peak power draw during boot: 192W (about 11 seconds in)
  • Power draw for idle FreeNAS (fully booted): 90-93W
I don't doubt that my FreeNAS will pull far more during a scrub, a massive data transfer, etc. But most of its life is one of idle pleasure, so the 90W figure is the most relevant one to me. Seems like I achieved my goal. I'm using a Seasonic Titanium 650W in there. I might swap it for a 450W Platinum just to see if there is a difference (comparing a Titanium PSU running at about 13% of capacity vs. a 450W Platinum running at about 20%).

My biggest takeaway is that the system is comfortably within the PSU's capacity. As the drives spun up, the power draw ramped up in step, so I doubt I missed much of the peak. Based on the very predictable power consumption trajectory during boot (which featured staggered drive spin-up) and the subsequent decline, I'd wager the maximum power draw was at most around 200W. Those helium-filled drives are simply awesome.

If I ever need to expand my pool by a vdev and go to 16 drives (necessitating something like the CSE-836 series chassis), I'll look into a 500W set of power supplies. But only if drives like the He10, with their very low power consumption, are still available. Otherwise, I'll likely have to up the power supply capacity.
 