webserver file storage


MtK

Patron
Joined
Jun 22, 2013
Messages
471
Hey,
my NAS serves as file storage for web servers, and its spec is:
  • 4-core E5620 CPU
  • 24 GB of ECC RAM
  • pool = 2 x RAIDZ vdevs, each 6 x 300 GB SAS (10K), just under 3 TB of usable space
  • 2 x Intel Gbit NICs
Reading (and actually also writing) a single large file (even bigger than RAM), both locally and through NFS, works well and I am getting around 100-110 MB/s - which seems like the network's max (gigabit realistically tops out around 110-117 MB/s) - which is good, right?

the problem is that the server itself basically stores web files (mostly PHP and CSS) for thousands of domains.
we are talking about the actual /home directory of each domain:
* /home/web1.com/{Wordpress files}
* /home/web2.com/{Joomla files}
* /home/web3.com/{Wordpress files}
* /home/web4.com/{Wordpress files}
* /home/web5.com/{Joomla files}
* /home/web6.com/{Drupal files}
etc...
{each web server's /home is an NFS mount backed by the NAS}
so we are talking about lots of small random reads (let's leave writes aside for the moment).
I am seeing iowait on the client side (which doesn't seem to be network-related), and zpool iostat on the server shows an average of < 30 MB/s.
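this is roughly how I'm measuring, for reference (the pool name tank is just a placeholder for mine):

Code:
# on the NAS: per-second, per-vdev throughput and IOPS
zpool iostat -v tank 1

# on the CentOS clients: per-device utilization and wait times
iostat -x 1

# NFS operation counts on the client side
nfsstat -c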

before looking into improvements (more RAM / an L2ARC device / an SSD pool?), I would like to actually find the bottleneck...

what can I do to test this out and see where the problem is?
{if possible, I'd like to start locally, directly on the NAS, to rule out network/NFS issues/settings}


thanks!
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I am sorry, I did not understand.

The web server mounts files from FreeNAS using NFS. And the home directory comes from where?
the /home partition itself is being NFSd to the NAS
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
1. NAS machine - NFS Server
2. Web Server machines - NFS Client

The /home partition of each web server machine (CentOS) is NFS-mounted from the NAS, where the files are actually stored...
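roughly like this (hostname and dataset path are just examples, mine differ):

Code:
# on the NAS: one dataset holds all the home directories and is
# shared over NFS (configured through the FreeNAS GUI)
#   /mnt/tank/home

# on each CentOS web server, in /etc/fstab:
nas.example.com:/mnt/tank/home  /home  nfs  rw,hard,intr  0  0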
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Just curious,
I have 2 x RAIDZ vdevs (each = 6 x SAS 10K), does this look OK?

Code:
# dd bs=1K if=/dev/zero of=testfile count=32000
32000+0 records in
32000+0 records out
32768000 bytes (33 MB) copied, 0.306852 s, 107 MB/s

Code:
# dd bs=1M if=/dev/zero of=testfile count=32000
32000+0 records in
32000+0 records out
33554432000 bytes (34 GB) copied, 12.5623 s, 2.7 GB/s

this was tested from within the server itself (no network involved).
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Such a test is likely 100% meaningless. You are writing a stream of zeroes to a dataset that likely has compression enabled.
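You can check with something like this (the dataset name is just an example):

Code:
zfs get compression,compressratio tank/home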

Concatenate large video files to a size larger than twice your RAM, then use that file for testing. Get an mp4 file from YT if you do not have any.

Read the file to /dev/null. Then copy it from one dataset to the other.
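Something along these lines (file and dataset names are just examples):

Code:
# build a test file larger than 2x RAM out of real (already compressed) video data
cat video1.mp4 video2.mp4 video3.mp4 > /mnt/tank/ds1/bigfile
# double it until it is big enough (> 48 GB for 24 GB of RAM)
cat /mnt/tank/ds1/bigfile /mnt/tank/ds1/bigfile > /mnt/tank/ds1/bigfile2

# read test: stream it to /dev/null
dd if=/mnt/tank/ds1/bigfile2 of=/dev/null bs=1M

# copy test: from one dataset to another
dd if=/mnt/tank/ds1/bigfile2 of=/mnt/tank/ds2/bigfile bs=1M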
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I will try that.
how can I test the server with a large number of small files?
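would something like this be a reasonable approach? (paths and counts are just examples)

Code:
# create lots of small files filled with incompressible (random) data
mkdir -p /mnt/tank/smalltest
for i in $(seq 1 50000); do
    dd if=/dev/urandom of=/mnt/tank/smalltest/f$i bs=4k count=1 2>/dev/null
done

# clear the ARC between runs (export/import the pool, or reboot),
# then time reading everything back
time find /mnt/tank/smalltest -type f -exec cat {} + > /dev/null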
 