|
Post by N5200user on Jan 27, 2007 5:00:22 GMT 7
After I migrated my Thecus N5200 (3 x WD5000YS) from RAID 0 to RAID 5 write performance significantly went down.
With Iometer I now get only 6.5 to 8 MB/sec write. I used a "Transfer Request Size" of 1 MB; if this is lower, the performance is even lower. (Read performance is 3 to 4 times higher.) I used "Jumbo Frame Support" of 4000 bytes for best speed, but the influence of this is very low.
There seems to be little or no influence from the N5200 or drive firmware versions.
Support has had no answer on this so far.
Does anybody know how to improve write performance?
Thanks for any hints!
Best regards
|
|
|
Post by Arctra on Jan 27, 2007 6:23:38 GMT 7
> After I migrated my Thecus N5200 (3 x WD5000YS) from RAID 0 to RAID 5 write performance significantly went down.

I'm afraid this is always going to happen, mate. The fact that the box has to calculate parity data for every piece of data it writes to disk slows things somewhat. Take a read of www.smallnetbuilder.com/content/view/29616/75/1/7/ and see how your performance stacks up against those charts.

> With Iometer I now get only 6.5 to 8 MB/sec write. I used a "Transfer Request Size" of 1 MB; if this is lower, the performance is even lower. (Read performance is 3 to 4 times higher.) I used "Jumbo Frame Support" of 4000 bytes for best speed, but the influence of this is very low.

I used IOzone to test my box over varying record sizes and with jumbo frames set to 8K. My performance was only in the range you're talking about when writing record sizes of 4K. Otherwise my performance was impressive. Is your box operating in degraded mode at the moment by any chance?
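If it helps to see what that parity work actually is, here is a minimal Python sketch of the XOR parity a RAID 5 array has to compute for every stripe it writes (purely illustrative, not the N5200's actual firmware code):

import os

CHUNK = 64 * 1024                      # pretend 64 KB stripe units
chunk_disk0 = os.urandom(CHUNK)        # data chunk going to disk 0
chunk_disk1 = os.urandom(CHUNK)        # data chunk going to disk 1

# The parity chunk written to disk 2 is the XOR of the data chunks.
parity = bytes(a ^ b for a, b in zip(chunk_disk0, chunk_disk1))

# Any one chunk can be rebuilt from the other two -- which is also why a
# degraded or rebuilding array is even slower: every access to the missing
# disk turns into this kind of reconstruction.
assert bytes(a ^ b for a, b in zip(parity, chunk_disk1)) == chunk_disk0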
|
|
|
Post by N5200user on Jan 27, 2007 15:08:20 GMT 7
Thank you for the link! It was new to me.
I know that write performance is lower in RAID 5 compared to RAID 0. But the average of approx. 7.3 MB/sec is poor.
RAID Status was "Healthy" and "CPU Loading" was around 90 % when I ran my performance tests.
One more piece of information: the iobw.tst file which Iometer generates to do the 100 % sequential write performance tests with sector alignment was 50 GB. If this file is only 50 MB, the write performance is approx. 8.6 MB/sec.
Could the problem be caused by the number and/or type of HDs?
Best regards
|
|
|
Post by N5200user on Jan 27, 2007 16:02:31 GMT 7
I just found: when I copy 20 small files of approx. 10 MB each to the N5200, the speed is around 23 MB/sec, and this limit might be caused by my virus protection. But this is not my application ...
My video software also measures around 8 MB/sec write speed.
|
|
|
Post by Arctra on Jan 27, 2007 18:17:39 GMT 7
Glad the link helped! A CPU load of 90% is high! That will certainly impact your performance. In Small Net Builder they use IOzone for their tests and I've used it too. Pretty easy to use, and you can then plot your performance graphs in Excel. Here are the results I got (www.rarhost.com/download-tw3gea.html) with this command:

iozone -Rab c:\Series360a.xls -i 0 -i 1 -i 2 -g 360000 -n 170000 -s 350m -f \\ThecusNAS\test.tmp

Translated: IOzone output goes to the binary Excel file "c:\Series360a.xls" in automatic mode (read, write, and random read/write), up to a maximum file size of 360 MB starting at 170 MB, against the \\ThecusNAS\ network share using test.tmp as the temporary test file. You can download and use IOzone for free from www.iozone.org/

For some reason the results for the 170 MB file go a little off course around the 128K record size, so the 350 MB file tests look a little more consistent.
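If anyone prefers plotting outside Excel, here is a minimal Python sketch. It assumes you have exported the record-size and throughput rows from the IOzone spreadsheet into a plain CSV called iozone_results.csv with columns record_kb, write_kbs, read_kbs (a hypothetical file name and layout, not something IOzone produces directly):

import csv
import matplotlib.pyplot as plt

record_kb, write_mbs, read_mbs = [], [], []
with open("iozone_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        record_kb.append(int(row["record_kb"]))
        write_mbs.append(float(row["write_kbs"]) / 1024)   # KB/s -> MB/s
        read_mbs.append(float(row["read_kbs"]) / 1024)

plt.plot(record_kb, write_mbs, marker="o", label="write")
plt.plot(record_kb, read_mbs, marker="o", label="read")
plt.xscale("log", base=2)          # record sizes double at each step
plt.xlabel("Record size (KB)")
plt.ylabel("Throughput (MB/s)")
plt.legend()
plt.show()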
|
|
|
Post by N5200user on Jan 27, 2007 18:35:12 GMT 7
The reason for the write performance problems is (OK, still seems to be) the "Stripe Size"! I changed it to 512 KB.
The default value of 64 KB is very wrong for large files! (See the rough numbers below.)
Even in status "Build" the video software now reports more than 13 MB/sec.
I'll report new values here when "Build" is completed.
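As a rough back-of-envelope sketch (assuming a 3-disk RAID 5 laid out as 2 data chunks plus 1 parity chunk per stripe, which is not necessarily how the N5200 firmware arranges things internally), here is how a single 1 MB request breaks up at different stripe sizes:

# How a 1 MB sequential, stripe-aligned write splits up on an assumed
# 3-disk RAID 5 (2 data chunks + 1 parity chunk per stripe). Larger
# stripes mean fewer, larger per-disk writes and fewer parity chunks
# to compute per request. Illustrative only, not measured on the N5200.
request_kb = 1024                                  # 1 MB "Transfer Request Size"
for stripe_kb in (64, 128, 512):
    data_chunks = -(-request_kb // stripe_kb)      # ceiling division
    parity_chunks = -(-data_chunks // 2)           # one parity chunk per 2 data chunks
    print(f"stripe {stripe_kb:3d} KB -> {data_chunks:2d} data chunks, "
          f"{parity_chunks:2d} parity chunks per 1 MB written")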
|
|
|
Post by omega on Jan 27, 2007 20:37:37 GMT 7
Hi N5200user, I cannot believe that it only depends on the stripe size of the RAID array... even if you get better results for your special type of application. I've done the IOzone benchmark like Arctra, accessing the N5200 via the CIFS and the NFS protocols. First of all it is interesting that the CPU percentage of the smbd SAMBA daemon handling CIFS was almost never above 25% and normally around 10%. The same applies to the NFS daemon when doing NFS transfers.

The table below gives the results for a transfer of 340 MB with different record sizes for a filesystem mounted via NFS and via CIFS. All numbers are in kilobytes per second. Both computers are using Gigabit LAN interfaces; the N5200 and the computer are connected to a Gigabit switch. Frame size is just the normal Ethernet frame size (1500 bytes), in other words NO jumbo frames. The iozone command was:

iozone -Rab N5200.xls -i 0 -i 1 -i 2 -g 360000 -n 170000 -s 350m -f /thecus/tmp/test.tmp

Mode / Record size (KB) | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 | 8192 | 16384
NFS: Write 340MB | 42229 | 28552 | 26524 | 27811 | 25180 | 26372 | 27951 | 25850 | 26857 | 25470 | 26225 | 25005 | 24104
NFS: Read 340MB | 28103 | 26681 | 26808 | 26009 | 27440 | 26307 | 25947 | 26580 | 26516 | 26690 | 26211 | 26572 | 25339
CIFS: Write 340MB | 9321 | 16017 | 21725 | 25923 | 27852 | 32234 | 34797 | 35634 | 35825 | 38577 | 36293 | 39496 | 36979
CIFS: Read 340MB | 11530 | 14785 | 17953 | 20967 | 21678 | 23172 | 23061 | 23183 | 23955 | 23759 | 23231 | 22674 | 23981

To summarize: for NFS you can expect between 25-28 MB/s and for CIFS between 11-23 MB/s. These are just quick tests; for quotable results more benchmarks are needed. I too have seen spikes when doing the 170 MB benchmark, and so far I don't know why. But the numbers in the table above are at least in the right area. In my opinion, and from what I've read in the past, the Thecus N5200 is one of the fastest NAS devices in that price category without using any vendor-specific protocol. Andreas
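For readers wondering how the raw KB/s figures above turn into the MB/s summary, the conversion is simply a division by 1024, e.g.:

# Converting one of the IOzone KB/s figures above to MB/s.
print(25850 / 1024)   # 512 KB record NFS write figure -> about 25.2 MB/s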
|
|
|
Post by N5200user on Jan 28, 2007 19:15:13 GMT 7
> I cannot believe that it only depends on the stripe size of the RAID array...

After the N5200 reached status "Healthy" with a "Stripe Size" of 1024 KB and 500 GB of used data capacity, write performance measures as follows:
- Iometer 2006.07.27, 100 % sequential, 100 % write with a 1 MB "Transfer Request Size": around 12.8 MB/sec, no matter whether the iobw.tst test file is 5 GB or 50 GB. "CPU Loading" approx. 90 % during the test.
- video software: also around 13 MB/sec.

This is approx. 70 % more than before with the 64 KB stripe size and should now be barely enough for my applications. I saw no significant change in copying smaller files (10 MB) or in read performance measured with Iometer. Service used: SMB/CIFS, connected to XP SP2.
So, stripe size may not be the only reason for poor write performance with larger files and larger transfer sizes, but it is an important one. Maybe I should check with NFS too.
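For what it's worth, a quick sanity check of that improvement figure, using the approx. 7.3 MB/sec average quoted earlier in the thread as the before value:

# Relative improvement after changing the stripe size from 64 KB to 1024 KB.
before_mbs = 7.3    # approx. average write speed reported with the 64 KB stripe
after_mbs = 12.8    # Iometer result with the 1024 KB stripe
print(f"about {(after_mbs / before_mbs - 1) * 100:.0f} % faster")   # roughly 75 %, in the ballpark of the approx. 70 % quoted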
|
|
|
Post by gideon007 on Jan 28, 2007 20:09:41 GMT 7
I wonder, can stripe size be changed without data loss, or is that a thing to do before putting data on?
|
|
|
Post by N5200user on Jan 28, 2007 20:22:26 GMT 7
> can stripe size be changed without data loss
No
> or is that a thing to do before putting data on?
Yes
Best regards
|
|
|
Post by gideon007 on Jan 28, 2007 22:25:27 GMT 7
thanks.
|
|
|
Post by N5200user on Jan 29, 2007 4:09:23 GMT 7
Finally, some SMB/CIFS results for a 340000 KB transfer from: iozone -Rab c:\temp\N5200.xls -i 0 -i 1 -i 2 -g 360000 -n 170000 -s 350m -f y:/test.tmp
Record size (KB) | 8 | 32 | 128 | 512 | 2048 | 8192 | 16384
Write (KB/s) | 13204 | 23041 | 17819 | 18350 | 17547 | 23051 | 20593
Read (KB/s) | 14968 | 22129 | 23713 | 24556 | 21815 | 25700 | 27145
As mentioned above N5200 has three WD5000YS in RAID 5 configuration with stripe size 1024 KB and 500 GB used data capacity.
I think the IOzone test file is far too small. There could be big variations caused by lucky and unlucky storage locations.
I saw "CPU Loading" up to 62 %. Maybe there were higher values too.
|
|
|
Post by omega on Jan 29, 2007 6:39:47 GMT 7
Hi N5200user,
there are two things I don't understand:
1) Why are almost all write speeds lower than the read speeds? In my measurements it was the opposite, and that makes sense, because when you write you cannot disable the caching at the target.
2) Why are my throughput numbers the same as or even higher than yours, even though I'm not using bigger stripe sizes or jumbo frames?
For the CPU load you mention: what number do you use / where do you get the number?
I use the top command to see how much CPU load the smbd process takes.
Maybe I'll do another iozone measurement with 2 GB of transfer data. This should give us reliable results.
Andreas
|
|
|
Post by Arctra on Jan 29, 2007 9:31:23 GMT 7
Hey Omega, I'm a little confused by your comment that write speeds should be quicker than read speeds. I've always believed the opposite, and I think the benchmarking results show that. If you look at the latest Small Net Builder benchmarks for the updated Infrant ReadyNAS NV+, it confirms it: www.smallnetbuilder.com/content/view/29829/75/1/3/

As far as throughput goes, I cannot see why the stripe size should have any significant effect on your transfer rates. I thought the stripe size affected your effective storage capacity, i.e. if you write a 4k file to a RAID array with a 64k stripe size you lose 60k, but if your stripe size is 128k you lose 124k. *shrug*

N5200user: What do you mean, you think the IOzone test file is too small? You can set the test file size to be as big or small (and anywhere in between) as you want by changing the various arguments. If you read the IOzone manual it explains the arguments fairly well. If you want some help with it I'm happy to try to help. For a 2GB file try this command:

iozone -Rab c:\Test1.xls -g 2G -i 0 -i 1 -i 2 -q 1g -y 1k -n 1k -z -f \\ThecusN5200\share\test.tmp
|
|
|
Post by N5200user on Jan 29, 2007 12:59:27 GMT 7
> 1) Why are almost all write speeds lower than the read speeds? In my measurements it was the opposite, and that makes sense, because when you write you cannot disable the caching at the target.

Every performance test of RAID 5 I know of shows lower values for writing than for reading. See also en.wikipedia.org/wiki/RAID_5#RAID_5

> As far as throughput goes, I cannot see why the stripe size should have any significant effect on your transfer rates.

See the user's manual, page 25. (This is a known fact.) "This sets the stripe size to maximize performance ... A larger stripe size is better for large files."

> What do you mean, you think the IOzone test file is too small?

I used the IOzone command you and omega recommended to get comparable results. I don't have the time to use it with a 5 GB or even 50 GB test file. In my opinion (based only on short experience), Iometer gives results which are more like real-life applications than IOzone. Try it with a "Maximum Disk Size" of 10000000 sectors (5 GB test file); otherwise it checks the whole RAID. You can also see intermediate values during the test, not only the average. Best regards
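For anyone wondering where the 5 GB comes from: Iometer's "Maximum Disk Size" is counted in sectors, so assuming the usual 512-byte sector size the arithmetic is:

# Iometer "Maximum Disk Size" in sectors -> test file size,
# assuming the usual 512-byte sector size.
sectors = 10_000_000
size_bytes = sectors * 512
print(size_bytes / 1e9)        # 5.12 GB (decimal)
print(size_bytes / 2**30)      # about 4.77 GiB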
|
|