|
Post by Arctra on Jan 29, 2007 13:45:50 GMT 7
1) Why are almost all write speeds lower than the read speeds? In my measurements it was the opposite.

And it makes sense, because when you write you cannot disable the caching at the target. Every performance test of RAID 5 I know of shows lower values for writing than for reading. See also en.wikipedia.org/wiki/RAID_5#RAID_5 (this is a known fact).

See the user's manual, page 25: "This sets the stripe size to maximize performance ... A larger stripe size is better for large files."

I've done some more reading on this and realise I've been confusing cluster size with stripe size. I see now that stripe size will affect performance, going by this quote from another thread:

"Smaller stripe sizes improve STR of smaller files at the expense of service times (similar to 'access time') for 2 or more randomly positioned/accessed files. The larger the stripe size, the greater the probability that any two files will be on opposite drives in their unstriped form, thus facilitating concurrent access to them. This improves multitasking ability and service times of multiple random I/O requests (even at lower I/O queue depths) at the expense of small-file average STR."

STR = Sustained Transfer Rate

What do you mean, you think the IOzone test file is too small? I used the IOzone command you and omega recommended to get comparable results. I don't have the time to use it with a 5 GB or even 50 GB test file.

Ah, OK. Yeah, I was only really interested in benchmarking for file sizes around the 350MB mark, as that's the average size of files I store on my NAS. Testing with a 50GB file would take FAR too long, I agree ;D

In my opinion (only short experience), Iometer gives results which are more like real-life applications than IOzone. Try it with "Maximum Disk Size" set to 10000000 sectors (a 5 GB test file); otherwise it checks the whole RAID. You can also see intermediate values during the test, not only the average. Best regards

I haven't tried IOmeter yet. I may give it a go this weekend and see what the results are like. Cheers.
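To make the quoted stripe-size trade-off concrete, here is a rough sketch in shell arithmetic. The drive numbers are illustrative only: real RAID 5 also rotates parity across the drives, which this ignores, and the 512-byte sector size for the Iometer setting is the usual assumption.

  # With stripe size S and N drives, the stripe holding byte offset O
  # lands (ignoring parity rotation) on drive (O / S) mod N.
  S=$((64 * 1024)); N=3
  for O in 0 65536 131072 196608; do
    echo "offset $O -> drive $(( (O / S) % N ))"
  done
  # Iometer's "Maximum Disk Size" is counted in 512-byte sectors,
  # so 10000000 sectors is roughly the 5 GB test file mentioned above:
  echo $((10000000 * 512))   # 5120000000 bytes, i.e. ~5 GB

With a small stripe a single large file spans all drives (good sequential STR for that one file); with a large stripe two small files are more likely to sit whole on different drives (better concurrent access).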
|
|
|
Post by omega on Jan 29, 2007 16:21:37 GMT 7
Hi,

Arctra wrote: "I'm a little confused at your comment that write speeds should be quicker than read speeds. I've always believed the opposite, and I think the benchmarking results show that. If you look at the latest Small Net Builder benchmarks for the updated Infrant ReadyNAS NV+ it confirms it: www.smallnetbuilder.com/content/view/29829/75/1/3/"

I haven't seen this web site before, but I suggest reading the numbers carefully, as they show what I said. Omit the small files and let's have a look at the 524 MB example: the write speed is 25111 KB/s and the read speed is 17029 KB/s.

When doing benchmarks on network drives, where you access the files to be read or written using protocols that give you almost no influence on caching (like CIFS; NFS does have the sync option, though), you shouldn't put much weight on the write speed. The reason: while you are writing the last blocks of a file, the receiver takes that data, puts it into the file cache (in main memory) and immediately sends back the confirmation. This applies especially to CIFS, where, as far as I know, there is nothing like a "sync" option which tells the receiver to hold the acknowledgement back until the data has been successfully written to disk. When reading data from the NAS, there is no other way than waiting until the last block of data is sent by the NAS, so you don't get this kind of caching effect. Therefore the write procedure seems to be faster than the read. And this is what my measurements and even the Small Net Builder benchmarks are showing.

Andreas

P.S. I don't like the quoted benchmark site very much, as their interpretation of the results is misleading (IMHO). Why is the read/write throughput decreasing so much with the file size? The only reason I can see is caching effects, directly related to the amount of main memory. But benchmarks and their interpretation are like a bikini: although you see almost everything, the most interesting parts are covered ;D
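A crude way to see the caching effect Andreas describes is to time the same large write with and without forcing a flush. This is only a sketch: it assumes a Linux client with GNU dd and the NAS share mounted at a made-up path /mnt/nas.

  # Plain write: the rate dd reports can include data still sitting in
  # RAM caches (the client's page cache and the NAS's buffer cache).
  dd if=/dev/zero of=/mnt/nas/cachetest.bin bs=1M count=512

  # conv=fsync makes dd flush before reporting, so the figure is closer
  # to the rate at which data actually reaches the disks.
  dd if=/dev/zero of=/mnt/nas/cachetest.bin bs=1M count=512 conv=fsync

Reads have no equivalent shortcut (short of re-reading already cached data), which is why the read numbers tend to be the more honest ones.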
|
|
|
Post by N5200user on Jan 29, 2007 17:18:26 GMT 7
For your reference only:
During my tests the XP PC was directly connected to the N5200; nothing in between other than the cable.
All our RAID 5 arrays (approx. 8, from different manufacturers) write slower than they read. The N5200 was the first to be checked, because of its poor write performance, so I have no comparison data up to now.
Best regards
|
|
|
Post by omega on Jan 29, 2007 17:40:11 GMT 7
N5200user,

1) How did you measure the CPU load when doing the benchmarks?

2) Your iozone benchmark is showing approximately the same speeds as my benchmark, but you've tuned your system... I don't understand that.

And I agree: iozone does basic brute-force data transfer benchmarks, whereas Iometer tries to simulate real-life scenarios. So the iozone numbers should show the practical maximum. Don't get me wrong, I only want to learn and understand how it works...

Andreas
|
|
|
Post by Arctra on Jan 29, 2007 18:31:06 GMT 7
*LOL*
You're right Omega! I guess this is a perfect example of me seeing what I want to see ;D
So do you think that'd be the hard drive's cache (my Samsungs have 16MB caches) or the N5200's RAM that is affecting the speeds? Or maybe both?
At the end of the day I gotta say I think the N5200 is more than quick enough for my uses so I'm a happy man.
|
|
|
Post by N5200user on Jan 29, 2007 18:50:44 GMT 7
"1) How did you measure the CPU load when doing the benchmarks?"

The "CPU Loading" can be seen under "Status" -> "System" in the N5200 WebGUI.

I have no idea what Iometer and IOzone are doing in the finer details; I never wrote programs like these. I just see that the performance measurements of Iometer are similar to those of my different applications. When I had a software crash caused by low data transfer, it was at the 6 - 8 MB/sec write limit. 13 MB/sec seems to be enough. Time will show.

You also have 3 x WD5000YS with 500 GB of data on them? You know where the small (compared to 1 TB) test files are placed on the disks, and disk speed differs a lot from the outside to the inside of the platters. So why should the results be directly comparable?

The load on the N5200 seems to be much higher with Iometer, because the CPU Loading of the N5200 is very much higher in this case.

Best regards
|
|
|
Post by omega on Jan 29, 2007 20:19:48 GMT 7
@arctra: No, it's not the write cache of the hard disks, it's the write cache of the Linux operating system (the buffer cache). But indeed, the read speed is influenced by the disk cache too, especially when reading sequential data from the disk.

N5200user: I wouldn't put much weight on the percentage shown by the Thecus software; I don't really understand their calculation. I'm using the native reporting, e.g. what the top command shows (or ps), for which I know the results can be trusted. And according to this, even at the highest speed the average CPU load was between 25% and 30%.

My configuration for the tests: N5200 with RAID 5, 5 SAMSUNG HD400LJ hard disks, already holding 350GB out of 1400GB, connected to the Linux and WinXP PCs over a gigabit switch. The RAID setup was done using the default values, and no jumbo ethernet frames.

So, I really think our results should be comparable, don't you?

Andreas
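For what it's worth, the top/ps approach can be scripted so the samples line up with a benchmark run. A sketch, assuming you have shell access to the N5200 (not a stock feature) and a procps-style top:

  # Batch mode: 10 samples, 2 seconds apart; keep only the CPU summary line.
  top -b -n 10 -d 2 | grep 'Cpu(s)'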
|
|
|
Post by N5200user on Jan 29, 2007 22:05:36 GMT 7
"So, I really think our results should be comparable, don't you?"

Don't know. We also have another N5200 with 5 x WD5000YS, and we will measure it with Iometer, maybe tomorrow. That one does not have as heavy a write load as the one with the 3 x WD5000YS -> there was no problem. I think a test file should be at least 5 GB for reproducible results.
|
|
|
Post by omega on Jan 30, 2007 16:23:23 GMT 7
Hi, I've redone the iozone benchmarks using bigger file sizes (2GB - 8GB). The command I used was this:

iozone -Razeo -b n5200-5GB.xls -n 2g -g 8g -y 8k -q 64k -i 0 -i 1 -f \\n5200\test.bin

This is the result for accessing the N5200 using the SMB/CIFS protocol:

Filesize (KB) | reclen (KB) | write (KB/s) | rewrite (KB/s) | read (KB/s) | reread (KB/s)
2097152 | 8 | 14199 | 14306 | 15604 | 15747
2097152 | 16 | 21148 | 20225 | 18066 | 18302
2097152 | 32 | 26114 | 25266 | 22391 | 22726
2097152 | 64 | 28833 | 27817 | 24045 | 22757
4194304 | 8 | 15043 | 14680 | 15786 | 15853
4194304 | 16 | 21337 | 20477 | 18231 | 18736
4194304 | 32 | 26028 | 24803 | 22775 | 22528
4194304 | 64 | 26625 | 27980 | 23227 | 23138
8388608 | 8 | 14772 | 14554 | 15746 | 15748
8388608 | 16 | 21087 | 20360 | 18677 | 18678
8388608 | 32 | 26192 | 24887 | 22935 | 22813
8388608 | 64 | 27811 | 26555 | 23439 | 23747
IMHO these values should say something about the maximum transfer speed you can expect with a somewhat standard configuration (gigabit network, no jumbo frames, a "normal" WinXP PC, N5200 already filled with 350 GB).

Andreas
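For anyone who wants to repeat the run, here is the same command again with each flag spelled out (my reading of the iozone documentation; check iozone -h on your build):

  # -R            Excel-style report output
  # -a            auto mode (matrix of file/record sizes)
  # -z            with -a, also test all the small record sizes
  # -e            include flush (fsync/fflush) in the timings
  # -o            open the file O_SYNC, i.e. synchronous writes
  # -b FILE       write binary Excel results to FILE
  # -n 2g -g 8g   file sizes from 2 GB up to 8 GB
  # -y 8k -q 64k  record sizes from 8 KB up to 64 KB
  # -i 0 -i 1     only test 0 (write/rewrite) and test 1 (read/reread)
  # -f PATH       the test file (here a UNC path on the N5200 share)
  iozone -Razeo -b n5200-5GB.xls -n 2g -g 8g -y 8k -q 64k -i 0 -i 1 -f \\n5200\test.bin

Note that -e and -o deliberately force flushing and synchronous writes, which should keep the write figures from being purely cache numbers.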
|
|
|
Post by Arctra on Jan 31, 2007 19:04:27 GMT 7
I ran the same IOzone test as Omega on my N5200, which has 3 Samsung 500GB drives in a RAID 5 array with a 64k stripe size. I have Jumbo Frames enabled at 8k on the N5200 and 7k on my PC (that's the highest it'll go). My PC is an out-of-date Athlon XP 2800+, 1GB RAM, Gigabyte GA7N400 Pro2 motherboard with gigabit ethernet, running Windows XP Pro SP2. The connection is through a Netgear GS605 v2 Gigabit Switch (4 port). Tests were run over an SMB share.

Filesize (KB) | reclen (KB) | write (KB/s) | rewrite (KB/s) | read (KB/s) | reread (KB/s)
2097152 | 8 | 12376 | 17596 | 23602 | 24012
2097152 | 16 | 16152 | 18224 | 23964 | 24383
2097152 | 32 | 19480 | 18598 | 17652 | 17849
2097152 | 64 | 21388 | 18794 | 21182 | 21698
4194304 | 8 | 20371 | 20282 | 24531 | 24623
4194304 | 16 | 20979 | 19991 | 24409 | 24893
4194304 | 32 | 21260 | 20372 | 18240 | 18321
4194304 | 64 | 23116 | 20650 | 21684 | 21607
8388608 | 8 | 19152 | 19855 | 24145 | 24312
8388608 | 16 | 20885 | 20459 | 24480 | 24469
8388608 | 32 | 22824 | 21023 | 18612 | 18657
8388608 | 64 | 22910 | 21243 | 21698 | 21637

Generally slower transfer rates, although I'm not sure if that's because of the Thecus or my PC.
|
|
|
Post by omega on Jan 31, 2007 20:03:11 GMT 7
Arctra,
thank you for the benchmark.
The difference is most likely because of the network interface in your computer. The Thecus N5200 will always do about the same speed in that region (about 20-30 MB/s) regardless of which disks are used, because all current disks will do at least 40 MB/s. Your switch will not be the bottleneck either, as the transfer rate only requires about 30% of the total network speed.
So what else is left? Either the network card or the PC or both.
Andreas
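A quick back-of-the-envelope check of that bottleneck argument, in decimal units:

  # ~30 MByte/s on the wire is ~240 Mbit/s, i.e. roughly a quarter of a
  # 1000 Mbit/s link, so the gigabit switch has plenty of headroom.
  echo $((30 * 8))   # 240 (Mbit/s)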
|
|
|
Post by N5200user on Feb 1, 2007 2:21:18 GMT 7
"We have also another N5200 with 5 x WD5000YS and we will measure with Iometer, maybe tomorrow. That one does not have as heavy a write load as the one with the 3 x WD5000YS -> there was no problem."

The other N5200 with 5 x WD5000YS (stripe size 64 KB) also shows 7.x MB/sec write performance when measured with the same Iometer settings ("Transfer Request Size" of 1 MB) as the first one. The number of disks has no influence under these conditions.

I think the different results between IOzone and Iometer can have the following reason:

- With the "Transfer Request Size" of 1 MB, Iometer can produce a continuous data stream (like a backup or HD video stream). The "CPU Loading" of the N5200 is near its maximum.
- In IOzone, "record size" has a different meaning; there is no continuous data stream. This can be verified e.g. via the "CPU Loading" of the N5200, which shows much less stress. There seem to be small breaks between the write commands, and in these slots the N5200 seems to recover.

According to the IOzone manual, "It is normal for the rewrite performance to be higher than the performance of writing a new file." But when I use the IOzone command "iozone -RazMb c:\temp\N5200_3.wks -i 0 -i 1 -f y:/test2.tmp" I see many values where Re-Write is much lower than Write, especially when the test file is 131072 KB. Any ideas on this?
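One variable worth eliminating here (a suggestion, not a verified explanation): unlike omega's command, this one does not pass -e, so the flush is not included in the timings, and a 131072 KB file fits easily in cache. A variant worth trying, assuming the same Windows iozone build:

  # Same run, but -e folds the fsync/fflush into the timing, which usually
  # makes write vs. rewrite comparisons less cache-coloured.
  iozone -RazMeb c:\temp\N5200_3.wks -i 0 -i 1 -f y:/test2.tmp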
|
|
|
Post by jaco on Mar 8, 2007 13:24:42 GMT 7
I transfer large HDTV video files using FTP (with Total Commander). The behaviour I see seems to be a buffer 'problem' in the 5200: the transfer starts at 8000 kbyte/sec, then jumps up to ~25000 kbyte/sec, and then down to 8000 again.
So it seems the 5200 needs some time to calculate the RAID 5 parity information, or to flush its buffers to disc?
My host is a dual Xeon with onboard Intel PRO/1000 LAN interfaces. In the PC is a PCI-X Areca RAID 5 controller with 5 WD RE2 400 GB HDs. I have a Linksys 2008 Gigabit switch and Cat 6 cabling. I use no jumbo frames, and have used a tool, DrTCP, to adjust the TCP window sizes etc.
I will need to time what the average transfer rate is, but with 5 - 10 GB files it is nowhere near 20 MByte/sec, I fear. But since it is much faster than my old NAS, I am still happy with it.
When I ran it on a 100 Mbit switch, the performance did not jump up and down so much (between 5000 and 8000 kbyte/sec).
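If you want numbers on those swings rather than eyeballing the FTP client, something like pv gives a running rate. A sketch, assuming a Linux/Unix box with pv installed, the N5200 share mounted at a made-up /mnt/n5200, and a placeholder filename:

  # pv prints the instantaneous transfer rate while copying, so the
  # fast-then-stall pattern shows up directly.
  pv some_hdtv_recording.ts > /mnt/n5200/some_hdtv_recording.ts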
|
|
|
Post by jaco on Mar 8, 2007 14:45:07 GMT 7
OK, I clocked 2 transfers: 4168 MByte in 6:05 minutes ~ 11.4 MByte/sec, and 4014 MByte in 5:43 minutes ~ 11.7 MByte/sec.
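The arithmetic checks out (shell version, using bc):

  echo "scale=1; 4168 / (6*60 + 5)" | bc    # 11.4 (MByte/sec)
  echo "scale=1; 4014 / (5*60 + 43)" | bc   # 11.7 (MByte/sec)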
The CPU shows 30 - 47% (in the web interface).
RAID Level: RAID 5
Status: Healthy
Total Capacity: 1,903,888 MB
Data Capacity: 594,706 MB / 1,841,691 MB (32.3%)
Snapshot Capacity: N/A
USB Capacity: N/A
Stripe Size: 4096 KB
I think this is a big stripe size, perhaps too big?
|
|