Post by howarddavidp on May 4, 2009 4:28:48 GMT 7
Hello,
I have my interfaces configured as bond0, and the MTU was set to the default of 1500 even though I have jumbo frames enabled in the web interface.
My question is this: I did an "ifconfig bond0 mtu 9000" and it took, so how do I make this change permanent across reboots?
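I'm guessing the answer is a boot script that re-applies the setting, something like the sketch below, but the path is a pure guess on my part since I don't know where this firmware runs user startup scripts from:

  #!/bin/sh
  # Re-apply the jumbo MTU to the bonded interface at boot.
  # Where this script has to live depends on the firmware; presumably
  # the SYS user module runs user scripts from somewhere at startup.
  /sbin/ifconfig bond0 mtu 9000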
I used 9000 instead of 8000 because my Dell PowerConnect 2716 switch only supports jumbo frames of 9000. The NICs in my PCs are Intels, and they will do either 8000 or 9000.
Will setting the MTU cause any problems on the 5200 Pro? I have five Seagate 750GB drives in RAID 5, and I have the SSH and SYS user modules installed.
Thanks,
David
Post by valshare on May 5, 2009 23:37:28 GMT 7
Hi,
how fast is the N5200 Pro? I bonded the interfaces, but it was slower than a single interface. The transfer rate with bond0 was only 9 MB/s.
Regards, Valle
Post by howarddavidp on May 6, 2009 4:04:58 GMT 7
Hello,
That might be why one interface was faster than two. When I bonded, my unit did NOT make the bond0 interface use jumbo frames; it automatically set the MTU to 1500 on that interface as well as on the individual interfaces. The lack of jumbo frames on the bonded interface alone could explain why one interface was slower than two.
I was able to log in with SSH using the SSH mod and manually set the bonded interface to an MTU of 9000. With jumbo frames enabled, I get 40+ MB/s using HD Tune.
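For anyone else trying this, the whole change over SSH is just the two commands below, and you can prove jumbo frames survive the full path with a don't-fragment ping of 8972 bytes (9000 minus 28 bytes of IP/ICMP headers):

  # On the NAS, via the SSH mod:
  ifconfig bond0 mtu 9000
  ifconfig bond0               # output should now show MTU:9000

  # From a Linux client; if anything on the path can't pass 9000-byte
  # frames unfragmented, this ping fails instead of silently fragmenting:
  ping -M do -s 8972 <nas-ip>
  # Windows equivalent: ping -f -l 8972 <nas-ip>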
David
Post by drewy on May 8, 2009 6:22:02 GMT 7
Jumbo frames can be a bit of a lottery. It's very much a case of "suck it and see". I've seen improvements on some client devices with a certain combination of MTU, OS, LAN card and drivers, and worse performance on a slightly different mix. I've also found (depending on the hardware) that it's not always the biggest supported frame size that gives the best performance. You can easily spend days benchmarking different configurations...
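If you do want to sweep configurations from a Linux client rather than clicking through them by hand, a quick-and-dirty loop is enough for rough numbers. A minimal sketch, assuming the NAS share is mounted at /mnt/nas and the client NIC is eth0 (both just example names):

  #!/bin/sh
  # Rough MTU sweep: set the client NIC's MTU, then time a 1GB
  # sequential write to the NAS share at each size.
  for mtu in 1500 4000 8000 9000; do
      ifconfig eth0 mtu $mtu
      echo "MTU $mtu:"
      dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=1024 2>&1 | tail -1
      rm -f /mnt/nas/testfile
  done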
Post by howarddavidp on May 14, 2009 2:47:19 GMT 7
Hello,
I did my testing; here is what I saw.
Setup: Dell 690 with an onboard Broadcom NIC (no jumbo support) and a PCI Intel NIC with jumbo support (the driver setting says 9014). Dell 2716 switch; its setting says jumbo, and the specs from Dell say it is 9K.
I have the 5200 Pro set up with MTU 9000 on bond0, using both NICs to the switch; the switch has a LAG (trunk) of 2 ports.
I used HD Tune's File Benchmark with 4MB files.
Broadcom (no jumbo): reads 70 MB/s, writes 60 MB/s
Intel (no jumbo): reads 60 MB/s, writes 45 MB/s
Intel (jumbo): reads 68 MB/s, writes 58 MB/s
Team (no jumbo): reads 68 MB/s, writes 60 MB/s
Team (jumbo): reads 68 MB/s, writes 62 MB/s
So, from this, the fastest overall was the Broadcom without jumbo (its driver didn't have the option), but only barely. If you are going to team, use jumbo. The other consideration is that teaming gives you some redundancy, which is useful in some server environments.
David