Network Issues

Testing network performance

There are many ways to measure network throughput on *nix machines. Some of the most valuable are summed up in this article. Please see the corresponding links at the end for further explanation.

netperf

Installation:

sudo apt-get install netperf
sudo apt-get install ethstatus

Start netperf as a daemon (Ubuntu / Debian way):

sudo /etc/init.d/netperf start

Open a separate terminal.

ethstatus -i eth0

Then start from the server...

netperf -H ip_of_client

...and vice versa on client

netperf -H ip_of_server

The last column shows the throughput in (power of ten) bits/sec, which has to be divided by 8 to get bytes/sec.
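
For example, a quick back-of-the-envelope conversion of a hypothetical result of 940 x 10^6 bits/sec (the number is made up, only the arithmetic matters):

# hypothetical netperf result: 940 x 10^6 bits/sec
echo "940 / 8" | bc -l    # = 117.5 x 10^6 bytes/sec, i.e. roughly 117 MB/s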

netcat

First approach

From client machine to server ...

On server:

netcat -l -p 1234 > /dev/null

On client:

dd if=/dev/zero bs=1M count=1000 | netcat server 1234

From server to client machine ...

On client:

netcat -l -p 1234 > /dev/null

On server:

dd if=/dev/zero bs=1M count=1000 | netcat client 1234

You can replace /dev/zero with e.g. /dev/md0 (for a RAID device) or /dev/sda to test your hard disk performance.
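
To turn this into a number, you can time the transfer on the sending side. A minimal sketch (the IP address 192.168.1.10 is only an example; it assumes the receiver is already listening on port 1234 as above):

# time the transfer of 1000 MB from the client to the listening server
time dd if=/dev/zero bs=1M count=1000 | netcat 192.168.1.10 1234
# 1000 MB divided by the elapsed ("real") seconds gives the throughput in MB/s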

Second approach

1st computer (the server):

nc -v -v -l -n -p 5096 >/dev/null

2nd computer (the client):

time yes | nc -v -v -n 192.168.1.1 5096 >/dev/null
Ctrl+C

On the server, note the number of bytes received ("rcvd"). Divide it by the time reported on the client to get the throughput in bytes/sec (multiply by 8 for bits/sec). This is the maximum (theoretical) speed you will get for file transfers over the network.
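
A small worked example with made-up numbers (nc reported "rcvd 1181116006" bytes, the client's time was 10.2 s):

echo "1181116006 / 10.2 / 1048576" | bc -l       # ~ 110 MB/s
echo "1181116006 * 8 / 10.2 / 1000000" | bc -l   # ~ 926 Mbit/s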

Combine

If you want to measure the real disk performance, use e.g.

dd if=/dev/zero of=3_2-GIG-file bs=1M count=3096

or

dd if=/dev/sda of=/dev/null bs=1M
Ctrl + C

or with hdparm (the buffered disk read is the value of interest):

hdparm -tT /dev/disk

For realistic results you should also take into account local disk reads/writes by the operating system, other running tasks, and the fact that all components share the same (!) PCI bus. For that reason you have to add up disk usage and network usage.
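
A minimal sketch that exercises disk and network together, combining the dd and netcat examples from above (the IP address is an example, and /dev/sda stands for your disk):

# server: discard whatever arrives on port 1234
netcat -l -p 1234 > /dev/null

# client: stream the first 1000 MB of the disk over the network instead of /dev/zero
dd if=/dev/sda bs=1M count=1000 | netcat 192.168.1.10 1234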

iperf

Measures the network performance.

On the server, start iperf in server mode:

iperf -s

On the client machine run:

iperf -c server_ip -i 1 -t 10
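
Some optional variations (a hedged sketch; the flags refer to the classic iperf version 2):

iperf -c server_ip -P 4          # four parallel TCP streams
iperf -c server_ip -u -b 100M    # UDP test with a 100 Mbit/s target rate
iperf -s -w 256k                 # server with a larger TCP window size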

Typical values

  • 15-25 Mbits/sec for a 54Mbit Wireless LAN
  • 300-350 Mbits/sec for a Gigabit LAN

The values depend heavily on:

  • quality of the network card
  • quality of the cable (already CAT 6!?) or the wireless connection
  • bus speed
  • processor speed


ethtool

ethtool gives you important information about your network card, its settings and current modes:

ethtool eth0
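
Two further uses that may be helpful (exact output and supported options depend on the driver):

ethtool -S eth0                                           # driver statistics (errors, dropped packets, ...)
sudo ethtool -s eth0 speed 100 duplex full autoneg off    # force 100 Mbit/s full duplex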

sysctl

TCP/IP buffer settings:

for i in rmem wmem mem; do sysctl net.ipv4.tcp_${i}; done

gives e.g. on an older machine (366 MHz AMD K6, 33 MHz bus):

net.ipv4.tcp_rmem = 4096	87380	577600
net.ipv4.tcp_wmem = 4096	16384	577600
net.ipv4.tcp_mem = 13536	18050	27072

and on a "newer" system (AMD Athlon XP 2000+, 133MHz bus)

net.ipv4.tcp_rmem = 4096	87380	4194304
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_mem = 170592	227456	341184

It can be set with e.g.:

sysctl -w net.ipv4.tcp_wmem="4096 16384 1048576"

But beware: you should know what you are doing! On modern Linux systems this should not be necessary.
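
If you really want such a change to survive a reboot, a minimal sketch using the usual /etc/sysctl.conf mechanism (the values are only an example):

# add to /etc/sysctl.conf
net.ipv4.tcp_wmem = 4096 16384 1048576

# reload the settings
sudo sysctl -p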

MTU - Jumbo Frames

So called "Jumbo Frames" can reduce the CPU usage on the server. The network throughput is _NOT_ improved very much. Kernel greater than 2.6.17 support frame bigger than 1500 bythes.

Note that both the network card (driver) and the switches in between must support this kind of setting. It is also not advisable to use this across large networks (the internet connection might suffer); it is better suited for a direct connection between separate NICs. Test e.g. with

ifconfig eth0 mtu 9044; echo -- DO SOME TESTS NOW --; sleep 120; ifconfig eth0 mtu 16436

16436 is the default setting on current systems. Another good value seems to be 4022.

Network configuration file /etc/sysconfig/network-scripts/ifcfg-eth0 (CentOS / RHEL / Fedora Linux):

MTU=9044

Debian / Ubuntu Linux uses the /etc/network/interfaces configuration file:

mtu 9044
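
A sketch of a complete stanza in /etc/network/interfaces (addresses are only examples):

auto eth0
iface eth0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    mtu 9044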

Test with

/etc/init.d/networking restart 
ip route get IP-address
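
You can also verify that large frames actually pass unfragmented with ping's "don't fragment" option (9016 = 9044 minus 28 bytes of IP/ICMP headers):

ping -M do -s 9016 -c 4 IP-address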

NFS performance

For gigabit networks use e.g.

rsize=32768,wsize=32768
rsize=65536,wsize=65536
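
A sketch of a corresponding /etc/fstab entry (server name and export path are hypothetical):

nfsserver:/export/data  /mnt/data  nfs  rsize=65536,wsize=65536,hard,intr  0 0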

See [5], [7]

Links