Network Issues


Testing network performance

netcat

From the client machine to the server ...

On server:

netcat -l -p 1234 > /dev/null

On client:

dd if=/dev/zero bs=1M count=1000 | netcat server 1234
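Note that dd prints its transfer rate when it finishes, but depending on the netcat variant the client side may not exit on its own. With netcat-traditional the -q option closes the connection after EOF on stdin (a sketch, assuming that variant):

dd if=/dev/zero bs=1M count=1000 | netcat -q 0 server 1234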

From the server to the client machine ... On the client:

netcat -l -p 1234 > /dev/null

On server:

dd if=/dev/zero bs=1M count=1000 | netcat client 1234

You can replace /dev/zero with e.g. /dev/md0 (a RAID device) or /dev/sda to test your hard disk performance over the network.
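For example, a sketch of such a disk-to-network test, assuming the server is still listening on port 1234 as above and the disk to read from is /dev/sda:

dd if=/dev/sda bs=1M count=1000 | netcat server 1234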

Combine with disk performance

If you would like to measure the raw disk performance, use e.g.:

dd if=/dev/sda of=/dev/null bs=1M

Stop it after a while with Ctrl + C; dd then prints the achieved read rate.
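If you want intermediate statistics without stopping the transfer, GNU dd prints them on receiving the USR1 signal; from a second terminal:

# make the running dd print its current statistics
kill -USR1 $(pidof dd)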

or with hdparm (the buffered disk read value is the one of interest):

hdparm -tT /dev/disk

For realistic results you should also take into account local disk reads/writes by the operating system and other running tasks, and the fact that all components may share the same (!) PCI bus. For that reason you have to add up disk usage and network usage.
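A simple way to keep an eye on the disk and CPU side while such a test runs is vmstat (the bi/bo columns show block I/O, us/sy/id the CPU usage); the network side is already reported by dd or iperf:

vmstat 1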

iperf

Measures the network performance.

On the server, start iperf in server mode:

iperf -s

On the client machine, run:

iperf -c server_ip -i 1 -t 10
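Here -i 1 prints an intermediate result every second and -t 10 runs the test for ten seconds. iperf (version 2) can also measure the reverse direction in the same run, e.g. with the tradeoff mode:

iperf -c server_ip -i 1 -t 10 -r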

Typical values

  • 15-25 Mbits/sec for a 54Mbit Wireless LAN
  • 300-350 Mbits/sec for a Gigabit LAN

The values depend heavily on

  • quality of the network card
  • quality of the cabling (CAT 6 already?) or of the wireless connection
  • bus speed
  • processor speed


ethtool

ethtool gives you important information about your network card, its settings and its current modes:

ethtool eth0
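ethtool can also change these settings. A sketch that forces 100 Mbit/s full duplex with autonegotiation switched off (use with care, the switch port has to match):

ethtool -s eth0 speed 100 duplex full autoneg off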

sysctl

Settings for the TCP/IP buffers:

for i in rmem wmem mem; do sysctl net.ipv4.tcp_${i}; done

gives, for example, on an older machine (366 MHz AMD K6, 33 MHz bus):

net.ipv4.tcp_rmem = 4096	87380	577600
net.ipv4.tcp_wmem = 4096	16384	577600
net.ipv4.tcp_mem = 13536	18050	27072

and on a "newer" system (AMD Athlon XP 2000+, 133 MHz bus):

net.ipv4.tcp_rmem = 4096	87380	4194304
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_mem = 170592	227456	341184

They can be set with e.g.:

sysctl -w net.ipv4.tcp_wmem="4096 16384 1048576"
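To make such a change survive a reboot, put it into /etc/sysctl.conf and reload, e.g.:

# /etc/sysctl.conf
net.ipv4.tcp_wmem = 4096 16384 1048576

sysctl -p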

But beware: know what you are doing! On modern Linux systems this should not be necessary, since the kernel autotunes these buffers.

MTU

So-called "jumbo frames" can reduce the CPU usage on the server. The network throughput itself is _NOT_ improved very much. Kernels greater than 2.6.17 support frames bigger than 1500 bytes. Note that both the network card (and its driver) and all switches in between must support such a setting. So be careful and test first, e.g. with

ifconfig eth0 mtu 9000; sleep 30; ifconfig eth0 mtu 1500

1500 is the default MTU for Ethernet interfaces on current systems (16436 is only the default of the local loopback device, not of eth0).
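The currently active MTU of an interface can be checked with:

ip link show eth0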

Network configuration file /etc/sysconfig/network-scripts/ifcfg-eth0 (CentOS / RHEL / Fedora Linux):

MTU=9000

Debian / Ubuntu Linux uses the /etc/network/interfaces configuration file:

mtu 9000
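In context, the mtu line belongs into the stanza of the interface; a sketch assuming a statically configured eth0 (address and netmask are placeholders):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    mtu 9000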

Test with

/etc/init.d/networking restart 
ip route get IP-address
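Whether jumbo frames really make it across the path can be verified with ping and fragmentation prohibited; 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers give exactly 9000 (replace server with your target):

ping -M do -s 8972 -c 3 server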
