Network Issues
From Blue-IT.org Wiki
Revision as of 07:23, 1 May 2014
Links
* Network Lab (GER)
nmap
nmap -v -A ip_address
netstat
Show the name of the process using port 1812:
netstat -anp | grep 1812
Own internet ip
Fastest:
my_public_ip="$(dig @ns1.google.com -t txt o-o.myaddr.l.google.com +short | sed 's/"//g')"
Via curl and ipchicken.com or a similar service:
curl -s http://www.ipchicken.com/ | awk '/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/ {print $1}' | uniq
Own local ip
all ip's:
ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
or for a particular interface:
ifconfig wlan0 | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
or:
alias myip="ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'"
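The same can be sketched with the modern iproute2 `ip` tool (assuming iproute2 is installed; ifconfig is considered deprecated on current distributions):

```shell
# All local IPv4 addresses except loopback, using the iproute2 "ip" tool.
# "ip -4 -o addr show" prints one line per address; the address/prefix is field 4.
ip -4 -o addr show | awk '{print $4}' | cut -d/ -f1 | grep -v '^127\.'
```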
Monitor IP changes
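A minimal polling sketch for this, reusing the dig trick from above; the five-minute interval and the log format are arbitrary choices:

```shell
#!/bin/sh
# Poll the public IP and print a line whenever it changes.
get_ip() {
    # Returns the public IP as plain text (quotes stripped from the TXT record)
    dig @ns1.google.com -t txt o-o.myaddr.l.google.com +short | tr -d '"'
}
last=""
while true; do
    cur="$(get_ip)"
    if [ -n "$cur" ] && [ "$cur" != "$last" ]; then
        echo "$(date '+%F %T') public IP changed: ${last:-<none>} -> $cur"
        last="$cur"
    fi
    sleep 300
done
```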
Network Manager
Status of all devices
export LC_MESSAGES="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
nmcli dev status
Testing network performance
There are many ways to measure the network throughput on *nix machines. I am going to sum up some of the most valuable ones in this article. Please read the corresponding links at the end for further explanation.
netperf
Installation:
sudo apt-get install netperf
sudo apt-get install ethstatus
Start netperf as a daemon (ubuntu / debian way)
sudo /etc/init.d/netperf start
Open a separate terminal.
ethstatus -i eth0
Then start from the server...
netperf -H ip_of_client
...and vice versa on client
netperf -H ip_of_server
The last column shows the throughput in (power of ten) bits/sec (divide by 8 to get bytes/sec).
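For example, a hypothetical netperf figure of 944 (10^6 bits/sec, roughly a saturated gigabit link) converts like this:

```shell
# 944 * 10^6 bits/sec divided by 8 gives megabytes per second
echo 944 | awk '{printf "%.0f MB/s\n", $1 / 8}'
# prints: 118 MB/s
```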
netcat
First approach
From client machine to server ...
On server:
netcat -l -p 1234 > /dev/null
On client:
dd if=/dev/zero bs=1M count=1000 |netcat server 1234
From server to client machine ...
On client:
netcat -l -p 1234 > /dev/null
On server:
dd if=/dev/zero bs=1M count=1000 |netcat client 1234
You can replace /dev/zero with e.g. /dev/md0 (for a RAID device) or /dev/sda to test your hard disk performance.
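Note that dd itself prints the achieved rate when the transfer finishes; to watch the throughput live, pv can be inserted into the pipe (assuming the pv package is installed):

```shell
# Live throughput display during the netcat transfer
# ("server" is a placeholder hostname, pv is an extra package)
dd if=/dev/zero bs=1M count=1000 | pv | netcat server 1234
```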
Second approach
1st computer. (The Server)
nc -v -v -l -n -p 5096 >/dev/null
2nd computer. (The Client)
time yes | nc -v -v -n 192.168.1.1 5096 >/dev/null
Stop after a while with Ctrl+C.
On the server, note the number of bytes received (rcvd). Divide it by the time shown on the client to get bytes per second; multiply by 8 for bits per second. This is the maximal (theoretical) speed you will get for file transfers over the network.
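Worked through with hypothetical numbers (1181116006 bytes received, 98.3 seconds reported by time on the client):

```shell
# Compute throughput from bytes received and elapsed time (example figures)
rcvd=1181116006
seconds=98.3
awk -v b="$rcvd" -v t="$seconds" 'BEGIN { printf "%.1f MB/s (%.0f Mbit/s)\n", b/t/1000000, b*8/t/1000000 }'
# prints: 12.0 MB/s (96 Mbit/s)
```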
iperf
Measures the network performance.
On the server (start iperf in server mode); -u for UDP:
iperf -s [-u]
On the client machine do
iperf -c server_ip [-u] -i 1 -t 10   # runs for 10 seconds, reporting every second
iperf -c server_ip -u -b 1000M       # UDP with a target bandwidth of 1 Gbit/s
Typical values
- 15-25 Mbits/sec for a 54Mbit Wireless LAN
- 300-350 Mbits/sec for a Gigabit LAN
The values depend heavily on
- quality of the network card
- quality of the cabling (is it CAT 6 already?) or the wireless connection
- bus speed
- processor speed
iptraf-ng
* 03/2014 Linux iptraf and iftop: Monitor, Analyse Network Traffic and Bandwidth (http://www.slashroot.in/linux-iptraf-and-iftop-monitor-and-analyse-network-traffic-and-bandwidth)
* 12/2012 Network Traffic Analysis With Linux Tools (http://www.slashroot.in/network-traffic-analysis-linux-tools)
Ncurses-Interface:
iptraf-ng 1.1.4
┌─────────────────────────────────┐
│ IP traffic monitor              │
│ General interface statistics    │
│ Detailed interface statistics   │
│ Statistical breakdowns...       │
│ LAN station monitor             │
│─────────────────────────────────│
│ Filters...                      │
│─────────────────────────────────│
│ Configure...                    │
│─────────────────────────────────│
│ About...                        │
│─────────────────────────────────│
│ Exit                            │
└─────────────────────────────────┘
Displays current IP traffic information
Interesting are the IP checksum errors - which may indicate hardware errors - in the detailed view:
iptraf-ng 1.1.4
┌ Statistics for eth1 ────────────────────────────────────────────────────────
│
│               Total      Total    Incoming   Incoming   Outgoing   Outgoing
│             Packets      Bytes     Packets      Bytes    Packets      Bytes
│ Total:      1499246      4399M       91269    4911505    1407977      4394M
│ IPv4:       1498983      4399M       91093    4881798    1407890      4394M
│ IPv6:           263      37155         176      29326         87       7829
│ TCP:         126957      2345M       90591    4799078      36366      2341M
│ UDP:        1372093      2054M         528      78146    1371565      2053M
│ ICMP:           192      37152         147      33804         45       3348
│ Other IP:         4        128           3         96          1         32
│ Non-IP:           0          0           0          0          0          0
│
│
│ Total rates:       4.65 kbps        Broadcast packets:        44
│                       4 pps         Broadcast bytes:        8252
│
│ Incoming rates:    2.48 kbps
│                       1 pps
│                                     IP checksum errors:        0
│ Outgoing rates:    2.17 kbps
│                       2 pps
│
└ Elapsed time: 0:05 ───────────────────────────────────────────────────────
ethtool
Ethtool gives you important information about your network card, its settings and current modes:
ethtool eth0
sysctl
Settings for TCP/IP:
for i in rmem wmem mem; do sysctl net.ipv4.tcp_${i}; done
gives e.g. on an older machine (366MHz AMD K6, 33MHz bus)
net.ipv4.tcp_rmem = 4096 87380 577600
net.ipv4.tcp_wmem = 4096 16384 577600
net.ipv4.tcp_mem = 13536 18050 27072
and on a "newer" system (AMD Athlon XP 2000+, 133MHz bus)
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_mem = 170592 227456 341184
They can be set with e.g.:
sysctl -w net.ipv4.tcp_wmem="4096 16384 1048576"
But beware: know what you are doing! On modern Linux systems this should not be necessary.
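To make such a change persistent across reboots, it can be added to /etc/sysctl.conf (a sketch; the values are taken from the example above, not a recommendation):

```
# /etc/sysctl.conf - example entry, values from the example above
net.ipv4.tcp_wmem = 4096 16384 1048576
```

Reload the file afterwards with sudo sysctl -p.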
MTU - Jumbo Frames
So called "jumbo frames" can reduce the CPU usage on the server. The network throughput itself is _NOT_ improved very much. Kernels newer than 2.6.17 support frames bigger than 1500 bytes.
Note that both the network card (driver) and the switches in use must support such settings. It is also not advisable to use this across large networks (the internet connection might suffer). Better use it for direct connections between separate NICs. Test e.g. with
ifconfig eth0 mtu 9044; echo -- DO SOME TESTS NOW --; sleep 120; ifconfig eth0 mtu 16436
16436 is the default setting on current systems. Another value that seems to work well is 4022.
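The same temporary test can be sketched with the modern iproute2 `ip` tool (eth0 and the 9000 byte MTU are assumptions; requires root):

```shell
# Raise the MTU temporarily; reverts on reboot or when set back manually
sudo ip link set dev eth0 mtu 9000
# Verify the new value
ip link show dev eth0 | grep -o 'mtu [0-9]*'
```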
Network configuration files
/etc/sysconfig/network-scripts/ifcfg-eth0 (CentOS / RHEL / Fedora Linux):
MTU=9044
Debian / Ubuntu Linux uses the /etc/network/interfaces configuration file:
mtu 9044
Test with
/etc/init.d/networking restart
ip route get IP-address
Comparing with real (hard)drive performance
If you want to measure the real disk performance, use e.g.
dd if=/dev/zero of=3_2-GIG-file bs=1M count=3096
or
dd if=/dev/sda of=/dev/null bs=1M
Stop with Ctrl+C.
or read a file from a mounted directory
dd if=a_huge_file of=/dev/null bs=1M
or with hdparm (the buffered disk read is of interest!):
hdparm -tT /dev/disk
For realistic results you should also take into account local disk reads/writes by the operating system, other running tasks, and the fact that all components share the same (!) PCI bus. For that reason you have to sum up disk usage and network usage.
NFS performance
Mount options
For gigabit networks
rsize=32768,wsize=32768
rsize=65536,wsize=65536
See [5], [7]
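A sketch of how these options could appear in /etc/fstab (server name, export path and mount point are hypothetical):

```
# /etc/fstab - hypothetical NFS entry using the larger buffers from above
mynfsserver:/export  /mnt/nfs  nfs  rsize=65536,wsize=65536  0  0
```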
Separate NICs
If you want to use two separate NICs on an extra private subnet, e.g. 10.0.0.0/24 (255.255.255.0), for NFS communication, you should disable packet forwarding in /proc/sys/net/ipv4/ip_forward
echo 0 | sudo tee /proc/sys/net/ipv4/ip_forward
Add the according NICs to your /etc/hosts:
10.0.0.10 mynfsserver
10.0.0.1  mynfsclient1
TIP: You don't need a switch; a crossover cable connecting the two PCs works just as well for a gigabit connection, there is hardly any difference in performance. For gigabit (!) you don't even have to use a crossover cable, because the standard (Auto-MDIX) supports normal cables!
Links
1. linuxquestions.org - gigabit ethernet performance (main inspiration for this article)
2. Linux User (GER) - Highspeed-LAN mit Gigabit Ethernet, Schnelle Netze
3. Jumbo Frames
4. A thread about performance issues regarding NFS/SMB with gigabit at ubuntuforums.com
5. Arch Linux - NFS
6. Linuxstreet.net - Articles related to gigabit
7. Linux.com - How much can you improve network throughput with a high-end NIC?
8. VDRPortal (GER)
9. linuxhomenetworking.com