Difference between revisions of "KVM"

From Blue-IT.org Wiki


Revision as of 13:11, 1 November 2013

Preface

At the time of writing, I am using KVM on a Lenovo ThinkServer TS430. The machine uses a RAID 5 array for the storage of the virtual machines. The hypervisor is a Zentyal / Ubuntu 12.04 LTS distribution, running separately on a single hard disk. Zentyal makes the administration of network, bridge and firewall tasks a lot easier than a bare Ubuntu system, but it also adds some complexity.

Concerning the creation of a good network topology: for the beginner, the following articles are a good starting point:

Using VirtualBox and KVM together

Many tutorials claim that using VirtualBox and KVM together on the same server at the same time is NOT possible!

One says it is possible:

You don't have to uninstall either of them, but you do have to choose which runtime to use:

Use VirtualBox

sudo service qemu-kvm stop
sudo service vboxdrv start

OR use KVM

sudo service vboxdrv stop
sudo service qemu-kvm start

Decide!
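The two choices above can be wrapped in a small helper. This is a hypothetical sketch: the function only prints the service commands instead of executing them, so it can be dry-run without root.

```shell
#!/bin/bash
# Hypothetical dry-run helper: print the service commands needed to
# switch the hypervisor runtime, instead of executing them directly.
switch_runtime() {
	case "$1" in
		vbox) echo "sudo service qemu-kvm stop"
		      echo "sudo service vboxdrv start" ;;
		kvm)  echo "sudo service vboxdrv stop"
		      echo "sudo service qemu-kvm start" ;;
		*)    echo "usage: switch_runtime vbox|kvm" >&2
		      return 1 ;;
	esac
}

# show what switching to KVM would run
switch_runtime kvm
```

Piping the output to `sh` would actually perform the switch.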

Command line foo

Prerequisites:

 sudo apt-get install ubuntu-vm-builder

Show running machines

virsh -c qemu:///system list

Save and restart a machine (hibernate)

#!/bin/bash
# Save (hibernate) a virtual machine and restore it again
VM="$1"
virsh save "$VM" "/data/savedstate/$VM"
virsh restore "/data/savedstate/$VM"

Show bridges

brctl show

Show iptables rules

watch -n2 iptables -nvL -t nat

Migration from VirtualBox to KVM

This boils down to

  1. having a lot of time
  2. having a lot of free hard disk space
  3. creating a clone of the VirtualBox machine with VBoxManage clonehd (this can take a looooong time!). Cloning is the easiest way of getting rid of snapshots of an existing virtual machine.
  4. converting the image from vdi to qcow format with qemu-img convert
  5. creating and configuring a new KVM guest
  6. adding some foo to NAT with a qemu hook (see next section)

To clone an image - on the same machine - you have to STOP KVM and start vboxdrv (see above). Also be aware that the raw images take up a lot of space!

# The conversion can take some time. Other virtual machines are not accessible during this time
VBoxManage clonehd -format RAW myOldVM.vdi /home/vm-exports/myNewVM.raw
0%...
cd /home/vm-exports/
qemu-img convert -f raw myNewVM.raw -O qcow2 myNewVM.qcow
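If several images were exported, the convert step can be looped. A minimal sketch, assuming the raw exports live in /home/vm-exports (the directory and the helper function are assumptions for illustration; qemu-img does the actual work):

```shell
#!/bin/bash
# Sketch: convert every exported raw image in a directory to qcow2.
# EXPORT_DIR is an assumption; adjust it to your setup.
EXPORT_DIR=${EXPORT_DIR:-/home/vm-exports}

# map foo.raw -> foo.qcow2
qcow_name() {
	echo "${1%.raw}.qcow2"
}

for raw in "$EXPORT_DIR"/*.raw; do
	[ -e "$raw" ] || continue	# no raw images, nothing to do
	echo "converting $raw -> $(qcow_name "$raw")"
	qemu-img convert -f raw "$raw" -O qcow2 "$(qcow_name "$raw")"
done
```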

Cloning a Snapshot:

# for a snapshot do (not tested)
cd /to/the/SnapShot/dir
VBoxManage clonehd -format RAW "SNAPSHOT_UUID" /home/vm-exports/myNewVM.raw

Accessing a bridged machine configured via Zentyal

On Ubuntu 12.04 there is Ubuntu bug #50093 (https://bugs.launchpad.net/ubuntu/+source/procps/+bug/50093, mentioned at http://wiki.libvirt.org/page/Networking#Debian.2FUbuntu_Bridging) which prevents accessing machines inside the bridged network:

> vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Activate

sysctl -p /etc/sysctl.conf  

Make permanent

> vim /etc/rc.local
*** Sample rc.local file ***
/sbin/sysctl -p /etc/sysctl.conf
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS  --clamp-mss-to-pmtu
exit 0

Verify

tail /proc/sys/net/bridge/*
iptables -L  FORWARD

> brctl show
bridge name     bridge id               STP enabled     interfaces 
br1             8000.50e5492d616d       no              eth1 
                                                        vnet1
[...]

Accessing services on KVM guests behind a NAT

This is done by editing a hook script for qemu:

/etc/libvirt/hooks/qemu

I am referring to this article:

which is mentioned in the libvirt wiki:

I installed the qemu Python script from the first article under Ubuntu 12.04 LTS, and it worked as expected.

So I can access a port of the virtual machine guest via the IP/port of the host (!). From within the host, it is possible to reach the guest via its real IP. I am using the virtio interface (for performance).

Control NAT rules

IpTables is what you want. But there are some pitfalls:

  1. the PREROUTING rules that enable port forwarding into the NAT'ed machine must be applied before (!) the virtual machine starts
  2. if you have a service like Zentyal installed, or you restart your firewall, all rules are reset
  3. libvirt NAT rules for the bridges are applied at service start time - this can interfere with other rules
  4. all of this is handled by a qemu hook script, /etc/libvirt/hooks/qemu

An example

The PREROUTING rules for vm-1 open up the ports 25, 587, 993 and 8080 for the NAT'ed virtual machine with the IP 192.168.122.2. So they are accessible from the outside world (web server, e-mail server, ...). This also means that they can no longer be used on the host system (you should set the admin interface of e.g. Zentyal to a different port).

The POSTROUTING chains are set automatically by virt-manager and allow the virtual machine to access the internet from inside using NAT.

iptables -nvL -t nat

Then you should see something like the following.


root@myHost:# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 216 packets, 14658 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   6   312 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:1222 to:192.168.122.2:80
   2   120 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:1223 to:192.168.122.2:443
   0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:1444 to:192.168.122.2:8080
  
Chain INPUT (policy ACCEPT 14 packets, 2628 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 
Chain OUTPUT (policy ACCEPT 12 packets, 818 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 
Chain POSTROUTING (policy ACCEPT 17 packets, 1048 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
   6   406 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
   0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24
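Each DNAT line in the PREROUTING chain above corresponds to one iptables invocation. As a sketch, the following hypothetical helper builds the command for a given host port, guest IP and guest port (it only prints the command; the values are taken from the example output):

```shell
#!/bin/bash
# Hypothetical helper: build the iptables command that produces one
# of the DNAT lines shown above (printed, not executed).
dnat_cmd() {
	local host_port=$1 guest_ip=$2 guest_port=$3
	echo "iptables -t nat -I PREROUTING -p tcp --dport $host_port -j DNAT --to $guest_ip:$guest_port"
}

# the first rule of the example: host port 1222 -> guest port 80
dnat_cmd 1222 192.168.122.2 80
```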

Solution: qemu hook script

The following script is a slightly altered version of the one from the article mentioned above:

  • Guest port same as host port
  • ability to apply more than one port
  • ability to serve more than one guest
  • you cannot distinguish between inside and outside port - not yet ;)
  • ... will be updated!

Prerequisites: On a (KVM) host with the IP 192.168.0.10, a NAT'ed virtual KVM network bridge with the network 192.168.122.0/24 was created with e.g. virt-manager.

If your virtual server has the IP 192.168.122.2/24 - in our example the vm-webserver - it must be attached with virt-manager to the NAT'ed bridge 192.168.122.0/24.

Inside the machine you have to set the gateway to 192.168.122.1, the address of the bridge. Then, and only then, can you reach this machine (and its ports) with the following script using your host's (!!!) IP 192.168.0.10.

Port 443 of your web server on 192.168.122.2 can then be reached from outside via 192.168.0.10:443.

If there is a firewall in front of the KVM host machine, you forward the ports to exactly this address (192.168.0.10:443) to reach your HTTPS website inside the vm-webserver with the IP 192.168.122.2.

Everything is handled by the gateway of the bridge, 192.168.122.1, and the NAT rules you apply at start time.

Capiche?

> script="/etc/libvirt/hooks/qemu"; \
touch $script; \
chmod +x $script; \
vim $script

--Apos (talk) 20:18, 31 October 2013 (CET)

#!/bin/bash
# /etc/libvirt/hooks/qemu
# libvirtd calls this hook as: qemu <guest_name> <operation> ...

Guest_name="vm-webserver"
Guest_ipaddr=192.168.122.2
Host_ports="80 443 8080"

for Host_port in $Host_ports
do
if [ "$1" = "$Guest_name" ]
then
	# remove the rules when the guest stops or before reconnecting
	if [ "$2" = "stopped" ] || [ "$2" = "reconnect" ]
	then
		iptables -t nat -D PREROUTING -p tcp --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port

		iptables -D FORWARD -d $Guest_ipaddr/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED \
		-m tcp --dport $Host_port -j ACCEPT

		#- allows port forwarding from localhost, but
		#  only if you use the ip (e.g. http://192.168.1.20:8888/)
		iptables -t nat -D OUTPUT -p tcp -o lo --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port
	fi
	# insert the rules when the guest starts or reconnects
	if [ "$2" = "start" ] || [ "$2" = "reconnect" ]
	then
		iptables -t nat -I PREROUTING -p tcp --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port

		iptables -I FORWARD -d $Guest_ipaddr/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED \
		-m tcp --dport $Host_port -j ACCEPT

		#- allows port forwarding from localhost, but
		#  only if you use the ip (e.g. http://192.168.1.20:8888/)
		iptables -t nat -I OUTPUT -p tcp -o lo --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port
	fi
fi
done

Guest_name="vm-email"
Guest_ipaddr=192.168.123.2
Host_ports="993 587 25 465 143"

for Host_port in $Host_ports
do
if [ "$1" = "$Guest_name" ]
then
	# remove the rules when the guest stops or before reconnecting
	if [ "$2" = "stopped" ] || [ "$2" = "reconnect" ]
	then
		iptables -t nat -D PREROUTING -p tcp --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port

		iptables -D FORWARD -d $Guest_ipaddr/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED \
		-m tcp --dport $Host_port -j ACCEPT

		#- allows port forwarding from localhost, but
		#  only if you use the ip (e.g. http://192.168.1.20:8888/)
		iptables -t nat -D OUTPUT -p tcp -o lo --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port
	fi
	# insert the rules when the guest starts or reconnects
	if [ "$2" = "start" ] || [ "$2" = "reconnect" ]
	then
		iptables -t nat -I PREROUTING -p tcp --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port

		iptables -I FORWARD -d $Guest_ipaddr/32 -p tcp -m state --state NEW,RELATED,ESTABLISHED \
		-m tcp --dport $Host_port -j ACCEPT

		#- allows port forwarding from localhost, but
		#  only if you use the ip (e.g. http://192.168.1.20:8888/)
		iptables -t nat -I OUTPUT -p tcp -o lo --dport $Host_port -j DNAT \
		--to $Guest_ipaddr:$Host_port
	fi
fi
done
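The two nearly identical blocks above differ only in guest name, IP and port list, so they could be collapsed into a single table-driven loop. A hedged sketch, using the same guests and ports as above (requires bash 4 for associative arrays; for brevity only the PREROUTING pair of iptables calls is shown):

```shell
#!/bin/bash
# Sketch: one guest table instead of two copied blocks.
# $1 = guest name, $2 = libvirt hook operation (start/stopped/reconnect).
declare -A GUEST_IP=(    [vm-webserver]="192.168.122.2" [vm-email]="192.168.123.2" )
declare -A GUEST_PORTS=( [vm-webserver]="80 443 8080"   [vm-email]="993 587 25 465 143" )

apply_hook() {
	local guest=$1 op=$2 port
	local ip=${GUEST_IP[$guest]}
	[ -n "$ip" ] || return 0	# not one of our guests
	for port in ${GUEST_PORTS[$guest]}; do
		if [ "$op" = "stopped" ] || [ "$op" = "reconnect" ]; then
			iptables -t nat -D PREROUTING -p tcp --dport "$port" -j DNAT --to "$ip:$port"
		fi
		if [ "$op" = "start" ] || [ "$op" = "reconnect" ]; then
			iptables -t nat -I PREROUTING -p tcp --dport "$port" -j DNAT --to "$ip:$port"
		fi
	done
}

apply_hook "$1" "$2"
```

The FORWARD and OUTPUT rules from the full script above would be added inside the same two branches.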

Troubleshooting Zentyal

Every time you alter your network settings in e.g. Zentyal (thereby resetting your NAT rules), you need to

  1. shut down or save the virtual machines
  2. restart Zentyal
  3. restart libvirtd
  4. start the virtual machines again

This is done by the following script (heavily tested):

#!/bin/bash

# abort unless run as root
if [ "$UID" -ne 0 ]; then
	echo "Only run as root"
	exit 1
fi

myVMs="ac-g ac-f ac-w"

echo "#############################################################"
echo "## IPTABLES NAT"
iptables -nvL -t nat

echo "#############################################################"
echo "## Save virtual machines ..."
export LC_ALL="en_US.UTF-8"
for vm in ${myVMs};
do
	echo " ... saving ${vm} ..."
	virsh save "${vm}" "/data/virtual_machines_on_raid5/save/${vm}"
done
 
echo "#############################################################"
echo "## Restart Network"
/etc/init.d/zentyal  stop
 
sleep 5
/etc/init.d/zentyal start 

sleep 15
/etc/init.d/libvirt-bin restart

echo "#############################################################"
echo "## Restore vms"
for vm in ${myVMs}; 
do 
	echo " ... restoring ${vm} ... please wait ..."
	virsh restore /data/virtual_machines_on_raid5/save/${vm} 
done

echo "#############################################################"
echo "## IPTABLES NAT"
iptables -nvL -t nat

Delete NAT rules

iptables -t nat -D PREROUTING 1

Where "1" is the number of the first rule in the PREROUTING chain as listed by the command above.

In the example output above, the first line of the PREROUTING chain is the DNAT rule forwarding host port 1222 to guest port 80 (the one with 6 packets). That is rule number 1.


Autostart at boot time

Set the 'autostart' flag so the domain is started at boot time:

virsh autostart myMachine

Shutdown

On Ubuntu 12.04 LTS (Precise Pangolin) the shutdown scripts already take care of stopping the virtual machines (at least in the newest version of the libvirt-bin package). However, by default the script will only wait 30 seconds for the VMs to shutdown. Depending on the services running in the VM, this can be too short.

In this case, you have to create a file /etc/init/libvirt-bin.override with the following content:

> vim /etc/init/libvirt-bin.override
# extend wait time for vms to shut down to 4 minutes
env libvirtd_shutdown_timeout=240

Backup KVM

Via LVM

LiveBackup (under development - --Apos (talk) 18:07, 30 October 2013 (CET))

* http://wiki.qemu.org/Features/Livebackup

[[Category:Virtualisation]]