Network configuration for KVM servers

Last updated 18 January, 2019

KVM private cloud network

The following sections provide prerequisites and information about the initial configuration for HPE OneSphere in a KVM private cloud environment. Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns the kernel into a hypervisor. A hypervisor allows multiple operating systems to share a single hardware host.

The following figure shows a minimum of two NICs bonded as bond0, which carries both management and production traffic.

HPE OneSphere also supports one dedicated interface for management connectivity. This interface can be a virtual NIC, but if it is connected to the internet and dedicated to HPE OneSphere management, consider isolating the NIC physically. For example, you might use a dedicated 1 Gb physical NIC that does not use a common trunk and is not bonded with data center networks.

For a multiple physical network (Provider network) deployment, you must have one dedicated interface for each physnet.

The following figure shows a single physical network deployment where:

  • physnet1 is mapped to bridge br-physnet1 (an OVS bridge that maps to the Provider network)

  • the single physnet deployment has a dedicated interface bond0 associated with it.

Configuring KVM servers and networks to connect to HPE OneSphere

Use the following procedure to install and configure KVM servers and networks. You can then connect HPE OneSphere to these KVM servers, which enables HPE OneSphere to install and manage the OpenStack layer to turn the servers into a private OpenStack-enabled cloud.

NOTE:

OpenStack virtual routing is not supported.

See Connecting HPE OneSphere for the first time for more information about connecting HPE OneSphere to KVM servers.

Prerequisites
  • An account with root privileges to log in to the operating system console

  • Outbound HTTPS access on port 443 (to communicate with the HPE OneSphere management service)
  • At least two physical network interfaces (for production deployments)

  • An OVS bridge named br-physnet1 is configured on each KVM host that will be onboarded to an HPE OneSphere zone

    You can create additional OVS bridges, replacing 1 with any alphanumeric character. (br-physnet is case sensitive.) Any additional OVS bridges named br-physnetx must be present on each KVM host, using the same name.

  • IP address, netmask, gateway address, and DNS name server for your environment

Procedure
  1. Enable SSH over the management IP address.
    1. Install SSH, if it is not already installed.
      sudo apt-get install ssh
    2. Edit /etc/ssh/sshd_config and add the management IP address.
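      For example, you might add a ListenAddress directive so that sshd listens on the management IP (192.0.2.10 is a placeholder; use your management IP address):

      ListenAddress 192.0.2.10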
    3. Restart SSH.
      sudo service ssh restart
  2. Configure the top of rack switch to allow the VLANs that your hypervisor hosts will generate.
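    For example, if your hypervisor hosts will use VLAN IDs 100 through 200, configure the switch ports that connect to the bond0 member NICs as trunk ports that allow those VLANs. The exact commands depend on your switch vendor.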
  3. Install the operating system (Enterprise Linux: CentOS or RHEL, or Ubuntu).
    1. Install a supported version. See the HPE OneSphere Support Matrix.
    2. Install the operating system on a server with Virtualization Technology (VT) enabled.
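      To confirm that the processor supports VT, you can check the CPU flags; a nonzero count indicates Intel VT-x (vmx) or AMD-V (svm) support. (If the count is nonzero but KVM later reports that VT is disabled, enable it in the BIOS.)

      grep -Ec '(vmx|svm)' /proc/cpuinfo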
    3. Install the operating system with the virtualization host software collection.

      RHEL and CentOS:

      Select Virtualization Host packages while installing the operating system.

      Ubuntu:

      Select the Virtual machine host package while installing the operating system.

  4. (RHEL only) Register the RHEL server and subscribe to the Red Hat Customer Portal.
    subscription-manager register
    subscription-manager attach
    subscription-manager repos --enable=rhel-7-server-optional-rpms
  5. Configure the proxy.
    1. (Ubuntu only) Edit the file /etc/apt/apt.conf.
      Acquire::http::Proxy "http://{proxy IP}:{port}";

      Example: Acquire::http::Proxy "http://web-proxy.company.com:8080";

    2. Export the proxy settings to the environment.

      RHEL, CentOS, and Ubuntu:

      $ vi /etc/environment
       export http_proxy=http://{proxy IP}:{port}
       export https_proxy=http://{proxy IP}:{port} 
      $ source /etc/environment

      Example:

      $ vi /etc/environment
      export http_proxy=http://web-proxy.company.com:8080
      export https_proxy=http://web-proxy.company.com:8080
      $ source /etc/environment
  6. Disable the firewall.

    Reason: The firewall must be disabled so that KVM and OVS can create and directly manage iptables rules. In this context, the iptables rules serve as a virtual firewall for KVM and OVS in place of the firewalld service.

    An enabled firewalld service adds rules that conflict with the rules added by KVM and OVS, and it configures kernel modules (for example, netfilter) that OVS also uses to route traffic to the VMs correctly. To avoid these conflicts, the firewalld service must be disabled on the KVM server.

    RHEL and CentOS:

    systemctl disable firewalld
    systemctl stop firewalld
    systemctl disable NetworkManager
    systemctl stop NetworkManager

    Ubuntu:

    ufw disable
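
    You can confirm that the firewall is disabled:

    RHEL and CentOS:

    systemctl is-active firewalld

    Ubuntu:

    sudo ufw status

    Both commands should report inactive.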
  7. (RHEL and CentOS only) Set SELinux to permissive mode.
    sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
    setenforce 0
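    You can verify the change with getenforce, which should report Permissive:

    getenforce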
  8. Install Open vSwitch (OVS).
    1. (RHEL and CentOS only) Download and untar the Open vSwitch packages tar file from the following location to your KVM server.
      yum install wget
      wget https://zing-staging.s3.amazonaws.com/generic/ovs_packages.tar
      tar -xvf ovs_packages.tar
    2. Install the Open vSwitch packages.

      RHEL and CentOS:

      rpm -i openvswitch-2.4.0-1.x86_64.rpm

      Ubuntu 16.04:

      sudo apt-get update
      sudo apt-get install -y openvswitch-common
      sudo apt-get install -y openvswitch-switch
    3. (Ubuntu only) Verify that Open vSwitch 2.5.4 is installed.
      ovs-vsctl --version

      Expected output:

      ovs-vsctl (Open vSwitch) 2.5.4
      Compiled Sep  3 2017 22:56:32
      DB Schema 7.12.1
    4. Enable and start the Open vSwitch service.

      RHEL and CentOS:

      systemctl enable openvswitch
      systemctl start openvswitch

      Ubuntu:

      systemctl enable openvswitch-switch
      systemctl start openvswitch-switch
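
      You can verify that Open vSwitch is running by listing the current switch configuration; on a fresh installation, the output shows only the database UUID and the ovs_version line:

      ovs-vsctl show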
  9. Install and configure NTP.
    1. Install NTP.

      RHEL and CentOS:

      yum install -y ntp
      systemctl enable ntpd
      systemctl start ntpd

      Ubuntu:

      sudo apt-get install ntp
    2. Edit /etc/ntp.conf and add NTP server details. For example:
      # Use servers from the NTP Pool Project. Approved by Ubuntu
      # Technical Board on 2011-02-08 (LP: #104525).
      # See http://www.pool.ntp.org/join.html for more information.
      server <your ntp server> iburst
      #server 0.ubuntu.pool.ntp.org
      #server 1.ubuntu.pool.ntp.org
      #server 2.ubuntu.pool.ntp.org
      #server 3.ubuntu.pool.ntp.org
    3. Restart the NTP service.

      RHEL and CentOS:

      sudo systemctl restart ntpd.service

      Ubuntu:

      systemctl restart ntp.service
    4. Verify the synchronization.
      sudo ntpq -np
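
      Output similar to the following indicates synchronization; an asterisk (*) in the first column marks the peer that NTP selected (values will vary):

           remote           refid      st t when poll reach   delay   offset  jitter
      ==============================================================================
      *10.10.10.10     .GPS.            1 u   35   64  377    0.512    0.103   0.044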
  10. Configure host network modules required for OpenStack Neutron.
    sudo -i
    modprobe bridge
    modprobe 8021q
    modprobe bonding
    modprobe tun
    mkdir -p /etc/modules-load.d/
  11. Create the file /etc/modules-load.d/pf9.conf, if it does not already exist.
    echo bridge > /etc/modules-load.d/pf9.conf
    echo 8021q >> /etc/modules-load.d/pf9.conf
    echo bonding >> /etc/modules-load.d/pf9.conf
    echo tun >> /etc/modules-load.d/pf9.conf
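
    You can confirm that all four modules are loaded:

    lsmod | grep -E 'bridge|8021q|bonding|tun'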
  12. Configure systemctl options.
    sudo -i
    echo net.ipv4.conf.all.rp_filter=0 >> /etc/sysctl.conf
    echo net.ipv4.conf.default.rp_filter=0 >> /etc/sysctl.conf
    echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.conf
    echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
    echo net.ipv4.tcp_mtu_probing=1 >> /etc/sysctl.conf
    sysctl -p

    Expected output:

    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.ipv4.tcp_mtu_probing = 1

    If you see the following error, execute the command modprobe br_netfilter.

    sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

  13. Configure the interfaces and DNS with a bonded configuration.

    The following commands show two example configurations for a bonded NIC model with two NICs.

    Skip to step 14 if you want to configure the interfaces and DNS with a single NIC model (for non-production use only).

    In the following steps, enter the IP address, netmask, gateway address, and DNS name server specific to your environment.

    RHEL and CentOS:

    1. Option 1: To configure the management IP on a bridge interface, edit /etc/sysconfig/network-scripts/ifcfg-br-physnet1.

      If you want to configure the management IP on a dedicated NIC, skip this step and continue with step 13b.

      # Bridge network interface
      DEVICE=br-physnet1
      BOOTPROTO=none
      ONBOOT=yes
      TYPE=OVSBridge
      DEVICETYPE=ovs
      IPADDR=xx.xx.xx.xx
      NETMASK=255.255.255.xx
      GATEWAY=xx.xx.xx.xx
      DNS1=<dns-ip1>
    2. Option 2: To configure the management IP on a dedicated NIC, edit /etc/sysconfig/network-scripts/ifcfg-eth4 as follows.

      The interface names eth0, eth1, and eth4 are examples. Use the interface names provided by your specific OS version.

      # Management IP on a dedicated NIC 
      DEVICE=eth4
      BOOTPROTO=none
      ONBOOT=yes
      IPADDR=xx.xx.xx.xx
      MTU=<supported by your NIC; see TIP below>
      NETMASK=255.255.255.xx
      GATEWAY=xx.xx.xx.xx 
      DNS1=<dns-ip1>

      TIP:

      The ip a command shows all network card MTU values. For example:

      ip a
      eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 
         qdisc mq master bond0 state UP qlen 1000
         link/ether 00:9c:02:99:e6:f4 brd ff:ff:ff:ff:ff:ff
    3. If you configured the management IP on a dedicated NIC (Option 2), edit or create /etc/sysconfig/network-scripts/ifcfg-br-physnet1. (If you chose Option 1, this file already exists with the management IP configured.)
      DEVICE=br-physnet1
      BOOTPROTO=none
      ONBOOT=yes
      TYPE=OVSBridge
    4. Edit or create /etc/sysconfig/network-scripts/ifcfg-eth0.
      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=none
      MTU=<supported by your NIC; see TIP above>
      MASTER=bond0
      SLAVE=yes
    5. Edit or create /etc/sysconfig/network-scripts/ifcfg-eth1.
      DEVICE=eth1
      ONBOOT=yes
      BOOTPROTO=none
      MTU=<supported by your NIC; see TIP above>
      MASTER=bond0
      SLAVE=yes
    6. Edit or create /etc/sysconfig/network-scripts/ifcfg-bond0.
      DEVICE=bond0
      ONBOOT=yes
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-physnet1
      BONDING_MASTER=yes
      BONDING_OPTS="mode=4"
      MTU=<supported by your NIC>
    7. Restart the network.
      systemctl restart network.service
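
      You can verify the bond state; with BONDING_OPTS="mode=4", the output should report Bonding Mode: IEEE 802.3ad Dynamic link aggregation and list both member NICs:

      cat /proc/net/bonding/bond0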

    Ubuntu:

    1. Option 1: To configure the management IP on a bridge interface, edit /etc/network/interfaces as follows.

      If you want to configure the management IP on a dedicated NIC, skip this step and continue with Ubuntu step 13b.

      # Bridge network interface 
      auto br-physnet1
      iface br-physnet1 inet static
          address xx.xx.xx.xx
          netmask 255.255.255.xx
          gateway xx.xx.xx.xx
          dns-nameservers X.X.X.X

      If you have more than one DNS server, add a space between each server. For example:

      dns-nameservers X.X.X.X Y.Y.Y.Y Z.Z.Z.Z

    2. Option 2: To configure the management IP on a dedicated NIC, edit /etc/network/interfaces as follows.

      The interface names em2, em3, and em4 are examples. Use the interface names provided by your specific OS version.

      # Management IP on a dedicated NIC 
      auto em4
      iface em4 inet static
          address xx.xx.xx.xx
          netmask 255.255.255.xx
          gateway xx.xx.xx.xx
          dns-nameservers X.X.X.X
    3. Edit /etc/network/interfaces.
      auto em3
      iface em3 inet manual
      bond-master bond0
       
      auto em2
      iface em2 inet manual
      bond-master bond0
      
      auto bond0
      iface bond0 inet manual
          bond-slaves em3 em2
          bond-mode 802.3ad
    4. Bring the interfaces down and up for the changes to take effect.
      sudo ifdown em2 && sudo ifup em2
      sudo ifdown em3 && sudo ifup em3
  14. (Optional) Configure the interfaces and DNS for a single NIC.

    The following commands show an example configuration for a single NIC model.

    NOTE:

    A single NIC configuration is not recommended for production environments.

    RHEL and CentOS:

    1. To configure the management IP on a bridge interface, edit /etc/sysconfig/network-scripts/ifcfg-br-physnet1.
      # Bridge network interface
      DEVICE=br-physnet1
      BOOTPROTO=none
      ONBOOT=yes
      TYPE=OVSBridge
      DEVICETYPE=ovs
      IPADDR=xx.xx.xx.xx
      NETMASK=255.255.255.xx
      GATEWAY=xx.xx.xx.xx
      DNS1=<dns-ip1>
    2. Edit or create /etc/sysconfig/network-scripts/ifcfg-eth0.

      The interface name eth0 is an example. Use the interface name provided by your specific OS version.

      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=none
      MTU=<supported by your NIC> 
      MASTER=bond0 
      SLAVE=yes 
    3. Edit or create /etc/sysconfig/network-scripts/ifcfg-bond0.
      DEVICE=bond0
      ONBOOT=yes 
      TYPE=OVSPort
      DEVICETYPE=ovs
      OVS_BRIDGE=br-physnet1
      BONDING_MASTER=yes
      BONDING_OPTS="mode=4"
      MTU=<supported by your NIC>
    4. Restart the network.
      systemctl restart network.service

    Ubuntu:

    1. To configure the management IP on a bridge interface, edit /etc/network/interfaces as follows.
      # Bridge network interface 
      auto br-physnet1
      iface br-physnet1 inet static
          address xx.xx.xx.xx
          netmask 255.255.255.xx
          gateway xx.xx.xx.xx
          dns-nameservers X.X.X.X

      If you have more than one DNS server, add a space between each server. For example:

      dns-nameservers X.X.X.X Y.Y.Y.Y Z.Z.Z.Z

    2. Edit /etc/network/interfaces as follows.

      The interface name em2 is an example. Use the interface name provided by your specific OS version.

      auto em2
      iface em2 inet manual
      bond-master bond0
      
      auto bond0
      iface bond0 inet manual
          bond-slaves em2
          bond-mode 802.3ad
    3. Bring the interface down and up for the changes to take effect.
      sudo ifdown em2 && sudo ifup em2
  15. Create an OVS bridge using the name br-physnet1.

    NOTE:

    If the OVS bridge is created at the OS level in the network interfaces file, skip steps 15 and 16 (this step and the next step).

    ovs-vsctl add-br br-physnet1
    • Make sure that a bridge named br-physnet1 is configured on each KVM host that will be onboarded to an HPE OneSphere zone.

    • You can create additional OVS bridges, replacing 1 with any alphanumeric character. (br-physnet is case sensitive.) Any additional OVS bridges named br-physnetx must be present on each KVM host, using the same name.

    • The bridge on the first host is considered the base, and any non-matching bridges on subsequent hosts are ignored.

    • A minimum of one bridge is required on each host. There is no restriction on the number of bridges you can create on the hosts.

  16. Add the bond interfaces to the bridge.
    ovs-vsctl add-port br-physnet1 bond0
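
    You can confirm the resulting topology; the output should show bridge br-physnet1 with port bond0 attached:

    ovs-vsctl show

    Expected output (the UUID and version will differ):

    e5a0c8b6-xxxx-xxxx-xxxx-xxxxxxxxxxxx
        Bridge "br-physnet1"
            Port "bond0"
                Interface "bond0"
            Port "br-physnet1"
                Interface "br-physnet1"
                    type: internal
        ovs_version: "2.5.4"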

Next step: Validate the KVM network and server configuration using a Python script.

 

Validating KVM network and server configuration

Download and execute a Python script on your KVM host. The script validates the network and server configuration before you deploy virtual machines to the KVM host.

NOTE:

To obtain the zone ID that you specify in the HPE OneSphere REST API call, you must download HPE OneSphere Connect and add a private zone. From the Providers > Private Zones screen in the HPE OneSphere portal, click the green plus icon (+). Then, from the Create Zone panel, download and launch HPE OneSphere Connect.

Creating a zone can take up to an hour. Zone creation begins after you enter the details and sign-in information for the KVM server and the Connecting all the bits for your Private Cloud screen is displayed in HPE OneSphere Connect. While HPE OneSphere creates the zone, run the script described below to verify that you configured the server correctly.

Procedure
  1. Log in to the KVM server that you configured for use with HPE OneSphere.
  2. Create a file named validate_kvm_prerequisites.py in any directory from which you can run the python command.
  3. Execute an HPE OneSphere REST API call to the following URL.
    https://<onesphere-url>/rest/zones/<zone-id>/node-recipes/validate_kvm_prerequisites.py

    Headers to set:

    Authorization: <session token for your HPE OneSphere URL>
    Content-Type: application/json
  4. Download the contents and add the contents to the file created in step 2.
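    For example, you can combine steps 3 and 4 with curl (a sketch; replace the URL, zone ID, and session token with your values):

    curl -H "Authorization: <session token>" \
         -H "Content-Type: application/json" \
         -o validate_kvm_prerequisites.py \
         https://<onesphere-url>/rest/zones/<zone-id>/node-recipes/validate_kvm_prerequisites.py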
  5. Execute the downloaded script.
    python validate_kvm_prerequisites.py
  6. Review the output and correct any errors discovered by the verification script.

Sample output of validate_kvm_prerequisites.py

Check Name: Check Proxy
Result: OK
Message: Set

..................................................................
Check Name: Check Firewall
Result: OK
Message: Firewall is Disabled

..................................................................
Check Name: Check RPM
Result: OK
Message: RPM openvswitch is Installed

..................................................................
Check Name: Check Selinux
Result: OK
Message: Selinux is set to Permissive

..................................................................
Check Name: Check openvswitch Service
Result: OK
Message: Service openvswitch is Installed

..................................................................
Check Name: Check SSH
Result: OK
Message: SSH is Enabled

..................................................................
Check Name: Check OVS Bridges
Result: OK
Message: OVS Bridge Check is Successful

..................................................................
Check Name: Check NTPD
Result: OK
Message: NTPD is Enabled

..................................................................
Check Name: Check Network Modules
Result: OK
Message: module bridge loaded
module 8021q loaded
module bonding loaded
module tun loaded

..................................................................
Check Name: Check Sysctl Options
Result: OK
Message: sysctl option net.ipv4.conf.all.rp_filter = 0 set
sysctl option net.ipv4.conf.default.rp_filter = 0 set
sysctl option net.bridge.bridge-nf-call-iptables = 1 set
sysctl option net.ipv4.ip_forward = 1 set
sysctl option net.ipv4.tcp_mtu_probing = 1 set

..................................................................
Please fix Failures,Critical and Warnings if any

Configuring high availability for the image library on KVM servers

Use the following procedure to configure the Image Library for high availability (HA).

After performing these steps, you can enable one server per zone for the Image library registry service (the ImageLibrary role) in the HPE OneSphere portal. You can enable additional server(s) using the HPE OneSphere REST API.

Prerequisites

The administrator:

  • Installed Network File System (NFS) on a machine that is reachable from your KVM hypervisor.

    HPE recommends that you do not install the NFS server on the KVM server itself.

  • Installed NFS client on your KVM hypervisor.

Procedure
  1. Connect HPE OneSphere to your KVM server using HPE OneSphere Connect.
  2. Log in to the KVM server as a sudo user.
  3. Create the following directory.
    mkdir -p /var/opt/hpe/imagelibrary/data/
  4. Assign ownership of the following directories to the pf9 user and the pf9group group.
    chown pf9:pf9group /var/opt/hpe/
    chown pf9:pf9group /var/opt/hpe/imagelibrary/
    chown pf9:pf9group /var/opt/hpe/imagelibrary/data/
  5. Mount NFS to the data directory.
    mount <IP>:<shared-directory> /var/opt/hpe/imagelibrary/data/
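
    For example, assuming an NFS server at 192.0.2.50 that exports /exports/imagelibrary (both are placeholders):

    mount 192.0.2.50:/exports/imagelibrary /var/opt/hpe/imagelibrary/data/

    To make the mount persist across reboots, you can also add a matching entry to /etc/fstab:

    192.0.2.50:/exports/imagelibrary /var/opt/hpe/imagelibrary/data nfs defaults 0 0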
  6. Enable one server per zone with the Image library registry service in the HPE OneSphere portal.

    Enable the server on the Providers > Private Zones > Update Zone screen.

  7. (Optional) Enable additional server(s) with the ImageLibrary role using the HPE OneSphere REST API.
    1. Use a GET call to /rest/zones to fetch the ZoneID from the list of zones.
    2. Use a GET call to /rest/zones/zoneid to fetch the serverUri of the server that you want to enable.
    3. Use a PATCH call to /rest/zones/zoneid to enable the server.
      [
       {
        "op": "add",
        "path": "/inTransitKVMServers",
        "value": [{
         "serverUri": "/rest/servers/3ac6afa0-9145-4120-bccc-88205939674b",
         "state": "Enabled",
         "roles": ["ImageLibrary"]
        }]
       }
      ]
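
      For example, you can issue the PATCH call with curl (a sketch; zone.json contains the body shown above, the URL and token are placeholders, and the API may instead expect Content-Type application/json-patch+json):

      curl -X PATCH \
           -H "Authorization: <session token>" \
           -H "Content-Type: application/json" \
           -d @zone.json \
           https://<onesphere-url>/rest/zones/<zone-id>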

Creating images and networks for KVM servers using the OpenStack CLI

Before deploying a virtual machine to a KVM server, you must create an image and a network using the OpenStack CLI.

Prerequisites

The administrator:

  • Connected a KVM server to HPE OneSphere using HPE OneSphere Connect.

  • Enabled the KVM server in the Server Connection section of the Providers > Private Zones > Update Zone screen in the HPE OneSphere portal.

  • Created a project. Select the project in the HPE OneSphere portal in your browser, and note the project ID at the end of the URL. In the example below, the project ID is the string beginning with 2b64. The project ID is required when creating a network.

    Example: /project?uri=%2Frest%2Fprojects%2F2b64d9fa5f3b4e99bbd62986aaed828c

Procedure
  1. Log in to the KVM server where the OpenStack CLI is installed and source the OpenStack RC file. See Installing OpenStack CLI clients for more information.
  2. (Optional) Configure high availability for the image library on KVM servers.
  3. Create an image using one of the following methods.
    1. Use the OpenStack CLI to create an image.
      openstack image create --disk-format <disk format> --container-format \
        <container format> --public --file <image file with path> <image name>
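
      For example, to register a public qcow2 image (the file and image names are placeholders):

      openstack image create --disk-format qcow2 --container-format bare \
        --public --file ./ubuntu-16.04.qcow2 ubuntu-16.04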
    2. Download and copy an existing image to /var/opt/hpe/imagelibrary/data/ on the KVM server where the Image Library role is enabled.

      NOTE:

      • The server on which you are executing OpenStack commands must be in the same network as the KVM server where the Image Library role is enabled.

      • If you configured a proxy server when you connected HPE OneSphere to the KVM server where the Image Library role is enabled, exclude the server's IP address from your proxy configuration.

  4. Create a network.
    neutron net-create --provider:physical_network <physnet-name> \
      --provider:network_type vlan --provider:segmentation_id <vlan-id> <network-name>

    NOTE:

    You must obtain the physnet-name from the enabled KVM server. If the bridge created on the KVM server is br-physnet1, the physnet name is physnet1.
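
    For example, to create a VLAN network with segmentation ID 100 on physnet1 (the VLAN ID and network name are placeholders):

    neutron net-create --provider:physical_network physnet1 \
      --provider:network_type vlan --provider:segmentation_id 100 mynetwork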

  5. Create a subnet.
    neutron subnet-create <net-name> <cidr>
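
      For example, using the network created above and a /24 range (both placeholders):

      neutron subnet-create mynetwork 192.168.10.0/24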
  6. Assign the network to the respective project.

    Obtain the project ID by selecting the project in the HPE OneSphere portal in your browser. The project ID is appended to the URL.

    neutron rbac-create --target-tenant <project-id> --action access_as_shared --type network \
      <network-id>

    NOTE:

    Do not use the OpenStack client for this network-to-project association. Use the neutron client only, as in the example above.
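
    For example, using the project ID from the prerequisites and a network ID obtained from neutron net-list (both placeholders):

    neutron rbac-create --target-tenant 2b64d9fa5f3b4e99bbd62986aaed828c \
      --action access_as_shared --type network <network-id>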

    NOTE:

    It takes approximately 10 minutes for the newly created image and network to appear in the HPE OneSphere portal.