Using DPDK Kernel NIC Interface in a virtualized environment

I’m going to develop a DPDK Linux application on a laptop, but the laptop’s hardware is not supported by DPDK. Fortunately, DPDK supports paravirtualized devices, including QEMU’s virtio-net.

So I’m trying to configure a QEMU guest to run the Kernel NIC Interface (KNI) sample on a virtio-net-pci device. The problem is that the KNI sample application doesn’t work with the virtio-net-pci device.

QEMU command

exec qemu-system-x86_64 -enable-kvm \
  -cpu host -smp 2 \
  -vga std \
  -mem-prealloc -mem-path /dev/hugepages \
  -drive file=GentooVM.img,if=virtio \
  -netdev user,id=vmnic,hostname=gentoo \
  -device virtio-net-pci,netdev=vmnic \
  -m 1024M \
  -monitor stdio \
  -name "Gentoo VM"

Running the KNI sample application in the guest

sudo ./examples/kni/build/app/kni -c 0x3 -n 4 -- \
  -P -p 0x1 --config="(0,0,1)"
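For reference, the arguments break down as follows (the --config tuple follows the KNI sample’s (port, lcore_rx, lcore_tx) convention):

# -c 0x3              EAL coremask: run on lcores 0 and 1
# -n 4                number of memory channels
# --                  separates EAL options from the application's options
# -P                  put all ports into promiscuous mode
# -p 0x1              hexadecimal port mask: enable port 0 only
# --config="(0,0,1)"  port 0: RX on lcore 0, TX on lcore 1
sudo ./examples/kni/build/app/kni -c 0x3 -n 4 -- -P -p 0x1 --config="(0,0,1)"

The run aborts during device probing: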

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Probing VFIO support...
EAL:   IOMMU type 1 (Type 1) is supported
EAL:   IOMMU type 8 (No-IOMMU) is not supported
EAL: VFIO support initialized
EAL: Setting up physically contiguous memory...
...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master lcore 0 is ready (tid=657d58c0;cpuset=[0])
PMD: rte_igbvf_pmd_init():  >>
EAL: lcore 1 is ready (tid=305ff700;cpuset=[1])
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   Not managed by a supported kernel driver(0), skipped
PMD: virtio_read_caps(): failed to map pci device!
PMD: vtpci_init(): trying with legacy virtio pci.
Segmentation fault

Output of the lspci command in the guest

...
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device

I’ve noticed that the pci_scan_one() function sets dev->kdrv = RTE_KDRV_NONE, even though the kernel driver is detected as virtio-pci (from /sys/bus/pci/devices/0000:00:03.0/driver).
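A quick way to confirm which kernel driver has claimed the device is to resolve the sysfs symlink mentioned above (a sketch; expect virtio-pci here):

# Print the kernel driver currently bound to the virtio NIC
basename "$(readlink /sys/bus/pci/devices/0000:00:03.0/driver)"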

TAP networking

The same issue persists with TAP networking. On the host, I’ve created a bridge, attached a TAP interface to it, and NATed the bridge’s subnet out through the Wi-Fi interface:

wifi_iface=wlp3s0
br_iface=br0
br_network='172.20.0.1/16'
br_dhcp_range='172.20.0.2,172.20.255.254'
tap_iface=tap0
user=ruslan

ip link add name $br_iface type bridge
ip addr add "$br_network" dev $br_iface
ip link set $br_iface up
dnsmasq --interface=$br_iface --bind-interfaces \
  --dhcp-range=$br_dhcp_range

modprobe tun
chmod 0666 /dev/net/tun

ip tuntap add dev $tap_iface mode tap user "$user"
ip link set $tap_iface up promisc on
ip link set $tap_iface master $br_iface

sysctl net.ipv4.ip_forward=1
sysctl net.ipv6.conf.default.forwarding=1
sysctl net.ipv6.conf.all.forwarding=1

iptables -t nat -A POSTROUTING -o $wifi_iface -j MASQUERADE
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i $tap_iface -o $wifi_iface -j ACCEPT
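Before starting the guest, the setup can be sanity-checked from the host (a sketch; interface names as in the script above):

# tap0 should report "master br0", and br0 should be UP with 172.20.0.1/16
ip link show tap0
ip addr show br0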

QEMU command:

sudo qemu-system-x86_64 -enable-kvm \
  -cpu host -smp 2 \
  -vga std \
  -mem-prealloc -mem-path /dev/hugepages \
  -drive file=GentooVM.img,if=virtio \
  -netdev tap,id=vm1_p1,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=vm1_p1,bus=pci.0,addr=0x3,ioeventfd=on \
  -m 1024M \
  -monitor stdio \
  -name "Gentoo VM" \
  "$@"
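One caveat: vhost=on depends on the host’s vhost-net support. If /dev/vhost-net is absent, QEMU warns and falls back to userspace virtio, so it is worth checking first (a sketch):

# vhost=on needs the vhost_net module loaded on the host
sudo modprobe vhost_net
ls -l /dev/vhost-net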

ifconfig output in the guest:

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.20.196.253  netmask 255.255.0.0  broadcast 172.20.255.255
        inet6 fe80::59c1:f175:aeb3:433  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:12:34:56  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 1039 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 1802 (1.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The following command fails in the same way as with the “user” networking above:

sudo ./examples/kni/build/app/kni -c 0x3 -n 4 -- \
  -P -p 0x1 --config="(0,0,1)"
...
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   Not managed by a supported kernel driver(0), skipped
PMD: virtio_read_caps(): failed to map pci device!
PMD: vtpci_init(): trying with legacy virtio pci.

The question

Is it even possible to run KNI on a virtio-net-pci device?

If not, are there other options for developing a DPDK KNI application in a virtualized environment?


Answer

virtio_read_caps() fails to map the PCI device because DPDK requires binding network ports to one of the following kernel modules:

  • uio_pci_generic
  • igb_uio
  • vfio-pci

As the DPDK documentation explains:

As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use. Instead, all ports that are to be used by a DPDK application must be bound to the uio_pci_generic, igb_uio or vfio-pci module before the application is run. Any network ports under Linux* control will be ignored by the DPDK poll-mode drivers and cannot be used by the application.
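Of the three, uio_pci_generic is the easiest to try inside a VM, since it ships with mainline kernels (igb_uio has to be built as part of DPDK, and vfio-pci typically needs working IOMMU support). Loading it is a one-liner:

# uio_pci_generic is part of the stock kernel; no DPDK build required
sudo modprobe uio_pci_generic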

Thus, before running a DPDK KNI application, we should:

  1. Load at least one of the supported I/O kernel modules (uio_pci_generic, igb_uio or vfio-pci).
  2. Load $RTE_SDK/$RTE_TARGET/kmod/rte_kni.ko.
  3. Bind the network ports to the chosen module using the $RTE_SDK/tools/dpdk_nic_bind.py script.

For example:

$ sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$ sudo insmod $RTE_SDK/$RTE_TARGET/kmod/rte_kni.ko
# Bind the network port (its PCI address comes from lspci or --status)
$ sudo python2 $RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:03.0
# Verify the binding
$ python2 $RTE_SDK/tools/dpdk_nic_bind.py --status
Network devices using DPDK-compatible driver
============================================
0000:00:03.0 'Virtio network device' drv=igb_uio unused=vfio-pci,uio_pci_generic

where

  • $RTE_SDK – Points to the DPDK installation directory.
  • $RTE_TARGET – Points to the DPDK target environment directory.
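With the port bound, the remaining prerequisite in the guest is hugepage memory for the EAL. A minimal sketch (the page count and mount point are illustrative):

# Reserve 2 MB hugepages in the guest and mount hugetlbfs
echo 256 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
# Re-run the KNI sample; the virtio port should now be probed instead of skipped
sudo ./examples/kni/build/app/kni -c 0x3 -n 4 -- -P -p 0x1 --config="(0,0,1)"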