
AsterNOS-VPP Quick Start

This guide is primarily intended for network engineers, system administrators, and developers who want to build a high-performance network testing platform in a virtualized environment.

To successfully complete this task, it is recommended that readers have the following basic knowledge:

  • Linux Fundamentals: Proficiency with the Linux command line, including file editing and system administration.
  • Networking Fundamentals: An understanding of L2/L3 network concepts such as IP addresses, subnet masks, gateways, and VLANs.
  • Virtualization Concepts: A basic understanding of Virtual Machines (VMs) and Host systems, with some familiarity with QEMU/KVM.

This document provides a detailed guide on how to configure an AsterNOS-VPP virtual machine on an Ubuntu host system using QEMU/KVM and PCI Passthrough technology. The final goal is to build and validate a high-performance virtual gateway that supports Inter-VLAN routing and NAT for internet access.


  • Hardware:
    • Host Machine: ThinkCentre-M8600t-N000 (Example model only).
    • Network Card: Intel Corporation I350 Gigabit Network Connection (4-Port).
  • CPU: The host CPU must support the SSE4 instruction set. You can verify this with the command lscpu and ensure that the output contains sse4.
  • Software
    • Host OS: Ubuntu Linux 24.04.
    • Virtualization: QEMU/KVM 8.2.2, libvirt 10.0.0.
    • VM System: AsterNOS-VPP.
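The SSE4 requirement above can be checked from a host terminal before anything else is installed. A minimal sketch using the lscpu command mentioned in the prerequisites:

```shell
# Check whether the host CPU advertises SSE4 support (sse4_1 / sse4_2 flags)
if lscpu | grep -qi 'sse4'; then
  echo "SSE4 supported"
else
  echo "SSE4 NOT supported - this host cannot run AsterNOS-VPP"
fi
```

Virtually any x86-64 CPU from the last decade will report support; the check matters mainly on older or unusual hardware.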

  • PCI Passthrough: A virtualization technology that allows a virtual machine to directly and exclusively control a physical host’s hardware device, providing near-native performance.
  • Inter-VLAN Routing: A core function of a router that enables traffic forwarding between different subnets by creating virtual interfaces (gateways) for different VLANs.
  • Network Address Translation (NAT): Allows devices on a private network to access the internet by sharing the router’s public IP address.

4. Typical Configuration Example: Dual-Subnet Routing and NAT

  1. Deploy an AsterNOS-VPP virtual router with one dedicated WAN port and multiple dedicated physical LAN ports.
  2. Divide the LAN ports into two different VLANs, each connecting to a separate PC.
  3. Ensure PCs in both subnets can access the internet through the router’s NAT function.
  4. Ensure PCs in the two subnets can communicate with each other.
  • Physical Connections:

    • Host ens3f0 (PCI Address 02:00.0) -> Upstream Router (WAN)

    • Host ens3f1 (PCI Address 02:00.1) -> PC1 (LAN1)

    • Host ens3f2 (PCI Address 02:00.2) -> PC2 (LAN2)

    • Host ens3f3 (PCI Address 02:00.3) -> PC3 (LAN3)

Device Type    Model/System              Role/Description
Host Machine   ThinkCentre-M8600t-N000   Ubuntu, QEMU/KVM, libvirt host
VM             AsterNOS-VPP              8 GB RAM, 4-core CPU, 64 GB disk
PC1            Windows PC                LAN1 client, connected to ens3f1
PC2            Windows PC                LAN2 client, connected to ens3f2
PC3            Windows PC                LAN3 client, connected to ens3f3
Network Plan   Interface (AsterNOS)   IP Address / Range   Description
WAN            Ethernet1              192.168.200.178/24   Connects to upstream router 192.168.200.1
LAN1           Vlan100                10.0.1.0/24          Subnet for PC1 and PC3, gateway 10.0.1.1
LAN2           Vlan200                10.0.2.0/24          Subnet for PC2, gateway 10.0.2.1
  1. BIOS/UEFI Settings:

    • Objective: To enable the IOMMU function at the firmware level (BIOS/UEFI), making the hardware feature available to the operating system.
    • Action: Reboot the host and enter the BIOS/UEFI setup. Ensure that both Intel(R) VT-d and Intel(R) Virtualization Technology are enabled.
  2. GRUB Parameter Configuration:

    • Objective: To instruct the Linux kernel to activate and use the IOMMU feature that was enabled in the firmware.

      Terminal window
      # 1. Edit the GRUB configuration file
      sudo nano /etc/default/grub
      # 2. Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and add "intel_iommu=on iommu=pt" inside the quotes.
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"
      # 3. After saving the file, update the GRUB configuration
      sudo update-grub
  3. Configure VFIO Driver:

    • Objective: To use the dedicated vfio-pci driver to take control of the physical NICs intended for passthrough. This prevents the host OS from loading its default drivers, making the NICs available to the VM.

    • Operations:

      • A. Find the NIC’s Device ID:

        Terminal window
        # This command lists all network devices and their IDs
        lspci -nn | grep -i ethernet


        Note: [8086:1521] is the device ID. If your network card is different, replace 8086:1521 in the command below with the ID you found.

      • B. Configure Driver Binding and Blacklist:

        Terminal window
        # Tell the system that devices with ID 8086:1521 should be managed by vfio-pci
        echo "options vfio-pci ids=8086:1521" | sudo tee /etc/modprobe.d/vfio.conf
        # Prevent Ubuntu from loading the default 'igb' driver for this NIC to avoid conflicts
        echo "blacklist igb" | sudo tee /etc/modprobe.d/blacklist-igb.conf
      • C. Force Early Loading of VFIO Modules: Edit /etc/initramfs-tools/modules and add the following lines at the end. (Note: on newer kernels, including the 6.8 series shipped with Ubuntu 24.04, vfio_virqfd has been merged into the core vfio module and can be omitted.)

        Terminal window
        vfio
        vfio_iommu_type1
        vfio_pci
        vfio_virqfd
      • D. Update Configuration and Reboot:

        Terminal window
        sudo update-initramfs -u
        sudo reboot
  4. Verify Host Configuration: After rebooting, run the following command in the host terminal: lspci -nnk | grep -iA3 02:00.

    • Expected Result: The Kernel driver in use: field for all four NICs (from 02:00.0 to 02:00.3) should now show vfio-pci.
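The same check can be scripted for all four ports at once. This is a minimal sketch, assuming the PCI addresses 02:00.0–02:00.3 used throughout this example:

```shell
# Print the kernel driver currently bound to each passthrough candidate.
# After the reboot, every line should end in vfio-pci.
show_nic_drivers() {
  for fn in 0 1 2 3; do
    dev="/sys/bus/pci/devices/0000:02:00.$fn"
    if [ -e "$dev/driver" ]; then
      drv=$(basename "$(readlink -f "$dev/driver")")
    else
      drv="(no driver bound, or device absent on this host)"
    fi
    echo "02:00.$fn -> $drv"
  done
}
show_nic_drivers
```

If any line still shows igb, re-check the blacklist file from step 3-B and rebuild the initramfs before rebooting again.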
4.5.1 Method A: Manual Launch with QEMU (For Quick Tests)

This method starts the virtual machine directly with a single command. It is simple and convenient, suitable for temporary testing and validation.

  1. Launch the Virtual Machine: Run the following QEMU command on the host, replacing the -drive file= path with the actual location of your image file.

    Terminal window
    sudo qemu-system-x86_64 \
    -enable-kvm \
    -m 8192 \
    -smp 4 \
    -cpu host \
    -drive file=/var/lib/libvirt/images/sonic-vpp.img,if=virtio,format=qcow2 \
    -device vfio-pci,host=02:00.0,id=wan-nic \
    -device vfio-pci,host=02:00.1,id=lan-nic1 \
    -device vfio-pci,host=02:00.2,id=lan-nic2 \
    -device vfio-pci,host=02:00.3,id=lan-nic3 \
    -nographic \
    -serial mon:stdio
  2. Interface Mapping: The order of the -device parameters determines the interface names inside the AsterNOS VM. For this example:

    QEMU -device Parameter   PCI Address (Host)   Interface Name (AsterNOS VM)   Planned Use
    host=02:00.0             02:00.0              Ethernet1                      WAN Port
    host=02:00.1             02:00.1              Ethernet2                      LAN Port (PC1)
    host=02:00.2             02:00.2              Ethernet3                      LAN Port (PC2)
    host=02:00.3             02:00.3              Ethernet4                      LAN Port (PC3)

    ⚠️ Important Notice: Network Port Order The order of interfaces such as Ethernet1, Ethernet2, etc., as recognized internally by AsterNOS-VPP, is determined by the order of the -device parameters in the QEMU startup command (i.e., the order of PCI addresses). This order may not match the physical arrangement of network ports on the back panel of your server chassis (e.g., top to bottom, left to right).

    Strong Recommendation: Before proceeding with the next configuration step, connect only one network cable (for example, the WAN port), start the virtual machine, and use the show interface status command to identify which Ethernet interface changes to the up state. This helps you correctly map physical ports to logical ports and avoid configuration failures caused by incorrect cabling.

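As a complement to the single-cable method above, physical ports can also be identified from the host before the VFIO rebind, while the igb driver still owns the interfaces. This is a sketch; the ens3f1 name comes from this example and will differ on other hosts:

```shell
# Blink the port LED of a given interface for 10 seconds so it can be
# located on the chassis. Run this BEFORE binding the NICs to vfio-pci;
# once they are passed through, the host no longer sees ens3f0-ens3f3.
sudo ethtool -p ens3f1 10
```

Repeating this for each ensXfY name gives a reliable map from interface names (and thus PCI addresses) to physical jacks.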

4.5.2 Method B: Persistent Launch with libvirt (Recommended)

This method uses libvirt to manage the virtual machine, enabling persistent operation and auto-start on boot.

  1. Create the VM: Run the following command on the host, replacing the --disk path= value with the actual path to your image file. After executing this command, the virtual machine will be automatically defined and started. You will see the boot process and login prompt directly in your current terminal.

    Terminal window
    sudo virt-install \
    --name AsterNOS \
    --virt-type kvm \
    --memory 8192 \
    --vcpus 4 \
    --cpu host-passthrough \
    --disk path=/var/lib/libvirt/images/sonic-vpp.img,bus=virtio \
    --import \
    --os-variant debian11 \
    --network none \
    --host-device 02:00.0 \
    --host-device 02:00.1 \
    --host-device 02:00.2 \
    --host-device 02:00.3 \
    --nographics
  2. Auto-Start the Virtual Machine: Once the virtual machine has been created successfully, open a new terminal on the host machine and run the following command to set it to start automatically on boot:

    Terminal window
    sudo virsh autostart AsterNOS
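Once defined, the guest can be managed with the standard libvirt lifecycle commands (the VM name AsterNOS comes from the virt-install command above):

```shell
# Everyday lifecycle operations for the AsterNOS guest
sudo virsh list --all          # show the VM and whether it is running
sudo virsh start AsterNOS      # start it if it is shut off
sudo virsh shutdown AsterNOS   # request a graceful shutdown
sudo virsh console AsterNOS    # attach to the serial console (detach with Ctrl+])
```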
4.6 Access and Configure the AsterNOS-VPP VM

Regardless of which method you used to start the virtual machine, the subsequent configuration steps are the same.

  1. Access the Virtual Machine Console: If you used Method A (QEMU), the VM console is already displayed in your current terminal. If you used Method B (libvirt), you can connect to the virtual machine console at any time using the following command in the host terminal:

    Terminal window
    sudo virsh console AsterNOS
  2. Log In and Enter Configuration Mode: At the login prompt, use the default credentials to access the system:

  • Username: admin

  • Password: asteros

  3. Step-by-Step Configuration and Verification:

    • Step A: Launch the command-line interface & Enter configuration mode

      Terminal window
      sonic-cli
      configure terminal
    • Step B: Configure WAN Interface

      Terminal window
      interface ethernet 1
      description WAN_Port
      ip address 192.168.200.178/24
      # Assign this interface to NAT zone 1. By convention, the outside (WAN) interface is a non-zero zone, and inside interfaces are zone 0.
      nat-zone 1
      exit
    • Step C: Configure VLANs and Gateway Interfaces

      Terminal window
      vlan 100
      exit
      vlan 200
      exit
      interface vlan 100
      description LAN1_Gateway_for_PC1_and_PC3
      ip address 10.0.1.1/24
      exit
      interface vlan 200
      description LAN2_Gateway_for_PC2
      ip address 10.0.2.1/24
      exit
    • Step D: Assign Physical LAN Ports to VLANs

      Terminal window
      interface ethernet 2 # Connects to PC1
      description Port_for_PC1
      switchport access vlan 100
      exit
      interface ethernet 3 # Connects to PC2
      description Port_for_PC2
      switchport access vlan 200
      exit
      interface ethernet 4 # Connects to PC3
      description Port_for_PC3
      switchport access vlan 100
      exit
    • Step E: Configure Routing and NAT

      Terminal window
      # Configure the default route to point to the upstream router
      ip route 0.0.0.0/0 192.168.200.1
      # Enable NAT globally
      nat enable
      # Create a NAT pool named 'lan_pool' using the router's public IP
      nat pool lan_pool 192.168.200.178
      # Bind the pool to a policy named 'lan_binding' to apply NAT to all traffic crossing zones
      nat binding lan_binding lan_pool
    • Step F: Save Configuration

      Terminal window
      write
    • Step G: Verify Configuration

      Terminal window
      show ip interfaces
      show ip route
      show vlan summary
      show nat config

      NOTICE: Ensure that the Admin/Oper status of each configured interface shows up/up.


  1. PC1: Set IP to 10.0.1.10, subnet mask to 255.255.255.0 (/24), gateway to 10.0.1.1, and DNS to 8.8.8.8.
  2. PC2: Set IP to 10.0.2.10, subnet mask to 255.255.255.0 (/24), gateway to 10.0.2.1, and DNS to 8.8.8.8.
  3. PC3: Set IP to 10.0.1.11, subnet mask to 255.255.255.0 (/24), gateway to 10.0.1.1, and DNS to 8.8.8.8.
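Windows expects the mask in dotted-decimal form; a /24 prefix corresponds to 255.255.255.0. A quick shell sketch of the conversion:

```shell
# Convert a CIDR prefix length to a dotted-decimal subnet mask
prefix=24
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
printf '%d.%d.%d.%d\n' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8) & 255 ))  $((  mask        & 255 ))
# prints 255.255.255.0
```

Changing prefix reproduces the mask for any other plan (e.g. 16 gives 255.255.0.0).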

This chapter verifies, through a series of tests, that the virtual router’s core functions and performance metrics meet expectations.

We will proceed with the following sequence of tests:

  1. Layer 2 Switching Performance (Intra-VLAN): Use iperf3 to test the transfer rate between PC1 and PC3 to verify switching performance within the same VLAN.
  2. Layer 3 Routing Performance (Inter-VLAN): Use iperf3 to test the transfer rate between PC1 and PC2 to verify routing performance between different VLANs, monitored with router-side commands.
  3. External Connectivity (NAT Verification): Use ping to test if internal PCs can access the public internet, verifying basic NAT connectivity.

5.2 Layer 2 Switching Performance Test (PC1 <-> PC3)
  • Objective: To verify the Layer 2 (L2) data forwarding capability of the virtual router within the same VLAN. Since PC1 and PC3 are both in VLAN 100, communication between them is handled by L2 switching.
  • Procedure:
    1. On PC1 (10.0.1.10), open a command prompt and ensure the iperf3 server is running: iperf3 -s.
    2. On PC3 (10.0.1.11), open a command prompt and execute the client test: iperf3 -c 10.0.1.10 -t 30.
  • Results Analysis: The test rate should stabilize around 950 Mbits/sec, achieving Gigabit line rate.

5.3 Layer 3 Routing Performance Test (PC1 <-> PC2)
  • Objective: To verify the Layer 3 (L3) routing performance of the virtual router between different VLANs. Communication between PC1 (VLAN 100) and PC2 (VLAN 200) requires L3 routing.

  • Procedure:

    1. On PC1 (10.0.1.10), open a command prompt and ensure the iperf3 server is running: iperf3 -s.
    2. On PC2 (10.0.2.10), open a command prompt and execute the client test: iperf3 -c 10.0.1.10 -t 30.
  • Results Analysis: The test rate should also achieve line-rate performance of around 950 Mbits/sec.

  • Router-Side Verification: During the iperf3 test, you can monitor the interface statistics in real-time on the AsterNOS device by running show counters interface.

    Analysis: In the counters output, the receive (RX) rate for Ethernet3 (connected to PC2) is approximately 1000 Mbits/s, which closely matches the iperf3 results.

  • Objective: To verify that the NAT function is effective for all internal VLANs.
  • Ping Connectivity Test:
    1. On PC1 (VLAN 100), ping 8.8.8.8. You should receive successful replies.
    2. On PC2 (VLAN 200), ping 8.8.8.8. You should also receive successful replies.

This guide demonstrates that AsterNOS-VPP successfully combines the robust SONiC ecosystem with the high-performance VPP data plane.

By leveraging virtual machines and PCI passthrough on standard x86 servers, users can easily build an enterprise-grade virtual gateway capable of line-rate Layer 2/3 forwarding and NAT. For network environments seeking high performance, flexibility, and cost efficiency, AsterNOS-VPP is an ideal solution.