# HQoS-VPP Case
## Introduction

This guide provides a step-by-step tutorial for configuring Hierarchical Quality of Service (HQoS) on the Asterfusion ET2500 open intelligent gateway running AsterNOS. Unlike traditional "flat QoS", which only manages traffic based on interface or packet priority, HQoS introduces the concept of organization into your network: it allows you to model your traffic policies on real-world structures (tenants, departments, and users), ensuring critical business isolation in congested environments.

### What this guide will accomplish

By following this guide, you will upgrade a standard Layer 3 gateway into an intelligent, multi-tenant traffic manager. You will learn how to map the logical 4-level scheduler hierarchy (port > group > user > queue) to enforce strict service-level agreements (SLAs). The scenarios covered are:

- **Phase 1: multi-tenant resource isolation (group shaping).** We configure two distinct departments, "R&D" and "Guest", and demonstrate that the Guest zone is strictly capped at a specific bandwidth, preventing it from affecting the R&D department even when the guests try to flood the network.
- **Phase 2: micro-level service assurance (queue scheduling).** Within the R&D bandwidth pipe, we implement a "voice first" policy and verify that latency-sensitive traffic strictly preempts bulk data during congestion.
- **Phase 3: traffic classification & mapping.** We use access control lists (ACLs) to classify traffic from different physical subnets and map it into the respective HQoS logic branches.

### Supported platforms & modes

AsterNOS HQoS is designed with a unified architecture that adapts to your underlying hardware:

- **Hardware mode:** on supported platforms (e.g., ET2500), HQoS policies can be offloaded to the NPU for zero-CPU-overhead execution.
- **Software mode:** on standard VMs or non-NPU interfaces, HQoS runs in software mode (VPP-based), providing identical functionality with
CPU-dependent performance.

> Note: this guide uses a virtual machine environment for demonstration.

## Preparation and environment overview

### Network topology plan

The following diagram illustrates the logical and physical hierarchy we will implement. It maps physical ports to logical "zones" with specific bandwidth guarantees.

### Target configuration plan

| Device / Interface | IP address / Subnet | Gateway | Role |
|---|---|---|---|
| AsterNOS (eth1) | 192.168.200.166/24 | 192.168.200.1 | WAN uplink (NAT outside / HQoS root port) |
| AsterNOS (eth2) | 10.10.10.1/24 | N/A | R&D gateway (high-priority zone / NAT inside) |
| AsterNOS (eth3) | 10.20.20.1/24 | N/A | Guest gateway (restricted zone / NAT inside) |
| R&D PC | 10.10.10.100/24 | 10.10.10.1 | Traffic source A (simulating VIP users) |
| Guest PC | 10.20.20.100/24 | 10.20.20.1 | Traffic source B (simulating guest users) |
| Upstream server | 192.168.200.153 | — | Traffic target (iperf3 server) |

### Basic network & NAT setup

Before configuring HQoS, we must ensure basic connectivity and NAT are working, as HQoS relies on the underlying network flow. We configure port 2 for R&D and port 3 for Guest.

```
# 1. Global NAT enable
sonic(config)# nat enable

# 2. Configure the NAT pool (using the WAN IP)
sonic(config)# nat pool pool1 192.168.200.166

# 3. Configure the NAT binding
sonic(config)# nat binding bind1 pool1

# 4. Configure the WAN interface (Ethernet 1)
sonic(config)# interface ethernet 1
sonic(config-if-1)# ip address 192.168.200.166/24
sonic(config-if-1)# nat zone 1
sonic(config-if-1)# exit

# 5. Configure LAN interface 1 (Ethernet 2, R&D)
sonic(config)# interface ethernet 2
sonic(config-if-2)# ip address 10.10.10.1/24
sonic(config-if-2)# exit
```
```
# 6. Configure LAN interface 2 (Ethernet 3, Guest)
sonic(config)# interface ethernet 3
sonic(config-if-3)# ip address 10.20.20.1/24
sonic(config-if-3)# exit

# 7. Configure the default route
sonic(config)# ip route 0.0.0.0/0 192.168.200.1
```

## Building the HQoS hierarchy

We construct the HQoS policy from the bottom up: maps > user profile > group profile > port profile.

### Step 1: QoS mapping (DSCP to TC)

Define how packets are mapped to internal traffic classes.

```
# Map DSCP 0 (data) to TC 0
sonic(config)# qos map dscp-to-tc voice-prio 0 0

# Map DSCP 46 (voice) to TC 7
sonic(config)# qos map dscp-to-tc voice-prio 46 7
```

### Step 2: User profiles (queue scheduling)

We define two user templates: one for standard employees (R&D), who need voice priority, and one for guests, who only get best-effort service.

```
# Template for R&D employees
sonic(config)# hqos user-profile emp-standard

# Bind the map for egress queue alignment
sonic(config-user-emp-standard)# qos map bind dscp-to-tc voice-prio

# Queue 0: DWRR (data)
sonic(config-user-emp-standard)# tc queue 0 mode dwrr 1

# Queue 7: strict priority (voice)
sonic(config-user-emp-standard)# tc queue 7 mode strict
sonic(config-user-emp-standard)# exit

# Template for guests
sonic(config)# hqos user-profile emp-guest
sonic(config-user-emp-guest)# tc queue 0 mode dwrr 1
sonic(config-user-emp-guest)# exit
```

### Step 3: User group profiles (department isolation)

Here we define the bandwidth limits for each department.
```
# Group 1: R&D department
# R&D group limit: 100 Mbps (12,500,000 bytes/s)
# R&D user limit: 50 Mbps (6,250,000 bytes/s)
sonic(config)# hqos user-group-profile rd-dept
sonic(config-group-rd-dept)# user-profile emp-standard shaping pir 6250000 pbs 1000000
sonic(config-group-rd-dept)# exit

# Group 2: Guest zone
# Guest group limit: 25 Mbps (3,125,000 bytes/s)
sonic(config)# hqos user-group-profile guest-zone
sonic(config-group-guest-zone)# user-profile emp-guest shaping pir 3125000 pbs 1000000
sonic(config-group-guest-zone)# exit
```

> Note: PIR is in bytes/sec; PBS is in bytes. We set PBS to 1 MB to ensure smooth TCP performance.

### Step 4: Port profile (global level)

Define the physical port limit and attach the department groups.

```
sonic(config)# hqos profile wan-policy

# Global port rate
sonic(config-hqos-wan-policy)# global rate 125000000

# Attach the R&D group (limit: 100 Mbps)
sonic(config-hqos-wan-policy)# user-group-profile rd-dept shaping pir 12500000 pbs 1000000

# Attach the Guest group (limit: 25 Mbps)
sonic(config-hqos-wan-policy)# user-group-profile guest-zone shaping pir 3125000 pbs 1000000
sonic(config-hqos-wan-policy)# exit

# Enable HQoS globally
sonic(config)# hqos enable
```

## Classification & application

Now we map the subnets to the correct profiles and apply them to interfaces.

### Step 1: Classification (ACL)

Identify traffic from the LAN subnets and mark it with the correct user profile.

```
sonic(config)# access-list l3 download-class ingress

# Rule 1: map 10.10.10.x (port 2) to the R&D user profile
sonic(config-l3-acl-download-class)# rule 1 src-ip 10.10.10.0/24 packet-action permit set hqos user emp-standard

# Rule 2: map 10.20.20.x (port 3) to the Guest user profile
sonic(config-l3-acl-download-class)# rule 2 src-ip 10.20.20.0/24 packet-action permit set hqos user emp-guest
sonic(config-l3-acl-download-class)# exit
```
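The byte-based shaping values used above are easy to mis-compute, so here is a minimal sketch of the Mbps-to-bytes/s conversion. The helper name `mbps_to_pir` is hypothetical and not part of AsterNOS; it only reproduces the arithmetic behind the `shaping pir` values in this guide.

```python
# Illustrative helper (not an AsterNOS command): convert Mbps limits into the
# byte-based PIR values the HQoS shaping commands above expect.

def mbps_to_pir(mbps: float) -> int:
    """Convert megabits per second to a PIR in bytes per second.

    1 Mbps = 1,000,000 bits/s = 125,000 bytes/s.
    """
    return int(mbps * 1_000_000 / 8)

# Values used in this guide:
print(mbps_to_pir(100))   # R&D group limit    -> 12500000
print(mbps_to_pir(50))    # R&D user limit     -> 6250000
print(mbps_to_pir(25))    # Guest group limit  -> 3125000
print(mbps_to_pir(1000))  # Port rate (1 Gbps) -> 125000000
```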
### Step 2: Interface binding

Apply the configuration to the physical ports.

```
# WAN interface
sonic(config)# interface ethernet 1
sonic(config-if-1)# hqos profile wan-policy
sonic(config-if-1)# exit

# LAN interface 1 (port 2, R&D)
sonic(config)# interface ethernet 2
sonic(config-if-2)# qos map bind dscp-to-tc voice-prio

# Apply the ACL
sonic(config-if-2)# acl download-class priority 10
sonic(config-if-2)# exit

# LAN interface 2 (port 3, Guest)
sonic(config)# interface ethernet 3

# Apply the ACL
sonic(config-if-3)# acl download-class priority 10
sonic(config-if-3)# exit
```

## Verification scenario 1: Inter-department isolation

This test validates the "firewall" between departments. We demonstrate that even when the Guest zone attempts to saturate the network with excessive traffic (a DoS simulation), the R&D department remains completely unaffected.

### Test setup

- **Bottleneck:** none at the port level (1 Gbps), but strict shaping at the group level.
- **Victim (Guest zone):** configured with a hard cap of 25 Mbps.
- **Observer (R&D dept):** configured with a guaranteed 100 Mbps.
- **Attack scenario:** the Guest PC attempts to blast 100 Mbps of traffic while R&D is transferring critical data at 40 Mbps.

### Validation command

We execute these commands simultaneously on two different terminals (representing port 3 and port 2).

```
# Terminal A (Guest, port 3): attempt to use 100M
iperf3 -c <server-ip> -p 5202 -u -b 100M -t 20

# Terminal B (R&D, port 2): normal usage, 40M
iperf3 -c <server-ip> -p 5201 -u -b 40M -t 20
```

### Observed result

The screenshots below illustrate perfect isolation.

**The Guest (suppressed).** As shown in the first screenshot, despite requesting 100 Mbps, the guest traffic is ruthlessly throttled by the HQoS group shaper:

- **Throughput:** flatlines at 23.4 Mbps (the effective payload rate for a 25 Mbps shaper).
- **Packet loss:** high loss (~77%) confirms that excess traffic is dropped at the ingress, preventing it from consuming shared resources.

**The R&D department (unaffected).** Simultaneously, the R&D traffic flows without interruption:

- **Throughput:** maintains a rock-solid 40.0 Mbps.
- **Packet loss:** 0%. The congestion in the Guest zone does not bleed over into the R&D zone.

## Verification scenario 2: Service assurance (R&D internal)

To verify the HQoS logic within the R&D department, we simulate a congestion scenario in which the total traffic demand exceeds the configured user-shaper bandwidth.

### Test setup

- **Bottleneck:** R&D user profile limited to 50 Mbps (PIR).
- **Traffic A (VIP voice):** 30 Mbps stream (DSCP 46, queue 7, strict priority).
- **Traffic B (bulk data):** 40 Mbps stream (DSCP 0, queue 0, DWRR).
- **Total demand:** 70 Mbps > 50 Mbps (congestion triggered!).

### Validation command

We initiate the bulk data stream first to saturate the link, then inject the voice stream to observe preemption.

```
# Terminal 1: bulk data (target port 5201)
iperf3 -c <server-ip> -p 5201 -u -b 40M --dscp 0 -t 30

# Terminal 2: voice (target port 5202), the "VIP"
# Start this 10 seconds after Terminal 1
iperf3 -c <server-ip> -p 5202 -u -b 30M --dscp 46 -t 10
```

### Observed result

As shown in the screenshot below, the HQoS scheduler exhibits textbook strict-priority behavior:

- **Phase 1 (0s-9s):** the bulk data stream (DSCP 0) runs alone, utilizing 40 Mbps with 0% packet loss.
- **Phase 2 (congestion):** as soon as the voice stream (DSCP 46) starts, it instantly claims its required 30 Mbps.
- **The squeeze:** the bulk data stream is immediately throttled down. Math: 50 Mbps (total) - 30 Mbps (VIP) = 20 Mbps (remaining). Actual: the iperf3 output shows the bulk stream stabilizing at 17.8 Mbps. Note: the difference between 20 Mbps (physical) and 17.8 Mbps (throughput) is due to Ethernet/IP/UDP header overheads.
- **Phase 3 (recovery):** once the voice stream stops, the bulk data stream immediately recovers to full capacity.
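The overhead note above can be sanity-checked with a rough calculation. This is an illustrative sketch, not AsterNOS code: the framing constants assume standard Ethernet, and the exact measured figure also depends on the iperf3 datagram size and on which bytes the shaper counts, so it will not reproduce 17.8 Mbps precisely, only the direction and rough size of the effect.

```python
# Rough model: why a shaped line rate carries less iperf3 (UDP payload)
# throughput than its nominal value. All constants are assumptions based on
# standard Ethernet/IPv4/UDP framing.

ETH_OVERHEAD = 14 + 4 + 8 + 12   # MAC header + FCS + preamble + inter-frame gap
IP_UDP_OVERHEAD = 20 + 8         # IPv4 header + UDP header

def goodput_mbps(line_rate_mbps: float, payload_bytes: int) -> float:
    """UDP payload throughput visible to iperf3 for a given shaped line rate."""
    wire_bytes = payload_bytes + IP_UDP_OVERHEAD + ETH_OVERHEAD
    return line_rate_mbps * payload_bytes / wire_bytes

# With iperf3's typical ~1470-byte UDP datagrams, a 20 Mbps line rate carries
# roughly 19 Mbps of payload; smaller datagrams lose proportionally more.
print(round(goodput_mbps(20, 1470), 2))
```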
## Conclusion

This guide has successfully demonstrated the implementation of a 4-level Hierarchical Quality of Service (HQoS) architecture on the Asterfusion ET2500 gateway. It verifies the comprehensive QoS capabilities of AsterNOS, enabling granular traffic management from basic port limits to complex flow-based and elastic bandwidth strategies. This validated configuration transforms the gateway into a powerful, service-aware edge device capable of enforcing complex service-level agreements (SLAs) in multi-tenant environments.
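As a recap, the scheduling behavior verified in Scenario 2 can be modeled in a few lines. This is a minimal, assumption-laden sketch (the function `allocate` is hypothetical, not AsterNOS code): a capped user shaper with one strict-priority queue served first and a best-effort queue taking the remainder.

```python
# Minimal model of strict-priority scheduling under a user shaper:
# voice is served first up to capacity, bulk data gets whatever remains.

def allocate(capacity: float, voice_demand: float, data_demand: float):
    """Return (voice_mbps, data_mbps) granted by a strict-priority scheduler."""
    voice = min(voice_demand, capacity)
    data = min(data_demand, capacity - voice)
    return voice, data

# Congestion phase: 30M voice + 40M data into a 50M pipe.
print(allocate(50, 30, 40))  # voice keeps 30, data is squeezed to 20
# Recovery phase: voice stops, data reclaims its full 40M demand.
print(allocate(50, 0, 40))   # (0, 40)
```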
