# Nginx-VPP Case
## Introduction

This guide provides step-by-step instructions for deploying Nginx services on the Asterfusion open intelligent gateway running AsterNOS-VPP. Leveraging Asterfusion's LDP (LD_PRELOAD) architecture, AsterNOS achieves 100% compatibility with native Nginx configurations while delivering extreme concurrency performance through the VPP user-space data plane and hardware-accelerated crypto offloading.

### What This Guide Will Accomplish

- **Scenario 1: Native Static Web Server** – utilizing the gateway's local storage (NVMe/eMMC) to provide high-performance file hosting services.
- **Scenario 2: High-Performance L7 Reverse Proxy** – acting as an application-aware intermediary that forwards traffic to backend server pools.
- **Scenario 3: HTTPS & SSL Termination** – terminating SSL/TLS securely at the gateway edge to centralize certificate management and relieve backend computational workloads.
- **Scenario 4: Upstream Load Balancing** – distributing incoming traffic across multiple real backend nodes for high availability.

## Network Planning & Global Configuration

Before deploying the specific business scenarios, the underlying network and global Nginx parameters must be initialized.

### Network Topology Plan

| Device / Interface | IP Address / Mask | Role Description |
| --- | --- | --- |
| AsterNOS (eth1) | 192.168.1.166/24 | WAN port (receives client requests) |
| AsterNOS (eth2) | 172.16.10.1/24 | LAN port (connects to backend servers) |
| Backend server | 172.16.10.100/24 | Business server hosting the actual content |

### Global Nginx Service Initialization

Enter the SONiC CLI to start the Nginx process and isolate CPU cores to ensure maximum data-plane performance:

```
sonic# configure terminal

# 1. Start the Nginx core process
sonic(config)# nginx start

# 2. Configure performance parameters (based on standard test specifications)
sonic(config)# nginx worker connections 1500
sonic(config)# nginx keepalive timeout 80

# 3. Isolate CPU cores: allocate 2 cores to Nginx, keep VPP in auto-adaptation mode
sonic(config)# cpu core nginx num 2 vpp num auto

# 4. Apply global settings and save
sonic(config)# nginx reload
sonic(config)# exit
sonic# write
```

## Scenario 1: Native Static Web Server

This scenario demonstrates how AsterNOS serves local files through the VPP LDP path directly to external clients.

### Prepare Persistent Files

> **Note:** since AsterNOS Nginx runs within a Docker container, only directories under `/etc/sonic/` are persistently mounted into the container's file system. Files placed elsewhere may be inaccessible to Nginx.

```
# Enter the Linux bash terminal of the device
admin@sonic:~$ sudo mkdir -p /etc/sonic/nginx_mmc
admin@sonic:~$ sudo bash -c 'echo "<h1>Success! AsterNOS Static Web Server is Alive!</h1>" > /etc/sonic/nginx_mmc/index.html'
```

### Configure and Deploy the Static Web Server

Create the `static_web.conf` file on the host:

```
# Content of /home/admin/static_web.conf
server {
    listen 8081;              # use a dedicated port to avoid conflicts with the management web UI
    server_name asternos;     # catch-all wildcard for testing
    location / {
        root /etc/sonic/nginx_mmc;   # points to the container-visible mount point
        index index.html;
    }
}
```

Apply the configuration using the AsterNOS-specific commands:

```
sonic(config)# nginx update server /home/admin/static_web.conf
sonic(config)# nginx reload
```

### Verify the Service

Client test: execute a request from an external client:

```
curl -v http://192.168.1.166:8081
```

Expected result: the terminal displays the "Success! AsterNOS Static Web Server is Alive!" text with an HTTP/1.1 200 OK status.

## Scenario 2: High-Performance L7 Reverse Proxy

The reverse proxy is the most common gateway deployment: AsterNOS intercepts external traffic and transparently forwards it to the internal backend (172.16.10.100).

> **Note:** avoid having multiple server blocks listening on the same port, which causes traffic conflicts. You can view the current Nginx configuration with the `show nginx status` command. If a previous configuration occupies the target port, remove it first using the delete command:

```
sonic(config)# nginx delete server /home/admin/static_web.conf
```

### Configure and Deploy the Reverse Proxy

Prepare `proxy.conf` following the official test-case syntax:

```
server {
    listen 8080;
    server_name asternos;
    location ^~ / {
        proxy_pass http://172.16.10.100:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Update and reload:

```
sonic(config)# nginx update server /home/admin/proxy.conf
sonic(config)# nginx reload
```

### Verify the Service

1. Start the backend service: run `sudo python3 -m http.server 80` on the backend server (172.16.10.100).
2. Client test: run `curl -v http://192.168.1.166:8080`.
3. Backend log check: return to the backend server's terminal; you should see the access log generated by the Python HTTP server: `172.16.10.1 "GET / HTTP/1.1" 200`.

## Scenario 3: HTTPS & Certificate Management (SSL Termination)

In a typical enterprise environment, backend application servers should not be burdened with managing certificates across dozens of internal nodes. AsterNOS serves as a unified secure boundary, terminating HTTPS traffic at the edge (SSL termination) and proxying plain HTTP to the backend. It also provides a simplified CLI for persistent certificate management inside the containerized environment.

### Upload SSL Certificates
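Before importing a commercial certificate, it is worth checking that the certificate and private key actually match, since a mismatched pair will make the Nginx reload fail. The following is a quick sketch using standard OpenSSL commands; a throwaway self-signed pair is generated here purely as a stand-in for your real `asterfusion.crt`/`asterfusion.key` files:

```shell
# Illustration only: generate a throwaway key/cert pair. In production you
# would instead point these checks at your real asterfusion.crt/.key files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout asterfusion.key \
    -out asterfusion.crt -days 365 -subj "/CN=asternos" 2>/dev/null

# The certificate and key match when their public keys are identical.
cert_pub=$(openssl x509 -in asterfusion.crt -pubkey -noout)
key_pub=$(openssl pkey -in asterfusion.key -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
    echo "certificate and key MATCH"
else
    echo "certificate and key MISMATCH"
fi
```

If the pair matches, proceed with the upload below.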
Do not manually copy certificates into the Linux file system, as the Nginx process runs inside an isolated container. Use the native SONiC CLI to import them into the persistent vault. Assuming you have uploaded your commercial certificate (`asterfusion.crt`) and private key (`asterfusion.key`) to `/home/admin/`:

```
# The system will automatically mount these to /etc/sonic/nginx/cert/ inside the container
sonic(config)# nginx update cert /home/admin/asterfusion.crt
sonic(config)# nginx update cert /home/admin/asterfusion.key
```

### Configure the HTTPS Proxy

Create a new configuration file (e.g., `https_proxy.conf`) using the standardized certificate paths:

```
# Content of /home/admin/https_proxy.conf
server {
    listen 8443 ssl default_server;
    server_name asternos;

    # Core: point to the system-managed certificate vault
    ssl_certificate     /etc/sonic/nginx/cert/asterfusion.crt;
    ssl_certificate_key /etc/sonic/nginx/cert/asterfusion.key;

    location ^~ / {
        proxy_pass http://172.16.10.100:80;
        # Pass essential headers to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;   # tells the backend the original request was HTTPS
    }
}
```

### Deploy the Configuration

Apply the HTTPS proxy configuration and reload the service:

```
sonic(config)# nginx update server /home/admin/https_proxy.conf
sonic(config)# nginx reload
```

### Verify the Service

Client test: run `curl -v -k https://192.168.1.166:8443`.

Expected result: a successful TLS handshake (`SSL connection using TLSv1.3`) followed by an HTTP/1.1 200 OK response carrying the backend content. The backend server logs will show a standard HTTP connection.

## Scenario 4: Upstream Load Balancing (with Advanced Strategies)

In a real-world production environment, critical applications are rarely hosted on a single server. To ensure high availability (HA) and distribute high-concurrency workloads, organizations deploy multiple identical backend servers. AsterNOS natively supports Nginx's upstream module, allowing the gateway to act as an intelligent Layer 7 load balancer that distributes incoming traffic across a pool of real backend nodes using various scheduling algorithms.

### Configure the Upstream Pool

In this scenario, assume you have three backend business servers located at 172.16.10.101, 172.16.10.102, and 172.16.10.103. Create a configuration file (e.g., `load_balancer.conf`). The default distribution method is round-robin, which distributes requests sequentially and evenly:

```
# Content of /home/admin/load_balancer.conf

# 1. Define the pool of real backend servers
upstream enterprise_backend_pool {
    server 172.16.10.101:80;
    server 172.16.10.102:80;
    server 172.16.10.103:80;
}

# 2. Define the gateway's proxy behavior
server {
    listen 8080 default_server;
    server_name asternos;
    location ^~ / {
        proxy_pass http://enterprise_backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

### Deploy the Configuration

Apply the configuration. Always use absolute paths to avoid directory-context errors:

```
sonic# configure terminal

# Apply the new load-balancer configuration
sonic(config)# nginx update server /home/admin/load_balancer.conf
sonic(config)# nginx reload
```

### Verify the Service

Run `curl -s http://192.168.1.166:8080` multiple times. You will see responses alternating evenly among nodes 101, 102, and 103.

### Advanced: Weighted Round-Robin

When backend servers have different hardware specifications, you can assign weights to distribute traffic proportionally. To make the first node process three times more traffic than the others, modify the upstream block in your file:

```
upstream enterprise_backend_pool {
    server 172.16.10.101:80 weight=3;   # receives 60% of traffic
    server 172.16.10.102:80 weight=1;   # receives 20% of traffic
    server 172.16.10.103:80 weight=1;   # receives 20% of traffic
}
```

Reload Nginx, and test results will show a 3:1:1 request-distribution ratio.

### Advanced: IP Hash

For stateful applications (e.g., shopping carts, user logins), ensuring that a specific client always reaches the same backend server is critical. Add the `ip_hash` directive to bind the client's IP to a specific node:

```
upstream enterprise_backend_pool {
    ip_hash;   # enable session persistence
    server 172.16.10.101:80;
    server 172.16.10.102:80;
    server 172.16.10.103:80;
}
```

Reload Nginx, and multiple requests from the same client IP will consistently be routed to the same backend server.

## Conclusion

This guide has demonstrated the comprehensive Layer 7 application-delivery capabilities of AsterNOS-VPP. By walking through real-world scenarios ranging from basic static file hosting to secure SSL termination and high-availability upstream load balancing, we have verified the gateway's readiness for enterprise deployments.
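As a closing aside, the stickiness property that `ip_hash` provides in Scenario 4 can be illustrated off-device with a small shell function. This is an illustrative stand-in using `cksum`, not Nginx's actual hashing algorithm; the backend addresses follow this guide's topology:

```shell
# Map a client address to one of the three backends from Scenario 4.
# cksum gives a stable checksum of the address string, so the same client
# always lands on the same backend -- the idea behind session persistence.
backend_for() {
    idx=$(( $(printf '%s' "$1" | cksum | cut -d' ' -f1) % 3 ))
    echo "172.16.10.10$((idx + 1))"
}

backend_for 192.168.1.50
backend_for 192.168.1.50    # same client address -> same backend
backend_for 192.168.1.99    # a different client may map to another node
```

Repeated calls with the same address always print the same backend, which is exactly the behavior verified against the gateway in Scenario 4.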
