Azure Application Gateway (Layer 7 Load Balancer)

Welcome to the Azure Networking Series! This is the 6th blog in this series.

Azure Application Gateway is a web traffic load balancer and Application Delivery Controller (ADC) that enables you to manage traffic to your web applications. This blog post is based on a case study and solution design for one of my recent client engagements. Azure Application Gateway offers Layer 7 load balancing for HTTP and HTTPS traffic, and you can route traffic based on the incoming URL. HTTP requests are distributed to the backend servers in a round-robin fashion. In cases where traffic needs to stay pinned to a specific server, such as a shopping cart, cookie-based session affinity is used. Azure Application Gateway also provides SSL termination and end-to-end SSL encryption, offloading the computationally expensive work of encrypting and decrypting SSL traffic from the backend servers.

Traditional load balancers operate at the transport layer (OSI Layer 4 – TCP and UDP) and route traffic from a source IP address and port to a destination IP address and port. In the Azure Load Balancer blog, we took an in-depth look at configuring Layer 4 load balancing with Azure Load Balancer and DNS-based load balancing with Azure Traffic Manager.

Azure Application Gateway Features

These are some of the features of the application gateway. In the walk-through section, we’ll configure use cases for each of these features.

SSL Termination

SSL offloading terminates SSL sessions at the gateway, relieving your backend servers of that work; with end-to-end SSL encryption, traffic to the backend is re-encrypted so that sensitive data remains protected in transit.

URL-Based Routing (Layer 7)

URL path-based routing allows you to route traffic to backend server pools based on the URL path of the request.

Redirection

A common scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all communication between the application and its users occurs over an encrypted path.

Multiple-Site Hosting

Multiple-site hosting enables you to configure more than one web application on the same application gateway instance. Each website or domain can be directed to its own backend pool.

Cookie-Based Session Affinity

The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By using gateway-managed cookies, Azure Application Gateway can direct subsequent traffic from a user session to the same server for processing.

Azure Load-Balancing Methods

In the Azure Traffic Manager and Load Balancer blog we covered various load-balancing options in Azure.

There are different options to distribute network traffic using Microsoft Azure. These options work differently from each other, offer different feature sets, and support different scenarios. They can be used in isolation or in combination.

  • Azure Load Balancer works at the transport layer (Layer 4 in the OSI network reference stack). It provides network-level distribution of traffic across instances of an application running in the same Azure data center.
  • Azure Application Gateway works at the application layer (Layer 7 in the OSI network reference stack). It acts as a reverse-proxy service, terminating the client connection and forwarding requests to backend endpoints.
  • Traffic Manager works at the DNS level. It uses DNS responses to direct end-user traffic to globally distributed endpoints. Clients then connect to those endpoints directly.

Application Gateway SKU and Pricing

Application Gateway comes in two SKUs – Standard and WAF.

There are three instance sizes – Small, Medium and Large – and pricing is based on the instance size and the amount of data processed. You can add up to 10 instances and scale horizontally.

The following table shows an average performance throughput for each application gateway instance with SSL offload enabled:

Average back-end page response size   Small      Medium     Large
6 KB                                  7.5 Mbps   13 Mbps    50 Mbps
100 KB                                35 Mbps    100 Mbps   200 Mbps

Create the Application Gateway

From the Azure Portal, go to All services -> Application Gateway -> Add

Note: The application gateway requires its own dedicated subnet (here, Application-GW-Subnet) in the virtual network.


It may take several minutes for the application gateway to be created. After the application gateway is created, you will see this basic configuration.

  • BackendPool – An application gateway must have at least one backend address pool.
  • BackendHttpSettings – Specifies that port 80 and the HTTP protocol are used for communication.
  • HttpListener – The default listener associated with BackendPool.
  • FrontendIP – Assigns PublicIPAddress to HttpListener.
  • HttpRule – The default routing rule that is associated with HttpListener.
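If you prefer scripting over the portal, a deployment along these lines can be sketched with the Azure CLI. All resource names, the resource group, and the VNet/subnet below are placeholders for illustration:

```shell
# Create a public IP for the gateway frontend (names are placeholders)
az network public-ip create \
  --resource-group MyResourceGroup \
  --name AppGw-PIP \
  --allocation-method Dynamic

# Create the application gateway in its dedicated subnet;
# this also creates the default backend pool, HTTP settings,
# listener and rule described above
az network application-gateway create \
  --resource-group MyResourceGroup \
  --name MyAppGateway \
  --sku Standard_Medium \
  --capacity 2 \
  --vnet-name MyVNet \
  --subnet Application-GW-Subnet \
  --public-ip-address AppGw-PIP \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http
```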

In the next section we will customize the application gateway to meet our requirements.

Use Case Walk-through

Now we will configure the Azure Application Gateway based on the following reference architecture, and use this design as a reference for step 1 through step 5.

Application Gateway Reference Architecture
Step #1

Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDNs), and multi-tenant backends like Azure Web Apps.

Create backend pools for the Image Servers, Default RGB Pool, Domain1 Pool and Domain2 Pool, as shown below.

Backend Pool
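The same pools can be added from the CLI. The pool names follow the reference architecture; the gateway name, resource group, and server IPs are illustrative placeholders:

```shell
# One address-pool command per backend pool (IPs are illustrative)
az network application-gateway address-pool create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name ImageServersPool --servers 10.0.1.4 10.0.1.5

az network application-gateway address-pool create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name DefaultRGBPool --servers 10.0.1.6 10.0.1.7

az network application-gateway address-pool create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name Domain1Pool --servers 10.0.2.4

az network application-gateway address-pool create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name Domain2Pool --servers 10.0.2.5
```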


Step #2

The HTTP settings define cookie-based session affinity, which routes traffic from a given session to the same backend server. They also define the port on the backend pool to which traffic is routed. You can configure Azure Application Gateway for end-to-end SSL communication here as well.

Http Settings – Certificate in PEM format

When configured for end-to-end SSL, the application gateway terminates the SSL session at the gateway, decrypts the user traffic, and then re-encrypts it before forwarding it to the backend servers.

Create four HTTP settings, one for each of TCP ports 80, 443, 8080 and 8443, as shown below.

Http Settings
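Two of the four HTTP settings can be sketched with the CLI as follows (the other ports follow the same pattern; all names are placeholders):

```shell
# Plain HTTP settings; cookie-based affinity keeps a session on one server
az network application-gateway http-settings create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name HttpSetting80 --port 80 --protocol Http \
  --cookie-based-affinity Enabled

# HTTPS settings for end-to-end SSL: traffic is re-encrypted to the backend
az network application-gateway http-settings create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name HttpsSetting443 --port 443 --protocol Https \
  --cookie-based-affinity Enabled
```

Repeat with `--port 8080 --protocol Http` and `--port 8443 --protocol Https` for the remaining two settings.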


Step #3

The Azure Application Gateway frontend IP can be either a public or a private IP address. We will bind this IP to multiple listeners in step 4.

Front End IP
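For reference, an additional private frontend IP can be attached from the CLI; the address and resource names here are assumed placeholders:

```shell
# Attach a private frontend IP drawn from the gateway's dedicated subnet
az network application-gateway frontend-ip create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name PrivateFrontendIP \
  --vnet-name MyVNet --subnet Application-GW-Subnet \
  --private-ip-address 10.0.0.10
```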


Bind Listeners to FrontEnd IP
Step #4

With multi-site hosting, the Azure Application Gateway can route the traffic to a particular backend pool based on the domain name.

Create multi-site listeners as shown below for domain1.com and domain2.com.

Create basic listeners for HTTP and HTTPS traffic on ports 80, 443 and 8443, and bind the certificate (in .pfx format) to the HTTPS listener.

We will later bind these listeners to rules.

Listeners – Basic and Multi-Site
Certificate in .pfx format
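A CLI sketch of the certificate upload and one multi-site HTTPS listener (certificate path, password, and listener names are placeholders):

```shell
# Upload the PFX certificate used by the HTTPS listeners
az network application-gateway ssl-cert create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name MySslCert --cert-file ./mycert.pfx --cert-password 'MyCertPassword'

# Frontend port for HTTPS
az network application-gateway frontend-port create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name Port443 --port 443

# Multi-site listener: requests are matched on the Host header
az network application-gateway http-listener create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name Domain1Listener --frontend-port Port443 \
  --ssl-cert MySslCert --host-name domain1.com
```

A second listener for domain2.com is created the same way with `--host-name domain2.com`.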

Step #5

Finally, rules map listeners to backend pools. URL path-based routing allows you to route traffic to backend server pools based on the URL path of the request.

Create Basic and Path-based rules as shown below.

Rules – Basic and Path Based

Create a Path-based rule to route https://nnlab.com:8443/images to the images pool.

URL Based Routing
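The path-based rule can be sketched with the CLI using a URL path map; the map, listener, pool, and settings names below are placeholders:

```shell
# URL path map: /images/* goes to the image pool, everything else to default
az network application-gateway url-path-map create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name ImagesPathMap \
  --rule-name ImagesRule \
  --paths '/images/*' \
  --address-pool ImageServersPool --http-settings HttpsSetting8443 \
  --default-address-pool DefaultRGBPool --default-http-settings HttpsSetting8443

# Path-based routing rule tying the 8443 listener to the path map
az network application-gateway rule create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name PathBasedRule --rule-type PathBasedRouting \
  --http-listener Listener8443 --url-path-map ImagesPathMap
```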


Create an HTTP-to-HTTPS redirection rule as shown below.

HTTP to HTTPS redirection

Azure Application Gateway redirection support is not limited to HTTP-to-HTTPS redirection alone. It provides a generic redirection mechanism that allows traffic received on one listener to be redirected to another listener on the application gateway. Redirection to an external site is also supported.
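The redirection above can be expressed in the CLI as a redirect configuration plus a basic rule; the listener names are placeholder assumptions:

```shell
# Permanent (301) redirect that targets the HTTPS listener, preserving
# the original path and query string
az network application-gateway redirect-config create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name HttpToHttps --type Permanent \
  --target-listener Domain1HttpsListener \
  --include-path true --include-query-string true

# Rule that applies the redirect to requests arriving on the HTTP listener
az network application-gateway rule create \
  --resource-group MyResourceGroup --gateway-name MyAppGateway \
  --name HttpRedirectRule --rule-type Basic \
  --http-listener Domain1HttpListener --redirect-config HttpToHttps
```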

Conclusion

Use Azure Application Gateway for applications where you have to maintain session affinity (shopping carts) and for SSL-intensive workloads. Use it in concert with Azure Load Balancer for multi-tier applications.

For organizations looking to reduce costs, Azure Application Gateway supports most common use cases, as described above. For advanced use cases, Azure supports third-party load balancers such as Citrix VPX (covered in another blog series) and the F5 Application Delivery Controller as marketplace images that you can readily deploy.

I will be covering WAF (Web Application Firewall) in a future post in this series; when used in conjunction with Azure Application Gateway, it helps prevent common web attacks such as SQL injection and cross-site scripting.

Note: I’d like to thank my manager John Rudenauer and leaders from our Navisite Product Management – William Toll and Umang Chhibber, Marketing team – Chris Pierdominici and Carole Bailey, and Professional Services team – Mike Gallo for their continued support and direction.

Nehali Neogi

Principal Network Engineer at Navisite

Nehali Neogi is a Principal Engineer at Navisite, leading many of their global initiatives on building the next generation of hybrid cloud services. She enjoys designing and architecting reliable and highly available solutions for Navisite's clients. She is the resident expert on all things networking and beyond for VMware NSX, Cisco, and Azure-based hybrid cloud offerings. Her interests are cloud technologies, Software Defined Networking, full-stack engineering, and realizing the transition to DevOps and systems automation. Nehali holds an Expert Level Certification in VMware NSX (VCIX-NV) and a Masters in Computer Engineering from UMass Lowell.