Set Up Network and HTTP Load Balancers

LAB 6

Overview

In this hands-on lab you'll learn the differences between a network load balancer and an HTTP load balancer and how to set them up for your applications running on Compute Engine virtual machines (VMs).

There are several ways you can load balance on Google Cloud. This lab takes you through the setup of two of them: a Network Load Balancer and an HTTP Load Balancer.

Lab

Set the default region and zone for all resources

Run the following commands:

gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

Create multiple web server instances

Create two new VMs in your default zone. The first, www1, is created with the command below; the second follows the same pattern (see the sketch after this block).

gcloud compute instances create www1 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html><html><body><h1>www1</h1></body></html>' | tee /var/www/html/index.html"
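
A sketch of the second VM, assuming it is named www2 and mirrors www1 with only the name and page content changed:

gcloud compute instances create www2 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html><html><body><h1>www2</h1></body></html>' | tee /var/www/html/index.html"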

Create a firewall rule to allow external traffic to the VM instances:
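
A sketch of the rule, reusing the network-lb-tag target tag from the VMs above (the rule name www-firewall-network-lb is illustrative):

# Allow HTTP traffic to any instance carrying the network-lb-tag tag
gcloud compute firewall-rules create www-firewall-network-lb \
  --target-tags network-lb-tag \
  --allow tcp:80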

Run the following command to list IPs of all instances:
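
For example, the standard listing command shows each VM's address in the EXTERNAL_IP column:

gcloud compute instances list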

Configure the load balancing service

Create a static external IP address for your load balancer:
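
A sketch, with network-lb-ip-1 as an assumed name for the address:

# Reserve a regional static external IP for the network load balancer
gcloud compute addresses create network-lb-ip-1 \
  --region us-central1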

Add a legacy HTTP health check resource:
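
A sketch, with basic-check as an assumed name for the health check:

gcloud compute http-health-checks create basic-check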

Add a target pool in the same region as your instances. Run the following to create the target pool and use the health check, which is required for the service to function:
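
A sketch, assuming the pool is called www-pool and references the basic-check health check created above:

gcloud compute target-pools create www-pool \
  --region us-central1 \
  --http-health-check basic-check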

Add the instances to the pool:
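
Assuming the two VMs created earlier (www1 and www2) and the default zone set at the start of the lab:

gcloud compute target-pools add-instances www-pool \
  --instances www1,www2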

Add a forwarding rule:
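
A sketch that ties together the www-rule name referenced later, the static address, and the target pool:

# Forward traffic arriving on port 80 of the static IP to the target pool
gcloud compute forwarding-rules create www-rule \
  --region us-central1 \
  --ports 80 \
  --address network-lb-ip-1 \
  --target-pool www-pool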

Sending traffic to your instances

Enter the following command to view the external IP address of the www-rule forwarding rule used by the load balancer:
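
One way to do this, printing only the address field:

gcloud compute forwarding-rules describe www-rule \
  --region us-central1 \
  --format="get(IPAddress)"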

We can see below that every time the resource is requested, we get a response from a different server!
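
One way to reproduce this yourself (a sketch; the IPADDRESS variable is illustrative):

# Capture the load balancer's external IP, then request it in a loop
IPADDRESS=$(gcloud compute forwarding-rules describe www-rule \
  --region us-central1 --format="get(IPAddress)")
while true; do curl -m1 $IPADDRESS; done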

Create an HTTP load balancer

HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed globally and operate together using Google's global network and control plane. You can configure URL rules to route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, if that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.

To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides VMs running the backend servers of an external HTTP load balancer. For this lab, backends serve their own hostnames.

Create the load balancer template with the following command:
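
A sketch of such a template (the name lb-backend-template, the machine type, and the image family are illustrative); the startup script serves the VM's own hostname, as described above:

gcloud compute instance-templates create lb-backend-template \
  --machine-type e2-medium \
  --network default \
  --tags allow-health-check \
  --image-family debian-9 \
  --image-project debian-cloud \
  --metadata startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    # Serve this VM''s own hostname, fetched from the metadata server
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | tee /var/www/html/index.html
    systemctl restart apache2'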

Create a managed instance group based on the template.
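
A sketch, assuming the group is named lb-backend-group and runs two instances:

gcloud compute instance-groups managed create lb-backend-group \
  --template lb-backend-template \
  --size 2 \
  --zone us-central1-a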

Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.

Create the fw-allow-health-check firewall rule.

This is an ingress rule that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This lab uses the target tag allow-health-check to identify the VMs.
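
A sketch of the rule using those ranges and that tag (the default network and port 80 are assumptions):

# Allow Google Cloud health checkers to reach the backends on port 80
gcloud compute firewall-rules create fw-allow-health-check \
  --network default \
  --action allow \
  --direction ingress \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags allow-health-check \
  --rules tcp:80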

Set up a global static external IP address that people can use to reach your load balancer and take note of it:
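
A sketch, with lb-ipv4-1 as an assumed address name; the describe command prints the address to note down:

gcloud compute addresses create lb-ipv4-1 \
  --ip-version IPV4 \
  --global

gcloud compute addresses describe lb-ipv4-1 \
  --format="get(address)" \
  --global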

Create a health check for the load balancer:
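
A sketch, assuming an HTTP health check named http-basic-check on port 80:

gcloud compute health-checks create http http-basic-check \
  --port 80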

Create a backend service:
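
A sketch, with web-backend-service as an assumed name, wired to the health check above:

gcloud compute backend-services create web-backend-service \
  --protocol HTTP \
  --port-name http \
  --health-checks http-basic-check \
  --global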

Add your instance group as the backend to the backend service:
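
Again assuming the lb-backend-group name from earlier:

gcloud compute backend-services add-backend web-backend-service \
  --instance-group lb-backend-group \
  --instance-group-zone us-central1-a \
  --global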

Create a URL map to route the incoming requests to the default backend service:
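
A sketch, with web-map-http as an assumed map name:

gcloud compute url-maps create web-map-http \
  --default-service web-backend-service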

Create a target HTTP proxy to route requests to your URL map, and create a global forwarding rule to route incoming requests to the proxy:
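
A sketch of both steps, assuming the names http-lb-proxy and http-content-rule:

gcloud compute target-http-proxies create http-lb-proxy \
  --url-map web-map-http

# Send traffic arriving on port 80 of the global static IP to the proxy
gcloud compute forwarding-rules create http-content-rule \
  --address lb-ipv4-1 \
  --global \
  --target-http-proxy http-lb-proxy \
  --ports 80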

Visiting the static public IP we took note of earlier returns a page showing which backend the request was served from, as shown below.

http://34.149.196.121/

Note: by the time you're reading this, the lab resources have been torn down, so the link above is likely dead.
