In this hands-on lab you'll learn the differences between a network load balancer and an HTTP load balancer and how to set them up for your applications running on Compute Engine virtual machines (VMs).
There are several ways you can load balance on Google Cloud. This lab takes you through the setup of the following load balancers:

- Network Load Balancer
- HTTP(S) Load Balancer
Set the default region and zone for all resources with the following commands:
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
Add a target pool in the same region as your instances. Run the following to create the target pool and attach the health check, which is required for the service to function:
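A minimal sketch of this step, assuming a legacy HTTP health check named basic-check and a target pool named www-pool in us-central1 (both names are assumptions, not given in the original text):

# Create a legacy HTTP health check (name "basic-check" is an assumption)
gcloud compute http-health-checks create basic-check

# Create the target pool in the same region as the instances and
# attach the health check (pool name "www-pool" is an assumption)
gcloud compute target-pools create www-pool \
    --region us-central1 \
    --http-health-check basic-check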
We can see below that every time the resource is requested, we get a response from a different server!
while true; do curl -m1 34.121.197.56; done
Create an HTTP load balancer
HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed globally and operate together using Google's global network and control plane. You can configure URL rules to route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, if that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.
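To illustrate URL rules, here is a hedged sketch that sends /video/* requests to a separate backend service. It assumes an existing URL map named web-map (created later in this lab) plus hypothetical backend services named web-backend-service and video-backend-service; the host video.example.com is also an assumption:

# Route /video/* to a dedicated backend service; everything else
# falls through to the default service (all names are hypothetical)
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name video-matcher \
    --default-service web-backend-service \
    --path-rules "/video/*=video-backend-service" \
    --new-hosts video.example.com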
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides VMs running the backend servers of an external HTTP load balancer. For this lab, backends serve their own hostnames.
Create the load balancer template with the following command:
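A minimal sketch, assuming a template named lb-backend-template, a Debian image, and a startup script that serves each VM's own hostname (to match "backends serve their own hostnames" above); the template name, machine type, image, and startup script are all assumptions:

gcloud compute instance-templates create lb-backend-template \
    --machine-type e2-medium \
    --network default \
    --subnet default \
    --tags allow-health-check \
    --image-family debian-11 \
    --image-project debian-cloud \
    --metadata startup-script='#!/bin/bash
      apt-get update
      apt-get install -y apache2
      # Serve the VM hostname so we can see which backend answered
      vm_hostname="$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" > /var/www/html/index.html
      systemctl restart apache2'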
This is an ingress rule that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This lab uses the target tag allow-health-check to identify the VMs.
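A hedged example of such a firewall rule; the rule name fw-allow-health-check is an assumption, while the source ranges and target tag come from the description above:

# Allow TCP 80 ingress from Google Cloud health-checking ranges
# to VMs tagged allow-health-check
gcloud compute firewall-rules create fw-allow-health-check \
    --network default \
    --action allow \
    --direction ingress \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags allow-health-check \
    --rules tcp:80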
Visiting the static public IP we took note of earlier returns a page showing which backend instance served the request.
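If you need to look that address up again, something like the following should work, assuming the reserved global address was named lb-ipv4-1 (the name is an assumption):

# Print just the reserved global IP address
gcloud compute addresses describe lb-ipv4-1 \
    --format="get(address)" \
    --global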
Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including autoscaling, autohealing, regional (multiple-zone) deployment, and automatic updating.
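As a sketch, the managed instance group for this lab could be created from the template above; the group name lb-backend-group and the size of 2 are assumptions:

# Create a MIG of two identical VMs from the backend template
gcloud compute instance-groups managed create lb-backend-group \
    --template lb-backend-template \
    --size 2 \
    --zone us-central1-a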
Create a URL map to route the incoming requests to the default backend service:
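A minimal sketch, assuming a URL map named web-map and a backend service named web-backend-service (both names are assumptions):

# Send all incoming requests to the default backend service
gcloud compute url-maps create web-map \
    --default-service web-backend-service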