
Google Professional Data Engineer – Managed Instance Groups and Load Balancing part 2

  1. Forwarding Rules, Target Proxy, and URL Maps

In the last lecture, we saw a quick overview of all the components that make up an HTTP(S) load balancer. In this lecture and in subsequent lectures, we look at each of these components in some detail. We’ll start off by looking at the global forwarding rule. These are rules that you configure on the Google Cloud Platform in order to determine where particular traffic will be sent. Every forwarding rule matches a particular IP address and protocol, and optionally, you can choose to include the port as well. A forwarding rule directs matching traffic to a single target pool or target instance. A global forwarding rule typically directs this traffic to a load balancing proxy.

Global forwarding rules are so called because they can direct traffic to VM instances or proxies located in multiple regions. Global forwarding rules are used only with global load balancing, such as HTTP(S) load balancing, SSL proxy load balancing, and TCP proxy load balancing. All the other load balancers that the Cloud Platform has to offer are regional load balancers. GCP also offers regional forwarding rules, which are used to match traffic and distribute it amongst instances and target pools within a single region. These can be used with regional load balancing as well as with individual instances. Regional load balancing can be network load balancing or internal load balancing.
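
As a concrete illustration, here is a minimal sketch of creating a global forwarding rule with the google-cloud-compute Python client library. The project ID, rule name, and target proxy name are all hypothetical placeholders, and the exact fields you set will depend on your setup.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# A global forwarding rule: match traffic on port 80 and hand it
# to the target HTTP proxy of the load balancer.
rule = compute_v1.ForwardingRule(
    name="web-forwarding-rule",
    load_balancing_scheme="EXTERNAL",   # global external load balancing
    port_range="80-80",                 # optionally restrict by port
    target=f"projects/{project}/global/targetHttpProxies/web-proxy",
)

operation = compute_v1.GlobalForwardingRulesClient().insert(
    project=project, forwarding_rule_resource=rule
)
operation.result()  # block until the rule is created
```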

We’ve understood the role that global forwarding rules play within the HTTP(S) load balancer. Let’s move on and look at the target proxy. The forwarding rule has directed traffic to our target HTTPS or HTTP proxy. This target proxy is configured to receive the traffic that is forwarded by the global forwarding rule. Once URL requests come in, the target proxy uses the URL map that we’ve preconfigured to determine where the traffic for that URL should be sent. This is the case for HTTP and HTTPS target proxies. In the case of TCP or SSL target proxies, the traffic is routed directly to the back end service; there is no URL map.

The target proxy that you configure will differ based on the kind of load balancing that you set up. HTTPS load balancing requires a target HTTPS proxy; there are also target SSL proxies, target TCP proxies, and so on. The target proxy changes based on the protocol that you use. If the target proxy receives plain HTTP connections, it doesn’t need to do anything special, but if it receives encrypted HTTPS connections, then you need to install an SSL certificate on the target proxy so that it can terminate those HTTPS connections. A target proxy can have up to ten SSL certificates installed. The incoming HTTP or HTTPS connection is terminated at the target proxy, and the target proxy establishes new connections to the back end services. These new connections can be either HTTP or HTTPS connections, preferably HTTPS so that your traffic stays encrypted. In the case of HTTPS connections that terminate at a VM instance, the VM instance also has to have an SSL certificate installed.
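
Here is a minimal sketch, again with the Python client, of creating a target HTTPS proxy that terminates TLS with one installed certificate and consults a URL map. The certificate and URL map names are hypothetical and assumed to exist already.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# Target HTTPS proxy: terminates incoming HTTPS connections using the
# attached certificate, then consults the URL map to pick a back end.
proxy = compute_v1.TargetHttpsProxy(
    name="web-https-proxy",
    url_map=f"projects/{project}/global/urlMaps/web-map",
    # Up to ten certificates can be attached; one is shown here.
    ssl_certificates=[f"projects/{project}/global/sslCertificates/web-cert"],
)

compute_v1.TargetHttpsProxiesClient().insert(
    project=project, target_https_proxy_resource=proxy
).result()
```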

We’ll now move on to look at one of the more interesting components of the load balancer: the URL map. The URL map is something that you configure to ensure that each incoming URL is mapped to the right back end service. The URL map is simply a mapping that you, as the administrator of this load balancer, set up. You map incoming URL requests to the back end service that is capable of handling those requests, and the load balancer then uses this mapping to direct traffic to different instances.

For example, if you have a host called example.com which serves both audio and video content, you might have your audio content located at the forward slash audio path and your video content at forward slash video. The application which serves your audio content might be located on back end service one, so the audio URL should map to back end service one and that’s where its traffic is routed. Similarly, you might host your video content on back end service two, so all incoming requests for video content should be directed to back end service two. We’ll start off by looking at the simplest possible URL map, where no rules have been configured and no mapping has been set up.

Here, the target proxy simply looks at the URL map and all traffic is sent to the same group of instances. This is the default setting for the URL map: when you haven’t configured any additional host names or rules, the URL map automatically creates the forward slash asterisk (/*) path matcher for your URLs and directs all traffic to the same back end service, so it does not distinguish between traffic based on their URLs.
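
A URL map in this default form amounts to nothing more than a name and a default back end service. A hypothetical sketch with the Python client, assuming the back end service already exists:

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# Simplest possible URL map: no host rules, no path matchers.
# The implicit /* matcher sends every request to the default service.
url_map = compute_v1.UrlMap(
    name="web-map",
    default_service=(
        f"projects/{project}/global/backendServices/web-backend-service"
    ),
)

compute_v1.UrlMapsClient().insert(
    project=project, url_map_resource=url_map
).result()
```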

A typical URL map, where you actually want to split traffic across back ends based on what the URL is, will look somewhat like what you see on screen. The target proxy forwards the traffic to the URL map. The URL map is configured with host rules, path matchers, and path rules, and based on these, the traffic is forwarded to the appropriate back end service. Host rules reference the domain of the URL; these can be example.com, customer.com, or whatever your domain is. Requests to different domains will be forwarded on to different back end services. The path matcher and the additional path rules that you can configure specify which back end services the different URL paths map to. You can have forward slash video map to one back end, and you can even be more specific and have HD video map to one back end and SD video map to another. The default path matcher is the forward slash asterisk path matcher that we saw earlier.

This is created automatically within a URL map, and traffic which does not match any of the other path rules is sent to this default service, so it acts as a kind of catch-all back end service. Here is a block diagram of a URL map that has some host rules specified in order to direct the traffic. The host rule applies to the example.com domain, and all URL requests to this domain will be directed to the back end service represented by the component in the top row. If a request comes into this load balancer for any other host, it is sent to the no-match, or catch-all, service, which is the www back end service.

At the bottom here is the block diagram for a URL map that has been configured with path rules but no specific host rule, so there is no domain matching in the URL. We only have path rules configured for video traffic: there are explicit path rules for forward slash video slash HD and forward slash video slash SD. We first set up a path matcher for the forward slash video path, indicating that video traffic can be directed to a different set of back end services. In addition, we specify two path rules, one for HD video and another for SD video. These rules indicate that traffic to these URLs is directed to a different back end service.

There is one back end service that takes requests for SD videos and a different back end service for HD videos. If the URL path matches forward slash video but it isn’t for SD or HD video, then we direct it to yet another back end service. Notice that no specific host rule is set, so any URL, for any domain, which does not match forward slash video will be directed to the back end service represented by the blocks at the very bottom. This is the www back end service, the default back end service used when no path rule matches.
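
Pulling these pieces together, a URL map like the one in the diagram might be sketched as follows with the Python client. All back end service names are hypothetical, and the exact paths you match on (for example whether you include the trailing /* variants) depend on your application.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID
svc = f"projects/{project}/global/backendServices"

# Path matcher for video traffic: HD and SD each get their own
# back end service, and anything else under /video falls through
# to the general video service.
video_matcher = compute_v1.PathMatcher(
    name="video-paths",
    default_service=f"{svc}/video-backend-service",
    path_rules=[
        compute_v1.PathRule(paths=["/video/hd", "/video/hd/*"],
                            service=f"{svc}/video-hd-backend-service"),
        compute_v1.PathRule(paths=["/video/sd", "/video/sd/*"],
                            service=f"{svc}/video-sd-backend-service"),
    ],
)

url_map = compute_v1.UrlMap(
    name="video-map",
    # Catch-all ("www") service for anything no rule matches.
    default_service=f"{svc}/www-backend-service",
    host_rules=[
        compute_v1.HostRule(hosts=["example.com"], path_matcher="video-paths"),
    ],
    path_matchers=[video_matcher],
)

compute_v1.UrlMapsClient().insert(
    project=project, url_map_resource=url_map
).result()
```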

  2. Backend Service and Backends

We’ve seen the big picture of how HTTP(S) load balancing works, and we’ve also studied the global forwarding rule, the target proxy, and the URL map in detail. Now we’ll look at how the back end service functions and see the various components that make up back end services. The back end service is a centralized service which manages the various back ends that lie behind the load balancer. You can imagine a back end to be a logical group that knows how to handle a particular kind of traffic. You might have a back end which handles, say, video traffic, another for audio traffic, another for static content, and so on. A back end service manages a number of these back ends. A single back end is made up of one or more instance groups.

These can be managed instance groups, which allow auto scaling, or unmanaged instance groups, which do not allow auto scaling or rolling updates. These instance groups contain the machines which actually handle user requests. The back end service knows which back ends it can direct traffic to, it knows which instances it can use, and it knows how much traffic each of these instances can handle; it has an idea of the CPU utilization or the number of requests per second per instance. The back end service is also responsible for monitoring the health of the various back ends that it manages, and it only sends traffic to healthy instances. The back end service is made up of four basic components.

The first of these is the health check, which is used to determine which of the back ends are healthy and can receive traffic. The health check polls the instances periodically to determine which ones are up and running and ready to receive requests. In addition to the health check, the back end service has the actual back ends, which are just instance groups, either managed or unmanaged, made up of a number of VMs capable of receiving requests. If the instance group is a managed instance group, then it can be automatically scaled. The back end service is also responsible for session affinity. Session affinity attempts to make sure that all requests made by a particular client are sent to the same VM and are fulfilled by the same virtual machine.

Back end services use either the IP address of the client or a cookie to determine session affinity. You can also configure a timeout on the back end service: this timeout is the length of time that the back end service will wait for a particular back end to respond. Let’s take a slightly more detailed look at the various kinds of health checks that you can configure on the back end service. You can configure health checks of three types: HTTP(S) health checks, SSL health checks, and TCP health checks. As mentioned before, the health checks that you configure for HTTP(S) have the highest fidelity, because not only do they verify that the instance is healthy, they also check that the web server is up and running and serving traffic. You’ll use the non-HTTP health checks, such as the SSL and TCP health checks, when the connection to the server is not HTTP; if there is a TCP or an SSL connection, you will configure the corresponding health check. The health checker is associated with the back end service, and Google creates multiple redundant copies of this health checker to ensure that it is always up. When you look at the health check requests that are sent to your back ends, you’ll find that they occur more frequently than you would expect, because the redundant copies of your health checker are each polling the instances periodically.
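
To tie the four components together, here is a hedged sketch of a back end service definition using the Python client. It assumes a health check and a managed instance group that already exist under hypothetical names.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

backend_service = compute_v1.BackendService(
    name="web-backend-service",
    protocol="HTTP",
    timeout_sec=30,                       # how long to wait for a back end to respond
    session_affinity="GENERATED_COOKIE",  # or "CLIENT_IP", or "NONE"
    # Health check used to decide which back ends may receive traffic.
    health_checks=[f"projects/{project}/global/healthChecks/web-health-check"],
    # The back ends themselves: one managed instance group in this sketch.
    backends=[
        compute_v1.Backend(
            group=f"projects/{project}/zones/us-central1-a/instanceGroups/web-mig"
        )
    ],
)

compute_v1.BackendServicesClient().insert(
    project=project, backend_service_resource=backend_service
).result()
```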

Let’s now look at another component of the back end service, session affinity, in a little more detail. Session affinity can be achieved for your back end in one of two ways. The first is by using the client IP address: the back end service will create a hash from the client IP address to ensure that requests from the same IP are always sent to the same virtual machine. Using the IP address, though, is not a great way to set up session affinity, for a number of reasons. It’s possible that many clients of your service are behind some kind of proxy, which makes it appear that all client requests originate from the same IP address, even though their individual IPs are different. By the time the requests reach the load balancer, they will appear to come from the same IP address, which means you’ll probably heavily load a few VM instances.

It’s also possible that the users are on mobile devices, or that they frequently change networks because of how their routing is set up. As the users move from network to network, the client IP will look different each time, which means it will hash to a different VM instance and session affinity will be lost. A better way to ensure session affinity for your back ends is by using cookies. When a request from a client is seen for the very first time, a cookie named GCLB is issued; this cookie is issued only for the first request. Every subsequent request that the client makes will include this cookie, and based on the value of this cookie, the request is routed to the same VM instance as before.
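
As a small illustration, session affinity can be switched on for an existing back end service. The sketch below fetches the service, sets cookie-based affinity, and writes it back; the service name is hypothetical and the read-modify-write pattern is just one reasonable way to do it.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID
client = compute_v1.BackendServicesClient()

# Read the existing back end service, switch affinity to cookie-based,
# and write the whole resource back.
service = client.get(project=project, backend_service="web-backend-service")
service.session_affinity = "GENERATED_COOKIE"  # "CLIENT_IP" hashes the client address instead

client.update(
    project=project,
    backend_service="web-backend-service",
    backend_service_resource=service,
).result()
```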

Now that we’ve understood the back end service, let’s take a look at one of its components, the actual back ends, and the role they play within our load balancer. A back end is basically another term for an instance group. This instance group can be a managed or an unmanaged instance group. A managed instance group is made up of similar VM instances which have been created from a template, and it allows auto scaling. An unmanaged instance group is made up of dissimilar VMs, does not use a template, and is typically not recommended.
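
For context, a back end like this is usually a managed instance group created from an instance template. A hypothetical sketch of creating one with the Python client (the template is assumed to exist already):

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID
zone = "us-central1-a"

# Managed instance group: similar VMs stamped out from one template,
# which is what makes auto scaling and rolling updates possible.
mig = compute_v1.InstanceGroupManager(
    name="web-mig",
    base_instance_name="web",
    instance_template=f"projects/{project}/global/instanceTemplates/web-template",
    target_size=3,
)

compute_v1.InstanceGroupManagersClient().insert(
    project=project, zone=zone, instance_group_manager_resource=mig
).result()
```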

Back ends also have a configuration setting known as the balancing mode. The balancing mode is the metric that is used to determine when the back end is at full usage. This is what the load balancer will use to determine how much capacity a particular back end has, in order to decide whether to direct traffic to it or not. The balancing mode is a metric that scales linearly with traffic and that you can bring down by adding more virtual machine instances to your back end. The balancing mode that a back end uses is typically either CPU utilization or requests per second per instance. In addition to the balancing mode, back ends also have a capacity setting. The capacity setting is expressed as a percentage, and it is a fraction of the balancing mode target that determines the capacity at which the back end actually serves. Let’s take an example.

Let’s say the balancing mode is CPU utilization at 75%, which means that for a particular back end, we consider it at full usage when the average CPU utilization reaches 75%. With this balancing mode, let’s say we set the capacity setting to 66%. That means that at any point in time we want the back end to run at two thirds of the balancing mode utilization that we’ve set up, which works out to roughly 50% CPU utilization when the balancing mode target is 75%.
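
Expressed in configuration, those two settings correspond to the balancing mode and the capacity scaler on a back end. A hypothetical sketch:

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# A back end allowed to run at two thirds of its 75% CPU target,
# i.e. roughly 50% average CPU utilization.
backend = compute_v1.Backend(
    group=f"projects/{project}/zones/us-central1-a/instanceGroups/web-mig",
    balancing_mode="UTILIZATION",  # the other common choice is "RATE" (requests/sec)
    max_utilization=0.75,          # the balancing mode target: 75% average CPU
    capacity_scaler=0.66,          # serve at 66% of that target
)

# The backend would then be appended to the `backends` list of a back end
# service, for example the web-backend-service sketched earlier.
```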

On the Google Cloud Platform, back ends can be VM instances that are part of an instance group, but back ends can also be back end buckets, that is, Cloud Storage buckets capable of serving static content. Setting up back end buckets allows you to use Cloud Storage buckets with HTTPS load balancing. The traffic is directed to the bucket instead of to a particular back end which hosts a service. This is very useful in load balancing when you have dynamic content, which can be directed to a back end service running a web server, alongside static content, for which you can direct the request to a Cloud Storage bucket. Here is a simple architecture diagram of how you would configure back end buckets to be used with HTTPS load balancing. These are the customers, or clients, of your application. They can be on mobile phones or on laptops, and they are distributed across the world. The requests that they send to your service are first sent to a cloud load balancer that you’ve configured.

This cloud load balancer is configured with Stackdriver monitoring and logging, and it has back ends as well as back end buckets across which it distributes traffic. There are two back ends here, one in the US Central region and another in Asia. Both of these back ends hold the front end instances of your application; yes, the terminology is a little confusing, but hopefully you understood what I meant. The load balancer is also configured to point to a Cloud Storage bucket in the US East region, so any static content, let’s say anything under the forward slash static path, can be sent to the storage bucket, and all other requests will be forwarded to the other back ends.
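
To close the loop, here is a hedged sketch of creating a back end bucket for that static content and of a path rule that points forward slash static at it. The Cloud Storage bucket and resource names are hypothetical.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# A back end bucket wraps an existing Cloud Storage bucket so the
# load balancer can serve static content directly from it.
backend_bucket = compute_v1.BackendBucket(
    name="static-assets",
    bucket_name="my-static-assets-bucket",  # hypothetical Cloud Storage bucket
)

compute_v1.BackendBucketsClient().insert(
    project=project, backend_bucket_resource=backend_bucket
).result()

# Inside the URL map, a path rule can then send /static traffic to the
# bucket while everything else continues to go to the regular back ends.
static_rule = compute_v1.PathRule(
    paths=["/static", "/static/*"],
    service=f"projects/{project}/global/backendBuckets/static-assets",
)
```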