Practice Exams:

Google Professional Cloud Network Engineer – Configuring Network Services: Load Balancer, CDN, DNS

  1. 3.1 Configuring load balancing

Load balancing, high availability, and auto scaling. You can think of this as a network service, but it comes up alongside Compute, because most of the services we will use as backends come from Compute; I can't fully separate the two, since other services like containers or App Engine have their own internal load balancers. Purely in the context of Compute, this is the plainest way to learn load balancing, in terms of high availability as well as auto scaling. In your syllabus this is listed as a network service, but I'm teaching it here because I find it most appropriate at the intersection of Compute and networking, and this is the right time for you to understand load balancers. Let's go ahead and get started. What is a load balancer? A load balancer takes requests from customers or users and distributes them to the backend services. That's the load balancer.

So wherever the user is, the load balancer takes the request and distributes the traffic. How is the traffic distributed? Google has its own way of doing it: the external, or HTTP(S), load balancer is an edge device that can route your traffic based on where the customer is located. The external load balancer faces traffic from outside your GCP environment, while the internal load balancer sits inside GCP, between your different services. That's the difference between external and internal load balancers. Under external load balancers we have the HTTP(S) load balancer and the SSL proxy load balancer; the regional load balancer that is external to GCP is also considered an external load balancer, and it is called the network load balancer.

Let's go over the benefits of the load balancer. You can scale your application, because as and when it is required you can spin up more instances and the traffic is routed to the new instances. It handles heavy traffic, and it detects and automatically removes unhealthy virtual machine instances; this behavior is very specific to the GCP world, and I won't guarantee that the same concept applies on AWS or other platforms. Instances that become healthy again are automatically re-added. The health check configuration is used to determine the health of the instances. It also reroutes traffic to the closest virtual machine, and that is the intelligent routing we will talk about very shortly, before we get into the details of the load balancer.

First we need to understand the terminology we have already learned. An instance template is a global resource, and the template defines the machine type, image, zone, and other instance properties for the instances in a managed instance group. Okay, what is an instance group? Using an instance group you can define auto scaling policies, and we have two kinds: managed and unmanaged. A managed instance group points to instances created from the same template, so all the instances the group manages are identical in nature. It can be used for auto scaling, you can do rolling updates, and a change to the template is applied to all instances because everything is identical. Load balancing is possible for all those instances.

An unmanaged instance group holds non-identical instances, as we saw already. You cannot have auto scaling, because the group doesn't know which instance to spin up when the load increases, and rolling updates are not possible. The advantage is that you can make arbitrary changes to any instance; you are just managing a group of instances together. Now, load balancers: typically a request arrives via DNS, or from anywhere, and gets routed to multiple backend services. Whether a service resides in one particular region or multiple regions is not really a problem, because in Google's context this load balancer is a software-defined, global device. You can route the traffic to backends in plain round-robin fashion, or based on the CPU usage of the backend instances.
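The two distribution styles just mentioned, plain round robin versus usage-based, can be sketched in a few lines of Python. This is a simulation of the idea only, not GCP's actual implementation, and the backend names are made up:

```python
from itertools import cycle

# Hypothetical backend instances behind the load balancer.
backends = ["asia-vm-1", "europe-vm-1", "uswest-vm-1"]

def round_robin(backends):
    """Yield backends in strict rotation, one per incoming request."""
    return cycle(backends)

def least_utilized(cpu_by_backend):
    """Usage-based choice: pick the backend with the lowest CPU reading."""
    return min(cpu_by_backend, key=cpu_by_backend.get)

rr = round_robin(backends)
# Six requests in round robin spread evenly: each backend serves two.
served = [next(rr) for _ in range(6)]
```

With `least_utilized({"asia-vm-1": 0.9, "europe-vm-1": 0.2})` the request would instead go to `europe-vm-1`, which is the usage-based behavior described above.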

But at the same time, you can route traffic based on the type of request. As an example, a static request can go to one set of instances, a PHP request to PHP-related instances, and a video request to some other set of backend instances. You can do this with the load balancer. We have already seen how the load balancer works, how the traffic gets routed via the cloud load balancer into the backend, and how the traffic gets routed to the backend nearest to the user. I'll just run through this one very quickly, within a minute I hope. DNS has a connection with the load balancer, and that's where the IPs are configured.

When a request is initiated from the central location, the load balancer understands that the request is from central and routes the traffic to the central backend instances, whether compute or any other backend. If the traffic is from the east, it gets routed to the east; if from the west, to the west. Now consider a case where a considerable number of users are logged in from the west, the number of requests keeps increasing, and the CPU utilization on those boxes goes above 80%. That's where the autoscaler kicks in and tries to create additional instances for those services.

But at the same time, while auto scaling is being triggered, because usage is at, say, the 70 or 80% threshold configured for the autoscaler, some of the requests will be transferred to the central backend instances, just to make sure the requests are served within the expected latency. We are going to see this in the actual demo. Once we have the additional backends created in the west, the load balancer will establish connections and start sending traffic to them.
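The autoscaler behavior described here follows a target-tracking idea: size the fleet so average utilization comes back down to the target. A minimal sketch of that rule, assuming a simple proportional formula (the real managed instance group autoscaler has more machinery, cool-downs and stabilization included):

```python
import math

def desired_instances(current, avg_utilization, target=0.6, max_instances=10):
    """Scale so average CPU returns to the target.

    If 1 instance is at 80% with a 60% target, we need
    ceil(1 * 0.8 / 0.6) = 2 instances. Clamped to [1, max_instances].
    """
    desired = math.ceil(current * avg_utilization / target)
    return max(1, min(desired, max_instances))
```

So one instance running hot at 80% triggers a second instance, while a quiet fleet shrinks back toward the minimum of one.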

Now what happens if the central instances just disappear? In that case there is no one to serve the traffic from the central backend. The load balancer understands that there is no one to serve this traffic from central, and it routes the requests to the next nearest location. And it's not only location: on top of that, it calculates how many requests are being served by each backend instance, plus the CPU utilization and other parameters, based on your configuration again, and routes your traffic to the next available backend that has the capacity to serve it. That is intelligence built into the load balancer, so it is taken care of and you don't experience any issues as such.

From the customer's point of view there are different types of load balancers. I'll just go to the console and show you, so we can get started there. If I look under networking services... I don't find it. Okay, maybe it's in Compute Engine... I don't see it there either. Where is that? Oh my God, I need to find it. Right, Network Services, that's where it is: Load Balancing. If you go to Create Load Balancer you are presented with this UI. The first option is HTTP(S) load balancing. The second is TCP load balancing, which is used for SSL proxy and TCP proxy; it can be internet-facing as well as internal, with single or multiple regions. HTTP(S) load balancing is internet-facing only, and it can have a single region or multiple regions as the backend. UDP load balancing is internet-facing or internal, single region only; you need to note that "single region" is very important here.

At a high level there are global load balancers and regional load balancers. Under global you have HTTP(S) load balancing, SSL proxy load balancing, and TCP proxy load balancing, and under regional you have regional external and regional internal load balancing. The regional external one is called the network load balancer, and the regional internal one is just called the internal load balancer. There is a decision tree to choose which load balancer to use. The simple rule of thumb: if your traffic is HTTP or HTTPS, just go ahead and use the HTTP(S) load balancer, whether global or regional. If your traffic is TCP and you do need SSL offload, use the SSL proxy load balancer.

If you do not need SSL offload, that's where you can use the TCP proxy or the network load balancer. And again, for the regional case with internal IPv4 clients, it is just the internal load balancer; whatever internal traffic you have, you use the internal load balancer. Load balancer ports: different ports are supported by different load balancers, because of the protocols they use. As an example, for HTTP it will use 80 or 8080; for HTTPS it is the very standard port 443; for the SSL proxy (TCP with SSL offload) you have a set of port options provided by GCP. I don't think you need to remember these numbers for your exam; you just need to understand that different ports are available for different load balancers. Next is the internal architecture. I'm not sure whether you are bothered about it, but I just wanted to give you high-level thoughts around it. When a request comes in from anywhere, DNS or the internet, the first thing that kicks in inside the load balancer is the global forwarding rule, and that picks up the request.
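The decision tree above can be written down as code, which some students find easier to memorize. This is a study aid reflecting the lecture's rule of thumb, not an official API:

```python
def choose_load_balancer(protocol, ssl_offload=False, internal=False):
    """Return the GCP load balancer type suggested by the decision tree."""
    if internal:
        # Regional internal traffic always goes to the internal LB.
        return "Internal load balancer"
    if protocol in ("http", "https"):
        return "HTTP(S) load balancer"
    if protocol == "tcp":
        # TCP with SSL offload -> SSL proxy; without -> TCP proxy / network LB.
        if ssl_offload:
            return "SSL proxy load balancer"
        return "TCP proxy or network load balancer"
    if protocol == "udp":
        return "Network load balancer"
    raise ValueError("unknown protocol: " + protocol)
```

For example, `choose_load_balancer("tcp", ssl_offload=True)` returns the SSL proxy option, matching the rule just stated.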

The forwarding rule hands the request to the target proxy, which consults the URL map to decide which backend service to hit, and then the request hits that backend service. For the backend service to understand whether an instance is healthy or not, it uses the health check, and that information is available internally; you do not need to worry about it. In the backend service you configure the actual endpoints, and in the health check you configure parameters like how often you want to run the health check and which path to check. Let me go here and just click on Start Configuration. What does it ask? It asks for the name of the load balancer: GCP train C LB. Okay, Create... actually I have not created it yet; I still need to give the configuration.

What is the backend I have to give? One option is a backend service; the second is a bucket, to retrieve content from storage. I create a backend service and name it GCP train C backend. So when the load balancer looks at your request, where should it get routed: an instance group or a network endpoint group? I can configure that here. A network endpoint group means any set of endpoints that can take your request, and it could be anything. I'm just choosing instance group here and adding the backend service; I do not have any instance group right now, so I'll just cancel it. What additional information does it need? Host and path rules: these determine how your traffic will be directed, and you can direct traffic to a backend service or a storage bucket. Any traffic not explicitly matched by a host and path rule is sent to the default backend. For example, for gcptrainc.com, traffic for images could be routed to a Cloud Storage bucket I have configured, and PHP traffic could be routed to virtual machines. Then the frontend: the frontend IP address, port, and protocol. This address is the frontend IP for your client requests; this is what they will hit. The protocol is HTTP or HTTPS, because we are talking about the HTTP(S) load balancer here, and then there is the network service tier, whether it is Premium (the current project-level tier) or Standard.
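The host-and-path routing just described behaves like an ordered rule table with a default fallthrough. Here is a toy version of such a URL map; the hosts, prefixes, and backend names are illustrative, not from a real project:

```python
# (host, path-prefix, backend) rules, checked in order; anything
# unmatched falls through to the default backend service.
URL_MAP = [
    ("gcptrainc.com", "/images/", "storage-bucket-backend"),
    ("gcptrainc.com", "/php/",    "php-instance-group"),
]
DEFAULT_BACKEND = "default-backend-service"

def route(host, path):
    """Return the backend a request for host+path would be sent to."""
    for rule_host, prefix, backend in URL_MAP:
        if host == rule_host and path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND
```

So an image request goes to the bucket backend, while an unmatched page falls back to the default backend, which is exactly the behavior of the host and path rules in the console.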

To understand the difference: with Premium, traffic traverses the high-quality Google backbone, entering and exiting at the Google edge peering point closest to the user. Standard means traffic is routed closest to the region: it enters and exits Google's network at the peering point closest to the cloud region it is destined for or originated from. So with Standard, traffic can even take the public internet for the longer distance. Premium is the default we are using right now; I'm not sure how this affects your pricing, because this option was not available earlier. Then IPv4 or IPv6, whichever traffic you want to serve. I'll just put port 80 there, and that's it. I can name it Http LB, and you can add additional IPs and ports; I'll just save this one, and we can add more later. I do not have the complete configuration here; I just wanted to give you the high-level idea. Let's go ahead and look at the HTTP load balancer in detail in the next lecture.

  1. 3.1 Configuring load balancing-HTTP LB

Let's go ahead and get deep into the HTTP load balancer. It is the global external load balancer type. Usually the HTTP load balancer, like any load balancer, uses a load distribution algorithm to route traffic to the backend instances. In this particular case, the HTTP load balancer uses requests per second or CPU utilization of the target backend instances, and you can use the balancing mode parameter to provide this information. Session affinity: think of one particular customer logged into a website; the backend service is stateless, but it is caching that customer's data. So what you want is to always send the requests from that particular customer back to the same backend service.

I would say the same backend instance, and not multiple backend instances. So there is session affinity, which you can configure and use. Session affinity sends all the requests from the same client to the same virtual machine instance, as long as the instance stays healthy and has capacity to serve the traffic. GCP HTTP(S) load balancing offers two types of session affinity: client IP affinity and generated cookie affinity. One routes the traffic based on the client IP, and the second based on a cookie the load balancer sets. WebSockets: the HTTP(S) load balancer has native support for the WebSocket protocol. Backends that use WebSocket communication with the client can use the HTTP load balancer as a frontend for scale and availability; if your service requires longer-lived connections, increase the timeout. This is additional configuration to think of if at all your backend supports WebSockets. With that, here are the considerations for illegal request handling: how does the HTTP load balancer handle illegal requests?
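Client IP affinity can be pictured as hashing the client address to a stable backend index, so the same client keeps landing on the same instance while it stays healthy. A minimal sketch of that idea; this is not Google's actual hashing scheme, just an illustration:

```python
import hashlib

def pick_backend(client_ip, backends):
    """Map a client IP to a stable backend choice via hashing."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]
```

Calling `pick_backend("203.0.113.7", [...])` twice returns the same instance both times, which is the affinity property; a different client IP may hash to a different instance, spreading the overall load.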

Here are some of the thoughts around it. The load balancer will read a request as illegal if: it cannot parse the first line of the request; a header is missing or the first line contains invalid characters; the Content-Length is not valid, or there are multiple Content-Length headers; there are multiple transfer-encoding keys, or there are unrecognized transfer-encoding values; there is a non-chunked body with no Content-Length specified; or body chunks are unparsable. In all of these situations the HTTP load balancer treats the request itself as illegal. This will come to you as a cloud engineer when you look at why requests are not getting processed by the HTTP load balancer, so these are the scenarios you want to check. Along with all of these, the load balancer also blocks a request when the combination of request URL and headers is longer than 15 KB, when the request method does not allow a body but the request has one, and when the request contains an Upgrade header.

...or when the HTTP version is unknown. In all these cases the HTTP load balancer will block the request. Timeouts and retries: the HTTP load balancer has a default response timeout of 30 seconds and a TCP session timeout of 10 minutes (600 seconds) by default, and it retries all failed GET requests but does not retry POST requests. You can think of it like this: a GET request is usually a read, and a POST is submitting or making a change on the backend, so if that particular POST request failed, it will not be retried. Logging: you can use Stackdriver Logging for logging your connections, and you can use flow logs for your network communications if at all that is required. If your load balanced instances are running a public operating system image supplied by Compute Engine, then the firewall rules in the operating system will be configured automatically to allow load balancer traffic, that is, traffic from the load balancer. But consider this case: if you are using your own custom VM image, you have to configure the operating system firewall manually, because that is maintained by you and not by GCP.
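To make the illegal-request rules concrete, here is a small validator covering a few of the checks listed above. It is a simplified illustration of the kind of logic involved, not the load balancer's full implementation; the header representation (lower-cased name mapped to a list of values) is my own choice:

```python
def find_illegal(headers, method="GET", has_body=False):
    """Return a list of problems that would make the request illegal."""
    problems = []
    cl = headers.get("content-length", [])
    if len(cl) > 1:
        problems.append("multiple content-lengths")
    if cl and not cl[0].isdigit():
        problems.append("content-length not valid")
    te = headers.get("transfer-encoding", [])
    if len(te) > 1:
        problems.append("multiple transfer-encoding keys")
    if any(v not in ("chunked", "identity") for v in te):
        problems.append("unrecognized transfer-encoding value")
    if has_body and not cl and "chunked" not in te:
        problems.append("non-chunked body without content-length")
    if has_body and method in ("GET", "HEAD", "DELETE"):
        problems.append("method does not allow a body")
    if "upgrade" in headers:
        problems.append("upgrade header present")
    return problems
```

A clean POST with a single numeric Content-Length passes; duplicate Content-Length headers or a body on a DELETE would be flagged, which is useful when you are debugging why requests never reach your backends.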

This is separate from the GCP firewall rule that must be created as part of the configuration. So if you look at it, and we are going to see this in detail, the firewall rules we configure in GCP sit outside your virtual machine; they have nothing to do with whatever you configure as a firewall rule inside the virtual machine, and that is why they are independent. Load balancing does not keep the backend instances in sync; if you are using an instance group, it is the instance group's responsibility to keep them in sync. The HTTP load balancer does not support sending an HTTP DELETE with a body to the load balancer; in other words, if at all you are sending an HTTP DELETE, it should not have a body in it. I will go back here and, just for illustration, use this image again. We have seen it many times, but we are going to refer to it in our demo.

What we are going to do: we do not have DNS, but we have Asia, Europe, and US West. We are going to create these virtual machines, with backend services that will support these clients. I'm trying to mimic that someone is making a request from Asia, someone from Europe, and someone from US West. How do I do that, given that I'm located in the US, and on the East Coast at that? How can I make a request to the load balancer as if I'm sitting in US West, Europe, or Asia? The way I'm going to do it is to create a virtual machine in Asia, a virtual machine in Europe, and a virtual machine in US West, make the requests from those, and then we'll see how the load balancer performs. Okay?

And this is the only way I can check it. I could have created these client virtual machines on AWS or another cloud provider, but just for simplicity I'm doing it on GCP. So what we are going to do is create an instance group for Asia, an instance group for Europe, and an instance group for US West, and then configure the load balancer that sends requests to those backends; each client is a virtual machine as well. Okay, let's go ahead and get started. To create an instance group I need an instance template, so I'll go ahead and create the instance templates, starting with the Asia one. Let me share this particular slide again; it elaborates what we are going to create as part of the load balancer demo: an instance template for Asia, one for Europe, one for US West, and instance groups so we can manage auto scaling and multiple instances in the backend. Then we'll configure the load balancer on top of this, with a health check. We are not going to have DNS, but we will have a client in Asia and a client in Europe; for the US I can use my desktop to mimic the traffic, or I can just create a client instance there. We'll see how it goes.

What we are going to see is that once we create the instances, we generate traffic from the different client instances. Okay? Let's do one step at a time: create the instance template and instance group. I'm going to use a startup script that makes CPU utilization go high: if you invoke a particular endpoint via HTTP, the CPU utilization of that instance goes up, and that's how we will be able to demonstrate auto scaling, for US West or for any other instance. So let me jump back to the console; there I have to create the instance template first. I'll just mark this instance template as Asia. For Asia I'm going to create a small instance, so I'll be charged less, and I'm going to click through here to the startup script.

I have this README with all the scripts, ready to be downloaded by you. For Asia I have this particular startup script: it fetches the frontend Asia Python script and executes it. That creates an HTTP endpoint, and whenever you call that HTTP link it burns some CPU. Okay, I'm just going to copy this over. Note that I'm not specifying any zone or region here. So this is the Asia instance template; click Create, and the instance template is created. Now I'll click on the instance template and check that HTTP traffic is allowed, because we want to hit the web server. Then I'm going to copy this instance template to create the Europe one, with the same instance size, but this time I'm going to take the Europe startup script. Okay, create again.

I'll copy the same one over to create the USA instance template and change the startup script; I'm not really worried about the other parameters right now. So the instance templates are created for all three regions, as per our requirements. Now let me create the instance groups. I'll prefix them all with IG, starting with IG Asia. I'm using a simple single-zone location, though I could use multiple zones and just select the region. For Asia I'm going to use the asia-east1 zone and the Asia instance template, and I'm switching on auto scaling based on CPU utilization. This is the target CPU utilization, the point at which you want additional instances to be triggered: we start with one instance, and if that particular instance goes over 60%, it triggers additional instances, up to ten. The cool-down period is 60 seconds.

The cool-down is how you avoid reacting to momentary spikes low and high: you wait 60 seconds while measuring the CPU utilization, take the average, and then it can decide. Next we have the HTTP health check; I can pick an existing one, or I'll just go ahead and create one. Create a new health check: the LB traffic is HTTP on port 80. The check interval is how often to check whether the instance is healthy, and we can wait up to 5 seconds for it to respond. For the healthy threshold, two consecutive successes mark the instance healthy; for the unhealthy threshold, three consecutive failures mark it unhealthy. Save. Okay, that's it for the instance group; let me just check whether I chose the right things, then Create. Now another instance group for Europe, again single zone; this time I'm going to select somewhere around London.
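The healthy/unhealthy thresholds just configured form a small state machine: an instance flips state only after enough consecutive probes agree. A sketch of that logic, assuming the thresholds from the demo (2 consecutive successes to become healthy, 3 consecutive failures to become unhealthy); this models the idea, not GCP's internal implementation:

```python
class HealthState:
    """Track one instance's health against consecutive-probe thresholds."""

    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True   # assume healthy until probes say otherwise
        self.streak = 0       # length of the current same-result run
        self.last_ok = None   # result of the previous probe

    def probe(self, ok):
        """Record one probe result; return current health status."""
        if ok == self.last_ok:
            self.streak += 1
        else:
            self.last_ok = ok
            self.streak = 1
        if ok and self.streak >= self.healthy_threshold:
            self.healthy = True
        if not ok and self.streak >= self.unhealthy_threshold:
            self.healthy = False
        return self.healthy
```

Note how two failures in a row are not enough to mark the instance unhealthy; it takes the third, which is exactly why transient blips don't cause backends to drop out of rotation.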

Somewhere in Europe west is fine. For Europe I use the same load balancer health check; all other settings stay the same, I'm not going to change anything there. Then the third instance group, IG USA, with the same HTTP health check. While each instance group is being created, it spins up the minimum number of instances we asked for, and you can see that reflected here. I also have this one additional instance to mimic the USA client traffic, in us-east1; that is my client instance. So the third instance group is getting created, and while it is creating I can go to Network Services and Load Balancing. Currently this lives under Network Services, but it may be somewhere else when you watch this demo, because Google keeps changing the console. If I click Create Load Balancer it shows me multiple options; I'm going to use the HTTP(S) load balancer.

I'm naming it Http LB demo. For the backend I can give an instance group or a bucket, meaning a Cloud Storage bucket; we are going to see that later. I'm going to create a backend for the load balancer demo and start with Asia. Somehow we created multi-zone in Asia; that's fine. Next is the balancing mode. With Utilization, if the CPU utilization of this backend goes to 80%, the load balancer will not send further requests to it. With Rate, a threshold such as 1000 requests per second makes it decide to go somewhere else. I'm going to use the CPU utilization of the backend instances to make the decision where to transfer traffic, and not measure requests per second, because if at all the utilization is low, then I'm fine with just hitting it. Done.
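The two balancing modes reduce to a simple capacity question: should this backend receive more new requests right now? A minimal sketch of that decision, where `backend` is a dict of hypothetical current readings; real balancing is proportional rather than a hard cut-off, so treat this only as an illustration of the two modes:

```python
def accepts_traffic(backend, mode="UTILIZATION",
                    max_utilization=0.8, max_rate=1000):
    """Decide if a backend is under its configured capacity.

    `backend` carries current readings: 'cpu' in 0..1 and 'rps'
    (requests per second). Only the metric for the chosen mode matters.
    """
    if mode == "UTILIZATION":
        return backend["cpu"] < max_utilization
    if mode == "RATE":
        return backend["rps"] < max_rate
    raise ValueError("unknown balancing mode: " + mode)
```

Notice that a backend at 90% CPU but only 10 requests per second is full under Utilization mode yet has room under Rate mode, which is why choosing the mode matters.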

I need to add the other instance groups as well: Europe, port 80, Utilization mode; I think I'll set the target utilization to 60. Auto scaling is already configured, and then the US. For the backend you can also enable Cloud CDN if at all you want it. I'm going to use the health check we created, then Create. So we have the backend. Next are the host rules, if at all you want them, and this is where we saw it earlier: if you want to send image traffic somewhere, PHP somewhere else, and your other web traffic somewhere else, this is how you configure it. You can mention a host for your GCP train domain, or a path like images, and it will route those to a particular backend service. Since we have just created one backend for this demo, I'll leave the default backend it relates to; I would create another one if at all it were required, but I don't think it is. Next, the frontend.

And this is your HTTP endpoint. I'm going to name it LB demo, the protocol I'm going to use is HTTP, and I'll leave the rest as is. I could have an additional frontend hitting the same backend for this load balancer, but I do not have that requirement right now, so I'm just going to use it as is. So this is, finally, what we are going to create, and I click Create. You need really good patience here, because once the IPs are there you will try to access them and you will not be able to yet. While this is being created, let's go back to the Compute Engine VM instances. We have one instance that mimics our US client (or my desktop); I need another client instance and a third client instance, one in Europe and one in Asia. Let me go ahead and create those as well. For the client in Asia, I need to give an Asia region.

I think we earlier gave Hong Kong; let me give Tokyo, which is nearby, anywhere is fine, and I just need a small instance. I don't need to enable HTTP traffic on it, but I'm doing it anyway for my own sake. So that's the Asia client created. I'm going to create another one in Europe, a client instance in Europe, and I'm going to choose somewhere near London; either I can host this in London, or let me choose the Netherlands, any zone is fine for me, and I'm going to use a micro instance. So the instance is getting created. Let me go ahead and hit the Asia backend and see if it is working. Usually when you click directly on the external IP it goes to https; you need to make sure you take out the https there and hit it directly over http. So we have: OK Asia, OK Europe, and now we check the US. Many students in my earlier course responded that when they click on this they get an error: "connection refused". You need to make sure that you have chosen the right protocol, because the web server we are creating here is HTTP and not HTTPS; we have not enabled SSL on it. Okay, so that's how you'll get it working. I'm just going to close these. Now we have a client in Europe, a client in Asia, and a client in the US. At the same time, if I hit the load balancer from my machine it should go to the US backend, because I am currently in the US. So let's go back to Network Services, Load Balancing. The HTTP load balancer is up and running. When you get this information, check here what the healthy instances are.

So it is not yet complete: if you look at the number of healthy instances, you should have at least one healthy instance here to serve the requests. Okay, let me pause this video until these instances are in a healthy state, and then we'll continue. Now we see the instances: one instance for each of the instance groups we configured as backend services. Let's go ahead and hit this particular IP in our browser. Okay, it says US. All the time it says US; is that wrong? Why is it saying only US West? Have I configured the wrong script everywhere? No, it should not be that. The catch here is that I'm sitting in the US, and that's why it is routing my requests to the US West instances. Going back here: if at all I were sitting in Asia, it would route the traffic to the backend in Asia, but from US East it is getting routed to US West. I have an alternative, though: make requests to this load balancer while mimicking that the client is in Asia.

Okay, so I have the Asia instance; I can just click SSH, and I have created this curl script as well. I'm going to run these curl commands here; I just need to replace the IP address with my load balancer's IP. What is it? 35.186.22.99. Don't forget to put the http and the colon in front. What this will do is hit the load balancer endpoint every 2 seconds; not the backend directly, but the load balancer endpoint. This particular instance is somewhere in Asia, so the requests get routed to the Asia backend. Let me open an SSH terminal on the Europe client and run the same command; we should see the requests routed to Europe. What is that? That is not good; somehow the copy paste did not work well. Let me copy it again... it says Europe. Good. Let me do the same on the last one and execute the same command: this is the client in the US, and it says US West. So everything is normal, and the intelligent routing is working properly. Now what we need to do is somehow create load on US West.
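The curl loop I'm running on each client is equivalent to something like this small poller. The URL here is a placeholder for your own load balancer's frontend IP, and the injectable `fetch` parameter is my addition so the loop can be exercised without a network:

```python
import time
import urllib.request

def poll(url, interval=2.0, count=5, fetch=None):
    """Hit `url` every `interval` seconds, `count` times, collecting replies.

    Mimics: while true; do curl $URL; sleep 2; done
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u).read().decode()
    replies = []
    for i in range(count):
        replies.append(fetch(url))
        if i < count - 1:
            time.sleep(interval)
    return replies

# Example (substitute your load balancer's frontend IP):
# poll("http://<your-lb-ip>/", interval=2.0, count=5)
```

Run from the Asia client, every reply would carry the Asia region label; dropping `interval` to 0.1, as we do next for US West, is how we turn the same loop into a load generator.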

I'm choosing US West just because it's the one in my presentation, but we could choose any of these backends and generate a considerable number of requests against it. So let me go to the US West client and press Ctrl-C. Now what I'm going to do is change the timer to 0.1 seconds. First, let's review how many instances are running right now. This is a client, this is a client, and this is a client; only one instance per instance group is running. Okay, let's go ahead and create some traffic in US West. This is US West, so let's hit it continuously. Go to Monitoring and you can see the traffic going up for these instances; at 60% CPU usage the autoscaler will go ahead and trigger another one.
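The 60% trigger mentioned here is the managed instance group's autoscaling policy. A sketch of how it would be configured with gcloud, using assumed group and region names rather than the exact ones from this demo:

```shell
# Sketch with hypothetical names: autoscale the US West managed instance
# group on CPU, adding instances once average utilization crosses 60%.
gcloud compute instance-groups managed set-autoscaling us-west1-mig \
    --region us-west1 \
    --min-num-replicas 1 \
    --max-num-replicas 3 \
    --target-cpu-utilization 0.60 \
    --cool-down-period 60
```

With one replica minimum per group, this matches what we see in the console: each region starts with a single instance, and new ones appear only once sustained CPU load pushes the group past the target.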

I will pause this particular video, let the CPU usage climb, and then come back. Note that the base endpoint does not burn CPU; I have another endpoint that does. We have already seen that requests are intelligently routed to their respective nearest backend services, so now let me hit the service that does the CPU burning, and I want to use it against the US West region. This endpoint, when hit heavily, drives up CPU usage. If I go to Monitoring for US West, we should see the CPU: I was only able to reach about 25% with that many plain requests, but this endpoint will push the CPU beyond 60%. We just need to stay here for some time. So for US West I keep the heavy loop; for Europe I'm putting a 2-second interval; and for Asia we are also going to run a request every 2 seconds, so we are not going to burn much CPU in Asia. Hold on -- why is the request from Europe getting routed to US West? We can check that; maybe I haven't changed the response text there. Okay, so it's actually US East. Let's burn CPU on the backend and see. Actually, I don't want to burn CPU here; I'll just keep the normal curl command so that we know where it is hitting. So now US West is receiving a considerable number of requests; Europe is on a 2-second interval, which is not much, and we are not burning any CPU in Europe. So this is our setup.
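The heavy loop against the CPU-burning endpoint looks like the earlier polling loop, just with a different path and a 0.1-second interval. Both the IP and the `/burn` path are stand-ins for this demo's values, since the video never spells out the endpoint name:

```shell
#!/usr/bin/env bash
# Placeholders: the load balancer IP and a CPU-heavy path on the demo app.
LB_IP="35.186.22.99"
BURN_PATH="/burn"

# Fire a request every 0.1 s at the CPU-heavy endpoint to push the backend's
# CPU past the 60% autoscaling threshold. max=0 means run until Ctrl-C.
burn() {
  local max="${1:-0}" sent=0
  while [ "$max" -eq 0 ] || [ "$sent" -lt "$max" ]; do
    curl -s "http://${LB_IP}${BURN_PATH}" > /dev/null
    sent=$((sent + 1))
    sleep 0.1
  done
  echo "sent ${sent} requests"
}
```

Running this only from the US West client while the other regions stay on 2-second plain requests reproduces the demo's setup: one region over the threshold, the others idle.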

Let's go ahead and look at Monitoring for US West. You can see that, because I was hitting it with so many requests, the autoscaler has spun up additional instances. For Europe it has also created an additional instance, because I mistakenly sent it a number of requests. Asia has not scaled, though, because we are barely hitting that service at all. Now, what we'll do is kill some of the instances in Asia and see what happens. For Asia you can stop an instance, remove it from the group, or delete it; let us see what happens. So for the moment nothing is working normally: the group should trigger a replacement instance, but in the meantime the traffic has to be sent to some other location. This is Europe; this is Asia.

So it has created additional instances. Delete, delete; let us see what happens. Okay, there are multiple instances getting created for Asia. Should I remove them from the group or delete the instances? Let me just go ahead and delete the instances and see what happens to the requests. In the meantime, you can already see some of the requests going to US West: because we were deleting those instances, whatever single instance remained was under load.
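Deleting backend VMs from the console, as in this step, can also be done from the CLI. A sketch with hypothetical group and instance names (the managed group recreates capacity to match its target size while the load balancer shifts traffic to the remaining healthy backends):

```shell
# Sketch with hypothetical names: delete an instance from the Asia managed
# instance group. The group will recreate a replacement to meet its target
# size; until then, requests fail over to the nearest healthy region.
gcloud compute instance-groups managed delete-instances asia-mig \
    --region asia-east1 \
    --instances asia-instance-1
```

This is the failover behavior observed in the demo: while Asia has no healthy instance, the Asia client's requests are answered by US West.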

So the additional requests went to US West because, if you look at the geography, that was the nearest region. That's it for the load balancer demo. To recap what we did: we created the clients, we burned a lot of CPU in US West, and then we deleted instances. Instead of Europe, as depicted in the animation, we actually deleted the Asia instances, and the traffic went to US West because that was nearest. That's it, guys. If you have any questions on the demo, please let me know; otherwise, let's move on to the load balancer theory.