Practice Exams:

Google Professional Cloud Network Engineer – Designing, Planning, and Prototyping a GCP Network Part 3

  1. 1.3 Designing a hybrid network - Cloud Router

Cloud Router on Google Cloud Platform. Cloud Router helps announce network changes between Google Cloud Platform and your own network or data center, in both directions. That is the purpose of Cloud Router. So if you look at this particular scenario, you have Google Cloud Platform with one particular subnet in one region, and you have a VPC, a Virtual Private Cloud, connected to your premises by some means, like a VPN or peering. And then you have your own subnetworks inside your data center, like Rack 1, Rack 2, and so on up to Rack 29.

So consider a case where you have added Rack 30 here. Now this address space is also available, and you have provisioned servers to use the IP range from this particular rack. How do you inform the GCP side about this network change? Or, put another way, how can the resources in GCP discover these new servers? There is no mechanism for that by itself. That is where Cloud Router comes in: if any network is added on Google Cloud Platform, it announces the change to your on-premises BGP router, and vice versa. If there is any change to the network in your data center, your router announces that information to Cloud Router, so that GCP understands there are additional network resources, physical machines, or database servers available for its resources to use.
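The announce-and-learn behavior described above can be sketched as a toy simulation. This is not the actual Cloud Router API; the router names and prefixes below are made up for illustration, and real BGP carries far more state than this.

```python
import ipaddress

class Router:
    """Toy model of a BGP speaker that advertises its local prefixes to a peer."""
    def __init__(self, name, local_prefixes):
        self.name = name
        self.local = {ipaddress.ip_network(p) for p in local_prefixes}
        self.learned = set()

    def advertise_to(self, peer):
        # Announce every local prefix; the peer installs each as a learned route.
        for prefix in self.local:
            peer.learned.add(prefix)

    def add_subnet(self, prefix, peer):
        # Adding a subnet triggers a fresh advertisement -- this is what
        # Cloud Router automates when a new subnet (or on-prem rack) appears.
        self.local.add(ipaddress.ip_network(prefix))
        self.advertise_to(peer)

cloud = Router("cloud-router", ["10.128.0.0/20"])
onprem = Router("onprem-router", ["192.168.1.0/24", "192.168.2.0/24"])

cloud.advertise_to(onprem)
onprem.advertise_to(cloud)

# "Rack 30" comes online on premises; its route reaches the cloud side automatically.
onprem.add_subnet("192.168.30.0/24", cloud)
print(ipaddress.ip_network("192.168.30.0/24") in cloud.learned)  # True
```

Without the advertisement step, the cloud side would have no route to the new rack, which is exactly the gap Cloud Router fills.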

And that is the ultimate goal of Cloud Router. So typically, Cloud Router provides BGP routing: it dynamically discovers and advertises routes. It supports graceful restart, and it supports ECMP, because it is a software device and not a hardware device, with primary and backup tunnels for failover. It also supports MED and AS path prepending. Some remaining points about Cloud Router: there is one router per region; it peers with your on-premises BGP router; it advertises all subnets of the region; it uses link-local IPs for the BGP session; and it uses a private ASN on GCP with a private or public ASN on premises.

So typically, if you look at the architecture and how it works, take the earlier example where you have resources in two regions in a VPC, with two or three different subnetworks, and you have VPN connectivity established with your premises. On premises you may have some departments with resources: computers, database servers, or application servers. Now, after you establish all these connections and put the route configuration in place, suppose another subnetwork is created, say for a legal department or a customer-facing department, and they need some servers in Google Cloud Platform as well as on premises.

Now, you have created the subnetworks, but who will announce the routes to the data center network? That is taken care of by Cloud Router. It is one per region, so if you have multiple regions, you need a Cloud Router in each of them. This Cloud Router is connected to your on-premises BGP peer, and that is how it exchanges network information between GCP and your data center. Ultimately, it makes sure the routes are in place for all network additions, whether they are on premises or in GCP.
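One feature mentioned earlier, primary and backup tunnels via AS path prepending, relies on BGP preferring the route with the shortest AS path. A minimal sketch of that tie-break (greatly simplified; real BGP best-path selection has many more criteria, and the ASN and tunnel names here are illustrative only):

```python
def best_path(routes):
    """Pick the route with the shortest AS path -- one of BGP's tie-breakers."""
    return min(routes, key=lambda r: len(r["as_path"]))

# The backup tunnel's advertisement is prepended with extra copies of the ASN,
# making its AS path longer and therefore less preferred.
primary = {"next_hop": "tunnel-1", "as_path": [65001]}
backup  = {"next_hop": "tunnel-2", "as_path": [65001, 65001, 65001]}

print(best_path([primary, backup])["next_hop"])  # tunnel-1
```

If tunnel-1's advertisement disappears (the link fails), only the backup route remains and traffic shifts to tunnel-2, which is the failover behavior the lecture describes.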

So, in a nutshell, what is Cloud Router? Cloud Router is a fully distributed and managed cloud service. It scales with your network traffic; it is not a physical device that might become a bottleneck. It is a software-defined networking component. When you extend your on-premises network to Google Cloud Platform, you can use Cloud Router to dynamically exchange routes between your Google Cloud network and your on-premises network. Cloud Router peers with your on-premises VPN gateway or router, and the routers exchange topology information through the BGP protocol, which is a standard protocol.

You don't have to learn it from scratch; it is the standard protocol for exchanging route information between different networks. That's it in a nutshell, guys, on Cloud Router. We are going to see a demo of Cloud Router with VPN, and that is the only demo we will have for hybrid interconnect: VPN, and VPN with Cloud Router. Okay? If you have any questions on Cloud Router, let me know. Otherwise you can move to the next lecture. Thank you.

  1. 1.3 Designing a hybrid network - Cloud Interconnect

Cloud Interconnect. We have two different ways you can connect your data center. One option is peering, where you do not need to expose your on-premises networking to Google Cloud; you use it when you want to access G Suite, the collaboration suite from Google, and you want to reduce the egress fees from Google. Within peering you have two options: direct peering and carrier peering. If your connection point is available at Google's edge, at a peering location,

then you can do direct peering. When you don't have such a connection point, you go with a partner, and that is what is called carrier peering. And this is just to reduce your egress fees. The other way is a dedicated pipe for interconnect, which is a different aspect altogether. You have two different interconnects as well: one is Dedicated Interconnect directly with GCP, and the second one is through a service provider. With interconnect, you are accessing the Google Cloud Platform network from your premises, and you are also making your network available to Google Cloud resources so that they can access it and the two sides can communicate with each other. So this is a dual advantage.

It is not purely in the context of G Suite, as peering is, as we have seen. So let's get into the details of when to use which one. Is Dedicated Interconnect via a third party? No, because it is a direct connection with Google. Does it support RFC 1918? Yes. Does it support encryption, or require a public IP? No; encryption is not supported, and in fact it is not required, because you have a dedicated connection with Google Cloud Platform. Does it carry an SLA? Yes, it has an SLA, and you will have services provided under that SLA. If you are not present in a Google colocation facility, then you go with a partner, and that is where Partner Interconnect comes into play: either you are not present at a Google PoP location, or you may have a requirement of less than a 10 Gbps pipe.

That's where you use a Google partner's network. It is via a third party, it can support RFC 1918, it does not require a public or external IP, and it is definitely under an SLA. So the interconnect options are under an SLA; the peering options are definitely not. Among the peering options, direct peering is not via a third party, whereas carrier (partner) peering is. Can you have RFC 1918 with peering? You can, but only if you run a VPN connection on top of your peering option; that is how you would be able to use RFC 1918 IP ranges. Does it support a public or external IP? It is supported in the case of direct peering; in the case of carrier peering it is optional and depends on the ISP as well.

So typically, if you look at the layers, Dedicated Interconnect and Partner Interconnect are layer 2 services: you have a VLAN connection to your data center or office premises. The peering options are layer 3 interfaces, and these come without an SLA, but you can have RFC 1918 support using a VPN connection. If you want to enable encryption, you can do that with the VPN as well. Here are some examples and thought processes on different use cases for direct peering versus Cloud Interconnect.
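The comparison above can be condensed into a rough decision helper. This is a simplification distilled from the lecture's matrix, not an official Google decision tree; the 10 Gbps threshold matches the minimum dedicated deployment quoted later, and the parameter names are my own.

```python
def choose_connectivity(need_private_rfc1918, at_google_pop, bandwidth_gbps, need_sla):
    """Rough mapping of requirements to a GCP hybrid connectivity option."""
    if not need_private_rfc1918:
        # Peering only reduces egress to Google public services (e.g. G Suite)
        # and carries no SLA; the choice depends on where you can connect.
        return "Direct Peering" if at_google_pop else "Carrier Peering"
    if need_sla:
        # Interconnect options are layer 2, unencrypted, and SLA-backed.
        if at_google_pop and bandwidth_gbps >= 10:
            return "Dedicated Interconnect"
        return "Partner Interconnect"
    # Private RFC 1918 connectivity without a dedicated circuit:
    # encrypted IPsec tunnels over the public internet.
    return "Cloud VPN"

print(choose_connectivity(False, True, 1, False))   # Direct Peering
print(choose_connectivity(True, True, 20, True))    # Dedicated Interconnect
print(choose_connectivity(True, False, 2, True))    # Partner Interconnect
print(choose_connectivity(True, False, 1, False))   # Cloud VPN
```

Real decisions also weigh cost, existing carrier relationships, and encryption requirements, so treat this as a first-pass filter only.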

I'm not going to get into the details, but in a nutshell, what you need to remember is that a Google Cloud Platform VPC is not necessarily a requirement for direct peering, whereas interconnect connects to your Google Cloud Platform network. So let's look at some aspects of interconnect. If you are present at a Google peering location, or PoP location, you make your connection directly with Google, and this is what is called a direct (dedicated) interconnect, okay?

This is via Partner Interconnect. As an example, you are not present in a Google colocation facility, so you go with a service partner. The service providers also have connections with Google on the back end, so you just connect to the service provider, and that is how you can connect your data center with Google Cloud Platform. Some high-level thoughts on interconnect: interconnect allows a Google Cloud Platform customer to connect to Google via an enterprise-grade connection. Google interconnect offers a high-availability, low-latency network compared to a traditional public internet connection. Google supports direct connection to its network through what is called direct and carrier interconnect; note that "direct" here does not mean the peering option.

It means direct (dedicated) interconnect and carrier (partner) interconnect. So, high-level features: enterprise-grade connections, reduced egress pricing, and integration with partners as well, so you can reach out to any partner, and there is a whole list of them through which you can get connected to Google for Partner Interconnect. The slide also shows some of the per-region latencies for Cloud Interconnect connections; I'm not going into the details.

Some considerations you need to understand while making these decisions. The minimum deployment per location is 10 Gbps. If your traffic doesn't require that level of capacity, consider Cloud VPN, especially if you need encryption on top of it. The circuit between your network and Google's network is not encrypted; if you require additional data security, use application-level encryption or your own VPN. So on top of interconnect, you can also have a VPN so that your traffic is encrypted while in transit. Currently you can't use Google Cloud VPN in combination with a dedicated connection, but you can use your own VPN solution. That is one consideration. Dedicated Interconnect has a capacity of 10 to 80 Gbps per interconnect.

The cost is reduced egress fees, plus a fee for each circuit and VLAN, and it requires you to have routing equipment in the colocation facility. If you require additional security, use application-level encryption, because encryption is not enabled by default. In the case of a Google Cloud VPN tunnel, by contrast, traffic is encrypted by default. You can go from 1.5 Gbps up to 3 Gbps per tunnel, and you can simply add more tunnels if one does not meet your capacity requirement. Traffic is billed the same as general network pricing, there is a fee for each tunnel, and it requires a VPN device on your premises network.
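Given the 1.5 to 3 Gbps per-tunnel range quoted above, sizing a Cloud VPN deployment is simple division. A small sketch (the figures come from the lecture; real throughput also depends on packet size and the peer device):

```python
import math

def tunnels_needed(required_gbps, per_tunnel_gbps=1.5):
    """Tunnels to provision, defaulting to the conservative 1.5 Gbps figure."""
    return math.ceil(required_gbps / per_tunnel_gbps)

print(tunnels_needed(4))        # 3 tunnels at 1.5 Gbps each
print(tunnels_needed(4, 3.0))   # 2 tunnels in the best case
```

If the answer climbs toward the 10 Gbps interconnect minimum, that is the signal to consider Dedicated or Partner Interconnect instead.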

And that, in a nutshell, is Cloud Interconnect. So what we are saying here is: you can connect your own data center with Google, and depending on what your particular requirement is, you choose either the peering option or interconnect. Within interconnect you have Dedicated Interconnect and Partner Interconnect; both have pluses and minuses, but for Dedicated Interconnect you need to be present in a Google colocation facility. Alternatively, you can use a Cloud VPN tunnel, and this is the option you can start with to get connected to Google Cloud Platform. If your requirement grows really high, you can then think about moving to interconnect.

So that's it, guys, for hybrid interconnections: the connection between your data center or office premises and Google Cloud Platform, either to reduce egress fees, to have a secure channel, or to have an SLA-backed connection. We will not have any demo on Cloud Interconnect as such, but we will definitely have a demo on VPN, and VPN plus Cloud Router. And that's it for all the theory, guys. If you have any questions on interconnect, VPN, peering, or Cloud Router, let me know. Otherwise you can move to the demo section. Thank you.

  1. 1.4 Designing a Container IP Addressing plan for Google Kubernetes Engine

One more topic here is container networking, and the reason it needs to be discussed is that containers use a different kind of networking than virtual machines, which is why I think it has been included here. Before we get into what container networking is, let us understand at a high level what a container is, what Docker is, and why it is used. Looking at this particular architecture: with typical virtualization, you have hardware, you have a host OS, and then you install multiple guest OSes on top of it to support different applications.

So you can allocate resources like CPUs and memory per guest operating system. But at the same time, you are running a complete operating system just to run your application. In containerization, that is not what happens. You have a host operating system, and then you create a container, and that container contains only the libraries required for that particular application to run. It is not a full OS as in virtual machines; a container is just the binaries and libraries required to run that particular application, and you can run as many containers as your hardware capacity supports. That is the difference between virtualization and containerization.

So typically, if you look at Google's container management system, which is Kubernetes, Google has open-sourced it, and now even Amazon and Azure provide services based on Kubernetes. It is a container orchestration tool. Typically, in any container system you have a master, which manages the connections, all the nodes, and the lifecycle of each and every node as well as the containers inside those nodes. That is the Kubernetes architecture. Typically, you can think of the master as the endpoint or doorway to the cluster: it is used to manage the cluster, like the number of nodes and what you push to the cluster, meaning the containers.

So the master manages all those nodes, and it is not a worker node as such; the nodes in the cluster are the worker nodes, and those are the nodes that actually run your containers. In the Google context, or the Kubernetes context, you can club one or more containers together in one particular Pod, as they call it. That is the unit which gets deployed across the cluster and gets managed. You can enforce an autoscaling policy per Pod: for example, if you push a particular Pod which serves your website, you can say at least three replicas should be running, and it can go up to, say, 20 based on the load on those services. So that is the Pod, and this is very important for us in understanding the networking of Google Kubernetes Engine, or GKE.

Traditionally, as they describe it, a Pod has got two networking entities. One is the IP address. So if your application has got two different containers, say NGINX as one example and Memcached as a cache, the IP address is shared among all those containers. So how can you expose your services outside, if that is required? You can use different ports for the different containers. And that is how the networking differs from traditional virtual machine instances. The containers in a Pod also share storage as well as the disk on that particular node.
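The shared-IP, separate-ports model can be demonstrated without Kubernetes at all. The sketch below models the two "containers" (NGINX and Memcached in the example above) as two tiny HTTP servers inside one process: both answer on the same IP address and differ only by port, which is exactly how containers in a Pod are addressed. The servers and response bodies are invented for illustration.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def make_handler(body):
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):  # silence request logging
            pass
    return Handler

# Two "containers" share one IP address (127.0.0.1 here) and are told
# apart only by port -- like containers sharing a Pod's network namespace.
web = HTTPServer(("127.0.0.1", 0), make_handler(b"nginx"))       # port 0: OS picks a free port
cache = HTTPServer(("127.0.0.1", 0), make_handler(b"memcached"))
for srv in (web, cache):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

web_body = urlopen("http://127.0.0.1:%d" % web.server_address[1]).read()
cache_body = urlopen("http://127.0.0.1:%d" % cache.server_address[1]).read()
print(web_body, cache_body)  # b'nginx' b'memcached'
web.shutdown(); cache.shutdown()
```

A consequence worth remembering: because they share one IP, two containers in the same Pod cannot listen on the same port.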

But that is an additional point. In the context of networking, a Pod has got an IP address which is shared between its containers; that is what you need to keep in mind. That's it for container networking, guys. If you have any questions on container networking, let me know. Otherwise you can move to the next lecture, which is SaaS, PaaS, and Infrastructure as a Service. Thanks.
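Since every Pod needs its own IP address, the cluster's Pod CIDR effectively caps the cluster size: GKE carves a per-node slice out of it (a /24 per node by default). A rough sizing sketch of that arithmetic (the CIDRs are example values, and real planning must also reserve ranges for nodes and Services):

```python
import ipaddress

def max_nodes(pod_cidr, per_node_prefix=24):
    """Each node gets its own per_node_prefix slice of the Pod range,
    so the number of such slices bounds the node count."""
    cluster = ipaddress.ip_network(pod_cidr)
    return 2 ** (per_node_prefix - cluster.prefixlen)

print(max_nodes("10.4.0.0/14"))  # 1024 nodes (a /14 holds 1024 /24s)
print(max_nodes("10.0.0.0/21"))  # 8 nodes -- too small for most clusters
```

This is why an undersized Pod range is a common GKE design mistake: the cluster stops scaling long before you run out of machines.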