Practice Exams:

Google Professional Data Engineer – VPCs and Interconnecting Networks part 7

  1. Dedicated Interconnect, Direct and Carrier Peering

We saw earlier that there are three ways to interconnect your on-premises network with a VPC on the Google Cloud Platform. The first of these was the VPN, which can be used with or without cloud routers. The second is the dedicated interconnect. As opposed to a VPN, a dedicated interconnect does not use a tunnel over the Internet. Instead, it is a direct physical connection between Google’s network and your on-premises network. The two networks that have been connected using a dedicated interconnect communicate with each other using the RFC 1918 address space. The RFC 1918 standard defines a private network where all instances communicate via private IP addresses, drawn from the ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.

Because the dedicated interconnect is a direct physical connection, its capacity is very large, and you can use it to transfer large amounts of data between the networks. So if you are migrating from an on-premises network to the Google Cloud, you might want to set up a dedicated interconnect for large data transfers. Because you’re not routing traffic over the Internet, and because the direct physical connection is of a high capacity, a dedicated interconnect might prove more cost effective than using high-bandwidth Internet connections or VPN tunnels. A dedicated connection, which avoids the Internet altogether, means that your traffic traverses fewer hops.

There are fewer points of failure where traffic might get dropped or disrupted. A single dedicated connection offers a capacity of 10 Gbps, and you can have up to eight such connections between your VPC and your on-premises network, for a total capacity of 80 Gbps. There are two specific things you need to note about a dedicated interconnect. The minimum deployment per location is 10 Gbps, so if your traffic doesn’t require that level of capacity, you might want to use Cloud VPN instead. Also, because the two networks meet in the same physical location, the circuit between your network and Google’s network is not encrypted. So if you need additional data security in the form of encryption, you need to do your own application-level encryption or use your own VPN.

Here is a block diagram of two networks connected using a dedicated interconnect. The on-premises network is shown on the right. Notice that it has an on-premises router with a link-local address. On the left, you have the VPC on the Google Cloud Platform. It has been configured with a cloud router so that dynamic routing is enabled. Any route or topology changes on the network will be communicated to the on-premises network. The dedicated interconnect represents a specific physical connection between Google and an on-premises network. The interconnect exists in a colocation facility where the on-premises network and Google’s network meet physically. The interconnect is typically a single 10 Gbps link or a number of 10 Gbps links, with a maximum of eight links.
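
As a rough sketch of how such a connection is ordered, here is what provisioning a dedicated interconnect might look like with the gcloud CLI. The interconnect name, customer name, facility, and link count below are placeholder values, not taken from the course.

    # Order a dedicated interconnect with two 10 Gbps links.
    # Placeholder names; pick a real facility from
    # "gcloud compute interconnects locations list".
    gcloud compute interconnects create my-interconnect \
        --customer-name="Example Corp" \
        --interconnect-type=DEDICATED \
        --link-type=LINK_TYPE_ETHERNET_10G_LR \
        --requested-link-count=2 \
        --location=my-colocation-facility \
        --admin-enabled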

If you have multiple connections to Google at different locations or to different devices, you must create separate interconnects. Google offers a list of colocation facility locations where your dedicated interconnect can be set up. Unless your network happens to be very small and with a stable topology, it is good practice to provision a cloud router, just like in the case of the VPN tunnel. The cloud router will use the Border Gateway Protocol (BGP) to exchange route information with your on-premises network, so any changes to your network will be propagated automatically. If you want a more reliable and faster connection between your on-premises network and Google’s cloud, the dedicated interconnect is a much better option because it does not traverse the public Internet.
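
To make the dynamic routing piece concrete, here is a hedged sketch of provisioning a cloud router and attaching it to the interconnect. The names, region, and ASN are assumptions for illustration only.

    # Create a Cloud Router that speaks BGP on behalf of the VPC
    # (placeholder network, region, and private ASN)
    gcloud compute routers create my-cloud-router \
        --network=my-vpc \
        --region=us-east1 \
        --asn=65001

    # Create a VLAN attachment that ties the interconnect to the router
    gcloud compute interconnects attachments dedicated create my-attachment \
        --interconnect=my-interconnect \
        --router=my-cloud-router \
        --region=us-east1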

Fewer hops means fewer points of failure. The two networks occupy a single RFC 1918 IP address space, which means they can use internal IP addresses over the dedicated connection. Traffic which uses external IP addresses and leaves Google is considered egress traffic and may be billed at a higher rate. The dedicated interconnect is a high-bandwidth, low-latency connection. It can scale up to 80 Gbps: eight connections of 10 Gbps each. Because traffic is addressed using internal IP addresses, the cost of egress traffic which flows from the VPC to the on-premises network is reduced, so your bill will also be lower. Google also allows you to establish a peering connection between your business network and Google’s. This peering can be of two kinds: Direct Peering or Carrier Peering.

With Direct Peering, Google allows you to establish a direct connection between your business network and Google’s. This Direct Peering connection can be set up in any of 70-plus locations in 33 countries. Carrier Peering, by contrast, makes use of an intermediary: a third-party Internet carrier which is used to route your traffic. Direct Peering is a direct connection between your on-premises network and Google, made at Google’s edge network locations. Google’s edge points of presence are where Google’s network connects to the rest of the Internet via peering. Direct Peering supports dynamic routing: the Border Gateway Protocol is used to exchange route information so that network topology changes are propagated from the on-premises network to the VPC and vice versa.

The cool thing about Direct Peering is that it can be used to route traffic not just to resources on the Google Cloud Platform, but to all of Google’s services, including the full suite of GCP products. This can be a major bonus. In order to be able to peer with Google, you need to be a customer that meets Google’s technical peering requirements, specified in their documentation. Establishing a Direct Peering connection with Google is free: there is no ongoing cost per port or fractional port, and there are no per-hour charges. However, the Google Cloud Platform bills egress traffic through Direct Peering connections within a region.

There is special billing for GCP egress traffic; other traffic is billed at standard GCP rates. Carrier Peering on GCP was previously referred to as Carrier Interconnect or Cloud Interconnect. It’s simply called Carrier Peering now, and that’s the terminology that we’ll use. You can get access to Google applications such as G Suite via an enterprise-grade network service that connects your infrastructure to Google using a third-party service provider. This is Carrier Peering. As opposed to an interconnect using a VPN tunnel, this will give you higher availability and lower latency, and you can have one or more links between your on-premises network and Google’s network.

The actual latency and throughput that you will receive depend on the service provider, so you’ll have to work with them to work out the best deal for you. Because Carrier Peering depends on a third-party service provider, Google does not offer an SLA for this service. The actual SLA depends on the carrier, and you’ll have to work with them directly. Just like in the case of Direct Peering, there is a special billing rate for GCP egress traffic, and all other traffic is billed at standard GCP rates. Carrier Peering with the Google Cloud Platform is offered by a number of carriers, such as Level 3, Megaport, Tata Communications, and so on. You can pick one which works for you.

  2. Shared VPCs

Connections need not always be between networks of different types, such as between your VPC network and your on-premises network or another cloud provider. You might want to connect two VPC networks together, and you have two options to do this: you can either set up a shared VPC, or you can perform VPC network peering within your organization. A project is a billing unit, so a team or department might create a project for itself, and all resources will be billed to that department. However, the organization is one entity, so you often might want to have a single network across multiple projects. This can be enabled using a shared VPC.

So far, all that we’ve seen are one project and multiple VPCs that we’ve set up within that project. But now we look at a shared VPC, where we have multiple projects but the resources in those projects belong to the same VPC; that is a shared VPC. Shared VPC is the terminology that we’ll use in this course, but another name for the shared VPC is XPN, or Cross-Project Networking. This is older terminology and is no longer used, but if you see it anywhere, you should know that it refers to a shared VPC. The one important differentiator between the shared VPC and all the other interconnects that we’ve seen so far is the fact that the shared VPC is a single network which spans multiple projects.

So shared VPCs allow XPN, or Cross-Project Networking: you have multiple projects, but it’s the same network. It’s not an interconnect or some kind of tunnel established between two or more networks. Just like any other VPC that we’ve set up so far, the shared VPC creates a single VPC network of RFC 1918 IP address space that associated projects can use. Any firewall rules or policies that you set up for the shared VPC apply to all resources in all projects which are in that shared VPC. This is different again from the interconnect options: a firewall rule applies to a specific network, and when you use interconnects, there are different firewall rules for each network.

In a shared VPC, it is the same network across multiple projects, and the same firewall rules and policies apply. Here is a block diagram which visually explains how one would set up a shared VPC within an organization (labeled “Customer” in the diagram). In order to demonstrate how shared VPCs are structured and how they work, there are four projects in this setup: a host project, service project one, service project two, and a standalone project. The dotted red line that you see on screen is the shared VPC, and it connects the host project with the two service projects. The standalone project is not on the shared VPC; it has its own network. The shared VPC host project is the project that hosts the shareable VPC networking resources within a cloud organization.

These shared VPC networking resources can be used by other departments, which are in service projects. A single shared VPC host project can have several service projects associated with it. Service projects are those which want to have resources on this shared VPC. These service projects are given special permissions which allow them to use the shared VPC networking resources from the host project. You can separate service projects so that each service project is operated by a different department within your organization. Each department has complete ownership of the workloads contained within its project. It’s responsible for billing and for what resources are instantiated, but these resources can exist on the shared VPC.

Now it’s totally possible that you have a department within your organization that does not need to be on the shared network. They might have their own project, and that project has its own VPC. This standalone VPC is within a project that does not share networking resources with any other project. The shared VPC network is a network owned by the host project and shared with the service projects that are associated with it. These service projects belong to the same cloud organization. The firewall rules and policies that apply to this shared VPC apply to all the resources across all projects which are part of this VPC. The cloud organization is basically the top-level unit on the Google Cloud Platform.

So in the cloud resource hierarchy, the cloud organization is at the very top. Under that, there are folders and projects that can belong to different departments. The organization is the top-level owner of all the projects and resources created under it. So the host project and the service projects which are on a shared VPC must belong to the same cloud organization; a shared VPC is not possible across projects which belong to different organizations. Here are a few basic rules that you have to follow when you set up a shared VPC. When you set up a service project, it can only be associated with a single host project; you can’t have the same service project be associated with multiple host projects. A project can be designated as a host project or a service project, but not both at the same time.
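
For concreteness, here is a minimal sketch of those two designations using the gcloud CLI; the project IDs are placeholders.

    # Designate a project as a shared VPC host project
    gcloud compute shared-vpc enable host-project-id

    # Attach a service project to that host; a service project
    # can be attached to only one host project
    gcloud compute shared-vpc associated-projects add service-project-id \
        --host-project=host-project-id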

Instances on a shared VPC are on the same network, and they can communicate with each other using internal IP addresses. However, instances within a project can only be assigned external IPs from that project. Every project has a pool of external IPs available to it, and when an instance is created in a particular project, it can only be assigned external IPs from the pool associated with that project. Let’s say that the HR department of your organization wants to join a shared VPC. If they already have a project of their own, with some apps running, some instances set up, and so on, you can have that existing project join a shared VPC. That is totally possible: you can mark an existing project as a service project of some host project.

However, any instances that have already been created in an existing project cannot be migrated over to the shared VPC. You have to explicitly create new instances that will belong to the shared VPC. At any point in time, you can enable or disable an existing project in the cloud organization as a shared VPC host or a service project. A few details about billing before we move on and look at some architecture diagrams. Traffic across projects is billed with the same rules as if it were within the same project. In a shared VPC setup, traffic egress is attributed to the project that originated the egress: egress traffic from a virtual machine instance is attributed to the project where that particular instance lives.

The same organization can have multiple shared VPCs; there is no rule that there can be just a single shared VPC in an organization. Here you can see that in the production environment, there is a shared VPC across three different projects: one host and two service projects. You can imagine that each of these projects belongs to a different engineering team. They share the same network because they’re delivering the same product. In a typical setup, you want your test environment to be completely isolated from your production environment. Your test environment is likely to be on a different network, and that can be a shared VPC as well. Once again, it has one host project and two service projects.

A standard use case for a shared VPC is a two-tier web service. We have external clients which come in through an external load balancer. They first hit the tier-one serving instances, which then go through an internal load balancer to hit the tier-two serving instances. If tier one is the front end and tier two is the back end, it’s easy to imagine that these are owned by different teams. The front end receives external requests from the users and then makes internal requests to the back end. They are on different projects because they belong to different teams, but they are on the same network because their resources contribute to the same product: a website that is exposed to the public.

Having these tiered separations and separate projects basically allows each team to deploy and operate its services completely independently. So you might have a front-end update where the UI has changed but none of the back-end APIs have changed; in that case, you’ll just update tier one, the front end. Or if you have API changes, you might just update tier two, the back end. Because these are operated by different teams, they own the projects. It’s quite likely that each team has its own budget, which is why they have the billing unit, the project, assigned to them separately. Each project is billed separately, and every project administrator can manage their own resources. They can reduce the number of resources they use if they need to.

The advantage of the shared VPC is that the front end and the back end are on the same network. They can communicate via internal IP addresses, which is why they just need an internal load balancer in front of the back end. In addition, you can have a single group of network and security admins that is responsible for the shared VPC. So your network and security team can be a single team, and they administer the network for both the front end and the back end. In a typical organization, network and security expertise lies within one team; every engineering team cannot have its own network and security experts. That one team will be in charge of network connectivity and security rules for the organization as a whole, which means they only have to administer these rules for the shared VPC.

  3. Lab: Shared VPCs

In this demo, we will create a virtual network in one project and then share it with another one. We will then go on to show how instances created on this shared network in the first project can be accessed, using their internal IP addresses, from an instance in the second project on the same network. Let us begin by creating a virtual network in what is called a host project. This network will consist of two subnets: one in the US East region and another in the US West region. Once this network has been created, let us go ahead and try to share it. So we head over to Shared VPC, and notice here that we do not have the requisite permissions to share a virtual network. Let us try to fix this by heading over to the IAM section. So we navigate through the menu to IAM & admin, and under IAM, we cannot just use the IAM section within our project.
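
The demo uses the console, but the equivalent network setup can be sketched with the gcloud CLI; the network name, project ID, regions, and IP ranges below are placeholders that only roughly match the demo.

    # Create a custom-mode VPC network in the host project
    gcloud compute networks create learn-vpc \
        --project=host-project-id \
        --subnet-mode=custom

    # One subnet in US East, another in US West
    gcloud compute networks subnets create subnet-us-east \
        --project=host-project-id \
        --network=learn-vpc \
        --region=us-east1 \
        --range=10.10.0.0/16

    gcloud compute networks subnets create subnet-us-west \
        --project=host-project-id \
        --network=learn-vpc \
        --region=us-west1 \
        --range=10.20.0.0/16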

Instead, in order to share a virtual network, the permission needs to be granted at the organization level. So even though a user is an organization administrator, the specific permission to share a virtual network needs to be explicitly given. So for this user, we navigate through the set of roles, and the specific one we’re looking for is under Compute Engine: it’s called Compute Shared VPC Admin. Once this role has been assigned, we can head back over to our host project and see if we now have the permissions to share our virtual network. We see here that the warning message has disappeared, and we now have the option to set up a shared VPC. So let us just go ahead and do that.
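
The same grant can be sketched on the command line; the organization ID and user email are placeholders.

    # Grant the Compute Shared VPC Admin role at the organization level
    gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
        --member="user:admin@example.com" \
        --role="roles/compute.xpnAdmin"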

The first step is to enable the current project as a host project, that is, the project which hosts the VPC network to be shared. Once that is done, the next step is to select the subnets which will be shared. Though we have the option to share all the subnets in our project, let us just pick one of the subnets from the network which we created earlier. So we navigate to the second page, and we choose the US East subnet from our Learn VPC network. The next step is to select the project with which we will share this virtual network; in GCP terminology, this is called a service project. So we select Loonycorn project 13 for our example. And finally, we select the users in our service project who will have the Compute Network User role for the shared subnet.

This role essentially allows one to create instances on the subnet. So now we just go ahead and hit Save, and we wait for our shared VPC to be set up. Once it is ready, we can review our VPC, and over here we see that for our shared subnet, the user kishan@loonycorn.com, who is the owner of the service project, has the ability to create instances on the shared subnet. The next step for us is to create a firewall rule which will allow instances on our shared network to be reachable via ping and via SSH. So we create this new rule, which we attach to our shared network. We want this rule to apply to all instances within the network, and we want any host from anywhere to be able to ping and to SSH in.

We specify the SSH port and ICMP for ping, so we now have a firewall rule in place. Let us go about creating our virtual instances. We go and create a new instance, which we shall call our host project instance. This will be provisioned on the shared network and subnet, and once the host is ready, let us take note of its internal IP address. We will soon be testing to see if this host can be reached from an instance in our service project, created on the same virtual network, using the internal IP address. To do this, let us now navigate to our service project, which, if you remember, was Loonycorn project 13. In this project, we are logged in as the project owner, who we saw had the ability to create instances on the shared network.
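
The sharing and firewall steps we just performed in the console could be sketched with gcloud roughly as follows; the subnet, project IDs, rule name, and user email are placeholders.

    # Grant the Compute Network User role on just the shared subnet
    gcloud compute networks subnets add-iam-policy-binding subnet-us-east \
        --project=host-project-id \
        --region=us-east1 \
        --member="user:service-project-owner@example.com" \
        --role="roles/compute.networkUser"

    # Allow SSH (TCP port 22) and ping (ICMP) from anywhere on the network
    gcloud compute firewall-rules create allow-ssh-icmp \
        --project=host-project-id \
        --network=learn-vpc \
        --allow=tcp:22,icmp \
        --source-ranges=0.0.0.0/0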

We now go ahead and create a new VM instance, which we shall call our service project instance. We create it in the same region where our shared VPC subnet is. And when we go to select the network, we see that we can either choose a network in the same project, or we can choose a network which has been shared from another project. We just choose the second option and pick the shared subnet. Once the host has been provisioned, we can bring up the SSH terminal and see if we are able to ping the instance in our host project using its internal IP. When we run our ping, we see that it completes successfully. And with that, we finish our demo about creating a shared VPC network.
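
For reference, the two instances and the connectivity test might look like this with gcloud; the zone, instance names, and internal IP address are placeholders.

    # In the host project: an instance on the shared subnet
    gcloud compute instances create host-project-instance \
        --project=host-project-id \
        --zone=us-east1-b \
        --subnet=projects/host-project-id/regions/us-east1/subnetworks/subnet-us-east

    # In the service project: an instance on the same shared subnet,
    # referenced by its full path in the host project
    gcloud compute instances create service-project-instance \
        --project=service-project-id \
        --zone=us-east1-b \
        --subnet=projects/host-project-id/regions/us-east1/subnetworks/subnet-us-east

    # From an SSH session on the service project instance, ping the
    # host project instance's internal IP (placeholder address)
    ping -c 3 10.10.0.2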