Google Professional Cloud Network Engineer Practice Test Questions and Answers, Google Professional Cloud Network Engineer Exam Dumps - PrepAway
Designing, Planning, and Prototyping a GCP Network
5. Designing the overall network – CDN
Let's look at the CDN and how you can optimise your latency using it. In the previous architecture, when you serve a website or mobile application, clients connect to a load balancer, and the load balancer directs each request to the back-end services. Consider the case where you have static content: videos, images, or static text data that usually doesn't change. If a CDN is not present, every request is routed to the back-end services, where the data is retrieved; even though it is static content, the request makes its way all the way back to the back end. But if you enable the CDN for the static content, that data is cached in CDN edge locations, and Google provides a large number of these locations (around 80 or more) out of its cloud platform where you can cache your content. As an example, if you have a 10 MB video or a 10 MB image, it will be stored in the CDN locations, and whether you access it from Asia or the US, you will hit the load balancer or the nearest CDN location. If the data is found there, it is served from that location.
We will get into the details of the CDN later, but overall, that's the concept: it's a content delivery network, so your content is cached at all those CDN locations and served from there. There are two distinct advantages. First, you optimise latency, because the content is delivered from the CDN location nearest to the customer. Second, you reduce the load on your back-end services, such as the load on your virtual machines or your Cloud Storage, and that is definitely a good advantage when you are serving millions of customers or users. So that is the CDN. We'll go over the theory and details later. If you have any questions about the high-level CDN concepts, let me know. Otherwise, you can move on to the next lecture. Thank you.
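As a rough sketch of how this is done in practice, Cloud CDN is enabled as a flag on a load balancer's backend service via the gcloud CLI. The backend service name below is hypothetical, and the commands assume an authenticated project with an existing global backend service:

```shell
# Enable Cloud CDN caching on an existing global backend service
# (hypothetical name: web-backend).
gcloud compute backend-services update web-backend \
    --global \
    --enable-cdn

# Confirm that CDN is now enabled for the backend service.
gcloud compute backend-services describe web-backend \
    --global \
    --format="value(enableCDN)"
```

Once enabled, cacheable responses (images, videos, static text) are served from the edge location nearest the client instead of travelling back to the back-end services.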
6. Designing the overall network - Project and Network Quota
Let's go ahead and understand project quotas. There are two different quotas applicable here: the first is a per-project quota, and the second is a per-VPC quota. As we understand it, a VPC is a virtual private cloud that you create in Google Cloud Platform or any other available cloud platform, and there are different quotas for projects and for VPCs. Let's go over what a project quota is. But before we get there, note that there are two different terms here: one is the quota, and the second is the limit. A quota can be thought of as a constraint that is suggested and enforced by Google Cloud Platform, and it applies to a particular service or a particular API. You can increase a quota by requesting an increase for that service.
You can request an increase for a specific quota, and you can think of quotas as existing so that you avoid bill shock. Suppose you are using a number of virtual machines and you don't even realise how many are being used by your project because of autoscaling and other parameters.
Then you get the monthly bill, and it's a shock: this was not expected. To avoid that kind of bill shock, you have project-related quotas. And although a quota is a limit recommended and enforced by Google Cloud Platform, you can ask for an increase. The other concept is the limit. Limits are hard caps beyond which you cannot ask for an extension; they cannot be extended. Talking about projects: within one project you can create up to five VPC networks, and that's why there are separate quotas for projects and for VPCs. For a per-project quota, think of the number of API calls you can make in a particular day; that is a project-level quota, and it doesn't matter whether the calls are made within a single VPC or across multiple VPCs.
So that is the project-level quota. You can also think of the number of virtual machines or load balancers used by one particular project as counting against the project quota. When we talk about the VPC, some quotas apply per network: you can go up to 15,500 virtual machines per VPC, there is a quota for the number of firewall rules you can create, a quota for the number of forwarding rules you can enforce, and a limit on the number of Cloud Routers you can create. There are many other parameters as well, and we will see them in the Google Cloud Console; that's the quota concept. That's it for quotas, guys. If you have any questions here, let me know. Otherwise, you can move on to the next lecture. Thank you.
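As a sketch, assuming the gcloud CLI is installed and authenticated (the project ID below is hypothetical), current quotas and usage can be inspected from the command line rather than the console:

```shell
# Project-wide quotas: metric, current usage, and limit
# (networks, firewalls, routes, static addresses, ...).
gcloud compute project-info describe \
    --project=my-project-id \
    --flatten="quotas[]" \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"

# Per-network (VPC) details appear on the network resource itself.
gcloud compute networks describe default --project=my-project-id
```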
7. Designing the overall network - Hybrid Connection
Let us go ahead and understand what hybrid connectivity options are available. We already saw an introduction about the connectivity options that are available, but this is just a part of the syllabus. That's why I'm reiterating some parts that have already been covered.
The first option is Cloud VPN, Google Cloud Platform's VPN service for connecting your on-premises data centre or offices to Google Cloud. The connection to Google Cloud Platform is secure (encrypted), it can be used with or without a dedicated connection, and you can add multiple tunnels to increase network throughput.
The second option is Cloud Interconnect, a dedicated connection from your on-premises network to Google, and here you have two variants. One is Dedicated Interconnect, where you connect directly to Google Cloud Platform; it has some requirements: you need at least a 10 Gbps circuit, and you need to be present in one of Google's PoP (point of presence) locations. If you are not present in a Google PoP location, you need to go via Partner Interconnect, and even if you are present in a PoP but your requirement is less than 10 Gbps, say 5 Gbps, you should also go via Partner Interconnect. The third option, peering, is not purely for Google Cloud Platform; it covers all Google services, such as the G Suite collaboration platform, Sheets, document storage, plus YouTube or any other Google service your organisation requires.
The benefit of peering is that you reduce egress fees. But you do not get any SLA, and the data is not encrypted in the pipe; you need to keep that in mind. You also cannot exchange your on-premises network routes with Google; that is not allowed in peering. The whole purpose of peering is to save on egress fees. That's it for hybrid connectivity in a nutshell, guys. If you have any questions, let me know. We get into the details of all these topics, along with demos, in the next section, so if you have detail-level questions, hold on to them and ask after the actual theory and demo. Thank you.
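To make the Cloud VPN option concrete, here is a minimal classic VPN sketch with the gcloud CLI. All names, the region, the peer address, the traffic selectors, and the secret are hypothetical, and a real setup also needs a reserved static IP, ESP/UDP 500/UDP 4500 forwarding rules, routes, and a matching tunnel configured on the on-premises device:

```shell
# Create a classic (target) VPN gateway in one region of the network.
gcloud compute target-vpn-gateways create on-prem-gateway \
    --network=my-network \
    --region=us-central1

# Create a tunnel to the on-premises peer. Adding more tunnels in
# parallel increases aggregate throughput, as noted above.
gcloud compute vpn-tunnels create tunnel-1 \
    --region=us-central1 \
    --target-vpn-gateway=on-prem-gateway \
    --peer-address=203.0.113.10 \
    --ike-version=2 \
    --shared-secret=my-shared-secret \
    --local-traffic-selector=10.0.0.0/20 \
    --remote-traffic-selector=192.168.0.0/24
```

For production use, the newer HA VPN (`gcloud compute vpn-gateways`) is generally preferred, since it is the variant that carries an availability SLA.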
8. Designing the overall network - SaaS, PaaS, IaaS
If you're learning Google Cloud Platform or cloud networking, this is a very basic topic that you're probably already familiar with. I'm explaining it because it's on the syllabus, for the benefit of new students or recent graduates who may not have this background yet. There are a variety of service models available: the first is software as a service (SaaS), followed by platform as a service (PaaS) and finally infrastructure as a service (IaaS). Beyond that, traditionally, there are organisations that manage their own data centres, because the amount of data they handle is huge and security requirements mean they cannot use colocation facilities or data centres from other vendors. If your company manages its own data centre, it manages everything the data centre requires, and that includes the real estate side: renting the location, maintaining the cooling, and everything else any building requires.
It starts with the computing resources: the network, the cabling, the storage servers, the virtualization layer, the operating system you install on the hardware, the middleware, the runtimes, the data inside it, and the applications. All of that is maintained and managed by you. So you will need real estate maintenance personnel, networking personnel, hardware personnel, operating system personnel who manage the OS on that hardware, virtualization personnel, and so on. Everything is managed by you, but because you are a very large organisation, you can spend that much money and have it done. The second model is infrastructure as a service. That's where you go to a vendor such as a public cloud platform like Google, let them maintain the real estate, the networking, the virtualization, and everything else, and you just use their virtual machines and virtual services. A virtual machine in the Google Cloud Platform is one example of infrastructure as a service.
If you are using a virtual machine and installing a custom database or application on it, you are taking ownership of the operating system, the middleware, and anything else that runs inside that operating system, and Google is just providing the infrastructure: the data centre, the physical location, the networking, the storage, and the virtualization platform required for you to launch and manage your virtual machine instances. That is infrastructure as a service. If you move one level up, Google also manages the operating system, middleware, and runtime, and simply runs your application; this is what App Engine does. App Engine allows you to write code, create a data model, and deploy your application, and it begins running. You are not managing the underlying operating system or the runtime; everything is managed for you on the Google side. That is platform as a service. Then there is software as a service, which is not typically Google Cloud's main case, though there are some instances where you consume Google services directly, like the Vision API, right?
That is software as a service, but it is not typical end-user software. If you want to understand what software as a service is, consider Gmail, or consider Salesforce, even though Salesforce also provides a platform. There are many things you can just use readily by configuring them. Numerous vendors offer software as a service, such as QuickBooks: they provide everything out of the box, you simply subscribe, and you can begin using it. That is known as software as a service. A common way to picture this is a pizza-making analogy, right? If you want to make your own pizza at home, you need your own infrastructure, your own real estate, right? Then, on top of it, you need to know how to cook it.
You need the toppings, the dough, the recipes, and your own gas kitchen; you need to do it all yourself, which is like maintaining your own data centre. With infrastructure as a service, the infrastructure is provided by someone else: the kitchen is theirs, but you bring your dough, your toppings, and your knowledge of how to cook, and you make your own pizza with whatever resources they provide. Platform as a service is the third option: the kitchen and the base are provided by them, you just need to bring your own toppings and recipe, and you can make your own pizza. And in the last one, everything is handled by them, like Pizza Hut or Domino's; they provide everything.
You simply tell them you want a particular pizza, say a veggie pizza with such-and-such toppings, well done, and they make it for you. Everything is done for you; you just need to ask for it. That's software as a service. You can also extend beyond software as a service, platform as a service, and infrastructure as a service: there are colocation facility vendors, and you can think of database as a service or container as a service. There are multiple such offerings available and widely used in the industry nowadays; you do not need to worry about them here. All I'm saying is that SaaS, PaaS, and IaaS are not the only things available as a service; many other services exist, and you can use them. That's it, guys. Please let me know if you have any questions. Otherwise, you can move on to the next lecture. Thank you.
9. 1.2 Designing a Virtual Private Cloud (VPC)
Cloud networking. Cloud networking is very important; you can think of it as one of the critical services available from any cloud provider in the market right now. We use cloud networking to isolate your cloud resources from other companies and from public access. Let's go ahead and get into the details of what we have available from the Google Cloud Platform. Cloud networking is one of the three core services available from the Google Cloud Platform, or indeed any cloud platform.
To put it another way, in cloud networking you can create a virtual private cloud, load balancers, firewalls, routes, subnetworks, and connections to your own data centre. At a high level, cloud networking is divided into three categories. The first is the load balancer: that's where you take the traffic from your customers and distribute it to the back-end services, and we saw this in great detail in the computing section. The second is the VPC: that's where you create the virtual private cloud, which you can think of as a private network in the global cloud, and you can spin up resources inside that private environment so that others cannot access them.
So the VPC is one category, and you have subnetworks, firewalls, and VPNs created inside the VPC to isolate different environments. Besides that, you can create hybrid connectivity. For example, if you have a data centre with applications running inside it, you may want to connect that data centre to the Google Cloud platform, which is where the VPN or interconnect part comes in. That is what we are going to see: VPC as well as VPNs and interconnects. We'll also look at DNS and CDN services in this section. These are also core services, though they are optional or extra services you can use based on your needs. We are going to see all of this in detail in this particular section.
So, in a nutshell, we are going to see Cloud VPC, which is the network inside the cloud. We are going to see Interconnect, which means connecting your data centre with the Google Cloud Platform. We have already seen the load balancer, and we are going to see the CDN and DNS here in this particular section. Cloud VPC provides managed networking functionality for Google Cloud Platform resources. You can create a private network with Cloud VPC: you can provision your resources, connect them within the VPC, and isolate them from one another by creating a separate VPC or subnet within it. You can also define fine-grained networking policies between Google Cloud Platform, on-premises, and other cloud infrastructure using VPC.
You can think of it as a comprehensive set of Google-managed capabilities, including granular IP range selection. You can define routes, firewalls, and cloud routers just as you would in your own premises or data centre. Let's look at some of the features of VPC. It lets you build a private global network without managing hardware: no switches or routers to buy or build on your own. You can define routes, subnets, and firewalls, and configure peering within the VPC. You can monitor network connections using flow logs, and it is global, shareable, and expandable by design, so you don't have to provision resources up front. It is not a physical device, so it does not become a bottleneck when there is a problem or when huge bandwidth is utilised. It is managed functionality: everything is managed by the Google Cloud Platform, and it scales based on requirements, so you don't have to worry about scaling. It is defined by software rather than hardware.
I just want to reiterate that you can provision cloud resources, connect them with each other, or isolate them. You can set up subnets and even isolate different environments such as production, development, and testing, correct? That is what VPC provides. There are different types of VPCs available. One is the default VPC, which is created automatically when you have an account; beyond that, you can create an auto mode VPC or a custom mode VPC, and we're going to get into the details of both.
Some of the features of VPC in the Google Cloud Platform: it has a global scope, so it is not specific to a region, a zone, or a data centre, right? It supports multi-tenancy. You can have private communication, define subnetworks, firewalls, and routes, have a Cloud Router for a BGP link, share the VPC, and manage access control via IAM. In a nutshell, VPC is a comprehensive set of Google-managed networking capabilities that includes granular IP address range selection and route and firewall definition, and it is a virtual private network in the cloud that supports Cloud Router. And what do we mean by "Cloud Router"?
We will see that on the next slide. So, a virtual private cloud (VPC) is a virtualized version of the traditional physical network that exists within and between your physical data centres. You can think of it as a virtual, software-defined version; it is not the physical devices inside your own data centre. Each GCP project contains one or more VPC networks. A VPC is global in nature, not tied to a particular region or zone, and it allows global VM instances and other resources to communicate with each other via internal private IP addresses. A VPC itself does not have an IP range; IP ranges are defined within the VPC. You create different subnetworks as part of that network and attach IP ranges to those subnetworks. So if you look at the VPC, it is a firewalled network with no IP ranges of its own; it contains subnetworks, and the subnetworks have IP ranges. You can have more than one subnetwork in a region for a given VPC.
As an example, if you are deploying your services into, say, the us-east1 region, you can have multiple environments created there, like dev, stage, performance testing, UAT, and production, and you can create all of those as different subnetworks. That's the power of subnetworks. Some thoughts on projects and VPCs: a VPC is contained within a project, so all the objects inside the VPC are also associated with that project. Every project begins with a default VPC created for it; you don't have to create it, and it is created in auto mode, so everything is set up for you. You can have up to five networks per project, which is the current limit for creating VPCs. Let's go ahead and get into the details of the different types of VPCs and their features. The first one is the default VPC. When you have a project, this VPC is already created, so for each and every project you have a default VPC, and subnets are automatically created per region.
An Internet gateway is also created, and firewall rules are open between subnets so that all the resources can communicate with each other. Let's go ahead and browse this in the console. If you go to Networking and open the VPC network options, just click on VPC network, and this is the default VPC that is created for you. You don't have to create it, and it contains subnetworks created per region. Routes are defined for all those subnetworks so that they can talk to each other.
There are also firewall rules created for all the ingress traffic, and you can attach these firewall rules to a particular instance or apply them to the whole subnetwork. That is the default VPC that the Google Cloud Platform creates for you per project. The second step is to create a VPC yourself, which you can do here. If I go ahead and say "create a VPC network," you can choose either a custom mode VPC or an auto mode VPC. When I select auto mode, subnetworks and routes are created by default for you, and you do not need to worry about them. When you select custom mode, that's where you need to add the subnetworks wherever you want them. So I'll go back.
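The console view just described can also be reproduced with the gcloud CLI; a quick sketch, assuming an authenticated project that still has its default network:

```shell
# List the per-region subnets that auto mode created in the
# default network.
gcloud compute networks subnets list --network=default

# Show the pre-created firewall rules (default-allow-internal,
# default-allow-ssh, default-allow-icmp, default-allow-rdp).
gcloud compute firewall-rules list --filter="network:default"

# Show the routes, including the default route to the Internet.
gcloud compute routes list --filter="network:default"
```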
By default, an auto mode VPC has a single subnet per region with a fixed /20 range per subnet, which you can later expand up to a /16. The default network is a self-contained network with predefined IP ranges, so you are not creating any IP ranges yourself: the subnet ranges are already created, the IP ranges are already created, and firewall rules and routes are created for you by default. You do not need to create anything in the case of an auto mode VPC; if you want to disable something, you can just go ahead and disable it. In a custom mode VPC, no default subnets are created. Let me go ahead and create one as a custom VPC: I select custom, and I can proceed without any subnet configuration. I can set the dynamic routing mode, regional or global (we'll get into the details later, but I'm leaving it as is for now), and you can set a DNS policy; then just go ahead and create it. So the custom VPC is being created, and it has no default subnetworks.
Subnetworks are created manually, using any valid RFC 1918 IP ranges, and they do not have to be contiguous between subnetworks because you are defining your own subnetwork ranges, right? You have full control over the IP ranges. Going back here, look at this: the custom mode VPC has no subnetworks, no firewall rules, and no dynamic routing. I can just go ahead and add my subnetwork: I pick us-west1, define the IP range (which you can later expand up to a /16), turn Private Google Access on or off, and enable flow logs. Flow logs are very important when you want to audit who is accessing what data and you want to monitor the network. We'll get into that later, but I'll just go ahead and add this for now.
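The same console steps map directly onto the gcloud CLI. A minimal sketch, with hypothetical network, subnet, and range values:

```shell
# Create a custom mode VPC: no subnets are created automatically.
gcloud compute networks create my-custom-vpc \
    --subnet-mode=custom \
    --bgp-routing-mode=regional

# Add one subnet in us-west1 with your own RFC 1918 range, with
# Private Google Access and VPC Flow Logs enabled.
gcloud compute networks subnets create my-subnet \
    --network=my-custom-vpc \
    --region=us-west1 \
    --range=10.10.0.0/20 \
    --enable-private-ip-google-access \
    --enable-flow-logs

# Later, the range can be widened (never shrunk), e.g. /20 to /16.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-west1 \
    --prefix-length=16
```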
So it is creating my subnetwork inside the custom VPC; it is not there yet, I think. Okay, yeah, it's ready now. So my subnetwork has been created. But if you look at the firewall rules, none are created: for the default network we have firewall rules, but no firewall rules are created by default for a custom network. As for the routes, there are two routes created: one is the default gateway to the Internet, and one is for the subnetwork you just created. So routes are created for the subnetwork that you have, but not for subnetworks you have not created or for other regions or zones, right? That's the custom VPC. In summary, per project you can create five networks by default, which is the quota you get from the Google Cloud Platform. So you can organise networks by purpose, like prod, dev, and stage, or by department, and within a network you can create different subnetworks to isolate different kinds of environments.
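Because a custom mode network starts with no firewall rules, and ingress is denied by default, internal traffic has to be allowed explicitly. A sketch with hypothetical names and ranges, mirroring what the default network's default-allow-internal rule does:

```shell
# Allow TCP, UDP, and ICMP between resources whose source falls in
# the custom subnet's range (hypothetical network: my-custom-vpc).
gcloud compute firewall-rules create allow-internal \
    --network=my-custom-vpc \
    --direction=INGRESS \
    --allow=tcp,udp,icmp \
    --source-ranges=10.10.0.0/20
```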
Like dev, stage, and prod, right? Those networks are global in nature; they are not limited to a single region, and resources can be created in any region within them. Resources in the same network can communicate with one another via internal network communication, so they do not need external IP addresses to talk to each other. But if you create resources in different networks altogether, that communication is treated as external traffic, and internet egress charges apply to it; we just need to keep this in mind. If you create resources in different regions within the same network, the traffic stays internal to the Google network, and apart from some minimal cross-region charges it is treated as internal to the network. However, if you connect two resources in different networks, even if they are in the same region, the traffic is treated as Internet traffic.