Practice Exams:

Google Professional Data Engineer – Ops and Security part 2

  1. Lab: Stackdriver Error Reporting and Debugging

In this lab, we will be using Stackdriver for error reporting and debugging. We will first launch a simple Google App Engine application and then introduce an error into it. We will then use Stackdriver to view the errors and do some debugging to identify the error and eventually fix it. Let us start off though by creating this Google App Engine application. So we bring up the Cloud Shell and create a directory called appengine-hello, which is the name of our application. Once we go into the directory, we download the files for the application from a Cloud Storage bucket which Google has made publicly available. And once the files have been downloaded, we install the app locally, on the instance from which we are running this Cloud Shell, just to test it out.
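The workspace setup described above can be sketched as follows; the directory name follows the lab's naming, and the bucket path is not spelled out in the lab audio, so it is only shown as a commented placeholder.

```shell
# Create the workspace directory for the sample App Engine app.
mkdir -p appengine-hello
# In the lab we then change into the directory and copy the sample app
# files from Google's public bucket, roughly:
#   cd appengine-hello && gsutil cp gs://<public-bucket>/appengine-hello/* .
echo "created $(ls -d appengine-hello)"
```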

So this is the command we use to start this application. And once it is up and running, we can preview it by going to this link and then choosing to preview on port 8080. This will bring up a new browser window and we can see that the app is up and running. Now, let us deploy the application to App Engine, as opposed to our local environment here. So this is the command to execute. And once we do that, at the prompt we just say yes, we do want to continue. At the end of the deployment process, we should be supplied with a link to our application. We can also access the application by using gcloud app browse. So let us just try that first. When we execute that command, it looks like the shell did not detect our browser, but let us just use the URL.

So clicking on that should bring up a new browser tab, and we can confirm now that the app has been deployed successfully. Moving along, let us introduce an error in our application. In the main Python file there is a reference to the webapp2 module. We change that to webapp22, which does not exist, and this should break our application. So we use sed to do a string substitution in this main.py file. Once that is done, let us take a quick look to confirm that the changes have gone through. And over here we see that the references to webapp2 have been replaced with webapp22. Now that we’ve introduced the bug, let us deploy this application. So once again we do the gcloud app deploy.
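The sed substitution above can be reproduced locally; here it runs on a small stand-in main.py with two references to webapp2, mirroring the lab's app.

```shell
# Create a stand-in main.py that references the webapp2 module.
printf 'import webapp2\napp = webapp2.WSGIApplication([])\n' > main.py

# Replace every reference to webapp2 with the non-existent webapp22.
sed -i 's/webapp2/webapp22/g' main.py

# Confirm the change went through.
grep 'webapp22' main.py
```

Reverting the bug later in the lab is the same substitution in the opposite direction.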

Again, we choose to continue, and at the end of the deployment we grab the URL of our application and plug it into a browser. Once we’ve done that, we can see that our introduction of a bug has indeed generated an error over here. Now let us take a look at Stackdriver and see if it has caught the errors. So we navigate in the menu to Error Reporting, and note that it may take a while for Stackdriver to pick up the errors. So let us generate a few additional errors in the meantime. We head back to the Cloud Shell, once again grab the URL for our application, plug it into the browser, and hit refresh a few times so that multiple errors are triggered. Now let us head back to Stackdriver and see if these have been picked up.

And by now Stackdriver has caught some of these errors. So let us take a look at this error in more detail. In this dashboard we see that six instances of the error have been registered. And in the stack trace sample we can take a further detailed look at the error. So we click on this link, which takes us to the specific file where the error was thrown, and the stack trace has identified that reference to webapp22 in the main.py file. We can also take a more detailed look at the logs. So let us bring that up. And over here we see more of the error messages along with the timestamps of when they occurred.

Let us now head back to the shell and remove the bug from our code. So we use sed to replace any reference to webapp22 with webapp2. Following that, we redeploy our application. Once again we hit yes, and once the deployment is complete, we go back and take a look at our app. We just take the URL, plug it into our browser, and it looks like the app is back up and running. Going back into the console for Stackdriver, let us refresh the logs and see if any new error messages have been generated. So we previously had twelve messages, and after the refresh it’s still twelve messages. So we have now come to the end of our lab on error reporting and debugging using Stackdriver.

  1. Cloud Deployment Manager

Here is a question which I’d like you to keep in mind as we go through the contents of this video. What role do templates play in the deployment of resources using GCP’s Deployment Manager? That’s the question, and let’s come back to the answer at the end of the video. Let’s now turn our attention to another important operational aspect of working on the cloud, and that is deployment. GCP offers a pretty powerful Deployment Manager service. Deployment Manager is an infrastructure deployment service which automates the creation and management of GCP resources. It allows you to write flexible templates and config files and use them to create deployments with a variety of different services, including Cloud Storage, Compute Engine, Cloud SQL, and so on.

Now of course, we need to be really clear on what a deployment is. A deployment is simply a collection of resources which are going to be deployed and managed together. This is a logical unit, as it were, which the Google Cloud Deployment Manager will work with. Let’s also understand what the resources here could be in the context of deployments. A resource is a single API resource. API resources are classified as either Google-managed base types or API resources provided by something known as a type provider. More on that in a moment. For instance, a Compute Engine instance is a single resource, a Cloud SQL instance is a single resource, and so on. In order to specify a resource, you also need to provide a type for that resource.

Let’s understand what types are all about. A type can represent either a single API resource, in which case it is called a base type, or a set of resources, known as a composite type. Composite types can contain one or more templates, and these need to be preconfigured to work together. Once we are clear about all of the resources that we are going to include in our single deployment and their types, the next step is to aggregate them in a configuration. A configuration is defined by a file written in YAML syntax. YAML stands for YAML Ain’t Markup Language. This configuration file is a list of all of the resources that we need and their resource properties.

Now, because resources in configurations need to contain all of the information needed to deploy them, we must include at least three components for each resource: the name of the resource, which is a user-defined string for identification; the type of the resource, which, as we discussed, could be either a base or a composite type; and lastly, the properties that need to be passed into that resource. As we can tell, these configuration files are mostly boilerplate. They have a lot of repetitive content which will be included across configurations. In order to automate this and allow for reuse of configuration files, we can make use of templates. Templates are parts of configurations which are abstracted into individual building blocks.
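The three required components can be sketched in a minimal configuration file. The resource name and property values below are hypothetical, and a real configuration would typically include more properties such as disks and network interfaces.

```shell
# Write a minimal Deployment Manager configuration showing the three
# required components per resource: name, type, and properties.
cat > config.yaml <<'EOF'
resources:
- name: my-first-vm          # user-defined string for identification
  type: compute.v1.instance  # a Google-managed base type
  properties:                # parameters passed to the resource
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
EOF
cat config.yaml
```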

These are much more high-level and can then be reused. Template files are written in Python or Jinja2. Because of this abstraction, they are much more flexible than individual configuration files, and they help with easy portability across multiple deployments. But as with most such abstractions, there’s no getting away from the actual interpretation step. Each template must be interpreted, and that interpretation must eventually result in YAML. That YAML will then be aggregated with the contents of the original configuration file. The Google Cloud Deployment Manager will take this configuration, include any additional templates, and then, in a sense, compile these into a manifest.
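As a sketch of this abstraction, here is a hypothetical Jinja template alongside a configuration that imports it and uses it as a resource type; all names here are illustrative.

```shell
# A hypothetical Jinja template abstracting the VM boilerplate.
cat > vm_template.jinja <<'EOF'
resources:
- name: {{ env["name"] }}
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/f1-micro
EOF

# The configuration imports the template and supplies the values that
# will be substituted when the template is interpreted.
cat > templated_config.yaml <<'EOF'
imports:
- path: vm_template.jinja
resources:
- name: templated-vm        # each use of the template yields resources
  type: vm_template.jinja   # the template file itself is the type
  properties:
    zone: us-central1-a     # value substituted into the template
EOF
```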

A manifest is a read-only object which includes all of the original configuration information as well as all of the additional metadata specified in templates. Each time you update a deployment, a new manifest file is created by the Google Cloud Deployment Manager to reflect the state of the deployment. The Deployment Manager also works with another service called the Runtime Configurator. This allows you to define and store a hierarchy of key-value pairs in the cloud. These key-value pairs can then be used to dynamically configure services, communicate service states, send notifications of changes, and share information between multiple tiers of services.

The Runtime Configurator is a powerful tool. It also offers watcher and waiter services. A watcher service will observe a key-value pair contained within the Runtime Configurator and return whenever the variable changes. We can use this functionality to dynamically configure our apps based on changes in the data. Waiter services, on the other hand, are ideal for startup scenarios, where you might want to pause a deployment until a certain number of services is running. This can be accomplished using a waiter resource, which will watch a specific key-value prefix and only return when the number of variables under that prefix reaches a certain number. This is known as a cardinality condition.
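As a hedged sketch, a waiter with a cardinality condition might be declared like this in a Deployment Manager configuration; the project, config, and path names are all hypothetical.

```shell
# Hypothetical waiter resource: succeeds only once 3 variables exist
# under the /status prefix (e.g. one written by each service as it
# finishes starting up).
cat > waiter.yaml <<'EOF'
resources:
- name: startup-waiter
  type: runtimeconfig.v1beta1.waiter
  properties:
    parent: projects/my-project/configs/startup-config
    waiter: startup-waiter
    timeout: 600s
    success:
      cardinality:
        path: /status
        number: 3
EOF
```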

Let’s answer the question we posed at the start of this video. Templates play an important role in deployments using the Deployment Manager service, and as you would expect, the role that they play has to do with the reuse of configuration files. Remember that a deployment basically refers to a group of resources which are going to be put to use, or deployed, together. The usual mechanism for a deployment is a configuration file written in YAML. Templates are pieces of code, written in Python or Jinja2. These are scripts, basically, which can be interpreted, and once interpreted will yield configuration in YAML. In this way, templates offer a reuse mechanism for deployments on GCP.

  1. Lab: Using Deployment Manager

This is a lab on Deployment Manager, which is GCP’s infrastructure deployment service. We will be using some template and configuration files in order to define some resources, specifically a network with subnets and a VM instance. We will then deploy the configuration, which will go on to provision those resources which we have defined. This lab will be performed in the Google Cloud Shell, so let us first bring that up. Now let us set up our workspace by creating a directory for our files. And once we go in, we download a set of sample configuration files. These files are available in a bucket which GCP has provided for us, and they will contain the definitions of various resources. So we have downloaded two zip files over here. Now let us unzip them to see what they contain.

Upon unzipping the first one, we see a whole set of files here: a number of YAML files which contain configurations of resources, and some Jinja files which are templates. Let us unzip the second file, and we see this has fewer files, but these are the ones which we will be using in our lab. But before we go ahead, let us confirm that the Deployment Manager API is enabled in our account. So we navigate in the console to the API library, we look for Deployment Manager, and once we find it, we just click through and enable it. With that done, we head back to the Google Cloud Shell and take a look at our config files. Upon opening the net config YAML file, we see that it contains references to the Jinja files, which are the templates.

It also contains definitions for resources. One of them is a network resource which contains two subnets, and the other is a VM instance. Now that we have taken a look at the YAML file, let us go ahead and make some changes to it. So we bring it up in a text editor, and moving along to the definition of the network, let us change the region to perhaps one we are closest to. Then let us also modify the IP address ranges for the subnets, and change the zone in which our instance is going to be provisioned. With that done, we just save this YAML file. Now let us take a quick look at the template files, starting with the instance template. And over here we can see an entire definition for the instance, including a lot of property references.

Taking a quick look now at the network template, once again we see a big definition. Now that we have taken a look at all the files, let us go ahead and deploy our configuration. So this is the command to execute, and the configuration we’re deploying is our net config YAML. Once we execute this, given it is provisioning a number of resources, expect a wait of at least a few minutes. But once it is complete, let us go ahead in the console and see if we can spot our resources. We first head over to the VPC Networks page, and we can see here that our network has been provisioned along with its two subnets. Now let us proceed and take a look at our VM instance. And over here as well, we can see that our instance has been provisioned.

Let us now take a quick look at our entire deployment from the console. For that, we navigate to the Deployment Manager page. And over here we can see that it is present, along with some information about when it was created and modified. Let us take a more detailed look to see if all our individual resources can be seen. And in fact, yes, all our resources, namely the network, the subnets and the instance, can be viewed here. Finally, let us head back to our Google Cloud Shell and take a look at all the resource types which we can control using Deployment Manager. When we run that command, we can see that the list is rather vast. So this is just to illustrate that Deployment Manager is rather powerful and we should be using it whenever we can. Okay, with that, we come to the end of this lab. Bye.

  1. Lab: Deployment Manager and Stackdriver

This lab is a simple introduction to both Deployment Manager and Stackdriver. What we will be doing is first deploying an instance using Deployment Manager. We will then update that instance, also using Deployment Manager. And finally we will view the load on that instance by using Google Stackdriver. To begin though, let us go ahead and enable some of the APIs we will need, which GCP disables by default. First, let us navigate to APIs and Services, and in the library look for the Cloud Deployment Manager API. Once we find it, we go ahead and enable it. This might take a few seconds, and once it is ready, let us go back to the library. The next API we will enable is the Cloud Runtime Configuration API.

So once again we just go into it and hit enable. And once that is ready, we finally go back to the library, look for the Stackdriver Monitoring API, and go ahead and enable this one as well. With all our APIs enabled, we are now ready to create our first deployment using Deployment Manager. To do that, let us first navigate to the Google Cloud Shell. Once we bring it up, we first set an environment variable for our zone. So I’m going to create this variable called MY_ZONE and I’m going to set it to an asia-south1 zone, but you may choose any other zone which you prefer. Once that is complete, we are ready to create our deployment YAML file. I’m going to call this mydeploy.yaml, and I have a sample YAML file downloaded already.

So I’m just going to copy and paste the contents here, but you may download the file from the location listed on the screen. What this YAML file specifies is that a VM instance will be provisioned, called my-vm. It will be on your default network, and it will run a startup script in which the apt-get update command is executed. Once you have saved the file, there are a couple of values which we will need to substitute. The first one is that we will need to specify the correct project ID. So we run this command, which will replace the text PROJECT_ID in the file with the actual value of your own project, the one from which you brought up this Cloud Shell. Once that is done, we now need to set the zone.

So again we run this command, and it will replace the text ZONE with the value of the zone in your environment variable. With that done, we are now ready to create our first deployment. This is the command to execute, where we specify a name for our deployment and the YAML file which will be used. Once it is executed, we should expect our VM instance to be up. So let us just navigate to the VM Instances page in the console, and we see that a new instance called my-vm has indeed been provisioned. Let us now go into the details for this VM and confirm that the startup script was indeed executed as we had specified in the YAML file. We should see an apt-get update performed, and we see here that it was configured exactly according to the YAML file.
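The file creation and placeholder substitution described above can be sketched as follows. The YAML here is a simplified stand-in for the lab's sample file, and the project ID value is hypothetical; in the lab it comes from the active Cloud Shell project.

```shell
# Simplified stand-in for the lab's deployment YAML, with the
# PROJECT_ID and ZONE placeholders still in place.
cat > mydeploy.yaml <<'EOF'
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: ZONE
    machineType: projects/PROJECT_ID/zones/ZONE/machineTypes/n1-standard-1
    metadata:
      items:
      - key: startup-script
        value: "apt-get update"
EOF

# Substitute the placeholders with real values.
MY_ZONE=asia-south1-a
sed -i "s/PROJECT_ID/my-sample-project/g" mydeploy.yaml  # hypothetical project
sed -i "s/ZONE/$MY_ZONE/g" mydeploy.yaml

grep "$MY_ZONE" mydeploy.yaml
```

The deployment itself would then be created with the gcloud deployment-manager command shown in the lab, passing this YAML file as the config.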

So now let us go ahead and make a change to our YAML file to run one more command at startup. We navigate back to our shell, and within our YAML file we add one more command, to install nginx. So we navigate to the startup script section in this file and we say that, in addition to performing an apt-get update when the instance is provisioned, we also want to install nginx-light. With that complete, let us save this file. And now let us update our deployment. This is the command which we execute: we just run an update and we specify the same YAML file. Once the update has run, we navigate back to the console and check in the VM instance details that the startup script has indeed been updated.

So once we head back, if we just navigate here, we see that the new line to install nginx is indeed there. So we created a new deployment and then successfully updated it using Deployment Manager. With that, we conclude the Deployment Manager section of our lab. Let us now move along to Stackdriver and enable monitoring of the instance which we just provisioned. For that, we SSH into the instance and install the Stackdriver agent. We first use curl to download the Stackdriver installation script, and once we have retrieved it, we execute the script in order to install the agent. Once the installation is complete, let us execute this command in order to put a massive load on our CPU.

What this command effectively does is generate a continuous stream of random data and then force the operating system to compress it, and this should put a huge load on our CPU. Now that we have given our instance a lot of work to do, let us navigate back to the console and set up monitoring. We go into the menu and click on Monitoring, and by default Stackdriver is not enabled, so we need to create a new Stackdriver account for this project. We just hit continue and confirm that we want to create the account for this current project of ours. Once the account is ready, we will be presented with the option of adding more projects to this Stackdriver account, but we just skip that for now.
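The load generator described above can be reproduced as below; the lab's version streams from /dev/urandom indefinitely and is killed manually at the end, so this sketch bounds the input at 20 MB so that the command terminates on its own.

```shell
# Generate random data and force the OS to compress it, driving up CPU.
# Bounded to 20 MB here; the lab omits the count and runs until killed.
dd if=/dev/urandom bs=1M count=20 2>/dev/null | gzip -9 > /dev/null
echo "load burst complete"
```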

Next we will be given the option of linking an AWS account to the Stackdriver account we just created, but we skip that as well. For the next step, we are presented with the steps needed to install Stackdriver agents on the instances which we wish to monitor. But again, we have already done that, so let us just continue. I do not want any emails to be sent to me right now, so I’m going to pick no email reports. And with that, we just wait for Stackdriver to fetch all the information it needs on our resources. When that is done, we are ready to launch monitoring.

So let us just go and click. Before we enter the dashboard, we are asked whether we want to continue with the trial for Stackdriver, which is not free. So we just choose to continue with the trial. In the dashboard, let us just navigate, and over here we see the CPU usage on the VM instance. We can see that the CPU usage suddenly spiked up at one point, presumably when we put the load on it. So with that, we have now seen how Stackdriver can be used to monitor our instances. Now let us go back to our terminal for the instance and kill the process which is generating all that CPU load. And that concludes our lab on both Deployment Manager and Stackdriver.