Practice Exams:

SAP-C02 Amazon AWS Certified Solutions Architect Professional – New Domain 5 – Continuous Improvement for Existing Solutions part 1

  1. AWS Secrets Manager

Hey everyone and welcome back. In today’s video, we’ll be discussing AWS Secrets Manager. Now, AWS Secrets Manager enables customers to rotate, manage, and retrieve database credentials, API keys, and any other secrets you might have throughout their lifecycle. Generally in organizations, you might have seen a lot of developers store their secrets in plain text, or, if you speak about DevOps teams, they might add the secret as an environment variable. Now, this is not a best practice and it creates a lot of security risk. When you discuss compliance, various standards like PCI DSS do mandate that credentials be rotated at a predefined interval. So a lot of organizations need a service which helps them keep their secrets.

Now, HashiCorp Vault is one of them, which a lot of organizations have been using. But again, you have to manage that service yourself. The great thing about Secrets Manager is that it is a managed service, so you don’t really have to worry about it going down or responding slowly. Now, there are certain great features of Secrets Manager, such as built-in integration for rotating MySQL, PostgreSQL, and Aurora credentials on RDS. This is one of my favorite features, and we’ll look into how exactly it works in the next video once we understand the basics of Secrets Manager. Now, along with its built-in integration for various databases, it also has a versioning capability so that applications do not break when secrets are rotated at the predefined interval that you set in Secrets Manager.

Now, along with that, you can have fine-grained access control over who can access a secret. Let’s say you want to define that a user can access the secrets only if he logs in from a specific corporate network and has multi-factor authentication in place. All of the fine-grained access controls which IAM supports are available for Secrets Manager, along with resource-based policies. So, this is it from the theoretical perspective. Let’s jump into the practical and look into how exactly Secrets Manager works. So this is the AWS Secrets Manager console. Now, one part to remember is that it is a paid service. They do offer a 30-day free trial, after which you will be charged $0.40 per secret per month and $0.05 per 10,000 API calls. So, this is one thing to remember.
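As a sketch of the fine-grained access control described above, an IAM policy along these lines would allow retrieving secrets only when the caller signed in with MFA and is coming from a specific network. The CIDR range and the wildcard resource are placeholder values for illustration, not values from this demo:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" },
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

The `aws:MultiFactorAuthPresent` and `aws:SourceIp` condition keys are standard IAM global condition keys, which is what makes this kind of policy possible for Secrets Manager.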

So make sure that if you’re doing it for a demo, go ahead and delete it after your practical completes. So let’s go ahead and click on Store a new secret. There are three secret types: one is Credentials for RDS database, the second is Credentials for other database, and the third is Other type of secrets. This other type of secrets would be an SSH key or any API keys that you might have. We will discuss Credentials for RDS database in the next video, because it is a great feature and I wanted to dedicate a separate video for it anyway. So here let’s click on Other type of secrets. Here you need to give a key/value pair. For example purposes, what I have done is create a new demo key, and this is the value associated with it. So let’s go ahead and store it.

We’ll store this in Secrets Manager. So for the key here, I’ll put the key name; let’s just separate it with hyphens. And here you need to put the actual value; I’ll copy it and paste the value in. Now, you can go ahead and encrypt this data with a specific KMS key that you might have; I’ll just leave it as the default encryption key. Let’s click on Next. Now you need to give it a secret name; let’s say I’ll name it ssh-key. You can even give it a tag; I’ll just ignore this for now. Let’s click on Next. Now here you have the option for automatic rotation. Let’s say various compliance standards state that after 90 days you should rotate your keys; you have the option of automatic rotation for that. We do not want to rotate our SSH key, so I’ll just leave it disabled for now. Let’s click on Next. It gives you a review.

And it also gives you sample code here. Now, this is great, because let’s say you have stored an API key. If you look into the Python code, basically you’re importing boto3 as well as base64, and then you are giving the secret name as ssh-key and the region as ap-south-1. Basically, what you want is the value associated with the secret called ssh-key. So this is the entire code: if you have a Python application, you can just put it there and you’ll be able to retrieve the secret associated with the ssh-key name. Once you have done that, you can go ahead and store the credentials. So this is the first secret which got created. Now, if you click over here and want to retrieve the secret value again, you need to have the appropriate permission for that.
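A minimal sketch along the lines of that console sample (the secret name, region, and key below are just this video’s demo values; boto3 and configured AWS credentials are assumed):

```python
import base64
import json

def get_secret(secret_name, region_name):
    """Retrieve a secret value, modeled on the console's sample code."""
    import boto3  # deferred import so the JSON helper below works without boto3
    client = boto3.client("secretsmanager", region_name=region_name)
    # VersionStage defaults to AWSCURRENT, so callers always get the
    # latest value even after a rotation.
    response = client.get_secret_value(SecretId=secret_name)
    if "SecretString" in response:
        return response["SecretString"]
    # Binary secrets are returned base64-encoded
    return base64.b64decode(response["SecretBinary"])

def extract_key(secret_string, key):
    """Pull one value out of a JSON key/value secret."""
    return json.loads(secret_string)[key]

# Usage with this demo's (assumed) names:
#   extract_key(get_secret("ssh-key", "ap-south-1"), "demo-key")
```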

So when you click on Retrieve secret value, you will be able to see the key over here. Now, since I am an administrator, I do have access to this; you can just click on the button and you’ll be able to see it. But any other user who does not have permission will not be able to retrieve the secret value here. Now, in case an application wants to retrieve the secret value, you can go ahead and attach an IAM policy to the role associated with the EC2 instance, and then the application will be able to retrieve the secret. So this is the high-level overview of AWS Secrets Manager. I hope this video has been informative for you, and I look forward to seeing you in the next video.
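For the EC2 case mentioned above, the instance role would need a statement roughly like this. The ARN is a placeholder (region, account ID, and name are illustrative); the trailing wildcard accounts for the random suffix Secrets Manager appends to secret ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:ap-south-1:111122223333:secret:ssh-key-*"
    }
  ]
}
```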

  2. RDS Integration with Secrets Manager

Hey everyone, and welcome back. Now, in the earlier video, we discussed the basics of Secrets Manager. Now, one of the great features, in fact, one of my personal favorites for Secrets Manager, is its built-in integration for rotating MySQL, PostgreSQL, and Aurora credentials on RDS. So typically what happens is, let’s say a database team created a database. Now the application team wants to integrate their application with the database. So the DB team would typically give them the DB username and password. What the application team will do is store that username and password within their code, and the code will be able to connect.

Now, again, the problem is that you are hard-coding the values within your application code in plain text, and that is a security risk. So that is the first part. The second part is when you want to rotate the credentials; let’s say every 30 days you want to rotate your database credentials. What would happen is the DB team would have to create new credentials, then they would have to give them to the application team. The application team would change the application code to put in the new credentials and deploy it to the production environment. So there are a lot of hassles here.

So what you can do instead is store the credentials in Secrets Manager and let the running application contact Secrets Manager for the database credentials. And in the case of rotation, Secrets Manager can automatically rotate the credentials for the database, and the application team doesn’t really have to do anything; they can just fetch the latest credentials. So, this is a great feature. Let’s try it out and look into how exactly it works. Now, in order to do that, the first thing that we will be doing is creating our sample RDS database. Let’s click on Create database. I’ll be picking MySQL for our testing, and here you can just select Only enable options eligible for RDS Free Tier usage. So let’s go a bit down.
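The fetch-the-latest-credentials pattern described above can be sketched roughly like this. The field names follow the standard JSON structure Secrets Manager uses for RDS secrets; the connection part assumes boto3 and some MySQL client library (pymysql is used here purely as an example) are available on the application host:

```python
import json

def parse_rds_secret(secret_string):
    """Extract connection details from a Secrets Manager RDS secret.

    RDS secrets are stored as a JSON document containing username,
    password, host, and port (among other fields).
    """
    creds = json.loads(secret_string)
    return {
        "host": creds["host"],
        "port": int(creds["port"]),
        "user": creds["username"],
        "password": creds["password"],
    }

def connect_with_latest_credentials(secret_name, region_name):
    """Fetch the current credentials and open a DB connection."""
    import boto3    # assumed available on the application host
    import pymysql  # any MySQL client library would do
    client = boto3.client("secretsmanager", region_name=region_name)
    secret = client.get_secret_value(SecretId=secret_name)["SecretString"]
    # Always reading the current version means a rotation needs no code change
    return pymysql.connect(**parse_rds_secret(secret))
```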

We are good with t2.micro, and 20 GB of storage is also sufficient. For the DB instance identifier I’ll say kplabs-secret-db, and for the master username I’ll give kpadmin. For the password, I already have a sample password; I’ll just copy and paste it here. Once you have done that, we’ll leave things as default. Publicly accessible is Yes, because we don’t really have any VPN right now. For the database name, let’s say I’ll just put kplabs, and I’ll leave everything else as default. And we can go ahead and click on Create database. Actually, before we do that, just disable the deletion protection; because this is a test database, we do not really want that. Great. So our database instance is being created, so let’s quickly wait for a moment for the instance to be created.

All right, so our database is now created; you see, it is available now. So what we can do is go ahead and verify whether we are actually able to connect to this specific endpoint. Let’s try it out. I’ll do a mysql -h, I’ll specify the user as kpadmin, followed by the password. I’ll copy the password from a text file and paste it over here. Great. So now if you quickly do a show databases, you should see that the kplabs database that we had configured is available over here. Now, the next thing that you need to do is change the security group. Again, this would depend on your setup. Basically, what happens is that whenever you create a new secret, let’s say this time we create a secret based on an RDS database, in the back end it creates a Lambda function.

Now, for that Lambda function, if it is outside of the VPC, you need to provide the security group rules accordingly. So for our demo purposes, I’ll just add 0.0.0.0/0. Now, you should not be doing this in your production environment; this is just for demo purposes, and I just wanted to show you how things work. Great. So once you have done that, let’s go ahead and create an RDS database secret. We’ll select Credentials for RDS database as the secret type. Here we’ll quickly give the username and the password so that the Lambda function which Secrets Manager will create can go ahead and connect to it. For the password, I’ll just copy and paste it here. We’ll leave things as default. Here it says select which RDS database this secret will access.

I’ll just select our RDS database, which is kplabs-secret-db, and then click on Next. Let’s give it a secret name; I’ll say kplabs-rds-secret-manager. Once you have done that, click on Next, and here by default the automatic rotation is disabled. Let’s enable it, and there are two options here: you can either use an existing Lambda function or create a new Lambda function. So we’ll click on Create a new Lambda function; let’s call it kplabs-lambda, and we’ll leave everything else as default. So once you enable the automatic rotation, what will happen is that Secrets Manager will rotate the credentials. Currently, if you see, these are our credentials.

So the first time it gets configured with automatic rotation, Secrets Manager will rotate the credentials and give you the new password right away. That way you’ll be able to verify for sure that the rotation is working perfectly, and your application teams can also verify from their end. So once you have done that, let’s go ahead and click on Next. This will give you an overview page; we’ll go ahead and click on Store. Great. So currently the secret is being configured; it will take around two minutes for it to be configured, so let’s quickly wait for a moment here. Great. So once your secret is ready, you will see that the blue banner changes to green, and now it says that your secret is ready. So let’s go ahead and click on our secret.

And now if you click on Retrieve secret value, you should see that it is giving you a different value altogether. That means that the secret has been rotated. So let’s quickly find out whether this actually works or not. What I’ll do is copy this secret and try to log in to the database through the CLI. So this was our earlier attempt; let’s try it again. I’ll copy and paste it, and now you see I am able to log in. So if you quickly do a show databases, things work as expected. That means Secrets Manager has actually rotated the credentials for our database. Great. So let’s explore a few more things. If you go under the rotation configuration, you should see that there is a Lambda function. This is the Lambda function which is actually responsible for the rotation.

Let’s look into how exactly this looks. So this is our Lambda function. Now, if you look into this function, you have your Lambda over here and it is calling Amazon CloudWatch Logs. Basically, what is happening is that the logs of your Lambda function go to CloudWatch. Now, let’s go a bit down under Network. Currently, it is not inside a VPC. In case your RDS database is inside a private subnet, you need to put your Lambda function inside the VPC so that it will be able to access the RDS instance. Otherwise, if the Lambda function is not inside the VPC and your RDS instance is inside a private subnet and not publicly accessible, then your Lambda function will not work. Now, let’s go to CloudWatch; let me also quickly show you.

So under CloudWatch we’ll go to Logs, and you should see that there is one log group which was created for the Secrets Manager rotation Lambda function, kplabs-lambda. If you click over here, this is the log stream, and you will be able to see when the function ran and whether there was any error or not. So currently here you see the createSecret step; it says it successfully put the secret for this specific ARN. So here everything seems to be working correctly. In case things are not working, you can go ahead and look into the CloudWatch log group for any errors. So this is the high-level overview of Secrets Manager. Let me actually show you one more thing before we conclude. If I click on Store a new secret, there were three options.

We explored the first one and the third one. However, if you click on Credentials for other database, this is the option where you can actually specify the server address, database name, and database port. Let’s say that your database is hosted in a different VPC; you can then choose Credentials for other database and specify the server address, database name, and port. One of the differences you will see between the first option and the second option is that the Lambda function will not get created when you click on Credentials for other database, so you will have to create a Lambda function manually. The second difference is that here you have to specify the server address, database name, and database port yourself. So that’s the high-level overview of Secrets Manager. I hope this has been informative for you, and I look forward to seeing you in the next video.

  3. Data Lifecycle Manager for EBS Snapshots

Hey everyone and welcome back. In today’s video we will be discussing the Data Lifecycle Manager. Now, the Data Lifecycle Manager for EBS snapshots is basically a service which allows us to back up EBS volumes in an automated and regular way. This is a great service because earlier, if you wanted a solution for automated backups of EBS volumes at an interval of time, you would have to write your own custom script in Lambda, and there you would have to define all the logic. But AWS has released this service, which allows us to do it in an automated and stable way. Now, what happens here is that we can define the backup and retention schedule for the EBS snapshots by creating lifecycle policies based on tags.

So now that this service is present, we don’t really have to create a Lambda function to do the automated backups of your EBS volumes. So let me quickly show you how exactly it looks in the AWS console. I’m in my EC2 console, and on the left, under Elastic Block Store, there is the Lifecycle Manager option. So let’s click there, and this is how the Lifecycle Manager looks. Now, here I already have a lifecycle policy which is set, so let’s do one thing: let’s modify it so we can understand this in a better way. The policy description is demo. The target volumes are all the EBS volumes which have the tag where the key is Name and the value is demo. And the schedule is a snapshot every 12 hours.

So what this Lifecycle Manager will do is first list all the EBS volumes which have this specific tag of Name and demo, and then it will create a snapshot every 12 hours. Now, there is also an option for a retention rule, which basically states the number of snapshots that will be retained; for instance, you don’t want to keep snapshots which are a year old. So you can specify over here what the retention rule for your snapshots is; in my case it is ten. And the last option here is to tag created snapshots. What this will basically do is that any snapshot that gets created will have the tag where the key is Name and the value is demo-dlm. So this is how the lifecycle policy looks.
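The settings walked through above roughly correspond to the policy document the DLM API accepts. A hedged sketch, with field names following the create-lifecycle-policy PolicyDetails structure and values mirroring this demo (the schedule name and start time are assumptions):

```json
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{ "Key": "Name", "Value": "demo" }],
  "Schedules": [
    {
      "Name": "DefaultSchedule",
      "CreateRule": { "Interval": 12, "IntervalUnit": "HOURS", "Times": ["09:00"] },
      "RetainRule": { "Count": 10 },
      "CopyTags": false,
      "TagsToAdd": [{ "Key": "Name", "Value": "demo-dlm" }]
    }
  ]
}
```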

Now, if you look into the snapshots: what I had done is create a volume specifically for this demo, and I had created a lifecycle policy which takes a backup of this specific volume. If you look into the tags over here, the tag key is Name and the value is demo, so it becomes easier for the Lifecycle Manager to understand that this is the EBS volume which it needs to snapshot at a 12-hour interval. Now, if you look into the snapshots here, this is one of the snapshots. I had enabled the policy yesterday evening, so in 12 hours one snapshot was created. This is the snapshot, and if you see in the tags, you have the Name tag with the value demo-dlm. This is something that we had already set. Then you have the DLM lifecycle policy ID.

So this is the policy which had actually taken the snapshot of the EBS volume, and this is the schedule name, which is the default schedule. So this is the high-level overview of the Data Lifecycle Manager. Let’s do one thing: I’ll show you from the start how we can do that. Let me just select a different region, because Singapore is where we had already created one. Great. So when you go to the Lifecycle Manager, you will have a page similar to this. The first thing that you need to do is click on Create snapshot lifecycle policy. Here the first thing that you need to do is give a description; let me give it as kplabs-dlm. DLM is basically Data Lifecycle Manager. Now here you have to specify the target volumes with tags.

So this is something which is mandatory. Now, let’s do one thing before that: let’s create a volume. So I’ll create a volume, I’ll give it one GB, and let’s associate it with a tag; I’ll say the key is Name and the value is DLM. I’ll go ahead and create the volume. Perfect. So now that this volume is created, we can go to the Lifecycle Manager and to Create snapshot lifecycle policy. I’ll copy the description in. The next thing is target volumes by tags; just select the tag associated with the key Name and the value DLM. Do remember that it might take around 10 to 15 seconds for this tag to appear within the lifecycle policy. Now, for the schedule name, you can give it any schedule name that you intend to; I’ll just leave it as the default. Next is the create snapshot interval.

There are two options over here: one is every 12 hours and the second is every 24 hours. So depending upon your use case, you can select one of them; let me just put it as 12. Here you can also specify the snapshot start time and the retention rule, i.e., how many snapshots you actually want to retain. Let’s say you select every 24 hours and you want to retain the snapshots for the last one month; you can just specify 30 over here. Actually, let’s just select 24 hours, all right? So then it will retain the snapshots for the last one month. Now, the next important thing is copy tags. Basically, what this will do is that any new snapshot which gets created will copy the tags from the specific volume. You also have the option of adding a tag.

Let’s say I’ll add a tag with the key Name and the value DLM snapshot. You can leave the IAM role as the default one, and the policy status is enabled. We can go ahead and click on Create policy. So this is the policy which is created; if you look into the Lifecycle Manager, it is listed there. Now, after 24 hours, the Data Lifecycle Manager will automatically take the snapshot of the EBS volume, and you should be able to see it in the snapshots screen. This is something that we already saw at the start of the lecture in a different region. So this is the high-level overview of the Data Lifecycle Manager. I hope this video has been informative for you, and I look forward to seeing you in the next video.