Practice Exams:

Amazon AWS DevOps Engineer Professional – Incident and Event Response (Domain 5) & HA, Fault T… part 12

  1. On-Premise Strategies with AWS

Okay, so yet another theoretical lecture, but I still want to mention it because I think it’s extremely important. Going into the exam, you need to know about the services AWS offers at a very, very high level, and you just need to hear about them once to understand on-premise strategies with the cloud. So we have the ability to download Amazon Linux 2 as a virtual machine image, and it will be in ISO format. You can load this ISO image into the common software used to create VMs, and that includes VMware, KVM, VirtualBox (which is Oracle VM), or Microsoft Hyper-V. This allows you to run Amazon Linux 2 on your on-premise infrastructure directly using that VM, and that means you can make it work with some user data and so on.

So, quite cool to think about. Then we have a feature called VM Import/Export, and what this allows you to do is migrate your existing VMs and applications into EC2 directly. You could also create, for example, a disaster recovery repository strategy if you had a lot of on-premise VMs and you wanted to back them up into the cloud. And because it’s called Import and Export, you can export the VMs back from EC2 to your on-premise environment if you wanted to. Now, for AWS Application Discovery Service: this is a service that allows you to gather information about your on-premise servers and plan a migration.

This is very high level, but it does give you some server utilization information and dependency mappings, and that can be quite helpful when you want to do a massive migration from on-premise to the cloud. Finally, you can track all that migration using AWS Migration Hub. Then we have AWS Database Migration Service, or DMS, which allows you to replicate databases from on-premise to AWS, AWS to AWS, or AWS to on-premise. This is quite nice, because if you had a MySQL or a PostgreSQL database running on-premise and you wanted to start moving your workload into AWS, you could use DMS to replicate the database in the meantime.

And when you’re ready, you can fully transition to using AWS only. The nice thing about it is that it works with various database technologies, including Oracle, MySQL, DynamoDB, et cetera, and it allows you to do some really fancy use cases, for example migrating data from MySQL into DynamoDB, okay? And then finally you have AWS Server Migration Service, or SMS. This is for incremental replication of on-premise live servers to AWS, so you can replicate the volumes directly into AWS. This is used for a more ongoing type of replication, okay, incremental replication. And this is the last one that I have in mind for on-premise migration to AWS.
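To make the DMS piece just mentioned a bit more concrete, here is a minimal boto3 sketch of creating and starting a replication task. It assumes a replication instance and the source/target endpoints already exist; the ARNs and the task name below are made-up placeholders, so treat this as an illustration rather than a complete migration setup.

```python
import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs -- replace with your replication instance and endpoints
replication_instance_arn = "arn:aws:dms:eu-west-1:123456789012:rep:EXAMPLE"
source_endpoint_arn = "arn:aws:dms:eu-west-1:123456789012:endpoint:SRC-EXAMPLE"
target_endpoint_arn = "arn:aws:dms:eu-west-1:123456789012:endpoint:TGT-EXAMPLE"

# Table mappings are a JSON document; this rule includes every table in every schema
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-everything",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load of the existing data, then ongoing change data capture (CDC)
task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-mysql-to-aws",   # placeholder task name
    SourceEndpointArn=source_endpoint_arn,
    TargetEndpointArn=target_endpoint_arn,
    ReplicationInstanceArn=replication_instance_arn,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```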

So at a high level you may be like, oh, this is a lot of services. Just remember the names at a high level: Amazon Linux 2 on-premise is possible; we can do VM Import/Export between on-premise and EC2; and we have migration services such as Application Discovery Service, AWS Migration Hub, Database Migration Service (DMS), and Server Migration Service (SMS). Just remember those at a high level, that’s it. That way, if you see one of these names in a question, you’re not taken by surprise. Okay? You know that all these things relate to on-premise. Okay, well, that’s it for this short theory lecture. I will see you in the next lecture.

  1. Multi Account – AWS Organizations Overview

So let’s talk about a little-known service, but quite a useful one, called AWS Organizations. It is a global service and it basically allows you to manage multiple AWS accounts from one root account. The main account is called the master account, and you can’t ever change it. All the other accounts under this organization are called member accounts. Now, one member account can only be part of one organization; I think that makes sense. And what you get out of it is consolidated billing across all accounts. So you have one single payment method, defined at the master account level, and the billing from all the other accounts rolls up into the overall organization. And the reason why you would do this is that you get pricing benefits from aggregated usage.

So for example, if you’re using a lot of volume on EC2 or S3, then you get aggregated usage across accounts, so you get more discounts as you use a service more. It doesn’t prevent you from using multiple AWS accounts if you wanted to, but you still get the pricing benefits. And the really cool thing about Organizations is that there is an API available to you, and with this API you can automate AWS account creation. So you could create standalone sandbox accounts for anyone using just one API call. Now let’s talk about OUs and service control policies, or SCPs. Basically, all the accounts you have can be organized into OUs, or organizational units.

And the reason is that you can have these OUs for anything you want. You can say, okay, it’s going to be dev, test, prod, or it could be Finance, HR, IT, and you can have OUs within OUs, so you can nest organizational units within one another. So you can basically organize your organization just the way you want. What you get out of it is this kind of diagram: you have the root account, and then underneath you have some OUs, and under each OU there can be AWS accounts or more OUs. The OU can be whatever you want: here it could be Finance, here it could be HR, here it could be Dev, and here it could be Prod, whatever you want. And then under each one there could be an AWS account for whatever you can think of, really.

So you’re really free to organize your organization any way you want. And then what do you do with these OUs? You can apply something called SCPs, or service control policies. These are extremely important, because they basically allow you to permit or deny access to AWS services for specific accounts or OUs. And why would you do this? Well, you may want to say, okay, in dev I can use any kind of AWS services I want, but in prod I only want people to use the services that I’ve permitted. The SCP has a very similar syntax to IAM, and the idea is that the SCP is actually a filter on IAM. So by setting up an SCP, or service control policy, we’re basically restricting the IAM permissions that people can effectively use.

And using this, we can say, okay, deny * for API Gateway, because we don’t want anyone to use API Gateway, or whatever we want. So when would we use an OU and a service control policy? Well, anytime the exam asks you about creating a sandbox account, that would be a really good way of doing it. If you want to physically separate dev and prod resources into two different accounts, then Organizations is a really great way of doing it. Or if you want, for example, to allow only approved services in one account, whereas in dev you may be a bit more relaxed, well, OUs and service control policies are a great way of doing it. So in the next lecture, we’ll just go ahead and see how they work through some practice.
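Before the hands-on, as a quick aside, here is a hedged boto3 sketch of the account-creation API mentioned earlier, for scripting sandbox accounts. The email and account name are made-up placeholders.

```python
import boto3

org = boto3.client("organizations")

# Kick off creation of a new member account (this is an asynchronous operation)
response = org.create_account(
    Email="sandbox+alice@example.com",   # placeholder email
    AccountName="alice-sandbox",         # placeholder account name
)
request_id = response["CreateAccountStatus"]["Id"]

# Check the creation status until it reaches SUCCEEDED (or FAILED)
status = org.describe_create_account_status(CreateAccountRequestId=request_id)
print(status["CreateAccountStatus"]["State"])
```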

  1. Multi Account – AWS Organizations Hands On

So here in my management console, I’m going to type Organizations. So, Organizations, here we go. We’re going to be able to centrally set up multiple accounts and create an organization. This account right here is going to be my root account, and from my root account, I’m going to create an organization. So let’s go ahead and do this. It tells me this is going to give me a single payer and centralized cost tracking, which is great; I can create and invite accounts, I can apply policy-based controls, and I can just simplify the management of my AWS accounts overall. So this is great, I can create an organization. And here we go, my organization is being created.

As you can see, there’s a verification email sent to my email address and I have to validate it, so I’m going to do this right away. I have now accepted my organization request, and so now I am able to start adding accounts to it. So I’m going to start adding an account, and I can either invite an existing account to join my organization, or create a new account in this organization. Here I’m going to invite an account, and this is an existing account that I already have, under another email address. So let me just enter this right now. I’m entering the email, and then I can add a note, say “welcome to my organization”. Great. Then I click on Invite.

And now I have to accept the invite from my new account. So let me just wait for it. This is my other account; I’m going to click on Invitations, and here I will find the invitation that was sent to me. I will accept it, and now I just have to confirm that I want to join this AWS organization, and that I’m happy for the administrator of the organization to attach policy-based controls to my AWS account. So now my account is going to be controlled by the master account. I click on Confirm, and now my account has joined the organization. Back in my master account, I’m able to see the different accounts that are within my organization.
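For reference, the same invitation could also be sent through the Organizations API instead of the console; a minimal sketch, assuming the invitee’s email is a placeholder:

```python
import boto3

org = boto3.client("organizations")

# Invite an existing account (identified by email) to join this organization
invite = org.invite_account_to_organization(
    Target={"Id": "other-account@example.com", "Type": "EMAIL"},  # placeholder
    Notes="Welcome to my organization",
)
# The handshake stays open until the other account accepts it
# (the invited account accepts with organizations.accept_handshake(HandshakeId=...))
print(invite["Handshake"]["State"])
```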

In the console, I can see Datacumulus Courses and Stephane Maarek, both under the same organization. So that’s the first step. We could also add an account and create one directly: you just specify a full name and an email, and there you go, you’ve created a new account. But we won’t need to do that for now. The next thing we have to do is organize the accounts, and this is where we’re going to define our OUs. The OUs are where we place the accounts under different organizational units. So maybe I want to create a new one, and this one is going to be called dev, and we’ve created it.

Then I’m going to create another OU called prod, and maybe a final organizational unit called Test. Okay, here we go. As you can see, as I’ve added the different OUs, they start showing up in a tree on the left-hand side. And then what I can do is start assigning my accounts to my OUs. So for example, I can say, okay, this account under the root is going to be assigned directly to prod. So I’ll select it, and then I need to move the account, as you can see. So I’ll say Move, and then I’ll move it to prod. Move it, here we go. And then maybe this other account I’m going to move as well, and I want to move it to Test. Okay? And here we go.
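The same OU setup and account moves could also be scripted; a rough boto3 sketch, with a placeholder account id:

```python
import boto3

org = boto3.client("organizations")

# The root of the organization is the parent for top-level OUs
root_id = org.list_roots()["Roots"][0]["Id"]

# Create the dev / prod / Test organizational units
ous = {}
for name in ["dev", "prod", "Test"]:
    ou = org.create_organizational_unit(ParentId=root_id, Name=name)
    ous[name] = ou["OrganizationalUnit"]["Id"]

# Move an account (placeholder 12-digit id) from the root into the prod OU
org.move_account(
    AccountId="111111111111",        # placeholder account id
    SourceParentId=root_id,
    DestinationParentId=ous["prod"],
)
```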

Now, back in the console, if I click on Test, there’s my one account, Stephane Maarek, and if I click on prod, there’s Datacumulus Courses. But within these OUs I can create more OUs. So within prod, I could create a Finance OU and maybe an HR OU. As you can see, I can start organizing all my accounts in a very nice way and place the accounts I create under the relevant OU. And why would we do this? Well, we would do this to apply policies. So we can create a service control policy; by default we have FullAWSAccess, which basically allows access to every operation in every account.

But say I want to create a policy: I have two options. I have either a policy generator, in which I can basically generate a policy on demand (which is what we’ll do in a second), or I can copy an existing SCP. The only one we can copy right now is FullAWSAccess, which sort of makes no sense because it just allows everything. One thing to notice, though, is that when we do copy an existing SCP, we can see that the SCP has the exact same format as IAM: we have an Effect, an Action and a Resource, so all things we know already. Okay, let’s go to the policy generator. The policy name is going to be Deny Athena, and here we’ll say we deny access to Athena.

And why would we do this? I don’t know, maybe we don’t want to allow people to use Athena in some accounts; we’ll just write this one for fun. So the overall effect is going to be Deny, and this is where we say, okay, all the services we specify in here are going to be denied, and the rest is allowed. But we could also write an Allow policy instead, where basically we’re saying, okay, all the services listed here are whitelisted, and anything that is not listed in this Allow statement will be blocked by default. We’re more familiar with denies, but that would be a whitelist. Okay, so we’ll do a deny policy, and as I said, it’s for Athena.

So we’ll select Amazon Athena, and here we can select all the API calls, all the actions, that we want to deny. If we wanted to be really precise, maybe we’d just select one action and say, okay, anything that is StartQueryExecution should be denied. But because we want everything to be denied, we’ll just select everything, and then no one can use Athena if this policy is applied. So I’ll click on Add statement, and here we go: we get Action athena:*, Effect Deny. All right, let’s create the policy. And now we have to apply it. Now, applying a policy is a bit tricky: you have to go to Organize Accounts, and here you’re in the root.

Now you need to enable or disable policy types, so here we’ll enable service control policies; click on Enable. Once this is enabled, we can start attaching policies to accounts or OUs. If we look at Root right now, the service control policy attached to Root is FullAWSAccess. That means that the root account, the master account, has access to everything, and that makes a lot of sense, thank God. But now we can go to any OU and start attaching policies. I don’t want to attach Deny Athena to Root, because if that’s the case, all of these OUs will inherit it. So what I want to do is go to Test, and in Test, I’m going to click on my service control policies.

As we can see, the policy inherited from the root is FullAWSAccess, and on top of it, I can attach Deny Athena to this OU. That means that every account under this OU, including my Stephane Maarek account, will be denied access to Athena. So how do we test this? Well, by logging into the other account. So let’s do it right now. I just logged into my other account, and now I’m going to go and try to use Athena. I’ll go to the Athena service, and here’s my query editor. So, for example, let’s just go ahead and run a query: we’ll say CREATE DATABASE test, and click Run query.

And now the query gives the following error: even the root account (I’m using my root account right now) is not authorized to perform athena:StartQueryExecution on this resource, and there’s an explicit deny. So even though in IAM in this account I’m the root user, with administrator access and all the rights, because I’m part of an organization and the organization applied an SCP, I still get denied. So it’s something you should know, and it’s really important: if you start using Organizations, when you get an access denied exception, you not only have to check IAM, but also whether your account is part of an OU, or part of an organization overall.

There may be a service control policy attached to the account, and you can’t know what it is directly from the child account; you need to look at the root account to see which policy is applied to which accounts. So it’s something that’s really interesting to know and see. As you can see here, I cannot use Athena because there is an SCP on this OU forbidding me from using the Athena service. So I hope that helps, and I hope that makes sense. With Organizations you can do a lot of wonderful things, but be very careful, because it’s very powerful, and I will see you in the next lecture.
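For reference, the whole Deny-Athena flow we just clicked through can also be expressed with the Organizations API; a minimal boto3 sketch, where the Test OU id is a placeholder:

```python
import json
import boto3

org = boto3.client("organizations")

# SCPs must be enabled on the root before they can be attached
# (skip this call if SCPs are already enabled, as it will raise an error)
root_id = org.list_roots()["Roots"][0]["Id"]
org.enable_policy_type(RootId=root_id, PolicyType="SERVICE_CONTROL_POLICY")

# Roughly the same document the console's policy generator produced:
# deny all Athena actions on all resources
deny_athena = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "athena:*",
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyAthena",
    Description="Deny access to Athena",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_athena),
)

# Attach it to the Test OU; every account under that OU inherits the deny
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-exampleid",   # placeholder OU id
)
```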

  1. Multi Account – Services Integration

So we have seen multi-AZ, we have seen multi-region, but there’s also multi-account with AWS, so it never stops. Any cross-account action that you do, as a rule of thumb, will require you to define IAM trust relationships, so that roles can be assumed across accounts and perform actions in other AWS accounts. This is a secure way of doing things from one account to another, and it means there’s never a need to share your IAM credentials with another account. If you need to do something in another account, you would call the AWS Security Token Service, or STS, and assume a role in that account, and then you can do API calls in that account directly. So it’s all secure, no need to share IAM credentials.
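A minimal sketch of that cross-account pattern with boto3, assuming the target account exposes a role (here called CrossAccountRole, a made-up name) whose trust policy allows our account to assume it:

```python
import boto3

sts = boto3.client("sts")

# Assume a role in the other account (placeholder account id and role name);
# this only works if the role's trust policy lists our account as a principal
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/CrossAccountRole",
    RoleSessionName="cross-account-demo",
)["Credentials"]

# Use the temporary credentials to call APIs in the other account --
# no long-lived IAM credentials are ever shared between accounts
s3_other_account = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3_other_account.list_buckets()["Buckets"])
```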

Again, this is really important from a security perspective. We’ve seen CodePipeline: CodePipeline can do cross-region, but it can also do cross-account, and this is not something you can do through the console; you have to use a CloudFormation template for it. And this is quite cool, because if you have CodeDeploy in other accounts, then you can do cross-account invocation of CodeDeploy. For AWS Config, we’ve seen aggregators; they’re multi-region but also multi-account. So say we have a big organization, like the one we just saw, and we want to aggregate all the resources across all the accounts in the organization and make sure they’re all compliant; aggregators would be a really good use case for this.
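As an illustration of the aggregator idea, here is a hedged sketch of creating an organization-wide Config aggregator with boto3; the aggregator name and the role ARN are placeholders, and the role must allow AWS Config to read the organization’s account list:

```python
import boto3

config = boto3.client("config")

# Aggregate Config data from every account in the organization, in all regions
config.put_configuration_aggregator(
    ConfigurationAggregatorName="org-wide-aggregator",   # placeholder name
    OrganizationAggregationSource={
        # Placeholder role: must grant AWS Config permission to list the org's accounts
        "RoleArn": "arn:aws:iam::111111111111:role/ConfigAggregatorRole",
        "AllAwsRegions": True,
    },
)
```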

We have CloudWatch Events. So how can we share a CloudWatch event across multiple accounts? Well, we have to create something called an event bus, and through that event bus we can share events with multiple accounts, and multiple accounts can read from that event bus and get events directly from other accounts. So that’s something to just know at a high level. And then finally, with CloudFormation, we have StackSets, which is something we’ve seen at length: we can create stacks in different regions and across multiple accounts. Okay, finally, a very common use case is: how do I centralize all my logs from multiple accounts and multiple regions? And this is a diagram from a blog post on AWS.

It’s fairly simple. For example, we have an application account and we want to send its logs into a centralized logging account. So in the application account, everything sends logs to CloudWatch, so to a log group. Okay? And in the centralized account, where we want to receive these logs, we would create a CloudWatch Logs destination, which is something you create using the CLI, and we connect that log destination to the log group from the application account. So that log group will send the logs, through a subscription, to the log destination. And then this log destination can be connected to a Kinesis Data Firehose delivery stream.

And that Kinesis Firehose can send the data directly into an S3 bucket. And that S3 bucket would contain all the logs, not just from this account, but from all the accounts we decide to connect to the very same log destination in the centralized logging account. So that’s it, just at a very high level, to make you think about how we can do multi-region, multi-account for CloudWatch Logs as well. And the exam would expect you to be creative or to know this kind of architecture, so try to remember it. Okay, that’s it. I will see you in the next lecture.
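For reference, here is a hedged boto3 sketch of that log-centralization flow, with placeholder ARNs, account ids, and names. The destination and its policy are created with the central logging account’s credentials, while the subscription filter is created in each application account; here the destination targets a Kinesis stream that could in turn feed Firehose and then S3.

```python
import json
import boto3

# --- Run in the centralized logging account ---------------------------------
logs_central = boto3.client("logs", region_name="eu-west-1")

# A logs destination that forwards to a Kinesis stream (placeholder ARNs)
destination = logs_central.put_destination(
    destinationName="central-log-destination",
    targetArn="arn:aws:kinesis:eu-west-1:111111111111:stream/central-logs",
    roleArn="arn:aws:iam::111111111111:role/CWLtoKinesisRole",
)["destination"]

# Allow the application account (placeholder id) to subscribe to this destination
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "222222222222"},
        "Action": "logs:PutSubscriptionFilter",
        "Resource": destination["arn"],
    }],
}
logs_central.put_destination_policy(
    destinationName="central-log-destination",
    accessPolicy=json.dumps(access_policy),
)

# --- Run in the application account ------------------------------------------
logs_app = boto3.client("logs", region_name="eu-west-1")

# Subscribe an existing log group (placeholder name) to the cross-account destination
logs_app.put_subscription_filter(
    logGroupName="/my-app/production",
    filterName="to-central-logging",
    filterPattern="",                    # empty pattern = forward everything
    destinationArn=destination["arn"],
)
```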