Amazon AWS Certified Data Analytics Specialty – Domain 6: Security Part 4
- Policies – Advanced
So this lecture is a little bit special, but I want to introduce you to reading advanced IAM policies and understanding what they mean, because it's possible you will see some advanced policies at the exam. We'll use several links, starting with this one, and in there I first want to draw your attention to the ${aws:username} policy variable. This basically resolves to the name of the current IAM user, and the use case for it is that you can restrict users to specific DynamoDB tables or keys, or to specific bucket subfolders or key prefixes, using that variable. You can also use aws:PrincipalType as an IAM policy variable.
For example, is the principal an account, a user, a federated user, or an assumed role? So you can have very tailored policies. You can also use tags, for example: does the principal's tag equal a given department? If so, yes; if not, no. So you can build some really advanced conditions thanks to all these variables to restrict your IAM policies. Let's have a look at this link; maybe it will make things a bit clearer. I'm just here to introduce you to it, and you can read it in your own time, but basically it's saying: if we have an S3 bucket and we want to allow a user like David to use only the David prefix in our S3 bucket, we would need to create that fixed policy and assign it to the user David.
But if we have another user, for example Mike, then we need to create another policy for Mike, and that becomes really cumbersome when you have hundreds of users, right? So instead you can use policy variables: you would attach the policy to a group, and the S3 prefix would no longer be David, it would be ${aws:username}. This gets replaced at runtime by the name of whichever user the policy applies to. And the really cool thing here is that you can now have one broad policy applicable to everyone, thanks to variables. So it basically introduces you to the dollar-sign-and-curly-braces syntax and how to read it. If you scroll down this page, you can see a lot of things that are possible. It shows how to use this for S3 buckets, or how to define a queue per user: if you want users to be able to list queues but only operate on their own queue, you would put the variable in the resource name, et cetera. You can also, for example, restrict bucket resources so that principals must carry certain tags, or only allow objects tagged with the bucket's department, who knows? You can get very creative with your policies, and that's what I want to teach you here. You can definitely read more of the documentation, but here is a list of the variables you can use: aws:CurrentTime, aws:EpochTime, aws:PrincipalType, aws:SecureTransport, aws:SourceIp, aws:UserAgent, aws:userid, aws:username, and ec2:SourceInstanceARN. On top of that, it gives you the different values these variables take based on the type of principal, so it really educates you.
The reason I'm showing you this is not for you to learn all of it by heart, but so that if you encounter an IAM policy that looks a bit like this one, you're able to say: okay, this is a variable, it will get replaced at runtime by the username of the account, and so the user only has access to their own specific queue.
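To make the pattern concrete, here is a minimal sketch of the kind of policy the documentation describes, written as a Python dict just to build and print the JSON. The bucket name is a placeholder of my own, not one from the linked page:

```python
import json

# Sketch of the policy pattern described above: instead of hard-coding
# a username like "David", the ${aws:username} variable is resolved at
# request time, so one group policy works for every user.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-company-bucket"],
            "Condition": {
                "StringLike": {"s3:prefix": ["${aws:username}/*"]}
            },
        },
        {
            "Sid": "AllowAllS3ActionsInOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::my-company-bucket/${aws:username}/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to a group, this single document gives David access to `David/*`, Mike to `Mike/*`, and so on, with no per-user policies.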
Okay, let's move on to the next part. These variables work great when we're within AWS, but what about federated users? For this there's another link, obviously, and another variable called ${aws:FederatedProvider}, which gets replaced by the identity provider that was used for the user. It could be Cognito or Amazon.com or whatever, and then within each IdP we have specific variables to access the username. For example, if Amazon.com is the identity provider, the variable to access the user ID is ${www.amazon.com:user_id}, and if we use Cognito, it's something else: ${cognito-identity.amazonaws.com:sub}. So you get the idea: based on the identity provider, which could also be SAML or STS, each of these will expose its own variables.
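As a hedged sketch of what such a federated policy can look like, here the resource path is keyed on the Cognito identity, so each federated user only reaches their own folder. The bucket name and folder layout are illustrative assumptions of mine, not taken from the linked page:

```python
import json

# Illustrative policy: each Cognito-federated identity can only touch
# objects under its own ${cognito-identity.amazonaws.com:sub} prefix.
# Bucket name "my-app-bucket" is a placeholder.
federated_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::my-app-bucket/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }
    ],
}

print(json.dumps(federated_policy, indent=2))
```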
Again, not something you need to know by heart, but something you need to see once, so that if you do see it in the exam you're aware of it. Now if we explore the second link, you'll see some example policies using web identity federation. Here in this S3 policy, we allow a GetObject from anyone as long as the resource name is the bucket name combined with the federated provider. So basically we're saying: we have a specific bucket path for each federation provider.
So we can have different providers: Cognito, Amazon.com users, Facebook users, Google users, all that stuff. And as you scroll down on this page, it will show you, for each of these providers, which variables you can access, for SAML and so on, and it describes what each variable's value means. I'll let you explore this in your own time, obviously, but I just want you to know that it's possible, so that if you see such a policy, you'll understand it very clearly. Finally, a little exercise: have a look at the S3 policies at this link. We won't do it together, but please look at them on your own, understand what they mean, and see how advanced they are. Look at DynamoDB as well.
There are some advanced policies you can write on DynamoDB, and you need to be aware of those. And for the exam, remember: RDS IAM policies don't help with in-database security, because RDS is a proprietary technology, and so user authentication and authorization have to happen from within the database. If you see some weird IAM policies with variables applied on RDS to do authorization within RDS, that's usually not the way it's done. Okay? So this is more of a self-exercise: click on these links, take the time to learn about these variables, get educated, and hopefully your brain will be open to these new possibilities. All right, that's it from me, I will see you in the next lecture.
- CloudTrail
So let's talk about CloudTrail. CloudTrail is so important for the exam, but the questions about it are actually pretty easy. CloudTrail is used anytime you want to provide governance, compliance, and audit for your AWS account. Basically, it tracks every API call made to your account, whether it comes from the console, the CLI, an SDK, or whatever. By the way, CloudTrail is enabled by default, so we get a history of all the API calls made within our AWS account, from all these various sources. The really cool thing about it is that we're able to see who did what and when, which is quite helpful. On top of that, CloudTrail logs can be put into CloudWatch Logs, so we can get all the API calls that were made straight into CloudWatch Logs and maybe query them from there. And if a resource is, for example, deleted in AWS, which is a very common exam question, the first place to look is CloudTrail, because we'll be able to see right away who made the Delete API call. Now, CloudTrail only shows the past 90 days of activity.
So you need to store the data somewhere after that; it could be CloudWatch Logs or somewhere else. And by default it only shows the create, modify, or delete events, so events that change things. But you are able to create CloudTrail trails, and these trails are more detailed: you can choose the kinds of events you want to log, and then you can store the trail directly into S3. If you want to analyze it further, maybe you use Amazon Athena on top of it to query these CloudTrail trail logs. Now, trails can be either region-specific or global, so you have these options, and when you store them into S3, they automatically get SSE-S3 encryption applied as they're placed into S3, which is quite neat. Then, if you want to protect these trails for whatever reason, you would use IAM or bucket policies, whatever you want, to protect them.
So that's it for the CloudTrail theory; now let's go have a play with it. CloudTrail is really easy: let's go into the CloudTrail console. CloudTrail, as I said, is activated by default, so you get information about all the events that happen in your account. If you look here, there are recent events: this is everything that happened in my account. When I was dealing with DynamoDB, it logged it; when I was dealing with EC2 network interfaces, it logged it. So you can view all the events in there, and you're able to filter by read-only, by event name, and so on; you can specify a filter and a time range, so you're really able to see a lot of things. For example, we can filter by resource type: Resource Type equals DynamoDB. Let's just type in DynamoDB, that will probably be quicker. Here we go: DynamoDB Table, and it gives me all the events around DynamoDB. If we want, we can filter by event name instead and type in something like DeleteCluster.
Here we go: DeleteCluster. I can see there's a DeleteCluster event that happened here for DynamoDB DAX. So you can really drill down into all the events within your CloudTrail. And if you want more than 90 days of past events, you can create a trail. With a trail, you can basically create event metrics, trigger alerts, run event queries with Athena, create workflows, et cetera. The way we do it is to click on Create Trail; you can also do it from the dashboard here, where you click on Trails and then Create Trail. Perfect. Then you enter a trail name, so I'll call it MyTrail, and say, okay, it applies to all regions. Let's do it for all read/write events, and I'll say all data events too.
Data events are the events coming from S3 and Lambda, if you want them; you could tick here and get all the read/write events for S3. And then finally, choose where you want your CloudTrail events to be stored. So I'll call the bucket "Stefan Cloud Trail"... CloudTrail trails, here we go, and it's going to create a new S3 bucket for me. You can specify a log prefix if you want, and specify parameters such as KMS encryption, log file validation, and an SNS notification anytime a log file is delivered. For now, that's fine; I'll click on Create. It says the bucket already exists, so I'll adjust the name and create again, and now my trail is being created. So basically, anytime I make API calls within my AWS account, they will appear in this S3 bucket. I'm going to do a few API calls and get back to you. Okay, so now I've done a few things in my account: I've deleted and created some DynamoDB tables, all that stuff. And if I go to S3 (it took maybe ten minutes), under my S3 bucket, under AWSLogs,
here's my account ID, and we get two folders. The interesting one is CloudTrail; then I go to us-east-2, then drill down by date, 2019 and so on. I found two files representing all my API calls in there, which I can download. The downloaded file is basically a JSON document, as we'll see in a second. So I've just opened that JSON document, and we can see, for example, that there was an API call in this region: DescribeConfigurationRecorderStatus, et cetera. You can just scroll down in the file and see what you did; for example, for KMS, there's ListAliases, all that stuff.
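As a small sketch of what digging through such a file looks like programmatically, here is some Python over a heavily trimmed, made-up log file. Real CloudTrail records carry many more fields (userIdentity, sourceIPAddress, requestParameters, and so on); the event names below just mirror the calls seen in this demo:

```python
import json

# Illustrative, heavily trimmed CloudTrail log file (a real one has a
# top-level "Records" array of much richer event objects).
log_file = json.dumps({
    "Records": [
        {"eventSource": "config.amazonaws.com",
         "eventName": "DescribeConfigurationRecorderStatus",
         "awsRegion": "us-east-2"},
        {"eventSource": "kms.amazonaws.com",
         "eventName": "ListAliases",
         "awsRegion": "us-east-2"},
        {"eventSource": "dynamodb.amazonaws.com",
         "eventName": "DeleteTable",
         "awsRegion": "us-east-2"},
    ]
})

# The "who deleted this resource?" exam scenario: filter for Delete* events.
records = json.loads(log_file)["Records"]
deletes = [r for r in records if r["eventName"].startswith("Delete")]
for r in deletes:
    print(r["eventSource"], r["eventName"])  # dynamodb.amazonaws.com DeleteTable
```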
So all the API calls happening within your account will be in CloudTrail. If I search for "delete", maybe I'll find a delete event... no, it's not in this file. Anyway, you get the idea. Okay, lastly, we go to Trails and we go to my trail. I'm just going to stop it: I can stop logging, which keeps the previous files accessible but won't log any new events. Or you could go ahead and simply delete the trail altogether: click on Delete and the trail goes away. All right, that's it for CloudTrail. I hope you liked it, and I will see you in the next lecture.
- VPC Endpoints
VPC endpoints are really important to understand from the exam perspective, because they enhance the security of your network within your VPC. Basically, any AWS service is usually accessible on the public network, but using VPC endpoints, you're able to connect to these AWS services over your private network instead. Let me draw a diagram, because I'm sure it will make everything clearer. Here's your VPC, and for example, you have SQS. SQS is a public service: you can access it, for example, from your local computer, so it's accessible on the World Wide Web. But say you want to access it from an EC2 instance in a private subnet. How do we do this? Well, one way would be to give Internet access to that EC2 instance, but that would be a bit tricky, because you'd need to create a public subnet, an Internet gateway, all that stuff. Or you can just create a VPC endpoint, also called PrivateLink, which is a private connection directly into the SQS service, and then connect to SQS through it.
Simple as that: your EC2 instance connects directly to the VPC endpoint, thanks to route tables. So that gives you an idea, as a diagram, of why VPC endpoints are super important and how they work. They scale horizontally and they're redundant; you don't need to manage them, AWS does that for you. And they remove the need to create an Internet gateway, NAT gateway, or NAT instance to access AWS services. You have two types of VPC endpoints. First, gateway endpoints: they provision a gateway target that must be used in a route table, and they only work for S3 and DynamoDB. So remember this: VPC gateway endpoints are for S3 and DynamoDB. Then you have interface endpoints: they provision an ENI, so a private IP address, as an entry point, and most services have an interface VPC endpoint; that's what's also called PrivateLink. So that gives you an idea about VPC endpoints; let's just see how we could set one up in the console. Okay, let's open the VPC service in the console, and within the VPC service, I'm going to go to Endpoints on the left-hand side. In Endpoints, I'm able to create an endpoint, and you can either look for AWS services or AWS Marketplace services, but we'll do AWS services here. And for example, I'll search for SQS.
The search obviously does not work, so you scroll down and you have SQS right here: com.amazonaws, then the region, then sqs. Excellent. As you can see, it's an interface. Here I can go to the bottom and basically create it, clicking Create Endpoint at the end. I won't go over the details; you just need to know the architectural implications. So SQS is an interface. And if you look at the Type column on the right-hand side, only S3 is a gateway (the interface is a bit clunky, I'm sorry). So S3 is a gateway, and if you scroll up, the other gateway is DynamoDB. Again, clunky interface, but there it is: DynamoDB and S3 are the only two gateway-type endpoints. All the rest, CloudFormation, CloudTrail, Config, EC2, ECS, everything is an interface. So that's it; I just wanted to show you this. We won't go ahead and create an interface endpoint, but remember what it means at a high level and how VPC endpoints are used for security. And I will see you in the next lecture.
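One security detail worth knowing alongside this: gateway endpoints can carry an endpoint policy that limits what can be reached through them. Here's a minimal sketch, again just building the JSON in Python, with a placeholder bucket name of my own:

```python
import json

# Illustrative VPC gateway endpoint policy: instances routing through this
# endpoint may only perform reads against one specific S3 bucket.
# "my-private-bucket" is a placeholder, not a real bucket.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-private-bucket",
                "arn:aws:s3:::my-private-bucket/*",
            ],
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```

Combined with the private routing itself, this means traffic never leaves the AWS network and can still be scoped down to exactly the resources you intend.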