Managing EC2 at Scale - Systems Manager (SSM) & OpsWorks
4. AWS Tags & SSM Resource Groups
So now let's take a look at tags and resource groups. Tags, as we know them, are key-value pairs that we can have on many AWS resources, including EC2 but also many other resources. Tag naming is free-form, but common tags are Name, Environment, Team, and so on. We use tags not only for resource grouping, as we'll see in this lecture, but also for automation, security, and cost allocation.
And the general rule of thumb is that it's better to have too many tags than too few. With these tags, what we can do is leverage them to create resource groups: we group resources together if they share the same tags. They allow us, for example, to group applications together, to group different layers of the same application stack, or to differentiate between a production environment and a development environment. So in this example, I have three EC2 instances. Two of them have the tag Environment=dev, okay?
And one of them has the tag Environment=prod. So what I can do is create a resource group for the filter Environment equals dev, and this will create a logical group of my first two instances. This is something you do on a regional level, okay? It's a regional service, and it works not just with EC2 instances, of course; it works with EC2, S3, DynamoDB, Lambda, and so on. Okay? So let's get going with tagging our resources. For the first instance, I'm going to manage tags and add a tag right now: the Name is 'my dev instance', and then I'm going to add a tag where the Environment is going to be dev, and the Team is going to be finance. So I've tagged my first instance, and now the name is shown here as well. The second instance will also have tags. So I will manage the tags, and the Name is going to be 'my prod instance'. Now, for the Environment, as you may have guessed, it's going to be prod, and for the Team, it's still going to be finance.
Finally, I'll tag the last instance as well. The Name is going to be 'my other dev instance', the Environment tag will be dev, and for the Team, I'll use operations. Okay, we're good to go. So now we have our three instances, and from there, I'm going to be able to create resource groups. So back in Systems Manager, I'm going to go and find the resource groups, okay?
And then in the search bar, I'm going to type 'resource groups' and find the Resource Groups console. I'm going to create a resource group, and it's going to be tag-based, okay? Then we need to select the resources. We can use all resource types to search across our account, or we can just look for EC2 instances. So here we want the tag Environment to be equal to dev, and we add this. If we do this, we can preview the resource group resources, and we find that two EC2 instances are in my resource group: my dev instance and my other dev instance. Perfect.
So I will call this group 'dev' and create it. Similarly, we can create a new resource group, and this time it will be for EC2 instances whose Environment tag equals prod. We can preview the resource group, and there is one resource within it; I'll call this group 'prod'. Great. And next, I can create one last group: once again, EC2 instances, and this time the Team will be finance, and I'll call this group 'finance'.
And again, we can look at the resources, and we find two EC2 instances: my prod instance and my dev instance belong to this finance group. So this is good. We create the group, and here we go: we have three resource groups. Now, the reason for creating these resource groups is that we want to be able to run SSM directly at the group level, so that we can, for example, patch the operating system, perform some actions, and so on. So this is why it's a prerequisite. Once you've completed your resource groups, you're ready for the next lectures.
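As an aside, the same tagging and grouping can be scripted with the CLI. Here is a rough sketch, assuming placeholder instance IDs; the resource query follows the tag-filter format used by the Resource Groups service:

```bash
# Tag an instance (the instance ID is a placeholder)
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Name,Value="my dev instance" Key=Environment,Value=dev Key=Team,Value=finance

# Create a tag-based resource group for EC2 instances with Environment=dev
aws resource-groups create-group \
  --name dev \
  --resource-query '{
    "Type": "TAG_FILTERS_1_0",
    "Query": "{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Environment\",\"Values\":[\"dev\"]}]}"
  }'
```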
5. SSM Documents & SSM Run Command
Let us now discuss the heart of SSM: documents. Documents can be written in either JSON or YAML; in them, you define parameters and the actions the document performs, and the document is then executed by a specific SSM feature. Many documents already exist in the service, and we can obviously leverage them to go faster in what we do.
So this is what a document may look like: as you can see, there's a description, a bunch of parameters, and then some steps, and each step has an action in which, for example, you can run a command. This is just a simple example, and if you start using SSM a lot, you will write your own SSM documents. Okay, they sort of look like the idea behind CloudFormation, but this is at the centre of SSM. We have documents, and they're going to be how we describe what SSM does. These documents can be simply used to run commands, and we'll see this in this lecture. However, they can also be applied to other SSM features such as State Manager, Patch Manager, and Automation, and documents can even retrieve data from the SSM Parameter Store to provide the modularity and dynamism you require in the way these documents behave.
Okay, so let's have a look at documents right now. If I scroll down in Systems Manager, at the very bottom, under Shared Resources, we have Documents. So we have documents owned by Amazon, documents owned by me (of course there are none, because I haven't created one yet), and documents shared with me, since you can share documents with other people; hence the three tabs. In this example, I'm going to show you documents owned by Amazon, and one of them, for example, is called AWS-ApplyPatchBaseline. So I click on it, and I can see that this is used for scanning or installing patches from a patch baseline. We'll see what that means in a few lectures. The platform is Windows. We can see when it was created, who the owner is, and what the latest version is. So this document looks good. We can look at the content of the document; this one is written in JSON. And as you can see, there are one or two parameters, for operation and snapshot ID, and then there is a bunch of runtime configuration as well as a few commands that are going to run. It's complicated, but not for us to maintain, because this is maintained by AWS.
We can examine the various versions. Currently, we cannot create a new document version because we don't own it, so it will always be version one, but your own documents can be versioned. And we can go over the various parameters in detail: for example, this one is on document version one, and it has two parameters, operation and snapshot ID, which are relevant to this document. Okay, so beyond the documents owned by AWS, we could create our own documents: a command or session document, or an automation document, and we'll see command documents in this lecture. But I wanted to show you this at a high level. So the first way we're going to apply our documents is by using the Run Command feature of SSM. With Run Command, we'll either execute an entire document, which can be a script, or simply run a single command across a fleet of EC2 instances. And for this, we can make use of the resource groups that we've already established. The Run Command has features for rate control and error control. So imagine you're applying a command to 1,000 instances, and it will take them down for a minute or two: then you want to make sure you do this progressively and that, in case you have any errors, you are able to stop running the commands in your fleet. It is fully integrated with IAM and CloudTrail, so you can see who runs commands.
There is no need for SSH: the SSM agent itself will be running the commands, so Systems Manager does not need SSH access to your instances to run a command, which is quite magical and cool. The command output can be shown in the console, but it can also be sent to your S3 buckets or to CloudWatch Logs. And to know the status of your Run Command, you can look in the console, obviously, but you can also send notifications to SNS to get information about progress, failures, and so on. Finally, EventBridge can be used to invoke Run Command as part of automations. So let's take an example. The Run Command can be run across a fleet of EC2 instances. The output of the command itself can be sent for analysis into CloudWatch Logs or Amazon S3, notifications go into SNS, and events in EventBridge could have a rule to trigger the Run Command itself. So what we want to do is install an HTTP server on my three instances, okay? But first, to verify that it will work, we need to open up the security group.
So let's go into the security group rules, and under the inbound rules, I'm going to add a rule for HTTP on port 80 coming from anywhere. This is allowing us to access our instances over HTTP, and we can make sure that our instances do not currently run a web server by taking, for example, one of these IPs. So we'll copy this IP and then paste it here, and as we can see, we are not getting anything. It shouldn't hang like this; it should just give me an error, but that is just a weird behaviour of Firefox. So I'm going to copy this and go into Chrome, and as you can see, if I go to my IP, I get a 'site can't be reached' error. That means that while port 80 is open, no HTTP server is currently running on my EC2 instance. Okay, so that's cool. We want to now install a web server on these instances. So let's go into Systems Manager, and we're going to run a command. But first, we need to create our own document. The document type is going to be Command or Session, I will call this one 'InstallApache', and the target type is going to be an EC2 instance. Now the document type is going to be a command document, and we can specify it either in JSON or YAML.
So we'll use YAML because I believe it's easier to read, and to make things simple, you can simply get the code from the course's SSM directory and copy this entire document to install Apache. Okay, return here, paste it, and we're good to go. This YAML document is easy to read: we have one parameter, a message, which by default is 'Hello World', and this is the welcome message we want our instances' web server to serve. And then the main step is to run a shell script that has several run commands in it. So we update the instance, we install httpd, we start httpd, and we enable it (this is in case of restarts), and then we echo the message and the hostname into this file right here, index.html. The message under the double curly brackets is coming from the Message parameter above. Okay, so I will go and create this document. This document is now owned by me, and it's called InstallApache.
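For reference, here is a sketch of that command document, reconstructed from the description above; the exact file in the course may differ slightly:

```yaml
schemaVersion: "2.2"
description: Install and configure an Apache (httpd) web server
parameters:
  Message:
    type: String
    description: Welcome message served by the instance
    default: Hello World
mainSteps:
  - action: aws:runShellScript
    name: configureApache
    inputs:
      runCommand:
        # The SSM agent runs these as root on the instance
        - yum update -y
        - yum install -y httpd
        - systemctl start httpd
        - systemctl enable httpd   # survive restarts
        - echo "{{ Message }} from $(hostname -f)" > /var/www/html/index.html
```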
And now we need to go into Run Command and run it. So in the Run Command section, we're going to run a command, and we need to find the document itself. We're going to filter on documents owned by me and find InstallApache. I will select this one, version one, and then we can customise the message: for example, we can say 'custom hello world'. For the targets, we can either specify instance tags, choose instances manually, or choose a resource group. And, as you can see, the previous resource groups are still available in this console. I will choose my instances manually, and I will actually choose all three of my instances right here, because I want to install Apache on all three of them. Okay, next for other parameters: we can have a timeout for the commands. So if the commands don't finish within 600 seconds, or ten minutes, then the command fails. This is a much larger timeout than the one we need for this command.
This is fine. And then rate control. The concurrency is pretty cool: do we want to run the commands on 50 targets at a time, or maybe one target at a time? You can do it a fixed number at a time, or a percentage, say 33%, at a time. For this example, I will choose one target at a time, and then the error threshold. A threshold of one error means that after one error, this will stop the entire task. Okay, but maybe you know that some of these commands will have errors, and so maybe you're saying that as long as 5% of the instances don't error out, this is fine, please keep going; but if you go above this 5% error threshold, stop running the command. Right now, we'll keep the error threshold at zero because we don't want any errors. Now, for the output options, I'll disable sending the output to S3 buckets, but we can send logs to CloudWatch Logs, and, for example, I will name my log group 'run-command-output'. There are SNS notifications if you want to get notified about the status of this Run Command, and, which is nice, we can get the equivalent AWS CLI command if we wanted to run this directly from the CLI.
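That CLI equivalent would look roughly like this; a sketch, assuming placeholder instance IDs, with the document and log group names matching the ones above:

```bash
aws ssm send-command \
  --document-name "InstallApache" --document-version "1" \
  --targets "Key=instanceids,Values=i-0aaa1111,i-0bbb2222,i-0ccc3333" \
  --parameters 'Message=["custom hello world"]' \
  --timeout-seconds 600 \
  --max-concurrency "1" --max-errors "0" \
  --cloud-watch-output-config "CloudWatchOutputEnabled=true,CloudWatchLogGroupName=run-command-output"
```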
So I click on 'Run', and as we can see, we have three targets, one of which is in progress while the other two are pending. So let me refresh this, and as we can see now, because we did it one at a time, it will go one in progress, then successful, then the next one, and then the last one. So I'm able to refresh here, and here we go: the first two were a success. We can see the start time and the end time, and for each of the targets, we can click on 'View output'. The output is literally the output of the command itself; it shows you a maximum of 480 characters in the console. So here is all the output that is available from it, and as we can see at the very end, it says that httpd was installed. Very nice. Okay, and then: complete. So the command is complete, and you can click through to CloudWatch Logs to view the logs directly from your command in the CloudWatch log group. So this is my run-command-output log group, and we can see that we have many different streams available here, for standard output and standard error, in case any errors occurred on our instances. If you go to standard out, as you can see, we have the five commands, what happened, and the fact that they did install and enable httpd. So this is great. We're good to go.
So, going back to our Run Command, we can look at the command history. This one was a success, and yes, all three instances had the command run on them. And if I go back now to my EC2 console and refresh, they're still here, obviously. But now if I take this IP from before and paste it, then we're going to get our customised 'custom hello world' message from this host right here. And if I go to another instance, this one, we'll get the custom hello world too. So this is the custom message that I passed to my document, and the hostname is different on this one: that means the command was executed on each of my EC2 instances separately. So this is pretty cool, because here we were able to run a command across three EC2 instances. But remember, these EC2 instances do not have the SSH port open, okay? What happens is that the SSM agents run the commands for us, which is super helpful because we don't compromise on security. So that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
6. SSM Automations
Let us now discuss SSM Automations. Automations are going to help you simplify common maintenance and deployment tasks for your EC2 instances or other AWS resources. For example, using an automation, you can do things such as restart an instance, create an AMI, or take EBS snapshots. The idea is that automations are higher level, okay? They act from outside your EC2 instances, whereas the Run Command from before runs from inside your EC2 instances. An automation runbook is the name for SSM documents of the Automation type, okay? That's why we commonly refer to them as 'runbooks'. Runbooks take actions on your EC2 instances or other resources. You can also create your own runbook or use one of the runbooks predefined by AWS. So here's an example:
SSM Automation uses the runbooks (our automation documents), and we can execute them on EC2 instances or specific resources, such as EBS volumes for creating snapshots, AMIs to create AMIs, or RDS for creating snapshots, and so on. Now, how do you trigger an SSM automation? Well, you can do it manually using the console, the CLI, or the SDK. You can also automate it by using EventBridge, with a rule whose target is the SSM automation; on a schedule, using a maintenance window; or directly as a rule remediation in AWS Config, whenever Config discovers that a resource is not compliant with a rule. So all these options are shown right here: the console, the SDK, maintenance windows, EventBridge, and Config remediations can all execute the automation called AWS-RestartEC2Instance within the SSM Automation service. Okay? So, in this lecture, we'll look at how automations work.
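As a quick aside before the console walkthrough, triggering that runbook from the CLI looks roughly like this; the instance ID is a placeholder:

```bash
aws ssm start-automation-execution \
  --document-name "AWS-RestartEC2Instance" \
  --parameters "InstanceId=i-0123456789abcdef0"
```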
So I'm going to Automation on the left side, under Change Management, and I'm going to execute an automation. I need to choose a document. Again, we can write our own document or choose one owned by Amazon, okay? And there are document categories: for example, patching, security, instance management, common tasks for EC2 and EBS, data backups, AMI management, self-service support workflows, cost management, and so on. In this example, we can look at instance management and what's available. We can directly attach an EBS volume from an automation, we can attach an IAM role to an instance, we can, for example, have an instance enter or exit ASG standby; so we can do a lot of different things, okay? Detach a volume, and so on. So what I'm going to do is look for the automation named AWS-RestartEC2Instance. In this one, we have the option to restart EC2 instances, and what it's going to do is just restart our EC2 instances; and as you can see, there is a different one if we want to have an approval step as part of our automation.
So let's do that one, actually. We're going to choose the document and scroll down; here are the document details. Which version do we want? We want the latest version. This is great. And after the description, click on 'Next'. Now we need to choose how we want this document to be executed. We can do a simple execution to execute on all targets at once, or a rate control to execute on them progressively. We can also do multiple accounts and multiple regions. Alternatively, a manual execution can be used to perform a step-by-step runbook mode. So I would choose a rate control, okay, to restart my EC2 instances with approval, and the target parameter is going to be, for example, instance IDs. And then we can choose targets based on a resource group, just like before; or we can, for example, target based on tags, parameter values, or just all instances. So in this example, let's choose a resource group to change things up, and I want to operate on the dev group. That means I want to restart all my development instances.
So we need to provide instance IDs, but as you can see, because we've specified a resource group, the instance IDs are going to be filled in. Then comes the approval step: an IAM user has to approve the automation action, and I think that it's not going to be easy to do here. So I'm just going to undo this, because I'm not using an IAM user and I want to keep things as simple as possible. So if we look again for the plain AWS-RestartEC2Instance document and use that one, we're good to go. Next. So again, the rate control is going to be set on the instance IDs for the dev group. We're also offered a role that the automation will assume to perform the automation on your behalf; this is if you want an automation role that is different from the one you are currently using right now to launch this automation. So I will not specify an assume role, but you could specify one if you wanted to. Now for the rate control: it's going to be one target at a time, and if there's one error, then please stop this automation. Okay, let's execute it. And now the execution has been initiated for my restart of the EC2 instances. So we can take a look at the steps; let's refresh this page here. Currently one step is being executed, okay, which is to execute this automation.
So I can look at what is done by clicking on the step itself. The execution ID is right here, and two steps were executed. Number one is to stop the instance, which will change the instance state, and this is in progress. So, if I go into my EC2 management console, you can see that this one is being stopped, which was the first step in my automation. And then there's a second step that is going to happen, which is to start my instance. So the first step was a success, and now the instance that was stopped is being started. If I go back into my management console and refresh this, we're in a pending state, which means that my instance is being started. So, using this automation, we can simply restart our entire fleet of EC2 instances without enabling SSH access. And number two, we don't have to code a script for it, because there is an automation available to us; and if we had to write a script, then, for example, embedding the different steps, doing rate control, handling errors, and looking at logs would be extremely complicated.
So this really shows the power of automations within SSM. And as you can see, two executions are being done: one on one instance and then one on the other instance, thanks to the rate control. So now the three instances are running, and one of them will soon go down and be stopped, and so on; you get the idea. This is the power of automations, and hopefully it opens your eyes to how you can use them. If you have some time, I strongly advise you to go through the documents, okay? Just look at documents of the Automation type and have a look at what is offered by AWS. They are numerous, but they can help you imagine and understand how you can better leverage automations within your infrastructure. So that's it for this lecture. Make sure that the automation was successful at the end (so go back in here and check that it was good), but I hope you liked it, and I will see you in the next lecture.
7. [SAA/DVA] SSM Parameter Store Overview
Okay, now let's talk about another service that I find really amazing in AWS and that I used all the time while I was doing consulting on AWS, which is called the SSM Parameter Store. This is a place to securely store your configuration and secrets in AWS, with optional encryption with KMS. So you can store your secrets and have them KMS-encrypted directly within the SSM Parameter Store. It is a serverless service: it's scalable, durable, and the SDK is simple to use. So I would use it anytime you need to store configuration or secrets. You also get versioning for your configuration and secrets.
All the security for your configuration management is done using IAM policies, you can get notifications with CloudWatch Events, and there's integration with CloudFormation to pull parameters, so it's a very complete service, as we'll see in the hands-on. At its core, we have applications, and they request parameters stored in the Parameter Store. It could be a plaintext configuration, in which case, if we request that configuration, the Parameter Store will check the IAM permissions to make sure that we can get it, and then return it to us. Alternatively, we can request encrypted configurations, in which case the Parameter Store will also check IAM, but on top of that check the KMS permissions and, if allowed, call the Decrypt API from the KMS service to give us our decrypted secrets. So here is a way to organise the parameters in your Parameter Store.
You can create a hierarchy: for example, your department, underneath it your app, then your dev environment, and then the name of your configuration, for example the DB URL, and maybe another secret, the DB password. Alongside the dev environment, you would also have the production environment with the same DB URL and DB password configuration and secrets. Then, if you have another application, you could create another application in the hierarchy, another department, and so on. So it's sort of like a folder structure, like a file system. You're also able to use the Parameter Store to reference secrets from Secrets Manager, as we'll see, and you're able to reference public parameters directly from AWS; for example, one allows you to retrieve the latest AMI ID for Amazon Linux 2 from AWS, which is very handy. So if you have a Lambda function and it wants to access your dev parameters, then you would set an environment variable, and then your Lambda function would get your parameters, or get them by path, and retrieve them.
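To make that concrete, here is a minimal sketch of the dev case with boto3; the path, environment variable name, and handler are illustrative, not from the course:

```python
import os
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # ENV is set per function: "dev" for the dev Lambda, "prod" for the prod one
    env = os.environ["ENV"]
    resp = ssm.get_parameters_by_path(
        Path=f"/my-app/{env}/",
        Recursive=True,
        WithDecryption=True,  # requires KMS decrypt permission for SecureStrings
    )
    # Build a simple name -> value dict (pagination omitted for brevity)
    return {p["Name"]: p["Value"] for p in resp["Parameters"]}
```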
And if we have a prod Lambda function with another environment variable, it will automatically retrieve the prod values. And so this is how we could use, for example, Lambda and the Parameter Store. Now, we have two tiers of parameters in the Parameter Store: the standard tier and the advanced tier. The standard tier is free and the advanced tier is paid. For the standard tier, you have up to 10,000 parameters per account, which is a large number of parameters, the maximum size is 4 KB, and parameter policies, as we'll see, are unavailable. If you're using the advanced tier, then you get up to 100,000 parameters, they can be up to 8 KB, you get parameter policies, and you do have to pay for your parameters. I don't think the exam is going to ask you to choose between standard and advanced tiers, but it's good to know as you go into the console. Okay, so what are these parameter policies? They're only for advanced parameters, and they allow you, for example, to assign a TTL (time to live) to a parameter, which effectively creates an expiration date.
And the idea is to force updating or deleting sensitive data in your Parameter Store, such as passwords, and you can assign multiple policies at once. So here are three examples. The first one is an expiration policy to delete a parameter: in this example, I'm saying, 'hey, my parameter expires in December 2020.' Then we have an expiration notification: 'hey, send me a notification through CloudWatch Events 15 days before the expiration happens.' And here is a 'no change' notification: this says that if my parameter hasn't changed in 20 days, then send me a notification through CloudWatch Events. So these are the kinds of policies you can attach to your advanced parameters to trigger some sort of automation and to force yourself to change them quite often. So that's it for the Parameter Store; I hope you liked it. And in the next lecture, we'll get some practise to make it a bit more real.
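For reference, the three policies just described look roughly like this in JSON (the dates and durations are the ones from the example):

```json
[
  { "Type": "Expiration", "Version": "1.0",
    "Attributes": { "Timestamp": "2020-12-01T00:00:00.000Z" } },
  { "Type": "ExpirationNotification", "Version": "1.0",
    "Attributes": { "Before": "15", "Unit": "Days" } },
  { "Type": "NoChangeNotification", "Version": "1.0",
    "Attributes": { "After": "20", "Unit": "Days" } }
]
```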
8. [SAA/DVA] SSM Parameter Store Hands On (CLI)
So let's use the Systems Manager Parameter Store. For this, I'm just going to type 'parameter', and it takes me directly to Systems Manager. You could also type 'Systems Manager' to find the UI. So, within Systems Manager, you now scroll all the way down to find the Parameter Store on the left-hand side, second to last: Parameter Store.
The Parameter Store is for secrets and configuration data management, and this is a way to centralise all these parameters within your AWS account, which is great. So here, you can see how it works: you create a new parameter, you specify the parameter type and its value, and you reference that parameter from within your commands or your code, which is exactly what we're going to do. To get started, click on 'Create parameter'. Here I'll create a parameter under /my-app/dev/, and the first one is a database URL. So I'll say this is the database URL for my app in development.
Okay, now we have three types of parameters. We can have a String, and put whatever characters we want in it; a StringList, which is a list of strings separated by commas; or a SecureString, which is encrypted. Let's go with the String first. So, for example, here goes the database URL. You can put whatever you want, but it can be at most 4,096 characters. Let's say something completely made up, like dev.database.example.com, and maybe I'll add the port, 3306; it's just whatever you want it to be. I'll go ahead and create that parameter. And here the create parameter request succeeded, and we have our first parameter. We can see the description, the type, and also the version.
So, if we ever change this parameter, we'll see the last modified date, as well as the last modified user, which is tracked too, so we can see who changed it. If I click on this again, I get a summary, I can see the history of the values, and I can see the tags. And we have our DB URL. So what I want to do now is create another parameter. This time I'll call it the DB password for our database, and I'll just describe it as the database password for my app in development. And here I'll use a SecureString. So this time we're going to encrypt our secret, and we'll use KMS to encrypt it. For the KMS key source, we can use our current account or another account; we'll use our current account, with the AWS-provided key.
We can also use a key we've created before. I've created a 'tutorial' KMS key from before, and we can just use that one as well; it's whatever you prefer based on how you want to manage your security. So I'll use my tutorial key, and here's the value: I'll say 'this is the dev password', which we won't see later, because it's a secret value and it will be encrypted. So I'll create the parameter, and now we see that we have a new parameter right here, the dev DB password. It's a SecureString, the value is encrypted with KMS, the key ID is alias/tutorial, and the version is 1. If I click on it, we can see that the value is hidden, but I can click on 'Show', which will decrypt the value on the fly and display 'this is the dev password'.
So this is pretty cool. Now what we can do is repeat all of this, but this time for production. So let's go ahead and create the parameters. I'll create this in prod: I'll just change the path, say this is the database URL for my app in production, and be quick about the value, prod.database.example.com, port 3306. I've created it, and I'll just create one last parameter, the DB password for prod: the description is going to be 'in production', it's a SecureString, I'll use the same tutorial key, and this time the value will be 'prod password'.
Okay, I'll create the parameter. So now we have four parameters, and we want to be able to access them; for example, using the CLI first. The command is called get-parameters, and you have to provide names: the names of the parameters we want. So from here, I just say I want my DB URL and my DB password, and press Enter. And now we have two results from the API: we get the DB password and the DB URL back. So, let's take a look at the output. The first is a String, for the DB URL, and here's the value of the string. As you can see, because it was not encrypted, the value comes back in plaintext, and the version is 1. So that's perfect: we can see our database URL and use it. But for the password, it's a SecureString, and here's the value of it, which is an encrypted value.
So for this, you need to decrypt it, and there is a special flag for that: --with-decryption. So I'll add --with-decryption, and this will check whether or not I have the KMS permission to decrypt this secret that was encrypted with the KMS tutorial key. So I press Enter, and now, this time, for my SecureString the value has been decrypted, because I specified the --with-decryption flag. And this is pretty cool, because very quickly we're able to have encrypted values that people can't access if they don't have access to KMS; that protects my password. But if I do have access to KMS, I just provide one extra flag, and here we go: I get the value of my dev password. So it's really, really neat. The other thing we can do is get-parameters-by-path; let's look at the help. So get-parameters-by-path allows us to query through a path, and we have to provide the path name, which has to start with a forward slash.
So, if we do get-parameters-by-path and I just keep /my-app/dev/ as the path, then this will query for all the parameters under this path; and this is why we have a tree structure. What we get out of it, obviously, are the parameters we had before. So we could go up a level to just /my-app/ and add --recursive, to get all the parameters recursively under my app, and press Enter. And now we have our four parameters back: DB password, DB URL, DB password for production, and DB URL for production. So using this tree structure, we're able to organise our secrets and get them all at once, which is really neat. You can also use the --with-decryption flag here to get the decrypted values. So that's just for the CLI; in the next lecture, I'll show you how it works with a basic Lambda, which is also very simple. So, see you in the next lecture.
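For reference, the commands from this hands-on look roughly like this; the parameter names follow the hierarchy created above and are illustrative:

```bash
# Create a plaintext parameter and an encrypted one
aws ssm put-parameter --name /my-app/dev/db-url --type String \
    --value "dev.database.example.com:3306"
aws ssm put-parameter --name /my-app/dev/db-password --type SecureString \
    --key-id alias/tutorial --value "this is the dev password"

# Retrieve them; SecureStrings come back encrypted unless you ask for decryption
aws ssm get-parameters --names /my-app/dev/db-url /my-app/dev/db-password
aws ssm get-parameters --names /my-app/dev/db-password --with-decryption

# Query by path, optionally recursively
aws ssm get-parameters-by-path --path /my-app/dev/
aws ssm get-parameters-by-path --path /my-app/ --recursive --with-decryption
```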
9. SSM Inventory & State Manager
Let's take a look at the Inventory feature of SSM now. This is used to collect metadata from your managed instances, which can be EC2 or on-premises. The metadata can include many things, such as the installed software, the OS drivers, some configurations, the installed updates, and running services. It creates an inventory of what's happening on your managed instances. You can view this data in the AWS console, or store it, for example, in S3, and then query and analyse it using Athena for serverless and quick analytics if you want to build some dashboards around it. You can specify the metadata collection interval, which could be minutes, hours, or days.
And then you could also gather all this data from multiple accounts into one account, and then you could query it from a central location. And finally, you can create custom inventory if you want to: for example, you could record the rack location of each managed instance. So let's take a look at how inventory works in the console. On the left side, let's scroll down and look for Inventory. Here we are in Inventory, and as we can see, there are currently three managed instances with inventory disabled. So we need to enable inventory for my instances, and for this, I'm going to click here to enable inventory on all instances. To see how this inventory request is set up, I click on 'View details', and I'm taken into State Manager. State Manager will be explained in a moment, okay? But it is a way for you to apply a state to your different instances, to make sure that they are all in the same state. And obviously, the state we want our three instances to be in is a state that allows us to gather the software inventory.
So if we look at the targets, as we can see, all instance IDs are being targeted by this State Manager association, okay? And the execution history is now opening right here. I can click on it, and as we can see, the inventory has been gathered on one managed instance, but the other two associations are still pending. Okay, so one is a success, and I'm waiting for the other two to be done. While that's going on, let's take a look at what State Manager is. State Manager is used to automate the process of keeping your managed instances in a state that you define. The use cases could be to bootstrap instances with software, or to patch operating systems and apply software updates on a schedule. And you have to create what's called an association: you specify the state in which you want your instances to be. For example, in the one we just created, we want the instances to be in a state where the inventory data will be monitored and gathered.
Another example would be that port 22 must be closed no matter what, or that antivirus software must be installed on your instances. Then you specify a time frame for when this association and configuration will be applied. And to leverage State Manager, you use SSM documents to create an association; for example, you can create a document to configure the CloudWatch agent. So State Manager's job is to ensure that your fleet of instances is in the state that you desire. Now, back in here, my three managed instances show success. So, if I return to Systems Manager and look at my Inventory on the left side, we can see that three instances have inventory enabled for them, which is fantastic. And now that we have this, we can look, for example, at the instance coverage per type, okay? And, more importantly, we can look at the top OS versions. We can also examine the top five applications; because all of my instances use the same AMI, this isn't very interesting here, but imagine a fleet of thousands of EC2 instances with many different applications: it's going to be very helpful for you to know the details of all of that. Next, if I go up to the resource data syncs, we can create an inventory resource data sync, okay, to ship this inventory into an S3 bucket, and I will just call it 'demo-sync'.
So we need to make sure that there are enough permissions for SSM to sync my data into my S3 bucket, and for this, we need to add a bucket policy. So let's find the example bucket policy right here; we are going to copy this, okay? And now, in my S3 management console, I'm going to take a look at the bucket I just made. Under Permissions, in the bucket policy, I'll click Edit and paste this policy. Now, I do need to change a few things. So instead of the example bucket name in here and here, I need to copy the name of my bucket and paste it in. As for the bucket prefix, we don't have any, so I'll delete it. Then there is the condition that the account ID equals my account ID, which is right here, so I will paste my account ID, and we should be good to go.
So I will delete this, delete this, and delete the last comma. And now we're good to go in terms of permissions with my bucket policy. So let's save the changes and see if the error is now fixed. So, create... and maybe let's make the permission a little bit more permissive; as they say, keep trying until it works. I will just say that SSM is allowed to write anywhere in the bucket, and that should help. Okay, let's save these changes: now the permission allows the SSM service to put an object anywhere in my bucket. Create, and we're good to go: the resource data sync was successfully created. So now the inventory data will be synced into my S3 bucket, and this is what Athena will query in the backend. Okay, so we're waiting for the sync to deliver the data, and then I'll be able to see all the installed software on my EC2 instances.
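For reference, the bucket policy is based on the documented example for resource data syncs and looks roughly like this; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SSMBucketPermissionsCheck",
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-inventory-bucket"
    },
    {
      "Sid": "SSMBucketDelivery",
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-inventory-bucket/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
    }
  ]
}
```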
Okay, so this took about five minutes to populate, okay? But now, under the inventory type, I can have a look at the applications, and this is going to show me a list of all the applications installed on my EC2 instances. For example, we can have a look at the version, the architecture, the summary, the package ID, the publisher, the release, the URL, as well as the name, and so on. So you can do a lot of different things, and we see we have over eight or nine pages of results, so it can be quite a lot. And if you wanted to run advanced queries, you could click here, and it will take you into Athena, where you can run queries as you want on your inventory. Okay, so we've seen State Manager at a high level, and we've seen Inventory. I'll see you in the next lecture.
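If you prefer the CLI, the same application inventory can be pulled per instance; a rough sketch, with a placeholder instance ID:

```bash
aws ssm list-inventory-entries \
  --instance-id i-0123456789abcdef0 \
  --type-name "AWS:Application" \
  --max-results 5
```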
10. SSM Patch Manager and Maintenance Windows
So now let's see an overview of SSM Patch Manager. We use Patch Manager to automate the process of patching our managed instances. This includes OS updates, application updates, and security updates, and it supports both EC2 instances and on-premises servers, on Linux, macOS, and Windows. You can patch on demand, so you can run Patch Manager whenever you want, or on a schedule if you use a maintenance window.
What's going to happen is that Patch Manager will scan the instances and generate a patch compliance report, which is a list of all the missing patches. This report can then be sent to S3, and we can act upon it. Patch Manager has two components that we must understand: patch baselines and patch groups. Patch baselines define which patches should and shouldn't be installed on your EC2 instances, and you have the ability to create custom patch baselines if you want to specify approved or rejected patches for your instances. Patches can also be automatically approved within days of their release, in case no one is there to approve them. By default, the predefined patch baseline installs only critical patches and patches related to security onto your SSM managed instances. Now, a patch group is used to associate a set of instances with a specific patch baseline.
So if you define custom patch baselines, then you can create patch groups to associate them together. You can have a patch group for development, one for testing, and one for production, for example. When using patch groups, instances must be tagged with the tag key 'Patch Group', an instance can only be part of one patch group at a time, and a patch group can only be registered with one patch baseline. So, hopefully, that makes sense, but I made this diagram for you. We have three EC2 instances, all running SSM agents. The first one is tagged with OS: Windows and Patch Group: dev. The second one is tagged OS: Windows only, with no patch group.
The third one is OS: Windows and Patch Group: prod. Okay, and in Patch Manager, we're going to define patch baselines. The first patch baseline is attached to the default patch group, which applies when no patch group is defined, okay? And this is the default here. So any instance that doesn't have a specific patch group is going to get the first patch baseline ID. And the second patch baseline is registered with the patch group dev, and it is not a default patch baseline.
And it has a specific patch baseline ID. As a result, the first instance, under patch group dev, will get the pb-98 patch baseline ID, and the other two are going to get the pb-0123 patch baseline ID, because they are not tagged with the patch group dev. So, we're going to use Run Command here, and it will run a document called AWS-RunPatchBaseline, okay? This can be initiated from the console, the SDK, or a maintenance window, and then the Run Command itself will be applied to all these EC2 instances from within, obviously, to install the patches.
The SSM agents on these instances will then query the Patch Manager service to determine which patches to install, based on the patch baseline that applies to them. So this may seem complicated, but hopefully it makes sense. And then, obviously, you can have rate control, like anything in SSM, when using a maintenance window.
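As a rough sketch, kicking off that document from the CLI against a patch group could look like this; the tag value follows the example above:

```bash
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --targets "Key=tag:Patch Group,Values=dev" \
  --parameters "Operation=Install" \
  --max-concurrency "10%" --max-errors "0"
```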
So, talking about maintenance windows: this is when you want to define a schedule for when to perform actions on your instances, for example OS patching, updating drivers, or installing software. This could be done at night, say between 3 and 5 a.m. A maintenance window contains a schedule, a duration, a set of registered instances, and a set of registered tasks that can be run during that window. Hopefully, that makes sense. From the standpoint of the exam, what you need to know is that Patch Manager is used to patch your instances (but that comes, I guess, naturally), and that these patches can be run within a specific maintenance window with a specific rate control if you need to. Okay, that's it. I will see you in the next lecture.
11. SSM Patch Manager and Maintenance Windows - Hands On
So, on the left, I can access Patch Manager to automate the patching of my instances. Okay, I will click on 'Patch now', and this is to apply a patch to my instances. We can choose the patching operation: scan, or scan and install. Do we want to reboot if needed when we patch our instances? And then, which instances do we want to patch? We can patch all instances, or we can patch only some specific targets based on tags or resource groups, or we can specify the instances manually. Okay, then patching log storage can be enabled, and the logs can be placed in an S3 bucket of your choice.
Then, if you want to perform some complex patching scenarios at a specific point during the patching, you can use lifecycle hooks. So this is one way of doing things. Okay, but you can also view the predefined patch baselines, and there is one baseline per operating system: for example, the Red Hat patch baseline, the CentOS patch baseline, the Windows patch baseline, and so on, and whether or not each is the default baseline; some are, and some aren't. I don't want to go too deep into it, because I don't want to overwhelm you with details, but just to give you a quick overview of how Patch Manager works: these are all the patches that have been distributed for Windows and Amazon Linux 2, with the release date, whether the severity level is critical, and so on. And whenever you create a patch group, it will appear here, associated with a specific patch baseline.
And you can look at reporting to see whether an instance is compliant with its patches or not. Finally, if you want to check out maintenance windows, they're right here on the left. Under Maintenance Windows, we can create a maintenance window to run our patches. I'll call it 'demo-maintenance-window'. Then we can allow unregistered targets to be passed to this window. And when do we want this window to run, and for how long?
The schedule may be a cron or a rate expression. So we can say: start every day at 03:00, so that's 3 a.m., and there will be a two-hour maintenance window, okay? And 'stop initiating tasks' is set to zero hours before the window closes, but you could say one hour if you wanted to. If you want, we can also set schedule start and end dates. And then we can create this maintenance window, okay? And within this window, what we can do is register specific tasks that will be run, and one of these tasks, for example, could be a Run Command.
So I'll call the task 'patch', and the document that will be applied is AWS-RunPatchBaseline. Here we have the Run Patch Baseline document that we can register, and for targets we can choose registered or unregistered targets: for example, these three instances can be targeted. And obviously, thanks to the maintenance window, we can specify the concurrency and the error threshold: one target at a time, and zero for errors, and that should do the trick.
So let's register the Run Command task, and here we go. That means that within my maintenance window, this patch baseline document will be run, and it will happen only during this window. So this is just one way to do things. I'm not going to go ahead with this maintenance window, but I wanted to show you at a high level how patching and maintenance windows work together. And to clean up, you can delete this maintenance window, and you'll be good to go. So that's it. I hope you liked it, and I will see you in the next lecture.
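For reference, the same setup can be sketched with the CLI; the window ID and instance IDs are placeholders:

```bash
# Create a window that opens every day at 03:00 for 2 hours
aws ssm create-maintenance-window \
  --name "demo-maintenance-window" \
  --schedule "cron(0 3 * * ? *)" \
  --duration 2 --cutoff 0 \
  --allow-unassociated-targets

# Register the patching task with the window
aws ssm register-task-with-maintenance-window \
  --window-id "mw-0123456789abcdef0" \
  --task-arn "AWS-RunPatchBaseline" \
  --task-type "RUN_COMMAND" \
  --targets "Key=InstanceIds,Values=i-0aaa1111,i-0bbb2222,i-0ccc3333" \
  --max-concurrency 1 --max-errors 0
```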
12. SSM Session Manager Overview
So now let's talk about SSM Session Manager, which is a way for you to start a secure shell on your EC2 instances and on-premises servers, accessed through the console, the CLI, or the Session Manager SDK. And the real power of Session Manager is that you do not need direct SSH access into your instances, neither using a bastion host nor an SSH key. So this is distinct from traditional SSH, or even EC2 Instance Connect, which in the backend uses SSH. Okay? So how does it work? Our EC2 instance is running the SSM agent and has the right permissions to be registered with the SSM service. And so our user is going to connect to the Session Manager service, with the correct IAM permissions, of course.
And then Session Manager will be able to execute commands on our EC2 instance. So, if you will, it uses the same mechanism as the Run Command feature, okay? But this time, Session Manager is used to just have a command shell onto our EC2 instance. It supports Linux, macOS, and Windows, okay? And all the connections to your instances and the executed commands can be logged, so you can have them sent to Amazon S3 or CloudWatch Logs. The idea is that you will have more control and security.
When someone does a plain SSH into an EC2 instance, you do not have the history of all the commands that were run, so less security and less compliance. CloudTrail can also be used to intercept the StartSession event, which is emitted when Session Manager is used to start a session onto your EC2 instance; again, useful for automation, compliance, and alerting. So Session Manager requires some IAM permissions, and these allow you to control which users or groups can access Session Manager on which instances.
And you can use tags to restrict access to only specific instances. So in this example, this is an IAM policy that allows you to connect to any instance as long as the instance's Environment resource tag has the value dev. You also need access to the SSM service, write access to S3, and obviously write access to CloudWatch if you send the logs there. If you want to go all the way with security, you can even restrict the commands that a user can run in a session.
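A minimal sketch of such a policy, assuming the tag key is Environment and the value is dev:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": { "ssm:resourceTag/Environment": ["dev"] }
      }
    }
  ]
}
```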
Now compare this with just using SSH: there, we have to open up our security group, and a user with a specific IP can SSH into our instance and do whatever they want there. But using Session Manager, we don't need any inbound rules, okay? You simply need an instance with the SSM agent and the appropriate IAM role, and a user with the appropriate IAM permissions, who can then use Session Manager to run commands on our EC2 instance. All session data can be logged into Amazon S3 or CloudWatch Logs. So it's quite a cool service. I hope you like it, and I will see you in the next lecture to do some hands-on.