Amazon AWS-SysOps Practice Test Questions and Answers, Amazon AWS-SysOps Exam Dumps - PrepAway
Elastic Beanstalk for SysOps
1. Beanstalk Intro
OK, so we know about EC2, how to manage EC2 at scale, and the concepts of high availability and scalability for EC2. Now, how about we start automating some of the deployment? For this, we'll use Elastic Beanstalk. Beanstalk is usually a developer exam topic, but for the SysOps exam, you're actually supposed to know a few things. So in this section, we'll go back over the Beanstalk basics. Then we'll look at deployment options. And finally, we'll look at troubleshooting and a few optimizations we can do with Beanstalk.
2. Beanstalk Overview
Okay, so let's go ahead and learn what Elastic Beanstalk is about. So we have seen before that we have a typical three-tier web app architecture in which we have a load balancer. We had a private subnet that had EC2 instances in an auto-scaling group, and they were maybe talking to Amazon RDS for the database and ElastiCache to cache some information. So how can we safely reproduce this architecture very quickly? The solution is Beanstalk. So when you're developing on AWS, you need to do a lot of things. You must manage the infrastructure, deploy your code, configure all of the databases and load balancers, and do everything else required to auto-scale.
So you have scaling concerns, but at the end of the day, all these apps share the same architecture — for example, a load balancer and an auto-scaling group. And as a developer, all you want to do is run your code, and you don't want to worry too much about the rest. So perhaps you want to do this across different applications, environments, and programming languages. So this is where Elastic Beanstalk comes in.
So Elastic Beanstalk is a platform as a service, and it's a developer-centric view of deploying an application on AWS. So the beautiful thing is that Elastic Beanstalk is just a layer, but it uses all the components we've seen before: your EC2 instances, your auto-scaling group, your load balancers, your RDS database, etc. But the cool thing is that in the UI it's going to be one view, and it's very easy to make sense of. And we still have full control over the configuration of all the underlying elements. So Beanstalk itself is, all in all, free.
You do not pay for Beanstalk, but you do pay for the underlying resources that you use. For example, if you use an EC2 instance, a load balancer, or an RDS database, you're going to pay for those but not for Beanstalk itself. Okay, so what is Beanstalk? It's a managed service, and it will perform both the instance configuration and the operating system configuration for you. That's all handled by Beanstalk. It will give you a deployment strategy and deployment mechanism, and we'll see how we can configure them. But this deployment is going to be performed by Beanstalk — we have a full lecture dedicated to this. So the only thing that's your responsibility is your application code.
And then you need to figure out how to configure Beanstalk, obviously, but that's very simple. In Beanstalk, there are three architecture models. The first one is a single-instance deployment, which is good for development. The second one is a more production-like setup that leverages a load balancer and an auto-scaling group, which is great when you have pre-production or production web applications.
And then finally, there's something called a "worker tier." We'll see this as well in this section; it uses only an auto-scaling group, so no load balancer. And this is great when you want to process some messages from a queue. OK, back to Beanstalk. So Beanstalk has three different components. The first one is the application, and this is your code. Then you have the application versions — each deployment gets assigned a version — and finally the environment name. So it could be dev, test, prod, or whatever you want to call it. And so you're going to deploy your application versions to environments, and then you can promote application versions to the next environment. And the cool thing about Beanstalk is that it can do deployments, but it can also do rollbacks, so we can roll back to a previous application version. So we have this, and we have full control over the lifecycle of the environment.
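The application / version / environment model described above maps directly onto the AWS CLI. Here's a minimal sketch — the application, bucket, and environment names are illustrative, not the course's:

```shell
# Create the application (the container for versions and environments)
aws elasticbeanstalk create-application --application-name my-app

# Register a new application version from a zipped source bundle in S3
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v2 \
    --source-bundle S3Bucket=my-bucket,S3Key=app-v2.zip

# Deploy (promote) that version to an environment
aws elasticbeanstalk update-environment \
    --environment-name my-app-prod \
    --version-label v2

# Rolling back is just re-deploying the previous version label
aws elasticbeanstalk update-environment \
    --environment-name my-app-prod \
    --version-label v1
```

Note how the rollback is nothing special: versions are immutable labels, so "roll back" means pointing the environment at an older label.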
So in a diagram: we create an application, and we create the environment. We are going to upload a version and give it an alias, and then we're going to release this into our Beanstalk environment. As soon as we get our hands on it, everything should make sense. So what languages are supported by Elastic Beanstalk? Well, we have support for many platforms: Go, Java, Java with Tomcat, .NET, Node.js, PHP, Python, Ruby, Packer Builder, single-container Docker, multi-container Docker (which uses ECS), and preconfigured Docker. And if your platform is not there, you can write your own custom platform, which is more advanced. So Beanstalk does support a wide array of use cases, and what we're going to do right now is go ahead and practise: we're going to create our first Beanstalk application. So I hope you're excited, and I will see you in the next lecture.
3. Beanstalk First Environment
Okay, so before we get started with Beanstalk, please go to the Amazon EC2 console and check whether you have any EC2 instances running. If so, please terminate them. Check as well if you have a load balancer running. If so, please terminate your load balancer. And then finally, if you go to your auto-scaling groups and you have an auto-scaling group running, please terminate it as well. This is so we can start fresh and just focus on Beanstalk while remaining in the free tier. Okay, so next I'm going to go to Beanstalk, and we can get started.
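If you prefer the terminal to clicking around, the same cleanup check can be done with the AWS CLI — a quick sketch:

```shell
# List any EC2 instances still running from earlier sections
aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].InstanceId'

# List any leftover load balancers (ALB/NLB)
aws elbv2 describe-load-balancers --query 'LoadBalancers[].LoadBalancerName'

# List any leftover auto-scaling groups
aws autoscaling describe-auto-scaling-groups \
    --query 'AutoScalingGroups[].AutoScalingGroupName'
```

If all three commands return empty lists, you're starting fresh and staying in the free tier.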
So here we go. We're going to get into the console, and what Beanstalk is used for is end-to-end application management. So let's get started and create our first application. So, as you can see, we create a web application and need to name it. So let's just call it my-first-webapp, and note that we cannot change the name later. Application tags are optional, in case you want to tag your environment; for now, we'll just leave them empty. And next, we need to choose a platform. As you can see, there's a lot of choice, but I will choose Node.js for this tutorial.
And we don't need to know Node.js — we'll just look at some simple code — but I think it is easy to manipulate and easy to understand. So in terms of the platform branch, we'll just use the latest: Node.js 12 running on 64-bit Amazon Linux 2. This sounds good, but whatever you get — even if you get something different — should be fine for this hands-on. And the platform version has no options available, so we'll just leave it as is. Then, for the application code, we can get started with a sample application or upload our own code; if you choose the latter, you need to upload your code, and we don't have any code available right now. So let's just use the sample application. While we could configure more options for this hands-on, we'll keep things simple and click on Create Application. But because the application platform version can't be selected, creation obviously isn't working.
So this is probably a bug in AWS. So let's go with Node.js 10 — this doesn't matter for this hands-on — and we'll choose the platform version marked as recommended, which works. So make sure you choose a combination that works for you. All right, we'll click on Create Application, and our application is now being created. So it can take a while to create this environment, and as it is built, it will appear in this console. And so what I'm going to do is wait a little bit, to show you everything that happened once it has been created. Okay? So my web application has finally been deployed. And as we can see, the health is green, saying OK, so that means everything is working for me.
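As an aside, everything we just did in the console can also be done from a terminal with the EB CLI (`pip install awsebcli`). A sketch of the equivalent single-instance environment — names and region are illustrative:

```shell
# Initialize an EB application in the current directory
eb init my-first-webapp --platform node.js --region us-east-1

# Create a single-instance environment (no load balancer, free-tier friendly)
eb create my-first-webapp-dev --single

# Check environment health and the public URL
eb status

# Open the environment URL in a browser
eb open
```

The `--single` flag is what selects the single-instance preset; omitting it gives you the load-balanced setup we'll build in the next lecture.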
Okay, so let's just go right into it. Here's a URL, and if you click on it and open it in a new tab, you'll get a message saying, "Congratulations, your first Elastic Beanstalk application is now running." So this is magical. The application is running, but now let's try to get to the bottom of this to understand how everything works. So first of all, let's go to the menu on the left and then click on Events. So Events shows you all the events that happened in your environment, and you can choose the severity; we'll just use Info. And in here I can see everything that has happened, from the very start to the successful launch of my environment. So the first thing is that there was an Amazon S3 bucket created to store the environment data.
So if I take this S3 bucket name, go to the S3 console, and then filter the buckets, we'll just try to find the bucket whose name contains "beanstalk". And so yes, this bucket was created, and I won't go into it right now, but we can see that it will hold the application code and some configuration. Okay, back to Beanstalk. What else? So it has created a security group and an Elastic IP. So let's go to Amazon EC2 to see this. In the EC2 console, I'll go to the Elastic IPs section on the left. And yes, an Elastic IP has been created for me, for the Beanstalk environment of my first web app. Then I can also look at my EC2 instances. So yes, an EC2 instance has indeed been created, and it's currently running. And we can see that the Elastic IP on the right-hand side is assigned to the instance that was just created. And then finally, yes, there is a security group. This security group right here has an ID, and its name is coming directly from Beanstalk.
So yet again, this was created by Beanstalk. So this is all correct. Beanstalk has created an Elastic IP, an EC2 instance, and a security group for me, and configured them correctly. And then my application was launched, and it was available from this URL. So this Events tab really shows you everything that's going on. Now let's go back to our environment and try to understand everything from there. So we have the health — we'll look at the Health tab in a second — we get the running version, which is the sample application (we'll see how we can update this later), as well as the platform we're running on. OK, great. The configuration, on the left, is something we'll go over in depth in this course, so I am not going into it right now. The Logs tab shows the application logs on your EC2 instance, and from there you can request logs.
So you can request, for example, the last 100 lines of logs. This will create a file for you, and this file can be downloaded; it contains the last 100 lines of logs, which are right here. So it could be interesting if you have some troubleshooting to do. Next, you can go to Health. Health shows you the health of your environment. And as we can see right now, one instance is OK, and we get some information about the number of requests per second this instance is getting, the number of responses broken down by HTTP code, some latency information, and some load information. So we can get a lot of monitoring through the health system. Another tab is for the monitoring itself, so we can see the CPU utilization, the network in and network out, as well as more metrics, and alarms in case we have defined alarms. We also have Managed Updates, in case there are platform updates, and Events.
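The same logs and health information can be pulled from the command line, which is handy when troubleshooting. A sketch, assuming a hypothetical environment name:

```shell
# With the EB CLI: tail the instance logs and show per-instance health
eb logs my-first-webapp-dev
eb health my-first-webapp-dev

# Or with the plain AWS CLI: request the log tail, then retrieve it
aws elasticbeanstalk request-environment-info \
    --environment-name my-first-webapp-dev --info-type tail
aws elasticbeanstalk retrieve-environment-info \
    --environment-name my-first-webapp-dev --info-type tail
```

Note the two-step pattern in the AWS CLI: `request-environment-info` asks the instances to publish their logs, and `retrieve-environment-info` returns presigned URLs to download them — the same mechanism the console's "request logs" button uses.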
Events is the tab we already saw. And finally, Tags, if you want to tag your environments. So we can access all the menus except Configuration — but trust me, we will spend a lot of time on the configuration very, very soon. Okay, so there are several things on the left-hand side now. We have here my-first-webapp-env, which is the environment that has been created; it's part of the application named my-first-webapp. The application itself is also available on the left side.
So we have a concept of environments, and we have a concept of applications. So if I go one level up into Environments, I can see I have one environment right here that has been created for me, and its health is OK. I can also go one more level up, and this will show me the application. So I'll go to the left and click on Applications. As we can see, yes, we have access to our my-first-webapp application. So an application can have multiple environments, and that makes sense. Then let's look at the actions.
So, on the right, you can take an existing environment and create a new one. You can manage the tags, delete the application, restore terminated environments, and swap the environment URL. So we'll see some of these options in more detail.
We can also look on the left side at the application versions. So we can see that the only application version we have for our application is this sample application — the one we've been using in this tutorial. So we have a concept of applications, a concept of application versions, a concept of environments, and so on. So this is just an overview of Beanstalk. But as you can see, very simply, by just using some sample code, we were able to get this web page up and running. And this is already a great first step. Now, I'll see you in the next lecture for a deeper dive into Beanstalk.
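The applications → versions → environments hierarchy we just walked through in the console can also be listed with the CLI — a sketch, with an illustrative application name:

```shell
# All Beanstalk applications in the account/region
aws elasticbeanstalk describe-applications \
    --query 'Applications[].ApplicationName'

# All versions registered under one application
aws elasticbeanstalk describe-application-versions \
    --application-name my-first-webapp \
    --query 'ApplicationVersions[].VersionLabel'

# All environments of that application, with their health
aws elasticbeanstalk describe-environments \
    --application-name my-first-webapp \
    --query 'Environments[].[EnvironmentName,Health]'
```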
4. Beanstalk Second Environment
Okay, so let's move on with Beanstalk. We have one environment, but in my application, I now want to create a second environment. So I'm going to click on Create New Environment, and here we have a new dialogue box that asks, "Do you want to create a web server environment or a worker environment?" This is a different option than before, because we have already created one environment here. And so the second environment can be either a web server or a worker environment. For now, we'll only focus on web server environments. So I'm going to click on "web server environment," and we'll have a discussion about what a worker environment is later on. Okay, now for the name of the environment. The first one got a default name ending in "-env," but now we can give this one a more meaningful name.
So I will use the suffix "-prod." So before, let's say it was my development environment, and now this is my production environment. In terms of the domain, we can choose a value that we want — for example, something like my-app-prod — and we can check its availability, which is available. So this is great. And we can give a description, for example, "my Beanstalk application in prod." OK, so this is great. I'm going to scroll down, and we can choose a managed platform. So we're going to choose Node.js. Node.js 12 did not work before, so let's go back to Node.js 10 and take the platform version marked as recommended. This is great. And here we choose the application code: we upload either a sample application, our own code, or an existing version. But we cannot select an existing version, because we never uploaded one ourselves — it doesn't exist yet. So we'll just keep it simple with the sample application. But instead of clicking on Create Environment, this time we'll click on Configure More Options.
So let's go and see all the options we can configure in Beanstalk. As you can see, there are a lot of them, and we'll try to keep things very simple. So at the very, very top, it says "Configure my-first-webapp-prod." And we have different presets. We have a single instance that is free-tier eligible, which is what we used to create the first environment. Now we have different options: we can use a Spot instance as a single instance, Spot and On-Demand instances for high availability, or a custom configuration. For this, I'm going to choose the High Availability preset and go see what it comes with — but you're more than welcome to customise everything by just clicking on each configuration section and changing things in here. So let's go with high availability, and I'm going to scroll down and see what we can do. So the first thing I want to show you is the software.
So if I click on Edit for the software, we can see the configuration. We have X-Ray, which we haven't seen yet; S3 log storage, if you want to store your application's logs in S3; and streaming the logs to CloudWatch Logs, if you want the logs in the CloudWatch Logs service. So we haven't seen what that service is yet, but you can expect it to store the application logs in CloudWatch. For now, we're not changing anything in here, so I'm just going to go back and save it. Okay, then we can take a look at the instances section as well, where we can specify the root volume, the type of volume we want, the size, and the security group. I'm not going to edit anything there. More importantly, the capacity — this is where it gets really interesting. So we have an auto-scaling group, and this is a load-balanced environment, which is different from a single-instance environment. And so here we can configure our auto-scaling group: the minimum number of instances is one, and the maximum is four.
And then what composition do we want for our auto-scaling group? Do we just want On-Demand instances, or do we want a mix of Spot instances and On-Demand instances? I'm just going to keep it simple with On-Demand instances, and I'm going to scroll down; these would be the settings for the On-Demand/Spot mix. Then the instance type — we'll keep it as t2.micro — the AMI ID to use to launch these EC2 instances, the number of AZs to use (we'll just keep it as Any), and the placement across availability zones, for example AZs a, b, and c. Okay, now for the scaling triggers. So do we want auto-scaling on our auto-scaling group, and what is the metric we want — for example, NetworkOut or CPU utilization, or whatever you want — the statistic (for example, average), the unit, and some thresholds.
So we click on Save, and here we have the auto-scaling configuration. Next, we have a load balancer — so we have an ASG but also a load balancer. And here, if we click on Edit, we can see that we have an Application Load Balancer, which is the newer generation that uses HTTP and HTTPS. This is a web application, so this makes a lot of sense. But we could use a Classic Load Balancer if we wanted to, or a Network Load Balancer if we wanted ultra-high performance and static IP addresses for our application. We'll leave it as ALB. And here we are able to modify the Application Load Balancer configuration by adding listeners and so on, as well as processes and rules. Finally, the load balancer's access logs can be stored in Amazon S3, for example. Okay, we'll save this configuration.

So as we can see so far, using these few settings, we're able to configure the underlying components of our Elastic Beanstalk environment: a load balancer and an auto-scaling group. Rolling updates and deployments are going to be seen in a future lecture, so I'll skip over them. Security defines the service role for Beanstalk as well as the key pairs to use for your EC2 instances. Monitoring is, well, around monitoring, so I won't go into it right now. There are managed updates as well, which define when you want to update your EC2 instances; some notifications; networking, to make the environment part of a VPC or not; and a database. More importantly, if you want to create an RDS database, you would define it here. So if I go in here, I would be able to create an entire RDS database and enter all the settings for it, but I will not do it right now. And something very important to note: if you do create an RDS database within Beanstalk, then you cannot delete your Beanstalk environment without the RDS database being deleted along with it. So sometimes it's good to have RDS within Beanstalk, and sometimes it's good to have RDS outside of Beanstalk.
Okay, it's up to you. So I'm going to click on Save or Cancel.
This will be enough. Finally, there are tags for your environment. Oh, by the way, something I forgot: in the load balancer section, when you click on Edit, you choose a load balancer type — application, classic, or network — and you cannot change the load balancer type later on. So, if you choose an Application Load Balancer, you must keep an Application Load Balancer for the duration of your environment's lifecycle. Okay? You cannot switch between these three. Okay, so let's go back here, and I'm going to go back into the high-availability preset to make sure everything is configured nicely. And then I will click on Create Environment. And so now we will go through the same process to create this environment, so we'll wait a little bit for this to be done. So, after five minutes, my prod environment has been created, and if I go to this URL, I get the same congratulations message. But this time, we know that we are served by a highly available setup, because there is a load balancer and also an auto-scaling group — so we can go and verify this.
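All of the options we just toured in the console can also be set in code, via an `.ebextensions` config file shipped inside the application source bundle. A sketch — the values are illustrative, not the course's exact settings:

```yaml
# .ebextensions/env.config — selected option_settings matching the sections above
option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced        # vs. SingleInstance
    LoadBalancerType: application        # application | classic | network
                                         # (cannot be changed after creation)
  aws:autoscaling:asg:                   # the "capacity" section
    MinSize: 1
    MaxSize: 4
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
  aws:autoscaling:trigger:               # the scaling triggers
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    UpperThreshold: 60
    LowerThreshold: 20
```

Settings from `.ebextensions` are applied at deployment time; values set directly in the console take precedence over them, which is worth remembering when a config file seems to be ignored.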
So if I go to the EC2 Management Console and go to Instances, I can find my prod instance right here, which has a public DNS. This one does not have an Elastic IP, and this instance is actually managed by an auto-scaling group — we can see it in the tags; there's an auto-scaling group name for it. So I'm going to go to my auto-scaling groups on the bottom left of my screen, and as you can see, yes, we have two auto-scaling groups. The first one, which has a max, min, and desired of one, is for my dev environment; and this one, which has a min of one, a max of four, and a desired of one, is for my production environment, and here we can see our instance as well. So this is all very good. We have an instance managed by an auto-scaling group, and we can see from our load balancer, for example, that we have some listeners that have been created for us, and everything is nicely set up.
Oh, by the way, we can also go into the auto-scaling group, and obviously, because this is an auto-scaling group, we can see the scaling policies that have been created for us — two scaling policies, one for scaling down and one for scaling up — which is very nice. Next, we can go to our security groups and have a look at those as well. So let's clear this filter — there are many security groups, so let's just filter for "beanstalk". And here we can see three security groups. The first one is our load balancer security group, which is attached to our new load balancer; it allows inbound traffic on port 80 for HTTP, so this is great. The second is a security group attached to our EC2 instance, as indicated by the rules here, which mention port 80 — but the source of that port 80 rule is the security group of my load balancer. So this is the kind of optimal security setup we've seen in this course. And the last security group was the one for our development environment, which allows access to port 80 from anywhere. That's really cool. Beanstalk did a lot of stuff for us in a perfect way, and now we manage it all from this UI, which I think is quite handy and nice.
And so if we go back to environments now, we can see that Beanstalk has two environments, our dev environment and our production environment, both running alongside each other. And so this is already a much easier way to manage your application deployments. So I hope that was helpful, and I will see you in the next lecture.
5. Beanstalk Deployment Modes
Now, here is a very popular question from the exam: the exam will ask you about deployment modes for Elastic Beanstalk and which deployment mode is better for which situation. So I want you to understand each and every option for Elastic Beanstalk deployment, because that's really key to answering questions the right way. We've seen the single-instance deployment, which is great for development, because we basically get one EC2 instance with one Elastic IP and one auto-scaling group, and it may or may not communicate with a database. All of this is in one AZ, so it's very easy to reason about, and the DNS name maps straight to the Elastic IP. Then there's a second setup, which is high availability with or without a load balancer, and it's great for production-type deployments. So in this case, it's a bit more complicated, but we've seen this architecture before.
We have an auto-scaling group, or ASG, and it will span multiple availability zones, in which we're going to get one or several EC2 instances, each with their own security group, and they may talk to an RDS database that may be set up across multiple availability zones as well, such as one master and one standby database. OK, so all this is pretty familiar. The Elastic Load Balancer will then communicate directly with the ASG and connect to all of the EC2 instances.
And that ELB will expose a DNS name, which will be wrapped by the Elastic Beanstalk DNS name. So this is what we've seen in dev and production, and obviously you can customise this a little bit. But what happens when you want to update these deployments? Okay, there are four or five different types of deployments, and you must be familiar with all of them. The first one is called "all at once," where you deploy your whole application in one go. Now don't worry, I have graphs describing all of these in depth; I just want to give you a quick overview first. So "all at once" is the fastest kind of deployment, but instances won't be available to serve traffic for a bit, so you'll get downtime. If you go with a rolling update, it will update a few instances at a time — a group also called a "bucket" — and then move on to the next bucket once the current one is healthy.
You then get a slight twist on this called "rolling with additional batches." This is like rolling, but you spin up new instances to move the batches along, such that your application is still available and always operating at full capacity. And finally, you get immutable deployments, where you spin up new instances in a new ASG and deploy the version update to these instances; when everything is healthy, the new instances replace the entire ASG. So this is a little bit high-level, and you probably have no idea what this means yet. This is why I want to take my time and really show you with graphs and diagrams how these work. So let's talk about "all at once." Here are our four EC2 instances, all of which run version 1 (V1) of our application, shown in blue. Then we are going to do an all-at-once deployment — we want to deploy V2. And what happens? At first, Elastic Beanstalk will just stop the applications on all our EC2 instances.
So I then show them as gray, as in they don't run anything. And then they will be running the new V2, because Elastic Beanstalk deploys V2 to these instances. So what do we notice? Well, it's very quick — it's the fastest deployment. However, the application experiences downtime because, as seen in the middle, all instances are gray and thus unable to serve any traffic.
I think it's great for when you have quick iterations in development environments, when you want to deploy your code fast and you don't really care about downtime. And finally, with this setup, there is no additional cost. Now let's talk about rolling. The application will basically run below capacity, and we can specify how far below capacity it should run — this is called the bucket size. So, let's take a look. We have four instances running V1, and the bucket size will be two for this example. So what happens is that the application on the first two instances will be stopped — not the instances themselves — and so they're gray. But we still have the other two instances running V1.
As you can see, we're only running at half capacity. Then these first two instances will be updated, so they will be running V2. And then we will roll on to the next bucket, or the next batch — that's why it's called rolling. The bottom two instances will now have their V1 application stopped (shown in gray) and then updated to V2. And so at the end, we have all the EC2 instances updated to run the V2 application code.
As you can see, the application is running both versions concurrently at some point during the deployment, and there is no additional cost — you still have the same number of EC2 instances running in your infrastructure. But if you set a very small bucket size and you have hundreds and hundreds of instances, it may be a very long deployment. Right now in this example, we have a bucket size of two and four instances, but we could have a bucket size of two and 100 instances — it would just take a very long time to upgrade everything.
Now, there's an additional mode called "rolling with additional batches." In this case, the application is never running below capacity. Just before, at some point, we were only running two out of four instances — that was below capacity. In this mode, we always run at capacity, and we can also set the bucket size.
And basically, our application will still run both versions simultaneously, but at a small additional cost: the additional batch, which we'll see in a second, is removed at the end of the deployment. And again, the deployment is going to be long. It's honestly a good mode for production. So let's have a look. We have four V1 EC2 instances, and the first thing we're going to do is launch new EC2 instances with the V2 version. So now, from four instances, Elastic Beanstalk has automatically created six instances for us. So two more were added, and as you can see, the additional two are already running the newer version. Now we take the first batch — the first bucket of two — and they get stopped.
The application gets stopped, and then the application gets updated to version 2. Excellent. Then the process repeats again, just like in rolling: the application running V1 gets stopped, and then the application is updated to V2. And so at the end, you can see we have six EC2 instances running V2, and then the additional batch gets terminated and taken away. So, what do we make of this? Well, we can see that we are always running at capacity: the lowest number of EC2 instances running the application at any time is four. So sometimes we are running over capacity, obviously, and this is why there is a small additional cost. It's very small, but there is an additional cost — and sometimes the exam asks you whether there is an additional cost for this kind of thing. Then there are immutable deployments, which are also zero-downtime. But this time, the new code is going to be deployed to new instances — before, it was deployed onto existing instances. And where do these new instances come from? They come from a temporary ASG. So there's a higher cost: you double the capacity because you get a full new ASG, and it's the longest kind of deployment.
However, as a bonus, you get a very quick rollback in case of failures: to undo the deployment, you simply terminate the new ASG, and Elastic Beanstalk will do that for you. If you're willing to pay a little more, it's a great choice for prod. So here's the idea: we have a current ASG with three instances running the V1 application, and then a new temporary ASG gets created. At first, Beanstalk will launch just one instance in it, to make sure that one works.
And if it works and passes the health check, it's going to launch all the remaining ones. So now we have three instances in the temporary ASG, and when Beanstalk is happy, it's going to sort of merge the temporary ASG into the current ASG. So it's going to move all the temporary ASG instances into the current ASG. So now we have six instances in the current ASG, okay? And when all this is done and the temporary ASG is empty, the current ASG will terminate all the V1 applications while the V2 applications are still there. And then finally, the temporary ASG will just be removed. Finally, there's something you may hear about in the exam or in the white papers.
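Those immutable-deployment phases can be written down as a small state walkthrough. This is purely illustrative (a hand-rolled simulation, not an AWS API), assuming three V1 instances and a passing health check on the first new instance.

```python
def immutable_deploy(current=None, new_version="v2"):
    """Toy walkthrough of the phases of an immutable deployment.
    Yields (phase, current_asg, temporary_asg) snapshots."""
    current = list(current or ["v1", "v1", "v1"])
    temp = []
    yield ("start", list(current), list(temp))
    temp.append(new_version)                       # launch one instance first
    yield ("first instance", list(current), list(temp))
    # assume the health check passes: launch the remaining instances
    temp += [new_version] * (len(current) - 1)
    yield ("full batch", list(current), list(temp))
    current, temp = current + temp, []             # merge temp ASG into current
    yield ("merged", list(current), list(temp))
    current = [v for v in current if v == new_version]  # terminate the old V1s
    yield ("v1 terminated", list(current), list(temp))

phases = list(immutable_deploy())
for name, cur, tmp in phases:
    print(f"{name:15s} current={cur} temp={tmp}")
```

Note the "merged" snapshot: six instances at once, which is the doubled capacity (and cost) mentioned above, and the last snapshot is the quick rollback point in reverse: before the merge, rolling back is just deleting the temporary ASG.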
It's called blue/green, and it's not a direct feature of Elastic Beanstalk, but I'll try to give you my best version of it. It's basically zero downtime, and it helps with the release process, allows for more testing, etc. And so the idea is that you deploy a new "stage" environment, so it's just another Elastic Beanstalk environment, and you deploy your new V2 there. So before, all the deployment strategies were within the same environment. Here, we create a new environment, so that the new environment, the stage or "green" one, can be validated independently on our own time, and in case of issues we can roll back. And then we can use something like Route 53, for example, to split the traffic between both environments. So we can set up weighted policies and redirect a little bit of traffic to the staging environment so we can test everything.
And then, when we're happy, using the Elastic Beanstalk console, you can swap URLs when done with the test environment. So this is not a very direct feature, and it's actually very manual to do. It's not like it's embedded in Elastic Beanstalk. So some documentation will say blue/green is supported, and some will say it's not. But overall, it's very manual.
So, just one graph, I'm trying to keep it simple, but in the blue environment, we have all the V1, and then we'll deploy a green environment with all the V2, okay? And they're both working perfectly fine at the same time. And then in Route 53, we're going to set up a weighted type of policy to send 90% of the traffic to blue. So we keep the majority of traffic going to the instances we know work, and maybe only 10% of traffic goes to the green environment, to test it out, make sure it's working, and check that users aren't having any issues. And so the web traffic basically gets split 90/10, but the weights are whatever you want. So, once you're satisfied with your testing, when you've measured everything you want in your V2 environment and you think you've nailed it, you basically shut down the blue environment and swap the URLs to make the green environment the primary one.
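To make the 90/10 split concrete, here is a sketch of the Route 53 change payload behind it. The hosted zone, domain name, and environment CNAMEs are hypothetical, and nothing is sent to AWS; with boto3 you would typically pass this dict as the `ChangeBatch` argument to `route53.change_resource_record_sets(...)`.

```python
# Build a Route 53 ChangeBatch with two weighted CNAME records pointing at
# the (hypothetical) blue and green Elastic Beanstalk environment URLs.
def weighted_record(set_id, target, weight, name="app.example.com."):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": set_id,  # distinguishes the two weighted records
            "Weight": weight,         # traffic share = weight / sum of weights
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

change_batch = {
    "Comment": "Blue/green: 90% of traffic to blue, 10% to green",
    "Changes": [
        weighted_record("blue", "blue-env.example.elasticbeanstalk.com", 90),
        weighted_record("green", "green-env.example.elasticbeanstalk.com", 10),
    ],
}
```

Shifting more traffic to green (or cutting over entirely) is then just re-upserting the same two records with new weights, which is exactly why this pattern is flexible but manual.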
So that's it for blue/green. It's fairly complicated, and I believe fairly manual in Elastic Beanstalk, but that's how it is. Now, the AWS documentation is sometimes really good, and here we get a brief summary. So there is a link; if you look into it, it's really, really good. I really like that page; you should read it. And there's a table in there which is quite nice and kind of summarises all the deployment options.
It covers all of them: all at once, rolling, rolling with additional batch, immutable, and blue/green. So we've been through all of these in depth. And the table basically tells you, for each one: what happens if a deployment fails? What's the deployment time? Is there zero downtime or not? Is there a DNS change? What's the rollback process, and where does the code get deployed to? So this table should make a ton of sense to you if my diagrams made sense to you as well, right? By now you should really understand all the differences between the deployment methods. They are very important for the exam, which asks you a lot of questions about which one is better depending on the use case and the requirements. So I hope that was helpful, and you are now an Elastic Beanstalk deployment expert. I'll see you in the next lecture.