AWS Certified Solutions Architect - Associate SAA-C02 Training Course

AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course

The complete solution to prepare for your exam: the AWS Certified Solutions Architect - Associate SAA-C02 certification video training course. The course contains a complete set of videos that provide you with thorough knowledge of the key concepts, along with top-notch prep materials including Amazon AWS Certified Solutions Architect - Associate SAA-C02 exam dumps, a study guide, and practice test questions and answers.

144 Students Enrolled
10 Lectures
23:20:00 Hours

AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course Exam Curriculum

1

Introduction - AWS Certified Solutions Architect Associate

2 Lectures
Time 00:04:00
2

AWS Fundamentals: IAM & EC2

5 Lectures
Time 02:34:00
3

High Availability and Scalability: ELB & ASG

3 Lectures
Time 01:29:00

Introduction - AWS Certified Solutions Architect Associate

  • 2:00
  • 2:00

AWS Fundamentals: IAM & EC2

  • 6:00
  • 6:00
  • 8:00
  • 10:00
  • 3:00

High Availability and Scalability: ELB & ASG

  • 5:00
  • 8:00
  • 9:00

About AWS Certified Solutions Architect - Associate SAA-C02 Certification Video Training Course

AWS Certified Solutions Architect - Associate SAA-C02 certification video training course by prepaway along with practice test questions and answers, study guide and exam dumps provides the ultimate training package to help you pass.

AWS Fundamentals: RDS + Aurora + ElastiCache

4. RDS Encryption + Security

So now let's talk about RDS security, and the first topic I want to cover is encryption. We have at-rest encryption, which protects data that's not in motion. You can encrypt the master database and the read replicas using AWS KMS, the key management service of AWS, which uses AES-256 encryption. Encryption is defined at launch time.

And if the master is not encrypted, the read replicas cannot be encrypted. That is a common scenario question on the exam as well. You can also enable Transparent Data Encryption (TDE) for Oracle and SQL Server, which provides an alternative way of encrypting your database. Then there's in-flight encryption, which is always going to be about SSL certificates, used to encrypt data to RDS while in transit. So when your clients send data to your database, you provide the SSL option with a trust certificate when connecting. To enforce SSL, you make sure that all clients must use SSL connections.

If you're using PostgreSQL, there is a parameter you need to set in the database parameter group: rds.force_ssl=1. That's pretty explicit. If you're using MySQL, you need to run a SQL statement within the database: GRANT USAGE ON *.* TO 'mysqluser'@'%' REQUIRE SSL;. Again, it's pretty obvious what it does. Just so you know, the PostgreSQL setting is a parameter group option, while the MySQL one is a SQL command run within the database. You should also be aware of some RDS encryption operations. The first one is: how do I encrypt an RDS backup? Something you should know is that if you have an unencrypted RDS database and you take a snapshot of it, then the snapshot itself will be unencrypted. Similarly, if you take a snapshot of an encrypted RDS database, the snapshot is going to be encrypted.
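To make the force-SSL idea concrete, here is a tiny Python sketch. The rds.force_ssl parameter name is from the lecture; the ToyDatabase class and its methods are made-up stand-ins for illustration, not any AWS API:

```python
# Toy sketch of what rds.force_ssl = 1 means for incoming connections.
# ToyDatabase is hypothetical; only the force-SSL rule mirrors RDS behavior.

class ToyDatabase:
    def __init__(self, force_ssl: bool):
        self.force_ssl = force_ssl  # mirrors the rds.force_ssl parameter

    def connect(self, uses_ssl: bool) -> str:
        # With force_ssl enabled, plaintext connections are rejected.
        if self.force_ssl and not uses_ssl:
            raise ConnectionRefusedError("SSL connection required")
        return "connected (encrypted)" if uses_ssl else "connected (plaintext)"
```

With force_ssl on, only SSL clients get through; with it off, plaintext connections are still accepted.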

What you can do, though, is copy an unencrypted snapshot into an encrypted one. So if you take a snapshot of an unencrypted RDS database and copy it, you can create an encrypted version of that snapshot. Fairly easy, right? That brings us to the question of how we encrypt an unencrypted RDS database. First, we create a snapshot of the unencrypted RDS database, which will be unencrypted. Then we copy the snapshot, and for the copied snapshot we enable encryption. Now we have a copied, encrypted snapshot. From that encrypted snapshot we can restore a database, and that gives us an encrypted RDS database. Then we just migrate all our applications from the old, unencrypted RDS database to the new, encrypted one, and we delete the old database. That is an operation you should be familiar with; it is fairly simple, and once you've seen it, you will recognize it when it comes up in the exam.
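The snapshot, copy, restore workflow above can be modeled in a few lines of Python. The classes are hypothetical; only the two rules they encode come from the lecture: a snapshot inherits its source's encryption state, and encryption can be enabled when copying a snapshot.

```python
# Minimal model of the "encrypt an unencrypted RDS database" workflow.

class Snapshot:
    def __init__(self, encrypted: bool):
        self.encrypted = encrypted

    def copy(self, encrypt: bool = False) -> "Snapshot":
        # Copying is the step where encryption can be turned ON.
        return Snapshot(encrypted=self.encrypted or encrypt)

class Database:
    def __init__(self, encrypted: bool):
        self.encrypted = encrypted

    def take_snapshot(self) -> Snapshot:
        # A snapshot inherits the encryption state of its source database.
        return Snapshot(encrypted=self.encrypted)

def encrypt_database(db: Database) -> Database:
    """Snapshot -> encrypted copy -> restore, as described in the lecture."""
    snap = db.take_snapshot()                 # unencrypted snapshot
    encrypted_snap = snap.copy(encrypt=True)  # enable encryption on the copy
    return Database(encrypted=encrypted_snap.encrypted)  # restore from it
```

Running encrypt_database on an unencrypted database yields an encrypted one, which is exactly the exam scenario.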

So now let's talk about network and access security. For network security, our RDS databases are usually deployed within a private subnet, not a public one, so make sure not to expose your database to the internet. RDS security works by leveraging security groups that you attach to your RDS instance; they're exactly the same concept as for EC2 instances, and they control which IP addresses or security groups can communicate with RDS. For access management, which is user management and so on, in terms of permissions you have IAM policies, which allow you to limit who can manage RDS: who can create a database, who can delete it, who can duplicate it, and so on. In addition, you use a traditional username and password to log in to the database, or, as we'll see in the next slide, for RDS MySQL and RDS PostgreSQL only, you can use IAM-based authentication. The bottom line is that database security is typically provided from within the database.

Now let's talk about how we can connect to RDS using IAM authentication. As I said, it is only for MySQL and PostgreSQL. You don't need a password this time; you just need something called an authentication token, which is obtained directly through IAM and the RDS API calls. We'll see this in the diagram in a second. The authentication token is short-lived: it has a lifetime of 15 minutes. So here's the example: we have our EC2 security group and our EC2 instance, and then we have our MySQL RDS database in the RDS security group. The EC2 instance will have something called an IAM role, and we'll see what that means when we get deep into IAM roles for EC2. But the idea is that the EC2 instance, thanks to that IAM role, is going to be able to issue an API call to the RDS service to get back an authentication token.

Using that token, it's going to pass it along while connecting to the MySQL database, making sure the connection is encrypted along the way. Then it will connect securely to your MySQL database, fairly easily. The benefit of this approach is that all network traffic in and out must be encrypted using SSL, and IAM is used to centrally manage users instead of managing users from within the database. So it's a more central type of authorization, and you can leverage IAM roles and EC2 instance profiles for easy integration. And, as I previously stated, we'll find out what IAM roles and instance profiles are very soon. In summary, for RDS security, you have encryption at rest, which can only be enabled when you first create the database instance. If the database is unencrypted, you need to create a snapshot, copy the snapshot to encrypt it, and then create a new database from the encrypted snapshot, and that will give you an encrypted database.
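Here is a toy sketch of the short-lived-token idea. The 15-minute lifetime is from the lecture; the real token comes from the RDS API via the instance's IAM role, whereas this issuer is purely local and hypothetical:

```python
# Toy illustration of RDS IAM authentication tokens: tokens expire after
# 15 minutes. issue_token / is_token_valid are hypothetical, not the RDS API.

TOKEN_LIFETIME_SECONDS = 15 * 60  # tokens are valid for 15 minutes

def issue_token(user: str, now: float) -> dict:
    # Stand-in for asking the RDS service for an auth token via an IAM role.
    return {"user": user, "issued_at": now}

def is_token_valid(token: dict, now: float) -> bool:
    # A token older than 15 minutes is rejected; the client must fetch a new one.
    return (now - token["issued_at"]) < TOKEN_LIFETIME_SECONDS
```

In a real application the client would request a fresh token whenever the current one is near expiry.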

Your responsibility is to check all the ports and the inbound rules of the database security group, to take care of in-database user creation and permissions, or, as we saw before, to manage users through IAM for MySQL and PostgreSQL. You also decide whether to create a database with or without public access: is it going to be in a public subnet or a private subnet? And you ensure the parameter groups in the database are properly configured to only allow SSL connections, making sure encryption is happening. And what is AWS's responsibility? Well, it is to make sure that you don't have SSH access, that you don't have to do database patching or OS patching because AWS does this for you, and that you don't have a way to look at the underlying instance. Again, that is AWS's responsibility. So RDS is a service that is provided to you, and you can use it or not, but in my opinion, it is one of the best services provided by AWS, so it makes a lot of sense to use RDS. Alright, that's it for this lecture. I will see you in the next lecture.

5. Aurora Overview

So let's talk about Amazon Aurora, because the exam is starting to ask you a lot of questions about it. Now, you don't need deep knowledge of it; you just need a high-level overview to understand exactly how it works. So this is what I'm going to give you in this lecture. Aurora is going to be a proprietary technology from AWS.

It's not open source, but they made it compatible with PostgreSQL and MySQL. Basically, your Aurora database will have compatible drivers; that means that if you connect as if you were connecting to a PostgreSQL or MySQL database, it will work. Aurora is very unique, and I won't go into too much detail about its internals, but they made it cloud-optimized, and by doing a lot of optimization and smart engineering, they get roughly a five-fold performance improvement over MySQL on RDS and a three-fold performance improvement over PostgreSQL on RDS, along with further improvements in other areas. It's really smart, but I won't go into the details of it. Now, Aurora storage automatically grows, and I think this is one of the main features that is quite awesome. You start at 10 GB, but as you put more data into your database, it grows automatically up to 64 terabytes.

Again, this is due to how it is designed, but the awesome thing is that you no longer need to worry about monitoring your disk as a DBA or a sysops. You just know it will grow automatically over time. Furthermore, Aurora can have up to 15 read replicas, whereas MySQL only has five, and the replication process is faster. So overall, it's a win. Now, if you do a failover in Aurora, it's going to be near-instantaneous, much faster than a failover from Multi-AZ on MySQL RDS. And because it's cloud-native, by default you get high availability. Now, although the cost is a little bit higher than RDS (about 20% more), it is so much more efficient that at scale, it makes a lot more sense for savings. So let's talk about the aspects that are super important: high availability and read scaling.

So Aurora is special because it stores six copies of your data, across three AZs, whenever you write anything. And Aurora is highly available: it only needs four copies out of six for writes, which means that if one AZ is down, you're fine, and it only needs three copies out of six for reads, so it's highly available for reads as well. If some data is corrupted or bad, it self-heals with peer-to-peer replication in the back end, which is quite cool. And you don't rely on just one volume; you rely on hundreds of volumes. Again, not something for you to manage; it happens on the back end, but it means the risk is reduced by a lot. So if you look at it from a diagram perspective, you have three AZs and a shared storage volume, but it's a logical volume, and it has replication, self-healing, and auto-expansion, which is a lot of features. So if you were to write some data, say blue data, you'd see six copies of it across the three AZs.
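The quorum arithmetic above (four of six copies for writes, three of six for reads) can be captured in a small, self-contained model. The function below is a toy illustration, not Aurora's implementation:

```python
# Aurora's availability rule, as stated in the lecture: 6 copies across 3 AZs,
# write quorum of 4, read quorum of 3. cluster_health is a toy model.

TOTAL_COPIES = 6
WRITE_QUORUM = 4
READ_QUORUM = 3

def cluster_health(available_copies: int) -> dict:
    # Losing one AZ leaves 4 of 6 copies: writes and reads both still work.
    return {
        "can_write": available_copies >= WRITE_QUORUM,
        "can_read": available_copies >= READ_QUORUM,
    }
```

This is why losing a whole AZ (two copies) still leaves the cluster fully writable, and losing a third copy still leaves it readable.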

Then, if you write some orange data, again there will be six copies of it across the AZs. And as you write more data, there will again be six copies of it in three different AZs. The cool thing is that it goes on different volumes, it's striped, and it works really, really well. Now, that's what we need to know about storage; you don't actually interface with the storage directly. It's just a design that Amazon made, and I want to give it to you so you understand what Aurora does. Aurora is a bit like Multi-AZ for RDS: basically, there's only one instance that takes writes. So there is a master in Aurora, and it will take the writes. Then, if the master doesn't work, failover happens in less than 30 seconds on average, so it's a really, really quick failover. In addition to the master, you can have up to 15 read replicas, each serving reads. So you can have a lot of them, and this is how you scale your read workload.

As a result, if the master fails, any of these replicas can take over as master. So it's quite different from how RDS works, but by default, you only have one master. The cool thing about replicas is that they support cross-region replication. So if you look at Aurora on the right-hand side of the diagram, this is what you should remember: one master, multiple replicas, and storage that is replicated, self-healing, and auto-expanding in small increments. Now, let's have a look at how Aurora is organized as a cluster. This is more about how Aurora works: when you have clients, how do you interface with all these instances? So, as we said, we have a shared storage volume, auto-expanding from 10 GB to 64 terabytes. Really cool feature. Your master is the only thing that will write to your storage.

And because the master can change and fail over, what Aurora provides you is what's called a writer endpoint. It's a DNS name, always pointing to the master. So even if the master fails, your client still talks to the writer endpoint and is automatically redirected to the right instance. As I previously stated, you can also have a lot of read replicas. What I didn't say is that you can have auto scaling on top of these read replicas. So you can have from one up to 15 read replicas, and you can set up auto scaling so that you always have the right number of them. Now, because of auto scaling, it can be really hard for your applications to keep track of where your read replicas are. What's the URL? How do I connect to them?

For the exam, remember that there is something called a reader endpoint. A reader endpoint has the same functionality as a writer endpoint: it helps with connection load balancing and connects automatically to all the read replicas. So any time a client connects to the reader endpoint, it gets connected to one of the read replicas, and the load is balanced. Just note that the load balancing happens at the connection level, not the statement level. So this is how it works for Aurora. Remember the writer endpoint and the reader endpoint. Remember auto scaling. Remember the shared storage volume that automatically expands. Remember this diagram, because once you get it, you understand how Aurora works.
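As a rough illustration of the two endpoints, here is a hypothetical sketch: the writer endpoint follows the current master across a failover, while the reader endpoint balances new connections across the replicas (at the connection level, as noted above). The class and method names are invented, not an AWS API:

```python
import itertools

# Sketch of Aurora's writer/reader endpoint behavior.
# AuroraClusterSketch is illustrative only.

class AuroraClusterSketch:
    def __init__(self, master: str, replicas: list):
        self.master = master
        self._rotation = itertools.cycle(replicas)  # connection-level rotation

    def writer_endpoint(self) -> str:
        # Always resolves to whoever is currently the master.
        return self.master

    def reader_endpoint(self) -> str:
        # Each NEW connection lands on the next replica in rotation;
        # statements within one connection stay on that replica.
        return next(self._rotation)

    def failover(self, new_master: str) -> None:
        # After failover, the writer endpoint transparently follows.
        self.master = new_master
```

Clients keep using the same two DNS names before and after a failover; only the instances behind them change.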

Now, if we go deeper into the features, you'll recognize a lot of what I already told you: backup and recovery with automatic failover, isolation and security, industry compliance, push-button scaling with auto scaling, and automated patching with zero downtime. So it's kind of cool; that back-end work, advanced monitoring, and routine maintenance are taken care of for you. And you also get this feature called Backtrack, which gives you the ability to restore data to any point in time. It actually doesn't rely on backups; it relies on something different. But you can always say, "I want to go back to yesterday at 4:00 p.m.," and then, "Oh no, actually, I want to go back to yesterday at 5:00 p.m.," and that will work as well, which is super, super neat. For security, it is similar to RDS because it uses the same engines: we have PostgreSQL and MySQL. So we get encryption at rest using KMS.

We have automated backups, snapshots, and replicas that are also encrypted, plus encryption in flight using SSL. It's the exact same process we saw for MySQL and PostgreSQL if you want to enforce it. And we have also seen authentication using IAM tokens, which is the exact same method we saw for RDS, thanks to the compatibility with MySQL and PostgreSQL. You are still responsible for protecting the instance with security groups, and you cannot SSH into your instance. So Aurora security is, all in all, the exact same as RDS security. Now, there is also an offering called Aurora Serverless, and this is quite awesome. It is an automated database with auto-scaling capability based on your actual usage. It's really, really good if you have infrequent, intermittent, or unpredictable workloads. So any time in the exam you see these kinds of keywords, think Aurora Serverless.

And the beautiful thing is that you don't have to do any capacity planning. It works for you, and it can be a lot more cost-efficient because you pay per second, so you can have a lot of cost savings associated with it. So what does it look like? We have a shared storage volume, and our clients want to access our Aurora database, but it is serverless.

What's going to happen is that on the back end, Aurora instances will be created by Aurora Serverless, and there's a proxy fleet managed by Aurora that our clients connect to, which then routes traffic to our Aurora instances. The beautiful thing is that if we get more load, more Aurora instances will be created for us automatically. And if there is less load, fewer instances will be running, all the way down to zero Aurora instances if there is no usage.
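Here is a toy model of that scale-with-load behavior. The one-instance-per-100-units rule is invented purely for illustration; the only properties taken from the lecture are that capacity grows with load and drops to zero when there is no usage:

```python
# Toy model of Aurora Serverless capacity behavior: scale up under load,
# scale down to zero when idle. The scaling rule here is hypothetical.

def desired_instances(load: int, max_instances: int = 5) -> int:
    if load <= 0:
        return 0  # no usage: scale all the way down to zero
    # Hypothetical rule: one instance per 100 units of load, capped.
    return min(max_instances, (load + 99) // 100)
```

Because you pay per second of running capacity, idle periods with zero instances are where the cost savings come from.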

So it's quite awesome when you think about it. Aurora Serverless really gives us the power of a relational database with a serverless model, because we have no scaling to do, we don't do any capacity planning, and it scales based on demand. OK, so that's it for Aurora Serverless, and now we have Aurora Global. There are two ways to have Aurora span multiple regions. The first way is Aurora cross-region read replicas, which are useful for disaster recovery and very simple to put in place.

You just create a replica in another region, and there you go; that is the simple way. But now there's a newer option called Aurora Global Database, which is the recommended way in the documentation. With it, you have one primary region where all the reads and writes happen, and then you have up to five secondary, read-only regions where the replication lag is going to be less than one second.

You can have up to 16 replicas per secondary region. So that's a lot of replication you can do all around the world, and it will definitely help decrease latency. And if you ever wanted to promote another region for disaster recovery because the main region had a severe outage, you have an RTO (recovery time objective) of less than one minute. That means that in less than one minute, your Aurora database in a secondary region becomes primary and is ready to take writes. So this is really, really fast.

So as a diagram, what would it look like? For example, us-east-1 is our primary region, where we have our main Aurora database and our applications reading from and writing to it. And then we want to define a secondary region in eu-west-1.

So we'll define an Aurora Global Database, which will perform replication with up to one second of lag; this is asynchronous replication in action. And our applications in eu-west-1 can directly read from this database and perform read-only workloads. So that's it for Aurora. It's a very dense subject, I know, but it's really important that you know all of this going into the exam, and I will see you in the next lecture.

6. Aurora Hands On

So let's create an Aurora database. We are now in the new interface; there was an old interface, and you can still switch back to it, but I'll keep the new one so that the video stays relevant for you. So we're going to create an Aurora database. We can either do a standard create to configure everything or an easy create, but obviously we want to configure everything, so we'll start with a standard create. I'll choose Aurora, and then you have to choose whether you want Aurora with MySQL compatibility or PostgreSQL compatibility. These are the only two modes in which you can use Aurora. We'll choose MySQL because it has more options.

But whether you choose MySQL or PostgreSQL, you can see there's a version dropdown, and you can choose the version you want. For this hands-on, I'm going to use MySQL because it has the most Aurora features to demonstrate, and for the version, I'm going to use the MySQL 5.6-compatible one. The reason I do so is that, if you look in between here, for example, we have a database location option with regional and global choices, but if I select a newer version, that feature disappears, so I wouldn't be able to demonstrate as much.

So, if you want to follow along with me, keep in mind that this is not a free hands-on; this is something I have to pay for, because Aurora is not part of the free tier, but you can follow along just to see the options. Choose the Aurora MySQL 5.6 version just to have the option for the database location. Regarding the database location, you can either have a regional Aurora database within a single region, or you can choose a global Aurora database across multiple AWS regions, in which case the writes are going to be replicated to your other regions within one second. And there is a way for you, in case there is a regional outage, to fail over to a different region by promoting that region into its own cluster. We'll keep it as regional for now, because it also shows us a lot of the cool features we can get out of Aurora.

Here we have to choose database features, and as we can see, there are four different modes we can use. Either we have one writer and multiple readers, which is the one I explained to you and is most appropriate for general-purpose workloads, or we can have one writer and multiple readers with parallel query, to improve the performance of analytical queries.

Then you have multiple writers, where you can have several writers at the same time in Aurora; this is for when you have a lot of writes happening continuously. And finally, you have serverless, which is for when you don't know how much Aurora capacity you will need. You have an unpredictable workload; maybe you need a little in the morning and a lot at night, so you need to be more scalable, in which case you would choose serverless, and this would be a great option. Regarding the exam, the ones you should definitely know are the general one and the serverless one.

Okay, so we'll start with the general configuration, because there is more to do there than with serverless. For the general one, we can go with either the Production or the Dev/Test template; these are pre-filled templates that fill in the settings at the bottom. I'll choose Production, and we'll go one by one. For the DB cluster identifier, you can call it whatever you want; I'll call it aurora-db. Then, if I scroll down to the master username, I'll use something I know, for example stephane, and for the password, I'll use "password" just like before. So, password and password here. OK, great. Next I'm going to scroll down, and we have the DB instance size. This is where you choose the performance of your database.

You can choose either memory-optimized classes (the R and X classes), which you can see in this dropdown menu, or burstable instances, which are going to be cheaper and include the T classes. db.t2.small is going to be the cheapest option right now for this demo, so that's what we've chosen. However, if you have a production-type workload, memory-optimized will definitely be better. If you're doing development and testing, db.t2.small is probably the best option in terms of cost savings, but it's still not a free tutorial. OK, now let's talk about availability and durability. We can create an Aurora replica, or reader node, in a different AZ, which is great because if the current AZ goes down, we can fail over to a different availability zone, which gives us high availability.

And this is why it says "multi-AZ deployment." We can either create one or not; regardless, the storage is distributed across multiple AZs. That is an Aurora feature, but this option is more about spreading your Aurora instances across multiple AZs. If you want a multi-AZ deployment, then enable this; I will keep it as is, because it's a good option, but obviously a more expensive one. For connectivity: where do you want to deploy your Aurora cluster in your VPC, and then what do you want in terms of subnets? Do you want it to be publicly accessible, yes or no? I'll leave it at no; we won't connect to it. And then you can use the default security group or create a new one; it's up to you to choose whatever you want. This is not important; we won't connect to this database anyway.

I just want to show you the options. Finally, you have a lot of additional configuration. The DB instance identifier could, for example, be this one; this is great. The initial database name may be aurora, and then you could specify parameter groups; these are not in scope for the exam. You can define a failover priority, but we won't do that. Backups are really great if you want to have snapshots of your database and want to restore from them, so they're great for disaster recovery.

You can set the retention you want for your backups, between one day and 35 days. Then, enable encryption: do you want your data to be secure and encrypted with KMS? This is a great option if you want to make sure that your data is not accessible to anyone without the key. So encryption is something you might want to enable, followed by Backtrack, which allows you to go back in time for your database. If you made some bad commits or transactions, you can undo them, which is a useful feature; we won't enable it right now. Then there's monitoring the database with enhanced monitoring at high granularity, then log exports, and so on. As you can see, there are a lot of different options.

Finally, there's maintenance, for the maintenance windows and version upgrades, which is very similar to what we get in normal RDS. And the last setting is deletion protection, which ensures that we don't accidentally delete this database by just clicking "delete"; there is an extra step to make sure we don't do that. So, now that we've seen all the options: from an exam perspective, the very important ones are going to be around multi-AZ, the fact that you can have one writer and multiple readers, and serverless.

These will be the most important points about Aurora. Okay, so when we're ready, we'll just create the database. And here we go. It took a bit of time, but my Aurora cluster has now been created. As you can see, we have a regional Aurora cluster, with a writer instance and a reader instance. So remember, the writers and the readers are separate.

I'm going to click on this Aurora database to get a bit more detail. As we can see, we have two endpoints: a writer endpoint and a reader endpoint, and we know which is which because the reader one ends in "-ro," which means read-only. So this is the recommendation: use the writer endpoint to write to Aurora and the reader endpoint to read from Aurora, regardless of how many database instances you have.

If you wanted to, you could click on a database instance itself and get its own endpoint to connect to, but this is not the recommended way. What the exam will test you on is that you should select either the writer endpoint to write or the reader endpoint to read. Okay? There are numerous options available in this section; we won't go over them, as we have seen the main ones. Lastly, we can have a bit of fun: on the top right, we can either add a reader, create a cross-region read replica, create a clone, or add replica auto scaling to give us some elasticity.

So I'll name the policy my-scaling-aurora, and then you can select, for example, a target CPU utilization of 60% for your scaling, which looks a lot like what we had for auto scaling groups. We could also specify additional configuration, such as the cooldown periods for scaling, and so on. Finally, there are the minimum and maximum capacities. We'll leave it as is and add the policy, and all of a sudden we have added auto scaling to our Aurora database.

That was really simple, and now we have a fully functional Aurora database. Before finishing this hands-on, if you did create a database with me, please make sure to delete it so you don't spend money. To do so, click on the writer instance and delete it, typing "delete me" to confirm. You have to do the same with the reader: go to Actions > Delete and type "delete me" again. It can take a bit of time.

OK, so now if I refresh, I can see my database has zero instances, but to completely delete it... I cannot do that right now, because deletion protection is on. So I click on "Modify," and at the very bottom of the page, I disable deletion protection. I click "Continue," and then apply it immediately to make sure deletion protection is disabled. Now, if I click on my database and choose Actions, I can delete it. Do I want to take one final snapshot? No, I'm fine. And I acknowledge that I won't recover my data; that's fine. I delete the database cluster, and I'm done. So that's it for Aurora. I hope you liked it, and I will see you in the next lecture.

7. ElastiCache Overview

Now let's get an AWS ElastiCache overview. The same way RDS gives you a managed relational database, ElastiCache gives you a managed cache, in this case Redis or Memcached. Redis and Memcached are essentially in-memory databases: they run on RAM, so they have really high performance and really, really low latency. Their role is to help reduce the load on databases by caching data, so that read-intensive workloads read from the cache rather than the database. It also helps make applications stateless by storing state in a common cache. ElastiCache has write scaling capability thanks to sharding, read scaling capability using read replicas, and multi-AZ capability with failover.

Just like RDS, AWS takes care of OS maintenance, patching, optimization, setup, configuration, monitoring, failure recovery, and backups. So basically, it looks a lot like RDS, and there's a very good reason why: it is pretty much the exact same thing, an RDS for caches, and it's called ElastiCache. That's what you should remember. So there is write scaling, read scaling, and multi-AZ. Now, you may be asking, "How does it fit into my solution architecture?" At first, I was puzzled too, and this diagram that I created really helps put things into perspective. Our application communicates with RDS, as we've seen before, but we're also going to include ElastiCache, and our application will first query ElastiCache.

If what we query for is available in ElastiCache, that's called a cache hit. So we have an application, it has a cache hit, and we get the data straight from ElastiCache. In that case, the retrieval is super quick, and RDS doesn't see a thing. But sometimes our application requests data that doesn't exist in the cache; that's a cache miss. When we get a cache miss, our application needs to go ahead and query the database directly, and RDS will give us the answer. Our application should then be programmed to write the result back into ElastiCache.
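The cache-hit/cache-miss flow just described is the classic cache-aside (lazy loading) pattern. Here is a minimal sketch, with a plain dict standing in for ElastiCache and a function standing in for the RDS query (all names are illustrative):

```python
# Cache-aside (lazy loading): check the cache first, fall back to the
# database on a miss, and write the result back into the cache.

cache = {}  # stands in for ElastiCache

def slow_database_query(key: str) -> str:
    # Pretend this is an expensive query against RDS.
    return f"value-for-{key}"

def get(key: str) -> str:
    if key in cache:                      # cache hit: RDS never sees it
        return cache[key]
    value = slow_database_query(key)      # cache miss: query the database
    cache[key] = value                    # write back so the next read hits
    return value
```

The first call for a key is a miss and hits the database; every subsequent call for the same key is served from the cache.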

And the idea is that if another application, or the same application, asks for the same query, well, this time it will be a cache hit. And so that's what a cache does: it just caches data. As a result, the cache will definitely help relieve the load on RDS, particularly the read load. And the cache must include an invalidation strategy, which is up to your application to implement, so that only the most current and relevant data is in your cache. The user session store is another solution architecture that you must know. In this case, our user logs into our stateless application. That means there are a bunch of application instances running, maybe in an Auto Scaling group. And so all of them need to know that the user is logged in. So the process is that the user logs into one of the instances, and then that instance will write the session data into ElastiCache. So this is it. The first instance just wrote the session data into ElastiCache.

Now, if the user hits another instance of our application in our Auto Scaling group, for example, then that instance needs to know that our user is logged in. And for this, it's going to retrieve the session from Amazon ElastiCache and say, "Oh yes, it exists." So the user is logged in, and basically all instances can retrieve this data, making sure the user doesn't have to authenticate every time. So that's another very common solution architecture and pattern. ElastiCache's main purpose is to relieve database load and to share some state, such as the user session store, on common ground so that all applications can be stateless and retrieve and write these sessions in real time. So now let's talk about the difference between Redis and Memcached. Redis is going to have a multi-AZ feature. That means that you can have it in multiple Availability Zones with an automatic failover feature.
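The session-store pattern above can also be sketched briefly. Again this is illustrative only: the shared dict stands in for ElastiCache, and each function call plays the role of a different application instance behind the load balancer; all names are hypothetical.

```python
# Minimal session-store sketch: a shared store lets every app
# instance see the same login sessions, keeping the app stateless.

session_store = {}      # stands in for ElastiCache

def login(instance, user, token):
    # Whichever instance handles the login writes the session
    # into the shared store.
    session_store[token] = user

def is_logged_in(instance, token):
    # Any other instance can look the session up later.
    return token in session_store

login("instance-A", "alice", "token-123")        # user logs in on instance A
print(is_logged_in("instance-B", "token-123"))   # instance B sees the session
```

The point of the sketch is that `instance-B` never saw the login, yet it can still confirm the session, which is what lets every instance in the Auto Scaling group stay stateless.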

So that means that if one AZ is down, you can fail over automatically to another one. You can improve your read scaling by creating read replicas, and so you have more reads and high availability. And you can enable data durability using AOF persistence: even if your cache is stopped and then restarted, the data that was in the cache before stopping is still available to you, thanks to AOF persistence. It also means that you can back up and restore your Redis clusters. Okay, so if you think of Redis, think of two instances, one being the primary and the second being the replica. And think data persistence, think backup, and think restore. Very, very similar to RDS, I would say. But Memcached is very different. Memcached is going to use multiple nodes for partitioning of data, which is called sharding. It is a non-persistent cache.

That means that if your Memcached node goes down, the data is lost. There are no backup and restore features, and it has a multithreaded architecture. So, if you want to picture Memcached conceptually, it's all around sharding: a part of the cache is going to be on the first shard, and another part of the cache is going to be on the second shard. And each shard is a Memcached node, conceptually speaking.
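The sharding idea just described, where each key lives on exactly one Memcached node, can be sketched as a hash over the key. This is a simplified illustration: real Memcached clients typically use consistent hashing rather than the plain modulo shown here, and the node names are made up.

```python
# Minimal sketch of key -> shard partitioning, as a Memcached
# client might do it. Plain modulo hashing is used purely to
# illustrate the idea; real clients prefer consistent hashing.
import hashlib

nodes = ["memcached-node-0", "memcached-node-1", "memcached-node-2"]

def shard_for(key):
    # Hash the key deterministically, then map it onto one node,
    # so the same key always lands on the same shard.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

print(shard_for("user:1"))   # always the same node for this key
```

The consequence the lecture points out falls straight out of this: if the node that `shard_for` maps a key to goes down, that slice of the cache is simply gone, because nothing is persisted or replicated.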

Okay, so they're very different: Redis is going to have more industrial, RDS-type features, while Memcached is going to be a pure cache that lives in memory, with no backup and restore and no persistence, but with a multithreaded architecture and sharding. So try to remember these differences going into the exam so you can make the right decision: backup and restore, multi-AZ, and read replicas point to Redis, while sharding and a simple multithreaded cache point to Memcached. Okay, well, that's it. I will see you in the next lecture.
