New Domain 2 - Design for New Solutions
28. Automatic Failover with RDS Multi-AZ Deployments
Hey everyone, and welcome back. In today's video, we will be discussing multi-availability-zone-based deployments in RDS. Multi-AZ deployments in RDS not only give you enhanced availability but also better durability, which makes them a natural fit for production database workloads. Typically, in any organisation you look at, if they are running their production database in RDS, they will have a Multi-AZ deployment model there.
Now, to understand how Multi-AZ-based deployment works, let's look at this animation. Assume there is a primary DB over here; this is the master database. What happens in Multi-AZ is that AWS RDS will create one more database in a different availability zone. Let's say the master DB is in Availability Zone 1; another database will be created in a different availability zone, and synchronous replication will happen between the AZ 1 and AZ 2 databases. This one is referred to as the primary database, and this one as the standby database. Now, if the primary database goes down, Amazon RDS will automatically switch to the standby database. And once it automatically switches to the standby, you don't really have to change the endpoint of the DB: the endpoint remains the same even after the switch happens.
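To make that concrete, here is a minimal sketch, assuming the pymysql client and a hypothetical endpoint, credentials, and database name, of what endpoint stability means for application code: the application always connects to the same endpoint, and a simple retry loop is enough to ride out the DNS switch to the standby.

```python
import time
import pymysql

# Hypothetical endpoint; the same DNS name is valid before and after failover.
RDS_ENDPOINT = "kplabs-multiaz.abc123.us-east-1.rds.amazonaws.com"

def connect_with_retry(retries=5, delay=5):
    for attempt in range(retries):
        try:
            # RDS repoints this endpoint to the standby behind the scenes,
            # so no configuration change is needed after a failover.
            return pymysql.connect(host=RDS_ENDPOINT,
                                   user="kpadmin",              # placeholder
                                   password="example-password", # placeholder
                                   database="appdb",            # placeholder
                                   connect_timeout=5)
        except pymysql.err.OperationalError:
            # Connection errors during the failover window are transient.
            time.sleep(delay)
    raise RuntimeError("Database unreachable after %d attempts" % retries)
```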
So let me quickly show you in the console. If you have a database, it generally has this endpoint. Let's assume this endpoint is associated with our database in AZ 1. Now, if that database becomes unreachable, or there are issues related to hardware or anything else, RDS will automatically switch over to the standby database in AZ 2, and after the switch the endpoint remains the same. So you don't really have to update your application's configuration to point to the standby database. This is one great thing about the Multi-AZ-based deployment model. So let's look into how exactly we can set this up. If I go to the databases page, let's start from scratch. Let me create a database based on MySQL and click next. Now, there are three use cases, and this is very simple to understand. One is Dev/Test MySQL; it will not have Multi-AZ, it will not have provisioned IOPS, and so on. It's just for testing. Then you have Production; if you see Production, you have the Multi-AZ deployment feature, you also have provisioned IOPS, and there is also Production based on Amazon Aurora. Let's select Dev/Test for the time being, and I'll click next.
And if you go a bit lower, you see that under the Multi-AZ deployment, the option is no; that means this is not a Multi-AZ-based deployment. If you go down further, the estimated monthly cost is only 14 USD. Now, if I enable Multi-AZ deployment, the cost doubles: earlier it was 14, now it's 29. The reason the cost doubles is that there is a standby database running as well; that is why you pay roughly twice as much compared to a single-AZ deployment. Once that's done, let's go to the DB instance details. For the DB instance identifier, I'll put kplabs-multiaz; the master username we can give as kpadmin, and let's quickly give the password here. Once you have done that, let's click on next. We'll leave all of these as the default configuration, and we can go ahead and create the database. Now, one important part to remember: it says that your database instance has been created; however, under the usage charges, if you see over here, it says that the following selection disqualifies the instance from being eligible for the free tier. So the Multi-AZ deployment is not part of the free tier, and you will be charged for it.
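For reference, here is a hedged boto3 sketch of the same creation we just did in the console; the identifier, credentials, instance class, and storage size below are placeholders rather than values from the video.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="kplabs-multiaz",    # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",            # placeholder size
    AllocatedStorage=20,                      # GiB, placeholder
    MasterUsername="kpadmin",                 # placeholder
    MasterUserPassword="example-password",    # placeholder
    MultiAZ=True,  # the standby in a second AZ is what doubles the cost
)
```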
So that's all right for the demo; if you are trying it out, you can do it, but make sure you delete your database after the practical. We can click on View DB instance details, and the database is being created. If I go back to the databases page, the kplabs-multiaz status is still "creating". So let's quickly wait for a moment, and we'll resume the video in a minute. Now, the status of our kplabs-multiaz has changed to "backing up". It took around 10 to 15 minutes for this status to appear, and once the backup completes, we'll have the "available" status. In the meantime, let's do one thing: let's go ahead and understand more about the Multi-AZ-based deployment model. We were discussing that if any kind of failure happens to the primary DB instance, RDS will automatically fail over to the standby DB.
So it is important for us to remember under what conditions this failover is triggered. These conditions are listed over here. The first is loss of availability in the primary availability zone. The second is loss of network connectivity to the primary database. The third is a compute unit failure on the primary DB, and the fourth is a storage failure on the primary. It's important to remember that RDS detects these conditions automatically and will automatically recover from the most common failure scenarios by switching to the standby. As soon as RDS detects that there is an issue with the underlying host, for example, it will automatically do the switch; administrator or manual intervention is not required. RDS will do it for you. One important part to remember is that when the switch happens, latencies will spike up, but only for a short amount of time. And these failovers happen quite often: I have worked in organisations with 50 servers and a lot of databases, and this automatic switching happened quite a lot.
So it's not safe to assume that a single primary DB will just keep working flawlessly forever; the failure conditions do occur. So let's do one thing. Let's go back to the RDS console, and it seems that our database is available. Now, we were already discussing that there are multiple conditions under which a failover would occur. Whenever RDS fails over, it will typically record information about that failover within the event notifications over here. So within these events, you will typically see whether a failover has happened. One quick way to force a failover is to reboot. So we'll select the RDS instance, and from the actions, we'll do a reboot. You have an option that says "Reboot with failover", and we'll go ahead and do the reboot. So let me quickly refresh the page. It has now been around two minutes; I generally pause the video during that time. Now that you see the status is "available" again, the failover should have happened.
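That console action can also be scripted. Here is a hedged boto3 sketch (the instance identifier is a placeholder); ForceFailover=True promotes the standby instead of simply rebooting the primary in place.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Equivalent of the console's "Reboot with failover" action.
rds.reboot_db_instance(
    DBInstanceIdentifier="kplabs-multiaz",  # placeholder
    ForceFailover=True,
)
```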
Now, if you click on events here, you would typically see events related to that failover. Here it says that the Multi-AZ instance failover has started, and the last message is that the Multi-AZ instance failover has completed. Checking this matters because the endpoint does not change: a failover in the middle of the night just shows up as a short period of increased latency and certain connection errors. So if you want to see whether that was an application-side issue, a networking issue, or an issue with the DB itself, you can quickly look into the events here and verify whether a failover happened or not. You can also automate this: if you would like to receive an email whenever a failover happens, that can be done with the help of event subscriptions.
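Before we set up a subscription, here is a small boto3 sketch of the same event check done programmatically instead of through the console's Events page (the instance identifier is a placeholder).

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

events = rds.describe_events(
    SourceIdentifier="kplabs-multiaz",  # placeholder
    SourceType="db-instance",
    Duration=60,  # look back 60 minutes
)
for e in events["Events"]:
    # Messages such as "Multi-AZ instance failover started" appear here.
    print(e["Date"], e["Message"])
```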
So if you click on event subscriptions, you can give an event subscription name and attach an SNS topic, which can deliver the notification by email or SMS. Under the source type, let me select instances; I can specify a particular DB instance, or I can use the wildcard for all instances. Let me select all instances, and within the event categories, I can include either all event categories or specific ones. If I click on specific event categories, you have multiple categories over here. One of the interesting ones is the failover category, because that is one of the times when you will see certain errors within your application. So any time a failover or a failure happens and you would like to receive an SMS or an email, the event subscription is what will help you there.
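Here is a sketch of the same subscription via boto3, assuming an SNS topic already exists (the topic ARN below is a placeholder). Filtering on the "failover" category means you are notified only when failovers occur.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_event_subscription(
    SubscriptionName="rds-failover-alerts",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",  # placeholder
    SourceType="db-instance",
    EventCategories=["failover"],
    # Omitting SourceIds subscribes to all instances, like the
    # "all instances" wildcard in the console.
)
```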
So with this said, let's look into some of the important pointers. We already know that in case of any infrastructure failure, RDS will automatically perform a failover to the standby instance. And since the endpoint remains the same after a failover, we do not really need to modify anything within the code. Multi-AZ deployment is supported for MySQL, MariaDB, PostgreSQL, and Oracle. One very important part to remember is that Multi-AZ is based on synchronous replication, while read replicas are based on asynchronous replication; many times in exams, you might get a question based on this. So when it comes to asynchronous replication, what really happens is, as we already discussed, that whatever write an application performs happens on the primary DB first. Then, after it has been committed to the primary DB, the changes are replicated to the various slaves, or read replicas. From a read replica, you can run a backup, an ETL job, or various other workloads. All right? So this is asynchronous replication over here.
The write will first be committed to the master, and then it will go to the slave. So depending on the network or the hardware resources, there will be a replication lag; it's not like you will have millisecond-level updates. There will always be some replication lag. Now, coming to synchronous versus asynchronous replication: since Multi-AZ uses synchronous replication, a write is not committed unless and until it is written to both replicas. Here, the two replicas are the standby instance and the primary instance. So whenever a write happens, it is applied to both of them, and only then is it committed. Since this is synchronous replication, you get higher durability, but you might also see higher transaction latency. When it comes to asynchronous replication, since writes do not happen in real time across both the master and the read replica, it can happen that your read replica falls behind the master. How far behind can be determined with the help of the replica lag metric.
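As a hedged sketch of that last point, replica lag for an RDS read replica can be read from CloudWatch with boto3 (the replica identifier below is a placeholder).

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")

stats = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "kplabs-replica"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,            # 5-minute datapoints
    Statistics=["Average"],
)
for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Average"], "seconds behind the master")
```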
29. Introduction to NoSQL Databases
Hey everyone, and welcome back. So today we will be speaking about the basics of a NoSQL database. Now, in the previous lectures, we laid a good foundation for relational databases and the features and functionality that they provide. That will give us a good foundation for understanding the NoSQL-based database types. So let's go ahead and understand what this basically means. In definitive terms, a NoSQL database is also referred to as a non-relational or non-SQL database, and it provides a means of storage and retrieval of data that is modelled by means other than the tabular representation used in relational databases. So this is a very simple understanding of what a NoSQL database really means.
So when we speak about a relational database, it is structured in a tabular form: you have one table, and inside the table you have three columns over here. This is what it means for a relational database to be based on a tabular representation. A non-relational database, however, is not really based on this kind of structure.
The second point is that NoSQL databases have gained a huge amount of popularity because they are simpler to use, more flexible, and can achieve performance that is very difficult to match with traditional relational databases. We will be speaking about this in the relevant section whenever necessary. But when you compare a relational database and a NoSQL database, the representation looks like this: in the relational database, again, you have columns. However, when you consider a NoSQL database, one specific type works on a key-value basis. So you have a key over here, which is age, and the value is 26. Again, you have a key of interest with the value astronomy, and a key of name with its value. So this is what is called a "key-value store." You don't really have a predefined structure like name, age, or interest; that predefined structure is not present.
We will be discussing this in the later part of this chapter. But before that, let's look into the advantages of a NoSQL database before we jump into the practical. There are a lot of advantages of a NoSQL database over a standard relational database. One is that it is schema-free. The second is horizontal scaling: it can really scale horizontally, and this is one of the big advantages of a NoSQL database. The third is easy replication. And fourth, it can manage huge amounts of data. The last thing we have to remember is the data formats: NoSQL supports various formats that a SQL database does not. First is the document database. The second is the graph store. The third is the key-value pair.
This is something that we were discussing in the last lecture. So you have a key-value pair over here, and the last is wide-column stores. For the time being, just remember these; we will talk about each of them in detail whenever necessary. In this course, however, we will primarily focus on the key-value stores. So let's do one thing. Let me show you one of the NoSQL-based implementations from Amazon, which is DynamoDB, and we'll focus on the first point, which is schema-free, to understand exactly what "schema-free" basically means. So I have DynamoDB over here. DynamoDB is basically a non-relational database, and here I have created a table called Test. If you see over here, I have a few columns, or a few items, present. What "schema-free" basically means is that I don't really have to specify beforehand what the schema or the columns will look like.
So here you have a name column, an age column, and an interest column. Now, if for one user I want to have one more column, let's say "college", and I don't want that column for the second user, then in a relational database I would have to put a null value over there. However, for NoSQL, we don't really have to do that. Let me show you. Let's assume I want to put a college value for the first item over here, in the first row, and for the second row, I really don't want to put any college. So in this case, I have two rows. I'll open the first one up, and here you see we have a key-value-based store; you don't really have fixed columns or rows present. So let's do one thing. I'll click on insert, insert a string, name it college, and let me put the value as "SPIT", okay? And I'll click on "Save". So now you see a college value over here, but in the second item, the college value is not there at all. The list view shows a college column simply because it is easier to view; when I open up the second item, there is no college field at all. So this is what is called a schema-free database. You don't really have to specify a specific schema beforehand for NoSQL databases.
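Here is a minimal boto3 sketch of the schema-free behaviour we just saw (the table name and attribute values are placeholders): two items in the same table with different attribute sets, with no NULL padding required.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Test")  # placeholder table

# The first item carries a "college" attribute...
table.put_item(Item={"name": "user1", "age": 26,
                     "interest": "astronomy", "college": "SPIT"})

# ...the second simply omits it; DynamoDB stores no placeholder value.
table.put_item(Item={"name": "user2", "age": 30,
                     "interest": "music"})
```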
So this provides a great advantage, specifically for unstructured data. Unstructured means you don't really have a specific structure for the data. When you talk about a relational database, you need to have a specific structure, like: the data should have a name, an age, and an interest. That structure ideally has to be defined beforehand. In cases where there is no clearly defined structure, you can use a NoSQL database, because it provides great flexibility. And this is what is called a schema-free implementation. Now, there are a lot of advantages that a NoSQL database provides. We will be speaking about them in the relevant lectures, but for the time being, I hope you understand what a schema-free database really looks like and what a key-value store really looks like. So in the next lecture, we'll go ahead and understand what DynamoDB is and explore lots of other features as well.
30. DynamoDB - Read & Write Units
Hey, everyone, and welcome back. So, continuing our journey with the DynamoDB section, today we will be speaking about the read and write units. Understanding this topic is very important, because the pricing and performance of a DynamoDB table depend on the values that you put here. So let's go ahead and understand more about it. Now, in DynamoDB, whenever we create a table, if you remember, we specify the capacity requirements for read and write activity. Let me just show you, to revise things. When you create a table, if you do not use the default settings, you have to specify the read capacity units and the write capacity units.
Depending on the value that you put in each of them, the pricing differs significantly, and this is the reason why you should specify proper read and write capacity units. This also affects performance, because if you have specified a lower value here and the application tries to read more data, then throttling will happen. So it is quite important to understand these specific values. Now, the second point is that whenever we specify these throughput values in advance, it basically helps DynamoDB reserve capacity in advance, so as to ensure consistent, low-latency performance.
Now, one simple example I can give for this point is a pen. Let's assume I want to buy 50 of these pens. What I'll do is go to a shopkeeper and say, "I need 50 of these pens tomorrow morning," so the shopkeeper can get ready. Even if he does not really have 50 units of these pens, he will contact the reseller, order 50 units, and tomorrow morning, when I arrive, he can give me the pens in a timely fashion. Similarly, in DynamoDB, it is good if we specify beforehand what read and write units we might need. For smaller read and write capacities, it might not really matter. But if you need a huge number of read and write units, like 500 or 1,000, then specifying this really helps DynamoDB, as well as AWS, to provision things accordingly. So, in DynamoDB, we specify the throughput, which is the overall performance, with the help of read capacity units and write capacity units. Let's go ahead and understand each of these.
One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. This is quite important to remember. The second point is that if an item is larger than 4 KB, then we need more read units. So let's understand with the help of an example: the item is 40 KB in size. How many read units do we need? When we talk about strongly consistent reads, we would need ten, because one read unit covers an item of up to 4 KB. So for one strongly consistent read request per second of a 40 KB item, we have to specify ten read units. However, when it comes to eventually consistent reads, they give twice the performance.
So when you want to read 40 KB of data with eventually consistent reads, the read units required will be half of those for strongly consistent reads, which would be five. So again, this is quite important to understand.
So if the item size is 40 KB and your requests are strongly consistent, you have to specify ten read units. If your requests use eventually consistent reads, since they give twice the performance, the read units you will specify would be five. This is very important to remember, and you should remember the base item size as well, which is 4 KB. So let's go ahead and understand the write capacity unit. One write capacity unit represents one write per second for an item up to 1 KB in size. If an item is larger than 1 KB, then we need more write units. Now, let's assume that your item size is 1.5 KB and each write unit covers 1 KB. How many write units would you require? The answer is two. Since the item size of 1.5 KB is larger than 1 KB, we have to round up and specify two write units; we cannot specify one. Okay? Now, for one more example, let's assume we have to write 5 KB of items per second.
Now, how many write units would you require? Since one write capacity unit covers 1 KB, if you want to write 5 KB of items per second, the total number of units you will require is five. So I hope you now understand the read and write capacity units. Generally speaking, how it works in a real-world scenario is that, as a solutions architect, when a developer asks you for a DynamoDB table, there are two questions you must ask. The first is: what is the approximate read volume you will be doing? And you should also ask whether the reads will be strongly consistent or eventually consistent. Depending on the values the developer gives, you have to design your read capacity units and write capacity units. Now, this has become much simpler since auto scaling came into the picture; earlier, you had to actually design these values up front.
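As a small sketch capturing the arithmetic above (this is just the math, not an AWS API): one RCU covers one strongly consistent read per second, or two eventually consistent reads per second, of an item up to 4 KB; one WCU covers one write per second of an item up to 1 KB; item sizes always round up to the next unit.

```python
import math

def read_capacity_units(item_size_kb, reads_per_sec, strongly_consistent=True):
    # Each strongly consistent read of an item costs ceil(size / 4 KB) units.
    units = math.ceil(item_size_kb / 4) * reads_per_sec
    # Eventually consistent reads are twice as cheap (round up to be safe).
    return units if strongly_consistent else math.ceil(units / 2)

def write_capacity_units(item_size_kb, writes_per_sec):
    # Each write of an item costs ceil(size / 1 KB) units.
    return math.ceil(item_size_kb / 1) * writes_per_sec

print(read_capacity_units(40, 1, strongly_consistent=True))   # 10
print(read_capacity_units(40, 1, strongly_consistent=False))  # 5
print(write_capacity_units(1.5, 1))                           # 2
print(write_capacity_units(1, 5))                             # 5
```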
31. Partition Keys vs Composite Keys
However, when it comes to the partition key plus sort key, you have two attributes. The two attributes in our example are the portal name and the course name. So this is what this video is about.
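Since only this fragment of the lecture survives, here is a hedged boto3 sketch of creating a table with such a composite key, with "portal_name" as the partition key and "course_name" as the sort key (the table name and attribute names are assumptions based on the example mentioned above).

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="courses",  # placeholder
    KeySchema=[
        {"AttributeName": "portal_name", "KeyType": "HASH"},   # partition key
        {"AttributeName": "course_name", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "portal_name", "AttributeType": "S"},
        {"AttributeName": "course_name", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",  # avoids provisioning capacity up front
)
```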
32. Projection Expression
Hey everyone, and welcome back. In today's video, we will be discussing DynamoDB projection expressions. Now, generally, whenever we do an operation on a DynamoDB table, for example, a query-based operation, we get the resulting item with all the attributes in the output. So if one item has ten attributes, then the query operation output would contain all ten attributes. However, for certain use cases, we might want to only see specific attributes in the resulting output, and that can be done with the help of projection expressions.
Now, in theory this can be a little difficult to grasp, so let's do a practical to make it much clearer. If you look into the user table here, it has multiple attributes: a user ID, an order ID, an amount, and a username. If I just want to see the amount, that can be done with the help of a projection expression.
So let's quickly try it out. I have copied a query operation here; this specific query is for the username dafita. The resulting output would be the items related to the dafita username. Here it shows the order ID, the username, and the amount, and this is returned for every matching item present in the DynamoDB table. So you have a username, an order ID, and an amount. Now, what if I just want to see the amount? I don't really need the order-ID-specific information; I just need the amount. How do I do that? In case you want to see just one specific attribute, that is, attribute-level filtering, this is where the projection expression is going to help us. In order to do that, I'll use the same command.
We'll add a projection expression and specify the attribute we need; here, the attribute I need is amount. So, if the question is what total amount the user dafita has purchased, I'll do a projection expression on amount, press Enter, and now, if you see, it only gives me the amount attribute. I can quickly extract these amounts with a script, add them up, and find out the total amount the user dafita has spent purchasing items. So this is where projection expressions really come in handy.
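Here is a sketch of the same query via boto3 (the table name and username are placeholders, and "username" is assumed to be the table's partition key): only the "amount" attribute comes back, instead of every attribute of each matching item.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("users")

resp = table.query(
    KeyConditionExpression=Key("username").eq("dafita"),  # placeholder user
    ProjectionExpression="amount",  # attribute-level filtering of the output
)

# Each returned item now contains only the "amount" attribute.
total = sum(item["amount"] for item in resp["Items"])
print("total purchases:", total)
```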