
AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course

The complete solution to prepare for your exam with the AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course. The course contains a complete set of videos that will provide you with thorough knowledge of the key concepts. Top-notch prep including Amazon AWS Certified Solutions Architect - Professional exam dumps, study guide, and practice test questions and answers.

134 Students Enrolled
235 Lectures
10:01:00 Hours

AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course Exam Curriculum

1. Getting started with the course: 1 lecture, 00:04:00
2. New Domain 1 - Design for Organizational Complexity: 18 lectures, 02:22:00
3. New Domain 2 - Design for New Solutions: 121 lectures, 17:06:00
4. New Domain 3 - Migration Planning: 9 lectures, 01:29:00
5. New Domain 4 - Cost Control: 9 lectures, 01:20:00
6. New Domain 5 - Continuous Improvement for Existing Solutions: 71 lectures, 10:01:00
7. Exam Preparation Guide: 6 lectures, 01:39:00


About AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) Certification Video Training Course

The AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course by Prepaway, along with practice test questions and answers, study guide, and exam dumps, provides the ultimate training package to help you pass.

New Domain 2 - Design for New Solutions

37. DynamoDB Streams

Hey, everyone, and welcome back. In today's video, we will be discussing DynamoDB streams. So, DynamoDB streams are essentially the time-ordered sequence of item-level changes made within the DynamoDB table by you or your application. Now, basically, this allows a lot of use cases to be done in a much easier manner, like continuous analytics, real-time notifications, and various others.

Now, I'm sure that from this definition alone, DynamoDB streams can be difficult to understand. So we'll jump right into the practicals and investigate what DynamoDB streams are all about. I'll go to the DynamoDB console and click on "Create table." I'll give the table name as KP Labs, and for the partition key, I'll use the course name.

And I'll go ahead and create the table. All right, so the table is created, and within the table, if you look at the stream, it is currently not enabled. So we'll click on "Manage stream." There are various stream view types over here. One is keys only, the second is new image, the third is old image, and the fourth is new and old images.

I'll select new and old images. So, basically, it will record any changes I make to the table as well as any new entries. Let's assume that I have an item within my DynamoDB table and I modify it; the stream will remember what the key and the value of the older item were, as well as what the new, modified value is. So that is what new and old images are all about.

Again, you'll understand it better as we get to the practical part. I'll click "Enable" for now, and it gives us the stream ARN. Perfect. So, the next step will be to create an IAM role, because we want the modification details to be stored within CloudWatch.

So I'll click on Roles and create a role. The role type would be Lambda. For the permissions, let me quickly just give administrator access to ease things out, and I'll name the role "administrator lambda." I'll click on "Create role." Perfect. Now that I have created the role, the third part is to go to Lambda and click on "Create a function." This time, we'll select a blueprint, and within the blueprints, I'll search for DynamoDB.

The first one that comes up is the dynamodb-process-stream blueprint, so I'll select that and name the function KP Labs 3. For the role, I'll choose an existing role; the existing role name would be "administrator lambda." For the starting position, I'll simply select "Trim horizon." I'll select "Enable trigger" and proceed to create the function. And this is the function code.

We'll ignore this part for now; I will just create the function. So we have a Lambda function that gets triggered whenever a modification is made to the DynamoDB table, and it will store the output in the CloudWatch log group. So what I'll do is, while I'm in my DynamoDB table, I'll click on "Create item." Within the course name, I'll say AWS Developer Associate, and I'll click on Save. Perfect.
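For reference, the function created from the blueprint does little more than loop over the incoming stream records and print them, which is why the output ends up in CloudWatch Logs. Here is a minimal sketch along those lines (not the exact blueprint code, just an illustration of the idea):

# Minimal sketch of a Lambda handler for DynamoDB Streams, similar in spirit
# to the dynamodb-process-stream blueprint. Anything printed here ends up in
# the function's CloudWatch log group.
import json

def lambda_handler(event, context):
    records = event.get('Records', [])
    for record in records:
        # eventName is INSERT, MODIFY, or REMOVE
        print(record['eventName'])
        # With the "new and old images" view type, MODIFY events carry both
        # OldImage and NewImage; INSERT events carry only NewImage.
        print(json.dumps(record['dynamodb']))
    return 'Successfully processed {} records.'.format(len(records))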

So I have saved a new item. Let me append a few more attributes here. I'll add a launch date, set it to August, and click on Save. Basically, any time you make a modification or add a new item (let me add one more item; I'll say "AWS Ops Administrator" and click on Save), all of these changes, whether you add a new item or modify an older one, will be captured and saved within the CloudWatch log group.

So I'll quickly go to CloudWatch, and there should be a log group named after your Lambda function. If I go to Logs, I can see the KP Labs log group over here. Note that it takes a little time, roughly 10 to 15 seconds, for new entries to show up.

So let's do one thing. I'll go to the AWS Ops Administrator item here, append a new attribute called "release date," and set it to December. Okay, so we created this new item and then changed some of its associated attributes. Now, within CloudWatch, if I click on this stream, you can actually see that there have been a number of inserts and modifications.

If I just click on a MODIFY record, it gives me the timestamp, and below that, the actual item that was modified. Within this, you have the AWS Ops Administrator item. You have the old image here; the old image is what was there before the modification was made, so earlier the item had only the course name, AWS Ops Administrator. Then there is something called the new image, and within the new image, you have the course name plus what was modified: the release date, which was added as part of the second iteration. So this is what the old image and the new image are all about.

So, if you recall, when we enabled the stream during the DynamoDB setup, we selected the new and old images view type, and this is exactly what it gives you. Now let's look at an example use case and understand where DynamoDB streams would really help.

Whenever a new item gets added to the DynamoDB table, we already know that it will be part of the stream, and as a result it can also land in CloudWatch. Now, this DynamoDB stream can trigger a Lambda function that is associated with an SNS topic, and the SNS topic would basically carry the content of the stream record. So here it says that this is a bark from the Woofer social network.

So now, any time someone updates a message in the DynamoDB item, an SNS notification would again be created with the updated message, and it would be sent to email, Slack channels, or other endpoints. This is one of the use cases where DynamoDB streams can be used. Again, there can be many, but this is a simple use case for us.
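To make this use case a bit more concrete, here is a minimal sketch of a stream-triggered Lambda that forwards each insert or modification to an SNS topic. The topic ARN, subject line, and error handling are placeholders for illustration only:

# Sketch: a DynamoDB-stream-triggered Lambda that publishes changes to SNS.
import json
import boto3

sns = boto3.client('sns')
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:woofer-barks'  # hypothetical topic

def lambda_handler(event, context):
    for record in event.get('Records', []):
        if record['eventName'] in ('INSERT', 'MODIFY'):
            new_image = record['dynamodb'].get('NewImage', {})
            # The subscribers of the topic (email, Slack webhook, etc.)
            # receive the updated item as the message body.
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject='New bark from Woofer',
                Message=json.dumps(new_image),
            )
    return 'ok'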

38. Global Secondary Index and Local Secondary Index

Hey everyone, and welcome back. In today's video, we will be discussing the global secondary index and the local secondary index in DynamoDB. A global secondary index basically allows us to query the data based on any attribute that is part of the table. Generally, whenever we create a table, we have a partition key and a sort key, and we query the data based on those. With a GSI, however, the partition key and the sort key can be different from those of the base table.

So, for example, if you look into the table, you have the partition key as the user ID and the sort key as the game title. All right? Now if you look into the global secondary index here, the partition key is the game title and the sort key is the top score. So it is completely different from that of the base table. Now, this proves to be an advantage in various situations. So, for example, let's say that you want to sort based on the top scores here.

So what you can do here is recognize that you don't really need all the other data, like the date, time, wins, losses, and various other attributes that might be present. You only need a certain set of data. So you can create a global secondary index based on just the game title and the top score, and that's about it. The base table's partition key would still be projected into the index, but the only two attributes that are primarily important here are the game title and the top score. On top of that, the results within the index can be ordered by top score as well, by setting ScanIndexForward to false. All right?

So basically, if you set ScanIndexForward to false, you will get the scores in descending order, so whichever score is the highest is the one you see at the top. It becomes much easier, and it also helps overall performance, because otherwise you would have to scan the entire table. Instead of that, you can just query the global secondary index with ScanIndexForward set to false, and you will get the results immediately. So this is what the global secondary index is all about. Now, you also have a local secondary index. A local secondary index essentially keeps an alternate sort key for a given partition key value. Within a local secondary index, you cannot have a different partition key; the partition key has to be the same as that of the base table. The only thing that differs is the sort key.
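As a rough illustration of the leaderboard query described above, here is a minimal boto3 sketch. The table, index, and attribute names are assumed for illustration and mirror the diagram, not a real deployment:

# Sketch: query a hypothetical GameTitleIndex GSI for the top scores of one
# game, highest score first, using ScanIndexForward=False.
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.query(
    TableName='GameScores',
    IndexName='GameTitleIndex',               # GSI: partition key GameTitle, sort key TopScore
    KeyConditionExpression='GameTitle = :g',
    ExpressionAttributeValues={':g': {'S': 'Meteor Blasters'}},
    ScanIndexForward=False,                    # descending order by the sort key (TopScore)
    Limit=10,                                  # top 10 scores only
)
for item in response['Items']:
    # Table key attributes (UserId) are always projected into the index.
    print(item['UserId']['S'], item['TopScore']['N'])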

So you can have multiple sort keys based on your requirements using local secondary indexes, whereas with a global secondary index you can have a different partition key as well as a different sort key. If you look at this diagram here, the primary partition key is the forum name, and the sort key is the subject. With a local secondary index, the primary partition key remains the same: the forum name is something you cannot change. However, you can change the sort key, for example to the last post date and time instead of the subject. Now, let me quickly show you a few things. Let's go to DynamoDB and create a table.

I just want to show you how you can create a global secondary index here. Let's call it a demo table. For the primary key, let's use the same user ID as shown in the diagram, and let's add a sort key; the sort key here is the game title. All right, let's not use the default settings. Now, within the secondary indexes, let's create a new index where you can have a different partition key. So for the index partition key, let's use the game title.

And we can use a different sort key here; let's call it "top score." All right. Within the projected attributes, you can even choose keys only, which can also improve performance if needed. So you can go ahead and add the index, and now you can see that it has been added.

Within the type, it clearly states that this is a global secondary index. Now let's do one thing and also see how we can add a local secondary index. We have discussed that for local secondary indexes, the partition key has to be the same as that of the base table. Within the base table, the primary key here is the user ID, so I keep "User ID" here and add a new sort key. As soon as you specify a new sort key, you'll notice that you have the option of creating it as a local secondary index. So you can add a different sort key here; let's call it "last post date time," and you can go ahead and add the index. Now you see that its type is local secondary index. There are certain important pointers that you should remember, specifically when it comes to the global secondary index.

The first is that whenever you create a global secondary index on a provisioned-mode table, you must specify read and write capacity units for the expected workload on that specific index. The provisioned throughput settings for a global secondary index are separate from those of the base table. So basically, coming back to our console, if you go a bit down to the provisioned capacity here, you see the provisioned capacity for the table is five read and five write capacity units, and the index has its own provisioned capacity. So again, you have five here as well, but it is separate from that of the base table.

If you just deselect the auto-scaling part, you can set a different provisioned capacity for the table and for your global secondary index. Also, a query operation that you make on a global secondary index consumes read capacity units from the index and not from the base table. So if you're making, let's say, a read operation against a global secondary index, the RCUs will be consumed from the GSI and not from the base table.
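The same separation of throughput is visible when you create the table through the API. Below is a minimal sketch with illustrative names and capacity values (not the exact console defaults) showing that the GSI carries its own ProvisionedThroughput block:

# Sketch: provisioned-mode table with a GSI that has separate throughput.
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='GameScores',
    AttributeDefinitions=[
        {'AttributeName': 'UserId', 'AttributeType': 'S'},
        {'AttributeName': 'GameTitle', 'AttributeType': 'S'},
        {'AttributeName': 'TopScore', 'AttributeType': 'N'},
    ],
    KeySchema=[
        {'AttributeName': 'UserId', 'KeyType': 'HASH'},
        {'AttributeName': 'GameTitle', 'KeyType': 'RANGE'},
    ],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    GlobalSecondaryIndexes=[{
        'IndexName': 'GameTitleIndex',
        'KeySchema': [
            {'AttributeName': 'GameTitle', 'KeyType': 'HASH'},
            {'AttributeName': 'TopScore', 'KeyType': 'RANGE'},
        ],
        'Projection': {'ProjectionType': 'KEYS_ONLY'},
        # Throughput for the index is specified, consumed, and billed
        # separately from the base table's throughput.
        'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    }],
)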

39. S3 - Cross Region Replication

Hey everyone, and welcome back to the Knowledge Portal video series. Today's topic is cross-region replication. Within the bucket properties, we have slowly been covering a lot of things like versioning, lifecycle policies, et cetera, and today we will be speaking specifically about the cross-region replication feature.

Now, if you remember from the previous lecture, we were speaking about the durability concept: if the region itself goes down, then, independent of the availability and durability that AWS offers, your object will not be accessible. So in this case, if your objects are, let's assume, stored in the US West region, you can replicate them to one more region, like Mumbai. In that scenario, even if the entire region goes down, you still have the same objects in another region that is geographically separate.

Now, along with this, there is one important thing to remember as far as S3 is concerned: by default, the objects that you create in a bucket in a specific region never leave that region. So if this is a bucket in the Oregon region, any objects you create inside it will, by default, never leave Oregon. That is the default behaviour. In order to demonstrate cross-region replication, let's go ahead and create two buckets. The first one I'll name KP Labs region 01; in our case, let's have Oregon as its region, and I'll click on Create. Okay. Along with that, I'll create one more bucket, KP Labs region 02.

And this time, I'll create it in Mumbai and select Create. Perfect. So now we have two buckets created in two different regions. What we will be demonstrating in today's scenario is that when we upload some objects to the first bucket, the same objects will be replicated to the second bucket, which is present in a different region. The first important thing to remember is that cross-region replication requires versioning to be enabled; this is mandatory.

So the very first thing that we'll do is enable versioning in both buckets, because this is one of the mandatory requirements. Perfect. Now I'll select the first bucket, which is in the Oregon region. I'll go to Management and then Replication, and I'll click on Add rule. You'll notice this is the cross-region replication configuration.

The source will be all the contents within this bucket. I'll select Next. Now it is asking for the destination bucket. You can choose a destination bucket in your own AWS account or in a different AWS account. In our case, it will be the same AWS account, and I'll select the bucket name, which is region 02. You can also change the storage class for the replicated objects.

This is again a great feature. If you are storing all of the objects in the source S3 bucket with the Standard storage class, then in the destination bucket where your objects are being replicated you can choose Standard-IA or Reduced Redundancy to save money. So let me select Standard-IA for our demo purposes. Now you need to select the IAM role; I'll click on "Create new role."

What this role basically does is allow S3 to transfer the objects to the destination bucket. I'll click on "Save." Let's wait. Perfect, our replication rule has been created. So let's try this out. Let me go here and upload a file; let's upload the finance file again, and I'll select Upload. Perfect. So this file has been uploaded to this bucket.
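For those who prefer to script this, here is a minimal sketch of the same setup with boto3: versioning is enabled on both buckets, and a replication rule changes the storage class to Standard-IA on the destination side. The bucket names and the role ARN are placeholders for illustration:

# Sketch: cross-region replication configuration via the S3 API.
import boto3

s3 = boto3.client('s3')
SRC, DST = 'kplabs-region-01', 'kplabs-region-02'  # hypothetical bucket names

for bucket in (SRC, DST):
    # Versioning is a mandatory prerequisite for replication.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={'Status': 'Enabled'},
    )

s3.put_bucket_replication(
    Bucket=SRC,
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',  # hypothetical role
        'Rules': [{
            'ID': 'replicate-everything',
            'Status': 'Enabled',
            'Prefix': '',                      # apply to all objects in the bucket
            'Destination': {
                'Bucket': 'arn:aws:s3:::' + DST,
                'StorageClass': 'STANDARD_IA',  # replicated copies land in Standard-IA
            },
        }],
    },
)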

Now let's go to the second bucket, and if you look inside it, the file is present. Let's try a few more things. Let me create a folder; I'll name it "test folder" and click on "Save." Now, if I go to the second bucket and click on refresh, you will see the contents getting replicated.

One more thing you will see over here: for the objects that we are uploading to the source bucket, the storage class is Standard. However, for the replicated objects, the storage class is automatically changed to Standard-IA.

So this is the basic information about cross-region replication. One important thing to remember is that if you enable replication on an existing bucket that already has contents, the older contents will not be replicated; only the new objects that you upload afterwards will be replicated. So this is yet another important thing to remember. That's it for this lecture. I hope this has been informative for you, and I look forward to seeing you in the next lecture.

40. Disaster Recovery Models

Hey everyone, and welcome back to the Knowledge Portal video series. In today's lecture, we will primarily be speaking about disaster recovery techniques. What this basically signifies is: if a disaster occurs, what are the ways in which we can recover our infrastructure within a specific amount of time? When it comes to disaster recovery techniques, two of the most important factors are the RTO and the RPO, and there are various disaster recovery designs that a solutions architect can implement.

Now, the design that can be implemented for disaster recovery directly depends on how quickly we want to recover from a disaster. Let's assume we have a website in a single availability zone. If that availability zone goes down and the website is a low-priority one that isn't that important, then we don't really have to worry about designing a multi-AZ-based architecture; that would just lead to more costs. However, if we want even a single availability zone failure to have no effect on the performance of our website, the disaster recovery design must be very different. So when we talk about design, there are four broad approaches in which we can design our architecture for disaster recovery.

One is the simple backup-and-restore-based strategy. The second is the pilot light. The third option is warm standby. The fourth option is multi-site. Again, one important thing to remember is that whichever technique we choose, it comes with its own implications for how fast we can recover, for performance, and for cost as well as complexity. So let's go ahead and understand more about each one of them. The first is backup and recovery. Backup and recovery is a very simple and cost-effective method that requires us to regularly take backups of our data and store them in services like S3, so we can restore them when disaster strikes. This is a very simple technique, and I still remember a lot of my friends who have their own blogs.

These are personal blogs, and they cannot really afford a multi-AZ-based architecture because that would lead to more complexity, which in turn leads to higher costs. So what they do is go with simple backup and recovery: they take a database dump every day and store it in S3, and if the database gets corrupted or something goes down, they pull the dump from S3 and recover the blog. That is a very simple backup-and-recovery setup. For on-premises servers that hold a huge amount of data, typically in the tens of terabytes, they can make use of technologies like Direct Connect or Import/Export to back up their data to AWS.
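To make the blog example concrete, here is a minimal sketch of such a daily backup job. The dump command, bucket name, and paths are placeholders and assume a MySQL-style blog database; restoring simply means downloading the latest dump from S3 and replaying it:

# Sketch: daily database dump pushed to S3 (backup-and-restore approach).
import datetime
import subprocess
import boto3

BUCKET = 'kplabs-blog-backups'  # hypothetical bucket
dump_file = '/tmp/blog-{}.sql'.format(datetime.date.today().isoformat())

# Take the database dump (assumes a MySQL-style database and local credentials).
with open(dump_file, 'wb') as f:
    subprocess.run(['mysqldump', '--all-databases'], stdout=f, check=True)

# Store the dump in S3 so it survives loss of the server itself.
boto3.client('s3').upload_file(dump_file, BUCKET, 'dumps/' + dump_file.split('/')[-1])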

This is important to remember because many organizations have a huge amount of data on premises and cannot really use their internet connection for backups; if you don't have a very good internet connection, backing up terabytes of data will be a huge pain. In order to back up such large amounts of data, there are various options. One is Direct Connect, which is like a dedicated connection to AWS. The second is Import/Export, which you can use to ship the data directly. Don't worry; we'll be speaking about each of them in great detail in the relevant upcoming sections. So, this is the first approach. The second approach is the pilot light. In a pilot light approach, we essentially keep a minimal version of the servers in the backup region, either in a stopped state or in the form of AMIs.

So let's assume that this is the primary region where your web server, your app server, and the DB server are running. As part of the pilot light, you have the same setup in the backup region, but the servers are in a stopped state. As you can see, the web server is in a stopped state, as is the app server. The database, however, is continuously mirroring data; this is one important thing to remember. Whenever a disaster strikes, you can start these servers, and your website will be up and running. That is one variation. The other variation is to keep only the AMIs in the backup region. Whenever the primary region goes down, since the AMIs are present in the second region, you can launch instances from the AMIs, and the website will be up and running.

So this is the pilot light. As you can see, pilot light is not a very fast solution for getting the website back up and running, but it does provide good disaster recovery because all of the servers exist in a different region. The third approach is warm standby, where the servers are actually running. The difference between pilot light and warm standby is that in warm standby the servers are constantly running, but as a minimal version. When a disaster happens, the servers are scaled up for production. Let's assume the production server has 4 GB of RAM; in standby, this might be a 1 GB RAM server behind an elastic load balancer. If disaster strikes, we can quickly increase the size of our servers, and our application will be up and running. One important distinction between warm standby and pilot light is that in pilot light, the servers do not have to be in a running state.

In pilot light, it might even be that you just have the AMI of the web server and the AMI of the app server, and whenever a disaster strikes, you launch the servers from the AMIs. In warm standby, however, you must have the servers in a running condition. That is the difference: you cannot rely only on AMIs with no servers operational; you should have the servers running, but at their minimum size. So this is warm standby. The last approach is multi-site, where you have a complete one-to-one mirror of your production environment.

So if production is a 4 GB RAM server, the backup server should also have 4 GB of RAM; it is an exact replica of the production environment. As far as cost is concerned, multi-site will cost you the most, but it will also allow you to recover from a disaster in the least amount of time. So these are some of the ways in which you can design a disaster recovery solution. Remember that each technique comes with its own cost and its own level of complexity. Whichever technique you choose, make sure that you also test things out. It should not happen that you have a multi-site setup, but when you switch to the backup servers in the event of a disaster, those servers are either not running or are experiencing problems. So you need to do a lot of testing. I still remember that in one of the organisations I worked with, we did testing every two weeks: we would switch from one region to another and see whether everything was working perfectly or not.

So the entire production traffic is migrated from the primary region to the disaster recovery region, and we actually see whether everything is working perfectly or not. This is a nice way to make sure that when an actual disaster happens, we have a properly working production environment. Again, there are various AWS services that we can use for disaster recovery, like S3, Glacier, Import/Export, Storage Gateway, Direct Connect, VM Import/Export, Route 53, and many others. Throughout this course, we will be looking into all of these services in the context of disaster recovery and how exactly we can use them for our production environment.

Prepaway's AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) video training course for passing certification exams is the only solution you need.

Free AWS Certified Solutions Architect - Professional Exam Questions & Amazon AWS Certified Solutions Architect - Professional Dumps
Amazon.selftestengine.aws certified solutions architect - professional.v2021-12-28.by.jack.496q.ete
Views: 139
Downloads: 931
Size: 2.87 MB
 
Amazon.examlabs.aws certified solutions architect - professional.v2021-11-25.by.jackson.487q.ete
Views: 168
Downloads: 940
Size: 2.21 MB
 
Amazon.passit4sure.aws certified solutions architect - professional.v2021-07-27.by.albert.466q.ete
Views: 575
Downloads: 1302
Size: 2.2 MB
 
Amazon.test-king.aws certified solutions architect - professional.v2021-04-30.by.tamar.450q.ete
Views: 600
Downloads: 1321
Size: 2.8 MB
 
Amazon.pass4sure.aws certified solutions architect - professional.v2021-02-26.by.darcey.430q.ete
Views: 422
Downloads: 1257
Size: 2.88 MB
 
Amazon.selftestengine.aws certified solutions architect - professional.v2020-10-07.by.brahim.410q.ete
Views: 1459
Downloads: 1525
Size: 2.32 MB
 
Amazon.examlabs.aws certified solutions architect - professional.v2020-08-08.by.rachid.372q.ete
Views: 529
Downloads: 1580
Size: 1.5 MB
 
Amazon.test-inside.aws certified solutions architect - professional.v2020-05-08.by.callum.349q.ete
Views: 797
Downloads: 1838
Size: 2.14 MB
 
Amazon.testkings.aws certified solutions architect - professional.v2020-03-18.by.zara.342q.ete
Views: 662
Downloads: 1763
Size: 1.57 MB
 
Amazon.pass4sure.aws certified solutions architect - professional.v2019-12-07.by.arthur.335q.ete
Views: 1152
Downloads: 2309
Size: 2.06 MB
 
Amazon.actualtests.aws certified solutions architect - professional.v2019-06-22.by.mokki.269q.ete
Views: 1269
Downloads: 2337
Size: 1.36 MB
 

Student Feedback

5 stars: 65%
4 stars: 27%
3 stars: 6%
2 stars: 1%
1 star: 1%

Comments (the most recent comments are at the top)

okoth
Indonesia
Dec 29, 2022
It is simply the easiest way to learn and obtain a professional tag. The course comes with simple language, and video tutorials are truly good. I enjoyed learning with ease and even scored well. Thanks a lot.
Liam
United States
Dec 12, 2022
Pause for a moment and take a look at the summary of this course from a highly proficient educator; I guarantee you will not be able to stop yourself from getting it. I was weak in a significant number of the subjects, but with the help of the video sessions, I could prepare for the exam with the essential material. The simple use of language, gentle tone, brief explanations, and remarkable practice papers won me over. Thank you so much for helping me through this expert course easily and comfortably.
Grace
Nigeria
Nov 20, 2022
An in-depth course curriculum with itemized explanations of each topic. It would be better to outline the key focus points after every session. Practical sessions ought to be shown in an expanded view, which would be much more comfortable when viewing from mobile devices. Overall, a good and satisfactory course.
tony
United Kingdom
Nov 05, 2022
An exceptionally valuable course to get ready for the exam, and a great addition to preparation work. However, it is long, sentences are frequently repeated, and the segment at the end of each exercise explaining how to contact the author ought to be dropped. Condensing the content would reduce the length of this course by 50 percent without losing anything important. Something I would truly appreciate!
Sarah
Ghana
Oct 15, 2022
A superb course for understanding. It has really helped me improve in both theoretical and practical ways. The key points given by the tutor are worthy. The practice exam set is designed to prepare you for the hardest paper. Thanks a lot for the course, which has helped me in getting a professional certificate.

Add Comments

Post your comments about AWS Certified Solutions Architect - Professional: AWS Certified Solutions Architect - Professional (SAP-C01) certification video training course, exam dumps, practice test questions and answers.

Comments will be moderated and published within 1-4 hours
