Best seller!
AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) Training Course
$27.49
$24.99

AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) Certification Video Training Course

The complete solution to prepare for your exam with the AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course. The AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course contains a complete set of videos that will provide you with thorough knowledge of the key concepts. Top-notch prep including Amazon AWS Certified Data Analytics - Specialty exam dumps, study guide & practice test questions and answers.

115 Students Enrolled
124 Lectures
12:15:00 Hours

AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) Certification Video Training Course Exam Curriculum

1. Domain 1: Collection (20 Lectures, 02:06:00)
2. Domain 2: Storage (23 Lectures, 02:01:00)
3. Domain 3: Processing (26 Lectures, 02:19:00)
4. Domain 4: Analysis (23 Lectures, 02:33:00)
5. Domain 5: Visualization (5 Lectures, 00:38:00)
6. Domain 6: Security (12 Lectures, 01:09:00)
7. Everything Else (3 Lectures, 00:16:00)
8. Preparing for the Exam (5 Lectures, 00:22:00)
9. Appendix: Machine Learning topics for the legacy AWS Certified Big Data exam (7 Lectures, 00:51:00)


About AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) Certification Video Training Course

The AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course by PrepAway, along with practice test questions and answers, study guide and exam dumps, provides the ultimate training package to help you pass.

Domain 2: Storage

15. DynamoDB APIs

Okay, so in DynamoDB, you can do several operations. The first is, of course, to write data. And for this, we use the PutItem API, which either creates an item or does a full replace. It will consume WCU. You can also do an UpdateItem to update data in DynamoDB, in which case you're going to do a partial update of attributes. And you can use atomic counters to increase them if you want to. And finally, because it's a distributed system, you can do conditional writes.

Accept a write or an update only if the conditions are met; otherwise, reject it. And that helps when you have concurrent access to items, and there's no performance impact from using conditional writes. So let's take a look at how we can write data. So here in my demo table, I can create an item with a user ID; I'll go with 1234, and a game ID; I'll say ABCD game. And maybe I'm going to add a result. So I'll type a string attribute, result, with the value win, and save. And this was my first write. So I used the write API, the PutItem API. I can do another write. For example, I'll say user ID 1234 and game XYZ. So this is another game.

In this one, though, I'm going to append a string result, and I'm going to say lose. Click on save. And here we go. I just wrote my second row in my DynamoDB table. Now, because it shares the same user ID, it is collocated in the same partition. All right, that makes sense. So this is the first API. Now, if I wanted to basically update something here, for example, I would want to add the final score. So I'll just add "final score" and say 200. And this is just some random stuff, right? And then press the save button. Here, I have used the UpdateItem API, so that gives you an idea of how the APIs are being used by the console. But obviously, your application can use these APIs directly, as the sketch below shows.
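To make the console walkthrough concrete, here is a minimal boto3 sketch of the same write APIs. It is only an illustration under assumptions: a table named "demo" with a user_id partition key and game_id sort key (names taken from the walkthrough), and default AWS credentials.

```python
import boto3
from botocore.exceptions import ClientError

# Assumed setup: default credentials/region, a "demo" table with
# partition key "user_id" and sort key "game_id" (both strings).
table = boto3.resource("dynamodb").Table("demo")

# PutItem: creates the item or fully replaces it (consumes WCU)
table.put_item(Item={"user_id": "1234", "game_id": "ABCD", "result": "win"})

# UpdateItem: partial update of attributes, e.g. setting a final score
table.update_item(
    Key={"user_id": "1234", "game_id": "ABCD"},
    UpdateExpression="SET final_score = :score",
    ExpressionAttributeValues={":score": 200},
)

# Conditional write: only accept the write if the item does not already exist
try:
    table.put_item(
        Item={"user_id": "1234", "game_id": "XYZ", "result": "lose"},
        ConditionExpression="attribute_not_exists(user_id) AND attribute_not_exists(game_id)",
    )
except ClientError as err:
    if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
        raise  # the condition failing is expected here; anything else is a real error
```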

Now, how do we delete data in DynamoDB? Well, we have two ways of doing it. Number one is DeleteItem, to delete an individual row. And you can even use a condition when you delete to make sure that you delete the right thing. Or if you want to delete everything, you can delete the table as a whole, with all its items. And it's a much quicker deletion than calling DeleteItem on every item. So it is very important to understand the subtlety. So let's see how we can delete an item in our UI. Let's create an item for user 2345, with game ID "game ABC." And the result here is going to be "loss," so press "Save." So I just wrote a new item, and if I wanted to delete it, I would click on it, do Actions, and Delete, and that would call the DeleteItem API.

And likewise, if I wanted to delete the entire table, I would click on "Delete table" right here, and that would call the DeleteTable API. Okay? So that's it for these APIs; let's go to the next one. For efficiency's sake, you can use batches to perform writes. So you have BatchWriteItem, which is equivalent to up to 25 PutItem or DeleteItem operations in a single call.

And you can write up to 16 megabytes of data per batch and up to 400 KB of data per item. So the batching basically allows you to save on latency, because you reduce the number of API calls you make against DynamoDB. And all the write or delete operations are done in parallel for better efficiency. So overall, you get much better performance by using BatchWriteItem when you want to write data. It's possible, though, for part of a batch to fail, in which case you have to retry the failed items. For example, if you have a throughput exception on part of your batch, then DynamoDB will return the part of the batch that did not work, and you have to retry it on your own.
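As a rough illustration of batching, the boto3 table resource exposes a batch_writer helper that groups items into BatchWriteItem calls of up to 25 and resubmits unprocessed items for you; a minimal sketch, again assuming the hypothetical "demo" table:

```python
import boto3

table = boto3.resource("dynamodb").Table("demo")  # assumed table name

# batch_writer buffers puts/deletes and flushes them as BatchWriteItem calls
# (up to 25 items each), automatically retrying any unprocessed items.
with table.batch_writer() as batch:
    for i in range(100):
        batch.put_item(Item={"user_id": f"user-{i}", "game_id": "ABCD", "result": "win"})
```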

So how do we read data in DynamoDB? For this, we have two APIs. The first one is GetItem, and GetItem allows you to do a read based on the primary key. Remember, the primary key can either be just a partition key, which is called a hash key, or a partition key and a sort key, which is called a hash-range key. You get eventually consistent reads by default, but you have the option to do strongly consistent reads.

That will take more RCU, twice as much RCU, and it might take longer because of the latency involved. You can also use a projection expression to only retrieve certain attributes of your items. If you want to be even more efficient, you can use BatchGetItem, which allows you to retrieve up to 100 items or 16 MB of data. And each of these items will basically be retrieved in parallel, so overall you increase your efficiency and minimise the latency.
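Here is a hedged sketch of both read APIs with boto3, assuming the same hypothetical "demo" table; the ProjectionExpression and ConsistentRead options map to what is described above.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("demo")  # assumed table name

# GetItem: read by full primary key; ConsistentRead=True doubles the RCU cost
resp = table.get_item(
    Key={"user_id": "1234", "game_id": "ABCD"},
    ProjectionExpression="game_id, #r",         # only fetch selected attributes
    ExpressionAttributeNames={"#r": "result"},  # alias the attribute name
    ConsistentRead=True,
)
item = resp.get("Item")

# BatchGetItem: up to 100 items / 16 MB per call, fetched in parallel
batch = dynamodb.batch_get_item(
    RequestItems={
        "demo": {
            "Keys": [
                {"user_id": "1234", "game_id": "ABCD"},
                {"user_id": "1234", "game_id": "XYZ"},
            ]
        }
    }
)
items = batch["Responses"]["demo"]
```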

So, when we clicked on that edit item thing, what happened behind the scenes was a GetItem, so we can look at all the fields and their values. There's nothing else we can do here, but I just wanted to show you that when you see this edit item window, what happens behind the scenes is a GetItem. Next, in DynamoDB you can do a query. A query cannot be performed on just anything: it can be done only on the partition key using the equals operator, and optionally on the sort key using equals, less than, greater than, begins_with, or between.

And then, if you want to basically filter further, you can use a filter expression, and that will do client-side filtering, not DynamoDB-side. What it will return is up to 1 MB of data, or the number of items you specify if you set a limit. And then, if you want to fetch more data, you can do something called pagination on the results, so that you receive them page by page.
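Here is a minimal query sketch with boto3 showing the key condition, an optional filter expression, a limit, and pagination via LastEvaluatedKey; the table and attribute names are the assumed ones from the walkthrough.

```python
import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("demo")  # assumed table name

kwargs = {
    # Key condition: equality on the partition key, begins_with on the sort key
    "KeyConditionExpression": Key("user_id").eq("12345") & Key("game_id").begins_with("game"),
    "FilterExpression": Attr("result").eq("win"),  # applied after the key lookup
    "Limit": 10,                                   # at most 10 items per page
}

resp = table.query(**kwargs)
items = resp["Items"]

# Pagination: keep querying while DynamoDB returns a LastEvaluatedKey
while "LastEvaluatedKey" in resp:
    resp = table.query(ExclusiveStartKey=resp["LastEvaluatedKey"], **kwargs)
    items.extend(resp["Items"])
```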

You can query a table, a local secondary index, or a global secondary index. Let's have a look at how querying works now. So to do a query, I'm going to click on the dropdown on the left and say "Query." And here I'm able to query my table. Now, as you can see, I can query on the partition key and the sort key. So let me just create a new item first, with user ID 12345, and I'll say game ABC. And again, the result here is going to be lose, and save. Okay, so now I'm going to say, "Okay, for this user ID,

I would like to query on 12345, the one that I just created." As for the sort key, for now we're not going to specify it; we can start a search, and here we go. Only my user ID, 12345, was returned at the bottom. But I could say the game ID begins with, and then I could say "whatever." And because no game ID begins with "whatever," I should get zero results. But if I say it begins with "game," then it will start the search and return to me the game ID I wanted. So the really cool thing here is that you can query on 12345, and here I will just remove this game filter and we'll start a search. But, as you can see, I can't tell whether that game was won or lost.

OK, so the only thing you can query on is the partition key and the sort key. You could add a filter at the very end to filter on any attribute you want, for example, result equals win. So if we start the search, we will only find the winners. But this would happen client-side, not DynamoDB-side. So I hope that makes sense for how queries work. They can be really powerful, but you need to remember the constraints. The last way of retrieving data from your DynamoDB table is to do a scan. A scan will read the entire table, and then you can filter the data based on what you need.

This is inefficient, unless you actually need to get the entire table. Scanning returns up to 1 MB of data at a time, and you then use pagination to keep reading. That will obviously consume a lot of RCU, because you're reading your entire table.

To limit the impact, you can use a limit, reduce the size of the result, and pause every now and then. For faster performance, you can use something called parallel scans, which basically allows you to have multiple workers scan multiple data segments at the same time to maximise your scan throughput.

It will increase the throughput and the RCU consumed, obviously. But if you wanted to limit the impact of a parallel scan, just like you would for regular scans, you would, for example, limit the number of results returned for each call. On top of that, you can use a projection expression and a filter expression if you want to get less information back from your scan.
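The scan behaviour described above can be sketched the same way: pagination via LastEvaluatedKey, a Limit to cap each call, and Segment/TotalSegments for a parallel scan. This is an illustrative sketch only; the segments are shown sequentially here but would normally run in parallel workers.

```python
import boto3

table = boto3.resource("dynamodb").Table("demo")  # assumed table name

def scan_segment(segment, total_segments):
    """Scan one segment of the table, paginating until the segment is exhausted."""
    items, start_key = [], None
    while True:
        kwargs = {"Segment": segment, "TotalSegments": total_segments, "Limit": 100}
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        resp = table.scan(**kwargs)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if not start_key:
            return items

# A "parallel" scan over 4 segments; in practice each call would run in its own worker
all_items = [item for seg in range(4) for item in scan_segment(seg, 4)]
```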

So scan will be used, for example, by Hive when we query data in DynamoDB. So in this UI, if I click on Scan and do a Start Search, then, as you can see, we get all the elements from my table. Something you should know is that whenever you click on the demo table and go to Items, what's actually happening behind the scenes is that a scan is being run, and this is where you see all your data. You will get up to 1 MB of data at a time in this UI, because that's how much a scan can return. So I hope that gives you a good idea of all the capabilities in DynamoDB. As you can see, there are a few APIs, but that's it. There's no SQL and no query language, but that's enough for our use case. I hope you liked it, and I will see you in the next lecture.

16. DynamoDB Indexes: LSI & GSI

So indexes are really important in DynamoDB. They will basically allow you to do different types of queries on your DynamoDB tables. So we have two types of indices that you need to differentiate.

We have local secondary indexes, or LSI, and global secondary indexes, or GSI. So basically, the local secondary index, LSI, is an alternate range key, or sort key, for your table that is local to the hash key. So the partition key stays the same, but you have an alternate range key. You can define up to five local secondary indexes per table.

And the sort key must be exactly one scalar attribute. And that attribute can be a string, a number, or a binary. The LSI, and this is an important distinction, must be defined at table creation time. So when you create a table, if you need local secondary indexes, you need to define them right away; you cannot add them afterwards.
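Since an LSI can only be declared when the table is created, a create-table sketch makes the point; the table and attribute names below are hypothetical, mirroring the user ID / game ID / game timestamp example that follows.

```python
import boto3

client = boto3.client("dynamodb")

# Hypothetical table with one LSI: same partition key, alternate sort key.
client.create_table(
    TableName="demo2",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
        {"AttributeName": "game_ts", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "game_id", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "user_id-game_ts-index",
            "KeySchema": [
                {"AttributeName": "user_id", "KeyType": "HASH"},   # same partition key
                {"AttributeName": "game_ts", "KeyType": "RANGE"},  # alternate sort key
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```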

So, here's an example: here we have our user ID and game ID, and we want to have the game timestamp as a local secondary index. So we would define an LSI on that column, and that would allow us to start doing some queries on user ID and game timestamp. I hope you get the idea. Next, you have global secondary indexes, and these are used to speed up queries on non-key attributes.

So what does that mean? That means you define a new partition key and an optional sort key, and the index will be an entirely new table with its own RCU and WCU, onto which you can project the attributes you want. So the partition key and the sort key will always be projected, and then you can specify which extra attributes to include, or even all of them.

So, as a side note, you need to define RCU and WCU for the GSI but not for the LSI. And you can add and modify a GSI after your table is created, but not an LSI. Remember, those can only be defined at table creation. So what does that look like? For example, here is our normal table. We have a user ID, a game ID, and a game timestamp. And here we're going to create an index so we can query by game ID. So we'll have a game ID, a game timestamp, and maybe the user ID will be projected.
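By contrast, a GSI can be added to an existing table with its own throughput; here is a hedged UpdateTable sketch, with the table, index, and attribute names assumed from the walkthrough.

```python
import boto3

client = boto3.client("dynamodb")

# Add a GSI (game_id + result) to the existing "demo" table; names are assumptions.
client.update_table(
    TableName="demo",
    AttributeDefinitions=[
        {"AttributeName": "game_id", "AttributeType": "S"},
        {"AttributeName": "result", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "game_id-result-index",
                "KeySchema": [
                    {"AttributeName": "game_id", "KeyType": "HASH"},
                    {"AttributeName": "result", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
            }
        }
    ],
)

# Once the index status is ACTIVE, you can query it instead of the base table:
#   table = boto3.resource("dynamodb").Table("demo")
#   table.query(IndexName="game_id-result-index",
#               KeyConditionExpression=Key("game_id").eq("ABC") & Key("result").eq("win"))
```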

So here is the difference between the local secondary index and the global secondary index. Let's have a look at what it looks like in our DynamoDB UI. So here I'm able to go to Indexes and create an index.

As you can see, it has to be a global secondary index. I choose a partition key, for example, game ID, and I have a sort key, for example, result. Here, I'm going to be able to query on game ID and result. I will also need to provide the read capacity units and the write capacity units; five and five looks fine to me.

I also choose whether I want to project all attributes, only the keys, or a list of attributes that I specify manually; I'll just use All and then click on Create Index. So as you can see, it basically creates a new table behind the scenes, which is called an index, and it goes into a creating status. Now, if I create another table, another demo, with a partition key of user ID and a sort key of game ID, here, as we can see, if I uncheck the default settings, I can add a secondary index, and here I can add my local secondary index. So here's my partition key, user ID, and sort key, game ID. But here, maybe I want to have another sort key.

So I'll use the same partition key; if I don't use the same partition key, an error occurs. Then I add a sort key of, maybe, game timestamp, and add the local secondary index. And here we go; we have an LSI. So, what I want you to notice here is that LSIs are defined at the time the table is created. So I'm not going to go ahead and create this table, but I wanted to show you the difference between LSI and GSI.

Now I go back and click on cancel, get back to my demo table, and I will basically wait for the index to be done creating. So I'll pause the video... and my index is now active. What does that mean? That means that if I go back to my items, I can issue another query, this time not on the table but on my index.

And here I can specify the game ID of ABC, as well as the sort key, result; I'll say it should be a win or a loss, because I want to get one result. I start the search, and I'm getting this item back as a result. So here's the really interesting thing: you can use a different index to basically query your table, and this is why you would define global secondary indexes or local secondary indexes. I hope that was helpful, and I will see you in the next lecture.

17. DynamoDB DAX

So let's talk about DynamoDB DAX, because that can come up in the exam. DAX stands for DynamoDB Accelerator, and it's basically going to be a cache for DynamoDB. And you don't need to rewrite your application; you can just leverage the cache right away.

The writes will go through DAX into DynamoDB, and then when you have cached reads or cached queries, you'll get microsecond latency. So it basically offloads some of the reads directly onto the cache, and it will solve the hot key problem. So, for example, if your item is being read too many times in DynamoDB, by caching it in DAX you will solve the hot key problem and really decrease your RCU consumption. By default, the cache has a five-minute TTL. That means that your item will live in the cache for five minutes. You can have up to ten nodes in your DAX cluster, and you have to provision it.

It can also be multi-AZ, and three nodes is the minimum recommended for production. It's going to be secure: you get encryption at rest with KMS, you're in a VPC, you have IAM permissions, and you can have CloudTrail integration. So, as a diagram, what does that look like? Our application is trying to access our DynamoDB table or tables. And what it will do instead is interact with the DynamoDB Accelerator, DAX, and DAX will directly talk to the table.

The added benefit we get out of it is that DAX will basically cache what we need right away. So how do we create a DAX cluster? So, on the left, there's DAX, and you click on Dashboard. And here you can create a cluster, and I'll just call it my demo DAX cluster. You choose a node type. You can choose between very large node types (up to 16xlarge) and very small node types (t2.small). By the way, DAX is not in the free tier, so don't create it if you don't want to spend anything. You choose the cluster size; three is definitely recommended for high availability, but one can be fine for a development environment. Then you choose whether or not you want encryption.

You then select the IAM role for DAX; a new IAM role, subnet group, and security groups will suffice. Then you can specify some cluster settings, such as the cluster description, the AZs you want it to be in, some parameter groups, maintenance windows, etc. But for now, we'll just use the default settings. And then, when you're ready, you click on Launch Cluster. And when you launch the cluster, it basically goes ahead and creates a DAX cluster for you.
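For reference, the same cluster can be created with the SDK; a minimal boto3 sketch, where the cluster name, IAM role ARN, subnet group, and security group are placeholders you would replace with your own values.

```python
import boto3

dax = boto3.client("dax")

# Mirrors the console walkthrough; all names and ARNs below are placeholders.
dax.create_cluster(
    ClusterName="my-demo-dax-cluster",
    NodeType="dax.t2.small",                  # small node type, fine for dev/test
    ReplicationFactor=3,                      # 3 nodes recommended for production
    IamRoleArn="arn:aws:iam::123456789012:role/DAXtoDynamoDB",
    SubnetGroupName="default",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SSESpecification={"Enabled": True},       # encryption at rest
)
```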

So DAX is not something that auto-scales; it's something you have to provision in advance. And it will cost you money, obviously, but it will just improve your DynamoDB performance if you have high read throughput and most of the reads are always the same; in that case, it's really, really nice to have a cache. So that's it for DynamoDB DAX. What I'm going to do is just delete this cluster once it's been created, so I'll go to Actions and delete it when it's done. But basically, I can't really show you how to use it here; you need to use the SDK for this. But at least it shows you the steps for creating a DAX cluster, so you get a better idea of how it works. All right, that's it. We'll talk again in the next lecture.

18. DynamoDB Streams

So how do we react to changes in real time in our DynamoDB table? The answer is DynamoDB Streams. All DynamoDB changes, whether they are creates, updates, or deletes, will end up in a DynamoDB stream if you enable it. That means that the change log from your table goes into a stream, and the stream can be read by, for example, AWS Lambda. What can we do with it? Well, we can react to changes in real time. For example, we can use a stream on our users table to send a welcome email to new users. Or we can create derivative tables or views, we can insert data into Elasticsearch, etc. Basically, your Lambda function will receive a batch of records from your DynamoDB stream.
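A minimal sketch of what such a Lambda handler might look like; the record structure (eventName, Keys, NewImage) is what DynamoDB Streams delivers to Lambda, while the handler name and what you do with each record are hypothetical.

```python
def lambda_handler(event, context):
    """Triggered by a DynamoDB stream; receives a batch of change records."""
    for record in event["Records"]:
        event_name = record["eventName"]                 # INSERT, MODIFY or REMOVE
        keys = record["dynamodb"]["Keys"]
        new_image = record["dynamodb"].get("NewImage")   # present for INSERT/MODIFY

        if event_name == "INSERT":
            # e.g. send a welcome email, update a derivative view, index into Elasticsearch...
            print(f"New item {keys}: {new_image}")
```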

And based on what you write in your Lambda function, you can do anything you want. You could even implement cross-region replication using streams if you wanted to. With DynamoDB streams, unlike Kinesis, the data retention time is fixed at 24 hours. You cannot set this to anything else; it has to be 24 hours.

And you can configure the batch size so that Lambda receives up to 1,000 rows, or six megabytes of data. So we have a stream, and we know we can use AWS Lambda to read from a DynamoDB stream, but can we also use a Kinesis adapter?

So we can use the KCL library to consume data from DynamoDB streams. We just have an adapter on top of it called the Kinesis Adapter library. And here we have the same pattern as with Kinesis streams. So we have KCL consumers; they will checkpoint their progress into a DynamoDB table, and they can consume a DynamoDB stream. This way, we have the exact same interface; the programming is the same, and it also has shards, checkpoints, and all that stuff. That's basically the alternative to using AWS Lambda.

So you have two choices for consuming a DynamoDB stream: either you use AWS Lambda, or you use the KCL library with the Kinesis adapter. Okay, so back in our table, as you can see, there is a Stream details section, and we can click on Manage Stream. Here we manage the stream and say what we want to have in our stream records: keys only, a new image only, an old image only, or both new and old images.
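For reference, the same stream settings can also be applied programmatically; a minimal boto3 sketch, assuming the hypothetical "demo" table from the walkthrough.

```python
import boto3

client = boto3.client("dynamodb")

# Equivalent of the "Manage Stream" console action: enable the stream and
# choose what each change record contains.
client.update_table(
    TableName="demo",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # or KEYS_ONLY, NEW_IMAGE, OLD_IMAGE
    },
)
```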

So, back in the console, enable this, and here we go: we have created a DynamoDB stream. Now the stream can be read by a Lambda function that you hook onto it to see what's going on in real time. I'm not going to show you how to do this end to end, but you know that, conceptually, you can do it either by using the KCL with the Kinesis adapter or by using AWS Lambda functions. So that's it; we've enabled a stream. You can always disable it if you want. I will see you in the next lecture.

PrepAway's AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) video training course for passing certification exams is the only solution you need.

Free AWS Certified Data Analytics - Specialty Exam Questions & Amazon AWS Certified Data Analytics - Specialty Dumps
Amazon.certkiller.aws certified data analytics - specialty.v2024-02-12.by.arlo.78q.ete
Views: 135
Downloads: 171
Size: 220.49 KB
 
Amazon.selftestengine.aws certified data analytics - specialty.v2021-05-20.by.robert.57q.ete
Views: 245
Downloads: 1156
Size: 171.06 KB
 
Amazon.examcollection.aws certified data analytics - specialty.v2021-05-15.by.imogen.61q.ete
Views: 203
Downloads: 1120
Size: 175.77 KB
 
Amazon.passit4sure.aws certified data analytics - specialty.v2020-10-02.by.charlotte.28q.ete
Views: 377
Downloads: 1409
Size: 79.16 KB
 
Amazon.braindumps.aws certified data analytics - specialty.v2020-06-23.by.tamar.26q.ete
Views: 438
Downloads: 1501
Size: 79.39 KB
 

Student Feedback

5 stars: 54%
4 stars: 46%
3 stars: 0%
2 stars: 0%
1 star: 0%

Add Comments

Post your comments about AWS Certified Data Analytics - Specialty: AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course, exam dumps, practice test questions and answers.

Comment will be moderated and published within 1-4 hours
