
Pass Amazon AWS Certified Developer - Associate Certification Exam in First Attempt Guaranteed!

Get 100% Latest Exam Questions, Accurate & Verified Answers to Pass the Actual Exam!
30 Days Free Updates, Instant Download!

AWS Certified Developer - Associate DVA-C02 Exam - Verified By Experts
AWS Certified Developer - Associate DVA-C02 Premium Bundle
$39.99


$69.98
$109.97
  • Premium File 357 Questions & Answers. Last update: May 09, 2024
  • Training Course 430 Video Lectures
  • Study Guide 1091 Pages
 

AWS Certified Developer - Associate DVA-C02 Exam

Download Free AWS Certified Developer - Associate DVA-C02 Exam Questions

Amazon AWS Certified Developer - Associate Certification Practice Test Questions and Answers, Amazon AWS Certified Developer - Associate Certification Exam Dumps

All Amazon AWS Certified Developer - Associate certification exam dumps, study guides, and training courses are prepared by industry experts. Amazon AWS Certified Developer - Associate certification practice test questions and answers, exam dumps, study guides and training courses help candidates study and pass hassle-free!

EC2 & Getting Setup

18. Elastic Load Balancers - Exam Tips

So I'm in the AWS console; check which region you're in and just go over to EC2. Now, you might remember that a couple of weeks ago I had an instance that I stopped because we were taking an application-consistent snapshot. Fire that instance back up again. If for some reason you've deleted your instance, just create a new one, and I'll show you what else you need to do.

So I'm just going to start this instance again. Okay, so my instance is now up and running. Here's my public IP address, and I'm just going to go in and SSH into this instance. Okay, so I'm in my terminal, and I'm just going to type ssh ec2-user@ followed by the public IP address, then -i and then my key, which is my EC2 key pair. And I'm now logged into my instance.

I'm going to elevate my privileges to root, and I'm going to clear the screen. The first thing I'm going to do is just check whether the httpd (Apache) service is running, so I'm just going to run service httpd status. So it is running. If yours isn't running, just type service httpd start, and if you don't already have httpd installed, all you need to do is type yum install httpd, answer yes, and then again type service httpd start, and that will start the service. And then you always want to run chkconfig httpd on as well; that just makes sure the httpd service starts every time you reboot your EC2 instance. So, hopefully, httpd is up and running.
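Put together, that shell session looks roughly like the following (a sketch for Amazon Linux; package and service names may differ on other distributions):

    # become root and check whether Apache is running
    sudo su
    service httpd status

    # if it isn't installed or running yet
    yum install httpd -y
    service httpd start

    # make sure Apache comes back up after a reboot
    chkconfig httpd on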

Now you've got Apache running. Just type service httpd status to double-check, but it should be running. And then you just need to go over to your HTML directory (/var/www/html) and type ls in here. Now, hopefully, you should already have an index.html. If you don't have one, just make one with nano index.html. You can see my very simple one here that simply says Hello Cloud Gurus. And then we're also just going to create a health check page, so we'll do nano healthcheck.html, and in here I'm just going to say this instance is healthy.
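If you'd rather create both files non-interactively instead of typing them into nano, a couple of one-liners like these do the same job (a sketch; the filenames and text match what's used in this lecture):

    cd /var/www/html
    echo "Hello Cloud Gurus" > index.html
    echo "This instance is healthy" > healthcheck.html
    ls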

Back in nano, I'll just type that in, press Ctrl+X, then Y, and hit Enter. And now we've got two files: our healthcheck.html and our index.html. Once you've done that, go back to the AWS console. Okay, so I'm back in the AWS console; go down to Services and click on EC2.

And now we've got our running instance, which we've just logged into. What we're going to do is go over here to Load Balancers and just click in here. You won't have any load balancers created yet, so we're going to go ahead and hit Create Load Balancer. We have two different types of load balancers: application load balancers and classic load balancers. Application load balancers work at layer 7, so they work at the application layer, and they're actually the preferred type according to AWS. They're designed for HTTP and HTTPS, and you can do some really clever routing using an application load balancer. Now, application load balancers are relatively new.

They're about six months old or so; they came out halfway through 2016. At least for the time being, all of the exam questions will be on the classic load balancer. A classic load balancer is what we call a layer 4 load balancer: it typically makes routing decisions at the TCP layer, though it can also do some layer 7 routing, but we call it a layer 4 load balancer. So click on Classic Load Balancer and we'll start off with this one. Go ahead and hit Continue. We're going to give it a name; to be consistent, I'll just refer to my classic load balancer as my classic ELB. Then we're going to create it within our default VPC. We want it to be able to serve web traffic, so we want it to be an external load balancer, not an internal load balancer.

So we're not going to check that box. We don't want to enable advanced VPC configuration because we're just using the default VPC, and then this is our listener configuration. So what port is it listening on, and for what protocol? It's listening for HTTP on port 80. And then where is it passing that traffic — what instance protocol and what instance port? Well, the instance protocol is going to be HTTP, and it's passing it to port 80 as well.

So just leave it all at default. Go ahead and hit Next. Put it in your Web DMZ security group. Go ahead and hit Next. Ignore this warning message; it's basically saying you're not using HTTPS and that you should use HTTPS. The reason we're not using HTTPS is because we haven't got a domain name registered yet, and we're not using any kind of SSL certificate. We'll get to that later in the course. So now we're going to configure our health check, and this is really the guts of creating your elastic load balancer. We're going to test this over HTTP using port 80, and then there's the ping path.

We have an index.html, but we also have the web page called healthcheck.html that we created, so we'll use /healthcheck.html as the ping path. And then here are our advanced details. Now, if you ever forget what any of these mean, just hover over the little info button. So the response timeout is the amount of time it's going to wait for a response from the health check, and this can be anywhere from 2 seconds to 60 seconds. I'm going to do everything as cheaply as possible, so I'm going to go for 2. Next, the interval is the time between health checks.

So once it has done a health check and waited for the response, how long does it wait before the next one? What interval are we going to wait between our health checks? This can be anywhere from 5 seconds to 300 seconds. Let's go ahead and just put in 5 seconds. Our unhealthy threshold is the number of consecutive failed health checks before declaring an EC2 instance unhealthy. So in this scenario, we're basically polling it and waiting 2 seconds for a response. If we don't hear back from that web page, it fails a health check; we then wait another 5 seconds and try again. And if it fails twice, then it's considered unhealthy.

And our healthy threshold is just the opposite: how many consecutive health checks does it have to pass before it's considered healthy again? I'm going to make the unhealthy threshold 2 and the healthy threshold 3, and we're going to keep it quite lean, so it's going to pass or fail its health checks really quickly — I'm leaving the settings at 2, 5, 2, and 3. Go ahead and hit Next. And here we add our EC2 instance. We only have one right now, so it's our web server. Go ahead and add our tags. Now, this is so important. Most of the time, when people go above their free tier and start actually spending money, it's because they've forgotten about their elastic load balancers.

Elastic load balancers will cost you money if you leave them on, especially if you have multiple elastic load balancers spread across multiple regions. And lots of people just forget to delete their elastic load balancers when they're not using them. So simply enter a tag here, something like Production ELB, so we know that we've got this particular load balancer turned on. And then you'll be able to track it using resource groups, which we'll look at in the account section of the course. So go ahead and hit Review and Create, and then we'll go ahead and hit Create.
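For reference, the same classic load balancer setup can be scripted with the AWS CLI. This is only a sketch — the load balancer name, subnet, security group, and instance ID below are placeholders — but the health check values match the 2/5/2/3 settings chosen above:

    # create a classic ELB listening on HTTP:80 and forwarding to HTTP:80 on the instances
    aws elb create-load-balancer \
        --load-balancer-name my-classic-elb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --subnets subnet-12345678 \
        --security-groups sg-12345678

    # health check: ping /healthcheck.html, 2s timeout, 5s interval, 2 failures = unhealthy, 3 passes = healthy
    aws elb configure-health-check \
        --load-balancer-name my-classic-elb \
        --health-check Target=HTTP:80/healthcheck.html,Timeout=2,Interval=5,UnhealthyThreshold=2,HealthyThreshold=3

    # register the web server and tag the load balancer so it's easy to find (and delete) later
    aws elb register-instances-with-load-balancer --load-balancer-name my-classic-elb --instances i-0123456789abcdef0
    aws elb add-tags --load-balancer-names my-classic-elb --tags Key=Name,Value="Production ELB"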

So we've created our elastic load balancer. There it is, and you can see the type here is classic. It's serving both availability zones automatically. It's in our default VPC, and we're given a DNS name here. If we just move this across, there's the DNS name for our elastic load balancer, and I just want you to copy that to your clipboard; we'll use it in a second. So let's go down to the health check. In here, we've got all the different settings that we just configured, and we can edit these at any time. In here, we've got our instances. Our instance is currently showing as out of service, because the load balancer hasn't yet been able to retrieve the web page it's looking for.

Now, we'd expect this to come into service relatively quickly. Just hover your mouse over there. If it still says instance registration is still in progress, don't worry about it; you might have to wait a couple of minutes, because this is the first time this EC2 instance has been registered with the load balancer. I'm just going to pause the video for a second. Okay, and we can now see that it's in service. So our instance count is one, it's healthy, and it's within availability zone 2a.

We don't have any other instances currently available, and we could remove this one from the load balancer here if we wanted. What I'm going to do now is simply simulate a failure. So I'm going to Alt-Tab over to my terminal window, and in here we've got our file, healthcheck.html, so we can just run rm healthcheck.html to delete it straight away. And there we go; it's deleted. Now if we come back here, it shouldn't take very long — if we go and take a look at the health check settings, it's going to poll, waiting 5 seconds between polls and 2 seconds for each response.

So for an unhealthy threshold of two, it should be two polls times five seconds, plus the timeouts in between — so around 10 to 15 seconds. So let's click back here. And there we go, it's already out of service. And the reason it's out of service is because it's looking for this file, healthcheck.html, which we just deleted. So let's go back in and add that. I'm just going to go echo "I'm healthy" and output that to healthcheck.html, and that should create a little web page. There we go. And if we tab back into our browser and go over to our instances, we just hit the refresh button. I'm just trying to play for time here, in case you're wondering.
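So the whole failure simulation comes down to two commands, run in /var/www/html on the instance (the page name matches the health check configured above):

    # break the health check: the ELB starts failing to fetch /healthcheck.html
    rm healthcheck.html

    # fix it again: recreate the page, and the instance comes back in service after 3 successful checks
    echo "I'm healthy" > healthcheck.html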

And there we go, it's back in service. So it doesn't take very long for the load balancer to detect that file going missing and remove that EC2 instance from the elastic load balancer, or to bring it back in again. And that's a really important point: once an EC2 instance is out of service, your elastic load balancer will not send traffic to it, because it's failing the health check and the load balancer presumes that the instance has gone down. Now, we copied the DNS name to our clipboard earlier — do that now if you haven't already — and I've just opened up a new tab here. I'm going to paste it in, and that's going to point straight to my index.html. So straight away, we're getting DNS resolution: it's resolving the name of our elastic load balancer, and the request is being forwarded to our EC2 instance, the one with this public IP address attached.

So it's this one here, and essentially we're then able to view our index.html web page. Now, notice that you do get a public IP address for your EC2 instance. What you don't get is a public IP address for your load balancer — all you ever get is a DNS name. Your elastic load balancer does have public IP addresses behind the scenes, but Amazon manages them for you, and they don't want you to ever use an IP address for an ELB. They always want you to use the DNS name.

The reason for that is because those IP addresses can change. So you always use the DNS name for your elastic load balancer. And when we get to the Route 53 section of the course, I'll show you how to point your domain name at this DNS name and then how to put EC2 instances behind it. That way, if you lose an individual EC2 instance, your web page won't go down: the load balancer will just take that EC2 instance out of service, and there will be other EC2 instances in service that traffic can go to, so you won't have any downtime. And we're going to do that in a lab as well, which appears a couple of sections later in the course. Okay, so now we're quickly going to create the second type of load balancer: an application load balancer.

Now, in the exam you're typically not going to get questions around application load balancers because they're just too new. We do have an entire deep-dive course on application load balancing on the A Cloud Guru platform; it goes for a couple of hours at least, I think. So if you are interested in learning this in great depth, go ahead and have a look at that course. For the time being, I'll just show you how to configure one. It's very similar to a classic load balancer. So I'll call this my application ELB; that's the name of the load balancer.

Then in here we've got to decide whether we want it to be Internet-facing or internal. I'm going to have it be Internet-facing. Here are our listeners; we're just going to do it on HTTP. Down here, we've got our availability zones. So these are our available subnets. Remember, I always say that one subnet equals one availability zone — you need to remember that going into the exam. So we just go in here and add them. So now we're going to be load balancing across eu-west-2a as well as eu-west-2b. And if you're in a region with more availability zones, just add in all the ones that are available to you. Go ahead and hit Next. For security settings, we're just going to be using HTTP, not HTTPS, so go ahead and hit Next. In here, we select our security groups — so my Web DMZ again — and hit Next. Now we're in here. This is where we create our routing, and we use what are called target groups.

So I'm just going to call this my ALB target group and leave it at that. We're going to create a new target group because we haven't created one before. And then I'm going to do HTTP over port 80; if I change this to HTTPS, it will change the port to 443. In here, we've got the health check. So what protocol are we doing the health check on? We're going to do HTTP, and then the path, which we called /healthcheck.html. In here, we've got our port settings. You can leave it on the traffic port, so the health check automatically goes to the same port the traffic comes in on — 80 for HTTP or 443 for HTTPS — or you can override it and tell it to do the health check over a specific port that you specify. Then we have our healthy threshold and our unhealthy threshold: this is the number of consecutive failed health checks before the target is considered unhealthy, and this is the number of consecutive successful checks before it's considered healthy again.

So let's say two failures and we'll take it out of service. And in here, let's just say for argument's sake that our timeout — the time in seconds with no response that counts as a failure — is 2. And then our interval, which I'm going to put at 5 seconds. In here, we've also got the HTTP codes that we can specify as a successful response from the target, and you can have multiple values. So it doesn't just have to be 200; it can be 200 and 202, or we can do a range. Let's do 200 to 299 just for fun. Go ahead and hit Next. And now we're going to register our targets. All we do is click here and hit Add to registered. And we're doing this on port 80. So we're adding this EC2 instance to our registered targets. We go ahead and hit Next, and we just review everything. It all looks good, so go ahead and hit Create, and that will now create our application load balancer.
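The target group side of this can also be expressed with the CLI, which makes the health check and success-code settings explicit. A sketch only — the VPC ID, target group ARN, and instance ID are placeholders — using the same values as above:

    # target group with a /healthcheck.html health check and 200-299 treated as success
    aws elbv2 create-target-group \
        --name my-alb-target-group \
        --protocol HTTP --port 80 \
        --vpc-id vpc-12345678 \
        --health-check-protocol HTTP \
        --health-check-path /healthcheck.html \
        --health-check-interval-seconds 5 \
        --health-check-timeout-seconds 2 \
        --unhealthy-threshold-count 2 \
        --matcher HttpCode=200-299

    # register the web server instance into the target group on port 80
    aws elbv2 register-targets \
        --target-group-arn arn:aws:elasticloadbalancing:eu-west-2:123456789012:targetgroup/my-alb-target-group/abc123 \
        --targets Id=i-0123456789abcdef0,Port=80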

So we've got two: our classic load balancer and our application load balancer. I'm going to pause the video and wait for this to go live. And then we're just going to go in and test it. Okay, so I'm just going to refresh the screen, and there we go. It's gone from provisioning to active. So this is my DNS address for my application load balancer.

Just like with classic load balancers, you're never going to get an IP address for your application load balancer; you're always going to use the DNS address to connect to it. We can go down and take a look at our listeners — we can see here that it's just listening on port 80. And then we go down to monitoring, and we can see our different monitoring statistics in here, so we can track the number of HTTP errors that we've got: we've got our server-side errors here and our client-side errors here, so things like 404s, et cetera.

And then we can scroll back up. If you want to make this a bit larger just to have a look at it, you can. And then we've got our tags here, but I haven't added any tags. Now, what I want you to do is just grab the DNS name and copy it to your clipboard. Then open up a new tab, hit paste, and you'll be able to see it. There we go: Hello, Cloud Gurus! So our application load balancer is working as well. We now have a complete, in-depth course on application load balancers and how to use them, so that is available on the A Cloud Guru platform if you want to learn a bit more about it.

But going into the Solutions Architect Associate and Developer Associate exams, all your quiz questions and exam questions will be focused on classic load balancers as opposed to application load balancers. So if you are going to read the FAQs for ELBs before the exam, do it for the classic load balancer. You should still be aware at a high level of what an application load balancer is — just remember that it operates at layer 7.

19. SDKs - Exam Tips

So the most important thing to know is the available SDKs, which you can find by going to https://aws.amazon.com/tools. Now, at the time of recording this video, the current SDKs available are as follows. Let's start with the most common SDKs: there's the Android SDK, the iOS SDK, and then there's JavaScript, which is basically the SDK for your browser.

We then have Java itself, .NET, Node.js, and PHP, all of which we've covered in this section of the course in a couple of practical examples. Then there's Python, Ruby, Go, and the most recent, C++, which is still in developer preview at the time of this recording. So it's really important to remember what languages are supported by Amazon through their SDKs going into the exam, because you'll be able to pick up some really easy marks by knowing them. The other thing you should know about is the default region.

So some SDKs have default regions and others don't. For example, the Java SDK has a default region, but some, such as Node.js, don't. And if you don't set a default region, it's going to fall back to US East 1 (us-east-1). So that's an important thing to remember.
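In practice there are a couple of standard ways to set the region explicitly rather than relying on a default. These are general examples using the shared AWS configuration and the standard environment variable (not tied to any one SDK):

    # set a default region in the shared config file used by the CLI and most SDKs
    aws configure set region us-east-1

    # or set it just for the current shell session with an environment variable
    export AWS_DEFAULT_REGION=us-east-1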

Now, if you remember from the PHP section of the course, we saw how, using autoloader.php and our connection string, we were able to set the region that we were going to be in — we had that set to us-east-1 in the client connection parameters. That's it for this lecture, guys. If you have any questions, please let me know. If not, feel free to move on to the next lecture. Thank you.

20. Lambda

So to get started, why don't we talk a little bit about the brief history of the cloud? And I'm going to try and teach you all about the history of the cloud in two minutes or less. So if you're a developer like me, you don't really like computer hardware. You know, it's heavy, it's cumbersome, and it breaks. So we've spent the last couple of decades building layers of abstraction over layers of abstraction in code to shield ourselves from the ugly truth of what's really running our code.

And basically, first we decided, "Hey, let's chuck all our hardware in a data centre somewhere, and someone else can be responsible for making sure it's turned on, connected to the network, et cetera." And then, basically, hurray, we no longer have to get out of our chairs. But provisioning the infrastructure still wasn't fun. We had to talk to people; we had to call or email the data centre provider, and it would take them a while to give us access to the new machines. When I worked at Rackspace, the typical delivery timeframe was around ten days: you'd place an order for that database server or web server, and it would take ten days of provisioning before it was ready for you. And then, in 2006, Amazon launched EC2, which was a complete game changer. Suddenly, you could provision machines with API calls from the command line or from your web browser, and it was heaven.

And infrastructure as a service was born, and developers all around the world got really happy and basically cheered, because they didn't have to provision physical servers anymore. AWS is basically infrastructure as code. I mean, think about what you can do now: using API calls, you can provision a virtual machine anywhere in the world and have it do whatever you want. So that was great; that was the birth of infrastructure as a service. But the problem with infrastructure as a service is that it's still running on servers — it's still running on virtual machines and physical machines as well. So you have to worry about what the operating system is doing. You still have to manage Windows, and you still have to manage Linux.

If you have some kind of corruption on the disk, you might lose your operating system; you might need to reinstall it. So infrastructure as a service only goes so far. Then Azure and Amazon came out with platform as a service, and in the Amazon world, that's Elastic Beanstalk. Platform as a service was essentially a really nice way of simply uploading your code, and then Amazon would basically take a look at that code and provision the underlying resources for you. So they would create the web servers for you, for example.

And that was great, because you didn't really have to know all that much about infrastructure. You could just be a developer and upload your code, letting Amazon handle it for you. But we still had the same problem: you're still in charge of operating systems; you're still in charge of Windows and Linux, and Amazon won't do it for you. Then we had containers, and containerisation has been gaining a lot of popularity in the last few years. The most obvious one is Docker. We have a whole bunch of Docker containers on the A Cloud Guru platform. And later on in the course, we'll talk a bit about what Docker is and how Amazon's Docker-based ECS works.

But we won't talk about that just now; we'll get to that later in the course. So containers are isolated and lightweight, but you still have to deploy them to a server, and you still have to worry about keeping your containers running and scaling them in response to load. So you still have something that you have to manage. And then Lambda came along — launched at re:Invent in 2014 — and it was a complete game changer, because suddenly you didn't have to worry about managing data centres, you didn't have to worry about managing infrastructure as a service, you didn't have to worry about managing platform as a service, and you didn't have to worry about managing containers. Amazon takes care of all of that for you. Literally, all you have to worry about is your code. So you take your code, you upload it to Lambda, and then you configure an event trigger.

So what's going to trigger your Lambda function? So what is Lambda? Lambda essentially encapsulates your data centres, hardware, assembly code and protocols, your high-level languages, operating systems, the application layer, and the AWS APIs — all of it is abstracted away by AWS Lambda. You don't have to worry about managing any of it. Literally, all you have to worry about is your code. So the official explanation from Amazon is that AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, or scaling.
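To make that concrete, here's roughly what "upload your code and create a Lambda function" looks like from the command line. This is only a sketch — the function name, IAM role ARN, and runtime version are placeholders, and runtime names change over time:

    # a minimal handler: Lambda calls it with the triggering event and a context object
    printf 'def lambda_handler(event, context):\n    return {"statusCode": 200, "body": "Hello, Cloud Gurus"}\n' > lambda_function.py
    zip function.zip lambda_function.py

    # upload the code as a new Lambda function
    aws lambda create-function \
        --function-name HelloCloudGurus \
        --runtime python3.12 \
        --role arn:aws:iam::123456789012:role/my-lambda-execution-role \
        --handler lambda_function.lambda_handler \
        --zip-file fileb://function.zip

    # invoke it once and look at the response
    aws lambda invoke --function-name HelloCloudGurus response.json
    cat response.json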

And you can use Lambda in the following ways. You can use it as an event-driven compute service, where AWS Lambda runs your code in response to events — and these events could be things such as changes to data in an Amazon S3 bucket or in a DynamoDB table, etc. You can also use it as a compute service to run your code in response to HTTP requests using Amazon API Gateway, or in response to API calls made using the AWS SDKs. And this is exactly what we use at A Cloud Guru. Okay, so we've covered the two ways that Lambda can be used, but I just want to show you it visually to reinforce the message. So the first way is where we react to a particular event or trigger. So let's say we've got our user.

And our user wants to create a meme. So they upload the image to S3, and as soon as they upload it to S3, it triggers an event, and that event invokes a Lambda function. This Lambda function might take our image and the text that we've supplied with it, encode that text onto the image, and then store the resulting image in S3. Our Lambda function might then trigger another Lambda function, which returns the location of the new file back to the user.

It may then trigger another Lambda function, which copies the image in this S3 bucket to another S3 bucket somewhere else in the world. So Lambda functions can trigger other Lambda functions. They can also communicate with other AWS services — you could be sending information to SQS or SNS, which then goes on to trigger further Lambda functions. It's really important to understand that every time you do something with Lambda, it's a separate invocation of Lambda running.

So if we had multiple users uploading multiple memes, it would invoke multiple Lambda functions; the code itself would be the same, but each invocation would be completely separate. And this could be happening on a server in a completely different availability zone, etc. And the great thing about Lambda is that it scales automatically — it scales out, so you don't have to worry about maintaining things like elastic load balancers. In IT we often talk about scaling up versus scaling out. When you scale up, you increase the amount of resources in something: it could be increasing the amount of RAM, going from four gigs to eight gigs or sixteen gigs, for example.

But that's scaling up. Scaling out is where you're adding more and more instances — think of things like load balancing. So as you get more and more load, you scale out rather than just scaling up your EC2 instance. And the great thing about Lambda is that it scales out automatically. You can have one function that is literally a two-line "Hello, world" function, but if a million users hit Lambda at once trying to request that function, a million separate invocations of it will run, and the result will be returned to each user. So that's important to remember as well. The main thing to remember going into your exam is the different types of Lambda triggers, and to see those, let's go take a look at the AWS console. Okay, so I'm here in the AWS console.

If we just go to Compute, we go to Lambda and click on it. I don't know if I have any functions — yes, I've got a few. This is stuff that we're going to be doing in the next couple of lectures when we create a serverless website using Polly, but you probably won't see anything yourself right now. What you can do, though, is go ahead and hit Create a new Lambda function, and let's just choose a blank function. Once we've chosen a blank function, we can configure our triggers. And if you click in here, you can find all the different triggers for Lambda. Now, take note that I'm in Ireland. Things like the Alexa triggers are only available in certain regions, so they are region-specific.

If you want to see the full set of Lambda triggers that are available, just go over to Northern Virginia, basically because Northern Virginia is where they roll out all their product releases first. So, if you want to see what triggers are available globally, simply change your region to Northern Virginia and click here. And it's actually important to remember these going into the exams: you will be tested on what can trigger a Lambda function and what can't. And really, it revolves around the core services.

Things like SNS, S3, Kinesis, and DynamoDB are definitely worth remembering; also Alexa, AWS IoT, CloudFront, CloudWatch (both events and logs), CodeCommit, and, most importantly, API Gateway — which brings us to the next slide. So we previously mentioned how Lambda can be used: we looked at it as an event-driven compute service, and we looked at the example of creating a meme.
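Tying the trigger idea back to that meme example, here's roughly what wiring an S3 bucket up to a Lambda function looks like from the CLI. This is only a sketch — the bucket name, function name, function ARN, and account ID are placeholders:

    # allow S3 to invoke the function
    aws lambda add-permission \
        --function-name MakeMeme \
        --statement-id s3-invoke \
        --action lambda:InvokeFunction \
        --principal s3.amazonaws.com \
        --source-arn arn:aws:s3:::my-meme-uploads

    # tell the bucket to fire the function whenever an object is created
    aws s3api put-bucket-notification-configuration \
        --bucket my-meme-uploads \
        --notification-configuration '{
          "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:MakeMeme",
            "Events": ["s3:ObjectCreated:*"]
          }]
        }'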

You can also use Lambda as a compute service to run your code in response to HTTP requests using Amazon API Gateway. So let's take a look at what that looks like. Here we go: we have our user, and our user is browsing, let's say, in Google Chrome, and they want to go to our website. So they send an HTTP request, and that goes to API Gateway, which then triggers a Lambda function. Maybe the user simply wants to view a page on the website, in which case Lambda returns that response back through to the user. Now, the great thing about this is that it scales out automatically.

So if we have two users sending two HTTP requests, that's going to trigger two lambda functions, and they're both going to respond back with two responses. So this can get a little bit confusing, but it's a really important exam topic. And basically, every time a user sends a request to the API gateway, which is then forwarded to lambda, a new lambda function is invoked.

So if you've got two users sending two HTTP requests, you're going to invoke two Lambda functions. If you've got three, you're going to invoke three Lambda functions, etc. What doesn't happen is a single Lambda invocation responding to all three users connecting at the same time — it's going to be multiple Lambda invocations responding. Now, inside each invocation, the code is identical; that's what the function is. It's one set of code, but it will respond to multiple requests. So that's really important to understand going into the exam. It's also very different from using EC2 with elastic load balancers.

Let's say you've got two web servers behind an elastic load balancer. Those two web servers are the ones that are always going to respond; it doesn't matter how many users there are. If you've got a single piece of code in Lambda, that Lambda function will scale out automatically. So if you've got a million users hitting your website at once, that's going to invoke a million Lambda functions. And interestingly enough, you get a million free invocations per month — we'll get to that in the pricing slide.

So let's talk about what languages are supported with Lambda. We've got Node.js, Java, Python, and C#. And you can see what languages — or what versions of languages — are supported just by going in and creating a Lambda function: choose a blank Lambda function, go ahead, and don't select a trigger. And here you've got your runtime environments: we've got C#, Java 8, Node.js 4.3, Node.js 6.10, Python 2.7, and Python 3.6. Now, these versions will change, but the overall languages — C#, Java, Node.js, and Python — will not. And, as you prepare for your exam, keep in mind which languages are supported by Lambda. So, how much does Lambda cost? It's priced based on the number of requests. And like I mentioned earlier, the first 1 million requests are free, and after that it's twenty cents per million requests.

And believe it or not, the A Cloud Guru platform only started getting a Lambda bill about six months ago, and we've served over a quarter of a million students — I think we're actually almost up to 300,000 now. So Lambda is really disruptive in terms of cost: if you can build your serverless applications using Lambda, you will have a very low cost. You're not paying per minute or by the hour; you are literally paying as somebody requests your content. So it's an amazingly disruptive technology. You're also billed on duration. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 milliseconds, and the price depends on the amount of memory you've allocated to your function. I'm not even going to try to pronounce the exact figure.

It's a ridiculously small amount for every gigabyte-second of use. Of course, this is subject to change, and you will never be asked how much the fees are in the exam. What is important to remember is that duration has a maximum threshold, and that threshold is five minutes. So your function cannot execute for more than five minutes; if your function is going over that, you're going to have to break it up into multiple functions and just get one function to trigger the next one. So why is Lambda cool? Well, no servers: you don't have to worry about maintaining and managing servers. And if you think about that, that means no database administrators, no network administrators, and no system administrators.

We don't need any of that. You can literally just focus on your code. It also provides continuous scaling. So if you've got a million users hitting your API gateway, the API gateway itself will scale automatically to handle that load, as well as Lambda, which will scale automatically to handle that load. So it's not like auto-scaling, which will have a whole bunch of rules and scale up automatically, but it might take a couple of minutes. Lambda scales instantly. And that is why it is seriously cool and powerful, and it's super, super, super cheap. This is why people like us are disrupting other industries.

So if you think about some of our big competitors who provide online education, they have their own data centres, their own system administrators, their own network administrators, etc. At A Cloud Guru, we literally only have our developers, and we put all of our code into Lambda, which scales automatically with our customers. So if you want a practical use case for Lambda, think of the Amazon Echo, sometimes referred to as Amazon's Alexa. Every time you speak to Alexa, you're invoking a Lambda function, and that's basically Alexa speaking back to you. So that's a very practical use case. And in fact, the best way to get hands-on experience with Lambda is to go and create your own Alexa skills.

We actually have a free course on that on the A Cloud Guru platform. We also have a much more in-depth paid course where you go out and develop, I think, ten or eleven skills, and it's so much fun. So if you do have the time, make sure you check that out. So what are my exam tips? Well, Lambda scales out, not up, automatically. Lambda functions are independent: one event equals one function invocation. Lambda is serverless, and it's important to understand which services in AWS are serverless going into the exam — things like S3, API Gateway, Lambda, DynamoDB, etc. EC2 is not serverless, because you're managing EC2 instances. Also remember that Lambda functions can trigger other Lambda functions, so one event can end up triggering any number of functions. You could have someone uploading a picture to S3, which triggers a Lambda function; that could fire off an SNS notification, which triggers another ten Lambda functions. So it's not that one event always equals one Lambda function in total.

You can have multiple Lambda functions involved in one event. Also, keep in mind that architectures can become extremely complex, so debugging can be painful — if you're using API Gateway with DynamoDB and Lambda, and maybe cross-region replication, it can be a nightmare to debug when something goes wrong. For that reason, you've got the AWS X-Ray service, which allows you to trace and debug what is happening. Also remember that Lambda can do things globally, so you can use it to back up one S3 bucket to another S3 bucket, et cetera. Also understand what AWS triggers there are for Lambda — which services can invoke Lambda functions. Remember that Lambda's maximum duration is five minutes. And then, of course, always remember which languages are available for Lambda. So that's it.

We're going to get really hands-on now. We're going to go ahead and build a serverless website — a very simple one that just says "Hello, Cloud Gurus" — using API Gateway and Lambda. And then after that, we're going to do something much more fun: we're going to create a serverless website that takes any text you post to it and converts it into audio using the Polly service. So you might want to start taking some notes and put them into that website, and then you can download your notes as an audio file and listen to them on the go. It's a lot of fun. So if you've got the time, please join me for the next lecture. Thank you.

21. Summary of EC2 Section

So, going into the exams, what do you need to know? Well, you need to know the differences between the pricing models for EC2. So you need to know the differences between on-demand, spot, reserved, and dedicated hosts. In the exam, they'll throw you a bunch of different scenarios and then ask you to choose the best-priced model to fit each one.

So it could be that you're a genomics company or a pharmaceutical company with a huge compute requirement, but you need to minimise cost, and you can time the work for, I don't know, 2:00 a.m. And it might be that you can afford to have your instances terminated, because the application is designed so that it stores all processed data in S3.

So that would be a perfect example of using spot instances, because you want to minimise your costs and can take advantage of times when the spot price is really low because demand is really low. There might be another example where you've got a fairly steady-state web application with 10,000 users. It doesn't spike a lot, but you need to minimise costs. What should you do in that case? You probably want to use reserved instances. They might give you a scenario where you've got a Black Friday sale coming up and you just need to scale up for a 72-hour period and then scale back down again — those would be on-demand instances. And then you might get a scenario where a regulatory body says that you cannot use multi-tenant compute.

In that case, you'd want to be using dedicated hosts. So just understand the different pricing models, and you should do really well in those scenario-based questions. Also remember that there is an important difference in how spot instances are billed. If you terminate the instance yourself, you're still going to pay for the hour. So let's say I had a spot instance running for two and a half hours, and I terminated my job halfway through the third hour.

I'm going to pay for three hours in total. But if AWS terminated it because the spot price went above my bid price, then you get the hour in which it was terminated for free, so you'd only pay for two hours. So do remember those key differences going into the exams as well. Moving on to EC2 instance types: if you can still remember my dodgy anagram or acronym, see if you can name them all to yourself; if not, keep practising. As I keep saying in that particular lecture in EC2 101, you don't necessarily have to know every single EC2 instance type going into either the Solutions Architect Associate exam or the Developer Associate exam.

But for all the other exams, it is really, really crucial that you know the differences between the different instance types, and to be honest, guys, if you're going to be working with AWS and you really want to know what you're talking about, you should always know your different EC2 instance types. So it is a really valuable thing to learn. We used the acronym "Dr McGift Picks", and we'll come to him in a second. But let's just have a look at what each one does. So we had D for dense storage, and the use case for this is file servers, data warehousing,

Hadoop, etc. We then had R, which I like to remember as RAM: memory-intensive apps and memory-intensive databases. We then had M, which is just for general-purpose applications — your application servers usually run on M. We typically use t2.micros throughout every lab in this course, but M is what you would use in production; it's a good all-round virtual machine.

We then have C, which stands for compute optimised. So these are going to be your CPU-intensive apps and databases. We then have the G family, which is graphics-intensive. The difference between G and P is that G is where you're doing video encoding and application streaming, whereas P is more for things like machine learning or bitcoin mining. So G is really about graphics. Moving on to I, we've got IOPS: this is your high-speed storage, so NoSQL databases, data warehousing, and so on. We then have a new instance type, F1, which is for field-programmable gate arrays (FPGAs), and this is hardware acceleration for your code. Very, very new — it just came out at re:Invent 2016.

We then have T, which is what we've been using throughout this whole course — the t2.micros. Just remember that they're the lowest-cost general-purpose servers, usually used for test and development, and they make great little web servers and small database servers. Then we have P. Again, I like to remember P as pictures: general-purpose GPUs. So really, it's about machine learning, bitcoin mining, and that sort of thing — although it will never say "bitcoin mining" in the official documentation, that's what people do use it for. And then we have X, which I like to remember as extreme RAM. So things that are super RAM-intensive, like SAP HANA or Apache Spark, etc. These are memory-optimised instances. So here we go. Here is Dr McGift Picks.

So here he is. He's clearly a doctor — I don't know how, maybe because he's got the glasses on or something. He's clearly Scottish, so that's why he's got the "Mc". But instead of selling burgers like McDonald's, he gives away gift pictures of Scotland. I know it's not brilliant, guys, but it's going to stick with you in the exam.

And if it gets you a few extra bonus points and that's the difference between you passing or failing your exam, then I'm completely unapologetic. Moving on to EBS: we looked at the different EBS volume types. So we have SSD general purpose (GP2), which gives you up to 10,000 IOPS; this is what we use basically all the time in the labs. We then have SSD provisioned IOPS (IO1); the use case for this is when you need more than 10,000 IOPS, and you can provision up to 20,000 IOPS per volume right now — this could change throughout the year.

You're never going to be tested on how many regions there are in AWS or how many IOPS you can provision per volume, etc. — you're never going to be asked for those particular maximums. What's important is that you understand the use cases. We then have HDD throughput optimised (ST1), and this is where you're doing sequential writes.

So this is for frequently accessed, throughput-intensive workloads — usually data warehousing, log processing, that sort of thing. We then have HDD Cold, which is called SC1, and this is for less frequently accessed data; typically you'd use these for file servers. Remember that ST1 and SC1 cannot be used as boot volumes.

They can only be additional volumes attached to your EC2 instance. And then we have HDD Magnetic (Standard), which is essentially the same as HDD Cold except that you can use it as a boot volume. So this is your low-cost, infrequently accessed storage. It could be test and development servers where you're setting up your environment and want to keep costs as low as possible.

It's entirely up to you. And then my final tip for EBS: remember, you cannot mount a single EBS volume to multiple EC2 instances. If you want to share block-level storage between two EC2 instances, you'd use EFS rather than EBS, and we saw that in the labs. OK, so moving on to EC2, what did we learn in the labs? Well, we learned that termination protection is turned off by default; you must turn it on. On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.

So that's the root EBS volume — by that, we just mean the volume on which the operating system is installed. If you go in and terminate your EC2 instance, it is going to delete that root device volume by default; however, you can turn that off, so you can terminate your EC2 instance but still keep the volume on which the OS is installed. Root volumes cannot be encrypted by default; you need a third-party tool, such as Windows BitLocker, to encrypt the root volume. Additional volumes, however, can be encrypted, and we saw that in the lab. Moving on to volumes and snapshots: volumes exist on EBS, and they're basically virtual hard disks. Snapshots exist on S3, and they are point-in-time copies of volumes.

So you can take a snapshot of a volume, and that's going to store a copy of that volume on S3. Snapshots are point-in-time copies of volumes, and snapshots are incremental: only the blocks that have changed since your last snapshot are moved to S3. For that reason, it's always going to take a fair bit of time to do your first snapshot; however, when you take a second or a third, only the changed data is moved to S3. So bear that in mind when you're architecting: if it's your first snapshot, it will take some time to create. Moving on to volumes versus snapshots from a security standpoint.

Snapshots of encrypted volumes are encrypted automatically. Volumes that are restored from encrypted snapshots are encrypted automatically. And you can share snapshots, but only if they are unencrypted — you can't share encrypted snapshots. So if you've got encrypted volumes and you take a snapshot of them, just remember you can't share that snapshot. Unencrypted snapshots can be shared with other AWS accounts, or you can make them public in the marketplace.
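These snapshot tips map onto a couple of CLI calls. A sketch only — the volume ID, snapshot ID, and account ID are placeholders — and sharing only works if the snapshot is unencrypted:

    # take a point-in-time snapshot of a volume (incremental after the first one)
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Pre-change backup"

    # share an unencrypted snapshot with another AWS account
    aws ec2 modify-snapshot-attribute \
        --snapshot-id snap-0123456789abcdef0 \
        --attribute createVolumePermission \
        --operation-type add \
        --user-ids 123456789012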

OK, so moving on to snapshots of root device volumes: to create a snapshot of an Amazon EBS volume that serves as a root device volume, you should stop the instance before taking the snapshot. That's really important to remember. We discussed EBS versus instance store in both a lab and a theory lecture. Instance store volumes are sometimes called ephemeral storage. Instances backed by instance store volumes cannot be stopped, and if the underlying host fails, you are going to lose your data.

You can reboot instance store-backed instances, though. EBS-backed instances can be stopped, and you're not going to lose the data on the instance if it is stopped. You can also reboot both types, and you're not going to lose your data. By default, both root volumes will be deleted on termination; however, with EBS volumes, you can tell AWS to keep the root device volume. So again, it's coming back to that same point I made a couple of slides ago: by default, your EBS root device volume will be deleted on termination. If you go in and terminate that EC2 instance, it's going to delete the EBS root volume automatically.

However, you can set a flag that says "don't delete it" — but only for EBS-backed volumes. That is not possible with instance store volumes: if you terminate an EC2 instance that has an instance store volume, you're going to lose all the data on that volume. Amazon Machine Images: there are a couple of exam tips here, or just one, actually.

An AMI is regional, so you can only launch it in the region in which it is stored. You can, however, copy AMIs to other regions using the console, the command-line tools, or the Amazon EC2 API. Moving on to CloudWatch: standard monitoring is enabled by default for EC2, and it records metrics at five-minute intervals. You can enable detailed monitoring, which records metrics at one-minute intervals, but you do pay extra for that.
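Both of those tips translate to single CLI calls. A sketch — the AMI ID, instance ID, and regions below are placeholders:

    # copy an AMI from one region to another
    aws ec2 copy-image \
        --source-region eu-west-1 \
        --source-image-id ami-0123456789abcdef0 \
        --region eu-west-2 \
        --name "my-webserver-ami-copy"

    # turn on detailed (one-minute) monitoring for an instance; this incurs extra cost
    aws ec2 monitor-instances --instance-ids i-0123456789abcdef0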

CloudWatch is for performance monitoring — just remember that, and don't confuse it with CloudTrail. CloudTrail is for auditing. And again, that's a popular scenario question: they will throw a scenario at you and ask, should you be using CloudWatch, should you be using CloudTrail, or should you be using something else? So just remember that CloudWatch is for performance monitoring and CloudTrail is for auditing. So, what can you do with CloudWatch? Well, you've got dashboards, allowing you to create those awesome dashboards with all the different widgets and see what's going on in your AWS environment. You can create alarms, which can notify you when particular thresholds are hit, and you can also use those alarms for auto scaling. You can create CloudWatch Events, which help you respond to state changes in your AWS resources. And then you can also send your logs to CloudWatch: CloudWatch Logs helps you aggregate, monitor, and store your logs.
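As a concrete example of the alarm tip, here's roughly what a CPU alarm looks like from the CLI. A sketch — the SNS topic ARN and instance ID are placeholders:

    # alarm when the average CPU of one instance stays above 80% for two consecutive 5-minute periods
    aws cloudwatch put-metric-alarm \
        --alarm-name high-cpu-webserver \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --statistic Average \
        --period 300 \
        --evaluation-periods 2 \
        --threshold 80 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:eu-west-1:123456789012:ops-alerts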

In terms of roles, what did we learn in the roles lab? Well, roles are really awesome. They're much more secure than storing your access key and secret access key on individual EC2 instances, and they're much easier to manage from a security perspective than having all those access keys and secret access keys out in the world — if you lose one, you have to disable it and then run aws configure again on all of the EC2 instances where you may be using the command-line tools. So roles are much easier to manage. Roles can now also be assigned to an EC2 instance after it's been provisioned, and you can do this using both the command line and the AWS console. This used to be a very popular exam topic, because it used to be that you could only assign a role to an EC2 instance

when you were provisioning that instance — you couldn't then remove or change the role afterwards. That has now changed: you can definitely assign a new role to an EC2 instance after it's been provisioned, so do bear that in mind going into the exam. You can also update a role's policies at any time — you can add policy documents to a role, and the effect takes place immediately. Roles are also universal: you create a role, and it doesn't matter if you're in Ireland or California, you'll be able to use that role anywhere in AWS. Moving on to instance metadata: this is used to get information about an instance, such as its public IP address or private IP address, and we did that in the lab using curl. You should really remember this URL: http://169.254.169.254/latest/meta-data/.
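Here's what those two tips look like in practice — attaching a role to an instance that's already running, and querying the metadata service from inside the instance. A sketch; the instance ID and instance profile name are placeholders, and newer instances using IMDSv2 require a session token header first:

    # attach an IAM role (via its instance profile) to an already-running instance
    aws ec2 associate-iam-instance-profile \
        --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=MyWebServerRole

    # from inside the instance: list the available metadata categories
    curl http://169.254.169.254/latest/meta-data/

    # grab a specific value, e.g. the public IP, and the user data (bootstrap script), if any
    curl http://169.254.169.254/latest/meta-data/public-ipv4
    curl http://169.254.169.254/latest/user-data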

You can retrieve the instance's user data in the same way, by curling the same base URL with user-data at the end instead of meta-data. Moving on to the Elastic File System: EFS is basically brand new. It supports NFS version 4, and you only pay for the storage you use — you don't have to pre-provision, as we saw in those labs. You don't have to say, "I need an eight-gigabyte volume"; you literally just start writing to EFS, and you pay (at this stage) thirty cents per gig. It can scale up to petabytes, it can support thousands of concurrent NFS connections, and data is stored across multiple availability zones within a single region. And it's very similar to S3 in that it has read-after-write consistency.

So as soon as you put a new object onto EFS, you'll immediately be able to read it. One thing I will say, though, is that because EFS is still only available in Oregon at the time of recording, and it is still in preview, this will not come up in the exam just yet. However, it's important that you do have a good knowledge of it, because I would place a bet that it's going to be included in the exam at some point in 2016. And if you do see an EFS question on your exam, please write to us and tell us, okay? And then we had Lambda. And Lambda is again a fairly new technology.

We at A Cloud Guru love Lambda. Our entire school runs off Lambda, saving us a fortune in hosting costs — we're completely serverless. So Lambda is a compute service that allows you to upload your code and create a Lambda function, and AWS Lambda takes care of provisioning and managing the servers that you use to run that code. So you don't have to worry about things like operating systems, patching, scaling, etc. You literally just put your code up in the cloud, and it will run. You can use Lambda in the following ways:

As an event-driven compute service, where Lambda executes your code in response to events. These events could be changes to data in an S3 bucket or a DynamoDB table — we gave the example of somebody uploading a picture to an S3 bucket using your web application, and Lambda then goes in and maybe moves that picture over to another bucket, watermarks it, or does a variety of other things.

You can also use Lambda as a compute service to run your code in response to HTTP requests using API Gateway, or in response to API calls made using the AWS SDKs — and that's exactly what we do at A Cloud Guru. So if you are watching this on the A Cloud Guru platform, on your mobile device, or on your laptop, you are literally interacting with Lambda as we speak. So that's it for this lecture, guys. If you have any questions, please let me know. This has been a long section, but it is a really important one for the exam, and I'm sure you've learned an awful lot. Thanks a lot, guys, and I'll see you over in section six. Thank you.

AWS Certified Developer - Associate certification practice test questions and answers, training courses, and study guides are uploaded in ETE file format by real users. Amazon AWS Certified Developer - Associate certification exam dumps and practice test questions and answers are the best available resources to help students study and pass at the first attempt.


Comments (the most recent comments are at the top)

Mousa mahmoud
Jordan
Feb 20, 2024
I am thinking of buying the premium bundle, but I still have doubts about this decision... I used some free materials and they were okay though... Is there any money-back guarantee if the premium file doesn't help me?
terry
Nepal
Feb 17, 2024
@harry, why are you talking about money? Check this website first — the files offered are FREE! Download and practice them. I've used this website in my prep process before (not for the first time) and can say the dumps offered are updated and valid. I passed the Amazon exam thanks to revising with the AWS Certified Developer - Associate dumps, and they really made my prep process more effective. I managed to complete all 65 exam questions within the allocated time. So I know what I'm talking about... simply try them out.
Pratheesh Tk
Unknown country
Feb 11, 2024
Are these dumps valid? I am going to write my exam this month. Please comment
harry
United States
Feb 11, 2024
Has anyone passed the exam with the help of these files? I really need proof before I start using these materials, because using invalid ones is a waste of my limited time. Please respond.


Read comments on Amazon AWS Certified Developer - Associate certification dumps by other users. Post your comments about ETE files for Amazon AWS Certified Developer - Associate certification practice test questions and answers.
