
SAP-C02 Amazon AWS Certified Solutions Architect Professional – New Domain 5 – Continuous Improvement for Existing Solutions part 9

  1. SNS – Part 02

Hey everyone and welcome back. Now in the earlier lecture we discussed the basics of how SNS works. So today we'll go ahead and create our own SNS topic and we'll look into how we can subscribe various endpoints to it. In order to do that, go to the SNS dashboard from the services, and once you are in the SNS dashboard, click on Topics. This will give you a list of topics. By default, if you are new, you won't have any topic, so the first thing you'll have to do is click on Create new topic. For the topic name I'll say kplabs-notification and I'll click on Create topic. So this is the topic which got created. Now just click on the ARN and you will be redirected to the console page for your specific topic.
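If you prefer the command line, a minimal sketch of the same step with the AWS CLI might look like this (it assumes your region and credentials are already configured; the topic name matches the one used in this demo):

    # Create the SNS topic used in this demo and note the TopicArn it returns;
    # we will need that ARN for subscriptions and for publishing messages.
    aws sns create-topic --name kplabs-notification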

Now the first thing that you will have to do is create your endpoints. Endpoints are the ones to which the messages will be delivered. So let's click on Create subscription, and there are various types of subscriptions which are available depending upon your use case. For our test purpose, we'll be using two endpoints: one is email and the second is SQS. I won't use SMS for now because many times these videos go up on YouTube, and if you remember, in the last lecture I had blurred out a certain portion because there are a lot of spammy marketing calls that happen, which I really wanted to avoid. So in case any of the students wants to connect, I'll be more than happy to share my details, but as far as the spammy calls are concerned, I'll just avoid them.

So let's start with email. I'll click on email and you have to give your endpoint, which would be your email address. So I'll enter my instructor email address and click on Create Subscription. Now what it will do is send an email for confirmation. So I'll open my inbox, and here you will see it is asking me to confirm the subscription. I have to click on this specific link and my email will be subscribed to the SNS topic. Perfect. So this is one aspect; the second aspect would be the SQS queue. I already have an SQS queue called kplabs-demo which is created. Now, in order for the subscription to work, remember that for email we had to click on the link to subscribe our email address to the SNS topic. When it comes to SQS, the way is pretty easy.
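As a rough CLI equivalent, an email subscription can be created like the sketch below (the topic ARN and email address are placeholders; the email endpoint still has to be confirmed by clicking the link in the confirmation mail):

    # Subscribe an email address to the topic; SNS sends a confirmation mail and
    # the subscription stays in "PendingConfirmation" until the link is clicked.
    aws sns subscribe \
      --topic-arn arn:aws:sns:us-east-1:111122223333:kplabs-notification \
      --protocol email \
      --notification-endpoint you@example.com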

So go to the SQS console, select the kplabs-demo queue, click on Queue Actions and select Subscribe Queue to SNS Topic. Now in the Choose Topic drop-down it will ask you for the topic that you want to allow. I'll select the topic name and click on Subscribe. Perfect. So the subscription has been applied, and if you now go to the Permissions tab, you'll see a permission that allows the SNS topic to send messages to this queue; this is the ARN of the SNS topic which is allowed to send messages. Perfect. So now let me click on Refresh, and you'll see there are two subscription endpoints which are created: one is SQS and one is email. In case you want to do an SMS subscription as well, I'll show you how it works. You click on SMS and then you give your phone number including the international code. So if you belong to India, you have to put +91 followed by your phone number, which would be any ten digit phone number, and then click on Create subscription.
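If you were to do the same thing from the CLI instead of the Queue Actions menu, a sketch would look roughly like this (the ARNs are placeholders; note that the console action also adds the required queue access policy for you, whereas with the CLI you would set that policy yourself, for example with aws sqs set-queue-attributes):

    # Subscribe the SQS queue to the SNS topic by its queue ARN.
    aws sns subscribe \
      --topic-arn arn:aws:sns:us-east-1:111122223333:kplabs-notification \
      --protocol sqs \
      --notification-endpoint arn:aws:sqs:us-east-1:111122223333:kplabs-demo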

And as far as the phone number is concerned, you don't really have to validate it like we did for email; that validation is not required. Perfect. So now that we have our endpoints created, I'll click on Publish to topic. The subject would be "second mail" and the message would be "This is our second SNS message". And I'll click on Publish. So now it says that the message has been published. Perfect. So let's verify if the message has been received. Earlier in SQS you saw the messages available count was one; I'll click on Refresh and the messages available is now two. So if you click on View/Delete Messages and poll for 10 seconds, this is the SNS message that has been received. In the message column you will see "This is our second SNS message", and the subject is "second mail". Perfect. So this is what the SQS side is all about.
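A CLI sketch of the publish step, and of pulling the resulting message back out of the queue, might look like this, again with placeholder ARNs and queue URL:

    # Publish a message to the topic; every subscribed endpoint receives a copy.
    aws sns publish \
      --topic-arn arn:aws:sns:us-east-1:111122223333:kplabs-notification \
      --subject "second mail" \
      --message "This is our second SNS message"

    # Poll the subscribed SQS queue to verify the message arrived.
    aws sqs receive-message \
      --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/kplabs-demo \
      --wait-time-seconds 10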

Second, if you now go to your email, you'll see the subject "second mail" and the message "This is our second SNS message". Perfect. So now that you have done this, you can integrate SNS with a lot of other services like CloudWatch. For example, if the CPU utilization of an EC2 instance is greater than 90%, then a CloudWatch alarm will be triggered, and that alarm is connected with the SNS topic, which would send you an email or an SMS saying that in this particular instance the CPU utilization is greater than 90%. So there are a lot of things that you will be able to do with SNS. I hope this lecture has been informative for you and again I would really encourage you to try this out. Don't forget the SMS part because it is quite fun. Try this out and I look forward to seeing you in the next lecture.
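To make the CloudWatch integration concrete, here is a hedged sketch of an alarm whose action is the SNS topic from this demo; the instance ID, account ID and period are placeholders you would adapt to your own environment:

    # Alarm when the average CPU of one EC2 instance stays above 90% for 5 minutes;
    # when it fires, CloudWatch publishes a notification to the SNS topic.
    aws cloudwatch put-metric-alarm \
      --alarm-name kplabs-cpu-high \
      --namespace AWS/EC2 \
      --metric-name CPUUtilization \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistic Average \
      --period 300 \
      --evaluation-periods 1 \
      --threshold 90 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:111122223333:kplabs-notification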

  1. Revising the ELB Listener configuration

Hey everyone and welcome back to the Knowledge Pool video series. In today's lecture we are going to speak about ELB listeners. So let's get started and look into what ELB listeners are all about. Just to revise, whenever we configure an ELB, we must configure one or more listeners. So, I just wanted to show you that whenever you create a load balancer, on the very first page itself you see there is a listener configuration. In the listener configuration we must select the configuration that we need according to the use case that we have for our organization. Now, the listener configuration is basically divided into two parts: one is for the front end connection and the second is for the back end connection.

So if you see the first two columns, which are the load balancer protocol and the load balancer port, these are the front end configuration, and then you have the instance protocol and the instance port, which are for the back end connection. So if you just have HTTP 80 / HTTP 80, what this basically means is that the load balancer will be listening on port 80 and it will forward all the traffic to the back end on port 80. So let's just revise before we go further. I have my load balancer which is up and running, and if you look into the listeners, I have one listener configuration where the load balancer port is 80 and the instance port is also 80. What this basically means is that the load balancer is listening on port 80 and whatever traffic it gets, it will forward to the back end instance on port 80. So I have an instance which is configured over here and I am also connected to the instance.
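For reference, a minimal CLI sketch that creates a classic load balancer with exactly this HTTP 80 to 80 listener could look like the following; the load balancer name and availability zone are assumptions for illustration:

    # Classic ELB with a single front-end/back-end listener: HTTP 80 -> HTTP 80.
    aws elb create-load-balancer \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
      --availability-zones us-east-1a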

And here you will see that I have my NGINX which is running on port 80, and this is where the load balancer will be sending the traffic to. So if I just open this up, you see I have a default NGINX page which is up and running. This is the reason why we should have a proper listener configuration, both for the front end and for the back end. So if my NGINX were running on port 8080, then I would have to change the instance port from 80 to 8080. Perfect. So these are the basics about the listener configuration part. Now, there are two major types of listeners that we should be aware of: one is the HTTP and HTTPS based and the second is the TCP and SSL based. So again, this is quite important.
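If the back end really did move to port 8080, a rough CLI equivalent of that change on a classic ELB would be to replace the listener, something like the sketch below (the classic ELB API works by deleting and recreating listeners rather than editing them in place; the names are placeholders):

    # Remove the existing listener on load balancer port 80 ...
    aws elb delete-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --load-balancer-ports 80

    # ... and recreate it pointing at the new back-end port 8080.
    aws elb create-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080"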

Whenever you go ahead and create a listener, there are four protocols which are present: HTTP, HTTPS, TCP and SSL. So the first type that we broadly classify is the HTTP and HTTPS based, and the second type is the TCP and secure TCP (SSL) based. These are the two types into which the listeners are broadly classified. Let's look into the HTTP and HTTPS based type. Depending upon the use case that you have in your organization, you need to select a specific type of listener. So let's look into some of the use cases and the configuration parameters that we must set in order to achieve them. Now, one of the very simple use cases is the basic HTTP load balancer. This is something that we had already seen, where the load balancer is listening on port 80 and it forwards the traffic to port 80 of the instance.

So this is a very basic HTTP load balancer. For that, the front end protocol that must be configured is HTTP and the back end protocol that must be configured is also HTTP. So this is what the basic HTTP load balancer looks like. On the left hand side you have the front end and on the right hand side you have the back end. In this setup the ELB will ideally be listening on port 80 and it will forward the traffic to port 80 on the back end instance. So this is a very basic HTTP load balancer. Now, the second type of use case is a website using the ELB to offload the SSL decryption. This is basically a website which uses HTTPS, and we can use the ELB to offload the SSL encryption and decryption related functionality. In that case, the front end protocol should be HTTPS and the back end protocol should be HTTP.

So what do I mean by this? When you use HTTPS to HTTP, that means the ELB is responsible for the entire SSL negotiation, and thus the SSL certificate must be configured at the ELB end. So from the client to the ELB, the entire traffic will be encrypted. Now, from the ELB to the back end instance we have the normal HTTP protocol. So here the data is in plain text, but from the client to the ELB the data is in cipher text. So whenever you select HTTPS as the front end protocol, the SSL certificate must ideally be deployed on the ELB. Let's look into how that would work. So when I select HTTPS over here, you see that we have to configure the SSL certificate as well as the cipher related parameters.

So here you see it is asking me for the private key, the certificate body as well as the certificate chain. So whenever you use HTTPS as the front end protocol, that means the ELB will be responsible for the SSL negotiation. All the SSL negotiation will be the responsibility of the ELB and the instance does not have to worry about all those things. So this is the second type of configuration. Now the third type of configuration is specifically for a website needing end to end encryption. I'll give you one use case, because in the organization that I used to work with, we had this approach where from the client to the ELB everything was encrypted.
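A hedged CLI sketch of this SSL-offloading listener on a classic ELB could look like the following; the certificate ARN is purely a placeholder for your own ACM or IAM server certificate:

    # HTTPS on the front end (ELB terminates SSL), plain HTTP to the instances.
    aws elb create-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"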

And from the ELB to the back end instance, things were in plain text. Now, we went through a compliance audit; there is one famous compliance standard called PCI DSS, and the auditor actually asked us to encrypt this portion as well. So not only the client side portion, but the entire traffic on the right hand side had to be encrypted too. There are a lot of use cases like this that you might have to go through, and this is the reason why you have the third use case of a website needing end to end encryption. So what you can do over here is have HTTPS on the front end, where there will be an SSL negotiation, have HTTPS on the back end as well, and optionally add back end authentication over here.

So the SSL certificate must be deployed on the ELB as well as on the back end. You have to basically deploy a certificate both on the ELB and on the back end instance. So from the client to the ELB the traffic will be encrypted, and from the ELB to the instance the traffic will be encrypted. Nowhere in the network does the traffic travel in plain text; everywhere the traffic is encrypted. So this is a high level overview of the HTTP and HTTPS type of load balancer listeners. The second type is the TCP and SSL based one. Now, again, within TCP and SSL there are various configuration parameters. So what we'll be doing in the upcoming lectures is going into much more depth and understanding more about the difference between the HTTP and the TCP based listeners.
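For completeness, a sketch of the end-to-end encryption variant is below; it assumes the back end instances also serve HTTPS on port 443, and the certificate ARN is again a placeholder (back end authentication policies would be configured separately and are not shown here):

    # HTTPS on both legs: the ELB re-encrypts traffic towards the instances.
    aws elb create-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"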

  1. ELB Listeners – Understanding HTTP vs TCP listeners

Hey everyone, and welcome back to the Knowledge Pool video series. In today's lecture we are primarily going to look into one of the major differences between HTTP and TCP listeners. In the earlier lecture we were discussing the various listener types available in the ELB, and we had paused at the listener types of TCP and SSL. So one of the questions which generally comes up during interviews specifically is: what would be the difference between using the HTTP protocol versus the TCP protocol for port 80 in an ELB? This is one of the very famous questions that I have seen, specifically for senior DevOps positions, and many people cannot answer it.

And this is a very basic thing that we should understand. So let's look into what this question means. In the first configuration you have the load balancer protocol as HTTP and the load balancer port as 80; this is the front end connection that we had discussed earlier. And on the back end connection you have the instance protocol as HTTP and the instance port as 80. Now, that is the first part. In the second configuration you have port 80 on both the front end and the back end; however, the protocol is now TCP, both on the front end connection and the back end connection. So the question is: what would be the difference between both of these approaches? And basically, if you have a very simple website, both of them would work perfectly.
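To put the two configurations from this question side by side, here is a hedged CLI sketch; the load balancer name is a placeholder, and you would pick one of the two listeners for a given port, not both at once:

    # Option 1: layer-7 HTTP listener - the ELB parses and can modify the HTTP request.
    aws elb create-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80"

    # Option 2: layer-4 TCP listener - the ELB just relays the byte stream untouched.
    aws elb create-load-balancer-listeners \
      --load-balancer-name kplabs-elb \
      --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80"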

However, it is necessary to understand the difference, which will help us build a proper base on how the elastic load balancer works. It will also help us in our exams. So let's look into the first aspect, which is the HTTP listener. You have the client and you have the back end instance. Whenever a client sends a request to the elastic load balancer, the request might look something similar to this: it is an HTTP request and it has certain headers of the HTTP protocol.

Now, the elastic load balancer will initiate a new connection to the back end instance. You see there is a colour difference: you have a grey colour and you have a blue colour. So the same connection is not forwarded here; a new connection is made to the EC2 instance. Now, within this connection, if you look at the request that the elastic load balancer sends to the back end instance, there are more headers which are added over here. These headers are added by the elastic load balancer, and this is one of the major points to remember when you talk about HTTP listeners. The instance will respond back and the ELB will send the response back to the client.

So when you talk about HTTP listeners, the ELB can modify the HTTP headers which have been sent by the client to the server. Now, when you talk about TCP listeners, whenever a client sends a request, again you have request headers, but the ELB will not modify any request headers over here; it will forward the same headers which it received from the client to the EC2 instance running behind the scenes. And this is one of the major differences between HTTP and TCP listeners. So there are more things to understand, but let's look into how exactly that would really look. I'll log in to the EC2 instance, but first I'll show you the overall diagram.

So what we have is an ELB over here and an EC2 instance which is running. This is the ELB, and if you look under Instances there is one EC2 instance which is connected. So as the first part, let's look into the HTTP listener and see whether the headers are modified or not. Let's try it out. What I'll do is start tcpdump on the EC2 instance. Let me start tcpdump, and from one more terminal I'll do a simple curl request to the IP address of the EC2 instance first, so that we can see what the difference is. I'll copy the IP address and paste it here. Once I press Enter, you see I have got the response back.

Now, if you look into the tcpdump output, what I have received is a simple GET request on HTTP/1.1 and there are headers related to the host, the user agent and accept. So these are the four lines that I received when I made a direct connection to the EC2 instance. Perfect. So let's start tcpdump again, and this time instead of sending the traffic directly to the EC2 instance, I'll send the traffic to the load balancer. I'll open up the load balancer, start my tcpdump again, run the same command and press Enter. And now, if you see, this is the request which I received from the ELB. The headers which were present earlier, when we made the direct connection, are still there; if you just scroll up a bit, these are the HTTP request headers which were present when we initiated the direct connection.
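As a rough sketch of the commands used in this part of the demo (the interface name, instance IP and ELB DNS name are placeholders you would replace with your own values):

    # On the EC2 instance behind the ELB: capture HTTP traffic in ASCII.
    sudo tcpdump -A -i eth0 port 80

    # From another terminal: first hit the instance directly, then via the ELB,
    # and compare the request headers that show up in the tcpdump output.
    curl http://<instance-public-ip>/
    curl http://<elb-dns-name>/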

But now, when we initiated a connection via the elastic load balancer, there are certain headers which are added by the ELB itself. You see X-Forwarded-For, X-Forwarded-Port, X-Forwarded-Proto, as well as Connection: keep-alive. So this is the basic behaviour of HTTP listeners: HTTP listeners can add their own headers, and that is something you have to remember. Now the second one is the TCP listener. We discussed that TCP listeners do not add any headers; they just pass along the headers which were sent by the client. So let's look into that as well. If you look into the listener types, the first listener is the HTTP one and the second listener is the TCP one. So let's try out the TCP one as well. I'll run tcpdump again and I'll run the curl command, this time on port 8080.
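The TCP listener check follows the same pattern; a sketch of the commands is below, assuming the TCP listener uses port 8080 on both the front end and the back end (again, placeholders for your own values):

    # On the EC2 instance: capture the back-end traffic for the TCP listener.
    sudo tcpdump -A -i eth0 port 8080

    # From another terminal: request through the ELB's TCP listener; the forwarded
    # headers should be identical to what the client sent, with no X-Forwarded-*
    # headers added by the load balancer.
    curl http://<elb-dns-name>:8080/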

So this is where my TCP listener is listening. And now you see here the headers are exactly the same: the load balancer has not added any headers such as X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Port or Connection: keep-alive. Whatever headers the client sent, the elastic load balancer has forwarded the same headers to the back end EC2 instance. And this is one of the important points to remember when it comes to the difference between the HTTP protocol and the TCP protocol. So this is one of the differences. Again, in the upcoming lecture we'll look into a few more differences and into where you should be using TCP and where you should be using the HTTP protocol. So I hope this basic understanding has been clear to you and I look forward to seeing you in the next lecture.