Pass Mulesoft MCPA - Level 1 Exam in First Attempt Guaranteed!

Get 100% Latest Exam Questions, Accurate & Verified Answers to Pass the Actual Exam!
30 Days Free Updates, Instant Download!

Verified By Experts
MCPA - Level 1 Premium Bundle


  • Premium File 58 Questions & Answers. Last update: Jul 13, 2024
  • Training Course 99 Video Lectures


Mulesoft MCPA - Level 1 Practice Test Questions and Answers, Mulesoft MCPA - Level 1 Exam Dumps - PrepAway

All Mulesoft MCPA - Level 1 certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the MCPA - Level 1 (MuleSoft Certified Platform Architect - Level 1) practice test questions and answers; the exam dumps, study guide, and training courses help you study and pass hassle-free!

Non-Functional Requirements of APIs

1. Introduction (Section 5)

Hi. Welcome to the new section of the course. In this section, we will thoroughly learn how to enforce and meet various non-functional requirements (NFRs) in your projects. NFRs, as you already know, could be things like enforcing rate limiting on APIs, applying IP whitelisting or blacklisting, applying threat-protection policies, enforcing client credentials, authentication schemes, tokens, and many more. Because this is a common IT practice — calculating NFRs, applying them, and meeting the SLAs attached to them — you might be aware of them already, right?

So in this section, we will look thoroughly into these aspects and how they can be applied on the Anypoint Platform — what capabilities the Anypoint Platform supports in order to meet the non-functional requirements. Like I said, the most important NFRs that many organisations are concerned with are the security aspect and the performance aspect, right? And, obviously, proactive system-stability efforts as well. They always try to meet these first, along with the others we discussed previously, such as rate limiting, IP whitelisting, and so on.

So of course some of them are security-related, right? Security, performance, and proactive system stability are all legitimate concerns. What are these particular concerns? For example, a security concern could be making sure that all transactions on the wire go over the HTTPS protocol only; there should be no plain HTTP. Organisations want to enforce such rules. And a performance-related NFR could be controlling the throughput, saying, "Okay, there can be a maximum of 1000 requests per second, not more."

Okay? Even for those thousand requests per second, the organisation wants to impose an SLA stating that when transactions are at their peak of a thousand requests per second, the maximum allowed response time should be 500 milliseconds, the average should always stay close to 200 milliseconds, and any transaction that takes more than 500 milliseconds should be timed out.
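As a rough sketch, an SLA like that can be expressed as a check over observed response times (thresholds taken from the example above; the function itself is illustrative, not a platform API):

```python
# Illustrative SLA check: max 500 ms per transaction, average near 200 ms.
MAX_RESPONSE_MS = 500
TARGET_AVG_MS = 200

def check_sla(response_times_ms):
    """Return (compliant, violations, average) for a batch of samples."""
    violations = [t for t in response_times_ms if t > MAX_RESPONSE_MS]
    average = sum(response_times_ms) / len(response_times_ms)
    compliant = not violations and average <= TARGET_AVG_MS
    return compliant, violations, average

# No sample breaches 500 ms, but the 255 ms average misses the 200 ms
# target, so this batch is flagged as non-compliant.
ok, slow, avg = check_sla([120, 180, 230, 490])
```

In practice, the timeout part of the SLA (cutting off any call past 500 milliseconds) would be enforced live by the runtime or a policy, not by an after-the-fact check like this.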

Okay? These are some of the SLAs and NFRs that you've probably seen in a lot of projects, right? So this is what we will try to codify in this section. We'll see how all of these NFRs can be accomplished using Anypoint API Manager. Where NFRs come into play is in the Anypoint Platform's API Manager — this is the place where we can take control of all these aspects. We will understand how Anypoint API Manager controls API invocations. When an API client sends a request, what happens first — does it go directly to your implementation? Or are there some aspects that will be taken care of before the implementation begins? If yes, then who controls it? Obviously, API Manager controls this invocation. Okay. And then we will see how to apply API policies to meet the non-functional requirements or constraints on those API invocations. Because each of these NFRs is enforced on APIs, in the API world, via policies. We'll see about that, as well as where the best place is to enforce these API policies, because Anypoint API Manager actually supports enforcing API policies in two places.

One is the API implementation, where we actually wrote the code that serves the requests and responses of the API. We have the option of enforcing these policies on the API implementation itself. And there is another way, which is the API proxy. So what you can tell the platform is: please don't touch my API implementation.

I want to have a proxy in front of the implementation, and I want to enforce my NFRs or policies on the API proxy and leave the API implementation as is. Okay, functionality-wise, both of them work the same way; the behaviour is identical. But there is a debate over when to go for which option. Similar to how we compared the deployment models, we will brainstorm when and how we should enforce a policy: on an API proxy or on the API implementation?

Okay? And then we will also see some demonstrations of how we can register clients for the APIs and how those clients have to pass their client credentials if client-credential policies are enforced on the API. Such interactive demonstrations will be there so that you can see, not just theoretically but practically, how those policies come into play.

And importantly, we will also go through the guidelines for API policies and understand which policies are suitable for which layers. Because the new IT operating model and the API-led architecture proposed by MuleSoft are three-layered — as you already know, you have heard about the Experience layer, Process layer, and System layer many times, just like we discussed in other contexts. Here too, the best practises will be discussed in terms of what kind of API policies can be applied to, or fit, each of these layers. Okay? So this is what we will thoroughly cover in this particular section. Understand and digest all the stuff. Alright? Happy learning. Let us meet again in the next lecture.

2. NFRs For Our Business Process

Hi. So let us now see the NFRs for our Create Sales Order business process. What you're seeing in front of you is the last, final version that we have worked on. The initial version was the functional implementation, with the Experience APIs orchestrating the business process steps like Validate and Create Sales Order. The design was then revisited to improve reusability: we did the Process-layer and Experience-layer redesign, where we moved some of the components into the Process layer to achieve reusability, and the Experience API just calls the Process API once, right? So this is what we have done before. Now what we are going to do is address some of the NFRs for this particular business process. Okay? If you look at this particular implementation, the API implementation for this business process is functional, okay? However, it has great potential for more Experience APIs in the future. Already, there are three consumer interfaces in the Experience layer: one based on REST, one on the web, and one on files. So the organisation can receive orders over any of these three consumer channels, correct? And there could be more possibilities in the future as the organisation grows.

There could be a mobile app, with calls coming directly from mobile applications via an Android or iOS app. Or there could be bot applications, where bots intelligently create the orders. There are many other possibilities, so the demand can increase. Such changes would also change the NFRs significantly, because the demand grows and the number of requests grows with it. And if you look downstream, there are actually a lot of downstream steps: the Experience API calls the Process API (Create Sales Order), which in turn calls the Validate Process API, and the Validate Process API again internally makes downstream calls to three System APIs.

So even the simple business process we have taken has that many calls, and in a complex one there could be many more calls involved, right? Our validations only check the customer, the item, and the shipment location — a straightforward, small scenario we picked — but there could be more complex ones. And if it is such a big three-layer application with a lot of downstream steps, then meeting the throughput can be very tricky, right? So, for example, let's set some throughput for our small Create Sales Order business process. Say the customer has asked that for this business process they expect at least 100 requests per second, okay? Without jeopardising response times or anything else — they should always get 100 requests per second. This is difficult, right? It's very aggressive. Performing this step synchronously, as we do today in the current functional design, with all these Process calls and downstream System calls, would take too long to achieve that aggressive throughput.

Okay, so what we need to do for this is either plan to make some calls asynchronous, or remodel the design. But even if we make it asynchronous, that alone is not enough. By making it asynchronous, we may improve the consumer's experience by returning responses fast, but the background applications that asynchronously process the order request could still take a long time, depending on how complex your design or orchestration is. Right? They will still consume a lot of CPU, memory, and so on. We may want to avoid that too — otherwise we give the consumer a "gold-plated" experience while choking the layers behind, like the Process and System layers. We don't want that, right? So we have to find a way to make the execution times faster as well. Another NFR we can set is that the entire end-to-end communication should be secured via the HTTPS protocol. So these are some of the NFRs: the throughput, the faster processing, and the security of the end-to-end communication.

So what can we do for this, and how can we do it? Let's see. In order to achieve this, what we must do is augment our current design. What we have is not wrong, but we need to augment or amend it to add these NFRs, so that we can meet them and have even more powerful APIs with minor changes. The implementation of the Create Sales Order process should now meet the NFRs we discussed on the previous slide: throughput, faster processing, and security. Okay, so let's check the first one, which is the throughput. In order to meet the throughput, we need to make one small design change in the Experience layer: introduce asynchronous processing. Okay? If we process the sales-order creation asynchronously, then we just accept the order in the Experience layer, immediately acknowledge the response back to the consumer, and leave the actual processing to the background system.

So how will we respond back to the end user? We will do something like what you may have seen in many of your projects before. We accept the order and generate a unique order sequence number in the Experience layer — unique across the cluster. Then we immediately give it back to the consumer, saying, "Hey, this is your order number, your reference number." Okay? Then we forward the request to the background Process layer later, which actually goes and creates the order in the ERP and generates the ERP order number. However, the order number we return to the consumer is generated by the Experience layer, yet unique across the order-generation program.

Okay? So in the previous design, we were actually taking the request synchronously, going to the Process layer, performing all the validations, then creating the order in the ERP and responding back, right? Instead, what we do now is accept the order — but we won't just blindly generate a number and return it. We do only some prerequisite things, like validating the client, which is quick, and then we immediately generate a number and give it back. We don't wait for the ERP to create an order number. We achieve this by selecting a messaging system to trigger the asynchronous processing, and that asynchronous processing using the messaging system should not experience any message loss. Of the messaging systems MuleSoft provides out of the box, one is Anypoint MQ. We can use that, which would necessitate additional licencing and costs, or we can use Mule runtime persistent VM queues, which can be implemented in Cloud Hub. So, if you don't have the licence or don't want to pay for it, Anypoint MQ would be a new component that you'd have to introduce at extra cost; alternatively, the Mule runtime persistent VM queues come as part of the Mule implementation logic and are not an extra cost.
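The accept-then-acknowledge pattern described above can be sketched roughly as follows (an in-memory queue stands in for Anypoint MQ or a persistent VM queue; all names are illustrative):

```python
import queue
import uuid

# An in-memory queue stands in for Anypoint MQ / a persistent VM queue;
# a real deployment needs a durable queue so no message is lost.
order_queue = queue.Queue()

def accept_order(order):
    """Experience-layer handler: cheap checks only, acknowledge fast,
    defer the ERP work to a background consumer of the queue."""
    if not order.get("items"):
        return {"status": "rejected", "reason": "empty order"}
    order_ref = str(uuid.uuid4())            # unique across the cluster
    order_queue.put({"ref": order_ref, "order": order})
    return {"status": "accepted", "orderRef": order_ref}

# The caller gets an immediate acknowledgment with a reference number;
# the Process-layer orchestration happens later when the queue is drained.
ack = accept_order({"items": ["widget"], "customer": "ACME"})
```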

OK? Now, if we have to do a sync–async bridge like we have discussed — say we immediately give a uniquely generated order number back to the consumer, but once the order is properly created in the ERP, an ERP order number is generated asynchronously in the background — then we have to link the consumer order number we have given out to this ERP order number. That correlation has to be there; otherwise, tomorrow, when someone queries that consumer order number against the supply-chain company's search database or similar, they will not find it — it would be a dummy number. So a correlation should be established between the order number generated and given to the consumer and the original ERP order number. Right? To do that, what kind of things can we use? We need some kind of persistence mechanism to store the correlation information for the asynchronous processing. The out-of-the-box persistence mechanism that Cloud Hub provides is the Mule runtime Object Store; we can use the Mule runtime Object Store to get this persistent behaviour.
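The correlation persistence can be sketched loosely like this (a plain dict stands in for the Mule runtime Object Store, which in reality is persistent; function and field names are illustrative):

```python
# A plain dict stands in for the Mule runtime Object Store (the real one
# survives restarts); function and field names are illustrative.
object_store = {}

def record_accepted(consumer_ref, transaction_id):
    # Phase 1: the Experience layer stores the reference it handed out.
    object_store[consumer_ref] = {"txn": transaction_id, "erpOrderId": None}

def record_erp_order(consumer_ref, erp_order_id):
    # Phase 2: the background flow links the ERP number once it exists.
    object_store[consumer_ref]["erpOrderId"] = erp_order_id

def lookup(consumer_ref):
    # Later queries on the consumer's number resolve to the real ERP order.
    return object_store.get(consumer_ref)

record_accepted("ORD-123", "txn-9")
record_erp_order("ORD-123", "ERP-77")
```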

So the order number will be stored in the Object Store as a key, for example, and the asynchronous trigger's transaction ID can be stored as the value. Once the ERP order number has been generated, it can be saved against that same key as well. Understood? This way, we can improve throughput and try to reach 100 requests per second for the organisation. Okay, so what's next? The next thing is faster processing and execution times. Once again, 100 requests per second is quite aggressive, isn't it? Just doing this is not enough. Yes, we have made the consumer-side behaviour asynchronous, but we are still doing things like validating the client ID and checking the item and the shipment location.

So we're still making calls for those. How can we make it even better, to meet this 100-requests-per-second NFR? This particular NFR is not on the Experience layer alone. Okay? If they are asking for 100 requests per second at the Experience-layer level, it implies that the NFR applies to all downstream APIs, indirectly or implicitly. That is, whenever the Experience-layer API is called, the underlying Process and System APIs are called too, so it is applicable all the way down. When I say applicable, you must be aware that it is not a straightforward application: the NFR does not apply to all APIs as-is. It is not simply 100 requests per second on the Experience layer, 100 requests per second on the Process layer, and 100 requests per second on the System layer. No, that is not how it works. Because if the NFR were the same for all three layers, then what about the hops and the latencies?

If the System layer is also capped at 100 requests per second, then the Experience layer cannot sustain 100 requests per second, because there will be some more orchestration in between. The NFR won't be satisfied. So you have to break it down. If we must sustain 100 requests per second in the Experience layer, that means you have to set the targets even more aggressively for your downstream layers. That is, for the Process layer, you must set a target of, say, 150 or 200 requests per second in order to meet the Experience layer's 100 requests per second. And for the System layer, you can be even more aggressive, because due to the orchestration, a single Process-layer API may call three or four System-layer APIs. So, say, 300 requests per second at the System layer will let the Process layer meet its 200 requests per second, and the Process layer in turn will satisfy the 100 requests per second in the Experience layer. All right?
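The layer-by-layer breakdown can be sketched with the lecture's illustrative numbers (the headroom factors are assumptions for the example, not platform defaults):

```python
def downstream_targets(exp_rps, prc_headroom=2.0, sys_headroom=1.5):
    """Targets are not copied straight down: each lower layer gets extra
    capacity so orchestration fan-out and hops don't starve the layer
    above. The headroom factors here are illustrative assumptions."""
    prc_rps = exp_rps * prc_headroom    # e.g. 100 -> 200 req/s
    sys_rps = prc_rps * sys_headroom    # e.g. 200 -> 300 req/s
    return prc_rps, sys_rps

# 100 req/s at the Experience layer implies roughly 200 req/s at the
# Process layer and 300 req/s at the System layer with these factors.
targets = downstream_targets(100)
```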

So we break down the NFRs across all tiers to make sure we meet these response times, or throughput. Okay, and the next thing: this is not a one-time activity. The moment you set some NFR, don't assume it will immediately be met. After applying the NFR, you have to monitor and analyse the behaviour — the throughput or the response times — using Anypoint API Manager and Anypoint Monitoring; both come with Cloud Hub. So this is one way. After doing this, we still need something else, right? Like I said, this "validate client" — i.e., validate customer, item, and shipment locations — is another API call. Can we improve it somehow? Yes. How? Most likely this data is static or almost static: unless a new item is introduced in the company, a new customer signs up with the supplier company, or a new shipment location is introduced, the data stays static, right? So we can very well cache it.

Correct. You don't have to go all the way to the backend systems every time to perform the validation if you cache the data. Okay? You can cache it in your layer — either the System layer or the Process layer, whichever fits — so that you save a lot of HTTP I/O calls, network calls, and a lot of hops. Caching will improve the performance drastically. So you have to anticipate the need for a cache and introduce caching, so that now the processing is asynchronous at the Experience layer and, at the same time, your validation is very fast because it is cached. Now it seems feasible to meet the NFRs, right? So this is how we can improve performance by making executions faster. Correct? Because the data is cached, your CPU and resources will not be consumed as much every time.
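A minimal sketch of such a cache, assuming a simple time-to-live and hypothetical key names:

```python
import time

# Hypothetical TTL cache in front of the customer/item/location
# validation calls; key names and the TTL value are assumptions.
_cache = {}
TTL_SECONDS = 300   # static reference data tolerates a long TTL

def cached_validate(key, fetch, now=time.time):
    entry = _cache.get(key)
    if entry and now() - entry["at"] < TTL_SECONDS:
        return entry["value"]            # hit: no network hop at all
    value = fetch(key)                   # miss: one real System-API call
    _cache[key] = {"value": value, "at": now()}
    return value

calls = []
def fake_fetch(key):
    calls.append(key)                    # counts backend round trips
    return True

cached_validate("customer:ACME", fake_fetch)
cached_validate("customer:ACME", fake_fetch)
# fake_fetch ran once; the second lookup was served from the cache.
```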

You won't need to go all the way to the backend system to fetch the data every time. Okay? Now, the third one is the security-related NFR. What could the security-related NFR be? As previously stated, the organisation has mandated that all communication — not just from the API client to the Experience layer, but all the way through the Process- and System-layer APIs to the legacy system — be encrypted using the HTTPS protocol, for example with HTTPS mutual authentication. Is that possible? Yes. How can we meet this requirement? We can meet it by utilising the Cloud Hub DLB (dedicated load balancer). You may be wondering why the Cloud Hub DLB, when you could do it in the implementation by importing the certificates into a keystore, storing them in your API implementation, and making the calls that way. But that would leave it to the developers; even with good developers, we would have to do code reviews to make sure it is done correctly. Instead, we can implement it at the DLB level so that it moves to the administration side.

So there's no way we're leaving it up to individual development teams to make mistakes. Instead, the Cloud Hub dedicated load balancer provides a way to enforce the certs. We can import the certificates into the DLB and ensure that any communication with that DLB happens via the HTTPS protocol, using properly signed certificates. Okay? However, this only applies within an Anypoint VPC, not in the shared public cloud. It is possible where a VPC has been created — because, as we discussed in the deployment models, for the public cloud there are no VPCs, so there are no dedicated load balancers; there is only the shared load balancer for the public load, and it is not possible to import your company's certificates into it. As a result, you should have an Anypoint VPC created; then you can have a dedicated load balancer where you can enforce HTTPS using your company's certificates.
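As a rough illustration of what mutual-TLS enforcement means, here is a sketch using Python's standard ssl module; on the platform this configuration lives in the DLB, not in application code, and the actual certificate loading is omitted:

```python
import ssl

def make_server_context():
    """Server side of mutual TLS: require and verify a client cert.
    Loading the actual certificates (load_cert_chain /
    load_verify_locations) is omitted in this sketch."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject cert-less clients
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no legacy protocols
    return ctx
```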

Also, another related security NFR could be applying client authentication schemes: using client credentials, validating the client credentials, validating tokens, and so on. This can be achieved via API Manager, again through API policies. Okay? So we will gradually apply these various policies and modify the design of the business process a little, all to meet the NFRs for our Create Sales Order API. One by one, we will implement these features on the API to make it even more powerful. Okay? So let us move on to the next lecture, where we will go over some terminology — similar to how we covered API, API client, API consumer, API specification, API interface, and API implementation at the beginning of the course. Now we'll go over some terminology related to API Manager, such as "API policy," "API policy template," "API policy definition," and "API proxy." What are these? Once we understand the terminology, we will move on to the next portions of the course. All right? Happy learning.

3. Some more API Terminologies

Hi. In this lecture, let us discuss some important API terminologies, beyond what we have already learned in the early part of this course. So far, we have learned about API, API client, API consumer, API implementation, and some of those terms. What we are going to see now relates to API Manager, policy enforcement, and so on; these terms help in understanding API Manager with respect to NFRs. Okay, so let's go to the first one: the API policy. What is an API policy? An API policy typically defines a non-functional requirement that can be applied to an API by injecting it into the API invocation — that is, between an API client and an API endpoint. So an API policy comes into play between the API client and the actual API implementation.

API implementation refers to the code written to actually implement the API's functionality — a flow or process implemented in Anypoint Studio and hosted on a Mule runtime. An API client is someone who calls this API implementation. Where the API policy comes into play is when the API client hits the API implementation over the API endpoint: just before the API implementation code executes, the API policy comes into the picture. And the goal of the API policy should be achieved without changing the API implementation that is listening on the API endpoint.

So the developer, or whoever is implementing the API in Anypoint Studio, does not need to be aware of the policies that will be applied to the API. The implementation concentrates fully on the API functionality only. Without disturbing the API implementation, just to enforce some NFRs or policies, we should not have to go and touch the code. When the API client connects to the API, the policy is enforced without affecting the API implementation. So this is what API policies are: a policy basically represents one non-functional requirement that sits between the API client and the API implementation, gets executed just before the API implementation logic, and is enforced without disturbing the actual API implementation.

Good. I hope you understood that clearly. The next thing is the API policy template. So what is this, given that we just discussed the API policy? Internally, an API policy is divided into two parts: the API policy template and the API policy definition. Let's see the difference between them.

Okay, so an API policy template is a set of code and configuration parameters for the API policy functionality. Here I am talking about the API policy's functionality, not the API's functionality — please note the difference. What do I mean by API policy functionality? For example, take one of the NFRs we discussed in the previous lecture: rate limiting, or throughput, that we want to enforce on an API — something like 100 requests per second. It's not magic to enforce that rule in front of the API implementation, is it? There has to be some code. That code won't be written by the API developer; it comes standard with the MuleSoft Anypoint Platform. But even the MuleSoft team has to write a piece of logic to implement it, right? It may not be the client-side developer, but it is the platform developer.

Someone has to write that code, and that piece of code is the API policy template. The code is parameterized, leaving the parameters for the users to fill in: users can choose 100 requests per second, 200 requests per minute, or whatever they want. That information is filled in by the users applying the policy, but the code and the configuration parameters, which are provided by the MuleSoft platform team, form the policy template. If you are coming from a Java background — or any object-oriented programming background — this policy template is similar to a class. The class name for our throughput NFR would be something like RateLimitingPolicy, and the member variables in that class would be something like "number of requests" and "time unit" — meaning 100 requests per second, for example. So it is like a class with two members for rate limiting.

So this is what an API policy template looks like. Now let's move on to the second part: the API policy definition. The definition is nothing but a concrete parameterization of a specific API policy — supplying values for all the required parameters of the given API policy template. We said the policy template is like a class; the definition is like an instantiation of that class. Comparing it to object-oriented programming again, this is the object of that particular class. We now give the details or parameters as input, saying: okay, 100 is my number of requests and one second is my time unit — meaning "I want the behaviour of 100 requests per second for the rate-limiting policy."
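Taking the class/object analogy literally, a sketch (names are illustrative, not the real policy code):

```python
# The policy *template* is like a class: code plus declared parameters.
class RateLimitingPolicy:
    def __init__(self, number_of_requests, time_unit_seconds):
        self.number_of_requests = number_of_requests  # left for the user
        self.time_unit_seconds = time_unit_seconds

    def allows(self, requests_seen_this_window):
        return requests_seen_this_window < self.number_of_requests

# The policy *definition* is like an object: the parameters filled in,
# here "100 requests per second".
definition = RateLimitingPolicy(number_of_requests=100, time_unit_seconds=1)
```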

That concrete parameterization is called the definition. The template is the default the platform provides — the piece of code (you won't be able to see the code, but it is the policy being given to you) — and the definition is created when someone actually fills in the details and applies them; that is when the API policy definition kicks in. It is like an object of the particular class. All right, so we are clear on what the API policy is, the API policy template, and the API policy definition. Before we wrap up this lecture, let's go over two smaller terms: the API endpoint and the API proxy endpoint. What is an API endpoint? It is like any other endpoint definition in technology.

You've heard of endpoints before: it's a URL at which a specific API implementation listens for requests. We have the code — the actual functionality of an API — running on a Mule runtime in Cloud Hub via Runtime Manager. To invoke it, there should be a URL, an endpoint, right? That is what we put on the client side, in Postman or SoapUI, or even in Java code. That endpoint is the API endpoint — a straightforward definition. You might wonder why I bring this up, since everyone knows it. I brought it up to avoid confusion with the next term: the API proxy endpoint. Here, we have to see them differently. The API endpoint is the final endpoint on which our actual API implementation is running, whereas the API proxy endpoint is also an API endpoint, but it does not sit directly on top of the API implementation code we wrote.

It is an endpoint that belongs to a proxy layer, which sits in front of the actual API implementation. This API proxy will be discussed in detail in the following lecture. But just to clear up the terminology: an API endpoint is a direct endpoint on your API implementation code — when you hit it, it hits your code in the very first place — whereas an API proxy endpoint still ultimately reaches the API implementation, but it first hits the proxy layer sitting in front of the implementation, and from the proxy it goes on to hit the API implementation. Okay? These are the terms you should be familiar with before we move on to the next lecture, where we will discuss where and how we can enforce policies. Okay, happy learning. Bye.

4. Enforcement of API Policies

Now that you've learned the terminology for API policies and API Manager, let us see how — or rather, at what places — we can enforce API policies. On the Anypoint Platform, API policies are always enforced from within a Mule application executing in a Mule runtime. No matter what, policies can only be enforced in a Mule runtime environment, because, as I explained in the previous lecture during the terminology discussion, these API policies are themselves a piece of code written by the MuleSoft team, and that code has to execute in order to enforce the restrictions of the NFR, correct? So it has to run on a Mule runtime. That's one thing. But which Mule runtime? That is the choice we have to make from an operations, administration, or architecture standpoint. The platform supports enforcing API policies in two ways. One is that we can enforce the API policies right on the API implementation's own runtime.

Okay, so let me elaborate a bit more. We write an API implementation for a particular API, say, for example, for creating orders. We implement the logic and everything in our Anypoint Studio. And when we bundle it, we obviously have to deploy it onto Runtime Manager. As a result, a Mule runtime is reserved for this specific application. Now, we can use the same runtime where the API implementation is running to enforce the API policies as well. Okay, so this is called policy enforcement embedded in the API implementation. That is, API policy enforcement is integrated into the API implementation. So this is an approach to API policy enforcement where this functionality is incorporated into the API implementation, rather than having it in a separate proxy. Okay, so we'll get to the API proxy next.
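The embedded option, a policy running in the same runtime, in front of the implementation but without modifying its logic, is loosely analogous to wrapping a handler with a decorator. This is a hedged sketch in Python, not how MuleSoft actually implements it; the rate-limit policy and handler names are invented for illustration:

```python
# Sketch: embedded policy enforcement as a wrapper living in the SAME
# runtime as the implementation. Names are illustrative, not MuleSoft APIs.
from functools import wraps

def rate_limit(max_calls: int):
    """A toy rate-limiting 'policy': rejects calls beyond max_calls,
    without touching the wrapped handler's own logic."""
    def decorator(handler):
        calls = {"count": 0}  # state kept by the policy, not the handler
        @wraps(handler)
        def wrapped(request):
            if calls["count"] >= max_calls:
                return {"status": 429, "body": "rate limit exceeded"}
            calls["count"] += 1
            return handler(request)
        return wrapped
    return decorator

@rate_limit(max_calls=2)
def create_order(request: dict) -> dict:
    """The API implementation itself -- unchanged by the policy."""
    return {"status": 200, "body": "order created"}

assert create_order({})["status"] == 200  # first call passes
assert create_order({})["status"] == 200  # second call passes
assert create_order({})["status"] == 429  # third call is rejected
```

Note how `create_order` never mentions rate limiting; the policy sits in front of it inside the same process, which mirrors the "same runtime, no code changes" point above.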

So, in this case, we're saying that we have only one runtime and can enforce the API policy on the same runtime where our application is running. Okay, so will it disturb your API implementation code? No. Again, as I explained, or as we saw in the previous lecture, these API policies never have to actually touch or impact the code, so you don't alter any of your API implementation logic. The policy is applied in front of the API implementation but uses the same runtime. There is a second way as well, which the platform supports: using an API proxy. So what is an API proxy? An API proxy is like a dedicated node that enforces these API policies, acting as a proxy between the API client and the API implementation. So this API proxy sits in between the API client and the API implementation, in a separate runtime. Okay, so when we saw the definition of an API policy, we said the API policy is the one that is enforced between the API client and the API implementation.

The policy comes into play between the API client and the API implementation. The API proxy, on the other hand, is a physical, actual runtime that comes in and sits between the API client and the API implementation, acting as an HTTP proxy layer where the API policy is enforced on the proxy runtime. Okay, so what is the difference? Why would we go with one or the other? That is something I will leave up to you in an exercise coming after this lecture, which I want you to first brainstorm on, and I want to get your opinion: why do you think you would go for API proxy-based enforcement of policies, or for enforcement embedded in the API implementation? Once you brainstorm and put it in, I will give my solution as well, with why I think one should go for this or that approach, the pros and cons, et cetera. Okay, so one thing that you commonly need to understand, whether it is an API proxy or embedded enforcement in the API implementation, is what actually happens when we say the API policy is enforced, right?

It does not imply that the actual policy code or logic is loaded or colocated into the proxy or the implementation. It is not bundled up or made heavy as part of your implementation code or the proxy runtime. Okay? So how it works is: at runtime, when the actual client hits the API with a request, whether it hits the API proxy or the API implementation depends on the endpoint.

Okay? So, depending on which approach they took, the customer, the API provider, or whoever is implementing or providing all of these things will have to either share the API proxy endpoint with the client or share the actual API endpoint with the client. If they've taken the API proxy approach for policy enforcement, then the API proxy endpoint would be shared with the client. If policy enforcement is embedded in the API implementation, the API endpoint is shared with the client. And we already discussed the difference between a proxy endpoint and an API endpoint in the previous lecture: the endpoint where the API proxy runtime is listening is the API proxy endpoint, and where the actual implementation is residing is the API endpoint.
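That endpoint-sharing decision can be captured as a one-line rule. A minimal sketch, with hypothetical URLs standing in for whatever the provider actually exposes:

```python
def endpoint_to_share(enforcement: str) -> str:
    """Which URL the API provider gives the client depends on the
    chosen enforcement approach. URLs here are purely illustrative."""
    if enforcement == "proxy":
        # Proxy-based enforcement: client gets the API proxy endpoint.
        return "https://proxy.example.com/orders"
    # Embedded enforcement: client gets the API endpoint directly.
    return "https://impl.example.com/orders"

assert endpoint_to_share("proxy") == "https://proxy.example.com/orders"
assert endpoint_to_share("embedded") == "https://impl.example.com/orders"
```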

So the API client has to hit one of these endpoints, based on which one was shared with it. The moment the endpoint is hit, the first thing the Mule runtime checks, before executing the actual implementation, is: is there any policy metadata added for this particular runtime? Okay, so again, the runtime doesn't hold the actual policy code or template. The policy template or definition, et cetera, are not directly embedded in the runtime. It's just metadata saying, "Yes, this particular implementation is being enforced with such-and-such NFRs or policies," identified by policy names or policy IDs, whatever is unique. Okay, say this one has rate limiting, IP whitelisting, and HTTPS enforcement: those three are the policy IDs implemented, or enforced, on this API implementation.

That's it. So, using those IDs at runtime, the policies are downloaded into the Mule application runtime, and along with each ID, whatever parameters or values are linked to it are substituted into that particular template; a definition is then created in the runtime, and the runtime enforces it. Okay, so this doesn't become a bulky approach. This is how it works, always: the API policies are downloaded from Anypoint API Manager and enforced in the Mule application at runtime. Okay, so this is how enforcement can be applied in two places: one is an API proxy, and one is embedded in the API implementation. All right? I hope you understand this. I even created a pictorial for you to help you understand it better. Okay? And now I have given an exercise after this lecture. Please go through it; I would like you to put in what you think are the pros and cons of going with API proxy-based policy enforcement versus policy enforcement embedded in the API implementation. That's your exercise. Happy learning.
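The metadata-plus-template mechanism described above can be sketched as follows. This is a hypothetical model of the flow, with made-up template strings and structures; the real download happens from Anypoint API Manager over the platform's own protocol:

```python
# Sketch of policy resolution: the runtime holds only lightweight metadata
# (policy IDs plus parameter values); templates are fetched by ID and the
# parameters substituted to produce the enforceable definition.
# All names and structures here are illustrative, not the real protocol.

POLICY_TEMPLATES = {  # stands in for templates hosted by API Manager
    "rate-limiting": "allow at most {max_requests} requests per {window}",
    "ip-whitelist": "allow only IPs in {allowed_ips}",
}

def download_template(policy_id: str) -> str:
    """Simulates fetching a policy template by its unique ID."""
    return POLICY_TEMPLATES[policy_id]

def resolve_policies(metadata: list) -> list:
    """Turn metadata entries (ID + parameter values) into concrete
    policy definitions the runtime can enforce."""
    definitions = []
    for entry in metadata:
        template = download_template(entry["id"])
        definitions.append(template.format(**entry["params"]))
    return definitions

# What the runtime actually stores: just IDs and parameter values.
metadata = [
    {"id": "rate-limiting", "params": {"max_requests": 100, "window": "1m"}},
    {"id": "ip-whitelist", "params": {"allowed_ips": "10.0.0.0/8"}},
]

for definition in resolve_policies(metadata):
    print(definition)
```

This is why the approach stays lightweight: the application bundle carries only the metadata, and the full definitions come into existence inside the runtime at startup.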
