
Pass F5 101 Exam in First Attempt Guaranteed!

Get 100% Latest Exam Questions, Accurate & Verified Answers to Pass the Actual Exam!
30 Days Free Updates, Instant Download!


Download Free 101 Exam Questions

F5 101 Practice Test Questions and Answers, F5 101 Exam Dumps - PrepAway

All F5 101 certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the 101 Application Delivery Fundamentals practice test questions and answers & exam dumps; the study guide and training courses help you study and pass hassle-free!

Application Delivery Controller (ADC)

1. Introduction to Proxy Servers Part 1

Proxy servers, or proxies. This solution, which sits between client and application servers, is used most of the time for web-related activities. We have several types of proxies, which we will discuss on this whiteboard. These are the forwarding, reverse, and half proxies. Let's talk about the forwarding proxy first. This is also known as an HTTP proxy, and most of the time we use this for outbound web transactions, right? Some of its functions include authentication and authorization as well. Now, what I'm going to do is add PCA, PCB, and a server. This is our proxy server. They are on the same network; they're connected to a router, and the router connects to the internet. I'm going to add two servers here, or two websites.

All right. Now we have PCA and PCB. Let's say PCA sends traffic to the internet and wants to communicate with this website. The standard route is from PCA to the default gateway, the router in this case, and the router routes it to the internet via the ISP router first. Now, once the website receives the request, it will respond back to PCA, the original source. That is the traditional way. Now assume you enable a forwarding proxy, or HTTP proxy, for PCB. By the way, before you can use it, you have to configure your web browser first. So this is your web browser; it can be Mozilla Firefox or Google Chrome, for example. You need to specify the IP address of the HTTP proxy. So let's say this is 192.168.100.100. Now what happens here is that when PCB wants to send traffic to this website, it doesn't send it directly; it doesn't forward the traffic to the router so the router can route it to the internet.

No, it's different, because when you use a web application through the HTTP-proxy-configured web browser, the transaction is first received by your HTTP proxy, which checks many things, including the destination and its category. It can do filtering as well as caching, and it is mainly used for security purposes. Assuming the website PCB wishes to visit is not pornography, gambling, or malware-infected, the web proxy, or HTTP proxy, will allow the traffic, which it will now route to the router, which will then send the traffic to the public network, the internet. As soon as the website receives the request, it will send the return traffic not to PCB but to the proxy server. Why? Because it's the proxy server that's communicating with the website.
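The allow/block decision the HTTP proxy makes can be sketched in a few lines. The category table, hostnames, and verdict strings below are invented for illustration; real forward proxies consult commercial URL-categorization databases.

```python
# Sketch of a forward (HTTP) proxy's allow/block decision.
# CATEGORY_DB is a hypothetical lookup table, not a real service.
BLOCKED_CATEGORIES = {"adult", "gambling", "malware"}

CATEGORY_DB = {
    "casino.example": "gambling",
    "news.example": "news",
}

def proxy_decision(host):
    """Return the proxy's verdict for an outbound request to `host`."""
    category = CATEGORY_DB.get(host, "uncategorized")
    return "BLOCK" if category in BLOCKED_CATEGORIES else "ALLOW"
```

Blocked categories pass through this check before the proxy ever opens a connection toward the router and the internet.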

So from the website's perspective, it doesn't see PCB, okay? What it sees is the proxy directly contacting it. So that is the forwarding proxy. And as I mentioned, forwarding proxies, or HTTP proxies, are mainly used for security in outbound web transactions. Now we have the reverse proxy. Now, "reverse proxy" as a terminology is not very popular; it's not well known, but the most commonly used reverse proxy device, or the best example, would be the load balancer. Okay? So I'm going to add a load balancer here, and I'm going to connect the load balancer to three web servers. Now, the reverse proxy sits between the client and the servers. Well, really, it's sitting in front of the servers. Now, how it works is very simple. The application request is processed by the reverse proxy, and this request must then be delivered to the web servers.
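The reverse-proxy idea, one load balancer choosing among three back-end web servers, can be sketched with a simple round-robin picker. The server addresses are made up; the point is that the client only ever talks to the proxy, which decides where each request lands.

```python
import itertools

# Sketch of a reverse proxy (load balancer) in front of three web
# servers, selecting a back end round-robin. IPs are illustrative.
class ReverseProxy:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick_server(self):
        # The proxy, not the client, decides which server gets the request.
        return next(self._cycle)

lb = ReverseProxy(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Round robin is only one of many selection methods; real load balancers also weigh load, connection counts, and server health.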

The reverse proxy enables application request handling and different features such as caching and load balancing. Filtering is sometimes available as well. The reason it's called a "reverse proxy," if you compare it to the forwarding proxy, is that the forwarding proxy inspects and examines outbound web transactions, while the reverse proxy is more for the inbound traffic coming to our web servers. Now, lastly, we have the half proxy. The half proxy also sits in front of the web servers. So I'm going to add our proxy device in front of three servers, and I'm also going to add a client, all right? Now, the half proxy performs what we know as "delayed binding" in order to provide additional functionality. So this is how it works. The client sends an HTTP request, okay? And before the proxy sends it to the destination, it will first examine the request to determine where to send it.

Now, once the proxy has determined where to send or route the traffic, the connection between the client and the server will be stitched together. So I'm going to add here that the connection from client to server is stitched together.
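Delayed binding boils down to: read the request first, pick a destination, then bind the connection. A minimal sketch, assuming a hypothetical routing table mapping URL path prefixes to servers:

```python
# "Delayed binding" sketch: the half proxy reads the HTTP request line,
# chooses a destination, and only then stitches client and server
# together. The routing table (path prefix -> server) is hypothetical.
ROUTES = {
    "/images": "10.0.0.1",   # static-content server
    "/api":    "10.0.0.2",   # application server
    "/":       "10.0.0.3",   # default pool member
}

def delayed_binding(request_line):
    method, path, version = request_line.split()
    for prefix, server in ROUTES.items():   # first match wins
        if path.startswith(prefix):
            return server                    # connection is bound here
```

Note that the decision happens after the request arrives, not at connection time, which is exactly what "delayed" refers to.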

Okay? Now, as you can see, most of the examination and inspection of the traffic is inbound in a half proxy as well. The term "half proxy" refers to the fact that the functions of examining, caching, inspection, and everything else, including load balancing, are applied only one way: on the inbound side, on the request. And how about the response? The majority of half proxies have no effect on the response. That is why it's called a half proxy. Most half proxies fall into the category of reverse proxy as well, so I can also call this a load balancer.

2. Introduction to Proxy Servers Part 2

Full proxy. This is a type of proxy that maintains two separate connections: one between itself and the client, and the other between itself and the destination servers. Now, the full proxy also completely understands many different protocols and applications. It can examine both application requests and responses. So if you compare this to a half proxy, which can only examine the HTTP request, the inbound traffic, a full proxy can examine both the inbound request and the outbound response. It does more than just analyze traffic and balance load. It can also perform a variety of advanced functions, such as modifying traffic behavior. It can also decrypt encrypted data, and it can do advanced compression as well. Just to give you a summary.
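The two-connection, two-direction idea can be sketched as a function that rewrites the request on the client side, hands it to an independent server-side session, and then rewrites the response too. The header names and the `backend` callable are illustrative only.

```python
# Full-proxy sketch: one connection to the client, a second independent
# connection to the server, and modifications in both directions.
def full_proxy(client_ip, request_headers, backend):
    req = dict(request_headers)
    req["X-Forwarded-For"] = client_ip   # request-side modification
    resp = dict(backend(req))            # second, server-side session
    resp["Server"] = "proxy"             # response-side modification
    return resp
```

A half proxy could perform only the first rewrite; touching the response as well is what distinguishes the full proxy.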

Proxy servers: sometimes referred to as a proxy, other times as an HTTP or web proxy. Proxy servers are often configured and used for web purposes, so we can do content filtering, caching, blocking, and monitoring as well. A proxy sits between the client and the server, and it can handle inbound or outbound transactions. Clients interact with the proxy, which interacts with the server. So, as I mentioned, the server doesn't know that there is a client behind the proxy server, okay? And that's the real goal of the proxy solution. We also discussed four types of proxies. We have the forwarding proxy, also known as the HTTP proxy. This is usually found in our enterprise or campus LAN, where clients access it through their web browser configuration. When a client transacts with a website, the proxy will check and examine the web request, so we can do filtering and caching. The HTTP proxy has another feature as well.

It can check the category of the website you are visiting, so it may be in the adult or pornography category; it could also be gambling, another prohibited category, or a malware website. So this can easily be blocked by your security administrator. We also have the reverse and the half-proxy solutions, and the terms can be interchangeable because they are in the same category. Now, these proxy solutions allow us to examine more of the inbound transaction, and like the forwarding proxy, we can also do caching and filtering. But the half proxy and the reverse proxy also do load balancing. Lastly, we have the full proxy. Full proxies have all of the features of the other proxies discussed, but they have additional functionality such as encryption, and they can do deep security inspection as well. Later, we'll go over the full proxy in greater detail.

3. ADC Overview Part 1

Now, we'll give you the application delivery controller, or ADC. It is defined as a computer network device that resides in a data center. Well, yes, it's a network device because it has most of the layer 2 and layer 3 features, such as routing, VLAN tagging, and even link aggregation. It is usually found in the data centre because it is in charge of protecting and controlling our servers. And we're not just talking about ten or 20 or even 50 servers; we're talking about hundreds of servers.

Why do we need an ADC? Well, one of the goals is to reduce the load on the web servers or other servers, as it helps applications direct user traffic to and from the servers. Now, why would you want to enable some feature on each of those 100 servers, or do even a little configuration on each one? Even though the configuration is minor, it has to be replicated across hundreds of servers. Wouldn't it be better to do it all in our application delivery controller, which will intelligently route traffic to those hundreds of servers?

Now, an ADC also includes many OSI layer services, from layer 3 to layer 7, which happen to include load balancing and often more advanced features such as content redirection and server health monitoring. Other features include IP traffic optimization. This allows us to optimise different application traffic and do acceleration as well. Especially for the web, we have many parameters and configuration options for optimizing and accelerating web traffic, along with traffic chaining and steering, and SSL offload. This is one of the goals and one of the reasons for implementing an ADC. Imagine you have hundreds of servers in your data centre and you require clients to access your web servers using SSL, or HTTPS. Instead of configuring and enabling HTTPS on every server, which can be very difficult, we can just use the ADC as the SSL offloader.

Next, the web application firewall. Assume the ADC is a device that comprehends layers 2 through 7. It's extremely fluent in applications, particularly the web, and it also enables security functionality. So if you combine two technologies, one specialising in web technologies plus one in security, you can create a web application firewall appliance. It also allows us to create multiple policies that you cannot see in your ordinary firewall, or even in your next-generation firewall, because these are specific to the web application firewall. An ADC can also enable carrier-grade NAT, or CGN, and the list goes on.

So that is the ADC. F5 BIG-IP has been the number one and leading ADC since its inception due to its high-performance, extremely fast, user-friendly hardware and virtual appliances. Okay, as I mentioned, it can be hardware. You have two options: the fixed appliance, the one-RU or two-RU switch, and the chassis-and-blade solution, also known as the modular hardware. There's also a virtual appliance, which we are using in our lab. BIG-IP is also rich in programmability and automation features, and it has modularized software. Now, as I mentioned, BIG-IP is the appliance; it can be hardware or virtual, and there are also modules that allow us to enable certain features. For example, load balancing: load balancing will not work if you don't enable LTM or GTM. Okay? We're going to talk about this more later, but LTM is the most common module, and it allows us to do not just load balancing but other special and advanced features as well, such as health monitoring, profiles, persistence, web acceleration, and optimization. The list goes on. Now GTM, or should I say BIG-IP DNS, formerly known as GTM, allows us to enable global load balancing with the use of a special service: DNS.

So it's not like LTM, which does load balancing across servers listening on many ports, whether it's port 21 or port 22 or 80. GTM is DNS only, and it's used more for load balancing between data centers. We also have ASM, which, as I mentioned, is the web application firewall solution of F5 BIG-IP, and APM, the Access Policy Manager, which is the SSL VPN solution and the identity management device. There is also the Advanced Firewall Manager, or AFM. Now, AFM is a firewall that is designed for data centers, and it has layer 2, layer 3, and layer 4 DDoS protection. So this is not like your next-generation firewall that does more outbound protection. No, AFM is an appliance that is designed to protect most, if not all, inbound traffic. All right. The F5 BIG-IP ADC is a default-deny device. What do I mean by that? Just like firewalls and routers, if you don't configure it and enable the ports, you will not be able to send traffic to the destination.

A layer 2 or Ethernet switch, by contrast, is a plug-and-play device. All you need to do is plug in the host servers and PCs, and assuming they are all in one subnet or network, they will start communicating with each other. BIG-IP is different. You have to manage it, enable the ports and the IP addresses, and configure some network objects to allow traffic to be forwarded to the destination. Okay, now here's how the traffic works in our F5 BIG-IP. Once the client sends traffic to the listener, BIG-IP will create a session as it completes the TCP three-way handshake. Now it will process the traffic. BIG-IP will act as a proxy device, and it may do load balancing and forward the data to the right server. As it forwards the data, it will create a session based on another, second TCP three-way handshake.

Okay? Now, why is F5 acting as a proxy device? This is our full proxy architecture, and the main reason we do this is to separate the server-side TCP session from the client side, because BIG-IP has so many ways to modify the traffic. Okay, so we can modify the data. Even the HTTP content you see when you browse the web through F5 BIG-IP can be modified. It can also enable many types of modifications, such as traffic redirection or rewriting configuration parameters. We also have encryption and decryption. This is the SSL offload feature that allows clients to send traffic to the BIG-IP via HTTPS. Client to BIG-IP: all good, all secure; that's what the client knows. But BIG-IP can decrypt this traffic and forward it to all of the servers in the data center unencrypted. Why would we want to do that? Wouldn't it be better if the BIG-IP forwarded the traffic encrypted to all of the servers?

Well, it's not necessary. The reason we want unencrypted traffic to be forwarded to the servers is that, first, we want to lessen the load. Second, we don't want to make it more complex, because, imagine, you would need to manage all of your HTTPS servers, you would need to manage your certificates, you would need to reconfigure HTTPS web services, and much more. Wouldn't it be good if the F5 BIG-IP took care of all of the SSL traffic and the SSL management? Okay, same with compression. Why would I want to enable compression and manage it continuously across all my hundreds of servers? It is always better to do the compression management centrally via our F5 BIG-IP appliance.
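The SSL-offload hop can be summarized as a port mapping: clients speak HTTPS to the ADC's virtual server, and pool members receive plain HTTP after decryption. A minimal sketch, using the conventional default ports:

```python
# SSL-offload sketch: TLS terminates on the ADC, so the server-side
# hop is plain HTTP and the servers never touch certificates.
def offload_hop(client_port):
    """Map the client-facing port to the server-side port used after
    the ADC decrypts the traffic."""
    if client_port != 443:
        raise ValueError("clients are expected to connect over HTTPS")
    return 80   # decrypted traffic is forwarded on plain HTTP
```

In a real deployment this mapping is expressed through a virtual server's client-side SSL profile and an unencrypted pool, not hand-written code.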

Now, this is the architecture of our F5 BIG-IP. As you can see, we have modules: we have LTM, we have DNS, we have ASM. These are all running under one operating system. As a matter of fact, we have two operating systems: the host Linux operating system and TMOS, also known as the Traffic Management Operating System. And whatever software module you use, it is running under TMOS. Okay? Now, all of the modules also share the feature called iRules. This enables us to customise whatever traffic automation and reconfiguration you want to do in your F5 BIG-IP using scripting, and the full proxy architecture is shared regardless of the module you use.

And the reason F5 BIG-IP is so good, so reliable, and so fast is the special hardware we have: the SSL and compression chips that provide high performance in our hardware appliances. Now, for administration, you can administer and manage your BIG-IP via the GUI using HTTPS, and via the CLI using SSH. You have two shells available through SSH, or the CLI. One is the advanced shell, also known as the Linux bash shell. This allows us to verify the Linux configuration, and you can also view your F5 BIG-IP configuration files. But you can only configure your F5 BIG-IP via the CLI through the use of tmsh, the Traffic Management Shell. Okay, and as I mentioned, you can configure your F5 BIG-IP via the GUI over HTTPS; this is actually the most common way of configuring your appliance. Also via the GUI, you can configure your objects using iApps.

Well, this is optional. It allows you to configure your F5 application objects in a different way, in a faster and more automated way. All right, this is how we administer and manage your F5. Now, if you have an out-of-the-box hardware F5, you can directly manage it through its out-of-band management interface. And you don't need to reconfigure its IP address, because by default it's using 192.168.1.245. Now, why is it .245? Why is it not .1 or .254? Because 245 is the decimal value that, if you convert it to hexadecimal, gives you F5. All right? That's for the hardware. If you deploy the virtual edition, there is no default IP address assignment. Rather, it will obtain an IP address automatically using its DHCP client. So if you connect the virtual machine to a network that already has a DHCP server, it will be assigned an IP address automatically.
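You can verify the .245 trivia yourself: 245 in decimal is F5 in hexadecimal.

```python
# Why the default management address ends in .245: the last octet,
# converted to hexadecimal, spells out the vendor name.
mgmt_ip = "192.168.1.245"
last_octet = int(mgmt_ip.rsplit(".", 1)[1])
as_hex = format(last_octet, "X")
print(as_hex)  # F5
```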

SSH or HTTPS are the only ways to connect to your F5 BIG-IP virtual or hardware appliance; you cannot use Telnet or plain HTTP. The default SSH credentials are root, with the password "default". The default HTTPS credentials are "admin" with a password of "admin". Can we access our F5 BIG-IP device through its own self IP address on our VLANs rather than through out-of-band management? Well, yes, but you have to make sure that the ports, such as port 22 for SSH or port 443 for HTTPS, are enabled under the port lockdown configuration. This is how you can access your BIG-IP device other than through the out-of-band management interface. Okay, by default, the port lockdown is set to none, so no traffic will be accepted on the IP address assigned to the VLAN interface. But you can reconfigure it: you can enable port 22 or port 443.

4. ADC Overview Part 2

Local Traffic Manager, or LTM. This is what enables local load balancing to the servers. It also has advanced features such as profiles, persistence, health monitoring, iRules, and so forth. We also have Application Visibility and Reporting (AVR), which analyses the performance of services, traffic, and applications running on our F5 BIG-IP device. We also have layer 7 traffic management. Say you have hundreds of servers in your data center; this allows the BIG-IP to choose the most ideal servers or applications, and this can be based on load, performance, or persistence. There is also core protocol optimization.

This allows us to tune a variety of protocols per use case for optimization purposes. We also have SSL proxying and services. And since BIG-IP runs as a full proxy architecture, it is very effective at doing SSL offloading. This allows F5 BIG-IP to offload the SSL management and operations from all of the servers, and we'll talk more about SSL offloading in a bit. We also have application acceleration.

This allows LTM to enable acceleration based on caching, compression, and even bandwidth control. Now, LTM is not only a load balancing module; it also has built-in security protection. This is referred to as SYN flood protection, and it enables BIG-IP to protect against layer 3 to layer 4 DDoS attacks. Next is BIG-IP DNS, formerly known as GTM. This converts names into IP addresses, allowing for intelligent wide-area application traffic management of applications running across multiple data centers. It is also considered global load balancing because we're not forwarding traffic to individual servers; that's what LTM does. The best example of BIG-IP DNS, or GTM, is this: let's say our organisation, or your organization, has four data centers.

One is in America, in California, let's say. The other one is in Europe, in London; the third one is in the Middle East, let's say in Dubai; and the fourth one is in Asia, let's say in Singapore. Now, what happens is that if I want to access your portal, first my machine will try to resolve the fully qualified domain name to an IP address using a DNS resolver. The resolution process will be able to identify my location, since I'm in the Philippines, in Manila, and it will say, "Hey, you're in Southeast Asia, and the nearest data centre we have is in Singapore." So the name server will provide me with an IP address for the Singapore data centre, because that is the nearest. The name server, the one that does this intelligence, is the BIG-IP DNS, formerly known as GTM, and it uses geolocation. BIG-IP DNS, or GTM, also enables security protections such as DNSSEC and DNS DDoS protection to prevent DDoS attacks. Next, the Access Policy Manager, or APM. Now, APM is an SSL VPN remote access solution, and it supports endpoint security as well. It allows us to manage and accelerate remote application access on a high-performance and highly secure platform.
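The geolocation-based resolution described above can be sketched as a lookup: the name server answers the DNS query with the address of the data center nearest the client. The region mapping, country names, and IP addresses below are invented for illustration.

```python
# Sketch of BIG-IP DNS (GTM) geolocation-based name resolution.
DATACENTERS = {
    "America":     "198.51.100.10",  # California
    "Europe":      "198.51.100.20",  # London
    "Middle East": "198.51.100.30",  # Dubai
    "Asia":        "198.51.100.40",  # Singapore
}

NEAREST_REGION = {"Philippines": "Asia", "France": "Europe", "UAE": "Middle East"}

def resolve(fqdn, client_country):
    """Answer the DNS query with the nearest data center's address."""
    region = NEAREST_REGION.get(client_country, "America")
    return DATACENTERS[region]
```

Real GSLB decisions also factor in data-center health and load, not geography alone.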

It is also an identity and access management system, so you can create a policy. Let's say one group of users can only access this network or application, while another group of users can access this sensitive information because they have the highest privilege. It scales up to 2 million concurrent access sessions. It is also readily integrated with AAA servers such as Active Directory, LDAP, and RSA SecurID. It also enables BYOD, or bring your own device, and it supports single sign-on enhancements. This is useful if you are integrating your applications with cloud service providers such as AWS, Azure, or Google Cloud. It also supports SAML in conjunction with SSO, or single sign-on, and identity federation. Next, the Application Security Manager, or ASM. The ASM module is the WAF, or web application firewall, and it is PCI-compliant. And maybe you're thinking, "What is a WAF? What is a web application firewall?" Well, a WAF is not your typical firewall. It's not your network firewall or your next-generation firewall where you create permit and deny rules.

This firewall inspects and enforces up to the application layer, layer 7, and it is dedicated to protecting inbound traffic, while your next-generation firewall appliances are designed to protect based on outbound user traffic. A next-generation firewall keeps track of your site visits: what sites are you accessing, such as illegal sites for pornography, gambling, and so forth? It also monitors what files you download, because there's a possibility that some of those files are malware. ASM, by contrast, builds policies based on what systems are running in your data center. So you can select the operating system, such as Windows or Linux; the language, such as PHP or Java; and the framework, such as Django, XML, or Ruby on Rails. And based on this selected system, it will enable signatures. The ultimate goal is to protect our data centre against various types of web attacks, and some of these, or the majority of these, come from the OWASP Top Ten.
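The stack-driven policy idea, declare your OS, language, and framework, and the matching attack signatures get enabled, can be sketched as a set union. The signature names here are invented; real ASM ships curated signature sets.

```python
# Sketch of a WAF building a policy from the declared server stack:
# each stack component enables its matching attack signatures.
SIGNATURE_SETS = {
    "Linux": {"linux-command-injection"},
    "PHP":   {"php-object-injection"},
    "Java":  {"java-deserialization"},
}

def build_policy(stack):
    """Union the signature sets matching the declared stack components."""
    signatures = set()
    for component in stack:
        signatures |= SIGNATURE_SETS.get(component, set())
    return signatures
```

Enabling only the relevant signatures keeps false positives down: a pure-PHP site has no reason to inspect for Java deserialization payloads.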

This defines the most serious web application security risks. These include cross-site scripting, SQL injection, broken session management, sensitive data exposure, and so on. And ASM is designed to protect your web application from these attacks. It also protects from advanced application attacks such as web scraping, brute force, and layer 7 DDoS. ASM can be deployed in a variety of ways, including rapid deployment, manual policy creation, and automated policy creation. Next, the Advanced Firewall Manager, or AFM. AFM provides the traditional firewall facilities, but it is designed to inspect inbound traffic. You can create policies and rules. What's special about AFM is that it provides DoS and DDoS protection components. It has a database that helps identify different types of DoS and DDoS attacks from layer 2 to layer 4 of the OSI model. Also, it has additional security protection functionality such as port misuse protection and an intrusion prevention system, or IPS. It also supports logging and reporting independently.

Resource Provisioning is where you can enable the software modules that run on our BIG-IP system. You can also verify which modules are licensed and how much system memory, disk space, or CPU time will be allocated. We have four available allocation settings for modules. There is None, or Disabled: when you select this option, that module will be deactivated. There is Dedicated. Now, please be very careful with this option, because if you select it, all other modules will be automatically deactivated. The purpose of selecting Dedicated is when you want your device to use only one module, and the system allocates all resources, such as CPU, memory, and so on, to that one module.
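The side effect of the Dedicated setting can be sketched as a small state transition over the module table. Module names and level strings are illustrative only.

```python
# Sketch of the provisioning-level side effects: setting one module to
# "dedicated" deactivates every other module, as warned above.
def provision(modules, target, level):
    """Return a new module->level table after applying `level` to `target`."""
    levels = dict(modules)
    if level == "dedicated":
        for name in levels:
            levels[name] = "none"   # all other modules are deactivated
    levels[target] = level
    return levels
```

This is why the warning matters: one click on Dedicated silently turns off everything else on the box.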

There is also Nominal. Nominal is the most common option if you want to enable a software module. How it works is that the module starts with the fewest resources it requires, and then receives additional resources from the portion of resources remaining after all modules have been enabled. So the BIG-IP will dynamically allocate more resources to that module than the minimum. Lastly, there is Minimum. Like Nominal, the module gets the least amount of resources required, but no additional resources will be allocated to it. This is very rarely used, and, as I personally suggest, you may only use it during testing or experimental use. Now, other features in our F5. First, we have iRules. iRules is a scripting language that allows us to extend BIG-IP's capabilities. We use iRules when we need to automate something, change traffic patterns, or manipulate data when the configuration is not available through the GUI or CLI. Okay. We also have iControl.

iControl allows F5 BIG-IP devices to integrate with third-party systems; the other system could be another F5 BIG-IP device or something else. iApps is another way of creating our application objects. So instead of creating pools, virtual servers, iRules, profiles, and persistence one by one, you can automatically create all of these on one page using a questionnaire. iApps also uses a template base: there are already system-defined templates available in your BIG-IP, or you can create your own or download templates from F5 DevCentral. Lastly, we have iHealth. This is the system diagnostic tool; I will discuss it in the next section. We also have BIG-IQ. BIG-IQ is a solution that allows us to manage not just one BIG-IP device but multiple BIG-IP devices, either physical or virtual. The main goal is to centralise the control of F5 BIG-IP devices, and it can have up to 200 BIG-IP devices in its inventory. BIG-IQ can be used to upgrade devices, monitor SSL certificates, and integrate with iHealth, in addition to configuring BIG-IP devices.

F5 101 practice test questions and answers, the training course, and the study guide are uploaded in ETE file format by real users. The 101 Application Delivery Fundamentals certification exam dumps and practice test questions and answers are there to help students study and pass.

Run ETE Files with Vumingo Exam Testing Engine

Comments (the most recent comments are at the top)

NKScatt
Myanmar
Jun 21, 2024
PDF Dump File 460 Questions? Dump File 460 Questions with ETE Player?
zara
Saudi Arabia
Jun 06, 2024
these practice tests contain actual questions. i met a couple of new ones, though was able to crack them. be knowledgeable of the exam objectives and you will succeed.
Tyler
Denmark
May 24, 2024
like me, if you want to get good marks without huge efforts, then these mocks are perfect for you! i passed hassle-free, good luck to you all
Daniel
Netherlands
May 14, 2024
Really thankful to prepaway for availing us such useful free files, containing the latest questions along with correct answers. I wonder, how i could have passed w/t them? Suggest all candidates to use them in their prep process. they're definitely helpful!
harper
Algeria
Apr 28, 2024
believe it or not, these 101 questions and answers were amazingly helpful to me for passing the exam without putting any extra efforts.
zayn
South Africa
Apr 17, 2024
in the first attempt of the application deliver fundamentals exam, i failed. but, when i purchased the f5 101 exam dumps and studied, i passed the last week’s exam smoothly and effortlessly. so, i suggest all candidates to use it!