101: Application Delivery Fundamentals Certification Video Training Course

The complete solution to prepare for your exam: the 101: Application Delivery Fundamentals certification video training course. It contains a complete set of videos that provide thorough knowledge of the key concepts, plus top-notch prep including F5 101 exam dumps, a study guide, and practice test questions and answers.

130 Students Enrolled
132 Lectures
17:29:00 Hours

101: Application Delivery Fundamentals Certification Video Training Course Exam Curriculum

1. Introduction: 4 Lectures, 00:16:00
2. F5 BIG-IP Lab: 4 Lectures, 00:07:00
3. Networking Basics: 24 Lectures, 03:54:00
4. Application Delivery Controller (ADC): 35 Lectures, 05:11:00
5. Part 3: Maintaining Application Delivery Controller (ADC): 21 Lectures, 02:58:00
6. Part 4: Application and Security Technologies: 19 Lectures, 02:12:00
7. Part 5: Troubleshooting Network and Applications: 24 Lectures, 02:49:00
8. Course Completion: 1 Lecture, 00:02:00


About 101: Application Delivery Fundamentals Certification Video Training Course

The 101: Application Delivery Fundamentals certification video training course by Prepaway, along with practice test questions and answers, study guide, and exam dumps, provides the ultimate training package to help you pass.

Application Delivery Controller (ADC)

I'm on my F5 login page, and I'm about to enter my login information; I'll type my username, "admin," and my password, "admin." Now, what you see here is the welcome page. Check out the setup, support, plugins, and downloads; these are all available under the About tab. We also have the configuration options and the modules available on the left pane. But before we get into that, let's take a look at the system information in the upper left and upper right corners.

As you can see, we're using a hostname of bigip1.f5trn.com with an IP address of 192.168.1.31, which also serves as our management IP address. We also have the date and the time; by default, we're using PDT as our time zone. I am logged in as the user admin, and the role is Administrator. The Common partition, which is also the default partition, is shown in the upper right corner. We also have a Logout button here, so any time you want to log out of this GUI, all you need to do is click this button. We also have the device status. This is just a standalone device.

We haven't actually paired it or enabled high availability, so its current status is Online (Active). Next, we have the Statistics option under the Main tab. This allows us to view different statistics from the DNS and LTM, or Local Traffic, modules. We can also gather statistics from the network, system, and memory, and under the performance reports we can gather traffic and performance reports as well. Next we have iApps. iApps is an option for us to configure our application objects, such as virtual servers, pools, iRules, and profiles, in an automated fashion. We use templates; there are already system-defined templates that we can use at any time. It gives us the option to answer questionnaires, and after completing these questionnaires, the objects are configured automatically. We also have a DNS module.

Now, what is the DNS module again? Is this BIG-IP DNS? Yes. Have we already enabled it? Well, no, but even if we don't enable the module, this option is already present. If you look at our options here, though, they are limited, and obviously, if we enabled the DNS module under resource provisioning, we'd get more features. Now, below DNS, we have SSL Orchestrator. This enables us to create profiles such as Client SSL, and this is also where we manage certificates and keys. Below the SSL Orchestrator, we have Local Traffic; this is our LTM, and we have many features, starting off with the virtual server, which serves as a listener.

This is what the client talks to and communicates with. Okay? We also have policies, profiles, and iRules. iRules, which we already mentioned, is the scripting tool that allows us to add additional features, including features that are not available in the GUI or CLI of our F5 BIG-IP devices. We also have pools. This is where we enable the load balancing method or feature, and this is where we also add our servers. We also have monitors. These allow us to track the health of the servers behind our F5 BIG-IP. We also have address translation. Now, address translation will be discussed more in the next section. Below Local Traffic, we have the Acceleration module. This allows us to enable HTTP compression, web acceleration, OneConnect, web optimization, and bandwidth controllers as well.

Below Acceleration, we have Device Management. To enable high availability, or device service clustering, we first configure device trust. This allows us to trust other BIG-IP devices using certificate-based device identity authentication. This is also where we can add trusted devices to a group called a device group, or sometimes simply a cluster. We also have the traffic group options.

Traffic groups allow us to manage floating objects that can float from one BIG-IP device to another. Under the Network module, which we've already introduced in our previous lab demonstration, the interface status can be viewed, and as you can see, we have interfaces 1.1 and 1.2. Both are up because we use them on our internal and external networks. We also have another interface, 1.3, which is currently unavailable because we haven't activated it. We also have Routes. This allows us to add routing configuration, both static and dynamic. We also have self IP addresses; in our previous lab demonstration, we added internal and external self IPs as well as floating self IPs. Trunking is also configured here.

Under Tunnels, we can create many different types of tunnels, including GRE, IPsec, PPP, VXLAN, and many others. We also have the route domain option. What is a route domain? It allows us to create two or more routing instances, and this is used to avoid IP or network duplication. We also have VLANs; we've already configured and demonstrated how VLANs are created and associated with an interface. We also have service policies and classes of service. We also have IPsec VPN, and this is where we configure it: IKE peers, IPsec policies, and many other settings. This is also where we can enable and configure rate shaping. The System module is located beneath the Network module.

If I go to the first option, Configuration, and select Device > General, this allows us to view system information such as the hostname, bigip1.f5trn.com, the chassis serial number, and the image version. We are currently running BIG-IP version 13.1.3. It also provides other information, such as the CPU count and active CPU count. We can also perform some actions and tasks: we can reboot or take the system offline from this page. We also have software management. This allows us to upgrade our software image, and we're going to demonstrate that in the next section. If I click License, this will tell us which modules are licensed and ready to use, such as LTM; it is licensed, but limited. We have DNS; it is licensed, as are ASM, AFM, and many others. We also have resource provisioning.

I'm going to save that for later because I want to talk about those configuration options in more detail. We have Platform. Now, if you want to change your management IP address as well as the hostname, you can do it here on this page. You can also change the time zone and the root and admin passwords. Okay? Under the high availability options, you have fail-safe settings; however, the vast majority of high availability configuration options can be found in the Device Management module. Under Archives, this is where we save our configurations, keys, and licenses. Okay? Under Services, this is where we verify which services are currently running. We've got big3d, which is used for high availability and device clustering. You can see that gtmd is not running; that's because our BIG-IP DNS, formerly known as GTM, is not enabled. We also have named, NTP, snmpd, and sshd.

All are activated and running. We also have SNMP; by default, it is enabled, and you can see the agent and trap configuration options here. We also have user options. It's not only the admin user that we can use to log in; we can create one or many users. Just supply a username and password, and you'll have the option to choose a role. We have many options under Role, like guest, operator, application editor, manager, certificate manager, iRule manager, and so forth. We also have the option to associate a user with a specific partition. Okay? Now, as we go down to the last few options, we have Logs. You can view the logs per subsystem, such as system and local traffic. And we also have Support. Now, Support is a very important option, but I'm going to save it for the next section.

6. ADC Overview Part 4

If I click the Help tab, it will assist us in understanding what we are trying to configure. In this case, we're trying to add a user, and in order to understand the properties, you can go to the left pane and click the option "User Name." It tells us to specify the user name for the new user account. Under User Name, we have Password, with three fields: old, new, and confirm. We also have a Role select box with many options we may want to understand. What are the different roles, such as guest and operator?

All I need to do is click this icon, and as you can see, it tells us what "No Access" is: it prevents users from accessing the system. Guest gives users restricted access. Operator grants users permission to enable or disable existing nodes or pool members only. If I scroll down, we have Manager: it grants users permission to create, modify, and delete virtual servers and other configuration objects. If I scroll down further, it's not only usernames, passwords, and roles that the Help tab explains; even the smallest options, such as Partition and Terminal Access, are also documented on the Help tab's left pane. Below here we have other links, such as the DevCentral user community. DevCentral is a portal where you can find many different resources, such as scripts, and you can also post in the forums. If you have any trouble, just post a question and wait a while; someone will answer it for you.

We also have additional resources, such as askf5.com, and we also have BIG-IP iHealth. Now, we've already talked about the Support option in the main tab; it is related to BIG-IP iHealth, and we're going to talk about iHealth in the next section. If I click the About tab, as I mentioned when we first logged in, the setup, support, plugins, and download options are available under this tab, but I would like to highlight one link. The link is called "Run the Setup Utility." If I click this link, it takes us to the wizard that is presented the first time you access the BIG-IP device. Now, if you have a lab based on my other course about building an F5 BIG-IP lab for free, this is the first thing we did after accessing our F5 BIG-IP device. First, we license it, but since our system is already licensed, we can just view the current license status. All we need to do is click Next to proceed.

The next thing we'll see is the resource provisioning page, where we can verify which modules are licensed. As you can see, only the LTM module is currently provisioned. If you want to activate one or more modules, all you need to do is click the tick box and make sure that you select Nominal on this resource provisioning page. You can also check the current resource allocation for the CPU, disk, and memory. If I click Next, it takes me to the certificate properties, and if I click Next again, it takes us to the general properties, or platform configuration, where we can change our management IP address plus the netmask, the management route or default route, and the hostname of our device, and we can also change our root and admin passwords. If I click Next, this takes us to other standard configurations such as the network, and, if you go down on the left pane, you can also run the Config Sync/HA utility. This is a wizard for high availability, or device clustering, configuration.

7. Load Balancing Technology Concepts Part 1

Let's talk about the different types of load balancing. First, there is static load balancing; this is a type of load balancing in which the distribution pattern is predefined. We also have dynamic load balancing, where the distribution is based on runtime observation; this type of load balancing is more intelligent than static load balancing. We will also cover the failover mechanisms: priority group activation and the fallback host. Round robin will be the first method.

Now, with round robin load balancing, the distribution of connections is even across all pool members. And this is how it works: assuming we have a client sending traffic to a VS, and there is already a pool associated with that VS, as the BIG-IP receives the traffic, it forwards the traffic evenly to all three members. As you can see, each pool member ends up with two connections. We also have the ratio load balancing method. Ratio load balancing is based on a weighted pattern using ratio values that can be configured on the specific pool members or on the nodes.
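
To make the round-robin pattern concrete, here is a minimal Python sketch; the member names and connection counts are made up for illustration, and this is not how BIG-IP implements its scheduler internally:

from itertools import cycle

pool = ["member1", "member2", "member3"]   # hypothetical pool member names
rr = cycle(pool)

# Six incoming connections land evenly: two per member.
for i in range(6):
    print(f"connection {i + 1} -> {next(rr)}")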

Now in this example, we have a ratio value of two for pool member one, a ratio value of three for pool member two, and a ratio value of one for pool member three. The first batch of traffic is distributed evenly, so pool members 1, 2, and 3 each receive a connection from the BIG-IP. What differs is the next batch of traffic. As you can see, the fourth and fifth connections go to pool members one and two. What about the sixth connection? It goes to pool member one; it skips pool member three. And if you look at the pattern here, pool member one, since it has a ratio value of 2, gets twice as much traffic as pool member three, which has a ratio value of 1. Pool member two gets three times as much as pool member three. Why? Because its ratio value is three. There is also least connections. In this case, the BIG-IP forwards the traffic to the member or node with the fewest open connections.
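
A minimal Python sketch of the same ratio idea, using the 2/3/1 values from the example; this only illustrates the weighting, not F5's actual scheduling order:

ratios = {"member1": 2, "member2": 3, "member3": 1}   # hypothetical ratio values

def ratio_order(ratios):
    # Build one scheduling round: each member appears as often as its ratio.
    return [m for m, r in ratios.items() for _ in range(r)]

round_members = ratio_order(ratios)
for i in range(12):
    print(f"connection {i + 1} -> {round_members[i % len(round_members)]}")
# Over every 6 connections: member1 gets 2, member2 gets 3, member3 gets 1.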

The current connection count for pool member one is 100, pool member two is 104, and pool member three is 110. What happens here is that, since pool member 3 has the most connections, the BIG-IP will not forward the traffic or the connection to it. Instead, it will route traffic to servers one and two. Let's play the animation.

First, the traffic goes to pool members one and two. Now, what if the connection numbers get updated? This time pool member one and pool member three have the same number of connections. As a result, the next batch of traffic, or connections, is routed to pool members one and three. The connection counts are now 102, 103, and 124 for pool members one, two, and three. Next is the Fastest load balancing method; it load balances based on the number of outstanding layer 7 requests to a pool member and the number of open layer 4 connections.
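
The least connections decision can be sketched like this in Python, using hypothetical counters similar to the example above:

connections = {"member1": 100, "member2": 104, "member3": 110}  # hypothetical counters

def pick_least_connections(conns):
    # The member with the fewest open connections wins.
    return min(conns, key=conns.get)

member = pick_least_connections(connections)
connections[member] += 1   # the new connection is assigned to that member
print(member, connections)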

Now let's talk about what outstanding layer 7 requests are. When the BIG-IP sends a layer 7 request, this number increments, so there is one more outstanding layer 7 request. But the moment the BIG-IP receives a response from the application, or from the pool member, the number of outstanding layer 7 requests is decremented. Here's an example: pool member three has the fewest outstanding layer 7 requests, and that's why the next connections go to the third server. Now, since pool members one and three have the same number of outstanding layer 7 requests, they share the load. Next is Least Sessions: the following connections go to the member that has the fewest existing persistence records. What exactly is persistence? How does it work?
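
Before moving on to persistence, here is a rough Python sketch of the Fastest pick as just described, with hypothetical outstanding layer 7 request counters:

outstanding_l7 = {"member1": 12, "member2": 12, "member3": 7}   # hypothetical counters

def pick_fastest(outstanding):
    # Fewest outstanding (unanswered) layer 7 requests wins.
    return min(outstanding, key=outstanding.get)

member = pick_fastest(outstanding_l7)
outstanding_l7[member] += 1   # request forwarded: the counter increments
# ... and when the member answers, the counter is decremented again:
outstanding_l7[member] -= 1
print(member, outstanding_l7)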

Well, we haven't really talked about persistence; we're going to talk about it more in the next video. But just to give you an overview: when the client sends traffic to the VS, the BIG-IP processes it, does the load balancing, and selects a pool member. As it selects a pool member, it updates a persistence record. The persistence record contains the following information: the source IP address of the client, let's say this client's IP address, and the pool member it selected.

Okay? Now, what happens if the client keeps sending traffic? Since it is persistent, the BIG-IP will forward the succeeding traffic to the same pool member; in short, load balancing has been bypassed. There is now a record here, and the Least Sessions load balancing method bases its decision on the number of persistence records per pool member. In this example, the pool members with the fewest persistence records are members two and three, and if they have the same number of records, the connections will be distributed equally between them. Next is Weighted Least Connections: this is when the next connection goes to the member or node with the fewest connections as a percentage of its connection limit.
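
Before getting into the percentages, here is a small Python sketch of how persistence and Least Sessions interact, assuming a hypothetical persistence table keyed by the client source IP:

# Hypothetical persistence table: client source IP -> selected pool member.
persistence = {"10.0.0.5": "member1", "10.0.0.9": "member1"}
records = {"member1": 2, "member2": 0, "member3": 0}   # persistence records per member

def pick(client_ip):
    # Persistence first: a known client is sent back to the same member.
    if client_ip in persistence:
        return persistence[client_ip]
    # Otherwise, Least Sessions: fewest existing persistence records wins.
    member = min(records, key=records.get)
    persistence[client_ip] = member
    records[member] += 1
    return member

print(pick("10.0.0.7"))   # member2 or member3: they have the fewest records
print(pick("10.0.0.5"))   # member1: an existing record bypasses load balancing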

Now, how does it work? What you see here, the 50, 25, and 20 for pool members one, two, and three, represent the percentage of each member's connection limit. The actual connection counts would be 25 for pool member one, 50 for pool member two, and 40 for pool member three. It doesn't work like least connections, because if this were least connections, obviously the next few connections would be forwarded to the first pool member. The reason we have these percentages is that we set a connection limit on each individual pool member. The connection limit for pool member one is 50; that's how we got the 50%. For pool member two, it is 200; that is why we got the 25%. Pool member three also has a connection limit of 200. In this type of load balancing, the member with the lowest percentage is chosen rather than the one with the lowest number of connections.
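
A short Python sketch of the Weighted Least Connections pick, using the connection counts (25, 50, 40) and connection limits (50, 200, 200) from this example:

# Hypothetical current connections and configured connection limits.
members = {
    "member1": {"connections": 25, "limit": 50},    # 50% of its limit
    "member2": {"connections": 50, "limit": 200},   # 25% of its limit
    "member3": {"connections": 40, "limit": 200},   # 20% of its limit
}

def pick_weighted_least_connections(members):
    # Lowest percentage of the connection limit wins, not the lowest raw count.
    return min(members, key=lambda m: members[m]["connections"] / members[m]["limit"])

print(pick_weighted_least_connections(members))
# member3, even though member1 has the fewest raw connections (25)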

So in this example, the lowest percentage belongs to server number three, and the load balancing would then be shared between servers two and three once they are tied on connection-limit percentage. We also have the Observed load balancing type, which calculates a dynamic ratio value based on the number of layer 4 connections observed over the last second; based on that calculated result, it generates and assigns a ratio value to the pool members. This time, the BIG-IP will figure out that pool members two and three don't have many open connections, so it assigns them a ratio value of three. Now maybe you're thinking, "Why do we need these calculations? Does it add overhead?" Well, the answer to that question is no. The stats are already being calculated regardless; the question is only whether you wish to use this feature.

Okay, so if we proceed with the animation, as you can see, the load balancing is between pool member two and pool member three. We also have Predictive. Just like Observed, it calculates dynamic ratio values, but it bases them on the current connections compared with the previously observed connections. So if you think about it, Observed always bases itself on the previous second's connections, while Predictive compares the current and the previous connections. And like Observed, it generates and assigns ratio values to the pool members. In this case, pool member two receives the connection because it has a higher ratio value, and so forth. Now let's talk about load balancing using members versus nodes.
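
The exact formulas for Observed and Predictive are internal to BIG-IP; the following Python sketch only illustrates the idea described here (lightly loaded members get a higher dynamic ratio, and Predictive also looks at the trend) with made-up numbers:

# Hypothetical layer 4 connection counts per member, one second apart.
previous = {"member1": 120, "member2": 40, "member3": 35}
current  = {"member1": 130, "member2": 38, "member3": 30}

def observed_ratios(counts):
    # Observed (crudely): members below the average load get a higher ratio.
    average = sum(counts.values()) / len(counts)
    return {m: 3 if c < average else 1 for m, c in counts.items()}

def predictive_ratios(prev, curr):
    # Predictive (crudely): additionally reward members trending downward.
    base = observed_ratios(curr)
    return {m: base[m] + (1 if curr[m] < prev[m] else 0) for m in curr}

print(observed_ratios(current))             # member2 and member3 get the higher ratio
print(predictive_ratios(previous, current))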

I have here two virtual servers. One is an HTTP VS and the other is an SSH VS. As you can see, they're using the same servers but two different pools: the HTTP pool, which is assigned to the HTTP VS, and the SSH pool, which is assigned to the SSH VS. Now the question is, if the traffic is HTTP and we use the least connections (member) method, where does the next connection request go?

Okay, so would it be pool member one, pool member two, or pool member three? Please pause this video and think about it. The answer is pool member three. The next connection, or the next few connections, will be sent to this guy over here, and the reason is that its total connection count is less than pool members one and two; it has only 99. Now, load balancing using nodes: we have a client sending traffic to the HTTP VS, and our pool is using the least connections (node) load balancing method.

Now, where will the BIG-IP forward the traffic? To which pool member will the next connection request go? And what do we mean by node? If you compare a node with a pool member, pool members are specific to ports, while nodes only recognize IP addresses. If you look at the values here, you'll notice that node one has a total of 109 connections.

It doesn't matter whether these are HTTP or SSH connections; all the node knows is that it has a total of 109 connections, while node two has a total of 111, and node three has a total of 124. Compared to the previous example, where least connections (member) counted only the connections specific to that port, or that application, which was 99, a node considers the total number of connections regardless of the application. So the answer here is node one, which has a total of 109 connections, the lowest.
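
The member-versus-node difference boils down to what gets counted. Here is a Python sketch with hypothetical per-port connection counts chosen to match the 99/109/111/124 figures above (the IP addresses are made up):

# Hypothetical connection table: (node IP, service port) -> open connections.
connections = {
    ("172.16.1.21", 80): 100, ("172.16.1.21", 22): 9,    # node one: 109 total
    ("172.16.1.22", 80): 103, ("172.16.1.22", 22): 8,    # node two: 111 total
    ("172.16.1.23", 80): 99,  ("172.16.1.23", 22): 25,   # node three: 124 total
}

def least_connections_member(port):
    # Member mode: count only the connections for this service port.
    per_member = {ip: c for (ip, p), c in connections.items() if p == port}
    return min(per_member, key=per_member.get)

def least_connections_node():
    # Node mode: count every connection to the IP, regardless of port.
    per_node = {}
    for (ip, _), c in connections.items():
        per_node[ip] = per_node.get(ip, 0) + c
    return min(per_node, key=per_node.get)

print(least_connections_member(80))   # 172.16.1.23: only 99 HTTP connections
print(least_connections_node())       # 172.16.1.21: 109 total, the lowest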

8. Load Balancing Technology Concepts Part 2

Priority group activation. I have here a BIG-IP connected to a network with six pool members. Now that there are six members in the pool, I'm going to split them into two groups. I am going to assign one priority group value to pool members 1, 2, and 3, and a different priority group value to pool members 4, 5, and 6: pool members 1, 2, and 3 get a priority group value of ten, and pool members 4, 5, and 6 get a priority group value of five.

We also have another setting here: the available members value, which I'm going to set to two. The way priority group activation works is that the priority group with the highest value will be active and will be the one to receive traffic from the BIG-IP. As a result, only pool members 1, 2, and 3 will receive traffic from the BIG-IP in this case. The BIG-IP forwards the traffic to them, and it doesn't matter what type of load balancing is configured for the pool, whether round robin or ratio.

The point is that traffic should only be routed to pool members 1, 2, and 3. What are pool members 4, 5, and 6 doing in the meantime? Nothing. They are just waiting. Why? Because their priority group value is lower; it's only five. Now, what happens if one of the pool members, let's say pool member three, goes down? Well, the BIG-IP will still forward the traffic and load balance it between pool members one and two. Nothing has changed, except that pool member three is down and obviously no longer receiving traffic.

Now, what happens if pool member two also goes down? Priority group 10 now has only one active pool member. If you look at our available members setting, it says two, which means you must have two or more available members for everything to stay normal. If you have fewer than two available members, that is when the BIG-IP starts using the pool members in the other priority group, the group with the lower value. So the BIG-IP will not only forward the traffic to pool member one but also to pool members four, five, and six. Now, what happens if a pool member comes back online?

Well, since the number of available pool members in the higher-priority group is again equal to the available members setting, everything goes back to normal: the BIG-IP will only forward the traffic to pool members 1 and 2. Now let's look at another example, but this time with fewer pool members. I'm going to draw pool member one and pool member two, each in its own priority group, with the available members setting equal to one. I'm going to assign pool member one a priority group value of, say, five, and pool member two a priority group value of, let's say, two.

The BIG-IP will only forward traffic to pool member one in this case. Why? Well, the available members setting says one; as long as there is one active pool member, we're good, and we don't need to forward the traffic to the group with the lower value. So the BIG-IP just forwards the traffic to pool member one. However, if pool member one fails, the BIG-IP must forward the traffic to the other pool member because it requires an active member. What is this setup called? This is called an "active-standby" setup on the BIG-IP LTM.
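
Here is a minimal Python sketch of priority group activation as described above, using the 10/5 priority values and the available-members threshold of 2 from the first example (the member names are made up):

# Hypothetical pool: member -> (priority group value, is the member up?).
members = {
    "m1": (10, True), "m2": (10, True), "m3": (10, True),
    "m4": (5, True),  "m5": (5, True),  "m6": (5, True),
}
MIN_AVAILABLE = 2   # activate the next group when fewer than 2 members are available

def eligible(members, min_available):
    # Walk the priority groups from highest to lowest, adding up members
    # until at least min_available of them are up.
    chosen = []
    for group in sorted({g for g, _ in members.values()}, reverse=True):
        chosen += [m for m, (g, up) in members.items() if g == group and up]
        if len(chosen) >= min_available:
            break
    return chosen

print(eligible(members, MIN_AVAILABLE))   # ['m1', 'm2', 'm3']: only priority group 10
members["m2"] = (10, False)               # m2 goes down
members["m3"] = (10, False)               # m3 goes down too
print(eligible(members, MIN_AVAILABLE))   # ['m1', 'm4', 'm5', 'm6']: group 5 activated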

Now, you can use priority group activation if you want to design something like an active-standby or active-standby-standby setup, whether with a single server or a group of servers per priority group. Next, this is how the fallback host works. Let's say we have six servers, and the six servers receive traffic from the BIG-IP, whether they're using priority group activation or any type of load balancing. The BIG-IP forwards traffic to pool members 1 through 6 and everything is normal. But suddenly, all the pool members go offline, so pool members 1 through 6 become unreachable.

Or maybe the entire network just went down; we don't know. But the BIG-IP will realize there are no available pool members and will just forward the traffic to an external server called a fallback host. This fallback host can be a maintenance page telling the clients that we are having technical issues, that we're doing a software or hardware upgrade, and so forth, just to make the situation a little better than what is actually happening, because in our case either the pool members or the entire network went down entirely. The fallback host, implemented as an HTTP redirect, is configured under the HTTP profile, and an HTTP profile is usually associated with the virtual server.
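
A tiny Python sketch of the fallback-host behavior: when no pool members are available, the client is redirected to a maintenance URL (the URL below is a made-up example):

FALLBACK_HOST = "http://maintenance.example.com/"   # hypothetical maintenance page

def route(available_members):
    if not available_members:
        # No pool members left: answer with an HTTP redirect to the fallback host.
        return 302, {"Location": FALLBACK_HOST}
    # Otherwise, normal load balancing would pick a member here.
    return 200, {"member": available_members[0]}

print(route([]))             # (302, {'Location': 'http://maintenance.example.com/'})
print(route(["m1", "m2"]))   # (200, {'member': 'm1'})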

9. Configuring Load Balancing in ADC Part 1

I'm here in our F5 GUI, and we'll go to Local Traffic, Virtual Servers, and then the Virtual Server List. This shows our first VS, http_vs. Now we're going to create a new virtual server, and we're going to name it ssh_vs, with an IP address of 10.10.1.102. I'm going to use port 22, which is the port used by SSH. As I scroll down and click the pool select box, it only shows us one pool, and we haven't added an SSH pool yet. To make this easier, I'm going to click the plus sign, and this takes me to the new pool configuration page. I'm going to name it ssh_pool, and I'm going to add 172.16.1.21 with service port 22.

That's our first pool member. Then I'm going to change the last octet to .22 and click Add, then change it again to .23 and click Add, and now we're about done with our SSH pool. I'm going to leave the load balancing method at the default, round robin. I will hit Finish, and this takes us back to our virtual server configuration. Let's verify that our newly created pool is selected as the default pool: we'll just scroll down, and as you can see, it is already selected. Good. Okay, we scroll back up and verify the name.
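
As a quick side note before we finish this virtual server: the same pool and virtual server could also be created outside the GUI through the iControl REST API. The following Python sketch is only an outline; the management address, credentials, member addresses, and object names are the lab values assumed above, so verify the request bodies against your BIG-IP version's REST documentation.

import requests

BIGIP = "https://192.168.1.31"   # assumed lab management address
session = requests.Session()
session.auth = ("admin", "admin")
session.verify = False            # lab device with a self-signed certificate

# Create the SSH pool with three members on port 22 (round robin is the default).
session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
    "name": "ssh_pool",
    "members": [{"name": "172.16.1.21:22"},
                {"name": "172.16.1.22:22"},
                {"name": "172.16.1.23:22"}],
})

# Create the ssh_vs virtual server listening on 10.10.1.102:22.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "ssh_vs",
    "destination": "10.10.1.102:22",
    "ipProtocol": "tcp",
    "pool": "ssh_pool",
})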

We confirm the destination address and port, and now we hit Finish. There you go; we just created our second virtual server. I am going to open a new tab for the same BIG-IP GUI, but this time we will dedicate this tab to viewing statistics only. Under Statistics, I will select Module Statistics and then Local Traffic, and on this page I am going to select Pools. As you can see, we have two pools: the HTTP pool and the SSH pool. If I hit the plus sign, you will see there's nothing here yet.

Yes, we see the three pool members, but the counters for bits, packets, and connections are all zero. Now we're going to access our client PC, and I'm going to hit this PuTTY icon multiple times. I configured this PuTTY session to automatically connect to our virtual server via SSH at 10.10.1.102, and I also saved the username and password.

So if I double-click it, it will automatically establish a connection, and I'm going to hit this button several times: one, two, three, four, five, six, seven, eight, nine. All right, I just hit it nine or ten times. Let's verify the statistics. Still zero. But if I hit refresh, all right, it's eleven; actually, it's not nine, it's not ten. As you can see, it's almost evenly distributed: the third pool member has only three, while the first and second pool members have four active connections each.

Okay, now what I'm going to do is change the load balancing method from round robin to ratio (member). If I go to Pools, select the SSH pool, and hit Members, you can see we're currently using round robin. I am going to change this to Ratio (member) and click Update. If I leave it like this, with nothing changed except the load balancing method, it will still behave like round robin, because the ratio values are all still set to one. So I am going to change the ratio value of the second pool member to four and hit Update. There you go. Now, if we go back to our statistics page, we currently have four, four, and three. The next thing we will do is hit the SSH connection multiple times again, and the counters will start incrementing.
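
For reference, the equivalent change could be made over the iControl REST API as well. This is only a hedged Python sketch; the management address, credentials, and member name are the assumed lab values from earlier:

import requests

BIGIP = "https://192.168.1.31"   # assumed lab management address
session = requests.Session()
session.auth = ("admin", "admin")
session.verify = False

# Switch the pool to ratio (member) load balancing.
session.patch(f"{BIGIP}/mgmt/tm/ltm/pool/ssh_pool",
              json={"loadBalancingMode": "ratio-member"})

# Give the second pool member a ratio value of 4; the others stay at 1.
session.patch(f"{BIGIP}/mgmt/tm/ltm/pool/ssh_pool/members/~Common~172.16.1.22:22",
              json={"ratio": 4})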

But there will be a significantly larger increase for the second pool member because we set its ratio value to four. Now, we're starting from four, four, and three, so we're not expecting pool member two to end up with exactly four times more connections than pool members one and three, because all three already started with active connections.

Okay, so we're going to go back to our Windows client and start creating more SSH connections. So I will keep hitting this icon. We still need more connections. All right, I think that's enough. We'll go back to our statistics page, I'm going to hit refresh, and now we have a total of 63; as you can see, we have 36 for pool member two.

We only have 14 and 13 for pool members one and three, respectively. You can see the difference now, and we've verified that ratio load balancing is working, although we still need to create more connections just to satisfy ourselves. Okay, I will create more SSH connections. I think that's enough. So I'll go back to the statistics page and hit refresh, and we have a total of 96 current active connections: 19 for pool member one, 17 for pool member three, and 60 active connections for pool member two.

And as you can see, that is actually more than three times, especially if you compare pool member three to pool member two. All right, now let's go back to our pool configuration. I'm going to hit Members, which takes us to the load balancing configuration page, and I'm going to change the load balancing method to Least Connections and click Update. Before I click Update, I just want to show you that the ratio values are still 1, 4, and 1. Once I click Update, these ratio values will not take effect, because our load balancing method is now least connections, which doesn't use ratio values at all. Now, if we go to our statistics page, we know that the active connections for pool member two are 60.

Because the concept of least connections load balancing is to send more connections to pool members with fewer open connections, using this method allows pool members one and three to catch up. So we will see a significant increase in connections for pool members one and three, and we're expecting closer numbers for all three members.

So let's go back to the Windows client, and I'm going to start hitting this icon to create multiple SSH connections. Okay, more. As you can see, some of our SSH connections went idle, so those are all disconnected. All right, I think that's enough. Let's go back to our statistics page. We had 19, 60, and 17; I'm going to hit refresh, and now we have a total of 171 active connections. I'm going to hit the plus sign, and as you can see, the numbers of connections are very close: 56, 58, and 57. So we just verified that least connections load balancing is working properly.

Prepaway's 101: Application Delivery Fundamentals video training course for passing the certification exam is the only solution you need.


Pass F5 101 Exam in First Attempt Guaranteed!

Get 100% Latest Exam Questions, Accurate & Verified Answers As Seen in the Actual Exam!
30 Days Free Updates, Instant Download!

Verified By Experts

101 Premium Bundle: $69.98 (regular price $109.97)
  • Premium File: 460 Questions & Answers (last update: Apr 17, 2024)
  • Training Course: 132 Lectures
  • Study Guide: 423 Pages
Free 101 Exam Questions & F5 101 Dumps
F5.testking.101.v2024-02-27.by.layla.253q.ete (Views: 368, Downloads: 379, Size: 767.34 KB)
F5.prep4sure.101.v2020-09-07.by.annabelle.276q.ete (Views: 1058, Downloads: 1912, Size: 1.07 MB)
F5.pass4sure.101.v2018-06-23.by.simeon.231q.ete (Views: 1837, Downloads: 3038, Size: 1.23 MB)
F5.Certkiller.101.v2018-01-06.by.ionut.137qs.ete (Views: 1839, Downloads: 3100, Size: 213.49 KB)

Student Feedback

5 stars: 54%
4 stars: 46%
3 stars: 0%
2 stars: 0%
1 star: 0%

Add Comments

Post your comments about 101: Application Delivery Fundamentals certification video training course, exam dumps, practice test questions and answers.

Comments will be moderated and published within 1-4 hours.
