Pass VMware 2V0-41.20 Exam in First Attempt Guaranteed!


All VMware 2V0-41.20 certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the 2V0-41.20 Professional VMware NSX-T Data Center practice test questions and answers, exam dumps, study guide, and training courses to help you study and pass hassle-free!

Preparing Transport Nodes for NSX-T

1. NSX-T Data Plane

So the data plane is where the traffic actually flows, and we're going to have transport nodes within our data plane. A transport node could be something like an ESXi host; we could have virtual machines running on ESXi, and containers as well. Our hypervisors are part of the data plane. We could have bare metal servers that are part of the data plane too. Then there's the NSX Edge, which could be either a virtual machine or a bare metal edge; we'll learn more about that later. Traffic flows through the edge, and we want to think of the edge as the border of our NSX network. That's the north-south boundary. So if traffic is flowing out of our NSX domain towards the Internet, it's going to flow through an NSX Edge.

And we're going to build something called a transport zone. A transport zone is used to define the scope of a network, and we'll have NSX layer 2 segments that span the transport nodes within a transport zone. So, for example, I may create a transport zone and include a group of ESXi hosts in it; those are my transport nodes. The transport zone defines the scope of that network and how far it can reach. And we're going to have something called TEPs, tunnel endpoints, that are created on our transport nodes. We called these VTEPs in the past, so if you're used to NSX-V, these were VTEPs. They're used to carry traffic between the transport nodes. And then we've also got VIBs.

These are VMware installation bundles, which have to be installed on our transport nodes. So let's take a closer look at these VIBs. On our ESXi host (let's assume that our transport node is an ESXi host), we're going to have something running called an N-VDS. And this is similar to the vSphere distributed switch; think of it as a cousin to the vSphere distributed switch. I can still run vSphere distributed switches on these hosts, and I can still run vSphere standard switches on these hosts. So these solutions are all compatible with each other; they can all run on the same host at the same time. What I cannot do is run NSX-T and NSX-V on the same host. I can't have TEPs and VTEPs on the same host; that doesn't work. Also running on this transport node, on my ESXi host, I've got the local control plane, the LCP. And remember, the LCP is the control plane piece of the ESXi host itself; we sort of think of the control plane as being adjacent to the data plane. This LCP connects to the CCP, the central control plane, in NSX Manager. So basically, the LCP is the interface between the host itself and the control plane of NSX Manager.

So as the configuration changes in NSX Manager, the LCP actually programmes those changes into the data plane on the ESXi host transport nodes. These could be things like route table changes, firewall changes, and other settings that are actually enforced at the data plane. And then we've got the management plane agent, or MPA. The management plane agent is used by NSX Manager to retrieve the status of the distributed firewall and to retrieve statistics from the hosts, to give us information about what's happening within that distributed firewall. In addition, NSX Manager gathers an inventory of all the VMs or containers running on these transport nodes. So let's take a moment to look at the big picture. In this diagram, I've got two ESXi hosts running, and I have a VM named VM 1 in the ESXi host on the left. Notice the IP address of VM 1.

Here on the host on the right, I've got a virtual machine called VM 2. Take note of its IP address as well: VM 1 and VM 2 are on the same subnet. They are on the same layer 2 segment.

They need to be connected to the same layer 2 switch. VM 3 is on a different network, so it's not going to be on the same layer 2 segment as VM 1 and VM 2. As we build out our diagram here, you can see that we have two different segments that we've created using NSX, two different layer 2 segments, and they're backed by something called a VNI. Now, if you're familiar with NSX-V, this is not a new concept for you, but if you aren't, I want you to think of the VNI as very similar to a VLAN. The VNI identifies a layer 2 segment. So, for example, maybe this VNI up here could be the application tier, and maybe this VNI here could be the web tier. I've got these two different segments, which are essentially acting like port groups would with a vSphere distributed switch. And so now I've got two different networks, two different segments, running on the same ESXi host. So what do I need to send traffic between those two different segments? I need some kind of routing mechanism.

With NSX-V, we called this the distributed logical router. As we dig deeper into NSX-T, we're going to learn about the difference between a Tier-1 and a Tier-0 distributed router, so we're going to change the terminology a little bit and start calling this a distributed router. Now, the purpose of the distributed router is basically this: say I've got VM 1, and maybe it's trying to send some traffic to VM 3. The traffic can flow out of VM 1 onto its layer 2 segment, hit its default gateway, which is the distributed router, be routed onto the appropriate destination segment, and reach VM 3, and the traffic never needs to leave that host. It never needs to hit a physical network. That's the big benefit of putting a distributed router into our transport nodes.
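
Just to make that concrete, here's a small illustrative Python sketch of the forwarding decision; the segment names, VNIs, and subnets are invented for the example and aren't taken from the course:

    import ipaddress

    # Hypothetical NSX-T segments: each one is a layer 2 broadcast domain
    # identified by a VNI, roughly analogous to a VLAN-backed port group.
    SEGMENTS = {
        "app-tier": {"vni": 71680, "subnet": ipaddress.ip_network("192.168.10.0/24")},
        "web-tier": {"vni": 71681, "subnet": ipaddress.ip_network("192.168.20.0/24")},
    }

    def forward(src_ip: str, dst_ip: str) -> str:
        """Return a rough description of how traffic is handled on the host."""
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        src_seg = next(n for n, s in SEGMENTS.items() if src in s["subnet"])
        dst_seg = next((n for n, s in SEGMENTS.items() if dst in s["subnet"]), None)
        if dst_seg == src_seg:
            # Same VNI: plain layer 2 forwarding on the segment, no routing needed.
            return f"switch within segment {src_seg} (VNI {SEGMENTS[src_seg]['vni']})"
        if dst_seg is not None:
            # Different VNI: the distributed router on this host routes the packet
            # between the two segments without it ever leaving the hypervisor.
            return f"route locally from {src_seg} to {dst_seg}"
        # Unknown destination: hand off toward the edge (north-south traffic).
        return "send toward the edge node (north-south traffic)"

    print(forward("192.168.10.11", "192.168.20.13"))  # routed by the distributed router
    print(forward("192.168.10.11", "192.168.10.12"))  # stays on the same segment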

Another distributed component of NSX-T is the distributed firewall, which gives us the ability to apply firewall rules directly at the interface of a virtual machine. So here we see VM 1. As traffic flows out of VM 1, it's as if the VM is directly connected to a firewall, and a rule set can be applied directly at the interface level. In fact, if I had multiple VM interfaces, I could have multiple sets of rules, a different distributed firewall rule set for each. So now, before the traffic even hits the layer 2 segment, I can apply a list of firewall rules. And by the way, it's the same on the way back in: if traffic is heading into a virtual machine, that traffic can be analysed by the distributed firewall before it hits the network interface of that virtual machine. And then I'm going to have something called an edge node, and we're going to learn much more about these edge nodes as we dig deeper into this course.
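
Here's an equally rough sketch of that per-vNIC rule idea; the rule format and names are invented for illustration and aren't the NSX-T rule model:

    # Illustrative only: a tiny model of per-vNIC distributed firewall rules.
    RULES_BY_VNIC = {
        "vm1-vnic0": [
            {"action": "ALLOW", "proto": "tcp", "dst_port": 443},
            {"action": "DROP",  "proto": "any", "dst_port": None},  # default deny
        ],
    }

    def evaluate(vnic: str, proto: str, dst_port: int) -> str:
        # Rules are checked in order at the VM's interface, before the frame
        # ever reaches the layer 2 segment (and again on the way back in).
        for rule in RULES_BY_VNIC.get(vnic, []):
            if rule["proto"] in ("any", proto) and rule["dst_port"] in (None, dst_port):
                return rule["action"]
        return "ALLOW"  # no rules attached to this vNIC in the sketch

    print(evaluate("vm1-vnic0", "tcp", 443))   # ALLOW
    print(evaluate("vm1-vnic0", "tcp", 22))    # DROP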

But I'm just going to keep this extremely simple right now. Here is an edge node. The edge node connects us to the Internet, or I should say the external physical network. It may not just be the Internet; it may be my physical corporate network. So the edge node connects us to the external networks. It provides all sorts of services as well; we'll talk more about those services, routers, and things like that as we dig deeper into this course. But at the moment, what I want you to think of when it comes to the edge node is that it is the north-south boundary of our NSX domain. If traffic is leaving NSX, it's going through an edge node. If traffic is coming in from some external network to NSX, it's coming in through an edge node. Okay, so now we've talked about the big picture a little bit. Let's zoom back in and renew our focus on some of the layer 2 concepts of the data plane. What I want to talk about now is the process involved in an ARP request. In this first diagram, we do not have NSX deployed; this is your standard vSphere distributed switch. And you can see here on the left that I've got virtual machines, and all of these VMs are connected to the same network.

Let's say VM 1 is 192.168.1.10 and VM 2 is 192.168.1.11, and on the right side of the diagram, VM 4 is 192.168.1.13. So I have four virtual machines on the same layer 2 network. Now, let's say VM 1 over here wants to ping or communicate with VM 4 in some way. Basically, here's what's going to happen: if VM 1 wants to communicate with VM 4, VM 1 is going to say, "Hey, VM 4 is on the same subnet that I'm on." VM 4 is on the 192.168.1.0 subnet. So VM 1 is going to assume that it is connected to the same layer 2 network as VM 4. This is my neighbor. It's like looking up somebody's address and seeing that they live on the same street that you live on, so you can just walk down the street and get to their house. So VM 1 now needs to discover the MAC address of VM 4. This traffic isn't going to the default gateway; it doesn't need to be sent to any kind of router. They're on the same layer 2 segment.

So it must now determine the MAC address, the layer 2 MAC address, of VM 4. And what it's going to do, if it doesn't already know that layer 2 MAC address, is send out something called an ARP request. Here's how an ARP request works: VM 1 is basically going to say, "I need to know the MAC address for 192.168.1.13," and it is going to send out a broadcast. That broadcast is going to be received by every device connected to the layer 2 network that the VM is connected to. That's how ARP requests work; they're broadcast. So VM 2 will receive that ARP request, and VM 2 will say, "That's not my address. I'm not .13, I'm .11," and VM 2 is just going to ignore it. My router also has an IP address on the 192.168.1.0 network; the router is the default gateway. So the router is going to receive that broadcast, and the router is going to say, "You know, I'm not .13; that's not me. And by the way, I don't forward broadcasts."
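
Here's a tiny sketch of the decision VM 1 is making; the addresses follow the example:

    import ipaddress

    # Illustrative sketch of the choice a VM makes before sending a frame.
    MY_IP      = ipaddress.ip_address("192.168.1.10")
    MY_NETWORK = ipaddress.ip_network("192.168.1.0/24")

    def next_hop(dst_ip: str) -> str:
        dst = ipaddress.ip_address(dst_ip)
        if dst in MY_NETWORK:
            # Same layer 2 segment: ARP-broadcast "who has <dst>?" and send the
            # frame directly to whatever MAC address answers.
            return f"ARP broadcast for {dst}, then deliver directly"
        # Different subnet: resolve the default gateway's MAC instead and let
        # the router forward the packet.
        return f"send {dst} via the default gateway"

    print(next_hop("192.168.1.13"))   # a neighbor on the same street
    print(next_hop("10.20.30.40"))    # somebody on a different street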

The router is not going to forward that layer 2 broadcast traffic. So the router acts as the boundary for layer 2 broadcasts, and it stops them on the spot. That layer 2 broadcast is never going to hit this physical switch, it's never going to hit this virtual machine, and VM 1 is never going to get a result back for that ARP request. So the problem that we have here is that our physical network includes this router. Why does the physical network include a router? Well, maybe I've got different racks that are on different subnets. Perhaps my layer 2 network has grown so large that I need to split it up in order to make spanning tree more efficient, or something along those lines. Maybe I want to limit the scope of my Ethernet broadcasts. So I'm using a router to split up my layer 2 network into two smaller chunks. Every time you take a network and stick a router in the middle, you're creating a boundary for those layer 2 broadcasts. So you're cutting the scope of all of your broadcasts in half, you're greatly reducing the amount of broadcast traffic, and you're greatly reducing the complexity of the spanning tree calculations that are used to detect loops.

So basically, if I put this router into the physical network and I don't have NSX, it breaks things. These VMs here would really need to be on a different subnet. I can't have a layer 2 network that spans a layer 3 physical network. I'm going to repeat that, because it's an important concept: without NSX, I cannot have a layer 2 network that spans a layer 3 physical network. I can't create a virtual layer 2 network that spans all four of these hosts. Let's now change our diagram a little bit, and we are going to add NSX to the picture. Right away, we're going to point out a couple of differences. We've created an N-VDS. We have a layer 2 segment, and it's the exact same address range that we saw in the previous slide. I still have my four VMs here, all on the same network, and I've still got the same physical underlay network. I'm going to call this the underlay network: basically, the underlay network is the network that connects all of the transport nodes in my NSX domain. So I've got four transport nodes, and they're all connected by the same physical network that we saw in the previous slide.

So now what happens when VM 1 generates this ARP request? It generates this layer 2 broadcast. Well, we've got this thing here called a TEP, a tunnel endpoint. And the TEP is aware of which hosts have virtual machines on them that are part of this N-VDS segment; that's part of the control plane of NSX-T. We're not going to get too deep into the details right now; I just want you to get the big picture here. The TEP understands, "Hey, this host and this host and this host all have virtual machines that are participating in this layer 2 segment." And so what the TEP is going to do is make sure that each of those TEPs receives a copy of this broadcast. Also notice that the TEPs have their own address range. They have their own IP addresses that we will assign; for these TEPs, we'll create a pool of IP addresses. And so these TEPs can communicate with each other. They can communicate with each other through IP unicast. They can send traffic directly to each other over this layer 3 network.

So the TEP captures that ARP request and says, "Let me wrap this up in my own set of headers and forward a copy of it to all of these other TEPs." And when those other TEPs receive that encapsulated frame, they open it up and say, "Oh, this is a layer 2 broadcast. This is a layer 2 broadcast on a segment that we are participating in. Let me forward it to the machines that are on that segment," and VM 4 is now able to receive that ARP request. So once we add NSX to the mix and we start creating these NSX layer 2 segments, which, by the way, we used to refer to as logical switches, the traffic can now flow over a layer 3 physical network. I can create a layer 2 virtual network and extend it over a layer 3 physical network. One of the things that you have to bear in mind with this process is that the TEP is going to add some information to these frames as they hit the physical network. So before a frame hits the physical network, the TEP is going to add some information.
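
As a rough model of that replication step (the TEP addresses and the VNI are made up for the example):

    # Illustrative sketch: how a source TEP can replicate a broadcast (such as an
    # ARP request) to every other TEP that has VMs on the same segment.
    TEPS_PER_VNI = {
        71680: ["10.1.1.2", "10.1.1.3", "10.1.1.4"],   # learned via the control plane
    }

    def replicate_broadcast(src_tep: str, vni: int, inner_frame: bytes) -> list:
        copies = []
        for remote_tep in TEPS_PER_VNI.get(vni, []):
            if remote_tep == src_tep:
                continue
            # Each copy is the untouched inner frame wrapped in new outer headers
            # and sent as an ordinary IP unicast across the layer 3 underlay.
            copies.append({"outer_src": src_tep, "outer_dst": remote_tep,
                           "vni": vni, "payload": inner_frame})
        return copies

    for pkt in replicate_broadcast("10.1.1.2", 71680, b"ARP who-has 192.168.1.13?"):
        print(pkt["outer_src"], "->", pkt["outer_dst"], "VNI", pkt["vni"])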

For example, it's going to say, "Hey, this frame should be headed to this TEP," so it adds that as the destination address and its own address as the source. It's going to identify the VNI of this segment, and it's going to add that too. So it's going to add some additional information to that frame as it hits the physical network. And so we may have to make some adjustments to the physical network itself. We may have to set the MTU to something a little bit higher. The minimum MTU configuration of the physical network has to be 1600 because, basically, the Ethernet frames are going to be a little bit bigger. If the MTU is 1500 and I start getting frames that are too big, that's going to be a problem.

It's not going to function correctly. So we want to make sure that our physical network has an MTU of at least 1600. It can go all the way up to 9000; that's fine. So what I really want to get across here, and this is the main point of this lecture, is the fact that now I can create a layer 2 segment, and the physical network underneath is kind of dumbed down a little bit, right? The physical network underneath isn't doing a whole lot for me. It's basically just raw physical transport capacity. It doesn't matter if there are layer 3 networks in the middle. I'm not creating a bunch of VLANs on these physical switches, because I'm creating my layer 2 segments within NSX. As a result, the physical network essentially becomes dumb, raw transport capacity, and we're doing the vast majority of our configuration inside of NSX-T at this point.
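
To see why 1600 is the usual minimum, here's a quick back-of-the-envelope check; the header sizes are the standard base values, and GENEVE options add a few more bytes on top:

    # Rough arithmetic on the encapsulation overhead.
    INNER_FRAME  = 1500 + 14   # full-size guest payload plus its Ethernet header
    GENEVE_BASE  = 8           # plus variable-length options
    OUTER_UDP    = 8
    OUTER_IPV4   = 20

    outer_packet = INNER_FRAME + GENEVE_BASE + OUTER_UDP + OUTER_IPV4
    print(f"Outer IP packet carrying a full-size guest frame: ~{outer_packet} bytes")
    # Roughly 1550+ bytes will not fit in a default 1500-byte underlay MTU, which
    # is why at least 1600 is required (jumbo frames up to 9000 are fine too).
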

2. GENEVE and TEPs

So let's start with the TEP, the tunnel endpoint. As you work with NSX-T, you'll notice that it's not at all like NSX-V. Your ability to manually go in and look at things like the VMkernel ports associated with the TEPs is limited. So if you're used to working with NSX-V, this is going to look and feel quite different. The vast majority of this behind-the-scenes plumbing is hidden from you. But regardless, the tunnel endpoint is there to encapsulate and decapsulate traffic.

It's going to add headers when it's sending traffic, and it's going to remove those headers when it's receiving traffic. And so this is called an overlay network, and with NSX-T, our overlay encapsulation is called GENEVE. So again, just to talk about NSX for vSphere for a moment, the old version of NSX: there we had something called VXLAN. We're not using VXLAN with NSX-T. We are getting rid of that VXLAN overlay and replacing it with GENEVE. So GENEVE is our new overlay encapsulation. Now, what is meant by an overlay network, and what is meant by an underlay network? Let's learn about those concepts here.

So in this diagram, we have a really simple configuration. We've got a good old-fashioned physical switch here at the bottom, and let's assume that we have not done any kind of special configuration on that physical switch. Maybe we'll create one VLAN and call it VLAN 10. And then there are my ESXi hosts; these are my transport nodes. Each ESXi host has something called a tunnel endpoint, or TEP. Those TEPs are connected to this physical network through vmnics, the physical adapters of these hosts.

And so let's assume that my TEPs are connected to VLAN 10 here on the physical switch, so the TEPs can communicate with each other over this underlay network. The physical switch, those cables, and the actual physical network itself are our underlay network.

So now let's assume that VM 1 on the left wants to communicate with VM 2 on the right. Again, notice that VM 1 and VM 2 are part of the same network; they're both on the 192.168.1.0 network, and they're both connected to the same layer 2 segment. We've got a VNI that's being used to identify the segment, much like we would use a VLAN with a traditional vSphere distributed switch. So we've got these two VMs that are connected to the same layer 2 segment, and VM 1 wants to send a frame to VM 2.

So VM 1 generates this Ethernet frame, and it's got a source IP, which is VM 1's address, and a destination IP, which is VM 2's address. It's got a source MAC and a destination MAC, and let's assume that we do not need to do an ARP request here. If an ARP request had to happen, we would have had a broadcast go out, and it would have hit every single device on this segment. But let's just assume that VM 1 has already completed that ARP exchange, so it already knows the destination MAC address. Okay, so the frame gets sent out by VM 1, and now it hits the tunnel endpoint, TEP 1, on the source transport node. Now, here's what the TEP is going to do: it's going to keep that original frame exactly the way it is.

So just imagine that the original frame is right here, with all of the source and destination IPs and headers, and all of that is still inside that original frame. We've still got our payload; nothing has really changed in that original frame. But what the TEP is going to do is utilise the control plane. Basically, the TEP sees that there's a destination IP of 192.168.1.11, and it has to figure out how to get to 192.168.1.11. The NSX-T control plane tells it that this frame needs to be forwarded to TEP 2. So that's one of the purposes of the control plane: to track which VM is running on which host and which TEP can be used to reach each virtual machine. So now the source TEP is going to append a new set of headers. It's going to append a new source and destination IP.

Here's the source, and here's the destination. It's going to append a new source and destination MAC as well; those are the MAC addresses of the tunnel endpoints, of the TEPs. And so now this frame is ready to head out over the physical network towards its destination, TEP 2. The frame gets forwarded over the physical network and arrives at the destination, and now that TEP receives the frame. It looks at the destination MAC and says, "That's my MAC. Let me pull off this outer header and look a little bit deeper." And then it sees a layer 3 header; it sees that the destination IP is the IP of that TEP. So it strips away that layer 3 header as well. It's like getting an envelope that's been addressed to you. That's essentially what the TEP is doing: saying, "Hey, here's some traffic that was addressed to me. Let me open up the envelope and look inside." And the next thing that it sees is a VNI.

So when this traffic flowed out of VM 1 and hit the TEP, the TEP made a note saying, "By the way, this traffic came from a certain VNI, from a certain segment." And so now the receiving TEP says, "I see this traffic came from a certain layer 2 segment, a certain VNI. Now I know where to send it. I know which layer 2 segment this traffic belongs to." And so it drops the traffic onto the correct layer 2 segment, at which point the original headers of the original frame are exposed again, and all of those extra headers that the TEP put on have been stripped away.

The original frame is now delivered to that segment, and the destination MAC is the MAC address of VM 2. When VM 2 receives it, it says, "Hey, that's my MAC. Let me strip away that header, see the layer 3 header, strip that away, and receive the payload." So when we talk about the overlay network, what we're basically doing is establishing a group of tunnels between all of these TEPs. It's like a logical overlay network. There might be other TEPs on other transport nodes that we're not seeing here.

And that's what we call an overlay network: it's basically establishing these little tunnels over the physical underlay network. So the physical network is the underlay, and it's basically just a way to connect all of these TEPs to each other. The GENEVE-encapsulated traffic that's being created by those TEPs, that's the overlay network. Okay, so let's dig a little bit deeper now. We're going to change up our physical network, and we're going to change up the way that our TEPs are addressed. But, for the most part, everything will operate in the same manner.

So VM 1 and VM 2 have not changed. They still have the same MAC addresses and the same IP addresses, and they're still connected to the particular layer 2 segment that I've created with NSX-T. But you'll notice that the TEPs look a little bit different. TEP 1, the TEP on the left, is connected to the 10.1.1.0 network. The TEP on the right is on the 172.16.10.0 network. So now my TEPs are on different subnets. Let's follow a frame as it makes its way through this network.

So the beginning does not change, right? VM 1 says, "Hey, VM 2 is on the same layer 2 segment as me. It's on the same subnet that I'm on. I already know the MAC address of VM 2, so I'm going to just generate a regular old Ethernet frame. I'm not going to send it to my default gateway; we only need the default gateway if the destination is on a different network." So the frame comes out, hits the layer 2 segment, and is forwarded to the TEP. And at this point, the TEP is going to utilise the control plane. It's going to say, "We're trying to get to this particular VM. How do we get to that MAC address? How do we get our traffic over to that destination virtual machine?" And the control plane is going to indicate that that virtual machine, that MAC address, is reachable through TEP 2 over here. And so TEP 1 is going to say, "Okay, let's wrap it up in a new set of headers." So here's my original frame again, not being modified in any way; we're just adding additional headers on the outside.

It's as if that first frame were a small envelope, and now we're taking it and putting it inside a larger envelope and writing a new address on the front of that envelope. That's essentially what we're doing here: we're saying, "Hey, the source IP is this TEP; let's ship it over to TEP 2." And there are our source and destination MACs. So that's what we're doing: we're wrapping up that frame with a new set of headers. Now, the router is connected to this 10.1.1.0 network, and it's also connected to this 172.16.10.0 network. So the frame hits this physical switch, the packet is routed by the router, and it arrives at the destination TEP. Now, one thing that I do want to make note of here is the destination MAC.

So as the frame is leaving this tunnel endpoint, the destination MAC, which I've labelled PMAC 2, is actually the MAC address of this router, right? It's the MAC address of an interface on this router. At a layer 2 level, that's what's happening. So let's break that down for a moment, for those of you who may be confused by what I just said. This tunnel endpoint has a TCP/IP stack of its own, and it has a default gateway of its own, which is this physical router. So if this tunnel endpoint needs to send traffic to a destination that's on a different network, here's what's going to happen: I'm going to have a layer 3 header with the source and destination IPs, and I'm also going to have a layer 2 header with a source and destination MAC. The layer 2 network ends right here.

That's where my layer 2 physical network ends, right, my layer 2 underlay network. I've got a layer 2 overlay on top of that that spans across all of this, but at the physical level, at the underlay level, that's where my layer 2 network ends. So the frame is being sent to the default gateway for that particular TEP, which is this router. The router is going to receive it, it's going to look at the destination MAC, and it's going to say, "This is for me. Cool. Let me strip away this layer 2 header, strip away the source MAC and the destination MAC," at which point the router is going to look at the source and destination IP, and the router is going to determine, "Okay, this is destined for the 172.16.10.0 network, and I've got an interface on that network over here. So let me forward this packet that way; let me route it onto the physical switch that's connected to that destination network."
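
Just to make the TEP's own forwarding decision concrete, here's a tiny illustrative sketch; the subnets match the example, but the TEP 2 address and the MAC labels are placeholders I've made up:

    import ipaddress

    # Illustrative sketch of the source TEP's own little TCP/IP decision.
    TEP1_NET    = ipaddress.ip_network("10.1.1.0/24")
    GATEWAY_MAC = "pmac-router"                      # stand-in for the router interface MAC
    TEP_MAC     = {"172.16.10.11": "pmac-tep2"}      # made-up remote TEP address

    def outer_destination_mac(dst_tep_ip: str) -> str:
        if ipaddress.ip_address(dst_tep_ip) in TEP1_NET:
            # Destination TEP is on my own underlay subnet: address the outer
            # frame straight to that TEP's MAC.
            return TEP_MAC.get(dst_tep_ip, "resolve-via-arp")
        # Destination TEP is on a different subnet: the outer destination MAC is
        # the router's, and the router forwards the outer IP packet from there.
        return GATEWAY_MAC

    print(outer_destination_mac("172.16.10.11"))   # goes to the router first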

So that's the job of the router in this equation. Now, has the actual frame sent by VM 1, the original frame with all of those original headers, been altered in any way? Nope. All of that stuff inside is still intact, exactly the way it was when VM 1 created it. It's just being shipped around by these TEPs across this overlay network. So now that it has been routed by the router, it arrives at TEP 2, and TEP 2 receives it. The source MAC is the MAC address of this interface on the router, and the MAC of the TEP is the destination MAC. So the TEP is going to receive this frame, determine "Hey, I'm the destination MAC," and strip away that layer 2 header. It'll see the destination IP and determine, "Hey, I'm the destination IP. Let me strip away that header." And when it strips away the layer 2 and layer 3 headers, the next header is going to identify the VNI. That was the other thing that the source TEP included. So now the receiving TEP knows which VNI to forward this traffic to.

And from there, all of the headers that were put on there by the TEPs have now been stripped away, and the frame has been delivered to the appropriate VNI so that VM 2 can receive this traffic. And, as I previously stated, when VM 2 receives it, none of the inner headers have changed; they're exactly the way they were when VM 1 sent that frame. And so VM 2 now says, "Okay, I'm the destination MAC; let me strip away that layer 2 header. I'm the destination IP; let me strip away that layer 3 header and retrieve the payload that I was intended to receive." So those are a few packet walks that explain in a much deeper way how this overlay network works, how it's used to ship traffic across physical hosts and across different layer 3 networks in the underlay, and how we can use it to create a layer 2 segment that spans a layer 3 physical network.
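
To tie those packet walks together, here's a toy Python model of what the TEPs do to a frame; the addresses and VNI are made-up example values, and none of this is actual NSX-T code:

    # Encapsulation: the inner frame is left untouched and simply wrapped in
    # outer headers, like putting a small envelope inside a bigger one.
    def encapsulate(inner_frame: dict, src_tep: str, dst_tep: str, vni: int) -> dict:
        return {"outer_src_ip": src_tep, "outer_dst_ip": dst_tep,
                "vni": vni, "inner": inner_frame}

    # Decapsulation: strip the outer headers and hand the untouched inner frame
    # to the right layer 2 segment on the receiving host.
    def decapsulate(outer_packet: dict, my_tep_ip: str, segments_by_vni: dict):
        if outer_packet["outer_dst_ip"] != my_tep_ip:
            return None                      # not addressed to this TEP
        segment = segments_by_vni[outer_packet["vni"]]
        return segment, outer_packet["inner"]

    inner = {"src_ip": "192.168.1.10", "dst_ip": "192.168.1.11",
             "src_mac": "mac-vm1", "dst_mac": "mac-vm2", "payload": "hello"}
    wire = encapsulate(inner, src_tep="10.1.1.1", dst_tep="172.16.10.11", vni=71680)
    print(decapsulate(wire, "172.16.10.11", {71680: "nsx-app-segment"}))
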

3. Transport Zones

A transport zone is used to determine which transport nodes can participate in a certain network. It identifies the transport nodes that are connected by the GENEVE overlay network, and these transport nodes could be ESXi hosts, but they could also be KVM hosts, bare metal servers, or NSX edge nodes. And we've got an overlay transport zone and a VLAN transport zone, two different types of transport zones that we can create. This is very different from what we did in NSX-V; NSX-T has a much different architecture when it comes to transport zones. So by default, when you create a transport zone, it will be an overlay transport zone, and each transport node can only be part of a single overlay transport zone. I want to repeat that one more time because this is different from NSX-V: each transport node can only participate in a single overlay transport zone. We can also have a VLAN transport zone as well. This is used for any endpoint that we want to connect directly to a VLAN-backed distributed port group. A VLAN transport zone doesn't require a TEP for communication.
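
As a rough illustration of that membership rule (purely a sketch, nothing NSX-T-specific):

    # A transport node may join many VLAN transport zones but only one overlay
    # transport zone; this toy function just enforces that rule.
    node_tz_membership = {}   # node name -> {"overlay": str | None, "vlan": set}

    def attach(node: str, tz_name: str, tz_type: str) -> None:
        entry = node_tz_membership.setdefault(node, {"overlay": None, "vlan": set()})
        if tz_type == "overlay":
            if entry["overlay"] is not None and entry["overlay"] != tz_name:
                raise ValueError(f"{node} is already in overlay TZ {entry['overlay']}")
            entry["overlay"] = tz_name
        else:
            entry["vlan"].add(tz_name)

    attach("esxi-01", "Overlay-TZ", "overlay")
    attach("esxi-01", "VLAN-TZ", "vlan")
    try:
        attach("esxi-01", "Another-Overlay-TZ", "overlay")   # not allowed
    except ValueError as err:
        print(err)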

And one of those VLAN-backed endpoints is going to be the NSX edge. So remember that I mentioned the NSX edge nodes are going to be the border between our NSX networks and our northbound external networks. On the northbound side, the NSX edge will connect to a VLAN transport zone; on the southbound side, it'll connect to an overlay transport zone. In addition, VLAN transport zones support 802.1Q tagging, so I can have VLAN trunks on my VLAN transport zones. Now, one of the things that is different with NSX-T versus NSX-V is that we no longer want to think of those transport zones as any type of security boundary. With NSX-V, you could have many different transport zones that hosts belonged to, and those transport zones would define the scope of your logical switches. We're not doing that here in NSX-T. And as we demo this, you'll notice that as soon as you create a transport zone, you're also going to have to configure an N-VDS name. So you're creating your N-VDS, your NSX virtual distributed switch, right when you create your transport zone.

Now, just remember, within that N-VDS I can create many different layer 2 segments. So it's not just one switch with one layer 2 network; I can create many layer 2 segments within that N-VDS. Consider the N-VDS to be like a physical switch that connects all of the transport nodes: if that's what you had, you could create multiple VLANs within that switch. This is kind of like that. We'll name an N-VDS when we create the transport zone, but within that N-VDS, I can create many layer 2 segments. So let's build out our transport zone diagram. And for the sake of simplicity, I'm going to talk in terms of ESXi; I'm going to talk about communications between virtual machines running on ESXi hosts. And I'm going to assume that we've already configured the following: we've already set up a three-node NSX Manager cluster with vCenter as the compute manager, and we've already created an IP pool for our TEPs. So the tunnel endpoints are automatically going to get IP addresses from a pool of IPs that we have identified. I'm assuming those things are already in place. And so at that point, I'll go into my NSX-T user interface, and I can create a new overlay transport zone; suppose I call it Overlay-TZ. I'll also have to name my N-VDS when I create the transport zone.

So I'm just going to call it NVDS-1. And let's assume that I also create a VLAN-based transport zone as well; when I create my VLAN transport zone, we're just going to call it VLAN-TZ. I can choose the N-VDS that that VLAN transport zone will be associated with, so I'll choose NVDS-1. So at this point, we have one transport zone for all of my overlay traffic, all of the GENEVE overlay traffic that's going to be flowing between virtual machines in my NSX domain, and I've got a second transport zone for VLAN-based traffic. And remember, each transport node can only be associated with a single overlay transport zone. So let's build this out a little bit further. Now we're ready to configure something called an uplink profile. In this diagram, we see four ESXi hosts, and the ESXi hosts are connected to the physical underlay network. So we've got actual physical network interfaces, actual physical switches, and maybe routers connecting all of these hosts to each other. I've got real physical adapters on these ESXi hosts that are my transport nodes. How do I want NSX-T to leverage those? Which NIC teaming strategy should they employ? What are the MTU settings? The advantage of the NSX-T uplink profile is that it's going to be consistently configured across all of these hosts.
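
Here's a simplified sketch of the kind of settings an uplink profile captures and why applying one profile to many hosts helps; the field names are stand-ins, not the exact NSX-T property names:

    # Illustrative only: an uplink profile as a reusable bundle of settings.
    uplink_profile = {
        "name": "host-uplink-profile",
        "teaming_policy": "load-balance-source",     # how the uplinks get used
        "active_uplinks": ["uplink-1", "uplink-2"],
        "transport_vlan": 10,                        # VLAN the TEP traffic rides on
        "mtu": 1600,
    }

    # The whole point of the profile is consistency: apply the same settings to
    # every transport node instead of configuring each host by hand.
    for host in ["esxi-01", "esxi-02", "esxi-03", "esxi-04"]:
        print(f"{host}: applying uplink profile '{uplink_profile['name']}' "
              f"(MTU {uplink_profile['mtu']}, VLAN {uplink_profile['transport_vlan']})")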

So you don't need to configure individual settings for each transport node. You create an uplink profile and apply it to the transport nodes as you add them, and you can create uplink profiles for your NSX edge nodes as well. We'll talk more about that later. But now let's assume that our uplinks are configured. Are these ESXi hosts ready for NSX-T at this point? Not yet. We've got a physical network established between them, and we've got some transport zones established. But now what I want to do is enable the virtual machines that are running on these hosts to communicate over this N-VDS using that GENEVE overlay network. In order for that to happen, each host needs a tunnel endpoint; each host needs a TEP. So, to accomplish this and establish these TEPs, I will select the ESXi hosts in the NSX Manager user interface, click on Configure NSX, select a cluster, and configure NSX on that cluster. If you've worked with NSX-V before, this is similar to host preparation, which you may be used to, where you have to run host preparation on a cluster. It's very similar here: you're picking your transport nodes, and the transport zones and the N-VDS that each of those nodes should be associated with.

So for this case, I've got four ESXi hosts, and I want each of those ESXi hosts to be associated with the overlay transport zone and my VLAN transport zone; let's just assume that in this scenario. I'll also establish an IP pool for the TEPs to automatically get their IP addresses from. In this case, let's just assume that all of my hosts are going to use the same IP pool, but we can also have different pools for different hosts and different subnets. So we could establish different IP pools for different groups of hosts if we needed to. And then we'll select the physical adapters that should be dedicated to NSX. In this scenario, I may have other vmnics on these hosts; perhaps I have vmnic0 and vmnic1 already in use for other traffic, and there might be other vmnics that I'm not going to use for NSX. So I'll explicitly choose which vmnics should be part of the NSX underlay network. And at this point, the TEPs will be configured. So, in this situation, any traffic that hits these TEPs and is bound for the physical underlay network will leverage the vmnics, the physical adapters, that we are allocating to the NSX underlay. We're basically binding certain physical adapters to our tunnel endpoints. And I can handle this one host at a time, configuring each one manually, or I can create something called a transport node profile.
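
As a small illustration of the TEP IP pool idea (the address range and host names are made up):

    import ipaddress

    # Each host that gets prepared for NSX-T pulls the next free address from a
    # range the administrator defined for TEPs.
    POOL = list(ipaddress.ip_network("10.1.1.0/24").hosts())[9:20]   # 10.1.1.10 - 10.1.1.20
    assigned = {}

    def assign_tep_ip(host: str) -> str:
        if host not in assigned:
            assigned[host] = str(POOL[len(assigned)])
        return assigned[host]

    for host in ["esxi-01", "esxi-02", "esxi-03", "esxi-04"]:
        print(host, "-> TEP", assign_tep_ip(host))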

So let's say, for example, that all of my hosts are very similar and they all have the same number of vmnics, and I want to dedicate vmnic2 and vmnic3 on every host to NSX. I could create a transport node profile and apply it across multiple ESXi hosts to make the configuration process faster, easier, and more consistent. So now we're starting to build out this transport zone diagram. We've got an N-VDS established, we've got transport zones established, we've got four hosts that are participating in Overlay-TZ, that overlay transport zone, and all of the necessary VIBs have been installed on my ESXi hosts. We've dedicated some of our physical adapters to NSX. So now we can start creating layer 2 segments.

And each layer 2 segment is associated with a VNI, a virtual network identifier. You may have thought of these as VXLAN network identifiers in the past; that's not what we're doing here. It's still a VNI, but it simply means virtual network identifier. There's no VXLAN with NSX-T. And the VNI is kind of like a VLAN; it's very similar to a VLAN. So the segments that we create will actually appear as port groups in vSphere, but you can only manage those port groups with NSX-T; you can't make changes to them in vSphere. That's one of the big things about NSX-T: you're not managing things in the vSphere client, because the management platform is completely separate from the vSphere client. So when we create a segment in the NSX Manager user interface, we'll choose the transport zone that we want that segment to be associated with, and that will determine which ESXi hosts are going to participate in this layer 2 segment.

And at that point, we'll also choose whether the segment should be associated with something like a Tier-1 gateway, a Tier-0 gateway, or neither. We'll talk more about that later on; we don't really need to get into the gateways just yet. So here it is: here's the segment that I've created. I've created a segment for our application servers, and I'm going to call it NSX-App. And there's the subnet, 192.168.1.0. I can now take VMs that are running on these hosts and connect them to that layer 2 segment. And I can create other layer 2 segments, like maybe a segment for my web servers here. VMs that are on the same layer 2 segment can automatically communicate with each other at this point, right? So, if I have VM 1 and VM 2 connected to this NSX-App segment, they can communicate with each other using only what I've configured here so far. If the VMs are on the same host, the traffic will never leave that host. If the VMs are on different hosts, the traffic will be tunnelled by our tunnel endpoints.
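
For reference, segments like this are usually created in the NSX Manager UI, but they can also be scripted against the NSX-T Policy REST API. The sketch below is only an assumption-laden outline: the manager address, credentials, transport zone path, and gateway address are placeholders, and the exact endpoint and payload fields should be verified against the API guide for your NSX-T version.

    import requests

    NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
    AUTH        = ("admin", "REPLACE_ME")            # use real credentials or certificates

    # Assumed payload shape: a segment named "NSX-App" on the overlay transport
    # zone, with a gateway address chosen here for illustration on 192.168.1.0/24.
    segment = {
        "display_name": "NSX-App",
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                               "transport-zones/Overlay-TZ",   # placeholder path
        "subnets": [{"gateway_address": "192.168.1.1/24"}],
    }

    resp = requests.patch(f"{NSX_MANAGER}/policy/api/v1/infra/segments/nsx-app",
                          json=segment, auth=AUTH, verify=False)
    print(resp.status_code)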

We looked at that whole process in the last video, where the traffic gets encapsulated and shipped from one TEP to another. What if I've got virtual machines that are connected to the NSX-Web segment, and I want my virtual machines that are on the NSX-App segment to be able to communicate with them? Well, if the VMs are on different segments, the traffic is going to have to be routed by a distributed router. Say VM 2 is sending a packet to VM 3. The packet hits the default gateway for VM 2, which is a distributed router running on that transport node. The traffic gets routed onto the appropriate layer 2 segment, gets encapsulated, flows over the underlay network, and hits the destination. We'll talk much more about that process later on; I just wanted to mention it here. One point that seems to create a lot of confusion is this: are the VMs in this diagram connected to a particular VLAN? Are VMs 1, 2, and 3 connected to a particular VLAN? No, they are not, right? Each of these VMs is connected to a port group.

Those port groups are my layer 2 segments, and each layer 2 segment has a unique VNI. So my two layer 2 segments are NSX-App and NSX-Web. They don't have any kind of VLAN associated with them; they have a VNI that they are associated with. But I still need VLANs in my physical underlay network. You'll notice that I've identified VLAN 10 in this physical network. What is the point of VLAN 10 if none of my VMs are actually associated with VLANs? Well, basically, these are the physical switches that connect my hosts together, and the tunnel endpoints are connected to that particular VLAN. So all of the traffic flowing out of these TEPs and hitting the physical network is flowing through a particular vmnic, and it's hitting VLAN 10 down here on the physical switch. That way, all of my vmnics are connected to the same layer 2 segment within the physical network. So I still have some basic VLAN configuration that needs to be done at the physical level.

But what I don't need to do is establish a VLAN for every subnet up here. Now I've got different subnets and different layer 2 segments, and I'm creating all of those in NSX-T. I can create 100 of these layer 2 segments and never have to reconfigure anything in the physical network. That's one advantage of this approach: you're not coordinating multiple teams, and you don't have to configure these things in multiple places. But there is still some VLAN networking that has to be implemented up front in the physical network. Once you've got that up and running, you kind of take your hands off it, and you really don't make many changes in the physical network moving forward. No matter what changes there are in NSX, the physical network basically remains configured the same.

VMware 2V0-41.20 practice test questions and answers, training courses, and study guides are uploaded in ETE file format by real users. These 2V0-41.20 Professional VMware NSX-T Data Center certification exam dumps and practice test questions and answers help students study and pass.

What do our customers say?

The resources provided for the VMware certification exam were exceptional. The exam dumps and video courses offered clear and concise explanations of each topic. I felt thoroughly prepared for the 2V0-41.20 test and passed with ease.

Studying for the VMware certification exam was a breeze with the comprehensive materials from this site. The detailed study guides and accurate exam dumps helped me understand every concept. I aced the 2V0-41.20 exam on my first try!

I was impressed with the quality of the 2V0-41.20 preparation materials for the VMware certification exam. The video courses were engaging, and the study guides covered all the essential topics. These resources made a significant difference in my study routine and overall performance. I went into the exam feeling confident and well-prepared.

The 2V0-41.20 materials for the VMware certification exam were invaluable. They provided detailed, concise explanations for each topic, helping me grasp the entire syllabus. After studying with these resources, I was able to tackle the final test questions confidently and successfully.

Thanks to the comprehensive study guides and video courses, I aced the 2V0-41.20 exam. The exam dumps were spot on and helped me understand the types of questions to expect. The certification exam was much less intimidating thanks to their excellent prep materials. So, I highly recommend their services for anyone preparing for this certification exam.

Achieving my VMware certification was a seamless experience. The detailed study guide and practice questions ensured I was fully prepared for 2V0-41.20. The customer support was responsive and helpful throughout my journey. Highly recommend their services for anyone preparing for their certification test.

I couldn't be happier with my certification results! The study materials were comprehensive and easy to understand, making my preparation for the 2V0-41.20 stress-free. Using these resources, I was able to pass my exam on the first attempt. They are a must-have for anyone serious about advancing their career.

The practice exams were incredibly helpful in familiarizing me with the actual test format. I felt confident and well-prepared going into my 2V0-41.20 certification exam. The support and guidance provided were top-notch. I couldn't have obtained my VMware certification without these amazing tools!

The materials provided for the 2V0-41.20 were comprehensive and very well-structured. The practice tests were particularly useful in building my confidence and understanding the exam format. After using these materials, I felt well-prepared and was able to solve all the questions on the final test with ease. Passing the certification exam was a huge relief! I feel much more competent in my role. Thank you!

The certification prep was excellent. The content was up-to-date and aligned perfectly with the exam requirements. I appreciated the clear explanations and real-world examples that made complex topics easier to grasp. I passed 2V0-41.20 successfully. It was a game-changer for my career in IT!