
2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course

The complete solution to prepare for your exam: the 2V0-21.20: Professional VMware vSphere 7.x certification video training course contains a complete set of videos that will give you thorough knowledge of the key concepts. Top-notch prep including VMware 2V0-21.20 exam dumps, study guide, and practice test questions and answers.

85 Students Enrolled
12 Lectures
13:15:00 Hours

2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course Exam Curriculum

1. Managing Networking in vSphere 7 (4 Lectures, 02:51:00)
2. Managing Storage in vSphere 7 (4 Lectures, 02:52:00)
3. vSphere 7 Monitoring Tools (4 Lectures, 00:57:00)

Managing Networking in vSphere 7

  • 6:00
  • 18:00
  • 13:00
  • 15:00

Managing Storage in vSphere 7

  • 17:00
  • 13:00
  • 6:00
  • 8:00

vSphere 7 Monitoring Tools

  • 24:00
  • 8:00
  • 10:00
  • 4:00

About 2V0-21.20: Professional VMware vSphere 7.x Certification Video Training Course

The 2V0-21.20: Professional VMware vSphere 7.x certification video training course by PrepAway, along with practice test questions and answers, study guide, and exam dumps, provides the ultimate training package to help you pass.

Managing Storage in vSphere 7

15. Introduction to vSAN

We'll start with the very basics: the host cluster. A cluster is simply a logical grouping of ESXi hosts. So let's say you have a group of ESXi hosts and you want to allow virtual machines to automatically fail over to another host if their current host fails. That's High Availability, and we have to create a cluster in order to enable High Availability, just as we have to create a cluster in order to enable DRS. With DRS, we can have virtual machines that automatically get vMotioned from host to host for load-balancing purposes. Those are a couple of features that require a host cluster in order for us to enable them. And another feature that requires it is vSAN. So step one of setting up vSAN is to create an ESXi host cluster. That's going to be the very first step in our process.
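If you prefer to script that first step instead of clicking through the vSphere Client, here is a rough pyVmomi sketch that creates an empty host cluster. The vCenter address, credentials, and cluster name are placeholders, and it assumes a single datacenter; treat it as a starting point rather than a finished tool.

```python
# Rough pyVmomi sketch of "step one": create an empty host cluster.
# Hostname, credentials, and datacenter/cluster names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    datacenter = si.content.rootFolder.childEntity[0]   # assumes one datacenter
    cluster_spec = vim.cluster.ConfigSpecEx()            # empty spec; HA/DRS/vSAN come later
    cluster = datacenter.hostFolder.CreateClusterEx(name="vSAN-Cluster",
                                                    spec=cluster_spec)
    print("Created cluster:", cluster.name)
finally:
    Disconnect(si)
```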

Now, that being said, there are some prerequisites. We have to be on the right version of vSphere, we have to have the right version of vCenter, and we also have to have supported hardware. And we also need to set up some VMkernel ports. So on each of these ESXi hosts, you can see we've got a couple of things going for us. For the time being, let's concentrate on ESXi 1. ESXi 1 has two vmnics, and a vmnic is a physical Ethernet port on the ESXi host. So this host has two physical Ethernet adapters. Assume they are 10-gigabit-per-second Ethernet adapters. And each one of these physical adapters is connected to a different physical switch.

And you can say the same thing on host ESXi 2, and the same thing on host ESXi 3. So all three hosts have this in common: they have two physical 10-gig vmnics, with each vmnic connected to a separate physical switch. And then what we've also done on each of these ESXi hosts is create a VMkernel port and tag that VMkernel port for vSAN traffic. So if you're not really familiar with VMkernel ports, what this basically means is that we've created this little port, given it an IP address, and said, "Hey, if there is traffic related to vSAN, if this host needs to transmit vSAN traffic to another host, use this VMkernel port." So we have to have that network under the surface in order for vSAN to work properly. And we'll see it in action here in a couple of slides. Now, one final thing that I want to note in regards to this network that I've shown you here is that there are a couple of design best practices that I have incorporated. Number one, I've got physical redundancy.

If either of these switches fails, there is still another switch up and running that can be used to pass all of the necessary traffic. So I've got physical redundancy enabled. I've also got nothing else connected to these switches. This is a dedicated physical network specifically for vSAN traffic. Okay, so how are my virtual machine objects actually stored? And how do these VMkernel ports come into this picture? So here we see VM 1. And VM 1 is one of my virtual machines that is stored on vSAN. And as VM 1 has reads or writes that need to be executed, they are going to be pushed over the physical network using this vSAN VMkernel port to the appropriate destination host. So we can see the active VMDK for this virtual machine here.

And there's also going to be another copy of the VMDK over here. This is a mirror copy just in case the primary copy is on a host that fails. So the vSAN VMkernel port is there to basically handle all of the traffic that's going to have to flow over this vSAN network. The virtual machine is running on one host. Its virtual disk is on another host. So when it wants to read and write to and from that virtual disk, we're going to leverage a VMkernel port to push that traffic over the network. And hopefully, what will end up happening is that the majority of the read operations will be satisfied by our flash cache. So what we see here is something called a hybrid configuration. So what does that mean? Well, on each of these ESXi hosts, we have some traditional magnetic storage devices. These are what we call our capacity devices.

We've got traditional hard disks, and then we've also got a cache tier, which is SSD. And the SSD is a lot faster than the traditional hard disks. So I've got these big capacity devices, these hard disks that will store a lot of data on each of these hosts. And then sitting in front of them, I've got this cache of SSDs, which is much faster and more expensive. So now let's look at what happens when virtual machine 1 wants to read some sort of data from its virtual disk. The VMkernel port is used to push that data over the physical network, and it eventually hits the destination host where its active VMDK resides. And look what's happening. It's hitting this SSD on host ESXi 2. And you'll notice it's happening very quickly, right? This read is happening very fast. It's hitting the SSD, and the SSD is acting as a read cache.

So the purpose of the read cache is to store the most frequently read data on SSD. So much of this SSD is going to be dedicated to a read cache. A copy of all of the most frequently read data is going to be located on that SSD. This capacity device also contains a copy of that same data, as well as a plethora of other data. But the hope is that when data is read from the VMDK, most of the time the data will get read from that SSD because it's so fast.
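To make the read path concrete, here is a small, purely illustrative Python model of a read cache sitting in front of a slower capacity tier. The latencies and block names are invented for the example; this is not VMware code.

```python
# Purely illustrative model of the hybrid read path: a small, fast read
# cache in front of a large, slow capacity tier. Numbers are made up.
CACHE_LATENCY_MS = 0.2      # SSD-ish
CAPACITY_LATENCY_MS = 8.0   # spinning-disk-ish

read_cache = {}             # block -> data; holds the most frequently read blocks
capacity_tier = {f"block-{i}": f"data-{i}" for i in range(1000)}

def read_block(block):
    """Return (data, latency_ms); cache hits are fast, misses go to disk."""
    if block in read_cache:                       # cache hit: served by the SSD
        return read_cache[block], CACHE_LATENCY_MS
    data = capacity_tier[block]                   # cache miss: slow read from disk
    read_cache[block] = data                      # promote into the read cache
    return data, CAPACITY_LATENCY_MS

_, first = read_block("block-42")    # miss: served by the capacity device
_, second = read_block("block-42")   # hit: served by the SSD read cache
print(f"first read {first} ms, second read {second} ms")
```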

If the data is not present on the SSD, this is what we call a cache miss. And you can see that this read is happening much more slowly. The virtual machine needed some sort of data that was actually not present in the read cache, and therefore the data had to get served up by the capacity device. In this case, in the hybrid configuration, our capacity device is a hard disk.

As a result, this read will be much slower than the SSD read. How about writing? We've been talking about reads so far. What if my virtual machine needs to write some sort of data to disk? Well, here's the first thing we have to consider. Number one, there are multiple copies of this VMDK. This virtual machine has one copy of the VMDK on ESXi 2. But we have to prepare for the possibility that ESXi 2 could fail.

So in this case, another copy of that VMDK is being mirrored to ESXi 3. And that way, if ESXi 2 fails, my virtual machine's data is not lost. So when a write occurs, here's what's going to happen: when the virtual machine needs to execute a write, the write is going to be sent to both of those ESXi hosts. It's going to be mirrored. If you're familiar with RAID, this is very similar to the way that writes are mirrored across a RAID array. One copy of the data is sent to each of these ESXi hosts. That way, they both always have the current version of that virtual machine's VMDK, just in case one of the hosts fails. Another thing you might have noticed here is that this write is going to hit the SSD first. That's what we call the write buffer.

So what happens is that any time these virtual machines that are on vSAN need to write some sort of data, the writes are carried out against the write buffer on the SSD. 30% of my SSD is dedicated to being a write buffer, and I sort of equate this to returning a book to the library. If I want to return a book to the library, I can just walk in, drop it on the front desk, and I'm done. The librarian is going to take that book and shelve it. They're going to do the hard, time-consuming work. My experience is that I simply drop it on the desk and walk away. It's very quick for me. The same is true for this write operation. When the virtual machine needs to write some sort of data to its VMDK, it's going to be written to the write buffer, and that's going to happen very quickly. So, from the perspective of the virtual machine, once this write hits the write buffer, it's done.

And then, on the back end, the data is actually written from the write buffer to the capacity device. So to our virtual machines, it always feels like they're writing to the SSD. The writing speeds are always really quick. And then, after the fact, vSAN handles getting that object written from the SSD to the capacity tier. Okay, so in review: vSAN can only be enabled on a cluster of ESXi hosts, and each one of those hosts has to have a VMkernel port that is tagged for vSAN traffic. All of our vSAN reads and writes will be routed through that VMkernel network. Virtual machine objects are striped and mirrored across hosts just in case we have a host failure. And read caches and write buffers are used to improve performance. Then, on the back end, we have the actual capacity devices.
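Here is a similar toy Python model of the write path just described: writes land in the write buffer on every host that holds a replica, the VM gets its acknowledgement immediately, and destaging to the capacity tier happens afterwards. Host names and data are made up.

```python
# Purely illustrative model of the write path: writes are mirrored to the
# write buffer (SSD) on two hosts, acknowledged immediately, and destaged
# to the capacity tier later. This is a concept sketch, not VMware code.
class Host:
    def __init__(self, name):
        self.name = name
        self.write_buffer = {}     # fast SSD tier
        self.capacity = {}         # slow capacity tier

    def buffer_write(self, block, data):
        self.write_buffer[block] = data          # "drop the book at the desk"

    def destage(self):
        self.capacity.update(self.write_buffer)  # background: SSD -> capacity
        self.write_buffer.clear()

def mirrored_write(block, data, replicas):
    """Send the write to every host that holds a copy of the VMDK."""
    for host in replicas:
        host.buffer_write(block, data)
    return "ack"   # the VM sees the write complete once it hits the buffers

esxi2, esxi3 = Host("esxi2"), Host("esxi3")
print(mirrored_write("block-7", b"hello", [esxi2, esxi3]))
esxi2.destage(); esxi3.destage()
print(esxi3.capacity)   # both replicas end up with the data
```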

16. vSAN Disk Groups

So here we see a cluster of ESXi hosts, and the storage on each host is organized into disk groups. We have hard disk drives that provide our persistent storage, or, as they're called, our capacity devices. And then we have SSDs providing a read cache and a write buffer. And in this configuration, and by the way, you have lots of options here, you can have up to seven capacity devices or as few as one capacity device. If you go with an all-flash configuration, your capacity devices can be SSDs. There are all kinds of ways that you can set up these disk groups.
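As a quick way to internalize those composition rules, here is a small Python helper that checks a proposed disk group layout against the limits mentioned in this lesson: one cache device per disk group, one to seven capacity devices, and at most five disk groups per host (which comes up again shortly). It is an illustration only, not a VMware tool.

```python
# Quick sanity check of the disk group rules mentioned above: one cache
# device per disk group, one to seven capacity devices, and at most five
# disk groups per host. Illustrative only, not a VMware tool.
MAX_CAPACITY_DEVICES = 7
MAX_DISK_GROUPS_PER_HOST = 5

def validate_disk_group(cache_devices, capacity_devices):
    problems = []
    if cache_devices != 1:
        problems.append("a disk group needs exactly one cache device")
    if not 1 <= capacity_devices <= MAX_CAPACITY_DEVICES:
        problems.append("a disk group needs one to seven capacity devices")
    return problems or ["ok"]

def validate_host(disk_groups):
    """disk_groups is a list of dicts like {'cache_devices': 1, 'capacity_devices': 6}."""
    if len(disk_groups) > MAX_DISK_GROUPS_PER_HOST:
        return ["a host supports at most five disk groups"]
    return [validate_disk_group(**dg) for dg in disk_groups]

# The hybrid example from this lesson: one disk group of six magnetic disks
# fronted by a single SSD on each host.
print(validate_host([{"cache_devices": 1, "capacity_devices": 6}]))
```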

So for this particular example, I've got one disk group on each host with six traditional magnetic hard disks as capacity devices and one SSD as the cache tier. So the SSD is providing a read cache and write buffer, and we have the total overall capacity of all of these hosts combined to form a vSAN datastore. So how do I impact the performance based on what we see here? How do I go about actually improving the performance around these disk groups? Maybe, for example, one particular host isn't performing the way that we want it to. The storage latency is higher than it should be. Well, you can see what we've done here: rather than having the six hard disk drives that we have on the other hosts, in this case, we've only got three, and we've taken three away.

So we have one disk group up here at the top that used to have six hard disk drives. We actually removed three hard disk drives from that disk group, created a second disk group with its own SSD on the front end, and then we're going to take those three disks and make them part of that second disk group. Now, what is the benefit of this approach? Well, what I'm essentially doing here is increasing the ratio of SSD to hard disk. So we now have more SSD for the same amount of hard disk. And that means it's much more likely that the data we want to read from our virtual machines is going to be cached in our read cache. So anything you can do to improve the ratio of cache to capacity is going to improve performance.

The recommended baseline is 10%. So, for example, if I have 1 TB of hard disk capacity, I need at least 100 GB of SSD. However, if you can increase that ratio above 10%, you'll see even better performance as you deploy more SSD. If I need to increase the capacity of my vSAN datastore, I can simply create more disk groups on these hosts. Every host can have up to five disk groups, so I can create more disk groups. And by doing that, I can add more capacity. I could also add more ESXi hosts; as you add more ESXi hosts to the cluster, any storage on those hosts can be added to our vSAN datastore. So there are a couple of options for increasing capacity. There is one thing about this particular example that I actually kind of don't like here. If you'll notice, host ESXi 3 has a little bit less storage capacity than the other two hosts do.

I added an extra disk group to hosts 1 and 2; to host 3, I didn't add one. And this kind of goes against what I will typically try to do with a vSAN cluster. What I'll normally try and do is keep my physical hardware configurations as consistent as I possibly can across the cluster, because ESXi 1 and ESXi 2 are now going to have more data on them. They're going to have more virtual machine objects, and they're going to be performing more operations. So I don't really have an equally balanced workload across these hosts. As a result, the network adapters and the actual CPUs on those hosts will have to work harder than ESXi 3's resources. The other problem introduced is that I have two hosts that are inordinately large compared to the third host, right? So if one of these really big hosts fails, the impact is pretty significant, versus if all the hosts were equally sized, which makes it a little bit easier to prepare and plan for failures. So at the end of the day, what we end up with here is that I've got these three hosts. Now, I've got a total of four disk groups. I've got six hard disks as capacity devices in each disk group.

All of that capacity will be pooled, combined, and made available as shared storage to all of the hosts in this vSAN cluster. And it's going to create this one big datastore called my vSAN datastore. So it's only one datastore, and it represents all of the physical storage capacity that exists across these hosts. Now, I don't want to say all the physical storage capacity, because on these hosts, I might actually have some other disks for other things. Like, for example, I might have some physical storage for ESXi, maybe a couple of hard disks per host that we actually install ESXi on. So it's not necessarily all the physical storage on those ESXi hosts, but whatever storage we are dedicating to vSAN is all combined into this one big vSAN datastore that is presented as shared storage across all of my ESXi hosts.
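The arithmetic behind that pooling is simple enough to sketch in a few lines of Python. The host names, device counts, and sizes below are invented, but the two calculations, the raw pooled capacity and the per-disk-group cache-to-capacity ratio against the roughly 10% guideline, mirror what the lesson describes.

```python
# Illustrative math only: pool the capacity devices from every disk group
# into one "vSAN datastore" figure and check the ~10% cache-to-capacity
# guideline per disk group. Sizes are in GB and completely made up.
hosts = {
    "esxi1": [{"cache_gb": 400, "capacity_gb": [1000] * 3},
              {"cache_gb": 400, "capacity_gb": [1000] * 3}],
    "esxi2": [{"cache_gb": 400, "capacity_gb": [1000] * 3},
              {"cache_gb": 400, "capacity_gb": [1000] * 3}],
    "esxi3": [{"cache_gb": 400, "capacity_gb": [1000] * 6}],
}

datastore_gb = sum(sum(dg["capacity_gb"]) for dgs in hosts.values() for dg in dgs)
print(f"raw vSAN datastore capacity: {datastore_gb} GB")

for host, disk_groups in hosts.items():
    for i, dg in enumerate(disk_groups, start=1):
        ratio = dg["cache_gb"] / sum(dg["capacity_gb"])
        flag = "ok" if ratio >= 0.10 else "below the 10% guideline"
        print(f"{host} disk group {i}: cache ratio {ratio:.0%} ({flag})")
```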

17. Virtual Volumes (VVOL)

In this video, we'll introduce virtual volumes. Virtual volumes are really the next generation of vSphere storage. We've already spent some time learning about VMFS datastores, NFS datastores, and things like that. Virtual volumes are kind of the next step in the evolution of storage. But the other nice thing about them is that they support a lot of the storage architecture that already exists. So, for example, virtual volumes support common storage networks like Fibre Channel, iSCSI, and NFS. But the biggest difference is that our virtual machine objects are actually exposed to the storage array. So let's think about the way that things are right now, without virtual volumes. What we've got are datastores, and they're formatted with VMFS. So here's a VMFS datastore. Well, when you take a datastore and format it with VMFS, you're creating a file system on that datastore that the storage array does not understand. The storage array is not able to dig inside a VMFS datastore and view individual virtual machine objects. And that's the biggest difference between virtual volumes and the traditional LUN-and-VMFS architecture: we can manage individual virtual machine objects at the storage array level.

This makes things like cloning and snapshots very different than they've traditionally been with a VMFS datastore. So with a traditional datastore architecture, what we have are logical unit numbers, or LUNs. We're going to have these LUNs, and on those LUNs we're going to create VMFS datastores. So here we see two LUNs, and on each one, we've created a datastore. And then we can store all of our virtual machine objects within those datastores. And what the LUN has typically provided is a storage container where all of those virtual machine files will be located. Now, with VVols, the concepts of a LUN and a datastore go away. So with VVols, what we're going to do is create a new object called a storage container. And all of our VVols, our virtual volumes, are going to be stored in this storage container. And the storage container doesn't have the traditional limitations of a LUN.

The only restriction is the actual physical storage capacity of the array. So do datastores still exist? Well, technically, yes, they do. But the datastore is purely there for functions like High Availability heartbeat datastores and things like that. What's really going on now is that we've got this one big storage container on the storage array, and the storage array has visibility over all of the individual virtual machine objects that are contained within that storage container. So our storage container is where all of our virtual volumes exist. And as a virtual machine needs to send storage commands to its virtual disks, for example, we have something called the protocol endpoint that basically handles all the traffic; our host doesn't really care about the storage container. It doesn't care how big or small it is.

It doesn't care how many of them there are. There's going to simply be this protocol endpoint that will serve as an interface to the storage container. And so when my virtual machine issues a SCSI command, the SCSI command, just like always, will flow out of the virtual SCSI controller of the VM to the ESXi host. The hypervisor will send it to the appropriate storage adapter, through the protocol endpoint, and the data will be written to that individual virtual volume. So those are some of the underlying mechanisms involved with virtual volumes. Now, in terms of what advantages they provide, let's say that you need to clone a virtual machine. So we have this virtual machine, and the VVol for the virtual machine is here, and we need to clone it. Well, rather than handling the cloning operation at the ESXi host level, we can handle it at the storage container level, at the storage array, because the storage array can see all of these individual VVols, and that's much more efficient.

Let's think about a cloning operation where the host is handling it. What's going to happen in that scenario is that all of this virtual machine's data is going to have to get pulled into the host, and then a copy of it all has to get pushed back to the storage array. That's the way that cloning has traditionally been performed. What if, instead of this traditional method, when we wanted to perform a cloning operation, vCenter could simply send a command to the storage array and tell the storage array, "Hey, this virtual volume needs to be cloned," or "This virtual volume needs to be snapshotted," and then the storage array itself handles that workload without having to transmit all of that data over the storage network? We don't need that. We can offload those tasks to the storage array. So there are huge efficiencies with virtual volumes. And that's why there's kind of a push towards this next generation of storage. They are unlikely to displace traditional VMFS datastores in the next few years, but the percentage of VVol-backed storage will continue to rise.
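A quick back-of-the-envelope comparison shows why the offload matters. The numbers below are invented, but the point stands: a host-based clone pulls the whole VMDK into the host and pushes a copy back out, while an array-offloaded clone moves essentially nothing over the storage network.

```python
# Back-of-the-envelope illustration of why offloading a clone to the array
# helps: a host-based clone reads the whole VMDK into the host and writes
# it back out, so roughly 2x the VMDK size crosses the storage network; an
# array-offloaded clone moves almost nothing over that network.
vmdk_gb = 100                         # made-up VMDK size

host_based_clone_gb = 2 * vmdk_gb     # read into the host + write back out
offloaded_clone_gb = 0                # the array copies the VVol internally

print(f"host-based clone: ~{host_based_clone_gb} GB over the storage network")
print(f"array-offloaded clone: ~{offloaded_clone_gb} GB over the storage network")
```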

18. Demo: vSAN Network Configuration

And to demonstrate this particular task, I want to utilize the VMware Hands-on Labs. These are amazing free labs that you can access yourself. There's no charge for these. So if you're looking for a very convenient way to try out things that maybe wouldn't work so well in a home lab, like, for example, vSAN, this is a great way to try those things out.

So I'm going to launch a lab here, the vSAN Getting Started lab available at hol.vmware.com, and demonstrate these tasks in that lab environment. Now, if you're trying to follow along at home and complete the same set of tasks that you see me completing, just understand that the particular lab that I'm using may or may not be available. But I would guess that you're probably going to be able to get access to some sort of vSAN lab at hol.vmware.com. So don't get hung up on the specific version of the lab. Just try to use whatever version is there to replicate these tasks if you're following along at home.

So I've already signed into the vSphere Client in my hands-on lab environment, and I'm going to Hosts and Clusters. And here under Hosts and Clusters, you can see that we've got one cluster that's already created, and then we've got some ESXi hosts in that cluster, and then there's a bunch of other hosts that are not currently part of the cluster. And so let's take a look at this pre-created cluster that currently contains three ESXi hosts. And now I'm going to click on the Configure tab and scroll down to vSAN. And as we can see at this moment, vSAN is currently turned on. So vSAN is already enabled on this particular cluster. So let's examine one of these ESXi hosts. And under Configure on my ESXi host, let's go to VMkernel adapters.

And here you can see all of the VMkernel ports that have been created for this ESXi host, and we can see the services that are enabled on each of them. So on this first VMkernel port, the enabled service is management. On the second VMkernel port, we're using this one for storage. Here's our vMotion VMkernel port. And last but not least, we have a VMkernel port with the vSAN service enabled. So on this host, a VMkernel port has been created. And that VMkernel port has been tagged specifically to handle vSAN traffic. So this is where all of the vSAN traffic on this first ESXi host is going to flow. And if I take a look at the second ESXi host, you see something very similar. If I take a look at the third ESXi host in my cluster, again, all three of them have that third VMkernel port.

And all three of them have vSAN enabled on that third VMkernel port. So what physical network is actually going to be used for this vSAN traffic? If I click on my virtual switches, I can see which VMkernel port is connected to which virtual switch. And so, down here at the bottom, I've got a port group. And the port group is called vSAN-RegionA01. And there's that third VMkernel port. And I can see here, based on these little yellow lines, that, yup, we've got uplinks one and two. These are the two physical adapters on this particular ESXi host that are going to carry all of this vSAN traffic. So this hands-on lab has already built most of the prerequisites for this vSAN network. Now, if that were not the case, I could go to this VMkernel adapters area, and for ESXi 1, I could click on Add Networking, create a new VMkernel port, and pick which network that VMkernel port is going to connect to. So here I'd pick my distributed port group for vSAN traffic, and I'd create a VMkernel port connected to that port group.

And then I could give it a name; I could set an IP address for it, of course; and I could tag vSAN traffic as an enabled service on this VMkernel port. So that's how I could go through the process of manually building that vSAN VMkernel port if it didn't already exist on this ESXi host. So from a troubleshooting perspective, or just a general verification perspective, you want to make sure that all of the hosts in the vSAN cluster are configured with VMkernel ports that can communicate with one another. If I have a virtual machine, for example, it could be running on ESXi 1, but it could have a virtual disk present on ESXi 2. The vSAN VMkernel port is going to be used to allow that virtual machine to communicate with those storage objects on other ESXi hosts. So it's critical that these vSAN VMkernel ports are configured consistently and that they can communicate with one another. Okay, so let's return our focus to this virtual switch diagram.

And remember, like I mentioned before, here's my vSAN VMkernel port, and I can trace back the physical adapters that it's actually utilizing right here in this diagram. And those two physical adapters have been associated and assigned as uplinks to this particular vSphere Distributed Switch. So if we go over to the networking view and expand this here, we can find the vSphere Distributed Switch, and here we can find the distributed port group that those VMkernel ports exist on. So on this port group, I'm just going to click on the Configure tab, and I'm going to go over the policies associated with this particular port group. And this port group has been configured with a NIC teaming method of route based on originating virtual port. And it's configured to utilize uplinks one, two, three, and four.

Now, my physical ESXi host itself only has two physical adapters. So it's only going to use the first two uplinks, but it could potentially use a maximum of four. So an important thing to understand here is that we've configured this port group for route based on originating virtual port. So let's go back to our ESXi host here. And this configuration allows my ESXi host to tolerate the failure of one of these physical adapters. So this port group has vmnic1 and vmnic0. Those are two of the uplinks for this distributed port group. And if either of those uplinks fails while the other one is still there, traffic can continue to flow out to the physical network. So that's the biggest benefit of multiple uplinks: redundancy.

We have the ability to tolerate the failure of an uplink. Is route based on originating virtual port the best NIC teaming method for my vSAN network? Maybe, maybe not, depending on what I'm looking to do here. So if I'm looking to actually actively load balance, then originating virtual port isn't going to do that. All of the vSAN traffic is going to flow over one of those uplinks, and the other one is just going to be passive, basically acting as a backup. So I may want to consider one of these other NIC teaming algorithms if I'm trying to actually load balance that vSAN traffic across multiple physical adapters. And what we're seeing here is a document, the vSAN Planning and Deployment Guide. This is the vSphere 7 documentation. If you want to know the merits of those different NIC teaming policies, I suggest you come here and take a look at the documentation.
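If you want to script the verification step from this demo rather than click through each host, here is a rough pyVmomi sketch that reports which VMkernel adapters are tagged for vSAN on every host in a cluster. The connection details and cluster name are placeholders, and the property path (host.config.vsan.networkInfo) reflects the classic vSphere API, so treat this as a sketch rather than production code.

```python
# Rough pyVmomi sketch: report which VMkernel adapters are tagged for vSAN
# on every host in a cluster. The vCenter address, credentials, and cluster
# name are placeholders; host.config.vsan.networkInfo is assumed to be
# populated by the classic vSphere API.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only: skip cert checks
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.content
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name != "RegionA01-COMP01":      # placeholder cluster name
            continue
        for host in cluster.host:
            vsan_net = host.config.vsan.networkInfo
            ports = [p.device for p in vsan_net.port] if vsan_net else []
            print(f"{host.name}: vSAN VMkernel ports = {ports or 'NONE'}")
finally:
    Disconnect(si)
```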

19. Demo: Create a vSAN Cluster

In this video, I'll show you how to create a vSAN cluster. So in the last video, you saw that we validated the network configuration of our existing host cluster here in the VMware hands-on lab environment. And so now I just basically need to turn vSAN on. So here under Hosts and Clusters, I've clicked on this cluster that already exists. And before we get to vSAN, let's take a look at High Availability. vSphere High Availability is currently turned off. That's good. If I were to try to enable vSAN on a cluster where High Availability was turned on, I would need to start by turning off host monitoring.

That would be step one, right? So if I'm in a cluster where HA is already enabled, I'd have to actually go ahead and disable host monitoring. Because remember, with HA, the management network is normally used for heartbeats. But if I enable vSAN, the vSAN network is going to be used for heartbeats. So just to kind of review the process here: I would go to HA, I would turn off host monitoring, and I would go ahead and get vSAN completely enabled. And when I was done enabling vSAN, I would actually grab these ESXi hosts and reconfigure them for vSphere HA. The option is greyed out at the moment because HA is turned off. However, keep that in mind. In your real-life environments, you can't just enable vSAN. If High Availability is turned on, you have to disable host monitoring, enable vSAN, reconfigure your hosts for HA, and then re-enable host monitoring. But in this case, HA is turned off. So we're going to go to vSAN here, and we're going to configure vSAN on this host cluster. I'm going to choose a single-site cluster. If I had a two-host vSAN cluster, in which I only actually had two ESXi hosts and a witness host, then I would set up a two-host cluster.

Or I could set up a stretched cluster, in which I have two physical sites that I'm stretching vSAN across. But at the moment, I'm just going to go with a very simple single-site cluster. I'm going to choose whether or not I want to enable deduplication and compression. I can only do this in an all-flash configuration; it only works for all-flash. If I'm doing hybrid disk groups, I cannot enable this feature. There's an encryption feature as well, and that one is available on both hybrid and all-flash disk groups. I'm going to just click Next. For the moment, I'm not going to enable either of these features. And now I can actually claim the physical disks on my three ESXi hosts. So let's expand this here and take a look. So what we can see here is what vSAN is recommending for this first host: it's identifying a flash device that is ideal for the cache tier. It's doing the same thing for hosts 2 and 3. And then for each of those hosts, it's also recommending two of these devices for the capacity tier.

So what we are going to end up with on each of these ESXi hosts is a single disk group with one cache device and two capacity devices. That will be the end result of this configuration. And we can look at this in different ways, right? We can look at it based on the hosts as well. Sometimes that helps clarify what's going on here, and you can see why it's doing what it's doing. The cache-tier device is only six gigabytes, whereas the capacity-tier devices are larger. Typically, the rule of thumb is that you want the cache to be at least 10% of the capacity of that particular disk group. So we are well within that guidance here. We'll have six gigabytes of cache and 24 gigabytes of capacity, so we've actually got 25% cache here. So that's the configuration that we're going to end up with. I'm going to go ahead and click Next, and now I have the ability to create fault domains. So with the current configuration, we can tolerate up to one failure: I've got a three-host cluster, so my failures to tolerate is going to be a maximum of one. If I had a five-host cluster, this would say I could tolerate up to two failures, but I only have a three-host cluster, so I can't do that. And so then I'm just going to go ahead and hit Finish here.
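The limit the wizard shows follows from simple arithmetic: with RAID-1 mirroring, tolerating n failures requires 2n + 1 hosts (n + 1 data replicas plus n witness components, each on its own host). A few lines of Python make the pattern obvious; this is just the math, not a vSAN API call.

```python
# The wizard's "failures to tolerate" limit follows from the host count:
# with RAID-1 mirroring, tolerating n failures needs 2n + 1 hosts
# (n + 1 data replicas plus n witness components, each on its own host).
def max_failures_to_tolerate(host_count):
    return max((host_count - 1) // 2, 0)

for hosts in (3, 4, 5):
    print(f"{hosts}-host cluster -> can tolerate up to {max_failures_to_tolerate(hosts)} failure(s)")
# 3 hosts -> 1, 4 hosts -> 1, 5 hosts -> 2, which matches what the wizard shows.
```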

And so now it says here that vSAN is turned off. I'm going to hit my little refresh button here in the vSphere Client. And there we go. We can see that vSAN is now turned on. So now we have the ability to kind of go through and take a look at some of the configuration here. I can see deduplication and compression, and encryption; these are currently turned off. We can see here that the vSAN performance service is currently enabled, and so is the file service as well. And if we go over to the Summary tab for this cluster, we can see some of the vSAN-specific information there as well. If I scroll down, we can see our vSAN capacity and our vSAN health and performance indicators as well. So that's the basic process for creating a vSAN cluster. First, you have to have all the networking set up, and of course, you have to have hosts that are on the hardware compatibility list and all that good stuff. And then you can go ahead and enable vSAN on a cluster.
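For completeness, here is a rough sketch of doing the same enablement with pyVmomi's classic API instead of the wizard. The vim.cluster.ConfigSpecEx.vsanConfig property is the older way to flip vSAN on; the newer features shown in the wizard (deduplication, encryption, file service) are managed through the separate vSAN management SDK, so this is a minimal illustration with placeholder names only.

```python
# Rough sketch of turning vSAN on for an existing cluster with pyVmomi's
# classic API (vim.cluster.ConfigSpecEx.vsanConfig). Connection details and
# the cluster name are placeholders; newer vSAN features live in the
# separate vSAN management SDK.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only: skip cert checks
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.content
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "RegionA01-COMP01")

    spec = vim.cluster.ConfigSpecEx()
    spec.vsanConfig = vim.vsan.cluster.ConfigInfo(enabled=True)
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    print("vSAN enabled on", cluster.name)
finally:
    Disconnect(si)
```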

Prepaway's 2V0-21.20: Professional VMware vSphere 7.x video training course for passing certification exams is the only solution you need.

Free 2V0-21.20 Exam Questions & VMware 2V0-21.20 Dumps
Vmware.test-inside.2v0-21.20.v2023-11-28.by.sebastian.65q.ete
Views: 2579
Downloads: 1792
Size: 855.33 KB
 
Vmware.selftestengine.2v0-21.20.v2020-10-12.by.lily.42q.ete
Views: 1089
Downloads: 2070
Size: 687.23 KB
 

Student Feedback

5 stars: 49%
4 stars: 49%
3 stars: 0%
2 stars: 0%
1 star: 1%

Add Comments

Post your comments about 2V0-21.20: Professional VMware vSphere 7.x certification video training course, exam dumps, practice test questions and answers.
