
350-601 DCCOR Cisco CCIE Data Center – Compute part 1

  1. Section 02 Compute Starts

Great. We've reached section number two, where we have to learn about compute, and you can see the total weight is 25%. So we should complete this section to gain that 25% of the weightage. What topics do we have? They are again structured here. You'll find that the first few subsections, starting from 2.1 and running up to 2.2e, ask for and check your knowledge related to UCS plus the storage area network. So what I have done is this: for the subsections from 2.1 up to 2.2, let's focus on those topics first. Once we complete everything up to 2.2, we'll move on to HyperFlex. Okay? So let's crack this section slowly, one by one. We should not leave anything out, and we should understand each and every concept related to the compute infrastructure. Now, for 2.1 and some topics related to 2.2, I have created eight videos.

First of all, you will go and understand the UCS architecture and the chassis connectivity, and I have shown some Cisco websites where you can get a visual representation of the chassis, blades, servers, et cetera. In the videos you will find how you can navigate through those 3D models and animations. Then you will see the UCS bring-up process and a dashboard walkthrough, and then what a service profile is; that's one of the key things we have to understand. After the service profile, there is a lab covering service profile templates and service profiles, and then how we can associate a service profile with any of the blade servers so that the server boots up inside UCS. Once you complete these eight videos, you will be comfortable with UCS: how it works, what the functions are, what the key concepts inside UCS are, et cetera. Then we'll slowly move to the storage area network, and then to HyperFlex, to complete section two.

  2. UCS Architecture

We have the UCS architecture; what are the components we have inside the UCS fabric, or inside the UCS architecture? You can see that I have the chassis, maybe chassis one up to chassis 20. Inside each chassis, obviously, I need to insert the servers, so we have the blade servers inserted inside the chassis. Those blade servers have a CNA, a converged network adapter, or a VIC card. Now, these VIC cards have internal pinning towards the fabric extender, or IO module. The IO module is connected with the fabric interconnect (FI): these fabric extenders are connected with FI-A and FI-B, and then on the northbound side I have the SAN traffic, the Ethernet or EtherChannel traffic, and the management interfaces as well. So here you can see the chassis and the blade servers, and these fabric interconnects are actually managed by the UCS Manager. Let me show you the other diagram as well, where we'll see the actual hardware stack.

So here you can see that I have a chassis, say for example the UCS 5108 blade server chassis, where I can insert either a half-width blade, say a UCS B200 M4, or a full-width blade, say a B420 M4 or B260 M4, like that. Then, beyond half-width and full-width, I have double-full-width as well; that is the UCS B460. Later I will show you the diagram; we have the 3D view on the Cisco site, and you can go there and check the 3D view as well. I will show you that too. Overall, you can see that I may have one chassis or a number of chassis connected. All the chassis and blade servers have something called the IO module, and from those IO modules, or fabric extenders, they are connected to the fabric interconnects.

Now, this fabric interconnect and the entire UCS architecture, or UCS fabric, are managed by UCS Manager; that is the software we have. So now you can see all the components we have inside this UCS fabric. We have the fabric interconnects that connect to the SAN, the LAN, and the management traffic. We have UCS Manager, from where we manage the entire infrastructure. We have the input/output modules, which sit between the servers and the FI. We have the big 5108 chassis; inside that, we have the blade servers; and inside the servers, we have the converged network adapters, or VIC cards. So what are the main components, then?

Okay, we have the fabric interconnects, we have the chassis, and we have the IO modules. Inside the IO module, we have the chassis management switch and the chassis management controller as well. Now, let me show you the other things. The fabric interconnects have L1 and L2 interfaces, used just to check whether the cluster of interconnects is up and running or not. And then you can see the IO modules as well; let me show you that. We have the IO module like this, connected with FI-A and FI-B, fabric interconnect A and B. And then we have the VIC card; the VIC cards have internal pinning towards the IO module, and the IO modules in turn are connected with the fabric interconnect.
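To keep that hierarchy straight, here is a minimal Python sketch of the chain we just walked through: blade with VIC, one IOM per fabric, chassis, all under one UCS Manager domain. The class and field names are illustrative only, not a Cisco SDK:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the UCS hierarchy discussed above
# (hypothetical names, not a Cisco API).

@dataclass
class Blade:
    slot: int        # slot position in the 5108 chassis
    model: str       # e.g. "B200 M4" (half-width)
    vic_ports: int   # ports on the CNA / VIC card, pinned to the IOMs

@dataclass
class IOM:
    fabric: str      # "A" or "B": each IOM uplinks to one FI
    model: str       # e.g. "2208XP"
    nif: int         # network interfaces, toward the FI
    hif: int         # host interfaces, toward the blade slots

@dataclass
class Chassis:
    name: str
    blades: List[Blade] = field(default_factory=list)
    ioms: List[IOM] = field(default_factory=list)

# One chassis of a UCS domain: two IOMs (one per fabric) and a blade
# whose VIC ports are pinned half to IOM-A and half to IOM-B.
chassis1 = Chassis(
    name="chassis-1",
    blades=[Blade(slot=1, model="B200 M4", vic_ports=4)],
    ioms=[IOM(fabric="A", model="2208XP", nif=8, hif=32),
          IOM(fabric="B", model="2208XP", nif=8, hif=32)],
)
print(chassis1)
```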

Inside UCS (because at the end of the day you can think of it as one big server where you can install hypervisors and VMs), the traffic can either flow internally, east to west, or it can go south to north. If it goes south to north, it goes from the VIC card to the IO module to the fabric interconnect and then outside. All right, then we have the fabric interconnect and its data sheet; we can go to the Cisco site and check the fabric interconnect data sheet. We have the FI 6332, which is the 32-port model, and the 6332-16UP with 16 unified ports. We also have the IO module; here you can see the IO module as well. All these modules I'm going to show you in the Cisco 3D view too. Now, on this fabric interconnect, we can see that this section is for the L1 and L2 links.

One fabric interconnect is connected to the other over those links just to exchange the heartbeats. Then you have 26 × 40-Gig QSFP+ ports, or up to 98 × 10-Gig SFP+ ports; for that you need 4 × 10G SFP+ breakout cables on those interfaces. Then, at the end, you have 6 × 40-Gig QSFP+ ports. Apart from that, we can see the power slots, the four fan slots, the serial port, and the management interface as well. All right, so let me log into the Cisco site, where I can show you the three-dimensional view of all these things. You can see the Cisco UCS B-Series server M4 3D model; if I click here, let me expand this and scroll down a little bit.
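As a quick sanity check on that port layout, here is a small sketch using the counts quoted above for the 6332; treat these as the lecture's figures and consult the Cisco data sheet for the per-port breakout rules:

```python
# FI 6332 front-panel counts as quoted in the lecture.
breakout_40g = 26   # 40G QSFP+ ports, usable as 4 x 10G with breakout cables
fixed_40g = 6       # last six ports: 40G QSFP+ only

print(f"total QSFP+ ports: {breakout_40g + fixed_40g}")   # 32-port FI

# The quoted maximum with breakout cables is 98 x 10G. Note that
# 26 * 4 = 104, so not every port contributes a full 4 x 10G;
# the data sheet defines which ports actually support breakout.
print(f"upper bound if all 26 broke out: {breakout_40g * 4} x 10G")
```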

So here you can see: this is the FI 6332-16UP. Let me show you all these things one by one. First of all, we can see the front view, where we have different types of servers inserted: M4 B-Series servers. We have the SATA drives, and we have the FIs sitting on top. Now, one by one, I can show you the servers. First we'll look at the B200 M4 blade; the viewer will pull it out and show it. This is the B200 M4, and we can go and check the blade server and its specifications; I can stretch the view and scroll down. You can also check all the chips and the circuits, and we can even expand this and read what the parts are: the RAM slots, the hard disks, and the other server specifications. Then let me show you the B460 and the B260.

First we'll look at the B260, because that is the full-width one; the B200 is actually half-width, and you can see it goes only halfway across the chassis. So if we are talking about the 5108, that means the 5108 can hold eight half-width servers; in the diagram you can also see that it has eight slots. If I go back and show you the full front view (let me stretch it, it is a little too small), at the moment it is showing only three blade servers. Why? Because you have four rows of two half-width slots each. If you are using half-width servers, you can insert eight. If you are using a full-width server, it takes two slots side by side, so you can insert only four. And if I am using a two-by-two type of server, say the B460, at that time I can insert only two servers.

If I'm using the B200, then I can insert eight. Okay, so let me quickly show you the B460; you can see it takes two-by-two of the space, occupying a cluster of slots, and here it is. Likewise we can look at all these models. Apart from that, if I want to see the scalability of UCS, the viewer shows that too, so let us see the scalability; this is actually the cabling view. Let me scroll down: on the top you can see that I have the fabric interconnects, and here we have the IO modules. So this is how UCS is constructed: the IO modules, the blue wires, and then the chassis, all connected together. These IO modules obviously go to FI-A and FI-B if I have an A-and-B type of construction. Overall I can see I have quite a few blade servers: five chassis with four blades each, so five times four means 20 blades in this diagram.
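Since this slot arithmetic comes up again and again, here is a tiny sketch of how the blade form factors consume the eight half-width slots of a 5108. The helper is hypothetical, but the slot counts match the discussion above:

```python
# Half-width slots consumed per blade form factor in a UCS 5108
# chassis (eight half-width slots in total).
SLOTS_PER_BLADE = {
    "B200 M4": 1,   # half-width
    "B260 M4": 2,   # full-width
    "B420 M4": 2,   # full-width
    "B460 M4": 4,   # double-full-width (two-by-two)
}
CHASSIS_SLOTS = 8

def max_blades(model: str) -> int:
    """Maximum number of blades of one model in a single 5108."""
    return CHASSIS_SLOTS // SLOTS_PER_BLADE[model]

for model in SLOTS_PER_BLADE:
    print(f"{model}: up to {max_blades(model)} per chassis")
# B200 M4: 8, B260 M4: 4, B420 M4: 4, B460 M4: 2
```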

The diagram is showing how the cables are connected, and we can go and check the cables and connections as well. Then we can check the fabric interconnect and the IO module too. Let me first show you the IO module and where it is. So here it is, the IO module, and we can read about it as well: this is the 2304, so I have four fabric-facing interfaces. Then I can go and check the fabric interconnect. On the top you can see the fabric interconnect, and it is showing the wiring. Let me show you this fabric interconnect without the wiring if I can; otherwise I would need to open a new Cisco page to check it. On the top you can now see the fabric interconnect without the wiring as well, and here we can also see the details.

Okay, we have the L1 and L2 interfaces. Then we have the interfaces meant specifically for Ethernet, and then the unified ports that can be converted for Fibre Channel as well. All right. And then finally, let's see how the cabling is done, so I can go and click on the wiring view here. We know the cabling works in such a way that the IO modules are cabled to the FIs: here you can see four interfaces going to the top FI and four interfaces going to the bottom FI, like that. So this is the overall structure and architecture of UCS, and in the next section we'll learn more about it.

  3. Chassis Connectivity

UCS chassis connectivity: how are we going to connect the IO module to the FI, the fabric interconnect? We have options: I can use, say, one port from one IO module and one port from the other IO module in the same chassis, or I can pair two and two, four and four, or eight and eight. It depends on which type of IO module we are using: is it the 2304 or is it the 2208? Obviously the 04 tells me I have four interfaces, and the 08 tells me I have eight interfaces. So it depends upon the nature of the requirement and what type of connectivity we need. For example, here we can see that on the 2208 I have 8 × 10-Gig unified fabric ports, while on the 2204 and the 2104 I have four interfaces that I can connect, and then we can see whatever overall throughput that gives. An important difference you can see here: this 2208 has eight interfaces that can connect to the fabric interconnect and, in parallel, 32 internal ports.
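One way to remember the pairing options: the number of uplinks you can run per fabric is bounded by the IOM's fabric-facing port count, and UCS Manager's chassis discovery policy works in steps of 1, 2, 4, or 8 links. A quick illustrative check:

```python
# Per-fabric IOM-to-FI link options: UCS chassis discovery works in
# 1, 2, 4, or 8 link steps, capped by the IOM's fabric-facing ports.
def valid_link_counts(fabric_ports: int) -> list:
    return [n for n in (1, 2, 4, 8) if n <= fabric_ports]

print(valid_link_counts(4))   # 2304 / 2204 / 2104 -> [1, 2, 4]
print(valid_link_counts(8))   # 2208 -> [1, 2, 4, 8]
```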

So that's the difference we have. We'll discuss NIFs, the network interfaces, and then HIFs, the host interfaces. Different IO modules give us different numbers of network interfaces and host interfaces. For the 2204, for example, you can see that I have four network interfaces toward the fabric interconnect and 16 internal ports, meaning internal pinning towards the servers. So what does that mean? Before moving further, I just want to show you this, so we are clear at this particular point. Here we can see the internal diagram. Suppose I am using, say, three plus three, meaning six half-width servers, plus one full-width server; the overall capacity I have is actually eight half-width or four full-width servers. So you can see I can do that. And here we can see the IO module; I can use, for example, the 2208, because on the top you can see I have eight different interfaces.

So in this case I have eight NIF interfaces, and then I have 32 HIF interfaces; those 32 HIFs are the interfaces I can connect toward the servers. Now you can see that I can group them four and four, so you can see four plus four. Some of the servers here are hosting ESXi, and again it depends upon which type of VIC card we are using. Some of the VIC cards support eight interfaces, four plus four, meaning four go to this IOM and four go to the other. Some of the VICs, like the 1240, support two interfaces, so two can go here and two can go there. With the Palo VIC card, I can see that it supports four plus four. So if you add up the total number of interfaces that can be pinned this way, that is four plus four, plus four plus four, plus two; overall, four times four plus two, which means 18.

But I still have the capability to accommodate 32 host interfaces, so with 18 in use on each side, a few of the interfaces will remain empty. If I add these up, say four plus four, all the green ones, then two, then eight, then one plus one, and then four, the total comes to approximately 24. So it varies, but I still have 32 ports in total here and 32 ports in total there. Now you can understand the internals: these are the HIF interfaces and these are the NIF interfaces behind the scenes, and the internal tracing works like that. Now suppose you are using full-width blade servers in these four places, each requiring, say, eight adapter ports; then I need eight times four, which means the complete 32 in this direction and the complete 32 in the other direction.
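To make that bookkeeping concrete, here is a hedged sketch of HIF occupancy; the adapters and their per-fabric port counts are only illustrative, taken loosely from the mix discussed above:

```python
# HIF occupancy per IOM: each blade adapter pins N ports to IOM-A
# and the same N to IOM-B (illustrative values, not a fixed config).
ports_per_fabric = {
    "slot-1: 8-port VIC": 4,   # 4 ports to each IOM
    "slot-2: VIC 1240":   2,   # 2 ports to each IOM
    "slot-3: Palo VIC":   4,
}
IOM_HIFS = 32  # e.g. a 2208XP on each side

used = sum(ports_per_fabric.values())
print(f"HIFs used per IOM: {used} of {IOM_HIFS}")
print(f"HIFs free per IOM: {IOM_HIFS - used}")
```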

So again, it depends on which type of blade servers we are using; the internal tracing goes like that, and I hope the terms are clear now: what is meant by the northbound FIs and what is meant by the southbound blades. In the older 2104, I have the option of only eight HIFs and four NIFs; in the 2204, you can see I can use four plus 16. That's why, if you go generation by generation, for example to the 2304, you should check how many NIFs and HIFs you have; there I have four NIFs and eight HIFs, but running at 40 Gig each (check the Cisco data sheet for the exact figures). To compare better, we can go to the Cisco site, check the data sheets, and compare all these connectivity options shown on the IO module and the fabric interconnect: the FIs, the IO modules, and how everything in between is internally mapped. All right, so let's stop here.
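Putting those generations side by side makes the trade-off visible. The sketch below computes the host-to-uplink oversubscription for each IOM; the 2104/2204/2208 counts are the ones quoted in this lecture, and the 2304 figures (4 × 40G NIF, 8 × 40G HIF) follow the Cisco data sheet rather than the spoken numbers:

```python
# NIF/HIF counts and port speeds per IOM generation.
#   model: (nif_count, nif_gbps, hif_count, hif_gbps)
ioms = {
    "2104XP": (4, 10, 8, 10),
    "2204XP": (4, 10, 16, 10),
    "2208XP": (8, 10, 32, 10),
    "2304":   (4, 40, 8, 40),
}

for model, (nif, nif_g, hif, hif_g) in ioms.items():
    uplink = nif * nif_g    # bandwidth toward the FI
    host = hif * hif_g      # bandwidth toward the blade slots
    print(f"{model}: {host} Gbps host-facing / {uplink} Gbps uplink "
          f"= {host / uplink:.0f}:1 oversubscription")
```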