
Installing the Cisco Nexus 1000v in VMware vSphere 5.x

The article is a translation of two articles by AJ Cruz:
Nexus 1000v Part 1 of 2 (Theory)
Nexus 1000v Part 2 of 2 (Installation & Operation)

It partially overlaps with these articles:
Installing Nexus 1000V on vSphere 5.1 (Part One)
Installing Nexus 1000V on vSphere 5.1 (Part Two)
But the presentation style is different, as are some of the details covered, such as installing the VEM into the hypervisor through VMware Update Manager.

Nexus 1000v Part 1 of 2 (Theory)
This is the first of two posts on my study of the Nexus 1000v switch.
Here I will cover the theory of the Nexus 1000v; in the second part I will describe installation and operation.
As I go through this post I will use terminology that needs defining, so let me lay out some terms right away:



vDS, DVS, dvSwitch, Distributed Switch, distributed virtual switch - all of the above are how the vSphere Distributed Switch is usually referred to on the Internet and in all sorts of documentation.
They all mean the same thing and refer to the virtual switch as a whole.
The Cisco Nexus 1000v is one implementation of a vDS. vSphere also ships with a native vDS, which I do not have much hands-on experience with.



Let's figure out what a vDS, and specifically the Nexus 1000v, actually is. First, let's start with something we all know. The image below is a Nexus 7004 switch:



Pretty standard. We have a pair of Supervisor Engines in slots 1 and 2, with the Sup in slot 1 active. We also have a pair of I/O modules (IOMs, line cards, Ethernet modules, interface cards) in slots 3 and 4. The LAN links drawn off the line cards show how we might connect to the rest of our infrastructure.

Now look at the vDS graphical representation:


We have the same four modules, but now they run as software inside VMware (the "V" in vDS) and are distributed across two different ESXi hosts (the "D" in vDS).

VSM - Virtual Supervisor Module. A virtual machine that acts as the "brain" (like the Supervisor Engine) of the vDS. The VSM can run either on an ESXi host or on dedicated hardware (such as the Nexus 1010).
VEM - Virtual Ethernet Module. Software on every ESXi host participating in the vDS; the analogue of a line card in a standard modular switch.
SVS - Server Virtualization Switch. The SVS connection configuration determines how the VEMs communicate with their parent VSM. I roughly equate it (at least in my head) to the backplane traces in a standard modular switch.

Let's dig a little deeper and see how virtual machines “connect” to vDS and how vDS connects to the rest of the world. I have another image as an illustration:


It does not really matter whether the port profiles belong to the VEM or the VSM; I placed them on the VEM to make the illustration easier to understand. So what is a port profile?
In a virtual switch (logically enough), we do not configure physical interfaces directly. In fact, the virtual interface a VM connects to does not even exist until the VM's adapter is attached to it. So what we need is a container to put all of this in. The container holds configuration information (VLAN, QoS (Quality of Service) policy, etc.) and acts as a kind of funnel or aggregation point for virtual machines.
When we edit a virtual machine's settings and pick a network connection for its adapter, we see a drop-down list containing (using the image above) "A" and "B."

Why don't we see the other two port profiles (X and Y)?

vethernet port profile - a port profile type used for virtual machine connections. It appears in the network connection drop-down list of a VM's adapter settings.
ethernet port profile - a port profile type used for the physical uplinks out of the ESXi server. It is not shown in the VM adapter settings.

So virtual machines are assigned to vethernet port profiles, while physical network adapters are assigned to ethernet port profiles. How does a vethernet port profile know which ethernet port profile to use for outbound traffic?
The 1000v does not run spanning tree. Ethernet port profiles must be configured with unique VLANs; in other words, a given VLAN must map to exactly one ethernet port profile. The 1000v will let you configure several ethernet port profiles with the same VLAN, but down the road that leads to problems.
This does not mean we cannot have uplink redundancy. As you can see in the last image, two network adapters are assigned to the same ethernet port profile. Redundancy can be achieved with LACP or with vPC host mode (mac-pinning). I do not want to go too deep into the mac-pinning process, but it basically works the way it sounds: the MAC address of a given virtual machine is pinned to one of the physical uplink ports. Spreading VMs across the physical network adapters (done automatically by vPC host mode) provides a measure of load balancing.
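To make the one-VLAN-per-uplink idea concrete, here is a minimal sketch of two ethernet port profiles, each owning its own VLAN, so the VEM always has exactly one uplink candidate per VLAN. The profile names are mine, not from the article; the VLAN numbers mirror the lab we build in part 2:

 ! Sketch only: profile names are illustrative
 port-profile type ethernet Uplink-Mgmt
   vmware port-group
   switchport mode access
   switchport access vlan 101
   no shutdown
   state enabled
 !
 port-profile type ethernet Uplink-Servers
   vmware port-group
   switchport mode access
   switchport access vlan 102
   no shutdown
   state enabled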

vPC host mode - NOT vPC!!! Put vPC out of your head. vPC host mode = mac-pinning. If you have worked with VMware before, this is the same as "Route based on originating virtual port ID".

It is a load-sharing method that requires no special configuration on the upstream switch. As already mentioned, MAC addresses are simply pinned to one physical port or another.
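As a preview of part 2, enabling mac-pinning is a single command under the uplink ethernet port profile (the profile name here is again illustrative):

 port-profile type ethernet Uplink-Servers
   channel-group auto mode on mac-pinning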

Next, a little more about the SVS connection. Originally, the VEM (ESXi host) and the VSM had to be Layer 2 adjacent. Recent 1000v versions support Layer 3 deployment, and that is the recommended method. The VEM-to-VSM connection runs over IP: in a Layer 3 deployment, all control traffic is sent over UDP port 4785. This does not happen automatically, though. We have to configure the vethernet port profile used by the vmkernel port with the "capability l3control" command. That is what instructs the VEM to encapsulate everything related to the control plane in UDP 4785 and send it to the VSM.
At this point we run into a "chicken or egg" problem, especially if the VSM itself runs on top of a VEM.

System VLAN - allows traffic on the configured VLAN to pass end to end right away, which means network access even when there is no connection to the VSM; vmkernel ports must be configured with a system VLAN. Note that the "system vlan <#>" command goes on both the vethernet port profile and the ethernet port profile.
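Here is a sketch of how those pieces fit together, mirroring the configuration we will actually build in part 2 (the profile names are illustrative): "capability l3control" goes on the vmkernel vethernet profile, and the same VLAN is declared a system VLAN on both that profile and the uplink ethernet profile that carries it:

 port-profile type vethernet L3-Control
   capability l3control
   switchport access vlan 101
   system vlan 101
 !
 port-profile type ethernet Uplink-Mgmt
   switchport access vlan 101
   system vlan 101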

The last thing I want to mention is how edge ports appear on the Nexus 1000v. I mentioned that virtual interfaces do not exist until a VM is connected. Once a VM is assigned a vethernet port profile in its network settings and comes online, a veth interface is created on the Nexus 1000v. A veth interface is the equivalent of a physical edge host port on an ordinary physical switch, and it inherits the configuration of its parent vethernet port profile.
Ethernet port profiles are bound to physical network adapters in the vSphere GUI (more on that in the next post). Within the Nexus 1000v they show up as modular Ethernet interfaces numbered module#/port#, where the port number comes from the vmnic. Using the last image as a reference, vmnic3 shows up in the 1000v as Eth3/4: 3 because the VEM is module 3, and 4 because vmnic3 is the fourth vmnic in the host. We still do not configure anything on these interfaces, though; everything is done in the port profile.
One final note on VEM numbering: the VSMs are always modules 1 and 2. By default, VEMs are numbered sequentially starting at module 3 as ESXi hosts are added and come online.

Now that we have a good theoretical foundation for the 1000v, we are ready to move on to the second post in this series.

Nexus 1000v Part 2 of 2 (Installation and Operation)
In part 2 of this series we will walk through the installation of the Cisco Nexus 1000v switch and look at some basic operations.
I assume the reader has a basic knowledge of VMware networking and architecture and of 1000v operation.
If you would like to first study the theory of 1000v - see part one: "Nexus 1000v Part 1 of 2 (Theory)"

Note that I am starting from a working ESXi setup, including vCenter. If you want to see how I configured everything, see my post "My VMware Lab".

I start with my ESXi networking configured with two standard vSwitches: vSwitch0 for management/VMkernel connectivity (including one VM port group in the management VLAN) and vSwitch1 for VM traffic.
I have four network adapters; vmnic0 and vmnic1 are assigned to vSwitch0, and vmnic2 and vmnic3 to vSwitch1:



Here is some information about my setup:
VLAN101 - 10.1.1.0/24
VLAN102 - 10.1.2.0/24
vCenter Appliance - 10.1.2.50
ESXi host - 10.1.1.52 - .254


My goal for the installation is to replace vSwitch1 with a Nexus 1000v and verify that my vCenter and Win2008R2 VMs still ping across the vDS.

Installing the Nexus 1000v consists of five basic steps:

  1. Install/provision the VSM virtual machines
  2. Register the Nexus 1000v plug-in in vCenter
  3. Configure VSM-to-vCenter communication (the SVS connection)
  4. Install the VEM software on each ESXi host and bring the modules online
  5. Configure the host networking


We can install the 1000v either manually or with the Cisco Java installer. Since the Java installer is the recommended way, that is what I will demonstrate. The Java installer performs steps 1-3, and potentially step 4 (which would mean the installation is complete).

Steps 1-3:
Browse to the VSM installer application and double-click the installer icon:



Select the complete installation and click the Custom configuration radio button:



Click Next, enter the vCenter Server IP address and credentials, and click Next.

The installer will deploy two VSMs whether you have one ESXi host or several; if you only have one, enter the same information for host 2. Fill in all the required information in the installer:



You can type everything in yourself or pick values with the [Browse] buttons; just be careful not to make mistakes if you type them in, because the installer does not validate input until you leave the screen. If you do slip up, you will have to start over.
The installer automatically appends "-1" and "-2" to the virtual machine name for VSM 1 and VSM 2, respectively.
Select the .OVA file from the Nexus installation directory. Note that the OVA with "1010" in its name is for the Nexus 1010.
We will deploy in Layer 3 mode, which is the preferred method.
In a Layer 3 deployment the Control and Packet port groups are ignored. I will just assign them all to the management VLAN.
After you have completed everything, click [Next]

While things are progressing, it is a good time to arrange your windows. I like to split the screen, keeping vCenter on the left side so I can see what is happening in vSphere while the 1000v installer does its job.

You will then see a summary screen. If everything looks good, click Next, then relax and let ESXi work its magic while the installation runs. When it completes, you will be presented with a confirmation screen:



Do not close this window yet; let's verify the steps completed so far. The installer has just performed steps 1 through 3, so let's check each one ourselves. First, we can see in the image above that there are now two new virtual machines, N1Kv-1 and N1Kv-2, so step 1 is done.
To check step 2, in vCenter click "Plug-ins" in the menu, then "Manage Plug-ins".



We see that a new plugin has been installed for the Nexus 1000V. Close this window.

To check step 3, we look both at the 1000v and inside vSphere.
SSH to the 1000v and look at the end of the running configuration:

 N1Kv# sh run

 !Command: show running-config
 !Time: Sat Aug 31 04:57:40 2013

 version 4.2(1)SV2(1.1a)
 svs switch edition essential

 -----output omitted------

 svs-domain
   domain id 1
   control vlan 1
   packet vlan 1
   svs mode L3 interface mgmt0
 svs connection vcenter
   protocol vmware-vim
   remote ip address 10.1.2.50 port 80
   vmware dvs uuid "8f 99 26 50 21 ce f8 b2-97 7e 6d 49 a2 b6 9f d8" datacenter-name MYDC
   admin user n1kUser
   max-ports 8192
   connect
 vservice global type vsg
   tcp state-checks invalid-ack
   tcp state-checks seq-past-window
   no tcp state-checks window-variation
   no bypass asa-traffic
 vnm-policy-agent
   registration-ip 0.0.0.0
   shared-secret **********
   log-level
 N1Kv#


We see here the configuration of "svs connection vcenter". The installer has set up a connection for us. Now let's take a look at vCenter to make sure that it has created a new vDS.
In vCenter, press CTRL+SHIFT+N or go to Home -> Inventory -> Networking.
Expand the tree to verify that the 1000v and its new vDS are in place:



Now on to step 4: installing the VEM software on ESXi. If we look at our installer, we see it has an option to "install VIB and add module". What is a VIB?
VIB stands for vSphere Installation Bundle. It is simply a piece of software, an application, that we can install on ESXi. In our case it is the Nexus 1000v VEM software.
One caveat: for this step the 1000v installer relies on VMware Update Manager (VUM). Since I do not have VUM, I cannot use the installer. So close the installer for now; we will do everything manually.

Step 4.
First we need to copy the VIB file to ESXi. You can do this any way you like; SCP works. I usually just copy it to my datastore using vCenter. To do that, go to vCenter and press CTRL+SHIFT+H to get back to Home -> Inventory -> Hosts & Clusters. Go to "Configuration", then "Storage". Right-click the datastore and click "Browse Datastore". At the root level, click the "Upload files to this datastore" button and choose "Upload File".
Go to the VIB file, select it (I just chose the latest version), and click [Open].
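If you would rather not click through the datastore browser, SCP works just as well. Run from the machine holding the VIB, something like this lands the file straight in /tmp on the host (the IP is my lab host from above, and SSH must already be enabled on it):

 scp cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib root@10.1.1.52:/tmp/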



Enable SSH and the ESXi Shell on the host, then SSH into ESXi.
If you are not familiar with this process, see: Using the ESXi Shell in ESXi 5.x
In the CLI, change into the datastore and copy the .vib file to /tmp/ so we can work with it.

 ~ #
 ~ # ls
 altbootbank   dev     local.tgz   proc           store            usr   vmupgrade
 bin           etc     locker      productLocker  tardisks         var
 bootbank      lib     mbr         sbin           tardisks.noauto  vmfs
 bootpart.gz   lib64   opt         scratch        tmp              vmimages
 ~ #
 ~ # cd /vmfs
 /vmfs #
 /vmfs # ls
 devices  volumes
 /vmfs #
 /vmfs # cd volumes
 /vmfs/volumes #
 /vmfs/volumes # ls
 2c12e47f-6088b41c-d660-2d3027a4ae4d  521e1a46-2fa17fa3-cb7d-000c2956018f  datastore1 (1)
 3d762271-7f5b622d-3cfa-b4a79357ee70  521e1a4d-e203d122-8854-000c2956018f  shared
 521b4217-727150b0-5b58-000c2908bf12  521e1a4e-8824f799-5681-000c2956018f
 /vmfs/volumes #
 /vmfs/volumes # cd shared
 /vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 #
 /vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # ls
 6001.18000.080118-1840_amd64fre_Server_en-us-KRMSXFRE_EN_DVD.iso
 Cisco_bootbank_cisco-vem-v152-esx_4.2.1.2.1.1a.0-3.1.1.vib
 N1Kv-1
 N1Kv-2
 NSC
 Win2008R2_1
 cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib
 nexus-1000v.4.2.1.SV2.1.1a.iso
 vCenter Appliance
 /vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # cp cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib /tmp/
 /vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 #


Now go to /tmp and install the VIB:

 /vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # cd /tmp
 /tmp #
 /tmp # esxcli software vib install -v /tmp/*.vib
 Installation Result
    Message: Operation finished successfully.
    Reboot Required: false
    VIBs Installed: Cisco_bootbank_cisco-vem-v152-esx_4.2.1.2.1.1a.0-3.1.1
    VIBs Removed:
    VIBs Skipped:
 /tmp #
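Before moving on, it does not hurt to confirm the package from the host side. Two quick checks; the grep filter is my own, and the "vem status" utility is installed along with the Cisco VEM VIB (outputs omitted):

 /tmp # esxcli software vib list | grep -i cisco
 /tmp # vem status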


At this point step 4 is half done. Now we have to bring the VEM module online. To do that we need to create a VMkernel port group with the "capability l3control" option. Before that, let's check our status on the 1000v:

 N1Kv# sh mod
 Mod  Ports  Module-Type                       Model               Status
 ---  -----  --------------------------------  ------------------  ------------
 1    0      Virtual Supervisor Module         Nexus1000V          active *
 2    0      Virtual Supervisor Module         Nexus1000V          ha-standby

 Mod  Sw                  Hw
 ---  ------------------  ------------------------------------------------
 1    4.2(1)SV2(1.1a)     0.0
 2    4.2(1)SV2(1.1a)     0.0

 Mod  MAC-Address(es)                         Serial-Num
 ---  --------------------------------------  ----------
 1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
 2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

 Mod  Server-IP        Server-UUID                           Server-Name
 ---  ---------------  ------------------------------------  --------------------
 1    10.1.1.10        NA                                    NA
 2    10.1.1.10        NA                                    NA

 * this terminal session
 N1Kv#


We see the two VSM modules, 1 and 2. No VEM is online yet.
First let's create an ethernet port profile for the management uplink VLAN.
But before that, let's arrange our windows again; I like splitting the screen to watch what happens in vCenter while I work on the 1000v. In vCenter, press CTRL+SHIFT+N to return to the Networking view. With that window open, let's create the port profile:

 N1Kv# conf t
 Enter configuration commands, one per line. End with CNTL/Z.
 N1Kv(config)# port-profile type ethernet MGMT-Uplink
 N1Kv(config-port-prof)# vmware port-group
 N1Kv(config-port-prof)# switchport mode access
 N1Kv(config-port-prof)# switchport access vlan 101
 N1Kv(config-port-prof)# no shutdown
 N1Kv(config-port-prof)# system vlan 101
 N1Kv(config-port-prof)# state enabled
 N1Kv(config-port-prof)#


As soon as we enter "state enabled", the port profile immediately shows up in vCenter.



Now let's create a vethernet port profile for management traffic. This is the one that gets "capability l3control".

 N1Kv(config-port-prof)# port-profile type vethernet VLAN101
 N1Kv(config-port-prof)# vmware port-group
 N1Kv(config-port-prof)# switchport mode access
 N1Kv(config-port-prof)# switchport access vlan 101
 N1Kv(config-port-prof)# no shutdown
 N1Kv(config-port-prof)# system vlan 101
 N1Kv(config-port-prof)# capability l3control
 Warning: Port-profile 'VLAN101' is configured with 'capability l3control'. Also configure the corresponding access vlan as a system vlan in:
  * Port-profile 'VLAN101'.
  * Uplink port-profiles that are configured to carry the vlan
 N1Kv(config-port-prof)# 2013 Aug 31 06:56:59 N1Kv %MSP-1-CAP_L3_CONTROL_CONFIGURED: Profile is configured with capability l3control. Also configure the corresponding VLAN as system VLAN in this port-profile and uplink port-profiles that are configured to carry the VLAN to ensure no traffic loss.
 N1Kv(config-port-prof)# state enabled
 N1Kv(config-port-prof)#




Now, to finish this step, we need to move a vmnic into the ethernet port profile and move one of our vmkernel interfaces into the vethernet port profile.
Right-click the vDS and click "Add Host":



Select the ESXi host and the vmnic to move. I pick one of the two network adapters on vSwitch0 so I do not lose connectivity to ESXi; besides, my iSCSI is bound to vmnic0, so I cannot move that one right now. On the right, select the MGMT-Uplink port profile in the drop-down list and click [Next]:



On the next screen, select the vmkernel port to migrate to the 1000v. I am going to use vmk0 (Management); in the drop-down list I select the vethernet port profile "VLAN101", then click [Next]:



Don't worry about migrating virtual machines yet; click [Next].
The next screen shows a visual representation of the vDS. Click [Finish].
Switch back to the 1000v terminal and you should see:

 N1Kv(config-port-prof)# 2013 Aug 31 07:19:28 N1Kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esx2 detected as module 3
 2013 Aug 31 07:19:28 N1Kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online


When vCenter has finished its work, let's check in the 1000v console:

 N1Kv(config-port-prof)# sh mod
 Mod  Ports  Module-Type                       Model               Status
 ---  -----  --------------------------------  ------------------  ------------
 1    0      Virtual Supervisor Module         Nexus1000V          active *
 2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
 3    248    Virtual Ethernet Module           NA                  ok

 Mod  Sw                  Hw
 ---  ------------------  ------------------------------------------------
 1    4.2(1)SV2(1.1a)     0.0
 2    4.2(1)SV2(1.1a)     0.0
 3    4.2(1)SV2(1.1a)     VMware ESXi 5.1.0 Releasebuild-799733 (3.1)

 Mod  MAC-Address(es)                         Serial-Num
 ---  --------------------------------------  ----------
 1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
 2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
 3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA

 Mod  Server-IP        Server-UUID                           Server-Name
 ---  ---------------  ------------------------------------  --------------------
 1    10.1.1.10        NA                                    NA
 2    10.1.1.10        NA                                    NA
 3    10.1.1.52        564d33c8-ba44-2cce-c463-65954956018f  10.1.1.52

 * this terminal session
 N1Kv(config-port-prof)#


Done. Module 3 is online; step 4 is complete.
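If you want a host-side sanity check as well, standard esxcli can confirm that vmk0 now lives on the 1000v DVS instead of vSwitch0, and the vemcmd utility that ships with the VEM shows the VEM's view of its ports (outputs omitted here):

 ~ # esxcli network ip interface list
 ~ # vemcmd show port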

Step 5.
Now let's create our VM VLAN, the ethernet port profile for the VM uplink, and the vethernet port profile for our VMs:

 N1Kv(config-port-prof)# vlan 102
 N1Kv(config-vlan)# name SERVERS
 N1Kv(config-vlan)# port-profile type ethernet VM-Uplink
 N1Kv(config-port-prof)# vmware port-group
 N1Kv(config-port-prof)# switchport mode access
 N1Kv(config-port-prof)# switchport access vlan 102
 N1Kv(config-port-prof)# no shutdown
 N1Kv(config-port-prof)# state enabled
 N1Kv(config-port-prof)#
 N1Kv(config-port-prof)# port-profile type vethernet VLAN102
 N1Kv(config-port-prof)# vmware port-group
 N1Kv(config-port-prof)# switchport mode access
 N1Kv(config-port-prof)# switchport access vlan 102
 N1Kv(config-port-prof)# no shutdown
 N1Kv(config-port-prof)# state enabled
 N1Kv(config-port-prof)#




Next we need to move the physical NICs into the VM-Uplink ethernet port profile. To make the change as painless as possible, I will move one NIC, migrate the virtual machines, then move the other NIC.
Since the ESXi host has already been added to the vDS, we right-click the N1Kv switch and choose "Manage Hosts" instead of adding hosts.
Select one of the vmnics on vSwitch1, select the VM-Uplink port group, and click [Next]:



Click [Next] two more times, then [Finish].

Now we are ready to migrate the VMs. I am going to start a continuous ping to my Win2008R2 VM.
Press CTRL+SHIFT+H to return to Home -> Inventory -> Hosts & Clusters.
I right-click my VM and go to "Edit Settings", select the VLAN102 vethernet port profile network from the drop-down list, and click [OK].



 C:\Users\acruz>ping -t 10.1.2.21

 Pinging 10.1.2.21 with 32 bytes of data:
 Reply from 10.1.2.21: bytes=32 time=20ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=18ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=16ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=13ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=13ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=17ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=123ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=11ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=11ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=19ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=19ms TTL=127
 Reply from 10.1.2.21: bytes=32 time=17ms TTL=127

 Ping statistics for 10.1.2.21:
     Packets: Sent = 16, Received = 16, Lost = 0 (0% loss),
 Approximate round trip times in milli-seconds:
     Minimum = 11ms, Maximum = 123ms, Average = 33ms

 C:\Users\acruz>


We see one slow reply, but nothing was lost.
I do the same for my vCenter appliance and any other virtual machines until my standard vSwitch1 is empty:



Our final configuration step is to move vmnic2 into the 1000v VM-Uplink port profile, but before doing that, let's configure the VM-Uplink port profile for load balancing; otherwise we will run into problems.
I do not want any special configuration on my upstream switch (LACP), so I will use mac-pinning.

 N1Kv(config-port-prof)# port-profile VM-Uplink
 N1Kv(config-port-prof)# channel-group auto mode on mac-pinning
 N1Kv(config-port-prof)#
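To verify the pinning takes effect, a couple of VSM-side commands can help: "show port-channel summary" lists the automatically created channel, and, to the best of my knowledge, "module vem <n> execute" passes a vemcmd straight through to the host (outputs omitted):

 N1Kv# show port-channel summary
 N1Kv# module vem 3 execute vemcmd show port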


For demonstration purposes, I am going to move vmnic2 a different way. Instead of right-clicking the vDS and choosing Manage Hosts, let's click the vSphere Distributed Switch view.
We see a diagram of our vDS. In the upper right corner, click "Manage Physical Adapters...".



Scroll down to the VM-Uplink port group and click "<Click to add NIC>".



Select the physical adapter you want to add, and click [OK].



Click [Yes] to remove vmnic2 from vSwitch1 and attach it to the N1Kv.
Click [OK], and after a moment the vmnic is added:



As a final step, I click on the standard vSwitch view and remove vSwitch1.
A little more useful information: in the N1Kv console you can run "show interface status", just as on a normal switch, and see all the 1000v ports.
You can also run "show interface virtual" to see all the veth ports and which hosts they live on.
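For quick copy-paste, both from the VSM prompt:

 N1Kv# show interface status
 N1Kv# show interface virtual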

That's all. Enjoy getting to know the Nexus 1000v.

Source: https://habr.com/ru/post/363233/

