
Convert dvSwitch to vSwitch and back again



Creating a Distributed Virtual Switch and adding a host is fairly straightforward. I have found, however, that once a host is added to a dvSwitch ( or whichever way you wrap your head around the concept ), converting it back can be more difficult depending on how many virtual adapters are involved. This post documents the steps to add a host to a Distributed Virtual Switch, migrate three VMkernel adapters to it, and then migrate them back to the original virtual switch.
CONVERT TO dvSwitch

(Fig 1) This screen shows the ESXi-1 host using a vSwitch configured with temporary VMkernel ports. Other than being set for DHCP they serve no real purpose; they exist merely to allow for this example.

(Fig 2) As you can see in this screen, I already have a dvSwitch configured with various dvPortGroups, and the ESXi-2 server is already a participating host. I am going to add the ESXi-1 host, and when I do I'm going to migrate the ports to the dvPG-VirtualMachines port group. Take note of this section.
At this point I'm in the Inventory-Networking window, and I'm going to click the 'Add Host' link in the upper right-hand corner.

(Fig 3) The Add Host wizard requires a host and a desired NIC selection. For this I'm going to select vmnic1 and click Next.
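If you prefer to check which physical NICs are available before choosing an uplink, the host's CLI can show the same information as the wizard. This is a sketch assuming SSH or ESXi Shell access; exact output columns vary by ESXi version.

```shell
# List the physical NICs on the host, with link state and speed, to
# confirm which vmnic is free to become the dvSwitch uplink.
esxcli network nic list

# Show the existing standard vSwitches and their uplinks, to verify
# that vmnic1 is not already claimed by a standard vSwitch.
esxcli network vswitch standard list
```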

(Fig 4) Now we’ll migrate the virtual adapters from the Virtual Switch to the dvSwitch. Select the virtual adapters to migrate.

(Fig 5) Assign the migrating adapters to a dv Port Group.

(Fig 6) Here I simply select the dv Port Group identified as ‘dvPG-VirtualMachines’.

(Fig 7) Now you can see that vmk3, vmk4, and vmk5 have been migrated into the dvPG-VirtualMachines port group on the distributed switch.
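To confirm the result from the host side, you can list the VMkernel interfaces; vmk3 through vmk5 should now report the distributed port group rather than the old standard one. This assumes a recent `esxcli` namespace; older hosts expose the same information through `esxcfg-vmknic -l` instead.

```shell
# List all VMkernel interfaces along with the port group or dvPort
# each one is attached to.
esxcli network ip interface list

# Show the IPv4 configuration (DHCP in this example) for each vmk.
esxcli network ip interface ipv4 get
```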


CONVERT TO vSwitch


So adding a host to a dvSwitch is pretty easy. If there were thousands of virtual adapters that needed to be migrated, the convenience of this interface would quickly become obvious. Migrating ports from a dvSwitch back to a standard vSwitch is just as simple, but it's not as automated and can be more time consuming. I haven't figured out a way to migrate all virtual adapters back to the vSwitch at the same time, the way we did when we migrated them to the dvSwitch.
If someone knows a better or more efficient procedure please let me know.
(Fig 8) To begin the process of migrating the host back to a standard vSwitch, we have to move all the adapters off the dvSwitch. To start, click the Manage Virtual Adapters link in the upper right.

(Fig 9) Each adapter must be selected individually; then click 'Migrate to Virtual Switch'.
(Fig 10) Select which vSwitch to migrate the virtual adapter to.

There must be a way to migrate all the adapters to a virtual switch at the same time, but I haven't been able to figure out how. In this example I had to repeat the process three times to get all the adapters migrated. In a heavy production environment there may be hundreds or thousands of adapters, and I could see this taking a really long time. On a large-scale system one would obviously drop to the command line or a script to get this done.
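One way to script the bulk move is a sketch like the following, which removes each vmk and re-creates it on a standard-switch port group. The port group name `tempPG` on vSwitch0 is an assumption for this example, and so is the adapter list. Be warned that removing a vmk drops its IP configuration, so settings must be reapplied afterward (DHCP here, matching the temporary adapters above), and you must never remove the vmk that carries the management connection you are working over.

```shell
# Hypothetical sketch: move vmk3-vmk5 from the dvSwitch back to a
# standard-switch port group in one pass. Assumes the target port group
# "tempPG" already exists on vSwitch0, and that none of these vmks is
# the management interface of the session you are using.
for vmk in vmk3 vmk4 vmk5; do
    # Delete the VMkernel interface from its current (distributed) port.
    esxcli network ip interface remove --interface-name=$vmk
    # Re-create it on the standard-switch port group.
    esxcli network ip interface add --interface-name=$vmk --portgroup-name=tempPG
    # IP settings are lost on remove; restore DHCP for this example.
    esxcli network ip interface ipv4 set --interface-name=$vmk --type=dhcp
done
```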
(Fig 11) Once all the virtual adapters have been migrated back to a virtual standard switch, the last thing to do is remove the host from the dvSwitch. For this, go back to the Inventory-Networking screen ( Ctrl+Shift+N ) and select the Hosts tab.

(Fig 12) To remove the host, right-click and select 'Remove from vNetwork Distributed Switch'. In this example I want to remove the ESXi-1 host, so I followed this step for that host.

(Fig 13) Going back to the Inventory-Hosts and Clusters screen and the Configuration tab, you can now see the dvSwitch is no longer associated with the ESXi-1 host.
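Removing the host from the dvSwitch is a vCenter operation, but you can double-check from the host itself that no distributed-switch proxy remains. This assumes an `esxcli` version that exposes the dvs namespace; on success the list should come back empty.

```shell
# Confirm from the ESXi host that it no longer participates in any
# distributed switch (an empty list means the removal succeeded).
esxcli network vswitch dvs vmware list
```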

And that's how to add a host to a Distributed Virtual Switch, migrate the virtual adapters, and then move everything back to the originating virtual standard switch.

source: chriswhitingsblog.wordpress.com/2010/12/07/215/
