
Moving a managed ESXi/ESX host from one vCenter Server to another

Moving a managed ESXi/ESX host from one VirtualCenter Server/vCenter Server to another VirtualCenter Server/vCenter Server (1004775)


Purpose

This article provides instructions to move an ESXi/ESX host from one VMware VirtualCenter Server/vCenter Server to another VirtualCenter Server/vCenter Server when using standard virtual switches on all ESXi/ESX hosts.

When moving a managed ESXi/ESX host from one VirtualCenter/vCenter Server to another, VMware recommends that you move the virtual machines and the management VMkernel network from the Virtual Distributed Switch (vDS) back to a standard Virtual Switch (vSwitch), perform the migration, and then recreate the vDS. If you are using vCenter Server 5.1 or later, you can instead use the export/import functionality of the vDS to move your existing vDS configuration. For more information, see Exporting/importing/restoring Distributed Switch configs using vSphere Web client (2034602).
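On vCenter Server 5.1 or later, the vDS export/import can also be scripted with PowerCLI. The following is a minimal sketch, not a definitive procedure: server names, switch name, datacenter name, and the backup path are placeholders for your environment, and Export-VDSwitch/New-VDSwitch -BackupPath require PowerCLI 5.1 or later.

```powershell
# Sketch only: server names, object names, and paths are placeholders.
# Export the vDS configuration from the source vCenter Server.
Connect-VIServer -Server old-vcenter.example.com
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name "dvSwitch01") -Destination "C:\backup\dvSwitch01.zip"
Disconnect-VIServer -Server old-vcenter.example.com -Confirm:$false

# Recreate the vDS on the destination vCenter Server from the backup file.
Connect-VIServer -Server new-vcenter.example.com
New-VDSwitch -Name "dvSwitch01" -Location (Get-Datacenter -Name "DC01") -BackupPath "C:\backup\dvSwitch01.zip"
```

Note that exporting the vDS configuration does not move the hosts themselves; the remove/re-add procedure below is still required.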


Resolution

To move an ESXi/ESX host from one VirtualCenter Server/vCenter Server to another, remove the host from the original VirtualCenter Server/vCenter Server, then add the host to the new one. This operation does not affect the state of any virtual machines running on the ESXi/ESX host; however, the historical performance data for both the host and its virtual machines is purged.



Removing the ESXi/ESX host from VirtualCenter Server/vCenter Server

To remove the ESXi/ESX host from VirtualCenter Server/vCenter Server:

  1. If the managed host is in a cluster, right-click the cluster, set the Distributed Resource Scheduler (DRS) automation level to Manual, and disable VMware High Availability (HA) by deselecting Configure HA.
  2. Click OK and wait for the reconfiguration to complete.
  3. Click Inventory in the navigation bar, expand the inventory as needed, and click the appropriate managed host.
  4. Right-click the managed host icon in the inventory panel and choose Disconnect (wait for the task to complete).
  5. Right-click the managed host icon in the inventory panel and choose Remove.
  6. Click Yes to confirm that you want to remove the managed host and all its associated virtual machines.
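The removal steps above can be sketched with PowerCLI. This is a non-authoritative example under the assumption of placeholder names (cluster, host, and vCenter Server names must be replaced with your own):

```powershell
# Sketch only: all names are placeholders for your environment.
Connect-VIServer -Server old-vcenter.example.com

# Steps 1-2: set DRS to manual and disable HA on the cluster.
Set-Cluster -Cluster "Cluster01" -DrsAutomationLevel Manual -HAEnabled:$false -Confirm:$false

# Step 4: disconnect the managed host (the cmdlet waits for the task).
Set-VMHost -VMHost "esx01.example.com" -State Disconnected

# Steps 5-6: remove the host and its virtual machine inventory entries
# from vCenter Server. The virtual machines keep running on the host.
Remove-VMHost -VMHost "esx01.example.com" -Confirm:$false

Disconnect-VIServer -Server old-vcenter.example.com -Confirm:$false
```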




Adding the ESXi/ESX host to a new VirtualCenter Server/vCenter Server

To add the ESXi/ESX host to a new VirtualCenter Server/vCenter Server:

  1. Connect VMware Infrastructure Client/vSphere Client/vSphere Web Client to the new VirtualCenter Server/vCenter Server.
  2. Click Inventory in the navigation bar.
  3. Expand the inventory as needed, and click the appropriate datacenter or cluster.
  4. Click File > New > Add Host.
  5. On the first page of the Add Host wizard, enter the name or IP address of the managed host in the Host name field.
  6. Enter the username and password for a user account that has administrative privileges on the selected managed host.
  7. Click Next.
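The add steps above have a PowerCLI equivalent as well. Again a hedged sketch with placeholder names and credentials; note that -Force suppresses the SSL thumbprint confirmation prompt, so verify the host's thumbprint beforehand if that matters in your environment:

```powershell
# Sketch only: names and credentials are placeholders for your environment.
Connect-VIServer -Server new-vcenter.example.com

# Add the host to an existing cluster (use Get-Datacenter for a
# standalone host location instead of Get-Cluster).
Add-VMHost -Name "esx01.example.com" -Location (Get-Cluster -Name "Cluster01") `
    -User root -Password 'host-root-password' -Force

Disconnect-VIServer -Server new-vcenter.example.com -Confirm:$false
```

After the host is added, re-enable HA and restore the DRS automation level on the destination cluster if you changed them before the move.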




Copied from the VMware site.
