Moving VMs to a different vCenter

I had to move a number of clusters into a different vCenter and didn't want to deal with manually moving VMs back into their correct folders. In my case the folder structures already matched in both vCenters, so I didn't have to worry about recreating them on the target. All I needed to do was record each VM's current folder and move it to the matching folder in the new vCenter.

First, I run this script against the source cluster in the source vCenter. It generates a CSV file containing each VM's name and folder name.

$VMCollection = @()
Connect-VIServer "Source-vCenter
$CLUSTERNAME = "MySourceCluster"
 
$vms = Get-Cluster $CLUSTERNAME | Get-VM
foreach ( $vm in $vms )
{
	$Details = New-Object PSObject
	$Details | Add-Member -Name Name -Value $vm.Name -membertype NoteProperty 
	$Details | Add-Member -Name Folder -Value $vm.Folder.Name -membertype NoteProperty
	$VMCollection += $Details
}
 
$VMCollection
$VMCollection | Export-CSV "folders.csv"

Once the first script is run, I disconnect each host from the old vCenter and add it into a corresponding cluster in the new vCenter; a rough sketch of that step follows.
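
The host move in PowerCLI looks roughly like this (the host name, cluster name, and credentials are placeholders, and the same steps can be done in the vSphere client):

# On the source vCenter: disconnect the host and remove it from the inventory (its VMs stay registered on the host)
Connect-VIServer "Source-vCenter"
Set-VMHost -VMHost "esxhost01.foo.com" -State Disconnected -Confirm:$false
Remove-VMHost -VMHost "esxhost01.foo.com" -Confirm:$false

# On the destination vCenter: add the host to the matching cluster
Connect-VIServer "Dest-vCenter"
Add-VMHost -Name "esxhost01.foo.com" -Location (Get-Cluster "MyDestCluster") -User root -Password "password" -Force

With the hosts moved, I run this against the new vCenter to put the VMs back into their original folders: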

Connect-VIServer "Dest-vCenter"
$vmlist = Import-CSV "folders.csv"
 
foreach ( $vm in $vmlist )
{
	$vm.Name
	$vm.Folder
	Move-VM -VM $vm.Name -Destination $vm.Folder
}

Moving PVS VMs from e1000 to VMXNET3 network adapter

A client needed to remove the e1000 NIC from all VMs in a PVS pool and replace it with the VMXNET3 adapter. PVS VMs are registered by MAC address, so replacing the NIC means a new MAC, and PVS has to be updated before the VM will boot.

I needed a script to remove the old e1000 NIC, add a new VMXNET3 NIC, and register the new NIC's MAC with PVS. I knew I could easily accomplish the VM changes with PowerCLI, but I didn't know what options there were on the Citrix side. I found what I needed in McliPSSnapIn, a PowerShell snap-in installed on all PVS servers. The snap-in gives you PowerShell control over just about anything you need to do on a PVS server.
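
As a quick sanity check once the snap-in is registered (registration is covered below), you can load it and list the devices PVS already knows about:

Add-PSSnapin McliPSSnapIn
Mcli-Get Device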

I didn't want to install PowerCLI on the production PVS servers, and I didn't want to install PVS somewhere else or try manually copying files over. I decided I needed one script to swap out the NICs and dump a list of VMs and their new MAC addresses to a text file, and a second script to read that text file and make the PVS changes.

First, the PowerCLI script. We put the desktop pool into maintenance mode with all desktops shut down. It takes about 10 seconds per VM to execute this script.

Param(
	[switch] $WhatIf,
	[switch] $IgnoreErrors,
	[ValidateSet("e1000","vmxnet3")]
	[string] $NICToReplace = "e1000"
)

# vCenter folder containing the VMs to update
$FOLDER_NAME = "YourFolder"

# vCenter Name
$VCENTER_NAME = "YourvCenter"

#The portgroup that the replacement NIC will be connected to
$VLAN_NAME = "VLAN10"

#If you want all VMs in $FOLDER_NAME, leave $VMFilter empty. Otherwise, set it to a pipe-delimited list of VM names
$VMFilter = ""
#$VMFilter = "DESKTOP001|DESKTOP002"

$LOG_FILE_NAME = "debug.log"

Connect-VIServer $VCENTER_NAME

$NICToSet = "e1000"

if ( $NICToReplace -eq "e1000" )
{
	$NICToSet = "vmxnet3"
}
elseif ( $NICToReplace -eq "vmxnet3" )
{
	$NICToSet = "e1000"
}


function LogThis
{
	Param(
		[string] $LogText,
		[string] $color = "Gray"
	)
	Process
	{
		Write-Host -ForegroundColor $color $LogText
		Add-Content -Path $LOG_FILE_NAME $LogText
	}
}

if ( Test-Path $LOG_FILE_NAME )
{
    Remove-Item $LOG_FILE_NAME
}

$errStatus = $false
$warnStatus = $false
$msg = ""

if ( $VMFilter.Length -eq 0 )
{
	$vms = Get-Folder $FOLDER_NAME | Get-VM
}
else
{
	$vms = Get-Folder $FOLDER_NAME | Get-VM | Where{ $_.Name -match $VMFilter }
}

foreach ($vm in $vms)
{
	$vm.Name
	$msg = ""


	if ( $vm.NetworkAdapters[0] -eq $null )
	{
		$errStatus = $true
		$msg = "No NIC found on " + $vm.Name
		LogThis $msg "Red"

	}
	else
	{
		if ( ($vm.NetworkAdapters | Measure-Object).Count -gt 1 )
		{
			$errStatus = $true
			$msg = "Multiple NICs found on " + $vm.Name
			LogThis $msg "Red"

		}
		else
		{
			if ( $vm.NetworkAdapters[0].type -ne $NICToReplace )
			{
				$warnStatus = $true
				$msg = "NIC is not " + $NICToReplace + ", found" + $vm.NetworkAdapters[0].type + " on " + $vm.Name
				LogThis $msg "Yellow"				
			}

				LogThis $vm.Name,$vm.NetworkAdapters[0].MacAddress

		}

	}



}

if ( $errStatus -eq $true -and $IgnoreErrors -ne $true )
{
	LogThis "Errors found, please correct and rerun the script." "Red"
 
}
else
{
	if ( $warnStatus -eq $true )
	{
		LogThis "Warnings were found, continuing." "Yellow"
	}
	foreach ( $vm in $vms )
	{
		if ( $WhatIf -eq $true )
		{
			$msg = "Whatif switch enabled, would have added " + $NICToSet + " NIC to " + $vm.Name
			LogThis $msg
		}
		else
		{
			$vm.NetworkAdapters[0] | Remove-NetworkAdapter -confirm:$false
			$vm | New-NetworkAdapter -NetworkName $VLAN_NAME -StartConnected -Type $NICToSet -confirm:$false
		}
	}

	# Re-read the VMs so the log reflects the new MAC addresses
	if ( $VMFilter.Length -eq 0 )
	{
		$vms = Get-Folder $FOLDER_NAME | Get-VM
	}
	else
	{
		$vms = Get-Folder $FOLDER_NAME | Get-VM | Where{ $_.Name -match $VMFilter }
	}

	LogThis("Replaced MAC addresses:")
	foreach ( $vm in $vms )
	{
		LogThis $vm.Name,$vm.NetworkAdapters[0].MacAddress
	}
	
	
}

The script offers a -WhatIf switch so you can run it in test mode without actually replacing the NICs. It writes all of its output to $LOG_FILE_NAME: first each VM with its old MAC address, then the replaced MACs. The output looks something like this:
VD0001 00:50:56:90:00:0a
VD0002 00:50:56:90:00:0b
VD0003 00:50:56:90:00:0c
VD0004 00:50:56:b8:00:0d
VD0005 00:50:56:b8:00:0e
Replaced MAC addresses:
VD0001 00:50:56:90:57:1b
VD0002 00:50:56:90:57:1c
VD0003 00:50:56:90:57:1d
VD0004 00:50:56:90:57:1e
VD0005 00:50:56:90:57:1f

Scan the log file for any problems in the top section. The data after "Replaced MAC addresses:" is what the PVS server needs, so copy that section over to the PVS host. Now we need to use McliPSSnapIn, but first we have to register the DLL. I followed this Citrix blog for the syntax:
"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\installutil.exe" "C:\Program Files\Citrix\Provisioning Services Console\McliPSSnapIn.dll"
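
To confirm the snap-in registered, you can list the machine's registered snap-ins (the exact snap-in name can vary slightly by PVS version):

Get-PSSnapin -Registered | Where-Object { $_.Name -match "mcli" }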

I copied the VM names and new MAC addresses to a text file, vmlist.txt, and put it on my PVS server in the same folder as the following PowerShell script. It runs very quickly; even updating hundreds of VMs takes only a few seconds.

Add-PSSnapIn mclipssnapin
$vmlist = get-content "vmlist.txt"
foreach ($row in $vmlist)
{
	$vmname=$row.Split(" ")[0]
	$macaddress=$row.Split(" ")[1]
	$vmname
	$macaddress
	Mcli-Set Device -p devicename=$vmname -r devicemac=$macaddress
}

Now, replace the PVS pool’s image with one that is prepared for a VMXNET3 adapter and boot the pool. Migration complete!

Mass update VM description field

I had a need to update the description fields on many of my VMs, and the vSphere client doesn't exactly lend itself to doing this quickly. I decided to use PowerCLI to dump a CSV of VM names and description fields, update the CSV, and then import the new descriptions back into the VMs.

First, dump a CSV of the VMs and descriptions in a particular cluster:

Connect-VIServer "myserver.foo.com"
Get-Cluster "mycluster" | Get-VM | Select-object Name, Description | Export-CSV "myvms.csv"

Then, after updating the CSV with new description information, save the descriptions back into the VMs.

Connect-VIServer "myserver.foo.com"
$csv=Import-Csv "myvms.csv"
$csv | % { Set-VM $_.Name -Description $_.Description -Confirm:$false }

vSphere Datastore Last Updated timestamp – Take 2

I referenced this in an earlier post, but we continue to have datastore alarm problems on hosts running 4.0U2 connected to a 4.1 vCenter. In some cases the datastore's own timestamp doesn't update, so it's not just the alarm that misbehaves. As a stopgap, we scheduled a little PowerCLI script to run automatically and refresh all of the datastores. We then accelerated our upgrade plans to eliminate the problem from our primary datacenter, so it now only exists in our DR site: not critical anymore, just annoying.

if ( (Get-PSSnapin -Name VMware.VimAutomation.Core -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PsSnapin VMware.VimAutomation.Core
}
 
Connect-VIServer yourserver.foo.com
$ds = Get-Datastore
foreach ( $dst in $ds )
{
   $dsv = $dst | Get-View
   Write-Host "Refreshing "$dsv.Name   
   $dsv.RefreshDatastore()
}
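
To run it automatically, a plain Windows scheduled task along these lines works (the script path and schedule are just examples, and the account running the task needs rights to connect to vCenter):

schtasks /Create /TN "Refresh-Datastores" /SC HOURLY /TR "powershell.exe -NoProfile -File C:\Scripts\Refresh-Datastores.ps1"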

Guest NICs disconnected after upgrade

We are upgrading our infrastructure to ESXi 4.1 and had an unexpected result in the Development cluster, where multiple VMs were suddenly disconnected after vMotion. It sounded a lot like a problem I had seen before: if a vSwitch is configured with too few available ports, VMs that vMotion over are unable to connect to it. You generally avoid this with host profiles, but it's possible a host in the Dev cluster fell out of sync. In any event, the server being upgraded this evening had been rebuilt and it wasn't worth trying to figure out what the configuration might have been. I needed to go through, find all VMs that should have been connected but weren't, and reconnect them. I decided that I needed:

  • VMs that were currently Powered On – obviously as Powered Off VMs are all disconnected
  • VMs with NICs currently set to “Connect at Power On” so I could avoid connecting something that an admin had intentionally left disconnected
  • VMs with NICs currently not connected

Note that this script will change network settings and REBOOT VMs if you execute it. I watched the script while it ran: it pings the guest DNS name first to show whether the IP is already on the network, connects the NIC, then pings again to confirm the guest is back on the network. I figured I could Ctrl-C if something looked wrong. I rebooted all of the guests to avoid any failed service or failed task problems that might have occurred while they were disconnected.

$vms = Get-Cluster "Development" | Get-VM | Where { $_.PowerState -eq "PoweredOn" } | Sort-Object Name
foreach ($vm in $vms)
{
	# Only NICs that are set to connect at power on but are currently disconnected
	$nics = $vm | Get-NetworkAdapter | Where { $_.ConnectionState.Connected -eq $false -and $_.ConnectionState.StartConnected -eq $true }
	if ($nics -ne $null)
	{
		foreach ( $nic in $nics )
		{
			Write-Host $vm.Name
			Write-Host $nic
			# Ping before connecting to make sure the IP isn't already live on the network
			ping $vm.Guest.HostName -n 5
			$nic | Set-NetworkAdapter -Connected $true -Confirm:$false
		}

		# Ping again to confirm the guest is back, then reboot it
		ping $vm.Guest.HostName -n 5
		$vm | Restart-VMGuest
	}
}

PowerCLI proxy problems

Today, I couldn't connect to my vCenter server using Connect-VIServer. It failed with "Could not connect using the requested protocol."

There have been some changes to the corporate proxy servers over the last couple of weeks, and they're causing connection problems.

I bypassed the proxy with this:

Set-VIToolkitConfiguration -ProxyPolicy NoProxy -confirm:$false
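
Set-VIToolkitConfiguration comes from the older VI Toolkit naming; on current PowerCLI releases the equivalent cmdlet is Set-PowerCLIConfiguration, so the same fix looks like this:

Set-PowerCLIConfiguration -ProxyPolicy NoProxy -Confirm:$false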

VMware Automatic HBA rescan

When you add a new datastore, vCenter initiates an automatic rescan on the ESX hosts in the cluster. This can create a so-called "rescan storm," with all of your hosts pounding away at the SAN at once, which can cause serious performance problems for the duration of the storm. Even if you don't have enough hosts in a cluster to see a real problem, it's pretty inconvenient to have to wait for each rescan when you're trying to add 10 datastores.

To disable the automatic rescan, open your vSphere client.

1. Administration->vCenter Server
2. Settings->Advanced Settings
3. Look for "config.vpxd.filter.hostRescanFilter". If it's not there, add it and set it to false.

If it's currently set to true, you have to edit vpxd.cfg. Open C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg, find the hostRescanFilter entry (it looks like <hostRescanFilter>true</hostRescanFilter> in the vpxd filter section), change it to false, and restart the vCenter service.
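
If you have a recent enough PowerCLI build, you can also check or create the advanced setting from the command line instead of the GUI. A sketch, assuming the Get-AdvancedSetting and New-AdvancedSetting cmdlets are available in your version:

# Check the current value (returns nothing if the setting doesn't exist yet)
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.filter.hostRescanFilter"
# Create the setting with a value of false
New-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.filter.hostRescanFilter" -Value $false -Confirm:$false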

Doing this creates a new problem – you now have to manually force a rescan on every host.

Here is a PowerCLI one-liner to automate this process. It performs the rescan on each host in the cluster, one host at a time.

get-cluster "Production" | Get-VMHost | %{ Get-VMHostStorage $_ -RescanVMFS }
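
If you also want the hosts to pick up new LUNs and not just VMFS changes, Get-VMHostStorage also takes a -RescanAllHba switch:

get-cluster "Production" | Get-VMHost | %{ Get-VMHostStorage $_ -RescanAllHba -RescanVMFS }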

PowerCLI script to export all VMs

Here is a PowerCLI script I hacked together to list out all of our VMs by cluster.

# List out all VMs by cluster, export to Excel
 
$VMCollection = @()
$ClusterName = ""
 
# Path to save Excel output
$savepath = "D:\My Documents\Scripts\powershell\VMware\VMListByCluster\"
# Enter your vCenter server here
$VIServer = "MYSERVER"
 
function GetVmDetails( $Details, $ClusterName )
{
	# Note: reads $VM from the caller's scope
	$Details | Add-Member -Name Name -Value $VM.Name -MemberType NoteProperty
	$Details | Add-Member -Name DNSName -Value $VM.Guest.get_HostName() -MemberType NoteProperty
	$Details | Add-Member -Name Description -Value $VM.Description -MemberType NoteProperty
	$Details | Add-Member -Name OperatingSystem -Value $VM.Guest.get_OSFullName() -MemberType NoteProperty
	$Details | Add-Member -Name Cluster -Value $ClusterName -MemberType NoteProperty

	if ( $Details.DNSName.Length -eq 0 )
	{
		$Details.DNSName = " "
	}
}
 
Write-Host "Connecting to Virtual Center..."
Connect-VIServer $VIServer
$AllClusters = Get-Cluster | Sort-Object "Name"
ForEach ( $Cluster in $AllClusters )
{
	$ClusterName = $Cluster.Name
	$AllVMs = Get-Cluster $ClusterName | Get-VM | Sort-Object Name
	ForEach ( $VM in $AllVMs )
	{
		Write-Host $VM.Name
		$Details = New-Object PSObject
		GetVmDetails $Details $ClusterName
		$VMCollection += $Details
	}
}
 
#$VMCollection
 
Write-Host "Exporting to Excel..."
$cnt = ($VMCollection | Measure-Object).Count
 
$Excel = New-Object -Com Excel.Application
#$Excel.Visible = $True
$Workbook = $Excel.Workbooks.Add()
 
$Sheet = $Workbook.WorkSheets.Item(1)
 
$Sheet.Cells.Item(1,1) = "VM Name"
$Sheet.Cells.Item(1,1).Font.Bold = $True
$Sheet.Range("A1").ColumnWidth = 24
 
$Sheet.Cells.Item(1,2) = "DNS Name"
$Sheet.Cells.Item(1,2).Font.Bold = $True
$Sheet.Range("B1").ColumnWidth = 35
 
$Sheet.Cells.Item(1,3) = "Description"
$Sheet.Cells.Item(1,3).Font.Bold = $True
$Sheet.Range("C1").ColumnWidth = 47
 
$Sheet.Cells.Item(1,4) = "OS"
$Sheet.Cells.Item(1,4).Font.Bold = $True
$Sheet.Range("D1").ColumnWidth = 54
 
$Sheet.Cells.Item(1,5) = "Cluster"
$Sheet.Cells.Item(1,5).Font.Bold = $True
$Sheet.Range("E1").ColumnWidth = 16
 
#Header Row
$Sheet.Range("A1").RowHeight = 50
 
$intRow = 2
ForEach ( $objVM in $VMCollection )
{
	$Sheet.Cells.Item($intRow,1) = $objVM.Name
	$Sheet.Cells.Item($intRow,2) = $objVM.DNSName
	$Sheet.Cells.Item($intRow,3) = $objVM.Description
	$Sheet.Cells.Item($intRow,4) = $objVM.OperatingSystem
	$Sheet.Cells.Item($intRow,5) = $objVM.Cluster
	$rng = "A" + $intRow.ToString()
	$Sheet.Range($rng).RowHeight = 110
	Write-Host $objVM.Name
	$msg = ($intRow - 1).ToString() + " of " + $cnt.ToString()
	Write-Host $msg
	$intRow += 1
}
 
$fname = $savepath + "vms.xlsx"
$Excel.DisplayAlerts = $False
$Sheet.SaveAs($fname)
$Excel.DisplayAlerts = $True
$Workbook.Close()
# Quit Excel so a hidden EXCEL.EXE isn't left running, then release the COM object
$Excel.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($Excel) | Out-Null

Quick and dirty PowerCLI script to list out all VM MAC addresses

I needed a quick list of all MAC addresses in our VMware environment and threw together this PowerCLI script.

$LOG_FILE = "C:\machines.txt"
$VI_SERVER = "your-virtual-center-server-here"
Connect-VIServer $VI_SERVER
get-vm | %{ $_ | select Name | out-file -filepath $LOG_FILE -append; $_ | Get-NetworkAdapter | out-file -filepath $LOG_FILE -append }