Moving VMs to a different vCenter

I had to move a number of clusters into a different Virtual Center, and I didn’t want to deal with manually moving VMs into their correct folders. In my case I happened to have matching folder structures in both vCenters, so I didn’t have to worry about creating an identical folder structure on the target. All I needed to do was record each VM’s current folder location and move it to the correct folder in the new vCenter.

I first run this script against the source cluster in the source vCenter. It generates a CSV file with the VM name and the VM folder name.

$VMCollection = @()
Connect-VIServer "Source-vCenter"
$CLUSTERNAME = "MySourceCluster"
$vms = Get-Cluster $CLUSTERNAME | Get-VM
foreach ( $vm in $vms ) {
    $Details = New-Object PSObject
    $Details | Add-Member -Name Name -Value $vm.Name -MemberType NoteProperty
    $Details | Add-Member -Name Folder -Value $vm.Folder -MemberType NoteProperty
    $VMCollection += $Details
}
$VMCollection | Export-CSV "folders.csv"
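For reference, the resulting folders.csv looks something like this (the VM and folder names here are made up; the first line is the type header that Export-CSV adds by default):

```
#TYPE System.Management.Automation.PSCustomObject
"Name","Folder"
"app01","Production"
"db01","Production"
"test01","Dev-Test"
```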

Once the first script is run, I disconnect each host from the old vCenter and add it into a corresponding cluster in the new vCenter. I can now run this command against the new vCenter to ensure the VMs go back into their original folders.

Connect-VIServer "Dest-vCenter"
$vmlist = Import-CSV "folders.csv"
foreach ( $vm in $vmlist ) {
    Move-VM -VM $vm.Name -Destination $vm.Folder
}
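One caveat: Move-VM resolves the -Destination by name, so if the same folder name exists in more than one place in the destination vCenter, the VM can land in the wrong folder. Here’s a sketch of a slightly more defensive loop (Get-Folder and its -Type parameter are standard PowerCLI; the warning text is my own):

```powershell
Connect-VIServer "Dest-vCenter"
$vmlist = Import-CSV "folders.csv"
foreach ( $vm in $vmlist ) {
    # Resolve the folder explicitly so an ambiguous name fails loudly
    $folder = @( Get-Folder -Name $vm.Folder -Type VM )
    if ( $folder.Count -ne 1 ) {
        Write-Warning "$($vm.Name): folder '$($vm.Folder)' matched $($folder.Count) folders, skipping"
    }
    else {
        Move-VM -VM $vm.Name -Destination $folder[0]
    }
}
```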

The parent virtual disk has been modified since the child was created

Some VMs in my environment had virtual-mode RDMs on them, along with multiple nested snapshots. Some of the RDMs were subsequently extended at the storage array level, but the storage team didn’t realize there was an active snapshot on the virtual-mode RDMs. This resulted in immediate shutdown of the VMs and a vSphere client error “The parent virtual disk has been modified since the child was created” when attempting to power them back on.

I had done a little bit of work dealing with broken snapshot chains before, but the change in RDM size was outside of my wheelhouse, so we logged a call with VMware support. I learned some very handy debugging techniques from them and thought I’d share that information here. I went back into our test environment and recreated the situation that caused the problem.

In this example screenshot, we have a VM with no snapshot in place, and we run vmkfstools -q -v10 against the VMDK file (-q means query; -v10 sets the verbosity to level 10).

The command opens up the disk, checks for errors, and reports back to you.
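For reference, the invocation looks like this, run from the VM’s directory on the ESXi host (I’m borrowing the disk name from the RDM example later in this post):

```
vmkfstools -q -v10 TEST-RDM_1.vmdk
```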

In the second example, I’ve taken a snapshot of the VM. I’m now passing the snapshot VMDK into the vmkfstools command. You can see the command opening up the snapshot file, then opening up the base disk.

In the third example, I pass it the snapshot VMDK for a virtual-mode RDM on the same VM. It traverses the snapshot chain and also correctly reports that the VMDK is a non-passthrough raw device mapping, which means a virtual-mode RDM.

Part of the problem in our case was that the size of the RDM had increased, but the snapshot still pointed to the old, smaller size. However, even without any changes to the storage, a corrupted snapshot chain can also happen during an out-of-space situation.

I have intentionally introduced a drive geometry mismatch in my test VM below. Note that the value after RW in the snapshot TEST-RDM_1-00003.vmdk is 1 less than the value in the base disk TEST-RDM_1.vmdk.

Now if I run it through the vmkfstools command, it reports the error that we were seeing in the vSphere client in Production when trying to boot the VMs: “The parent virtual disk has been modified since the child was created”. But the debugging mode gives you an additional clue that the vSphere client does not: it says that the capacity of each link is different, and it even gives you the values (23068671 != 23068672).

The fix was to follow the entire chain of snapshots and ensure everything was consistent, starting with the most current snapshot in the chain. Each snapshot’s “parentCID” value must be equal to the “CID” value of its parent disk, and the parent is listed in the “parentFileNameHint”. So TEST-RDM_1-00003.vmdk is looking for a parentCID value of 72861eac, and it expects to see that in the file TEST-RDM_1.vmdk.

If you open up Test-RDM_1.vmdk, you see a CID value of 72861eac – this is correct.  You also see an RW value of 23068672. Since this file is the base RDM, this is the correct value. The value in the snapshot is incorrect, so you have to go back and change it to match.  All snapshots in the chain must match in the same way.
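To make the chain concrete, here is a sketch of the relevant descriptor fields for this example. The CID and RW values are the ones discussed above; the child’s own CID and the extent details are placeholders:

```
# TEST-RDM_1-00003.vmdk (child/snapshot descriptor)
CID=<child's own CID>
parentCID=72861eac                    # must equal the CID of the parent
parentFileNameHint="TEST-RDM_1.vmdk"
RW 23068671 ...                       # wrong - 1 less than the parent's RW

# TEST-RDM_1.vmdk (base disk descriptor)
CID=72861eac
parentCID=ffffffff                    # ffffffff means no parent
RW 23068672 ...                       # the correct capacity
```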

I change the RW value in the snapshot back to match 23068672. My vmkfstools command now succeeds, and I’m also able to delete the snapshot from the vSphere client.


VMware load balancing with Cisco C-series and 1225 VIC card

I recently did a UCS C-series rackmount deployment. The servers came with a 10gbps 1225 VIC card and the core routers were a pair of 4500s in VSS mode.

The 1225 VIC card lets you carve virtual NICs out of your physical NICs. You can put CoS settings directly on the virtual NICs, enabling you to prioritize traffic directly on the physical NIC. For this deployment, I created three virtual NICs for each pNIC: Management, vMotion, and VM traffic. By setting CoS to 6 for management, 5 for VMs, and 4 for vMotion on the vNICs, I ensure that management traffic is never interrupted, and I also guarantee that VM traffic will be prioritized over vMotion. This still allows me to take full advantage of 10Gbps of bandwidth when the VMs are under light load.

Cisco 1225 VIC vNIC

My VCAP5-DTD exam experience

I took the VCAP5-DTD beta exam on January 3rd, 2013. Like many people, I received the welcome news today that I passed the exam.

I’m laughing a little to myself as I write this post because my certification folder contains a log of my studying. I downloaded the beta blueprint on December 17, 2012, but I already had Microsoft exams scheduled for December 28th. I did no studying for this VCAP until the day before the exam, January 2nd, when you can clearly see my feverish morning download activity. I will say, though, that I have several years of View deployments under my belt, so my knowledge on the engineering side was up-to-date and at the front of my mind.

VCAP5-DTD Folder

I downloaded every PDF referenced in the exam blueprint, and I already had most of the product documentation downloaded. I am primarily a delivery engineer, but to be successful on the exam you need to put on your designer’s hat. I tried to keep that in mind as I pored through the PDFs. It does make a difference, because different information will stand out if you actively look for design elements.

My exam was just after lunch and it was well over an hour away, so I left early and brought my Kindle. I continued going through the PDFs until exam time. The sheer volume of information you have to read through makes VMware design exams quite difficult. I suggest reading the answers before you read the question – this helps you identify clues in the question. There are detailed descriptions requiring 6 or more paragraphs of reading just to answer a single multiple choice question.

The GA version of the exam has 115 questions and 6 diagramming scenarios. Keep track of the number of diagramming questions you get so you can budget your time appropriately. You should not spend any more than 15 minutes on a diagram. Keep in mind that 15 * 6 = 90 minutes, leaving you only 105 minutes to answer 109 questions. The pace you have to sustain is mentally exhausting. The beta was even more difficult with 131  questions, plus the expectation to provide comment feedback on the questions.

I found the diagramming questions to be even more involved than the DCD questions. I’d say the tool was a bit better behaved than on the DCD exam, but not by much. It’s easy to get sucked into a design scenario and waste far too much time. Remember that you’re not designing the perfect system; it just has to be good enough to meet the stated requirements.

Is It Time To Remove the VCP Class Requirement – Rebuttal

This post is a rebuttal of @networkingnerd‘s blog post Is It Time To Remove the VCP Class Requirement.

I would like to acknowledge that it’s easy for me to have the perspective I do as a VCP holder since version 3. I’ve already got it, so I naturally want it to remain valuable. The fact that my employer at the time paid for the class has opened up an entire career path for me that would have otherwise been closed. But I believe the VCP cert remains fairly elite specifically because of the course requirement.

First, consider Microsoft’s certifications. As a 15-year veteran of the IT industry, I believe I am qualified to state unequivocally that Microsoft certifications are utterly worthless. This is partially because of the proliferation of braindumps. There is no knowledge requirement whatsoever to sit the Microsoft exams. You don’t even need to look at a Microsoft product to pass a Microsoft test – go memorize a braindump and pass the test. We’ve all encountered paper MCSEs – their existence completely devalues the certification. I consider the MCSE nothing more than a little checkbox on some recruiter’s wish list.

I would go so far as to say that Microsoft’s tests are specifically geared towards memorizers – they actually encourage braindumping by focusing on irrelevant details and not on core skills. Passing a Microsoft exam has everything to do with memorization and almost nothing to do with your skill as a Windows admin.

On the other hand, to sit the VCP exam you have to go through a week of training. At the very least, you’ve touched the software. You installed it. You configured it. You (wait for it)… managed it.  Obviously there are braindumps out there for the VCP exam too, but everybody starts with the same core of knowledge. The VCP exams have improved to a point where they are not memorize-and-regurgitate. A person who has worked with the product actually stands a reasonable chance of passing the exam.

Quoted directly from the blog post:

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built.

There is a tiny window of opportunity after the release of a new vSphere edition to take the exam without a course requirement. Those of us who are able to pass the exam in that small window are the people who do exactly as you say – we are downloading and installing the software in our labs. We are putting in the time to make sure that our knowledge of the newest features is up to par. Many of us participate in alpha and beta programs, spending far more time with the software than a typical certification candidate. Some of us participate in the certification beta program, where we have just a couple of short weeks to study for and book the exam. I’ve put in quite a few late nights prepping for beta exams.

VMware forces us to learn the new features by putting a time limit on the upgrade period. We have a foundation of knowledge that was created by the original class that we took. There isn’t enough time for braindumps to leak out there, and the vast majority of upgraders wouldn’t use one anyhow. VMware can be reasonably certain that a VCP upgrader without the class really is taking the time to learn the new features. @networkingnerd is correct, the value IS in the knowledge, but the focus is ensuring that every VCP candidate starts with the same core of knowledge.

@networkingnerd suggests an alternative lower level certification such as a VCA with a much less expensive course requirement. I think it’s an interesting idea, but I don’t know how you’d put it into practice. I’m not sure what a 1-day class could prepare you for. It’s one thing for experienced vSphere admins to attend a 2-day What’s New class. But what could you really teach and test on? Just installing vSphere? There’s not a whole lot of value for an engineer who can install but not configure.

Again quoting from the article:

Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visibility that having a lot of VCPs can bring.

This suggests that the entry-level certification from the leader in virtualization is somehow not well-known. While I would agree that the VCAP-level certifications do not enjoy the same level of name recognition as the CCNP, I talk to seniors in college who know what the VCP is. There is no lack of awareness of the VCP certification. I also agree that it’s ridiculous to send a VMware admin who has years of experience to the Install Configure Manage class. That’s why the Optimize and Scale and the Fast Track classes exist.

I don’t believe dropping the course requirement would do anything to enhance VMware’s market share. The number of VCP individuals has long since reached a critical mass.  Nobody is going to avoid buying vSphere because of a lack of VCPs qualified to administer the environment. While I agree that Hyper-V poses a credible threat, Microsoft is just now shipping features that vSphere has had for years. Hyper-V will start to capture the SMB market, but it will be a long time before it has the chance to unseat vSphere in the enterprise.

VMware View Composer starts, but does no work

I worked on a client outage over the weekend: Virtual Center and View Composer were down. It started with a disk-full situation on the SQL server hosting the vCenter, Composer, and Events databases. The client was shut down for winter break, so the Composer outage was not noticed for several days. After fixing the SQL Server disk space problem, everything came back up. I was able to restart all services and they appeared to be running. Composer started without issue, but it didn’t respond to any commands – any operations I requested in View Manager were ignored. I didn’t find any obvious errors in the logs.

I ran through the troubleshooting options in KB1030698 without finding any issues. I validated the SDK was responding by going to https://vcenteripaddress/sdk/vimService.wsdl . I couldn’t find any cause for the outage, so I opened up a Sev-1 ticket with VMware Support.

The support tech concluded that a problem with the ADAM database was preventing Composer from doing its job. He had me shut down all but one connection broker, then restart the View services on the remaining broker. At this point, commands issued on the broker were obeyed by Composer. We deleted or refreshed all of the desktops listed under Problem Desktops. Once we were sure that the ADAM database reflected the true state of the environment as reflected in vCenter, we restarted the other brokers. They synced databases and the problem was resolved.

Extending Citrix Cache Drives in vSphere

I have a large client running a Citrix XenDesktop farm on top of vSphere. The environment is using PVS to PXE boot desktops. The VM shells were created with a 2GB cache drive. However, the environment has grown and we needed to extend the drive to 3GB.

PowerShell and PowerCLI to the rescue! First, we need to extend the size of the VMDK from 2 to 3GB. The client wanted me to do this in a controlled manner, so I pointed my script to the AD OU containing computer accounts for a specific pool of desktops. I do realize I could have passed a few more of the variables as parameters.

param(
    [switch] $WhatIf = $true
)

$LOG_FILE_NAME = "output.txt"

function LogThis($buf) {
    Write-Host $buf
    Add-Content -Path $LOG_FILE_NAME $buf
}

if ( Test-Path $LOG_FILE_NAME ) {
    Remove-Item $LOG_FILE_NAME
}

Add-PSSnapin VMware.VimAutomation.Core
Import-Module ActiveDirectory
$computers = Get-ADComputer -Filter * -SearchBase "OU=Some OU2,OU=Some OU,DC=foo,DC=com"
foreach ( $computer in $computers ) {
    LogThis( $computer.Name )
    $vm = Get-VM $computer.Name -ErrorAction SilentlyContinue
    if ( $vm -eq $null ) {
        LogThis( "Could not locate VM in vCenter" )
    }
    else {
        foreach ( $hd in (Get-HardDisk $vm) ) {
            # 2097152KB is 2GB, so anything below 2097153 still has the original 2GB cache drive
            if ( $hd.CapacityKB -lt 2097153 ) {
                if ( $WhatIf -eq $true ) {
                    LogThis("Running in whatif mode - would have extended disk.")
                }
                else {
                    Set-HardDisk -HardDisk $hd -CapacityKB 3145728 -Confirm:$False
                }
            }
            else {
                LogThis("No disk extension required.")
            }
        }
    }
}


Next, I needed a way to expand the partition for Windows. I thought about some kind of script to disconnect the VMDK, mount it on another VM, and extend it that way, but it seemed too destructive. So I looked at diskpart instead. I first thought I would use a GPO to trigger a startup script, but apparently you can’t use those with Citrix PVS – the VM thinks it has the identity of the master on boot, so your WMI filters don’t work.

Instead, I went with remote PowerShell invocation of diskpart.exe.

param(
    [switch] $WhatIf
)

Add-PSSnapin VMware.VimAutomation.Core
Import-Module ActiveDirectory
$computers = Get-ADComputer -Filter * -SearchBase "OU=OU2,OU=OU,DC=foo,DC=com"

$LOG_FILE_NAME = "diskpart_output.txt"

function LogThis($buf) {
    Write-Host $buf
    Add-Content -Path $LOG_FILE_NAME $buf
}

if ( Test-Path $LOG_FILE_NAME ) {
    Remove-Item $LOG_FILE_NAME
}

foreach ( $computer in $computers ) {
    LogThis( $computer.Name )
    if ( $WhatIf -eq $true ) {
        LogThis("Would have performed remote script")
    }
    else {
        Invoke-Command -ComputerName $computer.Name -ScriptBlock {
            $script = @("select disk 0", "select partition 1", "extend", "exit")
            $script | Out-File -Encoding ASCII -FilePath "C:\windows\temp\Diskpart-extend.txt"
            diskpart.exe /S C:\windows\temp\Diskpart-extend.txt
        }
    }
}

The Invoke-Command line deserves some explanation.

The diskpart commands I want to run are:
select disk 0
select partition 1
extend
exit

I create an empty variable and write the diskpart commands out to it. Then I use Out-File to save the diskpart commands to a text file in C:\windows\temp. Then I call diskpart.exe with an /S command switch, which executes the commands in the script. Because I used the -ComputerName parameter, all of my code is remotely executed on the desktop.

Hope this post saves you some time.

License activation for Adobe CS6 in a View linked clone environment

Update 2018-08-02

I haven’t had to do this since I wrote this blog post in 2012. However, somebody found this post about a year ago and told me they were able to use Adobe’s current tooling, together with my old post, to make it work in their Horizon environment. And I just got a question today on this blog post.

The current tool is called Creative Cloud Packager. It’s not quite the same as Adobe Application Manager, but the concepts are similar.


Original Post 2012-09-23

I recently had to work out the process for license activation of the Adobe CS6 suite. Adobe offers an academic FTE licensing scheme similar to Microsoft’s FTE program. The calculation for licensing cost is based on your employee count; the entire district is then licensed and you don’t pay a dime for students. The Adobe K-12 Enterprise Agreement contains Design/Web Premium, Photoshop Elements, Captivate, and Presenter.

The total installed size of these products turns out to be 8–10GB, which makes attempting a ThinApp quite a nightmare. I decided to bake the Adobe software directly into the base image. However, Adobe license keys do not survive the QuickPrep process; the software comes up unlicensed when you log in to a linked clone.

Adobe offers a free enterprise deployment tool called Adobe Application Manager. One of the functions is to create a serialized installer key along with an executable that will license the already-installed Adobe software. Note that this does NOT work on Photoshop Elements. We have a ticket in to Adobe support for assistance, but at the moment it doesn’t appear possible to activate Photoshop Elements anywhere other than during installation.

First, download and install Adobe Application Manager. Then download your Adobe software and unzip the installation files. Then launch Adobe Application Manager. I found that it only worked properly when I chose Run as Administrator.
Launch Adobe Application Manager
Select the Serialization File option from the main menu.
AAM Main Menu Selector
Browse to your unzipped installer; you need to point to the folder that contains Set-up.exe. Then enter a folder name for the serialized output, and a location on the network to save the folder.
Path to Installer

Enter the serial number.
Enter Serial Number

The output of the tool will be an executable and XML configuration file.
Application Manager output

Now we need to make this script run after guest customization. We put a C:\scripts folder inside each template, then create customize.cmd in C:\scripts. Customize.cmd is a generic batch file that will be called by View after it performs guest customization. You can only call one batch file, so you either need to put every command in the customize.cmd batch file, or use customize.cmd to call other batch files.
The script looks like this:
Customize script
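The screenshot of the script isn’t reproduced here, but based on the description, customize.cmd is just a one-line wrapper (the paths are the ones from this post):

```
rem customize.cmd - the single batch file View runs after guest customization
call C:\scripts\adobe\adobe-commands.cmd
```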

Put one copy of AdobeSerialization.exe into C:\scripts\adobe. Then create a folder for each Adobe product that you installed; inside each of those folders goes the prov.xml output file. Create the adobe-commands.cmd file and write it to call the executable once for each XML config file.
The syntax to run the licensing is as follows: AdobeSerialization.exe --tool=VolumeSerialize --provfile=prov.xml
Adobe licensing commands
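The screenshot isn’t reproduced here, but adobe-commands.cmd simply calls the executable once per product folder, something like this (the product folder names are hypothetical):

```
rem adobe-commands.cmd - run the serialization tool against each product's prov.xml
cd /d C:\scripts\adobe\DesignWebPremium
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=prov.xml
cd /d C:\scripts\adobe\Captivate
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=prov.xml
```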

Configure your View pool to run the customization script after the linked clone is quickprepped.
View Post-sync script

Now the Adobe products will be fully activated anytime you recompose your linked clone pools.

VMware View / PCoIP / PowerPoint bug

We just went live with a 400-seat View deployment at a school district. They have Wyse P20 zero clients which have dual DVI monitor ports. The project was a complete rip-and-replace – routers, switches, wiring, all Mac computers replaced with P20s, and two new racks full of gear in the server closet.

The teachers arrived this past week and started working on transferring their files off the Macs. One of them brought a problem to my attention – when he was trying to run his PowerPoint in presenter mode, the projector showed nothing but a black screen. Interestingly enough, if he used the marker tool to draw on his side of the presentation, the slide on the projector suddenly appeared.

Slideshow ribbon in PowerPoint 2010

A Google search for the View release notes turned up this, listed as a known issue in every View release since 4.5:

In Windows 7 desktops when connected using PCoIP in multiple-monitor display mode, if you choose Show On: Monitor Generic Non-PnP Monitor for any monitor other than the primary monitor in PowerPoint 2010, a black screen is displayed instead of a slide show.

Workaround: If you are using PowerPoint 2010 in a Windows 7 desktop, you must use Show On: Primary Monitor for a slide show.

One immediate workaround that other engineers at my company suggested was to use the RDP protocol. This does work; the issue does not exist when using RDP. However, it wasn’t an option at this client, as the user endpoints were PCoIP zero clients.

This was a huge problem for user acceptance of the project. Most of the teachers were used to presenter mode for delivery of their slideshows. The only workaround was to connect the projector directly to DVI output #1 and have the teacher present directly from the projector. Many teacher desks were oriented such that the teacher couldn’t even see the projection – either we got presenter mode to work or we were asking dozens of teachers to rotate their entire classroom one day before students arrived.

It wasn’t really a good workaround even for teachers who could see the projection. This district uses hosted PowerSchool for their student information system. They take attendance online, and you generally wouldn’t want to project the process of logging into the system – a teacher could easily reveal credentials and allow students to log in and change grades.

So we were down to switching video cables from the projector back to the monitor many times a day – a bit frustrating to say the least. We discussed a DVI splitter, but that wasn’t a great option either; it would make it a little easier to disconnect the projector when you didn’t want to project, but it was still kludgy.

I opened up support tickets with VMware and Teradici at 7:30 PM. The client paid for 9×5 support, so I wasn’t going to get an answer from VMware for quite some time. Teradici was a different story. They are the creators of the PCoIP protocol and maintain firmware and management software for zero clients, and you can log on to their support site and get support if you’re running a PCoIP solution.

I went home and woke up to discover that e-mails from Teradici support had arrived in my inbox at 7:30AM. The support tech had already tested the bug out in his lab and had a way to resolve the issue. That’s a completely free 12-hour turnaround with resolution!

The fix was to enable 3D rendering. First you enable it in the pool. Doing so means you can no longer support RDP connections to the pool, so you have to turn off “Allow users to choose protocol.” Then enable 3D Rendering.

Enable 3D in View Pool

Enabling 3D rendering in View

Next, enable 3D support on your template VM. For me, performance was unacceptable with less than 64MB of video RAM, and it got better with more. You might want to experiment with various RAM settings to determine the best setting in your environment.

Enable 3D support on a VM

Enabling 3D support on a VM

Then inside your template, enable Aero. The quickest way I found is to search for aero, then click “Find and fix problems with transparency and other visual effects.”

Enable Aero in Windows 7

Enabling Aero in Windows 7

Enabling Aero turns on the “pretty” features of Windows 7, which suck up memory and CPU cycles. I found the VM to be sluggish with Aero enabled, so I set the VM’s visual effects to Best Performance.

Adjust for Best Performance

Adjusting Windows for Best Performance

I recomposed my pool, tested it out, and it worked! There is definitely a penalty here in the form of increased resource use, but it gets the technology out of the way and lets the teachers focus on teaching the lesson.

Many thanks to the folks over at Teradici for this fix!

My VCAP5-DCD exam experience

I passed the VCAP5-DCD exam on July 25th!

I found the exam to be extraordinarily challenging. Design has never been a primary focus of my job, and much of what I learned for the exam was completely new to me. If your primary job is vSphere administration, you are in for a bit of a rough ride. Terms like requirement, constraint, risk, and assumption obviously had English meaning for me, but they meant nothing in the context of a vSphere design.

The exam requires a *TON* of reading. Scenarios are extremely lengthy, far longer than any VCP or VCAP-DCA scenario. You have to be able to quickly extract the important details. If you are a slow reader, you are at a crippling disadvantage for this exam.

For exam preparation, I relied heavily on Cody Bunch’s vBrownBag series. The Asia-Pacific version of the vBrownBag sessions was run by Alastair Cooke and covers the entire VCAP5-DCD exam blueprint. It consists of fifteen 1-hour sessions, all of which are recorded for you to download. You can either watch them streaming, or grab the MP4 versions that @nickmarshall9 has converted. I downloaded all of the vBrownBag sessions and saved them to my Dropbox, and I kept two of them marked as favorites on my Droid at all times so I could listen to them while commuting.

Another resource with a ton of awesome exam-relevant content was the DRBC Design – Disaster Recovery and Business Continuity Fundamentals course. Unfortunately it’s not free, but you are in luck if you work for a VMware Partner: the course is free through Partner University.

For the exam itself, I followed my typical method of answering questions as quickly as I could. If I had any doubt at all, I flagged the question for review and moved on. One tip for this exam is to read the multiple choice answers first – it helps focus your reading so you can spot the answer. I didn’t even attempt any of the diagramming questions on my first pass, I marked them for review and moved on.

Many people have complained about how kludgy the Visio-style diagramming tool is, and my experience was no different. I lost diagrams multiple times and I had very strange behavior with objects moving themselves around on the canvas. There is a video demo of how to work with the diagramming tool on the VCAP5-DCD site, I strongly recommend you watch the short video to familiarize yourself with the tool.