The parent virtual disk has been modified since the child was created

Some VMs in my environment had virtual-mode RDMs on them, along with multiple nested snapshots. Some of the RDMs were subsequently extended at the storage array level, but the storage team didn’t realize there was an active snapshot on the virtual-mode RDMs. This resulted in immediate shutdown of the VMs and a vSphere client error “The parent virtual disk has been modified since the child was created” when attempting to power them back on.

I had done a little bit of work dealing with broken snapshot chains before, but the change in RDM size was outside of my wheelhouse, so we logged a call with VMware support. I learned some very handy debugging techniques from them and thought I’d share that information here. I went back into our test environment and recreated the situation that caused the problem.

In this example screenshot, we have a VM with no snapshot in place, and we run vmkfstools -q -v10 against the VMDK file.
-q means query; -v10 sets the verbosity to level 10.

The command opens up the disk, checks for errors, and reports back to you.
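For example, to check a base disk you would run something like this from the ESXi shell (the datastore path here is illustrative):

vmkfstools -q -v10 /vmfs/volumes/datastore1/TEST-VM/TEST-VM.vmdk

Note that you pass in the small descriptor .vmdk file, not the -flat or -delta extent.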

[Screenshot: 1_vmkfstools]

In the second example, I’ve taken a snapshot of the VM. I’m now passing the snapshot VMDK into the vmkfstools command. You can see the command opening up the snapshot file, then opening up the base disk.

[Screenshot: 2_vmkfstools]

In the third example, I pass it the snapshot VMDK for a virtual-mode RDM on the same VM. It traverses the snapshot chain and also correctly reports that the VMDK is a non-passthrough raw device mapping, which means virtual-mode RDM.

[Screenshot: 3_vmkfstools]

Part of the problem in our case was that the size of the RDM increased while the snapshot still pointed to the old, smaller size. However, even without any changes to the storage, a corrupted snapshot chain can occur during an out-of-space situation.

I have intentionally introduced a drive geometry mismatch in my test VM below – note that the value after RW in the snapshot TEST-RDM_1-00003.vmdk is 1 less than the value in the base disk TEST-RDM_1.vmdk.

[Screenshot: 4_vmkfstools]

Now when I run it through the vmkfstools command, it reports the error that we were seeing in the vSphere client in production when trying to boot the VMs – “The parent virtual disk has been modified since the child was created”. But the debugging mode gives you an additional clue that the vSphere client does not: it says that the capacity of each link is different, and it even gives you the values (23068671 != 23068672).

[Screenshot: 5_vmkfstools]
The fix was to follow the entire chain of snapshots and ensure everything was consistent. Start with the most current snapshot in the chain. Its “parentCID” value must equal the “CID” value of the next disk in the chain, and that next disk is named in the “parentFileNameHint”. So TEST-RDM_1-00003.vmdk is looking for a parentCID value of 72861eac, and it expects to find it in the file TEST-RDM_1.vmdk.

If you open up TEST-RDM_1.vmdk, you see a CID value of 72861eac – this is correct. You also see an RW value of 23068672. Since this file is the base RDM, this is the correct value. The value in the snapshot is incorrect, so you have to go back and change it to match. All snapshots in the chain must match in the same way.
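To make that concrete, trimmed versions of the two descriptors in this example would look something like the following (the snapshot’s own CID value here is invented for illustration, and real descriptor files contain more fields):

TEST-RDM_1-00003.vmdk (the snapshot):

# Disk DescriptorFile
CID=a1b2c3d4
parentCID=72861eac
parentFileNameHint="TEST-RDM_1.vmdk"
RW 23068671 VMFSSPARSE "TEST-RDM_1-00003-delta.vmdk"

TEST-RDM_1.vmdk (the base virtual-mode RDM):

# Disk DescriptorFile
CID=72861eac
parentCID=ffffffff
RW 23068672 VMFSRDM "TEST-RDM_1-rdm.vmdk"

The parentCID/CID pair lines up, but the RW values do not; editing the snapshot’s RW back to 23068672 restores consistency.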

[Screenshot: 4_vmkfstools]

I change the RW value in the snapshot back to 23068672 – my vmkfstools command succeeds, and I’m also able to delete the snapshot from the vSphere client.

[Screenshot: 6_vmkfstools]

 

VMware recertification

VMware just announced a new recertification policy for the VCP. A VCP certification expires 2 years after it is achieved. You can recertify by taking any VCP or VCAP exam.

Part of VMware’s justification for this change is “Recertification is widely recognized in the IT industry and beyond as an important element of continuing professional growth.” While I do agree with this statement in general, I don’t believe this decision makes much sense for several reasons:

  • Other vendors’ certifications – Cisco and Microsoft as two examples – expire after 3 years, not 2. Two years is unnecessarily short. It’s also particularly onerous given the VMware course requirement for VCP certification. It’s hard enough to stay current with all of the vendors’ recertification policies at 3 years.

  • Other vendors – again, Cisco and Microsoft as examples – have no version number tied to their certifications. You are simply “MCSE” or “CCNA”. With VMware, you are “VCP3”, “VCP4”, or “VCP5”. The certifications naturally age themselves out. A VCP3 is essentially worthless at this point. The VCP4 is old, and the VCP5 is current. An expiration policy doesn’t need to be in place for this to remain true.

  • The timing of this implementation is not ideal. VMware likes to announce releases around VMworld, so we’re looking at August 2014 for 6.0.  Most VMware technologists will be interested in keeping certifications with the current major release, so demand for the VCP6 will be high. Will the certification department release 6 in time for everybody to test before expiration? It’s really a waste of my time and money to force me to recertify on 5 when 6 is right around the corner.

  • The expiration policy makes no sense in light of the policy on VCAPs and VCPs. Currently, any VCP makes you eligible to take a VCAP in any of the three tracks, and achieving the VCAP in a track automatically gives you a VCP in the same track. This is a significant timesaver for those of us who are heavily invested in VMware – skip the entry level exam and go straight to the advanced exam. VCAP exam development is obviously even slower than VCP exam development. I have doubts that the VCAPs will come out quickly enough to meet the March 2015 deadline.

  • Adam Eckerle commented in his blog post, “I also think it is important to point out that I think it encourages individuals to not only keep their skills up to date but also to branch out. If your VCP-DCV is going to expire why not take a look at sitting the VCP-DT or Cloud, or IaaS exams?  If you don’t use the Horizon products or vCloud Suite as part of your job that can be difficult.” I agree that in some cases this might encourage you to pursue a certification in a separate track. Before I had the desktop certifications, I might have considered accelerating exam preparation to meet this recertification date. However, I already own 4 of the 6 VCAPs. Even as a consultant I have no use for vCloud; there’s just not enough demand from our customers to build a practice area around it. There’s currently no business benefit in pursuing the Cloud track.

It’s VMware’s program and they can do as they please, but I hope they consider 3 years instead of 2 for recertification.

Christian Mohn’s blog has a fairly lively discussion going on, and Vladan Seget also has some thoughts and comments.

More loathing of Pearson Vue … or, my [redacted] beta exam experience

This is what I get for taking beta exams. I understand. I create my own mess. But that doesn’t change the fact that Pearson Vue is spectacularly incompetent. I’ve previously blogged my strongly negative opinions of Pearson Vue and today’s experience doesn’t do much to improve my outlook.

I sat the [redacted] VMware beta exam today and there were problems with the remote environment. It took me almost 40 minutes to complete the first two questions because the performance of the environment was so lousy. Per my beta exam instructions, after about 15 minutes I asked the Vue proctor to contact VMware for assistance.

The proctor returned to say that we should reboot my local exam station, as if that had anything at all to do with the slow response from the remote lab. I asked her who told her to do that – she had called the Vue helpdesk, not VMware. I told her to call VMware, to which she replied “We can’t do that.”  By then the environment had improved from ‘lousy’ to ‘nearly tolerable’ so I gave up complaining.

After the exam I spoke with the VMware certification team and I received confirmation of the following:

  1. VMware has been assured by Pearson Vue that VCAP candidates will be able to contact VMware’s support team for problems with the exam environment.
  2. VMware’s support team has the authority to grant extended time in the event of an environment failure.
  3. Pearson Vue has the capability to extend the exam time.

The slow performance of the environment set me back too far to recover; I was unable to complete the exam. Maybe it will cost me a passing grade, maybe it won’t, but Pearson Vue’s failure to rectify the situation is inexcusable.

VMware load balancing with Cisco C-series and 1225 VIC card

I recently did a UCS C-series rackmount deployment. The servers came with a 10gbps 1225 VIC card and the core routers were a pair of 4500s in VSS mode.

The 1225 VIC card lets you carve virtual NICs from your physical NICs. You can put CoS settings directly on the virtual NICs, enabling you to prioritize traffic directly on the physical NIC. For this deployment, I created 3 virtual NICs for each pNIC – Management, vMotion, and VM traffic. By setting CoS to 6 for management, 5 for VMs, and 4 for vMotion on the vNICs, I ensure that management traffic is never interrupted, and I also guarantee that VM traffic is prioritized over vMotion. This still allows me to take full advantage of 10 Gbps of bandwidth when the VMs are under light load.
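If you want to confirm from the vSphere side that the carved vNICs are showing up as separate uplinks, a quick PowerCLI check along these lines works (the vCenter and host names are placeholders):

Connect-VIServer vcenter.example.local
# Each vNIC carved on the VIC appears to ESXi as its own physical uplink (vmnic)
Get-VMHost esx01.example.local | Get-VMHostNetworkAdapter -Physical |
    Select-Object Name, Mac, BitRatePerSec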

Cisco 1225 VIC vNIC

My VCAP5-DTD exam experience

I took the VCAP5-DTD beta exam on January 3rd, 2013. Like many people, I received the welcome news today that I passed the exam.

I’m laughing a little to myself as I write this post because my certification folder contains a log of my studying. I downloaded the beta blueprint on December 17, 2012, but I already had Microsoft exams scheduled for December 28th. I did no studying for this VCAP until the day before the exam, January 2nd, where you can clearly see my feverish morning download activity. I will say, though, that I have several years of View deployments under my belt, so my knowledge on the engineering side was up-to-date and at the front of my mind.

VCAP5-DTD Folder

I downloaded every PDF referenced in the exam blueprint, and I already had most of the product documentation on hand. I am primarily a delivery engineer, but to be successful on the exam you need to put on your designer’s hat. I tried to keep that in mind as I pored through the PDFs – it does make a difference, because different information stands out when you actively look for design elements.

My exam was just after lunch and the testing center was well over an hour away, so I left early and brought my Kindle. I continued going through the PDFs until exam time. The sheer volume of information you have to read through makes VMware design exams quite difficult. I suggest reading the answers before you read the question – this helps you identify clues in the question. There are detailed scenarios requiring 6 or more paragraphs of reading just to answer a single multiple-choice question.

The GA version of the exam has 115 questions, 6 of which are diagramming scenarios. Keep track of the number of diagramming questions you get so you can budget your time appropriately; you should not spend more than 15 minutes on a diagram. Keep in mind that 15 * 6 = 90 minutes, leaving you only 105 minutes for the other 109 questions. The pace you have to sustain is mentally exhausting. The beta was even more difficult, with 131 questions plus the expectation to provide comment feedback on the questions.

I found the diagramming questions to be even more involved than the DCD questions. I’d say the tool was a bit better behaved than in the DCD exam, but not by much. It’s easy to get sucked in to a design scenario and waste far too much time. Remember that you’re not designing the perfect system; it just has to be good enough to meet the stated requirements.

Moving PVS VMs from e1000 to VMXNET3 network adapter

A client needed to remove the e1000 NIC from all VMs in a PVS pool and replace it with the VMXNET3 adapter. PVS VMs are registered by MAC address – replacing the NIC means a new MAC, and PVS has to be updated to allow the VM to boot.

I needed a script to remove the old e1000 NIC, add a new VMXNET3 NIC, and register the new NIC’s MAC with PVS. I knew I could easily accomplish the VM changes with PowerCLI, but I didn’t know what options there were on the Citrix side. I found what I needed in McliPSSnapIn, a PowerShell snap-in installed on all PVS servers. The snap-in gives you PowerShell control over just about anything you need to do on a PVS server.
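For example, once the snap-in is registered (more on that below), something like this should pull what PVS knows about a single target device – the device name here is a placeholder:

Add-PSSnapIn McliPSSnapIn
# Dump the PVS device record, including its registered MAC, for one target device
Mcli-Get Device -p deviceName=VD0001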

I didn’t want to install PowerCLI on the production PVS servers, and I didn’t want to install PVS somewhere else or try manually copying files over. I decided I needed one script to swap out the NICs and dump a list of VMs and MAC addresses to a text file, and a second script to read the text file and make the PVS changes.

First, the PowerCLI script. We put the desktop pool into maintenance mode with all desktops shut down. It takes about 10 seconds per VM to execute this script.

Param(
	[switch] $WhatIf
,
	[switch] $IgnoreErrors
,
	[ValidateSet("e1000","vmxnet3")]
	[string] 
 	$NICToReplace = "e1000"
)

# vCenter folder containing the VMs to update
$FOLDER_NAME = "YourFolder"

# vCenter Name
$VCENTER_NAME = "YourvCenter"

#The portgroup that the replacement NIC will be connected to
$VLAN_NAME = "VLAN10"

#If you want all VMs in $FOLDER_NAME, leave $VMFilter empty. Otherwise, set it to a pipe-delimited list of VM names
$VMFilter = ""
#$VMFilter = "DESKTOP001|DESKTOP002"

$LOG_FILE_NAME = "debug.log"

Connect-VIServer $VCENTER_NAME

$NICToSet = "e1000"

if ( $NICToReplace -eq "e1000" )
{
	$NICToSet = "vmxnet3"
}
elseif ( $NICToReplace -eq "vmxnet3" )
{
	$NICToSet = "e1000"
}


function LogThis
{
	Param([string] $LogText,
      	[string] $color = "Gray")
 Process
 {
    write-host -ForegroundColor $color $LogText 
    Add-Content -Path $LOG_FILE_NAME $LogText
 }
}

if ( Test-Path $LOG_FILE_NAME )
{
    Remove-Item $LOG_FILE_NAME
}

$errStatus = $false
$warnStatus = $false
$msg = ""

if ( $VMFilter.Length -eq 0 )
{
	$vms = Get-Folder $FOLDER_NAME | Get-VM
}
else
{
	$vms = Get-Folder $FOLDER_NAME | Get-VM | Where{ $_.Name -match $VMFilter }
}

foreach ($vm in $vms)
{
	$vm.Name
	$msg = ""


	if ( $vm.NetworkAdapters[0] -eq $null )
	{
		$errStatus = $true
		$msg = "No NIC found on " + $vm.Name
		LogThis $msg "Red"

	}
	else
	{
		if ( ($vm.NetworkAdapters | Measure-Object).Count -gt 1 )
		{
			$errStatus = $true
			$msg = "Multiple NICs found on " + $vm.Name
			LogThis $msg "Red"

		}
		else
		{
			if ( $vm.NetworkAdapters[0].type -ne $NICToReplace )
			{
				$warnStatus = $true
				$msg = "NIC is not " + $NICToReplace + ", found" + $vm.NetworkAdapters[0].type + " on " + $vm.Name
				LogThis $msg "Yellow"				
			}

				LogThis $vm.Name,$vm.NetworkAdapters[0].MacAddress

		}

	}



}

if ( ($errStatus -eq $true) -and ($IgnoreErrors -ne $true) )
{
	LogThis "Errors found, please correct and rerun the script." "Red"
 
}
else
{
	if ( $warnStatus -eq $true )
	{
		LogThis "Warnings were found, continuing." "Yellow"
	}
	foreach ( $vm in $vms )
	{
		if ( $WhatIf -eq $true )
		{
			$msg = "Whatif switch enabled, would have added " + $NICToSet + " NIC to " + $vm.Name
			LogThis $msg
		}
		else
		{
			$vm.NetworkAdapters[0] | Remove-NetworkAdapter -confirm:$false
			$vm | New-NetworkAdapter -NetworkName $VLAN_NAME -StartConnected -Type $NICToSet -confirm:$false
		}
	}

	if ( $VMFilter.Length -eq 0 )
	{
		$vms = Get-Folder $FOLDER_NAME | Get-VM
	}
	else
	{
		$vms = Get-Folder $FOLDER_NAME | Get-VM | Where{ $_.Name -match $VMFilter }
	}

	LogThis("Replaced MAC addresses:")
	foreach ( $vm in $vms )
	{
		LogThis $vm.Name,$vm.NetworkAdapters[0].MacAddress
	}
	
	
}

The script offers a -WhatIf switch so you can run it in test mode without actually replacing any NICs. It writes all of its output to $LOG_FILE_NAME: first the VMs with their old MACs, then the replaced MACs.
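For example, assuming you saved the script as Replace-PoolNic.ps1 (the file name is my own), a dry run followed by the real run looks like this:

.\Replace-PoolNic.ps1 -WhatIf
.\Replace-PoolNic.ps1

The resulting log output looks something like this: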
VD0001 00:50:56:90:00:0a
VD0002 00:50:56:90:00:0b
VD0003 00:50:56:90:00:0c
VD0004 00:50:56:b8:00:0d
VD0005 00:50:56:b8:00:0e
Replaced MAC addresses:
VD0001 00:50:56:90:57:1b
VD0002 00:50:56:90:57:1c
VD0003 00:50:56:90:57:1d
VD0004 00:50:56:90:57:1e
VD0005 00:50:56:90:57:1f

Scan the logfile for any problems in the top section. The data after “Replaced MAC addresses:” is what the PVS server needs. Copy this over to the PVS host. Now we need to use MCLIPSSnapin, but first we have to register the DLL. I followed this Citrix blog for syntax:
"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\installutil.exe" "C:\Program Files\Citrix\Provisioning Services Console\McliPSSnapIn.dll"

I copied the VM names and new MAC addresses to a text file, vmlist.txt, and put it on my PVS server in the same folder as the following PowerShell script. It runs very quickly; it takes only a few seconds even if you are updating hundreds of VMs.

Add-PSSnapIn mclipssnapin
$vmlist = get-content "vmlist.txt"
foreach ($row in $vmlist)
{
	# Each row is "VMNAME MACADDRESS", space-delimited
	$vmname=$row.Split(" ")[0]
	$macaddress=$row.Split(" ")[1]
	$vmname
	$macaddress
	Mcli-Set Device -p devicename=$vmname -r devicemac=$macaddress
}

Now, replace the PVS pool’s image with one that is prepared for a VMXNET3 adapter and boot the pool. Migration complete!

Is It Time To Remove the VCP Class Requirement – Rebuttal

This post is a rebuttal of @networkingnerd‘s blog post Is It Time To Remove the VCP Class Requirement.

I would like to acknowledge that it’s easy for me to have the perspective I do as a VCP holder since version 3. I’ve already got it, so I naturally want it to remain valuable. The fact that my employer at the time paid for the class has opened up an entire career path for me that would have otherwise been closed. But I believe the VCP cert remains fairly elite specifically because of the course requirement.

First, consider Microsoft’s certifications. As a 15-year veteran of the IT industry, I believe I am qualified to state unequivocally that Microsoft certifications are utterly worthless. This is partially because of the proliferation of braindumps. There is no knowledge requirement whatsoever to sit the Microsoft exams. You don’t even need to look at a Microsoft product to pass a Microsoft test – go memorize a braindump and pass the test. We’ve all encountered paper MCSEs – their existence completely devalues the certification. I consider the MCSE nothing more than a little checkbox on some recruiter’s wish list.

I would go so far as to say that Microsoft’s tests are specifically geared towards memorizers – they actually encourage braindumping by focusing on irrelevant details and not on core skills. Passing a Microsoft exam has everything to do with memorization and almost nothing to do with your skill as a Windows admin.

On the other hand, to sit the VCP exam you have to go through a week of training. At the very least, you’ve touched the software. You installed it. You configured it. You (wait for it)… managed it.  Obviously there are braindumps out there for the VCP exam too, but everybody starts with the same core of knowledge. The VCP exams have improved to a point where they are not memorize-and-regurgitate. A person who has worked with the product actually stands a reasonable chance of passing the exam.

Quoted directly from the blog post:

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built.

There is a tiny window of opportunity after the release of a new vSphere edition to take the exam without a course requirement. Those of us who are able to pass the exam in that small window are the people who do exactly as you say – we are downloading and installing the software in our labs. We are putting in the time to make sure that our knowledge of the newest features is up to par. Many of us participate in alpha and beta programs, spending far more time with the software than a typical certification candidate. Some of us participate in the certification beta program, where we have just a couple of short weeks to study for and book the exam. I’ve put in quite a few late nights prepping for beta exams.

VMware forces us to learn the new features by putting a time limit on the upgrade period. We have a foundation of knowledge that was created by the original class that we took. There isn’t enough time for braindumps to leak out there, and the vast majority of upgraders wouldn’t use one anyhow. VMware can be reasonably certain that a VCP upgrader without the class really is taking the time to learn the new features. @networkingnerd is correct, the value IS in the knowledge, but the focus is ensuring that every VCP candidate starts with the same core of knowledge.

@networkingnerd suggests an alternative lower level certification such as a VCA with a much less expensive course requirement. I think it’s an interesting idea, but I don’t know how you’d put it into practice. I’m not sure what a 1-day class could prepare you for. It’s one thing for experienced vSphere admins to attend a 2-day What’s New class. But what could you really teach and test on? Just installing vSphere? There’s not a whole lot of value for an engineer who can install but not configure.

Again quoting from the article:

Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visbility that having a lot of VCPs can bring.

This suggests that the entry-level certification from the leader in virtualization is somehow not well-known. While I would agree that the VCAP-level certifications do not enjoy the same level of name recognition as the CCNP, I talk to seniors in college who know what the VCP is. There is no lack of awareness of the VCP certification. I also agree that it’s ridiculous to send a VMware admin who has years of experience to the Install Configure Manage class. That’s why the Optimize and Scale and the Fast Track classes exist.

I don’t believe dropping the course requirement would do anything to enhance VMware’s market share. The number of VCP individuals has long since reached a critical mass.  Nobody is going to avoid buying vSphere because of a lack of VCPs qualified to administer the environment. While I agree that Hyper-V poses a credible threat, Microsoft is just now shipping features that vSphere has had for years. Hyper-V will start to capture the SMB market, but it will be a long time before it has the chance to unseat vSphere in the enterprise.

VMware View Composer starts, but does no work.

I worked on a client outage over the weekend; vCenter and View Composer were down. It started with a disk-full situation on the SQL server hosting the vCenter, Composer, and Events databases. The client was shut down for winter break, so the Composer outage went unnoticed for several days. After fixing the SQL Server disk space problem, everything came back up. I was able to restart all services and they appeared to be running. Composer started without issue, but it didn’t respond to any commands – any operations I requested in View Manager were ignored. I didn’t find any obvious errors in the logs.

I ran through the troubleshooting options in KB1030698 without finding any issues. I validated that the SDK was responding by going to https://vcenteripaddress/sdk/vimService.wsdl. I couldn’t find any cause for the outage, so I opened a Sev-1 ticket with VMware Support.

The support tech concluded that a problem with the ADAM database was preventing Composer from doing its job. He had me shut down all but one connection broker, then restart the View services on the remaining broker. At that point, commands issued on the broker were obeyed by Composer. We deleted or refreshed all of the desktops listed under Problem Desktops. Once we were sure that the ADAM database reflected the true state of the environment as shown in vCenter, we restarted the other brokers. They synced databases and the problem was resolved.

License activation for Adobe CS6 in a View linked clone environment

I recently had to work out the process for license activation of the Adobe CS6 suite. Adobe offers an academic FTE licensing scheme similar to Microsoft’s FTE program. The calculation for licensing cost is based on your employee count; the entire district is then licensed and you don’t pay a dime for students. The Adobe K-12 Enterprise Agreement contains Design/Web Premium, Photoshop Elements, Captivate, and Presenter.

The total installed size of these products turns out to be 8-10 GB, which makes a ThinApp attempt quite a nightmare. I decided to bake the Adobe software directly into the base image. However, Adobe license keys do not survive the QuickPrep process; the software comes up unlicensed when you log in to a linked clone.

Adobe offers a free enterprise deployment tool called Adobe Application Manager. One of the functions is to create a serialized installer key along with an executable that will license the already-installed Adobe software. Note that this does NOT work on Photoshop Elements. We have a ticket in to Adobe support for assistance, but at the moment it doesn’t appear possible to activate Photoshop Elements anywhere other than during installation.

First, download and install Adobe Application Manager. Then download your Adobe software and unzip the installation files. Launch Adobe Application Manager; I found that it only worked properly when I chose Run as Administrator.
Launch Adobe Application Manager
Select the Serialization File option from the main menu.
AAM Main Menu Selector
Browse to your unzipped installer; you need to point to the folder that contains Set-up.exe. Then enter a folder name for the serialized output and a location on the network to save the folder.
Path to Installer

Enter the serial number.
Enter Serial Number

The output of the tool is an executable and an XML configuration file.
Application Manager output

Now we need to make this script run after guest customization. We put a C:\scripts folder inside each template, then create customize.cmd in C:\scripts. Customize.cmd is a generic batch file that View calls after it performs guest customization. You can only call one batch file, so you either need to put every command in customize.cmd, or use customize.cmd to call other batch files.
The script looks like this:
Customize script
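The screenshot shows my version; as a minimal sketch, a customize.cmd built for the layout described here would look something like:

@echo off
rem Called once by View after guest customization completes.
rem Fan out to per-task batch files from here.
call C:\scripts\adobe\adobe-commands.cmd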

Put one copy of AdobeSerialization.exe into C:\scripts\adobe. Then create a folder for each Adobe product that you installed; inside each of those folders goes that product’s prov.xml output file. Create the adobe-commands.cmd file and write it to call the executable once for each XML config file.
The syntax to run the licensing is as follows: AdobeSerialization.exe --tool=VolumeSerialize --provfile=prov.xml
Adobe licensing commands
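As a sketch of adobe-commands.cmd, assuming two product folders named designweb and captivate (your folder names will differ):

@echo off
rem License each installed Adobe product using its own prov.xml
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=C:\scripts\adobe\designweb\prov.xml
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=C:\scripts\adobe\captivate\prov.xml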

Configure your View pool to run the customization script after the linked clone is quickprepped.
View Post-sync script

Now the Adobe products will be fully activated anytime you recompose your linked clone pools.

VMware View / PCoIP / PowerPoint bug

We just went live with a 400-seat View deployment at a school district. They have Wyse P20 zero clients, which have dual DVI monitor ports. The project was a complete rip-and-replace – routers, switches, wiring, all Mac computers replaced with P20s, and two new racks full of gear in the server closet.

The teachers arrived this past week and started working on transferring their files off the Macs. One of them brought a problem to my attention – when he was trying to run his PowerPoint in presenter mode, the projector showed nothing but a black screen. Interestingly enough, if he used the marker tool to draw on his side of the presentation, the slide on the projector suddenly appeared.

Slideshow ribbon in PowerPoint 2010

A Google search for the View release notes turned up this, listed as a known issue in every View release since 4.5:

In Windows 7 desktops when connected using PCoIP in multiple-monitor display mode, if you choose Show On: Monitor Generic Non-PnP Monitor for any monitor other than the primary monitor in PowerPoint 2010, a black screen is displayed instead of a slide show.

Workaround: If you are using PowerPoint 2010 in a Windows 7 desktop, you must use Show On: Primary Monitor for a slide show.

One immediate workaround that other engineers at my company suggested was to use the RDP protocol. This does work; the issue does not exist when using RDP. However, it wasn’t an option at this client, as the user endpoints were PCoIP zero clients.

This was a huge problem for user acceptance of the project. Most of the teachers were used to presenter mode for delivering their slideshows. The only workaround was to connect the projector directly to DVI output #1 and have the teacher present directly from the projector. Many teacher desks were oriented such that the teacher couldn’t even see the projection – either we got presenter mode to work or we were asking dozens of teachers to rotate their entire classroom one day before students arrived.

It wasn’t really a good workaround for teachers who could see the projection, either. This district uses hosted PowerSchool for their student information system; teachers take attendance online, and you generally wouldn’t want to project the process of logging into the system. A teacher could easily reveal credentials and allow students to log in and change grades. So we were down to switching video cables from the projector back to the monitor many times a day – a bit frustrating to say the least. We discussed a DVI splitter, but that wasn’t a great option either; it would make it a little easier to disconnect the projector when you didn’t want to project, but it was still kludgy.

I opened up support tickets with VMware and Teradici at 7:30 PM. The client paid for 9×5 support, so I wasn’t going to get an answer from VMware for quite some time. Teradici was a different story. They are the creators of the PCoIP protocol and maintain firmware and management software for zero clients. You can log on to http://www.teradici.com and get support if you’re running a PCoIP solution.

I went home and woke up to discover that e-mails from Teradici support had arrived in my inbox at 7:30AM. The support tech had already tested the bug out in his lab and had a way to resolve the issue. That’s a completely free 12-hour turnaround with resolution!

The fix was to enable 3D rendering. First you enable it on the pool. Doing so means you can no longer support RDP connections to the pool, so you have to turn off “Allow users to choose protocol”; then you can enable 3D Rendering.

Enabling 3D rendering in View

Next, enable 3D support on your template VM. For me, performance was unacceptable with less than 64 MB of video RAM, and it got better with more. You might want to experiment with various video RAM settings to determine the best value for your environment.

Enabling 3D support on a VM

Then inside your template, enable Aero. The quickest way I found is to search for aero, then click “Find and fix problems with transparency and other visual effects.”

Enabling Aero in Windows 7
Enabling Aero turns on the “pretty” features of Windows 7, which suck up memory and CPU cycles. I found the VM to be sluggish with Aero enabled, so I set the VM’s visual effects to Best Performance.

Adjusting Windows for Best Performance

I recomposed my pool, tested it out, and it worked! There is definitely a penalty here in the form of increased resource use, but it gets the technology out of the way and lets the teachers focus on teaching the lesson.

Many thanks to the folks over at Teradici for this fix!