A quick NSX microsegmentation example

This short post demonstrates the power of NSX. My example is a DMZ full of webservers – you don’t want any of your webservers talking to each other. If one of your webservers happens to be compromised, you don’t want the attacker to have an internal launching pad to attack the rest of the webservers. The webservers only need to communicate with your application or database servers.

We’ll use my lab’s Compute Cluster A as a sample. Just pretend it’s a DMZ cluster with only webservers in it.

Compute Cluster A

I’ve inserted a rule into my Layer 3 ruleset and named it “Isolate all DMZ Servers”. When choosing the traffic source, you can see that you’re not stuck with IP addresses or groups of IP addresses like a traditional firewall – you can use your vCenter groupings like Clusters, Datacenters, Resource Pools, or Security Tags, to name a few.

Rule Source

I add Compute Cluster A as the source of my traffic. I do the same for the destination.

NSX Source Cluster

My rule is now ready to publish. As soon as I hit publish changes, all traffic from any VM in this cluster will be blocked if it’s destined for any other VM in this cluster.

Ready to publish

Note that these were only Layer 3 rules – so we’ve secured traffic going between subnets. However, nothing’s stopping webservers on the same subnet from talking to each other. No worries though, we can implement the same rule at Layer 2.

Once this rule gets published, even VMs that are layer 2 adjacent in this cluster will be unable to communicate with each other!

NSX layer 2 block

This is clearly not a complete firewall policy, as our default rule is to allow all. We’d have to do more work to allow traffic through to our application or database servers, and we’d probably want to switch our default rule to deny all. However, because these rules are tied to Virtual Center objects and not IP addresses, security policies apply immediately upon VM creation. There is no lag time between VM creation and application of the firewall policy – it is instantaneous! Anybody who’s worked in a large enterprise knows it can take weeks or months before a firewall change request is pushed into production.

Of course, you still have flexibility to write IP-to-IP rules, but once you start working with Virtual Center objects and VM tags, you’ll never want to go back.

Alzheimer’s Association – Forgotten Donation

Chris Wahl just put up this blog post showing the donation of royalties from his book, Networking for VMware Administrators. I won my copy for free, and at the time I promised to donate to the Alzheimer’s Association. I failed to do so, but I have rectified that today.

Below is my personal donation along with VMware’s matching gift. $31.41 is the minimum donation to receive a matching gift with VMware’s matching program.

alz-donation

alz-matching

Moving VMs to a different vCenter

I had to move a number of clusters into a different Virtual Center, and I didn’t want to deal with manually moving VMs into their correct folders. In my case I happened to have matching folder structures in both vCenters, so I didn’t have to worry about creating an identical folder structure on the target vCenter. All I needed to do was record each VM’s current folder location and then move the VM to the correct folder in the new vCenter.

I first run this script against the source cluster in the source vCenter. It generates a CSV file with the VM name and the VM folder name.

# Record the name and folder of every VM in the source cluster
$VMCollection = @()
Connect-VIServer "Source-vCenter"
$CLUSTERNAME = "MySourceCluster"
 
$vms = Get-Cluster $CLUSTERNAME | Get-VM
foreach ( $vm in $vms )
{
	$Details = New-Object PSObject
	$Details | Add-Member -Name Name -Value $vm.Name -membertype NoteProperty 
	$Details | Add-Member -Name Folder -Value $vm.Folder -membertype NoteProperty
	$VMCollection += $Details
}
 
$VMCollection
$VMCollection | Export-CSV "folders.csv"
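
The resulting folders.csv looks something like this (the VM and folder names here are just illustrative). Note the #TYPE header line, which Export-CSV adds when you omit -NoTypeInformation:

#TYPE System.Management.Automation.PSCustomObject
"Name","Folder"
"WEB01","Production Web"
"APP01","Application Servers"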

Once the first script is run, I disconnect each host from the old vCenter and add it into a corresponding cluster in the new vCenter. I can now run this script against the new vCenter to ensure the VMs go back into their original folders.

Connect-VIServer "Dest-vCenter"
$vmlist = Import-CSV "folders.csv"
 
# Echo each VM's name and folder, then move it into the folder of the same name
foreach ( $vm in $vmlist )
{
	$vm.Name
	$vm.Folder
	Move-VM -VM $vm.Name -Destination $vm.Folder
}

The parent virtual disk has been modified since the child was created

Some VMs in my environment had virtual-mode RDMs on them, along with multiple nested snapshots. Some of the RDMs were subsequently extended at the storage array level, but the storage team didn’t realize there was an active snapshot on the virtual-mode RDMs. This resulted in immediate shutdown of the VMs and a vSphere client error “The parent virtual disk has been modified since the child was created” when attempting to power them back on.

I had done a little bit of work dealing with broken snapshot chains before, but the change in RDM size was outside of my wheelhouse, so we logged a call with VMware support. I learned some very handy debugging techniques from them and thought I’d share that information here. I went back into our test environment and recreated the situation that caused the problem.

In this example screenshot, we have a VM with no snapshots in place, and we run vmkfstools -q -v10 against the VMDK file. -q means query; -v10 is verbosity level 10.

The command opens up the disk, checks for errors, and reports back to you.
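
For reference, the full invocation looks like this (the datastore path here is hypothetical):

vmkfstools -q -v10 /vmfs/volumes/datastore1/TESTVM/TESTVM.vmdk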

1_vmkfstools

In the second example, I’ve taken a snapshot of the VM, and I’m now passing the snapshot VMDK to the vmkfstools command. You can see the command opening the snapshot file, then opening the base disk.

2_vmkfstools

In the third example, I pass it the snapshot VMDK for a virtual-mode RDM on the same VM. It traverses the snapshot chain and also correctly reports that the VMDK is a non-passthrough raw device mapping, which means a virtual-mode RDM.

3_vmkfstools

Part of the problem here was that the size of the RDM increased, but the snapshot still pointed to the old, smaller size. However, even without any changes to the storage, a corrupted snapshot chain can happen during an out-of-space situation.

I have intentionally introduced a drive geometry mismatch in my test VM below – note that the value after RW in the snapshot TEST-RDM_1-00003.vmdk is 1 less than the value in the base disk TEST-RDM_1.vmdk.

4_vmkfstools

Now if I run it through the vmkfstools command, it reports the error that we were seeing in the vSphere client in production when trying to boot the VMs – “The parent virtual disk has been modified since the child was created”. But the debugging mode gives you an additional clue that the vSphere client does not – it says that the capacity of each link is different, and it even gives you the values (23068671 != 23068672).

5_vmkfstools
The fix was to follow the entire chain of snapshots and ensure everything was consistent. Start with the most current snapshot in the chain. Its “parentCID” value must be equal to the “CID” value of the next disk down the chain, and that next disk is named in the “parentFileNameHint”. So TEST-RDM_1-00003.vmdk has a parentCID value of 72861eac, and it expects to see that CID in the file TEST-RDM_1.vmdk.

If you open up TEST-RDM_1.vmdk, you see a CID value of 72861eac – this is correct. You also see an RW value of 23068672. Since this file is the base RDM, this is the correct value. The value in the snapshot is incorrect, so you have to go back and change it to match. All snapshots in the chain must match in the same way.
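
To make that concrete, here is roughly what the relevant descriptor fields look like in this example. The snapshot’s own CID and the extent file names are illustrative; the CID/parentCID relationship and the RW values are the ones discussed above:

# TEST-RDM_1-00003.vmdk (snapshot descriptor)
CID=aabbccdd
parentCID=72861eac
parentFileNameHint="TEST-RDM_1.vmdk"
RW 23068672 VMFSSPARSE "TEST-RDM_1-00003-delta.vmdk"

# TEST-RDM_1.vmdk (base virtual-mode RDM descriptor)
CID=72861eac
parentCID=ffffffff
RW 23068672 VMFSRDM "TEST-RDM_1-rdm.vmdk"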

4_vmkfstools

I change the RW value in the snapshot back to 23068672 – my vmkfstools command succeeds, and I’m also able to delete the snapshot from the vSphere client.

6_vmkfstools

VMware load balancing with Cisco C-series and 1225 VIC card

I recently did a UCS C-series rackmount deployment. The servers came with a 10Gbps 1225 VIC card, and the core routers were a pair of 4500s in VSS mode.

The 1225 VIC card lets you carve virtual NICs from your physical NICs. You can put CoS settings directly on the virtual NICs, enabling you to prioritize traffic directly on the physical NIC. For this deployment, I created 3 virtual NICs for each pNIC – Management, vMotion, and VM traffic. By setting CoS to 6 for management, 5 for VMs, and 4 for vMotion on the vNICs, I ensure that management traffic is never interrupted, and I also guarantee that VM traffic is prioritized over vMotion. This still allows me to take full advantage of the 10Gbps of bandwidth when the VMs are under light load.

Cisco 1225 VIC vNIC

My VCAP5-DTD exam experience

I took the VCAP5-DTD beta exam on January 3rd, 2013. Like many people, I received the welcome news today that I passed the exam.

I’m laughing a little to myself as I write this post because my certification folder contains a log of my studying. I downloaded the beta blueprint on December 17, 2012, but I already had Microsoft exams scheduled for December 28th. I did no studying for this VCAP until the day before the exam, January 2nd – you can clearly see my feverish morning download activity. I will say, though, that I have several years of View deployments under my belt, so my knowledge on the engineering side was up-to-date and at the front of my mind.

VCAP5-DTD Folder

I downloaded every PDF referenced in the exam blueprint, and I already had most of the product documentation downloaded. I am primarily a delivery engineer, but to be successful on the exam you need to put on your designer’s hat. I tried to keep that in mind as I pored through the PDFs – it does make a difference, because different information will stand out if you actively look for design elements.

My exam was just after lunch and the testing center was well over an hour away, so I left early and brought my Kindle. I continued going through the PDFs until exam time. The sheer volume of information you have to read through makes VMware design exams quite difficult. I suggest reading the answers before you read the question – this helps you identify clues in the question. There are detailed descriptions requiring 6 or more paragraphs of reading just to answer a single multiple-choice question.

The GA version of the exam has 115 questions and 6 diagramming scenarios. Keep track of the number of diagramming questions you get so you can budget your time appropriately. You should not spend more than 15 minutes on a diagram. Keep in mind that 15 * 6 = 90 minutes, leaving you only 105 minutes to answer the other 109 questions. The pace you have to sustain is mentally exhausting. The beta was even more difficult, with 131 questions plus the expectation to provide comment feedback on the questions.

I found the diagramming questions to be even more involved than the DCD questions. I’d say the tool was a bit better behaved than on the DCD exam, but not by much. It’s easy to get sucked into a design scenario and waste far too much time. Remember that you’re not designing the perfect system – it just has to be good enough to meet the stated requirements.

Is It Time To Remove the VCP Class Requirement – Rebuttal

This post is a rebuttal of @networkingnerd‘s blog post Is It Time To Remove the VCP Class Requirement.

I would like to acknowledge that it’s easy for me to have the perspective I do as a VCP holder since version 3. I’ve already got it, so I naturally want it to remain valuable. The fact that my employer at the time paid for the class has opened up an entire career path for me that would have otherwise been closed. But I believe the VCP cert remains fairly elite specifically because of the course requirement.

First, consider Microsoft’s certifications. As a 15-year veteran of the IT industry, I believe I am qualified to state unequivocally that Microsoft certifications are utterly worthless. This is partially because of the proliferation of braindumps. There is no knowledge requirement whatsoever to sit the Microsoft exams. You don’t even need to look at a Microsoft product to pass a Microsoft test – go memorize a braindump and pass the test. We’ve all encountered paper MCSEs – their existence completely devalues the certification. I consider the MCSE nothing more than a little checkbox on some recruiter’s wish list.

I would go so far as to say that Microsoft’s tests are specifically geared toward memorizers – they actually encourage braindumping by focusing on irrelevant details and not on core skills. Passing a Microsoft exam has everything to do with memorization and almost nothing to do with your skill as a Windows admin.

On the other hand, to sit the VCP exam you have to go through a week of training. At the very least, you’ve touched the software. You installed it. You configured it. You (wait for it)… managed it.  Obviously there are braindumps out there for the VCP exam too, but everybody starts with the same core of knowledge. The VCP exams have improved to a point where they are not memorize-and-regurgitate. A person who has worked with the product actually stands a reasonable chance of passing the exam.

Quoted directly from the blog post:

For those that say that not taking the class devalues the cert, ask yourself one question. Why does VMware only require the class for new VCPs? Why are VCPs in good standing allowed to take the test with no class requirement and get certified on a new version? If all the value is in the class, then all VCPs should be required to take a What’s New class before they can get upgraded. If the value is truly in the class, no one should be exempt from taking it. For most VCPs, this is not a pleasant thought. Many that I talked to said, “But I’ve already paid to go to the class. Why should I pay again?” This just speaks to my point that the value isn’t in the class, it’s in the knowledge. Besides VMware Education, who cares where people acquire the knowledge and experience? Isn’t a home lab just as good as the ones that VMware built.

There is a tiny window of opportunity after the release of a new vSphere edition to take the exam without a course requirement. Those of us who are able to pass the exam in that small window are the people who do exactly as you say – we are downloading and installing the software in our labs. We are putting in the time to make sure that our knowledge of the newest features is up to par. Many of us participate in alpha and beta programs, spending far more time with the software than a typical certification candidate. Some of us participate in the certification beta program, where we have just a couple of short weeks to study for and book the exam. I’ve put in quite a few late nights prepping for beta exams.

VMware forces us to learn the new features by putting a time limit on the upgrade period. We have a foundation of knowledge that was created by the original class that we took. There isn’t enough time for braindumps to leak out there, and the vast majority of upgraders wouldn’t use one anyhow. VMware can be reasonably certain that a VCP upgrader without the class really is taking the time to learn the new features. @networkingnerd is correct, the value IS in the knowledge, but the focus is ensuring that every VCP candidate starts with the same core of knowledge.

@networkingnerd suggests an alternative lower level certification such as a VCA with a much less expensive course requirement. I think it’s an interesting idea, but I don’t know how you’d put it into practice. I’m not sure what a 1-day class could prepare you for. It’s one thing for experienced vSphere admins to attend a 2-day What’s New class. But what could you really teach and test on? Just installing vSphere? There’s not a whole lot of value for an engineer who can install but not configure.

Again quoting from the article:

Employers don’t see the return on investment for a $3,000US class, especially if the person that they are going to send already has the knowledge shared in the class. That barrier to entry is causing VMware to lose out on the visbility that having a lot of VCPs can bring.

This suggests that the entry-level certification from the leader in virtualization is somehow not well-known. While I would agree that the VCAP-level certifications do not enjoy the same level of name recognition as the CCNP, I talk to seniors in college who know what the VCP is. There is no lack of awareness of the VCP certification. I also agree that it’s ridiculous to send a VMware admin who has years of experience to the Install Configure Manage class. That’s why the Optimize and Scale and the Fast Track classes exist.

I don’t believe dropping the course requirement would do anything to enhance VMware’s market share. The number of VCP holders has long since reached critical mass. Nobody is going to avoid buying vSphere because of a lack of VCPs qualified to administer the environment. While I agree that Hyper-V poses a credible threat, Microsoft is just now shipping features that vSphere has had for years. Hyper-V will start to capture the SMB market, but it will be a long time before it has a chance to unseat vSphere in the enterprise.

VMware View Composer starts, but does no work

I worked on a client outage over the weekend – Virtual Center and View Composer were down. It started with a disk-full situation on the SQL server hosting the vCenter, Composer, and Events databases. The client was shut down for winter break, so the Composer outage was not noticed for several days. After fixing the SQL Server disk space problem, everything came back up. I was able to restart all services and they appeared to be running. Composer started without issue, but it didn’t respond to any commands – any operations I requested in View Manager were ignored. I didn’t find any obvious errors in the logs.

I ran through the troubleshooting options in KB1030698 without finding any issues. I validated the SDK was responding by going to https://vcenteripaddress/sdk/vimService.wsdl. I couldn’t find any cause for the outage, so I opened a Sev-1 ticket with VMware Support.

The support tech concluded that a problem with the ADAM database was preventing Composer from doing its job. He had me shut down all but one connection broker, then restart the View services on the remaining broker. At that point, commands issued on the broker were obeyed by Composer. We deleted or refreshed all of the desktops listed under Problem Desktops. Once we were sure that the ADAM database reflected the true state of the environment as shown in vCenter, we restarted the other brokers. They synced databases and the problem was resolved.

Extending Citrix Cache Drives in vSphere

I have a large client running a Citrix XenDesktop farm on top of vSphere. The environment uses PVS to PXE boot the desktops. The VM shells were created with a 2GB cache drive, but the environment has grown and we needed to extend the drive to 3GB.

PowerShell and PowerCLI to the rescue! First, we need to extend the size of the VMDK from 2 to 3GB. The client wanted me to do this in a controlled manner, so I pointed my script to the AD OU containing computer accounts for a specific pool of desktops. I do realize I could have passed a few more of the variables as parameters.

Param(
    # Defaults to WhatIf mode for safety; pass -WhatIf:$false to actually extend the disks
    [switch] $WhatIf=$true
)

$LOG_FILE_NAME = "output.txt"

function LogThis($buf)
{
    write-host $buf
    Add-Content -Path $LOG_FILE_NAME $buf
}

if ( Test-Path $LOG_FILE_NAME )
{
    Remove-Item $LOG_FILE_NAME
}

Add-PSSnapin VMware.VimAutomation.Core
Import-Module ActiveDirectory
Connect-VIServer YOURVCENTER.foo.com
$computers = get-adcomputer -Filter * -SearchBase "OU=Some OU2,OU=Some OU,DC=foo,DC=com"
foreach ( $computer in $computers )
{
   LogThis( $computer.Name )
   $vm = Get-VM $computer.Name -ErrorAction SilentlyContinue
   if ( $vm -eq $null)
   {
        LogThis( "Could not locate VM in vCenter" )
   }
   else
   {
        foreach ( $hd in (Get-HardDisk $vm) )
        {
             # 2 GB = 2097152 KB; extend any disk that is still 2 GB or smaller
             if ( $hd.CapacityKB -lt 2097153 )
             {
                  if ( $WhatIf -eq $true )
                  {
                     LogThis("Running in whatif mode - would have extended disk.")
                  }

                  else
                  {   
                        # Extend the cache drive to 3 GB (3145728 KB)
                        Set-HardDisk -HardDisk $hd -CapacityKB 3145728 -Confirm:$False
                  }
             }
            else
            {
                LogThis("No disk extension required.")
            }
        }
   }
   LogThis("`r`n")

}

Next, I needed a way to expand the partition inside Windows. I thought about some kind of script to disconnect the VMDK, mount it to another VM, and extend it that way, but it seemed too destructive. So I looked at diskpart instead. I first thought I would use a GPO to trigger a startup script, but apparently you can’t use those with Citrix PVS – the VM boots with the identity of the master image, so your WMI filters don’t work.

Instead, I went with remote PowerShell invocation of diskpart.exe.

Param(
    [switch] $WhatIf
)

Add-PSSnapin VMware.VimAutomation.Core
Import-Module ActiveDirectory
Connect-VIServer MYVCENTER.foo.com
$computers = get-adcomputer -Filter * -SearchBase "OU=OU2,OU=OU,DC=foo,DC=com"

$LOG_FILE_NAME = "diskpart_output.txt"

function LogThis($buf)
{
    write-host $buf
    Add-Content -Path $LOG_FILE_NAME $buf
}

if ( Test-Path $LOG_FILE_NAME )
{
    Remove-Item $LOG_FILE_NAME
}

foreach ( $computer in $computers )
{
     LogThis( $computer.Name )
     if ( $WhatIf -eq $true )
     {
        LogThis("Would have performed remote script")
     }
     else
     {
        invoke-command -ComputerName $computer.Name -ScriptBlock {
            # Build the diskpart script, write it to a file, then execute it with /S
            $script = @("select disk 0","select partition 1","extend","exit")
            $script | Out-File -Encoding ASCII -FilePath "c:\windows\temp\Diskpart-extend.txt"
            diskpart.exe /S C:\windows\temp\Diskpart-extend.txt
        }
     }
  
}

The Invoke-Command line deserves some explanation.

The diskpart commands I want to run are:
select disk 0
select partition 1
extend
exit

I build an array holding the diskpart commands, then use Out-File to save them to a text file in C:\windows\temp. Then I call diskpart.exe with the /S switch, which executes the commands in that script file. Because I used the -ComputerName parameter, all of this code is executed remotely on the desktop.
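
If you want to verify the result afterward, a quick check along these lines can report the new size of C: on each desktop. This is just a sketch, reusing the $computer loop variable from the script above and assuming WinRM remoting is enabled (which the script above already requires):

# Report the size of C: on the desktop after the extend
invoke-command -ComputerName $computer.Name -ScriptBlock {
    Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='C:'" |
        Select-Object DeviceID, @{n="SizeGB";e={[math]::Round($_.Size/1GB,1)}}
}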

Hope this post saves you some time.

License activation for Adobe CS6 in a View linked clone environment

I recently had to work out the process for license activation of the Adobe CS6 suite. Adobe offers an academic FTE licensing scheme similar to Microsoft’s FTE program. The calculation for licensing cost is based on your employee count; the entire district is then licensed and you don’t pay a dime for students. The Adobe K-12 Enterprise Agreement contains Design/Web Premium, Photoshop Elements, Captivate, and Presenter.

The total installed size of these products turns out to be 8-10GB – quite a nightmare to attempt as a ThinApp. I decided to bake the Adobe software directly into the base image. However, Adobe license keys do not survive the QuickPrep process – the software comes up unlicensed when you log in to a linked clone.

Adobe offers a free enterprise deployment tool called Adobe Application Manager. One of its functions is to create a serialization file along with an executable that will license the already-installed Adobe software. Note that this does NOT work on Photoshop Elements. We have a ticket in with Adobe support for assistance, but at the moment it doesn’t appear possible to activate Photoshop Elements anywhere other than during installation.

First, download and install Adobe Application Manager. Then download your Adobe software and unzip the installation files. Then launch Adobe Application Manager. I found that it only worked properly when I chose Run as Administrator.
Launch Adobe Application Manager
Select the Serialization File option from the main menu.
AAM Main Menu Selector
Browse to your unzipped installer – you need to point to the folder that contains Set-up.exe. Then enter a folder name for the serialized output and a network location to save the folder.
Path to Installer

Enter the serial number.
Enter Serial Number

The output of the tool will be an executable and XML configuration file.
Application Manager output

Now we need to make this script run after guest customization. We put a C:\scripts folder inside each template, then create customize.cmd in C:\scripts. Customize.cmd is a generic batch file that will be called by View after it performs guest customization. You can only call one batch file, so you either need to put every command in the customize.cmd batch file, or use customize.cmd to call other batch files.
The script looks like this:
Customize script
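
In text form, a minimal sketch of customize.cmd might look like this (adobe-commands.cmd is the batch file created in the next step):

@echo off
rem Called by View after guest customization; chain additional batch files here
call C:\scripts\adobe\adobe-commands.cmd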

Put one copy of AdobeSerialization.exe into C:\scripts\adobe. Then create a folder for each Adobe product that you installed; inside each of those folders goes the prov.xml output file. Create the adobe-commands.cmd file and write it to call the executable once for each XML config file.
The syntax to run the licensing is as follows: AdobeSerialization.exe --tool=VolumeSerialize --provfile=prov.xml
Adobe licensing commands
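
A text sketch of adobe-commands.cmd, with hypothetical product folder names under C:\scripts\adobe:

@echo off
rem License each installed product by pointing the tool at that product's prov.xml
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=C:\scripts\adobe\DesignWebPremium\prov.xml
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=C:\scripts\adobe\Captivate\prov.xml
C:\scripts\adobe\AdobeSerialization.exe --tool=VolumeSerialize --provfile=C:\scripts\adobe\Presenter\prov.xml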

Configure your View pool to run the customization script after the linked clone is quickprepped.
View Post-sync script

Now the Adobe products will be fully activated anytime you recompose your linked clone pools.