Chicago parking enforcement lies, hits me with bogus ticket

I received a $100 parking ticket in Chicago the other day, but I wasn’t parked illegally. I’m sure I’m one of thousands of people who have been hit with bogus parking tickets. The officer who did this, however, isn’t as sneaky as he thinks he is. Maybe it’s a she, I don’t know. In any event, Officer Miles, ID #639, has presented false evidence.

Here’s the citation. I have obscured the full ticket number as well as my license plate.

The actual citation


Here is the picture of the sign that I was allegedly violating. Officer Miles took this picture and attached it to my citation. Note the timestamp.

Officer Evidence #1


This is a picture that Officer Miles took of my license plate. Note the timestamp – 6 minutes and 10 seconds after the first picture.

Officer Evidence #2

For starters, these pictures prove nothing. They don’t prove that I was parked illegally; all they prove is that Officer Miles took a picture of a sign, and then 6 minutes and 10 seconds later took a picture of my license plate.

Here are the pictures I took when I found the ticket on my car. In this one, you can see my vehicle is clearly in front of the pole with a sign pointing to a tow zone in the other direction. If there were a parking restriction in effect, it seems that a great place to put the sign would be on the same pole that shows a parking restriction in the other direction.


Here’s a look at the other side of the pole. This proves that there is only one sign, the tow zone sign, and it’s pointing in the opposite direction. There is no sign showing a 9-6PM restriction.


Here’s a picture shooting up the street with my car on the left, right next to the tow zone pole. There are no other sign poles as far as you can see.

My final picture was taken right next to my passenger door. You can see much more clearly down the block – no signs – and you can also see the lampposts. They look nothing like the lamppost in the picture provided by Officer Miles.


I believe the officer falsified the ticket. He snapped a photograph of the parking restriction sign on another street, then snapped a picture of my vehicle more than 6 minutes later. I am contesting the ticket and hope the judge sees the evidence the same way I see it.


Custom Views and Dashboards in vRealize Operations

This post covers a few of the most common questions my customers ask me as I demonstrate what you can do with vROps. I’m going to take you through an example of needing to frequently check the CPU ready % of your VMs – this was my customer’s most recent request, but know that you can make this happen for any metric collected by vROps.

First, we’re going to create a custom view for CPU Ready % by going to Content > Views, then clicking on the green Plus to add a new View.


I named this one “Custom-CPU Ready” and gave it a description.


Next, we pick what our View will look like. In this case, I want to see all of the data in a list format, so I pick List.


Now to select the subjects – these are the objects that the View will be looking at. We want CPU Ready % on Virtual Machines, so we pick the vCenter Adapter and scroll down until we find Virtual Machine.




We now need to find the CPU Ready % metric.


Double-click on it when you find it in the list on the left; it will then appear in the Data section. Change the Sort order to descending because we want to see the VM with the highest CPU ready on top.


The Availability options let you control where inside vROps the View will be usable. I left the defaults.


You now see the custom view in the Views list.


How can we use our brand new view? We want to see the CPU ready for all VMs in the Production cluster. Go to Environment, then drill down into the vSphere World until you reach the Production cluster. Click on the Details tab, then scroll down and find the custom View that we created. Click on it and all of your VMs show up, sorted by highest CPU ready.



Let’s say this is a metric that you look at daily or multiple times a day. You can create a custom dashboard so the metric is immediately visible if you’re using vROps Advanced.

To create a new dashboard, from the Home menu, click Actions, then Create Dashboard.


Name the dashboard and select a layout.


We want to show a View in the widget, so we drag View over into the right pane.


Click the Edit icon in the blank View to customize it.


Click on Self Provider so we can specify the Production cluster object on the left, then select our Custom-CPU Ready View on the right and click Save.


The dashboard is now ready, and CPU Ready for the Production VMs shows up immediately.






VMware Hands-on Labs

It always surprises me how few customers are aware of the Hands-on Labs. The labs are full installations of VMware software running in VMware’s cloud, accessible from any browser and completely free for anyone to use.

Each self-paced lab is guided with a step-by-step lab manual. You can follow the manual from start to finish for a complete look at the specific software product. Alternatively, you can focus on specific learning areas due to the modular structure of the lab manual. You can even ignore the manual completely and use the lab as a playground for self-directed study.

You can give the labs a try here:

Moving VMs to a different vCenter

I had to move a number of clusters into a different Virtual Center, and I didn’t want to deal with manually moving VMs into their correct folders. In my case I happened to have matching folder structures in both vCenters, so I didn’t have to worry about creating an identical folder structure on the target. All I needed to do was record each VM’s current folder and move it to the same folder in the new vCenter.

I first run this script against the source cluster in the source vCenter. It generates a CSV file with each VM’s name and folder name:

$VMCollection = @()
Connect-VIServer "Source-vCenter"
$CLUSTERNAME = "MySourceCluster"
$vms = Get-Cluster $CLUSTERNAME | Get-VM
foreach ( $vm in $vms )
{
	$Details = New-Object PSObject
	$Details | Add-Member -Name Name -Value $vm.Name -MemberType NoteProperty
	$Details | Add-Member -Name Folder -Value $vm.Folder.Name -MemberType NoteProperty
	$VMCollection += $Details
}
$VMCollection | Export-Csv "folders.csv" -NoTypeInformation

Once the first script has run, I disconnect each host from the old vCenter and add it to a corresponding cluster in the new vCenter. I can then run this script against the new vCenter to ensure the VMs go back into their original folders:

Connect-VIServer "Dest-vCenter"
$vmlist = Import-Csv "folders.csv"
foreach ( $vm in $vmlist )
{
	Move-VM -VM $vm.Name -Destination (Get-Folder $vm.Folder)
}

The parent virtual disk has been modified since the child was created

Some VMs in my environment had virtual-mode RDMs on them, along with multiple nested snapshots. Some of the RDMs were subsequently extended at the storage array level, but the storage team didn’t realize there was an active snapshot on the virtual-mode RDMs. This resulted in immediate shutdown of the VMs and a vSphere client error “The parent virtual disk has been modified since the child was created” when attempting to power them back on.

I had done a little bit of work dealing with broken snapshot chains before, but the change in RDM size was outside of my wheelhouse, so we logged a call with VMware support. I learned some very handy debugging techniques from them and thought I’d share that information here. I went back into our test environment and recreated the situation that caused the problem.

In this example screenshot, we have a VM with no snapshot in place and we run vmkfstools -q -v10 against the vmdk file (-q means query, -v10 means verbosity level 10).

The command opens up the disk, checks for errors, and reports back to you.



In the second example, I’ve taken a snapshot of the VM. I’m now passing the snapshot VMDK into the vmkfstools command. You can see the command opening up the snapshot file, then opening up the base disk.




In the third example, I pass it the snapshot vmdk for a virtual-mode RDM on the same VM – it traverses the snapshot chain and also correctly reports that the VMDK is a non-passthrough raw device mapping, which means a virtual-mode RDM.



Part of the problem in our case was that the size of the RDM had increased while the snapshot still pointed to the old, smaller size. However, even without any changes to the storage, a corrupted snapshot chain can happen during an out-of-space situation.

I have intentionally introduced a drive geometry mismatch in my test VM below – note that the value after RW in the snapshot TEST-RDM_1-00003.vmdk is 1 less than the value in the base disk TEST-RDM_1.vmdk.



Now if I run it through the vmkfstools command, it reports the error that we were seeing in the vSphere client in Production when trying to boot the VMs – “The parent virtual disk has been modified since the child was created”. But the debugging mode gives you an additional clue that the vSphere client does not: it says that the capacity of each link is different, and it even gives you the values (23068671 != 23068672).

The fix was to follow the entire chain of snapshots and ensure everything was consistent. Start with the most current snap in the chain. The “parentCID” value must be equal to the “CID” value in the next snapshot in the chain. The next snapshot in the chain is listed in the “parentFileNameHint”. So TEST-RDM_1-00003.vmdk is looking for a ParentCID value of 72861eac, and it expects to see that in the file TEST-RDM_1.vmdk.

If you open up Test-RDM_1.vmdk, you see a CID value of 72861eac – this is correct. You also see an RW value of 23068672. Since this file is the base RDM, this is the correct value. The value in the snapshot is incorrect, so you have to go back and change it to match. All snapshots in the chain must match in the same way.
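The consistency rules above are easy to check mechanically. Here is an illustrative Python sketch, not a VMware tool – it assumes you have the text of each descriptor .vmdk file available, and the function names are my own:

```python
def parse_descriptor(text):
    """Pull CID, parentCID, parentFileNameHint, and RW extent sizes
    out of the text of a VMDK descriptor file."""
    info = {"RW": []}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CID="):
            info["CID"] = line.split("=", 1)[1]
        elif line.startswith("parentCID="):
            info["parentCID"] = line.split("=", 1)[1]
        elif line.startswith("parentFileNameHint="):
            info["parent"] = line.split("=", 1)[1].strip('"')
        elif line.startswith("RW "):
            # Extent lines look like: RW 23068672 VMFSRDM "TEST-RDM_1-rdm.vmdk"
            info["RW"].append(int(line.split()[1]))
    return info

def check_link(child, parent):
    """Compare a snapshot descriptor against its parent and report
    the same kinds of inconsistencies vmkfstools complains about."""
    problems = []
    if child.get("parentCID") != parent.get("CID"):
        problems.append("parentCID %s does not match parent CID %s"
                        % (child.get("parentCID"), parent.get("CID")))
    if child["RW"] != parent["RW"]:
        problems.append("capacity mismatch: %s != %s"
                        % (child["RW"], parent["RW"]))
    return problems
```

Feeding it the mismatched pair from my test VM flags the same one-sector capacity difference that vmkfstools reported.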



I change the RW value in the snapshot back to 23068672 – my vmkfstools command now succeeds, and I’m also able to delete the snapshot from the vSphere client.


Zoning a SAN – The Danger Zone

Updated May 3, 2014

My autographed copy has arrived!


Updated January 6, 2014

Per @StevePantol‘s comment below, I won a free signed copy of his and @ChrisWahl‘s Networking for VMware Administrators, due out at the end of March!

Original Post: January 1, 2014
On New Year’s Eve, @ChrisWahl tweeted:

ChrisWahl - Danger Zone


@StevePantol responded with:

Steve Pantol Lyrics

Challenge accepted. I officially submit my SAN-themed Danger Zone parody lyrics. To refresh your memory, here is a link to the actual lyrics.

Setting up the switches
Nexus or an MDS
Cables are all ready
One task left before you go

Building fibre channel zones
Adding hosts into the zones

Typin’ world wide names
Lost up in a sea of hex
Thanks for FCNS
Without it you’d be truly vexed

Building fibre channel zones
You will start
Adding hosts into the zones

You’ll never get the port online
Until you bind it to the vfc
You’ll have to keep up with the fight
Until the VSAN membership is right

Hoping to see flogi
Praying to be done and free
The longer that it takes
The greater the insanity

Building fibre channel zones
You finished
Adding hosts into the zones

Cannot stop the logging since the logging has been stopped

I was attempting to enable statistics on a VNX5300 running the most current release of the R31 train. I found that the stats were apparently already running, but only for a duration of 2 days. I wanted 7 days, so I tried stopping the logging. Unisphere gave me this gem of an error message:

Cannot stop the logging since the logging has been stopped

Unisphere error message

I couldn’t get it working via NaviSecCLI either. On a whim, I fired up an older version of the Unisphere client that was installed on my laptop and it was able to stop and restart statistics collection.

The heartbleed vulnerability

Heartbleed is a major online security vulnerability in a widely used, open-source encryption library named OpenSSL. The OpenSSL library is found in nearly 70% of all webservers on the internet as well as many other software products. The vulnerability allows an attacker to compromise the private encryption key of the webserver. It also allows an attacker to remotely read pieces of data directly out of the server’s memory. These are both extremely serious flaws. The fix requires IT staff to first update the OpenSSL library, then replace the SSL certificate with a new one. This is both time consuming and costly. If you are running any Linux-based webserver, particularly any on the public internet, you need to immediately check the version of OpenSSL and remediate if required.
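Checking is quick: the vulnerable range is OpenSSL 1.0.1 through 1.0.1f, and 1.0.1g is the fixed release. As a rough triage aid, here is a minimal Python sketch (the helper name is mine; feed it the version number reported by `openssl version`):

```python
import re

def heartbleed_vulnerable(version):
    """True if an OpenSSL version string falls in the Heartbleed range.

    1.0.1 through 1.0.1f are vulnerable; 1.0.1g and later, and the
    other branches (0.9.8, 1.0.0), are not affected.
    """
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    return bool(m) and m.group(1) <= "f"
```

For example, heartbleed_vulnerable("1.0.1e") is True, while "1.0.1g" and "0.9.8y" return False. Note that a patched distro package may keep a vulnerable-looking version number, so treat this as a first pass, not a verdict.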

Note that this vulnerability does not exist on a standard Windows web server running IIS.

The most recent release of VMware ESXi, version 5.5, is affected by the bug and there is no patch currently available. Standard security practice is to segment the management IP addresses from the rest of your network to prevent a malicious user from compromising your vSphere host. If you are running 5.5 and you have not segmented the management network, do so immediately. This is the only workaround available as of April 10th, 2014 at 10:15AM Central.

Technical details on the bug can be found here:

Here are official releases from various vendors on this vulnerability:

VMware –

Cisco –

Juniper –

Citrix –

Fortinet –

Barracuda –  No official statement found, although I have verbal confirmation that multiple Barracuda products are vulnerable. A patch already exists for the Message Archiver product.

Palo Alto Networks –

VMware recertification

VMware just announced a new recertification policy for the VCP. A VCP certification expires 2 years after it is achieved. You can recertify by taking any VCP or VCAP exam.

Part of VMware’s justification for this change is “Recertification is widely recognized in the IT industry and beyond as an important element of continuing professional growth.” While I do agree with this statement in general, I don’t believe this decision makes much sense for several reasons:

  • Other vendors’ certifications – Cisco and Microsoft being two examples – expire after 3 years, not 2. Two years is unnecessarily short. It’s also particularly onerous given the VMware course requirement for VCP certification. It’s hard enough to stay current with all of the vendors’ recertification policies at 3 years.


  • Other vendors – again, Cisco and Microsoft as examples – have no version number tied to their certifications. You are simply “MCSE” or “CCNA”. With VMware, you are “VCP3”, “VCP4”, or “VCP5”. The certifications naturally age themselves out. A VCP3 is essentially worthless at this point. The VCP4 is old, and the VCP5 is current. An expiration policy doesn’t need to be in place for this to remain true.


  • The timing of this implementation is not ideal. VMware likes to announce releases around VMworld, so we’re looking at August 2014 for 6.0.  Most VMware technologists will be interested in keeping certifications with the current major release, so demand for the VCP6 will be high. Will the certification department release 6 in time for everybody to test before expiration? It’s really a waste of my time and money to force me to recertify on 5 when 6 is right around the corner.


  • The expiration policy makes no sense in light of the policy on VCAPs and VCPs. Currently, any VCP makes you eligible to take a VCAP in any of the three tracks, and achieving the VCAP in a track automatically gives you a VCP in the same track. This is a significant timesaver for those of us who are heavily invested in VMware – skip the entry level exam and go straight to the advanced exam. VCAP exam development is obviously even slower than VCP exam development. I have doubts that the VCAPs will come out quickly enough to meet the March 2015 deadline.


  • Adam Eckerle commented in his blog post: “I also think it is important to point out that I think it encourages individuals to not only keep their skills up to date but also to branch out. If your VCP-DCV is going to expire why not take a look at sitting the VCP-DT or Cloud, or IaaS exams? If you don’t use the Horizon products or vCloud Suite as part of your job that can be difficult.” I agree that in some cases, this might encourage you to pursue a certification in a separate track. Before I had the desktop certifications, I might have considered accelerating my exam preparation to meet this recertification date. However, I already own 4 of the 6 VCAPs. Even as a consultant I have no use for vCloud; there’s just not enough demand from our customers to build a practice area around it. There’s currently no business benefit in pursuing the Cloud track.

It’s VMware’s program and they can do as they please, but I hope they consider 3 years instead of 2 for recertification.

Christian Mohn’s blog has a fairly lively discussion going on, and Vladan Seget also has some thoughts and comments.