Xfinity Stream – Roku

I was trying to get a Roku working with my Xfinity service. The app is in beta.

The list of supported devices is in this article:

It listed the Roku Ultra 4660 as a supported device. However, Roku no longer makes the 4660; it’s been replaced by the 4661. I was hesitant to try something off the supported list, even though the 4661 is basically the same hardware as the 4660 and they throw in a pair of headphones. The Ultra 4661 works just fine.

I ran into another issue – I’m in the middle of a move and I had two Xfinity accounts linked to a single username, one at my old property and one at my new property. I only kept internet service, not TV service, at the old property. The Xfinity authorization link isn’t smart enough to understand multiple accounts unless both accounts have TV service. I had to have Xfinity customer service unlink the accounts; then the Roku worked.

Life and death monitoring

The Brookfield Zoo posted this statement on Facebook today and also emailed it out to all members.

On July 10, there was a drop in the oxygen level at Brookfield Zoo’s Stingray Bay habitat. Veterinary staff was promptly on the scene to provide medical treatment to the affected stingrays. Additionally, immediate action was taken by animal care staff to rectify the situation and get the levels back to normal. Despite tireless efforts by staff, all the animals, which included four southern stingrays and 50 cownose rays, succumbed.

“We are devastated by the tragic loss of these animals,” said Bill Zeigler, senior vice president of animal programs for the Chicago Zoological Society, which operates the zoo. “Our staff did everything possible to try and save the animals, but the situation could not be reversed.”

Staff is currently analyzing the life support system to determine the exact cause of the malfunction. At this time, the Chicago Zoological Society has made the decision to not reopen the summer-long temporary exhibit for the remainder of the season. The popular exhibit has been operating since 2007.

The zoo posted a further clarification on what kind of monitoring was used in the enclosure.

Brookfield Zoo 15 minute monitoring - Facebook

I’m not a zoologist, and I have no experience with monitoring systems for animals. But I do have vast experience monitoring critical services. A simple Google search finds $200 monitors for home aquariums that take readings every 6 seconds. It seems reasonable to assume that a commercial system would offer something similar.

In IT, we take painstaking care to ensure that our critical servers stay online. Most of what we do has nothing to do with life and death. Even though we’re only protecting our company financials and reputation, any halfway decent system that I stand up has:

  • Independent power feeds from separate areas of the building
  • Dual power distribution units, dual power on all equipment
  • N+1 architecture to sustain the loss of one host in the cluster
  • Redundant storage controllers with redundant paths
  • Battery backup
  • Appropriate monitoring

In general, a monitoring interval of 5 minutes is the maximum I would ever allow for a Production server. Critical servers could be monitored as frequently as 1 minute. Load balancers watch services as frequently as every 5-10 seconds. All of this work is done to ensure availability of the services.

Anything can go wrong, and it’s possible that a more frequent monitoring interval would not have made a difference in this case. But at first glance, a 15-minute interval seems negligent. If I can monitor my goldfish every 6 seconds, it seems that the zoo should have been monitoring the rays more closely. A 15-minute monitoring interval means that you can’t expect a human response for at LEAST 20 minutes, and that doesn’t seem sufficient when the lives of 54 animals depend on it.
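The arithmetic behind that 20-minute claim is simple enough to sketch. In the worst case, a failure begins the instant after a poll completes, so it sits undetected for one full interval before anyone is even alerted. The 5-minute human response figure below is my own assumption, not anything the zoo published:

```python
def worst_case_detection_s(poll_interval_s: int, alert_delay_s: int = 0) -> int:
    """Worst case: the failure begins just after a poll completes, so it
    goes unnoticed for one full interval, plus any alerting delay."""
    return poll_interval_s + alert_delay_s

# Assumed 5-minute human response time once the alert fires (my number)
HUMAN_RESPONSE_S = 5 * 60

zoo_style = worst_case_detection_s(15 * 60) + HUMAN_RESPONSE_S  # 1200 s = 20 min
aquarium = worst_case_detection_s(6) + HUMAN_RESPONSE_S         # 306 s ≈ 5 min

print(zoo_style // 60, aquarium // 60)  # 20 5
```

Even granting a generous alerting and response budget, the polling interval dominates the zoo’s worst case; the aquarium-grade monitor’s worst case is almost entirely human response time.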

Chicago parking enforcement lies, hits me with bogus ticket

Update May 21, 2015

I received a letter from the City of Chicago saying the ticket was dismissed. Unfortunately, there was no mention of disciplining the officer, but at least I’m not on the hook for the ticket.

Original Post: March 15, 2015

I received a $100 parking ticket in Chicago the other day. However, I wasn’t parked illegally. I’m sure I’m one of thousands of people who have been hit with bogus parking tickets. However, the officer who did this isn’t as sneaky as he thinks he is. Maybe it’s a she, I don’t know. In any event, Officer Miles, ID #639, has presented false evidence.

Here’s the citation. I have obscured the full ticket number as well as my license plate.

The actual citation

Here is the picture of the sign that I was allegedly violating.  Officer Miles took this picture and attached it to my citation. Note the timestamp.

Officer Evidence #1


This is a picture that Officer Miles took of my license plate. Note the timestamp – 5 minutes and 10 seconds after the first picture.

Officer Evidence #2

For starters, these pictures prove nothing. They don’t prove that I was parked illegally, all they prove is that Officer Miles took a picture of a sign, and then 5 minutes and 10 seconds later took a picture of my license plate.

Here are the pictures I took when I found the ticket on my car. In this one, you can see my vehicle is clearly in front of the pole with a sign pointing to a tow zone in the other direction. If there was a parking restriction in effect, it seems that a great place to put the sign would be on the same pole that shows a parking restriction in the other direction.


Here’s a look at the other side of the pole. This proves that there is only 1 sign, the tow zone, and it’s pointing in the opposite direction. There is no sign showing a 9-6PM restriction.


Here’s a picture shooting up the street with my car on the left, right next to the tow zone pole. There are no other sign poles as far as you can see.

My final picture was taken right next to my passenger door. You can see much more clearly down the block – no signs – and you can also see the lampposts. They look nothing like the lamppost in the picture provided by Officer Miles.


I believe the officer falsified the ticket. He snapped a photograph of the parking restriction sign on another street, then snapped a picture of my vehicle more than 5 minutes later. I am contesting the ticket and hope the judge sees the evidence the same way I see it.


Zoning a SAN – The Danger Zone

Updated May 3, 2014

My autographed copy has arrived!


Updated January 6, 2014

Per @StevePantol’s comment below, I won a free signed copy of his and @ChrisWahl’s Networking for VMware Administrators, due out at the end of March!

Original Post: January 1, 2014
On New Year’s Eve, @ChrisWahl tweeted:

ChrisWahl – Danger Zone


@StevePantol responded with:

Steve Pantol Lyrics

Challenge accepted. I officially submit my SAN-themed Danger Zone parody lyrics. To refresh your memory, here is a link to the actual lyrics.

Setting up the switches
Nexus or an MDS
Cables are all ready
One task left before you go

Building fibre channel zones
Adding hosts into the zones

Typin’ world wide names
Lost up in a sea of hex
Thanks for FCNS
Without it you’d be truly vexed

Building fibre channel zones
You will start
Adding hosts into the zones

You’ll never get the port online
Until you bind it to the vfc
You’ll have to keep up with the fight
Until the VSAN membership is right

Hoping to see flogi
Praying to be done and free
The longer that it takes
The greater the insanity

Building fibre channel zones
You finished
Adding hosts into the zones
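For anyone who hasn’t lived the lyrics, the workflow they describe looks roughly like this on a Cisco Nexus switch with FCoE. This is a sketch only – the interface numbers, VSAN 10, zone and zoneset names, and WWNs below are made-up placeholders:

```
! Placeholder values throughout - substitute your own VSAN, names, and WWNs
vsan database
  vsan 10
  vsan 10 interface vfc101

! You'll never get the port online until you bind it to the vfc
interface vfc101
  bind interface Ethernet1/1
  no shutdown

! Zone a host HBA to a storage port by pWWN
zone name ESX01_HBA0_to_ARRAY_SPA vsan 10
  member pwwn 20:00:00:25:b5:aa:aa:01
  member pwwn 50:00:09:72:bb:bb:01:01

zoneset name FABRIC_A vsan 10
  member ESX01_HBA0_to_ARRAY_SPA

zoneset activate name FABRIC_A vsan 10

! Hoping to see flogi - verify the fabric login and the name server
show flogi database
show fcns database
```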

The Heartbleed vulnerability

Heartbleed is a major online security vulnerability in a widely used, open-source encryption library named OpenSSL. The OpenSSL library is found in nearly 70% of all webservers on the internet, as well as many other software products. The vulnerability allows an attacker to compromise the private encryption key of the webserver and to remotely read pieces of data directly out of the server’s memory. These are both extremely serious flaws. The fix requires IT staff to first update the OpenSSL library, then replace the SSL certificate with a new one. This is both time consuming and costly. If you are running any Linux-based webserver, particularly any on the public internet, you need to immediately check the version of OpenSSL and remediate if required.
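If you’re not sure whether a server is exposed, `openssl version` shows the library build. As a rough first pass: OpenSSL 1.0.1 through 1.0.1f are vulnerable, 1.0.1g is fixed, and the 0.9.8 and 1.0.0 branches never had the heartbeat bug. Here’s a small sketch of that check – note it only inspects the version string, and some distributions backport the fix without changing the version letter, so treat your vendor’s advisory as authoritative:

```python
import re

def is_heartbleed_vulnerable(version: str) -> bool:
    """Rough check by version string: OpenSSL 1.0.1 through 1.0.1f are
    vulnerable; 1.0.1g is fixed; the 0.9.8 and 1.0.0 branches are
    unaffected. Distro backports can make a 'vulnerable' string safe."""
    m = re.match(r"1\.0\.1([a-z]?)$", version)
    if not m:
        return False
    letter = m.group(1)
    return letter == "" or letter <= "f"

# Pass just the version token from `openssl version` output
print(is_heartbleed_vulnerable("1.0.1e"))  # True
print(is_heartbleed_vulnerable("1.0.1g"))  # False
print(is_heartbleed_vulnerable("0.9.8y"))  # False
```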

Note that this vulnerability does not exist on a standard Windows web server running IIS.

The most recent release of VMware ESXi, version 5.5, is affected by the bug, and there is no patch currently available. Standard security practice is to segment the management IP addresses from the rest of your network to prevent a malicious user from compromising your vSphere host. If you are running 5.5 and you have not segmented the management network, do so immediately. This is the only workaround available as of April 10th, 2014 at 10:15AM Central.

Technical details on the bug can be found here:

Here are official releases from various vendors on this vulnerability:

VMware –

Cisco –

Juniper –

Citrix –

Fortinet –

Barracuda – No official statement found, although I have verbal confirmation that multiple Barracuda products are vulnerable. A patch already exists for the Message Archiver product.

Palo Alto Networks –

Samsung S4 turns on when fully charged

This has been slowly driving me insane since I got my S4. I charge my phone overnight on my nightstand. When it is fully charged, the screen turns on and stays on, bathing the room with enough light to summon Batman. I finally found a solution buried in an Android forum.

First, find the Developer options in your settings.

Developer options



If you do not have Developer options, tap on “About Phone”.

About phone

To enable developer mode, tap on the Build number 7 times. Seriously, you have to tap on it 7 times. You’ll get a message that tells you that you are now a developer.

Now go into the Developer options screen and uncheck “Stay awake”. With this setting unchecked, the phone won’t wake up and neither will you.




Cloudy, with a chance of software…

When you work in IT, and particularly when you work in consulting, people are into the next big thing. If you’re not working on the next big thing, you feel like you’re missing something critical.

The buzzword du jour for the last few years has been ‘cloud’. Put it out in the cloud, the cloud makes business more agile, the cloud saves money, etc. The cloud has its place, but it comes with tradeoffs. You have no control over the environment. You have to continually pay for licensing – in the cloud, you own nothing. You don’t even own the data – legalities aside, the data is sitting on equipment that you don’t own. The cloud provider could go offline at any point, leaving you high and dry. I don’t really think that Amazon will go out of business soon – but what would happen if it did? Backups, obviously, but now you have to look at how to back up and recover your cloud environment.

In the past, if you bought Exchange 2007, you owned it. If your cash flow meant you couldn’t afford Exchange 2010 when it came out, then you kept running 2007. In the new cloud world, you’re bound to a monthly or annual subscription. A cash flow problem means your business stops. I don’t object to looking for cloud solutions to many business problems, but people seem to be rushing to the cloud without considering all of the ramifications.


Death Certificates for Exam Cancellation – Another Reason to Loathe Pearson Vue


VMware uses Pearson Vue for all of their certification exams. I have had several interactions with VMware’s certification personnel due to my participation in the VMware Beta exam process. I forwarded this blog post to Randy Becraft, Senior Program Manager, VMware Certification Team. After discussion with the Vue program manager assigned to VMware, Randy provided me with the following bullet points:

  • Pearson VUE delivers thousands of exams to hundreds of clients each month. Theirs is a business that has to have policies that apply to the large volume of candidates.
  • Some test centers have very high volume. Cancellations—particularly at the last minute—cost the test center revenue.
  • Historically enough candidates cancelled so many tests the same day that Pearson VUE had to implement a policy to provide a “buffer” from that business risk, hence the 24-hour cancellation policy.
  • When a cancellation must occur within the 24-hour period for a legitimate reason such as a death in the family, some form of documentation is required to ensure the cancellation privilege is not abused. In the case of a death in the family the policy does not specifically require a death certificate, though that is what was communicated in Patrick’s specific case. For instance, a newspaper death notice is acceptable.

UPDATE 10/28/2013

During my April encounter with Vue, I spoke with a customer service manager. I called him last week and left a voicemail asking for a call back.

The staff running @PearsonVue‘s Twitter account saw a flurry of retweets of this blog post. I received a DM this morning saying that I’d be contacted by one of the Vue staffers.

The customer service manager who got my voicemail just sent me an email. I did not explain my situation in the voicemail; I assume that the social media staff at Vue forwarded the Twitter activity to him. The email says:

Hi Patrick,

I got your VM this morning.  Sorry I was in training last thursday and Friday and missed your call.

While it absolutely is policy to need some sort of documentation to waive the reschedule policy for a death in the family, I booked you for a new exam for end of November as a customer service gesture..  You can go online, call our call center or give me a call to reschedule to a date/time that works better for you.  I am very sorry for your loss.  Please let me know if you have any questions or if there is anything else I can do for you.

Although I’m pleased that the manager did what I believe to be the right thing, I have to think it’s primarily because of the bad publicity on Twitter.  Another victory for social media.

Original post 10/26/2013

I failed my first attempt at the Cisco 640-916 DCICT exam by 4%. I studied in the evenings for a few weeks afterward, prepping for the retake. I worked a maintenance window for a client on the evening of October 23rd, finishing around 10PM. I was scheduled at the same client on the 24th, but that was a backup date in case the 23rd had problems. With no work left to do, I decided to book the exam for 1:30PM on the 24th. This would give me the morning to try cramming useless factoids into my brain.

I was unaware that just as I was booking the exam, a family member was dying. It was a hospice situation; his passing was expected, but the speed with which it happened was not.

I got the call at 7AM.

I notified work. They didn’t ask for a death certificate. I cancelled my son’s appointments. They didn’t ask for a death certificate.

Then I called Pearson Vue. The cancellation policy requires 24-hour notice, an absurdity on its face because I booked the exam inside the cancellation window – 15 1/2 hours before the scheduled time. This policy means I couldn’t have cancelled the appointment ten seconds after making it. I booked it at an exam center with literally dozens of exam slots open – I didn’t take the final slot available on the 24th and prevent somebody else from testing that day.

The Vue person demanded a death certificate. I won’t repeat exactly what I said in reply – I suppose the best way to put it is that I ‘impolitely declined’. Vue said there was nothing else that could be done and my exam fee would be forfeited. I hung up.

My wife and I planned to drive up to another family member’s house, which happened to be close to the testing center. At some point I began stewing over what had happened and decided if I forfeited the exam fee, Vue was somehow winning – beating me, stealing the exam fee. I can’t say the logic was sound, but that’s how my mind was operating at the time. I popped out and sat the exam for the second time. I failed by 10 points out of 1,000.

Since 2009, I have sat 21 exam sessions at Pearson Vue at a total cost of $5,000. I haven’t canceled any sessions, although I’ve had an exam canceled due to Vue’s gross incompetence. I think it’s reasonable to give me the benefit of the doubt that a family member did indeed pass away. I would think that even the questionably skilled techs at Vue could design a way to track same-day cancellations. It could be a single field on a form, one column in a database, even just an entry in the comment field. Perhaps Vue could consider dropping the policy altogether. Are there really that many people cancelling appointments on the same day? People spend countless hours preparing for these exams; I highly doubt that there is a flood of same-day cancellations outside of true emergency situations.

I wish I could say that I was going to avoid a Vue testing center from now on, but that’s obviously not going to happen due to my career requirements.







Fear and Loathing of Pearson Vue

Computer testing vendor Pearson Vue suffered a massive outage this past week – at least most people would call it an outage. Pearson Vue’s spin team tried to say their systems were 100% up, only slow, but countless posts online contradict this.

The issues were first acknowledged on the company’s Facebook page.

An entire day goes by and they claim the issue is fixed.

But shortly thereafter, another acknowledgement of an ongoing problem.

On April 24th, another acknowledgement of a problem.

A second generic post again on April 24th.


I first learned of this outage when I walked into a Vue testing center for an exam on April 24, only to discover that they were unable to deliver because Vue’s servers were not accessible. The center called in to Vue, and customer service said all their systems were frozen and nothing could be done.

Pearson Vue put up another April 24th post suggesting that users try scheduling during non-peak hours.


On April 25th came the first of many outright lies posted by Pearson Vue.


This leads you to believe the system is up but slow. This was not the case. I tried many times to log in without success, as did others such as this Facebook poster.


Here is the rest of the FAQ from April 25th

I called multiple times, only to be told by customer service that they could not log in. This happened to people worldwide; here are a few of the many posts on Facebook.

April25-CustSvcCanNotSchedule April25-CustSvcCanNotSchedule2 April25-CustSvcCanNotSchedule3

Testing centers were not able to deliver exams, either.


Later in the day on April 25th came a post with another outright lie saying “our systems are operational, just not optimal”


That post prompted me to post the following, which was not replied to or acknowledged in any way.

April25-Patrick1 April25-Patrick2 April25-Patrick3

On April 26th, a series of posts came out saying that engineers had found the problem and they were bringing the system back to expected performance levels.

April26-ProblemFound April26-ProblemFound2 April26-ProblemFound3 April26-ProblemFound4

A Facebook post directly under the above message shows a user who still can’t schedule an exam using customer service – the timestamp on this is April 28th, 9:30AM CDT.


On April 28th at 10:30 AM CDT, Pearson Vue had the audacity to ask users to stress test the system for them.



The user impact of this outage has been massive. It was more of an inconvenience for me, but for others there were significant impacts in time, expense, and even their ability to work.

Here is one Facebook post from a user who has no Pearson Vue facility in their country. They have to get a visa to leave the country to sit an exam. In order to get a visa, they have to make an appointment with their embassy. Once they get their appointment, they have to register for the exam and bring printed confirmation. Unable to register for over a week, this user loses the embassy appointment.


Did Vue suffer data loss on top of the outage?
A user needing to test for starting a job next week.


I know for a fact that I saw dozens more posts with similar problems – physicians unable to go to board exams, nurses unable to work because of results delays. I wish I had thought to screencap more while this was going on, but I didn’t. It appears that those posts were either eaten up by Facebook (yeah right) or deleted by Pearson (likely, but can never be proven).  At least one user wrote a post confirming post removal. None of my posts were deleted.


As an IT professional, I find this outage appalling. The company states this was started by an upgrade. Every place I’ve ever worked at upgrades during off hours and rolls back on failure. Pearson deployed a faulty upgrade then forced its users to pay the price while programmers scrambled desperately to fix their poorly written code. Pearson Vue’s suggestion that they carefully planned and tested their upgrades is nonsense. A proper load test reveals these kinds of failures. Their post from today ‘inviting’ us to load test their fixed system also points to the fact that they are unable or unwilling to test their own systems.

The fact that this upgrade caused a global outage for both scheduling and test delivery demonstrates critical failures in their architecture. Are the same webservers used for scheduling also used for exam delivery? Could a breach of one then result in the theft of exam content? Or are they separate servers connected to the same backend database? In any event, their architecture is an abomination. The global failure to deliver exams points to only two possibilities. Do all global Vue delivery centers connect to the same datacenter, meaning the ability to deliver exams globally relies on a single point of failure? If so, this is a catastrophic design flaw. If they do have datacenter redundancy, then they deployed their upgrade across the entire system at the same time. This demonstrates atrocious planning. Why would you risk multiple datacenters with the same upgrade?

Pearson Vue is a billion dollar, Fortune 500 corporation. This kind of an outage is both unacceptable and inexcusable. Considering the power that Vue wields over people’s careers, it’s frightening to witness the depth of ineptitude demonstrated in this disaster.