Workspace One screenshots

Today, VMware announced the launch of Workspace One and I wanted to throw a couple of screenshots out there. As a field engineer, I use Horizon Workspace every day to access my work applications. I’ve been using Workspace One for the last month and I’m happy with how responsive it is.

This is the Android URL to get the Workspace One App on your phone:
https://play.google.com/store/apps/details?id=com.airwatch.vmworkspace&hl=en

And here's the Apple App Store URL:
https://itunes.apple.com/us/app/vmware-workspace-one/id1031603080?mt=8

This is what my workspace looks like in Google Chrome. I’ve got the Favorites showing, which are the 6 primary apps I use at VMware. Our catalog contains many dozens of apps, so it’s nice to have a quick Favorites list.

Workspace One - Chrome


This is the Workspace One app on my iPhone. It’s an almost identical look and feel, and the favorites I set while in Chrome are the same favorites on my iPhone.

Workspace One- iPhone


At VMware, we use two-factor authentication to access Workspace One. However, I only had to enter my credentials and RSA token once. After that, I can get back into the app with the Touch ID stored on my iPhone.

Workspace One - Credentials

Using snapshots to back up SQL under heavy IOPS loads

I find this problem coming up frequently in the field – you’ve virtualized your SQL server and you’re trying to back it up with your snapshot-based software.  In many cases, the SQL server is under such a heavy load that it’s impossible to commit the snapshot after the backup is taken. There’s just too much IO demand. You end up having to take a SQL outage to stop IO long enough to get the snapshot committed.

Here’s one strategy for setting up your virtual SQL servers to avoid this problem altogether. It uses a disk mode called independent persistent. An independent persistent disk is excluded from snapshots: all data written to it is committed immediately, even if a snapshot is active on the VM. Placing the SQL datafile and logfile drives in independent persistent mode means they are never captured in a snapshot, which eliminates the problem of having to commit a post-backup snapshot.
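If you'd rather script the disk mode change than click through the vSphere Client, here's a minimal pyVmomi sketch. The vCenter address, credentials, VM name, and hard disk labels are placeholders you'd swap for your own.

```python
# Hedged sketch: flip specific virtual disks to independent persistent with pyVmomi.
# The vCenter address, credentials, VM name, and disk labels below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use real certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the SQL VM by name
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")
view.DestroyView()

# Labels of the disks backing the SQL data/log mountpoints (placeholders)
target_labels = {"Hard disk 5", "Hard disk 6"}

changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label in target_labels:
        dev.backing.diskMode = "independent_persistent"  # excluded from snapshots
        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        spec.device = dev
        changes.append(spec)

config = vim.vm.ConfigSpec(deviceChange=changes)
vm.ReconfigVM_Task(spec=config)  # mode changes generally require the VM powered off
Disconnect(si)
```

Keep in mind that vSphere won't let you change a disk's mode while the VM has an existing snapshot, so make this change before the first backup job runs.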

Here’s a disk layout that I’ve used for SQL servers. These drives are set to standard mode, so a snapshot picks them up.

C:\ – 40GB, SCSI 0:0
D:\ – SQL binaries, 30GB, SCSI 0:1
L:\ – Logs, 1GB, SCSI 1:0
O:\ – Datafiles, 1GB, SCSI 2:0
Y:\ – Backups, 1GB, SCSI 3:0
Y:\SQLBAK01 – 2TB+, SCSI 3:1, mounted filesystem under Y:\

Your backup drive is limited to 2TB minus 512 bytes if you're using vSphere 5.1 or earlier, but can go up to 62TB on vSphere 5.5 and later.

L:\Logs01 – SCSI 1:1, independent persistent, variable size, mounted filesystem under L:\
O:\SQLData01 – SCSI 2:1, independent persistent, variable size, mounted filesystem under O:\

Part of the reason we used mountpoints was consistency: no matter what, L: was always logs, O: was always SQL data, and Y: was always backups. There was never any question about whether a particular SQL server had a certain number of drives for a specific purpose; the entire structure lived under a single, consistent drive letter.

Depending on the workload, O:\SQLData01 might hold only one heavily used database on a single LUN, or it might hold a bunch of small databases. When we needed another one, we'd attach another set of mountpoints: O:\SQLData02 on SCSI 2:2, L:\Logs02 on SCSI 1:2, and Y:\SQLBAK02 on SCSI 3:2. Nightly SQL backup jobs wrote their dumps out to Y:\. Since the Y: drives are all in standard mode, the snapshot-based backup software picked up those dumps during the normal backup process.

If you had a complete loss of the entire SQL VM, you could restore it from backup and you'd still have the L:, O:, and Y: drives with their mountpoints (although the mountpoints might not have any disks attached to them), and you'd restore the databases from the SQL dumps on Y:\. Depending on the nature of the loss, you might have to spend some time manually fixing the mounts.

It took a little bit of work to maintain, but our backups worked every time. Part of setting up a new database was that the DBAs wrote a restore script and stored it in the root of Y:, which got backed up as part of the snapshot. Once the VM came back from a Veeam restore, the DBAs would bring up SQL, run the restore scripts, and we were off and running. You also need to coordinate the DBAs' backup schedule carefully with your backup software's schedule: what you don't want is SQL backups being written to the Y: drive while an active snapshot is in place, because you could easily fill up the datastore if your backups are large enough. Some backup software lets you run pre-job scripting, and it's a fairly simple task to add a check for an active SQL backup; if one is running, postpone the backup snapshot and try again later.
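Here's a rough sketch of that pre-job check in Python using pyodbc. The driver string, server name, credentials, and the exit-code convention are placeholders you'd adapt to whatever pre-job hook your backup software provides.

```python
# Hedged sketch: pre-job check that bails out if a SQL backup is currently running.
# Server name, credentials, and exit-code convention are placeholders.
import sys
import pyodbc

conn_str = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sql01;DATABASE=master;UID=backupcheck;PWD=changeme")

# sys.dm_exec_requests shows 'BACKUP DATABASE' / 'BACKUP LOG' while a backup runs
query = """
SELECT COUNT(*)
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
"""

with pyodbc.connect(conn_str) as conn:
    active_backups = conn.cursor().execute(query).fetchone()[0]

if active_backups > 0:
    print(f"{active_backups} SQL backup(s) in progress; postponing the snapshot job")
    sys.exit(1)   # non-zero return asks the backup software to retry later (convention varies)

print("No SQL backups running; safe to snapshot")
sys.exit(0)
```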

NSX per-VM licensing compliance

I had a customer with a production cluster of around 100 VMs. They needed to segment off a few VMs due to PCI compliance, and were looking at a large expense to create a physically separate PCI DMZ. I suggested instead the purchase of our per-VM NSX solution. We sell it in packs of 25 and it comes in at a list price of around $10K. This looked great compared to the $75K solution they were considering.

The problem with per-VM licensing is compliance. VMware doesn't have a KB explaining how to make sure you aren't using more licenses than you bought. If you add a 25-pack of NSX licenses to a cluster with 100 VMs in it, the vCenter licensing portal will show that you're using 100 licenses but only purchased 25. VMware KB 2078615 does say “There is no hard enforcement on the number of Virtual Machines licensed and the product will be under compliance.” However, that KB is about how per-socket licensing displays when you add it to vCenter, not about per-VM pricing.

I’ve had a few conversations with the NSX Business Unit (NSBU) and the intent of per-VM licensing is to allow customers to use the NSX distributed firewall without physically segmenting clusters. You can run NSX-protected VMs in a cluster alongside non-NSX-protected VMs. However, you have to take some steps to ensure that you’re remaining in licensing compliance. This post shows you how to do it.

One way to do this is to avoid using ‘any’ in your firewall rules. If all of your firewall rules are VM to VM or security group to security group, all you have to do is keep the total VM count below your purchased VM count. It can be difficult to craft a firewall policy without ever using ‘any’, but this is the simplest approach if your requirements allow it.
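If you go this route, a quick scheduled check can warn you before you drift out of compliance. Here's a minimal pyVmomi sketch that assumes "total VM count" means the VMs in the NSX-prepared cluster; the vCenter address, credentials, cluster name, and the 25-pack size are placeholders.

```python
# Hedged sketch: warn when a cluster's VM count exceeds the NSX per-VM pack size.
# vCenter address, credentials, cluster name, and LICENSED_VMS are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

LICENSED_VMS = 25
CLUSTER_NAME = "Compute Cluster A"

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == CLUSTER_NAME)
view.DestroyView()

# Count VMs on every host in the cluster, skipping templates
vm_count = sum(1 for host in cluster.host
                 for vm in host.vm
                 if vm.config and not vm.config.template)

print(f"{CLUSTER_NAME}: {vm_count} VMs, licensed for {LICENSED_VMS}")
if vm_count > LICENSED_VMS:
    print("WARNING: cluster VM count exceeds the purchased per-VM NSX licenses")

Disconnect(si)
```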

An alternative way is to use security tags. It’s a bit more involved but lets you have precise control over where your NSX security policy is applied.

First, I create two custom tags: Custom_NSX.FirewallDisabled and Custom_NSX.FirewallEnabled.

nsx-custom-tags

I then assign tags to my VMs as shown below. The disadvantage of this method is that you have to keep making sure VMs get security-tagged, but it does make rule writing easier. I'm only creating two groups, NSX enabled and NSX disabled, but there's nothing stopping you from creating more tags: maybe you have a DMZ1 and DMZ2, or maybe PCI and HIPAA are separate tags.

In this case, I assign all of my PCI VMs the FirewallEnabled tag and the rest of my VMs the FirewallDisabled tag.

assign-security-tag

Now, instead of going to the Firewall section, I go to Service Composer. Don’t be confused by the fact that the security groups already exist – I took the screenshot of the Security Groups tab after I created the groups.

service-composer

First, I create an NSX_Disabled group with a dynamic membership of CUSTOM_NSX.FirewallDisabled.

custom-disabled

Next, I create an NSX_Enabled security group with a dynamic membership of CUSTOM_NSX.FirewallEnabled.

custom-enabled

I then specifically exclude NSX_Disabled from the NSX_Enabled group. This guarantees that no firewall rules can touch my excluded VMs.

nsx-exclude

I create a new security policy in Service Composer.

new-security-policy

In the Firewall Rules section, NSX has a construct called “Policy’s Security Groups”. If we assign the policy to the NSX_Enabled security group, we can safely use an ‘any’ rule as long as the other side of the rule is ‘Policy’s Security Groups’: the source can be ‘any’ if the destination is Policy’s Security Groups, or the destination can be ‘any’ if the source is Policy’s Security Groups. The exclusion we built into the security groups guarantees that NSX won’t apply rules to VMs that aren’t in the NSX_Enabled group.

policys-security-groups

I then apply my new policy to the NSX_Enabled security group.

policy_apply security-group-select

Doing a security policy this way is a bit more involved than simply using the Firewall section of NSX, but it's worth considering. It's a perfect way to ensure 100% compliance in a per-VM model, and it helps you unlock the power of NSX: all you have to do is security-tag a VM and it automatically gets its security policy.
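If you want to keep an eye on compliance over time with the tag-based approach, you can ask NSX Manager how many VMs the NSX_Enabled group actually resolves to and compare that against your pack size. The sketch below is based on my recollection of the NSX-v 6.x REST API (the security group "translation/virtualmachines" call and the vmnode elements it returns), so treat the endpoint path, NSX Manager address, credentials, and group ID as assumptions to verify against the API guide for your NSX version.

```python
# Hedged sketch: compare effective NSX_Enabled membership to the purchased per-VM pack.
# The NSX Manager address, credentials, security group objectId, endpoint path, and
# <vmnode> element name are assumptions from memory of the NSX-v 6.x API guide.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsxmgr.example.com"
GROUP_ID = "securitygroup-10"      # objectId of the NSX_Enabled security group
LICENSED_VMS = 25

url = (f"https://{NSX_MANAGER}/api/2.0/services/securitygroup/"
       f"{GROUP_ID}/translation/virtualmachines")

resp = requests.get(url, auth=("admin", "changeme"), verify=False)  # lab shortcut
resp.raise_for_status()

# The response is XML; each <vmnode> should be one VM the group resolves to
vms = ET.fromstring(resp.content).findall(".//vmnode")
print(f"NSX_Enabled resolves to {len(vms)} VMs, licensed for {LICENSED_VMS}")
if len(vms) > LICENSED_VMS:
    print("WARNING: more VMs are firewall-enabled than you have per-VM licenses for")
```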


OpenStack introduction

I have a large deal on the table and the customer is asking about VMware's support for OpenStack. Since I know nothing about OpenStack, other than the fact that VMware offers VMware Integrated OpenStack (VIO), I decided it was time to find some training. Fortunately, I have plenty of specialists inside VMware who can help answer customer questions around VIO.

There's plenty of VIO training internally at VMware, but I needed something even more basic, just an intro. I went to my trusty Pluralsight subscription and found Eric Wright's Introduction to OpenStack. It's a great course for coming up to speed on the basics of OpenStack in only 2.5 hours.

Unable to connect virtual NIC in vCloud Air DRaaS

I had a customer open a service request: they were in the middle of a DR test using vCloud Air DRaaS and were unable to connect one virtual machine to the network. It kept failing with a generic "unable to connect" error.

It turns out that their VM had a VMDK sized with a decimal point, like 50.21GB instead of just 50GB. I don't see it often, but this sometimes happens when you P2V a machine. The vCloud Director backend can't handle the decimal point in the disk size, so it errors out.

I'm not entirely sure why the error happens, but the fix is to resize the source disk to a whole number of gigabytes and run replication again.
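If you want to find these disks before you kick off replication, here's a minimal pyVmomi sketch that flags any virtual disk whose size isn't a whole number of gigabytes. The vCenter address, credentials, and VM name are placeholders.

```python
# Hedged sketch: flag virtual disks whose size isn't a whole number of GB.
# vCenter address, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

KIB_PER_GIB = 1024 * 1024  # VirtualDisk.capacityInKB is reported in KiB

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "p2v-webserver01")
view.DestroyView()

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        size_gb = dev.capacityInKB / KIB_PER_GIB
        if dev.capacityInKB % KIB_PER_GIB:
            print(f"{dev.deviceInfo.label}: {size_gb:.2f} GB - resize to a whole GB before replicating")
        else:
            print(f"{dev.deviceInfo.label}: {size_gb:.0f} GB - OK")

Disconnect(si)
```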

A quick NSX microsegmentation example

This short post demonstrates the power of NSX. My example is a DMZ full of webservers: you don't want any of your webservers talking to each other. If one of your webservers happens to be compromised, you don't want the attacker to have an internal launching pad to attack the rest of them. The webservers only need to communicate with your application or database servers.

We'll use my lab's Compute Cluster A as a sample. Just pretend it's a DMZ cluster with only webservers in it.

Compute Cluster A


I've inserted a rule into my Layer 3 ruleset and named it “Isolate all DMZ Servers”. In my traffic source, you can see that you're not stuck with IP addresses or groups of IP addresses like a traditional firewall: you can use vCenter groupings like clusters, datacenters, resource pools, or security tags, to name a few.

Rule Source

I add Compute Cluster A as the source of my traffic. I do the same for the destination.

NSX Source Cluster


My rule is now ready to publish. As soon as I hit Publish Changes, all traffic from any VM in this cluster will be blocked if it's destined for any other VM in this cluster.

Ready to publish


Note that these were only Layer 3 rules, so we've secured traffic going between subnets. However, nothing's stopping webservers on the same subnet from talking to each other. No worries here though; we can implement the same rule at Layer 2.

Once this rule gets published, even VMs that are layer 2 adjacent in this cluster will be unable to communicate with each other!

NSX layer 2 block

This is clearly not a complete firewall policy as our default rule is to allow all. We’d have to do more work to allow traffic through to our application or database servers, and we’d probably want to switch our default rule to deny all. However, because these rules are tied to Virtual Center objects and not IP addresses, security policies apply immediately upon VM creation. There is no lag time between VM creation and application of the firewalling policy – it is instantaneous!  Anybody who’s worked in a large enterprise knows it can take weeks or months before a firewall change request is pushed into production.

Of course, you still have flexibility to write IP-to-IP rules, but once you start working with Virtual Center objects and VM tags, you’ll never want to go back.