Changing a VMC segment type from the Developer Center

Here’s a way to use the API Explorer to test out API calls. It’s one technique for figuring out how an API works before you start writing code in your favorite language.

First, I create a disconnected segment in my SDDC.

Then I go to the Developer Center in the VMC console, open the API Explorer, select the NSX VMC Policy API, and pick my SDDC from the dropdown.

Now I need a list of all segments – I find this under GET /infra/tier-1s/{tier-1-id}/segments.

I provide the value ‘cgw’ for tier-1-id and click Execute.

It’s easiest to view the results by clicking the down arrow to download the resulting JSON file.

Inside the file I find the section containing my test segment ‘pkremer-api-test’.

        {
            "type": "DISCONNECTED",
            "subnets": [
                {
                    "gateway_address": "192.168.209.1/24",
                    "network": "192.168.209.0/24"
                }
            ],
            "connectivity_path": "/infra/tier-1s/cgw",
            "advanced_config": {
                "hybrid": false,
                "local_egress": false,
                "connectivity": "OFF"
            },
            "resource_type": "Segment",
            "id": "15d1e170-af67-11ea-9b05-2bf145bf35c8",
            "display_name": "pkremer-api-test",
            "path": "/infra/tier-1s/cgw/segments/15d1e170-af67-11ea-9b05-2bf145bf35c8",
            "relative_path": "15d1e170-af67-11ea-9b05-2bf145bf35c8",
            "parent_path": "/infra/tier-1s/cgw",
            "marked_for_delete": false,
            "_create_user": "pkremer@vmware.com",
            "_create_time": 1592266812761,
            "_last_modified_user": "pkremer@vmware.com",
            "_last_modified_time": 1592266812761,
            "_system_owned": false,
            "_protection": "NOT_PROTECTED",
            "_revision": 0
        }
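If you want the same list outside the API Explorer, the call scripts easily. Here’s a minimal sketch in Python with the requests library, assuming you have a CSP API token from the VMC console and your SDDC’s NSX Manager reverse-proxy URL (both values below are placeholders – substitute your own):

import requests

# Placeholders: your CSP API token and the NSX Manager reverse-proxy URL
# shown in your SDDC's settings.
REFRESH_TOKEN = "your-csp-api-token"
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url-for-your-sddc>"

# Exchange the refresh token for a short-lived access token
auth = requests.post(
    "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize",
    params={"refresh_token": REFRESH_TOKEN},
)
auth.raise_for_status()
headers = {"csp-auth-token": auth.json()["access_token"]}

# Same call the API Explorer makes: GET /infra/tier-1s/cgw/segments
resp = requests.get(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/tier-1s/cgw/segments",
    headers=headers,
)
resp.raise_for_status()
for segment in resp.json()["results"]:
    print(segment["id"], segment["display_name"], segment.get("type"))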

Now I need to update the segment to routed, which I do by finding PATCH /infra/tier-1s/{tier-1-id}/segments/{segment-id}. I fill in the tier-1-id and segment-id values as shown (the segment-id comes from the “id” field in the JSON output above).

This is the JSON I pasted into the Segment box:

{
    "type": "ROUTED",
    "subnets": [
        {
            "gateway_address": "192.168.209.1/24",
            "network": "192.168.209.0/24"
        }
    ],
    "advanced_config": {
        "connectivity": "ON"
    },
    "display_name": "pkremer-api-test"
}

The segment is now a routed segment.
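For the code-first version of the same change, here’s a minimal Python sketch of the PATCH, with the same placeholder base URL and token as the GET example above:

import requests

# ACCESS_TOKEN and NSX_PROXY_URL as obtained in the earlier sketch
ACCESS_TOKEN = "access-token-from-the-csp-exchange"
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url-for-your-sddc>"
headers = {"csp-auth-token": ACCESS_TOKEN, "Content-Type": "application/json"}

segment_id = "15d1e170-af67-11ea-9b05-2bf145bf35c8"  # the "id" from the GET output

# The same payload pasted into the Segment box above
body = {
    "type": "ROUTED",
    "subnets": [
        {"gateway_address": "192.168.209.1/24", "network": "192.168.209.0/24"}
    ],
    "advanced_config": {"connectivity": "ON"},
    "display_name": "pkremer-api-test",
}

# PATCH /infra/tier-1s/{tier-1-id}/segments/{segment-id}
resp = requests.patch(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/tier-1s/cgw/segments/{segment_id}",
    headers=headers,
    json=body,
)
resp.raise_for_status()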

VMC on AWS – VPN, DNS Zones, TTLs

My customer reported an issue with DNS zones in VMC, so I needed to set it up in the lab to check the behavior. The DNS service in VMC allows you to specify DNS resolvers to forward requests to. By default, DNS is directed to 8.8.8.8 and 8.8.4.4. If you’re using Active Directory, you will generally set the forwarders to your domain controllers. But some customers need more granular control over DNS forwarding. For example – you could set the default forwarders to domain controllers for vmware.com, but maybe you just acquired Pivotal, and their domain controllers are at pivotal.com. DNS zones allow you to direct any request for *.pivotal.com to a different set of DNS servers.

Step 1

First, I needed an IPsec VPN from my homelab into VMC. I run Ubiquiti gear at home. I decided to go with a policy-based VPN because my team’s VMC lab has many different subnets with lots of overlap with my home network. I went to the Networking & Security tab, Overview screen, which gave me the VPN public IP as well as my appliance subnet. All of the management components, including the DNS resolvers, sit in the appliance subnet, so I need those reachable across the VPN. Not shown here is another subnet, 10.47.159.0/24, which contains jump hosts for our VMC lab.

I set up the VPN in the Networks section of the UniFi controller – you add a VPN network just like any other wired or wireless network. I add the appliance subnet 10.46.192.0/18 and my jump host subnet 10.47.159.0/24 as the remote subnets. The peer IP is the VMC VPN public IP, and the local WAN IP is my public IP at home.

I could not get SHA2 working and never figured out why. Since this was just a temporary lab scenario, I went with SHA1.

On the VMC side I created a policy-based VPN. I selected the Public Local IP address, which matches the 34.x.x.x VMC VPN IP shown above. The remote public IP is my public IP at home. Remote networks: 192.168.75.0/24, which contains my laptop, and 192.168.203.0/24, which contains my domain controllers. For local networks I added the appliance subnet 10.46.192.0/18 and 10.47.159.0/24. They show up under their friendly names in the UI.

The VPN comes up.

Now I need to open an inbound firewall rule on the management gateway to allow my on-prem subnets to communicate with vCenter. I populate the Sources object with subnet 192.168.75.0/24 so my laptop can hit vCenter. I also set up a reverse rule (not shown) outbound from vCenter to that same group. This isn’t strictly necessary to get DNS zones working – zones can only be configured on the compute gateway, and it’s the compute VMs that need them – but I wanted to reach vCenter over the VPN.

I create similar rules on the compute gateway to allow communication between my on-prem subnets and anything behind the compute gateway – best practice would be to lock down specific subnets and ports.

I try to ping vCenter 10.46.224.4 from my laptop on 192.168.75.0/24 and it fails. I run a traceroute and see that my first hop is my VPN connection into VMware corporate. I run ‘route print’ on my laptop and see that the entire 10.0.0.0/8 is routed to the VPN connection.

This means I either have to disconnect from the corporate VPN to hit 10.x IP addresses in VMC, or route around the corporate VPN with static routes.

At an elevated command prompt, I run these commands:

route add 10.46.192.0 mask 255.255.192.0 192.168.75.1 metric 1 -p
route add 10.47.159.0 mask 255.255.255.0 192.168.75.1 metric 1 -p

This inserts two routes into my laptop’s route table. The -p flag makes the routes persistent, so they survive reboots.

Now when I run route print, I can see the persistent routes for my VMC appliance subnet and jump host subnet.

I can now ping vCenter by private IP from my laptop. I can also ping servers in the jump host subnet.

Now to create a DNS zone. I point it to one of my domain controllers on-prem – in production, you would of course point to multiple domain controllers.

I flip back to DNS services and edit the Compute Gateway forwarder. The existing DNS forwarders point to our own internal lab domain, and we don’t want to break that communication. What we do want is to have queries destined for my homelab AD redirected to my homelab DNS servers. We add the zone to the FQDN Zones box and click save.
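As an aside, the zone itself can also be managed through the NSX Policy API rather than the UI. Here’s a rough Python sketch, with the caveat that the zone ID ‘homelab-ad’ is made up and the payload is my best guess at what the UI creates – verify the exact body in the API Explorer before relying on it:

import requests

# Placeholders as in the earlier sketches; "homelab-ad" is an arbitrary
# zone ID chosen for this example
ACCESS_TOKEN = "access-token-from-the-csp-exchange"
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url-for-your-sddc>"
headers = {"csp-auth-token": ACCESS_TOKEN, "Content-Type": "application/json"}

zone = {
    "display_name": "homelab-ad",
    "dns_domain_names": ["ad.patrickkremer.com"],
    "upstream_servers": ["192.168.203.10"],  # one DC in the lab; use several in production
}
resp = requests.patch(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/dns-forwarder-zones/homelab-ad",
    headers=headers,
    json=zone,
)
resp.raise_for_status()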

Now we run a test – you can use nslookup, but I downloaded the BIND tools so I can use dig on Windows.

First, I dig against my homelab domain controller:

dig @192.168.203.10 vmctest88.ad.patrickkremer.com

Then against the VMC DNS server 10.46.192.12:

dig @10.46.192.12 vmctest88.ad.patrickkremer.com

The correct record appears. You can see the TTL next to the DNS entry at 60 seconds – the VMC DNS server will cache the entry for the TTL that I have configured on-prem. If I dig again, you can see the TTL counting down toward 0.

I do another lookup after the remaining 21 seconds expire and you can see a fresh record was pulled with a new TTL of 60 seconds.
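If you’d rather not rerun dig by hand to watch the countdown, the check is easy to script. A small sketch with the dnspython package (an assumption on my part – installed via pip install dnspython):

import time
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.46.192.12"]  # the VMC DNS forwarder

# Query twice, a few seconds apart; the cached TTL should count down
for _ in range(2):
    answer = resolver.resolve("vmctest88.ad.patrickkremer.com", "A")
    print(answer.rrset.ttl, [record.address for record in answer])
    time.sleep(5)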

Let’s make a change. I update vmctest88 to point to 192.168.203.88 instead of .188, and I update the TTL to 1 hour.

On-prem results:

VMC results:

This will be cached for 1 hour in VMC.

I switch it back to .188 with a TTL of 60 seconds on-prem, which is reflected instantly.

But in VMC, the query still returns the wrong .88 IP, with the TTL timer counting down from 3600 seconds (1 hour).

My customer had the same caching problem, except their cached TTL was 3 days and we couldn’t wait for it to fix itself. We needed to clear the DNS resolver cache, and to do that, we go to the API. A big thank you to my coworker Matty Courtney for helping me get this part working.

You could, of course, do this programmatically. But if consuming APIs in Python isn’t your thing, you can do it from the UI. Go to the Developer Center in the VMC console, then API explorer. Pick your Org and SDDC from the dropdowns.

Click on the NSX VMC Policy API.

In the NSX VMC Policy API, find Policy > Networking > IP Management > DNS > Forwarder, then this POST operation on the tier-1 DNS forwarder.

Fill out the parameter values:
tier-1-id: cgw
action: clear_cache
enforcement_point: /infra/sites/default/enforcement-points/vmc-enforcementpoint

Click Execute.

We see Status: 200, OK – success on the clear cache operation. We do another dig against the VMC DNS server – even though we were still within the old 1-hour cache period, the cache has been cleared. The VMC DNS server pulls the latest record from my homelab, and we see the correct .188 IP with a new TTL of 60 seconds.
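If you need to clear the cache regularly, the same call is scriptable. A sketch with the same placeholder values as the earlier snippets, passing the parameters exactly as the API Explorer shows them:

import requests

ACCESS_TOKEN = "access-token-from-the-csp-exchange"  # see the earlier sketches
NSX_PROXY_URL = "https://<nsx-reverse-proxy-url-for-your-sddc>"
headers = {"csp-auth-token": ACCESS_TOKEN}

# POST the clear_cache action against the tier-1 DNS forwarder
resp = requests.post(
    f"{NSX_PROXY_URL}/policy/api/v1/infra/tier-1s/cgw/dns-forwarder",
    headers=headers,
    params={
        "action": "clear_cache",
        "enforcement_point": "/infra/sites/default/enforcement-points/vmc-enforcementpoint",
    },
)
resp.raise_for_status()  # expect 200 OK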

AD authentication for vCenter in VMC on AWS

VMware has good documentation on setting up Hybrid Linked Mode in VMC, but the docs are a little bit confusing if all you want is Active Directory authentication into the VMC vCenter. This post shows how I was able to configure AD authentication for a VMC on AWS vCenter.

Step 1

I first wanted to build a domain controller in the connected VPC, allowing AD communication across the ENI. If you already have a domain controller accessible via VPN or Direct Connect, you do not need to worry about this part of the configuration and can skip to Step 2. But I wanted to demonstrate AD communication across the ENI as part of this post. To figure out which EC2 subnet I needed my domain controller in, I looked at the Networking & Security Overview.

I created a Windows 2016 EC2 instance, gave it an IP of 172.20.0.249, and promoted it to a domain controller. My test domain was named poc.test. I needed to open the firewall to allow the management network in VMC to communicate with the domain controller. Best practice would obviously be to restrict communication to only Active Directory ports, but I opened it all up to keep things simple. The 0.0.0.0/0 rule for RDP was to allow domain controller access from the public internet – obviously not something you’d want to do in production, but this is just a temporary lab. The default outbound rule in EC2 allows everything, which I left in place.

I also needed to open the compute gateway firewall to allow bidirectional communication across the ENI, which I’ve done below.

Step 2

Once you have a Domain Controller available, you need to point the management gateway DNS to your domain controller. In this example I also pointed the Compute Gateway DNS to the domain controller.

Step 3

Even though you’re not setting up Hybrid Linked Mode, it’s a good idea to use some of the HLM troubleshooting tools to ensure connectivity to the domain controller. I ran the 5 tests shown below against my DC IP, 172.20.0.249.
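If you’d rather double-check connectivity yourself from a jump host, a quick TCP probe of the common AD ports does the trick. A minimal Python sketch (the port list is a rough subset, not the complete set AD requires):

import socket

DC_IP = "172.20.0.249"

# DNS, Kerberos, LDAP, SMB, and Global Catalog
for port in (53, 88, 389, 445, 3268):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(3)
        status = sock.connect_ex((DC_IP, port))
        print(f"{DC_IP}:{port} {'open' if status == 0 else 'closed/filtered'}")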

Step 4

Now we need to configure an identity source in the VMC vCenter. Log in as cloudadmin@vmc.local. You can find this under Menu > Administration, then Single Sign On > Configuration, then Identity Sources. Click Add to add an identity source.

Select Active Directory over LDAP in the Identity Source Type dropdown.

Fill out the identity source according to your Active Directory environment. In production, you would want to configure a secondary LDAP server, and you would never use a Domain Admin account as the LDAP user.
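One optional sanity check before you save: verify the bind credentials and Base DN outside vCenter. A sketch with the ldap3 package (an assumption – pip install ldap3; the DNs below are examples for the poc.test lab domain):

from ldap3 import ALL, Connection, Server  # pip install ldap3

# Example values for the poc.test lab domain; adjust to your environment
server = Server("ldap://172.20.0.249:389", get_info=ALL)
conn = Connection(
    server,
    user="CN=Administrator,CN=Users,DC=poc,DC=test",
    password="your-ldap-password",
)

if conn.bind():
    # Search the Base DN you plan to give vCenter
    conn.search("DC=poc,DC=test", "(objectClass=user)", attributes=["sAMAccountName"])
    print(len(conn.entries), "users found")
else:
    print("Bind failed:", conn.result)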

Once the identity source is added, you will see it in the list.

Log out as cloudadmin@vmc.local and log in as a domain user.

If we enter the correct password, we receive this error. This is expected, as we have not granted any domain user access to our vCenter – all domain users are granted No Access by default.

Log back in as cloudadmin and grant privileges to a domain user. In our case we want to grant admin rights at the vCenter level, so we click on the vCenter object, then Permissions, then the plus sign to add a permission.

The AD domain should show up in the dropdown.

If you start typing in the User/Group line, a dropdown will auto-populate with matching AD objects. I pick Administrators.

Be careful here – you cannot grant the Administrator role in VMC because you are not an administrator – only VMware support has full administrative access to an SDDC. Instead, grant the CloudAdmin role. Check Propagate to send the permission down the entire tree.

We now see the new permission in the Permissions list.

Now log off as cloudadmin, and log in as the AD user.

Success! You can now grant permissions to Active Directory users.