I foolishly thought I would quickly swap out my Windows Server 2012 domain controllers for 2019 domain controllers, thus beginning a weeks-long saga. I have 2 DCs in my homelab, DC1 and DC2.
Built a new DC, joined it to the domain, promoted it to a DC (it ran AD prep for me, nice!), and transferred the FSMO roles (all were on DC1). All looked great! Then I demoted DC1, and all logins failed with ‘Domain Unavailable’.
Thankfully I had my Synology backing up my FSMO role holder DC, so I restored it from scratch. I figured I might have missed something obvious, so I ran through the whole process again. Same result.
I ran through all sorts of crazy debugging, including ntdsutil commands looking for old metadata to clean up. I found some old artifacts that I thought might have been causing the issue, and repeated the process. Same result.
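For reference, the classic ntdsutil metadata cleanup walk looks something like this. It is an interactive session; the server name and the list indexes below are placeholders for whatever your environment shows:

```shell
:: Classic ntdsutil metadata cleanup session (interactive).
:: "DC2" and the index numbers are placeholders for your environment.
ntdsutil
metadata cleanup
connections
connect to server DC2
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 0
quit
remove selected server
quit
quit
```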
Several weeks later I realized what happened: a failing UPS had taken down my Synology multiple times until I replaced it a few days ago. Guess which VM I never restarted? The Enterprise CA. The CA caused all of this, or at least most of it. Even after I powered the CA back up, I was unable to cleanly transfer all the FSMO roles. Everything but the Schema Master transferred cleanly, even though all of them had transferred cleanly while the CA was down. I had to seize the Schema Master role and manually delete DC1 from ADUC. Thankfully, current versions of AD do the metadata cleanup for you when you delete a DC from ADUC.
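If you hit the same wall, the seize can be done from the ActiveDirectory PowerShell module instead of ntdsutil. A sketch, with "DC2" as a placeholder for the DC that should take the role:

```shell
# Seize the Schema Master role onto DC2 (placeholder name).
# -Force attempts a normal transfer first and seizes only if that fails.
Move-ADDirectoryServerOperationMasterRole -Identity "DC2" -OperationMasterRole SchemaMaster -Force
```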
In hilarious irony, I specifically built the CA on a member server and not a domain controller to avoid upgrade problems.
- When you don’t administer AD every day, you forget lots of things
- No AD upgrade is easy
- Make sure you have a domain controller backup before you start
- Turn on your CA
- Run repadmin /showrepl and dcdiag BEFORE you start messing with the domain
- Run repadmin /showrepl and dcdiag AFTER you add a domain controller and BEFORE you remove old domain controllers
- ntdsutil is like COBOL – old and verbose
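The repadmin/dcdiag checks above can be captured to files so you have a before/after baseline to compare. A sketch from an elevated prompt (the output file names are just suggestions):

```shell
:: Replication status for every DC in the forest, saved as CSV
repadmin /showrepl * /csv > repl-baseline.csv

:: Comprehensive diagnostics across all DCs, verbose, logged to a file
dcdiag /e /v /c /f:dcdiag-baseline.txt
```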
I posted today’s VMUG presentation in the VMUG section.
I just got the Android 4.1.2 push to my Verizon Droid Bionic, and suddenly my map application stopped working. When I tapped the locate icon, Android threw an error: “Please enable Google Apps location access”. My first thought was to check Location Access under Settings, but that didn’t help. A Google search turned up this dotTech post directing me to Settings > Accounts > Google > Location Settings, but I couldn’t find any way to grant access to my Google account. In the end, I figured out I had to upgrade my account to Google+. Hopefully this post saves you a few hours of tearing your hair out.
I was upgrading vCenter from 4.0 U2 to 4.1 and installing it on a clean Windows 2008 64-bit server. The vCenter upgrade went OK, but the Update Manager install failed with “Error 25085. Setup failed to register VMware vCenter Update Manager extension to VMware vCenter Server.” I found VMware KB1024795 with a few fixes, but they did not resolve the issue.
I was trying to install Update Manager on the D: drive. I opened a ticket with VMware support and after some troubleshooting, their advice was to rebuild the 2008 server. Before starting over, I did a little more poking around. I discovered that somehow the local admins group had been removed from the D: drive permissions.
I was logged on to the domain with administrative permissions on the server and vCenter installed just fine. I’m not sure why Update Manager threw an error, but granting the local administrators group full control of D: resolved the problem.
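If you run into the same thing, checking and repairing the ACL from an elevated prompt looks something like this:

```shell
:: Inspect the current ACL on the drive root
icacls D:\

:: Re-grant the local Administrators group full control,
:: inherited by subfolders (CI) and files (OI)
icacls D:\ /grant "Administrators:(OI)(CI)F"
```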
VMware support confirms that there is a bug related to the vCenter 4.1 upgrade; it appears to be specifically related to datastore alarms. The workaround was to go through and disable, then re-enable, all datastore alarms. At least that was better than having to delete and recreate them.
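The disable/enable cycle can be scripted with PowerCLI rather than clicking through every alarm in the client. This is a sketch, assuming current PowerCLI cmdlets; the vCenter hostname and the name filter are placeholders:

```shell
# Connect to vCenter (placeholder hostname)
Connect-VIServer vcenter.example.com

# Disable, then re-enable, every alarm with "datastore" in its name
Get-AlarmDefinition | Where-Object { $_.Name -match "datastore" } |
    ForEach-Object {
        Set-AlarmDefinition -AlarmDefinition $_ -Enabled:$false
        Set-AlarmDefinition -AlarmDefinition $_ -Enabled:$true
    }
```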
We ran into an issue where our custom alarms in vCenter weren’t generating alerts after the upgrade to vCenter 4.1. All of our existing alarms defined in vCenter 4.0 were still in place after the upgrade, but alarming was inconsistent. We had one alarm defined on a single folder; some of the datastores that met the alert criteria were alarming, and some weren’t.
I deleted the alarm definition and recreated it, and all of the datastores that should have been alarming lit up. I have a ticket open with VMware support. At this point I’m not sure whether I’m going to have to manually rebuild all of our alarms.