Troubleshooting vSphere 7 Installation – Home Lab Edition

My homelab boots off of cheap USB sticks and has done so since 2016.

I easily upgraded my vCenter (VCSA for the win), but I could not get Update Manager to upgrade my hosts – it kept throwing an ‘unable to run upgrade script’ error. I tried a command-line upgrade without success, then tried writing the ISO image to the USB stick with Rufus, booting off the stick, and having the installer overwrite the boot image with a fresh installation. I’ve done this same process many times over the past four years without issue, but it kept failing this time. The installation would progress to various percentages, but never beyond 70%.

I’ve been working with VMware products since the 3.0 days and I never knew this – thank you to William Lam for his troubleshooting suggestion in VMware’s Slack channels. When the progress bar of the installation screen is up, you can press Alt+F1 and get a command prompt. You can log in as root with no password. From there you can navigate the filesystem and tail logs, including the install log and the vmkernel log. I saw lots of storage errors in the vmkernel log: timeouts and retries.
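The kind of filtering this console trick enables can be sketched offline. The log lines below are invented stand-ins for vmkernel output; on the Alt+F1 console you would run the same grep against the live vmkernel log (the exact path can vary by release):

```shell
# Invented sample lines standing in for vmkernel output during a failing install
cat > /tmp/vmkernel-sample.log <<'EOF'
cpu1:2097152)ScsiDeviceIO: Cmd 0x2a failed H:0x5 D:0x0 P:0x0
cpu1:2097152)WARNING: ScsiDeviceIO: device mpx.vmhba32:C0:T0:L0 timed out
cpu0:2097153)VisorFSTar: extracting payload
cpu1:2097152)WARNING: ScsiDeviceIO: retry on device mpx.vmhba32:C0:T0:L0
EOF

# Pull out just the storage errors, as you would with:
#   tail -f /var/log/vmkernel.log | grep -Ei 'timed out|retry|failed'
grep -Ei 'timed out|retry|failed' /tmp/vmkernel-sample.log
```

Seeing a steady stream of timeout/retry lines like these pointed straight at the USB stick as the culprit.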

I then yanked one of the SSD drives in my Synology out and popped it into the host. vSphere 7 installed without issue. Looks like I either need to buy some high-end USB drives, or maybe use this problem to justify swapping out some of the smaller drives in the Synology with larger ones. Then I can rotate the smaller ones into service as boot drives in the hosts.

VMware Event Broker Appliance – Part VIII – Basic Troubleshooting Techniques

In Part VII of this series, we deployed a second sample function – the Host Maintenance functions written in PowerShell. In this post, we discuss some basic troubleshooting techniques. This post was updated on 2020-03-07 with updated screenshots for the VEBA 0.3 release.

The VEBA is running Kubernetes.  Going into this project, I knew nothing about Kubernetes – and still don’t know very much. But I’ve learned enough to be able to do some basic troubleshooting. In the end it’s all a series of Linux commands.

You might want to SSH to the appliance instead of using the console. To do this, log in as root on the console, then execute the command systemctl start sshd. SSH will run until the next reboot; to have it start automatically on boot, use systemctl enable sshd.

VEBA code is running in Kubernetes pods, and those pods are running in namespaces. kubectl lets you run commands against Kubernetes. So first we list out the namespaces with kubectl get namespaces.

One of the namespaces you will see is openfaas. All of your functions will be running inside their own container. We issue the command kubectl get pods -n openfaas and see all of our pods that are running OpenFaaS.

PROTIP: kubectl get pods -A gives you all the pods and their associated namespaces in one command.

kubectl logs gets us logfiles from the pods. The VMware Event Router pod is responsible for communicating with vCenter. Obviously these logs could contain interesting information if you’re trying to troubleshoot VEBA communications with vCenter.

kubectl logs vmware-event-router-5dd9c8f858-nph7k -n vmware --follow


Here is the output in a screengrab and also pasted. Note that we see event topics firing (AlarmStatusChangedEvent) and successful interception and function invocation. Rather than digging through API documentation, one great way to figure out how to react to a vCenter event is to start tailing the logfile with --follow – this is essentially the same as tail -f. Then perform an action in vCenter and see which topic fired. You can then build a function to react to the event.

[OpenFaaS] 2020/03/08 00:28:19 invoking function(s) on topic: AlarmStatusChangedEvent
2020/03/08 00:28:19 Invoke function: powershell-datastore-usage
2020/03/08 00:28:20 Syncing topic map

[OpenFaaS] 2020/03/08 00:28:21 processing event [1] of type *types.AlarmStatusChangedEvent from source https://******************************/sdk: &{AlarmEvent:{Event:{DynamicData:{} Key:8689704 ChainId:8689704 CreatedTime:2020-03-08 00:28:15.875458 +0000 UTC UserName: Datacenter:0xa84c560 ComputeResource:<nil> Host:<nil> Vm:<nil> Ds:0xa84c6c0 Net:<nil> Dvs:<nil> FullFormattedMessage:Alarm 'Datastore usage on disk' on WorkloadDatastore changed from Gray to Green ChangeTag:} Alarm:{EntityEventArgument:{EventArgument:{DynamicData:{}} Name:Datastore usage on disk} Alarm:Alarm:alarm-7}} Source:{EntityEventArgument:{EventArgument:{DynamicData:{}} Name:Datacenters} Entity:Folder:group-d1} Entity:{EntityEventArgument:{EventArgument:{DynamicData:{}} Name:WorkloadDatastore} Entity:Datastore:datastore-60} From:gray To:green}


In the above log output, we can see that OpenFaaS caught an AlarmStatusChangedEvent and invoked the function powershell-datastore-usage. But we don’t know what happened inside that function.

PROTIP: you can also use --since=2m for the last two minutes of logs, or --tail=20 for the last 20 lines of log.
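The topic-discovery trick described above can be simulated offline. The sample lines below are modeled on the event-router log pasted earlier in this post; against a live appliance you would pipe kubectl logs --follow into the same filter:

```shell
# Sample lines modeled on the VMware Event Router log output
cat > /tmp/event-router-sample.log <<'EOF'
[OpenFaaS] 2020/03/08 00:28:19 invoking function(s) on topic: AlarmStatusChangedEvent
2020/03/08 00:28:19 Invoke function: powershell-datastore-usage
[OpenFaaS] 2020/03/08 00:29:02 invoking function(s) on topic: DrsVmPoweredOnEvent
EOF

# Live equivalent:
#   kubectl logs <event-router-pod> -n vmware --follow | grep 'on topic:'
sed -n 's/.*on topic: //p' /tmp/event-router-sample.log | sort -u
```

Perform an action in vCenter, note which topic appears, and you know the topic: value to put in your function's stack.yml.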

Let’s look directly into the function logs. Your functions run in the openfaas-fn namespace. We list out the pods.
kubectl get pods -n openfaas-fn

We then look at the logs with:
kubectl logs -n openfaas-fn powershell-datastore-usage-847d5c7875-286hv

Looking at the logs, we see the function firing.
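Pod names carry a generated suffix, so scripts usually look the name up rather than hard-coding it. Here is a sketch of extracting the full pod name from kubectl get pods-style output – the sample text below is illustrative, modeled on the listing in this post:

```shell
# Sample output modeled on: kubectl get pods -n openfaas-fn
cat > /tmp/pods-sample.txt <<'EOF'
NAME                                         READY   STATUS    RESTARTS   AGE
powershell-datastore-usage-847d5c7875-286hv  1/1     Running   0          2d
pytag-fn-f66d6cffc-x5ghk                     1/1     Running   0          2d
EOF

# Grab the full pod name for a given function, e.g. to feed to kubectl logs:
awk '/^powershell-datastore-usage/ {print $1}' /tmp/pods-sample.txt
```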


Let’s look at a function that I broke intentionally. I updated my secret with a configuration that has a bad password – you can test this behavior with any of the sample functions. We happen to be using the pytag function below.


Now I want to look at the logs for my pytag function. Again, the commands are

kubectl get namespace
kubectl get pods -n openfaas-fn
kubectl logs pytag-fn-f66d6cffc-x5ghk -n openfaas-fn

An astute reader might notice that my pytag container changed names between the previous screenshot and the current one. The content for this post was written in two sessions – when I came back for the second screenshot, I had redeployed the function numerous times. This means I got a new pod with a new name. You can always find the pod name with the get pods command.

Here are the logs when I try to power up a VM after I pushed the broken vcconfig secret:

The log clues us in to an unauthorized error. It doesn’t specifically say ‘password error’, but you know there’s something wrong with authentication – at least you have a place to start troubleshooting.

2019/12/29 00:46:07 Path /
{"status": "500", "message": "could not connect to vCenter 401 Client Error: Unauthorized for url:"}


Now let’s fix it – we update the secret with the correct password.

You can see in the logfile there was one more authentication failure before the new secret was picked up. As of this posting, I’m not sure exactly what governs when a new secret is picked up, but it is important to know that it is not instant.


In Part IX, we deploy the datastore usage alarms sample function.

VMware Event Broker Appliance – Part IX – Deploying the Datastore Usage Email sample script in VMC

In Part VIII, we discussed basic troubleshooting techniques for VEBA. In this post we will deploy a new sample script.

There is a new VEBA sample script in 0.3 that enables you to send emails based on a datastore alarm. Although the functionality is built into on-prem vCenter, you have very little control over the email content. It is not possible to configure email alarms in VMC on AWS because the customer is locked out of that configuration area in vCenter. The sample script is written in PowerShell/PowerCLI.

I wrote extensive documentation on deploying a PowerShell script into VEBA in Part VII . If you don’t already know how to clone a repository with git and how to work with OpenFaaS, check out Part VII. Here I am just going to show the config files and config changes necessary.

We find the code in examples/powercli/datastore-usage-email

NOTE: If you send a large volume of emails from VMC, you need to open a ticket with VMware support to unthrottle SMTP on the shadow VPC that contains your SDDC. AWS by default throttles emails sent out of all their VPCs. You can simply pop open the chat window in the VMC console and request the change. VMware will file a ticket with AWS support and you will be unthrottled.

The default configuration is set to work with GMail authenticated SMTP. However, many customers run SMTP relays either on-prem or up in VMC for various email related needs. You can change the SMTP_PORT to 25 and leave SMTP_USERNAME and SMTP_PASSWORD blank to use standard SMTP.
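As a reference point, a relay-style secret configuration might look something like this. The key names below are illustrative – check the sample’s README for the exact schema the 0.3 script expects:

```json
{
  "SMTP_SERVER": "smtp-relay.example.com",
  "SMTP_PORT": "25",
  "SMTP_USERNAME": "",
  "SMTP_PASSWORD": "",
  "EMAIL_TO": "vsphere-alerts@example.com",
  "EMAIL_FROM": "veba@example.com"
}
```

With an unauthenticated relay, the username and password are simply left blank as described above.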

Here’s how I changed it to get it to work using VMware’s SMTP relay to deliver to my corp email address.


My modified stack.yml looks like this

Create the secret and push the function with faas-cli

faas-cli secret create vc-hostmaint-config --from-file=vc-hostmaint-config.json --tls-no-verify

faas-cli up --tls-no-verify

Now we need to cause a storage alarm. We need to find the default Datastore Usage on Disk alarm and edit it.

We set the warning level very low – here I change it to 7% to force an alarm – then click Next through to the end of the configuration wizard.

The datastore shows a warning. Now we perform the same Edit operation and set the warning percentage back to 70%. The datastore warning should clear.

If everything worked, you should have two emails in your inbox – a warning email, and a back-to-normal email.

If you don’t see the emails, check your spam folder – you may have to whitelist the emails depending on your spam settings.

If you have issues, troubleshoot the VMware Event Router logs and the email function logs as shown in Part VIII, the troubleshooting section of this series.

VMware Event Broker Appliance – Part VII – Deploy the Sample Host Maintenance Function

In Part VI of this series, we showed how to sync a fork of our VEBA repository to the upstream repository maintained by VMware.  Back in Part IV, we deployed our first sample function. In this post, we will deploy another sample function – the host maintenance function. This post was updated on 2020-03-07 to include screenshots for the VEBA 0.3 release.

Our first sample function was written in Python. As of the date of this post, the other available samples are all in PowerShell. We will be working with the hostmaint-alarms function. This function will disable alarm actions when you pop a host into maintenance mode. No more alerts for a host that you’re doing maintenance on!

We had a problem in the 0.2 release with secret names colliding. Here is the stack.yml file for our python tagging function

Here is the sample file we used to generate the secret named ‘vcconfig’.

Here is the stack.yml for our host maintenance alarms function in the 0.2 release

We no longer have this issue in the 0.3 release as we named our secret vc-hostmaint-config.

Here is the vcconfig.json file we use to configure the secret named ‘vcconfig’ for our PowerShell function.

A problem happens when you use secrets of the same name for scripts that aren’t expecting the same secret format. In 0.2, we had a secret named vcconfig used for both functions, but the secret files have completely different formats. Neither script can read the other’s secret because it wasn’t written to do so. The TOML secret file is a configuration format popular with Python; the PowerShell secret file is simple JSON. This means we need to give each secret a different name – one for the Python script and one for PowerShell. Note that it doesn’t have to be this way – there’s nothing stopping a script writer from using TOML for PowerShell or JSON for Python. All that matters is how the script is written. You could write all your scripts to use a single secret format, and they could share a single secret.
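To make the mismatch concrete, the two secrets look roughly like this – keys and values here are illustrative, so check each sample’s README for the exact schema:

```toml
# vcconfig.toml - consumed by the Python tagging function
[vcenter]
server = "vcenter.example.com"
user = "veba-ro@vsphere.local"
password = "ReplaceMe"
```

```json
{
  "VC": "vcenter.example.com",
  "VC_USERNAME": "veba-ro@vsphere.local",
  "VC_PASSWORD": "ReplaceMe"
}
```

A PowerShell function expecting the JSON shape cannot parse the TOML file, and vice versa – which is why a shared secret name caused failures in 0.2.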

We now need to change the sample script to point to a different secret. To do that, we create a new secret using our vcconfig.json file.

After editing the file to point to our environment, we push it into the VEBA appliance. I named mine ‘vcconfig-hostmaint’, but you can name it whatever you want. To match the current 0.3 script, you should name it ‘vc-hostmaint-config’ – if you match what’s in the script, you don’t have to rebuild any container images, and the default container image will work. But there are many reasons why you might need to rebuild the container image: any time you want to improve the existing functions, or write your own, you will need to build your own image. This post continues on to show how to finish the deployment by rebuilding the container image.

To create the secret file, remember you need to log in with faas-cli first, for a refresher look at Part IV of this series.

Now that we have our secrets file, we need to change our code to use it.

First we edit the first line of script.ps1 in the handler folder, changing the secret name to whatever we named it in the CLI – here, vcconfig-hostmaint.

Looking again at the stack.yml file, we have additional problems. We built a new secret, so we can change the secrets: section to point to vcconfig-hostmaint. Our gateway needs to point to our VEBA appliance. Then we need to worry about our image. Because we changed PowerShell code, we have to rebuild the container image that runs the code. Otherwise, the container that launches is the default container that ships with the VEBA appliance.


We sign up for a Docker account


Now we download and install Docker Desktop. The installation is simple, you can find the official installation documentation here.


After installation there’s a little whale icon in my system tray

I right-click and log in with the account I just created



Now when I right-click the whale, it shows that I’m signed in.

Now we edit the stack.yml file.  We make sure our gateway is pointing to our VEBA. We change the secrets to point to our new secret. And we change the image name – the account name gets changed to the Docker account that we just created.

Now we need to build, push, and deploy our new container image. The faas-cli function creation documentation shows us the commands we need to use.

First, we need to log in to Docker. Since I had no experience with Docker, it took me forever to figure out this seemingly simple task. I tried many different ways, but all failed with an Unauthorized error.

The simple answer is to make sure you’re logged into the Docker desktop

Then you issue a docker login command with no arguments. Just docker login.


We now issue the faas-cli build -f stack.yml command.

Multiple screens of information scroll past, but we end with a successful build.

Now we push the containers into the Docker registry with faas-cli push -f stack.yml

Now we deploy the container to our VEBA with faas-cli deploy -f stack.yml --tls-no-verify

PROTIP: Once you understand what the three commands do – build, push, deploy – you can use a nifty OpenFaaS shortcut: faas-cli up --tls-no-verify
This command runs build, push, and deploy in sequence, automatically.

Now we log into vCenter and look at the Alarm actions on a host. They are currently enabled.


After entering maintenance mode, the alarm actions have been disabled by our function


Now we exit maintenance mode and our function enables alarm actions. Success!


Finally, we want to verify that we did not break our other Python function, the one that tags a VM when it powers on. We check our test VM and see it has no tags.

After power on, a tag has been applied. Now we have both functions working!

We have now successfully deployed our PowerShell host maintenance function! In Part VIII, we will look at some troubleshooting techniques.


VMware Event Broker Appliance – Part IV – Deploying the First Sample Function

In Part III of this series, we created our vCenter tag and cloned the code repository from Github. In Part IV, we will deploy our function and watch it fire in response to a vCenter event. This post was updated 2020-03-07 with new screenshots for the VEBA 0.3 release.

The python sample function is sitting in the root path /examples/python/tagging. The sample function will cause a vCenter tag to be applied to a VM when the VM powers on.
First we want to modify vcconfig.toml

Here we need to change the server to our vCenter server, put in the correct username/password, and change the tag URN to match the URN we retrieved in Part III.

To quickly review, one way to get a Tag’s URN is via the Get-Tag PowerCLI cmdlet


vcconfig.toml now looks like this

We continue with the OpenFaaS instructions in the getting started documentation.

When the correct password is used, we should get this:

If you get an error, a number of things could be the cause:

  • The VM might not be fully online; it can take a while for services to come up after initial boot, particularly in a homelab
  • You might be running into a hostname problem – you must set the OPENFAAS_URL to the exact name you used when you deployed the OVF. If you deployed as the FQDN, you have to use the FQDN here. If you deployed with the short name, you must use the short name here.
  • You are typing the incorrect credentials

Next, we want to create a secret named ‘vcconfig’ in faas-cli so we’re not keeping credentials in plaintext files.

Next, we edit stack.yml.  The gateway has to change to your VEBA VM – make sure it matches the name you deployed. We also need to look at the topic. The documentation warns us that if we’re trying to monitor a VM powered on event in a DRS cluster, the topic is actually DrsVmPoweredOnEvent. We change the topic: entry to match.

My changed stack.yml looks like this:
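If you are following along in a text editor rather than from the screenshot, the relevant parts of such a stack.yml have roughly this shape. The gateway, function name, and image below are illustrative – keep the values from your own deployment and the sample’s repository:

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: https://veba02.example.com   # must match the name you used at OVF deployment
functions:
  pytag-fn:
    lang: python3
    handler: ./handler
    image: embano1/pytag-fn:0.2
    secrets:
      - vcconfig
    annotations:
      topic: DrsVmPoweredOnEvent        # DRS cluster: not VmPoweredOnEvent
```

The two lines you will almost always touch are gateway and the topic: annotation.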

We now issue a faas-cli template pull command (only required for the first deployment), and then deploy the function with:
faas-cli deploy -f stack.yml --tls-no-verify


Now the moment of truth, time to test our function! The expected behavior is that when powered on, a VM gets the alert-power-operations tag assigned to it. First we check my test VM to see if it has any tags – it has none.

After we power on the VM, has the tag been assigned?

Success! Note that you may have to hit the refresh button to get the tag to appear in the GUI.

That’s it for your first sample function!

Since this product is open source, you can make changes to it. Even if you’re not a developer, a great way to contribute is to keep documentation up-to-date. In Part V, we will explore the basics of using git to commit changes back to the repository.

VMware Event Broker Appliance – Part Ia – AWS EventBridge Deployment

Deploying VEBA for AWS EventBridge

Part I of this series covers deploying VEBA. As of the 0.3 VEBA release, there are two supported event processors – the built-in OpenFaaS, and external AWS EventBridge. This post covers how to configure AWS EventBridge.

William Lam has a good EventBridge blog showing his setup, and the EventBridge documentation has detailed instructions. I won’t repeat all that content, but I will show you my setup.

First I created a new EventBridge event bus.

Then I created an event forwarding rule

It’s important to make sure you have a filter – I suggest using the following event pattern for testing. You can always change it later to fit whatever events you’re trying to respond to. But if you don’t filter, EventBridge will react to every single event that comes out of your vCenter. If you’re running a Prod vCenter, that’s going to be a LOT of traffic.

I picked these two events because they’re relatively low traffic – most customers aren’t popping their hosts in and out of maintenance mode thousands of times a day.

When you deploy VEBA with the EventBridge processor, you have to enter the following EventBridge settings. Access key and secret are straightforward – the same IAM credentials you’d create for programmatic access to any AWS service. Event Bus Name needs to match the event bus you created. The region is easily found by clicking on the region name in the AWS console. The ARN is a copy-paste from the Rule page in the AWS console – just make sure you copy the Rule ARN and not the Event Bus ARN. In my case: arn:aws:events:us-east-1:##########:rule/SET-VMC-EventBus/VEBA-event-forwarding

When you finish deploying VEBA and it boots, you can look at the logfiles to see what’s happening with the Event Broker. Or obviously you can just put a host in maintenance mode and look in CloudWatch to see if a log appears.

To look on the VEBA, enable SSH by following the instructions in Part VIII, or just use the console of the VM.

Type kubectl get pods -A and look for the event router pod

Now type kubectl logs vmware-event-router-5dd9c8f858-7b9pk -n vmware --follow
You are doing the Kubernetes equivalent of tailing a logfile.

Near the top, you see the Event Router connecting to AWS EventBridge and pulling down the configured filters. You then see the EventRouter begin receiving vCenter events

Let’s put a host into maintenance mode and see what happens.

The event was intercepted and sent to EventBridge

Now we take the host out of maintenance mode and we also see the event logged

Let’s check CloudWatch and see if our events were logged – they were!

You can return to Part I of this series for more information on deployment, or move on to Part II to begin working on the prerequisites for deploying code samples to VEBA.


VMware Event Broker Appliance – Part I – Deployment


I became aware of the VMware Event Broker Appliance Fling (VEBA) in December, 2019. The VEBA fling is open source code released by VMware which allows customers to easily create event-driven automation based on vCenter Server Events. You can think of it as a way to run scripts based on alarm events – but you’re not limited to only the alarm events exposed in the vCenter GUI. Instead, you have the ability to respond to ANY event in vCenter.

Did you know that an event fires when a user account is created? Or when an alarm is created or reconfigured? How about when a distributed virtual switch gets upgraded or when DRS migrates a VM?  There are more than 1,000 events that fire in vCenter;  you can trap the events and execute code in response using VEBA. Want to send an email and directly notify PagerDuty via API call for an event? It’s possible with VEBA. VEBA frees you from the constraints of the alarms GUI, both in terms of events that you can monitor as well as actions you can take.

VEBA is a client of the vSphere API, just like PowerCLI. VEBA connects to vCenter to listen for events. There is no configuration stored in the vCenter itself, and you can (and should!) use a read-only account to connect VEBA to your vCenter. It’s possible for more than one VEBA to listen to the same single vCenter the same way multiple users can interact via PowerCLI with the same vCenter.

For more details, make sure to check out the VMworld session replay “If This Then That” for vSphere- The Power of Event-Driven Automation (CODE1379UR)

Installing the VEBA – Updated on March 3, 2020 to show the 0.3 release

If you notice screenshots alternating between an appliance named veba01 and veba02, it’s because I alternate between the two as I work with multiple VEBA versions.

The instructions can be found on the VEBA Getting Started page.

First we download the OVF appliance and deploy it to our cluster

VMC on AWS tip: As with all workload VMs, you can only deploy on the WorkloadDatastore

I learned from my first failed 0.1 VEBA deployment to read the instructions – they say the netmask should be in CIDR format, but I entered a full dotted-decimal netmask. In 0.3, the subnet mask/network prefix is a dropdown, eliminating this problem.

Pay attention to the DNS and NTP settings – they are space-separated in the OVF. The instructions have said ‘space separated’ since 0.2, but it’s important to enter them correctly here. The NTP settings were new in 0.2.

Proxy support was new in 0.2. I don’t have a proxy server in the lab so I am not configuring it here.

These settings are all straightforward. Definitely disable TLS verification if you’re running in a homelab. One gotcha is passwords – I used blank passwords for the purposes of this screenshot, and the wizard lets you move ahead with them. This resulted in a failure after the appliance booted because it couldn’t connect to vCenter. On the plus side, it led to my capturing how to troubleshoot failures, which is shown further down in this post.

PROTIP: If you’re brand new to this process, it’s easiest to get this working with an administrator account, though best practice is to use a read-only vCenter account. You can easily change this credential after everything is working by following the procedure at the bottom of this post – edit /root/config/event-router-config.json (the file moved there in version 0.4 from /root/event-router/config.json) and restart the pod as shown in the directions.

New in 0.3, you can decide whether you want to use AWS EventBridge or OpenFaaS as your event processor.  EventBridge setup is covered in Part Ia of this series. For this setup, we choose OpenFaaS.

Since we’re not configuring EventBridge, we leave this section blank.

The POD CIDR network is an important setting new in 0.2. This network is how the containers inside the appliance communicate with each other. If the IP that you assign to the appliance is in the POD CIDR subnet, the containers will not come up and you will be unable to use the appliance. You must change either the POD CIDR address or the appliance’s IP address so they do not overlap.

This is what the VEBA looks like during first boot

If you end up with [IP] in brackets instead of your hostname, something has failed. Did you use a subnet mask instead of CIDR? Did you put comma-separated DNS instead of space-separated? Did you put in the incorrect gateway? If you enabled debugging at deploy time, you can look at /var/log/bootstrap-debug.log for detailed debug logs to help you pinpoint the error. If not, see what you can find in /var/log/bootstrap.log

There is an entire troubleshooting section in this series. When the pod wasn’t working despite a successful deployment, I needed to determine why.

Enabling SSH on the appliance is covered in the troubleshooting section. You could also use the VM console without enabling SSH.

kubectl get pods -A lists out the available pods

The pod that is crashing is the event router pod.
kubectl logs vmware-event-router-5dd9c8f858-q57h6 -n vmware shows us the logs for the event router pod. We can see that the vCenter credentials are incorrect.

We need to edit the JSON config file for Event Router.

vi /root/config/event-router-config.json

Oops. The password is blank

Fix the password

Delete and recreate the pod secret

kubectl -n vmware delete secret event-router-config
kubectl -n vmware create secret generic event-router-config --from-file=event-router-config.json

Get the current pod name with
kubectl get pods -A

Delete the event router pod

kubectl -n vmware delete pod vmware-event-router-5dd9c8f858-zsfq5

The event router pod will automatically recreate itself.  Get the new name with
kubectl get pods -A

Now check out the logs with
kubectl logs vmware-event-router-5dd9c8f858-pcq2v -n vmware

We see the event router successfully connecting and beginning to receive events from vCenter

Now we have a VEBA appliance and need to configure it with functions to respond to events. Note that it may take some time for the appliance to respond to HTTP requests, particularly in a small homelab – it took about 4 minutes to fully come up after a successful boot in my heavily overloaded homelab. One additional gotcha: the name you input at OVF deployment time is the name encoded in the SSL certificate, so if you input “veba01” as your hostname, you cannot then reach the appliance by a different name without a certificate mismatch.

In part II of this series, I will demonstrate how I configured my laptop with the prereqs to get the first sample function working in VEBA.

VMware Event Broker Appliance – Part VI – Syncing Your Fork

In Part V of this series, we looked at how to contribute to the VEBA project. In this short post, we will show how to sync your fork back to the upstream repository – the project that you originally forked from.

If you work with an open source fork for any length of time, you will inevitably come to a situation where your fork is out of sync with the upstream repo. Other people have contributed to the upstream repo, and your code is now out of sync. You could issue a pull request to pull the upstream into your fork, but then you are still out of sync – your fork will be 1 commit ahead of the upstream.

The following post shows how I used Github’s Syncing a Fork post to sync my repo with the upstream VEBA project.

First, my fork in Github shows that it’s 5 commits behind the upstream repository vmware-samples:master.

In order to fix this, I need the upstream’s clone URL

We add the URL as an upstream remote

Fetch the upstream code and then check out our local master branch


Merge the upstream changes into my local branch

We then push the local copy into the Github repo

Our fork is now even with the upstream repo.
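The sequence behind those screenshots, sketched end-to-end. The remote URLs in real life point at Github; the simulation below uses two local repositories (and assumes git is installed) so you can try the mechanics without touching your fork:

```shell
set -e
work=$(mktemp -d)

# Stand-in for the vmware-samples "upstream" repository
git init -q "$work/upstream"
branch=$(git -C "$work/upstream" symbolic-ref --short HEAD)
git -C "$work/upstream" config user.email "you@example.com"
git -C "$work/upstream" config user.name "You"
echo "v1" > "$work/upstream/README.md"
git -C "$work/upstream" add README.md
git -C "$work/upstream" commit -qm "initial"

# Our "fork" - in real life this is a clone of your Github fork
git clone -q "$work/upstream" "$work/fork"
git -C "$work/fork" config user.email "you@example.com"
git -C "$work/fork" config user.name "You"

# Upstream moves ahead while we weren't looking
echo "v2" > "$work/upstream/README.md"
git -C "$work/upstream" commit -qam "upstream change"

# The sync recipe from Github's "Syncing a fork" documentation:
git -C "$work/fork" remote add upstream "$work/upstream"  # 1. register the upstream remote
git -C "$work/fork" fetch -q upstream                     # 2. fetch its new commits
git -C "$work/fork" checkout -q "$branch"                 # 3. switch to our local branch
git -C "$work/fork" merge -q "upstream/$branch"           # 4. merge upstream in (fast-forward)
# 5. in real life: git push origin master   (to update your Github fork)

cat "$work/fork/README.md"   # the fork now carries the upstream change
```

After the final push, Github reports the fork as even with the upstream repository instead of ahead of it.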

In Part VII, we will look at deploying additional VEBA sample functions.

VMware Event Broker Appliance – Part V – Contributing to the VEBA Project

In Part IV of this series, we successfully deployed our sample function – it fired when a VM powered up and correctly tagged the VM.

In this post, we will explore contributing to the vCenter Event Broker Appliance Open Source project. Open Source is obviously reliant on contributions from the community. One way to contribute is using your skills as a developer. But if you’re not a developer, you can still contribute in the form of documentation. When I started working with VEBA, I knew nothing about git. You can do some extremely complex things with Git – but the basics of making a change to a file and submitting it back for inclusion into this project are easy to learn.

Note: this post assumes you’ve already installed Git. Take a look at Part II if you don’t have Git installed.

For further learning outside of this blog post, I strongly recommend Commitmas, a great vBrownBag series on how to use git. Thank you to Kyle Ruddy for the suggestion.

The first thing you need is a Github account. Once you’ve signed up and verified your account, open up Git Bash and use the git config --global command to set the user.name and user.email variables to match your name and verified e-mail address from Github. These need to match in order to sign your code when you commit it.

There are a few basic git operations that you need to understand:

  • clone – Creates a copy of the specified repository and saves it on your local workstation
  • commit – Git tracks your code changes. When you issue a commit, you’re telling git that you’re done making changes to files and want to save them into the local repository. Commit packages up all of your changes.
  • diff – Show the differences between files you’ve updated vs files currently in the repository
  • push – Takes code that you’ve committed locally and pushes it into a remote repository
  • fork – A way to copy an existing repository into your own Github account. This is how you can work on Open Source projects when you don’t have direct push permissions.
  • pull request – You open up a pull request (PR) to ask the owner of a repository to merge the changes in your forked repository into their repository. (This is distinct from the git pull command, which fetches and merges remote changes into your local repository.)
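These operations can be rehearsed locally before touching a real project. A minimal sketch (local repository only, placeholder identity, assuming git is installed) that walks through the same commit/diff cycle used later in this post:

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# Match your Github-verified identity so -s signoffs are valid
git config user.name  "Your Name"
git config user.email "you@example.com"

echo "name: faas" > stack.yml
git add stack.yml
git commit -qm "initial"

# Make a fix of the kind described in this post, then inspect it
sed -i 's/name: faas/name: openfaas/' stack.yml
git diff --stat                # shows 1 file changed

# -a commits all tracked changes, -s appends a Signed-off-by trailer
git commit -aqm "fix provider name" -s
git log -1 --format=%B | tail -1
```

The last command prints the Signed-off-by trailer built from the user.name and user.email values configured above.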

As I was going through the documentation, I was reading one of the project’s .md files. An .md file is a markdown file; you will find most documentation on Github written in this format.

A few of the parts of this getting started file were incorrect. When I tried to use the referenced stack.yml file, I received an error saying that the provider name was invalid, and instead should be ‘openfaas’. I changed “name: faas” to “name: openfaas” and it worked.

I then found a typo of “read_debuge” instead of “read_debug”.

Finally, there was a mistake in the faas-cli commands at the bottom of this screenshot. The second command says to use “faas” when it should have read “faas-cli”.

I wanted to fix these and save them back to the main VEBA repository so others could benefit from my updates.

Because I don’t have rights to directly modify code in the VEBA repository, I need to fork the project. You can see the Fork button on the top righthand corner.

After clicking on the fork button, I end up with a copy of the code in my own personal account – Github tells you on the top lefthand corner that this is the vcenter-event-broker-appliance repository under my account, kremerpatrick, but it’s forked from the vmware-samples account

Now that I have a repository in my own personal account, I’m going to clone it to my local workstation

Note in this clone command I call my local folder “vcenter-event-broker-appliance4” because I have many different copies of the VEBA project as I played around with git.

I edit the file, make the 3 fixes – provider name, read_debug, and faas-cli – and save the file.

Now I issue a git diff command. This shows me the differences between the files I edited and the code in my local repository. You can see all 3 of my changes, removed code is in red, and added code is in green.

Everything looks good in the diff. Time to commit the code. I issue the command
git commit -a -s

-a stands for “all”, meaning we want to commit all changed files. I only changed one in this case, but if you changed multiple files, the -a switch is one way to commit all of them.

-s means that I’m signing the commit with the user.name and user.email variables that we populated above.

When I issue the command, git pops open my text editor of choice. I need to write a comment documenting my changes. I write it, save it, then close the text editor.

The commit is now complete. As expected, 1 file changed, and 3 lines of code changed in that file.

Now the code is committed locally, and I need to push the code up to my personal Github repository

I go back to Github and find my repository

I click on my repository and Github reports that my branch is 1 commit ahead of the base repository. This is expected as I performed 1 commit.

Now I need to ask the maintainers of the VEBA repository to merge my change into their repository. This is called a pull request (PR). I click on the “New pull request” button in my repository.

Because my repository is forked from the base VMware repository, git knows what to compare my commit against. It shows me a diff, and that diff matches the diff that I ran myself in git – all 3 of my changes are there. Everything looks OK so I click “Create pull request”.

This is the last step in filing a PR – I write a summary and explanation of my fixes. In my case, I had already fixed these errors elsewhere in the code, but missed these instances. I fill out the fields and click Create pull request.

My changes are now ready to merge. The repository owners are now notified that there is a new PR to approve. They can respond to me with requests for changes, or they can commit my change.

All done! We have now made changes to an open source repository – everybody who clones this repository after the PR is merged will be able to take advantage of the changes I sent over.

In Part VI of this series, we will look at how to sync our fork back to the upstream repository.