Deploying Pivotal Container Service (PKS) on Azure Using Karsten Bott’s Automation Magic

Introduction and blabbing… (skip ahead to the next section for the actual blog.)

A friend and fellow MVP, Steve Buchanan, recently tweeted the following article: Career Advice for IT Professionals in 2019. He called out a few points from the article that really caught my eye. One is “Change is always constant.” A few others he highlighted: “Swimming against the current will only wear you out,” “Don’t get attached to tools, systems, or platforms,” and “Nothing lasts forever.”

These points Steve picked out ring so true in my career right now. A few years ago I was comfortable doing the same thing over and over again. I had left the consulting world for a full-time role. I was the SCOM guy, and I also worked with ConfigMgr. Who needs cloud anyway, right? SCOM will never go away, and look what they have in Azure for monitoring anyway, right? (This was a few years back….) I do miss working with SCOM, by the way.

Anyway, here is what I am trying to get to before I actually start my blog about deploying Pivotal’s PKS solution on Azure using AzureStackGuy’s automation magic: I never would have thought I would be working with solutions like PKS from Pivotal, or even Kubernetes. Those were for the developers and those other people, you know, the non-infrastructure people. I was an infrastructure guy.

Well, like one of the points Steve highlighted said, change is always constant. If it wasn’t for changes in SCOM and in System Center Advisor, which evolved into Operations Management Suite (which doesn’t really exist anymore, but lives on as part of many solutions within Azure itself), I may never have gotten to refocus my skills on other solutions, such as Microsoft’s hyper-converged offerings, which landed me a role on our project focusing on Azure Stack almost two years ago. Working with Azure Stack has brought me into the Azure world 100% now, which in turn has me working with solutions like Kubernetes, OpenShift, and Pivotal’s PAS and PKS.

So long story short, I stopped swimming against the current, though I still catch myself doing it once in a while. I am looking forward to change, which is bringing me some awesome new projects and technologies to work with, along with many new friends across the globe. I have learned not to get attached to my comfort tools and solutions. Working with Pivotal, K8s, Azure, and Azure Stack has taken me beyond my comfort zone, learning to work with tools I never thought I would even touch. I have also learned nothing lasts forever. The solutions I loved working with in the past, like SCOM, are still there, but the types of roles I am in lately don’t include those tools.

So thanks for reading this section if you did, and for listening to me blab about something completely off the topic of this blog. Now, without any more delays….. Deploying PKS on Azure!


Deploying Pivotal Container Service (PKS) on Azure

Preparing Azure

Now, I am going to be using Karsten Bott’s (aka AzureStack_guy on GitHub) ARM templates and Terraform automation scripts to deploy my first PCF foundation to Azure. I do need to make this statement: I am not one that works with Azure CLI or Linux a lot. So between those two things and not really having worked with PKS before, this was a fun and exciting learning experience for me. I am also glad that Karsten is an awesome guy and member of our community; he was very helpful throughout this process.

We need to do the following before we can deploy this solution:

  1. Create an SSH Key
  2. Install and Configure Azure CLI (If not installed already)
  3. Create .env file
  4. Create an AAD Application
  5. Create and Configure a Service Principal
  6. Perform Registration of providers on Azure Subscription
  7. Gather Data about Tenant and Subscription
  8. Obtain your Pivnet Token
  9. Create a parameters.json file

Create an SSH Key

First, I am not really sure if I did this section correctly, but at the end of the day it worked for me, so you may want to double-check the Pivotal documentation and Karsten’s GitHub documentation. I simply created a public and private SSH key using PuTTYgen.

Note: Do not forget to copy the SSH key data; it will be needed later for the parameters.json file.

I also did what Karsten documented and created an SSH key pair for the admin user.

The environment parameters and variables can be found on Karsten’s GitHub site.

ssh-keygen -t rsa -f ~/${JUMPBOX_NAME} -C ${ADMIN_USERNAME}

NOTE: At the time of this blog, the variable ${ADMIN_USERNAME} should be kept at the default, ubuntu.
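
If you go the ssh-keygen route, the key pair lands in your home directory, and the .pub half is the SSH key data that parameters.json wants later. A quick way to grab it, assuming the command above was used as-is:

# Print the public key; this is the "sshKeyData" value used later
cat ~/${JUMPBOX_NAME}.pub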

Install and Configure Azure CLI

This can be done in various ways. You can install the Azure CLI for Windows or for Linux. I have recently started to use the Windows Subsystem for Linux (WSL), with the Ubuntu distribution that is available in the Microsoft Store. I have tried running Azure CLI on Windows and it works, but I am forcing my brain to learn new things. You can read how to install it various ways on the Microsoft Docs site in the document Install the Azure CLI.

There is a hint that I would like to share. Well, maybe not so much a hint as something I learned during this project. I used to stay away from WSL because I could never figure out how to connect to that instance of Linux via SSH or WinSCP, so getting files into and out of the Linux subsystem seemed almost impossible. I finally found the answer in one of the Microsoft Docs links; if I find it again I will post it on this blog one day. It is pretty easy: to access anything on your Windows file system, all you have to do is change to the /mnt/ directory in your WSL session.

cd /mnt/c

This allowed me to view anything on my Windows file system and target the files I needed, such as the .env file and the parameters.json file I used later.

To install Azure CLI on Ubuntu, just follow the Microsoft documentation: Install Azure CLI with apt.
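
At the time of writing, those steps boil down to Microsoft’s convenience script (verify against the current doc before piping anything to bash, since the installer can change):

# Microsoft's documented one-line installer for Debian/Ubuntu
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash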

Create the .env file

For a non-Linux guy like myself, this took a little longer to learn than it should have. I was a little too proud to ask Karsten, and that mistake cost me extra research time before I figured out it really isn’t hard at all. He provides a sample .env file on his GitHub site; all I needed to do was fill out the variables I was going to use and learn how to actually get Linux to read the file.

IAAS=azure
JUMPBOX_RG=RG_JUMPBOX
JUMPBOX_NAME=your_jumpbox_hostname
ADMIN_USERNAME=ubuntu
AZURE_CLIENT_ID=fake your azure client id
AZURE_CLIENT_SECRET=fake your azure client secret
AZURE_REGION=westeurope
AZURE_SUBSCRIPTION_ID=fake your azure subscription id
AZURE_TENANT_ID=fake your azure tenant
PIVNET_UAA_TOKEN=fake your pivnet refresh token
ENV_NAME=yourenv
ENV_SHORT_NAME=yourenvshort
OPS_MANAGER_IMAGE="ops-manager-2.4-build.131.vhd"
PKS_DOMAIN_NAME=yourdomain.com
PKS_SUBDOMAIN_NAME=yourpks
PKS_VERSION=1.3.0
PKS_OPSMAN_USERNAME=opsman
PKS_NOTIFICATIONS_EMAIL="example@example.io"
PKS_AUTOPILOT="TRUE"
NET_16_BIT_MASK="10.10"
OPS_MANAGER_IMAGE_REGION="westus"
USE_SELF_CERTS="TRUE"
BRANCH=master
ARTIFACTS_LOCATION="https://raw.githubusercontent.com/bottkars/pks-jump-azure/${BRANCH}"
VMSIZE="Standard_D2s_v3"

I created a file with the above variables that I needed and saved it as .env on my drive. After some researching online, I learned that all I have to do in Linux is use the source command. So from my Ubuntu WSL instance, I ran the following:

root@USTEXPLN612445:/mnt/c/Temp/pks-jump-azure# source .env

Pretty simple. I was able to test if those variables work by running the following command:

root@USTEXPLN612445:/mnt/c/Temp/pks-jump-azure# echo ${JUMPBOX_NAME}
pksjumphost

Now I have my .env file created and sourced, and I can move on.
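
One gotcha worth knowing: source sets the variables in your current shell, but plain KEY=value assignments are not exported to child processes. If a script you launch later can’t see them, the usual bash idiom is to turn on auto-export while sourcing; a sketch:

set -a        # auto-export every variable assigned from here on
source .env
set +a        # turn auto-export back off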

Create the Azure Active Directory Application

There is not much for me to document in this process since it is already documented very well by Pivotal; see the Create an AAD Application document for more detailed steps.

Note: I did have to go an extra step to get the actual Client ID secret. Maybe I missed a step or copied the wrong output down, but I ended up having to go to the application in the portal and create a new key, which I used for future steps.

So, pretty simple. From Azure CLI I created the AAD application. I didn’t change much from the Pivotal documentation outside of the password, of course.

#  Create an AAD Application
az ad app create --display-name "Service Principal for BOSH" \
--password "<PASSWORD>" --homepage "http://BOSHAzureCPI" \
--identifier-uris "http://BOSHAzureCPI"

Copy the AppID and save it for later use. This will also be referred to as the ClientID.
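
If you missed the AppID in the create output, you can pull it back out with a JMESPath query; a sketch, assuming the same display name as above:

# Grab just the appId (a.k.a. ClientID) as plain text
az ad app list --display-name "Service Principal for BOSH" --query "[0].appId" -o tsv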

Create and Configure the Service Principal

From Azure CLI, run the following command, replacing <AppID> with the Application ID created earlier.

az ad sp create --id <AppID>

I then need to assign the service principal a role that gives it the permissions it needs. This is where I do something a little different than what is in the Pivotal documents, only because of how the automation currently works. Normally I would assign the SPN the “Contributor” role; for the automation to work, I will assign this SPN the “Owner” role.

az role assignment create --assignee "<SPN Name from above>" \
--role "Owner" --scope /subscriptions/<Subscription Deploying To>

At this point I needed to go to Azure Active Directory in the Azure portal and grab a new key for my application. The automation script didn’t run with the password created above because it was expecting more characters. The actual SPN works fine if I test it with both the new key and the original password.

So I tested using the following Azure CLI command:

az login --username 0d40460f-f3ab-4e9a-8777-c7fb48ca13dc --password <PASSWORD or App KEY> \
--service-principal --tenant <TENANT>

Perform Registration of providers on Azure Subscription

This wasn’t in Karsten’s instructions, but I saw it in the Pivotal documentation, so I did it anyway. This only needs to be done on a new subscription. Since this is a subscription I just created, these providers most likely haven’t been registered yet.

Following the Pivotal documentation, Register Your Subscriptions, I registered the Storage, Network, and Compute providers.

az provider register --namespace Microsoft.Storage

az provider register --namespace Microsoft.Network

az provider register --namespace Microsoft.Compute
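
Registration is asynchronous and can take a few minutes, so it is worth polling the state before moving on; for example:

# Should eventually report "Registered" (repeat for Storage and Network)
az provider show --namespace Microsoft.Compute --query registrationState -o tsv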

Gather Subscription and Tenant Information

I grabbed my tenant ID and the subscription ID that PKS will be deployed to. These are used in both the .env file and the parameters.json file.

To get the subscription ID, I logged on to the Azure portal and opened the Subscriptions blade; my subscription ID is listed next to the subscription name.

I got my tenant ID by opening the Azure Active Directory blade and clicking Properties; it is listed under Directory ID.
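
If you are already logged in with Azure CLI, both IDs are one command away:

# Subscription ID ("id") and tenant ID for the current login context
az account show --query "{subscriptionId:id, tenantId:tenantId}" -o json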

Get Pivnet UAA Token

I obtained the Pivnet token from the profile page of my Pivnet account. It is called the Pivnet UAA token.

I signed on to my Pivnet account and clicked Edit Profile; at the bottom of the screen there is a “Request New Refresh Token” button. Click that button and copy the UAA API token, which will be needed later.

Create the parameters.json file

There were a few ways to start the installation. One was to start the install and declare the parameters I wanted on the command line. I decided to create a parameters.json file instead and point the script to use that file.

Here is an example of the file I used. I actually copied Karsten’s PCF parameters.json and edited it to meet my needs for PKS.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "adminUsername": {
        "value": "ubuntu"
        },
        "sshKeyData": {
            "value": "SSH Key Created Earlier"
        },
        "clientID": {
            "value": "Client ID or Application ID"
        },
        "tenantID": {
            "value": "Tenant ID"
        },
        "subscriptionID": {
            "value": "Subscription ID"
        },
        "clientSecret": {
            "value": "Client Secret or App Key"
        },
        "pivnetToken": {
            "value": "Pivnet UUA Token"
        },
        "envName": {
            "value": "pks"
        },
        "envShortName": {
            "value": "pkssandbox"
        },
        "opsmanImage": {
            "value": "ops-manager-2.4-build.142.vhd"
        },
        "pksDomainName": {
            "value": "Domain Name"
        },
        "pksSubdomainName": {
            "value": "Sub Domain"
        },
        "net16bitmask": {
            "value": "10.12"
        },
        "notificationsEmail": {
            "value": "Email and Also Login"
        },
        "opsmanUsername": {
            "value": "opsman"
        },
        "pksAutopilot": {
            "value": "TRUE"
        },
        "useSelfCerts": {
            "value": "TRUE"
        },
        "pksVersion": {
            "value": "1.3.0"
        },
        "dnsLabelPrefix": {
            "value": "pksjumphost"
        },
        "ubuntuOSVersion": {
            "value": "18.04-LTS"
        },
        "vmSize": {
            "value": "Standard_D2s_v3"
        },
        "location": {
            "value": "WestUS"
        }
    }
  }
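
Before feeding this file to the deployment, a quick syntax check saves a confusing failure later; one way, assuming jq is installed:

# jq exits non-zero (and says where) if the JSON doesn't parse
jq . azuredeploy.parameters.json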

Deployment Time

Now I have everything ready for deployment. The .env file is created and sourced, the parameters.json file is created, and all my prerequisites are done, so I am ready to start the deployment. From this point it is pretty much start the script and watch log files.

From Azure CLI I ran the following:

az group create --name rg-westus-pks-jumphost --location WestUS
az group deployment create --resource-group rg-westus-pks-jumphost \
    --template-uri https://raw.githubusercontent.com/bottkars/pks-jump-azure/master/azuredeploy.json \
    --parameters @azuredeploy.parameters.json

Since I am using my Ubuntu WSL, I changed to the directory on my laptop where my parameters.json file is located. This lets me still point to Karsten’s azuredeploy.json but use my own parameters file.

The above script creates a new resource group called rg-westus-pks-jumphost in the WestUS Azure region.

Note: Make sure your subscription is capable of using the VM sizes before running these scripts; I ended up switching regions a few times throughout my testing.
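
One way to check ahead of time, using the Standard_D2s_v3 size from the parameters file as an example:

# An empty table means the size isn't offered there; pick another region or size
az vm list-skus --location westus --query "[?name=='Standard_D2s_v3']" --output table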

It then kicks off the deployment based on the ARM template provided by Karsten. This creates my jump host, which then runs all the Terraform and various other scripts Karsten has created. It will also be my main place to track progress, connect to my resources, etc.

Take note of the IP and DNS name for the newly created jump host.

Once the jump host has been created, I SSH into it using PuTTY; that is pretty much the only tool I have used, since I don’t normally SSH into anything. :) I set up PuTTY to use the previously created SSH private key, which allows me to connect to my newly deployed jump host. I connect via SSH as ubuntu@<jump host DNS name>.

From here I monitor the logs and make sure everything is going well. Run the following command to follow the install log:

tail -f ~/install.log

It starts creating the environment resource group and all the resources needed for the Ops Manager and BOSH VMs to be deployed.

The process copies the OpsMan image from Pivotal, then deploys the OpsMan VM along with many other resources, including the load balancers, security groups, etc.

On my first few deployments, the automation would just sit and hang at this point, waiting for DNS to resolve so it could move forward with the installation and configuration of Ops Manager.

I found out, after re-reading Karsten’s readme file, that I needed to configure my subdomain’s DNS records.

Long story short: when I created the subdomain at my domain registrar, I also needed to add Azure’s name servers as the subdomain’s NS records. Azure has a lot of DNS name servers, so as soon as the Azure DNS zone was created in the new resource group, I copied the name servers that zone uses and added them as my subdomain’s NS records.
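
Rather than copying them out of the portal, the zone’s name servers can also be pulled with the CLI; a sketch with placeholder resource group and zone names:

# Name servers assigned to the Azure DNS zone (add these at your registrar)
az network dns zone show --resource-group <env-rg> --name <subdomain.yourdomain.com> --query nameServers -o tsv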

Once Ops Manager has been installed, you will need to switch over to other logs to monitor the status of the BOSH and PKS installations. You can see several different logs in the ~/logs directory on the jump host; the best one to monitor, from my experience, is the om_init.sh log file.
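
To find and follow them (the exact file names are my assumption and may differ between versions of Karsten’s scripts):

ls ~/logs                     # see which log files the automation has produced
tail -f ~/logs/om_init*.log   # follow the Ops Manager init log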

You can also log in to the Ops Manager website and watch the logs from the console there.

So now we have Ops Manager installed, BOSH Director installed and configured, and our Pivotal Container Service (PKS) tile installed and configured.

Once the deployment of the Pivotal Container Service (PKS) is completed, the automation continues. The next step created my first Kubernetes cluster: a small cluster with 1 master and 3 minions, oh I mean nodes. I still love the minion name. Go Kevin! Again, you can watch the status via the logs on the jump host.

After a few donuts, a quick nap, saving a kitten from the jaws of a wild boar, and chugging a Monster energy drink, everything was finished! I now have a newly deployed PCF foundation in Azure running Ops Manager, BOSH Director, and the Pivotal Container Service (PKS), along with my first Kubernetes cluster.


Validation Time and Tools Installation

Now I need to validate the installation. I will check to see if I can log in, and I will also install the PKS and Kubernetes CLIs onto the Linux host running on my desktop.

First we need to download the PKS CLI from Pivotal. The instructions can be found on the Pivotal documentation site, along with the instructions to install the K8s CLI:

PKS CLI
https://docs.pivotal.io/runtimes/pks/1-3/installing-pks-cli.html

Kubernetes CLI
https://docs.pivotal.io/runtimes/pks/1-3/installing-kubectl-cli.html

Validate!

So I have the CLIs installed. Now I need to test that I can log in using the PKS CLI.

pks login -a api.<subdomain>.<domain> -u k8sadmin -p <PIVNET_UAA_TOKEN> --skip-ssl-validation

I wanted to make sure that I could see my newly created K8s cluster. Once connected via the PKS CLI, run the following commands:

pks clusters

pks show-cluster k8s1

Everything looks good. Karsten points out something I would not have known: the UUID shown after running pks show-cluster <clustername> is the same identifier you will see in the Azure portal on the matching availability sets.

Testing Kubernetes Dashboard

The last thing I tested after deployment was the Kubernetes Dashboard. This had me a bit confused yet again; however, after a few hours of trying, I was able to get the Dashboard working. :)

From my Ubuntu WSL session I logged in using the PKS CLI.

pks login -a api.<subdomain>.<domain> -u k8sadmin -p <PIVNET UAA TOKEN> --skip-ssl-validation
pks get-credentials k8s1

Once logged on, it creates a config file that I need in order to access the K8s Dashboard. This config file is created locally at ~/.kube/config. I will admit, when I use the PKS CLI for Windows I can’t find this file anywhere, so I had to copy the config file from my Ubuntu WSL session to a Windows directory so I could access it, as shown below.
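
Since WSL can see the Windows file system under /mnt, the copy is a one-liner (the Windows-side path here is just an example):

# Copy the kubeconfig somewhere Windows tools can reach
cp ~/.kube/config /mnt/c/Temp/k8s-config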

Next I ran the following command to start the kubectl proxy from my local host:

kubectl proxy

I am now able to navigate to the following URL:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

The Kubernetes Dashboard will ask me to browse for that config file; select it, then click Sign In.

At this point I am ready to move on with managing my new PKS environment. That will be an entirely new blog, I think, or maybe two: one focusing on PKS management and the other on Kubernetes management.

Final Thoughts

Karsten is making this better and better with each release, and it sure beats doing it manually. I want to try this on Azure Stack soon, but I know we will need to play around with the deployments to get them to work. I have so much to learn and sometimes it seems like too little time! I will be honest, I was a little skeptical about PKS at first; however, the power it brings to managing an enterprise’s Kubernetes deployments is pretty cool.
