Helm is a great tool for automating deployments on Kubernetes because it allows you to describe the state of your deployment in a parameterized way and takes care of applying changes idempotently. I also love that you can roll back those changes if needed, because it tracks the differences in each deployment and stores that information in Kubernetes. Helm charts exist for many applications and can really improve your productivity when you need to deploy them in Kubernetes. But what about those of us who are deploying more complex applications or components across multiple environments and clusters? How you organize and structure your charts can make a world of difference in how they're built and maintained. The recipe I'm outlining in this post is how we're currently using Helm in the portal and micro-services platform in IBM Services for Managed Applications.

A Bit of Background

At IBM Services for Managed Applications, we run a portal platform and micro-services framework that gives customers the ability to interact with their managed services via a browser or via API. This platform allows teams to develop, deploy, and maintain their application and/or service independently of the others, so a consistent ecosystem for these components to run in is extremely important. While these applications and services are independently deployed and maintained, they do have dependencies on things like monitoring, databases, configuration, secrets, etc. We've created a Helm chart that builds out all of these dependencies so that once it's deployed in a new cluster or environment, teams have the ecosystem they need to deploy their applications and services into.

Some Challenges

Using IBM Cloud Kubernetes Service allows us to quickly create and scale Kubernetes clusters as needed as we expand our global footprint. Unfortunately, each cluster is confined to the region it was deployed in. This means that in order to have a presence in US-South and US-East, we need to have completely separate clusters that have no knowledge of each other. It follows that we need to deploy our platform Helm chart twice: once in each region, each with its own set of configurations.

We also have multiple environments. Not only do we have a Production environment, but we also have On-Deck, Staging, and Development environments. We probably could have found a way to have a single cluster for all environments, but each environment had its own requirements in terms of access policy, availability, network configuration, etc. It was just easier to isolate the clusters than increase complexity with segmentation within a single cluster.

There are some configuration values that are not sensitive and can be checked into source control. However, there are several things in our platform that are sensitive and cannot be: things like OAuth secrets, database credentials, API keys, etc. What we needed was a way to persist this information in source control, but in such a way that it didn't expose these items and only a select few could access them.

To summarize, the top issues we were facing:
  • Multiple Kubernetes clusters, in various regions globally.
  • Multiple environments (Production, Staging, etc.)
  • Securely storing our configuration in source control.

Helm Chart Structure

Defining a solid structure for the Helm chart can make a world of difference in how you deploy it in the various environments. For us, we created a hierarchical structure to store the configuration values in, with the environment at the top, followed by the region/location.
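Here's a sketch of that layout, reconstructed from the description that follows (the environment and region names are illustrative):

./
├── env/
│   ├── staging/
│   │   ├── secrets.yaml
│   │   ├── us-south/
│   │   │   └── values.yaml
│   │   └── us-east/
│   │       └── values.yaml
│   └── production/
│       ├── secrets.yaml
│       └── us-south/
│           └── values.yaml
├── services-platform/
├── setup/
└── util/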



With this structure, we put configuration that's common across the environment directly in the ./env/{environment} directory, and region-specific configuration in the ./env/{environment}/{region} directory. When we run helm, we just specify the appropriate values.yaml files for the region and environment we're deploying to. We realized at the beginning that manually passing this information in via the command line was error prone, so we wrote an automation script to do that for us (more on that later). In our situation, it just so happened that the sensitive information was common across the environment and did not differ between regions. This is why you see a secrets.yaml file for the environment, and a values.yaml file for the region.
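To make that split concrete, here's a hypothetical pair of files; every key and value below is made up for illustration, and secrets.yaml is shown in its decrypted form:

# ./env/staging/us-south/values.yaml (region-specific)
region: us-south
ingress:
  host: portal.us-south.staging.example.com

# ./env/staging/secrets.yaml (common across the environment)
database:
  username: platform
  password: not-a-real-password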

The services-platform directory is the actual helm chart, and the setup and util directories contain helper scripts to install helm, configure Kubernetes networking, etc.

To deploy/upgrade a chart with this structure, the command would look like this:

$ helm upgrade --install platform ./services-platform -f ./env/staging/us-south/values.yaml -f ./env/staging/secrets.yaml

As you can see, it allows flexibility but it's not something you'd want to hand type every time.

Securing the Configuration

For sensitive information, we're using the helm-secrets plugin. This plugin provides commands that will encrypt/decrypt a secrets file, and provides a wrapper command that will decrypt the secrets file, pass the commands on to helm, and delete the decrypted file. The encryption and decryption are handled with SOPS, which supports a number of encryption methods; in our case, we use PGP. Each environment has its own key pair, and a select few have the private key installed on their local machine. When helm-secrets sees a secrets.yaml file, it looks for a corresponding .sops.yaml file that tells it what the encryption method and public key are. Here's an example of what a file looks like in its encrypted state:
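The file below is a reconstructed sample rather than our actual secrets file, so the key names, fingerprint, and ciphertext are placeholders:

database:
    username: ENC[AES256_GCM,data:kx92FzQ1,iv:...,tag:...,type:str]
    password: ENC[AES256_GCM,data:9dJwQ2mV,iv:...,tag:...,type:str]
apiKeys:
    monitoring: ENC[AES256_GCM,data:Zm9vYmFy,iv:...,tag:...,type:str]
sops:
    pgp:
    -   created_at: '2019-01-15T16:23:41Z'
        enc: |
            -----BEGIN PGP MESSAGE-----
            ...
            -----END PGP MESSAGE-----
        fp: 0000000000000000000000000000000000000000
    lastmodified: '2019-01-15T16:23:41Z'
    mac: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
    version: 3.2.0

The corresponding .sops.yaml might look something like this, where the fingerprint is a placeholder for the environment's public key:

creation_rules:
    - pgp: '0000000000000000000000000000000000000000'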

What’s nice about this is you can still see the YAML structure within the file, which makes it really easy to reference when building your chart. To edit the file, you run a helm-secrets command that will decrypt the file and open it in your default text editor, then re-encrypt it when the editor saves the file and exits. For example, to edit the secrets file I run $ helm secrets edit ./env/staging/secrets.yaml. This decrypts the file and opens the decrypted file in my default editor. After I save the file and close the editor, helm-secrets re-encrypts the new file. Another thing to note is that if the private key is password protected, helm-secrets will prompt you for the password when decrypting the file, whether that's for editing or for actually applying the chart to a Kubernetes cluster.
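The plugin also has enc, dec, and view subcommands; for example, to print the decrypted contents to stdout without opening an editor, you can run $ helm secrets view ./env/staging/secrets.yaml.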

Instead of using the normal helm command, we now use helm-wrapper. The upgrade command from earlier now looks like this:

$ helm-wrapper upgrade --install platform ./services-platform -f ./env/staging/us-south/values.yaml -f ./env/staging/secrets.yaml


Automation

Referencing the various configuration files depending on your environment and region is definitely error prone. Not to mention, in order to apply the Helm chart, there is a series of steps you have to perform to be logged in and authenticated with IBM Cloud. I simplified this process by writing a deployment bash script (a sketch follows the list). When given the environment and region, the script performs the following actions:
  • Targets the appropriate IBM Cloud Resource Group
  • Sets the appropriate IBM Cloud Region.
  • Obtains the kubectl configuration and applies it.
  • Validates that the current kubectl settings are applied by comparing the output of a kubectl command to some settings obtained from the IBM Cloud CLI.
  • Runs helm-secrets passing in the appropriate environment and region configuration values.
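
Here's a minimal sketch of what such a script can look like. Our actual script isn't reproduced in this post, so the resource-group and cluster naming conventions below are assumptions, and the ibmcloud ks syntax may differ depending on your CLI version:

#!/usr/bin/env bash
# Usage: ./deploy.sh <environment> <region>, e.g. ./deploy.sh staging us-east
set -euo pipefail

ENV="$1"
REGION="$2"
CLUSTER="platform-${ENV}-${REGION}"   # hypothetical naming convention

# Target the appropriate IBM Cloud resource group and region
ibmcloud target -g "platform-${ENV}" -r "${REGION}"

# Obtain the kubectl configuration for the cluster and apply it
ibmcloud ks cluster config --cluster "${CLUSTER}"

# Validate that kubectl is now pointed at the cluster we expect
kubectl config current-context | grep -q "${CLUSTER}" || {
    echo "kubectl context does not match ${CLUSTER}" >&2
    exit 1
}

# Run helm via the helm-secrets wrapper with the right values files
helm-wrapper upgrade --install platform ./services-platform \
    -f "./env/${ENV}/${REGION}/values.yaml" \
    -f "./env/${ENV}/secrets.yaml"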

To deploy the platform chart to our staging cluster in us-east, we run $ ./deploy.sh staging us-east.

The setup.sh script is run once for a new cluster to add the tiller service account for helm, install helm on the cluster, and add the helm-secrets plugin.
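A rough sketch of those one-time steps, assuming Helm 2 with tiller and that tiller gets cluster-admin (our actual RBAC setup may be narrower):

#!/usr/bin/env bash
set -euo pipefail

# Create a service account for tiller and grant it access
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# Install tiller into the cluster (Helm 2)
helm init --service-account tiller

# Add the helm-secrets plugin locally
helm plugin install https://github.com/futuresimple/helm-secrets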
