There was a pretty strange issue with service annotations (kubectl would silently drop them, from what I can tell), which turned out to be caused by brew updating kubectl to 1.9.
How many different Kubernetes clusters and versions do you need to work with? What happens to your scripts and workflows when you switch to a newer version of kubectl?
Do you upgrade your tools and rely on backwards compatibility? What if you need to work with two clusters that are two minor versions apart?
I’d like to share my approach with you, which can be used to switch between clearly defined versions of kubectl and related tools, so you can be sure that everything works as expected.
My Situation
I help companies with Kubernetes. It starts with setting up a first on-prem Kubernetes cluster, and ends with transitioning towards a production-ready setup and a team that’s ready to work with it confidently.
Usually, there are at least a few different clusters within a single company - older k8s versions which were set up a while ago and are still in use.
As I work with more than one company, there’s a bunch of Kubernetes versions that I interact with on a regular basis.
Why Not Just Upgrade ALL THE CLUSTERS?
There’s better stuff for a business to spend time on. Companies that run their clusters on-prem don’t do it without non-technical reasons in the mix. There’s little point in incremental improvements which aren’t needed for anything more important than convenience.
Why risk breaking workflows and causing blocking issues when there are neither sufficient benefits for the company nor pressing security concerns?
Usually, new k8s features and improvements to the setup are less important than ensuring the reliability of workloads and the productivity of people who are working with them.
Can’t I Just Use A Single Kubectl Version?
That works if the version skew isn’t too great. Kubectl supports a skew of one minor version back and forward. You might even get away with two if you’re feeling lucky.
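A quick way to check where you stand is to compare the client version against the version of the cluster kubectl currently points to - if the minor versions differ by more than one, you’re outside the supported skew:
$ kubectl version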
Personally, I’m very fond of reproducible and reliable workflows.
For that, it’s essential to know the exact versions of all major tools which are important for a project, if the project isn’t an early prototyping experiment.
At the very least, I want to know that all tools are available and that everything will work as expected. It’s even cooler if you’re able to start working on a project from scratch after cloning a repo and issuing a single command.
When you need to use different versions of an executable from time to time, the most basic approach is to back up and overwrite files in a directory covered by PATH.
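Just for illustration, the binary-swapping approach boils down to something like this (the paths and version are made up for the example):
$ cp ~/local/bin/kubectl ~/local/bin/kubectl.bak
$ cp ~/Downloads/kubectl-1.9 ~/local/bin/kubectl
$ # ... work against the older cluster ...
$ mv ~/local/bin/kubectl.bak ~/local/bin/kubectl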
However, this gets old very quickly, and becomes a major pain if you need to use multiple kubectl versions during a single day.
When copying stuff around became tedious, I wondered: “How are other people handling similar issues within different ecosystems?”
Looking For Inspiration
Using particular application versions for different projects is not a new problem.
In the Python world, there’s virtualenv (or pipenv) which can be used to make sure that a project only has access to the modules it’s supposed to use, in the exact versions one expects. You can create an environment with a specific Python version, and be sure that system updates won’t mess anything up in the future.
The tool creates a folder with all necessary data, and a few helper scripts. You can activate an environment, and deactivate it again. Activating an environment makes sure that any call to python will end up with the virtualenv’s version instead of the other ones on your system. Also, you can see the name of the virtualenv you’re using in the command line.
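As a rough sketch of what that workflow looks like (the environment name and Python version are just an example):
$ virtualenv --python=python3.6 ~/envs/client-project
$ source ~/envs/client-project/bin/activate
(client-project) $ python --version
Python 3.6.x
(client-project) $ deactivate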
In the Go world, there’s a lot going on around switching between different versions of the language:
- There’s the Go Version Manager gvm,
- Gimme by the TravisCI folks,
- Homebaked solutions,
- Discussions,
- And more discussions…
The solutions you’ll find differ.
Some are simple scripts, and you need to specify the exact Go version you need in each command. Some help you switch between versions easily. Others even help you install any version you want with a single command. A lot of sophistication indeed!
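Gimme, for example, is meant to be sourced into the current shell, with the exact Go version you want passed as an argument - the version here is just for illustration:
$ eval "$(gimme 1.11)"
$ go version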
The fact that there are competing solutions makes it seem like it’s still a painful problem to deal with.
My Solution
I opted for keeping it simple.
For general purposes, I still have a ~/.kube folder, and the most recent versions of kubectl and similar tools in ~/local/bin, which is covered by the PATH environment variable. The same directory houses kubectx and kubens, which are not project-specific and sometimes nice to have around. (Both kubectx and kubens save data in the ~/.kube/ directory and need to be handled with care when switching between projects, but that hasn’t been a dealbreaker yet.)
The per-client environments are located in a different place. There’s one folder for each client, in a location which is not covered by PATH. Within the client-specific folder, there are subfolders for the relevant Kubernetes versions - something like client_name/1.11 for example. These folders contain a kubectl binary, and other binaries like helm, in the correct version.
client_name
└── 1.11
├── helm
├── kubectl
└── staging-cluster
├── config
└── k8s.sh
In the 1.11-style folders, there are subfolders per cluster (or project) using this version of Kubernetes. They are named after the project or purpose of the cluster. In each, there’s at least one kubeconfig file, possibly other config files, and a small k8s.sh bash script.
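One way to populate such a folder is to download the pinned binaries directly into it - here’s a sketch using the official kubectl release download URL, with the version and platform being just an example:
$ cd client_name/1.11
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.3/bin/linux/amd64/kubectl
$ chmod +x kubectl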
The k8s.sh script is just a few lines of code to set environment variables in a terminal session:
#! /bin/bash
# variables which are reused later
# a short name of the cluster, just for me
NAME="1.11 staging"
# a relative path to the kubeconfig file to use
CERT="config"
# Helper vars
# The absolute directory of the script - BASH_SOURCE instead of $0,
# as $0 does not point to this file when the script is sourced in bash
SCRIPT_DIR=$(dirname "$(readlink -f "${BASH_SOURCE[0]:-$0}")")
# The parent directory where kubectl resides
PARENT_DIR=$(dirname "$SCRIPT_DIR")
# First, prepend the binary directory, so binaries there are found first
export PATH="$PARENT_DIR:$PATH"
# Point kubectl to the right cluster config
export KUBECONFIG="$SCRIPT_DIR/$CERT"
# Use an individual helm directory for the project
export HELM_HOME="$SCRIPT_DIR/.helm"
# Add a bit of text to the terminal prompt (PROMPT in zsh, PS1 in bash)
export PROMPT="($NAME) $PROMPT"
The script can be sourced from anywhere, and works reliably.
It results in a slightly modified terminal prompt, so you can see which environment you’re using at the moment:
$ source 1.11/staging-cluster/k8s.sh
(1.11 staging) $
After sourcing it, you can be sure that you’re working with the right binary versions.
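If you want to double-check, you can ask the shell which kubectl it resolves now, and what version that binary is:
(1.11 staging) $ which kubectl
(1.11 staging) $ kubectl version --client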
A bit more finesse could be added (‘deactivating’ the environment, for example), but the script and folder structure have been working reliably for me. Just opening a new terminal tab to “exit the environment” works well enough.
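If you did want an explicit deactivate, a minimal sketch would be to save the old values at the top of k8s.sh, before they get overwritten, and restore them in a small function (the names below are made up and not part of the script above):
# remember the values the script is about to overwrite
OLD_PATH="$PATH"
OLD_KUBECONFIG="$KUBECONFIG"
OLD_PROMPT="$PROMPT"
# restore them on demand, and drop the project-specific helm home
k8s_deactivate() {
    export PATH="$OLD_PATH"
    export KUBECONFIG="$OLD_KUBECONFIG"
    export PROMPT="$OLD_PROMPT"
    unset HELM_HOME
}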
If I need to interact with a particular Kubernetes version, I open a new terminal tab, source the right k8s.sh script and can focus on the work that needs to be done instead of worrying about using the right binaries from then on. The prompt name is a nice reminder of the currently active environment.
Possible Improvements
You can do better than this home-baked script approach, and I’m sure there’ll be a set of nifty open source solutions to address this issue eventually.
Right now, I’m quite happy with that setup, but would not mind switching to something more structured and better maintained.
To my knowledge, there’s no reusable project which helps to deal with different kubectl versions out of the box right now.
The best match might be Envirius. It’s a “Universal Virtual Environments Manager”. Although it does not deal with Kubernetes stuff right now, I can see how it could be adjusted to be a good solution. But a Kubernetes-focused project which doesn’t need a lot of effort would be even better.
In Conclusion
So, that was my approach for handling multiple versions of kubectl and separating cluster-specific data by version and client.
I hope you can make use of the presented script approach to smooth out your workflows until there’s a better solution!
Although it works well enough for now, I’m looking forward to seeing new projects around this issue in the future - aimed at making different kubectl versions a bit easier to use out of the box.
By the way: are you working with kubectl day-to-day? If you want, drop me your email below and I’ll send you my top 5 tips to help improve your kubectl workflows.