KubeCon Europe 2018 Takeaways

TAGS: K8S

This year’s KubeCon Europe (2018) was a massive and exceptionally well-organized event. Over four thousand people came to Copenhagen. There were a lot of very good talks and interesting people to meet.

Of course, I mostly focused on topics which are close to my professional interest: helping small teams to get started with Kubernetes by designing deployment pipelines and building internal clusters.

The questions I kept asking myself were:

  • How are small teams getting started with Kubernetes?
  • What are frequent surprises, gotchas and open questions?
  • How to smooth the transition towards being comfortable with Kubernetes?
  • Which deployment workflows do people use, and why?
  • What are frequent choices regarding CI/CD tooling and approaches?

I’d like to share my high-level takeaways from the event.

K8s is getting really big

Last year, KubeCon Europe took place in Berlin and about one thousand people attended. This year, it was over four thousand. That’s a 4x increase, and it made for a huge crowd.

Most people are in the process of starting out

No surprise, given the explosive growth in interest. Most of the attendees are at the beginning of their Kubernetes journey.

Companies are still evaluating how (and whether) to use Kubernetes in the near future, and are looking for more first-hand information to help them get started.

GitOps is gaining traction

Treating your configurations like code and using a versioning system have been regarded as best practices for a while now.

GitOps takes it one step further and makes everything revolve around typical Git workflows. The idea is not quite new, but it’s very nice to finally have a term for this type of workflow. The folks from WeaveWorks coined the term, and there were many talks which touched on the topic.

Your infrastructure configs reside in a Git repository (on GitHub, GitLab or Bitbucket for example), and changes can be proposed via pull requests. Proposed changes can be reviewed by the people who are responsible, or an automated pipeline can take over.

In the case of a Kubernetes cluster, you can use this approach to automate the creation of new namespaces, resource limits or security rules. Among other things, you get a history of changes, accountability and more control. Neat!
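
To make the workflow concrete, here’s a minimal sketch of the reconcile idea behind GitOps - my own illustration, not code from any of the talks. A small script keeps pulling a config repository and applies whatever is declared there to the cluster. The repository URL, directory layout and interval are made-up placeholders, and a real setup would rather use a dedicated GitOps tool than a hand-rolled loop.

```python
# gitops_sync.py - minimal GitOps-style reconcile loop (illustrative sketch).
# Assumptions: a repo at REPO_URL with a manifests/ directory, and kubectl
# already configured to talk to the target cluster.
import subprocess
import time
from pathlib import Path

REPO_URL = "git@github.com:example/cluster-config.git"  # hypothetical repo
CHECKOUT = Path("/tmp/cluster-config")
INTERVAL = 60  # seconds between sync runs


def sync_once() -> None:
    """Fetch the latest declared state and apply it to the cluster."""
    if CHECKOUT.exists():
        subprocess.run(["git", "-C", str(CHECKOUT), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", REPO_URL, str(CHECKOUT)], check=True)

    # kubectl makes the cluster match what the repository declares.
    subprocess.run(
        ["kubectl", "apply", "--recursive", "-f", str(CHECKOUT / "manifests")],
        check=True,
    )


if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(INTERVAL)
```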

Deploying to Kubernetes is a mixed bag

I was very interested in all topics around CI/CD (I’m weird like that), and made sure to attend as many talks as possible where deployment pipelines played a role. My conclusion: it’s a tricky, very individual topic and there is no “one way” to do it.

There are a lot of tools to choose from for every part of a deployment pipeline, and different styles of getting your code deployed. Some tools are pretty much interchangeable, others work better together, some tech stacks are best served with a particular choice. There’s no one-size-fits-all.

Thus, almost every company has its own set of tools, enabling a workflow that serves its product and team needs. Some end up writing their own tools for parts of their pipeline, because nothing else really fits.

Deployment pattern: custom resources and custom controllers

At the core of Kubernetes, you have a declarative approach. You describe how things should look and Kubernetes takes care to make it happen.

Conventional deployment methods are rather imperative. You specify what needs to happen at each step of the process.

By using Kubernetes custom resources and custom controllers, you can let your cluster take care of deploying your applications in a declarative fashion! That’s a very interesting pattern, and variations of it were presented in multiple talks.

Your build pipeline produces new deployment artifacts (like Docker images in a registry), and everything else is taken care of internally. That’s especially neat if you want to use complex deployment methods, or simply don’t want to give third-party tools access to your internal infrastructure and the credentials to do so.
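
To illustrate the pattern (my own sketch, not code from one of the talks), the snippet below watches a hypothetical AppDeployment custom resource with the Python Kubernetes client and creates a plain Deployment for every new one it sees. The group, version, plural and spec fields are assumptions, and the matching CustomResourceDefinition would have to be registered in the cluster beforehand.

```python
# app_controller.py - toy controller for a hypothetical "AppDeployment"
# custom resource (group/version/plural and spec fields are assumptions).
from kubernetes import client, config, watch

config.load_kube_config()  # use load_incluster_config() when running in-cluster

GROUP, VERSION, PLURAL = "example.com", "v1", "appdeployments"
custom_api = client.CustomObjectsApi()
apps_api = client.AppsV1Api()


def deployment_for(resource: dict) -> client.V1Deployment:
    """Translate the custom resource's spec into a regular Deployment."""
    name = resource["metadata"]["name"]
    spec = resource.get("spec", {})
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=spec.get("replicas", 1),
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name=name, image=spec["image"]),
                ]),
            ),
        ),
    )


# React to new AppDeployment objects - the cluster-internal, declarative part.
for event in watch.Watch().stream(custom_api.list_namespaced_custom_object,
                                  GROUP, VERSION, "default", PLURAL):
    if event["type"] == "ADDED":
        resource = event["object"]
        apps_api.create_namespaced_deployment(namespace="default",
                                              body=deployment_for(resource))
        print("created deployment for", resource["metadata"]["name"])
```

A real controller would also handle MODIFIED and DELETED events and reconcile existing objects, but the shape stays the same: the build pipeline only updates a custom resource, and the controller inside the cluster does the actual deploying.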

Most people shouldn’t develop on Kubernetes

I really don’t see a good reason for teams to even touch Kubernetes for their development workflows. Just develop locally, maybe run backing services in containers if you need them.

Sure, there are tools which make it possible to run a development server locally and have it be part of your Kubernetes setup on the cluster, but my impression is that very few use cases warrant this kind of effort.

A few of the issues you’ll face with most approaches are slow iteration times, frustrating debugging limitations and unnecessary complexity. If you feel like you need to rely on your Kubernetes cluster during development, your workflows are probably broken.

That serverless stuff is pretty neat

I haven’t been too enthusiastic about serverless so far. But after a lot of conversations and research, I have reconsidered.

There’s a lot to gain if you’re working with event-driven applications. FaaS can be an incredibly powerful tool, and new services like AWS Fargate (with virtual-kubelet) can make it possible to run any number of containers in a serverless fashion as if they were on your Kubernetes cluster.

Running workflows for data engineering and machine learning is the use case I’m most excited about.

Kubernetes for data plumbing, machine learning and batch processing

Here’s the biggest reason why I’m excited about serverless: with FaaS, you were really constrained when it came to machine learning and data processing tasks. The code usually doesn’t fit within the size limits, and it may need to run longer than a typical serverless function allows to be useful.

With the possibility to run serverless containers, a whole new world of very exciting possibilities is opening up.

Even without “serverless” in the mix, the Kubernetes ecosystem is growing and becoming more mature for hosting complex data-crunching tasks. You can abstract away a lot of boilerplate work while utilizing all available resources.
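
For instance, a one-off data-crunching task maps naturally onto a Kubernetes Job. The sketch below is my own illustration - the image name, command and resource numbers are placeholders - and simply submits such a Job with the Python client, letting the cluster schedule it wherever resources are available.

```python
# submit_job.py - run a one-off data-crunching task as a Kubernetes Job
# (illustrative sketch; image, command and resource values are placeholders).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="crunch-2018-05-04"),
    spec=client.V1JobSpec(
        backoff_limit=2,  # retry a failed pod at most twice
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="crunch",
                    image="registry.example.com/data-crunch:latest",
                    command=["python", "process.py", "--date", "2018-05-04"],
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "4Gi"},
                    ),
                )],
            ),
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

Once the Job finishes, its pods and logs stick around for inspection, and the same approach scales to many parallel tasks without any extra infrastructure.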

Security!

Right now, a lot of things related to Kubernetes are not secure by default. Most people don’t realize this until they start working on their company’s internal clusters.

After you have a functional cluster, there’s a lot to take care of before it is maintainable, usable and reliable. Most of those things have to do with security and observability in one way or another.

Luckily, security is this year’s focus in the Kubernetes contributor community! A lot of effort is being invested, and pretty much everybody will benefit from this.

Audit logs were mentioned in a few talks - a very useful feature once you have user management in place, and one that has only been available since Kubernetes 1.10. I’m looking forward to all developments around making clusters more secure and easier to use right. Oh, and to everything which makes multi-tenancy less of a pain.

You’ll need a team for that

When starting out with a small internal cluster, your team doesn’t need to have every topic covered, nor to know everything there is to know about Kubernetes. It’s a learning process: your cluster is non-critical and pretty much nothing can go wrong.

But if workloads are growing, critical applications start running on your cluster(s), or you even start to run clusters which are accessible from the outside world, you’ll need to reconsider.

“Who’s responsible for the security of your Kubernetes cluster?” is a well-meant question, and one you should be able to answer once Kubernetes is an important part of your infrastructure. Other areas require the same level of attention, though.

You’ll need people who are focusing completely on a single topic in the Kubernetes environment. There’s no other way to keep up and do a proper job. Nobody can know everything there is to know about running Kubernetes in a proper and reliable fashion.

Just One Talk

Haven’t watched any of the talks, don’t want to deal with something purely technical, but would like to see something with lots of interesting takeaways?

Check out my favourite talk of the event - quite possibly the best one of the whole conference - on YouTube!
