Interview: Getting to Know BuzzBird

With a successful product and still-small tech team, they're about to make their first DevOps hire.

An interview with Klaus Breyer, CTO/CPO at BuzzBird
April 2018

Can you tell me a bit about BuzzBird?

BuzzBird is a startup with the goal of automating influencer marketing. Among other things, we’ve built a matching algorithm - an assistant that picks the right influencers for brands and also helps with the booking process. We cover the complete spectrum of cooperation between brands and influencers, including payment processing at the end. The team is growing - we are 20 people now, 7 developers among them if you count me in.

What’s most important for the company right now?

We need to build up our infrastructure for artificial intelligence solutions.

What are you currently using for deployment and automation in your team?

Currently we’re using AWS. Our frontend is hosted via Elastic Beanstalk - one deployment for our production environment, one for the corresponding workers, and further ones for the staging environment, the demo environment and others. Each is a single Elastic Beanstalk application.

Is that how you started?

We started out differently. In the beginning an external service provider took care of the setup. They created and configured an EC2 server and an AWS machine image. It was very opaque. You didn’t know where the configuration files were, and it was really hard to replicate.

With Elastic Beanstalk, you can replicate an environment with ease, without risking messing up the original one, as a single configuration file contains all relevant settings.

Replicating the existing setup in a second environment was the first thing I did once I took over responsibility myself. The production environment was already live by then. Setting up a mirrored system gave me a safe place where I could test things and understand the behaviour of the production system faster.

Doing so helped me gain a good overview in a relatively short amount of time, without having to deploy to the original system and risk disrupting the service. I don’t think it would have been possible that way without Elastic Beanstalk. Configuring a server is always a pain.

If I were a developer at BuzzBird and wanted to deploy - what does the process look like? What steps do I need to take?

If you’re developing a new feature, you’d need to check out the right Git branch. We’re working with feature branches, and each feature branch has its own Elastic Beanstalk config pointing to its own staging environment. Feature branches mean: each feature is developed in its own branch and merged back into the development branch once it’s ready. While developing the feature, you can use the Elastic Beanstalk CLI to deploy to the feature’s staging environment with a single command.
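As an illustration, a branch-to-environment mapping like the one described can live in the EB CLI’s `.elasticbeanstalk/config.yml`; the application, branch and environment names below are made up, not BuzzBird’s actual ones:

```yaml
# .elasticbeanstalk/config.yml (illustrative - all names are placeholders)
branch-defaults:
  develop:
    environment: app-staging
  feature/new-matching:
    environment: app-feature-new-matching
global:
  application_name: app-frontend
  default_region: eu-central-1
```

With a mapping like this in place, running `eb deploy` from a checked-out feature branch pushes the current commit to that branch’s environment.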

Behind the scenes, the tool takes the current Git commit, pushes it to the Elastic Beanstalk service and takes care of the details for deploying it, like running all necessary initialization scripts in the right order.

We have finally managed to align all configuration files across all environments; now environment variables are sufficient to configure everything. We already use Travis for CI and want to use it in the next step for automated deployment.
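As a sketch of what that next step could look like: Travis CI ships an Elastic Beanstalk deploy provider, so the deployment can hang off a successful build. All application, environment, bucket and region names below are placeholders:

```yaml
# .travis.yml (sketch - app, env, bucket and region are placeholders)
language: node_js
node_js:
  - "8"
script: npm test
deploy:
  provider: elasticbeanstalk
  access_key_id: $AWS_ACCESS_KEY_ID       # set as encrypted Travis settings
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  region: eu-central-1
  app: app-frontend
  env: app-production
  bucket_name: app-travis-artifacts       # S3 bucket for the uploaded bundle
  on:
    branch: master
```

Restricting the `deploy` step to one branch keeps feature-branch builds as test-only runs.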

What’s most important for you about your current setup?

We’re using other AWS services, like S3 for storage or RDS for databases, but Elastic Beanstalk is the most important part for us. It frees us from the responsibility of setting up and maintaining servers. Not having to deal with this means that the team and I can focus on other tasks instead.

What’s the biggest problem?

The architecture is not perfect, especially if you want to use multiple languages. You can’t have two programming languages in a single Elastic Beanstalk application - Node and PHP, for example.

At the moment we are very busy with serverless, AWS Lambda in particular, especially in the area of data processing. There are incredible synergies when you combine Lambda with Kinesis or with S3 event triggers.
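As a minimal sketch of the S3-trigger side of such a setup: an S3 upload invokes a Lambda function with a standard S3 event notification, from which the handler extracts which objects to process. The actual processing (e.g. forwarding into Kinesis) is omitted here:

```python
import urllib.parse

def handler(event, context):
    """Minimal S3-triggered Lambda handler: collect the bucket and key of
    each uploaded object so downstream processing can fetch it.
    (The processing itself - e.g. streaming into Kinesis - is left out.)"""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event notifications, so decode them
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append({"bucket": bucket, "key": key})
    return {"processed": processed}
```

Because each upload triggers its own invocation, thousands of objects fan out into parallel executions without any queue management on our side.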

Another area where Lambda comes in quite handy is tools like Headless Chrome. We’d really like to use it for taking screenshots or generating PDFs - you can generate thousands of PDFs in parallel. Integrating it into Elastic Beanstalk, however, is simply not possible without a lot of effort. The images are built to be complete and don’t leave much room for extensive customization - apart from executing a few custom PHP scripts, for example.

But serverless doesn’t solve all problems either. Machine learning in particular, with its great demand for compute power, can hardly be handled with it at all. That is why we are currently working on Docker-based solutions in parallel.

What’s your biggest learning regarding deployment, infrastructure and automation?

I’d pick the approach of setting up a mirrored system in the beginning. You can take over responsibility for a live system and get to know it quickly by building a shadow setup where you can try things out.

It really makes sense when you can’t be sure where side effects might occur or what the quality of the code is - especially if there is low test coverage and a lot of time pressure.