An Overview of Docker Networking

Networking is a topic I have been tip-toeing around for a while. It always takes a bit of effort and intentionality to learn basics like these, which are not immediately needed for daily work.

Here’s the overview I wish I had of Docker networking for practical purposes. I hope it helps you get a feel for the topic.

The Docker docs on networking are cool. However, I felt like I needed to read a lot and look in many different places while researching the topic.

The Networks On Your Host

Learning by doing is a great way to get started!

What networks exist on your current Docker host? You can find out with

$ docker network ls

There should be at least three entries: bridge, host and none.
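On a fresh installation, the output looks something like this (the network IDs will differ on your machine; this is just an illustration):

NETWORK ID     NAME      DRIVER    SCOPE
d6a4ce6ed0fa   bridge    bridge    local
9f8a3b9aa4c1   host      host      local
6a2bbbc169d3   none      null      local

Note how each of the three default networks uses a different driver.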

Those are the default networks Docker brings along. If you’re using docker-compose, you’ll see a few additional networks named after your projects. Their auto-generated names end in _default, and they are bridge networks.

Bridge Networks

Alright, so you want a bunch of containers to see each other, and reach the internet, but not have access to other network services running on the host?

Bridge networks are the tool of choice.

By default, a new bridge network is created for each docker-compose stack. All containers running in the stack are attached to the same bridge network, and they can reach each other.

The Default Bridge Network Is Different

User-defined bridges provide automatic DNS resolution between containers.

If you start a container without specifying a network, it will be attached to the default bridge network.

No DNS resolution is provided there, so those containers can only reach each other via IP addresses. (Well, except by using the --link option, but that’s considered a legacy approach by now.)

However, if you run your containers on a user-defined network, they are able to reach each other by name. This is why you can reach your “db” container from your “app” container when using docker-compose without having to put in much effort.
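You can try this out yourself. Here’s a minimal sketch (the network and container names are just examples I made up):

$ docker network create my-net
$ docker run -d --name db --network my-net redis
$ docker run --rm --network my-net alpine ping -c 1 db

The last command resolves the name “db” via Docker’s embedded DNS server and should succeed. Run the same two containers on the default bridge network instead, and the ping by name will fail.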

If you’re interested in the nitty-gritty, the underlying machinery is surprisingly complicated by the way.

Which Network Does This Container Use?

You can find out by inspecting the container.

Look up the ID of the container which interests you via:

$ docker container ls

And inspect it with:

$ docker container inspect $THE_ID_FROM_ABOVE

Look for the "Networks" entry; it will contain information about which networks are being used.

Sidenote: I like to use the less command to search through these kinds of long command outputs. Modify the command from above to be docker container inspect $THE_ID_FROM_ABOVE | less and type /Networks followed by Enter to search through the output. Press n or N to jump between matches. You can get out by pressing q.
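If you only care about the networks, you can also skip the searching entirely and ask for that part of the output directly. The --format flag takes a Go template:

$ docker container inspect --format '{{json .NetworkSettings.Networks}}' $THE_ID_FROM_ABOVE

This prints just the Networks section as JSON, which is handy in scripts.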

Host Network

If you run containers on a bridge network, they are isolated from other bridge networks and the host.

When you run a container on the host network, it’s able to see everything going on in the host’s network. Essentially, you are skipping network isolation for that container.

This also means that publishing ports is pointless when running on the host network. The services your container launches bind to ports on the host’s interfaces already.
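For example, running nginx like this makes it listen on port 80 of the host directly, with no -p flag involved (assuming nothing else occupies that port; note that host networking behaves this way on Linux hosts):

$ docker run --rm --network host nginx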

How Are Restrictions Enforced?

This happens via iptables. It’s a tool which can be used to restrict and allow certain kinds of traffic between networks.

Docker manages those rules for you so everything runs smoothly. Here’s a nasty gotcha: if you use another iptables-based on-host firewall (like ufw), chances are that the rules Docker sets will clash with your firewall rules.

In practice this means that your published ports can be accessible from the outside despite your firewall rules. To save yourself some headaches, use an external firewall in front of your Docker hosts; this way you won’t have to worry about iptables rules interfering with each other.
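If you’re curious, you can look at the rules Docker maintains yourself. Docker keeps them in chains of their own (this requires root):

$ sudo iptables -L DOCKER -n
$ sudo iptables -L DOCKER-USER -n

The DOCKER chain holds the rules Docker generates for your containers; the DOCKER-USER chain is the one intended for rules you add manually, as Docker won’t overwrite it.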

The Null Type

If you don’t want a container to have access to any network, you tell it to use the none network.

Or you use it to tell Docker to leave the networking of this particular container alone, so you can set up custom network drivers, which is…
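Attaching a container to the none network looks like this; the container ends up with nothing but a loopback interface:

$ docker run --rm --network none alpine ip addr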

Fancy Stuff

There’s a lot more to Docker networking than listed here. To be honest, that’s almost the edge of my current understanding.

There are overlay networks, which make it possible for containers on different hosts to talk to one another. They’re an essential requirement for orchestrators like Swarm or Kubernetes.

Legacy applications supposedly can be in need of the macvlan type, and there are networking plugins, but hands-on experience with these topics is not something I have consciously encountered in the past few years.

I’d be curious to dig deeper in the future though! Especially, to discover if there are topics which can help with practical problems.

In Conclusion

I hope this small overview was useful to you!

In essence, I think it’s important to know that bridge networks can help isolate groups of containers from each other, and that bridge networks you create yourself provide name resolution between containers while the default bridge network doesn’t.

It’s possible to run containers on the host network, although that usually shouldn’t be what you want. It’s also good to know that iptables is beneath most of the fancy networking functionality, just in case.

Networking has been one of those topics I have taken for granted, or actively avoided for a long time. I think it’s a great idea to invest the time and learn about the basic ins and outs of it though! Chances are, I’ll be doing so myself, and hopefully writing more about it in the future.

I’m curious if there’ll be a few more things to add to my personal “things I wish I learned earlier about Docker” list through this work.