
Docker Networking

BRIDGE

The bridge network driver allows containers on the same bridge to communicate with each other and provides external connectivity using NAT. Below I create a local bridge, then run two new containers using that bridge:

docker network create --driver bridge \
                      --subnet 192.168.199.0/24 \
                      local_bridge
docker run -itd --network=local_bridge --name HOST1 i386/ubuntu
docker run -itd --network=local_bridge --name HOST2 i386/ubuntu
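Before touching the containers themselves, you can confirm what addresses Docker handed out by inspecting the network from the host; the "Containers" section of the output lists each container with its 192.168.199.0/24 address:

docker network inspect local_bridge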

Next I installed iputils-ping and net-tools (for ping and ifconfig), then tried to ping from HOST2 to HOST1. HOST2 received 192.168.199.3 and HOST1 is using 192.168.199.2:

HOST2$ ping 192.168.199.2
PING 192.168.199.2 (192.168.199.2) 56(84) bytes of data.
64 bytes from 192.168.199.2: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 192.168.199.2: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 192.168.199.2: icmp_seq=3 ttl=64 time=0.047 ms
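Worth noting: on a user-defined bridge like this one, Docker's embedded DNS also resolves container names, so you don't strictly need the IPs (a quick sketch from HOST2; this does not work on the default bridge):

HOST2$ ping -c 2 HOST1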

INTERNAL ONLY BRIDGE

If you need containers to be able to communicate with each other, but would not like them to have external connectivity, then you can create a bridge using the --internal option. This just tells Docker not to create an iptables NAT rule.

# As a test, I create a temp network bridge without specifying internal:
docker network create --driver bridge \
                      --subnet 10.0.0.0/24 temp

Then check out the Docker host's iptables NAT rules:
iptables -t nat -L
....
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.0.0/24          anywhere
MASQUERADE  all  --  172.17.0.0/16        anywhere

# The 172.17.0.0/16 is my default bridge that is automatically created by Docker.
# Now I'll delete the temp bridge and create it again with --internal
docker network rm temp
docker network create --driver bridge \
                      --subnet 10.0.0.0/24 \
                      --internal temp

# Since it's marked as internal, Docker does not create a NAT entry for the 10.0.0.0/24 subnet.
iptables -t nat -L
....
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Now any containers you toss into the 'temp' network bridge will be able to communicate with each other, but will not have any external connectivity.
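A quick way to prove that out (a sketch; the container name here is arbitrary):

docker run -itd --network=temp --name internal_test i386/ubuntu
docker exec -it internal_test apt-get update
# apt-get should fail to reach the mirrors since there is no NAT rule,
# while pinging another container on 10.0.0.0/24 still works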

MACVLAN

This network driver lets you put containers directly onto your physical network, so you can ping them or access them directly from other machines on your network. I was not able to get this to work with Windows, so it looks like it's only available on Linux versions that support macvlan.

The downside is that, right now, Docker does not support your containers pulling an IP address from your own DHCP server, so you have to specify your subnet as well as give the network an IP range to hand out addresses from. Make sure these are excluded from your DHCP server to avoid any IP conflicts in the future. I also had to specify the parent interface on the Docker node to use, as it seemed to connect the network to the docker0 interface on my node by default.

Below, the subnet is my home network subnet, 192.168.0.0/24. The range I'm assigning Docker to give the containers is 192.168.0.128/28, and the gateway is my home router, 192.168.0.1.

lsmod | grep macvlan
macvlan                24576  0
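If that lsmod check comes back empty, the module can usually be loaded manually first (standard on mainline kernels):

sudo modprobe macvlan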
docker network create --driver macvlan \
                      --subnet 192.168.0.0/24 \
                      --ip-range 192.168.0.128/28 \
                      --gateway 192.168.0.1 \
                      --opt parent=eth0 home_net
docker run -itd --network=home_net --name home_ubuntu i386/ubuntu
docker exec -it home_ubuntu bash

I tested by running apt-get update, which was successful, then installed iputils-ping and net-tools so I could run ifconfig and ping:

root@9f7f13bd134f:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:10:05:80
          inet addr:192.168.0.128  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:580/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22087 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21216 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28702243 (28.7 MB)  TX bytes:2074257 (2.0 MB)

root@9f7f13bd134f:/# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=63 time=0.417 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=63 time=0.410 ms
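One macvlan caveat worth knowing: the Docker host itself cannot reach these containers through the parent interface, because the kernel deliberately isolates macvlan children from their parent. Other machines on the LAN can ping them fine (as above), but if you also need host-to-container traffic, a common workaround is a host-side macvlan interface. A sketch, assuming iproute2 and eth0 as the parent; the interface name and 192.168.0.144 are placeholders, and the address should be unused on your LAN:

sudo ip link add macvlan_shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.0.144/32 dev macvlan_shim
sudo ip link set macvlan_shim up
sudo ip route add 192.168.0.128/28 dev macvlan_shim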

SWARM

Networking between swarm nodes is performed by creating the swarm and then building an overlay network, which uses VXLAN between the swarm nodes. You then create a service and map it to the overlay. It sounds complicated, but the config is extremely easy. The swarm consists of a swarm manager and one or more worker nodes.

1. Setup the swarm on the manager node

sudo docker swarm init --advertise-addr 192.168.0.1
Swarm initialized: current node (dgg163to9z35gjgo3o7dimi0c) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2euj56reqj23fio3oi6nnx5o3z9nbn4dnrw7hryt3hhibt522m-24uc3lm4fhpbhr6lus0s240tk \
    192.168.0.1:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
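If you lose track of the token later, the manager can reprint the full worker join command at any time:

sudo docker swarm join-token worker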

2. Join the worker nodes to the swarm

C:\WINDOWS\system32>docker swarm join ^
                    --token SWMTKN-1-2euj56reqj23fio3oi6nnx5o3z9nbn4dnrw7hryt3hhibt522m-24uc3lm4fhpbhr6lus0s240tk ^
                    192.168.0.1:2377
This node joined a swarm as a worker.
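Back on the manager, you can verify the membership; the manager shows up with Leader status and the worker appears alongside it:

sudo docker node ls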

3. Configure the overlay network

sudo docker network create --driver overlay \
                           --subnet 10.0.0.0/24 overlay_net10
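Note that overlay networks have to be created from a manager node. You can confirm it exists with a filtered network list:

sudo docker network ls --filter driver=overlay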

4. Create a new service and map it to the overlay

sudo docker service create --network overlay_net10 \
                           --name ubuntu \
                           --replicas 2 i386/ubuntu sleep infinity
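To see which node each replica was scheduled on, you can list the service's tasks from the manager:

sudo docker service ps ubuntu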

Verify the new containers are now running:

on worker:
C:\WINDOWS\system32>docker ps -a
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
70219248658c        i386/ubuntu:latest   "sleep infinity"    2 minutes ago       Up 2 minutes                            ubuntu.2.388eco4qhasr0axmeey7diyfq

on manager:
sudo docker ps -a
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
af03b9127d06        i386/ubuntu:latest   "sleep infinity"    4 minutes ago       Up 4 minutes                            ubuntu.1.6b0m43h9ydtuycmzav2mmat87

Now to test by pinging each other. I connected to each container and installed ping. I'll also note that I initially set this up using an Ubuntu box with a Windows box as a worker node, but it turns out Windows nodes are not fully supported, so it didn't work. I ended up bringing up another Ubuntu box as a worker node to complete this test. In this case, the container on the manager was 10.0.0.4 and the worker container was 10.0.0.3.

root@a3a213806b79:/# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.666 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=1.31 ms
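When you're done testing, teardown is just as simple; remove the service and then the overlay network from the manager:

sudo docker service rm ubuntu
sudo docker network rm overlay_net10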