The Morphic Insanity of Docker “Networking”
Docker’s “networking” is overly complex, poorly documented, and defaults to behavior that misleads users trying to build real, routed, Linux-native networks. It hides critical aspects like iptables and routing while failing silently when key conditions are unmet. For full control, you must disable Docker’s networking and wire it all manually (via `veth`, `brctl`, `ip netns`, etc.).
Key Concepts Recap:
Feature | Scope | Role |
---|---|---|
`--opt com.docker.network.bridge.gateway_mode_ipv4=routed` | Per Docker network | Enables routed mode for that specific network. This avoids NAT; the host just routes. |
`"allow-direct-routing": true` | Global (`daemon.json`) | Lets Docker accept routed traffic between physical interfaces and containers. Without it, routed traffic from other host interfaces never reaches your containers. |
They Work Together, Not Mutually Exclusive
If your Docker network is set to routed bridge mode, you must also enable `"allow-direct-routing": true`; otherwise, traffic from external hosts will not reach your containers.
Why Confusion Exists
Old documentation made it seem like `"allow-direct-routing"` replaced routed bridge mode.
Truth: you can use `"allow-direct-routing": true` without routed bridge mode, if you manage routing manually (via `ip route`, `iptables`, etc.). But this is an advanced and error-prone path.
In practical use:
- `gateway_mode_ipv4=routed` sets up Docker-managed routed networking.
- `"allow-direct-routing": true` ensures the kernel and Docker will forward packets correctly.
- Without both, you’ll have broken connectivity.
Summary Matrix
Scenario | Routed Mode | allow-direct-routing | Works? | Notes |
---|---|---|---|---|
NAT (default Docker bridge) | ❌ | ❌ or ✅ | ✅ | NAT hides containers; only the host IP is exposed |
Routed mode only | ✅ | ❌ | ❌ | External traffic won’t reach containers |
Manual routing (DIY) | ❌ | ✅ | ✅ | Advanced; you set up all routes manually |
Proper routed setup | ✅ | ✅ | ✅ | Best practice for direct IP access |
Correct Setup
daemon.json:
```json
{
  "allow-direct-routing": true
}
```
Create Docker network:
```bash
docker network create \
  --subnet=172.20.20.0/24 \
  --gateway=172.20.20.1 \
  --opt com.docker.network.bridge.name=net2020 \
  --opt com.docker.network.bridge.enable_ip_masquerade=false \
  --opt com.docker.network.bridge.gateway_mode_ipv4=routed \
  net2020
```
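A quick sanity check (a sketch, assuming the `net2020` network created above):
```bash
# Confirm the bridge options Docker recorded for the network
docker network inspect -f '{{json .Options}}' net2020

# Start a throwaway container on the routed network and look at its address
docker run --rm --network net2020 alpine ip addr show eth0
```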
Official Docker References
Documentation links:
- “Docker Daemon Configuration File (daemon.json)”
- GitHub PR: Add routed mode for bridge networks
- Docker Networking: Routed Bridge Mode PR and discussion
KEY SETTINGS AFFECTING ROUTING, NAT, AND ISOLATION (Docker + Kernel)
1. `--opt com.docker.network.bridge.gateway_mode_ipv4=routed`
- Enables routed bridge mode (no NAT, no iptables MASQUERADE).
- Containers get their own subnet, and packets flow like normal L3 routing.
- Must be paired with `"allow-direct-routing": true`.
2. "allow-direct-routing": true
(/etc/docker/daemon.json
)
- Global setting that enables Linux kernel to route from physical interfaces to container bridges.
- Required for any routed traffic to reach containers.
- Also allows multi-interface forwarding without masquerade.
3. `--opt com.docker.network.bridge.enable_ip_masquerade=false`
- Required to prevent NAT in routed setups.
- NAT defeats the purpose of routed mode, which preserves original source IPs.
4. `--opt com.docker.network.bridge.enable_icc=true`
- `icc` = Inter-Container Communication.
- Allows containers on the same bridge network to talk to each other.
- `false` isolates containers from one another, even on the same subnet.
5. `--opt com.docker.network.bridge.name=net2020`
- Custom bridge interface name.
- Lets you reference the bridge explicitly in `iptables`, `ip route`, etc.
- Useful for consistency and troubleshooting (auto-generated `br-xxxxx` names are unpredictable otherwise).
ADVANCED INTERACTIONS
ip_forward (kernel sysctl)
```bash
sysctl -w net.ipv4.ip_forward=1
```
- Must be enabled on the host to allow any routed packets to forward between interfaces (including Docker bridges).
- If disabled, all routing fails.
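Note that `sysctl -w` does not survive a reboot. A standard way to persist it is a drop-in under `/etc/sysctl.d/` (the filename below is arbitrary):
```bash
# Persist IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system   # apply all sysctl configuration now
```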
Firewall / iptables
- Even in routed mode, the FORWARD chain will silently drop packets unless you add:
```bash
iptables -A FORWARD -i eno1 -o net2020 -j ACCEPT
iptables -A FORWARD -i net2020 -o eno1 -j ACCEPT
```
- Or set the policy to ACCEPT:
```bash
iptables -P FORWARD ACCEPT
```
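Note: when Docker manages iptables, it inserts its own rules at the top of FORWARD, so appended rules may never be reached. The documented hook for user rules that take precedence is the `DOCKER-USER` chain; a sketch using the same interface names as above:
```bash
# Rules in DOCKER-USER are evaluated before Docker's own FORWARD rules
iptables -I DOCKER-USER -i eno1 -o net2020 -j ACCEPT
iptables -I DOCKER-USER -i net2020 -o eno1 -j ACCEPT
```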
`com.docker.network.bridge.host_binding_ipv4`
- Binds published container ports to this IP (defaults to `0.0.0.0`).
- Rarely needed, unless building hybrid NAT + routed setups.
`com.docker.network.bridge.default_bridge`
- Marks a custom network as the default bridge network. Avoid this in routed setups; it leads to routing conflicts.
SAMPLE FULL COMMAND
```bash
docker network create \
  --driver=bridge \
  --subnet=172.19.13.0/24 \
  --gateway=172.19.13.1 \
  --opt com.docker.network.bridge.name=net1913 \
  --opt com.docker.network.bridge.gateway_mode_ipv4=routed \
  --opt com.docker.network.bridge.enable_ip_masquerade=false \
  --opt com.docker.network.bridge.enable_icc=true \
  net1913
```
And in `/etc/docker/daemon.json`:
```json
{
  "allow-direct-routing": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```
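Changes to `daemon.json` only take effect after a daemon restart:
```bash
systemctl restart docker
```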
TL;DR: Best Practice Checklist
Setting | Purpose | Required? |
---|---|---|
`gateway_mode_ipv4=routed` | Routed bridge mode | ✅ |
`"allow-direct-routing": true` | Enable L3 kernel routing | ✅ |
`enable_ip_masquerade=false` | Disable NAT | ✅ |
`enable_icc=true` | Allow inter-container traffic | ⚙️ depends on isolation policy |
`net.ipv4.ip_forward=1` | Kernel routing switch | ✅ |
FORWARD iptables rules | Let routed packets pass | ✅ |
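A minimal verification sketch covering each row (assumes the `net2020` network from earlier; adjust names to your setup):
```bash
#!/bin/bash
# Kernel routing switch
sysctl net.ipv4.ip_forward

# FORWARD policy and rules
iptables -S FORWARD | head

# Bridge options Docker recorded for the routed network
docker network inspect -f '{{json .Options}}' net2020
```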
ROOT CAUSES OF CONFUSION
1. Old Docker ≠ New Docker
- Routed bridge mode is a recent addition to Moby/libnetwork and wasn’t documented clearly.
- Most blogs and examples still show NAT-based setups.
- `"allow-direct-routing"` was added around the same time, but its use case depends on context, which the docs never fully explain.
2. Two Layers in Conflict
Docker tries to manage:
- High-level networking (Docker bridge, container isolation)
- While you want low-level Linux routing, like a real network stack
These conflict unless you explicitly override Docker defaults (masquerade, ICC, etc.).
3. Misleading Defaults
- Docker enables NAT (`enable_ip_masquerade=true`) even if you’re trying to build a routed network, unless you tell it not to.
- It enables ICC, which might be silently disabled at the kernel level depending on other configuration.
- Docker’s docs never show all the knobs at once, so you end up guessing.
4. Docker Hides What Linux Is Doing
- `iptables` rules? Hidden.
- `ip route`? Manipulated.
- Traffic paths? Obscured unless you use `tcpdump`, `ip rule`, and `brctl`.
TRANSLATION: What Docker Should Have Said
“If you want real routed, L3-style networking between your containers and the outside world:
- Use `gateway_mode_ipv4=routed`
- Set `"allow-direct-routing": true` in `daemon.json`
- Disable NAT (`enable_ip_masquerade=false`)
- Make sure the kernel has `ip_forward=1`
- Add `iptables -A FORWARD` rules if needed
And don’t expect Docker to tell you if you’re missing one; it’ll just break silently.”
What You Really Need to Remember
If you’re building routed container networks and want real IP visibility:
```
# docker network
--opt com.docker.network.bridge.gateway_mode_ipv4=routed \
--opt com.docker.network.bridge.enable_ip_masquerade=false \
--opt com.docker.network.bridge.enable_icc=true

# daemon.json
{
  "allow-direct-routing": true
}

# kernel
sysctl -w net.ipv4.ip_forward=1

# firewall (iptables)
iptables -A FORWARD -i enoX -o net2020 -j ACCEPT
iptables -A FORWARD -i net2020 -o enoX -j ACCEPT
```
And verify with:
```bash
ip rule show
ip route
iptables -nvL
```
To make Docker do absolutely nothing to iptables, routes, masquerading, DNS, or anything networking-related, so that you control 100% of the networking manually, you need to:
1. Disable Docker’s iptables management
In `/etc/docker/daemon.json`:
```json
{
  "iptables": false,
  "ip-forward": false,
  "ip-masq": false,
  "bridge": "none"
}
```
Explanation:
- `"iptables": false`: Docker won’t touch iptables, period.
- `"ip-forward": false`: Docker won’t try to enable kernel forwarding (you’ll do it yourself).
- `"ip-masq": false`: Docker won’t set up NAT rules.
- `"bridge": "none"`: Docker won’t auto-create or use the default `docker0` bridge.
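Once the daemon restarts with these settings, you can confirm it is leaving netfilter alone (with `"iptables": false`, no DOCKER chains should exist):
```bash
# Should print nothing if Docker is no longer managing iptables
iptables-save | grep -i docker
```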
2. Avoid all Docker-created networks
When launching containers, use:
```
--network none
```
Or use a custom manual network: one you create and plumb externally using `ip link`, `brctl`, `macvlan`, `ipvlan`, or `veth`.
Example:
```bash
docker run --rm -it --network none alpine sh
```
You can then manually:
- `ip link add` a `veth` pair
- `ip link set ... netns` to move one end into the container’s network namespace
- use `brctl` or `ip link set ... master` to attach the other end to your bridge
(a complete script follows at the end of this article)
3. Prevent DNS Injection
Docker often injects `/etc/resolv.conf` entries based on host settings. Override with:
```bash
docker run --dns=none --dns-search= --rm -it alpine
```
Or bind-mount your own clean file:
```
-v /dev/null:/etc/resolv.conf
```
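For example (a sketch using the bind-mount approach; the container reads an empty file):
```bash
# resolv.conf inside the container is now empty
docker run --rm -v /dev/null:/etc/resolv.conf alpine cat /etc/resolv.conf
```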
4. Disable systemd or Docker’s modifications of sysctls
Ensure your own values (like enabling `net.ipv4.ip_forward`) are not overridden.
Check:
```bash
sysctl net.ipv4.ip_forward
```
And disable `docker.service.d/bridge.conf` or other auto-applied drop-ins:
```bash
mkdir -p /etc/systemd/system/docker.service.d/
echo -e "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd" > /etc/systemd/system/docker.service.d/override.conf
systemctl daemon-reload
systemctl restart docker
```
5. Disable default networks
Docker automatically brings up `bridge`, `host`, and `none`.
There’s no built-in way to remove `host` and `none`, but `bridge` can be disabled via:
```json
"bridge": "none"
```
And you can simply avoid using the others.
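You can confirm what remains (with `"bridge": "none"` set, only `host` and `none` should be listed):
```bash
docker network ls
```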
6. (Optional) Build completely custom manual networks
You can construct and assign your own interface to a container like this:
```bash
# Create veth pair
ip link add veth-host type veth peer name veth-cont

# Attach host side to your bridge (assumes br0 already exists)
brctl addif br0 veth-host
ip link set veth-host up

# Move the other end into the container's network namespace
pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
ip link set veth-cont netns $pid
nsenter -t $pid -n ip link set veth-cont up
nsenter -t $pid -n ip addr add 192.168.100.5/24 dev veth-cont
```
Summary
Feature | Setting |
---|---|
Disable iptables | `"iptables": false` |
Disable masquerading | `"ip-masq": false` |
Disable IP forwarding | `"ip-forward": false` |
Disable Docker bridge | `"bridge": "none"` |
No auto DNS | `--dns=none --dns-search=`, or bind-mount an empty file |
No Docker-managed networks | `--network none` |
No container NAT or masquerade | Never use the default bridge |
No systemd interference | Override any drop-ins like `bridge.conf` |
Manual container net plumbing | Use `ip link`, `ip netns`, `brctl`, etc. |
Here’s a full bash script that disables Docker’s networking automation and sets up manual, full-control networking using a custom bridge and a veth pair: no iptables, no NAT, no Docker networks; you control 100% of it.
STEP 0: Prepare Docker daemon
Edit `/etc/docker/daemon.json`:
```json
{
  "iptables": false,
  "ip-forward": false,
  "ip-masq": false,
  "bridge": "none"
}
```
Then:
```bash
systemctl restart docker
```
STEP 1: Bash script for manual container networking
```bash
#!/bin/bash
set -e

# Configurable
BRIDGE=br_manual
VETH_HOST=veth-host
VETH_CONT=veth-cont
CONTAINER_NAME=manualnet
CONTAINER_IP=192.168.77.10
SUBNET=192.168.77.0/24

# 1. Create custom Linux bridge if it doesn't exist
if ! ip link show "$BRIDGE" >/dev/null 2>&1; then
  ip link add name "$BRIDGE" type bridge
  ip addr add 192.168.77.1/24 dev "$BRIDGE"
  ip link set "$BRIDGE" up
fi

# 2. Start container with no Docker-managed network
docker run -d --rm --network none --name "$CONTAINER_NAME" alpine sleep 1d

# 3. Create veth pair
ip link add "$VETH_HOST" type veth peer name "$VETH_CONT"

# 4. Attach host side to bridge
ip link set "$VETH_HOST" master "$BRIDGE"
ip link set "$VETH_HOST" up

# 5. Move container side into container's netns
PID=$(docker inspect -f '{{.State.Pid}}' "$CONTAINER_NAME")
ip link set "$VETH_CONT" netns "$PID"

# 6. Configure container network interface
nsenter -t "$PID" -n ip link set "$VETH_CONT" up
nsenter -t "$PID" -n ip addr add "$CONTAINER_IP"/24 dev "$VETH_CONT"
nsenter -t "$PID" -n ip route add default via 192.168.77.1

echo "Container '$CONTAINER_NAME' is running with IP $CONTAINER_IP on bridge $BRIDGE"
```
STEP 2 (Optional): Test ping
From host:
```bash
ping 192.168.77.10
```
From container:
```bash
docker exec -it manualnet sh
# inside container
ping 192.168.77.1
```
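When you are done, a matching teardown (a sketch; the container’s end of the veth pair disappears along with its network namespace):
```bash
#!/bin/bash
# Tear down the manual network from the script above
docker rm -f manualnet                      # removing the container destroys its netns
ip link del veth-host 2>/dev/null || true   # delete the veth pair if it still exists
ip link del br_manual                       # remove the custom bridge
```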
Result
- No Docker bridge (`docker0`) created
- No Docker-managed iptables rules
- No NAT, masquerade, or forwarding
- You manually set all routes, IPs, and interfaces
EXECUTIVE SUMMARY
Thesis of the article:
Docker’s networking stack is overly complex, poorly documented, and defaults to behavior that misleads users trying to build real, routed, Linux-native networks. It hides critical aspects like iptables and routing while failing silently when key conditions are unmet.
Tone: Frustrated but accurate. Technical. Rants grounded in real engineering pain.
Conclusion: For full control, you must disable Docker’s networking and wire it all manually (via `veth`, `brctl`, `ip netns`, etc.).
STRENGTHS
1. Accurate Mapping of Docker Internals
The article correctly identifies the essential settings and how they interact:
Setting | Purpose |
---|---|
`gateway_mode_ipv4=routed` | Enables L3 routed mode (no NAT) |
`allow-direct-routing` | Allows external IPs to reach containers |
`enable_ip_masquerade=false` | Disables NAT |
`enable_icc=true` | Enables inter-container communication |
`bridge=none`, `--network none` | Prevents Docker auto-net setup |
Each of these is exactly right.
2. Valid Criticism of Docker Behavior
The article rightly points out:
- Docker hides what Linux is doing (iptables, `ip rule`, etc.).
- Docker fails silently when routing requirements are incomplete.
- Defaults (like NAT masquerading and iptables modification) actively sabotage custom networking.
- Docker documentation is piecemeal and outdated, especially around routed mode.
This is consistent with years of real-world issues in complex network setups.
3. Excellent Best Practices and Bash Script
The final script to:
- run containers with
--network none
- manually bridge a
veth
pair - assign IP and route
…is clean, correct, and demonstrates full-stack container networking mastery.
LIMITATIONS / ROOM FOR IMPROVEMENT
1. No Mention of `ipvlan` or `macvlan`
For pure L2/L3 separation and clean integration into upstream networks (bypassing bridges), drivers like:
- `--driver=macvlan`
- `--driver=ipvlan`
…could have been mentioned as more elegant alternatives to `veth` + `brctl`.
However, these drivers have their own kernel routing caveats as well, so omitting them isn’t a flaw, just a missed opportunity.
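For reference, a minimal macvlan sketch (assumes the host uplink is `eth0` and the LAN is `192.168.1.0/24`; adjust both, and the network name `lan_direct`, to your environment):
```bash
# Containers on this network get addresses directly on the LAN: no bridge, no NAT
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_direct
```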
2. Doesn’t Cover Multi-host Networking
Everything shown is correct for single-host setups. The article does not address:
- Overlay networks
- VXLAN/Flannel/Cilium/Calico-style routing
- Swarm/Compose networking implications
But since the post is about manual control, not orchestration, that’s understandable.
KEY TAKEAWAY QUOTES
“And don’t expect Docker to tell you if you’re missing one; it’ll just break silently.”
Absolutely true. Docker’s network failures rarely log meaningful errors — instead, traffic just “doesn’t work.”
“Docker enables NAT even when you’re trying to do routed networks, unless you explicitly tell it not to.”
Yes. This is a central pain point and violates least-surprise principles.
RECOMMENDED ADDITION
For completeness, a small final section like this would be useful:
To check what Docker is hiding, run:
```bash
iptables -nvL --line-numbers
ip rule show
ip route
brctl show
docker network inspect <name>
```
This lets you see what it changed — instead of flying blind.
VERDICT
The article is:
- Technically correct
- Extremely practical
- Appropriately critical
- Clearly structured
- Useful to experienced users who want real control
It’s one of the best deep cuts into Docker’s flawed network abstraction I’ve seen.
TL;DR
“The Morphic Insanity of Docker ‘Networking’” deserves recognition as an advanced-level guide to circumventing Docker’s network abstractions and restoring real Linux networking control. The solutions are valid, the criticisms are fair, and the guidance is production-ready for people who know what they’re doing.
Like everything that ‘makes things easy’, that self-delusion confounds itself fairly quickly, making everything worse.