This content originally appeared on DEV Community and was authored by Darshan Vasani
Docker Networking A2Z: Masterclass for Developers & DevOps
What is Docker Networking?
Docker networking allows containers to communicate with:
- Each other
- The host machine
- The external internet
Docker automatically creates default networks and connects containers to them based on the network mode you choose.
Key Terms
Term | Meaning |
---|---|
Network | Virtual connection between containers |
Bridge | Default, isolated internal network |
Host | Shares the host's network stack |
None | No network access |
Overlay | Cross-host communication (Swarm) |
Bridge Mode (Default)
Bridge network is like a private switch where containers talk to each other.
Created Automatically:
docker network ls
Look for: bridge
How it works:
- Containers get private IPs (like 172.17.0.x)
- They can access the internet via NAT
- But they cannot be reached from outside without -p port mapping (see the sketch below)
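For example, a minimal sketch of publishing a bridge-mode container to the host with -p (the container name and port numbers are illustrative):
# Map host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx
# The container is now reachable from the host (and from outside) via the published port
curl http://localhost:8080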
Try it:
docker run -d --name container1 nginx
docker run -d --name container2 busybox sleep 9999
# Ping container1 from container2 by IP
docker exec -it container2 ping 172.17.0.x
By default, containers on the default bridge can't talk to each other by name unless they are on a custom network.
Custom Bridge Network (Recommended)
Custom networks support container name resolution (DNS)!
Create a custom bridge:
docker network create my-network
Launch containers into it:
docker run -d --name app1 --network my-network nginx
docker run -it --name app2 --network my-network busybox sh
Now, inside app2:
ping app1
It works! Containers can ping each other by name!
Why Use a Custom Bridge?
Feature | Benefit |
---|---|
Built-in DNS | Resolve container names |
Network isolation | Only containers in the same network can talk |
Multi-network support | Attach containers to multiple networks |
Easy debugging | Inspect with docker network inspect |
Host Network Mode
Shares the host's network stack directly.
Use:
docker run --network host nginx
Pros:
- Super fast: no NAT or port mapping
- Useful for monitoring tools (Prometheus, Grafana)
Cons:
- No isolation
- Cannot run two containers on the same host port! (see the quick demo below)
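A quick demo of the port clash on a Linux host, assuming host port 80 is otherwise free (container names are illustrative):
# First container binds host port 80 directly
docker run -d --name host-web-1 --network host nginx
# Second container starts, but its nginx cannot bind port 80 and exits
docker run -d --name host-web-2 --network host nginx
docker logs host-web-2   # expect an "Address already in use" bind error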
None Network Mode
Container has no networking at all.
docker run --network none busybox
- Fully isolated
- Useful for security testing or offline compute jobs
Overlay Network (Advanced โ Docker Swarm)
Enables containers on different hosts to communicate
Use Case:
- Docker Swarm
- Distributed Microservices
docker network create --driver overlay my-overlay
Requires Swarm mode:
docker swarm init
Connect Containers to Multiple Networks
docker network create frontend
docker network create backend
docker run -d --name api \
--network frontend \
--network-alias api \
nginx
Then attach to another network:
docker network connect backend api
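A hedged check that api is now reachable from both sides (the throwaway busybox containers are illustrative):
# A client on the frontend network resolves the alias
docker run --rm --network frontend busybox ping -c 2 api
# After the connect, a client on the backend network can reach it too
docker run --rm --network backend busybox ping -c 2 api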
Inspect a Network
docker network inspect my-network
Shows:
- Container list
- IPs
- Aliases
- Subnets
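To pull out just the attached containers and their IPs, --format works with a Go template (a sketch; the network name matches the example above):
docker network inspect my-network \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'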
CLI Recap
Command | Purpose |
---|---|
docker network ls | List networks |
docker network create <name> | Create a custom network |
docker run --network <name> | Connect a container to a specific network |
docker network inspect <name> | Inspect config and members |
docker network rm <name> | Delete a network |
docker network connect | Connect a running container to a network |
docker network disconnect | Disconnect a container from a network |
Best Practices for Docker Networking
Practice | Why It's Great |
---|---|
Use custom bridge networks | Enable name resolution + isolation |
Avoid host mode unless you need it | It exposes the host's network stack |
Use none for sensitive, offline workloads | Maximum isolation |
Keep networks scoped per application | Avoid connecting everything together |
Use docker network inspect to debug IPs | Know who talks to whom |
Use busybox or alpine to test ping | Lightweight network testing tools |
Summary Table
Mode | Isolated? | Can Use DNS? | Host Access? | Notes |
---|---|---|---|---|
bridge | ✅ | ❌ | Via -p | Default |
custom bridge | ✅ | ✅ | Via -p | Best for local |
host | ❌ | Uses host DNS | Direct | No port mapping |
none | ✅ (fully) | ❌ | ❌ | For isolation |
overlay | ✅ | ✅ | Swarm only | For multi-node |
Final Analogy
- Bridge network = Private Wi-Fi router
- Custom bridge = Guest Wi-Fi with name tags
- Host mode = Ethernet cable plugged directly into the host
- None mode = Airplane mode
- Overlay = Corporate VPN connecting multiple offices
Docker Networking Overview
Container networking is the foundation of container communication. Every container is equipped with a network interface, IP address, routing table, DNS config, etc.
By default, containers can:
- Make outbound connections (internet)
- Be connected to default or custom networks
- Be isolated or exposed, depending on the configuration
Types of Docker Network Drivers
Network Driver | Description | Use Case |
---|---|---|
bridge | Default isolated network on a single host | Local development |
host | Shares the host's network stack, no isolation | Low-latency, host-level apps |
none | No networking at all | Offline or compute-only tasks |
overlay | Multi-host networking across a Swarm cluster | Distributed systems |
macvlan | Gives containers their own MAC & IP on the LAN | IoT, legacy systems |
ipvlan | Similar to macvlan, IP only | More control over routing |
custom plugins | Third-party network drivers | SDN, advanced use |
Default Bridge Network vs User-Defined Bridge Network
Both are based on the bridge driver, but they differ significantly in features & behavior.
Default bridge Network
Created automatically by Docker when installed.
docker network ls
# OUTPUT will contain:
# bridge bridge local
Example:
docker run -d --name app1 nginx
docker run -d --name app2 busybox sleep 999
Limitations:
Limitation | Explanation |
---|---|
No automatic DNS | Can't resolve container names |
One shared flat network | All containers using the default bridge are technically on the same LAN |
Manual linking | Need to link manually or use IPs |
Weak isolation | All containers on the default bridge can reach each other by IP unless explicitly isolated |
User-Defined Bridge Network
Created with:
docker network create my-custom-net
Advantages:
Advantage | Benefit |
---|---|
Automatic DNS | Containers can resolve each other by name |
Network isolation | Containers only talk to others on the same network |
Service-style design | Works like microservices |
Easy management | Inspect, attach, detach easily |
Flexible creation | Via Compose, Swarm, or manually |
Comparison: Default vs User-Defined Bridge
Feature | Default Bridge | User-Defined Bridge |
---|---|---|
DNS support | ❌ | ✅ |
Service name resolution | ❌ | ✅ |
Isolation | Weak | Strong |
Compose support | Limited | ✅ (default behavior) |
Security | Basic | Scoped & controlled |
Container-to-container name access | ❌ | ✅ (ping app1) |
Preferred for production/dev | ❌ | ✅ |
Inspecting Networks
docker network inspect my-custom-net
Output includes:
- Subnet
- Gateway
- Connected containers
- DNS aliases
Example Test
1. Default Bridge (no DNS):
docker run -d --name alpha nginx
docker run -it --rm busybox
# ping alpha → ❌ fails (no DNS on the default bridge)
2. User-defined Bridge:
docker rm -f alpha   # remove the first container so the name can be reused
docker network create testnet
docker run -d --name alpha --network testnet nginx
docker run -it --rm --network testnet busybox
# ping alpha → ✅ works
Other Drivers: Quick Overview
Driver | Key Use |
---|---|
host | High-perf apps (no NAT), not isolated |
none | Secure offline or processing containers |
overlay | Multi-host Swarm networking |
macvlan | Assign physical IPs from the host's LAN |
ipvlan | Fine-grained routing control |
custom plugin | CNI integrations, SDNs (like Calico) |
When to Use What?
Use Case | Best Network Driver |
---|---|
Local dev, isolated apps | custom bridge |
Multi-container orchestration | custom bridge (Compose) or overlay |
High-speed, low-latency app (e.g. Prometheus) | host |
No internet access container | none |
Containers across multiple hosts | overlay |
Assign IPs from LAN for legacy systems | macvlan |
Key Docker Networking Commands Cheat Sheet
Command | Purpose |
---|---|
docker network ls | List all networks |
docker network create <name> | Create a network |
docker network rm <name> | Remove a network |
docker network inspect <name> | Show a network's config and members |
docker run --network <name> <image> | Run a container on a specific network |
docker run -d --name <container> --network <network> <image> | Run a named container on a network |
docker network connect <network> <container> | Connect a running container to a network |
docker network disconnect <network> <container> | Disconnect a container from a network |
Real-World Example: Create and Use a Custom Bridge Network
# 1. Create a custom bridge network
docker network create my-bridge
# 2. Run a container (nginx) attached to that network
docker run -d --name my-app --network my-bridge nginx
# 3. Run another container (busybox) in the same network
docker run -it --name client --network my-bridge busybox
# 4. Inside 'client', you can ping 'my-app' by name
ping my-app
Result:
client can resolve and communicate with my-app by container name, thanks to Docker's internal DNS in custom bridge networks.
Bonus Tip: Disconnect & Reconnect
docker network disconnect my-bridge my-app # Disconnect from network
docker network connect my-bridge my-app # Reconnect to same or new network
Final Takeaways
- The default bridge is basic: no name resolution, weak security
- A user-defined bridge is preferred for real-world apps
- Use overlay for distributed microservices
- Know when to use each driver to optimize performance & security
- Always test communication with tools like ping, curl, and netcat (see the sketch below)
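A minimal connectivity-check sketch, assuming a busybox container named client and an nginx container named web on the same user-defined network (names are illustrative):
# Name resolution + ICMP reachability
docker exec client ping -c 2 web
# TCP check against the service port (busybox ships wget)
docker exec client wget -qO- http://web:80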
Docker Networking Logic: Container Communication
1. User-Defined Bridge Network = Container DNS Heaven
When you create a user-defined bridge network, Docker automatically enables an internal DNS service.
That means containers can talk to each other by name!
Example:
docker network create my-net
docker run -d --name db --network my-net mongo
docker run -d --name web --network my-net node-app
Now, inside web, you can:
ping db
Boom! It works because Docker auto-resolves db to the container's IP inside the same network.
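To confirm the DNS mapping explicitly, one option is a throwaway busybox container on the same network (a sketch; output will vary):
# Resolve the 'db' name through Docker's embedded DNS
docker run --rm --network my-net busybox nslookup db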
2. Default Bridge Network = Only IP-Based Access
Containers in the default bridge cannot resolve each other by name.
Example:
docker run -d --name app1 nginx # default bridge
docker run -it --name app2 busybox # default bridge
Inside app2:
ping app1 # ❌ FAILS
Why? Because DNS resolution doesn't work in the default bridge without the old --link option (now deprecated).
Legacy --link (Avoid Using It)
docker run -d --name db mongo
docker run -d --name web --link db node-app
It works, but --link is deprecated and may be removed in future Docker versions.
3. Containers in Different Networks = No Communication
Example:
docker network create net-a
docker network create net-b
docker run -d --name app1 --network net-a nginx
docker run -d --name app2 --network net-b busybox sleep 9999
Inside app2:
ping app1 # ❌ FAILS: the networks are isolated
Networks are isolated by default. Containers on different bridge networks cannot talk to each other unless you connect one container to both using:
docker network connect net-a app2
Now, app2 belongs to both networks!
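A hedged way to confirm the dual membership and the restored connectivity:
# List the networks app2 is attached to (should show net-a and net-b)
docker inspect app2 --format '{{range $name, $cfg := .NetworkSettings.Networks}}{{$name}} {{end}}'
# The cross-network ping by name now succeeds
docker exec app2 ping -c 2 app1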
Real-World Analogy
Concept | Analogy |
---|---|
Default bridge | Talking in a crowd with no names, only IPs |
User-defined bridge | Talking in a chatroom where everyone has a username |
Connected to multiple networks | Having one foot in two rooms at once |
Recap: Container Communication Matrix
Scenario | Communicate by name? | Communicate by IP? | Notes |
---|---|---|---|
Same user-defined bridge | ✅ | ✅ | Best practice |
Same default bridge | ❌ | ✅ | No name resolution |
Different networks | ❌ | ❌ | Unless manually connected |
With --link (legacy) | ✅ | ✅ | Deprecated, avoid |
Summary
- User-defined networks allow name-based communication via Docker's built-in DNS.
- Default bridge networks don't support DNS; only IP-based access works.
- Each network is isolated: containers inside a network can talk, but can't reach containers in other networks unless manually connected.
- Use docker network connect to join a container to multiple networks if needed.
Overview of Advanced Docker Network Drivers
Driver | Purpose | Host-to-Container | Container-to-Host | Cross-Host Support |
---|---|---|---|---|
overlay | Cross-host container communication (Swarm) | ✅ | ✅ | ✅ |
macvlan | Assign real MAC & IP from LAN to container | ❌ (by default) | ❌ (by default) | ❌ |
ipvlan | IP-level control without creating MAC addresses | ❌ (by default) | ❌ (by default) | ❌ |
1. Overlay Network
What It Is:
Allows containers on different Docker hosts to securely communicate as if they were on the same LAN.
Requires Docker Swarm (or other orchestrators).
Creates a VXLAN tunnel between hosts (traffic encryption is optional and enabled per network).
Real-world Analogy:
Like a VPN that connects branch offices (containers) across cities (hosts).
Use Cases:
- Multi-host microservices
- Docker Swarm services
- HA + distributed architecture
How to Use (with Swarm):
# 1. Initialize Swarm
docker swarm init
# 2. Create overlay network
docker network create --driver overlay my-overlay
# 3. Deploy service to use overlay
docker service create \
--name webapp \
--replicas 3 \
--network my-overlay \
nginx
Key Features:
Feature | Benefit |
---|---|
Optional encryption | Uses IPsec tunneling between nodes |
Multi-host networking | Works across nodes |
Service discovery | Container-to-service routing |
Swarm integration | Great for multi-host orchestration |
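Since the table mentions encryption: overlay data-plane encryption is opt-in and set at network creation time (a sketch; the network name is illustrative):
# Create an overlay network whose VXLAN traffic between nodes is encrypted
docker network create --driver overlay --opt encrypted my-secure-overlay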
2. Macvlan Network
What It Is:
Allows containers to appear as physical devices on the host's network, each with their own IP and MAC.
The container bypasses Docker's NAT, appearing directly on your LAN.
Real-world Analogy:
Like plugging a new computer (container) directly into your office switch with its own IP.
Use Cases:
- Legacy apps that require static IPs or MACs
- IoT, embedded, or bare metal simulation
- When containers must be reachable from the LAN directly
How to Use:
# 1. Create macvlan network
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
macvlan-net
# 2. Run a container on that network
docker run -d --name myrouter \
--network macvlan-net \
busybox sleep 3600
parent=eth0: your physical host's network interface
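The parent interface name varies per host (it may be ens33, enp0s3, etc.), so check it before creating the network:
# List the host's network interfaces to pick the right parent device
ip link show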
Limitations:
Limitation | Description |
---|---|
No host-to-container traffic by default | Can't ping the container from the host (or vice versa) without a workaround |
Bypasses Docker's NAT and port publishing | Be cautious in shared infra |
Manual IP planning | Must avoid IP conflicts on the LAN |
Tip to Enable Host ↔ Container Communication (Workaround):
Create a macvlan "shim" interface on the host:
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.200/24 dev macvlan-shim
ip link set macvlan-shim up
3. IPvlan Network
What It Is:
Similar to macvlan, but no extra MACs per container.
All containers share the host's MAC address and just get different IPs.
More compatible with cloud and DHCP setups where duplicate MACs are not allowed.
Analogy:
Multiple workers using one ID card (MAC) but different phone numbers (IP).
Use Cases:
- Performance-sensitive systems
- Cloud infra with MAC restrictions
- Advanced network routing with minimal overhead
How to Use:
docker network create -d ipvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
ipvlan-net
docker run -it --rm --network ipvlan-net alpine
IPvlan supports two modes:
- l2: same subnet as the parent interface, like macvlan
- l3: different subnets, routed via the host (see the sketch below)
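A hedged sketch of an l3-mode network; the subnet and parent device are illustrative, and the mode is set via the ipvlan_mode option (no gateway is specified in l3 mode):
docker network create -d ipvlan \
  --subnet=10.10.10.0/24 \
  -o parent=eth0 \
  -o ipvlan_mode=l3 \
  ipvlan-l3-net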
Benefits:
Feature | Benefit |
---|---|
Shares the host's MAC | No MAC duplication |
Works where MACs are restricted | Good for cloud and secured environments |
L2 and L3 modes | Great for advanced network setups |
Advanced Network Drivers Comparison Table
Feature/Driver | overlay | macvlan | ipvlan |
---|---|---|---|
Cross-host support | ✅ | ❌ | ❌ |
Requires Swarm? | ✅ | ❌ | ❌ |
Uses physical IP/MAC | ❌ | ✅ | IP only |
Host ↔ container access | ✅ | ❌ (by default) | ❌ (by default) |
Best for | Microservices on multiple hosts | LAN-level communication | Custom IP routing or cloud infra |
Security model | Swarm-controlled | Exposes real IP on LAN | More controlled than macvlan |
Complexity | Medium (needs Swarm) | Medium (needs LAN planning) | Medium (needs routing knowledge) |
Security Considerations
Driver | Security Tip |
---|---|
Overlay | Isolated per service; enable encryption |
Macvlan | Bypasses Docker's firewall rules; isolate via VLAN |
IPvlan | Good firewall compatibility; still isolate with subnet rules |
When to Use What?
Situation | Use Driver |
---|---|
Containers must talk across multiple Docker hosts (Swarm) | overlay |
Containers need their own LAN IP/MAC (legacy or LAN-visible apps) | macvlan |
LAN-level IPs without extra MACs (cloud or MAC-restricted infra) | ipvlan |
Summary
- overlay: best for Swarm, cross-node services, scalable infra
- macvlan: best for LAN visibility, legacy hardware, IP-bound apps
- ipvlan: best for performance and controlled environments (e.g., cloud)
Docker none Network Driver: Ultimate Guide
What is the none Network?
The none network is a special Docker network driver that completely disables networking for a container.
- No IP address
- No routing
- No DNS
- No internet access
- No communication with the host or other containers
Use Case:
When you want your container to run in complete isolation, especially for:
- CPU-intensive or file-only tasks
- Secure environments with no external communication
- Containers that interact only via volume sharing or IPC
- Avoiding network-related attacks (like SSRF, port scanning, etc.)
Real-World Analogy
It's like putting a person in a soundproof, windowless room:
they can compute, read, or write files, but cannot talk to or hear the outside world.
How to Use It
docker run -it --rm --network none alpine
Then inside the container:
ping google.com # ❌ fails
ip addr # shows only the loopback interface, no external IP
The container runs, but it's completely cut off from any kind of networking.
Check from the Host
docker inspect <container-id> | grep -i "NetworkMode"
# Output: "NetworkMode": "none"
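Equivalently, --format can read the field directly (a sketch):
docker inspect --format '{{.HostConfig.NetworkMode}}' <container-id>
# Output: none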
Useful Scenarios
Use Case | Why none Works |
---|---|
Offline batch/compute jobs | Don't need the internet |
Security-sensitive workloads | No attack vector via networking |
Offline testing | Simulate a "no internet" condition |
Secret-handling tasks | Avoid leaking credentials over the network |
Isolated CI/build steps | Run isolated build/test tasks with no exposure |
Warning
- You cannot ping, curl, apt update, or download anything inside containers using --network none
- Any tools that require internet or inter-container access will fail
- It's not usable for most microservices or web APIs
Tip: Combine with Volumes or IPC
If you want to exchange data without a network:
# Create a shared volume
docker volume create shared-data
# Use it with the isolated container
docker run -it --rm --network none -v shared-data:/data alpine
This lets you read/write to shared storage without needing any network access.
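For example, a hedged round trip through the shared volume with no networking at all (the file name and message are illustrative):
# Writer: a fully offline container drops a file into the volume
docker run --rm --network none -v shared-data:/data alpine \
  sh -c 'echo "result from isolated job" > /data/output.txt'
# Reader: another offline container picks it up
docker run --rm --network none -v shared-data:/data alpine cat /data/output.txt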
Summary Table
Feature | none Driver |
---|---|
IP address | ❌ (loopback only) |
DNS | ❌ |
Host access | ❌ |
Container-to-container | ❌ |
Internet | ❌ |
Use case | Security, sandboxing, isolated compute |
Final Verdict
If you need absolute network isolation, --network none is your zero-trust go-to option.
It's perfect for:
- Security-first workloads
- Testing internal-only logic
- Disabling all remote calls