It was one of those moments when a simple task pulls you into a 4-hour rabbit hole that teaches you more than months of reading. I was just trying to SSH into my own machine (yep, localhost), and of course, nothing worked right.
What began as “why can’t I SSH into my own computer?” turned into an unexpected masterclass in network debugging. Here’s how I built my framework for debugging these issues.
When a Simple Task Goes Wrong
I was setting up a local development environment with Docker. One of my containers needed to create an SSH tunnel back to a database running on my host machine. It’s a common setup. The command inside the container would look something like this:
# From inside the container, tunnel to host services
ssh -L 5432:localhost:5432 user@host.docker.internal
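A side note on that hostname: on Docker Desktop (macOS/Windows), host.docker.internal resolves out of the box, but on Linux you typically have to map it yourself. A minimal sketch, assuming Docker 20.10 or newer (my-image is a placeholder for whatever container needs the tunnel):

```bash
# On Linux, map host.docker.internal to the host's gateway explicitly
docker run --add-host=host.docker.internal:host-gateway my-image
```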
Before involving the container, I wanted to test the connection on my host machine first. A simple SSH to myself should connect instantly, right?
ssh shivam@localhost
It worked, but it felt slow. Really slow. A connection that should have been instant took several seconds. If my containers were going to use this for database connections, that lag would hurt performance. Something was off, and I needed to find out why.
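Before digging in, it helps to put a number on “slow”. A quick way is the shell’s time builtin against a trivial remote command:

```bash
# Time a full connect/authenticate/execute round trip
time ssh shivam@localhost "exit"
```

In my case the real time came out at several seconds, far beyond what a loopback connection should ever need.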
Step 1: Checking the Basics (The Network Itself)
The first rule of troubleshooting is to check the obvious. Is my machine even communicating properly with itself?
I started with the most basic tools.
ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.067 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.052 ms
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2075ms
rtt min/avg/max/mdev = 0.052/0.065/0.078/0.010 ms
The response was immediate, with times around 0.05 ms. So, basic connectivity was perfect. Next, I checked the route.
traceroute localhost
traceroute to localhost (127.0.0.1), 30 hops max, 60 byte packets
1 localhost (127.0.0.1) 0.395 ms 0.337 ms 0.318 ms
It showed a single hop, as expected. This indicated that the problem wasn’t at the basic network layer. The pipes were clear.
Lesson #1: Always check the basics first. `ping` and `traceroute` can quickly tell you whether you have a real network routing issue or something else.
Step 2: The Port Detective and a Confusing Detour
Okay, the network is fine. What about the SSH service itself? Is it listening on the correct port? I used `ss` to check.
ss -ltn | grep :22
LISTEN 0 4096 *:22 *:*
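For the record, those flags are -l (listening sockets), -t (TCP only), and -n (numeric ports). Adding -p shows which process owns the socket, which is handy when you aren’t sure what is actually answering on a port:

```bash
# -p needs root to show processes you don't own
sudo ss -ltnp | grep :22
```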
So, a service was indeed listening on port 22. Good. Then I remembered I had been playing with config files recently and might have messed something up, so I checked the SSH config file.
sudo grep -i port /etc/ssh/sshd_config
Output:
# configuration must be re-generated after changing Port, AddressFamily, or
#Port 2222
And yeah, just as I suspected, there was a mismatch: the running service was on port 22, but the config file contained a `#Port 2222` line. This was a classic distraction. After restarting the SSH service (`sudo systemctl restart ssh`) and seeing it still running on port 22, I realized that line was commented out, so my old change had never actually been applied. The running service was the actual source of truth.
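One way to avoid this trap entirely is sshd’s extended test mode, which prints the effective configuration the daemon would actually use after parsing everything:

```bash
# Dump the parsed, effective sshd configuration and pick out the port
sudo sshd -T | grep -i '^port'
```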
Lesson #2: Config files can be misleading. Always verify what the service is actually doing, not just what the config says it should do.
Step 3: Finding the Real Issue
I set aside the port confusion and re-focused on the original issue: the slowness.
The connection worked; it was just slow. This usually indicates that the problem isn’t with the network connection itself but with the application-level processes on top of it. To see what was happening during the connection, I ran the SSH command with the verbose flag.
ssh -v shivam@localhost
As I watched the output scroll by, I saw it. The delay was occurring during the security checks, specifically host key verification. SSH was going through its full security handshake, which is unnecessary for a trusted localhost connection.
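If the verbose output scrolls by too fast to eyeball, you can timestamp each line and look for the gap. A minimal sketch, assuming GNU awk for strftime (ts from moreutils works just as well):

```bash
# Prefix every debug line with a wall-clock timestamp; the slow step
# is wherever consecutive timestamps jump.
ssh -v shivam@localhost "exit" 2>&1 | awk '{ print strftime("%T"), $0 }'
```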
The solution was to tell SSH to skip these checks for this specific case.
time ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null shivam@localhost "echo 'test'"
And it was a success: the connection was now instant. The slowness was never a network issue; it was an SSH application feature. For my Docker tunnel, I could now use an optimized command:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -N -L 5432:localhost:5432 shivam@localhost
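A word of caution: those two options disable host key verification, which is fine for localhost but dangerous for real hosts. Rather than typing them every time, you can scope them in ~/.ssh/config so only loopback connections are affected:

```
# ~/.ssh/config — relax host key checks for loopback only
Host localhost 127.0.0.1
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel ERROR
```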
Lesson #3: Application protocols can have their own overhead. A perfect network connection can still feel slow if the application on top is doing extra work.
The Real Lesson: A Method to the Madness
I fixed the problem that night. But the real gain wasn’t just the solution; it was learning a systematic way to think. Instead of randomly trying commands, I worked my way through the layers:
- Network Layer: Is there a connection? (`ping`, `traceroute`)
- Transport Layer: Is the port open and listening? (`ss`, `netstat`)
- Application Layer: Is the service configured correctly, and what is it doing? (`ssh -v`, config files)
This structured approach is what separates guessing from true debugging. Next time, I can apply the same method to systematically find the root cause.
My Go-To Network Debugging Checklist
Here’s the simple playbook I now use for any connection issue.
- Check Reachability: Can I see the machine? `ping $hostname`
- Check the Path: Is the network route clear? `mtr $hostname`
- Check DNS: Is the name resolving to the correct IP? `dig $hostname`
- Check the Port: Is the service listening? `ss -ltn | grep $port` or `netstat -tulnp | grep $port`
- Check the Application: Can I connect, and what is the app doing? `curl -v $protocol://$host:$port` or `ssh -v $user@$host`
- Go Deeper (If Needed): Look at the raw packets. `tcpdump -i any host $hostname -n`
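To make the playbook repeatable, here’s a minimal sketch that wraps the checklist into one script. The script name and tool choices are my own (it assumes ping, mtr or traceroute, dig, nc, and ssh are installed); swap in whatever is on your machine:

```bash
#!/usr/bin/env bash
# net-debug.sh - walk the checklist top to bottom for a host/port.
# Usage: ./net-debug.sh <host> [port]
set -u
host="${1:?usage: $0 <host> [port]}"
port="${2:-22}"

echo "== 1. Reachability =="
ping -c 3 "$host"

echo "== 2. Path =="
mtr --report --report-cycles 3 "$host" 2>/dev/null || traceroute "$host"

echo "== 3. DNS =="
dig +short "$host"

echo "== 4. Port (remote probe) =="
nc -zv -w 5 "$host" "$port"

echo "== 5. Application =="
# Assumes an SSH service; swap in curl -v for HTTP endpoints.
# BatchMode avoids hanging on a password prompt.
ssh -v -o ConnectTimeout=5 -o BatchMode=yes "$host" exit 2>&1 | tail -n 5
```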
Network debugging isn’t magic. It’s a process of elimination. That frustrating evening spent on a “simple” localhost issue gave me a solid framework that I now use to tackle complex production problems.
Sometimes, the best lessons come from problems that seem too small to matter.