This content originally appeared on DEV Community and was authored by Alessandro Grosselle
Memory leaks are a common issue in Node.js applications, especially when running inside containerized environments like Kubernetes.
A memory leak occurs when an application keeps allocating memory without releasing it, leading to increased memory usage and eventually to container restarts or crashes.
Here’s a typical graph of what a memory leak looks like: memory usage constantly increasing over time until the pod is restarted.
Common causes of memory leaks in Node.js
- Unclosed Redis or database connections
- Global variables accumulating data
- Event listeners that are never removed (sketched below)
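As a quick illustration of the last point, here's a minimal, hypothetical sketch of how an unremoved listener leaks: every call registers another closure on the emitter, and everything those closures capture stays reachable for the lifetime of the process.

import { EventEmitter } from 'node:events';

const bus = new EventEmitter();
bus.setMaxListeners(0); // silences the MaxListenersExceededWarning, but the leak remains

// Called once per incoming request.
function handleRequest(payload: { id: string; body: Buffer }) {
  // Leak: a new listener (and the captured `payload`) is added on every
  // call and never removed with `bus.off(...)`.
  bus.on('flush', () => {
    console.log(`flushing request ${payload.id}`);
  });
}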
In short: memory leaks happen more often than you think, especially in complex services.
And the tricky part? You won’t always see them in your local environment, since production traffic and timeouts behave differently.
In our case, for example, the leak was caused by too many requests to an external API with a very long default timeout, resulting in pending requests piling up in memory.
Pro tip: Always define your own timeout! Never rely on external providers’ default timeouts.
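For instance, with Node's built-in fetch (Node 18+) you can cap an outbound call with AbortSignal.timeout. This is just a sketch: the URL and the five-second budget are placeholders, not the provider from our incident.

// Hypothetical sketch: always set an explicit upper bound on outbound calls.
// AbortSignal.timeout() aborts the request after the given number of milliseconds.
async function callExternalApi(): Promise<unknown> {
  const response = await fetch('https://api.example.com/data', {
    signal: AbortSignal.timeout(5_000), // fail fast instead of letting pending requests pile up
  });
  if (!response.ok) {
    throw new Error(`Upstream responded with ${response.status}`);
  }
  return response.json();
}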
Step-by-Step Guide to Access Heap Snapshots in Node.js on Kubernetes
Let’s go through a practical process to capture and inspect memory data from a Node.js application running inside a Kubernetes pod.
Connect to the Kubernetes Pod
First, find the pod you want to debug:
kubectl get pods | grep <your-app-name>
Then check which process ID is running Node.js (in our case, it's PID 1):
kubectl exec <your-app-name-64dcf6b84d-nx5cl> -- ps aux
Now, send the SIGUSR1 signal to that process:
kubectl exec <your-app-name-64dcf6b84d-nx5cl> -- kill -SIGUSR1 1
In Node.js, the SIGUSR1 signal has an official purpose: it enables the Node.js inspector for debugging a running process.
Next, forward the debugging port from the pod to your local machine:
kubectl port-forward <your-app-name-64dcf6b84d-nx5cl> 9229:9229
Connect the Chrome DevTools Debugger
Open Chrome and navigate to:
chrome://inspect
Then click “Open dedicated DevTools for Node”.

In the Connections panel, click Add connection and enter:
localhost:9229
Now you’re connected to your Node.js process running inside the Kubernetes pod. You should even see your application logs appearing in the DevTools console.
Take and Compare Heap Snapshots
Go to the “Memory” tab in DevTools and click “Take Heap Snapshot”.
Wait for the snapshot to generate; it captures every object currently held on the JavaScript heap. If a leak exists, you’ll notice the retained memory keeps growing from one snapshot to the next.
To confirm, take a second snapshot after some time (say 10 minutes) and switch to the “Comparison” view. Select the first snapshot as the baseline.

This will show which objects were allocated between the two snapshots, and whether any aren’t being released.
Sometimes, your pod might restart before the debugger finishes sending data over the WebSocket, preventing you from downloading the snapshot.
That’s exactly what happened to me when I clicked “Take Heap Snapshot”.
Here’s a simple trick: expose a custom endpoint in your app to create and save a heap snapshot programmatically.
This example shows an Express.js endpoint:
// Heap snapshot endpoint for memory profiling.
// Note: `server`, `logger`, `LABEL_REQUESTID`, `getErrorMessage` and
// `HTTPStatusCode` come from the surrounding application; the promise-based
// inspector API below requires Node.js >= 19.
import * as fs from 'node:fs';
import { Session } from 'node:inspector/promises';
import type { Request, Response } from 'express';

server.get('/heap-snapshot', async (req: Request, res: Response) => {
  try {
    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    const filename = `/tmp/heap-snapshot-${timestamp}.heapsnapshot`;

    // Open an inspector session against this process and stream the
    // snapshot chunks straight to disk as they arrive.
    const session = new Session();
    const fd = fs.openSync(filename, 'w');
    session.connect();
    session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
      fs.writeSync(fd, m.params.chunk);
    });

    // Resolves once V8 has finished emitting the snapshot.
    const result = await session.post('HeapProfiler.takeHeapSnapshot');
    session.disconnect();
    fs.closeSync(fd);

    logger.logInfo(
      `Heap snapshot created: ${filename}`,
      req.headers[LABEL_REQUESTID.toLowerCase()] as string
    );
    res.json({
      success: true,
      filename,
      message: 'Heap snapshot created successfully',
      result,
    });
  } catch (error) {
    const errorMessage = getErrorMessage(error) ?? 'Unknown error occurred';
    logger.logError(
      `Failed to create heap snapshot: ${errorMessage}`,
      req.headers[LABEL_REQUESTID.toLowerCase()] as string
    );
    res.status(HTTPStatusCode.InternalServerError).json({
      success: false,
      error: 'Failed to create heap snapshot',
      message: errorMessage,
    });
  }
});
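Two notes on the snippet: the promise-based Session shown here comes from node:inspector/promises, which needs Node.js 19 or newer; on older runtimes you can wrap the callback-based node:inspector API in a Promise yourself. Also, because snapshot capture briefly pauses the process and exposes internals, keep the endpoint on an internal port or behind authentication rather than on your public surface. If all you need is a file on disk, require('node:v8').writeHeapSnapshot() is an even simpler built-in alternative.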
Generate the snapshot by calling the endpoint:
kubectl exec -it <your-app-name-64dcf6b84d-nx5cl> -- wget http://localhost:<your-app-port>/heap-snapshot
Then, copy the snapshot from the pod to your local machine:
kubectl cp <your-app-name-64dcf6b84d-nx5cl>:/tmp/heap-snapshot-2025-10-19T15-19-34-391Z.heapsnapshot ./heap-snapshot.heapsnapshot
You can open this .heapsnapshot file in Chrome DevTools later and analyze it offline by clicking “Load profile” in the Memory tab.
Conclusion
And that’s it! This workflow gives you a practical and reproducible method to detect memory leaks in Node.js applications running on Kubernetes.
It’s saved me a lot of time; hopefully it’ll help you too.
Thanks for reading!