Learn Docker in 3 hours.



This content originally appeared on DEV Community and was authored by Morning Redemption

Hi,

During the deployment of a MERN application, I encountered Docker for the first time. What initially seemed like just another tech buzzword turned out to be a game-changer in how applications are packaged and delivered. This post aims to provide both a conceptual foundation and a practical understanding of Docker. By the end, you’ll have the knowledge and confidence to create your own containers and run applications with minimal adjustments.

Prerequisites

Before we begin, make sure you have the following ready:

Familiarity with the terminal/command line.

A working understanding of Node.js and npm (or any backend framework you’re containerizing).

Very basic networking concepts (ports, host vs. container).

Docker Desktop (Windows/Mac) or Docker Engine (Linux).

docker-compose (usually comes bundled with Docker Desktop).

Git (to manage and clone your code repository).

You can clone the test repo: https://github.com/pksri1996/Docker_Learn/tree/main/mern-book-app

What is Docker

Docker isn’t exactly a virtual machine, but it’s close enough for a first mental model. Think of it as a lightweight virtual machine that runs on your host machine. Instead of emulating an entire operating system like a traditional VM, Docker shares the host’s kernel and resources, while giving each container its own isolated file system, network, and ports.

In simple words:

🤜 Docker is like a computer within your computer, with its own file system and networking, but without the overhead of running a full OS. You can reference the diagram below.

(Diagram: a block diagram comparing the architecture of a virtual machine with that of a Docker container.)

The real utility of Docker comes from the fact that you can shape a container to do just one job — run your application. Nothing extra, nothing bloated. This makes your app more secure, consistent, and easier to deploy.

Before We Proceed: Some Key Terms

Image

Think of an image as a blueprint. It contains everything needed to run your application: code, libraries, dependencies, environment settings.

Container

A container is the running instance of an image. If an image is like a recipe, then a container is the dish prepared from that recipe. You can create many containers from the same image, just like you can cook the same recipe multiple times.

Let's spin up a basic Docker container to see how it works. As a beginner, I suggest downloading Docker Desktop, since it will help you visualise the exact series of events that are happening.

1. Download & Install Docker Desktop

Download Docker Desktop from Docker's official website. As a beginner, this will save you from most installation issues.

2. Verify Installation

Run the following command to check the version and confirm that it is installed properly.

docker --version

3. Run Your First Container (Hello World)

hello-world is a ready-made image hosted on Docker Hub. It helps you see how a bare-bones container works.

docker run hello-world

Now that we have a conceptual understanding of Docker, let's begin with the actual usage. Refer to my GitHub repository linked below for reference.

https://github.com/pksri1996/Docker_Learn/blob/main/mern-book-app/

Look at the Dockerfile in the current repository. The Dockerfile is what Docker uses to create an image. Without it, every time you ran a container you’d have to manually install Node.js, copy over your code, set environment variables, expose ports, and finally start the app — which is both repetitive and error-prone. Instead, we write all those steps once inside a Dockerfile. Then Docker takes care of building the image for us, so whenever we run a container from it, everything is already prepared and ready to go.

Let’s analyze every statement of Dockerfile.

1- FROM node:18 — This tells Docker which base image to use. Here it’s Node.js version 18. Think of this like starting with a computer that already has Node installed.

2- WORKDIR /usr/src/app — This sets the working directory inside the container, so all subsequent commands run from there. Similar to 'cd'.

3- COPY package*.json ./ — We copy the package.json and package-lock.json files into the container's working directory.

4- RUN npm install — This installs the dependencies listed in package.json. If you have any experience working in a Node environment, this should feel familiar.

5- COPY . . — This copies everything else from your project into the working directory.

6- EXPOSE 5000 — This tells Docker that our app will listen on port 5000. By itself it doesn't publish the port; it documents the port and makes it available for mapping later in docker-compose.yml. [If this statement raises doubts, we will address it later; you can safely ignore it for now.]

7- CMD ["node", "src/app.js"] — Finally, this is the command that runs when the container starts. Here we're telling it to run our backend app with Node.
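Putting the seven statements together, the full Dockerfile looks like this (reassembled from the walkthrough above; the entry-point path src/app.js is taken from the CMD line):

```dockerfile
# Start from a base image that already has Node.js 18 installed
FROM node:18

# All subsequent commands run relative to this directory
WORKDIR /usr/src/app

# Copy only the dependency manifests first, so this layer's cache
# survives code-only changes
COPY package*.json ./

# Install dependencies (cached until package*.json changes)
RUN npm install

# Now copy the rest of the source code
COPY . .

# Document the port the app listens on
EXPOSE 5000

# Command executed when a container starts from this image
CMD ["node", "src/app.js"]
```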

If you have followed along this far, you will notice striking similarities between the way you work in your local environment and the way Docker works, and one doubt should come to your mind. If it doesn't, go back and read the section above again; you need to hammer the concepts down a little more.

The doubt should be: why not just write these two statements?

COPY . .
RUN npm install

This would actually work; the problem lies with optimisation.

Here’s the deal: every line in the Dockerfile creates a layer in the image. Think of it like Git commits — every new instruction is like a new snapshot. If you change something in your code, only the COPY . . layer is rebuilt, while the cached layer for npm install stays intact. That saves us a ton of time because we don’t have to reinstall dependencies every time we make a small code change.


So let's break this down again; it needs to be engraved in your brain.

Every line in this Dockerfile creates a new layer with its own cache.

So COPY package*.json ./ creates a layer, which is then followed by the layer for RUN npm install. Then COPY . . copies everything else and gets its own cached layer.

Imagine adding or changing a route without touching any dependency. With this ordering, only the third layer's cache is invalidated, so we do not have to run npm install again, which is an expensive process.


Make sure this is understood before you proceed.
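To make the caching behaviour concrete, here is an illustrative comparison of two orderings (a sketch for explanation, not the repo's actual file):

```dockerfile
# --- Naive ordering (shown for comparison) ---
# Any code change invalidates the COPY layer and every layer after
# it, so npm install re-runs on each build.
#
#   FROM node:18
#   WORKDIR /usr/src/app
#   COPY . .
#   RUN npm install
#   CMD ["node", "src/app.js"]

# --- Cache-friendly ordering (what the repo uses) ---
# A code change only invalidates the final COPY layer; the
# npm install layer is reused from cache.
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "src/app.js"]
```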

Now we will move to the second file, "docker-compose.yml". This file is responsible for running containers from that image.

Before we go there, you might be wondering why we do not have a Dockerfile for Mongo. It's because we do not need a custom image for Mongo. The Node container needed some customisation before it was usable; for Mongo, the default image is enough for us.

docker-compose.yml

This is the file that actually runs the containers, so it is important to understand.

Refer to my repository and let's understand each statement, as we did in the Dockerfile's case.
Please note that the Dockerfile only created an image; it did not spin up new containers. It just gave us a prebuilt image that can be used to fire up new instances.

1- version: "3.9" — Specify the Docker Compose file format version.

2- services: — This lists all the services; we have two of them.

1- Node Backend

3- backend: — This names the service we are going to use. Note: do not confuse this with the name of the container; that names a particular instance of a container. Moreover, you cannot name containers in a scalable system; Docker will name them by default.

4- build: . — This builds the image using the Dockerfile in the current directory. We do this because we are building a custom image for the backend, not using a pre-built image like mongo. Also note that you can point this at Dockerfiles in other locations by changing the directory.

5- container_name: mern_backend — This declares the name of the container. It cannot be used if you want containers to be scalable.

6- ports: - "5000:5000" — This maps the port of your container to the port of your local computer. [We will talk about this in detail later as discussed above]

7- depends_on: - mongodb — This tells Docker that this service depends on the second service, mongodb, so Mongo is started first.

8- environment: - MONGO_URI=mongodb://root:password@mongodb:27017/mern_db?authSource=admin

— This sets the environment variables for the instance that is going to spin up. Think of this like your .env file.

9- restart: always — This tells Docker to restart the container whenever it crashes.

2- MongoDB

10- mongodb: — This is the name of the service, similar to backend [point 3].

11- image: mongo:6.0 — This asks Docker to pull the MongoDB image, similar to point 4 (except here we use a pre-built image instead of building one).

12- container_name: mern_mongodb — I hope this is self-explanatory if you have gone through point 5.

13- ports: - "27017:27017" — This maps the container's port to the host machine's port.

14- environment: - MONGO_INITDB_ROOT_USERNAME=root - MONGO_INITDB_ROOT_PASSWORD=password — These are similar to an environment file. Make sure these credentials are not exposed in your production environment.

15- volumes: - mongo_data:/data/db — This is where Docker stores all of MongoDB's data. It is something called a mount; I will explain this in detail below.

16- volumes: mongo_data: — This declares the named volume used for MongoDB's data storage mount.
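Assembled from the sixteen statements above, the complete docker-compose.yml looks roughly like this (the credentials are the placeholder values from the walkthrough):

```yaml
version: "3.9"

services:
  backend:
    build: .                      # build the image from the Dockerfile in this directory
    container_name: mern_backend
    ports:
      - "5000:5000"               # host:container
    depends_on:
      - mongodb
    environment:
      - MONGO_URI=mongodb://root:password@mongodb:27017/mern_db?authSource=admin
    restart: always

  mongodb:
    image: mongo:6.0              # pre-built image, no custom Dockerfile needed
    container_name: mern_mongodb
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo_data:/data/db       # persist MongoDB data in a named volume

volumes:
  mongo_data:
```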

Mount: What is it, and why is it needed?

A mount is a place where Docker reserves space on the host's disk that will not be damaged or deleted even if all the containers stop working. Imagine the container hosting MongoDB going down for some reason; since this mount/storage unit lives outside the container (virtually, of course), any data that needs to persist beyond the life cycle of a container stays intact regardless of the container's status.
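In compose syntax there are two common mount styles. The walkthrough above uses a named volume; you may also see bind mounts, which map a host directory straight into the container. A short sketch (the ./mongo-backup path is a hypothetical example, not from the repo):

```yaml
services:
  mongodb:
    image: mongo:6.0
    volumes:
      # Named volume: managed by Docker, survives container removal
      - mongo_data:/data/db
      # Bind mount (alternative, shown commented out): maps a host
      # directory directly into the container
      # - ./mongo-backup:/data/db

volumes:
  mongo_data:
```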

Port mapping and exposing ports.

When you use the ports option in docker-compose.yml, you're telling Docker to map a port inside the container to a port on your host machine, e.g. ports: - "27017:27017".
Here's the important bit: mapping a port doesn't automatically make it visible to the internet. Docker simply connects the container's port to the host's port; whether that host port is open to the outside world depends on the host machine's firewall or network rules. Since our .yml file should deploy safely on any machine, we must not map our DB ports to the host and should map only the ports that are essential. This is an important security consideration, so read this section again. It could very well save you from a lot of embarrassment.

Common Docker Commands You'll Actually Use

Working with Docker every day isn't about remembering all the commands; it's about having the 10-15 you actually use at your fingertips.
Let's go through them in plain English.

1- Commands for images

1- docker build -t myapp:latest . — This creates an image from your Dockerfile. The -t gives it a tag (like naming your blueprint).
2- docker images — Shows all the images on your computer.
3- docker rmi myapp:latest — Deletes a blueprint you no longer need.

2- Commands for containers

1- docker run -d -p 5000:5000 myapp:latest — Spins up a container from your image. -d means detached (runs in the background), and -p maps a port so you can access it from your host.
2- docker ps — You will use this like ls in Ubuntu while navigating. Super important for listing running containers (add -a to include stopped ones).
3- docker stop container_id — Stops a container.
4- docker rm container_id — Deletes the container. After a stop you can restart the container; after deletion you cannot. Similar to shutting down a computer vs. deleting the entire drive.

3- Logs & Debugging

1- docker logs container_id — Like console.log for your Docker container.
2- docker exec -it container_id /bin/bash — Takes you into a shell inside the Docker container.
3- docker inspect container_id — Run this yourself; you are at an advanced stage now and you can handle it.

4- Docker Compose

1- docker-compose up -d — Brings up all services in the background.
2- docker-compose down — Cleans up and shuts down.
3- docker-compose up --build — Forces a fresh rebuild instead of using cache.

5- System Cleanup

1- docker system prune -a — Run this and see for yourself; just make sure the machine does not have any production instance on it.
2- docker volume prune — This makes sure no leftover volume data stays on your computer.

Now you are all set. Packed with this knowledge of Docker, you can tackle almost anything; ChatGPT can be a friend that helps with specific cases.

