Understanding virtualization & containers in the simplest way



This content originally appeared on DEV Community and was authored by Deborah Emeni

What you’ll learn

By the end of this section, you’ll:

  • Understand what virtual machines (VMs) are and why they were created
  • Learn the problems VMs solve and their limitations
  • See why containers exist and how they compare to VMs
  • Get an introduction to Docker and why it is used
  • Complete a hands-on project to run an Ubuntu container and execute basic commands inside it

How were applications traditionally run?

Before we get into virtual machines and containers, let’s step back and talk about how teams used to run software in the early days.

Now, every application, as you might know already, needs to run somewhere, right?

And that means it requires a computer, which in turn needs an operating system (OS) such as Linux, Windows, or macOS.

On top of that, applications rely on what we call “dependencies,” like runtime libraries or language versions, to function properly.

Now, before what we call “virtualization,” which you will soon understand, each workload had its own server.

By “workload”, I mean any software or service running on a server, like a web app, database, or file server, that uses system resources such as CPU, memory, and storage.

To better understand this, let me explain what this looked like in practice:

Three separate physical servers running a web app, a database, and a file server, each with its own OS and dependencies to avoid software conflicts

Let’s say a company wants to run three different parts of its system:

  1. a web application that customers interact with
  2. a database that stores all the data
  3. a file server for internal documents and media

Now, this company wants to keep things simple and avoid problems like one service breaking another.

For instance, the database might need a different version of a library than the web app does, a problem usually called a “software version conflict.”

So what do they do?

They set up three separate physical servers, one for each service.

That means each server has:

  • its own operating system
  • its own set of dependencies
  • and it only runs one service, so that nothing conflicts

Okay, now that you get the “gist,” I’ll tell you what was wrong with this setup.

What were the limitations?

Don’t get me wrong, this setup worked, okay? But it came with serious issues like:

  1. It was expensive: You had to buy and maintain separate hardware for each workload, even if it didn’t use all the resources.
  2. Resources were wasted: Most servers sat idle most of the time, using only 10-30% of their total capacity.
  3. Scaling was hard: If you needed more resources for the web app, for example, you couldn’t just tweak something. You had to buy a whole new server, install everything again, and configure it from scratch.

So, the question became:

How can we run more than one workload on the same machine, without creating conflicts or wasting resources?

That’s what led to the rise of “Virtualization”, which I will define in the next section.

So… what is virtualization & virtual machines (VMs)?

Now that you understand what brought about virtualization, it’s time to understand what it is.

Virtualization allows multiple operating systems to run on the same physical machine.

So, in place of having one OS per machine, you can create multiple virtual machines on a single physical server.

Each virtual machine acts like a separate computer with its own operating system, memory, and storage, even though they all share physical hardware.

Example showing where virtualization is applied

Next, I’ll give you a real-world use case in cloud computing so that you can better understand how virtualization works in practice.

Have you heard of cloud providers? Like AWS, Google Cloud, or Microsoft Azure? They all use virtualization to rent out virtual machines in place of physical machines.

So when you create a cloud server with any of these cloud providers, what’s really happening is you’re getting a virtual machine running inside a massive data center (which is a facility filled with thousands of interconnected physical servers that host VMs for multiple users).

Now, without virtualization, as you can see, cloud computing wouldn’t exist, and companies would have to buy and maintain their own physical servers. If you don’t understand what cloud computing means here, this is how I’d define it:

Cloud computing is the ability to access computing resources like servers, storage, and databases over the internet without owning physical hardware.

So, what makes all this (virtualization) possible is a special piece of software called a “hypervisor,” which allows multiple VMs to run on the same physical machine. See the illustration below:

Diagram showing virtualization: one physical server running multiple virtual machines using a hypervisor. Each VM includes its own OS, memory, and storage, while sharing the same physical hardware.

What problems did virtual machines solve?

As you can see, VMs solved the wasted-resources problem of physical servers: with virtualization, one physical server could host multiple VMs, each running different applications.

And with this came several benefits like:

  • Better resource utilization: A single machine can run multiple applications while maximizing resources.
  • Cost savings: Fewer physical machines are needed, which reduces hardware and maintenance costs.
  • Isolation: Each application runs in its own VM, preventing conflicts between applications.

Now, even though VMs obviously improved things, they still had limitations, and that’s what I’ll talk about next.

What were the limitations of virtual machines?

There are several reasons why VMs are not always the best solution, so let’s quickly run through them:

  1. Heavy resource usage: Each VM runs a full OS, which takes up a lot of RAM and CPU.
  2. Slow startup: Booting a VM can take minutes, just like starting a computer.
  3. Inefficient scaling: Spinning up new VMs requires significant computing power and takes time.
  4. OS redundancy: If you run 10 Ubuntu VMs, you’re running separate copies of Ubuntu, wasting storage.

These limitations led to the need for “containers,” which we’ll discuss next.

What are containers, and how do they compare to VMs?

Containers solve many of the problems VMs have. In place of running a full operating system for each application, containers share the host OS, making them lightweight, fast, and resource-friendly. See the illustration below:

Diagram comparing VMs and containers

So, how are containers different from VMs? Look at the table below to understand their differences.

| Feature | VMs | Containers (Docker) |
| --- | --- | --- |
| Startup time | Minutes | Seconds |
| Resource usage | High (full OS per VM) | Low (shared OS) |
| Portability | Limited (OS-dependent) | High (runs anywhere) |
| Scalability | Slow | Fast (instantly spin up containers) |
| Isolation | Strong (separate OS) | Strong but lightweight |

With containers, applications start in seconds instead of minutes, and they use fewer resources since they don’t need a full OS for each instance.
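You’ll meet the tool that makes this possible in the next section, but here’s a quick way to see that speed for yourself once Docker is installed. This one-liner (a minimal sketch, nothing more) starts a throwaway Ubuntu container, runs a single command inside it, and removes the container again:

time docker run --rm ubuntu echo "hello from a container"

After the image has been downloaded once, this typically finishes in a second or two; booting a VM to run the same command would take minutes.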

Next we’ll talk about the platform that makes this possible.

What is Docker, and why use it?

Docker is a containerization platform that allows developers to create, deploy, and manage containers easily. In place of setting up separate VMs, you can package an application with all its dependencies into a lightweight, portable container.

An example use case: developers use Docker to make sure that applications run exactly the same way in development, testing, and production environments. It removes the “works on my machine” problem by making software behave the same way everywhere.
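To make that concrete, here’s what packaging looks like in practice. Below is a minimal, hypothetical Dockerfile for a small Python script; the file name app.py and the exact base image tag are placeholders for illustration, not something from a real project:

# Dockerfile: package a small Python app together with its runtime
FROM python:3.12-slim      # base image that already includes Python
WORKDIR /app               # directory inside the container to work from
COPY app.py .              # copy your script into the image
CMD ["python", "app.py"]   # what runs when the container starts

Anyone who builds this image (docker build -t my-app .) and then runs it (docker run my-app) gets the exact same environment, whether they’re on Windows, macOS, or Linux.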

Next, we’ll do a mini project to put all you’ve learned so far into practice.

Mini-project: Run an Ubuntu container using Docker

Now that you understand the difference between VMs and containers, it’s time to get hands-on and run your first container using Docker.

But before we do that, let’s talk about what an Ubuntu container is and why you would want to run an Ubuntu container in the first place.

What is an Ubuntu container?

A containerized version of Ubuntu, also known as an Ubuntu container, is a lightweight, minimal version of Ubuntu that runs inside a container. It does not include a full desktop environment, but it does have all the essential Linux utilities needed to run software.

When you install Ubuntu on a computer, it comes with:

  • The Linux kernel (which interacts with the hardware)
  • System files and utilities
  • Preinstalled software

An Ubuntu container, by contrast, strips most of this out: it shares the host’s Linux kernel and ships only the minimal system files and utilities, which is what keeps it so lightweight.

When would you want to run an Ubuntu container?

Let’s see some reasons why you would want to run an Ubuntu container in practical scenarios:

1. Testing software in a clean environment

Let’s say you’re developing an application and need to test it on Ubuntu 22.04, but your computer runs Windows or macOS. Instead of setting up a virtual machine, you can launch an Ubuntu container in seconds and test your application inside it.

2. Running Linux tools on a non-Linux system

If you use Windows or macOS, you may sometimes need access to Linux commands or tools that are only available on Ubuntu. Running an Ubuntu container gives you access to an Ubuntu terminal without installing Ubuntu on your machine.

3. Experimenting with a different Linux distribution

You might be working on a server that runs Ubuntu, but your computer runs another Linux distribution like Fedora or Arch. Running an Ubuntu container allows you to test commands in an Ubuntu-specific environment before applying them to a real server.

4. Learning Linux without installing a new OS

If you want to practice Linux commands but don’t want to reinstall your operating system, running an Ubuntu container gives you a safe place to try out Linux without affecting your main system.

What other containers can you run?

Ubuntu is just one example of a container you can run with Docker. There are many different container images available for different purposes, including:

  • Alpine Linux: A lightweight Linux container for minimal environments.
  • Nginx: A web server container to serve web pages.
  • PostgreSQL: A database container for managing data.
  • Node.js: A container with Node.js preinstalled for JavaScript development.
  • Python: A container with Python and all necessary dependencies for scripting.

You can pull and run any of these containers using Docker, just like you’re about to do with the Ubuntu container.
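For example, once Docker is set up (see the note below), trying Alpine is a one-liner; the --rm flag simply tells Docker to delete the container when you exit:

docker run --rm -it alpine sh

Docker pulls the alpine image automatically if it isn’t already on your machine and drops you into its shell, exactly like the Ubuntu walkthrough that follows.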

Note: Follow the steps in this article to properly install and set up Docker before you go on.

Running an Ubuntu container using Docker

Let’s now run an Ubuntu container and interact with it as if it were a real Ubuntu system.

Step 1: Download the Ubuntu container image

Before you can run an Ubuntu container, you need to download the official Ubuntu image from Docker Hub.

To pull the Ubuntu image, open your terminal and run:

docker pull ubuntu

This command downloads the latest Ubuntu container image from Docker Hub.

Understanding the output of docker pull ubuntu

Once you run the command, you’ll see several lines printed in the terminal. Let’s break down what each part means so you know exactly what’s happening.

Output of the docker pull ubuntu command

  • Using default tag: latest

You didn’t specify which version of Ubuntu you want, so Docker used the default tag, which is latest. That means it will pull the most up-to-date version available.

If you wanted a specific version, you could run:

docker pull ubuntu:22.04
  • latest: Pulling from library/ubuntu

This shows where the image is coming from.

  • library/ubuntu is the official Ubuntu image maintained by Docker.
  • It lives on Docker Hub, which is Docker’s public registry of container images.

  • 2f074dc76c5d: Pull complete

Docker images are built in layers. Each one adds something on top of the previous layer.

This message means a specific layer of the Ubuntu image has been successfully downloaded.

  • Digest: sha256:...

This is the unique ID (checksum) of the image you pulled. Think of it like a fingerprint for this exact version. It helps Docker verify the integrity and version of the image.

  • Status: Downloaded newer image for ubuntu:latest

Docker checked your system to see if you already had the image.

In this case, it didn’t find it or found an older version, so it downloaded the newer one.

If the image was already up to date, you would see:

Status: Image is up to date for ubuntu:latest
  • docker.io/library/ubuntu:latest

This confirms the full path of the image you now have locally:

  • It came from docker.io (Docker Hub)
  • It’s the official library/ubuntu image
  • The tag is latest

Now that the Ubuntu image is on your system, you’re ready to use it to run your first container, which we’ll do next. You don’t need to download it again; Docker will use this image from your local machine.
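If you’d like to double-check that the image really is stored locally, you can list it:

docker images ubuntu

This prints the repository name, tag, image ID, creation date, and size of every ubuntu image on your machine.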

Step 2: Run the Ubuntu container

Once the image is downloaded, you can start an Ubuntu container.

Run this command:

docker run -it ubuntu

What does this command do?

  • docker run tells Docker to start a new container.
  • -it makes it interactive and gives you access to a terminal inside the container.
  • ubuntu tells Docker to use the Ubuntu image (if it wasn’t already downloaded, Docker pulled it from Docker Hub automatically).

After running this command, your terminal will change. You are now inside the Ubuntu container, running a Linux shell.

Your terminal will look like this:

Terminal after running docker run -it ubuntu

What you see in your terminal root@6f63eabbab0e:/# is the Ubuntu container’s shell prompt. You are now inside the container, running Ubuntu as the root user.

Let’s break it down:

  • root → You are logged in as the root user (the default in most containers).
  • @6f63eabbab0e → This is the short container ID assigned by Docker. It uniquely identifies your running container.
  • :/# → You are currently at the root directory (/) inside the Ubuntu file system. The # symbol confirms you’re logged in as root.

From this point, you can run Linux commands inside the container as if you were using a real Ubuntu server. You’re not in a simulation; you’re in a real Ubuntu environment, isolated from your host machine.
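Here are a few safe commands to try first; all of them are included in the minimal image:

whoami     # prints "root", the user you’re logged in as
ls /       # lists the top-level directories of the container’s file system
uname -r   # prints the kernel version; it’s the host’s kernel, since containers share it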

Step 3: Check the OS version inside the container

You are now inside a working Ubuntu container, but keep in mind that this is a minimal version of Ubuntu. It’s stripped down to keep the container lightweight, so some common commands are not included by default.

To check the Ubuntu version running inside the container, you can use this built-in command:

cat /etc/os-release

This command reads a system file that contains the OS version details. You should see output similar to this:

Output of the cat /etc/os-release command
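In plain text, the output looks roughly like this on the Ubuntu 24.04 image (the exact values depend on the release you pulled):

PRETTY_NAME="Ubuntu 24.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.2 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo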

Here’s what each part means:

  • PRETTY_NAME="Ubuntu 24.04.2 LTS"

    This is the full name of the OS version, written in a human-readable way. In this case, it’s Ubuntu version 24.04.2, Long-Term Support (LTS).

  • NAME="Ubuntu"

    This confirms the base distribution is Ubuntu.

  • VERSION_ID="24.04"

    This is the base version number of the operating system. It’s commonly used by scripts or automation tools.

  • VERSION="24.04.2 LTS (Noble Numbat)"

    This gives both the version number and the codename (“Noble Numbat”) assigned to this Ubuntu release.

  • VERSION_CODENAME=noble and UBUNTU_CODENAME=noble

    These provide the codename in a more machine-friendly format.

  • ID=ubuntu and ID_LIKE=debian

    These are used by tools to identify what kind of system they’re running on. ID_LIKE=debian means that although this is Ubuntu, it behaves similarly to Debian.

  • The HOME_URL, SUPPORT_URL, and BUG_REPORT_URL lines point you to official Ubuntu resources for learning more or reporting issues.

  • LOGO=ubuntu-logo

    This is mostly used in graphical environments or tools that display branding.

This output confirms that your Ubuntu container is based on Ubuntu 24.04.2 LTS, and you’re working inside a clean, isolated Linux environment (even if your main system is running something else like Windows or macOS).

If you prefer using the lsb_release command (which gives similar OS version details in a cleaner format), you’ll need to install it manually, because it isn’t included in the minimal Ubuntu image.

So, run this in your container:

apt update && apt install -y lsb-release

Let’s break this down:

  • apt update tells Ubuntu to refresh its list of available packages. Think of it as checking for the latest versions and availability of software.
  • apt install -y lsb-release installs the lsb-release utility. The -y flag tells Ubuntu to automatically confirm that you want to proceed, so it won’t stop and ask you to type “yes.”

After running the command, you’ll see an output like this:

Output of the apt update and lsb-release installation

You’ll see a long list of messages in your terminal. That’s completely normal; let’s quickly walk through what’s happening so you know what to expect.

First, apt connects to Ubuntu’s package servers and pulls the most up-to-date list of software. You’ll see lines like:

Get:1 http://ports.ubuntu.com/ubuntu-ports noble InRelease

These are the repositories being contacted. You don’t need to interact with any of this, just let it run.

Once the package list is refreshed, Ubuntu starts installing the lsb-release tool. You’ll see a few confirmations like:

The following NEW packages will be installed: lsb-release

That just tells you this package wasn’t already on the system and is being added now.

It then downloads the package and unpacks it:

Unpacking lsb-release...
Setting up lsb-release (12.0-2)...

This part completes in a few seconds. Once you see the “Setting up” line, you’re ready to use the command.

Now you can type:

lsb_release -a

and you’ll get a clean, structured summary of the Ubuntu version your container is running. Let’s move on to that next.

Here’s what that command does:

  • lsb_release is a small tool that prints version info about the current Linux distribution.
  • The -a flag means “all,” so you’ll see a full breakdown: the distribution name, description, version, and codename.

This output will be very similar to what you saw earlier with cat /etc/os-release, but a bit more focused and formatted for readability.

Output of the lsb_release -a command

If you’re in a situation where you’re writing shell scripts or doing system automation and you only need the codename or release number, lsb_release is a quick way to get exactly that.
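For example, the -s flag strips the labels so you get just the value, which is easy to capture in a script:

lsb_release -cs   # prints only the codename, e.g. "noble"
lsb_release -rs   # prints only the release number, e.g. "24.04"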

Either method is valid: stick with cat /etc/os-release if you don’t want to install anything extra, or use lsb_release -a if you prefer its format.

Step 4: Install software inside the container

You can install software inside the Ubuntu container just like you would on a normal Ubuntu system.

For example, to install curl, run:

apt update && apt install -y curl

What’s happening here?

  • apt update updates the list of available packages.
  • apt install -y curl installs curl without asking for confirmation (-y).

Once installed, you can run:

curl --version

This verifies that curl is now available inside the container.

Output of curl --version confirming the installation
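As a quick test, you can ask curl to fetch just the response headers from a public site. The URL below is only an example, and if HTTPS complains about missing certificates in a very minimal image, running apt install -y ca-certificates first fixes it:

curl -I https://example.com

The -I flag sends a HEAD request, so you get the server’s headers (status code, content type, and so on) without downloading the page body.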

Step 5: Exit the container and restart it later

After you’re done working inside the container, you can exit it by typing:

exit

This will stop the running container and return you to your regular terminal prompt. Exiting doesn’t delete the container; it just stops it temporarily.

Check for stopped containers

To see a list of all containers (including the ones that have stopped), use:

docker ps -a

This command shows both running and exited containers. The output looks something like this:

Output of the docker ps -a command

Let’s break down each column:

  • CONTAINER ID: This is the unique ID Docker assigns to each container. You can use this ID to start, stop, inspect, or remove a container. In the screenshot above, 6f63eabbab0e refers to your Ubuntu container.
  • IMAGE: This shows which Docker image was used to create the container. In this case, it shows you used the ubuntu image and also ran hello-world earlier.
  • COMMAND: This is the default command that runs when the container starts. For Ubuntu, it’s "/bin/bash", which opens an interactive shell. For hello-world, it’s "/hello", which just prints a message and exits.
  • CREATED: This tells you how long ago the container was created. It helps you keep track of how old a container is, especially if you’re managing several. For example, 26 hours ago shows your Ubuntu container was created a day ago.
  • STATUS: This shows the current state of the container. If it says Exited, the container is stopped. If it says Up, then the container is running. You can also see how recently it exited (like About a minute ago).
  • PORTS: This column lists any port mappings between your host machine and the container. For example, if a web server container exposes port 80, this column would show which host port it’s connected to. In your case, the Ubuntu container has no ports exposed, so this is blank.
  • NAMES: Docker assigns a random, readable name to each container if you don’t give it one yourself. In the screenshot, the Ubuntu container was named keen_meninsky. You can rename containers or assign a custom name when creating one using the --name flag (see the example right after this list).
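Here’s the example promised above: a hypothetical command that starts an Nginx web server container in the background with a custom name and a port mapping, so docker ps shows real values in the NAMES and PORTS columns (my-web is just a placeholder name):

docker run -d --name my-web -p 8080:80 nginx

-d runs the container in the background, --name my-web fills the NAMES column, and -p 8080:80 maps port 8080 on your machine to port 80 inside the container, which shows up under PORTS as 0.0.0.0:8080->80/tcp. When you’re done, clean it up with docker stop my-web && docker rm my-web.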

Restart and attach to a stopped container

If you want to restart the Ubuntu container and attach to it, use:

docker start -ai <container_id>

Replace <container_id> with your container ID, which in this example is 6f63eabbab0e, so you would run:

docker start -ai 6f63eabbab0e
  • start brings the container back to life.
  • -a means “attach,” so your terminal connects to the container’s input and output.
  • -i means “interactive,” so you can type commands and see results like before.

Once this command runs, you’re back inside the same container, with everything still intact. This is helpful if you’ve installed tools or created files in your container and want to pick up where you left off.

The screenshot below confirms this worked. You’re back at the container prompt:

Terminal showing the restarted container’s prompt
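One last bit of housekeeping: when you’re done with a container for good, you can delete it. This permanently removes the container and anything you installed inside it, but the image stays on your machine:

docker rm 6f63eabbab0e

Replace the ID with your own (from docker ps -a). You can always start a fresh container from the same image with docker run -it ubuntu.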

You’ve covered a lot already, and it’s the kind of foundation that sets you up for everything else we’ll do with Docker.

Here’s what you’ve done:

  • You learned how applications used to run, and why virtualization became necessary.
  • You understood what virtual machines are, where they’re useful, and where they fall short.
  • You broke down why containers were introduced, and how they solve those limitations.
  • You saw the difference between VMs and containers using clear examples.
  • You got a proper introduction to Docker (what it is and how it fits into the container world).
  • You ran your first hands-on container, checked the OS inside it, installed software, exited it, and brought it back.

If you followed along with the mini-project, you now know how to pull images, run containers interactively, install software inside them, and restart them after they’ve exited. That’s a major first step.

