πŸ› οΈ FastAPI in Production: Build, Scale & Deploy – Series B : Services, Queues & Containers




Welcome to Part 2 of our 3-part FastAPI series. πŸ“¦ Full source code is available on GitHub!

In case you missed it, here’s 🔥PART 1 to get you up to speed!

To whet your appetite, here are just a few of the interesting features we’ll be focusing on:

  • 🐳 Dockerized Environment Setup
  • πŸš€ Asynchronous Task Processing with Celery
  • πŸ“¦ Boss-Level Project Documentation with MkDocs

Before we dive in, here’s a quick look at what we’ll cover:

πŸ“š Table of Contents

  • ⚽ Picking Up Where We Left Off
  • πŸ—‚ Project Structure
  • πŸ›’ Setup Database
  • πŸ’Ύ Perform CRUD Operations
  • πŸš€ Offloading CPU Intensive Workloads with Celery
  • πŸ“„ Project Documentation with MkDocs
  • 🐳 Running Your Project in Docker

⚽ Picking Up Where We Left Off

Let’s pick things up by cloning the repo from Part 1 and setting up our project environment. Follow these steps to get started:

# Clone project
git clone --branch=seriesA https://github.com/MrChike/media_app.git

cd media_app

# Create and activate virtual environment
python3 -m venv env && source env/bin/activate

# Install dependencies
pip install -r requirements.txt

Now that our project is all set up, let’s create a setup script to bootstrap the series_b files we’ll be working with:

# Create the setup script file
touch series_b_setup.sh

# Make the setup script executable
chmod +x series_b_setup.sh

# Run the setup script (after updating it with the setup script repo link)
./series_b_setup.sh

At this point, your project structure should look like this πŸ‘‡πŸΌ

πŸ—‚ Project Structure

media_app/

β”œβ”€β”€ base/                                # Core feature module
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ router.py                        # Defines HTTP API endpoints and maps them to controller functions
β”‚   β”œβ”€β”€ controller.py                    # Handles request-response cycle; delegates business logic to services
β”‚   β”œβ”€β”€ service.py                       # Core business logic for async I/O operations
β”‚   β”œβ”€β”€ model.py                         # SQLAlchemy ORM models representing database tables
β”‚   β”œβ”€β”€ schema.py                        # Pydantic models for input validation and output serialization
β”‚   β”œβ”€β”€ dependencies.py                  # Module-specific DI components like authentication and DB sessions
β”‚   └── tasks.py                         # Core business logic for CPU-bound operations

β”œβ”€β”€ movies/                              # Movie feature module (same layout as base)
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ router.py
β”‚   β”œβ”€β”€ controller.py
β”‚   β”œβ”€β”€ service.py
β”‚   β”œβ”€β”€ model.py
β”‚   β”œβ”€β”€ schema.py
β”‚   β”œβ”€β”€ dependencies.py
β”‚   └── tasks.py

β”œβ”€β”€ static/                              # (Optional) Static files (e.g., images, CSS)
β”œβ”€β”€ templates/                           # (Optional) Jinja2 or HTML templates for frontend rendering
β”œβ”€β”€ docs/                                # (Optional) API documentation, design specs, or OpenAPI enhancements

β”œβ”€β”€ shared/                              # Project-wide shared codebase
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ config/
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ base_settings.py             # Base config for environments
β”‚   β”‚   └── settings.py                  # Pydantic-based config management
β”‚   β”œβ”€β”€ db/
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   └── connection.py                # DB engine/session handling
β”‚   β”œβ”€β”€ dependencies/                    # Shared DI functions (e.g., auth, DB session)
β”‚   β”‚   └── __init__.py
β”‚   β”œβ”€β”€ middleware/                      # Global middlewares (e.g., logging, error handling)
β”‚   β”‚   └── __init__.py
β”‚   β”œβ”€β”€ services/
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ external_apis/               # Third-party integrations
β”‚   β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”‚   └── omdb_movies.py           # Integration with OMDB API
β”‚   β”‚   └── internal_operations/         # CPU-intensive logic, background tasks
β”‚   β”‚       └── __init__.py
β”‚   └── utils/                           # Generic helpers
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── fetch_request_with_error_handling.py  # Error-resilient HTTP requests

β”œβ”€β”€ scripts/                             # Developer or DevOps utilities
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── sanity_check.py                  # A friendly reminder not to lose your mind while debugging

β”œβ”€β”€ tests/                               # Root of all tests
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ unit/                            # Fast, isolated logic-level tests
β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”œβ”€β”€ base/
β”‚   β”‚   β”‚   β”œβ”€β”€ __init__.py
β”‚   β”‚   β”‚   └── test_service.py
β”‚   β”‚   └── movies/
β”‚   β”‚       β”œβ”€β”€ __init__.py
β”‚   β”‚       β”œβ”€β”€ test_controller.py
β”‚   β”‚       β”œβ”€β”€ test_service.py
β”‚   β”‚       └── test_tasks.py
β”‚   β”œβ”€β”€ integration/                     # DB/API/network-dependent tests
β”‚   β”‚   └── __init__.py
β”‚   β”œβ”€β”€ e2e/                             # High-level, full user-flow tests
β”‚   β”‚   └── __init__.py
β”‚   └── system/                          # System resilience, performance, fault-tolerance tests
β”‚       └── __init__.py

β”œβ”€β”€ migrations/                          # Alembic migration files
β”‚   β”œβ”€β”€ env.py
β”‚   β”œβ”€β”€ README
β”‚   β”œβ”€β”€ script.py.mako
β”‚   └── versions/                        # Versioned migration scripts

β”œβ”€β”€ alembic.ini                          # Alembic configuration for database migrations
β”œβ”€β”€ celeryconfig.py                      # Celery settings for async task queue

β”œβ”€β”€ docker-compose.api.yaml              # Docker Compose file for the API/app services
β”œβ”€β”€ docker-compose.db.yaml               # Docker Compose file for the database services
β”œβ”€β”€ Dockerfile                           # Base app Dockerfile
β”œβ”€β”€ Dockerfile.nginx                     # Nginx reverse proxy Dockerfile
β”œβ”€β”€ nginx.conf                           # Nginx configuration
β”œβ”€β”€ entrypoint.sh                        # Shell script to run app container
β”œβ”€β”€ series_a_setup.sh                    # SeriesA Environment setup script
β”œβ”€β”€ series_b_setup.sh                    # SeriesB Environment setup script

β”œβ”€β”€ .example.env                         # Template for environment variables
β”œβ”€β”€ .coveragerc                          # Code coverage settings
β”œβ”€β”€ .gitignore                           # Files and folders ignored by Git

β”œβ”€β”€ main.py                              # FastAPI application entrypoint
β”œβ”€β”€ pytest.ini                           # Pytest configuration
β”œβ”€β”€ requirements.txt                     # Python dependency list
β”œβ”€β”€ JOURNAL.md                           # Development log: issues faced, solutions, and resources
└── README.md                            # Project overview, setup, and usage

πŸ›’ Setup Database

Now that we know what to expect, let's set up our database(s).
We'll start by updating these files:

Now that you've updated the files, we'll move on to setting up Alembic.
What the awwwwn is an Alembic, you ask? πŸ˜‰

Alembic is a database migration tool for Python that integrates with SQLAlchemy to generate, track, and apply versioned schema changes across different environments.

Now that you’ve got all that off your chest, it’s time to initialize the migrations folder.

Quick NoteπŸ’₯: You can technically name this folder anything, even alembic. But I prefer migrations because it’s clear and straightforward, and it helps anyone reading the code later (including future me) to quickly understand its purpose without confusion.

Next, run the command:

alembic init migrations
  Creating directory /path/to/media_app/migrations ...  done
  Creating directory /path/to/media_app/migrations/versions ...  done
  Generating /path/to/media_app/migrations/README ...  done
  Generating /path/to/media_app/alembic.ini ...  done
  Generating /path/to/media_app/migrations/env.py ...  done
  Generating /path/to/media_app/migrations/script.py.mako ...  done
  Please edit configuration/connection/logging
  settings in /path/to/media_app/alembic.ini before proceeding.

After running the command, you’ll notice an alembic.ini file and a migrations folder have been created.

Our focus will be on updating the migrations/env.py file.

This file is generated automatically when you run alembic init, but there are several key parts we'll be updating to make it work with our PostgreSQL database (a sketch of these edits follows the list):

  • Line 5 (Import application settings)
  • Line 6 (Import the modules' models)
  • Line 11 (Configure the PostgreSQL connection URL)
  • Line 31 (Register the modules' metadata for migrations)
  • Line 53 (Insert the database connection string)
  • Line 76 (Add the database URL and connection parameters)
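
To give you a feel for those edits, here's a minimal sketch of the relevant parts of migrations/env.py. The import paths follow this project's layout, but attribute names like DATABASE_URL are my own assumptions, so match them to your settings module:

# migrations/env.py (excerpt): a minimal sketch of the edits listed above
from alembic import context

from shared.config.settings import settings   # Line 5: application settings (attribute name below is assumed)
from movies import model as movies_model      # Line 6: import models so Alembic can see the tables

config = context.config

# Line 11: point Alembic at our PostgreSQL database
config.set_main_option("sqlalchemy.url", settings.DATABASE_URL)

# Line 31: register the models' metadata so --autogenerate can diff against it
target_metadata = movies_model.Base.metadata

Lines 53 and 76 then feed that same URL and its connection parameters into Alembic's offline and online migration runners.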

Now that the settings are complete, it’s time to configure the database using a Docker Compose file.

Update your docker-compose.db.yaml file to reflect the necessary services.

Note: I purposely included all database configurations in a single Docker Compose file for simplicity. I understand that some of you might already be using external databases. However, for the purpose of this tutorial, I've bundled Redis, Postgres, and MongoDB, along with their respective GUIs (RedisInsight, pgAdmin, and Mongo Express), into one file for convenience. A trimmed sketch of that file follows.
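
For orientation, here's a trimmed sketch of what such a file can look like. The Postgres credentials mirror the pgAdmin connection details used later in this section, while the image tags, the Redis password, and the pgAdmin login are placeholders of my own:

# docker-compose.db.yaml (trimmed sketch); the real file also bundles
# MongoDB, Mongo Express, and RedisInsight following the same pattern
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: root          # matches the credentials used in pgAdmin below
      POSTGRES_PASSWORD: root
      POSTGRES_DB: root
    ports:
      - "5432:5432"

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com   # placeholder login
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "8083:80"                  # pgAdmin UI at http://localhost:8083

  redis:
    image: redis:7
    command: ["redis-server", "--requirepass", "change-me"]   # password assumed
    ports:
      - "6379:6379"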

If you’re already using external databases, you can skip the next step. Otherwise, start the services by running:

docker-compose -f docker-compose.db.yaml up

This command will spin up the databases along with their respective GUIs. Give it a minute or so for all the services to initialize properly. (Between you and me, pgAdmin is an elder, so he takes a bit more time to get ready; please be patient with him) πŸ˜‰

Once they’re up, you can access them locally using the following URLs and credentials:

DB GUIs

Now that we have our databases up and running, let’s not lose sight of what we’ve accomplished so far with Alembic.

So far, we've only completed the first step in the 3-step Alembic flow: initializing migrations. The next two steps are just as important: generating and applying those migrations. That's what we'll focus on next.

To generate a new migration based on your current models, run the following command:

alembic revision --autogenerate -m "Initial Migration"

Replace “Initial Migration” with a message that describes the changes you’re capturing, if needed.

INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added table 'movies'
INFO  [alembic.autogenerate.compare] Detected added index 'ix_movies_id' on '('id',)'
INFO  [alembic.autogenerate.compare] Detected added index 'ix_movies_title' on '('title',)'
  Generating /path/to/media_app/migrations/versions/b7283418aefb_initial
  _migration.py ...  done

To apply the generated migrations to your relational (PostgreSQL) database, run the following command:

alembic upgrade head
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> b7283418aefb, Initial Migration

This will apply the latest migration(s) to your database schema, syncing it with your current models.

Your PostgreSQL database has been updated with the movies table! If you're new to pgAdmin, no wπŸ˜‰rries! I'll guide you through how to view it. First, head to http://localhost:8083/browser/ and register a new server. Under the General tab, name your server something like Media App. Then, in the Connection tab, update the following details:

  • Host name/address: postgres
  • Port: 5432
  • Maintenance database: root
  • Username: root
  • Password: root
  • Save password?: βœ…

And don't forget to smash the Save button… & follow for more! πŸ‘

Once you’re connected, go to Databases(2) -> root -> Schemas -> public -> Tables (2). Right-click on the movies table, and choose View/Edit Data -> All Rows to see the contents.

pgadmin

Now that we’ve completed the migration process and have our database up and running, it’s time to move on to the next phase. πŸ‘£

πŸ’Ύ Perform CRUD Operations

What’s an app without CRUD? 😀
At the heart of every killer app, no matter how sleek the UI, how fancy the animations, or how many laws of space-time complexity it defies, one simple truth holds: users just want to create, read, update, and delete stuff.

If your app can’t handle their basic requests? Game Over!❌
So let's give our stakeholders what they truly want: reliable CRUD endpoints that just work.

We will be updating the following files according to the order of the request life cycle explained in Part 1. A condensed sketch of that flow follows.
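
To make the flow concrete, here's a condensed sketch of how a single create request travels through the layers; the function, schema, and dependency names are illustrative rather than the repo's exact ones:

# movies/router.py (sketch): maps the HTTP endpoint to the controller
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession
from movies import controller
from movies.schema import MovieCreate, MovieResponse
from shared.db.connection import get_db        # shared DB-session dependency (name assumed)

router = APIRouter(prefix="/movies", tags=["movies"])

@router.post("/", response_model=MovieResponse)
async def create_movie(payload: MovieCreate, db: AsyncSession = Depends(get_db)):
    return await controller.create_movie(payload, db)

# movies/controller.py (sketch): thin request/response layer, no business logic
async def create_movie(payload, db):
    return await service.create_movie(payload, db)   # delegates to the service

# movies/service.py (sketch): async business logic that talks to the DB
async def create_movie(payload, db):
    movie = Movie(**payload.model_dump())   # Movie is the SQLAlchemy model from model.py
    db.add(movie)
    await db.commit()
    await db.refresh(movie)
    return movie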

Now that we’ve updated these files, let’s check if unit tests are all intact. Run the command:

pytest
======================================================================= test session starts ========================================================================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.6.0
rootdir: /path/to/media_app
configfile: pytest.ini
testpaths: tests/
plugins: cov-6.1.1, anyio-4.9.0
collected 13 items                                                                                                                                                 

tests/unit/movies/test_controller.py ......                                                                                                                  [ 46%]
tests/unit/movies/test_service.py .......                                                                                                                    [100%]

========================================================================== tests coverage ==========================================================================
_________________________________________________________ coverage: platform linux, python 3.10.12-final-0 _________________________________________________________

Name                   Stmts   Miss  Cover   Missing
----------------------------------------------------
movies/controller.py      22      0   100%
movies/service.py         86      0   100%
movies/tasks.py            0      0   100%
----------------------------------------------------
TOTAL                    108      0   100%
======================================================================== 13 passed in 8.68s ========================================================================

NB: The test configuration was already set up here in Part 1, so feel free to check it out if you're interested…

Everything looks good. Let's have a look at our changes; run the command:

uvicorn main:app --reload --port 8000
INFO:     Will watch for changes in these directories: ['/path/to/media_app']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [59568] using WatchFiles
INFO:     Started server process [59588]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

App Loaded

At this endpoint, feel free to play with the endpoints. (See what I did there?) 😏

So, to add the final icing on this piece of cake, in the next section, we’ll be introducing the one and only Celery… πŸ’ƒ

πŸš€ Offloading CPU Intensive Workloads with Celery

If async gives FastAPI the speed of Superman, then adding Celery makes it the Flash on three cups of coffee and a shot of tequila.

First, we'll finish updating the pending CRUD-related files that hand work off to Celery for task processing. Go ahead and update the following files:

And for the Celery configuration itself, update the following files (a minimal sketch of the wiring follows):
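
As a reference point, here's a minimal sketch of the broker wiring. The redis_broker name and the task path come straight from the commands and logs below; the placeholder password and the loop body are my own assumptions:

# shared/config/settings.py (excerpt, sketch): the Celery app the worker loads
from celery import Celery

redis_broker = Celery(
    "tasks",                                        # app name shown in the worker banner
    broker="redis://:<password>@localhost:6379/0",  # placeholder password
    backend="redis://:<password>@localhost:6379/0",
)
redis_broker.config_from_object("celeryconfig")     # pull settings from celeryconfig.py
redis_broker.autodiscover_tasks(["movies"])         # registers movies/tasks.py

# movies/tasks.py (sketch): CPU-bound work kept off the request path
from shared.config.settings import redis_broker

@redis_broker.task(name="movies.tasks.process_heavy_task")
def process_heavy_task(iterations: int = 1_000_000_000) -> int:
    count = 0
    for _ in range(iterations):   # stand-in for genuinely heavy computation
        count += 1
    return count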

Now that we're done with the setup, let's run the following command to get our Celery workers up and running…

celery -A shared.config.settings.redis_broker worker --loglevel=INFO --concurrency=4

 -------------- celery@mrchike-vm v5.5.2 (immunity)
--- ***** ----- 
-- ******* ---- Linux-6.8.0-60-generic-x86_64-with-glibc2.35 2025-07-10 00:32:39
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x722396ee9cf0
- ** ---------- .> transport:   redis://:**@localhost:6379/0
- ** ---------- .> results:     redis://:**@localhost:6379/0
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[tasks]
  . movies.tasks.process_heavy_task

[2025-07-10 00:32:40,268: INFO/MainProcess] Connected to redis://:**@localhost:6379/0
[2025-07-10 00:32:40,286: INFO/MainProcess] mingle: searching for neighbors
[2025-07-10 00:32:41,385: INFO/MainProcess] mingle: all alone
[2025-07-10 00:32:41,494: INFO/MainProcess] celery@mrchike-vm ready.

The tasks have been registered, and now we need to trigger one through the API, roughly as sketched below.
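
Here's a sketch of what that hand-off can look like inside a controller; the endpoint shape and function names are illustrative, not the repo's exact ones:

# movies/controller.py (excerpt, sketch): enqueue and return immediately
from movies.tasks import process_heavy_task

async def start_heavy_processing() -> dict:
    result = process_heavy_task.delay()   # pushes the task onto Redis; non-blocking
    return {"task_id": result.id, "status": "queued"}

The client gets the task_id back in milliseconds while the worker grinds away in the background; you can poll the result backend with that id later.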

Celery API calls

[2025-07-10 00:37:33,717: INFO/MainProcess] Task movies.tasks.process_heavy_task[c0553b69-9529-42df-8ce4-506b3fa8d2ed] received
[2025-07-10 00:41:37,690: INFO/ForkPoolWorker-4] ==============================================
[2025-07-10 00:41:37,692: INFO/ForkPoolWorker-4] Successfully Processed 1 Billion Transactions...
[2025-07-10 00:41:37,692: INFO/ForkPoolWorker-4] ==============================================
[2025-07-10 00:41:37,723: INFO/ForkPoolWorker-4] Task movies.tasks.process_heavy_task[c0553b69-9529-42df-8ce4-506b3fa8d2ed] succeeded in 243.99923642900103s: 1000000000

This task takes 243.999 seconds, which is approximately 4.07 minutes. Now imagine if this task weren’t offloaded to the background using Celery, and your user had to wait the full 4 minutes before being able to continue. That’s enough time to completely lose their attention, and likely their patience.

Now scale that up to thousands or even millions of users. Blocking execution like that doesn't just degrade user experience; it can cripple your application. Background task queues like Celery aren't just convenient, they're essential for performance and scalability.

At this point, your changes should be reflected in the Redis DB; check it out in RedisInsight. And that's a wrap for this section. Next up: project documentation.

πŸ“„ Project Documentation with MkDocs

The project documentation we'll be building can be found below, but it's important to note that the documentation structure was already set up when you ran the setup script.

To enable FastAPI to serve our documentation's static files, we'll update main.py to import static-file support and mount the docs directory, roughly as sketched below.
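
Here's roughly what that looks like. The build output directory (docs/site) and the choice to move Swagger UI over to /swagger are assumptions on my part, since mounting at /docs would otherwise collide with FastAPI's default interactive docs:

# main.py (excerpt, sketch): serve the built docs site at /docs/
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI(docs_url="/swagger")   # frees up /docs, which FastAPI uses for Swagger UI by default
app.mount("/docs", StaticFiles(directory="docs/site", html=True), name="docs")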

After you're done with the update, you can access the documentation page at http://127.0.0.1:8000/docs/

project docs site

And now for the grand finale, let’s put it all in a box.

🐳 Running Your Project in Docker

To wrap things up, you will be updating the following files:

Take your time and go through the Dockerfiles; that's your assignment.
What I'll focus on here is explaining Nginx and the entrypoint script.

Nginx serves as a reverse proxy.
What's a reverse proxy, you ask?

A reverse proxy is a server that sits between clients and backend servers, forwarding each client request to the appropriate backend server and returning that server's response to the client. In our setup, nginx.conf is what handles this, roughly as sketched below.
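
Here's a minimal sketch of such a configuration; the upstream service name (api) and port are assumptions and should match the app service in your Docker Compose file:

# nginx.conf (sketch): forward all traffic to the FastAPI container
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://api:8000;               # hand the request to the backend service
            proxy_set_header Host $host;              # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;  # forward the client's real IP
        }
    }
}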

The entrypoint script is designed to build your project documentation, run your database migrations, execute tests, and start the application. All of this is very helpful when done on each build. However, the most important thing for me to explain to you is the set -e command.

set -e is used to make your script safer and more predictable by stopping execution as soon as something goes wrong. For example, if your tests fail, the app won’t start, which is actually very helpful. It gives you a sense of what proper CI/CD should look like: your code shouldn’t be deployed to production if the tests are failing.

During development, you can comment it out to avoid interruptions, but once you’re done, be sure to turn it back on.
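
For context, an entrypoint along these lines captures the flow described above; the exact commands and flags in the real script may differ:

#!/bin/sh
# entrypoint.sh (sketch): fail fast, then build, migrate, test, and serve
set -e                        # abort container start-up if any step below fails

mkdocs build                  # build the project documentation
alembic upgrade head          # apply pending database migrations
pytest                        # run the test suite; failures stop the deploy here
exec uvicorn main:app --host 0.0.0.0 --port 8000   # hand PID 1 to the app server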

πŸ”§ Update the service hostnames in your .env file to match these: πŸ‘‰ .example.env

Note: The hostnames are the same as the service names defined in your Docker Compose file. This allows containers to communicate with each other via Docker’s internal networking, so you won’t need to manually update IP addresses every time.

Now run the following commands:

# Stop and remove containers, including orphans
docker-compose -f docker-compose.db.yaml -f docker-compose.api.yaml down --remove-orphans

# Build and start services
docker-compose -f docker-compose.db.yaml -f docker-compose.api.yaml up --build

And with that, we’ve come to the end of this tutorial.
In the next part of this series, we’ll focus on deploying our app for a live demo.

πŸ’‘ Enjoyed this article? Connect with me on:

Your support means a lot. If you'd like to buy me a coffee β˜• to keep me fueled, feel free to check out this link; your generosity goes a long way in helping me continue to create content like this.

Until next time, happy coding! πŸ‘¨πŸΎβ€πŸ’»πŸš€

Previously Written Articles:

  • πŸ”₯ FastAPI in Production: Build, Scale & Deploy – Series A: Codebase Design β†’ Read here
  • πŸš€ Series-X: Create Professional Portfolios Using Material for MkDocs β†’ Read here
  • How to Learn Effectively & Efficiently as a Professional in any Field 🧠⏱🎯 β†’ Read here

