This content originally appeared on DEV Community and was authored by Mr Chike
Welcome to Part 2 of our 3-part FastAPI Series
– Full source code is available on GitHub!
In case you missed it, here's 🔥PART 1 to get you up to speed!
To whet your appetite, here are just a few of the interesting features we'll be focusing on:
Dockerized Environment Setup
Asynchronous Task Processing with Celery
Boss-Level Project Documentation with MkDocs
Before we dive in, here’s a quick look at what we’ll cover:
Table of Contents
Picking Up Where We Left Off
Project Structure
Setup Database
Perform CRUD Operations
Offloading CPU Intensive Workloads with Celery
Project Documentation with MkDocs
Running Your Project in Docker
Picking Up Where We Left Off
Let's pick things up by cloning the repo from Part 1 and setting up our project environment. Follow these steps to get started:
# Clone project
git clone --branch=seriesA https://github.com/MrChike/media_app.git
cd media_app
# Create and activate virtual environment
python3 -m venv env && source env/bin/activate
# Install dependencies
pip install -r requirements.txt
Now that our project is all set up, let's create a setup script to bootstrap the series_b files we'll be working with:
# Create the setup script file
touch series_b_setup.sh
# Make the setup script executable
chmod +x series_b_setup.sh
# Run the setup script (after updating it with the setup script repo link)
./series_b_setup.sh
At this point, your project structure should look like this:
Project Structure
media_app/
├── base/                          # Core feature module
│   ├── __init__.py
│   ├── router.py                  # Defines HTTP API endpoints and maps them to controller functions
│   ├── controller.py              # Handles request-response cycle; delegates business logic to services
│   ├── service.py                 # Core business logic for async I/O operations
│   ├── model.py                   # SQLAlchemy ORM models representing database tables
│   ├── schema.py                  # Pydantic models for input validation and output serialization
│   ├── dependencies.py            # Module-specific DI components like authentication and DB sessions
│   └── tasks.py                   # Core business logic for CPU-bound operations
├── movies/                        # Movie feature module (same layout as base)
│   ├── __init__.py
│   ├── router.py
│   ├── controller.py
│   ├── service.py
│   ├── model.py
│   ├── schema.py
│   ├── dependencies.py
│   └── tasks.py
├── static/                        # (Optional) Static files (e.g., images, CSS)
├── templates/                     # (Optional) Jinja2 or HTML templates for frontend rendering
├── docs/                          # (Optional) API documentation, design specs, or OpenAPI enhancements
├── shared/                        # Project-wide shared codebase
│   ├── __init__.py
│   ├── config/
│   │   ├── __init__.py
│   │   ├── base_settings.py       # Base config for environments
│   │   └── settings.py            # Pydantic-based config management
│   ├── db/
│   │   ├── __init__.py
│   │   └── connection.py          # DB engine/session handling
│   ├── dependencies/              # Shared DI functions (e.g., auth, DB session)
│   │   └── __init__.py
│   ├── middleware/                # Global middlewares (e.g., logging, error handling)
│   │   └── __init__.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── external_apis/         # Third-party integrations
│   │   │   ├── __init__.py
│   │   │   └── omdb_movies.py     # Integration with OMDB API
│   │   └── internal_operations/   # CPU-intensive logic, background tasks
│   │       └── __init__.py
│   └── utils/                     # Generic helpers
│       ├── __init__.py
│       └── fetch_request_with_error_handling.py  # Error-resilient HTTP requests
├── scripts/                       # Developer or DevOps utilities
│   ├── __init__.py
│   └── sanity_check.py            # A friendly reminder not to lose your mind while debugging
├── tests/                         # Root of all tests
│   ├── __init__.py
│   ├── unit/                      # Fast, isolated logic-level tests
│   │   ├── __init__.py
│   │   ├── base/
│   │   │   ├── __init__.py
│   │   │   └── test_service.py
│   │   └── movies/
│   │       ├── __init__.py
│   │       ├── test_controller.py
│   │       ├── test_service.py
│   │       └── test_tasks.py
│   ├── integration/               # DB/API/network dependent tests
│   │   └── __init__.py
│   ├── e2e/                       # High-level, full user flow tests
│   │   └── __init__.py
│   └── system/                    # System resilience, performance, fault-tolerance tests
│       └── __init__.py
├── migrations/                    # Alembic migration files
│   ├── env.py
│   ├── README
│   ├── script.py.mako
│   └── versions/                  # Versioned migration scripts
├── alembic.ini                    # Alembic configuration for database migrations
├── celeryconfig.py                # Celery settings for async task queue
├── docker-compose.api.yaml        # Docker Compose API
├── docker-compose.db.yaml         # Docker Compose DB
├── Dockerfile                     # Base app Dockerfile
├── Dockerfile.nginx               # Nginx reverse proxy Dockerfile
├── nginx.conf                     # Nginx configuration
├── entrypoint.sh                  # Shell script to run app container
├── series_a_setup.sh              # SeriesA Environment setup script
├── series_b_setup.sh              # SeriesB Environment setup script
├── .example.env                   # Template for environment variables
├── .coveragerc                    # Code coverage settings
├── .gitignore                     # Files and folders ignored by Git
├── main.py                        # FastAPI application entrypoint
├── pytest.ini                     # Pytest configuration
├── requirements.txt               # Python dependency list
├── JOURNAL.md                     # Development log: issues faced, solutions, and resources
└── README.md                      # Project overview, setup, and usage
Setup Database
Now that we know what to expect, let's set up our database(s).
We'll start by updating these files (a sketch of the movies model follows the list):
- movies/model.py
- movies/schema.py
- shared/config/settings.py
- shared/db/connection.py
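To make the next steps concrete, here's a minimal sketch of what movies/model.py might contain. Only the movies table name and the indexed id and title columns are implied by the Alembic output later in this section; every other field name here is an assumption.

```python
# movies/model.py - minimal sketch; fields beyond `id` and `title` are assumptions
from sqlalchemy import Column, Integer, String

from shared.db.connection import Base  # assumes connection.py exposes a declarative Base


class Movie(Base):
    __tablename__ = "movies"

    # Indexed primary key and title, matching the indexes Alembic detects during autogenerate
    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)
    year = Column(String, nullable=True)   # assumed extra field
    plot = Column(String, nullable=True)   # assumed extra field
```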
With those files updated, we can move on to setting up Alembic.
What in the world is an Alembic, you ask?
Alembic is a database migration tool for Python that integrates with SQLAlchemy to generate, track, and apply versioned schema changes across different environments.
Now that you've got all that off your chest, it's time to initialize the migrations folder.
Quick Note: You can technically name this folder anything, even alembic. But I prefer migrations because it's clear and straightforward, and it helps anyone reading the code later (including future me) to quickly understand its purpose without confusion.
Next, run the command:
alembic init migrations
Creating directory /path/to/media_app/migrations ... done
Creating directory /path/to/media_app/migrations/versions ... done
Generating /path/to/media_app/migrations/README ... done
Generating /path/to/media_app/alembic.ini ... done
Generating /path/to/media_app/migrations/env.py ... done
Generating /path/to/media_app/migrations/script.py.mako ... done
Please edit configuration/connection/logging
settings in /path/to/media_app/alembic.ini before proceeding.
After running the command, you'll notice an alembic.ini file and a migrations folder have been created.
Our focus will be on updating the migrations/env.py file.
This file is generated automatically when you run alembic init, but there are several key parts we'll be updating to make it work with our PostgreSQL database (a sketch of the updated file follows this list):
- Line 5 (Import application settings)
- Line 6 (Import module models)
- Line 11 (Configure PostgreSQL connection URL)
- Line 31 (Register module models for migration)
- Line 53 (Insert database connection string)
- Line 76 (Add database URL and connection parameters)
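For orientation, here's a hedged, abbreviated sketch of the kind of edits those bullets refer to. Alembic generates a longer file, and names such as settings.postgres_url and the model import path are assumptions, so adapt them to your own code.

```python
# migrations/env.py - abbreviated sketch; only the parts we touch are shown
from logging.config import fileConfig

from alembic import context
from sqlalchemy import engine_from_config, pool

from shared.config.settings import settings   # import application settings (attribute names assumed)
from movies import model                       # import module models so Alembic can see the tables

config = context.config
# Configure the PostgreSQL connection URL from our settings
config.set_main_option("sqlalchemy.url", settings.postgres_url)

if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# Register the module metadata so `--autogenerate` can diff models against the database
target_metadata = model.Base.metadata


def run_migrations_offline() -> None:
    url = config.get_main_option("sqlalchemy.url")  # database connection string
    context.configure(url=url, target_metadata=target_metadata, literal_binds=True)
    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    # Build the engine from the database URL and connection parameters
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
```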
Now that the settings are complete, it's time to configure the database using a Docker Compose file.
Update your docker-compose.db.yaml file to reflect the necessary services.
Note: I purposely included all database configurations in a single Docker Compose file for simplicity. I understand that some of you might already be using external databases. However, for the purpose of this tutorial, I've bundled Redis, Postgres, and MongoDB, along with their respective GUIs (RedisInsight, pgAdmin, and Mongo Express), into one file for convenience.
If you're already using external databases, you can skip the next step. Otherwise, start the services by running:
docker-compose -f docker-compose.db.yaml up
This command will spin up the databases along with their respective GUIs. Give it a minute or so for all the services to initialize properly. (Between you and me, pgAdmin is an elder, so he takes a bit longer to get ready; please be patient with him.)
Once they're up, you can access them locally using the following URLs and credentials:
RedisInsight (Redis)
URL: http://localhost:5540/
Connection String:
redis://:root@redis:6379
Mongo Express (MongoDB)
URL: http://localhost:8081/
Login: already logged in as admin
pgAdmin (Postgres)
URL: http://localhost:8083/
Login:
    Email: root@mailinator.com
    Password: root
Now that we have our databases up and running, let's not lose sight of what we've accomplished so far with Alembic.
So far, we've only completed the first step in the 3-step Alembic flow: initializing migrations. The next two steps are just as important: generating and applying those migrations. That's what we'll focus on next.
To generate a new migration based on your current models, run the following command:
alembic revision --autogenerate -m "Initial Migration"
Replace “Initial Migration” with a message that describes the changes you’re capturing, if needed.
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'movies'
INFO [alembic.autogenerate.compare] Detected added index 'ix_movies_id' on '('id',)'
INFO [alembic.autogenerate.compare] Detected added index 'ix_movies_title' on '('title',)'
Generating /path/to/media_app/migrations/versions/b7283418aefb_initial_migration.py ... done
To apply the generated migrations to your relational (PostgreSQL) database, run the following command:
alembic upgrade head
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> b7283418aefb, Initial Migration
This will apply the latest migration(s) to your database schema, syncing it with your current models.
Your PostgreSQL database has been updated with the movies table! If you're new to pgAdmin, no worries! I'll guide you through how to view it. First, head to http://localhost:8083/browser/ and register a new server. Under the General tab, name your server something like Media App. Then, in the Connection tab, update the following details:
- Host name/address: postgres
- Port: 5432
- Maintenance database: root
- Username: root
- Password: root
- Save password?:
And don't forget to smash the Save button… & follow for more!
Once you’re connected, go to Databases(2)
-> root
-> Schemas
-> public
-> Tables (2)
. Right-click on the movies table, and choose View/Edit Data
-> All Rows
to see the contents.
Now that we've completed the migration process and have our database up and running, it's time to move on to the next phase.
Perform CRUD Operations
What's an app without CRUD?
At the heart of every killer app, no matter how sleek the UI, how fancy the animations, or how many laws of space-time complexity it defies, one simple truth remains: users just want to create, read, update, and delete stuff.
If your app can't handle those basic requests? Game over!
So let's give our stakeholders what they truly want: reliable CRUD endpoints that just work.
We will be updating the following files according to the order of the request life cycle explained in Part 1 (see the sketch after this list):
- movies/router.py
- movies/controller.py
- movies/service.py
- movies/dependencies.py
- shared/services/external_apis/omdb_movies.py
- shared/utils/fetch_request_with_error_handling.py
- tests/unit/movies/test_controller.py
- tests/unit/movies/test_service.py
- main.py
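Before running the tests, here's a heavily trimmed sketch of how the router, controller, and service layers hand a request off to each other. The route path, function names, OMDB helper, and DB session dependency are illustrative assumptions; the real files in the repo are more complete.

```python
# movies/router.py - maps the HTTP endpoint to the controller (path and names are illustrative)
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession

from movies import controller
from movies.schema import MovieResponse   # assumed Pydantic response model
from shared.db.connection import get_db   # assumed async session dependency

router = APIRouter(prefix="/movies", tags=["movies"])


@router.get("/{title}", response_model=MovieResponse)
async def get_movie(title: str, db: AsyncSession = Depends(get_db)):
    return await controller.get_movie(title, db)


# movies/controller.py - owns the request/response cycle and delegates to the service
from movies import service


async def get_movie(title: str, db: AsyncSession):
    return await service.fetch_or_create_movie(title, db)


# movies/service.py - business logic: check the DB first, fall back to the OMDB API, persist
from sqlalchemy import select

from movies.model import Movie
from shared.services.external_apis.omdb_movies import fetch_movie_from_omdb  # assumed helper name


async def fetch_or_create_movie(title: str, db: AsyncSession) -> Movie:
    result = await db.execute(select(Movie).where(Movie.title == title))
    movie = result.scalar_one_or_none()
    if movie is None:
        data = await fetch_movie_from_omdb(title)   # error handling lives in shared/utils
        movie = Movie(title=data.get("Title", title))
        db.add(movie)
        await db.commit()
        await db.refresh(movie)
    return movie
```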
Now that we’ve updated these files, let’s check if unit tests are all intact. Run the command:
pytest
======================================================================= test session starts ========================================================================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.6.0
rootdir: /path/to/media_app
configfile: pytest.ini
testpaths: tests/
plugins: cov-6.1.1, anyio-4.9.0
collected 13 items
tests/unit/movies/test_controller.py ...... [ 46%]
tests/unit/movies/test_service.py ....... [100%]
========================================================================== tests coverage ==========================================================================
_________________________________________________________ coverage: platform linux, python 3.10.12-final-0 _________________________________________________________
Name Stmts Miss Cover Missing
----------------------------------------------------
movies/controller.py 22 0 100%
movies/service.py 86 0 100%
movies/tasks.py 0 0 100%
----------------------------------------------------
TOTAL 108 0 100%
======================================================================== 13 passed in 8.68s ========================================================================
NB: The test configuration was already set up here in Part 1, so feel free to check it out if you're interested.
Everything looks good. Let's have a look at our changes; run the command:
uvicorn main:app --reload --port 8000
INFO: Will watch for changes in these directories: ['/path/to/media_app']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [59568] using WatchFiles
INFO: Started server process [59588]
INFO: Waiting for application startup.
INFO: Application startup complete.
At this endpoint, feel free to play with the endpoints. (See what I did there?)
So, to add the final icing on this piece of cake, in the next section we'll be introducing the one and only Celery…
Offloading CPU Intensive Workloads with Celery
If async gives FastAPI the speed of Superman, then adding Celery makes it the Flash on three cups of coffee and a shot of tequila.
First, we'll wrap up the remaining CRUD-related files that need changes for Celery task processing. Go ahead and update the following files (a sketch of the task module follows this list):
- movies/router.py
- movies/controller.py
- movies/service.py
- movies/tasks.py
- tests/unit/movies/test_controller.py
- tests/unit/movies/test_service.py
- tests/unit/movies/test_tasks.py
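As a reference point, here's a hedged sketch of what the CPU-bound task in movies/tasks.py could look like. The body is inferred from the worker output further down (counting to one billion and logging a summary), and the redis_broker import matches the app referenced in the worker command below; treat it as illustrative rather than the repo's exact code.

```python
# movies/tasks.py - illustrative sketch of a CPU-bound Celery task
from celery.utils.log import get_task_logger

from shared.config.settings import redis_broker  # the Celery app the worker command points at

logger = get_task_logger(__name__)


@redis_broker.task
def process_heavy_task() -> int:
    """Simulate a heavy job: count to one billion, log a summary, return the total."""
    total = 0
    for _ in range(1_000_000_000):
        total += 1
    logger.info("=" * 46)
    logger.info("Successfully Processed 1 Billion Transactions...")
    logger.info("=" * 46)
    return total
```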
And for the Celery configuration itself, update the following files (a sketch of the broker wiring follows this list):
- shared/config/settings.py
- celeryconfig.py
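For context, the worker command below points at shared.config.settings.redis_broker, so settings.py presumably exposes a Celery instance by that name. Here's a hedged sketch of that wiring; the broker/backend URLs and the celeryconfig.py options are assumptions (in the real project they come from environment variables).

```python
# shared/config/settings.py - sketch of the Celery broker wiring (URLs are assumptions)
from celery import Celery

redis_broker = Celery(
    "tasks",                                  # matches the app name shown in the worker banner
    broker="redis://:root@localhost:6379/0",  # assumed; use your own Redis credentials/host
    backend="redis://:root@localhost:6379/0",
    include=["movies.tasks"],                 # ensures the task module is registered
)

# Pull any remaining options from celeryconfig.py, e.g.:
#   task_serializer = "json"
#   result_serializer = "json"
#   accept_content = ["json"]
redis_broker.config_from_object("celeryconfig")
```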
Now that we’re done with the setup. Let’s run the following command to get our celery workers up and running…
celery -A shared.config.settings.redis_broker worker --loglevel=INFO --concurrency=4
-------------- celery@mrchike-vm v5.5.2 (immunity)
--- ***** -----
-- ******* ---- Linux-6.8.0-60-generic-x86_64-with-glibc2.35 2025-07-10 00:32:39
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x722396ee9cf0
- ** ---------- .> transport: redis://:**@localhost:6379/0
- ** ---------- .> results: redis://:**@localhost:6379/0
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. movies.tasks.process_heavy_task
[2025-07-10 00:32:40,268: INFO/MainProcess] Connected to redis://:**@localhost:6379/0
[2025-07-10 00:32:40,286: INFO/MainProcess] mingle: searching for neighbors
[2025-07-10 00:32:41,385: INFO/MainProcess] mingle: all alone
[2025-07-10 00:32:41,494: INFO/MainProcess] celery@mrchike-vm ready.
The tasks have been registered; now we need to trigger one through the API.
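As a rough illustration (names are assumptions, not the repo's exact code), the service layer can enqueue the job with .delay() and return immediately instead of blocking the request:

```python
# Sketch: enqueue the Celery task from the API layer instead of running it inline
from movies.tasks import process_heavy_task


async def start_heavy_processing() -> dict:
    # .delay() pushes the job onto the Redis queue and returns right away,
    # so the HTTP response doesn't wait for the long-running computation
    async_result = process_heavy_task.delay()
    return {"task_id": async_result.id, "status": "queued"}
```

Once the endpoint fires, the worker log shows the task being received and, a few minutes later, completed: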
[2025-07-10 00:37:33,717: INFO/MainProcess] Task movies.tasks.process_heavy_task[c0553b69-9529-42df-8ce4-506b3fa8d2ed] received
[2025-07-10 00:41:37,690: INFO/ForkPoolWorker-4] ==============================================
[2025-07-10 00:41:37,692: INFO/ForkPoolWorker-4] Successfully Processed 1 Billion Transactions...
[2025-07-10 00:41:37,692: INFO/ForkPoolWorker-4] ==============================================
[2025-07-10 00:41:37,723: INFO/ForkPoolWorker-4] Task movies.tasks.process_heavy_task[c0553b69-9529-42df-8ce4-506b3fa8d2ed] succeeded in 243.99923642900103s: 1000000000
This task takes 243.999 seconds, which is approximately 4.07 minutes. Now imagine if this task weren't offloaded to the background with Celery, and your user had to wait the full 4 minutes before being able to continue. That's enough time to completely lose their attention, and likely their patience.
Now scale that up to thousands or even millions of users. Blocking execution like that doesn't just degrade user experience; it can cripple your application. Background task queues like Celery aren't just convenient, they're essential for performance and scalability.
At this point, your changes should also be reflected in the Redis DB, so check it out. And that's a wrap for this section. Next up: project documentation.
Project Documentation with MkDocs
The project documentation we'll be building can be found below, but note that the documentation structure was already created when you ran the setup script.
To serve our documentation's static files, we'll be updating main.py to include the static-files import and mounting (a sketch follows below).
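Here's a minimal sketch of that change, assuming the built documentation lives in a docs/site-style directory; adjust the directory and mount path to wherever MkDocs outputs your site.

```python
# main.py - sketch: serve the built MkDocs site as static files
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Mount the generated documentation. The directory is an assumption; point it at the
# folder `mkdocs build` produces. Note that FastAPI's interactive API docs also default
# to /docs, so in practice one of the two may need a different path.
app.mount("/docs", StaticFiles(directory="docs/site", html=True), name="docs")
```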
After you’re done with the update. You can now access the documentation page on http://127.0.0.1:8000/docs/
And now for the grand finale, let’s put it all in a box.
Running Your Project in Docker
To wrap things up, you will be updating the following files. Take your time and go through the Dockerfiles; that's your assignment.
What I will focus on is explaining Nginx and the entrypoint.
Nginx serves as a reverse proxy.
What’s a reverse proxy, you asked?
A reverse proxy is a server that sits between clients and backend servers, forwarding client requests to the appropriate backend server and then returning the server’s response back to the client and
nginx.conf
is what handles this.
The entrypoint is designed to build your project documentation, run your database migrations, execute tests, and start the application. All of this is very helpful when done during each build. However, what's most important for me to explain is the set -e command.
set -e is used to make your script safer and more predictable by stopping execution as soon as something goes wrong. For example, if your tests fail, the app won't start, which is actually very helpful. It gives you a sense of what proper CI/CD should look like: your code shouldn't be deployed to production if the tests are failing.
During development, you can comment it out to avoid interruptions, but once you’re done, be sure to turn it back on.
Update the service hostnames in your .env file to match the template in .example.env.
Note: The hostnames are the same as the service names defined in your Docker Compose file. This allows containers to communicate with each other via Docker's internal networking, so you won't need to manually update IP addresses every time.
Now run the following commands:
# Stop and remove containers, including orphans
docker-compose -f docker-compose.db.yaml -f docker-compose.api.yaml down --remove-orphans
# Build and start services
docker-compose -f docker-compose.db.yaml -f docker-compose.api.yaml up --build
And with that, we’ve come to the end of this tutorial.
In the next part of this series, we’ll focus on deploying our app for a live demo.
Enjoyed this article? Connect with me on:
Your support means a lot. If you'd like to buy me a coffee to keep me fueled, feel free to check out this link. Your generosity would go a long way in helping me continue to create content like this.
Until next time, happy coding!
Previously Written Articles:
FastAPI in Production: Build, Scale & Deploy – Series A: Codebase Design – Read here
Series-X: Create Professional Portfolios Using Material for MkDocs – Read here
How to Learn Effectively & Efficiently as a Professional in any Field – Read here