One boring old SQL database might be the best backend in 2025.

Modern web development feels like building IKEA furniture with 15 screwdrivers, none of which fit.
You want to ship a simple app: a dashboard, a chat tool, a personal project. Suddenly, you’re spinning up Redis for caching, Elasticsearch for search, Firebase for real-time sync, cron jobs with serverless functions, Auth0 for logins, a GraphQL server, analytics via some shady cookie tracker, and an AWS bill that could fund a coffee addiction.
Here’s the plot twist:
Postgres can now do most of that stuff and it’s actually good at it.
Yeah, Postgres. The boring old SQL database that bootcamp tutors told you was “fine, but legacy.” It’s quietly evolved into a backend beast. With the right extensions and practices, you can replace 10+ tools, services, and SaaS bills and run everything from one place with plain SQL.
This article is your guide to doing just that.
No hype. No fragile stacks. Just underrated, SQL-powered simplicity.
Covered in this article:
- TL;DR: what Postgres replaces in one glance
- Flexible schema with JSONB: unstructured data, meet SQL querying
- Jobs & schedules with pg_cron: cron jobs without crontabs
- Caching with unlogged tables: Redis-like speed, no Redis
- AI vector search with pgvector: RAG-style magic in your DB
- Full-text search with TSVECTOR: build your own search engine
- Auth, JWTs & row-level security: login flows with just SQL
- Analytics & dashboards: track events without sending data away
- Conclusion + recap table: tools replaced and where to go next
1. What Postgres replaces (and when it’s perfect)
If you’ve got terminal tabs open, prod on fire, or a Zoom standup in 3 minutes, here’s the short version:
Postgres can now replace:
| You’d normally add | Postgres instead |
| --- | --- |
| MongoDB (flexible schema) | JSONB columns |
| Cron jobs / Lambda timers | pg_cron |
| Redis (ephemeral cache) | UNLOGGED tables |
| Pinecone (vector search) | pgvector |
| Elasticsearch (full-text search) | TSVECTOR + GIN |
| Auth0 / Firebase Auth | pgcrypto + pgjwt + RLS |
| Google Analytics / Mixpanel | event tables + TimescaleDB |
When Postgres-first is perfect:
- Building MVPs or indie SaaS
- You’re solo or on a small team
- You hate wiring up 6 APIs to send a password reset
- You want fewer tools, less config, more actual shipping
Real-world: Supabase Studio itself uses PostgreSQL + pg_cron + Row-Level Security under the hood.
Many indie SaaS tools now run 90% of backend logic from inside the DB and scale just fine.
When it’s not:
- You’re at FAANG-scale and running multi-region
- Your stack depends on non-SQL data stores (e.g., massive object blobs)
- You enjoy YAML pipelines and SQS queues (respect)
We have also made a video on this; you can watch it here:
Runitbare: https://www.youtube.com/channel/UCyb38ARmbRdMlVEVXDBD4xw
2. Flexible schema with JSONB: unstructured data, meet SQL querying
Back in the day, if you wanted flexible, schema-less data, you reached for MongoDB. Want to store random user settings, flags, or logs? SQL wasn’t built for that until it was.
Postgres introduced JSONB, a binary JSON column type that gives you the chaos of NoSQL with the power of SQL. Best of both worlds.
Use case: user preferences, plugin configs, product metadata
```sql
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email TEXT,
  settings JSONB
);
```
Insert flexible data:
```sql
INSERT INTO users (email, settings)
VALUES (
  'dev@postgres.gg',
  '{"theme": "dark", "notifications": {"email": true}}'
);
```
Query deeply nested keys:
```sql
SELECT email FROM users
WHERE settings->'notifications'->>'email' = 'true';
```
Why it’s awesome:
- No need to change schema every time a field changes
- Query with full SQL syntax
- Add GIN indexes for performance (sketch below)
Don’t go full JSON:
If everything’s a blob, you lose type safety and joins. Use JSONB for the flexible parts; leave your core tables structured.
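Here’s a minimal sketch of that GIN index (the index name is mine; the table and column come from the example above):

```sql
-- GIN index over the whole JSONB column; speeds up containment (@>)
-- and key-existence (?) queries
CREATE INDEX users_settings_gin ON users USING GIN (settings);

-- This containment query can now use the index
SELECT email FROM users
WHERE settings @> '{"theme": "dark"}';
```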
3. Jobs & schedules with pg_cron: cron jobs without crontabs
We’ve all been there: SSH into a server, crack open crontab -e, copy-paste a random schedule string, and hope it still runs next month.
With Postgres, you don’t need to touch the terminal. Just use pg_cron.
What is pg_cron?
A Postgres extension that lets you schedule SQL statements to run on intervals. That’s it. No external services. No Linux magic.
Use case: auto-deleting expired sessions
```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;

SELECT cron.schedule(
  'clean_sessions',
  '0 * * * *', -- hourly
  $$ DELETE FROM sessions WHERE expires_at < now(); $$
);
```
It runs directly in the DB. No servers. No jobs queue. Just clean SQL.
Bonus: view/edit jobs like normal table rows
```sql
SELECT * FROM cron.job;
```
Need to pause, update, or delete? It’s all queryable.
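Removing a job, for example, is one call (pg_cron 1.3+ accepts the job name; older versions want the job id from cron.job):

```sql
-- Unschedule the cleanup job created above
SELECT cron.unschedule('clean_sessions');
```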
Notes:
- Only runs on the primary node
- Needs background workers enabled (works great on Neon, Supabase, Railway)
For most apps, this replaces:
- GitHub Actions for cleanup
- Lambda/Cloud Function timers
- DevOps crying over misfired shell scripts
4. Caching with unlogged tables: Redis-like speed, no Redis
Redis is fast, sure. But spinning it up means:
- Another service
- Another mental model
- Another bill
If all you need is fast, disposable cache, Postgres has your back with UNLOGGED tables.
What’s an unlogged table?
- Skips write-ahead logging (WAL), so writes are much cheaper
- Still lives on disk, just without crash-recovery guarantees
- Doesn’t survive a crash (unlogged tables are truncated on recovery)
- Much faster for write-heavy temp data
Use case: session tokens, auth codes, ephemeral flags
```sql
CREATE UNLOGGED TABLE session_cache (
  token UUID PRIMARY KEY,
  user_id INT,
  expires_at TIMESTAMPTZ
);
```
Insert & query just like normal:
```sql
INSERT INTO session_cache (token, user_id, expires_at)
VALUES (gen_random_uuid(), 42, now() + interval '30 minutes');

SELECT * FROM session_cache WHERE token = '...';
```
- Same SQL as always
- No Redis networking/config
- Local dev parity
Just remember:
- Crashes = data gone
- Don’t use for anything critical or persistent
- Not shared across replicas in most setups
For throwaway speed? This replaces Redis for tons of use cases, no Docker container required.
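And since pg_cron is already installed (section 3), you can sweep expired rows on a schedule. A sketch reusing the names above (the job name is mine):

```sql
-- Evict expired cache rows every 5 minutes
SELECT cron.schedule(
  'evict_session_cache',
  '*/5 * * * *',
  $$ DELETE FROM session_cache WHERE expires_at < now(); $$
);
```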
5. AI vector search with pgvector: RAG-style magic in your DB
You’re building something AI-powered: a doc assistant, smart search, maybe a chatbot that doesn’t hallucinate 90% of the time.
You reach for Pinecone… then immediately get hit with SDKs, sync headaches, and API limits.
Instead, try this: store your embeddings directly in Postgres using pgvector.
What is pgvector?
An extension that adds a VECTOR(n) type to Postgres so you can:
- Store OpenAI/Cohere/Claude embeddings
- Run similarity search (cosine, L2, inner product)
- Index those vectors for speed (sketch after the query below)
Use case: semantic search from your own docs
```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE docs (
  id SERIAL PRIMARY KEY,
  content TEXT,
  embedding VECTOR(1536)
);
```
Then search:
```sql
SELECT content
FROM docs
ORDER BY embedding <-> '[0.011, -0.043, ...]'::vector
LIMIT 5;
```
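To keep that scan fast as the table grows, add a vector index. A minimal sketch using pgvector’s IVFFlat index (the index name and lists value are mine):

```sql
-- Approximate index for the L2 distance operator (<->) used above;
-- 'lists' is a tuning knob (roughly rows/1000 is a common start)
CREATE INDEX docs_embedding_idx ON docs
USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);
```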
- Works with real data
- Combine with SQL filters (WHERE category = 'legal')
- No vector DB billing tier stress
When it’s not enough:
- You’re scaling to millions of high-dimensional vectors
- You need heavily tuned approximate nearest neighbor (ANN) search beyond pgvector’s built-in IVFFlat and HNSW indexes
For most MVPs and 10k–100k vector use cases? pgvector is all you need, and it lives in the same DB as everything else.
6. Full-text search with TSVECTOR: build your own search engine
Search is one of those features that seems simple until you start trying to make it fast, ranked, typo-tolerant, and non-horrible. Most devs reach for Elasticsearch. But again… it’s another service, another config, another thing to break.
Postgres has full-text search built in and it’s surprisingly powerful.
Meet TSVECTOR
It lets you:
- Index large blobs of text
- Run ranked searches
- Handle stemming (e.g., “run” vs “running”)
- Handle basic typo/fuzzy matches (with pg_trgm; sketch after the search query below)
Use case: blog or help center search
```sql
ALTER TABLE articles ADD COLUMN search TSVECTOR;

UPDATE articles
SET search = to_tsvector('english', title || ' ' || body);
```
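If you’d rather not remember to re-run that UPDATE, Postgres 12+ can keep the column in sync for you. A sketch, assuming you skip the plain ALTER above:

```sql
-- Stored generated column: recomputed automatically on INSERT/UPDATE
ALTER TABLE articles
ADD COLUMN search TSVECTOR
GENERATED ALWAYS AS (
  to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
) STORED;
```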
Add an index:
```sql
CREATE INDEX search_idx ON articles USING GIN(search);
```
Search it:
```sql
SELECT title, ts_rank(search, plainto_tsquery('postgres vector magic')) AS rank
FROM articles
WHERE search @@ plainto_tsquery('postgres vector magic')
ORDER BY rank DESC
LIMIT 10;
```
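And the typo tolerance mentioned above comes from pg_trgm. A minimal sketch (the misspelled query string is just an example):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- '%' is the trigram similarity operator; it catches near-miss spellings
SELECT title
FROM articles
WHERE title % 'postgers'
ORDER BY similarity(title, 'postgers') DESC
LIMIT 5;
```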
- Real ranking
- Fast with GIN index
- Zero external services
When it’s not ideal:
- You need Google-level search UX with semantic awareness
- Massive scale with real-time indexing
But for most apps with articles, docs, or forums? This kills the need for Elasticsearch in one shot.
7. Auth, JWTs & row-level security: login flows with just SQL
Auth usually sends people running to Firebase Auth, Auth0, or some obscure OAuth provider that breaks in staging.
But with Postgres, you can build a secure, production-grade login system using just SQL and a couple of extensions.
Use pgcrypto to hash passwords
```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Storing password
UPDATE users
SET password = crypt('plaintext_pw', gen_salt('bf'));
```
To verify:
```sql
SELECT * FROM users
WHERE email = 'user@site.dev'
AND password = crypt('plaintext_pw', password);
```
Use pgjwt (or generate tokens in your app)
Store JWT secrets in Postgres, and even generate tokens via SQL if needed. You can also validate them in your API middleware layer.
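A sketch of issuing a token with pgjwt’s sign() function (the secret is a placeholder; load real secrets from secure config, not SQL files):

```sql
CREATE EXTENSION IF NOT EXISTS pgjwt;

-- Issue an HS256 token for user 42, valid for one hour
SELECT sign(
  json_build_object(
    'sub', 42,
    'exp', extract(epoch FROM now() + interval '1 hour')::int
  ),
  'your-jwt-secret' -- placeholder
);
```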
Use Row-Level Security (RLS) to lock data by user
```sql
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

CREATE POLICY user_owns_todo ON todos
USING (user_id = current_setting('app.current_user')::INT);
```
Now every query will automatically scope results to the current user.
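Your app sets that variable per connection or transaction. A sketch (note that RLS doesn’t apply to the table owner or superusers, so connect as a regular app role):

```sql
-- At the start of each request, tell Postgres who's asking
SET app.current_user = '42';

-- The policy now filters automatically
SELECT * FROM todos; -- only rows where user_id = 42
```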
What this replaces:
- Firebase Auth
- Auth0
- Writing custom auth middleware in 3 microservices
Heads up:
- Still need secure token management if you’re exposing an API
- Great for small-to-medium apps, not enterprise SSO madness
8. Analytics, dashboards & event logs: tracking without Google
Want to know what users are clicking, when they log in, or how many visits your landing page got?
You don’t need Google Analytics. You don’t need Mixpanel. You don’t need to ship data off to some black box that might sell it to a questionable ad network.
You can do it all inside Postgres.
Option 1: DIY event table
```sql
CREATE TABLE pageviews (
  path TEXT,
  user_id INT,
  viewed_at TIMESTAMPTZ DEFAULT now()
);
```
Query page hits:
```sql
SELECT path, COUNT(*)
FROM pageviews
GROUP BY path
ORDER BY count DESC;
```
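Daily rollups for a dashboard are one GROUP BY away. A sketch using the same table:

```sql
-- Daily pageviews for the last 30 days
SELECT date_trunc('day', viewed_at) AS day, COUNT(*) AS views
FROM pageviews
WHERE viewed_at > now() - interval '30 days'
GROUP BY day
ORDER BY day;
```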
Option 2: Use TimescaleDB or pgme
- TimescaleDB → advanced time-series DB, drop-in compatible with Postgres (sketch below)
- pgme → light analytics built for Postgres (DuckDB-style columnar engine)
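If you go the TimescaleDB route, converting the table is a single call. A sketch, assuming the extension is available on your host:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Turn pageviews into a time-partitioned hypertable (run while the
-- table is empty, or pass migrate_data => true)
SELECT create_hypertable('pageviews', 'viewed_at');
```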
Great for:
- Custom dashboards
- Usage reports
- Funnel steps & retention curves
Benefits:
- All in your stack
- Full control of data
- No pixel blockers or ad trackers to deal with
Conclusion: One boring database to rule them all
Postgres isn’t just a database anymore. It’s quietly become a full backend platform.
In 2025, spinning up 15 different services just to build a to-do app feels like overkill. With the right extensions and a little SQL, you can:
- Schedule jobs
- Cache data
- Handle auth
- Power AI search
- Build GraphQL APIs
- Do full-text ranking
- Track metrics
- Sync data in real-time
All from one battle-tested, free, open-source system you probably already use.
TL;DR: what Postgres replaces
| Tool you’d normally add | Postgres feature instead |
| --- | --- |
| MongoDB (flexible schema) | JSONB |
| Cron jobs / Lambda timers | pg_cron |
| Redis (ephemeral cache) | UNLOGGED tables |
| Pinecone (vector search) | pgvector |
| Elasticsearch (full-text search) | TSVECTOR + pg_trgm |
| Auth0 / Firebase Auth | pgcrypto + pgjwt + RLS |
| Google Analytics / Mixpanel | event tables + TimescaleDB |
| GraphQL server | PostGraphile |
| Firebase real-time sync | ElectricSQL |
Featured Postgres Tools & Extensions
1. pgvector (vector search)
2. ElectricSQL (real-time sync)
   - Website: https://electric-sql.com
3. PostGraphile (GraphQL from schema)
4. pg_cron (cron jobs in SQL)
5. TimescaleDB (time-series for analytics)
   - Website: https://www.timescale.com/
6. pgcrypto (password hashing)
7. Row-Level Security (auth/data isolation)
Postgres Hosting Options
Managed Postgres providers mentioned in this article: Neon, Supabase, and Railway.