This content originally appeared on DEV Community and was authored by
Schema design, indexing tricks, and performance wars: what devs actually care about when picking their database fighter.
If databases were video game characters, Postgres and MySQL would be those two rivals you always pick just to trash-talk your friends: the tanky but flexible mage (Postgres) vs the fast, reliable brawler (MySQL). Both can sling JSON now, but the way they do it changes how you design schemas, how you write queries, and even how your app scales.

The “event” that sparked this showdown? JSON isn’t just a hack anymore; it’s a first-class citizen in both Postgres and MySQL. Postgres came in hot with jsonb (binary storage + magic operators) and an entire family of advanced indexes. MySQL fired back with its native JSON type in 5.7.8 and leaned on InnoDB’s ACID reliability. Suddenly, every dev building API-driven apps has to pick a side.
And let’s be real: this debate isn’t just about syntax. It’s about whether your queries fly or crawl, whether your indexes save you or betray you, and whether you’ll be up at 3 AM cursing at VACUUM logs or gap locks.
TLDR: This article isn’t about “which database is better.” It’s about how JSON actually behaves in both Postgres and MySQL: data types, operators, indexing, transactions, replication, benchmarks, and real-world decision points. By the end, you’ll know when to grab the Postgres elephant, when to trust the MySQL dolphin, and when to run screaming into a document store.
Data types & JSON capabilities
The first thing you hit when comparing Postgres and MySQL is how they actually store and manipulate JSON. Spoiler: one feels like a toolbox, the other feels like you’re solving puzzles with duct tape.
PostgreSQL: json vs jsonb
Postgres gives you two toys:
- json: just stores the raw JSON text, unmodified. Great if you mostly insert and rarely query.
- jsonb: binary representation, normalized (no duplicate keys, key order doesn’t matter). This is the one you actually want in production because it unlocks operators, indexing, and query speed.
Postgres also has operator sugar that makes working with JSON feel less painful:
-- Grab a JSON field
SELECT data->'user'->>'name'
FROM events
WHERE data->>'status' = 'active';
It also ships with functions like jsonb_set (update fields in place) and full-on jsonpath queries (think SQL meets JSONPath syntax).
Example:
-- Find events with nested array values
SELECT *
FROM events
WHERE data @? '$.items[*] ? (@.price > 100)';
That @? operator is Postgres saying: yeah, I do NoSQL cosplay now.
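For in-place updates, jsonb_set is the workhorse. A minimal sketch, reusing the events table and data column from the examples above:

```sql
-- Flip one field without rewriting the whole document
UPDATE events
SET data = jsonb_set(data, '{status}', '"inactive"')
WHERE id = 42;
```

Note the quoting: jsonb_set takes a text[] path and a jsonb value, so a string scalar has to be written as '"inactive"', not 'inactive'.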
MySQL: native JSON type
MySQL caught up in version 5.7.8 with its native JSON type. Before that, people were just stuffing JSON into TEXT columns and praying. With the new type, MySQL validates JSON on insert and adds a decent bag of functions like JSON_EXTRACT, JSON_ARRAYAGG, and JSON_SET.
Example:
-- Extract a nested field
SELECT JSON_EXTRACT(data, '$.user.name')
FROM events
WHERE JSON_UNQUOTE(JSON_EXTRACT(data, '$.status')) = 'active';
Or updating a JSON doc:
UPDATE events
SET data = JSON_SET(data, '$.status', 'inactive')
WHERE id = 42;
The big difference? In MySQL, you’ll be using functions constantly instead of operators. It feels more verbose, and sometimes slower, but it’s consistent.
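To be fair, modern MySQL (5.7.9+ for ->, 5.7.13+ for ->>) does ship operator-style shorthand that trims some of that verbosity; a quick sketch:

```sql
-- ->> is shorthand for JSON_UNQUOTE(JSON_EXTRACT(...)): extract and unquote in one step
SELECT data->>'$.user.name'
FROM events
WHERE data->>'$.status' = 'active';
```

It still takes a full JSON path string (e.g. '$.status') rather than Postgres-style key chaining, but it reads much closer to the Postgres examples.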
Side-by-side vibes
- Postgres lets you chain operators like a gamer combo move.
- MySQL feels more like playing turn-based JRPGs: powerful, but you’re always typing out the whole spell name.
Dev pain story
Ever migrated a JSON-heavy app from MySQL to Postgres? You’ll suddenly feel like someone handed you a pro keyboard after years of typing on a sticky café laptop. On the flip side, going from Postgres to MySQL is like downgrading from vim to Notepad: everything works, but you’re constantly muttering “why can’t I just…”
Bottom line: both support JSON natively, but Postgres’s jsonb + operators give you more freedom, while MySQL makes you lean on functions and live with some verbosity.

Indexing: when GIN/GiST/BRIN win
Querying JSON without indexes is like running a boss raid naked: technically possible, but you’ll regret it. The way Postgres and MySQL handle indexing on JSON is one of the biggest real-world differences that impacts schema and query design.
PostgreSQL: the index buffet
Postgres doesn’t mess around. You get multiple index families:
- B-tree: the classic default, good for scalars.
- GIN (Generalized Inverted Index): the MVP for JSONB, supports fast lookups on keys and values.
- GiST (Generalized Search Tree): flexible, supports range queries and similarity search.
- BRIN (Block Range Index): lightweight, great for giant tables with sequential data.
- Bloom extension: probabilistic, fast membership checks.
For JSONB specifically, GIN is the one you’ll actually use. Example:
CREATE INDEX idx_events_data_gin
ON events USING GIN (data jsonb_path_ops);
-- Query speeds up dramatically
SELECT *
FROM events
WHERE data @> '{"status": "active"}';
That @> operator checks containment (does the data contain this JSON?), and with GIN it flies. Without it, you’re scanning the whole table like a scrub.
You can also mix in expression indexes:
CREATE INDEX idx_events_status
ON events ((data->>'status'));
Suddenly your JSON field feels like a first-class SQL column.
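The catch: a query has to use the exact same expression to hit that index. A sketch, assuming the idx_events_status index from above:

```sql
-- Matches the indexed expression (data->>'status'), so the planner can use it
SELECT *
FROM events
WHERE data->>'status' = 'active';
```

Write data->'status' = '"active"' instead and the expression no longer matches, so you’re back to a sequential scan.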
MySQL: functional index life
MySQL’s JSON indexing story is… simpler. You can’t just throw a magic index on the JSON column. Instead, you create a generated column from a JSON path, then index that.
Example:
ALTER TABLE events
ADD COLUMN status VARCHAR(20)
GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(data, '$.status'))) STORED,
ADD INDEX idx_status (status);
Now you can query:
SELECT *
FROM events
WHERE status = 'active';
It works, but it’s like carrying around a pocket translator: extra steps, extra storage.
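One upgrade worth knowing: MySQL 8.0.17+ adds multi-valued indexes, which can index JSON arrays directly without a generated column. A hedged sketch (the tags field here is hypothetical, not from the examples above):

```sql
-- Index every element of a JSON array (MySQL 8.0.17+)
CREATE INDEX idx_tags
ON events ((CAST(data->'$.tags' AS CHAR(32) ARRAY)));

-- MEMBER OF lookups can then use the index
SELECT *
FROM events
WHERE 'urgent' MEMBER OF (data->'$.tags');
```

It only covers array-membership queries, but for tag-style data it closes some of the gap with Postgres GIN.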
Query plan difference
Run the same dataset through Postgres and MySQL:
- Postgres with GIN: instant index hit, query plan shows clean index scan.
- MySQL with generated column: works fine, but you feel the overhead.
Postgres just feels more “native” for JSON queries, while MySQL makes you compromise with generated columns.
Dev analogy
Postgres indexing feels like having cheat codes: “Up, Up, Down, Down, Left, Right, GIN.”
MySQL indexing feels like writing down item recipes on a notepad because the game doesn’t tell you how to craft them.
If your app lives and dies on JSON queries, Postgres gives you a buffet of index types. MySQL can keep up, but you’ll do more schema gymnastics.

Transactions & isolation (InnoDB defaults)
Indexes get you speed, but transactions decide whether your app crashes spectacularly at 2 AM or quietly keeps humming. Both Postgres and MySQL have serious transaction game, but they play it differently.
PostgreSQL: MVCC all the way
Postgres uses MVCC (Multi-Version Concurrency Control), which basically means: every transaction sees a snapshot of the database at the time it started. No readers blocking writers, no writers blocking readers. Everyone gets their own version of reality like multiple devs editing the same code branch without merge conflicts… until later.
- Isolation levels: READ COMMITTED (default), REPEATABLE READ, and full SERIALIZABLE.
- Because of MVCC, Postgres doesn’t really use locks for reads. It just keeps old row versions around.
- Downside: you eventually need VACUUM to clean up dead tuples, or else bloat builds up like unmerged Git branches.
Example:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
Both queries run without blocking readers looking at those same rows. Clean.
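The price of those old row versions is dead tuples. You can watch the bloat accumulate (and check whether autovacuum is keeping up) via the built-in statistics views:

```sql
-- Dead tuples waiting for (auto)vacuum, worst offenders first
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

If n_dead_tup keeps climbing and last_autovacuum is ancient, that’s your cue to tune autovacuum before the 3 AM incident does it for you.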
MySQL: InnoDB and gap locks
MySQL’s InnoDB engine is the default for good reason: it’s ACID-compliant and has predictable isolation. Its default isolation level is REPEATABLE READ, which is stricter than Postgres’s default.
- Uses row-level locking, but also gap locks (locking ranges of index values to prevent phantom reads).
- Writers can block readers more often than in Postgres.
- No vacuuming dance, but you can get deadlocks if transactions overlap awkwardly.
Example:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
Looks the same, but under the hood, InnoDB might lock a range of rows if your query isn’t indexed perfectly. Cue head-scratching at 3 AM when you hit ERROR 1213 (40001): Deadlock found.
Practical dev take
- Postgres feels safer for complex, read-heavy systems. You don’t sweat over read locks, but you do sweat about tuning autovacuum.
- MySQL feels predictable for OLTP workloads (think ecommerce carts), but gap locks can ruin your day if you’re sloppy with indexes.
Rhetorical question
Which hurts more: tuning VACUUM parameters in Postgres, or chasing random deadlocks in MySQL? That’s basically asking if you’d rather debug CSS or YAML.
Postgres uses snapshots (MVCC) for smooth concurrency but requires vacuum cleanup. MySQL leans on InnoDB’s row + gap locks: simpler day-to-day, but you need to watch for deadlocks.

Replication, topologies, and HA
Replication is where theory meets production panic. Both Postgres and MySQL can keep your data alive, but their styles differ.
Postgres:
- Streaming replication (fast, read-only replicas)
- Logical replication (replicate tables/subsets, handy for migrations)
- Tools like Patroni or Stolon for failover.
- Backups: pg_dump for logical, pg_basebackup/WAL archiving for physical.
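Logical replication in Postgres boils down to two statements once wal_level = logical is set on the publisher. A minimal sketch (all names and the connection string are placeholders):

```sql
-- On the publisher
CREATE PUBLICATION events_pub FOR TABLE events;

-- On the subscriber
CREATE SUBSCRIPTION events_sub
CONNECTION 'host=primary dbname=app user=replicator'
PUBLICATION events_pub;
```

Because it replicates row changes for specific tables rather than raw WAL, this is the trick people use for near-zero-downtime migrations and cross-version upgrades.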
MySQL:
- Async replication (simple, but lag happens)
- Semi-sync (safer, a bit slower)
- Group replication / InnoDB Cluster for multi-master setups.
- Backups: mysqldump, Percona XtraBackup (what people actually use).
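When async lag bites, the first diagnostic is the replica status output; a sketch:

```sql
-- MySQL 8.0.22+; older versions use SHOW SLAVE STATUS
-- Watch Seconds_Behind_Source and the Replica_IO/SQL_Running flags
SHOW REPLICA STATUS\G
```

If Seconds_Behind_Source keeps climbing during peak traffic, that’s your signal to look at semi-sync or to stop pointing read traffic at a lagging replica.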
Postgres gives you more flexibility with logical replication; MySQL has a longer battle-tested history for “it just works” replication setups.
Benchmarks: OLTP vs analytics
Performance isn’t about “who’s faster,” it’s about faster at what.
OLTP (transactions, frequent reads/writes):
- MySQL + InnoDB usually wins: simple queries, generated columns, predictable response times.
- Postgres can keep up with GIN indexes, but MVCC + autovacuum adds overhead.
Analytics (dashboards, filters, big scans):
- Postgres shines: GIN/GiST indexes + parallel queries handle JSON filtering like a champ.
- MySQL can do it, but JSON functions bog down on large datasets.
MySQL for heavy OLTP with JSON sprinkled in. Postgres for analytics or complex JSON queries.
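Either way, don’t trust vibes; check the plan. In Postgres, for example (assuming the GIN index from earlier):

```sql
-- Healthy: a Bitmap Index Scan on the GIN index. Unhealthy: a Seq Scan over the table.
EXPLAIN ANALYZE
SELECT *
FROM events
WHERE data @> '{"status": "active"}';
```

The MySQL equivalent is EXPLAIN on the generated-column query; if the plan says it’s not using your idx_status index, the benchmark numbers above won’t apply to you.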

Decision matrix: pick Postgres if X, MySQL if Y
Pick Postgres if:
- JSON is central to your schema.
- You need advanced operators (@>, jsonpath).
- Analytics or dashboards are part of the workload.
Pick MySQL if:
- JSON is just metadata on top of mostly relational data.
- You care more about stable OLTP performance.
- Replication and HA matter more than fancy indexes.
Postgres = JSON playground with powerful queries. MySQL = rock-solid OLTP with JSON support that “just works.”
Conclusion: the JSON war isn’t really about JSON
At the end of the day, the Postgres vs MySQL JSON showdown isn’t just about JSON. It’s about philosophy.
- Postgres treats JSON as a first-class citizen: operators, indexing families, jsonpath filters, even the ability to pretend it’s a document store when you feel spicy. It’s a playground for devs who love tinkering, optimizing, and pushing SQL into “wait, it can do that?” territory.
- MySQL treats JSON as a practical add-on: it works, it’s stable, it’s good enough for most OLTP systems. But it’s not the star of the show, and that’s fine. MySQL’s real strength is still InnoDB’s predictable, boring reliability: the thing ops teams secretly thank when traffic spikes on Black Friday.
My hot take
If you’re building something JSON-centric (think API gateways, event-driven apps, or dashboards), go Postgres. You’ll thank yourself when you need GIN indexes and @> operators.
If you’re building high-throughput transactional systems where JSON is just metadata, go MySQL. The ops tooling and replication story will make your life easier.
And if you’re that dev who still insists “just use MongoDB,” well… enjoy explaining eventual consistency to your PM when money goes missing.
Forward look
Both engines are converging. JSON is no longer an afterthought in SQL databases, and the lines between relational and document worlds keep blurring. In 5 years, we’ll probably be laughing about how hard JSON queries used to be, right before debugging some new AI-powered query planner bug.
Helpful resources
Postgres docs
MySQL docs
