How to Solve MySQL and Redis Cache Inconsistency Like a Pro



This content originally appeared on DEV Community and was authored by John Still

Redis is often used as a high-performance cache in front of MySQL to reduce latency and database load. But what happens when your MySQL data is updated — and Redis isn’t?

This can lead to data inconsistency, one of the most common and frustrating bugs in distributed systems.

In this article, we’ll explore:

  • What causes cache inconsistency between Redis and MySQL
  • Proven strategies to maintain consistency
  • How to test and validate these strategies locally

🧨 The Problem: Stale Cache After Database Writes

Here’s a typical setup, sketched in code right after this list:

  1. You query Redis for a value.
  2. If it’s a miss, you fall back to MySQL.
  3. The result is written back into Redis for future access.
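
Here is a minimal sketch of that read path in Python, using the redis-py and PyMySQL client libraries. The connection settings, the `users` table, and the `user:<id>` key naming are illustrative assumptions, not part of any specific setup:

```python
import json

import pymysql
import redis

# Local connections: hosts, credentials, and the `users` table
# are assumptions for this sketch.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
db = pymysql.connect(host="localhost", user="root", password="secret",
                     database="app", cursorclass=pymysql.cursors.DictCursor)

def get_user(user_id: int):
    key = f"user:{user_id}"

    # 1. Try Redis first.
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: fall back to MySQL.
    with db.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()

    # 3. Write the result back into Redis for future reads.
    if row is not None:
        r.set(key, json.dumps(row))
    return row
```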

Now suppose the underlying MySQL data changes — a user updates their profile, or a product price is modified. If Redis still holds the old value, your system serves stale or invalid data.

These inconsistencies can lead to bugs, incorrect business logic, and poor user experience.

💡 Want to simulate and test this kind of inconsistency locally?
ServBay lets you instantly spin up MySQL + Redis environments on your local machine, so you can reproduce and debug caching problems in minutes — no Docker setup required.

⚙ Strategy 1: Cache Aside (Lazy Loading)

This is the most common pattern used in caching systems:

  • Read: Application checks Redis → on a miss, falls back to MySQL → writes the result to Redis
  • Write: Update MySQL → delete the Redis key

By deleting the cache instead of updating it directly, you reduce the chance of stale data being served if the write operation is only partially successful.
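
Continuing the same sketch (reusing the `db` and `r` connections and the assumed `users` table from above), the write path might look like this:

```python
def update_user_email(user_id: int, new_email: str) -> None:
    # 1. Write the source of truth first.
    with db.cursor() as cur:
        cur.execute("UPDATE users SET email = %s WHERE id = %s",
                    (new_email, user_id))
    db.commit()

    # 2. Invalidate the cache instead of updating it; the next
    #    get_user() call repopulates it from MySQL.
    r.delete(f"user:{user_id}")
```

Because the key is deleted rather than rewritten, the next read repopulates Redis from MySQL, the source of truth.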

But this strategy still faces challenges under high concurrency…

🕓 Strategy 2: Delayed Double Deletion

Let’s say:

  1. Thread A reads from Redis (miss) and falls back to MySQL, reading the old value.
  2. Thread B writes the new value to MySQL and deletes the Redis key.
  3. Thread A writes the old value it read back into Redis.

This race condition is dangerous. To avoid it, use delayed double deletion:

  1. Update MySQL
  2. Delete Redis
  3. Wait 300–500ms
  4. Delete Redis again

The second deletion clears any stale value that an in-flight read wrote back during the gap, as sketched below.
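
Here is one way to sketch it, again reusing the assumed `db`, `r`, and key naming from the earlier examples. Running the second deletion on a background thread is just one simple option so the caller isn’t blocked for the full delay:

```python
import threading
import time

def update_user_email_double_delete(user_id: int, new_email: str) -> None:
    key = f"user:{user_id}"

    # 1. Update MySQL.
    with db.cursor() as cur:
        cur.execute("UPDATE users SET email = %s WHERE id = %s",
                    (new_email, user_id))
    db.commit()

    # 2. First deletion.
    r.delete(key)

    def second_delete():
        # 3. Wait long enough for in-flight reads (which may have read
        #    the old row before the update) to finish writing to Redis...
        time.sleep(0.4)  # 300–500 ms; tune to your read latency
        # 4. ...then delete again to clear any stale value they re-inserted.
        r.delete(key)

    # Run the second deletion off the request path so the caller isn't blocked.
    threading.Thread(target=second_delete, daemon=True).start()
```

In production you’d typically schedule the second deletion through a task queue or delayed message rather than a raw thread, but the idea is the same.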

🧱 Complementary Techniques for Consistency

Even with delayed deletion, large-scale systems often need additional safeguards:

  • TTL (Time-To-Live): Set an expiration time on cache keys so stale data can only survive for a bounded window.
  • Message Queues: Use a queue (like Kafka or RabbitMQ) to asynchronously delete or refresh cache entries.
  • Version Numbers / Timestamps: Attach version or timestamp metadata to cached values so the freshest version is always served.
  • Cache Penetration & Avalanche Protection: Cache null results and use circuit breakers to protect the backend database from overload (TTLs and null caching are sketched after this list).
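
Two of those safeguards, TTLs and null caching, are easy to fold into the read path from earlier. The TTL values and the `__null__` marker below are illustrative choices, not fixed recommendations:

```python
CACHE_TTL = 60   # seconds; bounds how long a stale value can survive
NULL_TTL = 30    # shorter TTL for "row does not exist" markers

def get_user_with_ttl(user_id: int):
    key = f"user:{user_id}"

    cached = r.get(key)
    if cached is not None:
        # A cached null marker protects MySQL from repeated misses
        # for IDs that don't exist (cache penetration).
        return None if cached == "__null__" else json.loads(cached)

    with db.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()

    if row is None:
        r.set(key, "__null__", ex=NULL_TTL)
        return None

    r.set(key, json.dumps(row), ex=CACHE_TTL)
    return row
```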

🧪 Want to validate TTL, queues, or async cache updates locally?
ServBay makes it easy to test these mechanisms in isolation with Redis and MySQL on your local machine, without needing a heavy infrastructure setup.

✅ Conclusion

There is no “perfect” solution for cache consistency — each business scenario demands trade-offs between latency, consistency, and system complexity.

But by combining patterns like cache-aside, delayed deletion, TTL, and message queues, you can design a more resilient caching system.

Start by testing these strategies locally, simulating real-world concurrency and failure conditions. A small investment in early validation can prevent large-scale bugs in production.

Happy caching! 🔥

