This content originally appeared on DEV Community and was authored by Samarth Gambhir
In modern backend development, microservices are everywhere, and for good reason.
Instead of building a giant monolithic application, we break the system into smaller, focused services, each responsible for doing one thing well. This is the essence of microservice architecture:
- One service = One responsibility
- Each service = Independently deployable
- Services talk to each other = System communication
For example, in an e-commerce platform:
| Service | Responsibility |
| ------------------ | ------------------------------- |
| OrderService | Accepts and creates new orders |
| SMSService | Sends SMS notifications |
| EmailService | Sends order confirmation emails |
| InventoryService | Updates product stock |
Each service:
- Runs independently
- Can be deployed separately
- Owns its own data (DB)
But breaking a monolith into microservices is only the first step.
What follows is much harder:
“How do these small, independent services talk to each other reliably, at scale, and without bringing the system down?”
To understand this, let’s walk through a simple, relatable example:
A user places an order in an e-commerce application.
This one event will help us explore the evolution of communication patterns. When a user places an order, we want to:
- Send an order confirmation SMS
- Send an order confirmation Email
So, how can we make this happen?
The Naive Approach – HTTP Call
The first solution that comes to mind is making an HTTP call from one service to another: when an order is placed, OrderService calls SMSService and EmailService directly.
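In code, that might look something like this rough sketch (the service URLs and the /notify endpoint are made up for illustration):

```python
import requests

def place_order(order: dict) -> None:
    # ... persist the order in OrderService's own database ...

    # Then call each downstream service synchronously over HTTP.
    requests.post("http://sms-service/notify", json=order, timeout=2).raise_for_status()
    requests.post("http://email-service/notify", json=order, timeout=2).raise_for_status()
```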
But what happens if:
- SMS or Email service is down?
- It’s slow to respond?
- The network drops?
Because these calls are synchronous, our OrderService blocks while waiting and fails right along with the downstream service. That's too fragile. A simple message queue handles this much better.
What is a Message Queue?
A message queue follows simple FIFO (First-In, First-Out) logic. It decouples events from consumers — meaning:
- The producer service doesn’t care when or how the message is processed.
- The consumer service pulls messages independently, processes them, and acknowledges success.
This model allows both services to scale, fail, and recover independently.
The Asynchronous Solution – Message Queue
So, we introduce a Message Queue between our services. One caveat up front: most message queues provide at-least-once delivery, which means a message might be processed more than once if retries happen. It's the consumer's job to make message handling idempotent — i.e., safely repeatable.
Now:
- OrderService drops the message onto the queue and moves on.
- The queue stores it until the consuming service picks it up and processes it.
- If processing fails, the message goes back onto the queue for a retry.
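Here's a rough sketch of both sides using RabbitMQ with the pika client; the queue name, message shape, and the send_sms handler are all assumptions for illustration, not a prescription:

```python
import json
import pika

# --- Producer side (OrderService) ---
def publish_order(order: dict) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="order_notifications", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="order_notifications",
        body=json.dumps(order),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    conn.close()  # OrderService moves on; it never waits for the consumer

# --- Consumer side (e.g. SMSService) ---
def send_sms(order: dict) -> None:
    print("sending SMS for order", order["order_id"])  # stand-in for real SMS logic

processed_ids: set = set()  # stand-in for a persistent idempotency store

def on_message(ch, method, properties, body) -> None:
    order = json.loads(body)
    if order["order_id"] in processed_ids:
        # At-least-once delivery means duplicates can arrive; ack and skip them.
        ch.basic_ack(delivery_tag=method.delivery_tag)
        return
    try:
        send_sms(order)
        processed_ids.add(order["order_id"])
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Push the message back onto the queue so it can be retried later.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

def consume() -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="order_notifications", durable=True)
    channel.basic_consume(queue="order_notifications", on_message_callback=on_message)
    channel.start_consuming()
```

The in-memory processed_ids set stands in for a real idempotency store, such as a database table or cache keyed by order ID.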
Limitations
If we continue using a single message queue, each message is delivered to only one consumer: either SMSService or EmailService will receive it, and the other will miss it.
We could try adding multiple separate queues, but that means the producer (OrderService) now needs to know all the consumers — tightly coupling everything again.
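That coupling is easy to spot in code: with plain queues, the producer has to publish to every consumer's queue by name (the queue names below are illustrative):

```python
import json
import pika

def publish_order_everywhere(order: dict) -> None:
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    # OrderService has to list every consumer's queue by name; adding a new
    # consumer (say, InventoryService) means editing and redeploying the producer.
    for queue_name in ("sms_queue", "email_queue"):
        channel.queue_declare(queue=queue_name, durable=True)
        channel.basic_publish(exchange="", routing_key=queue_name, body=json.dumps(order))
    conn.close()
```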
The Solution – Pub/Sub Architecture
So, we switch to a Pub/Sub system. In Pub/Sub systems like Kafka, you can group consumers by concern, and each consumer group gets its own copy of every message. Delivery guarantees, however, depend heavily on the implementation.
Now:
- OrderService just publishes an event.
- All interested services subscribe to it.
- Everyone receives the same message, independently.
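A rough sketch with the kafka-python client could look like the following; the topic name, group IDs, and broker address are assumptions:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# OrderService publishes the event without knowing who is listening.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-placed", {"order_id": 42, "user": "alice"})
producer.flush()

# SMSService subscribes with its own consumer group, so it gets its own copy
# of every event. EmailService runs the same loop in a separate process with
# group_id="email-service".
consumer = KafkaConsumer(
    "order-placed",
    bootstrap_servers="localhost:9092",
    group_id="sms-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print("sending SMS for", message.value)  # stand-in for real SMS logic
```

Each consumer group tracks its own offset, which is what lets every service receive the same event independently.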
Limitations
- Messages can be lost if a service is offline when the event is published.
- There are no per-service retries, since the event can't be republished for a single failed consumer.
- Consumers may race each other if not isolated.
The Solution – Fan-Out Architecture
Fan-Out is a hybrid of Pub/Sub + Queues.
Instead of one message going directly to many services, the broker fans each published event out into a dedicated queue per subscriber, which gives every subscriber a durable place to retry failed events.
Now, each service:
- Gets its own durable message queue.
- Can retry failures independently.
- Is fully isolated from the others.
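As a sketch, here's what that can look like with a RabbitMQ fanout exchange (again via pika, with made-up exchange and queue names):

```python
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()

# One fanout exchange; the producer only ever knows about this name.
channel.exchange_declare(exchange="order-placed", exchange_type="fanout", durable=True)

# Each subscriber binds its own durable queue to the exchange. In practice each
# service declares and binds its own queue; it's shown in one place here for brevity.
for queue_name in ("sms_queue", "email_queue"):
    channel.queue_declare(queue=queue_name, durable=True)
    channel.queue_bind(exchange="order-placed", queue=queue_name)

# Publishing once delivers a copy to every bound queue, where it waits until
# that particular service processes it (or retries after a failure).
channel.basic_publish(
    exchange="order-placed",
    routing_key="",  # ignored by fanout exchanges
    body=json.dumps({"order_id": 42}),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
conn.close()
```

The producer only ever publishes to the exchange, so adding a new subscriber is just a matter of binding one more queue.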
Key Takeaways
- Message Queues are great for decoupling a task from its processor, but not ideal when multiple services need the same message.
- Pub/Sub lets you notify multiple systems from a single event, but does not provide a fallback for failed deliveries.
- Fan-Out adds durability, reliability, and a fallback point to the pub/sub pattern.
A single event — like an order being placed — may seem small. But behind the scenes, it may trigger multiple workflows, handled by independent services, and needing different levels of reliability. Understanding when to use which pattern can make or break the scalability and resilience of your system.