What is a Message Queue?
- RabbitMQ is a message broker: it places a “mailbox” between two services.
- The sender drops a message in and moves on. The receiver picks it up whenever it’s ready.
Core Concepts
| Concept | What it is |
|---|---|
| Producer | The sender (Shopping Cart) |
| Consumer | The receiver (Warehouse) |
| Queue | Where messages are stored (order_queue) |
| Exchange | Message router — decides which queue a message goes to. This project uses the default exchange, which routes directly by queue name |
| Channel | A lightweight virtual channel multiplexed over one TCP connection. All operations happen on a Channel |
| Connection | The TCP connection to RabbitMQ (expensive to create — should be reused) |
Key Properties of a Message Queue
| Property | What it means | How it works |
|---|---|---|
| Persistence | Messages survive crashes and restarts | durable=true on queue + PERSISTENT on message |
| Decoupling | Producer and Consumer are fully independent | They only share a queue name, nothing else |
| Buffering | Absorbs traffic spikes without overwhelming the consumer | Messages pile up in queue, consumer processes at its own pace |
| Delivery guarantee | Choose how reliably messages are delivered | autoAck=true (at-most-once) or autoAck=false (at-least-once) |
| Ordering | Messages delivered in FIFO order within one queue | But parallel consumers break processing order |
1. Persistence
Queue and messages can be made durable — they survive a RabbitMQ restart.
- durable=true → queue survives restart
- PERSISTENT_TEXT_PLAIN → message written to disk, not just memory
Without persistence, a RabbitMQ restart loses everything in the queue.
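The effect of the two flags can be sketched with a toy broker in plain Java (the class, fields, and method names here are invented for illustration; in RabbitMQ they correspond to durable=true on queueDeclare and PERSISTENT_TEXT_PLAIN on basicPublish):

```java
import java.util.ArrayList;
import java.util.List;

// Toy broker: "memory" is wiped on restart, "disk" survives.
public class PersistenceDemo {
    public static List<String> memory = new ArrayList<>(); // lost on restart
    public static List<String> disk = new ArrayList<>();   // survives restart

    public static void publish(String msg, boolean persistent) {
        memory.add(msg);
        if (persistent) disk.add(msg); // persistent message is also written to disk
    }

    // Simulate a broker restart: RAM is gone, durable state is reloaded.
    public static void restart() {
        memory = new ArrayList<>(disk);
    }

    public static void main(String[] args) {
        publish("order-1", false); // transient message
        publish("order-2", true);  // persistent message
        restart();
        System.out.println(memory); // prints [order-2]
    }
}
```

Only the persistent message is still there after the restart, which is exactly the behavior the durable/PERSISTENT flags buy you.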
2. Decoupling
Producer and Consumer only need to agree on the queue name and message format. They can be deployed, restarted, or scaled completely independently.
Cart → [ order_queue ] → Warehouse
(each side only knows the queue name)
3. Buffering
If traffic spikes, messages pile up in the queue instead of crashing the consumer.
1000 requests/sec arrive → [ queue grows ] → Warehouse processes at 50/sec
The queue acts as a shock absorber. This is what the queue length graph in RabbitMQ UI shows.
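The shock-absorber behavior can be sketched in plain Java with a BlockingQueue standing in for the broker queue (class and method names are invented; the burst size is arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A burst of publishes lands in the queue instantly; the consumer
// drains it afterwards at its own pace. Nothing is dropped.
public class BufferingDemo {
    public static int runBurst(int burstSize) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(burstSize);

        // Producer: enqueues the whole spike immediately and returns.
        for (int i = 0; i < burstSize; i++) queue.put("order-" + i);

        // Consumer thread: processes whenever it gets around to it.
        final int[] processed = {0};
        Thread consumer = new Thread(() -> {
            while (queue.poll() != null) processed[0]++;
        });
        consumer.start();
        consumer.join();
        return processed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBurst(1000)); // prints 1000: the spike was absorbed
    }
}
```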
4. Delivery Guarantee
Three modes — you choose based on your needs:
| Mode | How | Risk |
|---|---|---|
| At-most-once | autoAck=true — deleted on delivery | May lose messages if consumer crashes |
| At-least-once | autoAck=false + Manual ACK | May deliver twice if ACK is lost |
| Exactly-once | Not supported by RabbitMQ natively | Must implement deduplication in business logic |
This project uses at-least-once (autoAck=false). If a consumer crashes before ACKing, the message is re-queued and redelivered.
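The re-queue-on-crash behavior behind at-least-once can be modeled in a few lines of plain Java (all names here are invented; the real mechanism is autoAck=false plus basicAck on the RabbitMQ channel):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model of manual ACK: a delivered message is parked as "unacked",
// not deleted. It is removed only on ack(); a consumer crash puts it back.
public class ManualAckDemo {
    private final Deque<String> ready = new ArrayDeque<>();
    private final Map<Integer, String> unacked = new HashMap<>();
    private int nextTag = 1;

    public void publish(String msg) { ready.addLast(msg); }

    // Deliver a message and hand back a delivery tag.
    public int deliver() {
        String msg = ready.pollFirst();
        if (msg == null) return -1;   // nothing to deliver
        int tag = nextTag++;
        unacked.put(tag, msg);        // parked, awaiting ACK
        return tag;
    }

    // basicAck equivalent: only now is the message really gone.
    public void ack(int tag) { unacked.remove(tag); }

    // Consumer died before ACKing: broker re-queues everything unacked.
    public void consumerCrashed() {
        for (String msg : unacked.values()) ready.addFirst(msg);
        unacked.clear();
    }

    public int queueSize() { return ready.size(); }

    public static void main(String[] args) {
        ManualAckDemo q = new ManualAckDemo();
        q.publish("order-1");
        q.deliver();                       // delivered, but not yet ACKed
        q.consumerCrashed();               // crash before basicAck
        System.out.println(q.queueSize()); // prints 1: re-queued for redelivery
    }
}
```

With autoAck=true the message would have been deleted at deliver() time, and the crash would have lost it — that is the at-most-once trade-off.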
5. Ordering
A single queue delivers messages in FIFO order. However, with multiple parallel consumer threads, processing order is not guaranteed.
Queue:   msg-1  →  msg-2  →  msg-3
           ↓         ↓         ↓
        Thread-3  Thread-1  Thread-2   ← finish out of order
Fine for this project — each order is independent.
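The gap between delivery order and processing order can be shown with three worker threads of different speeds (a sketch with artificial sleep times; the finish order is up to the scheduler, which is the whole point):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.CopyOnWriteArrayList;

// The queue hands out msg-1..msg-3 strictly FIFO, but workers with
// different processing times may complete in a different order.
public class OrderingDemo {
    public static List<String> run() throws InterruptedException {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("msg-1"); queue.add("msg-2"); queue.add("msg-3");

        long[] workMillis = {60, 10, 30};   // msg-1 happens to be slowest
        List<String> finished = new CopyOnWriteArrayList<>();
        List<Thread> workers = new ArrayList<>();

        int i = 0;
        while (!queue.isEmpty()) {
            String msg = queue.poll();       // FIFO delivery...
            long work = workMillis[i++];
            Thread t = new Thread(() -> {
                try { Thread.sleep(work); } catch (InterruptedException e) { }
                finished.add(msg);           // ...non-FIFO completion
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) t.join();
        return finished;                     // likely [msg-2, msg-3, msg-1]
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

All three messages are always processed; only the completion order varies — fine when, as here, each order is independent.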
Why use a message queue instead of a direct HTTP call?
Decoupling + async + buffering.
Cart doesn’t have to wait for Warehouse. If Warehouse goes down, messages stay in the queue and are consumed after recovery. Traffic spikes don’t overwhelm Warehouse.
Example
Scenario: After checkout, Shopping Cart needs to notify Warehouse to prepare the order.
- Option A: Direct HTTP call
- Problems:
- Cart must wait for Warehouse to finish before returning to the client (synchronous blocking)
- Warehouse goes down → Cart checkout also fails
- High traffic → Warehouse can’t keep up → Cart times out
Cart ──POST /warehouse/process──→ Warehouse
  │
  │  waiting for response...
  │
  ←── 200 OK (hundreds of ms later)
- Option B: Message Queue (what this project uses)
Cart ──publish──→ [ order_queue ] ←──consume── Warehouse
  │
  └─ message written → return 200 immediately, no waiting for Warehouse
- Benefits:
| | Direct HTTP | Message Queue |
|---|---|---|
| Cart wait time | Waits for Warehouse to finish | Only waits for message to be written |
| Warehouse goes down | Cart checkout fails | Message stays in queue, consumed after recovery |
| High traffic | Warehouse gets overwhelmed | Messages queue up, Warehouse consumes at its own pace |
| Coupling | Tightly coupled | Decoupled |
Producer (Cart)
        │
        │ basicPublish
        ▼
┌────────────────────────────────────┐
│ RabbitMQ                           │
│                                    │
│  Exchange ──routing key──→ Queue   │
│                     (order_queue)  │
└────────────────────────────────────┘
        │
        │ basicConsume
        ▼
Consumer (Warehouse)
Connection vs Channel vs Channel Pool
- Connection
  - A TCP connection to RabbitMQ. Expensive to create, so the whole program should share one.
- Channel
  - A lightweight logical lane multiplexed over one Connection. All actual operations (basicPublish, basicConsume) happen on a Channel, not the Connection directly.
One Connection (TCP)
├── Channel 1 ← Thread A uses this
├── Channel 2 ← Thread B uses this
└── Channel 3 ← Thread C uses this
- Channel Pool
  - A container that pre-creates a fixed number of Channels and lets threads borrow/return them instead of creating new ones every time.
Pool = [ Ch1, Ch2, Ch3 ... Ch20 ] ← created once at startup
Request arrives:
  pool.borrowObject()    → get Ch5
  basicPublish(...)
  pool.returnObject(Ch5) → Ch5 back in pool
100 concurrent requests:
Ch1–Ch20 all borrowed
request #21 waits up to 5s for one to be returned
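The borrow/return pattern above can be sketched with a BlockingQueue in plain Java (FakeChannel and the method names are invented; a real project would typically use a pooling library such as Apache Commons Pool, whose borrowObject/returnObject calls the illustration mimics):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal channel pool: all channels are created once at startup;
// callers borrow one, use it, and give it back.
public class ChannelPool {
    public static class FakeChannel {           // stand-in for a RabbitMQ Channel
        final int id;
        FakeChannel(int id) { this.id = id; }
    }

    private final BlockingQueue<FakeChannel> idle;

    public ChannelPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 1; i <= size; i++) idle.add(new FakeChannel(i));
    }

    // Borrow: wait up to timeoutMillis for a free channel, else null.
    public FakeChannel borrow(long timeoutMillis) throws InterruptedException {
        return idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public void giveBack(FakeChannel ch) {
        idle.offer(ch);
    }

    public static void main(String[] args) throws InterruptedException {
        ChannelPool pool = new ChannelPool(2);
        FakeChannel a = pool.borrow(100);
        FakeChannel b = pool.borrow(100);
        FakeChannel c = pool.borrow(100);             // pool exhausted: times out
        System.out.println(a != null);                // prints true
        System.out.println(c == null);                // prints true
        pool.giveBack(a);                             // a slot frees up
        System.out.println(pool.borrow(100) != null); // prints true
    }
}
```

The timeout on borrow is what makes "request #21 waits up to 5s" happen: an exhausted pool blocks new borrowers instead of opening more channels.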
Interview Q&A
Q: How do you guarantee a message is not lost?
- Two sides:
  - Publisher Confirms (waitForConfirmsOrDie) on the producer ensures RabbitMQ durably wrote the message.
  - Manual ACK (autoAck=false) on the consumer ensures the message is only deleted after successful processing.
Q: What is at-least-once delivery? What’s the downside?
- With Manual ACK, every message is guaranteed to be delivered at least once.
- The downside: if a consumer processes a message but crashes before ACKing, the message is redelivered and potentially processed twice.
- Consumers should be idempotent (safe to run twice).
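A common way to make a consumer idempotent is deduplication by message id; here is a minimal sketch (class, method, and id names are invented, and a real system would persist the seen-id set rather than keep it in memory):

```java
import java.util.HashSet;
import java.util.Set;

// Idempotent consumer: a redelivered message (same id) is detected via
// a processed-id set and skipped, so the side effect happens exactly once.
public class IdempotentConsumer {
    private final Set<String> processedIds = new HashSet<>();
    private int ordersShipped = 0;

    // Returns true if the message was actually processed this time.
    public boolean handle(String messageId) {
        if (!processedIds.add(messageId)) {
            return false;            // duplicate delivery: safely ignored
        }
        ordersShipped++;             // the real side effect, done once
        return true;
    }

    public int shipped() { return ordersShipped; }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        c.handle("order-42");
        c.handle("order-42");            // redelivery after a lost ACK
        System.out.println(c.shipped()); // prints 1
    }
}
```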
Q: Why use a Channel Pool instead of creating a Channel per request?
- Creating and closing a Channel on every request adds overhead.
- Under high concurrency this becomes a bottleneck.
- A pool pre-creates 20 Channels and reuses them — capping resource usage and avoiding repeated setup cost.
Q: What is prefetchCount (basicQos)?
- Limits how many unacknowledged messages one consumer thread can hold at once.
- Without it, one fast thread could grab all messages from the queue, leaving other threads idle.
- basicQos(10) means each thread gets at most 10 messages at a time.
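The effect of the prefetch limit can be modeled with a toy broker in plain Java (all names here are invented; in RabbitMQ the limit is set with channel.basicQos(n)):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of basicQos(prefetch): the broker stops pushing to a consumer
// once it holds `prefetch` unacknowledged messages.
public class PrefetchDemo {
    private final Deque<String> ready = new ArrayDeque<>();
    private int unacked = 0;
    private final int prefetch;

    public PrefetchDemo(int prefetch) { this.prefetch = prefetch; }

    public void publish(String msg) { ready.addLast(msg); }

    // Deliver only while the consumer is under its prefetch limit.
    public String deliver() {
        if (unacked >= prefetch || ready.isEmpty()) return null;
        unacked++;
        return ready.pollFirst();
    }

    public void ack() { unacked--; }   // frees a prefetch slot

    public static void main(String[] args) {
        PrefetchDemo broker = new PrefetchDemo(2);   // like basicQos(2)
        for (int i = 1; i <= 5; i++) broker.publish("msg-" + i);
        System.out.println(broker.deliver()); // prints msg-1
        System.out.println(broker.deliver()); // prints msg-2
        System.out.println(broker.deliver()); // prints null: limit of 2 hit
        broker.ack();
        System.out.println(broker.deliver()); // prints msg-3
    }
}
```

Without the limit, a fast consumer would be handed all five messages at once, starving the other threads.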
Q: Are messages in a queue ordered?
- Yes, a single queue delivers messages in FIFO order.
- But with 10 parallel consumer threads, processing order is not guaranteed — two threads can finish out of order.
- In this project that’s fine since each order is independent.
RabbitMQ vs Other Message Queues
- RabbitMQ is a classic Message Broker — the most widely used for task queues and service decoupling.
| | RabbitMQ | Kafka | AWS SQS | AWS SNS |
|---|---|---|---|---|
| Type | Message Broker | Distributed Log | Managed Queue | Pub/Sub Notification |
| Best for | Task queues, decoupling | High throughput, event streaming | Simple cloud-native queues | Fan-out to multiple subscribers |
| Message ordering | FIFO within one queue | FIFO within one partition | Not guaranteed | Not guaranteed |
| Persistence | Optional (durable flag) | Always persistent | Always persistent | No — fire and forget |
| Consume mode | Push | Pull | Pull | Push |
| Complexity | Medium | High | Low | Low |
SNS vs SQS:
- SNS is pub/sub — one message pushed to many subscribers (email, Lambda, SQS, HTTP).
- SQS is a queue — one message consumed by one worker.
- They are often combined: SNS fan-out → multiple SQS queues.
SNS Topic
├── SQS Queue A ← Service A receives the message
├── SQS Queue B ← Service B receives the same message
└── Lambda ← Function triggered by the same message
This project uses RabbitMQ because it is the most beginner-friendly: Publisher Confirms, Manual ACK, and Channels are all clearly visible concepts, ideal for learning distributed systems.