What is a Message Queue?

  • RabbitMQ is a message broker: it places a “mailbox” between two services.
  • The sender drops a message in and moves on; the receiver picks it up whenever it’s ready.

Core Concepts

Concept    | What it is
-----------|-----------
Producer   | The sender (Shopping Cart)
Consumer   | The receiver (Warehouse)
Queue      | Where messages are stored (order_queue)
Exchange   | Message router that decides which queue a message goes to. This project uses the default exchange, which routes directly by queue name
Channel    | A lightweight virtual channel multiplexed over one TCP connection. All operations happen on a Channel
Connection | The TCP connection to RabbitMQ (expensive to create, so it should be reused)

Key Properties of a Message Queue

Property           | What it means                                             | How it works
-------------------|-----------------------------------------------------------|-------------
Persistence        | Messages survive crashes and restarts                     | durable=true on the queue + PERSISTENT on the message
Decoupling         | Producer and Consumer are fully independent               | They share only a queue name, nothing else
Buffering          | Absorbs traffic spikes without overwhelming the consumer  | Messages pile up in the queue; the consumer processes at its own pace
Delivery guarantee | Choose how reliably messages are delivered                | autoAck=true (at-most-once) or autoAck=false (at-least-once)
Ordering           | Messages are delivered in FIFO order within one queue     | Parallel consumers can still process out of order

1. Persistence

Both the queue and its messages can be made durable, so they survive a RabbitMQ restart.

  • durable=true → queue survives restart
  • PERSISTENT_TEXT_PLAIN → message written to disk, not just memory

Without persistence, a RabbitMQ restart loses everything in the queue.
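Both flags map directly onto the Java client API. A minimal sketch, assuming the RabbitMQ Java client (com.rabbitmq.client) and a broker running on localhost:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistentPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // durable=true: the queue definition survives a broker restart
            channel.queueDeclare("order_queue", /*durable*/ true,
                    /*exclusive*/ false, /*autoDelete*/ false, null);

            // PERSISTENT_TEXT_PLAIN (deliveryMode=2): message is written to disk
            channel.basicPublish("", "order_queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "order-123".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

Both flags are needed together: a durable queue with non-persistent messages still loses the messages on restart.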

2. Decoupling

Producer and Consumer only need to agree on the queue name and message format. They can be deployed, restarted, or scaled completely independently.

Cart  →  [ order_queue ]  →  Warehouse
  only knows queue name        only knows queue name

3. Buffering

If traffic spikes, messages pile up in the queue instead of crashing the consumer.

1000 requests/sec arrive → [ queue grows ] → Warehouse processes at 50/sec

The queue acts as a shock absorber. This is what the queue length graph in the RabbitMQ management UI shows.

4. Delivery Guarantee

Three modes — you choose based on your needs:

Mode          | How                                | Risk
--------------|------------------------------------|-----
At-most-once  | autoAck=true: deleted on delivery  | May lose messages if the consumer crashes
At-least-once | autoAck=false + manual ACK         | May deliver twice if the ACK is lost
Exactly-once  | Not natively supported by RabbitMQ | Must implement deduplication in business logic

This project uses at-least-once (autoAck=false). If a consumer crashes before ACKing, the message is re-queued and redelivered.
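A sketch of an at-least-once consumer with the Java client, assuming a broker on localhost; process() is a hypothetical handler standing in for the real warehouse logic:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AckingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        channel.queueDeclare("order_queue", true, false, false, null);
        channel.basicQos(10); // at most 10 unacked messages in flight

        boolean autoAck = false; // at-least-once: delete only after explicit ACK
        channel.basicConsume("order_queue", autoAck,
            (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    process(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    channel.basicAck(tag, false); // ACK only after success
                } catch (Exception e) {
                    // requeue=true: message goes back for redelivery
                    channel.basicNack(tag, false, true);
                }
            },
            consumerTag -> { /* consumer cancelled */ });
    }

    static void process(String body) { // hypothetical handler
        System.out.println("processing " + body);
    }
}
```

If the JVM dies between process() and basicAck(), RabbitMQ sees the unacked delivery and requeues the message, which is exactly the at-least-once behavior described above.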

5. Ordering

A single queue delivers messages in FIFO order. However, with multiple parallel consumer threads, processing order is not guaranteed.

Queue:    msg-1 → msg-2 → msg-3
               ↓       ↓       ↓
           Thread-3  Thread-1  Thread-2  ← finish out of order

Fine for this project — each order is independent.


Why use a message queue instead of a direct HTTP call?

Decoupling + async + buffering.
Cart doesn’t have to wait for Warehouse. If Warehouse goes down, messages stay in the queue and are consumed after recovery. Traffic spikes don’t overwhelm Warehouse.

Sample

Scenario: After checkout, Shopping Cart needs to notify Warehouse to prepare the order.

  • Option A: Direct HTTP call
  • Problems:
    • Cart must wait for Warehouse to finish before returning to the client (synchronous blocking)
    • Warehouse goes down → Cart checkout also fails
    • High traffic → Warehouse can’t keep up → Cart times out
Cart  ──POST /warehouse/process──→  Warehouse
        │
        waiting for response...
        │
        ←── 200 OK  (hundreds of ms later)
  • Option B: Message Queue (what this project uses)
Cart  ──publish──→  [ order_queue ]  ←──consume──  Warehouse
        │
        message written → return 200 immediately
        no waiting for Warehouse
  • Benefits:
                    | Direct HTTP                   | Message Queue
--------------------|-------------------------------|--------------
Cart wait time      | Waits for Warehouse to finish | Only waits for the message to be written
Warehouse goes down | Cart checkout fails           | Message stays in the queue, consumed after recovery
High traffic        | Warehouse gets overwhelmed    | Messages queue up; Warehouse consumes at its own pace
Coupling            | Tightly coupled               | Decoupled
Producer (Cart)
    │
    │  basicPublish
    ▼
┌─────────────────────────────────────────┐
│  RabbitMQ                               │
│                                         │
│  Exchange ──routing key──→ order_queue  │
└─────────────────────────────────────────┘
                    │
                    │  basicConsume
                    ▼
            Consumer (Warehouse)

Connection vs Channel vs Channel Pool

  • Connection
    • A TCP connection to RabbitMQ. Expensive to create, so the whole program should share one.
  • Channel
    • A lightweight logical lane multiplexed over one Connection. All actual operations (basicPublish, basicConsume) happen on a Channel, not the Connection directly.
One Connection (TCP)
  ├── Channel 1  ← Thread A uses this
  ├── Channel 2  ← Thread B uses this
  └── Channel 3  ← Thread C uses this
  • Channel Pool
    • A container that pre-creates a fixed number of Channels and lets threads borrow/return them instead of creating new ones every time.
Pool = [ Ch1, Ch2, Ch3 ... Ch20 ]   ← created once at startup

Request arrives:
  pool.borrowObject()  → get Ch5
  basicPublish(...)
  pool.returnObject()  → Ch5 back in pool

100 concurrent requests:
  Ch1–Ch20 all borrowed
  request #21 waits up to 5s for one to be returned
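The borrow/return cycle above can be sketched in plain Java. A real pool would hold com.rabbitmq.client.Channel objects (for example via Apache Commons Pool); here plain Strings stand in for channels so the idea runs without a broker:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal borrow/return sketch of a channel pool. Strings stand in for
// real Channel objects so this runs without a broker.
public class ChannelPoolSketch {
    private final BlockingQueue<String> pool;

    public ChannelPoolSketch(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 1; i <= size; i++) pool.add("Ch" + i); // created once at startup
    }

    // Like pool.borrowObject(): wait up to 5s for a free channel
    public String borrow() throws InterruptedException {
        String ch = pool.poll(5, TimeUnit.SECONDS);
        if (ch == null) throw new IllegalStateException("no channel free after 5s");
        return ch;
    }

    // Like pool.returnObject(): put the channel back for the next request
    public void giveBack(String ch) {
        pool.add(ch);
    }

    public static void main(String[] args) throws Exception {
        ChannelPoolSketch pool = new ChannelPoolSketch(20);
        String ch = pool.borrow();   // request arrives: grab a channel
        // ... basicPublish(...) would happen on ch here ...
        pool.giveBack(ch);           // channel goes back for the next request
        System.out.println("borrowed and returned " + ch);
    }
}
```

When all 20 channels are borrowed, the 21st caller blocks in poll() until a channel is returned or the 5-second timeout fires, matching the behavior described above.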

Interview Q&A

Q: How do you guarantee a message is not lost?

  • Two sides:
    • Publisher Confirms (waitForConfirmsOrDie) on the producer ensures RabbitMQ durably wrote the message.
    • Manual ACK (autoAck=false) on the consumer ensures the message is only deleted after successful processing.
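The producer side can be sketched with the Java client's confirm API, again assuming a broker on localhost:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ConfirmedPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect(); // enable publisher confirms on this channel
            channel.queueDeclare("order_queue", true, false, false, null);

            channel.basicPublish("", "order_queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "order-123".getBytes(StandardCharsets.UTF_8));

            // Blocks until the broker confirms the write;
            // throws if the broker nacks or the 5s timeout expires
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}
```

Only after waitForConfirmsOrDie returns is it safe for Cart to tell the client the order was accepted.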

Q: What is at-least-once delivery? What’s the downside?

  • With Manual ACK, every message is guaranteed to be delivered at least once.
  • The downside: if a consumer processes a message but crashes before ACKing, the message is redelivered and may be processed twice.
  • Consumers should therefore be idempotent (safe to run twice).
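One common way to make a consumer idempotent is to remember which message IDs were already handled, so a redelivery becomes a no-op. A minimal sketch; in production the processed set would live in a database or cache shared by all consumers, and orderId is a hypothetical field of the message:

```java
import java.util.HashSet;
import java.util.Set;

// Idempotency sketch: duplicate deliveries of the same order are skipped.
public class IdempotentHandler {
    private final Set<String> processed = new HashSet<>();
    private int shipments = 0;

    public void handle(String orderId) {
        if (!processed.add(orderId)) return; // already handled: skip duplicate
        shipments++;                         // real work runs exactly once per order
    }

    public int shipments() { return shipments; }

    public static void main(String[] args) {
        IdempotentHandler h = new IdempotentHandler();
        h.handle("order-123");
        h.handle("order-123"); // redelivery after a lost ACK
        System.out.println(h.shipments()); // prints 1
    }
}
```

With this guard in place, at-least-once delivery plus an idempotent consumer gives effectively-once processing.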

Q: Why use a Channel Pool instead of creating a Channel per request?

  • Creating and closing a Channel on every request adds overhead.
  • Under high concurrency this becomes a bottleneck.
  • A pool pre-creates 20 Channels and reuses them — capping resource usage and avoiding repeated setup cost.

Q: What is prefetchCount (basicQos)?

  • Limits how many unacknowledged messages one consumer thread can hold at once.
  • Without it, one fast thread could grab all messages from the queue, leaving other threads idle. basicQos(10) means each thread holds at most 10 unacknowledged messages at a time.
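The real code is a single call, channel.basicQos(10). The dispatch rule it enforces can be simulated without a broker; this pure-Java sketch feeds a queue through a prefetch window and records the largest number of messages ever in flight:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simulation of the prefetch rule: the broker delivers only while the
// consumer holds fewer than `prefetch` unacked messages.
public class PrefetchSketch {
    public static int maxInFlight(int prefetch, int totalMessages) {
        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 1; i <= totalMessages; i++) queue.add(i);

        Deque<Integer> unacked = new ArrayDeque<>();
        int max = 0;
        while (!queue.isEmpty() || !unacked.isEmpty()) {
            // deliver until the prefetch window is full
            while (unacked.size() < prefetch && !queue.isEmpty())
                unacked.add(queue.poll());
            max = Math.max(max, unacked.size());
            unacked.poll(); // consumer processes one message and ACKs it
        }
        return max;
    }

    public static void main(String[] args) {
        // With a prefetch of 10, in-flight messages never exceed 10
        System.out.println(maxInFlight(10, 1000)); // prints 10
    }
}
```

However many messages are queued, the in-flight count is capped by the prefetch value, which is what keeps one thread from hoarding the whole queue.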

Q: Are messages in a queue ordered?

  • Yes, a single queue delivers messages in FIFO order.
  • But with 10 parallel consumer threads, processing order is not guaranteed — two threads can finish out of order.
  • In this project that’s fine since each order is independent.

RabbitMQ vs Other Message Queues

  • RabbitMQ is a classic Message Broker — the most widely used for task queues and service decoupling.
                 | RabbitMQ                | Kafka                            | AWS SQS                    | AWS SNS
-----------------|-------------------------|----------------------------------|----------------------------|--------
Type             | Message Broker          | Distributed Log                  | Managed Queue              | Pub/Sub Notification
Best for         | Task queues, decoupling | High throughput, event streaming | Simple cloud-native queues | Fan-out to multiple subscribers
Message ordering | FIFO within one queue   | FIFO within one partition        | Not guaranteed             | Not guaranteed
Persistence      | Optional (durable flag) | Always persistent                | Always persistent          | No: fire and forget
Consume mode     | Push                    | Pull                             | Pull                       | Push
Complexity       | Medium                  | High                             | Low                        | Low

SNS vs SQS:

  • SNS is pub/sub — one message pushed to many subscribers (email, Lambda, SQS, HTTP).
  • SQS is a queue — one message consumed by one worker.
  • They are often combined: SNS fan-out → multiple SQS queues.
 SNS Topic
   ├── SQS Queue A  ← Service A receives the message
   ├── SQS Queue B  ← Service B receives the same message
   └── Lambda       ← Function triggered by the same message

This project uses RabbitMQ because it is the most beginner-friendly: Publisher Confirms, manual ACK, and Channels are all clearly visible concepts, ideal for learning distributed systems.