
Pub/Sub Patterns


The publish/subscribe (pub/sub) pattern is a messaging paradigm where message senders (publishers) do not send messages directly to specific receivers (subscribers). Instead, messages are published to a channel (topic), and all subscribers to that topic receive a copy. This decouples producers from consumers, enabling flexible, scalable event-driven architectures.


Point-to-Point vs Publish/Subscribe

Point-to-Point (Queue)

                    ┌──────────┐
Producer ──MSG 1──▶ │          │ ──MSG 1──▶ Consumer A
Producer ──MSG 2──▶ │  Queue   │ ──MSG 2──▶ Consumer B
Producer ──MSG 3──▶ │          │ ──MSG 3──▶ Consumer A
                    └──────────┘

Each message is delivered to exactly ONE consumer.
Consumers compete for messages.

Use cases: Work distribution, task processing, load balancing across workers
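A minimal sketch of this competing-consumers behavior using Python's standard `queue.Queue` (consumer names and the sentinel-based shutdown are illustrative, not part of any broker API):

```python
import queue
import threading

q: "queue.Queue[str | None]" = queue.Queue()
delivered: dict[str, list[str]] = {"C1": [], "C2": [], "C3": []}
lock = threading.Lock()

def consumer(name: str) -> None:
    # Consumers compete for messages from the shared queue.
    while True:
        msg = q.get()
        if msg is None:  # sentinel: shut down
            q.task_done()
            return
        with lock:
            delivered[name].append(msg)
        q.task_done()

threads = [threading.Thread(target=consumer, args=(n,)) for n in delivered]
for t in threads:
    t.start()

for i in range(9):   # producer
    q.put(f"msg-{i}")
for _ in threads:    # one shutdown sentinel per consumer
    q.put(None)
for t in threads:
    t.join()

total = sum(len(v) for v in delivered.values())
print(total)  # 9 — every message went to exactly one consumer
```

Which consumer receives which message depends on scheduling, but no message is ever delivered twice.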

Publish/Subscribe (Topic)

                     ┌──────────┐ ──MSG 1──▶ Subscriber A
Publisher ──MSG 1──▶ │          │ ──MSG 1──▶ Subscriber B
                     │  Topic   │ ──MSG 1──▶ Subscriber C
Publisher ──MSG 2──▶ │          │ ──MSG 2──▶ Subscriber A
                     └──────────┘ ──MSG 2──▶ Subscriber B
                                  ──MSG 2──▶ Subscriber C

Each message is delivered to ALL subscribers.
Subscribers receive independent copies.

Use cases: Event notifications, real-time updates, broadcasting state changes

Hybrid: Consumer Groups

Modern systems like Kafka combine both patterns using consumer groups:

                   ┌──────────┐
                   │          │ ──▶ Consumer A1 ┐ Consumer
Publisher ──MSG──▶ │  Topic   │ ──▶ Consumer A2 ┘ Group A
                   │          │
                   │          │ ──▶ Consumer B1 ┐ Consumer
                   │          │ ──▶ Consumer B2 ┘ Group B
                   └──────────┘

Within a consumer group: point-to-point (one consumer gets each message)
Across consumer groups:  pub/sub (each group gets every message)
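A toy model of these semantics (illustrative only — real brokers like Kafka assign whole partitions to group members rather than round-robining individual messages):

```python
from itertools import cycle

class Topic:
    """Every group sees every message; within a group,
    messages are round-robined across members."""
    def __init__(self):
        self.groups = {}  # group name -> (members, round-robin iterator)

    def add_group(self, name, members):
        self.groups[name] = (members, cycle(members))

    def publish(self, msg):
        # One delivery per group; the group's iterator picks the member.
        return [(name, next(rr), msg) for name, (_, rr) in self.groups.items()]

topic = Topic()
topic.add_group("A", ["A1", "A2"])
topic.add_group("B", ["B1", "B2"])

deliveries = [topic.publish(f"m{i}") for i in range(2)]
print(deliveries[0])  # [('A', 'A1', 'm0'), ('B', 'B1', 'm0')]
print(deliveries[1])  # [('A', 'A2', 'm1'), ('B', 'B2', 'm1')]
```

Each message reaches both groups (pub/sub across groups) but only one member per group (point-to-point within a group).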

Topic-Based Routing

Hierarchical Topics

Topics can be organized hierarchically using dots or slashes to enable fine-grained subscription:

Topic Hierarchy:
orders.created
orders.updated
orders.cancelled
orders.shipped
payments.processed
payments.failed
payments.refunded
inventory.low
inventory.restocked
Subscription Patterns:
"orders.*" → All order events
"payments.*" → All payment events
"orders.created" → Only new orders
"*.failed" → All failure events (if supported)

Content-Based Routing

Messages are routed based on their content (headers or body), not just the topic:

Message: {
  "type": "order",
  "region": "US",
  "priority": "high",
  "amount": 500
}

Routing Rules:
  Rule 1: region = "US" AND priority = "high" → US-Priority Queue
  Rule 2: region = "EU"                       → EU Queue
  Rule 3: amount > 1000                       → Large Order Queue
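These rules can be expressed directly as predicates; here is a small sketch (the queue names come from the example above and are purely illustrative):

```python
# Each rule pairs a destination queue with a predicate over the message.
RULES = [
    ("US-Priority Queue", lambda m: m.get("region") == "US" and m.get("priority") == "high"),
    ("EU Queue",          lambda m: m.get("region") == "EU"),
    ("Large Order Queue", lambda m: m.get("amount", 0) > 1000),
]

def route(message: dict) -> list[str]:
    """Return every queue whose rule matches the message."""
    return [queue for queue, rule in RULES if rule(message)]

msg = {"type": "order", "region": "US", "priority": "high", "amount": 500}
print(route(msg))  # ['US-Priority Queue']
```

A message may match several rules and be delivered to multiple queues, as a 2000-unit EU order would be here.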
The following example implements pattern-based pub/sub on top of Redis, whose psubscribe command supports glob patterns natively:

import redis
import json
import time

class PubSubBroker:
    """Pub/Sub implementation using Redis."""

    def __init__(self, host='localhost', port=6379):
        self.redis = redis.Redis(
            host=host, port=port,
            decode_responses=True
        )
        self.pubsub = self.redis.pubsub()

    def publish(self, topic: str, message: dict):
        """Publish a message to a topic."""
        envelope = {
            'id': str(time.time_ns()),
            'topic': topic,
            'timestamp': time.time(),
            'body': message
        }
        self.redis.publish(topic, json.dumps(envelope))
        return envelope['id']

    def subscribe(self, pattern: str, handler):
        """
        Subscribe to topics matching a pattern.
        Supports glob patterns like 'orders.*'.
        """
        def message_handler(message):
            if message['type'] == 'pmessage':
                data = json.loads(message['data'])
                handler(data)
        self.pubsub.psubscribe(**{pattern: message_handler})

    def start_listening(self):
        """Start listening for messages in a background thread."""
        return self.pubsub.run_in_thread(sleep_time=0.01)

# Usage
broker = PubSubBroker()

# Subscriber 1: All order events
def handle_order(message):
    print(f"[Orders] {message['topic']}: {message['body']}")

broker.subscribe('orders.*', handle_order)

# Subscriber 2: Only failed events
def handle_failures(message):
    print(f"[Failures] {message['topic']}: {message['body']}")

broker.subscribe('*.failed', handle_failures)

# Start listening
listener = broker.start_listening()

# Publisher
time.sleep(0.1)  # Wait for subscribers to connect
broker.publish('orders.created', {
    'order_id': 'ORD-001', 'amount': 99.99
})
broker.publish('orders.failed', {
    'order_id': 'ORD-002', 'reason': 'Payment declined'
})
broker.publish('payments.failed', {
    'payment_id': 'PAY-001', 'reason': 'Card expired'
})

Fan-Out and Fan-In Patterns

Fan-Out

One message triggers multiple independent processing paths:

                              ┌──▶ Email Service
Order Created ──▶ [Topic] ────┼──▶ Inventory Service
                              ├──▶ Analytics Service
                              └──▶ Shipping Service

One event, four independent consumers.
Each processes the event differently.

Use cases:

  • Order processing (notify multiple services)
  • User registration (welcome email, setup defaults, analytics)
  • Content publishing (CDN invalidation, search index, notifications)
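The essence of fan-out is that each subscriber receives its own copy of the event. A minimal in-memory sketch (service names are illustrative):

```python
class FanOutTopic:
    """Each published event is handed to every registered handler."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, event: dict):
        for handler in self.handlers:
            handler(dict(event))  # independent copy per consumer

calls = []
topic = FanOutTopic()
for svc in ("email", "inventory", "analytics", "shipping"):
    # Bind svc via default argument so each closure keeps its own name.
    topic.subscribe(lambda e, svc=svc: calls.append((svc, e["order_id"])))

topic.publish({"order_id": "ORD-001"})
print(len(calls))  # 4: one delivery per service
```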

Fan-In

Multiple message sources converge into a single processing pipeline:

Sensor A ──▶ ┐
Sensor B ──▶ ├──▶ [Aggregator Queue] ──▶ Processing Service
Sensor C ──▶ ┘
Multiple producers, single consumer that aggregates.

Use cases:

  • Log aggregation from multiple services
  • IoT sensor data collection
  • Merging results from parallel processing

Fan-Out/Fan-In Combined

Scatter-gather pattern: distribute work, then aggregate results.

Request ──▶ [Scatter] ──▶ Worker 1 ──▶ ┐
                      ──▶ Worker 2 ──▶ ├──▶ [Gather] ──▶ Response
                      ──▶ Worker 3 ──▶ ┘

Example: Search across multiple indexes

Query "shoes" ──▶ Product Index ──▶ ┐
              ──▶ Review Index  ──▶ ├──▶ Merge Results
              ──▶ Price Index   ──▶ ┘
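Scatter-gather maps naturally onto concurrent futures. A sketch using thread workers (the index lookups are stand-ins for what would be network calls in a real system):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical index lookups; each would be a remote query in practice.
def search_products(q): return [f"product:{q}"]
def search_reviews(q):  return [f"review:{q}"]
def search_prices(q):   return [f"price:{q}"]

def scatter_gather(query: str) -> list[str]:
    workers = [search_products, search_reviews, search_prices]
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        futures = [pool.submit(w, query) for w in workers]  # scatter
        results = []
        for f in futures:                                   # gather, in submit order
            results.extend(f.result())
    return results

print(scatter_gather("shoes"))  # ['product:shoes', 'review:shoes', 'price:shoes']
```

Iterating the futures in submission order keeps the merged result deterministic even though the lookups run concurrently.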

Message Filtering

Subscribers can filter messages to receive only those they are interested in, reducing unnecessary processing.

Broker-Side Filtering

The broker evaluates filters and only delivers matching messages:

Publisher sends to topic "orders":
{ type: "electronics", amount: 500, region: "US" }
Subscriber A filter: type = "electronics" → RECEIVES
Subscriber B filter: region = "EU" → DOES NOT RECEIVE
Subscriber C filter: amount > 100 → RECEIVES

Advantages: Reduces network traffic and consumer processing.
Disadvantages: More complex broker; filter evaluation adds latency.

Client-Side Filtering

The broker delivers all messages; consumers filter locally:

All messages delivered to all subscribers.
Each subscriber discards messages it does not need.
Pros: Simple broker
Cons: Wasted bandwidth and processing

AWS SNS Message Filtering

An SNS subscription filter policy matches on message attributes: the keys are ANDed together, and the values within each array are ORed. This policy delivers only electronics or clothing orders of at least 100, from the US or Canada, that carry a priority attribute:

{
  "order_type": ["electronics", "clothing"],
  "amount": [{"numeric": [">=", 100]}],
  "region": ["US", "CA"],
  "priority": [{"exists": true}]
}

Dead Letter Queues (DLQ)

A dead letter queue captures messages that cannot be processed successfully after a configured number of retry attempts. Instead of losing the message or retrying forever, it is moved to a DLQ for investigation and reprocessing.

Normal Flow:
Producer ──▶ Main Queue ──▶ Consumer ──▶ SUCCESS

Failure Flow:
Producer ──▶ Main Queue ──▶ Consumer ──▶ FAIL (attempt 1)
                        ──▶ Consumer ──▶ FAIL (attempt 2)
                        ──▶ Consumer ──▶ FAIL (attempt 3)
                        ──▶ Dead Letter Queue

An admin then investigates and either fixes and replays the message or discards it.

Why Messages End Up in DLQ

| Reason                      | Example                                              |
|-----------------------------|------------------------------------------------------|
| Malformed message           | Invalid JSON, missing required fields                |
| Processing error            | Business logic exception, null pointer               |
| External dependency failure | Database down, API unreachable (after retries)       |
| Schema mismatch             | Producer sends v2 format, consumer expects v1        |
| Timeout                     | Processing takes longer than the visibility timeout  |
| Poison message              | A specific message that always causes a crash        |
A minimal in-memory implementation of these retry-then-DLQ semantics:

import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)

@dataclass
class Message:
    id: str
    body: dict
    attempt: int = 0
    max_attempts: int = 3

class QueueWithDLQ:
    def __init__(self, name: str, max_retries: int = 3):
        self.name = name
        self.max_retries = max_retries
        self.main_queue: list[Message] = []
        self.dead_letter_queue: list[Message] = []
        self.processing: list[Message] = []

    def enqueue(self, message_id: str, body: dict):
        msg = Message(
            id=message_id, body=body,
            max_attempts=self.max_retries
        )
        self.main_queue.append(msg)

    def dequeue(self) -> Message | None:
        if not self.main_queue:
            return None
        msg = self.main_queue.pop(0)
        msg.attempt += 1
        self.processing.append(msg)
        return msg

    def acknowledge(self, message: Message):
        """Mark message as successfully processed."""
        self.processing.remove(message)
        logger.info(f"Message {message.id} acknowledged")

    def reject(self, message: Message):
        """Reject a message: retry it, or move it to the DLQ."""
        self.processing.remove(message)
        if message.attempt >= message.max_attempts:
            # Max retries exceeded → DLQ
            self.dead_letter_queue.append(message)
            logger.warning(
                f"Message {message.id} moved to DLQ "
                f"after {message.attempt} attempts"
            )
        else:
            # Re-enqueue for retry
            self.main_queue.append(message)
            logger.info(
                f"Message {message.id} requeued "
                f"(attempt {message.attempt})"
            )

    def replay_dlq(self):
        """Move all DLQ messages back to the main queue."""
        count = len(self.dead_letter_queue)
        while self.dead_letter_queue:
            msg = self.dead_letter_queue.pop(0)
            msg.attempt = 0  # Reset attempts
            self.main_queue.append(msg)
        logger.info(f"Replayed {count} messages from DLQ")

    @property
    def dlq_count(self) -> int:
        return len(self.dead_letter_queue)

# Usage
queue = QueueWithDLQ('orders', max_retries=3)

# Enqueue messages
queue.enqueue('msg-1', {'order': 'ORD-001'})
queue.enqueue('msg-2', {'order': 'INVALID'})

# Process messages
def process_message(msg: Message) -> bool:
    if msg.body.get('order') == 'INVALID':
        raise ValueError("Invalid order format")
    print(f"Processed: {msg.body}")
    return True

while msg := queue.dequeue():
    try:
        process_message(msg)
        queue.acknowledge(msg)
    except Exception as e:
        print(f"Failed: {e}")
        queue.reject(msg)

print(f"DLQ count: {queue.dlq_count}")

Backpressure Handling

Backpressure occurs when a producer generates messages faster than consumers can process them. Without backpressure handling, queues grow unbounded, memory is exhausted, and the system crashes.

Producer rate: 1000 msg/s
Consumer rate:  200 msg/s

Without backpressure:
  Queue depth grows by 800 msg/s
  After 1 hour: 2,880,000 unprocessed messages
  Eventually: out of memory, system crash

With backpressure:
  Queue depth reaches a threshold
  System takes corrective action

Backpressure Strategies

| Strategy        | How It Works                               | Trade-off                             |
|-----------------|--------------------------------------------|---------------------------------------|
| Drop oldest     | Discard oldest messages when queue is full | Data loss; good for real-time metrics |
| Drop newest     | Reject new messages when queue is full     | Producer must handle rejections       |
| Block producer  | Producer blocks until space is available   | Can cascade upstream                  |
| Rate limiting   | Limit producer's message rate              | Producer must adapt                   |
| Scale consumers | Auto-scale consumers based on queue depth  | Costs money; has scaling lag          |
| Sampling        | Process every Nth message when overloaded  | Only if approximate results are acceptable |
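The two drop strategies can be sketched on a bounded in-memory queue (a simplification; real brokers apply these policies per-partition or per-channel):

```python
from collections import deque

class BoundedQueue:
    """Drop-oldest vs drop-newest backpressure on a fixed-capacity queue."""
    def __init__(self, maxsize: int, strategy: str = "drop_oldest"):
        self.q: deque = deque()
        self.maxsize = maxsize
        self.strategy = strategy
        self.dropped = 0

    def offer(self, item) -> bool:
        if len(self.q) < self.maxsize:
            self.q.append(item)
            return True
        self.dropped += 1
        if self.strategy == "drop_oldest":
            self.q.popleft()  # evict the oldest, keep the newest
            self.q.append(item)
            return True
        return False          # drop_newest: reject the incoming item

q = BoundedQueue(maxsize=3, strategy="drop_oldest")
for i in range(5):
    q.offer(i)
print(list(q.q), q.dropped)  # [2, 3, 4] 2
```

With `drop_newest`, the same five offers would instead leave `[0, 1, 2]` in the queue and return False to the producer twice, pushing the handling burden upstream.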

Auto-Scaling Consumers Based on Queue Depth

Queue Depth     Consumer Instances
0–100           2 (minimum)
100–500         5
500–1000        10
1000–5000       20
5000+           50 (maximum)

Scaling Policy:
IF queue_depth > 500 for 5 minutes  → scale up
IF queue_depth < 100 for 15 minutes → scale down
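The depth-to-instances mapping can be sketched as a simple tier lookup (the 5- and 15-minute debounce windows from the policy are omitted here; boundary values are treated as belonging to the higher tier):

```python
def desired_consumers(queue_depth: int) -> int:
    """Map queue depth to a target instance count, per the table above."""
    tiers = [(100, 2), (500, 5), (1000, 10), (5000, 20)]
    for limit, instances in tiers:
        if queue_depth < limit:
            return instances
    return 50  # maximum

print(desired_consumers(50), desired_consumers(750), desired_consumers(9000))  # 2 10 50
```

In production this function would feed an autoscaler only after the depth has stayed past a threshold for the configured window, to avoid flapping.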

Message Ordering

Maintaining message order is one of the biggest challenges in pub/sub systems:

| Ordering Guarantee     | Description                                               | Performance        |
|------------------------|-----------------------------------------------------------|--------------------|
| No ordering            | Messages may arrive in any order                          | Highest throughput |
| Per-partition ordering | Messages with the same key are ordered within a partition | Good throughput    |
| Total ordering         | All messages globally ordered                             | Lowest throughput  |

Per-partition ordering (Kafka approach):

Producer sends: M1(user=A), M2(user=B), M3(user=A), M4(user=B)

Partition 0 (user A): M1, M3 → ordered within partition
Partition 1 (user B): M2, M4 → ordered within partition

Consumer of partition 0 sees: M1 before M3 (guaranteed)
Consumer of partition 1 sees: M2 before M4 (guaranteed)
But: no guarantee about ordering between M1 and M2
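The key-to-partition assignment is just a stable hash of the message key. A sketch (Kafka's default partitioner uses murmur2; the byte-sum hash here is purely for illustration):

```python
NUM_PARTITIONS = 2

def partition_for(key: str) -> int:
    # Stable hash: the same key always lands on the same partition.
    return sum(key.encode()) % NUM_PARTITIONS

partitions: dict[int, list[str]] = {0: [], 1: []}
for msg, key in [("M1", "A"), ("M2", "B"), ("M3", "A"), ("M4", "B")]:
    partitions[partition_for(key)].append(msg)

# Each key's messages keep their send order within its partition,
# but no order is defined between messages in different partitions.
print(partitions[partition_for("A")])  # ['M1', 'M3']
print(partitions[partition_for("B")])  # ['M2', 'M4']
```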

Summary

| Concept           | Key Takeaway                                                        |
|-------------------|---------------------------------------------------------------------|
| Point-to-Point    | One consumer per message; competing consumers for load distribution |
| Pub/Sub           | All subscribers receive every message; event broadcasting           |
| Topic Routing     | Hierarchical topics with wildcard subscriptions                     |
| Fan-Out           | One event triggers multiple independent processing paths            |
| Fan-In            | Multiple sources converge into a single processing pipeline         |
| Dead Letter Queue | Captures messages that fail processing for investigation            |
| Backpressure      | Strategies for handling producers faster than consumers             |
| Message Ordering  | Per-partition ordering balances order guarantees with throughput    |