Go Desktop Agent Part 2: Redis Pub/Sub - The Nervous System
In Part 2 of our Go Desktop Agent series, we take a deep dive into Redis pub/sub — the messaging backbone that connects your infrastructure to user desktops. We’ll explore how it works, why we chose it, and where this component could evolve as the project matures.
Why Redis Pub/Sub?
When designing the communication layer for the desktop agent, we evaluated several options: WebSockets, gRPC streaming, MQTT, RabbitMQ, and Redis pub/sub. Each has merits, but Redis emerged as the winner for our MVP for several compelling reasons:
1. Zero Additional Infrastructure
Many organizations already run Redis for caching, session storage, or job queues. Piggybacking on existing infrastructure means faster adoption and lower operational overhead.
2. Ridiculous Simplicity
Redis pub/sub requires exactly two commands to get started: SUBSCRIBE and PUBLISH. No topic configuration, no broker setup, no schema registry. It just works.
3. Blazing Fast
Redis handles millions of messages per second. For desktop notifications — where we’re talking hundreds or maybe thousands of messages per hour — Redis barely breaks a sweat.
4. Battle-Tested Reliability
Redis has been in production at scale for over a decade. The Go client libraries are mature and well-maintained.
5. Perfect for Fire-and-Forget
Desktop notifications are inherently ephemeral. If a user’s machine is offline, they probably don’t need yesterday’s “disk full” warning. Pub/sub’s lack of persistence is actually a feature here.
How Pub/Sub Works in the Agent
Let’s examine the communication flow in detail:
┌─────────────────────────────────────────────────────────────────┐
│ Redis Server │
│ │
│ ┌────────────────────┐ ┌────────────────────────┐ │
│ │ Channel: │ │ Channel: │ │
│ │ "notifications" │ │ "replies:workstation1" │ │
│ │ │ │ │ │
│ │ ┌──────────┐ │ │ ┌──────────────┐ │ │
│ │ │ Message │ │ │ │ Response │ │ │
│ │ │ Queue │ │ │ │ Queue │ │ │
│ │ └────┬─────┘ │ │ └──────▲───────┘ │ │
│ └────────┼──────────┘ └──────────┼────────────┘ │
│ │ │ │
└────────────┼───────────────────────────────┼─────────────────┘
│ Broadcast │ Targeted Reply
▼ │
┌───────────────┐ ┌───────┴───────┐
│ All Agents │ │ Specific │
│ Subscribed │ │ Publisher │
└───────────────┘ └───────────────┘
The Notification Channel
All desktop agents subscribe to a shared notification channel (default: notifications). When a publisher sends a message, Redis broadcasts it to all subscribed agents. This is a classic fan-out pattern.
```go
// Simplified subscription logic from redis.go
func (c *RedisClient) Subscribe(ctx context.Context, channel string) {
	pubsub := c.client.Subscribe(ctx, channel)
	for {
		msg, err := pubsub.ReceiveMessage(ctx)
		if err != nil {
			// Handle reconnection before retrying
			continue
		}
		// Process the notification
		c.handleMessage(msg.Payload)
	}
}
```
The Reply Channel Pattern
Here’s where it gets interesting. When a notification requires a response (like an OK/Cancel dialog), the publisher includes a reply channel in the message:
```json
{
  "title": "Maintenance Alert",
  "message": "System restart in 10 minutes. Save your work.",
  "type": "OKCancel",
  "reply_channel": "replies:monitoring-server"
}
```
The agent displays the dialog, captures the user’s response, and publishes back to that specific reply channel:
```json
{
  "hostname": "developer-laptop",
  "response": "OK",
  "original_title": "Maintenance Alert",
  "timestamp": "2026-02-08T10:30:00Z"
}
```
This pattern creates a pseudo-request/response flow over inherently one-way pub/sub messaging.
Message Protocol Deep Dive
Our message format is deliberately simple JSON. No Protobuf, no Avro, no schema versioning — at least not for the MVP. Here’s the full specification:
Notification Message (Publisher → Agent)
```
{
  "title": "string (required)",         // Dialog title
  "message": "string (required)",       // Dialog body text
  "type": "string (optional)",          // "Simple", "OK", "OKCancel"
  "reply_channel": "string (optional)", // Where to send responses
  "priority": "string (optional)",      // Future: "low", "normal", "high"
  "expires_at": "string (optional)"     // Future: ISO timestamp
}
```
Response Message (Agent → Publisher)
```
{
  "hostname": "string",       // Machine that responded
  "response": "string",       // "OK", "Cancel", "Dismissed"
  "original_title": "string", // Echo back for correlation
  "timestamp": "string"       // When user responded
}
```
Notification Types Explained
| Type | Behavior |
|---|---|
| Simple | Toast notification, auto-dismisses |
| OK | Dialog with single OK button |
| OKCancel | Dialog with OK and Cancel buttons |
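Inside the agent, the type field can drive a simple dispatch. A sketch — the function name is illustrative, not from the actual codebase:

```go
package main

import "fmt"

// buttonsFor maps a notification type to the dialog buttons the agent
// should render. Unknown or empty types fall back to a plain toast.
func buttonsFor(notificationType string) []string {
	switch notificationType {
	case "OK":
		return []string{"OK"}
	case "OKCancel":
		return []string{"OK", "Cancel"}
	default: // "Simple" or unknown: auto-dismissing toast, no buttons
		return nil
	}
}

func main() {
	fmt.Println(buttonsFor("OKCancel")) // [OK Cancel]
}
```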
The Go Implementation
Let’s look at the actual Redis integration code:
```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

type RedisClient struct {
	client *redis.Client
	ctx    context.Context
}

func NewRedisClient(addr, password string) *RedisClient {
	client := redis.NewClient(&redis.Options{
		Addr:     addr,
		Password: password,
		DB:       0,
	})
	return &RedisClient{
		client: client,
		ctx:    context.Background(),
	}
}

func (r *RedisClient) SubscribeNotifications(handler func(Notification)) error {
	pubsub := r.client.Subscribe(r.ctx, "notifications")
	defer pubsub.Close()

	// Wait for subscription confirmation
	_, err := pubsub.Receive(r.ctx)
	if err != nil {
		return err
	}

	ch := pubsub.Channel()
	for msg := range ch {
		var notification Notification
		if err := json.Unmarshal([]byte(msg.Payload), &notification); err != nil {
			log.Printf("Invalid message: %v", err)
			continue
		}
		handler(notification)
	}
	return nil
}

func (r *RedisClient) PublishResponse(channel string, response Response) error {
	data, err := json.Marshal(response)
	if err != nil {
		return err
	}
	return r.client.Publish(r.ctx, channel, data).Err()
}
```
Connection Resilience
In production, networks fail. Redis restarts. The agent needs to handle this gracefully:
```go
func (r *RedisClient) SubscribeWithReconnect(channel string, handler func(Notification)) {
	backoff := time.Second
	for {
		err := r.SubscribeNotifications(handler)
		if err != nil {
			log.Printf("Subscription error: %v. Reconnecting in %v", err, backoff)
			time.Sleep(backoff)
			backoff = min(backoff*2, 30*time.Second) // Exponential backoff (the min builtin needs Go 1.21+)
			continue
		}
		backoff = time.Second // Clean disconnect: reset the delay before resubscribing
	}
}
```
Beyond the MVP: Where Could This Go?
The current implementation is intentionally minimal — an MVP that proves the concept. But the architecture has room to grow. Here are directions we’re considering:
1. Redis Streams Instead of Pub/Sub
Pub/sub is fire-and-forget. If the agent is offline, messages are lost. Redis Streams would provide:
- Message persistence: Agents can catch up on missed messages
- Consumer groups: Load balancing across multiple agents
- Message acknowledgment: Guarantee delivery
- Message replay: Debugging and auditing
```
# Future: Using Redis Streams
XADD notifications * title "Alert" message "Disk full" type "OK"
XREADGROUP GROUP agents consumer1 BLOCK 0 STREAMS notifications >
```
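In go-redis, the XADD above corresponds to an XAdd call taking a Values map. Flattening a notification into that map is pure logic; this is a sketch of a hypothetical migration, not code from the agent:

```go
package main

import "fmt"

// streamValues flattens a notification into the field-value map that
// redis.XAddArgs{Stream: "notifications", Values: ...} would accept.
// Empty optional fields are omitted to keep stream entries small.
func streamValues(title, message, msgType string) map[string]interface{} {
	v := map[string]interface{}{
		"title":   title,
		"message": message,
	}
	if msgType != "" {
		v["type"] = msgType
	}
	return v
}

func main() {
	fmt.Println(streamValues("Alert", "Disk full", "OK"))
}
```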
2. Channel Segmentation
Currently, all agents receive all messages. In larger deployments, you might want:
- Department channels: notifications:engineering, notifications:finance
- Severity channels: notifications:critical, notifications:info
- Geographic channels: notifications:eu, notifications:us
Agents could subscribe to multiple channels based on configuration:
```go
channels := []string{
	"notifications:global",
	"notifications:engineering",
	"notifications:critical",
}
pubsub := client.Subscribe(ctx, channels...)
```
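The channel list itself can be derived from agent configuration. A sketch — the configuration shape (department string, critical flag) is an assumption:

```go
package main

import "fmt"

// channelsFor builds the subscription list from an agent's attributes.
// Every agent listens on the global channel; the rest are opt-in.
func channelsFor(department string, critical bool) []string {
	chs := []string{"notifications:global"}
	if department != "" {
		chs = append(chs, "notifications:"+department)
	}
	if critical {
		chs = append(chs, "notifications:critical")
	}
	return chs
}

func main() {
	fmt.Println(channelsFor("engineering", true))
	// [notifications:global notifications:engineering notifications:critical]
}
```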
3. Authentication and Authorization
The MVP trusts anyone who can connect to Redis. Production deployments need:
- Agent authentication: Verify agents before allowing subscriptions
- Publisher authorization: Control who can send notifications
- Message signing: Ensure messages haven’t been tampered with
One approach is JWT-based authentication:
```json
{
  "title": "Security Alert",
  "message": "Unauthorized access detected",
  "signature": "eyJhbGciOiJIUzI1NiIs...",
  "publisher_id": "security-system-1"
}
```
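Short of full JWT, even a shared-secret HMAC over the message body lets agents reject tampered or unauthorized messages. A minimal sketch using the standard library — key distribution and rotation are deliberately out of scope, and the function names are ours:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign computes an HMAC-SHA256 signature over the raw message payload.
func sign(payload, secret []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares in constant time.
func verify(payload, secret []byte, signature string) bool {
	expected, err := hex.DecodeString(signature)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	secret := []byte("shared-secret")
	msg := []byte(`{"title":"Security Alert"}`)
	sig := sign(msg, secret)
	fmt.Println(verify(msg, secret, sig))          // true
	fmt.Println(verify([]byte("other"), secret, sig)) // false
}
```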
4. Message Routing and Filtering
Rather than broadcasting everything, a routing layer could:
- Target specific machines: Send only to workstation-42
- Target user groups: Send to all users in the “sysadmin” group
- Filter by criteria: Only agents running Windows, only logged-in users
```json
{
  "title": "Windows Update",
  "message": "Reboot required",
  "routing": {
    "os": "windows",
    "groups": ["developers"],
    "exclude_hosts": ["build-server-1"]
  }
}
```
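On the agent side, evaluating such a routing block is just a few comparisons. A sketch — the field names follow the JSON above, but the Routing type and matches method are hypothetical:

```go
package main

import "fmt"

// Routing describes which agents should display a message.
type Routing struct {
	OS           string   `json:"os"`
	Groups       []string `json:"groups"`
	ExcludeHosts []string `json:"exclude_hosts"`
}

// matches reports whether an agent with the given attributes
// should display the message. Empty criteria match everything.
func (r Routing) matches(os, hostname string, groups []string) bool {
	for _, h := range r.ExcludeHosts {
		if h == hostname {
			return false // explicitly excluded
		}
	}
	if r.OS != "" && r.OS != os {
		return false
	}
	if len(r.Groups) == 0 {
		return true // no group restriction
	}
	for _, want := range r.Groups {
		for _, have := range groups {
			if want == have {
				return true
			}
		}
	}
	return false
}

func main() {
	r := Routing{OS: "windows", Groups: []string{"developers"}, ExcludeHosts: []string{"build-server-1"}}
	fmt.Println(r.matches("windows", "workstation-42", []string{"developers"})) // true
	fmt.Println(r.matches("windows", "build-server-1", []string{"developers"})) // false
}
```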
5. Delivery Guarantees and Tracking
For critical notifications, you want to know:
- Was the message delivered to the agent?
- Was the notification displayed to the user?
- Did the user interact with it?
This requires adding:
- Delivery receipts: Agent confirms message received
- Display confirmations: Agent confirms notification shown
- Read receipts: User saw the notification
- Response tracking: Full audit trail
6. Integration Hub
The Redis layer could become more than just pub/sub:
┌─────────────────────────────────────────────────────────────┐
│ Notification Hub │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Webhook │ │ gRPC │ │ REST │ │ CLI │ │
│ │ Receiver │ │ Server │ │ API │ │ Tool │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │ │
│ └─────────────┴──────┬──────┴─────────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Message Router │ │
│ │ & Transformer │ │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Redis Pub/Sub │ │
│ └─────────────────┘ │
└─────────────────────────────────────────────────────────────┘
7. Metrics and Observability
The Redis layer is a natural point to collect metrics:
- Messages published per channel
- Active agent connections
- Response rates and latencies
- Failed deliveries
These could feed into Prometheus, Grafana, or your existing observability stack.
Trade-offs to Consider
Every architectural decision has trade-offs. Here’s what to weigh as the system evolves:
| Approach | Pros | Cons |
|---|---|---|
| Pure Pub/Sub | Simple, fast, ephemeral | No persistence, no delivery guarantee |
| Redis Streams | Persistence, replay, consumer groups | More complex, state to manage |
| Adding a Hub | Flexibility, multiple ingestion points | Another service to deploy and maintain |
| Authentication | Security | Complexity, key management |
The beauty of the current design is its simplicity. Add complexity only when the use case demands it.
Practical Advice
If you’re building something similar, here’s what we’ve learned:
- Start with pub/sub. It’s good enough for most MVPs. Don’t over-engineer day one.
- Design for horizontal scale. Even if you have one Redis today, structure your code so adding replicas or clusters is straightforward.
- Log everything in development. You’ll thank yourself when debugging why a message didn’t appear.
- Test network failures. Simulate Redis going down. Your agent should recover gracefully.
- Keep messages small. Notifications are small by nature. Resist the urge to stuff in metadata you don’t need.
What’s Next
In Part 3, we’ll shift focus to the desktop side: system tray integration, native dialogs, and the platform-specific challenges of rendering notifications on Windows, macOS, and Linux.
The journey from MVP to production-ready system is long, but the pub/sub foundation we’ve built here provides a solid platform for growth. Whether you stick with simple pub/sub or evolve toward streams and routing layers, the core pattern remains the same: decouple publishers from consumers, let Redis handle the middle, and keep your agents focused on what they do best — getting information in front of users.
The complete source code is available at gitlab.com/hocmodo/go-desktop-agent. Questions or suggestions? Reach out through our contact page.