Operator Guide

This guide is for operators deploying ZeroDDS to production. Seven sections cover the full life-cycle.

Production Deployment

Three deployment modes:

Configuration Files

Each daemon reads /etc/zerodds/<daemon>.yaml. Schema:

listen: 0.0.0.0:8080
domain: 0
tls:
  cert: /etc/zerodds/tls/server.crt
  key: /etc/zerodds/tls/server.key
  client_ca: /etc/zerodds/tls/clients.crt   # optional, mTLS
auth:
  mode: bearer                                # none | bearer | jwt | mtls | sasl-plain
  tokens: /etc/zerodds/tokens.txt
acl:
  default: deny
  rules:
    - { subject: "user:alice", op: read,  topic: "Sensors/*" }
    - { subject: "group:editors", op: write, topic: "Commands/#" }
metrics:
  enabled: true
  address: 127.0.0.1:9090
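Before rolling a config out, it can help to sanity-check the parsed document against the schema above. A minimal sketch, operating on an already-parsed dict; the field names come from the schema shown here, but which keys are strictly required is an assumption — the daemons themselves are the authority on validation:

```python
REQUIRED_KEYS = {"listen", "domain"}  # assumed mandatory; verify against your build
AUTH_MODES = {"none", "bearer", "jwt", "mtls", "sasl-plain"}

def validate(cfg: dict) -> list[str]:
    """Return a list of problems found in a parsed ZeroDDS daemon config."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - cfg.keys())]
    mode = cfg.get("auth", {}).get("mode", "none")
    if mode not in AUTH_MODES:
        problems.append(f"unknown auth mode: {mode!r}")
    if cfg.get("acl", {}).get("default", "deny") not in ("allow", "deny"):
        problems.append("acl.default must be 'allow' or 'deny'")
    return problems

good = {
    "listen": "0.0.0.0:8080",
    "domain": 0,
    "auth": {"mode": "bearer", "tokens": "/etc/zerodds/tokens.txt"},
    "acl": {"default": "deny"},
}
print(validate(good))  # → []
```

Wiring this into a pre-deploy CI step catches typos (e.g. `mode: berer`) before a daemon restarts with a half-valid config.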

Monitoring

Built-in observability:
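If the metrics endpoint at 127.0.0.1:9090 serves Prometheus-style text exposition (an assumption — check your build), the response can be consumed without a full Prometheus stack. A sketch of a minimal parser; the metric name in the sample is hypothetical:

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse Prometheus-style text exposition into {metric-with-labels: value}.
    HELP/TYPE comment lines are skipped; labels stay part of the key."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP zerodds_msgs_total Messages forwarded (hypothetical metric name)
# TYPE zerodds_msgs_total counter
zerodds_msgs_total{bridge="mqtt"} 1048576
zerodds_msgs_total{bridge="ws"} 250000
"""
print(parse_metrics(sample)['zerodds_msgs_total{bridge="mqtt"}'])  # → 1048576.0
```

This is enough for a cron-driven health probe; for dashboards, point a real Prometheus scraper at the configured `metrics.address` instead.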

Backup & Recovery

Use the zerodds-recorder-bridge daemon to capture topics to .zddsrec files. zerodds-replay reads recordings and republishes them at a scaled wall-clock rate for disaster recovery, demo replay, or regression testing. The only state worth backing up is the recordings themselves; the daemons are stateless beyond their TLS keys.
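The scaled-wall-clock idea behind zerodds-replay can be sketched simply: walk the recorded (timestamp, payload) pairs and sleep so each inter-message gap is divided by a speed factor. This is an illustrative sketch only — the actual .zddsrec format is not documented here:

```python
import time

def replay(records, publish, speed: float = 1.0):
    """records: iterable of (timestamp_seconds, payload), in recorded order.
    publish: callable invoked per payload.
    Each recorded inter-message gap is replayed as gap/speed seconds."""
    prev_ts = None
    for ts, payload in records:
        if prev_ts is not None:
            gap = (ts - prev_ts) / speed
            if gap > 0:
                time.sleep(gap)
        publish(payload)
        prev_ts = ts

out = []
replay([(0.0, b"a"), (0.001, b"b")], out.append, speed=10.0)
print(out)  # → [b'a', b'b']
```

A speed of 10.0 compresses an hour-long recording into six minutes, which is usually what you want for regression runs; 1.0 reproduces original timing for demos.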

Security Hardening

Capacity Planning

Two questions to answer before sizing: how much data per second each process must move, and what the bottleneck is when scaling beyond that. The numbers below are reference points from the in-tree benchmark suite (single-process, no GC pauses, on the llvm bench host: AMD Ryzen Threadripper PRO 3955WX, 24 cores, vanilla Linux 6.1). Treat them as upper bounds: your application's serialisation, QoS profile and broker configuration will dominate well before the bridge does.

| Path | Reference rate | Bottleneck when scaling |
| --- | --- | --- |
| DDS over RTPS UDP (LAN) | ~4 GiB/s payload | NIC, kernel UDP buffer |
| DDS over shared memory | < 5 µs roundtrip | cache-line traffic, polling vs. eventfd wake-up |
| WebSocket bridge | ~250k frames/s | TLS + per-frame allocation |
| MQTT bridge | ~100k messages/s | upstream broker (mosquitto / HiveMQ) |
| gRPC bridge | ~50k unary calls/s | HTTP/2 connection & HPACK table churn |
| AMQP bridge | ~100k messages/s | broker (RabbitMQ) flow-control credits |
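A back-of-envelope way to turn the reference rates above into a process count: divide your expected peak rate by a derated per-process figure. The 50% derate below is an arbitrary safety margin, not a ZeroDDS recommendation — pick one that matches your own load tests:

```python
import math

def processes_needed(peak_rate: float, bench_rate: float, derate: float = 0.5) -> int:
    """Bridge processes to provision, derating the benchmark figure by `derate`."""
    return math.ceil(peak_rate / (bench_rate * derate))

# e.g. 300k MQTT msgs/s against the ~100k msgs/s reference, derated to 50k/process:
print(processes_needed(300_000, 100_000))  # → 6
```

Remember that for the broker-backed bridges (MQTT, AMQP) the table says the upstream broker is the real ceiling, so size the broker cluster before adding bridge processes.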

Sizing guidance

Upgrade Path

Within 1.x releases, upgrades are drop-in: no QoS changes required, and the wire format stays RTPS 2.5. Across major versions, see the migration note in the corresponding release.

Full handbook on GitHub.