```
 ___       _____  ____
|_ _|___  |_   _||  _ \
 | |/ _ \   | |  | | | |
 | | (_) |  | |  | |_| |
|___\___/   |_|  |____/
```
IoT Daemon - MQTT Server
A high-performance MQTT server daemon implementation in Rust using Tokio, designed for scalability, reliability, and extensibility. Built with a modern async architecture supporting multiple transport protocols and thousands of concurrent connections.
- MQTT v3.1.1 protocol support with all packet types
- QoS=0, QoS=1, and QoS=2 message delivery with full protocol support
- Message routing with full MQTT wildcard support (`+` single-level, `#` multi-level)
- Clean session handling with session takeover and proper cleanup
- Keep-alive mechanism with configurable timeouts
- Retained messages with storage limits and wildcard delivery
- Will messages (Last Will and Testament) support
- Pluggable persistence - InMemory and SQLite storage backends
- TLS/SSL support - Encrypted connections with certificate-based config
- Multiple listen addresses - TCP and TLS listeners simultaneously
- Username/password authentication - File-based credential management
- Topic-based ACLs - Fine-grained publish/subscribe access control
- Event-driven architecture using `tokio::select!` for high performance
- Race-condition-free shutdown using `CancellationToken`
- UNIX signal handling (SIGINT graceful, SIGTERM immediate)
- Comprehensive configuration with TOML support
- Extensive test coverage with 136+ tests validating all functionality
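The wildcard rules above can be illustrated with a minimal matcher. This is a hypothetical sketch for illustration, not IoTD's actual routing code, and it omits spec corner cases such as `$`-prefixed topics:

```rust
/// Minimal sketch of MQTT topic-filter matching: `+` matches exactly one
/// topic level, `#` matches all remaining levels. Hypothetical helper,
/// not IoTD's router API; `$`-topic corner cases are omitted.
fn topic_matches(filter: &str, topic: &str) -> bool {
    let mut f = filter.split('/');
    let mut t = topic.split('/');
    loop {
        match (f.next(), t.next()) {
            (Some("#"), _) => return true,    // multi-level wildcard: matches the rest
            (Some("+"), Some(_)) => continue, // single-level wildcard: matches one level
            (Some(fl), Some(tl)) if fl == tl => continue,
            (None, None) => return true,      // both exhausted: exact match
            _ => return false,                // length or level mismatch
        }
    }
}
```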
IoTD has completed Milestone 5 - QoS=2! 🎉
The project now supports all three MQTT QoS levels, providing "exactly once" delivery guarantee with the complete PUBREC/PUBREL/PUBCOMP four-step handshake protocol.
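The four-step handshake can be sketched as a small state machine from the sender's perspective. The state and function names below are illustrative, not IoTD's internal types:

```rust
/// Sketch of the QoS=2 sender-side state machine. After sending PUBLISH,
/// the sender waits for PUBREC, answers with PUBREL, then waits for
/// PUBCOMP. Illustrative types only, not IoTD's internals.
#[derive(Debug, Clone, Copy, PartialEq)]
enum QoS2State {
    AwaitingPubRec,  // PUBLISH sent, waiting for the receiver's PUBREC
    AwaitingPubComp, // PUBREL sent, waiting for the receiver's PUBCOMP
    Done,            // handshake complete: delivered exactly once
}

fn on_packet(state: QoS2State, packet: &str) -> QoS2State {
    match (state, packet) {
        (QoS2State::AwaitingPubRec, "PUBREC") => QoS2State::AwaitingPubComp,
        (QoS2State::AwaitingPubComp, "PUBCOMP") => QoS2State::Done,
        (s, _) => s, // duplicates or out-of-order packets leave the state unchanged
    }
}
```

Ignoring unexpected packets (rather than erroring) is what makes retransmitted duplicates harmless in this flow.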
- Complete MQTT v3.1.1 protocol support with all packet types
- Message routing system with full MQTT wildcard support (`+`, `#`)
- Clean session logic with session takeover and DISCONNECT notifications
- Keep-alive mechanism with configurable timeouts and automatic cleanup
- Retained messages with storage limits and wildcard delivery
- Will messages (Last Will and Testament) support
- Protocol compliance with validation, error codes, and client ID rules
- Topic validation for both topic names and subscription filters
- Race-condition-free architecture using `CancellationToken`
- Comprehensive test suite with 95+ tests
- ✅ QoS=1 message delivery - "At least once" guarantee fully implemented
- ✅ PUBACK handling - Proper acknowledgment flow for reliable delivery
- ✅ Message retransmission - Automatic retry with DUP flag on timeout
- ✅ Multiple in-flight messages - Support for concurrent QoS=1 messages
- ✅ Duplicate detection - DUP flag prevents routing duplicate messages
- ✅ Packet ID management - Sequential generation with wrap-around
- ✅ Configurable retransmission - Interval and max retry limits
- ✅ QoS downgrade - Proper min(publish QoS, subscription QoS)
- ✅ Comprehensive QoS=1 tests - 10+ tests covering all scenarios
- ✅ Performance benchmarks - Tested with high message throughput
- ✅ Unified Storage trait - Pluggable backend architecture
- ✅ InMemoryStorage - Fast storage for development/testing
- ✅ SqliteStorage - Durable storage for production
- ✅ Session persistence - Restore sessions for clean_session=false clients
- ✅ Subscription persistence - Subscriptions survive reconnects
- ✅ In-flight message persistence - QoS=1 messages restored on reconnect
- ✅ Retained message persistence - Retained messages survive restarts
- ✅ Atomic state save - All-or-nothing session persistence
- ✅ Config-based backend selection - Choose memory or sqlite
- ✅ TLS/SSL support - Encrypted connections via the `tls://` listener prefix
- ✅ Multiple listen addresses - Serve TCP and TLS simultaneously
- ✅ Username/password authentication - File-based user credentials
- ✅ Topic-based ACLs - Fine-grained publish/subscribe access control
- ✅ Pluggable auth backend - `allowall` and `file` backends
- ✅ Config-based TLS - Certificate and key file paths in TOML
- ✅ TLS integration tests - Self-signed cert testing with rcgen
- ✅ QoS=2 message delivery - "Exactly once" guarantee fully implemented
- ✅ PUBREC/PUBREL/PUBCOMP flow - Complete four-step handshake protocol
- ✅ QoS=2 state machine - AwaitingPubRec and AwaitingPubComp states
- ✅ Inbound QoS=2 tracking - Broker receives and processes QoS=2 messages
- ✅ Outbound QoS=2 delivery - Broker sends QoS=2 to subscribers
- ✅ QoS=2 retransmission - PUBLISH and PUBREL retry on timeout
- ✅ QoS=2 persistence - State survives server restarts
- ✅ Comprehensive QoS=2 tests - 5+ integration tests covering all scenarios
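Both the QoS=1 and QoS=2 flows above rely on packet identifiers. In MQTT v3.1.1 these are 16-bit and must be non-zero, so sequential generation wraps from 65535 back to 1. A minimal sketch (hypothetical type, not IoTD's implementation, which would also need to skip IDs still in flight):

```rust
/// Sketch of sequential packet-ID generation with wrap-around.
/// Valid MQTT packet IDs are 1..=65535; 0 is skipped when wrapping.
/// Hypothetical helper, not IoTD's actual implementation.
struct PacketIdGen {
    next: u16,
}

impl PacketIdGen {
    fn new() -> Self {
        PacketIdGen { next: 1 }
    }

    fn next_id(&mut self) -> u16 {
        let id = self.next;
        // Wrap from 65535 back to 1, never handing out 0
        self.next = if id == u16::MAX { 1 } else { id + 1 };
        id
    }
}
```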
- Milestone 1: ✅ Basic MQTT Server (Completed)
- Milestone 2: ✅ QoS=1 Support (Completed)
- Milestone 3: ✅ Persistence Layer (Completed)
- Milestone 4: ✅ Security - TLS, Authentication, ACLs (Completed)
- Milestone 5: ✅ QoS=2 "exactly once" delivery (Completed)
- Milestone 6 (Next): Observability (Prometheus, Grafana)
- Milestone 7: Flow control & production features
- v1.0: Production-ready single-node broker
- v2.0: MQTT 5.0 protocol support
- v3.0: Clustering and high availability
- v4.0: Multi-tenancy and enterprise features
For a detailed development roadmap, see docs/roadmap.md.
IoTD has been manually tested and verified to work on the following platforms:
- macOS (Apple Silicon) - Native ARM64 support
- Linux GNU (aarch64) - ARM64 with glibc
- Linux musl (aarch64) - ARM64 with musl libc (Alpine Linux)
- Linux GNU (x86_64) - Intel/AMD 64-bit with glibc
- Linux musl (x86_64) - Intel/AMD 64-bit with musl libc (Alpine Linux)
- IPv4 - Full support on all platforms
- IPv6 - Full support on all platforms
- Dual-stack - Can listen on both IPv4 and IPv6 simultaneously
The single binary design and Rust's cross-platform capabilities ensure consistent behavior across all supported platforms.
IoTD follows a modular, event-driven architecture built on Tokio's async runtime:
- Server → Broker → Session → Router hierarchy for clean separation of concerns
- Event-driven design using `tokio::select!` for responsive packet handling
- Race-condition-free shutdown using `CancellationToken` throughout
- Thread-safe operations with optimal locking strategies
- Zero-copy message routing for maximum performance
Key components:
- `server.rs` - Lifecycle management and signal handling
- `broker.rs` - Connection acceptance and session management
- `session.rs` - Client state machine and packet processing
- `router.rs` - Publish/subscribe with wildcard support
- `protocol/` - MQTT v3.1.1 packet encoding/decoding
For detailed architecture documentation, see docs/arch.md.
- Rust 1.75 or later
- Tokio runtime
```bash
cargo build --release
```

```bash
# Default (localhost only)
cargo run
# or
./target/release/iotd

# Custom listen address
./target/release/iotd -l 0.0.0.0:1883        # All IPv4 interfaces
./target/release/iotd -l [::]:1883           # All IPv6 interfaces
./target/release/iotd -l tls://0.0.0.0:8883  # TLS (requires [tls] in config)

# Help and version
./target/release/iotd --help
./target/release/iotd --version
```

The server listens on 127.0.0.1:1883 by default (localhost only).
```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run specific test
cargo test test_simple_connect
```

```bash
# Build image
docker build -t iotd .

# Run container
docker run -p 1883:1883 iotd
```

Optimized for:
- Low latency: Async I/O with Tokio
- High throughput: Multiple in-flight QoS=1 messages
- Memory efficient: ~8.45 KB per connection
- Single binary: No external dependencies
- MQTT spec compliant: Strict adherence to MQTT 3.1.1
Tested on Mac Studio 2025 (M4 Max 16-core) with RUST_LOG=error:
- Memory efficiency: Only 8.45 KB per connection
- 1000 connections: ~10 MB total memory usage
- Throughput: 8,600+ messages/second routed to 950 subscribers
- Message rate: 4,000-9,000 msg/sec for small messages
For detailed benchmarks and testing tools, see BENCHMARKS.md.
You can test with any MQTT client:
```bash
# Using mosquitto clients
mosquitto_sub -h localhost -t "test/topic"
mosquitto_pub -h localhost -t "test/topic" -m "hello world"

# Using mqttx cli
mqttx sub -h localhost -t "test/topic"
mqttx pub -h localhost -t "test/topic" -m "hello world"
```

The server uses a comprehensive configuration system with TOML support:
```toml
# Listen addresses (single string or array, with optional protocol prefix)
listen = ["tcp://0.0.0.0:1883", "tls://0.0.0.0:8883"]

# Maximum retained messages
retained_message_limit = 10000

# QoS=1 retransmission settings
retransmission_interval_ms = 5000
max_retransmission_limit = 10

# Persistence backend: "memory" (default) or "sqlite"
[persistence]
backend = "memory"

# Authentication backend: "allowall" (default) or "file"
[auth]
backend = "allowall"
# password_file = "passwd"

# ACL backend: "allowall" (default) or "file"
[acl]
backend = "allowall"
# acl_file = "acl.conf"

# TLS configuration (required when any listener uses tls://)
[tls]
cert_file = "server.crt"
key_file = "server.key"
```

Configuration can be provided through:
- Configuration files (TOML) via `-c config.toml`
- Command-line arguments (`-l` for listen address)
- Default values
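The optional protocol prefix on listen addresses (`tcp://`, `tls://`) could be parsed as simply as the following sketch. The names are illustrative, not IoTD's internals, and a bare address is assumed to default to plain TCP as in the `-l` examples above:

```rust
/// Sketch of parsing a listen string with an optional protocol prefix.
/// Illustrative only; IoTD's actual config handling may differ.
#[derive(Debug, PartialEq)]
enum Transport {
    Tcp,
    Tls,
}

fn parse_listen(s: &str) -> (Transport, &str) {
    if let Some(addr) = s.strip_prefix("tls://") {
        (Transport::Tls, addr)
    } else if let Some(addr) = s.strip_prefix("tcp://") {
        (Transport::Tcp, addr)
    } else {
        (Transport::Tcp, s) // bare address: assume plain TCP
    }
}
```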
MIT