fix(dash-spv): split peer TcpStream and add per-peer writer task #720

xdustinface wants to merge 1 commit into v0.42-dev

Conversation
`Peer` now splits its `TcpStream` into independent read and write halves via `tokio::io::split`. The read half lives behind its own `Mutex<ReadState>` so the reader can frame inbound bytes without contending with senders. The write half is owned by a dedicated per-peer writer task that drains an `mpsc::Sender<NetworkMessage>` onto the socket. `send_message` now just queues into that channel and returns immediately.

This removes the single `Mutex<ConnectionState>` that previously serialised the reader, the maintenance ping, every distributed send, and every broadcast through the same critical section, so a long inbound `Headers2` decompression no longer holds outbound pings off the wire. The reader loop also no longer has anything to gain from holding a mutating lock to call `receive_message`.

The `Peer` API is adjusted accordingly: `send_message`, `receive_message`, and `handle_ping` are now `&self` since the only state they touch lives behind `Arc`s. `bytes_sent` becomes an `AtomicU64` shared with the writer task. `Peer::connect_instance` is retained for compatibility but routes through the same path.
📝 Walkthrough

This PR refactors the Dash SPV network layer to decouple inbound and outbound message handling. The peer architecture shifts from a single shared socket mutex to separate read-state locking and an mpsc-bounded outbound queue with a dedicated writer task. The peer reader loop now uses read locks for message acquisition and explicitly marks peers disconnected on fatal errors.

Changes: Peer Concurrency Architecture & Manager Integration
Sequence Diagram

sequenceDiagram
    actor Mgr as Manager/Reader
    participant P as Peer
    participant Ch as Out Channel
    participant WT as Writer Task
    participant Net as Network Socket
    Mgr->>P: send_message(msg)
    P->>Ch: enqueue NetworkMessage
    Ch-->>WT: message available
    par Concurrent Paths
        Mgr->>P: receive_message()
        P->>P: lock read_state
        P->>Net: read from socket
        Net-->>P: raw bytes
        P->>P: decode, validate checksum
        P-->>Mgr: Message
        P->>P: unlock read_state
    and
        WT->>Ch: dequeue NetworkMessage
        WT->>WT: encode RawNetworkMessage
        WT->>Net: write + flush
        Net-->>WT: acked
        WT->>P: increment bytes_sent (atomic)
    end
    Mgr->>P: disconnect()
    P->>P: tear_down_connection()
    P->>Ch: drop out_tx
    Ch-->>WT: channel closed
    WT->>WT: exit writer task

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
🧹 Nitpick comments (1)
dash-spv/src/network/peer.rs (1)
345-350: Remove unreachable error handling branches for `WouldBlock` and `TimedOut` on async reads.

With `tokio::io::AsyncReadExt` on `ReadHalf<TcpStream>`, neither `WouldBlock` nor `TimedOut` errors surface to the caller. Tokio's async runtime handles non-blocking I/O internally, and socket timeouts are managed via `tokio::time::timeout()` (which returns `Elapsed`, not `TimedOut`). No socket-level timeout configuration exists in the codebase. Remove lines 345–350 for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/peer.rs` around lines 345 - 350, The match arms handling Err(e) where e.kind() == std::io::ErrorKind::WouldBlock and ErrorKind::TimedOut are unreachable for async reads using tokio::io::AsyncReadExt on ReadHalf<TcpStream>; remove those branches from the error handling in the read routine (the function handling reads from ReadHalf<TcpStream> / the code that matches on the read result) and rely on tokio timeouts via tokio::time::timeout (which returns Elapsed) or propagate the error instead; update the match to only handle real IO errors and the Ok/None cases, deleting the WouldBlock and TimedOut arms to keep the logic correct and clear.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5c0736c7-28fb-4719-a120-a8c9fcd3d143
📒 Files selected for processing (4)
- dash-spv/src/network/constants.rs
- dash-spv/src/network/manager.rs
- dash-spv/src/network/peer.rs
- masternode-seeds-fetcher/src/probe.rs
💤 Files with no reviewable changes (1)
- dash-spv/src/network/constants.rs