Connection Lifecycle
A connection in zerg goes through a well-defined lifecycle: accept, register, use, close, and optionally return to pool.
Lifecycle Stages
Accept (Acceptor)
│
▼
┌──────────────┐
│ Distribute │ round-robin to reactor
│ via queue │
└──────┬───────┘
│
▼
┌──────────────┐
│ Register │ arm multishot recv
│ in reactor │ add to connections dict
└──────┬───────┘
│
▼
┌──────────────┐
│ Notify │ push to Channel
│ application │ AcceptAsync() returns
└──────┬───────┘
│
▼
┌──────────────┐
┌──▶│ ReadAsync │◀─┐
│ │ + process │ │ read/write loop
│ │ + Write │ │
│ │ + FlushAsync│──┘
│ └──────┬───────┘
│ │
│ ▼ (IsClosed or error)
│ ┌──────────────┐
│ │ Close │ connection removed
│ │ + cleanup │ from reactor dict
│ └──────┬───────┘
│ │
│ ▼
│ ┌──────────────┐
└───│ Pool/Reuse │ Clear(), return to pool
│ (optional) │ generation incremented
    └──────────────┘

Accept Phase
- The acceptor’s io_uring delivers a CQE with the new client fd
- TCP_NODELAY is set on the socket
- The fd is enqueued to the target reactor’s ConcurrentQueue<int>
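The distribution step above can be sketched as a toy model. This is not the real acceptor (which reacts to io_uring CQEs and pushes onto a ConcurrentQueue<int>); it only shows the round-robin fan-out, with a deque standing in for the queue and all names illustrative.

```python
from collections import deque

class AcceptorModel:
    """Toy model of the accept phase: each accept CQE yields a client fd
    that is pushed round-robin onto a per-reactor queue."""

    def __init__(self, reactor_count):
        self.queues = [deque() for _ in range(reactor_count)]
        self._next = 0

    def on_accept_cqe(self, client_fd):
        # The real acceptor also sets TCP_NODELAY on the socket here.
        target = self._next % len(self.queues)
        self._next += 1
        self.queues[target].append(client_fd)
        return target

acceptor = AcceptorModel(reactor_count=3)
targets = [acceptor.on_accept_cqe(fd) for fd in (10, 11, 12, 13)]
# fds 10..13 land on reactors 0, 1, 2, 0 in turn
```

Round-robin keeps per-reactor load roughly even without any shared accounting between reactors.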
Registration Phase
On its next loop iteration, the reactor:
- Dequeues the fd from its queue
- Creates or retrieves a Connection from the pool
- Calls connection.SetFd(clientFd).SetReactor(this), which:
  - Assigns the file descriptor
  - Clears the _closed flag
  - Resets the _pending and _armed flags
  - Resets the _readSignal completion source
  - Clears the SPSC receive ring
- Stores the connection in connections[clientFd]
- Arms multishot recv with buffer selection for the fd
- Pushes a ConnectionItem to the Channel<ConnectionItem> for AcceptAsync()
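A minimal sketch of this registration step, assuming simplified stand-ins for the real types: a plain dict for the connections dictionary, a list for the accept channel, and a ConnModel whose reset mirrors the SetFd list above (the bodies are illustrative, not the real implementation).

```python
from collections import deque

class ConnModel:
    """Minimal stand-in for Connection's reset-on-reuse."""
    def __init__(self):
        self.fd = -1
        self.closed = True
        self.pending = self.armed = False
        self.recv_ring = [b"stale"]        # leftovers from a prior use

    def set_fd(self, fd):
        self.fd = fd
        self.closed = False                # clear _closed
        self.pending = self.armed = False  # reset _pending / _armed
        self.recv_ring.clear()             # clear the SPSC receive ring
        return self

def drain_pending(queue, pool, connections, accept_channel):
    """One reactor loop iteration's registration step."""
    while queue:
        fd = queue.popleft()
        conn = pool.pop() if pool else ConnModel()
        conn.set_fd(fd)                    # SetFd(fd).SetReactor(this)
        connections[fd] = conn             # connections[clientFd] = conn
        conn.armed = True                  # stands in for arming multishot recv
        accept_channel.append(conn)        # stands in for the Channel push

queue, pool, connections, accepted = deque([7]), [ConnModel()], {}, []
drain_pending(queue, pool, connections, accepted)
```

Note the ordering: the connection is fully reset and stored in the dictionary before recv is armed, so a CQE can never arrive for a half-initialized connection.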
Active Phase
The connection is now active. The handler can:
- ReadAsync() – park until data arrives, then drain ring items
- Write() – stage bytes into the unmanaged write slab
- FlushAsync() – tell the reactor to send staged bytes
- ResetRead() – prepare for the next read cycle
See Connection Read and Connection Write for API details.
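The four calls compose into a read/write loop. The sketch below shows that shape for an echo handler; conn is a toy in-memory double, not the real async Connection API, and the method names only mirror the list above.

```python
def run_echo(conn):
    """Shape of the active-phase loop for an echo handler."""
    while True:
        data = conn.read()     # ReadAsync(): park until data arrives
        if data is None:       # connection reported IsClosed
            break
        conn.write(data)       # Write(): stage bytes into the write slab
        conn.flush()           # FlushAsync(): ask the reactor to send
        conn.reset_read()      # ResetRead(): prepare the next read cycle

class FakeConn:
    """In-memory double so the loop can run without a reactor."""
    def __init__(self, chunks):
        self.chunks, self.sent, self.staged = list(chunks), [], b""
    def read(self):
        return self.chunks.pop(0) if self.chunks else None
    def write(self, data):
        self.staged += data
    def flush(self):
        self.sent.append(self.staged)
        self.staged = b""
    def reset_read(self):
        pass

conn = FakeConn([b"ping", b"pong"])
run_echo(conn)
# conn.sent is now [b"ping", b"pong"]
```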
Close Phase
A connection closes when:
- Client disconnects: recv CQE arrives with res == 0 (EOF) or res < 0 (error)
- Ring overflow: the SPSC recv ring is full (1024 items) – the connection is force-closed as a safety measure
- Application closes: the handler exits the read loop
When the reactor detects a close (recv CQE with res <= 0):
- Returns any buffer used by the final CQE to the buffer ring
- Removes the connection from the reactor’s connections dictionary
- Marks the connection as closed (_closed = 1)
- Wakes any waiting ReadAsync() so the handler sees IsClosed == true
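A sketch of those four steps, assuming simplified stand-ins (a dict for the connections dictionary, a list for the buffer ring); the real reactor does this inside its io_uring CQE loop.

```python
class ClosableConn:
    """Stand-in connection for modeling the close path (illustrative)."""
    def __init__(self):
        self.closed = False
        self.read_woken = False
    def wake_read(self):
        self.read_woken = True   # parked ReadAsync resumes, sees IsClosed

def on_recv_close(res, fd, connections, buffer_ring, buf_id=None):
    """Model of handling a recv CQE with res <= 0, per the list above."""
    if res > 0:
        return False                  # data path, handled elsewhere
    if buf_id is not None:
        buffer_ring.append(buf_id)    # return the final CQE's buffer
    conn = connections.pop(fd, None)  # remove from the reactor's dict
    if conn is not None:
        conn.closed = True            # _closed = 1
        conn.wake_read()              # waiting ReadAsync sees IsClosed
    return True

connections = {9: ClosableConn()}
ring = []
conn = connections[9]
closed = on_recv_close(res=0, fd=9, connections=connections,
                       buffer_ring=ring, buf_id=42)
```

Returning the buffer before dropping the connection matters: buffer-ring slots are a shared reactor resource, so leaking one on every close would eventually starve all connections.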
Pooling and Reuse
Connections can be pooled to avoid repeated allocation. The Connection class supports two reset methods:
Clear() – Safe Reset
- Increments _generation to invalidate in-flight ValueTask tokens
- Publishes _closed = 1
- Cancels any waiting read or flush waiter with OperationCanceledException
- Resets all write buffer state (WriteHead, WriteTail = 0)
- Resets both _readSignal and _flushSignal
- Clears the SPSC receive ring
Clear2() – Fast Reset
- Increments _generation
- Publishes _closed = 1
- Clears the receive ring and resets completion state
- Does not cancel waiters (assumes they’ve already exited)
- Faster than Clear() for hot-path pooling
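To contrast the two resets, here is a toy model: only the method names come from the doc, and the bodies are illustrative (e.g. a string stands in for a parked waiter that Clear() would cancel with OperationCanceledException).

```python
class PooledConn:
    """Toy contrast of Clear() vs Clear2() reset semantics."""
    def __init__(self):
        self.generation = 0
        self.closed = False
        self.recv_ring = [b"old"]
        self.write_head = self.write_tail = 128
        self.read_waiter = "parked"    # stands in for a waiting ReadAsync

    def clear(self):                   # safe reset
        self.generation += 1           # invalidate in-flight tokens
        self.closed = True             # publish _closed = 1
        self.read_waiter = None        # would cancel with OperationCanceledException
        self.write_head = self.write_tail = 0
        self.recv_ring.clear()

    def clear2(self):                  # fast reset: assumes no waiters
        self.generation += 1
        self.closed = True
        self.recv_ring.clear()

a, b = PooledConn(), PooledConn()
a.clear()
b.clear2()
```

The trade-off: Clear2() skips waiter cancellation and write-state bookkeeping, which is safe only when the handler has already exited its read loop, as on the normal pooling path.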
Generation Counter
The _generation counter (incremented on every reuse) serves as the ValueTask token. If a stale ReadAsync() completes after the connection has been reused, GetResult() detects the mismatched token and returns RingSnapshot.Closed() instead of delivering stale data. This prevents use-after-free bugs in the async machinery.
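The guard can be illustrated with a small model of the token check, assuming a simplified stand-in for the ValueTask source (the real type is ManualResetValueTaskSourceCore<RingSnapshot>; names here are illustrative).

```python
class TokenSource:
    """Model of the generation-as-token guard: a read started against one
    generation must not deliver data after the connection is reused."""
    CLOSED = "RingSnapshot.Closed"

    def __init__(self):
        self.generation = 0
        self.result = None

    def start_read(self):
        return self.generation        # token captured by the ValueTask

    def reuse(self):
        self.generation += 1          # Clear()/Clear2() bump this

    def get_result(self, token):
        if token != self.generation:  # stale token: connection was reused
            return self.CLOSED
        return self.result

src = TokenSource()
token = src.start_read()
src.result = "data"
src.reuse()                           # connection recycled mid-read
stale = src.get_result(token)         # -> "RingSnapshot.Closed", not "data"
```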
Connection Object Layout
partial class Connection : IBufferWriter<byte>, IValueTaskSource<RingSnapshot>, IValueTaskSource, IDisposable
{
// Identity
int ClientFd;
Engine.Reactor Reactor;
int _generation;
// Read state
SpscRecvRing _recv; // capacity: 1024
ManualResetValueTaskSourceCore<RingSnapshot> _readSignal;
int _armed, _pending, _closed;
// Write state
byte* WriteBuffer; // 64-byte aligned unmanaged slab
int WriteHead, WriteTail, WriteInFlight;
int SendInflight; // reactor-owned flag
ManualResetValueTaskSourceCore<bool> _flushSignal;
int _flushArmed, _flushInProgress;
}