High-performance TCP server frameworks for C# built on Linux io_uring. Direct control over sockets, buffers, and scheduling with no hidden abstractions.
HTTP plaintext benchmark on Docker, 12 reactors.
i9-14900K, 64GB DDR5 6400MHz, Linux 6.17, liburing 2.9. Load tested with gcannon.
Same reactor design, different io_uring backends. Pick the one that fits your constraints.
Both packages share the same reactor architecture and connection API via the core library, so switching between them is a one-line import change. Choose the implementation that fits your deployment.
```csharp
using zerg;
using zerg.core;
using zerg.Engine;
using zerg.Engine.Configs;

var engine = new Engine(new EngineOptions { Port = 8080, ReactorCount = 1 });
engine.Listen();

while (engine.ServerRunning)
{
    var connection = await engine.AcceptAsync(CancellationToken.None);
    if (connection is null) continue;
    _ = HandleAsync(connection);
}

static async Task HandleAsync(Connection connection)
{
    while (true)
    {
        var result = await connection.ReadAsync();
        if (result.IsClosed) break;

        var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);
        // process rings.ToReadOnlySequence() ...
        rings.ReturnRingBuffers(connection.Reactor);

        connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
        await connection.FlushAsync();
        connection.ResetRead();
    }
}
```
```csharp
using terraform;
using zerg.core;
using terraform.Engine;
using terraform.Engine.Configs;

var engine = new Engine(new EngineOptions { Port = 8080, ReactorCount = 1 });
engine.Listen();

while (engine.ServerRunning)
{
    var connection = await engine.AcceptAsync(CancellationToken.None);
    if (connection is null) continue;
    _ = HandleAsync(connection);
}

static async Task HandleAsync(Connection connection)
{
    while (true)
    {
        var result = await connection.ReadAsync();
        if (result.IsClosed) break;

        var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);
        // process rings.ToReadOnlySequence() ...
        rings.ReturnRingBuffers(connection.Reactor);

        connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
        await connection.FlushAsync();
        connection.ResetRead();
    }
}
```
Both frameworks expose the same configuration surface for reactor tuning.
```csharp
var engine = new Engine(new EngineOptions
{
    Port = 8080,
    ReactorCount = 4,
    AcceptorConfig = new AcceptorConfig(IPVersion: IPVersion.IPv6DualStack),
    ReactorConfigs = Enumerable.Range(0, 4).Select(_ => new ReactorConfig(
        RingEntries: 8192,                  // io_uring SQ/CQ depth
        RecvBufferSize: 32 * 1024,          // 32KB per buffer
        BufferRingEntries: 16 * 1024,       // 16K pre-allocated recv buffers
        BatchCqes: 4096,                    // max CQEs per loop iteration
        CqTimeout: 1_000_000,               // 1ms wait timeout (nanoseconds)
        IncrementalBufferConsumption: false // zerg only, kernel 6.12+
    )).ToArray()
});
```
Pick the abstraction that fits your protocol parser.
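For instance, a delimiter-framed protocol can be parsed over the `ReadOnlySequence<byte>` view of the snapshot rings using the standard `SequenceReader` from `System.Buffers`. This is a sketch, not framework code: `processFrame` is a hypothetical callback, and the rings must still be returned to the reactor afterward, as in the examples above.

```csharp
using System;
using System.Buffers;

static class FrameParser
{
    // Sketch: consume newline-delimited frames from the receive rings.
    // `sequence` would come from rings.ToReadOnlySequence().
    public static void ParseFrames(
        ReadOnlySequence<byte> sequence,
        Action<ReadOnlySequence<byte>> processFrame)
    {
        var reader = new SequenceReader<byte>(sequence);
        while (reader.TryReadTo(out ReadOnlySequence<byte> frame, (byte)'\n'))
        {
            // `frame` borrows the ring memory; copy it out if it must outlive
            // the buffers, which go back to the reactor via ReturnRingBuffers.
            processFrame(frame);
        }
        // Bytes after the last '\n' remain unconsumed; a real parser would
        // carry them over to the next read.
    }
}
```

Because `ReadOnlySequence<byte>` is non-contiguous, this approach parses across ring-buffer boundaries without copying the payload first.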
One acceptor distributes connections round-robin to N reactor threads. No locks on hot paths.
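The handoff can be pictured roughly as follows. This is an illustrative sketch of the round-robin idea, not the framework's actual internals; `AcceptSocket` and `Reactor.Enqueue` are hypothetical names.

```csharp
// Hypothetical sketch of the acceptor loop: one thread accepts, then hands
// each new socket to the next reactor in round-robin order. Because each
// reactor owns its own handoff queue, no lock is shared between reactors.
int next = 0;
while (running)
{
    int clientFd = AcceptSocket(listenFd);     // blocking accept(2)
    reactors[next].Enqueue(clientFd);          // per-reactor handoff queue
    next = (next + 1) % reactors.Length;       // round-robin across N reactors
}
```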
The two packages differ only in how they talk to io_uring.
Native liburing bindings via a thin C shim. Battle-tested, feature-complete.
Pure C# io_uring with direct syscalls. Zero native dependencies.
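To illustrate what "direct syscalls" means here (a sketch of the general technique, not the package's actual internals): a managed runtime can enter io_uring by P/Invoking libc's `syscall(2)` wrapper directly, with no liburing or C shim in between. The struct layout is abbreviated; `io_uring_params` is 120 bytes on x86-64.

```csharp
using System;
using System.Runtime.InteropServices;

static class UringSys
{
    private const long SYS_io_uring_setup = 425;   // x86-64 syscall number

    [DllImport("libc", SetLastError = true, EntryPoint = "syscall")]
    private static extern long Syscall(long number, uint entries, IntPtr paramsPtr);

    // Create a ring with `entries` SQEs. paramsPtr points to a zeroed
    // 120-byte io_uring_params buffer that the kernel fills in with ring
    // sizes and mmap offsets; the return value is the ring file descriptor.
    public static int Setup(uint entries, IntPtr paramsPtr)
        => (int)Syscall(SYS_io_uring_setup, entries, paramsPtr);
}
```

The SQ and CQ rings are then mapped into the process with `mmap` and driven the same way liburing drives them, just from managed code.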