zerg & terraform

High-performance TCP server frameworks for C# built on Linux io_uring. Direct control over sockets, buffers, and scheduling with no hidden abstractions.

zerg
    dotnet add package zerg
    dotnet add package zerg.core

terraform
    dotnet add package terraform
    dotnet add package zerg.core

Both packages depend on the shared zerg.core library.

Benchmarks

HTTP plaintext benchmark on Docker, 12 reactors.

i9-14900K, 64GB DDR5 6400MHz, Linux 6.17, liburing 2.9. Load tested with gcannon.

Framework            req/s    latency    CPU      vs System.Net.Sockets baseline
zerg                 3.56M    132 us     1192%    +31% throughput, -52% latency, -24% CPU
terraform            3.54M    134 us     1187%    +31% throughput, -51% latency, -24% CPU
System.Net.Sockets   2.71M    274 us     1560%    baseline

zerg vs terraform

Same reactor design, different io_uring backends. Pick the one that fits your constraints.

Feature                           zerg                        terraform
io_uring implementation           Native C shim (liburing)    Pure C#
Native binary required            liburingshim.so             None
Multishot accept/recv             Yes                         Yes
Provided buffer rings             Yes                         Yes
SQPOLL mode                       Yes                         Yes
DEFER_TASKRUN + SINGLE_ISSUER     Yes                         Yes
Incremental buffer consumption    Yes (kernel 6.12+)          No
Per-connection buffer rings       Yes                         No
PipeReader adapter                Yes                         Yes
Stream adapter                    Yes                         Yes
Fully debuggable in C#            No (C shim opaque)          Yes
Minimum kernel                    6.1                         6.1
.NET targets                      8 / 9 / 10                  8 / 9 / 10

Built for extreme throughput

Both frameworks share the same reactor architecture and connection API. Choose the implementation that fits your deployment.

Multishot io_uring
A single SQE arms an accept or recv that keeps delivering completions, dramatically reducing syscall overhead at high connection rates.
Provided buffer rings
Pre-allocated buffer pools let the kernel pick buffers directly, eliminating per-recv allocation and enabling zero-copy reads.
Thread-isolated reactors
Each reactor owns its own io_uring, buffer ring, and connection map. No cross-thread communication on the hot path — each reactor is fully self-contained.
DEFER_TASKRUN
Defers kernel task work to the submitting thread, avoiding cross-thread IPI overhead and keeping all processing on the reactor's core.
SQPOLL mode
Optional kernel-side submission queue polling eliminates the submit syscall entirely for the lowest possible latency.
Incremental buffers (zerg only)
On kernel 6.12+, a single provided buffer can serve multiple recv completions, reducing buffer ring pressure and enabling per-connection buffer isolation.
Three read API levels
From zero-copy ring buffers to PipeReader to BCL Stream — pick the abstraction level that fits your protocol parser.
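The PipeReader level follows the standard System.IO.Pipelines contract, so the read loop can be sketched with BCL types alone. This is an illustrative sketch, not framework code: the Pipe below stands in for a live connection, and ConnectionPipeReader is assumed to behave like any other PipeReader.

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Text;
using System.Threading.Tasks;

public static class PipeReaderSketch
{
    // Counts newline-delimited frames from any PipeReader.
    // ConnectionPipeReader is assumed to honor this same contract.
    public static async Task<int> CountLinesAsync(PipeReader reader)
    {
        int lines = 0;
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Consume every complete line currently buffered.
            while (TryReadLine(ref buffer, out _)) lines++;

            // Report what was consumed and what was examined.
            reader.AdvanceTo(buffer.Start, buffer.End);
            if (result.IsCompleted) break;
        }
        await reader.CompleteAsync();
        return lines;
    }

    public static bool TryReadLine(ref ReadOnlySequence<byte> buffer,
                                   out ReadOnlySequence<byte> line)
    {
        SequencePosition? nl = buffer.PositionOf((byte)'\n');
        if (nl is null) { line = default; return false; }
        line = buffer.Slice(0, nl.Value);
        buffer = buffer.Slice(buffer.GetPosition(1, nl.Value));
        return true;
    }

    public static async Task Main()
    {
        var pipe = new Pipe(); // stand-in for a live connection
        await pipe.Writer.WriteAsync(Encoding.ASCII.GetBytes("PING\nPING\n"));
        await pipe.Writer.CompleteAsync();
        Console.WriteLine(await CountLinesAsync(pipe.Reader)); // prints 2
    }
}
```

The AdvanceTo call is the key to incremental parsing: marking data as examined but not consumed tells the pipe to wait for more bytes before the next ReadAsync returns.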

Same API, two implementations

Both packages share the same connection API via the zerg.core library. Switching between them is a matter of swapping the using directives.

zerg
using zerg;
using zerg.core;
using zerg.Engine;
using zerg.Engine.Configs;

var engine = new Engine(new EngineOptions { Port = 8080, ReactorCount = 1 });
engine.Listen();

while (engine.ServerRunning)
{
    var connection = await engine.AcceptAsync(CancellationToken.None);
    if (connection is null) continue;
    _ = HandleAsync(connection);
}

static async Task HandleAsync(Connection connection)
{
    while (true)
    {
        var result = await connection.ReadAsync();
        if (result.IsClosed) break;

        var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);
        // process rings.ToReadOnlySequence() ...
        rings.ReturnRingBuffers(connection.Reactor);

        connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
        await connection.FlushAsync();
        connection.ResetRead();
    }
}
terraform

using terraform;
using zerg.core;
using terraform.Engine;
using terraform.Engine.Configs;

var engine = new Engine(new EngineOptions { Port = 8080, ReactorCount = 1 });
engine.Listen();

while (engine.ServerRunning)
{
    var connection = await engine.AcceptAsync(CancellationToken.None);
    if (connection is null) continue;
    _ = HandleAsync(connection);
}

static async Task HandleAsync(Connection connection)
{
    while (true)
    {
        var result = await connection.ReadAsync();
        if (result.IsClosed) break;

        var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);
        // process rings.ToReadOnlySequence() ...
        rings.ReturnRingBuffers(connection.Reactor);

        connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
        await connection.FlushAsync();
        connection.ResetRead();
    }
}
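The rings.ToReadOnlySequence() call in the samples above yields a ReadOnlySequence<byte>, which the BCL SequenceReader can parse without copying even when a message straddles several ring buffers. A hedged sketch, using a hand-built two-segment sequence to simulate data split across two buffers:

```csharp
using System;
using System.Buffers;
using System.Text;

// Minimal segment type for stitching buffers into one ReadOnlySequence<byte>,
// simulating what ToReadOnlySequence() produces from multiple ring buffers.
class Segment : ReadOnlySequenceSegment<byte>
{
    public Segment(ReadOnlyMemory<byte> memory) => Memory = memory;
    public Segment Append(ReadOnlyMemory<byte> memory)
    {
        var next = new Segment(memory) { RunningIndex = RunningIndex + Memory.Length };
        Next = next;
        return next;
    }
}

class ZeroCopyParseDemo
{
    static void Main()
    {
        // A request line split across two "ring buffers".
        var first = new Segment(Encoding.ASCII.GetBytes("GET / HT"));
        var last  = first.Append(Encoding.ASCII.GetBytes("TP/1.1\r\n"));
        var seq   = new ReadOnlySequence<byte>(first, 0, last, last.Memory.Length);

        // SequenceReader walks the segments without copying payload bytes.
        var reader = new SequenceReader<byte>(seq);
        if (reader.TryReadTo(out ReadOnlySequence<byte> line, (byte)'\r'))
            Console.WriteLine(Encoding.ASCII.GetString(line.ToArray())); // prints GET / HTTP/1.1
    }
}
```

Only the final ToArray for printing copies; the delimiter scan itself runs over the original segments.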

Tune every knob

Both frameworks expose the same configuration surface for reactor tuning.

var engine = new Engine(new EngineOptions
{
    Port = 8080,
    ReactorCount = 4,
    AcceptorConfig = new AcceptorConfig(IPVersion: IPVersion.IPv6DualStack),
    ReactorConfigs = Enumerable.Range(0, 4).Select(_ => new ReactorConfig(
        RingEntries:       8192,        // io_uring SQ/CQ depth
        RecvBufferSize:    32 * 1024,   // 32 KB per buffer
        BufferRingEntries: 16 * 1024,   // 16K pre-allocated recv buffers
        BatchCqes:         4096,        // max CQEs per loop iteration
        CqTimeout:         1_000_000,   // wait timeout in nanoseconds (1 ms)
        IncrementalBufferConsumption: false // zerg only, kernel 6.12+
    )).ToArray()
});

Read and write at every level

Pick the abstraction that fits your protocol parser.

API                                                          Type           Copy
TryGetRing / RingItem.AsSpan()                               Read           Zero-copy
GetAllSnapshotRingsAsUnmanagedMemory().ToReadOnlySequence()  Read           Zero-copy
ConnectionPipeReader                                         Read           Zero-copy
ConnectionStream                                             Read / Write   One copy
connection.Write(data u8)                                    Write          Direct
IBufferWriter<byte> (GetSpan() / Advance())                  Write          Direct
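The IBufferWriter<byte> write level follows the standard BCL contract: request a span, fill it, then commit with Advance. A sketch under that assumption, with ArrayBufferWriter standing in for the connection's writer:

```csharp
using System;
using System.Buffers;

class BufferWriterDemo
{
    // Writes a payload through the IBufferWriter<byte> contract.
    // The connection's writer is assumed to honor the same contract;
    // ArrayBufferWriter is a stand-in for illustration.
    public static void WriteResponse(IBufferWriter<byte> writer, ReadOnlySpan<byte> payload)
    {
        Span<byte> span = writer.GetSpan(payload.Length); // may return more than requested
        payload.CopyTo(span);
        writer.Advance(payload.Length);                   // commit exactly what was written
    }

    static void Main()
    {
        var buffer = new ArrayBufferWriter<byte>();
        WriteResponse(buffer, "HTTP/1.1 200 OK\r\n\r\n"u8);
        Console.WriteLine(buffer.WrittenCount); // prints 19
    }
}
```

GetSpan is allowed to hand back a larger span than asked for, so always Advance by the byte count actually written, never by the span length.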

Multi-threaded reactor pattern

One acceptor distributes connections round-robin to N reactor threads. No locks on hot paths.

Acceptor Thread (multishot accept ring)
        |  round-robin via lock-free MPSC queues
        +-- Reactor #0: io_uring instance, buffer ring, connection map
        +-- Reactor #1: io_uring instance, buffer ring, connection map
        +-- Reactor #N: io_uring instance, buffer ring, connection map
                |
        zerg (native C shim, liburing)  or  terraform (pure C# direct syscalls)
                |
        Linux Kernel io_uring (SQ / CQ / buffer rings)
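The acceptor-to-reactor handoff can be sketched with System.Threading.Channels: one queue per reactor, written only by the acceptor and drained only by that reactor. This is an illustration of the distribution pattern, not framework code; the real frameworks use their own lock-free MPSC queues.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class RoundRobinDemo
{
    static async Task Main()
    {
        const int reactors = 3;

        // One queue per reactor. SingleReader/SingleWriter match the
        // one-acceptor, one-reactor-per-queue design, letting the channel
        // use its fastest code path. (Stand-in for a lock-free MPSC queue.)
        var queues = new Channel<int>[reactors];
        for (int i = 0; i < reactors; i++)
            queues[i] = Channel.CreateUnbounded<int>(
                new UnboundedChannelOptions { SingleReader = true, SingleWriter = true });

        // Acceptor: distribute accepted "connections" (fds here) round-robin.
        for (int fd = 0; fd < 9; fd++)
            await queues[fd % reactors].Writer.WriteAsync(fd);
        foreach (var q in queues) q.Writer.Complete();

        // Each reactor drains only its own queue: no shared state on the hot path.
        for (int i = 0; i < reactors; i++)
        {
            int count = 0;
            await foreach (int fd in queues[i].Reader.ReadAllAsync()) count++;
            Console.WriteLine($"reactor {i}: {count} connections"); // 3 each
        }
    }
}
```

Because each reactor owns its queue, ring, and connection map outright, the only synchronization point in the whole design is the enqueue itself.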

When to use which

Both frameworks share the same connection API. The difference is how they talk to io_uring.

zerg

Native liburing bindings via a thin C shim. Battle-tested, feature-complete.

  • You want incremental buffer consumption on kernel 6.12+
  • You need per-connection buffer ring isolation
  • You value liburing's years of production hardening
  • Deploying native .so files is not a constraint
  • You're running high-throughput workloads at scale

terraform

Pure C# io_uring with direct syscalls. Zero native dependencies.

  • You want zero native binary dependencies
  • You need full debuggability — step into ring code in C#
  • Your environment restricts native library loading
  • You want a smaller NuGet package footprint
  • You want to audit or extend the io_uring implementation