263.1ns
Full Pipeline Tick
68.4ns
STUN Parse
3.63M
Ops/Sec (64 vCPUs)
0 GC
Garbage Collections

What Is a Pipeline Tick?

A pipeline tick is the atomic unit of work in V100's real-time video infrastructure. It represents the complete processing of a single protocol operation from the moment raw bytes arrive at the network buffer to the moment the response bytes are ready to send. In a running video call, thousands of pipeline ticks execute every second.

The pipeline is not a single function call. It is a sequence of discrete stages, each with its own latency budget. The 263.1ns total is the sum of these stages, measured end-to-end on our Graviton4 benchmark hardware (AWS c8g.16xlarge, 64 vCPUs).

To understand why 263 nanoseconds is significant, consider some reference points. A single L3 cache miss on modern hardware costs 30-50ns. A system call to the kernel costs 100-200ns. A mutex lock/unlock cycle costs 20-50ns. Our entire pipeline tick — including all meaningful work of parsing, validating, looking up state, constructing a response, and serializing it — fits within the latency budget of roughly five to nine cache misses.

Anatomy of 263 Nanoseconds

Here is what happens during a full pipeline tick, using a STUN binding request as the reference operation (the most common operation in any WebRTC call):

Stage 1: Packet Parse ................................ ~68ns
  Read STUN header (20 bytes): message type, length, transaction ID
  Validate magic cookie (0x2112A442)
  Parse attributes: XOR-MAPPED-ADDRESS, MESSAGE-INTEGRITY, FINGERPRINT

Stage 2: Validation .................................. ~35ns
  Check message type against expected request/response
  Verify transaction ID format
  Validate attribute lengths against message length

Stage 3: State Lookup ................................ ~40ns
  Lock-free hash map lookup for session state
  Retrieve peer address mapping
  Check allocation expiry (TURN only)

Stage 4: Response Build .............................. ~85ns
  Construct STUN success response header
  Encode XOR-MAPPED-ADDRESS (34.5ns IPv4 / 125.8ns IPv6)
  Add SOFTWARE attribute
  Calculate and append FINGERPRINT (CRC-32)

Stage 5: Serialize .................................. ~35ns
  Write response bytes to send buffer
  Update connection metrics
  Return control to event loop

TOTAL ................................................ 263.1ns
            

The breakdown reveals where time is spent. Response construction dominates at approximately 85 nanoseconds — building the response packet, encoding the XOR-mapped address, and computing the CRC-32 fingerprint. Packet parsing is second at 68 nanoseconds. Validation, state lookup, and serialization each consume approximately 35-40 nanoseconds.
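The five stages map naturally onto a single function. Here is a minimal sketch of that shape — every type, stage body, and signature below is an illustrative stub, not V100's actual code:

```rust
// Hypothetical sketch of a five-stage pipeline tick. All types and
// stage bodies are illustrative stand-ins, not V100's implementation.
struct Msg<'a> {
    bytes: &'a [u8], // zero-copy view into the receive buffer
}
struct Session {
    peer: [u8; 4],
}

fn parse(buf: &[u8]) -> Result<Msg<'_>, &'static str> {
    // Stage 1: a STUN header is 20 bytes; anything shorter is rejected.
    if buf.len() >= 20 { Ok(Msg { bytes: buf }) } else { Err("short packet") }
}

fn validate(_msg: &Msg) -> Result<(), &'static str> {
    // Stage 2: message-type, transaction-ID, and length checks go here.
    Ok(())
}

fn lookup(_msg: &Msg) -> Result<Session, &'static str> {
    // Stage 3: session-state lookup goes here.
    Ok(Session { peer: [127, 0, 0, 1] })
}

fn build_response(msg: &Msg, _session: &Session) -> Vec<u8> {
    // Stage 4: response header + attributes; stubbed as an echo of the header.
    msg.bytes[..20].to_vec()
}

fn pipeline_tick(buf: &[u8], out: &mut Vec<u8>) -> Result<(), &'static str> {
    let msg = parse(buf)?;                     // Stage 1: ~68ns
    validate(&msg)?;                           // Stage 2: ~35ns
    let session = lookup(&msg)?;               // Stage 3: ~40ns
    let resp = build_response(&msg, &session); // Stage 4: ~85ns
    out.extend_from_slice(&resp);              // Stage 5: serialize, ~35ns
    Ok(())
}

fn main() {
    let packet = [0u8; 28];
    let mut out = Vec::new();
    pipeline_tick(&packet, &mut out).unwrap();
    println!("response bytes ready: {}", out.len());
}
```

Note the `?` operator: a failure at any stage short-circuits the tick, so malformed packets exit the pipeline as early as possible.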

Why XOR-Mapped Address Matters

The XOR-mapped address encoding is one of the most frequent operations in WebRTC. Every STUN binding response includes it. For IPv4, V100 encodes this in 34.5 nanoseconds. For IPv6, it takes 125.8 nanoseconds — roughly 3.6x longer because IPv6 addresses are 128 bits vs. 32 bits, and the XOR operation involves the full transaction ID in addition to the magic cookie.
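The encoding itself is plain XOR arithmetic; the length of the XOR key is what drives the IPv4/IPv6 cost difference. A simplified sketch of the RFC 5389 rules (not V100's optimized implementation):

```rust
// XOR-MAPPED-ADDRESS encoding per RFC 5389 — simplified sketch,
// not V100's production code.
const MAGIC_COOKIE: u32 = 0x2112_A442;

/// IPv4: port XORed with the top 16 bits of the magic cookie,
/// address XORed with the full 32-bit cookie.
fn xor_map_ipv4(addr: [u8; 4], port: u16) -> ([u8; 4], u16) {
    let x_port = port ^ (MAGIC_COOKIE >> 16) as u16;
    let cookie = MAGIC_COOKIE.to_be_bytes();
    let mut x_addr = [0u8; 4];
    for i in 0..4 {
        x_addr[i] = addr[i] ^ cookie[i];
    }
    (x_addr, x_port)
}

/// IPv6: the XOR key is 16 bytes — the magic cookie followed by the
/// 12-byte transaction ID — hence the extra work versus IPv4.
fn xor_map_ipv6(addr: [u8; 16], port: u16, txn_id: [u8; 12]) -> ([u8; 16], u16) {
    let x_port = port ^ (MAGIC_COOKIE >> 16) as u16;
    let mut key = [0u8; 16];
    key[..4].copy_from_slice(&MAGIC_COOKIE.to_be_bytes());
    key[4..].copy_from_slice(&txn_id);
    let mut x_addr = [0u8; 16];
    for i in 0..16 {
        x_addr[i] = addr[i] ^ key[i];
    }
    (x_addr, x_port)
}

fn main() {
    let (xa, xp) = xor_map_ipv4([203, 0, 113, 7], 3478);
    println!("x-addr {:?}, x-port {:#06x}", xa, xp);
}
```

XOR is its own inverse, so the same function decodes: applying it twice returns the original address and port.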

These numbers matter because in a large-scale deployment, XOR-mapped address encoding is called millions of times per second across all active sessions. A 10x difference in this single operation can shift the scaling characteristics of the entire TURN infrastructure.

Why Rust Makes This Possible

The 263ns pipeline tick is a direct result of language-level properties that Rust provides and that garbage-collected languages cannot match.

Zero-Copy Parsing

V100's STUN parser operates directly on the incoming byte buffer. It does not allocate an intermediate representation of the message. The parser returns references (borrows) into the original buffer, which means the parsed message occupies zero additional heap memory. In Go, Java, or JavaScript, the parser would typically allocate a struct/object and copy the relevant bytes into it. That allocation and copy are invisible in profiles but add 50-100ns per message.
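In Rust terms, the idea looks like this — a header view whose transaction ID is a borrow into the receive buffer rather than a copy (illustrative sketch, not V100's parser):

```rust
// Zero-copy STUN header view: the transaction ID borrows from the
// receive buffer instead of being copied. Illustrative sketch only.
struct StunHeader<'a> {
    msg_type: u16,
    msg_len: u16,
    transaction_id: &'a [u8], // borrowed slice, zero additional heap
}

fn parse_header(buf: &[u8]) -> Option<StunHeader<'_>> {
    if buf.len() < 20 {
        return None; // a STUN header is exactly 20 bytes
    }
    let magic = u32::from_be_bytes([buf[4], buf[5], buf[6], buf[7]]);
    if magic != 0x2112_A442 {
        return None; // magic cookie check (RFC 5389)
    }
    Some(StunHeader {
        msg_type: u16::from_be_bytes([buf[0], buf[1]]),
        msg_len: u16::from_be_bytes([buf[2], buf[3]]),
        transaction_id: &buf[8..20],
    })
}

fn main() {
    let mut pkt = [0u8; 20];
    pkt[1] = 0x01; // Binding Request
    pkt[4..8].copy_from_slice(&0x2112_A442u32.to_be_bytes());
    let hdr = parse_header(&pkt).expect("valid header");
    println!("type {:#06x}, len {}, txn {:?}",
             hdr.msg_type, hdr.msg_len, hdr.transaction_id);
}
```

The borrow checker guarantees the view cannot outlive the buffer it points into, which is what makes this safe without copying.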

No Garbage Collection Pauses

Rust has no garbage collector. Memory is freed deterministically when values go out of scope. This means there are no GC pauses — ever. Go's GC pauses are typically 100 microseconds to several milliseconds. Java's ZGC pauses are sub-millisecond but still non-zero. For a 263ns pipeline tick, even a 100-microsecond GC pause would stall 380 operations.

This is not a theoretical concern. At 3.63 million operations per second, a 1ms GC pause means 3,630 operations are delayed. If those operations are STUN keepalives, participants experience a connectivity hiccup. If they are TURN relay forwards, media packets are delayed. GC pauses in the media relay path are audible and visible to users.
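The "freed deterministically" claim is observable in miniature. In this sketch (illustrative only, using a thread-local flag to expose the moment of deallocation), the buffer is released at a compile-time-known scope exit rather than at some future GC cycle:

```rust
use std::cell::Cell;

thread_local! {
    // Illustrative flag so the exact moment of deallocation is observable.
    static FREED: Cell<bool> = Cell::new(false);
}

struct PacketBuf(Vec<u8>);

impl Drop for PacketBuf {
    fn drop(&mut self) {
        // Runs synchronously at scope exit — the deterministic free point.
        FREED.with(|f| f.set(true));
    }
}

fn handle_packet() {
    let buf = PacketBuf(vec![0u8; 1500]);
    assert!(!FREED.with(|f| f.get())); // still alive inside the scope
    let _ = buf.0.len();
} // `buf` dropped here; no pause, no collector thread

fn main() {
    handle_packet();
    assert!(FREED.with(|f| f.get())); // freed before handle_packet returned
    println!("buffer freed deterministically at scope exit");
}
```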

Fearless Concurrency

V100's pipeline runs across 64 vCPUs on Graviton4. Each worker processes operations independently using lock-free data structures for shared state. Rust's ownership system guarantees at compile time that no data races exist. This means we can use aggressive concurrency without the defensive locking that other languages require. Defensive locking adds latency; lock-free structures do not.

The lock-free advantage: V100 uses a concurrent hash map (DashMap) for session state. Lookups take approximately 40ns without acquiring any mutex. A mutex-guarded HashMap would add 20-50ns for lock acquisition and release, nearly doubling the state lookup stage.
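V100 uses the DashMap crate directly; to keep this sketch dependency-free, here is the underlying sharding idea with std types only — split the map into independently locked shards so operations on different keys rarely contend. DashMap is a more sophisticated production version of this pattern:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

// Dependency-free sketch of a sharded concurrent map: N independently
// locked shards, so concurrent operations on different keys rarely
// touch the same lock. Not DashMap's actual internals.
struct ShardedMap {
    shards: Vec<RwLock<HashMap<u64, String>>>,
}

impl ShardedMap {
    fn new(num_shards: usize) -> Self {
        Self {
            shards: (0..num_shards).map(|_| RwLock::new(HashMap::new())).collect(),
        }
    }

    fn shard_for(&self, key: u64) -> &RwLock<HashMap<u64, String>> {
        // Hash the key to pick a shard; contention scales down with shard count.
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) % self.shards.len()]
    }

    fn insert(&self, key: u64, value: String) {
        self.shard_for(key).write().unwrap().insert(key, value);
    }

    fn get(&self, key: u64) -> Option<String> {
        // Read lock only, so concurrent lookups on one shard still proceed.
        self.shard_for(key).read().unwrap().get(&key).cloned()
    }
}

fn main() {
    let sessions = ShardedMap::new(64);
    sessions.insert(42, "peer 203.0.113.7:3478".to_string());
    println!("{:?}", sessions.get(42));
}
```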

Benchmark Methodology

We are precise about how these numbers were obtained because benchmarking at the nanosecond scale is full of pitfalls. Measurement overhead, CPU frequency scaling, branch predictor warming, and cache effects can all distort results.

Hardware

Parameter      Value
Instance       AWS c8g.16xlarge
Processor      AWS Graviton4 (ARM Neoverse V2)
vCPUs          64
Architecture   AArch64
OS             Amazon Linux 2023
Compiler       rustc (stable), LTO enabled, codegen-units=1

Methodology

  1. Criterion.rs for statistical benchmarking with automatic outlier detection
  2. 5-second warmup to stabilize CPU frequency and branch predictors
  3. Minimum 100 iterations per benchmark
  4. 95% confidence intervals reported for all measurements
  5. Isolated benchmarks: each operation measured independently to avoid pipeline effects
  6. Throughput benchmarks: mixed workloads to simulate production traffic patterns
  7. Dedicated instance: no other workloads running during benchmarks
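The warmup-then-measure loop that Criterion.rs automates (adding outlier detection and confidence intervals on top) looks roughly like this dependency-free sketch:

```rust
use std::time::Instant;

/// Naive warmup-then-measure loop. Criterion.rs automates this pattern
/// and layers statistical outlier detection and confidence intervals on top.
fn bench<F: FnMut()>(mut op: F, warmup_iters: u32, measured_iters: u32) -> f64 {
    // Warmup: stabilize caches, branch predictors, and CPU frequency.
    for _ in 0..warmup_iters {
        op();
    }
    // Measure: amortize timer overhead across many iterations.
    let start = Instant::now();
    for _ in 0..measured_iters {
        op();
    }
    start.elapsed().as_nanos() as f64 / measured_iters as f64
}

fn main() {
    let mut acc = 0u64;
    let ns = bench(|| acc = acc.wrapping_add(1), 1_000, 1_000_000);
    println!("~{:.2}ns per op (acc = {})", ns, acc);
}
```

A real harness also needs `std::hint::black_box` around the measured work so the optimizer cannot delete it; Criterion handles this for you.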

Why Graviton4

We chose Graviton4 specifically because ARM Neoverse V2 cores do not have turbo boost. x86 processors (Intel and AMD) dynamically adjust clock frequency based on thermal conditions and workload. This makes nanosecond-level benchmarks on x86 unreliable unless you pin the frequency — and pinning the frequency does not reflect production conditions. Graviton4 runs at a consistent clock speed, which means benchmark results match production performance.

The Full Benchmark Table

Operation                     Latency    Notes
XOR Mapped Address (IPv4)     34.5ns     32-bit XOR with magic cookie
STUN Binding Parse            68.4ns     Zero-copy, no heap allocation
XOR Mapped Address (IPv6)     125.8ns    128-bit XOR with cookie + transaction ID
Full Pipeline Tick            263.1ns    End-to-end, parse through serialize
TURN Channel Binding          526.9ns    Includes state allocation
STUN Integrity (HMAC-SHA1)    664.2ns    Cryptographic, dominates TURN auth
TURN Credential Validation    863.0ns    Full long-term credential check

Notice the pattern. The pure protocol operations (parsing, address encoding) are in the 30-130ns range. The cryptographic operations (HMAC-SHA1, credential validation) are in the 500-900ns range. Cryptography dominates — which is expected, because HMAC-SHA1 involves a keyed hash computation that is inherently more expensive than byte manipulation. The pipeline tick at 263.1ns represents the non-cryptographic fast path, which is the majority of operations in a running call.

Scaling to 3.63 Million Ops/Sec

Single-operation latency is one metric. Sustained throughput under contention is another. V100 achieves 3.63 million operations per second on 64 vCPUs, with a pipeline throughput of 3.61 million ops/sec. The gap between raw operation throughput and pipeline throughput is less than 1%, which indicates minimal overhead from the event loop, task scheduling, and inter-worker coordination.

The per-operation latency at sustained throughput is 0.3 microseconds (300 nanoseconds), which is consistent with the isolated benchmark of 263.1ns plus a small amount of scheduling overhead. This means performance does not degrade significantly under load — a property that garbage-collected runtimes cannot guarantee because GC pressure increases with allocation rate, and allocation rate increases with throughput.

Putting 263 Nanoseconds in Context

For context, here are some common operations and how long they take on modern hardware:

Operation                Typical Latency    V100 Pipeline Ticks
L1 cache hit             ~1ns               0.004 ticks
L3 cache miss            ~40ns              0.15 ticks
Mutex lock/unlock        ~25ns              0.1 ticks
System call              ~150ns             0.57 ticks
V100 pipeline tick       263ns              1 tick
Go GC pause (typical)    ~100,000ns         380 ticks
DNS lookup               ~10,000,000ns      38,023 ticks
Twilio API response      ~50,000,000ns      190,114 ticks

In the time it takes for a single Twilio API response (~50ms), V100 can process approximately 190,000 pipeline ticks. In the time it takes for a Go garbage collection pause (~100 microseconds), V100 processes 380 operations. These comparisons are not entirely fair — an API response and a pipeline tick are different things — but they illustrate the scale difference between millisecond-world and nanosecond-world.

What We Optimize Next

The 263ns pipeline tick is our current production baseline. We have identified additional optimizations that we have not yet shipped.

We publish benchmark results for each optimization only after it ships to production and is verified under sustained load. Micro-benchmark improvements that do not translate to production throughput gains are not counted.

What we have not measured: End-to-end latency including network transit, client-side encoding/decoding, and rendering. The 263ns pipeline tick is server-side only. Total call latency includes network hops, jitter buffers, and client processing — all of which are orders of magnitude larger than our server-side overhead.

Conclusion

A 263-nanosecond pipeline tick means V100's protocol processing is effectively invisible in the latency stack of a real-time video call. The server-side overhead is less than a microsecond. The latency users experience is dominated by physics (speed of light across fiber), network conditions (congestion, jitter), and client-side processing (encoding, rendering) — not by the video infrastructure itself.

This is what "lowest latency" actually means. Not a marketing claim about API response times, but a measured, reproducible, nanosecond-resolution benchmark of the operations that keep your video calls alive. 68.4ns to parse a STUN binding. 34.5ns to encode an IPv4 address. 263.1ns for a full pipeline tick. 3.63 million operations per second sustained on 64 vCPUs. All 542 tests passing.

The numbers are real. The benchmarks are reproducible. The architecture is 20 Rust microservices with zero Node.js, running on AWS Graviton4 hardware. Read more about how we compare to other WebRTC servers in our 2026 WebRTC server comparison or see the full latency analysis.

Build on Sub-Microsecond Infrastructure

V100's 263ns pipeline tick means your video infrastructure is never the bottleneck. Start building today.

Get Started with V100