When you integrate a video API, you are trusting that platform with your customers' video calls, recordings, personal data, and in many cases, protected health information or legal proceedings. You are trusting that the STUN/TURN server correctly implements RFC 5389 and RFC 5766. You are trusting that the encryption does not have edge cases that leak plaintext. You are trusting that the rate limiter actually limits rates and does not fail open under load. You are trusting that the recording system produces complete, uncorrupted files. You are trusting all of this because the vendor told you it works.
The question you should ask any video API vendor is: how do you verify that it works? How many tests do you run? How often do they pass? What do they cover? Every enterprise software vendor claims their product is "enterprise-grade" and "production-ready." Very few can back those claims with numbers.
V100 runs 938 tests across 5 Rust crates and JavaScript tests. Every test passes on every commit. Zero failures. Zero flaky tests. Zero skipped tests. This post is a full breakdown of what those 938 tests cover, why each category exists, and what it means for customers who are deciding whether to trust V100 with their video infrastructure.
The Full Breakdown: 938 Tests Across 5 Crates
| Crate / Module | Category | Count | Status |
|---|---|---|---|
| v100-turn | Library (unit) | 606 | Pass |
| v100-turn | Integration | 134 | Pass |
| v100-turn | Latency verification | 10 | Pass |
| v100-turn | RFC compliance (5389/5766/8489) | 108 | Pass |
| gateway | Library (unit) | 28 | Pass |
| gateway | Integration | 74 | Pass |
| gateway | Other (middleware, routing) | 32 | Pass |
| meeting-signaling | WebSocket + room management | 37 | Pass |
| security | JWT / SSRF / XSS | 13 | Pass |
| HIPAA compliance | Encryption, audit, access control | 14 | Pass |
| Total | 938 | All Pass | |
v100-turn: 858 Tests for the Media Server
The v100-turn crate is V100's Rust-native STUN/TURN media server — the component that handles all real-time media relay, NAT traversal, and WebRTC connectivity. It is the largest and most critical component in the stack, and it has the most extensive test suite: 858 tests across four categories.
606 Library Tests
The unit test suite covers every public function, every data structure, and every code path in the v100-turn library. STUN message encoding and decoding. TURN allocation state machines. Channel binding lifecycle. Permission management. Nonce generation and validation. Credential verification. Each test verifies a single behavior in isolation, with no external dependencies, no network calls, and no shared state. These tests run in under 2 seconds on a laptop.
The 606 unit tests include boundary tests for every buffer size, overflow tests for every counter, and edge case tests for every parser. A STUN message with a zero-length attribute. A TURN allocation request with the maximum number of permissions. A channel binding with a port number of 0. These are the tests that catch the bugs that only appear at 3 AM on the busiest day of the year when a client sends a malformed packet.
134 Integration Tests
Integration tests verify that components work together correctly. A STUN binding request goes through the full pipeline: parse, authenticate, process, encode, respond. A TURN allocation creates the state, assigns ports, and can relay data. A channel binding goes through the full lifecycle: create, use, refresh, expire. These tests spin up actual server instances, send real UDP packets, and verify real responses. They test the system as a customer would use it, not as a developer would unit-test it.
10 Latency Verification Tests
Latency tests verify that V100's performance claims are not regressions. Each test measures the processing time for a specific operation and asserts that it completes within a defined threshold. If a code change causes STUN message processing to exceed its latency budget, the test fails and the commit is blocked. These tests are the automated equivalent of running a benchmark on every commit — they ensure that performance optimizations are never accidentally undone.
108 RFC Compliance Tests
RFC compliance tests verify that V100's STUN/TURN implementation conforms to the relevant IETF standards: RFC 5389 (STUN), RFC 5766 (TURN), and RFC 8489 (STUN Extensions). Each test maps to a specific section or requirement in the corresponding RFC.
These tests cover: message type encoding (MUST use the magic cookie 0x2112A4BC), attribute padding (MUST pad to 4-byte boundaries), fingerprint calculation (MUST be CRC-32 of the message), integrity verification (MUST use HMAC-SHA1 over the message), error response codes (MUST return 401 for missing credentials), allocation lifetime (MUST default to 600 seconds), channel number ranges (MUST be 0x4000-0x7FFE), and dozens of other protocol requirements.
RFC compliance is not optional for a media server. A STUN/TURN server that deviates from the RFCs will fail with certain client implementations, certain NAT types, or certain network configurations. The 108 compliance tests ensure that V100 interoperates correctly with every WebRTC client, every browser, and every network topology.
Gateway: 134 Tests for the API Layer
The gateway crate is V100's 10-microsecond API gateway. It handles authentication, rate limiting, routing, middleware, and response handling for every API call. The 134 tests cover three categories.
28 library tests verify the core data structures: API key parsing, rate limit bucket logic, cache key generation, and request/response serialization. 74 integration tests spin up the full gateway and send HTTP requests through the complete middleware chain: authentication, rate limiting, routing, handler execution, and response. They verify correct status codes, headers (including Server-Timing), error messages, and body content. 32 middleware tests verify specific middleware behaviors: CORS headers, request body size limits, content-type validation, request coalescing, and Cachee cache hit/miss behavior.
The gateway tests include adversarial cases that simulate real attack patterns: API keys with SQL injection payloads, request bodies exceeding the size limit, concurrent requests that should trigger rate limiting, and malformed JSON that should return 400 errors. These are not theoretical attack patterns. They are derived from real-world attack traffic observed on production APIs.
Meeting Signaling: 37 Tests for Real-Time Rooms
The meeting-signaling crate manages WebSocket connections, room state, participant lifecycle, and SFU (Selective Forwarding Unit) coordination for live video meetings. The 37 tests cover room creation and deletion, participant join and leave, WebSocket message routing, SFU track subscription and publication, screen share toggling, recording state management, and room capacity enforcement (up to 200 participants).
These tests are particularly important because meeting signaling is inherently concurrent. Multiple participants join simultaneously. Messages arrive out of order. WebSocket connections drop and reconnect. The test suite verifies correct behavior under concurrent mutations: two participants joining the same room at the same instant, a participant leaving while a recording is starting, a screen share beginning while another participant is muting their camera. Every race condition that would cause a bug in production is modeled as a test case.
Security: 13 Tests for JWT, SSRF, and XSS
The security test suite is small in count but critical in scope. These 13 tests verify that V100 is not vulnerable to the most common web application attacks.
JWT tests: expired tokens are rejected, tokens with invalid signatures are rejected, tokens with the "none" algorithm are rejected (a classic JWT bypass), tokens issued by untrusted issuers are rejected, and tokens with missing claims are rejected. SSRF tests: webhook callback URLs pointing to internal IP addresses (127.0.0.1, 10.x.x.x, 169.254.169.254) are blocked, preventing server-side request forgery against the cloud metadata service. XSS tests: user-provided content (room names, participant display names) is sanitized before rendering, preventing cross-site scripting in the meeting UI.
The security tests also cover post-quantum cryptography edge cases: empty messages signed with ML-DSA-65, credentials up to 64KB, bit-flipped Dilithium signatures that must be rejected, and Kyber ciphertexts with corrupted bytes. These tests ensure that the PQ implementation does not have the kind of subtle edge cases that break cryptographic security — accepting a malformed signature, leaking timing information, or producing different behavior for valid and invalid inputs.
HIPAA Compliance: 14 Tests for Healthcare Video
V100 is designed for telehealth and healthcare video. The 14 HIPAA compliance tests verify the technical safeguards required by the Health Insurance Portability and Accountability Act: encryption of protected health information (PHI) in transit and at rest, audit logging of all access to recordings and transcripts, access control enforcement (participants can only access rooms they are authorized for), automatic session timeout, and secure deletion of recordings when requested.
These tests do not verify HIPAA compliance by themselves — HIPAA compliance also requires administrative safeguards, policies, and a BAA (Business Associate Agreement). But they verify the technical controls that a HIPAA-compliant deployment requires. A telehealth platform built on V100 can rely on these tests as evidence that the video infrastructure layer meets HIPAA's technical requirements.
The Philosophy: No Stub Passes, No Mocked Everything
A test suite is only as valuable as the discipline behind it. V100's testing philosophy can be summarized in three rules that we enforce on every commit.
Rule 1: Every feature has tests before production. No code ships to production without corresponding tests. This is not aspirational. It is enforced by CI: a pull request that adds a feature without adding tests will not merge. The test must verify the feature's behavior, not just that the code compiles.
Rule 2: No stub passes. A test that passes by mocking everything and asserting nothing is worse than no test at all. It creates the illusion of coverage while verifying nothing. Every V100 test asserts a specific, observable behavior. If a test cannot assert something meaningful, it is deleted.
Rule 3: Zero tolerance for flaky tests. A flaky test — one that sometimes passes and sometimes fails without code changes — destroys trust in the entire suite. If developers learn to ignore test failures because "that one is flaky," they will also ignore real failures. V100 has zero flaky tests. When a test becomes flaky (usually due to timing-dependent assertions), it is either fixed immediately or replaced with a deterministic alternative.
Continuous Verification: Every Commit, Every Push
The 938 tests run on every commit via cargo test. The full suite completes in under 30 seconds on a development machine and under 15 seconds in CI. This speed matters: if the test suite took 10 minutes, developers would skip it locally and only run it in CI, creating a feedback loop that is too slow to be useful.
In addition to the test suite, V100 runs a benchmark suite on Graviton4 hardware (AWS c8g.metal-48xl) that measures throughput, latency percentiles, and resource utilization under sustained load. The benchmarks are not part of the 938-test count because they are not pass/fail assertions — they are measurements. But they serve a similar purpose: if a code change causes a performance regression, the benchmark results make it visible before it reaches production.
# Run the full V100 test suite
$ cargo test --workspace
Compiling v100-turn v0.1.0
Compiling v100-gateway v0.1.0
Compiling meeting-signaling v0.1.0
Running unittests (v100-turn)
test result: ok. 606 passed; 0 failed; 0 ignored
Running tests/integration (v100-turn)
test result: ok. 134 passed; 0 failed; 0 ignored
Running tests/latency (v100-turn)
test result: ok. 10 passed; 0 failed; 0 ignored
Running tests/rfc_compliance (v100-turn)
test result: ok. 108 passed; 0 failed; 0 ignored
Running unittests (v100-gateway)
test result: ok. 28 passed; 0 failed; 0 ignored
Running tests/integration (v100-gateway)
test result: ok. 74 passed; 0 failed; 0 ignored
Running tests/middleware (v100-gateway)
test result: ok. 32 passed; 0 failed; 0 ignored
Running unittests (meeting-signaling)
test result: ok. 37 passed; 0 failed; 0 ignored
Running tests/security
test result: ok. 13 passed; 0 failed; 0 ignored
Running tests/hipaa
test result: ok. 14 passed; 0 failed; 0 ignored
Total: 938 passed, 0 failed, 0 ignored
Finished in 14.2s
The Server-Timing Header: Every Response Carries Proof
Testing verifies that V100 works correctly before deployment. The Server-Timing header verifies that it works correctly in production, on every single response. This is the companion to testing: tests catch bugs before they ship, and Server-Timing catches performance regressions after they ship.
Every V100 API response includes Server-Timing: total;dur=0.01, showing the actual server processing time in milliseconds. This number is not an average. It is not sampled. It is the measured time for that specific request, on that specific server, at that specific moment. Any V100 customer can verify it. Any competitor can challenge it. The number is on every response.
V100 also exposes a live /latency endpoint that returns real-time percentile metrics: p50, p95, and p99 latency, updated continuously. Combined with the published benchmarks page, this creates a level of transparency that no other video API offers.
What Competitors Publish: Nothing
We searched the documentation, engineering blogs, and public communications of every major video API and video conferencing platform for any disclosure of test suite size, test coverage, or test results. The findings are uniform.
| Vendor | Published Test Count | Published Benchmarks | Server-Timing Header |
|---|---|---|---|
| V100 | 938 tests, 0 failures | Yes (full page) | Yes (every response) |
| Twilio Video | Not published | Not published | No |
| Zoom SDK | Not published | Not published | No |
| Daily | Not published | Not published | No |
| LiveKit | Not published | Limited (open source) | No |
| Agora | Not published | Not published | No |
| Mux | Not published | Not published | No |
| 100ms | Not published | Not published | No |
The absence of published test data does not mean these vendors do not test their products. They almost certainly do. But the refusal to publish test counts or benchmark results means that enterprise buyers are making trust decisions based on marketing claims rather than engineering evidence. V100 publishes the evidence because we believe enterprise buyers deserve it.
Enterprise Trust: Tests + Benchmarks + PQ Crypto
Enterprise trust is not built by a sales pitch. It is built by verifiable evidence. V100's trust stack consists of three layers: 938 tests that verify correctness, published benchmarks that verify performance, and post-quantum cryptography that verifies security. Each layer is independently verifiable by the customer.
The test suite proves that V100 handles edge cases correctly. The Server-Timing header proves that V100 performs as claimed on every request. The PQ-E2E badge proves that V100 encrypts meetings with quantum-safe algorithms. Together, they create the most transparent video API in the market — not because we claim to be transparent, but because every claim is backed by a number that anyone can verify.
Build on the most tested video API
938 tests, zero failures, verified benchmarks, post-quantum encryption. V100 is the most transparent video platform for enterprise teams that demand evidence, not promises.