~80 Lines of Code
5 min Time to First Call
P2P WebRTC Architecture
Free Tier: 100 API Calls/mo

What You Will Build

By the end of this tutorial, you will have a React component that creates peer-to-peer video calls using WebRTC. The finished component handles the full lifecycle: creating a meeting, exchanging signaling messages over WebSocket, negotiating ICE candidates, and rendering both local and remote video feeds. No third-party UI library required — just React hooks and the browser's native RTCPeerConnection API.

1:1 and group video calls
Screen sharing
Real-time transcription (40+ languages)
Meeting recording to S3
Virtual backgrounds
Noise suppression
Post-quantum encrypted signaling
RustTURN relay for NAT traversal
Architecture
React App                    V100 API                    Remote Peer
    |                           |                           |
    |--- POST /api/meetings --->|                           |
    |<-- { meetingId, token } --|                           |
    |                           |                           |
    |--- WSS connect ---------->|                           |
    |                           |<---------- WSS connect ---|
    |                           |                           |
    |--- SDP offer ------------>|--- SDP offer ------------>|
    |<-- SDP answer ------------|<----------- SDP answer ---|
    |                           |                           |
    |--- ICE candidates ------->|--- ICE candidates ------->|
    |<-- ICE candidates --------|<------- ICE candidates ---|
    |                           |                           |
    |============== P2P VIDEO STREAM (WebRTC) =============|

Prerequisites

This tutorial assumes you have an existing React project. If you are starting from scratch:

Terminal
npx create-react-app my-video-app
cd my-video-app
npm start

Step 1 — Get Your API Key

Sign up at app.v100.ai and grab your API key from the dashboard. The free tier includes 100 API calls per month, which is enough for development and testing. No credit card required.

Store the key in your environment variables. Never commit API keys to source control.

.env
REACT_APP_V100_API_KEY=v100_sk_your_api_key_here

Security note: In production, create meetings from your backend server, not the browser. The API key should never be exposed in client-side code. This tutorial uses REACT_APP_ for simplicity during development. See the server-side guide for production patterns.

Step 2 — Create the Video Component

This is the core of the integration. The VideoCall component handles everything: creating the meeting, connecting to the signaling server, setting up the peer connection, and rendering video. Drop this into your project and it works.

src/VideoCall.jsx
import { useState, useEffect, useRef, useCallback } from 'react';

const API_BASE = 'https://api.v100.ai';
const WS_URL = 'wss://api.v100.ai/ws/signaling';
const API_KEY = process.env.REACT_APP_V100_API_KEY;

export default function VideoCall() {
  const [meetingId, setMeetingId] = useState(null);
  const [status, setStatus] = useState('idle');
  const [muted, setMuted] = useState(false);
  const [videoOff, setVideoOff] = useState(false);

  const localRef = useRef(null);
  const remoteRef = useRef(null);
  const pcRef = useRef(null);
  const wsRef = useRef(null);
  const streamRef = useRef(null);

  // 1. Create a meeting and get ICE server config
  const startCall = useCallback(async () => {
    setStatus('connecting');

    // Create meeting via REST API
    const mtg = await fetch(`${API_BASE}/api/meetings`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ title: 'React Video Call' }),
    }).then(r => r.json());
    setMeetingId(mtg.meetingId);

    // Fetch ICE servers (TURN/STUN credentials)
    const ice = await fetch(`${API_BASE}/api/webrtc/ice-servers`, {
      headers: { 'Authorization': `Bearer ${API_KEY}` },
    }).then(r => r.json());

    // 2. Get local camera + mic stream
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { width: 1280, height: 720 },
      audio: { echoCancellation: true, noiseSuppression: true },
    });
    streamRef.current = stream;
    localRef.current.srcObject = stream;

    // 3. Create peer connection with V100 ICE servers
    const pc = new RTCPeerConnection({ iceServers: ice.servers });
    pcRef.current = pc;
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    pc.ontrack = (e) => {
      remoteRef.current.srcObject = e.streams[0];
      setStatus('connected');
    };

    // 4. Connect to signaling WebSocket
    const ws = new WebSocket(
      `${WS_URL}?token=${mtg.token}&meetingId=${mtg.meetingId}`
    );
    wsRef.current = ws;

    pc.onicecandidate = (e) => {
      if (e.candidate) {
        ws.send(JSON.stringify({
          type: 'ice-candidate',
          candidate: e.candidate,
        }));
      }
    };

    ws.onmessage = async (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'offer') {
        await pc.setRemoteDescription(msg.sdp);
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        ws.send(JSON.stringify({ type: 'answer', sdp: answer }));
      }
      if (msg.type === 'answer') {
        await pc.setRemoteDescription(msg.sdp);
      }
      if (msg.type === 'ice-candidate') {
        await pc.addIceCandidate(msg.candidate);
      }
      if (msg.type === 'peer-joined') {
        // We're the initiator — send offer
        const offer = await pc.createOffer();
        await pc.setLocalDescription(offer);
        ws.send(JSON.stringify({ type: 'offer', sdp: offer }));
      }
    };

    ws.onopen = () => setStatus('waiting');
  }, []);

  // Cleanup on unmount
  useEffect(() => {
    return () => {
      pcRef.current?.close();
      wsRef.current?.close();
      streamRef.current?.getTracks().forEach(t => t.stop());
    };
  }, []);

  const toggleMute = () => {
    streamRef.current?.getAudioTracks()
      .forEach(t => (t.enabled = !t.enabled));
    setMuted(m => !m);
  };

  const toggleVideo = () => {
    streamRef.current?.getVideoTracks()
      .forEach(t => (t.enabled = !t.enabled));
    setVideoOff(v => !v);
  };

  const hangUp = () => {
    pcRef.current?.close();
    wsRef.current?.close();
    streamRef.current?.getTracks().forEach(t => t.stop());
    setStatus('idle');
  };

  return (
    <div style={{ maxWidth: 800, margin: '0 auto' }}>
      <div style={{ display: 'flex', gap: 16 }}>
        <video
          ref={localRef}
          autoPlay
          muted
          playsInline
          style={{ width: '50%', borderRadius: 12, background: '#111' }}
        />
        <video
          ref={remoteRef}
          autoPlay
          playsInline
          style={{ width: '50%', borderRadius: 12, background: '#111' }}
        />
      </div>
      <div style={{ marginTop: 16, display: 'flex', gap: 12 }}>
        {status === 'idle' && (
          <button onClick={startCall}>Start Call</button>
        )}
        {status !== 'idle' && (
          <>
            <button onClick={toggleMute}>
              {muted ? 'Unmute' : 'Mute'}
            </button>
            <button onClick={toggleVideo}>
              {videoOff ? 'Camera On' : 'Camera Off'}
            </button>
            <button onClick={hangUp}>Hang Up</button>
          </>
        )}
      </div>
      <p>
        Status: {status}
        {meetingId && ` | Meeting: ${meetingId}`}
      </p>
    </div>
  );
}

That is the entire video calling component. Let us walk through what happens when a user clicks Start Call:

  1. Create a meeting — a POST to /api/meetings returns a meetingId and a short-lived token for WebSocket auth.
  2. Fetch ICE servers — /api/webrtc/ice-servers returns STUN and TURN server credentials. V100 runs RustTURN, a Rust-based TURN server, for NAT traversal.
  3. Capture media — getUserMedia grabs the camera and microphone. The stream is assigned to the local <video> element.
  4. Create peer connection — the RTCPeerConnection is configured with ICE servers from V100. Local tracks are added.
  5. Connect to signaling — a WebSocket connection to wss://api.v100.ai/ws/signaling handles SDP offer/answer exchange and ICE candidate trickle.
  6. Peer joins — when a second participant connects to the same meeting, the signaling server sends a peer-joined event. The first peer creates an SDP offer, and the standard WebRTC negotiation completes.
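The six steps above all funnel through one onmessage handler. As a standalone sketch, the routing logic looks like this; the message shapes mirror the component's ws.onmessage branches, while the handler names are illustrative, not a V100 API:

```javascript
// Sketch: dispatch parsed signaling messages to WebRTC handlers.
// The four message types come from the component above.
function routeSignal(msg, handlers) {
  switch (msg.type) {
    case 'peer-joined':   return handlers.onPeerJoined();            // we create the offer
    case 'offer':         return handlers.onOffer(msg.sdp);          // we answer
    case 'answer':        return handlers.onAnswer(msg.sdp);         // negotiation completes
    case 'ice-candidate': return handlers.onCandidate(msg.candidate);
    default:              return undefined;                          // ignore unknown types
  }
}

// Example: trace which handlers fire for a typical joiner's session.
const trace = [];
const handlers = {
  onPeerJoined: () => trace.push('peer-joined'),
  onOffer: () => trace.push('offer'),
  onAnswer: () => trace.push('answer'),
  onCandidate: () => trace.push('ice'),
};
['{"type":"offer","sdp":{}}', '{"type":"ice-candidate","candidate":{}}']
  .forEach((raw) => routeSignal(JSON.parse(raw), handlers));
console.log(trace); // [ 'offer', 'ice' ]
```

Isolating the dispatch like this also makes the negotiation flow unit-testable without opening a real WebSocket.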

Share the meeting link. To let someone else join, share the meetingId and have them connect to the same signaling WebSocket with that ID. In production, you would generate a join URL like https://yourapp.com/call/{meetingId} and pass it via your own UI.
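A hedged sketch of that pattern — building the join URL and reading the meetingId back out on page load. The /call/{meetingId} path shape comes from the paragraph above; the helper names are illustrative:

```javascript
// Sketch: generate a shareable join URL and parse the meetingId back out.
function joinUrl(baseUrl, meetingId) {
  return `${baseUrl}/call/${encodeURIComponent(meetingId)}`;
}

function meetingIdFromPath(pathname) {
  // Matches /call/<id>; returns null for any other route.
  const m = pathname.match(/^\/call\/([^/]+)$/);
  return m ? decodeURIComponent(m[1]) : null;
}

console.log(joinUrl('https://yourapp.com', 'mtg_42')); // https://yourapp.com/call/mtg_42
console.log(meetingIdFromPath('/call/mtg_42'));        // mtg_42
```

On the joining side, the parsed meetingId would replace the one returned by POST /api/meetings, and the peer connects to the same signaling WebSocket with it.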

Step 3 — Add Real-Time Transcription

V100 provides server-side transcription in 40+ languages. You enable it with a single API call when creating the meeting, and captions arrive over the same WebSocket connection you are already using for signaling.

Enable transcription when creating the meeting
const mtg = await fetch(`${API_BASE}/api/meetings`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    title: 'React Video Call',
    transcription: {
      enabled: true,
      language: 'en',     // or 'es', 'fr', 'de', 'ja', etc.
      showCaptions: true, // sends live caption events over WS
    },
  }),
}).then(r => r.json());

Then add a caption handler to your existing WebSocket onmessage callback and a state variable to display the text:

Caption overlay in VideoCall.jsx
const [caption, setCaption] = useState('');

// Inside ws.onmessage handler, add:
if (msg.type === 'transcription') {
  setCaption(msg.text);
  // Clear after 4 seconds so captions fade naturally
  setTimeout(() => setCaption(''), 4000);
}

// In the JSX, below the video elements:
{caption && (
  <div style={{
    position: 'absolute',
    bottom: 80,
    left: '50%',
    transform: 'translateX(-50%)',
    background: 'rgba(0,0,0,0.8)',
    color: '#fff',
    padding: '8px 20px',
    borderRadius: 8,
    fontSize: 15,
    maxWidth: '80%',
    textAlign: 'center',
  }}>
    {caption}
  </div>
)}

That is it. Live captions appear as an overlay on the video call. The transcription runs server-side on V100's infrastructure — no client-side speech recognition, no additional dependencies, and no extra cost on the free tier.

Step 4 — Add Recording

Recording stores the meeting video to S3. You start and stop recording with API calls, and retrieve the recording URL when the meeting ends.

Start recording
const startRecording = async () => {
  await fetch(`${API_BASE}/api/meetings/${meetingId}/recording/start`, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${API_KEY}` },
  });
};
Stop recording and get the URL
const stopRecording = async () => {
  const res = await fetch(
    `${API_BASE}/api/meetings/${meetingId}/recording/stop`,
    {
      method: 'POST',
      headers: { 'Authorization': `Bearer ${API_KEY}` },
    }
  ).then(r => r.json());
  // res.recordingUrl = signed S3 URL for the video file
  console.log('Recording saved:', res.recordingUrl);
};

Add record/stop buttons to your UI next to the mute and camera controls. The recording is processed server-side and delivered as an MP4 via a signed S3 URL. You can also set up a webhook to receive a notification when processing completes — see the webhook docs.
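If you take the webhook route, the handler can stay small. A sketch, with the caveat that the event name (recording.ready) and payload fields here are assumptions — verify the actual schema against the webhook docs:

```javascript
// Sketch: process a recording webhook payload. The event name and
// field names are assumed, not the documented V100 schema.
function handleWebhook(event) {
  if (event.type !== 'recording.ready') return null; // ignore other events
  return {
    meetingId: event.meetingId,
    recordingUrl: event.recordingUrl, // signed S3 URL for the MP4
  };
}

// In an Express app this would run inside app.post('/webhooks/v100', ...)
// after JSON body parsing.
const result = handleWebhook({
  type: 'recording.ready',
  meetingId: 'mtg_42',
  recordingUrl: 'https://s3.example.com/rec.mp4?sig=abc',
});
console.log(result.recordingUrl);
```

Returning null for unknown event types lets the same endpoint absorb future event kinds without breaking.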

Full transcript included. If transcription was enabled during the meeting, the recording response also includes a transcriptUrl with the complete text transcript in JSON and SRT formats. Use it for search, compliance, or AI-generated meeting summaries.
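As one example of consuming the transcript, here is a tiny parser for the SRT variant. The SRT format itself is standard (index line, timestamp line, text lines, blank line between cues); the function name is illustrative:

```javascript
// Sketch: extract plain text from an SRT transcript for search or
// summarization. Each cue is: index line, timing line, text lines.
function srtToText(srt) {
  return srt
    .trim()
    .split(/\n\s*\n/)                                     // one block per cue
    .map((block) => block.split('\n').slice(2).join(' ')) // drop index + timing
    .join(' ');
}

const sample = [
  '1',
  '00:00:01,000 --> 00:00:03,500',
  'Welcome to the meeting.',
  '',
  '2',
  '00:00:04,000 --> 00:00:06,000',
  "Let's get started.",
].join('\n');

console.log(srtToText(sample)); // Welcome to the meeting. Let's get started.
```

The JSON variant would carry the same text with timestamps already structured, so prefer it when feeding a summarization model.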

Going Further

The component above gives you a working video call in under 80 lines. From here, each remaining platform feature — screen sharing, group calls, virtual backgrounds, and noise suppression — is a single API call or config option away.

V100 vs Building from Scratch

You could build all of this yourself. WebRTC is an open standard. TURN servers are open-source. Transcription models are available on Hugging Face. Here is what that actually looks like:

                           Build from Scratch                                      V100 API
Time to first video call   3–6 months                                              5 minutes
TURN server setup          Deploy coturn, configure TLS, monitor uptime            Included (RustTURN)
Signaling server           Build WebSocket relay, handle reconnects, scale         Managed (wss://api.v100.ai)
Recording                  FFmpeg pipeline, S3 storage, transcoding                One API call
Transcription              Whisper deployment, GPU infra, 40+ language support     One config flag
NAT traversal reliability  Your problem                                            99.9% connection rate
Post-quantum encryption    Implement ML-KEM + ML-DSA yourself                      Default on every call
Infrastructure cost        $500–$2,000+/mo minimum                                 Free tier, then usage-based
Ongoing maintenance        WebRTC spec changes, browser updates, security patches  Managed by V100

Building a production-grade video conferencing system from scratch is a 3–6 month project for a team of 2–3 engineers. Maintaining it is a permanent headcount. V100 gives you the same capabilities with a single React component and a few API calls.

Pricing

V100 offers a free tier with 100 API calls per month — enough for development, testing, and small projects. No credit card required. Production workloads move to usage-based pricing.

See the full pricing page for details.

Start Building for Free

Get your API key and make your first video call in under 5 minutes. No credit card. No sales call. Just code.

Get Your Free API Key