Add a browser-based voice session with an SLNG agent to any web page using LiveKit and a lightweight backend proxy.
You can let visitors talk to an SLNG voice agent directly in the browser. Two pieces are required: a small backend that creates the web session (keeping your API key off the client), and a frontend that connects to LiveKit for real-time audio.
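The backend side amounts to one proxy endpoint. Here is a minimal sketch; the SLNG endpoint URL (`https://api.slng.example/v1/sessions`) and the exact request/response fields are placeholders, not the real SLNG API — check your SLNG dashboard for the actual values. The important part is that `SLNG_API_KEY` stays server-side and only the short-lived LiveKit credentials reach the browser:

```javascript
// Build the upstream request to SLNG (pure function, easy to test).
// The endpoint URL and payload shape here are assumptions.
function buildSessionRequest(apiKey, participantName) {
  return {
    url: "https://api.slng.example/v1/sessions", // placeholder endpoint
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // key never leaves the server
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ participant_name: participantName }),
    },
  };
}

// Handler body for POST /api/session — mount it in Express, Next.js, etc.
// It forwards only the credentials the browser needs.
async function createWebSession(participantName, fetchFn = fetch) {
  const { url, init } = buildSessionRequest(
    process.env.SLNG_API_KEY,
    participantName
  );
  const upstream = await fetchFn(url, init);
  if (!upstream.ok) {
    throw new Error(`session creation failed: ${upstream.status}`);
  }
  const session = await upstream.json();
  return {
    livekit_url: session.livekit_url,
    livekit_token: session.livekit_token,
  };
}
```

Injecting `fetchFn` keeps the handler framework-agnostic and testable; in production the default global `fetch` is used.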
Call your backend to get a session, then connect to the room:
```javascript
import { Room, RoomEvent, createLocalAudioTrack } from "livekit-client";

async function startSession() {
  // 1. Get session credentials from your backend
  const res = await fetch("/api/session", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ participant_name: "visitor" }),
  });
  const session = await res.json();

  // 2. Create and connect to the LiveKit room
  const room = new Room({ adaptiveStream: true, dynacast: true });
  await room.connect(session.livekit_url, session.livekit_token);

  // 3. Publish your microphone
  const micTrack = await createLocalAudioTrack();
  await room.localParticipant.publishTrack(micTrack);

  return { room, micTrack, session };
}
```
The browser prompts the user for microphone access when `createLocalAudioTrack()` is called. If your page is not served over HTTPS, most browsers will block the request.
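When the visitor ends the call, stop the microphone and disconnect so the browser's recording indicator turns off. A teardown sketch using the `room` and `micTrack` returned by `startSession()` (`stop()`, `unpublishTrack()`, and `disconnect()` are standard livekit-client calls):

```javascript
// End the session: release the mic, unpublish it, and leave the room.
async function endSession({ room, micTrack }) {
  micTrack.stop(); // releases the microphone hardware
  await room.localParticipant.unpublishTrack(micTrack);
  await room.disconnect(); // closes the LiveKit connection
}
```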
A voice-only interface gives users no visual cue about what the agent is doing. Adding an animated persona — an orb, waveform, or avatar — makes the experience feel more responsive. Two ready-made libraries work well here:
- **Vercel AI SDK Persona** — a React component with built-in states (idle, listening, speaking, thinking). Drop it in and map LiveKit events to states.
- **ElevenLabs Conversational UI** — orb and avatar components designed for voice interfaces, with audio-reactive animations.
To wire either library up, map your session and LiveKit events to persona states:
```javascript
// Derive a persona state from your session + LiveKit events
function getPersonaState({ status, muted, agentIsSpeaking }) {
  if (status === "connecting") return "idle";
  if (status === "ended") return "idle";
  if (agentIsSpeaking) return "speaking";
  if (muted) return "thinking";
  return "listening";
}

// Update on active-speaker changes (Step 7)
room.on(RoomEvent.ActiveSpeakersChanged, (speakers) => {
  const agentIsSpeaking = speakers.some(
    (p) => p.identity !== room.localParticipant.identity
  );
  setPersonaState(getPersonaState({ status, muted, agentIsSpeaking }));
});