You need an SLNG API key and a working knowledge of the WebSocket protocol. These examples use the Deepgram Nova model; see Choosing a Model for other available models and endpoints.

WebSockets let you transcribe speech in real time as users speak, with interim results for immediate feedback. If you only need to transcribe pre-recorded files, HTTP is simpler.

Message Flow

Every STT WebSocket session follows the same pattern:
  1. Open the WebSocket connection.
  2. Send an init message with your session config.
  3. Stream binary audio chunks.
  4. Receive partial_transcript and final_transcript messages as results arrive.
  5. Close the connection when you are done.
For the full list of message types and parameters, see the WebSocket protocol reference.

Quick Start

Connect, initialize a session, stream audio, and receive transcriptions:
const ws = new WebSocket("wss://api.slng.ai/v1/stt/slng/deepgram/nova:3");

ws.onopen = () => {
  // 1. Initialize session
  ws.send(
    JSON.stringify({
      type: "init",
      config: {
        language: "en",
        sample_rate: 16000,
        encoding: "linear16",
      },
    }),
  );

  // 2. Send audio data (from microphone or file)
  // ws.send(audioBuffer);  // Binary data
};

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);

  if (message.type === "partial_transcript") {
    // Interim result — may change as more audio arrives
    console.log("Interim:", message.transcript);
  } else if (message.type === "final_transcript") {
    // Confirmed transcription for this segment
    console.log("Final:", message.transcript);
  }
};
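The binary frames sent with ws.send(audioBuffer) must match the encoding declared in the init config, here linear16 at 16 kHz, i.e. 16-bit little-endian PCM. Browser audio APIs typically hand you Float32 samples, so a conversion step is needed; the sketch below assumes that setup, and the helper name floatTo16BitPCM is illustrative, not part of any SDK:

```javascript
// Convert Float32 samples in [-1, 1] (as produced by e.g. the Web Audio API)
// into 16-bit little-endian PCM -- the "linear16" encoding from the init config.
function floatTo16BitPCM(float32Samples) {
  const buffer = new ArrayBuffer(float32Samples.length * 2);
  const view = new DataView(buffer);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1] to avoid overflow, then scale to the int16 range.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true); // true = little-endian
  }
  return buffer;
}

// Usage: ws.send(floatTo16BitPCM(chunk));
```

If your capture pipeline does not run at 16 kHz natively, either resample before converting or set sample_rate in the init config to match your actual capture rate.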

Going Further

The WebSocket STT API supports several options that you can set in the init config or act on in the responses:
  • Interim vs final transcripts — Partial transcripts update in real-time as the user speaks. Final transcripts are confirmed segments that won’t change. Use partials for live captions and finals for processing.
  • Language — Pass a language code in the init config for better accuracy. Not all models auto-detect.
  • Endpointing — Controls how quickly the API finalizes a transcript after silence. Useful for voice agents where you want fast turn-taking.
For the full parameter list per model, see the Speech-to-Text API reference.
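The interim/final distinction maps naturally onto live captioning: overwrite a single pending line on each partial, and commit text only on finals. A minimal sketch of that state handling, assuming the message shapes from the Quick Start (the applyTranscript reducer is illustrative, not part of the API):

```javascript
// Caption state: committed segments (from finals) plus one pending line (latest partial).
function applyTranscript(state, message) {
  if (message.type === "partial_transcript") {
    // Interim result: overwrite the pending line; it may still change.
    return { ...state, pending: message.transcript };
  }
  if (message.type === "final_transcript") {
    // Confirmed segment: commit it and clear the pending line.
    return { committed: [...state.committed, message.transcript], pending: "" };
  }
  return state; // Ignore any other message types.
}

let state = { committed: [], pending: "" };
state = applyTranscript(state, { type: "partial_transcript", transcript: "hel" });
state = applyTranscript(state, { type: "partial_transcript", transcript: "hello wor" });
state = applyTranscript(state, { type: "final_transcript", transcript: "hello world" });
// state.committed is now ["hello world"] and state.pending is ""
```

Rendering committed text and the pending line separately (for example, greying out the pending line) makes it clear to users which words may still change.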

Next Steps