One endpoint pattern for every STT and TTS model on the platform. Switch providers by changing the URL path. Your code, auth, and request format don’t change.

Swap models by changing the URL

Every request uses the same base URL. The only part that changes is the model path:
| Model | URL |
| --- | --- |
| Deepgram Nova 3 | api.slng.ai/v1/bridges/unmute/stt/deepgram/nova:3 |
| Whisper Large v3 | api.slng.ai/v1/bridges/unmute/stt/slng/openai/whisper:large-v3 |
| Deepgram Aura 2 | api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2 |
| Orpheus English | api.slng.ai/v1/bridges/unmute/tts/slng/canopylabs/orpheus:en |
Authentication, request body, and response format are identical. Only the path differs.
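Because only the path varies, URL construction reduces to string assembly. The helper below is a sketch of that idea, not part of any official SLNG SDK; the function and constant names are my own:

```javascript
// Build a request URL for a given protocol ("https" or "wss"), task
// ("stt" or "tts"), and model path taken from the table above.
// This is plain string assembly mirroring the documented URL layout.
const BASE_HOST = "api.slng.ai";

function modelUrl(protocol, task, modelPath) {
  return `${protocol}://${BASE_HOST}/v1/bridges/unmute/${task}/${modelPath}`;
}

console.log(modelUrl("https", "tts", "slng/deepgram/aura:2"));
// https://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2
```

Swapping providers then means passing a different model path to the same helper; nothing else in the call site changes.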

Works over HTTP and WebSocket

The same model path works with both protocols:
| Protocol | URL |
| --- | --- |
| HTTP | https://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2 |
| WebSocket | wss://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2 |
Use HTTP for batch jobs and file conversion. Use WebSocket for real-time streaming and voice agents. See HTTP vs. WebSocket for details.

Get started

Prerequisites

  • An SLNG API key (get one here)
  • curl installed (or any HTTP client)

Authentication

All requests require a Bearer token:
Authorization: Bearer YOUR_API_KEY

Text-to-Speech

Generate speech from text. Here’s a request using Orpheus English:
curl -X POST https://api.slng.ai/v1/bridges/unmute/tts/slng/canopylabs/orpheus:en \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "slng/canopylabs/orpheus:en",
    "voice": "tara",
    "text": "Hello from the SLNG Unified API!"
  }' \
  --output hello.wav
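The same request can be made from Node (18+, which ships a global fetch). The sketch below mirrors the curl example above; the `buildTtsBody` and `synthesize` helper names are my own, not part of the API:

```javascript
// Sketch of the Orpheus TTS request using Node's built-in fetch (Node 18+).
// buildTtsBody mirrors the JSON body of the curl example above.
function buildTtsBody(model, voice, text) {
  return JSON.stringify({ model, voice, text });
}

async function synthesize(apiKey) {
  const res = await fetch(
    "https://api.slng.ai/v1/bridges/unmute/tts/slng/canopylabs/orpheus:en",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: buildTtsBody(
        "slng/canopylabs/orpheus:en",
        "tara",
        "Hello from the SLNG Unified API!"
      ),
    }
  );
  // The response body is the audio file (WAV by default);
  // write it to disk with fs.writeFile in your application.
  return Buffer.from(await res.arrayBuffer());
}
```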
This saves a WAV file. You can set the encoding and sample rate through the config object. To switch to Deepgram Aura 2, change the model path in the URL, update the model field in the request body to match, and choose a voice that model supports:
curl -X POST https://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2-en \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "slng/deepgram/aura:2-en",
    "voice": "asteria-en",
    "text": "Hello from the SLNG Unified API!"
  }' \
  --output hello.wav

Speech-to-Text

Transcribe audio with Deepgram Nova 3:
curl -X POST https://api.slng.ai/v1/bridges/unmute/stt/slng/deepgram/nova:3-multi \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "audio=@recording.wav" \
  -F "language=en"
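The multipart upload can be reproduced in Node with the global FormData and Blob (Node 18+). The field names mirror the `-F` flags in the curl example; the helper names are illustrative:

```javascript
// Sketch of the Nova 3 STT request with Node's built-in fetch/FormData (Node 18+).
function buildSttForm(audioBytes, language) {
  const form = new FormData();
  // Field names mirror the -F flags in the curl example above.
  form.append("audio", new Blob([audioBytes]), "recording.wav");
  if (language) form.append("language", language);
  return form;
}

async function transcribe(apiKey, audioBytes) {
  const res = await fetch(
    "https://api.slng.ai/v1/bridges/unmute/stt/slng/deepgram/nova:3-multi",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}` },
      // fetch sets the multipart boundary header automatically
      body: buildSttForm(audioBytes, "en"),
    }
  );
  return res.json();
}
```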
Switch to Whisper for multilingual transcription — only the URL changes. Whisper auto-detects the language, so the language parameter is optional:
curl -X POST https://api.slng.ai/v1/bridges/unmute/stt/slng/openai/whisper:large-v3 \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "audio=@recording.wav"

WebSocket streaming

The same model paths work over WebSocket for real-time streaming. Connect to wss:// instead of posting to https://.
The browser WebSocket API does not support custom headers. Pass the API key as a query parameter or use a server-side WebSocket client. The example below uses the Node.js ws library.
import WebSocket from "ws";

const ws = new WebSocket(
  "wss://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2-en",
  { headers: { Authorization: "Bearer YOUR_API_KEY" } }
);

ws.on("open", () => {
  ws.send(JSON.stringify({
    model: "slng/deepgram/aura:2-en",
    voice: "asteria-en",
    text: "Streaming audio in real time."
  }));
});

ws.on("message", (data) => {
  // Binary frames contain audio chunks
  if (Buffer.isBuffer(data)) {
    process.stdout.write(data);
  }
});
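Instead of piping frames to stdout, you may want to assemble them into a single buffer and write a file when the socket closes. A minimal sketch of that accumulation pattern (the helper name is my own):

```javascript
// Collect streamed binary audio frames and concatenate them on close.
// Usage: call collect(data) in the "message" handler,
// then finish() in the "close" handler.
function makeAudioCollector() {
  const chunks = [];
  return {
    collect(data) {
      if (Buffer.isBuffer(data)) chunks.push(data);
    },
    finish() {
      // One contiguous buffer, ready for fs.writeFileSync("out.wav", ...)
      return Buffer.concat(chunks);
    },
  };
}
```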
For browser-based WebSocket examples, see TTS over WebSocket.

Next steps

Why SLNG Unified API

Why a unified interface matters for voice AI.

Parameters coverage

See which parameters each provider supports.

Supported models

Browse all models available through the Unified API.

HTTP vs. WebSocket

When to use each protocol and their trade-offs.