# SLNG API Documentation

## Docs

- [Embed a voice agent on your site](https://docs.slng.ai/agents/embed-web.md): Add a browser-based voice session with an SLNG agent to any web page using LiveKit and a lightweight backend proxy.
- [LiveKit Plugin](https://docs.slng.ai/agents/livekit-plugin.md): Connect LiveKit Agents to any STT and TTS model through the SLNG gateway.
- [Create agent](https://docs.slng.ai/api-reference/agents/create-agent.md): Create a new voice agent.
- [Delete agent](https://docs.slng.ai/api-reference/agents/delete-agent.md): Soft-delete a voice agent.
- [Duplicate agent](https://docs.slng.ai/api-reference/agents/duplicate-agent.md): Create a server-side copy of an existing voice agent.
- [Get agent](https://docs.slng.ai/api-reference/agents/get-agent.md): Get a single voice agent by ID.
- [List agents](https://docs.slng.ai/api-reference/agents/list-agents.md): List all voice agents for your organization.
- [Replace agent](https://docs.slng.ai/api-reference/agents/replace-agent.md): Replace a voice agent (full update).
- [Update agent (partial)](https://docs.slng.ai/api-reference/agents/update-agent-partial.md): Partially update a voice agent.
- [Cognigy STT](https://docs.slng.ai/api-reference/bridges/cognigy-stt-bridge/cognigy-stt-bridge-http.md): Transcribe audio via Cognigy Voice Gateway protocol bridge.
- [Cognigy STT](https://docs.slng.ai/api-reference/bridges/cognigy-stt-bridge/cognigy-stt-bridge-ws.md): WebSocket protocol bridge for Cognigy Voice Gateway STT integration. Accepts Cognigy's native websocket messages (start, stop, binary audio) and translates them to SLNG's standard STT websocket protocol.
- [Cognigy TTS](https://docs.slng.ai/api-reference/bridges/cognigy-tts-bridge/cognigy-tts-bridge-http.md): Synthesize speech via Cognigy Voice Gateway protocol bridge.
- [Cognigy TTS](https://docs.slng.ai/api-reference/bridges/cognigy-tts-bridge/cognigy-tts-bridge-ws.md): WebSocket protocol bridge for Cognigy Voice Gateway TTS integration. Accepts Cognigy's native websocket messages (stream, flush, stop) and translates them to SLNG's standard TTS websocket protocol.
- [Jambonz STT](https://docs.slng.ai/api-reference/bridges/jambonz-stt-bridge/jambonz-stt-bridge-http.md): Transcribe audio via Jambonz custom STT protocol bridge. The model_variant path parameter specifies the target STT model (e.g., deepgram/nova:3, slng/openai/whisper:large-v3).
- [Jambonz STT](https://docs.slng.ai/api-reference/bridges/jambonz-stt-bridge/jambonz-stt-bridge-ws.md): WebSocket protocol bridge for Jambonz custom STT integration. Accepts Jambonz's native websocket messages (start, stop, binary audio) and translates them to SLNG's standard STT websocket protocol.
- [Jambonz TTS](https://docs.slng.ai/api-reference/bridges/jambonz-tts-bridge/jambonz-tts-bridge-http.md): Synthesize speech via Jambonz custom TTS protocol bridge. The model_variant path parameter specifies the target TTS model (e.g., deepgram/aura:2).
- [Jambonz TTS](https://docs.slng.ai/api-reference/bridges/jambonz-tts-bridge/jambonz-tts-bridge-ws.md): WebSocket protocol bridge for Jambonz custom TTS integration. Accepts Jambonz's native websocket messages (stream, flush, stop) and translates them to SLNG's standard TTS websocket protocol. Returns a connect message on connection and binary audio frames.
- [Dispatch call](https://docs.slng.ai/api-reference/calls/dispatch-call.md): Dispatch an outbound call for a voice agent.
- [Get call](https://docs.slng.ai/api-reference/calls/get-call.md): Get details of a specific call.
- [List calls](https://docs.slng.ai/api-reference/calls/list-calls.md): List calls for a voice agent (paginated).
- [Submit tool execution](https://docs.slng.ai/api-reference/calls/submit-tool-execution.md): Append a tool execution record to a call.
- [Create web session](https://docs.slng.ai/api-reference/sessions/create-web-session.md): Create a browser session for a voice agent.
- [Create batch job](https://docs.slng.ai/api-reference/speechmatics/create-batch-job.md): Submit audio for asynchronous transcription. Supports file upload (multipart/form-data), URL input, and presigned S3 upload (application/json). The file is queued for processing and you can poll for status until the transcript is ready.
- [Delete batch job](https://docs.slng.ai/api-reference/speechmatics/delete-batch-job.md): Remove a completed or failed job. Only jobs with a terminal status (`DONE` or `FAILED`) can be deleted.
- [Get batch job](https://docs.slng.ai/api-reference/speechmatics/get-batch-job.md): Retrieve the current status and details of a specific job.
- [Get batch job files](https://docs.slng.ai/api-reference/speechmatics/get-batch-job-files.md): Retrieve signed download URLs for the input audio and output transcript files of a completed job.
- [List batch jobs](https://docs.slng.ai/api-reference/speechmatics/list-batch-jobs.md): Retrieve all transcription jobs for your organization.
- [Nova 2](https://docs.slng.ai/api-reference/stt/deepgram-nova-2/nova-2-http.md): Transcribe audio using Deepgram Nova 2 with VAD and speaker diarization.
- [Nova 2](https://docs.slng.ai/api-reference/stt/deepgram-nova-2/nova-2-ws.md): Transcribe audio using Deepgram Nova 2.
- [Nova 3 Medical](https://docs.slng.ai/api-reference/stt/deepgram-nova-3-medical/nova-3-medical-http.md): Transcribe medical audio using Deepgram Nova 3 Medical with specialized vocabulary.
- [Nova 3 Medical](https://docs.slng.ai/api-reference/stt/deepgram-nova-3-medical/nova-3-medical-ws.md): Transcribe medical audio using Deepgram Nova 3 Medical.
- [Nova 3 (English)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-english-http.md): Transcribe English audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (English)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-english-ws.md): Transcribe English audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Hindi)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-hindi-http.md): Transcribe Hindi audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Hindi)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-hindi-ws.md): Transcribe Hindi audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-http.md): Transcribe audio using Deepgram Nova 3 with VAD and speaker diarization.
- [Nova 3 (Kannada)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-kannada-ws.md): Transcribe Kannada audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Marathi)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-marathi-ws.md): Transcribe Marathi audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Multi-Language)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-multi-language-http.md): Transcribe multi-language audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Multi-Language)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-multi-language-ws.md): Transcribe multi-language audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Spanish)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-spanish-http.md): Transcribe Spanish audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Spanish)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-spanish-ws.md): Transcribe Spanish audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Tamil)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-tamil-ws.md): Transcribe Tamil audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3 (Telugu)](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-telugu-ws.md): Transcribe Telugu audio using SLNG-hosted Deepgram Nova 3.
- [Nova 3](https://docs.slng.ai/api-reference/stt/deepgram-nova-3/nova-3-ws.md): Transcribe audio using Deepgram Nova 3.
- [Reson8 STT v1](https://docs.slng.ai/api-reference/stt/reson8-stt-v1/reson8-stt-v1-ws.md): Real-time speech-to-text transcription using Reson8 via WebSocket. Supports streaming audio with word-level timestamps, confidence scores, and partial results.
- [Saaras v3](https://docs.slng.ai/api-reference/stt/sarvam-ai-saaras/saaras-v3-http.md): Transcribe audio using Sarvam AI Saaras with domain-aware speech recognition for 23 languages and flexible output modes.
- [Speech AI Real-time v4](https://docs.slng.ai/api-reference/stt/soniox-speech-ai-real-time-v4/speech-ai-real-time-v4-ws.md): Real-time speech-to-text transcription using Soniox Speech AI via WebSocket with speaker diarization, automatic language identification, and configurable endpoint detection in 60+ languages.
- [Whisper Large v3](https://docs.slng.ai/api-reference/stt/whisper-large-v3/whisper-large-v3-http.md): Receive transcripts from Whisper Large v3.
- [Whisper Large v3](https://docs.slng.ai/api-reference/stt/whisper-large-v3/whisper-large-v3-ws.md): Real-time speech-to-text transcription using OpenAI's Whisper Large v3 model via WebSocket. Supports streaming audio input with intelligent Voice Activity Detection (VAD), partial transcripts for immediate feedback, and automatic language detection. Perfect for live transcription, voice commands, an…
- [Cartesia Sonic 3](https://docs.slng.ai/api-reference/tts/cartesia-sonic-3/cartesia-sonic-3-ws.md): Text-to-Speech API for generating speech from text using Cartesia Sonic 3. Low-latency streaming synthesis with context-aware generation controls. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol. Reference `https://docs.cartesia.ai/use-the-api/tts-…
- [Aura 2 (English)](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-english-http.md): Synthesize English speech using SLNG-hosted Deepgram Aura 2.
- [Aura 2 (English)](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-english-ws.md): Text-to-Speech API for generating speech from text using SLNG deepgram/aura. Real-time conversational TTS designed for voice agents with ultra-low latency. English voices only. Establishes a WebSocket connection for real-time text-to-speech.
- [Aura 2](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-http.md): Synthesize speech using Deepgram Aura 2 for conversational voice agents.
- [Aura 2 (Spanish)](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-spanish-http.md): Synthesize Spanish speech using SLNG-hosted Deepgram Aura 2.
- [Aura 2 (Spanish)](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-spanish-ws.md): Text-to-Speech API for generating speech from text using SLNG deepgram/aura. Real-time conversational TTS designed for voice agents with ultra-low latency. Spanish voices only. Establishes a WebSocket connection for real-time text-to-speech.
- [Aura 2](https://docs.slng.ai/api-reference/tts/deepgram-aura-2/aura-2-ws.md): Text-to-Speech API for generating speech from text using Deepgram aura. Real-time conversational TTS designed for voice agents with ultra-low latency. Establishes a WebSocket connection for real-time text-to-speech. Audio data is sent as raw binary WebSocket frames (not JSON audio_chunk messages).
- [Kugel 1 Turbo](https://docs.slng.ai/api-reference/tts/kugel-1-turbo/kugel-1-turbo-ws.md): Text-to-Speech API for generating speech from text using KugelAudio Kugel 1 Turbo. High-quality low-latency TTS with expressiveness control. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol.
- [Kugel 1](https://docs.slng.ai/api-reference/tts/kugel-1/kugel-1-ws.md): Text-to-Speech API for generating speech from text using KugelAudio Kugel 1. High-quality text-to-speech with expressiveness control. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol.
- [Kugel 2](https://docs.slng.ai/api-reference/tts/kugel-2/kugel-2-ws.md): Text-to-Speech API for generating speech from text using KugelAudio Kugel 2. High-quality text-to-speech with expressiveness control. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol.
- [Murf Falcon](https://docs.slng.ai/api-reference/tts/murf-falcon/murf-falcon-ws.md): Text-to-Speech API for generating speech from text using Murf Falcon. High-quality multilingual TTS with multiple encodings and sample rates. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol.
- [Orpheus (English)](https://docs.slng.ai/api-reference/tts/orpheus-english/orpheus-english-http.md): Synthesize speech using Orpheus with emotion control.
- [Orpheus (English)](https://docs.slng.ai/api-reference/tts/orpheus-english/orpheus-english-ws.md): Text-to-Speech API for generating speech from text using SLNG canopylabs/orpheus. High-quality streaming TTS with emotion control, hosted on SLNG infrastructure. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v2 (Arabic)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-arabic-http.md): Synthesize Arabic speech using Rime Arcana TTS model.
- [Arcana v2 (Arabic)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-arabic-ws.md): Text-to-Speech API for generating Arabic speech from text using Rime Arcana TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v2 (English)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-english-http.md): Synthesize English speech using Rime Arcana TTS model.
- [Arcana v2 (English)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-english-ws.md): Text-to-Speech API for generating English speech from text using Rime Arcana TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v2 (French)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-french-http.md): Synthesize French speech using Rime Arcana TTS model.
- [Arcana v2 (French)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-french-ws.md): Text-to-Speech API for generating French speech from text using Rime Arcana TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v2 (German)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-german-http.md): Synthesize German speech using Rime Arcana TTS model.
- [Arcana v2 (German)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-german-ws.md): Text-to-Speech API for generating German speech from text using Rime Arcana TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v2 (Spanish)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-spanish-http.md): Synthesize Spanish speech using Rime Arcana TTS model.
- [Arcana v2 (Spanish)](https://docs.slng.ai/api-reference/tts/rime-arcana-v2/arcana-v2-spanish-ws.md): Text-to-Speech API for generating Spanish speech from text using Rime Arcana TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v3 (English)](https://docs.slng.ai/api-reference/tts/rime-arcana-v3/arcana-v3-english-http.md): Synthesize English speech using Rime Arcana v3 TTS model.
- [Arcana v3 (English)](https://docs.slng.ai/api-reference/tts/rime-arcana-v3/arcana-v3-english-ws.md): Text-to-Speech API for generating English speech using Rime Arcana v3 TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Arcana v3 (Hindi)](https://docs.slng.ai/api-reference/tts/rime-arcana-v3/arcana-v3-hindi-http.md): Synthesize Hindi speech using Rime Arcana v3 TTS model.
- [Arcana v3 (Hindi)](https://docs.slng.ai/api-reference/tts/rime-arcana-v3/arcana-v3-hindi-ws.md): Text-to-Speech API for generating Hindi speech using Rime Arcana v3 TTS model. Establishes a WebSocket connection for real-time text-to-speech.
- [Bulbul v3](https://docs.slng.ai/api-reference/tts/sarvam-ai-bulbul-v3/bulbul-v3-http.md): Synthesize speech using Sarvam AI Bulbul with high-quality multilingual TTS for Indian languages and 30+ speaker voices.
- [Bulbul v3](https://docs.slng.ai/api-reference/tts/sarvam-ai-bulbul-v3/bulbul-v3-ws.md): Text-to-Speech API for generating speech from text using Sarvam AI Bulbul. High-quality multilingual TTS for Indian languages with 30+ speaker voices. Establishes a WebSocket connection for real-time text-to-speech using the unified SLNG TTS protocol.
- [Soniox TTS v1-preview](https://docs.slng.ai/api-reference/tts/soniox-tts-v1-preview/soniox-tts-v1-preview-http.md): Real-time text-to-speech with streaming WebSocket and one-shot HTTP synthesis.
- [Soniox TTS v1-preview](https://docs.slng.ai/api-reference/tts/soniox-tts-v1-preview/soniox-tts-v1-preview-ws.md): Real-time text-to-speech with streaming WebSocket and one-shot HTTP synthesis.
- [Unified STT](https://docs.slng.ai/api-reference/unified-api/unmute-stt-bridge/unmute-stt-bridge-http.md): Transcribe audio via SLNG's native WebSocket protocol bridge. The model_variant path parameter specifies the target STT model (e.g., deepgram/nova:3, slng/openai/whisper:large-v3).
- [Unified STT](https://docs.slng.ai/api-reference/unified-api/unmute-stt-bridge/unmute-stt-bridge-ws.md): WebSocket protocol bridge using SLNG's native protocol for STT. Messages pass through directly using SLNG's standard STT message types (init, audio, finalize, ready, partial_transcript, final_transcript, error).
- [Unified TTS](https://docs.slng.ai/api-reference/unified-api/unmute-tts-bridge/unmute-tts-bridge-http.md): Synthesize speech via SLNG's native WebSocket protocol bridge. The model_variant path parameter specifies the target TTS model (e.g., deepgram/aura:2).
- [Unified TTS](https://docs.slng.ai/api-reference/unified-api/unmute-tts-bridge/unmute-tts-bridge-ws.md): WebSocket protocol bridge using SLNG's native protocol for TTS. Messages pass through directly using SLNG's standard TTS message types (init, text, flush, clear, close, ready, audio_chunk, segment_start, segment_end, flushed, cleared, audio_end, error).
- [How to use Batch API](https://docs.slng.ai/batch-guide.md): How to submit audio for asynchronous transcription using file upload, URL input, or presigned upload.
- [Changelog](https://docs.slng.ai/changelog.md): New features, updates, and fixes for the SLNG API.
- [Agent Infra](https://docs.slng.ai/dashboard/agent-infra.md): Create, test, and monitor voice agents in the Dashboard.
- [API Keys](https://docs.slng.ai/dashboard/api-keys.md): Create and manage API keys used to call SLNG APIs in the Dashboard.
- [Dashboard](https://docs.slng.ai/dashboard/index.md): Use the Dashboard to create API keys, configure telephony, and manage voice agents.
- [Telephony](https://docs.slng.ai/dashboard/telephony.md): Configure telephony connections for outbound and inbound voice agent calls.
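The Unified STT and TTS entries above list SLNG's standard WebSocket message types (for STT: init, audio, finalize, ready, partial_transcript, final_transcript, error). As a rough client-side sketch of that flow — where the message *types* come from the list above, but every other field name (`sample_rate`, `encoding`, `text`, `message`) is an assumption for illustration, not taken from these docs:

```python
import json

def make_init(sample_rate=16000, encoding="linear16"):
    """Client -> server: hypothetical init payload sent before audio.

    Only the "type" value is documented above; the other fields are
    illustrative assumptions.
    """
    return json.dumps({"type": "init",
                       "sample_rate": sample_rate,
                       "encoding": encoding})

def handle_message(raw):
    """Server -> client: dispatch on the standard STT message types."""
    msg = json.loads(raw)
    if msg["type"] == "ready":
        return "start streaming binary audio frames"
    if msg["type"] == "partial_transcript":
        return f"partial: {msg.get('text', '')}"
    if msg["type"] == "final_transcript":
        return f"final: {msg.get('text', '')}"
    if msg["type"] == "error":
        return f"error: {msg.get('message', '')}"
    return "ignored"
```

In a real session these payloads would travel over a WebSocket connection, with a `finalize` message sent after the last audio frame; consult the linked Unified STT reference pages for the authoritative field names.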
- [Agent API examples](https://docs.slng.ai/examples/agents-api.md): Create, test, update, and delete voice agents with code examples.
- [Dispatching outbound calls](https://docs.slng.ai/examples/agents-calls.md): Examples for dispatching outbound calls with voice agents.
- [Configuring voice agents](https://docs.slng.ai/examples/agents-config.md): How to write system prompts, wire up tools, and use template variables.
- [STT HTTP Examples](https://docs.slng.ai/examples/stt-http.md): Examples for transcribing audio files using HTTP.
- [STT WebSocket Examples](https://docs.slng.ai/examples/stt-websocket.md): Real-time speech recognition examples using WebSockets.
- [TTS HTTP Examples](https://docs.slng.ai/examples/tts-http.md): Complete examples for TTS using HTTP requests.
- [TTS WebSocket Examples](https://docs.slng.ai/examples/tts-websocket.md): Real-time TTS examples using WebSockets.
- [Getting Started](https://docs.slng.ai/getting-started.md): Get up and running with the SLNG API in minutes.
- [Welcome to SLNG API](https://docs.slng.ai/index.md): Real-time Speech & Language AI with multiple providers, single API.
- [Integrations](https://docs.slng.ai/integrations/overview.md): Connect SLNG to your existing voice infrastructure with third-party integrations.
- [Models by Language](https://docs.slng.ai/models/by-language.md): Find TTS and STT models that support your target language.
- [Models by Region](https://docs.slng.ai/models/by-region.md): Find which SLNG models are deployed in your region.
- [Model Catalog](https://docs.slng.ai/models/index.md): Browse all TTS and STT models available on the SLNG platform by provider or deployment region.
- [Speech-to-Text Models](https://docs.slng.ai/models/stt.md): Browse all STT models available on the SLNG platform, grouped by provider.
- [Text-to-Speech Models](https://docs.slng.ai/models/tts.md): Browse all TTS models available on the SLNG platform, grouped by provider.
- [HTTP vs. WebSocket](https://docs.slng.ai/protocols.md): HTTP and WebSocket protocols compared: when to use each for TTS and STT.
- [Region & world-part overrides](https://docs.slng.ai/region-override.md): Pin API requests to a specific region or geographic zone using HTTP headers.
- [Supported models](https://docs.slng.ai/unified-api/models-supported.md): All STT and TTS models available through the SLNG Unified API.
- [SLNG Unified API](https://docs.slng.ai/unified-api/overview.md): One endpoint pattern for every STT and TTS model. Swap providers by changing the URL.
- [Parameters coverage](https://docs.slng.ai/unified-api/parameters-coverage.md): Which request parameters are supported by each provider through the SLNG Unified API.
- [Why SLNG Unified API](https://docs.slng.ai/unified-api/why.md): Why a single API for multiple speech providers saves you time, reduces risk, and keeps you on the best models.
- [Voice Agents](https://docs.slng.ai/voice-agents.md): Build intelligent voice agents with outbound calls, web sessions, and webhooks.
- [Cartesia Sonic 3](https://docs.slng.ai/voices/cartesia-sonic-3.md): Browse Cartesia Sonic 3 voices with audio samples — multilingual, low-latency TTS.
- [Deepgram Aura](https://docs.slng.ai/voices/deepgram-aura.md): Browse Deepgram Aura 2 voices with audio samples, characteristics, and use cases.
- [Kugel](https://docs.slng.ai/voices/kugel.md): Browse KugelAudio Kugel voices with audio samples — multilingual TTS with expressiveness control.
- [Murf Falcon](https://docs.slng.ai/voices/murf.md): Browse Murf Falcon voices with audio samples — real-time multilingual TTS over WebSocket.
- [Orpheus](https://docs.slng.ai/voices/orpheus.md): Browse Orpheus voice models with audio samples, emotion control, and use cases.
- [Rime Arcana](https://docs.slng.ai/voices/rime-arcana.md): Browse Rime Arcana voice models with audio samples and characteristics.
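The Unified API entries above describe one endpoint pattern for every STT and TTS model, where the model_variant path segment (e.g., deepgram/nova:3, slng/openai/whisper:large-v3, deepgram/aura:2) selects the provider. A minimal sketch of that "swap providers by changing the URL" idea — the gateway host and exact path layout here are illustrative assumptions, not taken from these docs:

```python
# Hypothetical URL builder for the Unified API pattern. The base host
# and the /{kind}/{model_variant} layout are assumptions for illustration;
# only the model_variant values themselves appear in the docs above.

def unified_url(kind: str, model_variant: str,
                base: str = "wss://api.slng.ai") -> str:
    """Build an endpoint URL for a given service kind and model variant."""
    if kind not in ("stt", "tts"):
        raise ValueError("kind must be 'stt' or 'tts'")
    return f"{base}/{kind}/{model_variant}"

# Switching providers is just a different model_variant in the path:
#   unified_url("stt", "deepgram/nova:3")
#   unified_url("stt", "slng/openai/whisper:large-v3")
#   unified_url("tts", "deepgram/aura:2")
```

See the linked Unified API overview for the real endpoint pattern; the point of the sketch is only that the client code stays identical across providers.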
- [Sarvam Bulbul v3](https://docs.slng.ai/voices/sarvam-bulbul.md): Browse Sarvam Bulbul v3 voices — multilingual TTS for Indian languages.
- [Soniox TTS v1-preview](https://docs.slng.ai/voices/soniox.md): Browse Soniox TTS v1-preview voices with audio samples — real-time, low-latency text-to-speech.
- [WebSocket Integration Guide](https://docs.slng.ai/websocket-guide.md): Best practices and troubleshooting for WebSocket connections.
- [WebSocket API](https://docs.slng.ai/websockets.md): Real-time bidirectional streaming for TTS and STT services.

## OpenAPI Specs

- [tts-soniox.oas](https://docs.slng.ai/api-reference/openapi/tts-soniox.oas.yaml)
- [stt-slng.oas](https://docs.slng.ai/api-reference/openapi/stt-slng.oas.yaml)
- [batch.oas](https://docs.slng.ai/api-reference/batch/batch.oas.json)
- [tts-slng.oas](https://docs.slng.ai/api-reference/openapi/tts-slng.oas.yaml)
- [agents.oas](https://docs.slng.ai/api-reference/agents/agents.oas.yaml)
- [tts-sarvam.oas](https://docs.slng.ai/api-reference/openapi/tts-sarvam.oas.yaml)
- [tts-elevenlabs.oas](https://docs.slng.ai/api-reference/openapi/tts-elevenlabs.oas.yaml)
- [tts-deepgram.oas](https://docs.slng.ai/api-reference/openapi/tts-deepgram.oas.yaml)
- [stt-sarvam.oas](https://docs.slng.ai/api-reference/openapi/stt-sarvam.oas.yaml)
- [stt-deepgram.oas](https://docs.slng.ai/api-reference/openapi/stt-deepgram.oas.yaml)
- [bridges-unmute.oas](https://docs.slng.ai/api-reference/openapi/bridges-unmute.oas.yaml)
- [bridges-jambonz.oas](https://docs.slng.ai/api-reference/openapi/bridges-jambonz.oas.yaml)
- [bridges-cognigy.oas](https://docs.slng.ai/api-reference/openapi/bridges-cognigy.oas.yaml)
- [gateway.oas](https://docs.slng.ai/api-reference/gateway.oas.yaml)

## AsyncAPI Specs

- [tts-soniox.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-soniox.asyncapi.yaml)
- [tts-kugelaudio.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-kugelaudio.asyncapi.yaml)
- [stt-slng.asyncapi](https://docs.slng.ai/api-reference/asyncapi/stt-slng.asyncapi.yaml)
- [stt-soniox.asyncapi](https://docs.slng.ai/api-reference/asyncapi/stt-soniox.asyncapi.yaml)
- [tts-murf.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-murf.asyncapi.yaml)
- [tts-slng.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-slng.asyncapi.yaml)
- [tts-cartesia.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-cartesia.asyncapi.yaml)
- [bridges-unmute.asyncapi](https://docs.slng.ai/api-reference/asyncapi/bridges-unmute.asyncapi.yaml)
- [tts-sarvam.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-sarvam.asyncapi.yaml)
- [tts-elevenlabs.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-elevenlabs.asyncapi.yaml)
- [tts-deepgram.asyncapi](https://docs.slng.ai/api-reference/asyncapi/tts-deepgram.asyncapi.yaml)
- [stt-reson8.asyncapi](https://docs.slng.ai/api-reference/asyncapi/stt-reson8.asyncapi.yaml)
- [stt-deepgram.asyncapi](https://docs.slng.ai/api-reference/asyncapi/stt-deepgram.asyncapi.yaml)
- [bridges-jambonz.asyncapi](https://docs.slng.ai/api-reference/asyncapi/bridges-jambonz.asyncapi.yaml)
- [bridges-cognigy.asyncapi](https://docs.slng.ai/api-reference/asyncapi/bridges-cognigy.asyncapi.yaml)
- [slng.asyncapi](https://docs.slng.ai/api-reference/asyncapi/slng.asyncapi.yaml)
- [elevenlabs.asyncapi](https://docs.slng.ai/api-reference/asyncapi/elevenlabs.asyncapi.yaml)
- [deepgram.asyncapi](https://docs.slng.ai/api-reference/asyncapi/deepgram.asyncapi.yaml)
- [cognigy.asyncapi](https://docs.slng.ai/api-reference/asyncapi/cognigy.asyncapi.yaml)