SLNG API Documentation

Endpoint: https://api.slng.ai

Speech to Text [STT]

POST https://api.slng.ai/v1/stt

Submit audio for transcription.

Speech to Text [STT] Request Body

  • audio: string · binary · required
  • model: string
  • config: string

Speech to Text [STT] Responses

Transcription result

  • text: string
  • model: string
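
Example request (a minimal sketch in Python using the requests library; the multipart field names match the request body above, but the bearer-token Authorization header, the placeholder API key, and the illustrative model value are assumptions not confirmed by this documentation):

    import requests

    API_KEY = "YOUR_API_KEY"  # assumption: bearer-token auth; substitute a real key

    # The audio field is sent as a multipart file upload; model and config are optional strings.
    with open("sample.wav", "rb") as f:
        resp = requests.post(
            "https://api.slng.ai/v1/stt",
            headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
            files={"audio": f},                              # required binary audio
            data={"model": "whisper-v3"},                    # illustrative value; accepted names are not listed here
        )
    resp.raise_for_status()
    result = resp.json()
    print(result["text"], result["model"])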

Whisper-v3 [STT]

POST https://api.slng.ai/v1/stt/whisper-v3

Transcribe audio using Whisper-v3 model.

Whisper-v3 [STT] Request Body

  • audio: string · binary · required
  • language: string

Whisper-v3 [STT] Responses

Transcription result

  • text: string
  • language: string
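
Example (same sketch as above, against the Whisper-v3 path; the language code format, e.g. "en", is an assumption):

    import requests

    with open("meeting.mp3", "rb") as f:
        resp = requests.post(
            "https://api.slng.ai/v1/stt/whisper-v3",
            headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
            files={"audio": f},                                # required binary audio
            data={"language": "en"},                           # optional; ISO-style code assumed
        )
    print(resp.json()["text"])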

Whisper-v3-turbo [STT]

POST https://api.slng.ai/v1/stt/whisper-v3-turbo

Transcribe audio using Whisper-v3-turbo model.

Whisper-v3-turbo [STT] Request Body

  • audio: string · binary · required
  • language: string

Whisper-v3-turbo [STT] Responses

Transcription result

  • text: string
  • language: string
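
The request shape is identical to Whisper-v3; only the path changes. A small helper parameterized over the endpoint path, as a sketch (auth header and field handling are the same assumptions as above):

    import requests

    def transcribe(path: str, audio_path: str, **fields) -> dict:
        """POST an audio file to an SLNG STT endpoint and return the parsed JSON."""
        with open(audio_path, "rb") as f:
            resp = requests.post(
                f"https://api.slng.ai{path}",
                headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
                files={"audio": f},                                # required binary audio
                data=fields,                                       # optional string fields
            )
        resp.raise_for_status()
        return resp.json()

    # The same call shape works for /v1/stt/whisper-v3 by swapping the path.
    print(transcribe("/v1/stt/whisper-v3-turbo", "call.wav", language="en")["text"])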

OpenAI Whisper [STT]

POST https://api.slng.ai/v1/stt/openai-whisper

Transcribe audio using OpenAI Whisper model.

OpenAI Whisper [STT] Request Body

  • audio: string · binary · required
  • language: string
  • model: string · enum
    Enum values: whisper-1

OpenAI Whisper [STT] Responses

Transcription result

  • text: string
  • language: string
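
Example (a hedged sketch; "whisper-1" is the only documented model enum value, and the rest follows the same assumptions as the examples above):

    import requests

    with open("interview.wav", "rb") as f:
        resp = requests.post(
            "https://api.slng.ai/v1/stt/openai-whisper",
            headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
            files={"audio": f},                                # required binary audio
            data={"model": "whisper-1", "language": "en"},     # model must be the documented enum value
        )
    out = resp.json()
    print(out["language"], out["text"])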

WhisperX Speaker Diarization

POST https://api.slng.ai/v1/dia/whisperx

Separate speakers from a conversation using WhisperX.

WhisperX Speaker Diarization Request Body

  • audio: string · binary · required
  • language: string

WhisperX Speaker Diarization Responses

Diarization output

  • speakers: object[]
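
Example (a sketch under the same auth assumptions as above; the response documents only a speakers array of objects, so each entry is printed as-is rather than assuming per-entry fields):

    import requests

    with open("conversation.wav", "rb") as f:
        resp = requests.post(
            "https://api.slng.ai/v1/dia/whisperx",
            headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
            files={"audio": f},                                # required binary audio
            data={"language": "en"},                           # optional; ISO-style code assumed
        )
    for entry in resp.json().get("speakers", []):
        # Per-entry fields (speaker label, timestamps, text) are not documented here.
        print(entry)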