
Documentation Index

Fetch the complete documentation index at: https://docs.slng.ai/llms.txt

Use this file to discover all available pages before exploring further.

Welcome to the {Tech: Europe} Paris AI Hackathon! We brought voice AI APIs and Lego boxes. You bring the ideas. The SLNG team is here all weekend. Come say hi, grab a sticker, ask us anything.
Look for Ismael Ordaz (founder) and Nicolas Grenié (DevRel) — they’re around all weekend and happy to help you build.

What is SLNG?

SLNG is one API for real-time speech. Text-to-Speech, Speech-to-Text, and Voice Agents, all under 200ms. We aggregate providers like Deepgram, Rime, Cartesia, and Kugel behind a single endpoint. You switch models by changing the URL, not your code.
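Concretely, "changing the URL" looks like this — a sketch using the model IDs from this page (`BASE` and `MODEL` are just local variable names, not anything the API requires):

```shell
# The model is a path segment in the request URL, so swapping
# providers means swapping one string -- the request code stays the same.
BASE="https://api.slng.ai/v1/bridges/unmute/tts"

MODEL="slng/deepgram/aura:2-en"   # Deepgram Aura 2, English
echo "${BASE}/${MODEL}"

MODEL="slng/rime/arcana:3-es"     # Rime Arcana 3, Spanish -- same code otherwise
echo "${BASE}/${MODEL}"
```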

Text-to-Speech

Text in, audio out. Multiple voices and languages.

Speech-to-Text

Transcribe audio files or live streams.

Voice Agents

Full voice agents that make and receive phone calls.

Get your API key

1. Create an account

Sign up at app.slng.ai. It takes less than a minute.
2. Generate an API key

Go to the API Keys page in your dashboard and create a new key. Copy it somewhere safe — you won’t be able to see it again.
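One low-friction way to keep the key handy for the snippets below is an environment variable. The `SLNG_API_KEY` name is our convention, not something the API requires:

```shell
# Export once per shell session; paste the key you copied from the dashboard.
export SLNG_API_KEY="paste-your-key-here"

# Every curl on this page can then use the variable instead of a pasted key:
#   -H "Authorization: Bearer ${SLNG_API_KEY}"
echo "${SLNG_API_KEY:+key is set}"
```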
3. Test it

Run this in your terminal to make sure everything works:
curl https://api.slng.ai/v1/bridges/unmute/tts/slng/deepgram/aura:2-en \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello from Paris!"}' \
  --output hello.wav
You should get a hello.wav audio file. If you hear “Hello from Paris!” — you’re all set.

Start building

Grab the code for whatever you need.
Text-to-Speech

Send text, get audio back.
# Change this to swap models (e.g. slng/rime/arcana:3-en, slng/rime/arcana:3-es)
MODEL="slng/deepgram/aura:2-en"

curl "https://api.slng.ai/v1/bridges/unmute/tts/${MODEL}" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello from Paris!"
  }' \
  --output hello.wav
Pick a different voice by adding "voice": "aura-2-theia-en" to the request body. See all available voices.
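For example, the same request body with an explicit voice — a sketch that only builds the JSON (the voice ID is the one mentioned above; pass `BODY` to the curl above with `-d "${BODY}"`):

```shell
# Build the TTS body with the optional "voice" field added.
TEXT="Hello from Paris!"
VOICE="aura-2-theia-en"
BODY="{\"text\": \"${TEXT}\", \"voice\": \"${VOICE}\"}"
echo "${BODY}"
# Then: curl "https://api.slng.ai/v1/bridges/unmute/tts/${MODEL}" ... -d "${BODY}" --output hello.wav
```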

Other languages

Switch the model in the URL — the rest of the code is identical. Spanish via Rime Arcana v3:
curl https://api.slng.ai/v1/bridges/unmute/tts/slng/rime/arcana:3-es \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "¡Hola desde París!"}' \
  --output hola.wav
  • Spanish, Hindi, English (HTTP): slng/rime/arcana:3-es, slng/rime/arcana:3-hi, slng/rime/arcana:3-en
  • French (WebSocket only): Cartesia Sonic 3 or Kugel
See the Models page for the full language matrix.
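Because only the model segment changes, one loop can hit several languages with identical request code. A sketch — the output filenames and the echoed labels are ours, not part of the API:

```shell
# Loop over the HTTP-capable Arcana models; only the URL changes per language.
for MODEL in slng/rime/arcana:3-en slng/rime/arcana:3-es slng/rime/arcana:3-hi; do
  LANG_CODE="${MODEL##*-}"          # en / es / hi, taken from the model ID
  echo "requesting ${LANG_CODE}: https://api.slng.ai/v1/bridges/unmute/tts/${MODEL}"
  # curl "https://api.slng.ai/v1/bridges/unmute/tts/${MODEL}" \
  #   -H "Authorization: Bearer YOUR_API_KEY" \
  #   -H "Content-Type: application/json" \
  #   -d '{"text": "Hello!"}' --output "hello-${LANG_CODE}.wav"
done
```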

Try it live

Not ready to write code yet? Try the demos in your browser — drop in your API key and go.

Live TTS demo

Type something and hear it back. Try different models and voices.

Live STT demo

Speak into your mic and watch the transcription stream in.
Source code: github.com/slng-ai/examples

What we want you to build

Use voice and speech AI in whatever way gets you excited. Here are some starting points, but go wherever the idea takes you.

Voice assistants

An agent that books appointments, handles support, tutors students. Pick a task and nail it.

Accessibility tools

Speech interfaces, screen readers, audio descriptions. Make something more people can use.

Creative audio apps

Podcasts, storytelling, audio games, voice-driven art. Surprise us.

Real-time transcription

Live captioning, meeting notes, multilingual translation. Turn speech into something useful.

Prizes

Best overall project wins a Lego Star Wars set. Demo what you built, impress the judges, take home some bricks.


FAQ

How much free credit do I get?
€4 in free credits when you sign up. That covers roughly 80 hours of audio generation or transcription, so you won’t run out this weekend.

What if I run out of credits?
Come find us at the event. We’ll top you up.

Do my credits expire after the hackathon?
Nope. Keep building after the hackathon, polish your project, start a new one. They’re yours.

Which models should I start with?
For most projects, start with:
  • English TTS: slng/deepgram/aura:2-en — low latency
  • English STT: slng/deepgram/nova:3-en — high accuracy
  • Spanish or Hindi TTS: slng/rime/arcana:3-es, slng/rime/arcana:3-hi
  • French TTS (WebSocket only): Cartesia Sonic 3 or Kugel
  • Multilingual STT: slng/deepgram/nova:3-multi — auto-detects 10+ languages
Full catalog on the Models page.

Does SLNG work with my framework or language?
Yes. It’s standard HTTP and WebSocket, so it works with anything. The Unified API follows the same URL pattern for every provider, so you can drop SLNG into an existing voice pipeline without rewriting your integration.
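As a sketch of what "without rewriting" means in practice: one tiny helper covers every provider, because the provider and model are just a path segment. The helper name is ours, purely for illustration:

```shell
# Hypothetical helper: build the TTS endpoint for any provider/model ID.
slng_tts_url() {
  printf 'https://api.slng.ai/v1/bridges/unmute/tts/%s\n' "$1"
}

slng_tts_url "slng/deepgram/aura:2-en"
slng_tts_url "slng/rime/arcana:3-hi"
# An existing pipeline just calls: curl "$(slng_tts_url "$MODEL")" ... as on this page.
```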

Do I need a credit card?
No. Sign up at app.slng.ai and you’re good to go. The free tier covers the whole hackathon.

Where can I get help?
Find us at the event; we’re around all weekend. You can also check the full docs.