# Configuring voice agents
The API reference covers every field on the agent object. This page covers the parts that need more context than a schema can give: system prompt structure, tool configuration, and template variables. You can manage agents through the API or through Agent Infra. For working code examples (create, update, delete), see Agent API examples.

## Writing a system prompt
A system prompt tells the agent who it is, how to talk, and what to do on the call. We recommend splitting it into five sections:

| Section | Purpose |
|---|---|
| Identity | Name, role, and reason for calling |
| Style | Tone, sentence length, filler words |
| Response guidelines | One question at a time, how to handle mishearing, interruptions |
| Task and conversation flow | Step-by-step script with `<wait for response>` markers |
| Guardrails | Topics to avoid, how to handle off-script questions |
Full example: healthcare outreach agent
- Keep it spoken. The output goes through TTS, so write the way you’d actually talk. No markdown formatting, no numbered lists in the agent’s responses.
- One question per turn. Stacking questions confuses both the caller and the STT model.
- Use `<wait for response>` markers. They make the conversation flow explicit for the LLM.
- Normalize for speech. Phone numbers, dates, and times should be written out the way you’d say them aloud.
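Putting the five sections together, here is a minimal prompt skeleton. The persona, wording, and template variables (`{{practice_name}}`, `{{patient_name}}`) are illustrative, not the full healthcare example:

```python
# A minimal system prompt skeleton following the five recommended sections.
# All wording here is illustrative; adapt each section to your own use case.
SYSTEM_PROMPT = """\
Identity:
You are Ava, a scheduling assistant calling on behalf of {{practice_name}}.

Style:
Speak in short, natural sentences. No markdown, no lists read aloud.

Response guidelines:
Ask one question per turn. If you mishear, politely ask the caller to repeat.

Task and conversation flow:
Greet the caller and confirm you are speaking with {{patient_name}}.
<wait for response>
Offer the available appointment time and ask if it works for them.
<wait for response>

Guardrails:
Do not give medical advice. Redirect billing questions to the office line.
"""

SECTIONS = ["Identity", "Style", "Response guidelines",
            "Task and conversation flow", "Guardrails"]
```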
## Tools
Agents support four tool types. You pass them in the `tools` array when creating or updating an agent.
### Template tools
Two built-in templates: `hangup` and `voicemail_detection`. These handle call control without any backend setup.
`execution_policy.pre_action_message` tells the agent to say something before executing the tool. For `hangup`, the agent will speak the text and then end the call.
You can override the default tool prompt with the `prompt` field if you want to change when the LLM decides to use the tool.
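As a sketch, a `hangup` template tool entry with a pre-action message might look like the following (the exact payload shape is defined in the API reference; treat the field layout here as an approximation):

```python
# Hypothetical sketch of a template tool entry in the agent's "tools" array.
# The agent speaks the pre_action_message text, then ends the call.
hangup_tool = {
    "type": "template",
    "name": "hangup",
    "execution_policy": {
        "pre_action_message": {
            "text": "Thanks for your time today. Goodbye!",
        },
    },
    # Optional: override the default tool prompt to change when the
    # LLM decides to end the call.
    "prompt": "End the call once the caller confirms they have no more questions.",
}
```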
### Current date/time tool
Use the `current_datetime` built-in when the agent needs reliable time-sensitive context without calling your backend.
The model invokes it as the `get_current_datetime` tool, which returns a JSON object with:
`timezone`, `local_datetime`, `local_date`, `local_time`, `day_of_week`, `utc_offset`, `is_dst`
"source": "system" and system.triggers if you want the runtime to inject current date/time context automatically on events like call_start.
### Webhook tools
Webhook tools call your backend over HTTP. The LLM decides when to invoke them based on the `name` and `description` you provide. Parameters follow JSON Schema.
"auth": { "type": "bearer", "token": "..." }sends anAuthorization: Bearer <token>header"auth": { "type": "hmac", "secret": "..." }sends anX-Signature-256header (HMAC-SHA256 of the request body)
"wait_for_response": false when you don’t need the result back in the conversation, like logging a call event to your CRM.
Result visibility — Use `"show_results_to_llm": false` when a webhook should run but the successful response body should stay hidden from the model. Use `llm_result_instructions` to tell the model how to use a successful response when it is visible. `llm_result_instructions` is static configuration and does not support runtime personalization.
URL personalization — Webhook URLs may use `{{variables}}`, but only in path segments and query parameter values. The scheme, host, port, credentials, fragments, and query parameter names must remain literal.
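A sketch combining these options into one webhook tool definition (the endpoint, tool name, and parameter names are hypothetical; the field layout approximates the options described above):

```python
# Hypothetical webhook tool definition. The LLM decides when to call it
# based on name/description; "parameters" follows JSON Schema.
lookup_tool = {
    "type": "webhook",
    "name": "lookup_appointment",
    "description": "Look up the caller's next appointment by patient ID.",
    # {{patient_id}} is allowed here because it sits in a path segment;
    # the scheme and host must stay literal.
    "url": "https://api.example.com/patients/{{patient_id}}/appointments",
    "auth": {"type": "bearer", "token": "YOUR_API_TOKEN"},
    "parameters": {
        "type": "object",
        "properties": {
            "date_range": {
                "type": "string",
                "description": "e.g. 'next 30 days'",
            },
        },
        "required": [],
    },
    # The result comes back into the conversation...
    "wait_for_response": True,
    # ...with static instructions on how to phrase it.
    "llm_result_instructions": "Read the appointment date and time aloud in words.",
}
```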
### System-triggered webhooks
By default, webhook tools are `"source": "contextual"` — the LLM calls them during conversation. Set `"source": "system"` to trigger them automatically on specific events instead.
Supported triggers: `call_start`, `first_user_message`, `call_end`, `tool_succeeded`, `tool_failed`. For `tool_succeeded` and `tool_failed`, you also need to set `source_tool_id` to specify which tool’s outcome fires the trigger.
Arguments can pull from several sources:
| Source type | What it provides |
|---|---|
| `constant` | A fixed value you define |
| `transcript_messages` | Recent conversation transcript (bounded by `max_messages`) |
| `call_id`, `room_name`, `job_id` | IDs from the current session |
| `agent_id`, `agent_name` | The agent’s own metadata |
| `phone_number` | Caller’s phone number |
| `first_user_message` | The first thing the caller said |
| `call_end_reason` | Why the call ended |
`description`, `llm_result_instructions`, and `execution_policy.pre_action_message.text` must remain static. Only `url` and `system.arguments[*].source.template` support runtime personalization for webhook tools.
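Putting the trigger and argument sources together, a sketch of a system-triggered webhook that logs the outcome when the call ends (the URL and argument names are hypothetical, and the argument-source layout approximates the table above):

```python
# Hypothetical system-triggered webhook: fires automatically on call_end
# instead of being invoked by the LLM mid-conversation.
call_log_tool = {
    "type": "webhook",
    "name": "log_call_outcome",
    "url": "https://crm.example.com/call-logs",
    "source": "system",
    "system": {
        "triggers": ["call_end"],
        "arguments": [
            # IDs and metadata pulled from the current session.
            {"name": "call_id", "source": {"type": "call_id"}},
            {"name": "reason", "source": {"type": "call_end_reason"}},
            # Recent transcript, bounded so the payload stays small.
            {"name": "transcript",
             "source": {"type": "transcript_messages", "max_messages": 20}},
            # A fixed value defined in configuration.
            {"name": "campaign",
             "source": {"type": "constant", "value": "q3-outreach"}},
        ],
    },
    # Fire-and-forget: no result needed back in the conversation.
    "wait_for_response": False,
}
```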
### Human transfer
Transfers the caller to a phone number. Requires an outbound SIP trunk on the agent. The LLM uses the `name` and `description` to decide when to transfer, just like webhook tools.
## Idle nudges
`idle_nudges` configures what the runtime should do when the caller goes silent for too long.
If you omit `idle_nudges`, the backend can apply language-specific defaults when returning the agent configuration.
## Template variables
Use `{{variable_name}}` anywhere in the `system_prompt` or `greeting` to inject values at runtime.
There are two layers:
- `template_defaults` — Default values set on the agent itself. These apply to every call unless overridden.
- `arguments` — Per-call overrides passed when dispatching a call.
With both layers set, `patient_name` resolves to “Maria” (from the call arguments) and `practice_name` resolves to “Greenfield Family Medicine” (from the agent defaults).
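The precedence can be sketched as per-call arguments overriding agent-level defaults (the helper below is ours, not part of the API; the variable names come from the example above):

```python
def resolve_template(template: str, defaults: dict, arguments: dict) -> str:
    """Per-call arguments take precedence over agent-level template_defaults."""
    values = {**defaults, **arguments}  # arguments win on key collisions
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

greeting = resolve_template(
    "Hi {{patient_name}}, this is {{practice_name}} calling.",
    defaults={"practice_name": "Greenfield Family Medicine",
              "patient_name": "there"},
    arguments={"patient_name": "Maria"},
)
# → "Hi Maria, this is Greenfield Family Medicine calling."
```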
The API response includes a `template_variables` field that lists every `{{variable}}` found in the prompt, along with whether it has a default value. This is useful for validating your templates before dispatching calls.
Per-call arguments are limited to 32 keys, with keys up to 64 characters, values up to 1024
characters, and a combined value payload up to 8192 characters.
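A sketch of client-side validation against those limits before dispatching a call (the limits come from the text above; the helper name and error messages are ours):

```python
def validate_call_arguments(arguments: dict) -> None:
    """Check per-call template arguments against the documented limits:
    at most 32 keys, keys up to 64 chars, values up to 1024 chars,
    combined value payload up to 8192 chars."""
    if len(arguments) > 32:
        raise ValueError("too many argument keys (max 32)")
    total = 0
    for key, value in arguments.items():
        if len(key) > 64:
            raise ValueError(f"key too long: {key!r} (max 64 chars)")
        if len(value) > 1024:
            raise ValueError(f"value too long for key {key!r} (max 1024 chars)")
        total += len(value)
    if total > 8192:
        raise ValueError("combined value payload exceeds 8192 chars")

validate_call_arguments({"patient_name": "Maria"})  # passes silently
```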
## Runtime variables
Use `runtime_variables` when the model needs to capture call-scoped values during the conversation and reuse them later in tool configuration.
These values:
- exist only for the active call or web session
- are set by the model through the built-in `set_runtime_variables` tool
- can be referenced in webhook URLs and system tool template arguments using the same `{{variable_name}}` syntax
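A sketch of how a captured value flows into a webhook URL. The substitution itself is done by the runtime; the functions and the `order_id` variable here are a hypothetical simulation of that flow:

```python
# Simulates the runtime-variable flow: the model calls the built-in
# set_runtime_variables tool during the call, and the runtime later
# substitutes {{...}} references in webhook URL templates.
runtime_variables: dict = {}

def set_runtime_variables(values: dict) -> None:
    # Stand-in for the built-in tool: stores call-scoped values.
    runtime_variables.update(values)

def render_url(template: str) -> str:
    # Stand-in for the runtime's substitution step.
    url = template
    for name, value in runtime_variables.items():
        url = url.replace("{{" + name + "}}", value)
    return url

set_runtime_variables({"order_id": "A-1042"})
url = render_url("https://api.example.com/orders/{{order_id}}/status")
# → "https://api.example.com/orders/A-1042/status"
```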
## Next steps
Agent API examples
Create, test, update, and delete agents with code.
Dispatching calls
Outbound calls, template variables, and batch dispatch.