# SubAgent
Agentic sub-agent that can be delegated tasks from a RealtimeAgent.
Runs an LLM-driven tool-calling loop to complete a given task, with built-in support for clarification questions, MCP server integration, and handoff from a parent voice agent.
The agent automatically exposes two special tools to the LLM:

- `done` — signals task completion and returns the final result.
- `clarify` — asks the user a question and blocks until they answer.
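The loop mechanics can be sketched in plain Python. Everything below (the `Result` shape, the `llm_step` callable) is illustrative, not the library's actual internals:

```python
from dataclasses import dataclass, field

# Hypothetical result shape; the real SubAgentResult may differ.
@dataclass
class Result:
    message: str
    success: bool
    tool_calls: list = field(default_factory=list)

def run_loop(llm_step, task, max_iterations=10):
    """Minimal sketch of the tool-calling loop described above.

    `llm_step` stands in for one LLM invocation: it takes the message
    history and returns a (tool_name, arguments) pair.
    """
    history = [{"role": "user", "content": task}]
    calls = []
    for _ in range(max_iterations):
        tool, args = llm_step(history)
        calls.append(tool)
        if tool == "done":        # special tool: task complete
            return Result(args["result"], True, calls)
        if tool == "clarify":     # special tool: would block for user input
            return Result(args["question"], False, calls)
        # Ordinary tool: execute it and feed the output back to the LLM.
        history.append({"role": "tool", "name": tool, "content": f"ran {tool}"})
    return Result("max_iterations reached", False, calls)

# A scripted "LLM" that calls one ordinary tool, then signals completion.
script = iter([("check_calendar", {}), ("done", {"result": "Meeting booked."})])
result = run_loop(lambda history: next(script), "Schedule a meeting")
print(result.success, result.message)  # True Meeting booked.
```

Note how `max_iterations` bounds the loop: a model that never calls `done` cannot spin forever.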
Example:

```python
agent = SubAgent(
    name="calendar_agent",
    description="Manages the user's calendar.",
    instructions="You are a calendar assistant ...",
    llm=OpenAIChat(model="gpt-4o"),
)

result = await agent.run("Schedule a meeting with Alice tomorrow at 3pm.")
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Unique identifier for this agent. Used as the tool name when registered as a handoff target in a `RealtimeAgent`. | *required* |
| `description` | `str` | Short description shown to the parent LLM so it knows when to delegate tasks to this agent. | *required* |
| `instructions` | `str` | System prompt defining this agent's capabilities and behavior. | *required* |
| `llm` | `BaseChatModel \| None` | LLM backend for the tool-calling loop. Must support tool/function calling. | `None` |
| `tools` | `SubAgentTools \| None` | Pre-registered tools available to the agent during its run loop. | `None` |
| `mcp_servers` | `list[MCPServer] \| None` | MCP servers connected during `prewarm()` or at the start of `run()`. | `None` |
| `max_iterations` | `int` | Maximum number of LLM invocations before the loop aborts. Guards against infinite tool-calling cycles. | `10` |
| `handoff_instructions` | `str \| None` | Extra instructions appended to the handoff tool description shown to the parent LLM. | `None` |
| `result_instructions` | `str \| None` | Instructions for how the parent agent should present the result returned by this agent to the user. | `None` |
| `holding_instruction` | `str \| None` | Message the parent agent says to the user while this agent is working (e.g. 'One moment, checking your calendar…'). | `None` |
| `context` | `T \| None` | Shared context object forwarded to all tool handlers. | `None` |
| `skills` | `list[Skill \| Path \| str] \| None` | Agent skills to make available. Each entry is either a `Skill` instance or a path to a skill directory containing a `SKILL.md` file. Skills are injected into the system prompt; with `dynamic_skills=True` the agent loads them on demand via a `load_skill` tool. | `None` |
| `dynamic_skills` | `bool` | If `True`, only a skill index is injected into the system prompt and the agent explicitly calls `load_skill` to pull in full instructions. Reduces token usage for large skill sets. | `False` |
### prewarm (async)

Prewarm MCP connections before `run()`.

Safe to call multiple times — a no-op if servers are already connected.

Returns:

| Type | Description |
|---|---|
| `Self` | Returns `self`, so the call can be chained. |
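The idempotent connect behavior can be sketched with stand-in classes (`FakeServer` and the `Agent` shape below are assumptions for illustration):

```python
import asyncio

class FakeServer:
    """Stand-in MCP server that counts real connection attempts."""
    def __init__(self):
        self.connect_count = 0
        self.connected = False

    async def connect(self):
        if not self.connected:      # no-op when already connected
            self.connect_count += 1
            self.connected = True

class Agent:
    def __init__(self, servers):
        self.servers = servers

    async def prewarm(self):
        # Connect all servers concurrently; repeat calls are harmless.
        await asyncio.gather(*(s.connect() for s in self.servers))
        return self                 # Self, so calls can chain

async def main():
    server = FakeServer()
    agent = Agent([server])
    await agent.prewarm()
    await agent.prewarm()           # second call is a no-op
    return server.connect_count

count = asyncio.run(main())
print(count)  # 1
```

Calling `prewarm()` eagerly (e.g. at session start) moves connection latency out of the first `run()`.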
### run (async)

```python
run(
    task: str,
    context: str | None = None,
    clarification_answer: str | None = None,
    clarify_call_id: str | None = None,
    resume_history: list | None = None,
) -> SubAgentResult
```

Run the tool-calling loop until the task is complete or `max_iterations` is reached.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `task` | `str` | The task or question to complete. | *required* |
| `context` | `str \| None` | Optional conversation history from the parent voice session, injected to give the agent conversational context. | `None` |
| `clarification_answer` | `str \| None` | Answer to a previous clarification question. Must be provided together with `clarify_call_id`. | `None` |
| `clarify_call_id` | `str \| None` | Tool call ID of the previous `clarify` call being answered. | `None` |
| `resume_history` | `list \| None` | Message history from a previous run that was interrupted by a clarification request. When provided, the loop resumes from this state rather than starting fresh. | `None` |
Returns:

| Type | Description |
|---|---|
| `SubAgentResult` | Final result including the message, success flag, and executed tool calls. If the agent asked a clarification question, the result carries that question so the caller can resume the run with the user's answer. |
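The clarify-and-resume round-trip implied by these parameters can be sketched caller-side. The dictionary shapes and field names below are assumptions for illustration, not the real `SubAgentResult`:

```python
def run(task, clarification_answer=None, clarify_call_id=None, resume_history=None):
    """Toy stand-in for SubAgent.run() showing the resume protocol."""
    history = list(resume_history or [{"role": "user", "content": task}])
    if clarification_answer is not None:
        # Resumed run: feed the user's answer back as the result of the
        # earlier `clarify` tool call, then finish the task.
        history.append({"role": "tool", "tool_call_id": clarify_call_id,
                        "content": clarification_answer})
        return {"success": True,
                "message": f"Booked for the {clarification_answer}.",
                "history": history}
    # First pass: the agent lacks information, so it pauses on `clarify`.
    return {"success": False, "question": "Morning or afternoon?",
            "clarify_call_id": "call_1", "history": history}

# 1) First run pauses with a pending question.
first = run("Schedule a meeting with Alice")
# 2) Caller relays the question to the user, then resumes with the answer.
second = run("Schedule a meeting with Alice",
             clarification_answer="afternoon",
             clarify_call_id=first["clarify_call_id"],
             resume_history=first["history"])
print(second["message"])  # Booked for the afternoon.
```

Passing `resume_history` back unchanged is what lets the loop pick up mid-conversation instead of re-deriving its earlier tool calls.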