Tools
First-party APIs
Bring-your-own tools for use in conversational agents
First-party API tools allow you to connect agents to your internal systems—databases, services, and business logic—using custom, authenticated APIs.
Each tool has:
- A unique name
- A human-readable description
- Input and output schemas (JSON Schema, OpenAPI 3.0.x or 3.1.x compatible)
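As a rough illustration, a tool definition can be thought of as an object like the one below. The `check_order_status` name, field names, and overall shape are hypothetical; the exact structure depends on how you register the tool in the dashboard.

```typescript
// Hypothetical sketch of a first-party API tool definition.
// The field names are illustrative, not a fixed schema.
const checkOrderStatusTool = {
  // A unique name
  name: "check_order_status",
  // A human-readable description that doubles as a hint for the agent
  description:
    "Look up the current status and expected delivery date of a customer's order. " +
    "Use this when the customer asks where their order is.",
  // Input schema (JSON Schema, compatible with OpenAPI 3.0.x / 3.1.x)
  inputSchema: {
    type: "object",
    properties: {
      orderId: {
        type: "string",
        description: "The customer's order number from the confirmation email.",
      },
    },
    required: ["orderId"],
  },
  // Output schema describing what the connected API returns
  outputSchema: {
    type: "object",
    properties: {
      status: {
        type: "string",
        description: "A full sentence describing the order's current status.",
      },
      expectedDelivery: {
        type: "string",
        description: "Expected delivery date, labeled with its time zone.",
      },
    },
  },
};
```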
Ways to create tools
- Manual entry: Create tools one by one in the dashboard.
- OpenAPI spec upload (coming soon): Generate multiple tools by uploading an OpenAPI spec.
- MCP server discovery (coming soon): Add your MCP server to automatically discover its tools and keep them in sync.
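To sketch the OpenAPI-based path (hedged, since the upload feature is not yet available), each operation in a spec would typically map to one tool. The fragment below is written as a TypeScript object literal for consistency with the other sketches; the `/orders/{orderId}` path and `getOrderStatus` operation are made up.

```typescript
// Hypothetical OpenAPI 3.1 fragment, written as a TypeScript object literal.
// An uploaded spec along these lines could yield one tool per operation.
const openApiFragment = {
  openapi: "3.1.0",
  info: { title: "Orders API", version: "1.0.0" },
  paths: {
    "/orders/{orderId}": {
      get: {
        operationId: "getOrderStatus", // a natural candidate for the tool name
        summary: "Look up the status of an order by its ID.",
        parameters: [
          { name: "orderId", in: "path", required: true, schema: { type: "string" } },
        ],
        responses: {
          "200": {
            description: "The order's current status.",
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: { status: { type: "string" } },
                },
              },
            },
          },
        },
      },
    },
  },
};
```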
Best practices for creating first-party API tools
These practices reflect ongoing developments in building LLM tools. We’ll continue refining them as patterns emerge.
- Make your tools workflow-centric, not resource-centric. Avoid surfacing raw CRUD operations directly. Instead, expose higher-level workflows that match real tasks users might ask for. This simplifies LLM decision-making and reduces ambiguity (see the contrast sketched after this list).
- Write clear, intent-aligned descriptions. The tool description acts as a hint to the agent—help it understand when and why to use a tool.
- Return human-readable errors. Avoid cryptic error codes. Use natural-language error messages so the LLM can explain the issue, ask clarifying questions, or course-correct intelligently (an example contrast follows this list).
- Return time values in local time (or UTC). If the tool returns time-related data, format it relative to the customer’s local time zone when possible, or default to UTC with clear labeling, as in the response sketch after this list.
- Avoid enums in responses. Instead of returning raw enum values, return full phrases that describe the concept clearly and are directly usable in conversation (also shown in the response sketch below).
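To make the workflow-centric guidance concrete, here is a hedged contrast between a resource-centric CRUD-style tool and a task-shaped alternative. The tool names (`update_subscription_row`, `pause_subscription`) and schemas are hypothetical.

```typescript
// Resource-centric (avoid): mirrors a database operation and forces the
// agent to know table semantics and valid field combinations.
const crudStyleTool = {
  name: "update_subscription_row",
  description: "Update a row in the subscriptions table.",
  inputSchema: {
    type: "object",
    properties: {
      subscriptionId: { type: "string" },
      fields: { type: "object", description: "Arbitrary column/value pairs to write." },
    },
    required: ["subscriptionId", "fields"],
  },
};

// Workflow-centric (prefer): matches a task a customer would actually ask for.
const workflowStyleTool = {
  name: "pause_subscription",
  description:
    "Pause a customer's subscription until a given date. Use this when the customer " +
    "asks to take a break without cancelling.",
  inputSchema: {
    type: "object",
    properties: {
      subscriptionId: { type: "string" },
      resumeDate: { type: "string", description: "Date to resume, in the customer's local time zone." },
    },
    required: ["subscriptionId", "resumeDate"],
  },
};
```

A workflow-shaped tool like this lets the agent map a single user request to a single call, instead of reasoning about which columns to write.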
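For error handling, the contrast below sketches a human-readable failure next to a cryptic one. The payload shapes are illustrative, not a required format.

```typescript
// Cryptic (avoid): the agent can only relay an opaque code.
const crypticError = { error: "ERR_4092", code: 4092 };

// Human-readable (prefer): the agent can explain the problem and ask a follow-up.
const readableError = {
  error:
    "The order number ORD-1234 was not found. It may have been mistyped; " +
    "order numbers start with 'ORD-' followed by six digits. Please confirm the number with the customer.",
};
```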
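Finally, a sketch of a response payload that follows the time-zone and enum guidance. The field names and values are invented for illustration.

```typescript
// Avoid: raw enum values and unlabeled timestamps.
const rawResponse = {
  status: "IN_TRANSIT_3",
  eta: 1735660800, // Unix epoch seconds; hard for the agent to phrase naturally
};

// Prefer: conversational phrases and clearly labeled times.
const conversationalResponse = {
  status: "The package is in transit and has left the regional sorting facility.",
  eta: "Expected delivery: Tuesday, December 31, 2024 by 8:00 AM Pacific Time (UTC-8).",
};
```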