Tools, Functions, and Agentic Workflows
Integrate function calling and tools, design planner–executor patterns, and manage errors, scopes, and observability.
Tool Selection Prompts — The Choosy Agent's Dating Profile
"A tool is only as useful as the question that chooses it." — Mostly true, occasionally dramatic.
You're already familiar with Function Calling Patterns (how we actually call a tool) and Parameter Schema Design (how we describe what the tool needs). Now we do the matchmaking: how does an agent decide which tool to use? This is where Tool Selection Prompts live — the explicit instructions we give the model so it can pick the right tool, at the right time, safely, and without emotional baggage.
We've previously covered safety and ethics. Treat that like your seatbelt — mandatory. Tool selection is where safety meets strategy: don't let the agent email a user's private info because it thought "fastest" meant "least consent".
Why Tool Selection Prompts matter
- Efficiency: Avoid unnecessary calls (and the cost and latency that come with them). Pick the calculator, not the web search, to add 2+2.
- Accuracy: Some tools are specialized. Let the agent prefer the one with the right domain.
- Safety & Privacy: The selection prompt enforces policies (no PII to external API, require permission before emailing).
- Auditability: A well-structured prompt produces reasoning you can log for later review.
Anatomy of a Good Tool Selection Prompt
Think of it as a tiny playbook the model reads before choosing. Include these parts:
- Goal — What outcome matters right now? (be concrete)
- Tool Inventory — Short descriptions of available tools and their primary capabilities
- Selection Criteria — Ordered checklist: precision, latency, privacy, permission needed, cost
- Constraints & Safety Rules — Explicit bans (e.g., "never send PII to tool X") and required confirmations
- Fallback & Confidence Handling — What to do if confidence < threshold
- Logging Requirements — What to return for audit: chosen_tool, reason, confidence_score
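The parts above can be assembled programmatically. A minimal sketch (the tool names, criteria, and helper function here are illustrative, not a standard API):

```python
# Sketch: render the six anatomy parts as one compact selection prompt.
# build_selection_prompt and all tool names below are illustrative.

def build_selection_prompt(goal, tools, criteria, safety_rules, threshold=0.6):
    """Assemble goal, inventory, criteria, safety rules, and fallback
    handling into a single prompt string for the model."""
    inventory = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        f"You are a tool-selection assistant. Goal: {goal}\n"
        f"Tools available:\n{inventory}\n"
        f"Selection criteria (in order): {', '.join(criteria)}\n"
        f"Safety rules: {'; '.join(safety_rules)}\n"
        "Respond in JSON with keys: chosen_tool, reason, confidence, action.\n"
        f"If confidence < {threshold}, do not call a tool; "
        "ask a clarifying question instead."
    )

prompt = build_selection_prompt(
    goal="Answer the user's question with minimal external calls.",
    tools={
        "db_query": "internal database, authoritative for account info",
        "calculator": "numeric computation, offline",
    },
    criteria=["correctness", "privacy", "latency", "cost"],
    safety_rules=["never send user PII to external tools"],
)
print(prompt)
```

Keeping the assembly in code means the inventory and rules stay in sync with the tools you actually register, instead of drifting in a hand-edited prompt.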
Prompt Template (copy-pasteable)
You are a tool-selection assistant. Goal: <one-sentence goal>. Tools available:
- tool_a: <one-line capability>
- tool_b: <one-line capability>
Selection criteria (in order): <primary>, <secondary>, <privacy>, <cost>. Safety rules: <e.g., do not expose PII to external_http>. If a tool requires user permission (email/send), ask the user first. Choose the best tool, and respond in JSON:
{
  "chosen_tool": "<tool_id or null>",
  "reason": "<short rationale referencing criteria>",
  "confidence": <0.0-1.0>,
  "action": "<call | ask_user | fallback>"
}
If confidence < 0.6, do not call a tool; ask a clarifying question instead.
Replace placeholders with the actual context. Keep it concise — too much prose makes the model woozy.
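On the receiving side, never trust the model's reply blindly. A hedged sketch of validating the JSON reply and enforcing the confidence threshold before any call happens (key names follow the template above; the helper itself is illustrative):

```python
import json

# Keys and actions mirror the prompt template's JSON contract.
REQUIRED_KEYS = {"chosen_tool", "reason", "confidence", "action"}
VALID_ACTIONS = {"call", "ask_user", "fallback"}

def validate_selection(raw_reply, threshold=0.6):
    """Parse the model's JSON reply and enforce the confidence threshold.
    Any malformed reply degrades to ask_user rather than a tool call."""
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError:
        return "ask_user", {"reason": "unparseable selection reply"}
    if not REQUIRED_KEYS <= data.keys() or data["action"] not in VALID_ACTIONS:
        return "ask_user", {"reason": "selection reply missing required fields"}
    if data["action"] == "call" and data["confidence"] < threshold:
        return "ask_user", {"reason": "confidence below threshold"}
    return data["action"], data

action, info = validate_selection(
    '{"chosen_tool": "calculator", "reason": "purely numeric", '
    '"confidence": 0.92, "action": "call"}'
)
print(action, info["chosen_tool"])  # call calculator
```

Enforcing the threshold in code, not just in the prompt, gives you a hard guarantee even when the model ignores its instructions.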
Example: Choosing Between Web Search, DB Query, and Calculator
Prompt fragment fed before the agent evaluates a user request:
Goal: Answer the user's question accurately and with minimal external calls.
Tools:
- web_search: up-to-date web results, external_http, may return PII
- db_query: internal database, authoritative for account info
- calculator: numeric computation, offline
Selection criteria: 1) correctness for domain-specific facts 2) minimal privacy exposure 3) speed 4) cost
Safety: never send user PII to web_search. If request concerns account data, prefer db_query. If the request is purely numerical, use calculator.
Respond with chosen_tool, reason, confidence, action.
If the user asks: "What's my current account balance?" the agent should pick db_query with high confidence and note the privacy rule. If the user asks: "What's the derivative of x^2?" it should pick calculator.
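To see the expected outputs concretely, here is a toy rule-based stand-in for the model's decision, mirroring the rules in the prompt fragment (purely illustrative; a real agent delegates this choice to the model):

```python
import re

def mock_select(user_query):
    """Toy stand-in for the model's choice, following the fragment's rules:
    account data -> db_query, purely numerical -> calculator,
    anything else -> web_search."""
    q = user_query.lower()
    if "account" in q or "balance" in q:
        return {"chosen_tool": "db_query", "confidence": 0.9,
                "reason": "account data; privacy rule prefers internal DB",
                "action": "call"}
    if re.search(r"derivative|integral|[-+*/^=]|\d", q):
        return {"chosen_tool": "calculator", "confidence": 0.85,
                "reason": "purely numerical request", "action": "call"}
    return {"chosen_tool": "web_search", "confidence": 0.7,
            "reason": "general factual query", "action": "call"}

print(mock_select("What's my current account balance?")["chosen_tool"])  # db_query
print(mock_select("What's the derivative of x^2?")["chosen_tool"])       # calculator
```

A mock like this is also handy in unit tests, so your routing and safety plumbing can be exercised without paying for model calls.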
Advanced Patterns: Multi-step Selection & Function Schema Hooks
- Two-stage decision: First stage picks a capability cluster (e.g., "fetch factual external data" vs "compute"). Second stage picks a specific tool within the cluster based on cost, latency, and permissions.
- Parameter-aware selection: Use your Parameter Schema Design outputs to determine if the tool supports required parameters. If a tool's function schema lacks a needed parameter, disqualify it automatically.
- Confidence-driven chaining: If chosen_tool.confidence < threshold, trigger a clarifying question rather than a tool call. This reduces hallucination-driven actions.
Pseudocode:
stage1 = pick_cluster(user_query)
candidates = filter_tools_by_cluster(stage1)
candidates = filter_by_schema_support(candidates, required_params)
best = rank(candidates, criteria)
if best.confidence < 0.6: ask_clarify()
else: call(best)
This bridges naturally to function calling: once a tool is chosen, you map the request onto its function schema and make the call.
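The `filter_by_schema_support` step from the pseudocode can be sketched concretely, assuming each tool carries a JSON-Schema-style `parameters` block (the shape used by common function-calling APIs; the exact field names here are an assumption):

```python
def filter_by_schema_support(candidates, required_params):
    """Disqualify any tool whose function schema lacks a needed parameter.
    Assumes each tool dict has a JSON-Schema-style 'parameters' block
    with a 'properties' mapping."""
    supported = []
    for tool in candidates:
        props = tool.get("parameters", {}).get("properties", {})
        if all(p in props for p in required_params):
            supported.append(tool)
    return supported

# Illustrative tool definitions:
tools = [
    {"name": "db_query",
     "parameters": {"properties": {"account_id": {}, "field": {}}}},
    {"name": "web_search",
     "parameters": {"properties": {"query": {}}}},
]
ok = filter_by_schema_support(tools, ["account_id"])
print([t["name"] for t in ok])  # ['db_query']
```

Automatic disqualification like this catches mismatches before the model can hallucinate a parameter the tool never accepted.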
Safety & Ethics: Practical Rules to Embed
- Explicit permission: If an action will send or expose user data externally, require explicit user confirmation.
- Least privilege: Prefer internal or offline tools to external ones unless necessary.
- Red-team traps: Add policy checks to prevent tool selection for disallowed tasks (e.g., do not select external HTTP tool to facilitate wrongdoing).
- Logging: Always return a short rationale with the chosen tool to support audits.
If it sounds like paranoia, call it good engineering.
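The PII hard-stop rule can be enforced in code as a last line of defense before any external call. A minimal sketch; the regex patterns are deliberately crude illustrations, and a production system should use a vetted PII detector:

```python
import re

# Illustrative PII shapes only -- not a complete or reliable detector.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-shaped number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
]
EXTERNAL_TOOLS = {"web_search", "external_http"}  # assumed tool IDs

def enforce_pii_hard_stop(chosen_tool, payload):
    """Block the call if an external tool would receive PII-looking text;
    downgrade to ask_user so a human confirms explicitly."""
    if chosen_tool in EXTERNAL_TOOLS:
        if any(p.search(payload) for p in PII_PATTERNS):
            return "ask_user"
    return "call"

print(enforce_pii_hard_stop("web_search", "email jane@example.com about it"))  # ask_user
print(enforce_pii_hard_stop("calculator", "2+2"))                              # call
```

Pairing the prompt-level ban with this code-level check means a single model slip cannot leak data on its own.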
Quick Reference Table
| What you want | Prefer tool | Why | Safety note |
|---|---|---|---|
| Fresh web facts | web_search | Up-to-date | Sanitize queries, strip PII |
| Account details | db_query | Authoritative | Requires auth & consent |
| Math or conversion | calculator | Deterministic | Offline, safe |
| Send email | email_sender | Direct action | Require explicit confirmation |
Common Failure Modes and Fixes
- Agent chooses wrong tool because prompt omitted a constraint -> Fix: be explicit in Selection Criteria.
- Agent calls external tool with PII -> Fix: add a hard-stop rule and require the model to check for PII before selecting.
- Low-confidence calls cause errors -> Fix: implement confidence thresholds and clarification fallback.
Closing: Your Mini-Checklist Before Deploy
- Did you enumerate tools and capabilities? ✔
- Did you order selection criteria (accuracy, privacy, latency)? ✔
- Did you require permission for actions with privacy/cost implications? ✔
- Did you add a confidence threshold and logging output? ✔
Final line: Make your agent choose like a responsible librarian, not a caffeine-fueled intern. Be explicit, be strict on privacy, and make it justify itself. Your future auditors (and users) will thank you — and maybe even give you cookies.