
Generative AI: Prompt Engineering Basics

Tools, Functions, and Agentic Workflows


Integrate function calling and tools, design planner–executor patterns, and manage errors, scopes, and observability.

Tool Selection Prompts — The Choosy Agent's Dating Profile

"A tool is only as useful as the question that chooses it." — Mostly true, occasionally dramatic.

You're already familiar with Function Calling Patterns (how we actually call a tool) and Parameter Schema Design (how we describe what the tool needs). Now we do the matchmaking: how does an agent decide which tool to use? This is where Tool Selection Prompts live — the explicit instructions we give the model so it can pick the right tool, at the right time, safely, and without emotional baggage.

We've previously covered safety and ethics. Treat that like your seatbelt — mandatory. Tool selection is where safety meets strategy: don't let the agent email a user's private info because it thought "fastest" meant "least consent".


Why Tool Selection Prompts matter

  • Efficiency: Avoid unnecessary calls (and their cost and latency). Pick the calculator, not the web search, to add 2+2.
  • Accuracy: Some tools are specialized. Let the agent prefer the one with the right domain.
  • Safety & Privacy: The selection prompt enforces policies (no PII to external API, require permission before emailing).
  • Auditability: A well-structured prompt produces reasoning you can log for later review.

Anatomy of a Good Tool Selection Prompt

Think of it as a tiny playbook the model reads before choosing. Include these parts:

  1. Goal — What outcome matters right now? (be concrete)
  2. Tool Inventory — Short descriptions of available tools and their primary capabilities
  3. Selection Criteria — Ordered checklist: precision, latency, privacy, permission needed, cost
  4. Constraints & Safety Rules — Explicit bans (e.g., "never send PII to tool X") and required confirmations
  5. Fallback & Confidence Handling — What to do if confidence < threshold
  6. Logging Requirements — What to return for audit: chosen_tool, reason, confidence_score
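
One way to keep these six parts consistent across agents is to assemble the playbook programmatically rather than hand-editing prose per deployment. A minimal sketch — the function name, tool entries, and field names are illustrative, not a standard API:

```python
def build_selection_prompt(goal, tools, criteria, safety_rules,
                           confidence_threshold=0.6):
    """Assemble the six-part tool-selection playbook into one prompt string."""
    tool_lines = "\n".join(f"- {t['id']}: {t['capability']}" for t in tools)
    return (
        f"Goal: {goal}\n"
        f"Tools available:\n{tool_lines}\n"
        f"Selection criteria (in order): {', '.join(criteria)}\n"
        f"Safety rules: {'; '.join(safety_rules)}\n"
        f"If confidence < {confidence_threshold}, do not call a tool; "
        "ask a clarifying question instead.\n"
        "Respond in JSON with chosen_tool, reason, confidence, action."
    )

# Hypothetical inventory for illustration:
prompt = build_selection_prompt(
    goal="Answer the user's question with minimal external calls.",
    tools=[{"id": "calculator", "capability": "numeric computation, offline"}],
    criteria=["correctness", "privacy", "speed", "cost"],
    safety_rules=["never send user PII to external tools"],
)
print(prompt)
```

Generating the prompt this way also gives you one place to version and audit the safety rules.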

Prompt Template (copy-pasteable)

You are a tool-selection assistant. Goal: <one-sentence goal>. Tools available:
- tool_a: <one-line capability>
- tool_b: <one-line capability>
Selection criteria (in order): <primary>, <secondary>, <privacy>, <cost>.
Safety rules: <e.g., do not expose PII to external_http>.
If a tool requires user permission (email/send), ask the user first.
Choose the best tool, and respond in JSON:
{
  "chosen_tool": "<tool_id or null>",
  "reason": "<short rationale referencing criteria>",
  "confidence": <0.0-1.0>,
  "action": "call" | "ask_user" | "fallback"
}

If confidence < 0.6, do not call a tool; ask a clarifying question instead.

Replace placeholders with the actual context. Keep it concise — too much prose makes the model woozy.
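
The confidence rule should also be enforced on your side, not just stated in the prompt. A minimal validator for the model's JSON reply — assuming the model returns valid JSON matching the template; `parse_decision` is a hypothetical helper, not a library function:

```python
import json

ALLOWED_ACTIONS = {"call", "ask_user", "fallback"}

def parse_decision(raw, known_tools, threshold=0.6):
    """Validate the model's tool-selection JSON before acting on it."""
    d = json.loads(raw)
    if d["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {d['action']}")
    if d["chosen_tool"] is not None and d["chosen_tool"] not in known_tools:
        raise ValueError(f"unknown tool: {d['chosen_tool']}")
    # Enforce the confidence threshold server-side, not just in the prompt.
    if d["action"] == "call" and d["confidence"] < threshold:
        d["action"] = "ask_user"
        d["chosen_tool"] = None
    return d

decision = parse_decision(
    '{"chosen_tool": "calculator", "reason": "pure math", '
    '"confidence": 0.4, "action": "call"}',
    known_tools={"calculator", "web_search"},
)
# Low confidence (0.4 < 0.6): the call is downgraded to a clarifying question.
```

Rejecting unknown tools and actions here is what makes the rationale in the JSON trustworthy enough to log for audits.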


Example: Choosing Between Web Search, DB Query, and Calculator

Prompt fragment fed before the agent evaluates a user request:

Goal: Answer the user's question accurately and with minimal external calls.
Tools:
- web_search: up-to-date web results, external_http, may return PII
- db_query: internal database, authoritative for account info
- calculator: numeric computation, offline
Selection criteria: 1) correctness for domain-specific facts 2) minimal privacy exposure 3) speed 4) cost
Safety: never send user PII to web_search. If request concerns account data, prefer db_query. If the request is purely numerical, use calculator.
Respond with chosen_tool, reason, confidence, action.

If the user asks "What's my current account balance?", the agent should pick db_query with high confidence and note the privacy rule. If the user asks "What's the derivative of x^2?", it should pick calculator.
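
For intuition, the two sample queries can be routed with a toy rule-based selector. This is deliberately naive keyword matching — in practice the model makes this decision against the prompt fragment above — and the heuristics are illustrative assumptions:

```python
def select_tool(query):
    """Toy keyword-based selector mirroring the example's criteria."""
    q = query.lower()
    if "account" in q or "balance" in q:
        return {"chosen_tool": "db_query", "reason": "account data is authoritative internally"}
    if any(tok in q for tok in ("derivative", "integral", "convert", "+")):
        return {"chosen_tool": "calculator", "reason": "purely numerical request"}
    return {"chosen_tool": "web_search", "reason": "fresh external facts"}

select_tool("What's my current account balance?")["chosen_tool"]  # "db_query"
select_tool("What's the derivative of x^2?")["chosen_tool"]       # "calculator"
```

Note the ordering: the privacy-sensitive check (account data) runs first, matching the priority order in the selection criteria.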


Advanced Patterns: Multi-step Selection & Function Schema Hooks

  • Two-stage decision: First stage picks a capability cluster (e.g., "fetch factual external data" vs "compute"). Second stage picks a specific tool within the cluster based on cost, latency, and permissions.
  • Parameter-aware selection: Use your Parameter Schema Design outputs to determine if the tool supports required parameters. If a tool's function schema lacks a needed parameter, disqualify it automatically.
  • Confidence-driven chaining: If chosen_tool.confidence < threshold, trigger a clarifying question rather than a tool call. This reduces hallucination-driven actions.

Pseudocode:

stage1 = pick_cluster(user_query)
candidates = filter_tools_by_cluster(stage1)
candidates = filter_by_schema_support(candidates, required_params)
best = rank(candidates, criteria)
if best.confidence < 0.6: ask_clarify()
else: call(best)

This bridges neatly back to function calling: once a tool is selected, you map the request onto its function schema and make the call.
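
The pseudocode above can be made concrete. A runnable sketch of two-stage, schema-aware selection — the `Tool` class, cluster names, and scores are hypothetical stand-ins for your real tool registry and ranking function:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    cluster: str                      # capability cluster, e.g. "fetch" or "compute"
    params: set = field(default_factory=set)
    score: float = 0.0                # rank score from your criteria (stubbed here)

def select(tools, cluster, required_params, threshold=0.6):
    """Stage 1: cluster filter. Stage 2: schema support, then rank."""
    candidates = [t for t in tools if t.cluster == cluster]
    # Parameter-aware selection: disqualify tools whose schema lacks a needed param.
    candidates = [t for t in candidates if required_params <= t.params]
    best = max(candidates, key=lambda t: t.score, default=None)
    if best is None or best.score < threshold:
        return ("ask_clarify", None)  # confidence-driven fallback
    return ("call", best.name)

tools = [
    Tool("web_search", "fetch", {"query"}, 0.8),
    Tool("db_query", "fetch", {"query", "account_id"}, 0.9),
    Tool("calculator", "compute", {"expression"}, 0.95),
]
select(tools, "fetch", {"query", "account_id"})  # ("call", "db_query")
```

Here web_search is disqualified automatically because its schema lacks `account_id` — the schema filter does safety work before the ranker ever runs.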


Safety & Ethics: Practical Rules to Embed

  • Explicit permission: If an action will send or expose user data externally, require explicit user confirmation.
  • Least privilege: Prefer internal or offline tools to external ones unless necessary.
  • Red-team traps: Add policy checks to prevent tool selection for disallowed tasks (e.g., do not select external HTTP tool to facilitate wrongdoing).
  • Logging: Always return a short rationale with the chosen tool to support audits.
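
The first two rules are best enforced in code rather than trusted to the prompt alone. A minimal pre-call guard — the PII regex is deliberately naive (emails and US-style SSNs only), a stand-in for a real detector, and the tool names are the hypothetical ones used above:

```python
import re

# Naive PII patterns: emails and US-style SSNs. A real system needs a proper detector.
PII_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|\b\d{3}-\d{2}-\d{4}\b")

EXTERNAL_TOOLS = {"web_search", "email_sender"}   # the least-privilege boundary

def guard_tool_call(tool, payload, user_confirmed=False):
    """Hard-stop checks that run before any tool call is dispatched."""
    if tool in EXTERNAL_TOOLS and PII_RE.search(payload):
        return "blocked: PII in payload to external tool"
    if tool == "email_sender" and not user_confirmed:
        return "blocked: sending email requires explicit user confirmation"
    return "allowed"

guard_tool_call("web_search", "balance for jane@example.com")  # blocked
guard_tool_call("calculator", "2+2")                           # allowed
```

Because the guard sits outside the model, a prompt-injected or confused agent still cannot leak PII or send unconfirmed email.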

If it sounds like paranoia, call it good engineering.


Quick Reference Table

What you want       Prefer tool    Why            Safety note
Fresh web facts     web_search     Up-to-date     Sanitize queries, strip PII
Account details     db_query       Authoritative  Requires auth & consent
Math or conversion  calculator     Deterministic  Offline, safe
Send email          email_sender   Direct action  Requires explicit confirmation

Common Failure Modes and Fixes

  • Agent chooses wrong tool because prompt omitted a constraint -> Fix: be explicit in Selection Criteria.
  • Agent calls external tool with PII -> Fix: add a hard-stop rule and require the model to check for PII before selecting.
  • Low-confidence calls cause errors -> Fix: implement confidence thresholds and clarification fallback.

Closing: Your Mini-Checklist Before Deploy

  • Did you enumerate tools and capabilities? ✔
  • Did you order selection criteria (accuracy, privacy, latency)? ✔
  • Did you require permission for actions with privacy/cost implications? ✔
  • Did you add a confidence threshold and logging output? ✔

Final line: Make your agent choose like a responsible librarian, not a caffeine-fueled intern. Be explicit, be strict on privacy, and make it justify itself. Your future auditors (and users) will thank you — and maybe even give you cookies.
