Tools, Functions, and Agentic Workflows
Integrate function calling and tools, design planner–executor patterns, and manage errors, scopes, and observability.
Function Calling Patterns — the Playbook That Actually Works
"Think of function calls as your model's Swiss Army knife. Sharp, practical, and a little dangerous if you poke your eye."
We already covered safety, transparency, and audit trails in previous lessons — good. Now let us graduate from high-level ethics to the actual plumbing: how to design function calling so your agentic workflow is reliable, auditable, and not secretly plotting to leak PII. This builds on that earlier foundation: function calls are where accountability, logging, and consent decisions meet code.
What is a function-calling pattern, and why should you care?
Definition: A function-calling pattern is a repeatable structure for how a model chooses, formats, and executes external functions or tools during a conversation or workflow.
Why it matters:
- Reliability: Clear patterns reduce hallucinated or malformed function invocations.
- Auditability: Consistent calls make it easy to reconstruct what an agent did (hello, audit trails).
- Safety & Privacy: Patterns can enforce checks like consent, redaction, and age rules before data leaves the model.
Imagine your prompt-engineered agent as a barista. Function-calling patterns are the recipe cards. Without them you get mystery beverages. With them you get repeatable, safe lattes.
Common Function-Calling Patterns
Single-call execution (Direct tool call)
- Model identifies one needed function and calls it.
- Great for simple lookups, quick computations.
Planner-executor split (Plan then do)
- The model writes a plan (sequence of tool calls) then an executor performs them.
- Improves transparency and controllability.
Chain-of-tools (Pipeline)
- Output of Function A feeds Function B, etc. Useful for complex transforms.
Hierarchical agents (meta-agent)
- A top-level agent delegates to specialized sub-agents/tools.
- Good for modular systems and role separation.
Event-driven invocation
- Functions are triggered by state changes or events rather than one-off prompts.
| Pattern | Best for | Auditability | Note |
|---|---|---|---|
| Single-call | Single queries | Easy | Low overhead |
| Planner-executor | Multi-step tasks | High | Slight latency |
| Chain | Data transforms | Moderate | Watch for coupling |
| Hierarchical | Complex domains | Excellent | More orchestration code |
| Event-driven | Reactive systems | Depends on logging | Good for scale |
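The single-call pattern usually starts with a declared tool schema the model can choose from, plus a dispatcher that routes the model's structured call to real code. A minimal vendor-neutral sketch; the tool name, registry, and call format are illustrative assumptions, not any specific SDK's API:

```python
import json

# Illustrative tool declaration in a JSON-Schema style (vendor-neutral sketch).
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

def dispatch_tool_call(call: dict) -> dict:
    """Route a model-emitted call like {'name': ..., 'arguments': {...}} to real code."""
    registry = {"get_weather": lambda args: {"city": args["city"], "temp_c": 21}}
    handler = registry.get(call["name"])
    if handler is None:
        # Structured error instead of a crash: the model can see and recover from this.
        return {"code": "UNKNOWN_TOOL", "message": call["name"]}
    return handler(call["arguments"])

result = dispatch_tool_call({"name": "get_weather", "arguments": {"city": "Oslo"}})
print(json.dumps(result))
```

The registry keeps the mapping from model-visible names to code explicit, which is exactly what your audit log will later need to reference.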
Practical design rules (aka the things you will thank me for later)
- Single Responsibility: Each function should do one thing well. No Frankenfunctions.
- Schema everything: Define strict input and output schemas. This makes validation and logging easy.
- Idempotency: Where possible, design functions so repeated calls are safe.
- Fail fast, fail loudly: When invalid inputs occur, return structured errors, not poetic riddles.
- Log with context: Include user id (if allowed), timestamp, function signature, and model prompt snippet for each call.
- Privacy guard rails: Mask or omit PII before calling external services. Enforce age-appropriate rules from prior lessons.
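The "schema everything" and "fail fast, fail loudly" rules combine naturally in a small validation layer. A stdlib-only sketch, assuming a simple key-to-type schema format (a production system would likely use a real schema library instead):

```python
from dataclasses import dataclass

@dataclass
class ToolError(Exception):
    """Structured error: a machine-readable code plus a human-readable message."""
    code: str
    message: str

def validate_input(payload: dict, schema: dict) -> dict:
    """Check required keys and types; raise a structured error on the first failure."""
    for key, expected_type in schema.items():
        if key not in payload:
            raise ToolError(code="MISSING_FIELD", message=f"missing '{key}'")
        if not isinstance(payload[key], expected_type):
            raise ToolError(code="BAD_TYPE", message=f"'{key}' must be {expected_type.__name__}")
    return payload

# Assumed schema for an illustrative scheduling function.
SCHEDULE_SCHEMA = {"title": str, "duration_minutes": int}

validate_input({"title": "Followup", "duration_minutes": 30}, SCHEDULE_SCHEMA)  # passes
try:
    validate_input({"title": "Followup"}, SCHEDULE_SCHEMA)
except ToolError as err:
    print(err.code)  # structured, not a poetic riddle
```

Because errors carry codes rather than free text, the executor and the audit log can both branch on them reliably.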
Example: planner-executor pattern (pseudocode)
# Agent receives the user request
user_ask = 'Summarize my notes and schedule a 30m followup next week'

# 1) Planner call: the model creates a structured plan
planner_output = plan_tool(user_ask)
# plan_tool returns a structured plan like:
#   tasks: [parse_notes, summarize, check_calendar_availability, create_event]
#   safety_checks: [consent_check, pii_redaction]

# 2) Executor runs the plan step by step
for task in planner_output.tasks:
    validate(task)                              # schema check before execution
    result = call_function(task, input_for_task)
    log_call(task, input_for_task, result)      # every call is logged, success or not
    if result.error:
        abort_and_report(result.error)

# 3) Finally: return the summary plus the calendar event link
Note the explicit validation and logging steps. That is how audit trails and transparency get implemented, not by hoping the model behaves.
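To see the pattern end to end, the pseudocode above can be turned into a tiny runnable harness. Everything here is an illustrative assumption: the plan format, the task names, and the handlers stand in for a real planner and real tools.

```python
# Minimal executor harness; the plan format and handlers are illustrative assumptions.
audit_log = []

def call_function(task: str, payload: dict) -> dict:
    """Stand-in for real tools; returns structured results either way."""
    handlers = {
        "summarize": lambda p: {"ok": True, "summary": p["text"][:20]},
        "create_event": lambda p: {"ok": True, "event_id": "evt_1"},
    }
    handler = handlers.get(task)
    if handler is None:
        return {"ok": False, "error": {"code": "UNKNOWN_TASK", "message": task}}
    return handler(payload)

def execute_plan(plan: list) -> list:
    """Run each (task, payload) step, logging every call before acting on errors."""
    results = []
    for task, payload in plan:
        result = call_function(task, payload)
        audit_log.append({"task": task, "input": payload, "result": result})
        if not result["ok"]:
            raise RuntimeError(f"aborting: {result['error']['code']}")
        results.append(result)
    return results

plan = [("summarize", {"text": "Quarterly notes about roadmap"}),
        ("create_event", {"title": "Followup", "duration_minutes": 30})]
results = execute_plan(plan)
print(len(audit_log))  # every step was recorded
```

The key design choice: the log entry is written before the error check, so failed calls are just as visible to auditors as successful ones.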
Error handling and recovery patterns
- Retry with backoff: For transient failures. But cap retries to avoid loops.
- Fallback functions: If the preferred tool fails, call a degraded function that returns basic info.
- Human-in-the-loop escalation: If safety checks fail, pause and ask for human approval.
- Structured errors: Functions should return codes and messages, e.g.,
{'code': 'PII_BLOCKED', 'message': 'User SSN present'}.
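Retry-with-backoff and structured errors fit together: retry only on error codes you know are transient, cap the attempts, and surface the structured error when retries are exhausted. A sketch; the error codes and the "transient" set are assumptions:

```python
import time

TRANSIENT_CODES = {"TIMEOUT", "RATE_LIMITED"}  # assumption: which codes are retryable

def call_with_backoff(fn, max_retries=3, base_delay=0.01):
    """Retry fn on transient structured errors with exponential backoff, capped."""
    for attempt in range(max_retries + 1):
        result = fn()
        if result.get("code") not in TRANSIENT_CODES:
            return result  # success, or a permanent error like PII_BLOCKED: do not retry
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    return result  # transient error persisted past the retry cap

# Simulated flaky tool: fails twice with a timeout, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return {"code": "TIMEOUT"} if attempts["n"] < 3 else {"code": "OK", "data": 42}

print(call_with_backoff(flaky))  # {'code': 'OK', 'data': 42}
```

Note that a permanent error such as PII_BLOCKED returns immediately: retrying a safety block is both wasteful and a red flag in the audit log.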
Quote-worthy truth:
"If your error handling is prose instead of JSON, you have failed the function."
Transparency and auditability (tie-back to previous topic)
We already insisted on accountability and audit trails. Function-calling patterns are where you operationalize those requirements:
- Log every function call with schema-validated inputs/outputs.
- Keep redaction decisions as first-class data, so auditors can see why something was redacted.
- Record the model prompt or plan that led to a call. Store hashes if full prompts are sensitive.
If a user is a minor, enforce age-appropriate design by adding an upstream gate that blocks or transforms calls that would expose sensitive content. This must be auditable: log the blocking event and the rule used.
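The upstream gate can be a thin wrapper that inspects each call, applies the rule, and records the decision plus the rule used. A sketch; the rule name, the sensitive-tool set, and the log shape are all assumptions, and the prompt is stored as a hash per the guidance above:

```python
import hashlib

audit_events = []

SENSITIVE_TOOLS = {"send_external_email", "fetch_adult_content"}  # assumed rule set

def gated_call(tool_name: str, payload: dict, user_is_minor: bool, prompt: str) -> dict:
    """Block sensitive tool calls for minors; log every decision with a prompt hash."""
    prompt_hash = hashlib.sha256(prompt.encode()).hexdigest()
    if user_is_minor and tool_name in SENSITIVE_TOOLS:
        audit_events.append({
            "decision": "BLOCKED",
            "rule": "age_appropriate_design",   # the rule used, first-class in the log
            "tool": tool_name,
            "prompt_sha256": prompt_hash,       # hash, not the raw prompt
        })
        return {"code": "AGE_BLOCKED", "message": f"'{tool_name}' blocked for minor"}
    audit_events.append({"decision": "ALLOWED", "tool": tool_name,
                         "prompt_sha256": prompt_hash})
    return {"code": "OK"}

print(gated_call("send_external_email", {}, user_is_minor=True, prompt="draft email"))
```

Blocked and allowed calls land in the same log with the same shape, so an auditor can reconstruct both what happened and why.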
Quick checklist before you ship
- Do all functions have schemas? ✅
- Are calls logged with minimal necessary PII? ✅
- Are safety checks executed before invoking external tools? ✅
- Is there a human escalation path for ambiguous safety decisions? ✅
- Are functions idempotent, or at least safe to retry? ✅
If you answered no to any of these, go patch it before someone screenshots your agent doing weird stuff.
Closing: the one-liner to take away
Design function-calling patterns like contracts: precise signatures, clear responsibilities, and built-in record-keeping. When your system speaks in structured calls, audits are simple, safety rules are enforceable, and the whole thing behaves like a system you actually own.
Go forth, plan your functions, and make audit logs that make your future self proud. Or at least less panicked.
Further reading / next steps
- Try implementing a planner-executor prototype and attach structured logging.
- Create a test suite for safety regressions that simulates edge-case inputs (PII, minors, malicious prompts).
- Next lesson in this module: orchestrating agents in production with monitoring and rate limits.