Workshop

Tools

Tools turn a model from a conversational assistant into a workflow operator. This lesson helps you decide when the model needs files, search, structured outputs, and verification to do reliable work.

Outcome
A clear rule set for when to call tools, what to pass in, and how to verify the result.
Estimated effort
14 min workshop
Difficulty
Core

You will leave with
One tool-routing brief, one repeatable workflow sequence, and one verification checklist.

Why this matters

A model can only reason over what it has. Tools are what let it search, inspect files, structure output, and verify before acting.

What to do

  • Treat tool use as a response to a missing capability: missing context, missing data, missing structure, or missing verification.
  • Decide which tasks should remain conversational and which ones need external help to become reliable.

Why it matters

  • Without tools, the model starts guessing when it lacks fresh information, exact file contents, or real-world state.
  • The right tool often improves reliability more than a stronger model alone.

What good looks like

  • You can explain why a task needs search, files, automation, or verification before the model starts working.

Checklist

  • Knowledge gap identified
  • Tool gap identified
  • Verification need identified

Use tools to close a knowledge or execution gap, not to add complexity for its own sake.
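
As a rough sketch, the four missing capabilities can be named before any tool enters the picture. The Gap enum and needs_tool helper below are illustrative only, not part of any real API:

# A minimal sketch of the "missing capability" framing; all names are assumptions.
from enum import Enum, auto

class Gap(Enum):
    NONE = auto()          # the prompt alone is enough: stay conversational
    CONTEXT = auto()       # missing file or document context
    DATA = auto()          # missing fresh or live information
    STRUCTURE = auto()     # missing structured or validated output
    VERIFICATION = auto()  # missing a check against real-world state

def needs_tool(gap):
    # Only reach for a tool when a concrete gap has been named.
    return gap is not Gap.NONE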

Step 1: Identify what the model cannot know alone

Start by asking a simple question: what information or capability is missing if the model only sees the prompt?

What to do

  • Name whether the gap is live information, file context, exact extraction, structured output, or action execution.
  • Keep the gap statement short so the tool choice stays obvious.

Why it matters

  • Tool selection gets clearer when it is tied to the missing capability rather than a vague sense that the task feels complicated.
  • This also prevents over-tooling simple prompts that the model could handle directly.

What good looks like

  • You can finish the sentence: 'The model needs a tool because it cannot reliably know ____ from the prompt alone.'

Checklist

  • Gap is named
  • Reason for tool use is named
  • Direct-prompt option considered first

Tool use should begin with a missing capability, not a habit.
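
One way to enforce this is to make the gap statement literal. The gap_statement helper below is a hypothetical sketch that completes the sentence from above:

# A minimal sketch: force the gap into one short sentence before any
# tool is chosen. The helper name is an assumption, not a real API.
def gap_statement(missing):
    return (
        "The model needs a tool because it cannot reliably know "
        f"{missing} from the prompt alone."
    )

print(gap_statement("this week's exchange rates"))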

Step 2: Choose the right tool for the job

After the gap is clear, select the narrowest tool that solves it cleanly: search, file access, structured extraction, or execution.

What to do

  • Map the gap to one primary tool instead of throwing multiple tools at the task immediately.
  • Choose the simplest path that gives the model the missing evidence or action.

Why it matters

  • One well-chosen tool is easier to debug and verify than a chain of unnecessary calls.
  • Narrow tool choice also makes failures easier to understand when the output is wrong.

What good looks like

  • The tool clearly solves the missing step without introducing unrelated complexity.

Checklist

  • Primary tool chosen
  • Reason for that tool is documented
  • Extra tool calls removed

The best tool choice is usually the narrowest tool that closes the real gap.
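
A gap-to-tool map keeps the routing decision visible. The sketch below uses invented tool names; substitute whatever your stack actually exposes:

# A minimal sketch of one-gap-one-tool routing; every tool name is an assumption.
PRIMARY_TOOL = {
    "live information": "search",
    "file context": "file_access",
    "exact extraction": "structured_extraction",
    "structured output": "structured_extraction",
    "action execution": "code_execution",
}

def choose_tool(gap_type):
    # An unknown gap raises a KeyError instead of silently growing a tool chain.
    return PRIMARY_TOOL[gap_type]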

Step 3: Prepare clean inputs

A tool only helps if the model sends it the right context. Clean inputs matter as much as the tool choice itself.

What to do

  • Pass only the information the tool actually needs, such as the right file, exact query, or target output format.
  • Remove irrelevant context that could cause the model to ask the tool for the wrong thing.

Why it matters

  • Messy inputs produce messy tool results, which then flow downstream into the final answer.
  • Structured, minimal inputs make tool behavior more predictable and easier to reuse later.

What good looks like

  • A teammate could understand the tool call from the input alone and predict what it is trying to fetch or produce.

Checklist

  • Input is scoped
  • Query is clear
  • Expected output is named

Tool reliability starts at the input boundary.
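
A scoped input can be made explicit as a small record. The ToolCall shape below is a sketch, assuming the routing from the previous step; the field names are inventions:

# A minimal sketch of a scoped tool input; the fields are assumptions.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str             # the one primary tool chosen in Step 2
    query: str            # the exact question or lookup, nothing extra
    expected_output: str  # named up front so the result can be checked later

call = ToolCall(
    tool="file_access",
    query="Read sheet 'Q3' from finance.xlsx and return revenue by region",
    expected_output="one row per region with a numeric revenue column",
)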

Step 4: Require verification

Tool use is not finished when the tool returns data. The model still needs to check whether the result actually solved the task.

What to do

  • Add a verification step after important tool calls, such as checking the returned file, confirming the extracted field, or validating the final structure.
  • Treat verification as part of the workflow, not a bonus safety step.

Why it matters

  • Tools reduce guessing, but they can still return the wrong file, the wrong row, or incomplete output.
  • Verification is what turns tool use into operational reliability.

What good looks like

  • The workflow includes one explicit check that confirms the result is usable before it moves forward.

Checklist

  • Returned result checked
  • Structure checked
  • Task success checked

A tool call is only finished once the result is verified against the task.
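
A verification gate can be a single small check. The verify function below is a sketch for a hypothetical revenue-by-region task; the specific checks are illustrative:

# A minimal sketch of a verification gate; the expected regions come from
# the task itself, and the data here is invented for illustration.
def verify(result, expected_regions):
    # Structure check: every expected region is present, nothing extra.
    if set(result) != expected_regions:
        return False
    # Task-success check: values are usable numbers, not placeholders.
    return all(isinstance(v, (int, float)) for v in result.values())

result = {"EMEA": 1_200_000, "APAC": 950_000, "AMER": 2_100_000}
assert verify(result, {"EMEA", "APAC", "AMER"})  # only then move forward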

Step 5: Design a repeatable flow

Finish by turning the tool decision into a reusable sequence the model or team can follow every time.

What to do

  • Write the workflow in order: identify the gap, choose the tool, pass the input, verify the result, then return the final output.
  • Save the sequence as a reusable operating pattern for similar tasks.

Why it matters

  • Repeatable tool flows are easier to test, easier to automate, and easier to hand to another teammate.
  • They also make failure analysis faster because every step is named.

What good looks like

  • The workflow reads like a short operating procedure instead of a loose set of instincts.

Checklist

  • Flow order documented
  • Inputs documented
  • Verification documented

The best tool workflows are short, explicit, and easy to repeat.
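
Written as code, the sequence might look like the sketch below. Every callable is a placeholder you would supply for your own stack; only the order is the point:

# A minimal sketch of the five-step flow as one reusable function.
def run_tool_workflow(task, identify, choose, build_input, call, verify):
    gap = identify(task)                  # 1. name the missing capability
    if gap is None:
        return None                       #    a direct prompt is enough
    tool = choose(gap)                    # 2. one primary tool
    tool_input = build_input(task, gap)   # 3. scoped, minimal input
    result = call(tool, tool_input)       # 4. the tool call itself
    if not verify(result, task):          # 5. explicit check before returning
        raise ValueError(f"verification failed for tool {tool!r}")
    return result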

Common mistakes

Most tool failures come from using too many tools, using the wrong tool, or skipping the verification step after the tool call.

What to do

  • Simplify the tool chain until each tool has one clear job.
  • Add one visible verification check to every important tool-assisted workflow.

Why it matters

  • Over-tooling makes debugging harder because you lose track of which step actually failed.
  • Skipping verification lets wrong tool results flow into the final answer without anyone noticing.

Checklist

  • Do not use a tool without a named gap
  • Do not chain tools without a reason
  • Do not pass messy inputs
  • Do not skip verification

Tool use should reduce uncertainty, not hide it behind extra steps.
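
The difference between an over-tooled chain and a single-job call is easy to see side by side. The read_sheet stub below stands in for whatever file tool you actually use:

# A minimal sketch contrasting the anti-pattern with the fix; the helper
# and data are invented stand-ins, not a real API.
def read_sheet(path, sheet):
    # Stand-in returning the shape a real file tool would produce.
    return [{"region": "EMEA", "revenue": 1_200_000}]

# Anti-pattern (left commented): three chained calls with no named gap.
# doc = search("Q3 revenue"); file = fetch(doc); rows = extract(file)

# Fix: one tool with one clear job, plus one visible verification check.
rows = read_sheet("finance.xlsx", sheet="Q3")
assert rows and {"region", "revenue"} <= set(rows[0]), "missing columns"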

Starter prompt

Tool-routing scaffold

Use this when a task feels larger than a direct prompt. It helps you identify the missing capability and route the task through the simplest reliable tool path.

Goal: [what the final output needs to accomplish]
What the model already knows: [what is already in the prompt]
Missing gap: [what the model cannot know or do alone]
Primary tool: [search / files / extraction / execution / verification]
Required input: [what the tool needs to receive]
Verification step: [how you will confirm the tool result is correct]
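
For example, a hypothetical filled-in scaffold for a reporting task might read:

Goal: a clean table of Q3 revenue by region for the weekly report
What the model already knows: the report format and the list of regions
Missing gap: the exact figures inside the Q3 finance spreadsheet
Primary tool: files
Required input: the spreadsheet path and the sheet name 'Q3'
Verification step: confirm every region appears exactly once with a numeric value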

Deliverables

What you should finish with

This topic is complete when these outputs exist and are saved for the next stage of the workflow.

  1. One tool-routing brief for a real task in your workflow.
  2. One repeatable workflow sequence with named tool steps.
  3. One verification checklist for tool-assisted outputs.
  4. One documented rule for when a direct prompt is enough and when a tool is required.

Asset slots

Placeholders for uploads

These are the assets we will plug in later. Keeping the slots visible now makes the workflow feel complete and shows exactly what still needs to be collected.

Workflow map

Upload the visual sequence showing the tool-assisted flow from input to verified output.

Verified example run

Upload one example of a tool-assisted task that passed the verification step.

Tool verification checklist

Upload the checklist used to confirm the result before returning it.