# Module 03: Tools, The Agent Primitive
The key insight: an `ask` step with tools IS an agent loop (the repeating cycle of reason, act, observe). No special mode required.
## Learning Objectives

- Understand why `ask` + tools is the fundamental agent pattern
- Define tools for an `ask` step using tool machines (machines that perform a specific operation an LLM can invoke)
- Build a research assistant that searches the web and synthesizes results
Complexity Ladder: Level 5 (Agentic); tools introduce the reason-act-observe loop that defines agents.
## The Concept: Tools Turn Reasoning Into Action
Think of tools as the hands of the AI. Without tools, the LLM can only think — it can observe and reason but can’t interact with the world. Tools let it search the web, read files, and take actions.
In Module 02, you built an `ask` step that takes input and returns structured output. That’s a single inference call (one round-trip to the reasoning backend), powerful but fixed. The reasoning backend processes and responds. Done.

Now add tools. When an `ask` step has access to tools, the LLM can decide to call a tool, observe the result, and reason again. It loops: reason -> act -> observe -> reason. This is the agent loop, and it emerges naturally from giving an LLM tools.
```
┌──────────────────────────────────────────────┐
│                                              │
▼                                              │
┌──────────┐     ┌──────────┐     ┌──────────┐ │
│  REASON  │────►│   ACT    │────►│ OBSERVE  │─┘
│          │     │          │     │          │
│ LLM sees │     │ Execute  │     │ Process  │
│ question │     │ tool     │     │ result   │
│ + tools  │     │ machine  │     │          │
└──────────┘     └──────────┘     └──────────┘
      │
      │ (when done)
      ▼
┌──────────┐
│ RESPOND  │
│ Final    │
│ answer   │
└──────────┘
```

The key insight: You don’t need a special “agent mode” or framework. An `ask` step with tools IS the agent. The reasoning backend decides when to use tools and when to respond. Mashin handles the loop, the tool dispatch (routing each tool call to the correct machine), and the governance (policy enforcement ensuring every action is authorized and logged).
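In plain Python, the cycle above might be sketched like this (an illustrative sketch only, not Mashin's implementation; the `llm` and `tools` callables are hypothetical stand-ins):

```python
def agent_loop(llm, tools, question, max_rounds=8):
    """Reason-act-observe: the LLM either answers or requests a tool call."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        step = llm(messages, tool_names=list(tools))          # REASON
        if step["type"] == "respond":                         # done: final answer
            return step["answer"]
        result = tools[step["tool"]](**step["args"])          # ACT: run the tool machine
        messages.append({"role": "tool", "content": result})  # OBSERVE
    return None  # safety net: round limit reached
```

Note that the loop, the dispatch, and the round limit all live outside the LLM itself; in Mashin, all of this plumbing is handled for you inside the `ask` step.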
## Start With Koda
Koda requires a free account. Sign in or create an account to use Koda exercises throughout this course. If you’re not signed in yet, read on; the exercises will be here when you’re ready.
Before diving into the syntax, try building a tool-equipped machine with Koda:
“Build a machine that answers questions by searching the web first. It should search for relevant information, then synthesize an answer with source citations.”
Examine the generated machine. It should use an `ask` step with a `tools` block containing `web_search` (a tool machine for querying search engines). Check that it has a `returns` block requiring both an answer and sources. Study the structure; the rest of this module explains every part of what Koda generated.
## How It Works

- The `ask` step sends the prompt to the LLM along with tool definitions
- The LLM either responds directly or requests a tool call
- If a tool is called, Mashin executes the tool machine and feeds the result back
- The LLM reasons again with the new information
- This repeats until the LLM responds or hits the max rounds limit
Each tool is backed by a tool machine — a regular Mashin machine that performs a specific operation (like searching the web or reading a file). Tool machines are effect machines (machines whose purpose is to perform I/O — see README glossary): governed (policy-controlled so every action is authorized), auditable (every tool call is logged with inputs, outputs, and timing), and sandboxed. The LLM can’t do anything outside the tools you provide.
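To make "governed, auditable, sandboxed" concrete, here is a rough Python sketch of what a governed tool dispatch could look like (all names here are hypothetical; this is not Mashin's actual mechanism):

```python
import time

def governed_call(tool_name, tool_fn, policy, audit_log, **args):
    """Authorize a tool call against a policy, then log inputs, outputs, and timing."""
    if not policy(tool_name, args):               # governed: authorize before acting
        audit_log.append({"tool": tool_name, "args": args, "denied": True})
        raise PermissionError(f"policy denied {tool_name}")
    start = time.monotonic()
    result = tool_fn(**args)                      # the effect itself
    audit_log.append({                            # auditable: full call record
        "tool": tool_name,
        "args": args,
        "result": result,
        "ms": round((time.monotonic() - start) * 1000, 2),
    })
    return result
```

The point of the sketch: the LLM never touches `tool_fn` directly, so every action passes through the policy check and leaves an audit entry.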
## Building It: Research Assistant
Let’s build a machine that answers questions by searching the web first.
```
machine web_assistant "Web Research Assistant"

  accepts question as string, is required

  responds with
    answer as string
    sources as list

  implements

    // An ask step that can use tools = an agent
    ask research, using: "anthropic:claude-sonnet-4"
      with role "You are a research assistant. Answer questions by searching the web for relevant information first. Always cite your sources. When you have enough information to answer confidently, respond directly. If your first search doesn't give good results, try different search terms."
      with task "Answer this question with web research:\n\n${input.question}"
      tools
        web_search: "@mashin/actions/tools/web_search"
        web_fetch: "@mashin/actions/tools/web_fetch"
      returns
        answer as string, is required
        sources as list, is required
```

### What Happens at Runtime
- The LLM sees the question and the available tools (`web_search`, `web_fetch`)
- It decides to search: calls `web_search` with a query
- Mashin executes `@mashin/actions/tools/web_search` (a tool machine for web search) and returns results to the LLM
- The LLM reads the results. Maybe it needs more detail — it calls `web_fetch` on a URL
- Mashin fetches the page and returns the content
- The LLM now has enough context. It responds with an answer and sources
- The response matches the `returns` schema, typed and validated
All of this happens inside a single `ask` step. No manual loop, no explicit tool dispatch code.
## Tool Definitions

Tools are defined in the `tools` block under an `ask` step. Each tool maps a name to a machine:

```
tools
  web_search: "@mashin/actions/tools/web_search"   // Search the web by query
  read_file: "@mashin/actions/tools/read_file"     // Read a local file by path
  bash: "@mashin/actions/tools/bash"               // Run a shell command
```

The tool machine’s inputs become the tool’s parameters. The tool machine’s outputs become the tool’s result. Mashin handles the wiring.
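One way to picture that wiring: the tool machine's declared inputs are turned into a parameter schema the LLM can see. A minimal Python sketch (the dict layout is an assumption for illustration, not Mashin's real schema format):

```python
def tool_definition(name, machine):
    """Derive an LLM-facing tool schema from a tool machine's declared inputs."""
    return {
        "name": name,
        "description": machine["description"],
        "parameters": {
            "type": "object",
            "properties": {k: {"type": t} for k, t in machine["accepts"].items()},
            "required": list(machine["accepts"]),
        },
    }

# A hypothetical stand-in for what a web-search tool machine declares:
web_search = {"description": "Search the web by query", "accepts": {"query": "string"}}
```

Because the schema is derived from the machine's declaration, adding an input to the tool machine automatically changes what the LLM is allowed to pass.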
## Available Stdlib Tools
Mashin ships with governed tool machines for common operations:
Remember the Mashin principle: code computes, machines effect. Tool machines are effect machines — they perform governed I/O so your reasoning steps stay pure.
| Tool Machine | Purpose | Key Inputs |
|---|---|---|
| `@mashin/actions/tools/web_search` | Search the web (query a search engine and return results) | `query` |
| `@mashin/actions/tools/web_fetch` | Fetch a URL (download a web page’s content) | `url` |
| `@mashin/actions/tools/read_file` | Read a file | `path` |
| `@mashin/actions/tools/write_file` | Write a file | `path`, `content` |
| `@mashin/actions/tools/edit_file` | Edit a file | `path`, `old_text`, `new_text` |
| `@mashin/actions/tools/bash` | Run a shell command | `command` |
| `@mashin/actions/tools/glob` | Find files by pattern | `pattern`, `path` |
| `@mashin/actions/tools/grep` | Search file contents | `pattern`, `path` |
Every tool is an effect machine — governed, auditable, sandboxed. The LLM can’t bypass the tool to access the filesystem or network directly.
## Tool Rounds and Limits

Each `ask` step allows a limited number of tool rounds to prevent runaway agent loops:

```
ask agent, using: "anthropic:claude-sonnet-4"
  with task "Research and answer: ${input.question}"
  tools
    search: "@mashin/actions/tools/web_search"
```

If the LLM hasn’t responded after `max_tool_rounds` (the maximum number of reason-act-observe cycles), the step completes with whatever the LLM has so far. This is a safety net — well-prompted agents usually finish in 2-4 rounds.
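The cutoff behaves like a bounded loop that keeps partial output when the cap is hit. A small Python sketch of the idea (illustrative only; the `llm` and `tool` callables are hypothetical):

```python
def run_with_limit(llm, tool, question, max_tool_rounds=3):
    """Cap reason-act-observe cycles; return partial output if the cap is hit."""
    context, last_text = [question], ""
    for _ in range(max_tool_rounds):
        step = llm(context)                      # REASON
        last_text = step.get("text", last_text)
        if step["done"]:                         # normal exit: the LLM answered
            return {"answer": last_text, "complete": True}
        context.append(tool(step["query"]))      # ACT + OBSERVE
    return {"answer": last_text, "complete": False}  # safety net: cap reached
```

The `complete` flag makes the difference visible to the caller: a well-prompted agent exits through the `done` branch long before the cap matters.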
## A Real Example: Koda’s Own Tools

Koda, Mashin’s intelligent development environment, is itself an agent built with this pattern. Koda has access to tools like `plan_machine`, `validate_machine`, and `describe_self`. When you ask Koda to build a machine, it uses `ask` + tools to plan, generate, and validate the result.

This isn’t a special system; it’s the same `ask` + tools pattern you’re learning here, running in production.
## Key Syntax

```
ask name, using: "provider:model-name"
  with role "agent's role and instructions"
  with task "the specific task or question"
  tools
    tool_name: "@namespace/tool_machine"
    another: "@mashin/actions/tools/something"
  returns
    field as type
```

## Common Mistakes
- **Giving too many tools.** Research shows agents with >5 tools have 40-50% accuracy vs 95% for single-tool agents. Keep the tool set small and focused. If you need many tools, use the orchestrator pattern (Module 07).
- **Not providing enough context in the system prompt.** The LLM needs to know what tools are for and when to use them. A vague `with role` leads to random tool calls.
- **Forgetting that tool machines must exist.** Each tool references a machine. If the machine doesn’t exist or isn’t accessible, the tool call fails. Use stdlib tools or create your own effect machines.
## What’s Next
In Module 04, you’ll learn how to give machines persistent state and long-term memory, essential for agents that operate across multiple interactions.