
Your First Ask Step

The ask ... using step sends a task to a reasoning provider (an LLM) and returns structured output. It is the most common way to add intelligence to a machine. Every ask step is governed: the runtime checks permissions, enforces budgets, and records the call in the behavioral ledger.

A simple classifier

machine sentiment
  accepts
    text as text, is required
  responds with
    sentiment as text
    confidence as number
  ensures
    permissions
      allowed to
        llm_call
  implements
    ask classify, using: "anthropic:claude-sonnet-4-6"
      with task "Classify the sentiment of this text as positive, negative, or neutral. Return a confidence score between 0 and 1.\n\nText: ${input.text}"
      returns
        sentiment as text
        confidence as number
      assuming
        sentiment: "positive"
        confidence: 0.95

Let’s break this down.

ask classify, using: "anthropic:claude-sonnet-4-6"

This names the step classify and tells the runtime which model to use. The format is provider:model. Supported providers include anthropic, openai, google, ollama, and groq.
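
To switch providers, you change only the using string; the rest of the step stays the same. A sketch pointing the same classifier at a local Ollama model (the model name is a placeholder, not a tested value):

ask classify, using: "ollama:<your-local-model>"
  with task "Classify the sentiment of this text as positive, negative, or neutral.\n\nText: ${input.text}"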

with task

The instruction sent to the model. Use ${expr} to interpolate input values and previous step results directly into the prompt. There is no separate context block; data goes into the task string.
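
Interpolation works the same way for inputs and for earlier step results. A sketch combining both in one task string (the step and field names are illustrative, borrowed from the chaining example later on this page):

with task "Draft a reply to this email.\n\nSubject: ${input.subject}\nPriority: ${steps.analyze.priority}"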

returns

The structured output schema. The model is instructed to return these fields, and the runtime parses the response accordingly.
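
The examples on this page use four field types (text, number, boolean, and list), and a returns block can combine them. A sketch with illustrative field names:

returns
  summary as text
  score as number
  needs_review as boolean
  tags as list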

assuming

Mock values for test and simulate mode. When you run tests, these values are returned instantly without calling the model. This makes tests fast, deterministic, and free.
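
Mock values should mirror the returns schema, one entry per declared field. A sketch covering the remaining field types used on this page (the values are illustrative):

returns
  tags as list
  needs_response as boolean
assuming
  tags: ["billing", "login"]
  needs_response: false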

Adding a system prompt

Use with role to set the model’s persona:

ask analyze, using: "anthropic:claude-sonnet-4-6"
  with role "You are a senior financial analyst. Be precise and cite specific numbers."
  with task "Analyze this quarterly report for key trends.\n\nReport: ${input.report}"
  returns
    trends as list
    outlook as text
    risk_level as text
  assuming
    trends: ["Revenue up 12%"]
    outlook: "positive"
    risk_level: "low"

Chaining steps

Each step’s output is available to subsequent steps via steps.<name>.<field>:

machine email_triage
  accepts
    subject as text, is required
    body as text, is required
  responds with
    priority as text
    action as text
  ensures
    permissions
      allowed to
        llm_call
  implements
    ask analyze, using: "anthropic:claude-sonnet-4-6"
      with task "Analyze this email. Determine priority and whether it needs a response.\n\nSubject: ${input.subject}\nBody: ${input.body}"
      returns
        priority as text
        needs_response as boolean
        suggested_action as text
      assuming
        priority: "medium"
        needs_response: true
        suggested_action: "Reply within 24 hours"
    compute format_result
      {
        priority: steps.analyze.priority,
        action: steps.analyze.needs_response
          ? "Respond: " + steps.analyze.suggested_action
          : "No response needed"
      }

The compute step takes the LLM’s structured output and transforms it. Because compute is pure, it needs no permissions.

What happens at runtime

When this machine executes:

  1. The runtime checks that llm_call is in the allowed to list
  2. The ask step sends the prompt to Claude
  3. The model’s response is parsed into {priority, needs_response, suggested_action}
  4. The response, token count, cost, and latency are recorded in the behavioral ledger
  5. The compute step runs and produces the final output

If the machine did not declare llm_call permission, the runtime would deny the step and record the denial. Governance is not optional.
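
For contrast, here is a sketch of a machine that would fail that check: nothing grants llm_call, so the ask step is denied before any prompt is sent. (Whether the grammar lets you omit the ensures block entirely, as shown here, or requires an empty one, is a runtime detail; the governance outcome is the same.)

machine ungoverned
  accepts
    text as text, is required
  responds with
    sentiment as text
  implements
    ask classify, using: "anthropic:claude-sonnet-4-6"
      with task "Classify the sentiment of this text.\n\nText: ${input.text}"
      returns
        sentiment as text
      assuming
        sentiment: "positive"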

Try it

Write a machine that classifies support tickets. Give it subject and body inputs, use an ask step to classify by urgency and department, then use a compute step to build the final output. One way to start is sketched below.
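
This skeleton closely follows the email_triage pattern above (the prompt wording, field names, and mock values are only suggestions):

machine ticket_triage
  accepts
    subject as text, is required
    body as text, is required
  responds with
    urgency as text
    department as text
  ensures
    permissions
      allowed to
        llm_call
  implements
    ask classify, using: "anthropic:claude-sonnet-4-6"
      with task "Classify this support ticket. Return an urgency of low, medium, or high, and the department that should handle it.\n\nSubject: ${input.subject}\nBody: ${input.body}"
      returns
        urgency as text
        department as text
      assuming
        urgency: "high"
        department: "billing"
    compute format_result
      {
        urgency: steps.classify.urgency,
        department: steps.classify.department
      }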

Next steps