Lesson 10: Metaprogramming

Your email triage works. It is deployed, tested, monitored, and learning from corrections. You analyzed it with interpretation modes. But all of those analyses operated on the machine from the outside.

What if the machine could look at itself?

reflect(): A Machine That Knows Itself

machine self_aware_triage
  accepts
    subject as text, is required
    sender as text, is required
  responds with
    priority as text
    step_count as number
    models_used as list
  implements
    ask classify, using: "anthropic:claude-haiku-4"
      with task "Classify: ${input.subject} from ${input.sender}"
      returns
        priority as text
    compute describe_self
      {
        let me = reflect()
        let my_steps = form.steps(me)
        let my_models = my_steps.filter(s => form.get(s, "variant") == "using").map(s => form.get(s, "variant_value"))
        {
          priority: classify.priority,
          step_count: my_steps.length,
          models_used: my_models
        }
      }

reflect() returns the machine’s own structure as data. Not a string description. Actual structure you can navigate.

form.steps(me) lists all steps. Filtering on form.get(s, "variant") keeps only the steps that declare a model with using, and form.get(s, "variant_value") reads the model name from each of them. The machine counts its own steps and lists its own models.

If you add a step, the count updates automatically. If you change a model, the list updates. No hardcoded descriptions to keep in sync.

quote: Capturing Code as Data

reflect() reads existing structure. quote creates new structure:

compute build_template
  {
    let template = quote
      machine custom_classifier
        accepts
          text as text, is required
        responds with
          category as text
        implements
          ask classify, using: "anthropic:claude-haiku-4"
            with task "Classify this text into a category.\nText: ${input.text}"
            returns
              category as text
    {machine_form: template}
  }

The indented MashinTalk after quote is captured as a form value, not executed. You now have a machine definition as data that you can modify, validate, and eventually materialize.
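Because the quoted machine is ordinary data, you can inspect it with the same helpers used in the reflect() example earlier. A sketch (the step name inspect_template is illustrative):

compute inspect_template
  {
    let template = quote
      machine custom_classifier
        implements
          ask classify, using: "anthropic:claude-haiku-4"
            with task "Classify this text into a category."
            returns
              category as text
    let models = form.steps(template).filter(s => form.get(s, "variant") == "using").map(s => form.get(s, "variant_value"))
    {models: models}
  }

The same navigation works whether the form came from reflect() or from quote. Forms are forms.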

form.*: Pure Operations on Machine Structure

Forms are regular values. You can transform them without governance overhead:

compute customize
  {
    let base = quote
      machine email_classifier
        implements
          ask classify, using: "anthropic:claude-haiku-4"
            with task "Classify this email"
            returns
              priority as text
    let upgraded = form.set(form.step(base, "classify"), "variant_value", "anthropic:claude-sonnet-4")
    let new_machine = form.replace(base, "classify", upgraded)
    let diff = form.diff(base, new_machine)
    {changes: diff, hash: form.hash(new_machine)}
  }

This swaps the model from Haiku to Sonnet, diffs the two versions, and hashes the result. All pure computation. No AI calls, no effects, no governance needed.
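These pure helpers also compose with reflect(), so a machine can fingerprint its own structure. A sketch (the step name fingerprint is illustrative):

compute fingerprint
  {
    let me = reflect()
    {step_count: form.steps(me).length, structure_hash: form.hash(me)}
  }

Any structural change, such as adding a step or swapping a model, produces a different hash, so the hash doubles as a version fingerprint.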

The Governance Boundary

Manipulating forms is free. Executing them is governed.

You cannot write form.eval(my_form) in a compute step. The compiler will stop you:

Error: form.eval() is a governed effect and cannot be called from a compute step.
Use an ask step instead:
  ask result, from: "@system/koda/form_eval"
    form: my_form
    input: my_input

This is by design. Executing generated code is a real action with real consequences. It goes through the governance pipeline: 6 structural checks by FormInspector, capability verification, model authorization, step count limits.

The boundary is clean: form.* functions are pure (safe in compute). form.eval(), form.propose(), form.describe() are effects (require governed steps).
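Putting the two sides together, dynamic execution pairs pure form construction with a single governed step. A sketch following the compiler's suggested syntax, assuming my_form is a form value returned by an earlier compute step and that the input record matches that form's accepts block (the field values here are illustrative):

ask evaluated, from: "@system/koda/form_eval"
  form: my_form
  input: {subject: "Invoice overdue", sender: "billing@example.com"}

Building my_form costs nothing and needs no permission. This one step is where all six governance checks fire.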

Self-Improvement

Combine everything: reflect, transform, propose.

machine improving_triage
  accepts
    subject as text, is required
    sender as text, is required
    body as text
  responds with
    priority as text
    self_improved as boolean
  implements
    ask classify, using: "anthropic:claude-haiku-4"
      with task "Classify: ${input.subject} from ${input.sender}\n${input.body}"
      returns
        priority as text
        confidence as number
    compute check_if_improvement_needed
      {
        let me = reflect()
        let current_step = form.step(me, "classify")
        let needs_upgrade = classify.confidence < 0.6
        let better_prompt = form.set(current_step, "content", "Classify carefully. Consider sender history and subject urgency.")
        let improved_me = form.replace(me, "classify", better_prompt)
        {
          priority: classify.priority,
          self_improved: needs_upgrade,
          improvement_diff: if (needs_upgrade) { form.diff(me, improved_me) } else { null }
        }
      }

The machine classifies an email. If confidence is low, it reflects on its own structure, modifies its prompt, and diffs the result. The diff could then be sent to form.propose() (via a governed step) to register as a new version in the evolution ledger.

The machine is improving itself, and every step of that improvement is visible and auditable.
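Closing the loop would take one more governed step. A sketch, with two caveats: @system/koda/form_propose is a hypothetical address mirroring the form_eval pattern shown earlier, and improved_me would first need to be returned from the compute step rather than remaining a local binding:

ask proposal, from: "@system/koda/form_propose"
  form: improved_me

If accepted, the new version registers in the evolution ledger while the current machine keeps running.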

What This Means

Concept          What                      Governed?
reflect()        Read own structure        No (compile-time)
quote            Capture code as data      No (pure)
form.*           Navigate and transform    No (pure computation)
form.eval()      Execute a form            Yes (6 governance checks)
form.propose()   Register new version      Yes (evolution ledger)

Data is free. Execution is governed. That line is enforced by the compiler, not by convention.

Most systems treat self-modification as dangerous. mashin treats it as a governed capability. The same governance that applies to your machines applies to the machines they generate.

The Complete Journey

You started with 6 lines that classified an email. You added structure, decisions, actions, composition, deployment, goals, tests, memory, interpretation, and now self-inspection.

The email triage system you built is:

  • Live: Running every 5 minutes
  • Connected: Teams notifications, Planner tasks
  • Tested: Automated test suite with mocked AI
  • Learning: Remembers sender patterns and human corrections
  • Auditable: Every run traced, every cost tracked, every action logged
  • Self-aware: Can inspect and describe its own structure
  • Improvable: Can propose better versions of itself

All governed. All traceable. All from a language designed for exactly this.