# Governance
Governance is what separates mashin from every other AI framework. In other systems, an LLM agent can call any tool, access any API, and perform any action unless you bolt on middleware to prevent it. In mashin, governance is structural. The ensures section declares what a machine can and cannot do, and the runtime enforces it. There is no way to bypass it because the capability to bypass it does not exist in the language.
## The ensures section
Every machine that performs governed operations needs an `ensures` section:
```
machine research_agent
  ensures
    permissions
      allowed to llm_call, http, memory
      not allowed to db_write, execute_shell_commands
      requires approval for external_api_calls
```

This machine can reason (LLM calls), make HTTP requests, and use semantic memory. It cannot write to a database or run shell commands. External API calls require a human to approve each one at runtime.
## Three permission levels
### allowed to
Grant capabilities. The machine can perform these operations without interruption:
```
permissions
  allowed to llm_call, http
```

### not allowed to
Explicitly deny capabilities. If a step tries to use a denied capability, the runtime halts execution and records the denial in the behavioral ledger:
```
permissions
  not allowed to db_write, file_access
```

### requires approval for
Capabilities that need human consent at runtime. Execution pauses, the runtime presents the request to the user, and the step continues only if approved:
```
permissions
  requires approval for send_email
```

## Capabilities
Each step type needs specific capabilities:
| Step | Required capability |
|---|---|
| `ask ... using` | `llm_call` |
| `ask ... from` / `action ... call` | `machine.call` (or the target machine’s specific capability) |
| `action ... http` | `http` |
| `remember` / `recall` | `memory` |
| `action ... exec` | `execute_shell_commands` |
| `action ... db` | `db_read` or `db_write` |
| `compute` / `decide` | None (pure, always allowed) |
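For instance, a machine that pairs a `recall` step with an `ask` step must declare both `memory` and `llm_call`. A minimal sketch, with illustrative machine and step names, using the step syntax from the complete example below:

```
machine memo_summarizer
  accepts
    topic as text, is required
  responds with
    summary as text
  ensures
    permissions
      allowed to llm_call, memory
  implements
    recall find_notes
      query: input.topic
      collection: "notes"
      limit: 3

    ask summarize,
      using: "anthropic:claude-sonnet-4-6"
      with task "Summarize these notes:\n${steps.find_notes.results.map(r => r.content).join('\n')}"
      returns
        summary as text
```

Drop `memory` from the `allowed to` list and the `recall` step cannot run, because each step type needs its specific capability.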
## Guards
Guards validate data before and after steps. Each guard has a name and an action to take when violated:
```
ensures
  permissions
    allowed to llm_call
  guards
    pii_detection on_violation: "redact"
    content_safety on_violation: "block"
    max_tokens: 4096
```

| Action | What happens |
|---|---|
| `"block"` | Halt execution, return an error |
| `"redact"` | Remove the violating content and continue |
| `"warn"` | Log a warning and continue |
| `"retry"` | Re-run the step with guardrail feedback |
## Needs
Declare external requirements that must be present before the machine runs:
```
ensures
  needs
    api_key: required
    database_url: required
```

If a declared requirement is missing at runtime, the machine fails immediately with a clear error. No steps execute.
## A complete example
```
machine customer_support_agent
  accepts
    message as text, is required
    customer_id as text, is required
  responds with
    reply as text
    escalated as boolean
  ensures
    permissions
      allowed to llm_call, memory
      not allowed to db_write, execute_shell_commands
      requires approval for send_email
    guards
      pii_detection on_violation: "redact"
    needs
      api_key: required
  implements
    recall find_history
      query: input.message
      collection: "support_history"
      namespace: input.customer_id
      limit: 3

    ask respond,
      using: "anthropic:claude-sonnet-4-6"
      with role "You are a customer support agent. Be helpful and professional."
      with task "Customer message: ${input.message}\n\nPrevious interactions:\n${steps.find_history.results.map(r => r.content).join('\n')}"
      returns
        reply as text
        escalated as boolean
      assuming
        reply: "I can help with that."
        escalated: false

    remember save_interaction
      content: input.message + "\n" + steps.respond.reply
      collection: "support_history"
      namespace: input.customer_id
```

This machine can reason and use memory, but cannot write to a database or run shell commands. PII in model responses is automatically redacted. It requires an API key to start.
## What gets recorded
Every governance decision is recorded in the behavioral ledger:
- Which capability was requested
- Whether it was allowed, denied, or required approval
- The rule that matched
- The inputs to the decision
- A hash proving the record was not tampered with
This is how mashin delivers auditability. You can always answer the question: “Why did this machine do that?”
## Try it
Take the sentiment classifier from Your First Ask Step and add governance. Declare the `llm_call` permission, then add `not allowed to file_access` and a PII detection guard. Run the machine and check the behavioral ledger to see the governance decisions it recorded.
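One possible result, as a sketch only: the actual classifier from Your First Ask Step may use different names, prompts, and model, so adapt the `ensures` block to your version:

```
machine sentiment_classifier
  accepts
    message as text, is required
  responds with
    sentiment as text
  ensures
    permissions
      allowed to llm_call
      not allowed to file_access
    guards
      pii_detection on_violation: "redact"
  implements
    ask classify,
      using: "anthropic:claude-sonnet-4-6"
      with task "Classify the sentiment of this message as positive, negative, or neutral: ${input.message}"
      returns
        sentiment as text
      assuming
        sentiment: "positive"
```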
## Next steps
- Inputs and Outputs - Validation constraints as the first line of defense
- Effects - How governed effects work with stdlib machines
- `ensures` reference - Full specification