# recall
Retrieve data from semantic memory. A recall step performs a similarity search against a vector store collection, returning the most relevant entries for a given query. This is a governed effect: the runtime checks memory permissions, records the retrieval in the behavioral ledger, and mediates the read through the governance interpreter.
## When to use
Use `recall` when you need to:
- Find relevant documents or facts for an LLM prompt (RAG pattern)
- Search conversation history for context
- Look up similar past cases or examples
- Retrieve knowledge base entries matching a query
Use `compute` for in-memory lookups on data already in scope. Use `action ... db` for structured SQL queries. Use `recall` specifically for semantic/vector similarity search that pairs with `remember`.
## Syntax

```
recall <name>
  query: <expression>
  collection: "<collection_name>"
  limit: <number>
  threshold: <number>
  filter: { <key>: <value>, ... }
  namespace: "<namespace>"
```

## Configuration
| Config | Required | Description |
|---|---|---|
| `query` | Yes | The search query. Embedded at runtime and compared against stored vectors. |
| `collection` | Yes | Which collection to search. Must match a collection used in `remember`. |
| `limit` | No | Maximum number of results to return. Default: 5. |
| `threshold` | No | Minimum similarity score (0.0 to 1.0). Results below this score are excluded. Default: 0.0 (no threshold). |
| `filter` | No | Metadata filter. Only entries matching all specified key-value pairs are searched. |
| `namespace` | No | Restrict the search to a specific namespace partition. |
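Given the defaults above, a minimal step needs only the two required configs. The names `find_notes`, `meeting_notes`, and `input.topic` below are illustrative:

```
recall find_notes
  query: input.topic
  collection: "meeting_notes"
```

With no `limit`, up to 5 results are returned; with no `threshold`, no results are excluded by score.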
## Output shape
A `recall` step produces an object with:
| Field | Type | Description |
|---|---|---|
| `results` | list | Array of matching entries, each with `content`, `metadata`, and `score` |
| `count` | number | Number of results returned |
Each result entry:
| Field | Type | Description |
|---|---|---|
| `content` | text | The stored content |
| `metadata` | map | The metadata attached during `remember` |
| `score` | number | Similarity score (0.0 to 1.0; higher is more similar) |
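As a sketch of consuming this shape in a downstream `compute` step (the step and field names here are illustrative, not part of the recall contract):

```
recall find_similar
  query: input.ticket_text
  collection: "support_tickets"
  limit: 3

compute summarize_matches
  let hits = steps.find_similar.results
  {
    best_match: hits[0].content,
    best_score: hits[0].score,
    total_found: steps.find_similar.count
  }
```

Note that `hits[0]` assumes at least one match; when zero results are possible, guard with a `decide` on `count` first, as the conditional example below does.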
## Examples
### RAG pattern: retrieve context for an LLM
```
machine question_answerer
  accepts question as text, is required
  responds with answer as text, sources as list

  implements
    recall find_relevant_docs
      query: input.question
      collection: "knowledge_base"
      limit: 5
      threshold: 0.7

    compute build_context
      let docs = steps.find_relevant_docs.results
      {
        context: docs.map(d => d.content).join("\n\n---\n\n"),
        source_urls: docs.map(d => d.metadata.source)
      }

    ask answer_with_context, using: "anthropic:claude-sonnet-4-6"
      with role "You are a knowledgeable assistant. Answer based only on the provided context. If the context doesn't contain the answer, say so."
      with task "Question: ${input.question}\n\nContext:\n${build_context.context}"
      returns answer as text
      assuming answer: "Based on the provided context..."

    compute format_output
      {
        answer: steps.answer_with_context.answer,
        sources: build_context.source_urls
      }
```

### Filtered recall with namespace
```
recall find_recent_feedback
  query: "product quality issues"
  collection: "customer_feedback"
  filter: { category: "complaint", urgency: "high" }
  namespace: "2026-q1"
  limit: 10
  threshold: 0.6
```

### Recall with conditional follow-up
```
machine research_assistant
  accepts question as text, is required

  implements
    recall check_existing
      query: input.question
      collection: "research_cache"
      limit: 3
      threshold: 0.85

    decide use_cache_or_research
      when steps.check_existing.count is greater than 0
        compute cached_answer
          {
            answer: steps.check_existing.results[0].content,
            source: "cache",
            confidence: steps.check_existing.results[0].score
          }
      otherwise
        ask fresh_research, using: "anthropic:claude-sonnet-4-6"
          with task "Research this question thoroughly: ${input.question}"
          returns answer as text
          assuming answer: "Research findings..."
```

## Governance
Every `recall` step is governed:
- Permission check: the machine must have the `memory` capability
- Read mediation: the governance interpreter mediates the retrieval
- Behavioral ledger: the query, collection, filter criteria, and result count are recorded
- Redaction: if the `records > redaction` section specifies PII redaction, recalled content may be redacted before being passed to downstream steps
In test mode, `recall` steps return empty results by default. Use `assuming` in `verifies` tests to mock recall results for deterministic testing.
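A hypothetical sketch of mocking a recall step in a test: the exact `verifies` block shape shown here is an assumption and may differ in your runtime, but the mocked payload simply mirrors the output shape documented above:

```
verifies question_answerer
  with question: "What is the refund policy?"
  assuming find_relevant_docs: {
    results: [
      { content: "Refunds are accepted within 30 days.", metadata: { source: "faq" }, score: 0.91 }
    ],
    count: 1
  }
```

Because the mocked `results` and `count` replace the live vector search, the test is deterministic and needs no populated vector store.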
## Translations
| Language | Keyword |
|---|---|
| English | recall |
| Spanish | recuerda |
| French | rappelle |
| German | erinnert |
| Japanese | 思い出す |
| Chinese | 回忆 |
| Korean | 회상 |
## See also
- `remember` - Store data in semantic memory
- `ask … using` - Often follows `recall` in a RAG pattern
- `implements` - The section where `recall` steps live
- `ensures` - Permission declarations for the `memory` capability