agent

package
v0.0.0-...-7871f83
Published: Dec 23, 2025 License: Apache-2.0 Imports: 15 Imported by: 0

Documentation

Package agent provides a high-level API for building LLM-powered agents with tool-calling capabilities, built on top of the graph execution engine.

Overview

The agent package simplifies the creation of autonomous agents that can:

  • Interact with LLMs (OpenAI, Anthropic, etc.)
  • Use tools to perform actions
  • Maintain conversation history
  • Handle multi-turn interactions automatically

Quick Start

Create an agent with a model and tools:

import (
	"context"
	"fmt"

	"github.com/hupe1980/agentmesh/pkg/agent"
	"github.com/hupe1980/agentmesh/pkg/graph"
	"github.com/hupe1980/agentmesh/pkg/message"
	"github.com/hupe1980/agentmesh/pkg/model/openai"
	"github.com/hupe1980/agentmesh/pkg/tool"
)

// Define a tool
weatherTool, _ := tool.NewFuncTool(
	"get_weather",
	"Get current weather for a location",
	func(ctx context.Context, location string) (string, error) {
		// Implementation...
		return "Sunny, 72°F", nil
	},
)

// Create agent
compiled, err := agent.NewReAct(
	openai.NewModel(),
	agent.WithTools(weatherTool),
)

// Run agent
messages := []message.Message{
	message.NewHumanMessageFromText("What's the weather in Boston?"),
}
lastEvent, err := graph.Last(compiled.Run(context.Background(), messages))

Architecture

Agents are implemented as graphs with three main nodes:

	START → agent → tools → agent → END
	         ↓              ↑
	         └──────────────┘

  1. Agent node: Calls LLM to generate response or tool calls
  2. Tools node: Executes requested tools
  3. Loop continues until agent produces final response

Streaming

Get real-time updates as the agent runs:

seq := compiled.Run(ctx, messages)
for event, err := range seq {
	if err != nil {
		// Handle iteration error
		break
	}
	switch {
	case event.Err != nil:
		fmt.Printf("Error: %v\n", event.Err)
	case event.Node == "agent":
		fmt.Println("Agent thinking...")
	case event.Node == "tools":
		fmt.Println("Executing tools...")
	}
}

Custom Tools

Tools can be functions, structs, or interfaces:

// Function tool
calc, _ := tool.NewFuncTool("add", "Add two numbers",
	func(ctx context.Context, a, b int) (int, error) {
		return a + b, nil
	},
)

// Struct tool (implements tool.Tool)
type SearchTool struct{}
func (s *SearchTool) Name() string { return "search" }
func (s *SearchTool) Description() string { return "Search the web" }
func (s *SearchTool) InputSchema() tool.InputSchema { return tool.InputSchema{} }
func (s *SearchTool) Run(ctx context.Context, input string) (any, error) {
	// Implementation...
	return "results", nil
}

Configuration

Agents can be configured with options:

compiled, err := agent.NewReAct(
	model,
	agent.WithTools(weatherTool),
	agent.WithMaxIterations(10),
)

State Management

Agents maintain conversation state automatically:

// First interaction
msgs1 := []message.Message{
	message.NewHumanMessageFromText("What's 2+2?"),
}
result1, _ := graph.Last(compiled.Run(ctx, msgs1))

// Second interaction (includes history)
msgs2 := append(msgs1, result1.State[graph.MessagesKeyName].([]message.Message)...)
msgs2 = append(msgs2, message.NewHumanMessageFromText("Add 3 to that"))
result2, _ := graph.Last(compiled.Run(ctx, msgs2))

Error Handling

Tool errors are returned to the agent for recovery:

toolResult := message.NewToolMessage(toolCall.ID, toolCall.Name, err.Error())
// Agent sees error and can try alternative approach
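A minimal sketch of that recovery path, assuming a tool call shape like the one in the Quick Start (the `toolResultMessage` helper and the `call.Input` field are hypothetical, introduced only for illustration):

```go
// toolResultMessage turns a tool invocation outcome into a ToolMessage.
// Hypothetical helper; call.Input is an assumed field on message.ToolCall.
func toolResultMessage(ctx context.Context, t tool.Tool, call message.ToolCall) message.Message {
	out, err := t.Run(ctx, call.Input)
	if err != nil {
		// The agent observes the error text and can try another approach.
		return message.NewToolMessage(call.ID, call.Name, err.Error())
	}
	return message.NewToolMessage(call.ID, call.Name, fmt.Sprint(out))
}
```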

Multi-Agent Systems

Combine multiple agents into larger workflows using message graph:

g := message.NewGraph()
g.CommandNode("classifier", classifierAgent, "researcher", "writer")
g.AgentNode("researcher", researchAgent, "writer")
g.AgentNode("writer", writerAgent, graph.END)
g.Start("classifier")
// ...

Supervisor Pattern

Create a supervisor agent that routes tasks to specialized workers:

// Create specialist agents
mathAgent, _ := agent.NewReAct(model,
	agent.WithInstructions("You are a math expert"),
	agent.WithMaxIterations(5))

codeAgent, _ := agent.NewReAct(model,
	agent.WithInstructions("You are a programming expert"),
	agent.WithMaxIterations(5))

// Create supervisor with functional options
supervisor, err := agent.NewSupervisor(model,
	agent.WithWorker("math", "Math expert", mathAgent),
	agent.WithWorker("code", "Programming expert", codeAgent),
	agent.WithInstructions("Route to specialists"),
	agent.WithMaxIterations(10),
	agent.WithWorkerRetries(2))

The supervisor automatically creates handoff tools for each worker and routes tasks to the most appropriate specialist based on the query.

See examples/supervisor_simple for a complete example.

Package agent also provides sentinel errors for common failure cases; see Variables below.

Index

Constants

This section is empty.

Variables

View Source
var (
	// ErrNoMessages is returned when there are no messages.
	ErrNoMessages = errors.New("agent: no messages")

	// ErrSessionIDRequired is returned when session_id is required but not provided.
	ErrSessionIDRequired = errors.New("agent/conversational: session_id is required")

	// ErrNoUserQuery is returned when no user query is found.
	ErrNoUserQuery = errors.New("agent/rag: no user query found")

	// ErrNoQueryMessages is returned when there are no query messages.
	ErrNoQueryMessages = errors.New("agent/rag: no query messages")

	// ErrNoMessagesInState is returned when there are no messages in state.
	ErrNoMessagesInState = errors.New("agent/rag: no messages in state")
)
View Source
var (
	// ReflectionCountKey tracks the number of reflections performed
	ReflectionCountKey = graph.NewKey[int]("reflection_count")
	// DraftKey stores the current draft answer being refined
	DraftKey = graph.NewKey[string]("draft")
)

State keys for reflection

View Source
var DocumentsKey = graph.NewKey[[]string]("documents")

DocumentsKey is the state key for storing retrieved documents in RAG workflows.

View Source
var MemoryContextKey = graph.NewListKey[message.Message]("memory_context")

MemoryContextKey is the state key for messages retrieved from memory.

View Source
var RephrasedQueryKey = graph.NewKey[string]("rephrased_query")

RephrasedQueryKey stores the rephrased query for retrieval. This is set by the rephrase node when query rephrasing is enabled.

View Source
var SessionIDKey = graph.NewKey[string]("session_id")

SessionIDKey is the state key for the current session identifier.

Functions

func GetConversationHistory

func GetConversationHistory(msgs []message.Message) []message.Message

GetConversationHistory extracts prior messages for context. Returns messages excluding the current (last) human query.

This is useful for rephrasing queries or providing conversation context to models that need to understand the full conversation.

Example:

history := agent.GetConversationHistory(messages)
// history contains all messages except the last human query

func IsConversationalContext

func IsConversationalContext(scope graph.Scope) bool

IsConversationalContext checks if the current execution has conversation history. Returns true if query rephrasing or context-aware processing would be beneficial.

Detection is based on:

  • Presence of AI responses (indicates prior exchange)
  • Multiple human messages (indicates multi-turn conversation)
  • Memory context from Conversational wrapper

Example:

if agent.IsConversationalContext(scope) {
    // Handle as follow-up question
} else {
    // Handle as standalone query
}

func NewConversational

func NewConversational(
	wrappedAgent *graph.Graph,
	mem memory.Memory,
	opts ...ConversationalOption,
) (*graph.Graph, error)

NewConversational creates a memory-enhanced conversational agent that:

  1. Recalls relevant context from memory before the agent runs
  2. Executes the wrapped agent (ReAct, RAG, etc.) as a subgraph
  3. Stores the conversation exchange in memory after completion

The wrapped agent can be any *graph.Graph (ReAct, RAG, Reflection, etc.). This enables composable, memory-aware conversational experiences.

A session ID must be provided at runtime using graph.WithInitialValue.

Returns a *graph.Graph for type-safe composition.

Example:

// Create a ReAct agent
reactAgent, _ := agent.NewReAct(model, agent.WithTools(tools))

// Wrap it with memory
mem := memory.NewSimple() // or semantic memory
chatAgent, _ := agent.NewConversational(reactAgent, mem)

// Use it with a session ID
for msg, err := range chatAgent.Run(ctx, messages,
    graph.WithInitialValue(agent.SessionIDKey, "user-123"),
) {
    // handle msg
}

func NewModelNodeFunc

func NewModelNodeFunc(mdl model.Model, opts ...ModelNodeOption) (graph.NodeFunc, error)

NewModelNodeFunc creates a graph.NodeFunc that executes a model.

The function:

  • Creates a model executor with the provided model and middleware
  • Extracts messages from state
  • Discovers tools from the configured Toolset (or uses static Tools)
  • Resolves instructions (supports templates with state placeholders)
  • Collects and appends tool instructions from InstructionProvider tools
  • Builds a Request with messages + configuration
  • Delegates execution to the executor
  • Routes based on tool calls in the response

Routing logic:

  • If the AI message contains tool calls -> routes to tool target (default: "tool")
  • Otherwise -> routes to next target (default: graph.END)

Example with static tools:

modelFn, err := agent.NewModelNodeFunc(myModel,
    agent.WithModelName("gpt-4"),
    agent.WithModelInstructions("You are a helpful assistant"),
    agent.WithModelTools(searchTool, calculatorTool))

Example with middleware:

modelFn, err := agent.NewModelNodeFunc(myModel,
    agent.WithModelNodeMiddleware(loggingMiddleware, retryMiddleware),
    agent.WithModelToolset(mcpToolset))

Example with custom routing:

modelFn, err := agent.NewModelNodeFunc(myModel,
    agent.WithNextTarget("validator"),  // Route to validator instead of END
    agent.WithToolTarget("tool_executor"))  // Custom tool node

func NewRAG

func NewRAG(mdl model.Model, retriever retrieval.Retriever, opts ...RAGOption) (*graph.Graph, error)

NewRAG creates a Retrieval-Augmented Generation agent that:

  1. Retrieves relevant context from a knowledge base
  2. Generates a response using both the query and retrieved context

Returns a *graph.Graph for type-safe composition.

This pattern is ideal for question-answering over large document collections.

Example:

// Create retriever with topK configured
retriever := langchaingo.NewRetrieverFromVectorStore(vectorStore, func(o *langchaingo.Options) {
    o.NumDocuments = 5
})
agent, err := agent.NewRAG(model, retriever)

func NewReAct

func NewReAct(mdl model.Model, opts ...ReActOption) (*graph.Graph, error)

NewReAct creates a Reasoning and Acting (ReAct) agent that iteratively:

  1. Reasons about the task
  2. Decides which tool to use (if tools provided)
  3. Observes the result
  4. Repeats until the answer is found

Returns a Graph that processes message sequences and streams execution results.

This pattern is effective for multi-step problem solving with tool use.

You can provide tools in two ways:

  1. Static list: use WithTools() option
  2. Dynamic toolset: use WithToolset() option for runtime tool discovery

Example with static tools:

agent, err := agent.NewReAct(model,
    agent.WithTools(searchTool, calculatorTool),
    agent.WithMaxIterations(5))

Example with dynamic toolset:

agent, err := agent.NewReAct(model,
    agent.WithToolset(mcpToolset),
    agent.WithMaxIterations(5))

func NewReflection

func NewReflection(
	wrappedAgent *graph.Graph,
	reflectionModel model.Model,
	opts ...ReflectionOption,
) (*graph.Graph, error)

NewReflection creates a reflection agent that wraps another agent and adds self-critique and refinement capabilities. The reflection agent will:

  1. Run the wrapped agent to get an initial answer
  2. Critique the answer using a reflection model
  3. Pass the critique back to the agent for refinement
  4. Repeat until max reflections or quality threshold met

This pattern allows any agent (ReAct, RAG, Supervisor, custom) to benefit from iterative self-improvement through reflection.

Example wrapping a ReAct agent:

reactAgent, _ := agent.NewReAct(model, agent.WithTools(searchTool))
reflectionAgent, _ := agent.NewReflection(reactAgent, reflectionModel,
    agent.WithReflectionMaxIterations(3),
    agent.WithReflectionPromptTemplate("Critique this answer..."))

Example wrapping a RAG agent:

ragAgent, _ := agent.NewRAG(model, retriever)
reflectionAgent, _ := agent.NewReflection(ragAgent, reflectionModel,
    agent.WithReflectionMaxIterations(2))

func NewSupervisor

func NewSupervisor(mdl model.Model, opts ...SupervisorOption) (*graph.Graph, error)

NewSupervisor creates a supervisor agent that delegates work to specialized worker agents. The supervisor uses a model to decide which worker should handle each request.

Returns a *graph.Graph that enables type-safe composition with other agents. Worker agents must also be *graph.Graph.

Example:

supervisor, err := agent.NewSupervisor(
    model,
    agent.WithWorker("math", "Math expert", mathAgent),
    agent.WithWorker("code", "Programming expert", codeAgent),
    agent.WithInstructions("Route to specialists"),
    agent.WithWorkerRetries(2),
)

func NewToolNodeFunc

func NewToolNodeFunc(opts ...ToolNodeOption) (graph.NodeFunc, error)

NewToolNodeFunc creates a graph.NodeFunc that executes tools.

The function:

  • Extracts tool calls from the last AI message
  • Discovers tools from the configured Toolset (or uses static Executor)
  • Converts tool calls to executor format
  • Delegates execution to the Executor
  • Formats results as ToolMessages
  • Routes back to model

The Executor handles all execution concerns including:

  • Sequential vs parallel execution
  • Error handling (continueOnError, errorPrefix)
  • Plugin lifecycle (BeforeTool, AfterTool, OnToolError)
  • Observability (tracing, metrics, logging)
  • Concurrency control (maxConcurrency for parallel execution)

Example with static executor:

executor := tool.NewSequentialExecutor(toolRegistry)
toolFn, err := agent.NewToolNodeFunc(agent.WithToolExecutor(executor))

Example with dynamic toolset:

toolFn, err := agent.NewToolNodeFunc(agent.WithToolNodeToolset(mcpToolset))

Types

type CitationStyle

type CitationStyle int

CitationStyle defines how citations are formatted in responses.

const (
	// CitationBracket uses bracket notation: [1], [2]
	CitationBracket CitationStyle = iota

	// CitationSuperscript uses superscript notation: ¹, ²
	CitationSuperscript

	// CitationParenthetical uses parenthetical notation: (1), (2)
	CitationParenthetical
)

type ConversationalOption

type ConversationalOption func(*conversationalOptions)

ConversationalOption configures a Conversational agent.

func WithFailOnStoreError

func WithFailOnStoreError(fail bool) ConversationalOption

WithFailOnStoreError causes the agent to return an error if memory storage fails. By default, storage errors are silently ignored since memory is non-critical.

func WithLongTermMessages

func WithLongTermMessages(n int) ConversationalOption

WithLongTermMessages sets the number of semantically similar messages to recall. These are retrieved via semantic search from the conversation history. Default is 5.

func WithMinSimilarityScore

func WithMinSimilarityScore(score float64) ConversationalOption

WithMinSimilarityScore sets the minimum similarity score for memory search.

func WithShortTermMessages

func WithShortTermMessages(n int) ConversationalOption

WithShortTermMessages sets the number of recent messages to always include. These are the last N messages from the conversation, providing immediate context. Default is 5.
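Putting these options together (a sketch reusing the constructors from the NewConversational example; model and searchTool are assumed to exist):

```go
// Sketch: a memory-backed chat agent with tuned recall settings.
reactAgent, _ := agent.NewReAct(model, agent.WithTools(searchTool))

chatAgent, err := agent.NewConversational(reactAgent, memory.NewSimple(),
	agent.WithShortTermMessages(10),   // always include the 10 most recent messages
	agent.WithLongTermMessages(3),     // plus up to 3 semantically similar ones
	agent.WithMinSimilarityScore(0.7), // ignore weak semantic matches
	agent.WithFailOnStoreError(true),  // surface memory-write failures instead of ignoring them
)
```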

type GroundingMode

type GroundingMode int

GroundingMode defines how strictly the model should ground responses.

const (
	// GroundingStrict requires all claims to be directly from documents.
	// Model will refuse to answer if information isn't in the context.
	// This is the default mode.
	GroundingStrict GroundingMode = iota

	// GroundingGuided prefers document information but allows inferences.
	// Model clearly distinguishes sourced facts from inferences.
	GroundingGuided

	// GroundingCitation requires inline citations for all claims.
	// Enables source attribution but allows some inference.
	GroundingCitation

	// GroundingNone disables grounding prompts entirely.
	// The model can use any knowledge to answer questions.
	GroundingNone
)

type Instructions

type Instructions struct {
	// contains filtered or unexported fields
}

Instructions represents either a static instruction string or a dynamic provider. Supports Go text/template syntax via pkg/prompt for placeholder substitution.

func NewInstructions

func NewInstructions(templateStr string) Instructions

NewInstructions creates Instructions from a template string. Uses Go text/template syntax with helper functions from pkg/prompt:

  • {{.keyName}} - substitute from graph state
  • {{default "fallback" .Value}} - use fallback if nil/empty
  • {{.Name | upper}} - convert to uppercase
  • {{.Name | lower}} - convert to lowercase
  • {{if .Condition}}...{{end}} - conditionals

Example:

NewInstructions("You are helping {{.userName}}. Task: {{default \"general\" .task}}")

func NewInstructionsFromFunc

func NewInstructionsFromFunc(f func(context.Context, graph.Scope) (string, error)) Instructions

NewInstructionsFromFunc creates Instructions from a function.
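For example, instructions derived from graph state (UserNameKey is a hypothetical key introduced for illustration):

```go
// Sketch: instructions computed at runtime from a state value.
var UserNameKey = graph.NewKey[string]("user_name")

instr := agent.NewInstructionsFromFunc(
	func(ctx context.Context, scope graph.Scope) (string, error) {
		name := graph.Get(scope, UserNameKey)
		return "You are assisting " + name + ". Be concise.", nil
	})
```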

func NewInstructionsFromProvider

func NewInstructionsFromProvider(p InstructionsProvider) Instructions

NewInstructionsFromProvider creates Instructions from a dynamic provider.

func (Instructions) IsStatic

func (i Instructions) IsStatic() bool

IsStatic returns true if backed by a template (not a dynamic provider).

func (Instructions) Resolve

func (i Instructions) Resolve(ctx context.Context, scope graph.Scope) (string, error)

Resolve returns the instruction text, invoking the provider if dynamic, or rendering the template with state values if static.

type InstructionsProvider

type InstructionsProvider interface {
	Instructions(ctx context.Context, scope graph.Scope) (string, error)
}

InstructionsProvider supplies dynamic instruction text at runtime. Implementations can derive instructions from session state, configuration, or environment.

type InstructionsProviderFunc

type InstructionsProviderFunc func(ctx context.Context, scope graph.Scope) (string, error)

InstructionsProviderFunc is a functional adapter for InstructionsProvider.

func (InstructionsProviderFunc) Instructions

func (f InstructionsProviderFunc) Instructions(ctx context.Context, scope graph.Scope) (string, error)

Instructions implements InstructionsProvider.
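The adapter lets any function with a matching signature satisfy the interface (a minimal sketch):

```go
// Sketch: wrap a plain function so it satisfies InstructionsProvider.
var provider agent.InstructionsProvider = agent.InstructionsProviderFunc(
	func(ctx context.Context, scope graph.Scope) (string, error) {
		return "You are a helpful assistant.", nil
	})

instr := agent.NewInstructionsFromProvider(provider)
```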

type ModelNodeConfig

type ModelNodeConfig struct {
	Name         string                      // Executor name for identification
	Middleware   []model.Middleware          // Model middleware chain
	Instructions *Instructions               // Dynamic instructions (supports templates and providers)
	Tools        []tool.Tool                 // Static tools for this node
	Toolset      tool.Toolset                // Dynamic toolset for runtime tool discovery
	OutputSchema *schema.OutputSchema        // Optional schema for structured output generation
	ToolTarget   string                      // Target node when tool calls are present (default: "tool")
	NextTarget   string                      // Target node when no tool calls (default: graph.END)
	Stream       bool                        // Enable streaming mode for real-time output
	ResponseKey  *graph.Key[message.Message] // Optional state key to store the final response message
}

ModelNodeConfig holds configuration for creating a model node function.

type ModelNodeOption

type ModelNodeOption func(*ModelNodeConfig)

ModelNodeOption configures a ModelNodeConfig.

func WithModelInstructions

func WithModelInstructions(templateStr string) ModelNodeOption

WithModelInstructions sets instructions from a template string for this model node. Uses Go text/template syntax - placeholders like {{.userName}} are substituted from state.

func WithModelInstructionsFunc

func WithModelInstructionsFunc(f func(context.Context, graph.Scope) (string, error)) ModelNodeOption

WithModelInstructionsFunc sets instructions from a dynamic function for this model node. Use when instructions need complex logic or access to graph state beyond template substitution.

func WithModelName

func WithModelName(name string) ModelNodeOption

WithModelName sets the executor name for identification in logs and tracing.

func WithModelNodeMiddleware

func WithModelNodeMiddleware(middleware ...model.Middleware) ModelNodeOption

WithModelNodeMiddleware adds middleware to the model executor chain.

func WithModelOutputSchema

func WithModelOutputSchema(outputSchema *schema.OutputSchema) ModelNodeOption

WithModelOutputSchema sets a structured output schema for the model node. The schema constrains the model to generate valid JSON matching the schema. Only works with models that support structured output (check model.Capabilities().StructuredOutput). For agent-level configuration, use WithOutputSchema instead.

func WithModelResponseKey

func WithModelResponseKey(key graph.Key[message.Message]) ModelNodeOption

WithModelResponseKey sets a state key to store the final response message. The final message will be stored in state using graph.SetValue(key, message). This allows other nodes to access the model's response via graph.Get(scope, key).

Example:

var ResponseKey = graph.NewKey[message.Message]("model_response")
modelFn, _ := agent.NewModelNodeFunc(model,
    agent.WithModelResponseKey(ResponseKey))

func WithModelStreaming

func WithModelStreaming(enabled bool) ModelNodeOption

WithModelStreaming enables streaming mode for real-time output. When enabled, partial responses are streamed via the graph's stream writer, allowing real-time display of AI responses as they're generated.

func WithModelTools

func WithModelTools(tools ...tool.Tool) ModelNodeOption

WithModelTools sets static tools available to the model for this node. For dynamic tool discovery, use WithModelToolset instead.

func WithModelToolset

func WithModelToolset(ts tool.Toolset) ModelNodeOption

WithModelToolset sets a dynamic toolset for runtime tool discovery. Tools are discovered on each invocation with access to the current graph state.

func WithNextTarget

func WithNextTarget(target string) ModelNodeOption

WithNextTarget sets the target node when there are no tool calls. Default is graph.END.

func WithToolTarget

func WithToolTarget(target string) ModelNodeOption

WithToolTarget sets the target node when tool calls are present. Default is "tool".

type RAGOption

type RAGOption interface {
	// contains filtered or unexported methods
}

RAGOption configures a RAG agent.

func WithCitationStyle

func WithCitationStyle(style CitationStyle) RAGOption

WithCitationStyle sets the citation format for GroundingCitation mode. This affects how citations appear in the model's responses.

Available styles:

  • CitationBracket: [1], [2] (default)
  • CitationSuperscript: ¹, ²
  • CitationParenthetical: (1), (2)

Example:

ragAgent, _ := agent.NewRAG(model, retriever,
    agent.WithGroundingMode(agent.GroundingCitation),
    agent.WithCitationStyle(agent.CitationSuperscript),
)

func WithContextPrompt

func WithContextPrompt(tmpl *prompt.Template) RAGOption

WithContextPrompt sets a custom prompt template for formatting retrieved documents. This prompt is used to present the retrieved context to the model for generation.

func WithGroundingMode

func WithGroundingMode(mode GroundingMode) RAGOption

WithGroundingMode sets the grounding mode for RAG responses. By default, GroundingStrict is used which requires all claims to be from documents.

Available modes:

  • GroundingStrict: Only answer from provided documents (default)
  • GroundingGuided: Prefer documents but allow general knowledge
  • GroundingCitation: Require inline citations for all claims
  • GroundingNone: Disable grounding prompts entirely

Example:

ragAgent, _ := agent.NewRAG(model, retriever,
    agent.WithGroundingMode(agent.GroundingGuided),
)

func WithGroundingPrompt

func WithGroundingPrompt(prompt string) RAGOption

WithGroundingPrompt sets a custom grounding prompt. This overrides the default prompt for the selected grounding mode.

The prompt is rendered as a template with the following variables:

  • {{.CitationFormat}}: The citation format string (e.g., "[n]", "(n)")
  • {{.CitationExample}}: An example sentence with citations

Note: When using a custom prompt, you have full control over grounding behavior. The groundingMode still affects document formatting (numbered for citation mode).

Example:

ragAgent, _ := agent.NewRAG(model, retriever,
    agent.WithGroundingPrompt("Use citations {{.CitationFormat}} for all claims..."),
)

func WithRephrasePrompt

func WithRephrasePrompt(tmpl *prompt.Template) RAGOption

WithRephrasePrompt sets a custom prompt template for query rephrasing. This is used when the RAG agent rephrases queries in conversational contexts. Has no effect if WithSkipRephrasing is also used.

func WithSkipRephrasing

func WithSkipRephrasing() RAGOption

WithSkipRephrasing disables automatic query rephrasing.

By default, the RAG agent automatically detects conversational context and rephrases follow-up questions to be standalone queries for better retrieval. Use this option to disable this behavior if you want to use queries as-is.

Example:

// Disable automatic rephrasing
ragAgent, _ := agent.NewRAG(model, retriever,
    agent.WithSkipRephrasing(),
)

func WithoutGrounding

func WithoutGrounding() RAGOption

WithoutGrounding disables grounding prompts entirely. This is a shorthand for WithGroundingMode(GroundingNone).

Use this when you want the model to use any knowledge to answer, not just the retrieved documents.

Example:

ragAgent, _ := agent.NewRAG(model, retriever,
    agent.WithoutGrounding(),
)

type ReActOption

type ReActOption interface {
	// contains filtered or unexported methods
}

ReActOption configures a ReAct agent. It can be either a plain option function or a SharedOption.

func WithTools

func WithTools(tools ...tool.Tool) ReActOption

WithTools provides static tools to the agent via options.

func WithToolset

func WithToolset(ts tool.Toolset) ReActOption

WithToolset adds a dynamic toolset for runtime tool discovery. Tools are discovered from the toolset on each model invocation, with access to the current graph state via the View parameter. Multiple toolsets can be added; they will be combined.

type ReflectionOption

type ReflectionOption interface {
	// contains filtered or unexported methods
}

ReflectionOption configures a Reflection agent.

func WithReflectionGraphMiddleware

func WithReflectionGraphMiddleware(middleware ...graph.NodeMiddleware) ReflectionOption

WithReflectionGraphMiddleware adds node middleware to the reflection graph.

func WithReflectionMaxIterations

func WithReflectionMaxIterations(n int) ReflectionOption

WithReflectionMaxIterations sets the maximum number of reflection iterations.

func WithReflectionModelMiddleware

func WithReflectionModelMiddleware(middleware ...model.Middleware) ReflectionOption

WithReflectionModelMiddleware adds middleware to the reflection model executor.

func WithReflectionPromptTemplate

func WithReflectionPromptTemplate(prompt string) ReflectionOption

WithReflectionPromptTemplate sets the prompt used to critique answers. Use {draft} as a placeholder for the answer to critique.
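For instance, a custom critique prompt (a sketch; ragAgent and reflectionModel as in the NewReflection examples):

```go
reflectionAgent, err := agent.NewReflection(ragAgent, reflectionModel,
	agent.WithReflectionMaxIterations(2),
	agent.WithReflectionPromptTemplate(
		"Review the following draft for factual errors and missing steps:\n\n"+
			"{draft}\n\n"+ // replaced with the current answer
			"List concrete improvements, or reply that no changes are needed."))
```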

type SharedOption

type SharedOption func(*commonOptions)

SharedOption wraps a function that modifies commonOptions and implements both ReActOption and SupervisorOption interfaces.

This allows common option functions (like WithInstructions) to work with any agent type that embeds commonOptions, eliminating the need for type-prefixed functions like "WithSupervisorInstructions" or "WithReActInstructions".

func WithInstructions

func WithInstructions(templateStr string) SharedOption

WithInstructions sets instructions from a template string. Uses Go text/template syntax - placeholders like {{.userName}} are substituted from state.

Example:

WithInstructions("You are helping {{.userName}}. Task: {{default \"general\" .task}}")

func WithInstructionsFunc

func WithInstructionsFunc(f func(context.Context, graph.Scope) (string, error)) SharedOption

WithInstructionsFunc sets instructions from a dynamic function. Use when instructions need complex logic or external data beyond template substitution.

Example:

WithInstructionsFunc(func(ctx context.Context, scope graph.Scope) (string, error) {
    user := graph.Get(scope, UserKey)
    if user.IsPremium {
        return "You are a premium assistant...", nil
    }
    return "You are a helpful assistant...", nil
})

func WithMaxIterations

func WithMaxIterations(n int) SharedOption

WithMaxIterations sets the maximum reasoning iterations for any agent type.

func WithModelMiddleware

func WithModelMiddleware(middleware ...model.Middleware) SharedOption

WithModelMiddleware adds middleware to the model executor for any agent type.

func WithNodeMiddleware

func WithNodeMiddleware(middleware ...graph.NodeMiddleware) SharedOption

WithNodeMiddleware adds node-level middleware to the graph for any agent type. Node middleware wraps each node execution and runs for every node. For middleware that should wrap the entire Run/Resume operation, use WithRunMiddleware.

func WithOutputSchema

func WithOutputSchema(outputSchema *schema.OutputSchema) SharedOption

WithOutputSchema sets a structured output schema for any agent type. The schema constrains the model to generate valid JSON matching the schema.

func WithRunMiddleware

func WithRunMiddleware(middleware ...graph.RunMiddleware) SharedOption

WithRunMiddleware adds run-level middleware to the agent's graph. Run middleware wraps the entire Run/Resume operation, intercepting:

  • Input before execution starts
  • Output after execution completes

This is useful for:

  • Input validation (check user input once at start)
  • Output validation (check final output once at end)
  • Logging/observability at the run level
  • Request/response transformation

Middleware is applied in order: first added = outermost wrapper.

func WithStreaming

func WithStreaming(enabled bool) SharedOption

WithStreaming enables streaming mode for real-time output. When enabled, partial responses are streamed via the graph's stream writer, allowing real-time display of AI responses as they're generated.

To receive streamed values, provide a stream handler when running the graph:

graph.WithStreamHandler(func(msg message.Message) {
    fmt.Print(msg.Text())
})

func WithToolMiddleware

func WithToolMiddleware(middleware ...tool.Middleware) SharedOption

WithToolMiddleware adds middleware to the tool executor for any agent type.

type SupervisorOption

type SupervisorOption interface {
	// contains filtered or unexported methods
}

SupervisorOption configures a supervisor agent. It can be either a plain option function or a SharedOption.

func WithWorker

func WithWorker(name, description string, agent *graph.Graph) SupervisorOption

WithWorker adds a worker agent to the supervisor. The agent must be a *graph.Graph (e.g., created via NewReAct).

func WithWorkerRetries

func WithWorkerRetries(attempts int) SupervisorOption

WithWorkerRetries sets retry attempts for worker failures.

func WithWorkerValidation

func WithWorkerValidation(validate bool) SupervisorOption

WithWorkerValidation enables validation of worker results.
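Combined with retries, validation gives the supervisor a basic quality gate (a sketch reusing the NewSupervisor example):

```go
supervisor, err := agent.NewSupervisor(model,
	agent.WithWorker("math", "Math expert", mathAgent),
	agent.WithWorker("code", "Programming expert", codeAgent),
	agent.WithWorkerValidation(true), // check worker results before accepting them
	agent.WithWorkerRetries(3),       // retry a failing worker up to 3 times
)
```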

type ToolNodeConfig

type ToolNodeConfig struct {
	Executor    tool.Executor     // Static executor (mutually exclusive with Toolset)
	Toolset     tool.Toolset      // Dynamic toolset for runtime tool discovery
	Middleware  []tool.Middleware // Middleware to apply to the executor
	ModelTarget string            // Target node to route back to (default: "model")
}

ToolNodeConfig holds configuration for creating a tool node function.

type ToolNodeOption

type ToolNodeOption func(*ToolNodeConfig)

ToolNodeOption configures a ToolNodeConfig.

func WithModelTarget

func WithModelTarget(target string) ToolNodeOption

WithModelTarget sets the target node to route back to after tool execution. Default is "model".

func WithToolExecutor

func WithToolExecutor(executor tool.Executor) ToolNodeOption

WithToolExecutor sets a static tool executor. For dynamic tool discovery, use WithToolNodeToolset instead.

func WithToolNodeMiddleware

func WithToolNodeMiddleware(middleware ...tool.Middleware) ToolNodeOption

WithToolNodeMiddleware sets middleware to apply to the tool executor.

func WithToolNodeToolset

func WithToolNodeToolset(ts tool.Toolset) ToolNodeOption

WithToolNodeToolset sets a dynamic toolset for runtime tool discovery. Tools are discovered on each invocation with access to the current graph state.

type WorkerAgent

type WorkerAgent struct {
	Name        string       // Unique identifier for the worker
	Description string       // Description of the worker's expertise
	Agent       *graph.Graph // The agent to delegate work to
}

WorkerAgent represents a specialized agent that can be supervised.
