
Method Chaining


Fluent Interface: An Essential Pattern

AgentForce ADK uses method chaining to provide an intuitive, readable way to configure and execute agents. Learn how to master this powerful pattern.

Method chaining allows you to call multiple methods on an object in sequence, with each method returning the object itself. This creates a fluent, readable interface:

```typescript
// Without chaining (verbose)
const agent = new AgentForceAgent(config);
agent.useLLM("ollama", "gemma3:12b");
agent.systemPrompt("You are a helpful assistant");
agent.prompt("Hello, world!");
agent.debug();
```

```typescript
// With chaining (fluent)
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful assistant")
  .prompt("Hello, world!")
  .debug();
```
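Under the hood, chaining works because each configuration method returns the agent instance itself. A minimal sketch of the mechanism, using illustrative names rather than the actual AgentForceAgent implementation:

```typescript
// Minimal sketch: each configuration method ends with `return this`,
// so calls can be strung together. `MiniAgent` is a hypothetical class
// for illustration only.
class MiniAgent {
  private provider = "";
  private model = "";
  private system = "";

  useLLM(provider: string, model: string): this {
    this.provider = provider;
    this.model = model;
    return this; // returning the instance is what enables chaining
  }

  systemPrompt(prompt: string): this {
    this.system = prompt;
    return this;
  }

  describe(): string {
    return `${this.provider}/${this.model}: ${this.system}`;
  }
}

const mini = new MiniAgent()
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are helpful");

console.log(mini.describe()); // → ollama/gemma3:12b: You are helpful
```

Declaring the return type as `this` (rather than `MiniAgent`) also keeps chaining type-safe in subclasses.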

AgentForce ADK methods fall into three categories:

Chainable Methods

Return: AgentForceAgent instance

Purpose: Configure the agent

Examples: .useLLM(), .systemPrompt(), .prompt(), .debug()

Terminal Methods

Return: Final result (Promise, string, etc.)

Purpose: Execute or finalize the agent

Examples: .output(), .serve(), .run()

Protected Methods

Return: Various types (internal use)

Purpose: Internal state management

Examples: .getModel(), .setProvider()
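The chainable/terminal split above comes down to return types: chainable methods return the agent, terminal methods return a final value such as a `Promise<string>`. A sketch of that distinction, with hypothetical names rather than the real ADK internals:

```typescript
// Illustrative sketch of chainable vs terminal methods.
// `DemoAgent` is a hypothetical class, not the real AgentForceAgent.
class DemoAgent {
  private parts: string[] = [];

  // Chainable: returns the instance so the chain can continue
  prompt(input: string): this {
    this.parts.push(input);
    return this;
  }

  // Terminal: returns a Promise<string>, so no agent methods
  // can be chained after it
  async output(): Promise<string> {
    return this.parts.join(" ");
  }
}

const demo = new DemoAgent().prompt("Hello,").prompt("world!");
demo.output().then((text) => console.log(text)); // → Hello, world!
```

Calling `.prompt()` after `.output()` would be a compile-time error here, because `Promise<string>` has no `prompt` method — the same reason chaining after the ADK's terminal methods fails.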

```typescript
// Configure provider and model; the latest call overrides earlier ones
agent
  .useLLM("ollama", "gemma3:12b")
  .useLLM("openrouter", "openai/gpt-4")
  .useLLM("openrouter", "anthropic/claude-3-sonnet");

// Method signature
useLLM(provider: string, model: string): AgentForceAgent
```

Execution methods end the chain and produce final results:

```typescript
// Generate formatted output (terminal)
const textOutput = await agent
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are helpful")
  .prompt("Hello")
  .output("text"); // Returns Promise<string>

// Cannot chain after output()
// ❌ This won't work:
// agent.output("text").debug();

// Method signature
output(format: "text" | "json" | "md"): Promise<string>
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const response = await new AgentForceAgent({
  name: "QuickAgent"
})
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful assistant")
  .prompt("Explain quantum computing in simple terms")
  .debug()
  .output("text");

console.log(response);
```
```typescript
const agent = new AgentForceAgent({
  name: "StepByStepAgent"
})
  .useLLM("ollama", "gemma3:12b") // Step 1: Configure provider
  // Step 2: Set system instructions
  .systemPrompt(`
    You are a data analyst.
    Provide clear, actionable insights.
  `)
  // Step 3: Set user input
  .prompt(`
    Analyze this sales data:
    ${JSON.stringify(salesData)}
  `)
  .debug(); // Step 4: Enable debugging output

// Step 5: Execute and get result
const analysis = await agent.output("md");
```
```typescript
function createAgent(useDebug: boolean, useCloud: boolean) {
  let agent = new AgentForceAgent({
    name: "ConditionalAgent"
  });

  // Conditional provider selection
  if (useCloud) {
    agent = agent.useLLM("openrouter", "openai/gpt-4");
  } else {
    agent = agent.useLLM("ollama", "gemma3:12b");
  }

  // Conditional debugging
  if (useDebug) {
    agent = agent.debug();
  }

  return agent
    .systemPrompt("You are a flexible assistant")
    .prompt("Hello, world!");
}

// Usage
const devAgent = createAgent(true, false);  // Debug + Local
const prodAgent = createAgent(false, true); // No debug + Cloud
```
```typescript
function buildAgentChain(options: {
  provider: string;
  model: string;
  systemPrompt: string;
  userPrompt: string;
  enableDebug?: boolean;
}) {
  let chain = new AgentForceAgent({
    name: "DynamicAgent"
  })
    .useLLM(options.provider, options.model)
    .systemPrompt(options.systemPrompt)
    .prompt(options.userPrompt);

  if (options.enableDebug) {
    chain = chain.debug();
  }

  return chain;
}

// Usage
const agent = buildAgentChain({
  provider: "ollama",
  model: "gemma3:12b",
  systemPrompt: "You are a code reviewer",
  userPrompt: "Review this TypeScript function",
  enableDebug: true
});
```
```typescript
class AgentBuilder {
  private agent: AgentForceAgent;

  constructor(config: AgentConfig) {
    this.agent = new AgentForceAgent(config);
  }

  withOllama(model: string) {
    this.agent = this.agent.useLLM("ollama", model);
    return this;
  }

  withOpenRouter(model: string) {
    this.agent = this.agent.useLLM("openrouter", model);
    return this;
  }

  withSystemPrompt(prompt: string) {
    this.agent = this.agent.systemPrompt(prompt);
    return this;
  }

  withPrompt(prompt: string) {
    this.agent = this.agent.prompt(prompt);
    return this;
  }

  withDebug() {
    this.agent = this.agent.debug();
    return this;
  }

  build() {
    return this.agent;
  }
}

// Usage
const agent = new AgentBuilder({
  name: "BuilderAgent"
})
  .withOllama("gemma3:12b")
  .withSystemPrompt("You are helpful")
  .withPrompt("Hello")
  .withDebug()
  .build();
```
```typescript
async function createAsyncAgent() {
  // Resolve async inputs before building the chain
  const systemPrompt = await fetchSystemPromptFromAPI();
  const userPrompt = await processUserInput();

  return new AgentForceAgent({
    name: "AsyncAgent"
  })
    .useLLM("ollama", "gemma3:12b")
    .systemPrompt(systemPrompt)
    .prompt(userPrompt)
    .debug();
}

// Usage
const agent = await createAsyncAgent();
const response = await agent.output("text");
```
```typescript
// Base chain factory
function createBaseAgent(name: string) {
  return new AgentForceAgent({ name })
    .useLLM("ollama", "gemma3:12b")
    .debug();
}

// Specialized chains
function createChatAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are a friendly conversational assistant");
}

function createCodeAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are an expert code reviewer and generator");
}

function createAnalysisAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are a data analyst providing actionable insights");
}

// Usage
const chatBot = createChatAgent("ChatBot").prompt("Hello!");
const codeReviewer = createCodeAgent("CodeReviewer").prompt("Review this function");
const dataAnalyst = createAnalysisAgent("DataAnalyst").prompt("Analyze this data");
```
```typescript
// ✅ Good - Configure before prompting
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")  // 1. Provider first
  .systemPrompt("You are helpful") // 2. System prompt second
  .prompt("User input")            // 3. User prompt last
  .debug();                        // 4. Debug if needed
```
```typescript
function createValidatedAgent(provider: string, model: string) {
  if (!provider || !model) {
    throw new Error("Provider and model are required");
  }

  return new AgentForceAgent({
    name: "ValidatedAgent"
  }).useLLM(provider, model);
}

// Usage
const agent = createValidatedAgent("ollama", "gemma3:12b")
  .systemPrompt("You are helpful")
  .prompt("Hello");
```
```typescript
// Agent template function
function createAgentTemplate(
  name: string,
  provider: string,
  model: string
) {
  return new AgentForceAgent({ name })
    .useLLM(provider, model)
    .debug();
}

// Specialized templates
const chatTemplate = (name: string) =>
  createAgentTemplate(name, "ollama", "gemma3:12b")
    .systemPrompt("You are a conversational assistant");

const codeTemplate = (name: string) =>
  createAgentTemplate(name, "openrouter", "openai/gpt-4")
    .systemPrompt("You are a code expert");

// Usage
const myChatBot = chatTemplate("MyChatBot").prompt("Hello!");
const myCodeBot = codeTemplate("MyCodeBot").prompt("Review this code");
```
```typescript
function safeAgentChain(config: AgentConfig) {
  try {
    return new AgentForceAgent(config)
      .useLLM("ollama", "gemma3:12b")
      .systemPrompt("You are helpful")
      .debug();
  } catch (error) {
    console.error("Failed to create agent chain:", error);
    throw new Error(`Agent creation failed: ${(error as Error).message}`);
  }
}

// Usage with error handling
async function executeWithErrorHandling() {
  try {
    const agent = safeAgentChain({
      name: "SafeAgent"
    }).prompt("Test prompt");

    return await agent.output("text");
  } catch (error) {
    console.error("Agent execution failed:", error);
    return "Sorry, I encountered an error.";
  }
}
```
```typescript
function createValidatedChain(options: {
  name: string;
  provider: string;
  model: string;
  systemPrompt: string;
  userPrompt: string;
}) {
  // Validate all required options
  const required = ['name', 'provider', 'model', 'systemPrompt', 'userPrompt'] as const;
  const missing = required.filter(key => !options[key]);

  if (missing.length > 0) {
    throw new Error(`Missing required options: ${missing.join(', ')}`);
  }

  return new AgentForceAgent({
    name: options.name
  })
    .useLLM(options.provider, options.model)
    .systemPrompt(options.systemPrompt)
    .prompt(options.userPrompt);
}
```
```typescript
// ❌ Wrong - Cannot chain after terminal methods
const result = await agent
  .useLLM("ollama", "gemma3:12b")
  .output("text") // Terminal method
  .debug();       // Error: output() doesn't return the agent

// ✅ Correct - Debug before terminal
const result = await agent
  .useLLM("ollama", "gemma3:12b")
  .debug()         // Chainable method
  .output("text"); // Terminal method
```

OpenAI Compatibility

Learn about OpenAI-compatible response formats → OpenAI Guide

You now understand AgentForce ADK’s method chaining system! This fluent interface makes agent configuration intuitive and code more readable and maintainable.