AgentForce ADK uses method chaining to provide an intuitive, readable way to configure and execute agents. Learn how to master this powerful pattern.
Method chaining allows you to call multiple methods on an object in sequence, with each method returning the object itself. This creates a fluent, readable interface:
```typescript
// Without chaining (verbose)
const agent = new AgentForceAgent(config);
agent.useLLM("ollama", "gemma3:12b");
agent.systemPrompt("You are a helpful assistant");
agent.prompt("Hello, world!");
agent.debug();

// With chaining (fluent)
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful assistant")
  .prompt("Hello, world!")
  .debug();
```
AgentForce ADK methods fall into three categories:
Chainable Methods
- Return: AgentForceAgent instance
- Purpose: Configure the agent
- Examples: .useLLM(), .systemPrompt(), .prompt(), .debug()

Terminal Methods
- Return: Final result (Promise, string, etc.)
- Purpose: Execute or finalize the agent
- Examples: .output(), .serve(), .run()

Protected Methods
- Return: Various types (internal use)
- Purpose: Internal state management
- Examples: .getModel(), .setProvider()
```typescript
// Configure provider and model; the latest call overrides previous ones
agent
  .useLLM("ollama", "gemma3:12b")
  .useLLM("openrouter", "openai/gpt-4")
  .useLLM("openrouter", "anthropic/claude-3-sonnet");

// Method signature
useLLM(provider: string, model: string): AgentForceAgent
```
```typescript
// Set system instructions; the latest call overrides previous ones
agent
  .systemPrompt("You are a helpful assistant")
  .systemPrompt("You are a code reviewer specializing in TypeScript")
  .systemPrompt(`
    You are a technical writer.
    Create clear, concise documentation.
    Use examples and best practices.
  `);

// Method signature
systemPrompt(prompt: string): AgentForceAgent
```
```typescript
// Set user input; the latest call overrides previous ones
agent
  .prompt("Hello, how can you help me?")
  .prompt("Review this code for security issues")
  .prompt(`
    Analyze this dataset and provide insights:
    ${JSON.stringify(data)}
  `);

// Method signature
prompt(userPrompt: string): AgentForceAgent
```
```typescript
// Print debug info; can be used multiple times in the chain
agent
  .debug() // Debug output at this state
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Test prompt")
  .debug() // Debug output at this state
  .prompt("Test input")
  .debug(); // Debug output at this state

// Method signature
debug(): AgentForceAgent
```
Execution methods end the chain and produce final results:
```typescript
// Generate formatted output (terminal)
const textOutput = await agent
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are helpful")
  .prompt("Hello")
  .output("text"); // Returns Promise<string>

// Cannot chain after output()
// ❌ This won't work:
// agent.output("text").debug();

// Method signature
output(format: "text" | "json" | "md"): Promise<string>
```
```typescript
// Execute the agent (async chainable)
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are helpful")
  .prompt("Hello")
  .run(); // Returns Promise<AgentForceAgent>

// Can chain after run() resolves
await agent.then(a => console.log(a));

// Method signature
run(): Promise<AgentForceAgent>
```
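The reason `run()` supports `.then()` but not a direct chainable call comes down to its return type: it hands back a `Promise`, so you must resolve it before continuing. A self-contained sketch of that pattern (illustrative class and method names, not ADK internals):

```typescript
// Illustrative sketch: a chainable debug() vs. an async run() returning a Promise.
class MiniRunner {
  calls: string[] = [];
  debug(): this { this.calls.push("debug"); return this; }
  async run(): Promise<this> { this.calls.push("run"); return this; }
}

async function demo(): Promise<string[]> {
  const runner = new MiniRunner();
  // runner.run().debug() would not compile: Promise<MiniRunner> has no debug()
  const finished = await runner.run(); // resolve the Promise first...
  finished.debug();                    // ...then chainable calls work again
  return finished.calls;
}

demo().then(calls => console.log(calls.join(","))); // "run,debug"
```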
```typescript
// Start agent as server (terminal, async)
await agent
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a web API assistant")
  .serve("localhost", 3000); // Returns Promise<void>

// Cannot chain after serve()
// ❌ This won't work:
// agent.serve().debug();

// Method signature
serve(host?: string, port?: number): Promise<void>
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const response = await new AgentForceAgent({ name: "QuickAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful assistant")
  .prompt("Explain quantum computing in simple terms")
  .debug()
  .output("text");

console.log(response);
```
```typescript
const agent = new AgentForceAgent({ name: "StepByStepAgent" })
  // Step 1: Configure provider
  .useLLM("ollama", "gemma3:12b")
  // Step 2: Set system instructions
  .systemPrompt(`
    You are a data analyst.
    Provide clear, actionable insights.
  `)
  // Step 3: Set user input
  .prompt(`
    Analyze this sales data:
    ${JSON.stringify(salesData)}
  `)
  // Step 4: Debugging output
  .debug();

// Step 5: Execute and get result
const analysis = await agent.output("md");
```
```typescript
function createAgent(useDebug: boolean, useCloud: boolean) {
  let agent = new AgentForceAgent({ name: "ConditionalAgent" });

  // Conditional provider selection
  if (useCloud) {
    agent = agent.useLLM("openrouter", "openai/gpt-4");
  } else {
    agent = agent.useLLM("ollama", "gemma3:12b");
  }

  // Conditional debugging
  if (useDebug) {
    agent = agent.debug();
  }

  return agent
    .systemPrompt("You are a flexible assistant")
    .prompt("Hello, world!");
}

// Usage
const devAgent = createAgent(true, false);  // Debug + local
const prodAgent = createAgent(false, true); // No debug + cloud
```
```typescript
function buildAgentChain(options: {
  provider: string;
  model: string;
  systemPrompt: string;
  userPrompt: string;
  enableDebug?: boolean;
}) {
  let chain = new AgentForceAgent({ name: "DynamicAgent" })
    .useLLM(options.provider, options.model)
    .systemPrompt(options.systemPrompt)
    .prompt(options.userPrompt);

  if (options.enableDebug) {
    chain = chain.debug();
  }

  return chain;
}

// Usage
const agent = buildAgentChain({
  provider: "ollama",
  model: "gemma3:12b",
  systemPrompt: "You are a code reviewer",
  userPrompt: "Review this TypeScript function",
  enableDebug: true
});
```
```typescript
class AgentBuilder {
  private agent: AgentForceAgent;

  constructor(config: AgentConfig) {
    this.agent = new AgentForceAgent(config);
  }

  withOllama(model: string) {
    this.agent = this.agent.useLLM("ollama", model);
    return this;
  }

  withOpenRouter(model: string) {
    this.agent = this.agent.useLLM("openrouter", model);
    return this;
  }

  withSystemPrompt(prompt: string) {
    this.agent = this.agent.systemPrompt(prompt);
    return this;
  }

  withPrompt(prompt: string) {
    this.agent = this.agent.prompt(prompt);
    return this;
  }

  withDebug() {
    this.agent = this.agent.debug();
    return this;
  }

  build() {
    return this.agent;
  }
}

// Usage
const agent = new AgentBuilder({ name: "BuilderAgent" })
  .withOllama("gemma3:12b")
  .withSystemPrompt("You are helpful")
  .withPrompt("Hello")
  .withDebug()
  .build();
```
```typescript
async function createAsyncAgent() {
  // Chain with async operations
  const systemPrompt = await fetchSystemPromptFromAPI();
  const userPrompt = await processUserInput();

  return new AgentForceAgent({ name: "AsyncAgent" })
    .useLLM("ollama", "gemma3:12b")
    .systemPrompt(systemPrompt)
    .prompt(userPrompt)
    .debug();
}

// Usage
const agent = await createAsyncAgent();
const response = await agent.output("text");
```
```typescript
// Base chain factory
function createBaseAgent(name: string) {
  return new AgentForceAgent({ name })
    .useLLM("ollama", "gemma3:12b")
    .debug();
}

// Specialized chains
function createChatAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are a friendly conversational assistant");
}

function createCodeAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are an expert code reviewer and generator");
}

function createAnalysisAgent(name: string) {
  return createBaseAgent(name)
    .systemPrompt("You are a data analyst providing actionable insights");
}

// Usage
const chatBot = createChatAgent("ChatBot").prompt("Hello!");
const codeReviewer = createCodeAgent("CodeReviewer").prompt("Review this function");
const dataAnalyst = createAnalysisAgent("DataAnalyst").prompt("Analyze this data");
```
```typescript
// ✅ Good - Configure before prompting
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")  // 1. Provider first
  .systemPrompt("You are helpful") // 2. System prompt second
  .prompt("User input")            // 3. User prompt last
  .debug();                        // 4. Debug if needed
```
```typescript
function createValidatedAgent(provider: string, model: string) {
  if (!provider || !model) {
    throw new Error("Provider and model are required");
  }

  return new AgentForceAgent({ name: "ValidatedAgent" })
    .useLLM(provider, model);
}

// Usage
const agent = createValidatedAgent("ollama", "gemma3:12b")
  .systemPrompt("You are helpful")
  .prompt("Hello");
```
```typescript
// Agent template function
function createAgentTemplate(
  name: string,
  provider: string,
  model: string
) {
  return new AgentForceAgent({ name })
    .useLLM(provider, model)
    .debug();
}

// Specialized templates
const chatTemplate = (name: string) =>
  createAgentTemplate(name, "ollama", "gemma3:12b")
    .systemPrompt("You are a conversational assistant");

const codeTemplate = (name: string) =>
  createAgentTemplate(name, "openrouter", "openai/gpt-4")
    .systemPrompt("You are a code expert");

// Usage
const myChatBot = chatTemplate("MyChatBot").prompt("Hello!");
const myCodeBot = codeTemplate("MyCodeBot").prompt("Review this code");
```
```typescript
function safeAgentChain(config: AgentConfig) {
  try {
    return new AgentForceAgent(config)
      .useLLM("ollama", "gemma3:12b")
      .systemPrompt("You are helpful")
      .debug();
  } catch (error) {
    console.error("Failed to create agent chain:", error);
    throw new Error(`Agent creation failed: ${error.message}`);
  }
}

// Usage with error handling
async function executeWithErrorHandling() {
  try {
    const agent = safeAgentChain({ name: "SafeAgent" })
      .prompt("Test prompt");

    return await agent.output("text");
  } catch (error) {
    console.error("Agent execution failed:", error);
    return "Sorry, I encountered an error.";
  }
}
```
```typescript
function createValidatedChain(options: {
  name: string;
  provider: string;
  model: string;
  systemPrompt: string;
  userPrompt: string;
}) {
  // Validate all required options
  const required = ['name', 'provider', 'model', 'systemPrompt', 'userPrompt'];
  const missing = required.filter(key => !options[key]);

  if (missing.length > 0) {
    throw new Error(`Missing required options: ${missing.join(', ')}`);
  }

  return new AgentForceAgent({ name: options.name })
    .useLLM(options.provider, options.model)
    .systemPrompt(options.systemPrompt)
    .prompt(options.userPrompt);
}
```
```typescript
// ❌ Wrong - Cannot chain after terminal methods
const result = await agent
  .useLLM("ollama", "gemma3:12b")
  .output("text") // Terminal method
  .debug();       // Error: output() doesn't return the agent

// ✅ Correct - Debug before the terminal method
const result = await agent
  .useLLM("ollama", "gemma3:12b")
  .debug()         // Chainable method
  .output("text"); // Terminal method
```
```typescript
// ❌ Wrong - Too many responsibilities in one chain
const result = await new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Complex system prompt...")
  .prompt("Complex user prompt...")
  .debug()
  .then(a => a.output("text"))
  .then(text => processResponse(text))
  .then(processed => saveToDatabase(processed));

// ✅ Better - Separate concerns
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Complex system prompt...")
  .prompt("Complex user prompt...")
  .debug();

const text = await agent.output("text");
const processed = processResponse(text);
await saveToDatabase(processed);
```
```typescript
// ❌ Wrong - Missing essential configuration
const agent = new AgentForceAgent(config)
  .systemPrompt("You are helpful") // No provider or model configured
  .output("text");                 // Fails: defaults may cover the model, but the required user prompt is missing

// ✅ Correct - Complete configuration
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "gemma3:12b")  // Provider required
  .systemPrompt("You are helpful") // System prompt recommended
  .prompt("Hello")                 // User prompt required
  .output("text");
```
API Reference
Explore the complete API reference → API Reference
OpenAI Compatibility
Learn about OpenAI-compatible response formats → OpenAI Guide
You now understand AgentForce ADK’s method chaining system! This fluent interface makes agent configuration intuitive and code more readable and maintainable.