Get started quickly with these practical AgentForce ADK examples. Each example is self-contained and ready to run.
The simplest possible AgentForce agent:
```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Create and run a basic agent
const agent = new AgentForceAgent({ name: "HelloWorldAgent" })
  .useLLM("ollama", "gemma3:12b")
  .prompt("Say hello to the world")
  .debug();

// Get response
const response = await agent.output("text");
console.log(response);
// Output: "Hello, world! How can I assist you today?"
```
A basic question-answering agent:
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const qaAgent = new AgentForceAgent({ name: "QAAgent" })
  .useLLM("ollama", "phi4-mini:latest")
  .systemPrompt("You are a helpful assistant that provides accurate, concise answers.")
  .prompt("What is the capital of France?")
  .debug();

const answer = await qaAgent.output("text");
console.log(answer);
// Output: "The capital of France is Paris."
```
A more sophisticated personal assistant:
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const assistant = new AgentForceAgent({ name: "PersonalAssistant" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt(`
    You are a helpful personal assistant. You can:
    - Answer questions about various topics
    - Help with planning and organization
    - Provide recommendations
    - Explain complex concepts simply

    Always be friendly, professional, and helpful.
  `)
  .prompt("Help me plan a productive morning routine")
  .debug();

const routine = await assistant.output("md");
console.log(routine);
```
Running different Ollama models in parallel, matched to the workload:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Using different Ollama models
const fastAgent = new AgentForceAgent({ name: "FastAgent" })
  .useLLM("ollama", "phi4-mini:latest") // Fast, lightweight
  .prompt("Quickly explain machine learning");

const powerfulAgent = new AgentForceAgent({ name: "PowerfulAgent" })
  .useLLM("ollama", "gemma3:12b") // More capable
  .prompt("Provide a detailed analysis of blockchain technology");

// Run both agents
const [quickResponse, detailedResponse] = await Promise.all([
  fastAgent.output("text"),
  powerfulAgent.output("text")
]);

console.log("Quick response:", quickResponse);
console.log("Detailed response:", detailedResponse);
```
Switching a single agent to a more capable model when the task demands it:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Create agent with initial model
const adaptiveAgent = new AgentForceAgent({ name: "AdaptiveAgent" })
  .useLLM("ollama", "phi4-mini:latest");

// Simple task with fast model
adaptiveAgent.prompt("What's 2+2?");
const simpleAnswer = await adaptiveAgent.output("text");

// Switch to more powerful model for complex task
adaptiveAgent
  .useLLM("ollama", "gemma3:12b")
  .prompt("Explain quantum computing and its implications for cryptography");

const complexAnswer = await adaptiveAgent.output("text");

console.log("Simple:", simpleAnswer);
console.log("Complex:", complexAnswer);
```
Using a cloud model through OpenRouter (in real code, load the API key from your environment rather than hardcoding it):

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Set OpenRouter API key
process.env.OPENROUTER_API_KEY = 'your-api-key-here';

const cloudAgent = new AgentForceAgent({ name: "CloudAgent" })
  .useLLM("openrouter", "openai/gpt-4")
  .systemPrompt("You are an expert AI assistant with access to the latest information.")
  .prompt("What are the latest developments in AI?")
  .debug();

const response = await cloudAgent.output("text");
console.log(response);
```
Pairing different cloud models with the tasks they handle best:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// GPT-4 for analysis
const analyst = new AgentForceAgent({ name: "AnalystAgent" })
  .useLLM("openrouter", "openai/gpt-4")
  .prompt("Analyze the current state of renewable energy");

// Claude for creative writing
const writer = new AgentForceAgent({ name: "WriterAgent" })
  .useLLM("openrouter", "anthropic/claude-3-sonnet")
  .prompt("Write a short story about a robot learning to paint");

// Run both
const [analysis, story] = await Promise.all([
  analyst.output("text"),
  writer.output("text")
]);

console.log("Analysis:", analysis);
console.log("Story:", story);
```
Requesting a plain-text response:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

const textAgent = new AgentForceAgent({ name: "TextAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Provide clear, concise explanations.")
  .prompt("Explain photosynthesis");

// Get plain text response
const textResponse = await textAgent.output("text");
console.log(textResponse);
// Output: "Photosynthesis is the process by which plants..."
```
Requesting a structured JSON response:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

const jsonAgent = new AgentForceAgent({ name: "JSONAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt(`
    You are a data extraction agent. Always respond with valid JSON.
    Extract key information and present it in a structured format.
  `)
  .prompt("Extract information about Paris: population, country, famous landmarks");

// Get structured JSON response
const jsonResponse = await jsonAgent.output("json");
console.log(JSON.stringify(jsonResponse, null, 2));
```
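Models sometimes wrap JSON in a Markdown code fence or surround it with prose. If you need to parse raw model text yourself, a small helper can recover the object. `extractJson` below is a hypothetical sketch, not part of the ADK:

```typescript
// extractJson: a hypothetical helper (not part of the ADK) that recovers a
// JSON value from raw model text, tolerating a Markdown code fence around it.
function extractJson(raw: string): unknown {
  // Strip a surrounding code fence (three backticks, optionally tagged "json")
  const fenced = raw.match(/^\s*`{3}(?:json)?\s*([\s\S]*?)`{3}\s*$/);
  const candidate = (fenced?.[1] ?? raw).trim();
  try {
    return JSON.parse(candidate);
  } catch {
    // Fall back to the first {...} span embedded in surrounding prose
    const span = candidate.match(/\{[\s\S]*\}/);
    if (span) return JSON.parse(span[0]);
    throw new Error("No parseable JSON found in model output");
  }
}

// Example: a response wrapped in a fence (built here without literal fences)
const fence = "`".repeat(3);
const reply = `${fence}json\n{ "city": "Paris", "country": "France" }\n${fence}`;
console.log(extractJson(reply)); // { city: 'Paris', country: 'France' }
```

The fallback regex is deliberately simple; for strict validation you would pair this with a schema validator.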
Requesting a Markdown-formatted response:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

const markdownAgent = new AgentForceAgent({ name: "MarkdownAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Format responses as well-structured Markdown with headers, lists, and emphasis.")
  .prompt("Create a guide for getting started with TypeScript");

// Get formatted Markdown response
const markdownResponse = await markdownAgent.output("md");
console.log(markdownResponse);
```
A text summarization agent:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

const summarizer = new AgentForceAgent({ name: "TextSummarizer" })
  .useLLM("ollama", "phi4-mini:latest")
  .systemPrompt("Create concise summaries that capture the main points in 2-3 sentences.");

const longText = `Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris...
[Your actual content here]`;

const summary = await summarizer
  .prompt(`Summarize this text: ${longText}`)
  .output("text");

console.log("Summary:", summary);
```
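For inputs longer than the model's context window, split the text first and summarize each piece. `chunkText` below is a plain illustration, not an ADK API, and the chunk size is an arbitrary placeholder:

```typescript
// chunkText: split text into pieces of at most maxChars characters,
// preferring sentence boundaries so chunks stay readable. A single
// sentence longer than maxChars is kept whole rather than hard-split.
function chunkText(text: string, maxChars: number): string[] {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    if (current && current.length + sentence.length + 1 > maxChars) {
      chunks.push(current);
      current = sentence;
    } else {
      current = current ? current + " " + sentence : sentence;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

console.log(chunkText("One. Two. Three. Four.", 10));
// [ 'One. Two.', 'Three.', 'Four.' ]
```

Each chunk can then be sent through the summarizer above, and the per-chunk summaries summarized once more into a final result.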
A language detection agent:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

const detector = new AgentForceAgent({ name: "LanguageDetector" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("Identify the language of the given text. Respond with just the language name.");

const text = "Bonjour, comment allez-vous?";

const language = await detector
  .prompt(`What language is this: "${text}"`)
  .output("text");

console.log(`Language: ${language}`);
// Output: "Language: French"
```
Wrapping an agent in a reusable question-answering function:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

async function askQuestion(question: string) {
  const agent = new AgentForceAgent({ name: "QuestionAnswerer" })
    .useLLM("ollama", "gemma3:12b")
    .systemPrompt("Provide helpful, accurate answers to questions. Keep responses concise but informative.")
    .prompt(question);

  return await agent.output("text");
}

// Usage examples
const answer1 = await askQuestion("How do you make coffee?");
const answer2 = await askQuestion("What is TypeScript?");
const answer3 = await askQuestion("Explain gravity in simple terms");

console.log("Coffee:", answer1);
console.log("TypeScript:", answer2);
console.log("Gravity:", answer3);
```
Configuring an agent from environment variables:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Simple environment-based configuration
const agent = new AgentForceAgent({ name: process.env.AGENT_NAME || 'DefaultAgent' })
  .useLLM(
    process.env.AI_PROVIDER || 'ollama',
    process.env.AI_MODEL || 'phi4-mini:latest'
  );

// Enable debug mode if environment variable is set
if (process.env.DEBUG === 'true') {
  agent.debug();
}

// Use the configured agent
const response = await agent
  .prompt("Hello, what can you do?")
  .output("text");

console.log(response);
```
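Pulling the fallback logic into a pure function makes the defaults testable without touching `process.env`. `resolveAgentConfig` and its default values are illustrative, not part of the ADK:

```typescript
interface AgentEnvConfig {
  name: string;
  provider: string;
  model: string;
  debug: boolean;
}

// Resolve agent settings from an env-like record, applying defaults.
// The default provider/model values are placeholders, not ADK requirements.
function resolveAgentConfig(env: Record<string, string | undefined>): AgentEnvConfig {
  return {
    name: env.AGENT_NAME ?? "DefaultAgent",
    provider: env.AI_PROVIDER ?? "ollama",
    model: env.AI_MODEL ?? "phi4-mini:latest",
    debug: env.DEBUG === "true",
  };
}

// In real code you would pass process.env; here, a sample record:
const cfg = resolveAgentConfig({ AGENT_NAME: "MyAgent", DEBUG: "true" });
console.log(cfg.name, cfg.provider, cfg.debug); // MyAgent ollama true
```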
Selecting a configuration based on the current environment:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

// Development configuration
const devAgent = new AgentForceAgent({ name: "DevAgent" })
  .useLLM("ollama", "phi4-mini:latest")
  .debug();

// Production configuration
const prodAgent = new AgentForceAgent({ name: "ProdAgent" })
  .useLLM("openrouter", "openai/gpt-3.5-turbo");

// Choose agent based on environment
const agent = process.env.NODE_ENV === 'production' ? prodAgent : devAgent;

const response = await agent
  .prompt("What environment am I running in?")
  .output("text");

console.log(response);
```
Handling failures gracefully with a fallback response (the `instanceof Error` check is needed because a caught value is `unknown` in strict TypeScript):

```typescript
import { AgentForceAgent } from '@agentforce/adk';

async function safeAgentCall(prompt: string) {
  try {
    const agent = new AgentForceAgent({ name: "SafeAgent" })
      .useLLM("ollama", "gemma3:12b")
      .prompt(prompt);

    const response = await agent.output("text");
    return { success: true, response };
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    console.error("Agent error:", message);
    return {
      success: false,
      error: message,
      fallback: "I'm sorry, I couldn't process your request right now."
    };
  }
}

// Usage
const result = await safeAgentCall("Hello, how are you?");

if (result.success) {
  console.log("Response:", result.response);
} else {
  console.log("Fallback:", result.fallback);
}
```
Retrying failed calls before giving up:

```typescript
import { AgentForceAgent } from '@agentforce/adk';

async function retryableAgent(prompt: string, maxRetries: number = 2) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const agent = new AgentForceAgent({ name: `RetryAgent_${attempt}` })
        .useLLM("ollama", "phi4-mini:latest")
        .prompt(prompt);

      const response = await agent.output("text");
      console.log(`Success on attempt ${attempt}`);
      return response;
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      console.log(`Attempt ${attempt} failed:`, message);

      if (attempt === maxRetries) {
        throw new Error(`All ${maxRetries} attempts failed`);
      }

      // Wait before retry
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
}

// Usage
try {
  const response = await retryableAgent("What is the weather like?");
  console.log(response);
} catch (error) {
  console.error("All retries failed:", error instanceof Error ? error.message : error);
}
```
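The fixed one-second wait can be swapped for exponential backoff, so each retry waits longer than the last. `backoffDelay` and `withBackoff` below are a generic sketch independent of the ADK; the base delay of 500 ms is an arbitrary choice:

```typescript
// backoffDelay: delay in milliseconds for a 1-based attempt, doubling each time.
function backoffDelay(attempt: number, baseMs: number = 500): number {
  return baseMs * 2 ** (attempt - 1);
}

// withBackoff: retry any async operation, sleeping backoffDelay(attempt)
// between failures and rethrowing the last error when retries run out.
async function withBackoff<T>(
  op: () => Promise<T>,
  maxRetries: number = 3,
  baseMs: number = 500
): Promise<T> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await op();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt, baseMs)));
    }
  }
  // Unreachable: the loop always returns or throws
  throw new Error("unreachable");
}

// Hypothetical usage: const text = await withBackoff(() => agent.output("text"));
console.log(backoffDelay(1), backoffDelay(2), backoffDelay(3)); // 500 1000 2000
```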
API Reference
Explore the complete compatibility API reference → API Reference
Advanced Examples
Explore complex patterns and architectures → Advanced Examples
These basic examples provide a solid foundation for understanding AgentForce ADK fundamentals. Start with these patterns and gradually explore more advanced features as you become comfortable with the basics!