Agent Tools enable AgentForceAgent instances to interact with external functions and APIs during conversation. Tools are automatically loaded and made available to the agent based on the `tools` configuration in `AgentConfig`, allowing for dynamic interactions and enhanced functionality.

Tools are configured in the `AgentConfig` when creating an agent instance:
```typescript
import { AgentForceAgent, type AgentConfig } from "@agentforce/adk";

const agentConfig: AgentConfig = {
  name: "FileAgent",
  tools: ["fs_read_file", "fs_write_file"] // Tool names from registry
};
```
| Property | Type | Default |
|----------|------|---------|
| `tools` | `string[]` | `undefined` |

The `tools` option is set on `AgentConfig.tools` and takes an array of tool names from the registry.

```typescript
// 1. Agent configured with tools
const agent = new AgentForceAgent({
  name: "FileAgent",
  tools: ["fs_read_file"]
});

// 2. Tools are auto-loaded during execution
const response = await agent
  .prompt("Read the README.md file and summarize it")
  .getResponse();

// 3. LLM can call fs_read_file tool during response generation
// 4. Tool results are fed back to LLM for final response
```
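Conceptually, steps 3 and 4 form a loop: the model may request a tool call, the result is appended to the conversation, and the model is queried again until it produces a final answer. The sketch below illustrates that loop in generic terms; it is not the ADK's internal implementation, and `callLLM` / `executeTool` are hypothetical stand-ins.

```typescript
// Conceptual sketch of the tool round-trip (steps 3-4), not ADK internals.
// `callLLM` and `executeTool` are hypothetical stand-ins.
type ToolCall = { name: string; arguments: Record<string, unknown> };
type LLMReply = { content: string; toolCalls?: ToolCall[] };
type ChatMessage = { role: "system" | "user" | "assistant" | "tool"; content: string; name?: string };

declare function callLLM(messages: ChatMessage[]): Promise<LLMReply>;
declare function executeTool(call: ToolCall): Promise<unknown>;

async function runToolLoop(messages: ChatMessage[], maxToolRounds = 10): Promise<string> {
  for (let round = 0; round < maxToolRounds; round++) {
    const reply = await callLLM(messages);               // model may request a tool call
    if (!reply.toolCalls?.length) return reply.content;  // no tool call -> final response
    for (const call of reply.toolCalls) {
      const result = await executeTool(call);            // e.g. run fs_read_file
      messages.push({ role: "tool", name: call.name, content: JSON.stringify(result) });
    }
  }
  throw new Error("Exceeded maxToolRounds without a final response");
}
```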
| Tool | Parameters |
|------|------------|
| `fs_read_file` | `path`, `encoding`, `max_length` |
| `fs_write_file` | `path`, `content`, `encoding`, `append` |
| `fs_list_dir` | `path`, `include_hidden`, `recursive` |
| `fs_move_file` | `source`, `destination`, `overwrite` |
| `fs_find_files` | `pattern`, `path`, `recursive`, `case_sensitive` |
| `fs_find_dirs_and_files` | `path`, `pattern`, `max_depth` |
| `fs_search_content` | `query`, `path`, `file_pattern`, `case_sensitive` |
| `fs_get_file_tree` | `path`, `max_depth`, `include_hidden` |
| `web_fetch` | `url`, `wait_for_selector`, `screenshot`, `extract_links` |
| `api_fetch` | `url`, `method`, `headers`, `body`, `timeout_ms` |
| `filter_content` | `content`, `filter_type`, `options` |
| `gh_list_repos` | `username`, `type`, `sort`, `visibility` |
| `os_exec` | `command`, `args`, `cwd`, `timeout_ms` |
| `md_create_ascii_tree` | `structure`, `style`, `indentation` |
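For illustration, an `fs_find_files` call might receive arguments like the following. The parameter names come from the table above; the values shown (and which parameters are optional) are assumptions.

```json
{
  "pattern": "*.test.ts",
  "path": "./src",
  "recursive": true,
  "case_sensitive": false
}
```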
```typescript
import { AgentForceAgent, type AgentConfig } from "@agentforce/adk";

const agentConfig: AgentConfig = {
  name: "FileAgent",
  tools: ["fs_read_file", "fs_write_file", "fs_list_dir"]
};

const agent = new AgentForceAgent(agentConfig)
  .useLLM("ollama", "devstral") // ✅ Great for file operations and coding
  .systemPrompt("You are a file management assistant.")
  .prompt("Read the package.json file, then create a summary file");

const response = await agent.getResponse();
```
```typescript
import { AgentForceAgent, type AgentConfig } from "@agentforce/adk";

const agentConfig: AgentConfig = {
  name: "WebScrapingAgent",
  tools: ["web_fetch", "fs_write_file"]
};

const agent = new AgentForceAgent(agentConfig)
  .useLLM("ollama", "qwen3:8b") // ✅ Versatile with strong tool calling
  .systemPrompt("You are a web scraping specialist.")
  .prompt("Scrape the latest news from example.com and save to news.md");

const response = await agent.output("md");
```
```typescript
import { AgentForceAgent, type AgentConfig } from "@agentforce/adk";

const agentConfig: AgentConfig = {
  name: "APIAgent",
  tools: ["api_fetch", "filter_content"]
};

const agent = new AgentForceAgent(agentConfig)
  .useLLM("ollama", "gpt-oss:20b") // ✅ Strong reasoning for API integration
  .systemPrompt("You are an API integration expert.")
  .prompt("Fetch data from the JSON API and extract key metrics");

const response = await agent.run();
```
```typescript
import { AgentForceAgent, type AgentConfig } from "@agentforce/adk";

const agentConfig: AgentConfig = {
  name: "DevOpsAgent",
  tools: ["os_exec", "fs_read_file", "fs_write_file"]
};

const agent = new AgentForceAgent(agentConfig)
  .useLLM("ollama", "deepseek-r1:14b") // ✅ Excellent for system operations
  .systemPrompt("You are a DevOps automation assistant.")
  .prompt("Check git status, read the latest commit, and create a deployment summary");

const response = await agent.output("json");
```
```typescript
const researchAgent = new AgentForceAgent({
  name: "ResearchAgent",
  tools: [
    "web_fetch",            // Web scraping
    "api_fetch",            // API calls
    "fs_write_file",        // Save results
    "filter_content",       // Content processing
    "md_create_ascii_tree"  // Structure data
  ]
})
  .useLLM("ollama", "llama4:16x17b") // ✅ Latest with vision support
  .systemPrompt("You are a research assistant that can gather information from multiple sources.")
  .prompt("Research the latest AI developments and create a structured report");

const report = await researchAgent.output("md");
```
```typescript
const codeAnalyst = new AgentForceAgent({
  name: "CodeAnalyst",
  tools: [
    "fs_read_file",      // Read source files
    "fs_find_files",     // Find code files
    "fs_search_content", // Search for patterns
    "os_exec",           // Run analysis tools
    "fs_write_file"      // Save reports
  ]
})
  .useLLM("ollama", "devstral") // ✅ Best for code analysis
  .systemPrompt("You are a code analysis specialist.")
  .prompt("Analyze the TypeScript project structure and identify potential improvements");

const analysis = await codeAnalyst.output("json");
```
```typescript
const contentManager = new AgentForceAgent({
  name: "ContentManager",
  tools: [
    "fs_get_file_tree", // Project structure
    "fs_read_file",     // Read content
    "web_fetch",        // External research
    "filter_content",   // Process content
    "fs_write_file"     // Save output
  ]
})
  .useLLM("ollama", "magistral") // ✅ Efficient reasoning model
  .systemPrompt("You are a content management specialist.")
  .prompt("Audit the documentation structure and suggest improvements");

const audit = await contentManager.run();
```
Input Validation
- All tools validate parameters and data types before execution
- Sanitization of file paths and user inputs to prevent injection attacks

Resource Limits
- Timeout controls prevent long-running operations from hanging
- Size limits on file operations and network responses

Allowlists
- Command allowlists for system execution tools
- Protocol restrictions (HTTP/HTTPS only) for network tools

Sandboxing
- Working directory restrictions for file operations
- No shell execution - direct command spawning only
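As a rough illustration of what path sanitization and working-directory sandboxing involve, the sketch below rejects paths that escape an allowed base directory. This is a generic example, not the ADK's actual implementation.

```typescript
import { resolve, sep } from "node:path";

// Generic sketch: reject user-supplied paths that escape an allowed base directory.
// Not the ADK's internal code.
function assertInsideBaseDir(baseDir: string, userPath: string): string {
  const base = resolve(baseDir);
  const target = resolve(base, userPath);
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error(`Path escapes working directory: ${userPath}`);
  }
  return target;
}

// assertInsideBaseDir("./project", "src/index.ts")  -> absolute path inside ./project
// assertInsideBaseDir("./project", "../etc/passwd") -> throws
```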
Security Restrictions:
Performance Limits:
```json
// Example: Safe file operations
{
  "path": "./data/safe-file.txt", // ✅ Within project
  "max_length": 1000000           // ✅ Size limit
}
```
Security Restrictions:
Anti-Detection Features (web_fetch):
```json
// Example: Safe API call
{
  "url": "https://api.example.com/data", // ✅ HTTPS only
  "timeout_ms": 30000,                   // ✅ Timeout limit
  "max_response_bytes": 5000000          // ✅ Size limit
}
```
Command Allowlist:
Sandboxing:
```json
// Example: Safe command execution
{
  "command": "git",                  // ✅ Allowed command
  "args": ["status", "--porcelain"], // ✅ Safe arguments
  "cwd": "./project",                // ✅ Within project
  "timeout_ms": 15000                // ✅ Timeout limit
}
```
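To make the allowlist and no-shell points concrete, here is a sketch of how a command could be checked against an allowlist and spawned without a shell. It is illustrative only; the ADK's actual allowlist and implementation may differ.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Illustrative allowlist; not the ADK's actual list.
const ALLOWED_COMMANDS = new Set(["git", "ls", "cat", "node"]);

async function safeExec(command: string, args: string[], cwd: string, timeoutMs: number) {
  if (!ALLOWED_COMMANDS.has(command)) {
    throw new Error(`Command not allowed: ${command}`);
  }
  // execFile spawns the binary directly: no shell, so shell metacharacters in args are inert.
  return execFileAsync(command, args, { cwd, timeout: timeoutMs });
}
```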
```typescript
// Missing tools are logged but don't break execution
const agent = new AgentForceAgent({
  name: "SafeAgent",
  tools: ["nonexistent_tool", "fs_read_file"] // First tool missing
})
  .systemPrompt("Base prompt")
  .prompt("Execute task");

// Agent will:
// 1. Log warning for missing tool
// 2. Load available tools (fs_read_file)
// 3. Continue execution normally

const response = await agent.getResponse();
```
Debug Log Output:
{ "msg": "Loading tools for agent", "requestedTools": ["nonexistent_tool", "fs_read_file"]}{ "level": "warn", "tool": "nonexistent_tool", "msg": "Tool not found in registry"}{ "tool": "fs_read_file", "msg": "Tool loaded successfully"}{ "loadedCount": 1, "msg": "Tools loaded for agent"}
```typescript
// Tools handle execution errors gracefully
const agent = new AgentForceAgent({
  name: "ErrorHandlingAgent",
  tools: ["fs_read_file"]
})
  .debug() // Enable debug logging
  .prompt("Read a file that doesn't exist");

// Tool execution returns error information instead of throwing
const response = await agent.getResponse();
```
Tool Error Response Format:
{ "success": false, "error": "File not found: /path/to/nonexistent.txt", "path": "/path/to/nonexistent.txt"}
Enable detailed tool execution logging:
const agent = new AgentForceAgent({ name: "DebugAgent", tools: ["fs_read_file", "api_fetch"]}) .debug() // Enable debug logging .systemPrompt("Debug agent") .prompt("Test tools");
const response = await agent.getResponse();
```typescript
import { getAvailableTools, hasTool, getTool } from "@agentforce/adk";

// List all available tools
const allTools = getAvailableTools();
console.log("Available tools:", allTools);

// Check if a tool exists
const hasFileReader = hasTool("fs_read_file");
console.log("Has fs_read_file:", hasFileReader);

// Get tool definition
const tool = getTool("fs_read_file");
console.log("Tool definition:", tool?.definition);
```
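These helpers can also be used to check a tool list before constructing an agent. A sketch of that usage (illustrative, not an ADK requirement):

```typescript
import { AgentForceAgent, hasTool } from "@agentforce/adk";

// Illustrative: filter out unknown tool names before creating the agent.
const requestedTools = ["fs_read_file", "nonexistent_tool"];
const missing = requestedTools.filter((name) => !hasTool(name));
if (missing.length > 0) {
  console.warn("Unknown tools will be skipped:", missing);
}

const agent = new AgentForceAgent({
  name: "ValidatedAgent",
  tools: requestedTools.filter((name) => hasTool(name))
});
```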
| Provider | Tool Support |
|----------|--------------|
| Ollama | ✅ Full Support |
| OpenRouter | ✅ Full Support |
| Google | 🚧 Coming Soon |
All Current Tool-Capable Models:

| Model | Sizes | Capabilities |
|-------|-------|--------------|
| gpt-oss | 20b, 120b | tools, thinking |
| mistral-small3.2 | 24b | vision, tools |
| magistral | 24b | tools, thinking |
| devstral | 24b | tools |
| qwen3 | 0.6b-235b | tools, thinking |
| granite3.3 | 2b, 8b | tools |
| mistral-small3.1 | 24b | vision, tools |
| cogito | 3b-70b | tools |
| llama4 | 16x17b, 128x17b | vision, tools |
| deepseek-r1 | 1.5b-671b | tools, thinking |
| phi4-mini | 3.8b | tools |
| llama3.3 | 70b | tools |
| qwq | 32b | tools |
Models That Do NOT Support Tools:
- gemma3:4b ❌ - Does not support function calling
- gemma2:2b ❌ - Does not support function calling
- phi3:mini ❌ - Does not support function calling (use phi4-mini instead)

Recommended Models:
- anthropic/claude-3.5-sonnet - Excellent tool support
- anthropic/claude-3-opus - Advanced function calling
- openai/gpt-4 - Full tool integration
- openai/gpt-4-turbo - Enhanced tool capabilities
- moonshotai/kimi-k2 - Supports function calling (free tier available)
- qwen/qwen-2-72b-instruct - Strong tool support

```typescript
// ✅ CORRECT: Using a model that supports tools
const agent = new AgentForceAgent({
  name: "ToolAgent",
  tools: ["fs_read_file", "web_fetch"]
})
  .useLLM("ollama", "deepseek-r1:7b") // ✅ Excellent tool support
  .systemPrompt("You are a helpful assistant with tool access.")
  .prompt("Read the README.md file and summarize it");

const response = await agent.run();
```
```typescript
// ❌ WRONG: Using a model that doesn't support tools
const agent = new AgentForceAgent({
  name: "ToolAgent",
  tools: ["fs_read_file"] // Tools configured but...
})
  .useLLM("ollama", "gemma3:4b") // ❌ Does NOT support tools
  .prompt("Read a file");

// This will result in error:
// "registry.ollama.ai/library/gemma3:4b does not support tools"
```
```typescript
// ✅ CORRECT: Use any of the tool-capable models instead
.useLLM("ollama", "qwen3:8b")    // ✅ Supports tools
.useLLM("ollama", "devstral")    // ✅ Great for coding
.useLLM("ollama", "deepseek-r1") // ✅ Excellent reasoning
```
```typescript
// ✅ CORRECT: Using OpenRouter with tool-capable model
const agent = new AgentForceAgent({
  name: "OpenRouterToolAgent",
  tools: ["fs_read_file", "api_fetch", "web_fetch"]
})
  .useLLM("openrouter", "anthropic/claude-3-haiku") // ✅ Supports tools
  .systemPrompt("You are a research assistant with file and web access.")
  .prompt("Research the latest AI developments and save findings");

const response = await agent.getResponse();
```
```typescript
import { AgentForceAgent } from "@agentforce/adk";
import type { ModelConfig } from "@agentforce/adk";

const modelConfig: ModelConfig = {
  temperature: 0.7,
  maxTokens: 16384,
  maxToolRounds: 10, // Limit tool execution rounds
  timeout: 60000     // 60 second timeout
};

const agent = new AgentForceAgent({
  name: "ConfiguredAgent",
  tools: ["fs_read_file", "fs_write_file"]
})
  .useLLM("ollama", "deepseek-r1:7b", modelConfig) // ✅ Excellent tool support
  .prompt("Process files with controlled parameters");
```
```bash
# Install top recommended models for tool integration
ollama pull deepseek-r1:7b   # Excellent reasoning with tool support
ollama pull qwen3:8b         # Versatile with strong tool calling
ollama pull devstral         # Best for coding agents
ollama pull gpt-oss:20b      # Strong reasoning capabilities
ollama pull llama4:16x17b    # Latest with vision support

# Smaller models for resource-constrained environments
ollama pull phi4-mini        # 3.8B with function calling
ollama pull granite3.3:2b    # IBM's compact model
ollama pull smollm2:1.7b     # Smallest tool-capable model

# Verify model supports tools
ollama show deepseek-r1:7b

# List all available models
ollama list
```
```bash
# Set your OpenRouter API key
export OPENROUTER_API_KEY=sk-or-v1-your-api-key-here

# Or add to .env file
echo "OPENROUTER_API_KEY=sk-or-v1-your-api-key-here" >> .env
```
Error: "registry.ollama.ai/library/gemma3:4b does not support tools"
Solution:
```typescript
// ❌ Wrong - model doesn't support tools
.useLLM("ollama", "gemma3:4b")

// ✅ Correct - use tool-compatible model
.useLLM("ollama", "qwen2.5-coder:7b")
```
Error: "model not found: deepseek-r1:7b"
Solution:
```bash
# Install the model first
ollama pull deepseek-r1:7b
```

```typescript
// Then use in your agent
.useLLM("ollama", "deepseek-r1:7b")
```
Error: "Tool execution timed out"
Solution:
```typescript
const modelConfig: ModelConfig = {
  timeout: 120000,  // Increase timeout to 2 minutes
  maxToolRounds: 5  // Limit tool rounds
};

.useLLM("ollama", "deepseek-r1:7b", modelConfig) // ✅ Excellent tool support
```
```typescript
// Ollama has full tool support with compatible models
const ollamaAgent = new AgentForceAgent({
  name: "OllamaToolAgent",
  tools: ["fs_read_file", "api_fetch", "os_exec"]
})
  .useLLM("ollama", "deepseek-r1:7b") // ✅ Excellent tool calling
  .prompt("Use tools to analyze the project");

// Tools are automatically available during conversation
const response = await ollamaAgent.run();
```
Development Automation
Tools: fs_*, os_exec, gh_list_repos
Best for: Code analysis, file management, git operations, build automation
Content Research
Tools: web_fetch, api_fetch, filter_content
Best for: Web scraping, API integration, content aggregation
Data Processing
Tools: fs_*, filter_content, md_create_ascii_tree
Best for: File processing, data transformation, report generation
System Administration
Tools: os_exec, fs_*, gh_list_repos
Best for: System monitoring, deployment automation, infrastructure management
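As one hedged sketch of the System Administration combination above (the model, prompt, and output format are illustrative choices, not prescriptions):

```typescript
import { AgentForceAgent } from "@agentforce/adk";

// Illustrative sketch for the System Administration use case.
// The tool list follows the combination above; model and prompt are example choices.
const sysAdminAgent = new AgentForceAgent({
  name: "SysAdminAgent",
  tools: ["os_exec", "fs_read_file", "fs_write_file", "gh_list_repos"]
})
  .useLLM("ollama", "deepseek-r1:7b") // any tool-capable model works
  .systemPrompt("You are a system administration assistant.")
  .prompt("Check git status and disk usage, then write a status report to status.md");

const statusReport = await sysAdminAgent.output("md");
```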
```typescript
// Combine complementary tools
const agent = new AgentForceAgent({
  name: "EffectiveAgent",
  tools: [
    "fs_read_file",   // Read source data
    "filter_content", // Process data
    "fs_write_file"   // Save results
  ]
})
  .useLLM("openrouter", "moonshotai/kimi-k2");
```

```typescript
// Provide clear context
.systemPrompt(`You have access to file operations and content filtering tools.
Use them to read, process, and save data efficiently.`)
```

```typescript
// Give specific instructions
.prompt("Read data.json, extract important fields, and save a summary to summary.md");
```
```typescript
// Too many tools (performance impact)
tools: ["fs_read_file", "fs_write_file", "fs_list_dir", "web_fetch", "api_fetch", "os_exec", "gh_list_repos"] // Too many

// Wrong tool selection
tools: ["web_fetch"] // For local file operations

// Missing required tools
tools: ["fs_read_file"] // But need to write files too

// Tool overkill
tools: ["os_exec"] // Just to run "ls" command
```
`.systemPrompt()` - Define how agents should use tools