Model Context Protocol (MCP) integration enables AgentForce agents to interact with external tools, resources, and data sources through standardized server connections. This comprehensive guide covers MCP configuration, server management, and advanced usage patterns.
MCP (Model Context Protocol) is a standardized protocol that allows AI agents to connect to external servers that expose tools, resources, and data sources through a uniform interface.
| Provider | Type | Example Models |
| --- | --- | --- |
| ollama | Local | gpt-oss, mistral-small3.2, magistral, devstral, qwen3, mistral-small3.1, phi4-mini, deepseek-r1, command-r7b |
| openrouter | Cloud | google/gemini-2.5-flash-lite, z-ai/glm-4-32b, moonshotai/kimi-k2, mistralai/devstral-medium, mistralai/devstral-small-1.1, x-ai/grok-4 |
| google | Cloud | gemini-2.5-pro, gemini-1.5-flash, gemini-2.0-flash |
| openai | Cloud | gpt-5, gpt-4-turbo, gpt-3.5-turbo |
| anthropic | Cloud | claude-3-*, claude-3.5-sonnet |
| Model | Sizes | Capabilities | Pull Command |
| --- | --- | --- | --- |
| gpt-oss | 20b, 120b | tools, thinking | `ollama pull gpt-oss` |
| mistral-small3.2 | 24b | vision, tools | `ollama pull mistral-small3.2` |
| magistral | 24b | tools, thinking | `ollama pull magistral` |
| devstral | 24b | tools | `ollama pull devstral` |
| qwen3 | 0.6b-235b | tools, thinking | `ollama pull qwen3:8b` |
| granite3.3 | 2b, 8b | tools | `ollama pull granite3.3:8b` |
| mistral-small3.1 | 24b | vision, tools | `ollama pull mistral-small3.1` |
| cogito | 3b-70b | tools | `ollama pull cogito:14b` |
| llama4 | 16x17b, 128x17b | vision, tools | `ollama pull llama4:16x17b` |
| deepseek-r1 | 1.5b-671b | tools, thinking | `ollama pull deepseek-r1:7b` |
| phi4-mini | 3.8b | tools | `ollama pull phi4-mini` |
| llama3.3 | 70b | tools | `ollama pull llama3.3` |
| qwq | 32b | tools | `ollama pull qwq` |
| Model | Capabilities | Pricing (input/output per M tokens) | Best For |
| --- | --- | --- | --- |
| google/gemini-2.5-flash-lite | tools, thinking (optional), vision | $0.10 / $0.40 | Translation, Technology, Legal, Marketing |
| qwen/qwen3-235b-a22b-instruct-2507 | tools, multilingual, math reasoning | $0.12 / $0.59 | Health, Math (AIME, HMMT), Coding |
| moonshotai/kimi-k2 (free) | tools, reasoning, code synthesis | FREE | Tool use, Coding, Reasoning |
| moonshotai/kimi-k2 | tools, advanced reasoning, long context | $0.55 / $2.20 | Programming, Science, Technology |
| mistralai/devstral-medium | tools, code generation, agentic reasoning | $0.40 / $2.00 | Code agents, Software engineering |
| mistralai/devstral-small-1.1 | tools, function calling, XML output | $0.10 / $0.30 | Autonomous development, Multi-file edits |
| x-ai/grok-4 | tools, parallel calling, structured output, vision | $3.00 / $15.00 | Technology, Advanced reasoning |
```bash
# Set your OpenRouter API key
export OPENROUTER_API_KEY=sk-or-v1-your-api-key-here

# Or add to .env file
echo "OPENROUTER_API_KEY=sk-or-v1-your-api-key-here" >> .env
```
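Once the key is set, a small guard can catch a missing or empty value before any agent is constructed. `requireEnv` below is a hypothetical helper written for this sketch, not an ADK API:

```typescript
// Sketch: fail fast when a required environment variable is missing.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Checked against a stubbed environment for illustration
const key = requireEnv("OPENROUTER_API_KEY", {
  OPENROUTER_API_KEY: "sk-or-v1-test"
});
console.log(key); // "sk-or-v1-test"
```

Failing at startup with a clear message beats a cryptic 401 from the provider mid-run.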
**🆓 Free Tier**

- Best Choice: `moonshotai/kimi-k2 (free)`
- Features: 1T params (32B active), excellent tool calling, 33K context
- Perfect for: Learning, prototyping, small projects

**💰 Best Value**

- Best Choice: `mistralai/devstral-small-1.1`
- Pricing: $0.10/$0.30 per M tokens
- Perfect for: Production coding agents, automated development

**⚡ Ultra-Fast**

- Best Choice: `google/gemini-2.5-flash-lite`
- Features: 1.05M context, optional reasoning, ultra-low latency
- Perfect for: Real-time applications, high-throughput scenarios

**🧠 Advanced Reasoning**

- Best Choice: `x-ai/grok-4`
- Features: 256K context, parallel tool calling, structured outputs
- Perfect for: Complex problem-solving, research, analysis
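The four recommendations above can be captured in a plain lookup table. The tier keys and the `pickModel` helper are invented for this sketch and are not part of the ADK:

```typescript
// Map each recommendation tier to its suggested OpenRouter model.
const recommendedModel: Record<string, string> = {
  "free": "moonshotai/kimi-k2 (free)",
  "best-value": "mistralai/devstral-small-1.1",
  "ultra-fast": "google/gemini-2.5-flash-lite",
  "advanced-reasoning": "x-ai/grok-4",
};

function pickModel(tier: string): string {
  // Fall back to the free tier for unknown keys
  return recommendedModel[tier] ?? recommendedModel["free"];
}

console.log(pickModel("ultra-fast")); // "google/gemini-2.5-flash-lite"
```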
Configure MCP servers directly in the agent configuration:
```typescript
import { AgentForceAgent, type AgentConfig } from '@agentforce/adk';

const config: AgentConfig = {
  name: "MCPAgent",
  mcps: ["filesystem", "github", "database"], // Server names
  mcpConfig: "configs/agent-specific.mcp.json", // Custom config file
  tools: ["fs_read_file", "web_fetch"] // Additional built-in tools
};

// IMPORTANT: Use a model that supports tool calling!
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "llama3"); // Tool-capable model required
```
Add MCP servers dynamically using the `.addMCP()` method:
```typescript
// IMPORTANT: Must use a tool-capable model for MCP integration
const agent = new AgentForceAgent({ name: "DynamicAgent" })
  .addMCP("filesystem") // Pre-configured server
  .addMCP({ // Custom server config
    name: "custom-api",
    command: "python",
    args: ["./servers/api-server.py"],
    env: { API_KEY: process.env.API_KEY }
  })
  .useLLM("ollama", "llama3"); // Tool-capable model required!
```
Create a global `mcp.config.json` file:
{ "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"], "env": {} }, "github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" } }, "brave-search": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-brave-search"], "env": { "BRAVE_API_KEY": "${BRAVE_API_KEY}" } }, "sqlite": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-sqlite", "/path/to/database.db"], "env": {} } }}
Override global settings with agent-specific configurations:
{ "mcpServers": { "database": { "command": "python", "args": ["./custom-servers/analytics-db.py"], "env": { "DATABASE_URL": "${ANALYTICS_DB_URL}", "CACHE_TTL": "3600" }, "workingDirectory": "/opt/mcp-servers", "timeout": 15000 }, "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "./project-files"], "env": {} } }}
| Server | Category | Package |
| --- | --- | --- |
| filesystem | File Operations | @modelcontextprotocol/server-filesystem |
| git | Version Control | @modelcontextprotocol/server-git |
| sqlite | Database | @modelcontextprotocol/server-sqlite |
| brave-search | Web Search | @modelcontextprotocol/server-brave-search |
| github | Development | @modelcontextprotocol/server-github |
| postgresql | Database | @modelcontextprotocol/server-postgres |
| docker | DevOps | @modelcontextprotocol/server-docker |
| aws | Cloud | @modelcontextprotocol/server-aws |
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "FileAgent",
  mcps: ["filesystem"],
  tools: ["fs_read_file"] // Can combine with built-in tools
})
  .useLLM("ollama", "llama3") // Tool-capable model required!
  .systemPrompt("You are a file management assistant")
  .prompt("List all TypeScript files in the src directory and show their structure");

const response = await agent.output("text");
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "DevAgent",
  mcps: ["github", "filesystem", "git"],
  mcpConfig: "./configs/dev-agent.mcp.json"
})
  .useLLM("openrouter", "mistralai/devstral-medium") // ✅ Specialized for code agents
  .systemPrompt("You are a senior developer assistant")
  .prompt("Review the latest commits and suggest improvements to the codebase");

const response = await agent.output("md");
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "DataAgent",
  mcps: ["sqlite", "filesystem"],
  tools: ["web_fetch"]
})
  .useLLM("openrouter", "qwen/qwen3-235b-a22b-instruct-2507") // ✅ Excellent for math and reasoning
  .systemPrompt("You are a data analyst with database access")
  .prompt("Analyze user engagement data and create a summary report");

const response = await agent.output("json");
```
```typescript
// Development Environment
const devConfig: AgentConfig = {
  name: "DevAgent",
  mcps: ["filesystem", "git"],
  mcpConfig: "configs/development.mcp.json",
  assetPath: "./dev-assets"
};

// Production Environment
const prodConfig: AgentConfig = {
  name: "ProdAgent",
  mcps: ["postgresql", "aws", "filesystem"],
  mcpConfig: "configs/production.mcp.json",
  assetPath: "/opt/agent-assets"
};

// Create environment-specific agents
const environment = process.env.NODE_ENV || 'development';
const config = environment === 'production' ? prodConfig : devConfig;

const agent = new AgentForceAgent(config)
  .useLLM("openrouter", "moonshotai/kimi-k2")
  .systemPrompt(`You are running in ${environment} mode`);
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

class AdaptiveAgent {
  private agent: AgentForceAgent;

  constructor(name: string) {
    this.agent = new AgentForceAgent({ name });
  }

  configureForTask(taskType: string): AgentForceAgent {
    // Base configuration
    this.agent.useLLM("ollama", "gpt-oss");

    // Task-specific MCP servers
    switch (taskType) {
      case 'web-research':
        return this.agent
          .addMCP("brave-search")
          .addMCP("filesystem")
          .systemPrompt("You are a research assistant");

      case 'code-review':
        return this.agent
          .addMCP("github")
          .addMCP("git")
          .addMCP("filesystem")
          .systemPrompt("You are a code review expert");

      case 'data-analysis':
        return this.agent
          .addMCP("sqlite")
          .addMCP("postgresql")
          .addMCP("filesystem")
          .systemPrompt("You are a data analyst");

      default:
        return this.agent
          .addMCP("filesystem")
          .systemPrompt("You are a general assistant");
    }
  }
}

// Usage
const adaptiveAgent = new AdaptiveAgent("TaskSpecificAgent");
const response = await adaptiveAgent
  .configureForTask("code-review")
  .prompt("Review the recent changes in the main branch")
  .output("md");
```
```typescript
// Custom server configuration
const customServer: MCPServerConfig = {
  name: "analytics-server",
  command: "python3",
  args: ["./servers/analytics.py", "--port", "8000"],
  env: {
    PYTHONPATH: "./servers",
    DATABASE_URL: process.env.ANALYTICS_DB_URL || "",
    LOG_LEVEL: "INFO"
  },
  workingDirectory: "/opt/mcp-servers",
  timeout: 30000
};

const agent = new AgentForceAgent({ name: "AnalyticsAgent" })
  .addMCP(customServer)
  .addMCP("filesystem")
  .useLLM("google", "gemini-2.5-flash");
```
```typescript
// Node.js server configuration
const nodeServer: MCPServerConfig = {
  name: "api-integration",
  command: "node",
  args: ["./servers/api-server.js", "--config", "production"],
  env: {
    NODE_ENV: "production",
    API_BASE_URL: process.env.API_BASE_URL || "",
    JWT_SECRET: process.env.JWT_SECRET || ""
  },
  timeout: 20000
};

const agent = new AgentForceAgent({ name: "APIAgent" })
  .addMCP(nodeServer)
  .useLLM("openrouter", "z-ai/glm-4.5v");
```
**Content Management**

- MCP Servers: `filesystem`, `github`, `git`
- Pattern: File operations + version control
- Use Cases:

**Development Operations**

- MCP Servers: `github`, `docker`, `filesystem`, `git`
- Pattern: Code management + deployment
- Use Cases:

**Data Operations**

- MCP Servers: `sqlite`, `postgresql`, `filesystem`, `aws`
- Pattern: Database + storage + processing
- Use Cases:

**Research & Intelligence**

- MCP Servers: `brave-search`, `filesystem`, `github`
- Pattern: Search + storage + analysis
- Use Cases:
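For illustration, the four patterns above can be expressed as a plain lookup from pattern name to server list. The pattern keys and the `serversFor` helper are assumptions made for this sketch, not ADK APIs:

```typescript
// Each pattern maps to the pre-configured server names used in this guide.
const patterns: Record<string, string[]> = {
  "content-management": ["filesystem", "github", "git"],
  "development-operations": ["github", "docker", "filesystem", "git"],
  "data-operations": ["sqlite", "postgresql", "filesystem", "aws"],
  "research": ["brave-search", "filesystem", "github"],
};

function serversFor(pattern: string): string[] {
  // Unknown patterns fall back to plain file access
  return patterns[pattern] ?? ["filesystem"];
}

console.log(serversFor("research")); // [ "brave-search", "filesystem", "github" ]
```

The resulting list could then be passed as the `mcps` array when constructing an agent.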
```typescript
// 1. ALWAYS use tool-capable models - consider cost and performance
const webDevAgent = new AgentForceAgent({
  name: "WebDevAgent",
  mcps: ["github", "filesystem", "git"], // Related development tools
  mcpConfig: "configs/webdev.mcp.json"
})
  .useLLM("openrouter", "mistralai/devstral-small-1.1"); // ✅ Specialized for coding, great value!

// 2. Use environment variables for sensitive data
const config = {
  "mcpServers": {
    "database": {
      "env": {
        "DATABASE_URL": "${DATABASE_URL}", // From environment
        "API_KEY": "${DB_API_KEY}"
      }
    }
  }
};

// 3. Set appropriate timeouts for different server types
.addMCP({
  name: "external-api",
  command: "python",
  args: ["./api-server.py"],
  timeout: 60000 // Longer timeout for external APIs
});

// 4. Use specific working directories
.addMCP({
  name: "file-processor",
  command: "node",
  args: ["./processor.js"],
  workingDirectory: "/opt/processors" // Dedicated directory
});
```
```typescript
// Using models without tool calling support
const agent = new AgentForceAgent({ name: "BadAgent", mcps: ["filesystem"] })
  .useLLM("ollama", "basic-text-model"); // ❌ Won't work - no tool support!

// Hardcoded secrets
{
  "env": {
    "API_KEY": "hardcoded-secret" // ❌ Use environment variables
  }
}

// Missing error handling
const agent = new AgentForceAgent({ name: "UnsafeAgent" })
  .addMCP("non-existent-server"); // ❌ No error handling

// Duplicate server names
.addMCP("filesystem").addMCP("filesystem"); // ❌ Duplicate, will be skipped

// Excessive timeout values
.addMCP({
  name: "quick-server",
  timeout: 300000 // ❌ 5 minutes is too long for most servers
});
```
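The duplicate-skipping behavior called out above can be pictured as a simple membership check. `addUnique` is a hypothetical illustration of the idea, not the ADK's actual implementation:

```typescript
// Sketch: adding a server name twice keeps only the first entry.
function addUnique(servers: string[], name: string): string[] {
  return servers.includes(name) ? servers : [...servers, name];
}

let mcps: string[] = [];
mcps = addUnique(mcps, "filesystem");
mcps = addUnique(mcps, "filesystem"); // duplicate, skipped
console.log(mcps); // [ "filesystem" ]
```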
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "DebugAgent",
  mcps: ["filesystem", "github"]
})
  .debug() // Enable debug logging
  .useLLM("ollama", "gpt-oss")
  .systemPrompt("You have access to file and GitHub operations");

try {
  const response = await agent
    .prompt("List repository files and recent commits")
    .output("text");

  console.log("Success:", response);
} catch (error) {
  console.error("MCP Error:", error.message);

  // Common issues:
  // - Server not found in configuration
  // - Server process failed to start
  // - Network connectivity issues
  // - Authentication failures
}
```
{ "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "./dev-workspace"], "env": {} }, "git": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-git", "--repository", "."], "env": {} } }}
{ "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/opt/production-data"], "env": { "READ_ONLY": "true" } }, "database": { "command": "python3", "args": ["/opt/mcp-servers/production-db.py"], "env": { "DATABASE_URL": "${PROD_DATABASE_URL}", "CONNECTION_POOL_SIZE": "20", "QUERY_TIMEOUT": "30" }, "workingDirectory": "/opt/mcp-servers", "timeout": 45000 } }}
```typescript
// Configure MCP servers with performance optimizations
const performantAgent = new AgentForceAgent({
  name: "PerformantAgent",
  mcps: ["database", "filesystem"],
  mcpConfig: "configs/performance.mcp.json"
})
  .useLLM("openrouter", "google/gemini-2.5-flash-lite", {
    temperature: 0.3, // Lower temperature for consistent performance
    maxTokens: 2048, // Limit token usage
    maxToolRounds: 3 // Limit tool call iterations
  });

// Use specific prompts to minimize tool calls
const response = await performantAgent
  .systemPrompt("Be concise and efficient with tool usage")
  .prompt("Get user count from database and save summary to file")
  .output("text");
```
`.addMCP()` - Runtime MCP server addition