
Agent MCP Integration



Model Context Protocol (MCP) integration enables AgentForce agents to interact with external tools, resources, and data sources through standardized server connections. This comprehensive guide covers MCP configuration, server management, and advanced usage patterns.

MCP (Model Context Protocol) is a standardized protocol that allows AI agents to connect to external servers that provide:

  • Tools: Functions the agent can execute
  • Resources: Data sources the agent can read from
  • Prompts: Pre-defined prompt templates
  • Integrations: External service connections
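Conceptually, each connected server hands the agent a bundle of these capabilities. A minimal TypeScript sketch of that shape (the interface names here are illustrative, not the ADK's or the MCP SDK's actual types):

```typescript
// Illustrative model of what an MCP server exposes to an agent.
// These interfaces are hypothetical, not part of @agentforce/adk.
interface MCPTool {
  name: string;                         // e.g. "read_file"
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
}

interface MCPResource {
  uri: string;                          // e.g. "file:///project/README.md"
  mimeType?: string;
}

interface MCPServerCapabilities {
  tools: MCPTool[];
  resources: MCPResource[];
  prompts: string[];                    // Names of predefined prompt templates
}

// An agent connected to a filesystem server might see something like:
const filesystemCaps: MCPServerCapabilities = {
  tools: [
    { name: "read_file", description: "Read a file's contents", inputSchema: { type: "object" } },
    { name: "write_file", description: "Write contents to a file", inputSchema: { type: "object" } }
  ],
  resources: [{ uri: "file:///workspace", mimeType: "inode/directory" }],
  prompts: []
};
```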
| Provider | Type | Recommended Models | Notes |
| --- | --- | --- | --- |
| `ollama` | Local | `gpt-oss`, `mistral-small3.2`, `magistral`, `devstral`, `qwen3`, `mistral-small3.1`, `phi4-mini`, `deepseek-r1`, `command-r7b` | Local models with tool calling support; all listed models support function calling |
| `openrouter` | Cloud | `google/gemini-2.5-flash-lite`, `z-ai/glm-4-32b`, `moonshotai/kimi-k2`, `mistralai/devstral-medium`, `mistralai/devstral-small-1.1`, `x-ai/grok-4` | Premium cloud models with excellent tool calling, including the latest reasoning models |
| `google` | Cloud | `gemini-2.5-pro`, `gemini-1.5-flash`, `gemini-2.0-flash` | Google's Gemini models have excellent tool calling support |
| `openai` | Cloud | `gpt-5`, `gpt-4-turbo`, `gpt-3.5-turbo` | OpenAI models with native function calling capabilities |
| `anthropic` | Cloud | `claude-3-*`, `claude-3.5-sonnet` | Anthropic's Claude models with tool use capabilities |
| Model | Sizes | Description | Features | Install Command |
| --- | --- | --- | --- | --- |
| `gpt-oss` | 20b, 120b | OpenAI's open-weight models for reasoning and agentic tasks | tools, thinking | `ollama pull gpt-oss` |
| `mistral-small3.2` | 24b | Improved function calling and instruction following | vision, tools | `ollama pull mistral-small3.2` |
| `magistral` | 24b | Small, efficient reasoning model | tools, thinking | `ollama pull magistral` |
| `devstral` | 24b | Best open-source model for coding agents | tools | `ollama pull devstral` |
| `qwen3` | 0.6b-235b | Latest Qwen series with comprehensive model sizes | tools, thinking | `ollama pull qwen3:8b` |
| `granite3.3` | 2b, 8b | IBM Granite with 128K context length | tools | `ollama pull granite3.3:8b` |
| `mistral-small3.1` | 24b | Vision understanding with 128k token context | vision, tools | `ollama pull mistral-small3.1` |
| `cogito` | 3b-70b | Hybrid reasoning models by Deep Cogito | tools | `ollama pull cogito:14b` |
| `llama4` | 16x17b, 128x17b | Meta's latest multimodal models | vision, tools | `ollama pull llama4:16x17b` |
| `deepseek-r1` | 1.5b-671b | Open reasoning models approaching O3 performance | tools, thinking | `ollama pull deepseek-r1:7b` |
| `phi4-mini` | 3.8b | Multilingual with function calling support | tools | `ollama pull phi4-mini` |
| `llama3.3` | 70b | Performance similar to Llama 3.1 405B | tools | `ollama pull llama3.3` |
| `qwq` | 32b | Reasoning model of the Qwen series | tools | `ollama pull qwq` |
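Since the feature tags above determine what a model can do in an MCP pipeline, it can be handy to filter the list programmatically. A small sketch using data transcribed from the table (subset shown; the variable names are ours):

```typescript
// Subset of the local model table above: model name -> feature tags.
const localModels: Record<string, string[]> = {
  "gpt-oss":          ["tools", "thinking"],
  "mistral-small3.2": ["vision", "tools"],
  "magistral":        ["tools", "thinking"],
  "devstral":         ["tools"],
  "qwen3":            ["tools", "thinking"],
  "deepseek-r1":      ["tools", "thinking"],
  "qwq":              ["tools"]
};

// Keep only the models that advertise reasoning ("thinking") support.
const thinkingModels = Object.entries(localModels)
  .filter(([, features]) => features.includes("thinking"))
  .map(([name]) => name);
// thinkingModels is ["gpt-oss", "magistral", "qwen3", "deepseek-r1"]
```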
| Model | Description | Features | Pricing | Best For |
| --- | --- | --- | --- | --- |
| `google/gemini-2.5-flash-lite` | Lightweight reasoning model optimized for ultra-low latency and cost efficiency | tools, thinking (optional), vision | $0.10/$0.40 per M tokens | Translation, technology, legal, marketing |
| `qwen/qwen3-235b-a22b-instruct-2507` | Multilingual MoE model with 22B active parameters, excellent for reasoning and math | tools, multilingual, math reasoning | $0.12/$0.59 per M tokens | Health, math (AIME, HMMT), coding |
| `moonshotai/kimi-k2` (free) | Large-scale MoE model (1T params, 32B active) optimized for agentic capabilities | tools, reasoning, code synthesis | Free | Tool use, coding, reasoning |
| `moonshotai/kimi-k2` | Premium version with extended context for complex agentic workflows | tools, advanced reasoning, long context | $0.55/$2.20 per M tokens | Programming, science, technology |
| `mistralai/devstral-medium` | High-performance code generation model; 61.6% on SWE-Bench Verified | tools, code generation, agentic reasoning | $0.40/$2.00 per M tokens | Code agents, software engineering |
| `mistralai/devstral-small-1.1` | 24B-parameter model for software engineering agents; 53.6% on SWE-Bench | tools, function calling, XML output | $0.10/$0.30 per M tokens | Autonomous development, multi-file edits |
| `x-ai/grok-4` | Latest xAI reasoning model with parallel tool calling and structured outputs | tools, parallel calling, structured output, vision | $3.00/$15.00 per M tokens | Technology, advanced reasoning |
```shell
# Set your OpenRouter API key
export OPENROUTER_API_KEY=sk-or-v1-your-api-key-here

# Or add it to a .env file
echo "OPENROUTER_API_KEY=sk-or-v1-your-api-key-here" >> .env
```
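Before wiring the key into an agent, it is worth failing fast when the variable is missing or malformed. A sketch that checks the `sk-or-v1-` prefix shown above (the helper name is ours, not an ADK API):

```typescript
// Hypothetical startup check for the OpenRouter key format shown above.
function isOpenRouterKey(key: string | undefined): key is string {
  return typeof key === "string"
    && key.startsWith("sk-or-v1-")
    && key.length > "sk-or-v1-".length;
}

const key = process.env.OPENROUTER_API_KEY;
if (!isOpenRouterKey(key)) {
  // Failing at startup beats a confusing 401 mid-conversation.
  console.warn("OPENROUTER_API_KEY is missing or malformed");
}
```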

🆓 Free Tier

Best Choice: moonshotai/kimi-k2 (free)

Features: 1T params (32B active), excellent tool calling, 33K context

Perfect for: Learning, prototyping, small projects

💰 Best Value

Best Choice: mistralai/devstral-small-1.1

Pricing: $0.10/$0.30 per M tokens

Perfect for: Production coding agents, automated development

⚡ Ultra-Fast

Best Choice: google/gemini-2.5-flash-lite

Features: 1.05M context, optional reasoning, ultra-low latency

Perfect for: Real-time applications, high-throughput scenarios

🧠 Advanced Reasoning

Best Choice: x-ai/grok-4

Features: 256K context, parallel tool calling, structured outputs

Perfect for: Complex problem-solving, research, analysis
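The four recommendations above collapse naturally into a lookup table, which is useful when the model is chosen at runtime. The requirement labels below are our own; the model IDs come from this guide:

```typescript
// Map a deployment requirement to the recommended OpenRouter model above.
// The requirement labels are illustrative; the model IDs are from this guide.
const recommendedModel = {
  free:      "moonshotai/kimi-k2",            // free tier noted above
  value:     "mistralai/devstral-small-1.1",
  fast:      "google/gemini-2.5-flash-lite",
  reasoning: "x-ai/grok-4"
} as const;

function pickModel(requirement: keyof typeof recommendedModel): string {
  return recommendedModel[requirement];
}
```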

Configure MCP servers directly in the agent configuration:

```typescript
import { AgentForceAgent, type AgentConfig } from '@agentforce/adk';

const config: AgentConfig = {
  name: "MCPAgent",
  mcps: ["filesystem", "github", "database"],   // Server names
  mcpConfig: "configs/agent-specific.mcp.json", // Custom config file
  tools: ["fs_read_file", "web_fetch"]          // Additional built-in tools
};

// IMPORTANT: Use a model that supports tool calling!
const agent = new AgentForceAgent(config)
  .useLLM("ollama", "llama3.3"); // Tool-capable model required
```

Add MCP servers dynamically using the `.addMCP()` method:

```typescript
// IMPORTANT: Must use a tool-capable model for MCP integration
const agent = new AgentForceAgent({ name: "DynamicAgent" })
  .addMCP("filesystem") // Pre-configured server
  .addMCP({             // Custom server config
    name: "custom-api",
    command: "python",
    args: ["./servers/api-server.py"],
    env: { API_KEY: process.env.API_KEY }
  })
  .useLLM("ollama", "llama3.3"); // Tool-capable model required!
```

Create a global `mcp.config.json` file:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    },
    "sqlite": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "/path/to/database.db"],
      "env": {}
    }
  }
}
```
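Before handing such a file to an agent, it can be sanity-checked by parsing it and listing the declared servers. This assumes only the `mcpServers` shape above; the helper is illustrative, not an ADK API:

```typescript
// Parse an mcp.config.json string and return the declared server names.
// Assumes the { "mcpServers": { ... } } shape shown above.
function listServerNames(configJson: string): string[] {
  const parsed = JSON.parse(configJson) as { mcpServers?: Record<string, unknown> };
  if (!parsed.mcpServers) {
    throw new Error("mcp.config.json must contain an 'mcpServers' object");
  }
  return Object.keys(parsed.mcpServers);
}

// Example: validate a minimal config before constructing the agent.
const names = listServerNames('{"mcpServers":{"filesystem":{},"github":{}}}');
// names is ["filesystem", "github"]
```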

Override global settings with agent-specific configurations:

```json
{
  "mcpServers": {
    "database": {
      "command": "python",
      "args": ["./custom-servers/analytics-db.py"],
      "env": {
        "DATABASE_URL": "${ANALYTICS_DB_URL}",
        "CACHE_TTL": "3600"
      },
      "workingDirectory": "/opt/mcp-servers",
      "timeout": 15000
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./project-files"],
      "env": {}
    }
  }
}
```
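Placeholders like `${ANALYTICS_DB_URL}` are resolved from the environment when the server starts. The substitution can be sketched as follows (illustrative only; the ADK applies its own resolution rules):

```typescript
// Replace ${VAR} placeholders in a config value with entries from a lookup table.
// Illustrative only; the ADK performs its own substitution when loading configs.
function expandEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{([A-Z0-9_]+)\}/g, (_match, name: string) => env[name] ?? "");
}

const resolved = expandEnv("${ANALYTICS_DB_URL}", {
  ANALYTICS_DB_URL: "postgres://analytics:5432/metrics"
});
// resolved is "postgres://analytics:5432/metrics"
```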
| Server Name | Category | Package | Description |
| --- | --- | --- | --- |
| `filesystem` | File Operations | `@modelcontextprotocol/server-filesystem` | Read, write, search, and manage files and directories |
| `git` | Version Control | `@modelcontextprotocol/server-git` | Git operations, commit history, branch management |
| `sqlite` | Database | `@modelcontextprotocol/server-sqlite` | SQLite database queries and operations |
| `brave-search` | Web Search | `@modelcontextprotocol/server-brave-search` | Web search capabilities via the Brave Search API |
| `github` | Development | `@modelcontextprotocol/server-github` | GitHub repository operations, issues, PRs |
| `postgresql` | Database | `@modelcontextprotocol/server-postgres` | PostgreSQL database operations |
| `docker` | DevOps | `@modelcontextprotocol/server-docker` | Docker container management |
| `aws` | Cloud | `@modelcontextprotocol/server-aws` | AWS service integrations |
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "FileAgent",
  mcps: ["filesystem"],
  tools: ["fs_read_file"] // Can combine with built-in tools
})
  .useLLM("ollama", "llama3.3") // Tool-capable model required!
  .systemPrompt("You are a file management assistant")
  .prompt("List all TypeScript files in the src directory and show their structure");

const response = await agent.output("text");
```
```typescript
// Development Environment
const devConfig: AgentConfig = {
  name: "DevAgent",
  mcps: ["filesystem", "git"],
  mcpConfig: "configs/development.mcp.json",
  assetPath: "./dev-assets"
};

// Production Environment
const prodConfig: AgentConfig = {
  name: "ProdAgent",
  mcps: ["postgresql", "aws", "filesystem"],
  mcpConfig: "configs/production.mcp.json",
  assetPath: "/opt/agent-assets"
};

// Create environment-specific agents
const environment = process.env.NODE_ENV || 'development';
const config = environment === 'production' ? prodConfig : devConfig;

const agent = new AgentForceAgent(config)
  .useLLM("openrouter", "moonshotai/kimi-k2")
  .systemPrompt(`You are running in ${environment} mode`);
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

class AdaptiveAgent {
  private agent: AgentForceAgent;

  constructor(name: string) {
    this.agent = new AgentForceAgent({ name });
  }

  configureForTask(taskType: string): AgentForceAgent {
    // Base configuration
    this.agent.useLLM("ollama", "gpt-oss");

    // Task-specific MCP servers
    switch (taskType) {
      case 'web-research':
        return this.agent
          .addMCP("brave-search")
          .addMCP("filesystem")
          .systemPrompt("You are a research assistant");
      case 'code-review':
        return this.agent
          .addMCP("github")
          .addMCP("git")
          .addMCP("filesystem")
          .systemPrompt("You are a code review expert");
      case 'data-analysis':
        return this.agent
          .addMCP("sqlite")
          .addMCP("postgresql")
          .addMCP("filesystem")
          .systemPrompt("You are a data analyst");
      default:
        return this.agent
          .addMCP("filesystem")
          .systemPrompt("You are a general assistant");
    }
  }
}

// Usage
const adaptiveAgent = new AdaptiveAgent("TaskSpecificAgent");
const response = await adaptiveAgent
  .configureForTask("code-review")
  .prompt("Review the recent changes in the main branch")
  .output("md");
```
```typescript
import { AgentForceAgent, type MCPServerConfig } from '@agentforce/adk';

// Custom server configuration
const customServer: MCPServerConfig = {
  name: "analytics-server",
  command: "python3",
  args: ["./servers/analytics.py", "--port", "8000"],
  env: {
    PYTHONPATH: "./servers",
    DATABASE_URL: process.env.ANALYTICS_DB_URL || "",
    LOG_LEVEL: "INFO"
  },
  workingDirectory: "/opt/mcp-servers",
  timeout: 30000
};

const agent = new AgentForceAgent({ name: "AnalyticsAgent" })
  .addMCP(customServer)
  .addMCP("filesystem")
  .useLLM("google", "gemini-2.5-flash");
```
```typescript
// Node.js server configuration
const nodeServer: MCPServerConfig = {
  name: "api-integration",
  command: "node",
  args: ["./servers/api-server.js", "--config", "production"],
  env: {
    NODE_ENV: "production",
    API_BASE_URL: process.env.API_BASE_URL || "",
    JWT_SECRET: process.env.JWT_SECRET || ""
  },
  timeout: 20000
};

const agent = new AgentForceAgent({ name: "APIAgent" })
  .addMCP(nodeServer)
  .useLLM("openrouter", "z-ai/glm-4.5v");
```

Content Management

MCP Servers: filesystem, github, git

Pattern: File operations + version control

Use Cases:

  • Documentation generation
  • Content organization
  • Blog post management
  • Wiki maintenance

Development Operations

MCP Servers: github, docker, filesystem, git

Pattern: Code management + deployment

Use Cases:

  • CI/CD pipeline management
  • Code review automation
  • Deployment monitoring
  • Infrastructure as code

Data Operations

MCP Servers: sqlite, postgresql, filesystem, aws

Pattern: Database + storage + processing

Use Cases:

  • ETL pipeline management
  • Report generation
  • Data quality monitoring
  • Analytics dashboards

Research & Intelligence

MCP Servers: brave-search, filesystem, github

Pattern: Search + storage + analysis

Use Cases:

  • Market research
  • Competitive analysis
  • Knowledge base building
  • Trend monitoring
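The four patterns above can be captured as plain data, which makes stamping out per-use-case agents straightforward. The pattern keys below are our own labels; the server names come from the tables in this guide:

```typescript
// Server bundles for the deployment patterns described above.
const mcpPatterns: Record<string, string[]> = {
  "content-management": ["filesystem", "github", "git"],
  "devops":             ["github", "docker", "filesystem", "git"],
  "data-operations":    ["sqlite", "postgresql", "filesystem", "aws"],
  "research":           ["brave-search", "filesystem", "github"]
};

// Hypothetical factory: build an AgentConfig-shaped object for a pattern.
function configForPattern(name: string, pattern: keyof typeof mcpPatterns) {
  return { name, mcps: mcpPatterns[pattern] };
}
```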
```typescript
// 1. ALWAYS use tool-capable models - consider cost and performance
const webDevAgent = new AgentForceAgent({
  name: "WebDevAgent",
  mcps: ["github", "filesystem", "git"], // Related development tools
  mcpConfig: "configs/webdev.mcp.json"
})
  .useLLM("openrouter", "mistralai/devstral-small-1.1"); // ✅ Specialized for coding, great value!

// 2. Use environment variables for sensitive data
const config = {
  "mcpServers": {
    "database": {
      "env": {
        "DATABASE_URL": "${DATABASE_URL}", // From environment
        "API_KEY": "${DB_API_KEY}"
      }
    }
  }
};

// 3. Set appropriate timeouts for different server types
.addMCP({
  name: "external-api",
  command: "python",
  args: ["./api-server.py"],
  timeout: 60000 // Longer timeout for external APIs
});

// 4. Use specific working directories
.addMCP({
  name: "file-processor",
  command: "node",
  args: ["./processor.js"],
  workingDirectory: "/opt/processors" // Dedicated directory
});
```
```typescript
// Using models without tool calling support
const agent = new AgentForceAgent({ name: "BadAgent", mcps: ["filesystem"] })
  .useLLM("ollama", "basic-text-model"); // ❌ Won't work - no tool support!

// Hardcoded secrets
{
  "env": {
    "API_KEY": "hardcoded-secret" // ❌ Use environment variables
  }
}

// Missing error handling
const agent = new AgentForceAgent({ name: "UnsafeAgent" })
  .addMCP("non-existent-server"); // ❌ No error handling

// Duplicate server names
.addMCP("filesystem")
.addMCP("filesystem"); // ❌ Duplicate, will be skipped

// Excessive timeout values
.addMCP({
  name: "quick-server",
  timeout: 300000 // ❌ 5 minutes is too long for most servers
});
```
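The duplicate-server anti-pattern above is skipped rather than fatal: the first registration of a name wins. That behavior can be sketched as a simple de-duplication pass (illustrative, not the ADK's internal code):

```typescript
// Sketch of the "duplicate is skipped" behavior noted above:
// keep only the first registration of each server name.
function dedupeServers(names: string[]): string[] {
  const seen = new Set<string>();
  const result: string[] = [];
  for (const name of names) {
    if (seen.has(name)) continue; // Duplicate registration: skipped
    seen.add(name);
    result.push(name);
  }
  return result;
}
```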
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({
  name: "DebugAgent",
  mcps: ["filesystem", "github"]
})
  .debug() // Enable debug logging
  .useLLM("ollama", "gpt-oss")
  .systemPrompt("You have access to file and GitHub operations");

try {
  const response = await agent
    .prompt("List repository files and recent commits")
    .output("text");
  console.log("Success:", response);
} catch (error) {
  console.error("MCP Error:", error.message);
  // Common issues:
  // - Server not found in configuration
  // - Server process failed to start
  // - Network connectivity issues
  // - Authentication failures
}
```
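The common causes listed in that catch block can be surfaced more precisely with a small classifier. Note that the matched substrings below are assumptions about typical error text, not documented ADK messages:

```typescript
// Map an MCP error message to one of the common causes listed above.
// The matched substrings are illustrative assumptions, not documented ADK output.
type MCPFailure = "server-not-found" | "startup-failed" | "network" | "auth" | "unknown";

function classifyMCPError(message: string): MCPFailure {
  const m = message.toLowerCase();
  if (m.includes("not found"))                             return "server-not-found";
  if (m.includes("failed to start"))                       return "startup-failed";
  if (m.includes("econnrefused") || m.includes("timeout")) return "network";
  if (m.includes("unauthorized") || m.includes("401"))     return "auth";
  return "unknown";
}
```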
A development configuration, for example, points at the local workspace and repository:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./dev-workspace"],
      "env": {}
    },
    "git": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-git", "--repository", "."],
      "env": {}
    }
  }
}
```
A production configuration typically locks down paths and tunes connection settings:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/opt/production-data"],
      "env": {
        "READ_ONLY": "true"
      }
    },
    "database": {
      "command": "python3",
      "args": ["/opt/mcp-servers/production-db.py"],
      "env": {
        "DATABASE_URL": "${PROD_DATABASE_URL}",
        "CONNECTION_POOL_SIZE": "20",
        "QUERY_TIMEOUT": "30"
      },
      "workingDirectory": "/opt/mcp-servers",
      "timeout": 45000
    }
  }
}
```
```typescript
// Configure MCP servers with performance optimizations
const performantAgent = new AgentForceAgent({
  name: "PerformantAgent",
  mcps: ["database", "filesystem"],
  mcpConfig: "configs/performance.mcp.json"
})
  .useLLM("openrouter", "google/gemini-2.5-flash-lite", {
    temperature: 0.3, // Lower temperature for consistent performance
    maxTokens: 2048,  // Limit token usage
    maxToolRounds: 3  // Limit tool call iterations
  });

// Use specific prompts to minimize tool calls
const response = await performantAgent
  .systemPrompt("Be concise and efficient with tool usage")
  .prompt("Get user count from database and save summary to file")
  .output("text");
```