AgentForce ADK supports multiple AI providers, giving you flexibility to choose between local and cloud-based models based on your needs.

Ollama (Local, Free, Privacy)

Best for:
- Development and testing
- Privacy-sensitive applications
- Offline environments
- Cost-effective solutions

OpenRouter (Cloud, Multiple Providers, API Key Required)

Best for:
- Access to 100+ models from multiple providers
- The latest cloud models without local hardware
- Workloads that need to scale beyond local hardware
Ollama allows you to run large language models locally on your machine.
macOS:

```bash
# Install Ollama using Homebrew
brew install ollama

# Start Ollama service
brew services start ollama

# Verify installation
ollama --version
```
Linux:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama service
ollama serve

# Verify installation
ollama --version
```
Windows:

```bash
# Download and install from https://ollama.ai/download
# Or use Windows Subsystem for Linux (WSL)
wsl curl -fsSL https://ollama.ai/install.sh | sh
```
Docker:

```bash
# Run Ollama in Docker
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model
docker exec -it ollama ollama pull gemma3:12b
```
Pull models based on your hardware and use case:
```bash
# Fast and efficient models
ollama run phi4-mini-reasoning   # 3.8B params, great small thinking model
ollama run gemma3:4b             # 4B params, balanced performance
ollama run llama3.2:3b           # 3B params, Meta's model

# Good performance and capability
ollama run gemma3:12b            # 12B params, recommended
ollama run gpt-oss:20b           # 20B params, 14GB, versatile
ollama run phi4:latest           # Microsoft's latest model

# Maximum capability
ollama run gpt-oss:120b          # 65GB open-source model from OpenAI
ollama run qwen3:32b             # Mixture of experts model
ollama run qwen3-coder:30b       # Specialized for code generation
```
Test your Ollama installation:
```bash
# Check if Ollama is running
ollama list

# Test a model
ollama run gemma3:12b "Hello, how are you?"

# Check Ollama API
curl http://localhost:11434/api/tags
```
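The same `/api/tags` check can be done from TypeScript before wiring Ollama into an agent. The sketch below is not part of the ADK; it only assumes Ollama's public HTTP API on the default port 11434, and the default base URL is an assumption you may need to change.

```typescript
// Shape of the relevant fields in Ollama's /api/tags response.
interface OllamaTag { name: string; }
interface TagsResponse { models: OllamaTag[]; }

// Pure helper: extract installed model names from a /api/tags response body.
function modelNames(body: TagsResponse): string[] {
  return body.models.map((m) => m.name);
}

// Query the local Ollama daemon; throws if it is not reachable.
async function listLocalModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  return modelNames((await res.json()) as TagsResponse);
}
```

You could call `listLocalModels()` at startup and fail fast with a clear message if the model you plan to pass to `.useLLM("ollama", …)` is not in the returned list.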
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({ name: "LocalAgent" })
  .useLLM("ollama", "gemma3:12b") // provider: "ollama", model: "gemma3:12b"
  .systemPrompt("You are a helpful assistant running locally")
  .prompt("What are the benefits of local AI models?");

const response = await agent.output("text");
console.log(response);
```
OpenRouter provides access to multiple AI providers through a single API.
```bash
# Add to your shell profile (.bashrc, .zshrc, etc.)
export OPENROUTER_API_KEY="sk-or-v1-your-api-key-here"

# Or add to .env file
echo "OPENROUTER_API_KEY=sk-or-v1-your-api-key-here" >> .env
```

```typescript
// Set at runtime (not recommended for production)
process.env.OPENROUTER_API_KEY = "sk-or-v1-your-api-key-here";

import { AgentForceAgent } from '@agentforce/adk';
// ... rest of your code
```
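Since a missing or malformed key only surfaces as an API error later, it can help to validate it up front. This is a small sketch (the helper name and error messages are ours, not part of the ADK); it only relies on the `sk-or-v1-` prefix format shown above.

```typescript
// Fail fast if the OpenRouter key is missing or malformed.
function validateOpenRouterKey(key: string | undefined): string {
  if (!key) {
    throw new Error("OPENROUTER_API_KEY is not set");
  }
  if (!key.startsWith("sk-or-v1-")) {
    throw new Error("OPENROUTER_API_KEY should start with 'sk-or-v1-'");
  }
  return key;
}

// Usage (at startup, before constructing any agents):
// const apiKey = validateOpenRouterKey(process.env.OPENROUTER_API_KEY);
```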
OpenRouter provides access to models from multiple providers:
🔥 Free Models

```typescript
// Great for development and testing
.useLLM("openrouter", "moonshotai/kimi-k2:free")
.useLLM("openrouter", "openai/gpt-oss-20b:free")
.useLLM("openrouter", "z-ai/glm-4.5-air:free")
```

🚀 OpenAI Models

```typescript
// OpenAI's latest models
.useLLM("openrouter", "openai/gpt-5-chat")
.useLLM("openrouter", "openai/gpt-5-mini")
.useLLM("openrouter", "openai/o3")
```

🧠 Anthropic Models

```typescript
// Claude models
.useLLM("openrouter", "anthropic/claude-opus-4.1")
.useLLM("openrouter", "anthropic/claude-opus-4")
.useLLM("openrouter", "anthropic/claude-sonnet-4")
```

🦙 Meta Models

```typescript
// Llama models
.useLLM("openrouter", "meta-llama/llama-4-maverick")
.useLLM("openrouter", "meta-llama/llama-4-scout")
.useLLM("openrouter", "meta-llama/llama-3.3-70b-instruct")
```
```typescript
import { AgentForceAgent } from '@agentforce/adk';

const agent = new AgentForceAgent({ name: "CloudAgent" })
  .useLLM("openrouter", "z-ai/glm-4.5v") // provider: "openrouter", model: "z-ai/glm-4.5v"
  .systemPrompt("You are a cloud-powered AI assistant")
  .prompt("What are the advantages of cloud-based AI models?");

const response = await agent.output("text");
console.log(response);
```
Monitor your OpenRouter usage and remaining credits from your OpenRouter account dashboard.
| Feature | Ollama (Local) | OpenRouter (Cloud) |
|---|---|---|
| Cost | Free | Pay-per-use |
| Privacy | Complete | Provider-dependent |
| Latency | Low (local) | Network-dependent |
| Models | Limited selection | 100+ models |
| Setup | Installation required | API key only |
| Offline | ✅ Yes | ❌ No |
| Scalability | Hardware-limited | Unlimited |
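The trade-offs above can be encoded as a small provider-selection helper. This is a sketch, not an ADK feature: the function name is ours, and the default models (`gemma3:12b`, `z-ai/glm-4.5v`) are simply the ones used in the examples on this page. The returned pair is what you would pass to `.useLLM(provider, model)`.

```typescript
type Provider = "ollama" | "openrouter";
interface LLMChoice { provider: Provider; model: string; }

// Pick a provider based on available credentials and offline requirements:
// Ollama when no OpenRouter key is set or offline operation is required,
// OpenRouter otherwise (broader model selection, no local hardware limits).
function chooseLLM(
  env: { OPENROUTER_API_KEY?: string },
  requireOffline = false,
): LLMChoice {
  if (requireOffline || !env.OPENROUTER_API_KEY) {
    return { provider: "ollama", model: "gemma3:12b" };      // free, private, offline
  }
  return { provider: "openrouter", model: "z-ai/glm-4.5v" }; // cloud, pay-per-use
}
```

For example, `chooseLLM(process.env)` would fall back to the local model automatically on machines where no OpenRouter key is configured.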
Ollama not starting:

```bash
# Check if Ollama is running
ps aux | grep ollama

# Start Ollama manually
ollama serve

# Check logs
cat ~/.ollama/logs/server.log   # macOS
journalctl -u ollama            # Linux (systemd)
```
Model not found:

```bash
# List available models
ollama list

# Pull missing model
ollama pull gemma3:12b

# Check model info
ollama show gemma3:12b
```
Connection refused:

```bash
# Check if Ollama API is accessible
curl http://localhost:11434/api/tags

# Restart Ollama service
brew services restart ollama    # macOS
sudo systemctl restart ollama   # Linux
```
API key not working:

```bash
# Verify the key is set
echo $OPENROUTER_API_KEY
```

Make sure the key starts with the `sk-or-v1-` prefix.
Rate limiting: free models are rate-limited; wait and retry, or switch to a paid model.

Model not available: verify the model ID against OpenRouter's current model list.
Great! You now have AI providers configured and ready to power your AgentForce ADK agents. Choose Ollama for local development and privacy, or OpenRouter for access to the latest cloud models.