AgentForce ADK includes powerful built-in server capabilities, allowing you to deploy your agents as HTTP APIs with minimal configuration. Any agent can serve itself directly for quick deployments; for advanced server functionality, use the AgentForceServer class.
Transform any agent into a web service in just a few lines:
import { AgentForceAgent } from '@agentforce/adk';

// Create your agent
const agent = new AgentForceAgent({ name: "WebAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful web API assistant");

// Deploy as HTTP server
await agent.serve("localhost", 3000);
console.log("🚀 Agent server running at http://localhost:3000");
Call the agent via the endpoint:

curl -X GET "http://localhost:3000" -G --data-urlencode "prompt=Tell me a joke"

or open it in the browser:

http://localhost:3000?prompt=What is TypeScript?
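You can also call the endpoint programmatically. A minimal fetch sketch, assuming the response body is plain text (switch to res.json() if your agent returns JSON):

// Query the agent endpoint from TypeScript
// (sketch; assumes the GET ?prompt=... contract shown above)
const url = new URL("http://localhost:3000");
url.searchParams.set("prompt", "Tell me a joke");

const res = await fetch(url);
console.log(await res.text());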
Under the hood, serve() exposes a simple GET endpoint that accepts a prompt query parameter and returns the agent's response. It is ideal for quick prototyping and testing. For production use and more advanced server functionality, use the dedicated AgentForceServer class:
import { AgentForceServer, AgentForceAgent, type ServerConfig } from '@agentforce/adk';

// Configure server
const serverConfig: ServerConfig = { name: "ProductionAPI" };

// Create multiple agents
const chatAgent = new AgentForceAgent({ name: "ChatAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a friendly chat assistant");

const codeAgent = new AgentForceAgent({ name: "CodeAgent" })
  .useLLM("ollama", "qwen3-coder")
  .systemPrompt("You are a code review and generation expert");

// Create server with multiple endpoints
const server = new AgentForceServer(serverConfig)
  .addRouteAgent("POST", "/chat", chatAgent)
  .addRouteAgent("POST", "/code", codeAgent)
  .addRoute("GET", "/health", { status: "ok" })
  .useOpenAICompatibleRouting(chatAgent);

// Start server
await server.serve("0.0.0.0", 3000);
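Once the server is running, each route can be exercised independently. A minimal client sketch; the JSON request shape follows the POST examples later in this guide:

// Call the chat agent route
// (sketch; assumes a JSON body with a "prompt" field, as in the POST examples below)
const chatRes = await fetch("http://localhost:3000/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Hello there!" }),
});
console.log(await chatRes.json());

// The static route returns its predefined payload without touching a model
const health = await fetch("http://localhost:3000/health");
console.log(await health.json()); // { status: "ok" }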
OpenAI Compatibility
Full OpenAI chat completions API compatibility for seamless integration with existing tools.
Multiple Endpoints
Host multiple agents on different routes with custom validation schemas.
Static Routes
Add utility endpoints for health checks, documentation, and webhooks.
Structured Logging
Built-in logging with Pino for production monitoring and debugging.
Agent routes process AI requests and return intelligent responses:
// Add agent to handle GET requests with query parameters
server.addRouteAgent("GET", "/ask", questionAgent);

Usage:

curl -X GET "http://localhost:3000/ask" -G --data-urlencode "prompt=What is TypeScript"

// Add agent to handle POST requests
server.addRouteAgent("POST", "/story", storyAgent);

Usage:

curl -X POST http://localhost:3000/story \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a short story about AI"}'
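The questionAgent and storyAgent used above are ordinary agents. A sketch of how they might be defined; the names and system prompts are illustrative, not part of the API:

// Hypothetical agent definitions for the routes above
const questionAgent = new AgentForceAgent({ name: "QuestionAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You answer technical questions concisely.");

const storyAgent = new AgentForceAgent({ name: "StoryAgent" })
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You write short, imaginative stories.");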
import type { RouteAgentSchema } from '@agentforce/adk';
// Define validation schema
const taskSchema: RouteAgentSchema = {
  input: ["prompt", "priority", "category"],
  output: ["success", "prompt", "response", "priority", "category", "timestamp"]
};

// Add route with strict validation
server.addRouteAgent("POST", "/task", taskAgent, taskSchema);
// Only accepts requests with exactly these fields
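A request against the validated route must then supply exactly the declared input fields. A client sketch with illustrative values:

// POST to the validated /task route
// (sketch; the body must contain exactly the fields declared in taskSchema.input)
const taskRes = await fetch("http://localhost:3000/task", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Summarize today's standup",
    priority: "high",
    category: "meetings",
  }),
});
console.log(await taskRes.json()); // response carries the declared output fields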
Static routes return predefined data without AI processing:
server
  // Health check endpoint
  .addRoute("GET", "/health", { status: "ok", timestamp: new Date() })

  // API information
  .addRoute("GET", "/info", {
    name: "AgentAPI",
    version: "1.0.0",
    endpoints: ["/chat", "/code", "/health"]
  })

  // Dynamic webhook handler
  .addRoute("POST", "/webhook", (context) => ({
    received: true,
    method: context.req.method,
    timestamp: new Date().toISOString()
  }));
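Since static routes never invoke a model, they respond immediately. A quick check, as a sketch:

// Fetch a static route; the predefined payload comes back as-is
const info = await fetch("http://localhost:3000/info");
console.log(await info.json()); // { name: "AgentAPI", version: "1.0.0", ... }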
Enable drop-in compatibility with OpenAI tools and SDKs:
// Add OpenAI chat completions endpoint
server.useOpenAICompatibleRouting(agent);

// Creates: POST /v1/chat/completions
// Compatible with OpenAI SDK, LangChain, and other tools
OpenAI API Usage:
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/gemma3:12b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Response Format:
{ "id": "chatcmpl-1234567890", "object": "chat.completion", "created": 1234567890, "model": "ollama/gemma3:12b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! How can I help you today?" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 2, "completion_tokens": 9, "total_tokens": 11 }}
Enforce strict input validation and structured output formatting:
import type { RouteAgentSchema } from '@agentforce/adk';
const projectSchema: RouteAgentSchema = {
  input: ["prompt", "project_name", "priority", "deadline"],
  output: ["success", "prompt", "response", "project_name", "priority", "deadline", "created_at"]
};
server.addRouteAgent("POST", "/project", projectAgent, projectSchema);
curl -X POST http://localhost:3000/project \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Create a project plan",
    "project_name": "WebApp",
    "priority": "high",
    "deadline": "2025-08-01"
  }'
{ "success": true, "prompt": "Create a project plan", "response": "Here's a comprehensive project plan...", "project_name": "WebApp", "priority": "high", "deadline": "2025-08-01", "created_at": "2025-07-18T10:30:00.000Z"}
Schema Benefits:
Requests with missing or unexpected fields are rejected with a descriptive error (see the error handling examples below), and every response is guaranteed to contain exactly the declared output fields.
const server = new AgentForceServer({ name: "DevServer"});
await server.serve("localhost", 3000);
const server = new AgentForceServer({ name: "ProductionAPI"});
const port = process.env.PORT || 8080;const host = process.env.HOST || "0.0.0.0";
await server.serve(host, port);
For containerized deployments, a Dockerfile using the Bun runtime:

FROM oven/bun:1-alpine
WORKDIR /app
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
COPY . .
EXPOSE 8080
CMD ["bun", "run", "src/server.ts"]
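With this Dockerfile in the project root, a typical workflow is docker build -t agent-api . followed by docker run -p 8080:8080 --env-file .env agent-api (the agent-api image tag is an arbitrary choice).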
# .env file
NODE_ENV=production
PORT=8080
HOST=0.0.0.0

# AI Provider Configuration
OPENROUTER_API_KEY=sk-or-v1-your-key
OLLAMA_HOST=http://localhost:11434

# Logging
LOG_LEVEL=info
Add comprehensive health and monitoring endpoints:
const server = new AgentForceServer(config)
  // Health check
  .addRoute("GET", "/health", () => ({
    status: "healthy",
    timestamp: new Date().toISOString(),
    uptime: process.uptime()
  }))

  // Readiness check
  .addRoute("GET", "/ready", async () => {
    try {
      // Check AI provider connectivity by constructing a test agent
      const testAgent = new AgentForceAgent({ name: "test", type: "health" })
        .useLLM("ollama", "gemma3:12b");

      return { status: "ready", providers: ["ollama"] };
    } catch (error) {
      return { status: "not ready", error: (error as Error).message };
    }
  })

  // Metrics endpoint (requestCounter etc. are application-defined counters)
  .addRoute("GET", "/metrics", () => ({
    requests_total: requestCounter,
    response_time_ms: averageResponseTime,
    active_connections: activeConnections
  }));
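A quick smoke test against the monitoring endpoints, as a sketch (adjust host and port to your deployment):

// Poll the monitoring endpoints in sequence
for (const path of ["/health", "/ready", "/metrics"]) {
  const res = await fetch(`http://localhost:8080${path}`);
  console.log(path, res.status, await res.json());
}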
The server provides comprehensive error handling:
// Automatic error responses for invalid requests
{
  "error": "Missing required fields",
  "message": "The following required fields are missing: priority",
  "required": ["prompt", "project_name", "priority"],
  "received": ["prompt", "project_name"]
}

// OpenAI-compatible error format
{
  "error": "Invalid OpenAI chat completion format",
  "message": "Missing or invalid \"model\" field",
  "example": {
    "model": "ollama/gemma3:12b",
    "messages": [{"role": "user", "content": "Hello"}]
  }
}
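On the client side, check the HTTP status before consuming the body. A sketch; the error body shapes follow the examples above, while the exact status codes are an assumption:

// Handle a validation error from the server
const res = await fetch("http://localhost:3000/project", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Create a project plan" }), // required fields omitted on purpose
});

if (!res.ok) {
  const err = await res.json();
  console.error(`Request failed: ${err.error}: ${err.message}`);
} else {
  console.log(await res.json());
}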
AgentForce servers handle multiple requests concurrently:
// Server automatically handles concurrent requests
// No additional configuration needed
const server = new AgentForceServer(config)
  .addRouteAgent("POST", "/chat", chatAgent);

// Each request is processed independently
await server.serve("0.0.0.0", 8080);
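To see this in action, fire several requests at once; each one is handled independently. A sketch:

// Issue several chat requests concurrently
const prompts = ["Hi!", "Tell me a joke", "What is TypeScript?"];

const responses = await Promise.all(
  prompts.map((prompt) =>
    fetch("http://localhost:8080/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    }).then((res) => res.json())
  )
);
console.log(responses);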
API Reference
Explore the complete server API documentation → API Docs
Server Examples
See real-world server deployment examples → Server Examples
OpenAI Compatibility
Learn more about OpenAI API compatibility → OpenAI Guide
Production Deployment
Advanced deployment patterns and best practices → Advanced Examples
You now have the knowledge to deploy AgentForce ADK agents as production-ready HTTP APIs with OpenAI compatibility, schema validation, and comprehensive monitoring capabilities!