
Server Mode


Serve agents as HTTP APIs

AgentForce ADK includes powerful built-in server capabilities that let you deploy your agents as HTTP APIs with minimal configuration. For advanced server functionality, use the AgentForceServer class.

Transform any agent into a web service in just a few lines:

import { AgentForceAgent } from '@agentforce/adk';

// Create your agent
const agent = new AgentForceAgent({
  name: "WebAgent"
})
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a helpful web API assistant");

// Deploy as HTTP server
await agent.serve("localhost", 3000);
console.log("🚀 Agent server running at http://localhost:3000");

Call the agent via the endpoint:

curl -X GET "http://localhost:3000" -G --data-urlencode "prompt=Tell me a joke"

or open it in the browser:

http://localhost:3000/?prompt=What+is+TypeScript

This implements a simple GET endpoint that accepts a prompt query parameter and returns the agent's response. It is ideal for quick prototyping and testing.
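From the client side, the endpoint can be called with plain fetch. This is a sketch; the `buildPromptUrl` and `askAgent` helpers are illustrative names, not part of the ADK:

```typescript
// Build the request URL for the simple GET endpoint.
function buildPromptUrl(base: string, prompt: string): string {
  const url = new URL(base);
  url.searchParams.set("prompt", prompt);
  return url.toString();
}

// Query a running agent server (assumes the server started above).
async function askAgent(prompt: string): Promise<string> {
  const res = await fetch(buildPromptUrl("http://localhost:3000", prompt));
  if (!res.ok) throw new Error(`Agent request failed: ${res.status}`);
  return res.text();
}

// Example: console.log(await askAgent("Tell me a joke"));
```

Using URL and searchParams handles percent-encoding of spaces and punctuation for you, matching what `--data-urlencode` does in the curl example.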

For production use and more advanced server functionality, use the dedicated AgentForceServer class:

import { AgentForceServer, AgentForceAgent, type ServerConfig } from '@agentforce/adk';

// Configure server
const serverConfig: ServerConfig = {
  name: "ProductionAPI"
};

// Create multiple agents
const chatAgent = new AgentForceAgent({
  name: "ChatAgent"
})
  .useLLM("ollama", "gemma3:12b")
  .systemPrompt("You are a friendly chat assistant");

const codeAgent = new AgentForceAgent({
  name: "CodeAgent"
})
  .useLLM("ollama", "qwen3-coder")
  .systemPrompt("You are a code review and generation expert");

// Create server with multiple endpoints
const server = new AgentForceServer(serverConfig)
  .addRouteAgent("POST", "/chat", chatAgent)
  .addRouteAgent("POST", "/code", codeAgent)
  .addRoute("GET", "/health", { status: "ok" })
  .useOpenAICompatibleRouting(chatAgent);

// Start server
await server.serve("0.0.0.0", 3000);

OpenAI Compatibility

Full OpenAI chat completions API compatibility for seamless integration with existing tools.

Multiple Endpoints

Host multiple agents on different routes with custom validation schemas.

Static Routes

Add utility endpoints for health checks, documentation, and webhooks.

Structured Logging

Built-in logging with Pino for production monitoring and debugging.

Agent routes process AI requests and return intelligent responses:

// Add agent to handle GET requests with query parameters
server.addRouteAgent("GET", "/ask", questionAgent);
Usage:

curl -X GET "http://localhost:3000/ask" -G --data-urlencode "prompt=What is TypeScript"
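A POST route such as /chat can be exercised the same way from TypeScript. This is a sketch; the JSON body shape with a `prompt` field is an assumption based on the schema examples later in this page:

```typescript
// Serialize the request body for an agent route.
// (Assumption: the route accepts a JSON body with a `prompt` field.)
function buildChatBody(prompt: string): string {
  return JSON.stringify({ prompt });
}

// POST a prompt to an agent route on a running server.
async function postToAgent(route: string, prompt: string): Promise<unknown> {
  const res = await fetch(`http://localhost:3000${route}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildChatBody(prompt),
  });
  return res.json();
}

// Example: await postToAgent("/chat", "Hello!");
```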

Static routes return predefined data without AI processing:

server
  // Health check endpoint
  .addRoute("GET", "/health", { status: "ok", timestamp: new Date() })
  // API information
  .addRoute("GET", "/info", {
    name: "AgentAPI",
    version: "1.0.0",
    endpoints: ["/chat", "/code", "/health"]
  })
  // Dynamic webhook handler
  .addRoute("POST", "/webhook", (context) => ({
    received: true,
    method: context.req.method,
    timestamp: new Date().toISOString()
  }));

Enable drop-in compatibility with OpenAI tools and SDKs:

// Add OpenAI chat completions endpoint
server.useOpenAICompatibleRouting(agent);
// Creates: POST /v1/chat/completions
// Compatible with OpenAI SDK, LangChain, and other tools

OpenAI API Usage:

curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/gemma3:12b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Response Format:

{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "ollama/gemma3:12b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 2,
    "completion_tokens": 9,
    "total_tokens": 11
  }
}
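If you would rather not depend on an OpenAI SDK, the endpoint can be called with plain fetch and the response unpacked by hand. The type aliases and function names here are illustrative, and the `ChatCompletion` type covers only the fields used:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
type ChatCompletion = {
  choices: { message: { role: string; content: string }; finish_reason: string }[];
};

// Extract the assistant's reply from a chat completion response.
function firstMessage(completion: ChatCompletion): string {
  return completion.choices[0]?.message.content ?? "";
}

// Call the OpenAI-compatible endpoint on a running server.
async function chatCompletion(messages: ChatMessage[]): Promise<string> {
  const res = await fetch("http://localhost:3000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "ollama/gemma3:12b", messages }),
  });
  return firstMessage((await res.json()) as ChatCompletion);
}

// Example: await chatCompletion([{ role: "user", content: "Hello!" }]);
```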

Enforce strict input validation and structured output formatting:

import type { RouteAgentSchema } from '@agentforce/adk';

const projectSchema: RouteAgentSchema = {
  input: ["prompt", "project_name", "priority", "deadline"],
  output: ["success", "prompt", "response", "project_name", "priority", "deadline", "created_at"]
};
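A hand-rolled sketch of the input check this enables, using a local type with the same shape (this is not the ADK's internal implementation):

```typescript
// Local stand-in with the same shape as RouteAgentSchema.
type Schema = { input: string[]; output: string[] };

const projectSchema: Schema = {
  input: ["prompt", "project_name", "priority", "deadline"],
  output: ["success", "prompt", "response", "project_name", "priority", "deadline", "created_at"],
};

// Return the required input fields missing from a request body.
function missingFields(body: Record<string, unknown>, schema: Schema): string[] {
  return schema.input.filter((field) => !(field in body));
}
```

A request carrying only prompt and project_name would be rejected with priority and deadline listed as missing, matching the error format shown under error handling.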

Schema Benefits:

  • Input Validation: Reject requests missing required fields
  • Output Consistency: Standardized response format
  • API Documentation: Schema serves as endpoint specification
  • Error Prevention: Clear error messages for invalid requests
A minimal development setup:

const server = new AgentForceServer({
  name: "DevServer"
});

await server.serve("localhost", 3000);
Production deployments are typically configured through environment variables:
# .env file
NODE_ENV=production
PORT=8080
HOST=0.0.0.0
# AI Provider Configuration
OPENROUTER_API_KEY=sk-or-v1-your-key
OLLAMA_HOST=http://localhost:11434
# Logging
LOG_LEVEL=info
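Reading these variables at startup might look like the following sketch. The `serverAddress` helper is illustrative (the ADK may handle this itself), and the fallbacks are the development defaults used earlier:

```typescript
// Resolve host and port from the environment, with development fallbacks.
// Variable names match the .env example above.
function serverAddress(env: Record<string, string | undefined>): { host: string; port: number } {
  return {
    host: env.HOST ?? "localhost",
    port: Number(env.PORT ?? 3000),
  };
}

const { host, port } = serverAddress(process.env);
// await server.serve(host, port);
```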

Add comprehensive health and monitoring endpoints:

const server = new AgentForceServer(config)
  // Health check
  .addRoute("GET", "/health", () => ({
    status: "healthy",
    timestamp: new Date().toISOString(),
    uptime: process.uptime()
  }))
  // Readiness check
  .addRoute("GET", "/ready", async () => {
    try {
      // Check AI provider connectivity
      const testAgent = new AgentForceAgent({ name: "test", type: "health" })
        .useLLM("ollama", "gemma3:12b");
      return { status: "ready", providers: ["ollama"] };
    } catch (error) {
      return { status: "not ready", error: error instanceof Error ? error.message : String(error) };
    }
  })
  // Metrics endpoint
  .addRoute("GET", "/metrics", () => ({
    requests_total: requestCounter,
    response_time_ms: averageResponseTime,
    active_connections: activeConnections
  }));

The server provides comprehensive error handling:

// Automatic error responses for invalid requests
{
  "error": "Missing required fields",
  "message": "The following required fields are missing: priority",
  "required": ["prompt", "project_name", "priority"],
  "received": ["prompt", "project_name"]
}

// OpenAI-compatible error format
{
  "error": "Invalid OpenAI chat completion format",
  "message": "Missing or invalid \"model\" field",
  "example": {
    "model": "ollama/gemma3:12b",
    "messages": [{"role": "user", "content": "Hello"}]
  }
}
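On the client, these error bodies can be told apart from successful responses with a small type guard (illustrative names, not part of the ADK):

```typescript
type ErrorBody = { error: string; message: string };

// Narrow an unknown JSON payload to the server's error shape.
function isErrorBody(body: unknown): body is ErrorBody {
  return (
    typeof body === "object" &&
    body !== null &&
    typeof (body as ErrorBody).error === "string" &&
    typeof (body as ErrorBody).message === "string"
  );
}

// Example:
// const data = await res.json();
// if (isErrorBody(data)) console.error(data.message);
```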

AgentForce servers handle multiple requests concurrently:

// Server automatically handles concurrent requests
// No additional configuration needed
const server = new AgentForceServer(config)
.addRouteAgent("POST", "/chat", chatAgent);
// Each request is processed independently
await server.serve("0.0.0.0", 8080);
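On the client side, that concurrency can be exercised by firing requests in parallel. `askAll` here is a generic helper over any request function, such as a fetch wrapper around the simple GET endpoint (illustrative, not part of the ADK):

```typescript
// Run many prompts against the server concurrently.
async function askAll(
  ask: (prompt: string) => Promise<string>,
  prompts: string[]
): Promise<string[]> {
  return Promise.all(prompts.map((p) => ask(p)));
}

// Example, against a running server:
// const answers = await askAll(
//   (p) => fetch(`http://localhost:3000/?prompt=${encodeURIComponent(p)}`).then((r) => r.text()),
//   ["Tell me a joke", "What is TypeScript?"]
// );
```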

API Reference

Explore the complete server API documentation → API Docs

OpenAI Compatibility

Learn more about OpenAI API compatibility → OpenAI Guide

You now have the knowledge to deploy AgentForce ADK agents as production-ready HTTP APIs with OpenAI compatibility, schema validation, and comprehensive monitoring capabilities!