Runtime Proxy
OpenAI-compatible Chat Completions gateway with Kagu policy, rollout, and metadata.
Kagu Proxy mode exposes an OpenAI-compatible endpoint for chat completions:
POST /runtime/proxy/chat/completions

Auth: Authorization: Bearer <Kagu project API key>
Required headers
In addition to the bearer token, Proxy mode requires:
- x-kagu-user-id: Stable end-user identifier from your product.
- x-kagu-policy-key: Policy key configured in the project (lowercased on lookup).
Optional headers
These headers enrich attribution and observability:
- x-kagu-request-id: External request id for idempotency / dedupe (conflicts if reused).
- x-kagu-plan-id: Plan key or plan id.
- x-kagu-feature-key, x-kagu-cohort-key: Feature/cohort context.
- x-kagu-region, x-kagu-locale, x-kagu-country, x-kagu-platform: Runtime context.
- x-kagu-trial-status: NONE | TRIALING | EXPIRED | CONVERTED.
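As a sketch, the required and optional headers above can be assembled with a small helper. The header names come from this page; the `buildKaguHeaders` function, its `KaguContext` type, and the rule of omitting unset optional headers are illustrative assumptions, not part of the Kagu API:

```typescript
// Hypothetical helper: assembles Kagu Proxy headers from app context.
// Only the header names are from the docs; everything else is a sketch.
interface KaguContext {
  apiKey: string;      // Kagu project API key
  userId: string;      // stable end-user identifier (required)
  policyKey: string;   // policy key configured in the project (required)
  requestId?: string;  // external id for idempotency / dedupe (optional)
  planId?: string;     // plan key or plan id (optional)
  trialStatus?: 'NONE' | 'TRIALING' | 'EXPIRED' | 'CONVERTED';
}

function buildKaguHeaders(ctx: KaguContext): Record<string, string> {
  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${ctx.apiKey}`,
    'x-kagu-user-id': ctx.userId,
    'x-kagu-policy-key': ctx.policyKey,
  };
  // Optional headers are only sent when a value is present.
  if (ctx.requestId) headers['x-kagu-request-id'] = ctx.requestId;
  if (ctx.planId) headers['x-kagu-plan-id'] = ctx.planId;
  if (ctx.trialStatus) headers['x-kagu-trial-status'] = ctx.trialStatus;
  return headers;
}
```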
Request body
The body is OpenAI Chat Completions shaped. Kagu reads:
- model (required): can include a provider prefix like openai/gpt-4o-mini.
- messages (required array)
- stream (optional boolean): must be true for streaming and false/omitted for buffered requests.
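The fields Kagu reads can be captured in a minimal TypeScript shape. The field names and the provider-prefix convention are from this page; the type names and the `splitModel` helper are illustrative assumptions:

```typescript
// Minimal shape of the body fields Kagu reads (type names are illustrative;
// the provider may accept additional OpenAI Chat Completions fields).
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface KaguChatRequest {
  model: string;          // may carry a provider prefix, e.g. 'openai/gpt-4o-mini'
  messages: ChatMessage[];
  stream?: boolean;       // true => SSE; false/omitted => buffered JSON
}

// Hypothetical helper: separates an optional provider prefix from the model id.
function splitModel(model: string): { provider?: string; model: string } {
  const i = model.indexOf('/');
  return i === -1
    ? { model }
    : { provider: model.slice(0, i), model: model.slice(i + 1) };
}
```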
TypeScript example
```ts
const baseUrl = process.env.KAGU_API_BASE_URL ?? 'https://api.kagu.ai';

const res = await fetch(`${baseUrl}/runtime/proxy/chat/completions`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.KAGU_PROJECT_API_KEY}`,
    'x-kagu-user-id': 'user_123',
    'x-kagu-policy-key': 'support_chat',
    'x-kagu-request-id': 'req_abc_001',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello, world!' }],
  }),
});

const data = await res.json();
console.log(data.kagu); // { requestId, action, rolloutState, ... }
```

Streaming
Set stream: true to receive Server-Sent Events (SSE). Kagu returns OpenAI-compatible frames and also emits metadata headers, including:
- x-kagu-request-id
- x-kagu-action
- x-kagu-rollout-state
- x-kagu-used-fallback
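OpenAI-compatible SSE streams deliver each event as a `data: <json>` line and end with `data: [DONE]`. As a sketch of the client side, a minimal frame parser might look like this (`parseSseFrames` is an illustrative name, not an SDK API):

```typescript
// Extracts JSON payloads from a buffered SSE body. Each event is a
// `data: <json>` line; the stream terminates with `data: [DONE]`.
// Illustrative sketch of the OpenAI SSE convention, not a Kagu SDK call.
function parseSseFrames(chunk: string): unknown[] {
  const frames: unknown[] = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blanks and comments
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break;            // end-of-stream sentinel
    frames.push(JSON.parse(payload));
  }
  return frames;
}
```

In a real client you would decode `res.body` incrementally (e.g. with a TextDecoder over the ReadableStream) rather than buffering, and read the metadata headers up front with `res.headers.get('x-kagu-action')` and friends, since headers arrive before the first frame.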