Quickstart
Send your first request through Kagu in minutes using the proxy gateway and your project API key. In proxy mode, Kagu exposes an OpenAI-compatible Chat Completions endpoint at POST /runtime/proxy/chat/completions, with policy, rollout, and analytics wired in.
Set up your account
- Create a workspace and project in the Kagu dashboard.
- Configure provider credentials for your project.
- Create a Project API key and store it as KAGU_PROJECT_API_KEY.
- Ensure the project uses PROXY integration mode for the examples below.
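For example, export the key as an environment variable before running the examples (the value shown is a placeholder, not a real key format):

```shell
# Store your project API key for the examples below.
# Replace the placeholder with the key from the Kagu dashboard.
export KAGU_PROJECT_API_KEY="your_project_api_key_here"
```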
Send your first request
Kagu validates your API key and routes the call through your active policy. Every request must include these headers:
- Authorization: Bearer <project_api_key>
- x-kagu-user-id: a stable end-user id from your product
- x-kagu-policy-key: the policy key configured on the project
Optional headers include x-kagu-request-id (idempotency / tracing), x-kagu-plan-id, x-kagu-region, and x-kagu-locale.
TypeScript
const baseUrl = process.env.KAGU_API_BASE_URL ?? "https://api.kagu.ai";
const response = await fetch(`${baseUrl}/runtime/proxy/chat/completions`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${process.env.KAGU_PROJECT_API_KEY}`,
"x-kagu-user-id": "user_123",
"x-kagu-policy-key": "support_chat",
"x-kagu-request-id": "req_abc_001",
},
body: JSON.stringify({
model: "openai/gpt-4o-mini",
messages: [{ role: "user", content: "Hello, world!" }],
}),
});
const data = await response.json();
// OpenAI-shaped payload plus top-level `kagu` metadata on success
console.log(data);
Python
import os
import requests
base_url = os.getenv("KAGU_API_BASE_URL", "https://api.kagu.ai")
resp = requests.post(
f"{base_url}/runtime/proxy/chat/completions",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {os.environ['KAGU_PROJECT_API_KEY']}",
"x-kagu-user-id": "user_123",
"x-kagu-policy-key": "support_chat",
},
json={
"model": "openai/gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello, world!"}],
},
)
resp.raise_for_status()
print(resp.json())
cURL
curl "https://api.kagu.ai/runtime/proxy/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $KAGU_PROJECT_API_KEY" \
-H "x-kagu-user-id: user_123" \
-H "x-kagu-policy-key: support_chat" \
-d '{
"model": "openai/gpt-4o-mini",
"messages": [
{"role": "user", "content": "Hello, world!"}
]
}'
Streaming
Set "stream": true for OpenAI-compatible SSE. Buffered requests must use stream: false (or omit it).
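Assuming the standard OpenAI SSE framing (each event is a `data: <json>` line and the stream ends with `data: [DONE]`), a minimal sketch of extracting streamed content deltas could look like this; the chunk shape shown is the usual OpenAI one, not a Kagu-specific schema:

```typescript
// Minimal parser for one OpenAI-style SSE line.
// Returns the content delta for a data chunk, or null for
// non-data lines and the terminal `data: [DONE]` marker.
function extractDelta(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  const parsed = JSON.parse(payload);
  return parsed.choices?.[0]?.delta?.content ?? null;
}

// Example: feed decoded response-body lines through the parser.
const sample = 'data: {"choices":[{"delta":{"content":"Hel"}}]}';
console.log(extractDelta(sample)); // "Hel"
```

Pair this with the fetch example above by setting "stream": true in the body and reading response.body line by line.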
Light mode (optional)
For projects in LIGHT integration mode, call the advice endpoint before invoking your provider, then ingest the outcome:
POST /runtime/light/advice
POST /runtime/light/requests
Both endpoints use the same project API key as a bearer token.
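The two-call flow can be sketched as follows. Only the endpoints and headers come from this guide; the request and response bodies are left as illustrative placeholders because their schemas are not documented here:

```typescript
// Sketch of the light-mode flow: ask Kagu for advice, call your own
// provider, then report the outcome. Body shapes are assumptions.
const base = process.env.KAGU_API_BASE_URL ?? "https://api.kagu.ai";

// Builds the required headers from the quickstart.
function kaguHeaders(userId: string, policyKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.KAGU_PROJECT_API_KEY}`,
    "x-kagu-user-id": userId,
    "x-kagu-policy-key": policyKey,
  };
}

async function lightFlow() {
  // 1) Ask for advice before calling your provider.
  const advice = await fetch(`${base}/runtime/light/advice`, {
    method: "POST",
    headers: kaguHeaders("user_123", "support_chat"),
    body: JSON.stringify({ /* your request context (shape assumed) */ }),
  }).then((r) => r.json());

  // 2) Call your provider directly here, guided by `advice`.

  // 3) Ingest the outcome so analytics and spend reporting stay accurate.
  await fetch(`${base}/runtime/light/requests`, {
    method: "POST",
    headers: kaguHeaders("user_123", "support_chat"),
    body: JSON.stringify({ /* outcome details (shape assumed) */ }),
  });
}
```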
Next steps
- Tune policies and rollouts in the dashboard.
- Ingest light-mode telemetry for spend and savings reporting.
- Explore workspace alerts and analytics APIs in kagu-api.
Questions?
Open an issue or contact your Kagu workspace admin for access and quotas.