Runtime Light Mode
Advice + ingestion endpoints for light integration projects.
Light mode is designed for integrations where you call a provider yourself, but still want Kagu policy advice and telemetry ingestion.
Base path: https://api.kagu.ai

Endpoints:
POST /runtime/light/advice (200)
POST /runtime/light/requests (202)
Auth: Authorization: Bearer <Kagu project API key>
Advice (POST /runtime/light/advice)
Body
Required:
userId (string)
policyKey (string)
Optional:
taskType, requestedProvider, requestedModel, requestedCapabilityTier
region, planKey, locale, promptVersion
temperature
cacheLookup (for cache awareness / semantic hints)
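Putting the required and optional fields together, a minimal advice payload can be assembled as in the sketch below. The field names come from the lists above; the helper function itself is illustrative, not part of the Kagu API:

```python
import json

def build_advice_payload(user_id, policy_key, **optional):
    """Assemble the JSON body for POST /runtime/light/advice.

    userId and policyKey are required; anything else (requestedProvider,
    requestedModel, temperature, ...) is passed through as an optional field.
    """
    payload = {"userId": user_id, "policyKey": policy_key}
    # Drop optional fields that were not supplied rather than sending nulls.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return json.dumps(payload)

body = build_advice_payload(
    "user_123", "support_chat",
    requestedProvider="OPENAI", requestedModel="gpt-4o-mini",
)
```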
cURL
curl "https://api.kagu.ai/runtime/light/advice" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $KAGU_PROJECT_API_KEY" \
-d '{
"userId": "user_123",
"policyKey": "support_chat",
"requestedProvider": "OPENAI",
"requestedModel": "gpt-4o-mini"
}'

Ingest (POST /runtime/light/requests)
Use this endpoint to persist telemetry for analytics, spend attribution, and caching.
Body (minimal)
Required:
userId (string)
policyKey (string)
Recommended:
requestedProvider, requestedModel
promptTokens, completionTokens (or totalTokens)
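If your provider SDK reports only a combined count, you can send totalTokens instead of the per-direction fields. A sketch of both shapes (field names from the list above; the helper is illustrative):

```python
def token_fields(prompt_tokens=None, completion_tokens=None, total_tokens=None):
    """Return the token-accounting portion of an ingest payload.

    Prefer promptTokens/completionTokens; fall back to totalTokens when
    only a combined count is available.
    """
    if prompt_tokens is not None and completion_tokens is not None:
        return {"promptTokens": prompt_tokens, "completionTokens": completion_tokens}
    if total_tokens is not None:
        return {"totalTokens": total_tokens}
    return {}

per_direction = token_fields(42, 18)
combined = token_fields(total_tokens=60)
```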
cURL
curl "https://api.kagu.ai/runtime/light/requests" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $KAGU_PROJECT_API_KEY" \
-d '{
"userId": "user_123",
"policyKey": "support_chat",
"requestedProvider": "OPENAI",
"requestedModel": "gpt-4o-mini",
"cacheStatus": "NONE",
"actionTaken": "ALLOW",
"promptTokens": 42,
"completionTokens": 18
}'
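A typical light-mode loop — request advice, call the provider yourself, then ingest telemetry — might look like the sketch below. The endpoints, headers, and field names come from this page; the `call_provider` stub and the `kagu_post` helper (with its injectable `opener` for testing) are placeholders for your own HTTP client and provider SDK, not an official client:

```python
import json
import urllib.request

API = "https://api.kagu.ai"

def kagu_post(path, payload, api_key, opener=None):
    """POST a JSON payload to a Kagu light-mode endpoint, return the parsed reply."""
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    send = opener or urllib.request.urlopen  # injectable for offline testing
    with send(req) as resp:
        return json.load(resp)

def handle_turn(user_id, api_key, call_provider):
    base = {"userId": user_id, "policyKey": "support_chat",
            "requestedProvider": "OPENAI", "requestedModel": "gpt-4o-mini"}
    # 1. Ask Kagu for policy advice (200).
    advice = kagu_post("/runtime/light/advice", base, api_key)
    # 2. You call the provider yourself; call_provider is your own SDK wrapper
    #    and is expected to return (reply, prompt_tokens, completion_tokens).
    reply, prompt_toks, completion_toks = call_provider(advice)
    # 3. Persist telemetry for analytics and spend attribution (202).
    kagu_post("/runtime/light/requests",
              {**base, "promptTokens": prompt_toks,
               "completionTokens": completion_toks},
              api_key)
    return reply
```

Because `kagu_post` accepts an alternate opener, the loop can be exercised against a stubbed transport before pointing it at the live API.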