Documentation
Everything you need to integrate Spendpol into your AI stack.
Quick Start
Get Spendpol running in under 5 minutes. No code changes required.
Create an account
Sign up for free at spendpol.com/signup. No credit card required. You'll get an API key immediately.
Add your provider API keys
In the dashboard, go to Settings > Providers and add your OpenAI, Anthropic, or other provider API keys. Spendpol encrypts them at rest with AES-256-GCM.
Route traffic through the proxy
Point your application to the Spendpol proxy instead of the provider directly. The proxy URL includes the provider name. One environment variable change — zero code changes.
OPENAI_BASE_URL=https://proxy.spendpol.com/v1/openai
Verify in the dashboard
Make an API call. Within seconds, you'll see it in the Spendpol dashboard with cost, latency, tokens, and provider attribution.
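The one-variable change above can be made from code as well. A minimal sketch, assuming recent OpenAI SDKs (which read OPENAI_BASE_URL from the environment):

```python
import os

# The only change needed: point the OpenAI SDK at the Spendpol proxy.
# Recent official OpenAI SDKs pick up OPENAI_BASE_URL automatically.
os.environ["OPENAI_BASE_URL"] = "https://proxy.spendpol.com/v1/openai"

# Any client constructed after this point routes through the proxy, e.g.:
# client = OpenAI()  # reads OPENAI_BASE_URL from the environment
print(os.environ["OPENAI_BASE_URL"])
```

Setting the variable in your shell or deployment config works identically; nothing else in the application changes.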
SDKs & Integrations
Native SDKs for TypeScript and Python. Drop-in replacements for OpenAI and LangChain.
TypeScript SDK
$ npm install @spendpol/sdk
import { SpendpolClient } from "@spendpol/sdk";
const client = new SpendpolClient({
apiKey: "sp_key_...",
baseUrl: "https://proxy.spendpol.com",
});
const response = await client.chat({
provider: "openai",
model: "gpt-4o",
messages: [{ role: "user", content: "Hello" }],
});
// Response includes governance metadata
console.log(response.headers.cost); // "0.000150"
console.log(response.headers.cache); // "miss"
console.log(response.headers.auditId); // "evt_abc123"
Python SDK
$ pip install spendpol
from spendpol import SpendpolClient
client = SpendpolClient(
api_key="sp_key_...",
base_url="https://proxy.spendpol.com",
)
response = await client.chat(
provider="openai",
model="gpt-4o",
messages=[{"role": "user", "content": "Hello"}],
)
# Response includes governance metadata
print(response.headers.cost) # "0.000150"
print(response.headers.tokens) # {"input": 25, "output": 100}
OpenAI Drop-in (TypeScript)
$ npm install @spendpol/sdk openai
import { SpendpolOpenAI } from "@spendpol/sdk";
// Drop-in replacement for OpenAI client
const openai = new SpendpolOpenAI({
apiKey: "sp_key_...",
});
// Use exactly like the OpenAI SDK
const completion = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello" }],
});
LangChain Callback (Python)
$ pip install spendpol langchain
from spendpol import SpendpolCallbackHandler
handler = SpendpolCallbackHandler(
api_key="sp_key_...",
)
# Add to your LangChain chain
chain.invoke(input, config={"callbacks": [handler]})
Framework Support
Spendpol works with any framework that calls an LLM API. The proxy URL format is proxy.spendpol.com/v1/{provider}/{path}.
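The URL scheme above can be sketched as a small helper. This is an illustrative function, not part of the Spendpol SDK; the provider list matches the supported providers documented below.

```python
# Providers supported by the proxy, per the documented route prefixes.
SUPPORTED_PROVIDERS = {"openai", "anthropic", "google", "azure", "cohere", "mistral"}

def proxy_url(provider: str, path: str = "") -> str:
    """Build a Spendpol proxy URL: proxy.spendpol.com/v1/{provider}/{path}."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    base = f"https://proxy.spendpol.com/v1/{provider}"
    return f"{base}/{path.lstrip('/')}" if path else base

print(proxy_url("openai"))                    # https://proxy.spendpol.com/v1/openai
print(proxy_url("anthropic", "v1/messages"))  # https://proxy.spendpol.com/v1/anthropic/v1/messages
```

Each framework entry below is just this base URL plugged into the framework's own base-URL setting.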
OpenAI SDK (Python)
openai.base_url = "https://proxy.spendpol.com/v1/openai"
OpenAI SDK (TS)
new OpenAI({ baseURL: "https://proxy.spendpol.com/v1/openai" })
Anthropic SDK
Anthropic(base_url="https://proxy.spendpol.com/v1/anthropic")
LangChain
ChatOpenAI(base_url="https://proxy.spendpol.com/v1/openai")
CrewAI
os.environ["OPENAI_BASE_URL"] = "https://proxy.spendpol.com/v1/openai"AutoGen
config_list = [{"base_url": "https://proxy.spendpol.com/v1/openai", ...}]
Supported Providers
/v1/openai/...
/v1/anthropic/...
/v1/google/...
/v1/azure/...
/v1/cohere/...
/v1/mistral/...
API Reference
All endpoints are available at https://api.spendpol.com. Authenticate with Authorization: Bearer sp_key_...
/budgets
/budgets/:id/spend
/policies
/policies/evaluate
/alerts
/audit-logs
/audit-logs/export
/agents
/agents/:id
/agents/:id/pause
/agents/:id/resume
/cache/stats
/cache/config
/routing/rules
/chargeback/periods
/chargeback/periods/:id/finalize
/forecast/spend
/forecast/anomalies
/forecast/what-if
/shadow-ai/alerts
/prompts
/api-keys
/api-keys/:id/rotate
Showing key endpoints. The full API includes 80+ routes across budgets, policies, alerts, audit, agents, cache, routing, chargeback, forecast, compliance, prompts, and integrations.
Proxy Response Headers
Every proxied response includes governance headers for cost, performance, and compliance tracking.
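A hedged sketch of consuming these headers on the client side. The parsing function is hypothetical and the header dict is a fabricated example; only the header names and value formats come from the table below.

```python
def parse_governance(headers: dict) -> dict:
    """Extract a few governance fields from Spendpol proxy response headers."""
    inp, out = (int(n) for n in headers["X-Spendpol-Tokens"].split("/"))
    return {
        "cost_usd": float(headers["X-Spendpol-Cost"]),
        "tokens": {"input": inp, "output": out},
        "cache_hit": headers.get("X-Spendpol-Cache") == "hit",
        "audit_id": headers.get("X-Spendpol-Audit-Id"),
    }

# Fabricated example response headers, for illustration only.
example = {
    "X-Spendpol-Cost": "0.000150",
    "X-Spendpol-Tokens": "25/100",
    "X-Spendpol-Cache": "miss",
    "X-Spendpol-Audit-Id": "evt_abc123",
}
meta = parse_governance(example)
print(meta["tokens"])  # {'input': 25, 'output': 100}
```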
X-Spendpol-Cost
Cost of the request in USD (e.g. "0.000150")
X-Spendpol-Model
Actual model used (may differ from requested if routed)
X-Spendpol-Tokens
Token usage as "input/output" (e.g. "25/100")
X-Spendpol-Cache
Cache status: hit, miss, or skip
X-Spendpol-Routed
Whether intelligent routing was applied (true/false)
X-Spendpol-Requested-Model
Original model before routing
X-Spendpol-Audit-Id
Audit event ID (format: evt_{id})
X-Spendpol-Trace-Id
Distributed trace ID (UUID)
X-Spendpol-Span-Id
Trace span identifier
X-Spendpol-Latency-Ms
Total proxy overhead in milliseconds
X-Spendpol-Guard-Score
PII detection risk score (0-1)
X-Spendpol-Hallucination-Score
Hallucination risk percentage
X-Spendpol-Region
Deployment region
X-Spendpol-Template-Id
Resolved prompt template UUID
X-Spendpol-Template-Version
Prompt template version number
X-Spendpol-Policy-Warning
Comma-separated policy warnings
Authentication
All requests require an API key.
Proxy Authentication
Pass your Spendpol API key as the Bearer token. The proxy resolves your org, applies policies, and forwards to the provider with the correct provider key.
Authorization: Bearer sp_key_abc123def456
Alternative: X-Spendpol-Key Header
You can also pass the API key via a custom header. Useful when the Authorization header is already used by your provider SDK.
X-Spendpol-Key: sp_key_abc123def456
API Key Management
Create, rotate, and revoke keys from the dashboard or via the API. Keys are stored as SHA-256 hashes. Rotation is atomic: the old key is deactivated and the new key is created in a single transaction.
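The hash-only storage scheme described above can be sketched with the standard library. Function names here are illustrative, not Spendpol internals:

```python
import hashlib
import hmac

def key_digest(api_key: str) -> str:
    """Digest an API key for storage; only this hash is ever persisted."""
    return hashlib.sha256(api_key.encode()).hexdigest()

def verify_key(presented: str, stored_digest: str) -> bool:
    """Check a presented key against a stored digest in constant time."""
    return hmac.compare_digest(key_digest(presented), stored_digest)

stored = key_digest("sp_key_abc123def456")
print(verify_key("sp_key_abc123def456", stored))  # True
print(verify_key("sp_key_wrong", stored))         # False
```

Because only the digest is stored, a database leak never exposes usable raw keys; the constant-time comparison avoids timing side channels during verification.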
Request Configuration
Control proxy behavior per-request using these headers.
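An illustrative helper for assembling these per-request control headers. The function and its defaults are assumptions; only the header names and example values come from the list below.

```python
import uuid

def spendpol_headers(department=None, project=None, trace_id=None,
                     skip_cache=False, skip_route=False) -> dict:
    """Build Spendpol per-request control headers to merge into an HTTP request."""
    # Trace ID is auto-generated by the proxy if missing; generating one
    # client-side lets you correlate the call in your own tracing system.
    headers = {"X-Spendpol-Trace-Id": trace_id or str(uuid.uuid4())}
    if department:
        headers["X-Spendpol-Department"] = department
    if project:
        headers["X-Spendpol-Project"] = project
    if skip_cache:
        headers["X-Spendpol-Cache"] = "skip"
    if skip_route:
        headers["X-Spendpol-Route"] = "skip"
    return headers

h = spendpol_headers(department="engineering", skip_cache=True)
print(h["X-Spendpol-Cache"])  # skip
```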
X-Spendpol-Cache: skip
Skip semantic cache for this request
X-Spendpol-Route: skip
Skip intelligent routing for this request
X-Spendpol-Template: slug-name
Apply a prompt template by slug
X-Spendpol-Department: engineering
Set department for cost attribution and chargeback
X-Spendpol-Project: project-id
Set project for cost attribution
X-Spendpol-Trace-Id: uuid
Propagate your distributed trace ID (auto-generated if missing)
X-Spendpol-Parent-Id: span-id
Parent span ID for distributed tracing
Self-Hosted Deployment
Spendpol can run on your infrastructure — air-gapped, on-premise, or in your own cloud.
Docker Compose
Single-node deployment with all 12 services. Best for evaluation and small teams.
Helm Chart
Kubernetes-native deployment with HPA, PDBs, and network policies. Production-ready.
Terraform
AWS infrastructure modules: VPC, EKS, RDS, ElastiCache, S3. Multi-region capable.
Architecture
Proxy
Rust / Axum
API
Node.js / Express
Dashboard
Next.js 15
Workers
Kafka consumers
PostgreSQL
Transactional + RLS
Redis
Cache + rate limits
ClickHouse
Analytics + events
Redpanda
Event streaming
Need help with deployment? Contact us for dedicated onboarding support.