Documentation

Everything you need to integrate Spendpol into your AI stack.

Quick Start

Get Spendpol running in under 5 minutes. No code changes required.

1. Create an account

Sign up for free at spendpol.com/signup. No credit card required. You'll get an API key immediately.

2. Add your provider API keys

In the dashboard, go to Settings > Providers and add your OpenAI, Anthropic, or other provider API keys. Spendpol encrypts them at rest with AES-256-GCM.

3. Route traffic through the proxy

Point your application to the Spendpol proxy instead of the provider directly. The proxy URL includes the provider name. One environment variable change — zero code changes.

OPENAI_BASE_URL=https://proxy.spendpol.com/v1/openai

4. Verify in the dashboard

Make an API call. Within seconds, you'll see it in the Spendpol dashboard with cost, latency, tokens, and provider attribution.

SDKs & Integrations

Native SDKs for TypeScript and Python, a drop-in replacement for the OpenAI client, and a LangChain callback handler.

TypeScript SDK

$ npm install @spendpol/sdk
import { SpendpolClient } from "@spendpol/sdk";

const client = new SpendpolClient({
  apiKey: "sp_key_...",
  baseUrl: "https://proxy.spendpol.com",
});

const response = await client.chat({
  provider: "openai",
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

// Response includes governance metadata
console.log(response.headers.cost);      // "0.000150"
console.log(response.headers.cache);     // "miss"
console.log(response.headers.auditId);   // "evt_abc123"

Python SDK

$ pip install spendpol
from spendpol import SpendpolClient

client = SpendpolClient(
    api_key="sp_key_...",
    base_url="https://proxy.spendpol.com",
)

# Note: call from inside an async function (the Python client is async)
response = await client.chat(
    provider="openai",
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# Response includes governance metadata
print(response.headers.cost)       # "0.000150"
print(response.headers.tokens)     # {"input": 25, "output": 100}

OpenAI Drop-in (TypeScript)

$ npm install @spendpol/sdk openai
import { SpendpolOpenAI } from "@spendpol/sdk";

// Drop-in replacement for OpenAI client
const openai = new SpendpolOpenAI({
  apiKey: "sp_key_...",
});

// Use exactly like the OpenAI SDK
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

LangChain Callback (Python)

$ pip install spendpol langchain
from spendpol import SpendpolCallbackHandler

handler = SpendpolCallbackHandler(
    api_key="sp_key_...",
)

# Add to your LangChain chain
chain.invoke(input, config={"callbacks": [handler]})

Framework Support

Spendpol works with any framework that calls an LLM API. The proxy URL format is https://proxy.spendpol.com/v1/{provider}/{path}.

OpenAI SDK (Python)

openai.base_url = "https://proxy.spendpol.com/v1/openai"

OpenAI SDK (TS)

new OpenAI({ baseURL: "https://proxy.spendpol.com/v1/openai" })

Anthropic SDK

Anthropic(base_url="https://proxy.spendpol.com/v1/anthropic")

LangChain

ChatOpenAI(base_url="https://proxy.spendpol.com/v1/openai")

CrewAI

os.environ["OPENAI_BASE_URL"] = "https://proxy.spendpol.com/v1/openai"

AutoGen

config_list = [{"base_url": "https://proxy.spendpol.com/v1/openai", ...}]
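The URL convention above can be captured in a small helper. This is an illustrative sketch, not part of the Spendpol SDK; `build_proxy_url` is a hypothetical name:

```python
# Sketch of the proxy URL convention: proxy.spendpol.com/v1/{provider}/{path}.
# build_proxy_url is a hypothetical helper, not part of any Spendpol SDK.
PROXY_BASE = "https://proxy.spendpol.com/v1"

def build_proxy_url(provider: str, path: str = "") -> str:
    """Return the proxy URL for a given provider and optional upstream path."""
    url = f"{PROXY_BASE}/{provider}"
    if path:
        url += "/" + path.lstrip("/")
    return url

print(build_proxy_url("openai"))                    # https://proxy.spendpol.com/v1/openai
print(build_proxy_url("anthropic", "/v1/messages")) # https://proxy.spendpol.com/v1/anthropic/v1/messages
```

The base URL (no path) is what goes into a framework's `base_url` setting; the full form is what the proxy sees when the SDK appends its own endpoint path.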

Supported Providers

/v1/openai/...
/v1/anthropic/...
/v1/google/...
/v1/azure/...
/v1/cohere/...
/v1/mistral/...

API Reference

All endpoints are available at https://api.spendpol.com. Authenticate with Authorization: Bearer sp_key_...

GET /budgets
POST /budgets
GET /budgets/:id/spend
GET /policies
POST /policies
POST /policies/evaluate
GET /alerts
POST /alerts
GET /audit-logs
POST /audit-logs/export
GET /agents
DELETE /agents/:id
POST /agents/:id/pause
POST /agents/:id/resume
GET /cache/stats
PATCH /cache/config
GET /routing/rules
GET /chargeback/periods
POST /chargeback/periods/:id/finalize
GET /forecast/spend
GET /forecast/anomalies
POST /forecast/what-if
GET /shadow-ai/alerts
GET /prompts
POST /prompts
POST /api-keys
POST /api-keys/:id/rotate

Showing key endpoints. Full API includes 80+ routes across budgets, policies, alerts, audit, agents, cache, routing, chargeback, forecast, compliance, prompts, and integrations.
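The request shape is the same for every endpoint. As a sketch, here is a GET /budgets request built (but not sent) with the Python standard library; the endpoint and Bearer scheme come from the reference above, and the key is a placeholder:

```python
# Build (but do not send) a request to the Spendpol REST API.
# The endpoint and auth scheme follow the API reference; no network call is made.
import urllib.request

API_BASE = "https://api.spendpol.com"
API_KEY = "sp_key_abc123def456"  # placeholder key

req = urllib.request.Request(
    f"{API_BASE}/budgets",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="GET",
)

print(req.full_url)                     # https://api.spendpol.com/budgets
print(req.get_header("Authorization"))  # Bearer sp_key_abc123def456
```

Sending it is a matter of `urllib.request.urlopen(req)` or handing the same URL and headers to your HTTP client of choice.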

Proxy Response Headers

Every proxied response includes governance headers for cost, performance, and compliance tracking.

X-Spendpol-Cost - Cost of the request in USD (e.g. "0.000150")
X-Spendpol-Model - Actual model used (may differ from requested if routed)
X-Spendpol-Tokens - Token usage as "input/output" (e.g. "25/100")
X-Spendpol-Cache - Cache status: hit, miss, or skip
X-Spendpol-Routed - Whether intelligent routing was applied (true/false)
X-Spendpol-Requested-Model - Original model before routing
X-Spendpol-Audit-Id - Audit event ID (format: evt_{id})
X-Spendpol-Trace-Id - Distributed trace ID (UUID)
X-Spendpol-Span-Id - Trace span identifier
X-Spendpol-Latency-Ms - Total proxy overhead in milliseconds
X-Spendpol-Guard-Score - PII detection risk score (0-1)
X-Spendpol-Hallucination-Score - Hallucination risk percentage
X-Spendpol-Region - Deployment region
X-Spendpol-Template-Id - Resolved prompt template UUID
X-Spendpol-Template-Version - Prompt template version number
X-Spendpol-Policy-Warning - Comma-separated policy warnings
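Header values arrive as plain strings. A sketch of turning the cost and token headers into typed values, using the documented formats (the sample values mirror the examples above):

```python
# Parse governance headers from a proxied response into typed values.
# Sample values mirror the documented formats: cost in USD, tokens as "input/output".
headers = {
    "X-Spendpol-Cost": "0.000150",
    "X-Spendpol-Tokens": "25/100",
    "X-Spendpol-Cache": "miss",
}

cost_usd = float(headers["X-Spendpol-Cost"])
input_tokens, output_tokens = (int(n) for n in headers["X-Spendpol-Tokens"].split("/"))
cache_hit = headers["X-Spendpol-Cache"] == "hit"

print(cost_usd)                      # 0.00015
print(input_tokens, output_tokens)   # 25 100
print(cache_hit)                     # False
```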

Authentication

All requests require an API key.

Proxy Authentication

Pass your Spendpol API key as the Bearer token. The proxy resolves your org, applies policies, and forwards to the provider with the correct provider key.

Authorization: Bearer sp_key_abc123def456

Alternative: X-Spendpol-Key Header

You can also pass the API key via a custom header. Useful when the Authorization header is already used by your provider SDK.

X-Spendpol-Key: sp_key_abc123def456

API Key Management

Create, rotate, and revoke keys from the dashboard or via the API. Keys are stored as SHA-256 hashes. Rotation is atomic — the old key is deactivated and the new key created in a single transaction.
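Storing keys as SHA-256 hashes means the plaintext never needs to be persisted. A minimal sketch of that scheme — the hex encoding and helper names are assumptions for illustration, not Spendpol's internal format:

```python
# Sketch: persist only the SHA-256 digest of an API key, then verify by re-hashing.
# Hex encoding and function names are illustrative assumptions.
import hashlib
import hmac

def hash_key(api_key: str) -> str:
    return hashlib.sha256(api_key.encode()).hexdigest()

def verify(presented: str, stored_hash: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(hash_key(presented), stored_hash)

stored = hash_key("sp_key_abc123def456")

print(verify("sp_key_abc123def456", stored))  # True
print(verify("sp_key_wrong", stored))         # False
```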

Request Configuration

Control proxy behavior per-request using these headers.

X-Spendpol-Cache: skip - Skip semantic cache for this request
X-Spendpol-Route: skip - Skip intelligent routing for this request
X-Spendpol-Template: slug-name - Apply a prompt template by slug
X-Spendpol-Department: engineering - Set department for cost attribution and chargeback
X-Spendpol-Project: project-id - Set project for cost attribution
X-Spendpol-Trace-Id: uuid - Propagate your distributed trace ID (auto-generated if missing)
X-Spendpol-Parent-Id: span-id - Parent span ID for distributed tracing
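These compose as ordinary HTTP headers on the proxied call. A sketch with illustrative values (the project id is a made-up example; header names come from the table above):

```python
# Compose per-request configuration headers for a proxied call.
# Values are illustrative; "checkout-bot" is a hypothetical project id.
request_headers = {
    "Authorization": "Bearer sp_key_abc123def456",
    "X-Spendpol-Cache": "skip",              # bypass the semantic cache this once
    "X-Spendpol-Department": "engineering",  # attribute cost for chargeback
    "X-Spendpol-Project": "checkout-bot",    # attribute cost to a project
}

print(sorted(request_headers))
```

Pass the dict as extra headers through whatever HTTP client or SDK you route through the proxy; headers you omit fall back to the proxy's defaults.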

Self-Hosted Deployment

Spendpol can run on your infrastructure — air-gapped, on-premise, or in your own cloud.

Docker Compose

Single-node deployment with all 12 services. Best for evaluation and small teams.

Helm Chart

Kubernetes-native deployment with HPA, PDBs, and network policies. Production-ready.

Terraform

AWS infrastructure modules: VPC, EKS, RDS, ElastiCache, S3. Multi-region capable.

Architecture

Proxy - Rust / Axum
API - Node.js / Express
Dashboard - Next.js 15
Workers - Kafka consumers
PostgreSQL - Transactional + RLS
Redis - Cache + rate limits
ClickHouse - Analytics + events
Redpanda - Event streaming
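The components above map naturally onto containers. As a rough orientation only, a Docker Compose fragment covering the four data stores — service names and image tags here are guesses, not the compose file Spendpol ships:

```yaml
# Illustrative only: the data stores from the architecture above.
# The shipped compose file has 12 services; names and image tags here are assumptions.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: spendpol
  redis:
    image: redis:7
  clickhouse:
    image: clickhouse/clickhouse-server:latest
  redpanda:
    image: redpandadata/redpanda:latest
```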

Need help with deployment? Contact us for dedicated onboarding support.