# Integrations Overview
Cycles integrates with LLM providers, agent frameworks, and web servers. Each integration wraps model calls with the reserve → commit → release lifecycle so that every call is budget-checked before execution.
## Supported integrations
| Integration | Language | Streaming | Pattern |
|---|---|---|---|
| OpenAI | Python | Yes | Decorator |
| Anthropic | Python | Yes | Decorator |
| LangChain | Python | Yes | Callback handler |
| LangChain.js | TypeScript | Yes | Callback handler |
| Vercel AI SDK | TypeScript | Yes | reserveForStream |
| AWS Bedrock | TypeScript | Yes | withCycles / reserveForStream |
| Google Gemini | TypeScript | Yes | withCycles / reserveForStream |
| Express | TypeScript | Yes | Middleware / withCycles |
| FastAPI | Python | — | Middleware / Decorator |
| OpenClaw | TypeScript | Yes | Plugin (lifecycle hooks) |
## Integration patterns
Cycles offers several integration approaches depending on your stack:
### Decorator / Higher-order function
The simplest approach. Wrap your LLM-calling function and Cycles handles reservation, commit, and release automatically.
- Python: `@cycles` decorator
- TypeScript: `withCycles` higher-order function
Best for: individual model calls, simple request-response flows.
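The decorator pattern can be sketched in Python with an in-memory stand-in for the Cycles client. All names below (`FakeCyclesClient`, `reserve`/`commit`/`release`, the `cycles` decorator factory) are illustrative assumptions, not the real API:

```python
import functools

# Hypothetical in-memory stand-in for the Cycles client; the real
# client API may differ.
class FakeCyclesClient:
    def __init__(self, budget):
        self.budget = budget   # remaining spendable budget
        self.held = 0.0        # amount currently reserved

    def reserve(self, estimated_cost):
        if estimated_cost > self.budget - self.held:
            raise RuntimeError("budget exceeded")
        self.held += estimated_cost
        return estimated_cost  # simplified reservation handle

    def commit(self, reservation, actual_cost):
        self.held -= reservation
        self.budget -= actual_cost

    def release(self, reservation):
        self.held -= reservation

def cycles(client, estimated_cost):
    """Decorator sketch: reserve before the call, commit actual usage
    on success, release the held budget on error."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            reservation = client.reserve(estimated_cost)
            try:
                result, actual_cost = fn(*args, **kwargs)
            except Exception:
                client.release(reservation)
                raise
            client.commit(reservation, actual_cost)
            return result
        return inner
    return wrap

client = FakeCyclesClient(budget=1.00)

@cycles(client, estimated_cost=0.10)
def call_model(prompt):
    # a real integration would call the provider here and derive the
    # actual cost from the returned token usage
    return f"echo: {prompt}", 0.04

print(call_model("hi"))  # echo: hi
```

Note that the wrapped function never sees the reservation: the decorator owns the full lifecycle, which is why this pattern suits simple request-response flows.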
### Callback handler
For agent frameworks like LangChain that fire events on every LLM call. A custom callback handler creates reservations on `llm_start` and commits on `llm_end`.
Best for: multi-turn agents, tool-calling chains, LangChain/LangGraph pipelines.
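The handler pattern can be pictured in Python with a simplified in-memory budget. All names here (`FakeCyclesClient`, the `on_llm_*` methods and their signatures) are illustrative; the real LangChain callback signatures and the Cycles handler differ:

```python
# Hypothetical in-memory stand-in for the Cycles client.
class FakeCyclesClient:
    def __init__(self, budget):
        self.budget, self.held = budget, 0.0
    def reserve(self, estimate):
        if estimate > self.budget - self.held:
            raise RuntimeError("budget exceeded")
        self.held += estimate
        return estimate
    def commit(self, reservation, actual):
        self.held -= reservation
        self.budget -= actual
    def release(self, reservation):
        self.held -= reservation

class CyclesCallbackHandler:
    """Opens a reservation when an LLM call starts and commits (or
    releases) it when the call ends, keyed by run id."""
    def __init__(self, client, estimate_per_call):
        self.client = client
        self.estimate = estimate_per_call
        self._reservations = {}  # run_id -> reservation

    def on_llm_start(self, run_id):
        self._reservations[run_id] = self.client.reserve(self.estimate)

    def on_llm_end(self, run_id, actual_cost):
        self.client.commit(self._reservations.pop(run_id), actual_cost)

    def on_llm_error(self, run_id):
        self.client.release(self._reservations.pop(run_id))

client = FakeCyclesClient(budget=1.00)
handler = CyclesCallbackHandler(client, estimate_per_call=0.05)

# Two calls in a chain: each reserved up front, committed at actual cost.
for run_id, cost in [("run-1", 0.02), ("run-2", 0.03)]:
    handler.on_llm_start(run_id)
    handler.on_llm_end(run_id, actual_cost=cost)

print(round(client.budget, 2))  # 0.95
```

Keying reservations by run id is what lets one handler instance cover an entire multi-turn chain, with each LLM call tracked independently.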
### reserveForStream
For streaming responses where the actual cost is only known after the stream completes. Reserves budget upfront, auto-extends the reservation TTL during streaming, and commits actual usage when the stream finishes.
Best for: streaming chat UIs, Vercel AI SDK, any provider with streaming support.
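One way to picture the streaming flow is a Python generator analogue of the idea (reserveForStream itself is TypeScript; `FakeCyclesClient` and every name below are illustrative assumptions, not the real API):

```python
# Hypothetical in-memory stand-in for the Cycles client.
class FakeCyclesClient:
    def __init__(self, budget):
        self.budget, self.held = budget, 0.0
    def reserve(self, estimate):
        if estimate > self.budget - self.held:
            raise RuntimeError("budget exceeded")
        self.held += estimate
        return estimate
    def commit(self, reservation, actual):
        self.held -= reservation
        self.budget -= actual
    def release(self, reservation):
        self.held -= reservation

def reserve_for_stream(client, estimated_cost, chunks):
    """Generator sketch: reserve up front, tally per-chunk cost while
    streaming, commit the actual total once the stream completes."""
    reservation = client.reserve(estimated_cost)
    actual = 0.0
    try:
        for text, chunk_cost in chunks:
            actual += chunk_cost
            # a real implementation would also extend the reservation
            # TTL here so it doesn't expire mid-stream
            yield text
    except Exception:
        client.release(reservation)
        raise
    client.commit(reservation, actual)

client = FakeCyclesClient(budget=1.00)
stream = [("Hel", 0.01), ("lo", 0.01), ("!", 0.01)]
print("".join(reserve_for_stream(client, estimated_cost=0.10, chunks=stream)))  # Hello!
print(round(client.budget, 2))  # 0.97
```

The key property: only the estimate is held during the stream, and the commit charges the actual tally, so an over-estimate is returned to the budget when the stream finishes.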
### Programmatic client
Direct access to the Cycles client for full control over the reservation lifecycle. Use when the higher-level patterns don't fit your architecture.
Best for: custom frameworks, complex orchestration, batch processing.
See Choosing the Right Integration Pattern for detailed guidance.
## Adding a new integration
All integrations follow the same protocol:
- Reserve budget before the LLM call with an estimated cost
- Execute the model call (respecting any caps returned)
- Commit actual cost from token usage after execution
- Release on error to free held budget
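The four steps above can be sketched end to end in Python, again with an in-memory stand-in (all names are illustrative; the real client API, including how caps are returned from a reservation, may differ):

```python
# Hypothetical in-memory stand-in for the Cycles client.
class FakeCyclesClient:
    def __init__(self, budget):
        self.budget, self.held = budget, 0.0
    def reserve(self, estimate):
        if estimate > self.budget - self.held:
            raise RuntimeError("budget exceeded")
        self.held += estimate
        return estimate
    def commit(self, reservation, actual):
        self.held -= reservation
        self.budget -= actual
    def release(self, reservation):
        self.held -= reservation

def guarded_call(client, estimated_cost, model_call):
    """Reserve -> execute -> commit, releasing the hold on any error."""
    reservation = client.reserve(estimated_cost)  # 1. reserve up front
    try:
        # 2. execute the model call (a real reservation may also carry
        #    caps that the call should respect)
        response, actual_cost = model_call()
    except Exception:
        client.release(reservation)               # 4. release on error
        raise
    client.commit(reservation, actual_cost)       # 3. commit actual cost
    return response

client = FakeCyclesClient(budget=0.50)
print(guarded_call(client, 0.10, lambda: ("ok", 0.03)))  # ok

def failing_call():
    raise ValueError("provider error")

try:
    guarded_call(client, 0.10, failing_call)
except ValueError:
    pass
print(client.held)  # 0.0 -- the hold was released, not leaked
```

The try/except shape is the invariant every integration shares: no code path may leave a reservation held after the call has finished, successfully or not.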
See Using the Cycles Client Programmatically for the full client API reference.
## Next steps
- Adding Cycles to an Existing Application — step-by-step guide for your first integration
- Cost Estimation Cheat Sheet — pricing reference for estimation
- Error Handling Patterns — handling budget errors across languages
