API Documentation

Zero-config cost tracking for OpenAI, Anthropic, Google, and other LLM providers, covering 40+ models. Just 2 lines of code.

Quick Start

1. Install the SDK

pip install llmobserve

2. Initialize (2 lines!)

```python
import llmobserve

llmobserve.observe(api_key="llmo_sk_your_key_here")

# That's it! All LLM calls are now tracked automatically.
# Use OpenAI, Anthropic, etc. as normal - we handle the rest.
```

How It Works

Zero-Config Tracking

We patch HTTP clients to automatically capture API calls. Works with any provider.
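To make the patching idea concrete, here is a minimal, self-contained sketch of the general technique: wrap a client's request method so every call is recorded before being forwarded. This is an illustration of monkey-patching in general, not llmobserve's actual internals; `FakeHTTPClient` and `patch_client` are invented for the example.

```python
# Conceptual sketch of zero-config tracking via monkey-patching
# (illustrative only - not llmobserve's real implementation).

captured = []  # stand-in for the SDK's cost-tracking pipeline

class FakeHTTPClient:
    def request(self, method, url):
        return {"status": 200}

def patch_client(client):
    original = client.request
    def tracked_request(method, url):
        captured.append((method, url))  # record the call before forwarding
        return original(method, url)
    client.request = tracked_request  # replace the method on the instance

client = FakeHTTPClient()
patch_client(client)
client.request("POST", "https://api.openai.com/v1/chat/completions")
print(captured)
```

Because the patch sits below the provider SDKs, application code keeps calling OpenAI, Anthropic, etc. exactly as before.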

Section Labels

Use section() to organize costs by feature, agent, or workflow.

Multi-Language

Python and Node.js SDKs. Same API, same dashboard.

Complete Example

```python
import llmobserve
from llmobserve import section, set_customer_id
from openai import OpenAI

# Initialize once at startup
llmobserve.observe(api_key="llmo_sk_your_key_here")

client = OpenAI()

# Optional: Track costs per customer
set_customer_id("customer_123")

# Optional: Tag costs with a feature name
with section("feature:chat_assistant"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )

print(response.choices[0].message.content)
# ✅ Costs automatically tracked in your dashboard!
```

Core API Reference

observe(api_key)

Initialize LLMObserve. Call once at app startup.

```python
llmobserve.observe(api_key="llmo_sk_...")
```

section(name)

Context manager to tag costs with a feature name.

```python
with section("feature:summarizer"):
    response = client.chat.completions.create(...)
```

set_customer_id(id)

Track costs per customer for multi-tenant apps.

```python
set_customer_id("customer_123")
```

Spending Caps & Alerts

Set budget limits to prevent runaway costs. Configure in your dashboard.

Soft Caps (Alerts)

Get email notifications at thresholds (80%, 95%, 100%). Calls continue to work.

Hard Caps (Blocking)

Block calls once the cap is exceeded. The SDK raises BudgetExceededError.

Handling Hard Caps

```python
from llmobserve import observe, BudgetExceededError
from openai import OpenAI

observe(api_key="llmo_sk_...")
client = OpenAI()

try:
    response = client.chat.completions.create(...)
except BudgetExceededError as e:
    print(f"Cap exceeded: {e}")
    # Handle gracefully - switch model, queue request, etc.
```
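One way to "handle gracefully" is a fallback path that serves a degraded response instead of failing the user's request. The sketch below is self-contained so it runs standalone: `BudgetExceededError` is stubbed in place of the SDK's exception, and `call_primary`/`call_fallback` are hypothetical helpers standing in for your real model calls.

```python
# Graceful-degradation sketch for hard caps. BudgetExceededError is
# stubbed here so the example runs standalone; in a real app it is
# imported from llmobserve.

class BudgetExceededError(Exception):
    pass

def call_primary(prompt):
    # Stand-in for the normal LLM call; simulates a tripped hard cap.
    raise BudgetExceededError("monthly cap reached")

def call_fallback(prompt):
    # Stand-in for a cheaper path: smaller model, cache, or canned reply.
    return f"[cached reply for: {prompt}]"

def answer(prompt):
    try:
        return call_primary(prompt)
    except BudgetExceededError:
        # Serve a degraded response instead of failing the request.
        return call_fallback(prompt)

print(answer("Hello!"))
```

Other reasonable reactions include queueing the request for the next billing period or returning a 429-style error to the caller.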

Supported Providers

We track costs for 44+ models across 9+ providers.

OpenAI: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo, o1-preview, o1-mini, o4-mini, text-embedding-3-small, text-embedding-3-large, whisper-1, tts-1, tts-1-hd, dall-e-3, dall-e-2
Anthropic: claude-3.5-sonnet, claude-3-opus, claude-3-sonnet, claude-3-haiku

Ready to track your LLM costs?

Get started in under 2 minutes. Free forever.

Get Started Free