Quickstart

Install AgentKavach and start enforcing budgets in under 2 minutes.

Install #

AgentKavach is available on PyPI. Install it with pip:

```bash
pip install agentkavach
```

Get Your API Key #

Sign in to the AgentKavach dashboard and navigate to Settings → API Keys. Create a new key — it will start with cg_. You can pass it directly to the constructor or set it as an environment variable:

```bash
export AGENTKAVACH_API_KEY=cg_...
```

ℹ️ API key vs LLM key

The api_key parameter is your AgentKavach key (starts with cg_), NOT the LLM provider key. The llm_key is your OpenAI/Anthropic/Google/Mistral key.

ℹ️ Passthrough mode

If api_key is empty or invalid, AgentKavach operates in passthrough mode — LLM calls go directly to the provider with zero overhead. No pre-flight checks, no post-flight recording, no telemetry, no budget enforcement. The request and response are untouched.
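Conceptually, the passthrough decision works like a dispatch on key validity. A minimal sketch (the function names and the prefix check here are illustrative assumptions, not the SDK's actual internals):

```python
def is_valid_key(api_key):
    # Assumed format check: AgentKavach keys start with "cg_".
    return bool(api_key) and api_key.startswith("cg_")

def dispatch(api_key, call_provider, request):
    if not is_valid_key(api_key):
        # Passthrough mode: hand the request straight to the provider,
        # with no pre-flight check, no recording, no telemetry.
        return call_provider(request)
    # Guarded mode: pre-flight checks would run here before forwarding.
    raise NotImplementedError("guarded path omitted in this sketch")
```

The point of the early return is that the passthrough path touches neither the request nor the response, which is why it adds zero overhead.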

Your First Protected Call #

Here is a complete working example that wraps an OpenAI call with a $50/day budget:

```python
import sys

from agentkavach import AgentKavach, Budget

def emergency_stop():
    guard.save_checkpoint()
    sys.exit(1)

guard = AgentKavach(
    provider="openai",
    api_key="cg_...",               # your AgentKavach key (NOT the LLM key)
    llm_key="sk-...",               # your OpenAI API key
    agent_name="research-bot",
    budget=Budget.daily(50),        # $50/day hard limit
    on_kill=emergency_stop,
)

response = guard.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(f"Spent: ${guard.spent:.4f}, Remaining: ${guard.remaining:.4f}")
```
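The `spent`/`remaining` figures are plain in-memory accounting over token usage. A minimal sketch of that arithmetic (hypothetical class and per-1K-token prices chosen for illustration; not the SDK's real pricing tables):

```python
class DailyBudget:
    """Hypothetical in-memory tracker mirroring spent/remaining."""

    def __init__(self, limit_usd):
        self.limit = float(limit_usd)
        self.spent = 0.0

    @property
    def remaining(self):
        return self.limit - self.spent

    def record(self, prompt_tokens, completion_tokens,
               in_price_per_1k, out_price_per_1k):
        # Cost = (tokens / 1000) * per-1K price, per direction.
        cost = (prompt_tokens / 1000) * in_price_per_1k \
             + (completion_tokens / 1000) * out_price_per_1k
        self.spent += cost
        return cost

budget = DailyBudget(50)
budget.record(1000, 500, in_price_per_1k=0.005, out_price_per_1k=0.015)
print(f"Spent: ${budget.spent:.4f}, Remaining: ${budget.remaining:.4f}")
# Spent: $0.0125, Remaining: $49.9875
```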

ℹ️ Near-zero latency

Budget checks run in-memory (~0.1ms). No network calls, no added latency.

⚠️ Prompt logging (opt-in)

By default, AgentKavach does not capture or store prompt text. To enable prompt logging for debugging and audit, set save_prompts=True in the constructor. When disabled, the dashboard shows Prompt logging disabled in the events table.

What Just Happened? #

When you call guard.create(), the SDK executes a precise sequence:

  1. Pre-flight budget check — the engine verifies the remaining budget can cover the estimated cost. If not, BudgetExceededError is raised before the LLM call is made.
  2. LLM call — the request is forwarded to the provider (OpenAI, Anthropic, Google, or Mistral) using your llm_key.
  3. Post-flight cost tracking — actual token usage is recorded and the budget is decremented in memory.
  4. Telemetry export — if an API key is configured, usage data is sent to the AgentKavach backend via OpenTelemetry for dashboard visibility.
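The four steps above can be sketched end-to-end as a single guarded call. This is a simplified model with a stubbed provider and hook functions; the names and the flat cost estimate are assumptions, not the SDK's internals:

```python
class BudgetExceededError(Exception):
    pass

def guarded_call(remaining_usd, estimated_cost, call_llm, record, export):
    # 1. Pre-flight: refuse before spending if the estimate won't fit.
    if estimated_cost > remaining_usd:
        raise BudgetExceededError(
            f"estimated ${estimated_cost:.4f} exceeds "
            f"remaining ${remaining_usd:.4f}"
        )
    # 2. LLM call: forward the request to the provider.
    response, actual_cost = call_llm()
    # 3. Post-flight: decrement the budget by actual usage.
    remaining_usd = record(remaining_usd, actual_cost)
    # 4. Telemetry: export usage for dashboard visibility.
    export(actual_cost)
    return response, remaining_usd

# Stubbed provider and hooks for illustration.
resp, left = guarded_call(
    remaining_usd=50.0,
    estimated_cost=0.02,
    call_llm=lambda: ("Hello!", 0.0125),
    record=lambda rem, cost: rem - cost,
    export=lambda cost: None,
)
```

Note the ordering: the pre-flight check raises before any provider traffic, so a blown budget never results in a billed request.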

Next Steps #

  • Budgets — daily, monthly, total, and shared budget types
  • Alerts — Slack, email, PagerDuty, and webhook notifications
  • Guardrails — token limits, call caps, runtime limits, and loop detection
  • Providers — OpenAI, Anthropic, Google, and Mistral integration guides