Debug Your AI Agents

Stop guessing why your AI agent did that. Opswald captures every trace, span, and decision so you can debug with confidence.

Why Opswald?

Your AI agent made a bad decision. A customer got a wrong answer. A tool call failed silently. Now what?

Opswald records every step your agent takes — every LLM call, every tool invocation, every decision branch — so you can trace exactly what happened and why.

Trace Everything

Capture full execution traces across LLM calls, tool invocations, and custom logic. See exactly what your agent did.

Replay Failures

Step through agent runs span by span. Find the exact moment things went wrong and understand why.

Decision Graphs

Visualize how your agent made decisions with interactive decision trees in the dashboard.

Zero Code Setup

Use the Opswald proxy to instrument OpenAI and Anthropic calls without changing a single line of code.

Quick Look

Point your LLM client at the Opswald proxy instead of the provider directly. Add your Opswald API key as a header. That’s it.

import openai

client = openai.OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.opswald.com/openai",        # swap the URL
    default_headers={"X-Opswald-Key": "ops_your_key"},  # add your Opswald key
)

# This call is now automatically traced — same code, full visibility
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the return policy?"}],
)

Open the dashboard and see your trace appear in real time.
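The same swap works for Anthropic. As a sketch of the raw request shape, here is a stdlib-only version that builds (but does not send) a Messages API call through the proxy. The /anthropic path is assumed by analogy with the OpenAI example; check the dashboard for the exact URL.

```python
import json
import urllib.request

# Assumed proxy path for Anthropic traffic, mirroring /openai above
OPSWALD_PROXY = "https://proxy.opswald.com/anthropic"

body = json.dumps({
    "model": "claude-sonnet-4-20250514",  # any Anthropic model id
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "What is the return policy?"}],
}).encode()

# Same request you would send to Anthropic directly, plus the Opswald key header
req = urllib.request.Request(
    OPSWALD_PROXY + "/v1/messages",
    data=body,
    headers={
        "x-api-key": "sk-ant-your-anthropic-key",  # your provider key, as usual
        "anthropic-version": "2023-06-01",
        "X-Opswald-Key": "ops_your_key",           # Opswald tracing key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; every call through the proxy is traced
```

Official SDK clients that accept a base_url override (like the OpenAI example above) need only the URL swap and the extra header; no request plumbing required.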

Get Started

  1. First trace in 2 minutes — Swap one URL, see your first trace
  2. Understand the concepts — Traces, spans, and how they fit together
  3. Go deeper with the SDK — Custom spans, tool calls, decision tracking