Quick Start

The Fastest Way: Proxy (Zero Code)

Change one URL. See every LLM call your agent makes. No SDK, no instrumentation code.

1. Get Your API Key

Sign up at app.opswald.com and copy your API key from the settings page.

2. Point Your LLM Client at the Proxy

The idea is simple: instead of sending requests directly to OpenAI or Anthropic, you point your client at the Opswald proxy. Two changes:

  1. Set base_url to https://proxy.opswald.com/{provider}/v1 — the proxy forwards your request to the real API and captures a full trace
  2. Add your Opswald API key as a header (X-Opswald-Key) — so we know where to store your trace

Your existing API key, model, and parameters stay exactly the same.

import openai

client = openai.OpenAI(
    api_key="sk-your-openai-key",  # your normal OpenAI key
    base_url="https://proxy.opswald.com/openai/v1",
    default_headers={
        "X-Opswald-Key": "ops_your_opswald_key"
    }
)

# This call is now automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the return policy?"}]
)
print(response.choices[0].message.content)

That’s it. One line changed (base_url), one header added. Your app works exactly the same — but now every call is captured.
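The same pattern works for any supported provider via the {provider} path segment described above. A minimal sketch of building the proxy URL (the "anthropic" segment here is an assumption extrapolated from that pattern, not confirmed by this page):

```python
def proxy_base_url(provider: str) -> str:
    """Build the Opswald proxy base URL for a given provider."""
    return f"https://proxy.opswald.com/{provider}/v1"

# Point each client at the proxy instead of the provider's API:
print(proxy_base_url("openai"))     # https://proxy.opswald.com/openai/v1
print(proxy_base_url("anthropic"))  # https://proxy.opswald.com/anthropic/v1
```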

3. View Your Trace

Open app.opswald.com and go to Traces. You’ll see your call with:

  • Model, provider, and parameters
  • Input and output content
  • Token counts and latency
  • Full request/response data

4. Add Context (Optional)

Group related calls with session and user headers:

client = openai.OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.opswald.com/openai/v1",
    default_headers={
        "X-Opswald-Key": "ops_your_opswald_key",
        "X-Opswald-Session": "chat-session-42",   # group calls per conversation
        "X-Opswald-User": "user-123",             # identify the end user
        "X-Opswald-Trace": "support-agent-flow"   # name your trace
    }
)
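Because these headers are plain strings, you can assemble them per conversation instead of hard-coding them. A small helper for that (hypothetical, not part of any Opswald SDK) that skips any context field you don’t set:

```python
def opswald_headers(api_key, session=None, user=None, trace=None):
    """Build the Opswald proxy header dict, omitting unset context fields."""
    headers = {"X-Opswald-Key": api_key}
    if session:
        headers["X-Opswald-Session"] = session
    if user:
        headers["X-Opswald-User"] = user
    if trace:
        headers["X-Opswald-Trace"] = trace
    return headers

# Build headers for one user's conversation:
headers = opswald_headers(
    "ops_your_opswald_key", session="chat-session-42", user="user-123"
)
```

The resulting dict can be passed as default_headers when constructing the client.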

Going Deeper: SDK Instrumentation

When you need more than LLM call capture — custom spans, tool calls, decision tracking — use the SDK alongside or instead of the proxy.

pip install opswald

Then instrument your agent code:

import opswald

opswald.init(api_key='ops_your_opswald_key')

prompt = 'What is the return policy?'

with opswald.trace('support-agent-run') as t:
    with opswald.span('classify-intent', kind='llm_call', provider='openai', model='gpt-4o') as s:
        s.set_input({'prompt': prompt})
        response = call_openai(prompt)
        s.set_output({'response': response})
        s.set_tokens(input_tokens=32, output_tokens=18)

    with opswald.span('lookup-policy', kind='tool_call') as s:
        s.set_input({'tool': 'knowledge_base', 'query': 'return policy'})
        result = knowledge_base.search('return policy')
        s.set_output({'result': result})

Note: the SDK defaults to https://api.opswald.com — no need to specify base_url unless you’re self-hosting.

Proxy vs SDK: When to Use What

                Proxy                                SDK
Setup           Change 1 URL                         Install package + instrument code
Captures        All LLM API calls automatically      Whatever you instrument
Custom spans    No                                   Yes — tool calls, decisions, custom logic
Best for        Quick start, production monitoring   Deep debugging, complex agent flows

Combine? ✅ Use both — proxy for LLM calls, SDK for custom spans

Next Steps