Quick Start
The Fastest Way: Proxy (Zero Code)
Change one URL. See every LLM call your agent makes. No SDK, no instrumentation code.
1. Get Your API Key
Sign up at app.opswald.com and copy your API key from the settings page.
2. Point Your LLM Client at the Proxy
The idea is simple: instead of sending requests directly to OpenAI or Anthropic, you point your client at the Opswald proxy. Two changes:
- Set `base_url` to `https://proxy.opswald.com/{provider}/v1` — the proxy forwards your request to the real API and captures a full trace
- Add your Opswald API key as the `X-Opswald-Key` header — so we know where to store your trace
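The routing contract above can be sketched in a few lines. This is a hypothetical illustration, not the proxy's actual implementation: the helper names (`proxy_base_url`, `split_headers`) are made up, and the assumption that the proxy separates `X-Opswald-*` metadata from the headers it forwards upstream is ours.

```python
def proxy_base_url(provider: str) -> str:
    # Build the per-provider proxy URL described above, e.g. "openai" or "anthropic".
    return f"https://proxy.opswald.com/{provider}/v1"

def split_headers(headers: dict) -> tuple[dict, dict]:
    # Hypothetical sketch: X-Opswald-* headers carry trace metadata;
    # everything else (auth, content type) goes through to the real API.
    opswald = {k: v for k, v in headers.items() if k.lower().startswith("x-opswald-")}
    forwarded = {k: v for k, v in headers.items() if not k.lower().startswith("x-opswald-")}
    return opswald, forwarded

meta, upstream = split_headers({
    "Authorization": "Bearer sk-your-openai-key",
    "X-Opswald-Key": "ops_your_opswald_key",
})
# meta holds the Opswald metadata; upstream holds what the provider sees.
```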
Your existing API key, model, and parameters stay exactly the same.
**Python (OpenAI)**

```python
import openai

client = openai.OpenAI(
    api_key="sk-your-openai-key",  # your normal OpenAI key
    base_url="https://proxy.opswald.com/openai",
    default_headers={"X-Opswald-Key": "ops_your_opswald_key"},
)

# This call is now automatically traced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the return policy?"}],
)
print(response.choices[0].message.content)
```

**Python (Anthropic)**

```python
import anthropic

client = anthropic.Anthropic(
    api_key="sk-ant-your-key",
    base_url="https://proxy.opswald.com/anthropic",
    default_headers={"X-Opswald-Key": "ops_your_opswald_key"},
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is the return policy?"}],
)
```

**TypeScript (OpenAI)**

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://proxy.opswald.com/openai',
  defaultHeaders: { 'X-Opswald-Key': 'ops_your_opswald_key' },
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the return policy?' }],
});
```

**curl**

```bash
curl https://proxy.opswald.com/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-Opswald-Key: ops_your_opswald_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is the return policy?"}]
  }'
```

That’s it. One line changed (`base_url`), one header added. Your app works exactly the same — but now every call is captured.
3. View Your Trace
Open app.opswald.com → Traces. You’ll see your call with:
- Model, provider, and parameters
- Input and output content
- Token counts and latency
- Full request/response data
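Because each trace records token counts and latency, simple roll-ups are easy once you export the data. A minimal sketch over hypothetical trace records — the field names here are assumed for illustration, not the actual export schema:

```python
# Hypothetical exported trace records, mirroring the fields listed above.
traces = [
    {"model": "gpt-4o", "input_tokens": 32, "output_tokens": 18, "latency_ms": 850},
    {"model": "gpt-4o", "input_tokens": 120, "output_tokens": 64, "latency_ms": 1430},
]

# Total tokens across all calls, and average latency per call.
total_tokens = sum(t["input_tokens"] + t["output_tokens"] for t in traces)
avg_latency_ms = sum(t["latency_ms"] for t in traces) / len(traces)
print(total_tokens, avg_latency_ms)  # 234 1140.0
```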
4. Add Context (Optional)
Group related calls with session and user headers:
```python
client = openai.OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.opswald.com/openai",
    default_headers={
        "X-Opswald-Key": "ops_your_opswald_key",
        "X-Opswald-Session": "chat-session-42",   # group calls per conversation
        "X-Opswald-User": "user-123",             # identify the end user
        "X-Opswald-Trace": "support-agent-flow",  # name your trace
    },
)
```

Going Deeper: SDK Instrumentation
When you need more than LLM call capture — custom spans, tool calls, decision tracking — use the SDK alongside or instead of the proxy.
**Python**

```bash
pip install opswald
```

```python
import opswald

opswald.init(api_key='ops_your_opswald_key')

prompt = 'What is the return policy?'

with opswald.trace('support-agent-run') as t:
    with opswald.span('classify-intent', kind='llm_call', provider='openai', model='gpt-4o') as s:
        s.set_input({'prompt': prompt})
        response = call_openai(prompt)
        s.set_output({'response': response})
        s.set_tokens(input_tokens=32, output_tokens=18)

    with opswald.span('lookup-policy', kind='tool_call') as s:
        s.set_input({'tool': 'knowledge_base', 'query': 'return policy'})
        result = knowledge_base.search('return policy')
        s.set_output({'result': result})
```

**TypeScript**

```bash
npm install opswald
```

```typescript
import { init, trace, span, shutdown } from 'opswald';

init({ apiKey: 'ops_your_opswald_key' });

const prompt = 'What is the return policy?';

await trace('support-agent-run', {}, async (t) => {
  await span('classify-intent', { kind: 'llm_call', provider: 'openai', model: 'gpt-4o' }, async (s) => {
    s.setInput({ prompt });
    const response = await callOpenAI(prompt);
    s.setOutput({ response });
    s.setTokens(32, 18);
  });

  await span('lookup-policy', { kind: 'tool_call' }, async (s) => {
    s.setInput({ tool: 'knowledge_base', query: 'return policy' });
    const result = await knowledgeBase.search('return policy');
    s.setOutput({ result });
  });
});

await shutdown();
```

Note: the SDK defaults to `https://api.opswald.com` — no need to specify `base_url` unless you’re self-hosting.
Proxy vs SDK: When to Use What
| | Proxy | SDK |
|---|---|---|
| Setup | Change 1 URL | Install package + instrument code |
| Captures | All LLM API calls automatically | Whatever you instrument |
| Custom spans | No | Yes — tool calls, decisions, custom logic |
| Best for | Quick start, production monitoring | Deep debugging, complex agent flows |
| Combine? | ✅ Use both — proxy for LLM calls, SDK for custom spans | |
Next Steps
- Proxy Configuration — Content filtering, self-hosted deployment
- Core Concepts — Understand traces, spans, and decisions
- Python SDK — Full Python reference
- TypeScript SDK — Full TypeScript reference
- Dashboard Overview — Navigate the debugging dashboard