Wire Phoenix tracing and a 4-layer eval harness into an agent, with CI that blocks regressions.
What you'll build
Who it's for
who ship agents to production with print statements and hope, then can't explain what happened when something breaks
who bolted OpenAI calls onto an existing service and now can't pinpoint which tool call caused the latency spike or the surprise bill
who need structured logs and trace propagation across an LLM tool chain, the same way they already have them for microservices
FAQ
Do I need prior OpenTelemetry experience?
No. The course introduces traces, spans, and attributes from first principles, then walks through wiring them up inside a real Python agent. If you have used OpenTelemetry before, you can skim the early lessons and focus on the LLM-specific attributes.
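If the terms are new: a trace groups all the work done for one request, each span is one timed unit inside it, and attributes are key-value metadata on a span. A toy sketch in plain Python — deliberately not the OpenTelemetry SDK — of just those three concepts (the `llm.*` attribute names follow the OpenInference conventions the course uses, but treat the exact keys here as illustrative):

```python
import time
import uuid

class Span:
    """Toy span: a named, timed unit of work with key-value attributes.
    Illustrative only -- real code would use the OpenTelemetry SDK."""
    def __init__(self, name, trace_id):
        self.name = name
        self.trace_id = trace_id      # shared id groups spans into one trace
        self.attributes = {}          # metadata: model name, token counts, ...
        self.start = time.monotonic()
        self.end = None

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def finish(self):
        self.end = time.monotonic()

trace_id = uuid.uuid4().hex           # one trace per agent request
span = Span("llm.call", trace_id)     # one span per unit of work inside it
span.set_attribute("llm.model_name", "gpt-4o-mini")      # illustrative keys
span.set_attribute("llm.token_count.total", 118)
span.finish()
print(span.name, span.attributes)
```

An agent request typically produces a handful of such spans (router, tool call, LLM call), all sharing one trace id, which is what lets a backend draw the waterfall view.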
Why Phoenix? Am I locked into a vendor?
Phoenix runs locally, stores traces on disk, and speaks the OpenTelemetry protocol. You can swap it for any OTel-compatible backend (Langfuse, Honeycomb, Datadog) without changing your instrumentation code. The course teaches the wiring, not a vendor.
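The swap described above amounts to changing one string: the OTLP exporter endpoint. A minimal sketch of that idea (plain Python; Phoenix's local default is real, the other endpoint URLs are illustrative and worth checking against each vendor's docs):

```python
# Changing backends changes the exporter endpoint, not the instrumentation.
# In real code this string goes to the OTLP span exporter, or into the
# standard OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable.
BACKENDS = {
    "phoenix":   "http://localhost:6006/v1/traces",   # Phoenix local default
    "honeycomb": "https://api.honeycomb.io/v1/traces",  # illustrative
    "langfuse":  "https://cloud.langfuse.com/api/public/otel/v1/traces",  # illustrative
}

def exporter_endpoint(backend: str) -> str:
    """Look up where spans get shipped; everything upstream stays the same."""
    return BACKENDS[backend]

print(exporter_endpoint("phoenix"))
```

Because the instrumentation emits standard OTel spans, nothing in the agent code references the backend at all.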
Can I use a model provider other than OpenAI?
Yes. The workshop uses an OpenAI-compatible client, so anything that speaks the OpenAI API (including OpenRouter, Fireworks, and local vLLM) works without changes. The OpenInference instrumentation hooks the client, not the endpoint.
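"OpenAI-compatible" just means the same routes and JSON shape, so only the base URL (and API key) differ between providers. A stdlib sketch of that — `chat_request` is a hypothetical helper that builds the request an OpenAI-style client would send, and the keys are placeholders:

```python
import json

def chat_request(base_url, api_key, model, messages):
    """Build the HTTP request an OpenAI-style client would send.
    Nothing here is provider-specific beyond base_url -- which is why
    instrumentation hooked on the client sees the same spans regardless
    of which endpoint the request goes to."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": json.dumps({"model": model, "messages": messages}),
    }

# Same call shape against three endpoints (last one: vLLM's local default):
for base in ["https://api.openai.com/v1",
             "https://openrouter.ai/api/v1",
             "http://localhost:8000/v1"]:
    req = chat_request(base, "sk-placeholder", "gpt-4o-mini",
                       [{"role": "user", "content": "hi"}])
    print(req["url"])
```

Swapping providers is therefore a one-line change to the client's `base_url`, and the traces keep the same span structure.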
Do I need a Serper API key?
No. The routing agent uses Serper for web search; without a key the web route is skipped and the agent still works for SQL and RAG, so you can complete every lesson and see every span type.
Save your seat
4 hours. Your code. Live feedback from Param.
Can't make it live? Registration includes lifetime access to the self-paced course.
Prefer to learn anytime? Take the self-paced course instead.