Live workshop

Production Evals and Observability

Wire Phoenix tracing and a 4-layer eval harness into an agent, with CI that blocks regressions.

Next session
Saturday, July 18
Level
Advanced
Where
Online (Zoom)

What you'll build

Walk away with working code.

  • Explain why text logs fall apart for nondeterministic LLM applications
  • Emit JSON logs with a request ID that survives across tool calls and background tasks
  • Start Arize Phoenix locally and wire an OpenTelemetry tracer into a Python service
  • Auto-instrument OpenAI calls with OpenInference and read the span tree
  • Wrap any tool with a @trace decorator that records tokens, cost, and duration
  • Replay a failing trace in the Phoenix UI and alert on latency and error rate
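Two of the bullets above — JSON logs carrying a request ID, and a `@trace` decorator — can be sketched in pure stdlib Python. This is an illustrative sketch, not the workshop's actual code: the names `request_id` and `search` are made up, and the real decorator also records tokens and cost.

```python
import functools
import json
import logging
import time
import uuid
from contextvars import ContextVar

# A ContextVar carries the request ID across nested tool calls (and across
# awaits in async code) without threading it through every signature.
request_id: ContextVar[str] = ContextVar("request_id", default="-")
log = logging.getLogger("agent")

def trace(fn):
    """Emit one JSON log line per call with the request ID and duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            log.info(json.dumps({
                "request_id": request_id.get(),
                "tool": fn.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            }))
    return wrapper

@trace
def search(query: str) -> str:
    # Stand-in for a real tool call.
    return f"results for {query}"

request_id.set(uuid.uuid4().hex)  # set once at the edge of the request
search("phoenix tracing")
```

Because the ID lives in a `ContextVar` rather than a global, concurrent requests each see their own value.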

Who it's for

Is this workshop for you?

AI engineers

who ship agents into production with print statements and hope, then cannot explain what happened when something breaks

Backend engineers

who bolted OpenAI calls onto an existing service and now cannot pinpoint which tool caused the latency spike or the bill surprise

Platform and SRE engineers

who need structured logs and trace propagation across an LLM tool chain the same way they already have it for microservices

FAQ

Common questions.

  • Do I need to know OpenTelemetry already?

    No. The course introduces traces, spans, and attributes from first principles, then walks through wiring them up inside a real Python agent. If you have used OpenTelemetry before, you can skim the early lessons and focus on the LLM-specific attributes.

  • Why Arize Phoenix and not a hosted tool?

    Phoenix runs locally, stores traces on disk, and speaks the OpenTelemetry protocol. You can swap it for any OTel-compatible backend (Langfuse, Honeycomb, Datadog) without changing your instrumentation code. The course teaches the wiring, not a vendor.
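In practice the swap comes down to where the standard OTLP exporter points. A sketch using the spec-defined environment variable (the endpoint URLs are illustrative, not the workshop's exact values):

```shell
# Instrumentation code stays identical; only the exporter target changes.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:6006"       # local Phoenix
# export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example.com"  # any OTLP-compatible backend
```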

  • Will this work with my LLM provider?

    Yes. The workshop uses an OpenAI-compatible client, so anything that speaks the OpenAI API (including OpenRouter, Fireworks, and local vLLM) works without changes. The OpenInference instrumentation hooks the client, not the endpoint.

  • Do I need a web search API key?

    The routing agent uses Serper for web search. Without a key the web route is skipped and the agent still works for SQL and RAG, so you can complete every lesson and see every span type.

Save your seat

Ship something real by Saturday.

4 hours. Your code. Live feedback from Param.

$79 per seat (regularly $149)
Save 47% with regional pricing
4 hours live · advanced
Or get this plus every workshop replay with Pro
Add to calendar: Google · Apple / iCal

Can't make it live? Registration includes lifetime access to the self-paced course.

Prefer to learn anytime? Take the self-paced course instead.