Rebuild a baseline RAG pipeline as a LangGraph graph of graphs with query rewriting, sub-retrieval fan-in, Presidio PII scrubbing, and RAGAS scoring. Add an offline eval harness and stream live metric scores so every change you ship is measurable.
Message a mentor about fit, prerequisites, or where to start. Replies come on WhatsApp, usually within a day.
Upgrade a baseline RAG pipeline with LLM query rewriting, sub-graph decomposition in LangGraph, PII scrubbing with Presidio, and RAGAS evaluation. Build an offline eval harness and stream live metric scores with SSE.
Add query rewriting, sub-graphs, PII scrubbing, and RAGAS scoring to a production RAG pipeline.
What you'll ship
What you'll learn
Curriculum
Baseline RAG pipeline
Stand up a ChromaDB-backed retriever, generate a first answer, then watch the baseline fail on realistic questions
LLM query rewriter
Generate diverse reformulations of each user question and fan them into parallel retrievals
Sub-graph decomposition
Build a parent LangGraph that calls a compiled child retriever graph for every rewritten query
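The parent/child pattern above can be sketched without LangGraph itself: a compiled child graph is just a callable you invoke once per rewritten query, then fan the results back in. All names here (`child_retriever`, the tiny corpus) are illustrative stand-ins, not the workshop's code.

```python
from typing import Callable

# Hypothetical compiled child graph: takes one query, returns ranked chunk ids.
def child_retriever(query: str) -> list[str]:
    corpus = {
        "reset password": ["doc-auth-2", "doc-auth-7"],
        "password recovery": ["doc-auth-7", "doc-faq-1"],
    }
    return corpus.get(query, [])

def parent_graph(rewrites: list[str],
                 retrieve: Callable[[str], list[str]]) -> list[str]:
    # Fan out: call the child graph once per rewritten query.
    # Fan in: merge results, de-duplicating while keeping first-seen order.
    seen: set[str] = set()
    merged: list[str] = []
    for q in rewrites:
        for chunk_id in retrieve(q):
            if chunk_id not in seen:
                seen.add(chunk_id)
                merged.append(chunk_id)
    return merged

print(parent_graph(["reset password", "password recovery"], child_retriever))
# → ['doc-auth-2', 'doc-auth-7', 'doc-faq-1']
```

In LangGraph the same shape holds: the child graph is compiled once and invoked as a node inside the parent, so retrieval logic stays testable in isolation.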
PII scrubbing with Presidio
Mask PII before any LLM call and restore the real entities on the way out so the caller never sees placeholders
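The mask-then-restore round trip looks like this in miniature. Presidio's analyzer and anonymizer do the real entity detection; this stdlib sketch uses a single email regex as a stand-in to show the placeholder mapping and the restore step the caller never sees.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str) -> tuple[str, dict[str, str]]:
    # Replace each email with a stable placeholder and remember the mapping.
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = m.group(0)
        return placeholder
    return EMAIL.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    # Swap the placeholders back for the real entities on the way out.
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = mask("Contact ada@example.com about the invoice.")
# masked == "Contact <EMAIL_0> about the invoice."
answer = "I emailed <EMAIL_0> today."   # what the LLM might return
print(restore(answer, mapping))          # → I emailed ada@example.com today.
```

The mapping lives only on the caller's side of the LLM boundary, which is the whole point: the model sees placeholders, the user sees real data.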
RAGAS evaluation metrics
Score each answer with faithfulness, answer relevance, and context precision, and learn to read the scores without being fooled
Offline evaluation harness
Build a gold set, a batch runner, and regression thresholds so every change is measurable
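A regression threshold is a small idea with outsized value: average each metric over the gold set and fail the run if any metric drops below its floor. The metric names and floor values below are illustrative, not the workshop's defaults.

```python
from statistics import mean

# Illustrative floors; the real harness scores with RAGAS.
THRESHOLDS = {"faithfulness": 0.70, "answer_relevance": 0.65}

def regression_gate(scores: list[dict[str, float]],
                    thresholds: dict[str, float]) -> dict[str, bool]:
    # Average each metric across the batch and compare to its floor.
    return {
        metric: mean(s[metric] for s in scores) >= floor
        for metric, floor in thresholds.items()
    }

batch = [
    {"faithfulness": 0.8, "answer_relevance": 0.9},
    {"faithfulness": 0.7, "answer_relevance": 0.5},
]
print(regression_gate(batch, THRESHOLDS))
# → {'faithfulness': True, 'answer_relevance': True}
```

Wire this into CI and a prompt tweak that quietly hurts faithfulness turns into a red build instead of a production incident.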
Streaming with live eval scores
Stream pipeline stages, rewrites, sources, the final answer, and eval scores with Server-Sent Events
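The wire format for all of those events is plain Server-Sent Events: a named `event:` line, a `data:` line with a JSON payload, and a blank-line terminator. The event names below are illustrative.

```python
import json

def sse_event(event: str, data: dict) -> str:
    # One SSE frame: event name, JSON payload, blank-line terminator.
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Each pipeline stage emits its own frame as it completes.
print(sse_event("rewrite", {"query": "how do refunds work?"}), end="")
print(sse_event("score", {"faithfulness": 0.82}), end="")
```

Because each frame is self-delimiting, the browser's `EventSource` can dispatch rewrites, sources, and eval scores to separate handlers as they arrive.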
Who it's for
whose RAG prototype looks great on cherry-picked questions but falls apart on real user queries
who need to ship RAG into production with privacy and measurable quality
who want an offline eval harness so every prompt or retriever change is measurable
FAQ
Yes. This workshop picks up where a baseline RAG pipeline leaves off. If you have never embedded text, stored it in a vector database, and retrieved chunks, start with the rag-fundamentals or rag-reranking-chromadb workshop first.
The repo ships a provider abstraction that supports OpenRouter, Fireworks, Gemini, and OpenAI. You only need one API key to follow along. Fireworks and Gemini have generous free tiers.
The scorer tries RAGAS first and falls back to a lightweight heuristic when RAGAS is not installed or fails. You can complete the workshop with either path and still see meaningful score trends.
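One plausible shape for such a fallback (assumed here, not the repo's actual heuristic) is token overlap between the answer and the retrieved contexts: crude, but it trends the same direction as faithfulness when a change causes a large regression.

```python
def heuristic_faithfulness(answer: str, contexts: list[str]) -> float:
    # Fraction of answer tokens that also appear in the retrieved contexts.
    # Not a substitute for RAGAS, just a cheap trend signal.
    answer_tokens = set(answer.lower().split())
    context_tokens = set(" ".join(contexts).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

score = heuristic_faithfulness(
    "refunds take five days",
    ["Refunds take five business days to process."],
)
```

Either path produces a number per answer, which is all the regression thresholds and the live score stream need.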
No. Embeddings run on CPU through sentence-transformers, and all LLM calls go to hosted APIs. The ChromaDB store is local and file-based.
Pricing
Subscribe to Pro for every paid course, or buy just this one.
Unlock this course and every paid course plus workshop replays. One subscription.
You save 54% with regional pricing
One-time purchase. Lifetime access to every lesson, exercise, and update.
You save 47% with regional pricing
Still deciding? Ask Param a question
Advanced RAG with query rewriting and evaluation
$79 one-time