Classic NLP classifiers lock you into the labels they were trained on. An LLM-powered pipeline lets you change labels, add languages, and introduce new emotions with a single prompt edit. You will build the full service: sentiment, entities, keywords, summary, and emotion, each streamed per task over SSE.
Message a mentor about fit, prerequisites, or where to start. Replies come on WhatsApp, usually within a day.
Build a multi-task NLP service that classifies sentiment, extracts entities and keywords, summarizes content, and detects emotion. Each task is a focused LLM prompt, orchestrated through a registry, optionally wired into a LangGraph StateGraph, and streamed over Server-Sent Events.
What you'll ship
What you'll learn
Curriculum
Single sentiment task
Wire one classify_sentiment prompt end to end, then tighten the output with few-shot examples
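The shape of that first lesson can be sketched like this. A minimal sketch: the label set, example texts, and function name are illustrative, not the workshop's exact prompt.

```python
# Few-shot examples that tighten the classifier's output format.
# Labels and example texts are illustrative placeholders.
FEW_SHOT_EXAMPLES = [
    ("The checkout flow is flawless.", "positive"),
    ("Support never answered my ticket.", "negative"),
    ("The package arrived on Tuesday.", "neutral"),
]

def build_sentiment_prompt(text: str) -> str:
    """Assemble a classify_sentiment prompt, tightened with few-shot examples."""
    lines = ["Classify the sentiment as positive, negative, or neutral.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Text: {example}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)
```

Ending the prompt mid-pattern ("Sentiment:") nudges the model to complete with a bare label instead of a sentence.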
Structured JSON output
Lock responses into a Pydantic schema and add a repair pattern that survives fenced, truncated, and prose-wrapped LLM output
Task registry
Add entities, keywords, summary, and emotion behind a single registry so new tasks become one-line additions
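The registry pattern can be sketched as a decorator over a task dict; the names and stubbed task bodies below are illustrative, not the repo's API.

```python
import asyncio

# Registry mapping task names to async task functions.
TASKS = {}

def register(name: str):
    """Decorator: a new task is one function plus one decorator line."""
    def wrap(fn):
        TASKS[name] = fn
        return fn
    return wrap

@register("sentiment")
async def classify_sentiment(text: str) -> dict:
    # Real version calls the LLM; stubbed here for illustration.
    return {"label": "positive"}

@register("keywords")
async def extract_keywords(text: str) -> dict:
    return {"keywords": text.split()[:3]}

async def run_all(text: str) -> dict:
    """Run every registered task against the input."""
    return {name: await fn(text) for name, fn in TASKS.items()}
```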
Agent graph with LangGraph
Upgrade the dispatcher into a LangGraph StateGraph, with a plain-async fallback when LangGraph is not installed
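The detect-and-fall-back idea looks roughly like this. A minimal sketch: the StateGraph construction itself is omitted, and `run_plain` is an illustrative name, not the course's module API.

```python
import asyncio

# Detect the optional dependency at import time; the graph module
# degrades gracefully when LangGraph is absent.
try:
    from langgraph.graph import StateGraph
    HAVE_LANGGRAPH = True
except ImportError:
    HAVE_LANGGRAPH = False

async def run_plain(text: str, tasks: dict) -> dict:
    """Plain-async fallback: runs the same tasks, in the same order,
    that the StateGraph pipeline would."""
    results = {}
    for name, fn in tasks.items():
        results[name] = await fn(text)
    return results
```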
Server-Sent Events streaming
Push per-task progress events to the client as each task finishes, so long requests feel interactive
Selective task picker
Let the caller specify a subset of tasks per request, with safe default fallbacks when the input is missing or invalid
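The validation logic is small; a sketch, with a placeholder task list standing in for the real registry.

```python
# Default task set used when the request omits or garbles the picker.
DEFAULT_TASKS = ("sentiment", "entities", "keywords", "summary", "emotion")

def pick_tasks(requested, available=DEFAULT_TASKS) -> list:
    """Validate a caller-supplied task subset; fall back to all tasks
    when the input is missing, malformed, or names nothing we know."""
    if not isinstance(requested, (list, tuple)):
        return list(available)
    valid = [t for t in requested if t in available]
    return valid or list(available)
```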
Resilient errors
Isolate per-task failures so a broken emotion classifier never takes down sentiment, entities, or summary
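Per-task isolation is a try/except around each task, so one failure becomes an error record instead of a request-wide crash. A sketch; the result shape is illustrative.

```python
import asyncio

async def run_isolated(name: str, fn, text: str) -> dict:
    """Run one task; convert any exception into an error record
    for that task alone."""
    try:
        return {"task": name, "ok": True, "result": await fn(text)}
    except Exception as exc:
        return {"task": name, "ok": False, "error": str(exc)}

async def run_all_isolated(text: str, tasks: dict) -> list:
    """A broken task never takes down its siblings."""
    return await asyncio.gather(
        *(run_isolated(name, fn, text) for name, fn in tasks.items())
    )
```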
Router upgrade and recap
Let an LLM pick the applicable tasks for each input, then ship the service and recap the whole system
Who it's for
who are stuck with scikit-learn or spaCy classifiers that cannot adapt to new labels without retraining
who need to add sentiment, summarization, and entity extraction to an existing product without hiring an ML team
who want a clean pattern for orchestrating many small LLM calls and streaming progress to the UI
FAQ
No. A general-purpose LLM with a focused prompt produces strong sentiment, emotion, and entity results across many domains. You trade a little latency for adaptability and a much smaller codebase.
The workshop repo uses a provider abstraction with OpenRouter as the default (free tier available), plus Fireworks, Gemini, and OpenAI. You only need one API key to follow along.
No. The agent graph module detects whether LangGraph is installed. If it is, you get a StateGraph. If not, the plain-async fallback runs the same tasks in the same order.
Pricing
Subscribe to Pro for every paid course, or buy just this one.
Unlock this course and every paid course plus workshop replays. One subscription.
You save 54% with regional pricing
One-time purchase. Lifetime access to every lesson, exercise, and update.
You save 47% with regional pricing
Still deciding? Ask Param a question
Sentiment classification with LLMs and few-shot prompting
$79 one-time