Frameworks hide the hard parts until they break. Build memory, tool calling, and autonomous loops from scratch with pure Python in 10 hours.
9 modules, 35 lessons total.
Build AI agents with pure Python. No LangChain. No CrewAI. No magic.
Chosen by engineers who want to understand the engine
You've stitched together 8 different AgentExecutors and RouterChains, but you have absolutely no idea what prompt is actually being sent to the LLM.
The agent just stopped working in production. Because the framework hides the core loop, debugging feels like guessing.
Your agent occasionally decides to hallucinate 40 tool calls in a row, draining your API budget and returning garbage.
You're asked to explain the system design to the security team, but you can't, because the framework abstractions are a black box.
Senior engineers aren't better because they know more syntax. They're better because they've built and debugged the machinery underneath.
We tear away the abstractions. You will build the core loops, memory systems, and tool dispatchers using nothing but pure Python and LiteLLM. No magic allowed.
From a stateless API call to an autonomous Office Manager.
You build a stateful loop that persists conversation history and handles context window limits.
You write raw JSON schemas and dynamic dispatchers so your LLM can execute Python functions.
You implement the Thought-Action-Observation loop that gives your agent reasoning capabilities.
You add hard limits, self-correction, and Human-in-the-Loop approval gates for production safety.
Understand the difference between stateless LLMs and stateful agents. Build your first agent with conversation memory using pure Python.
Get introduced to the workshop, understand what you will build, and set up your development environment
Discover why LLMs have no memory and understand the fundamental difference between stateless API calls and stateful agents
Build a SimpleAgent class that maintains conversation memory and creates the illusion of a stateful AI
Test your understanding of stateless vs stateful, agent memory, and the SimpleAgent pattern
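The core idea of that first module can be sketched in a few lines. This is a minimal illustration, not the workshop's actual code: the LLM itself is stateless, so the agent replays the full message history on every call. `call_llm` is a hypothetical stand-in for your model client (e.g. a LiteLLM completion wrapper), stubbed here so the sketch runs on its own.

```python
def call_llm(messages):
    # Stub: report how many turns the model "sees". Replace with a real API call.
    return f"(model saw {len(messages)} messages)"

class SimpleAgent:
    """Creates the illusion of state by re-sending the whole history."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def chat(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        reply = call_llm(self.messages)  # the entire history, every time
        self.messages.append({"role": "assistant", "content": reply})
        return reply

agent = SimpleAgent("You are a helpful assistant.")
agent.chat("Hi, I'm Ada.")
print(agent.chat("What's my name?"))
```

Because memory lives in `self.messages` rather than in the model, you can inspect, truncate, or persist it however you like.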
Learn the Thought-Action-Observation reasoning loop that enables agents to use tools and solve multi-step problems.
Discover why LLMs cannot do math or access real-time data, and why agents need external tools
Learn the ReAct pattern: Thought-Action-Observation cycle that enables LLMs to use external tools
Implement the complete ReAct agent loop with multi-step reasoning and tool execution
Test your understanding of the ReAct pattern with ordering exercises and a checkpoint quiz
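The Thought-Action-Observation cycle can be sketched as follows. This is an illustrative toy, not the course's implementation: the scripted `fake_llm` stands in for a real model, and the `Action: name[arg]` format is one common ReAct convention, not a fixed standard.

```python
import re

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # eval is fine for a toy

def fake_llm(transcript):
    # Scripted model: request a tool first, then answer once it sees a result.
    if "Observation:" not in transcript:
        return "Thought: I need to compute this.\nAction: calculator[6 * 7]"
    return "Thought: I have the result.\nFinal Answer: 42"

def react(question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", step)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg)  # run the tool, feed the result back
            transcript += f"Observation: {observation}\n"
    return None

print(react("What is 6 * 7?"))
```

Each pass through the loop appends the tool's observation to the transcript, so the model's next "thought" is grounded in what actually happened.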
Replace fragile regex parsing with native function calling using JSON schemas for reliable tool integration.
Understand why regex-based tool parsing is unreliable and why we need structured function calling
Define tool interfaces using JSON schemas so the LLM knows what functions are available and how to call them
Implement the complete function calling flow: pass schemas, receive tool_calls, execute functions, and return results
Test your understanding of JSON schemas, function calling flow, and tool_call_id linking
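A minimal sketch of that flow: the JSON schema tells the model which functions exist, and a dispatcher maps the model's tool call back onto a Python function. The `tool_call` dict below mimics the OpenAI-style shape LiteLLM returns; treat the exact field names as an assumption for illustration.

```python
import json

def get_weather(city):
    return f"Sunny in {city}"  # stub tool

TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

DISPATCH = {"get_weather": get_weather}

def execute_tool_call(tool_call):
    fn = DISPATCH[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    # The result message carries tool_call_id so the model can link
    # each result back to the call that produced it.
    return {"role": "tool",
            "tool_call_id": tool_call["id"],
            "content": fn(**args)}

fake_call = {"id": "call_1",
             "function": {"name": "get_weather",
                          "arguments": '{"city": "Tallinn"}'}}
result = execute_tool_call(fake_call)
print(result["content"])
```

Because arguments arrive as validated JSON rather than free text, there is nothing to regex-parse and nothing to guess.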
Build an autonomous agent that chains multiple tool calls in a while loop with safety limits.
Understand why agents need loops to handle multi-step tasks autonomously
Build an Agent class with a while loop that continues calling tools until the task is complete
Understand how tool results flow through the agent loop and inform each subsequent decision
Verify your understanding of autonomous agent execution with ordering and checkpoint exercises
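The autonomous loop itself is small. In this sketch (tool names and the scripted `fake_llm` are illustrative stand-ins for a real chat API), the agent keeps executing whatever tool the model requests until the model answers directly or a hard step limit trips:

```python
def list_rooms():
    return "Room A (free), Room B (booked)"

def book_room(room):
    return f"{room} booked"

TOOLS = {"list_rooms": list_rooms, "book_room": book_room}

def fake_llm(history):
    # Scripted model: look, then book, then report.
    tool_msgs = [m for m in history if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool": "list_rooms", "args": {}}
    if len(tool_msgs) == 1:
        return {"tool": "book_room", "args": {"room": "Room A"}}
    return {"answer": "Booked Room A for you."}

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # hard limit: never loop forever
        decision = fake_llm(history)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("step limit exceeded")

print(run_agent("Book me a free room"))
```

The `max_steps` ceiling is what stops a confused model from burning your API budget on 40 hallucinated tool calls in a row.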
Solve context window limits with sliding window and summarization memory strategies.
Understand why growing conversation history causes cost and context window issues
Implement the simplest memory strategy: keep only the last N messages
Use the LLM to compress old messages into a summary, preserving key facts while reducing tokens
Test your understanding of memory strategies with ordering and checkpoint exercises
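Both strategies fit in a few lines. A sketch under stated assumptions: the sliding window is exact, while the summarizer delegates compression to the model, stubbed here as a placeholder `summarize_llm`.

```python
def sliding_window(messages, keep_last=4):
    system, rest = messages[0], messages[1:]
    return [system] + rest[-keep_last:]  # always keep the system prompt

def summarize_llm(text):
    return f"Summary of {text.count(chr(10)) + 1} lines"  # stub for an LLM call

def summarize_memory(messages, keep_last=2):
    system, rest = messages[0], messages[1:]
    old, recent = rest[:-keep_last], rest[-keep_last:]
    if not old:
        return messages
    summary = summarize_llm("\n".join(m["content"] for m in old))
    # Old turns collapse into one summary message; recent turns stay verbatim.
    return [system, {"role": "system", "content": summary}] + recent

msgs = [{"role": "system", "content": "sys"}] + [
    {"role": "user", "content": f"turn {i}"} for i in range(6)]
print(len(sliding_window(msgs)))   # system + last 4 messages
print(len(summarize_memory(msgs))) # system + summary + last 2 messages
```

The trade-off: the window is cheap but forgets everything outside it, while summarization preserves key facts at the cost of an extra model call.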
Build modular agent systems with the router pattern for task classification and the chaining pattern for multi-step workflows.
Understand why a single agent handling all tasks leads to poor performance and maintainability
Build a router that classifies requests and dispatches them to specialized handler agents
Build multi-step workflows where the output of one step becomes the input to the next
Test your understanding of router and chaining patterns with exercises and a checkpoint
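Both patterns can be sketched together. Here `classify` is a stub for a cheap LLM classification call, and the handler names are illustrative:

```python
def classify(request):
    # Stub: a real router would ask a small model to pick a category.
    return "booking" if "book" in request.lower() else "general"

HANDLERS = {
    "booking": lambda r: f"[booking agent] handling: {r}",
    "general": lambda r: f"[general agent] handling: {r}",
}

def route(request):
    """Router: classify first, then dispatch to a specialist."""
    return HANDLERS[classify(request)](request)

def chain(request, steps):
    """Chaining: each step's output becomes the next step's input."""
    result = request
    for step in steps:
        result = step(result)
    return result

print(route("Book a meeting room"))
print(chain("draft", [lambda s: s + " -> reviewed",
                      lambda s: s + " -> sent"]))
```

Specialists stay small and testable; the router is the only piece that needs to know they all exist.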
Make agents robust with validation, the reflection pattern for self-correction, and retry loops.
Learn why you should never trust LLM outputs blindly and always validate before executing
Build validation functions that check LLM output against your system constraints before execution
Feed validation errors back to the LLM so it can correct its own mistakes in a retry loop
Test your understanding of validation, reflection, and self-correction patterns
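The validate-then-reflect cycle looks roughly like this. In this toy, the scripted model fails validation once and then corrects itself after seeing the error message; room names and helpers are illustrative, not from the course:

```python
VALID_ROOMS = {"Room A", "Room B"}

def validate(room):
    if room not in VALID_ROOMS:
        return f"Error: '{room}' is not a real room. Choose from {sorted(VALID_ROOMS)}."
    return None  # no error: output is safe to act on

attempts = iter(["Room Z", "Room A"])  # stub model: wrong, then right

def fake_llm(prompt):
    return next(attempts)

def book_with_reflection(task, max_retries=3):
    prompt = task
    for _ in range(max_retries):
        room = fake_llm(prompt)
        error = validate(room)
        if error is None:
            return f"Booked {room}"
        # Reflection: feed the validation error back so the model can retry.
        prompt = f"{task}\nYour last answer failed validation: {error}"
    raise RuntimeError("model could not produce a valid booking")

result = book_with_reflection("Book any free room")
print(result)
```

The key design choice: validation runs *before* execution, so a hallucinated room name costs a retry, not a broken booking.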
Add safety gates and approval workflows so humans can review and approve sensitive agent actions.
Understand why autonomous agents need human oversight for sensitive actions
Implement approval gates that pause the agent for human review before executing sensitive tools
Understand production HITL patterns and test your knowledge with a checkpoint
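An approval gate can be as simple as a check before dispatch. A minimal sketch, assuming a hypothetical `ask_human` hook; in production it might post to Slack or a review queue rather than read stdin, and here it is stubbed to deny everything so the example runs unattended:

```python
SENSITIVE = {"send_email", "delete_file"}

def ask_human(tool, args):
    # Stub: always deny. Wire up input()/Slack/a review queue in real use.
    return False

def execute(tool, args, tools):
    # Sensitive tools pause for explicit human approval before running.
    if tool in SENSITIVE and not ask_human(tool, args):
        return f"Blocked: human rejected '{tool}'"
    return tools[tool](**args)

tools = {"send_email": lambda to: f"Sent to {to}",
         "get_time": lambda: "09:00"}

print(execute("get_time", {}, tools))                    # runs freely
print(execute("send_email", {"to": "ceo@x.com"}, tools)) # needs approval
```

Read-only tools run autonomously; anything with side effects waits for a human, which is exactly the boundary the security team will ask you about.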
Integrate everything you learned into a complete Office Manager agent with memory, tools, loops, error handling, and human-in-the-loop safety.
Plan the architecture of a production-ready agent that combines the concepts from all eight previous modules
Implement the complete OfficeManager class with all integrated patterns
Run the complete Office Manager scenario and verify all integrated components work together
Review key concepts across all modules with flashcards and a timed quiz, then celebrate your achievement
See every system, every week, in detail before you decide.
Anyone can copy-paste a LangChain tutorial to summarize an article.
Stop trusting abstractions you can't debug. Build the foundation yourself.
I am the Head of Engineering at Jobbatical (EU Tech), with 8+ years of leadership and 15+ years of total experience in the software industry.
"Most engineers are not blocked by ability, but by lack of real system ownership."
This accelerator exists to give you what most jobs never will.
Guest Sessions From Engineers at
Live sessions on System Design, Career Growth, and Interview Preparation.
Invest in your career. It pays back 100x.
Stop trusting magic. Start building engineering systems.