Loading...
Stop reinventing the wheel. Get the definitive catalog of battle-tested agent architectures — from basic prompt chaining to metacognitive reasoning loops.
3 lessons
3 lessons
4 lessons
4 lessons
3 lessons
3 lessons
5 lessons
2 lessons
5 lessons
4 lessons
5 lessons
7 lessons
46 battle-tested agent patterns. The reference guide that didn't exist — until now.
Chosen by engineers designing production AI systems
You're building an agent system, but you chose the architecture based on a blog post you read rather than a rigorous evaluation of trade-offs.
You're trying to implement a specific pattern, but you're fighting the framework's abstractions instead of just writing the logic.
Your custom agent loop works for 80% of cases, but spectacularly fails on edge cases because you didn't implement proper state recovery.
You read the Tree of Thoughts paper, but when it came time to implement it, you realized you had no idea how to structure the code.
Senior engineers aren't better because they know more syntax. They're better because they've:
We've cataloged 46 distinct agent architectures. For every pattern, we provide the theory, the use cases, and the pure-Python implementation using LiteLLM.
From basic chains to frontier cognition.
Master prompt chaining, routing, and parallel execution for deterministic workflows.
Implement query rewriting, semantic routing, and self-correcting RAG pipelines.
Design systems with supervisors, sequential processes, and dynamic agent generation.
Build Tree of Thoughts, Reflexion, and Metacognitive controllers for the hardest tasks.
Get started with AI agentic patterns, set up your environment, and learn the foundational patterns: prompt chaining and routing.
Overview of the course, what you will learn, and environment setup
Chain multiple LLM calls sequentially, passing output from one step as input to the next
Classify inputs and route them to specialized handlers based on content or intent
Run tasks simultaneously for speed, reflect on outputs, and integrate external tools into your agents.
Run multiple LLM calls simultaneously and aggregate results for faster processing
Have an LLM review and improve its own output through iterative self-critique
Give your agent access to external tools and APIs through function calling
Decompose complex goals into plans, coordinate multiple specialized agents, and manage conversation memory.
Break complex goals into structured, executable plans with steps and dependencies
Design specialized agents with distinct roles that collaborate on complex tasks
Store and retrieve conversation history, user preferences, and context across sessions
Test your knowledge of the first 8 patterns before moving to more advanced topics
Connect to external services via MCP, set and monitor goals, handle exceptions gracefully, and incorporate human oversight.
Standardized protocol for connecting AI systems to external tools and data sources
Set measurable goals, track progress, and adapt strategy based on metrics
Build resilient agents with fallback strategies, retry logic, and graceful error recovery
Escalate low-confidence or sensitive decisions to human reviewers
Retrieve external knowledge with RAG, enable agents to communicate, and optimize resource usage.
Enhance LLM responses by retrieving relevant external knowledge before generation
Enable structured message passing between agents through a communication hub
Dynamically select models and strategies based on task complexity to reduce cost
Master chain-of-thought reasoning, self-correction, and structured problem decomposition.
Enable step-by-step reasoning to improve problem-solving accuracy
Have the LLM review its output, identify errors, and generate corrected versions
Break complex problems into manageable sub-problems with identified dependencies
Implement guardrails, evaluate outputs with LLM-as-judge, monitor performance, prioritize tasks, and explore new topics.
Implement input validation, content filtering, and output safety checks
Use an LLM to systematically evaluate response quality, safety, and accuracy
Track performance metrics, detect anomalies, and maintain system health
Intelligently prioritize tasks based on urgency, importance, and deadlines
Autonomously explore topics, generate hypotheses, and discover new knowledge areas
Use Pydantic for type-safe LLM outputs and take a mid-course assessment.
Enforce type safety and validate structured LLM outputs with Pydantic schemas
Comprehensive checkpoint covering patterns 1-22
Build agentic RAG systems, orchestrate workflows, compose subgraphs, manage state machines, and create recursive agents.
Build intelligent RAG that decides when to retrieve, what to retrieve, and assesses response quality
Orchestrate complex multi-step workflows with dependencies and error handling
Build modular, reusable workflow components that compose into larger systems
Manage agent behavior with clear states, event-driven transitions, and defined rules
Build agents that recursively decompose and solve problems with depth control
Execute code safely, rewrite queries for better retrieval, check relevancy, and process data for RAG pipelines.
Safely execute dynamically generated code with validation and sandboxing
Optimize and expand queries for better document retrieval
Filter and score retrieved content for relevance and quality
Clean, validate, and chunk data for RAG, plus anonymize PII for privacy
Master Plan-Execute patterns, ReAct loops, blackboard systems, and dual memory architectures.
Parse, validate, and execute structured plans with dependency tracking and adaptation
Interleave reasoning and action in a Think-Act-Observe loop for dynamic problem solving
Plan steps, execute them, and verify each result before proceeding
Collaborate through a shared memory repository with dynamic agent activation
Dual memory combining conversation history (episodic) and structured knowledge (semantic)
Explore cutting-edge patterns — Tree of Thoughts, meta-controllers, graph memory, RLHF, metacognition — and build a capstone multi-pattern pipeline.
Explore multiple reasoning paths systematically and find the optimal solution
Simulate actions mentally before executing and validate with safety checks
Route tasks to specialists and aggregate diverse perspectives
Store knowledge as a graph of entities and relationships for multi-hop reasoning
Build a self-improving loop where outputs are critiqued and high-quality examples are stored
Emergent behavior from simple local rules, and agents that reason about their own capabilities
Build a complete pipeline combining guardrails, routing, planning, RAG, ReAct, reflection, and evaluation
See every system, every week, in detail before you decide.
Anyone can call OpenAI and parse a JSON response.
Stop writing bespoke agent loops. Start applying battle-tested patterns.
I am the Head of Engineering at Jobbatical (EU Tech), with 8+ years of leadership and 15+ years of total experience in the software industry.
"Most engineers are not blocked by ability, but by lack of real system ownership."
This accelerator exists to give you what most jobs never will.
Guest Sessions From Engineers at
Live sessions on System Design, Career Growth, and Interview Preparation.
Invest in your career. It pays back 100x.
Stop guessing. Start architecting.