pip vs uv vs poetry for Python AI services
Your Docker build takes 4 minutes because of pip install
A new deploy kicks off. CI runs `pip install -r requirements.txt` inside a Docker build. 4 minutes later you get a green check. Half of that is pip resolving transitive dependencies for 80 AI packages: torch, transformers, langchain, pydantic, fastapi, the whole zoo. Every CI run, every engineer's laptop, every deploy. Multiply by 30 builds a day and your team is losing an hour per day to pip.
The fix is a better dependency manager. In 2026 the landscape is 3 real options: pip (the default, slowest, most compatible), poetry (lockfile-first, medium speed, great for libraries), and uv (Rust-based, 10-100x faster, the emerging standard for services). Picking the right one for an AI service is not a religious question, it is a build-time and deploy-time question.
This post is the practical comparison: install speeds on a real AI stack, lockfile behavior, Docker build times, and the specific reason uv is winning for production agent services.
Why does dependency management matter for AI services?
Because the AI stack is big. A production agent service typically pulls in 80-150 packages, including torch (2 GB), transformers, sentence-transformers, openai, anthropic, langchain, langgraph, langfuse, fastapi, uvicorn, pydantic, sqlalchemy, and a dozen utility libs. 3 specific pain points of a slow dependency manager:
- Docker build cache misses. Every time `requirements.txt` changes, you rebuild the pip install layer. 4 minutes per build, 30 builds a day, every day.
- Laptop setup friction. A new engineer clones the repo, runs `pip install`, waits 6 minutes. Multiplied across a team, the onboarding cost is real.
- Slow CI. Every PR runs tests in a fresh container. Slow installs make the feedback loop feel broken.
Moving from pip to uv reclaims 3-5 minutes per build on an AI stack. That is the single biggest speed win available with no code changes.
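To make the claim concrete, here is a back-of-envelope calculation using the install times benchmarked later in this post (240s cold pip install, 15s cold uv sync, 30 builds a day). These are assumptions about a typical AI stack, not measurements of yours:

```shell
#!/bin/sh
# Rough build-time savings from switching pip -> uv.
# The numbers are this post's benchmarks, not universal constants.
pip_s=240           # cold pip install, seconds
uv_s=15             # cold uv sync, seconds
builds_per_day=30   # CI + laptops + deploys combined

saved_min_per_day=$(( (pip_s - uv_s) * builds_per_day / 60 ))
echo "saved per day: ${saved_min_per_day} minutes"
```

Swap in your own measured times; even a conservative delta pays back the half-day migration within the first week.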
```mermaid
graph LR
    Req[requirements.txt] --> Pip[pip install]
    Req --> Poetry[poetry install]
    Req --> Uv[uv sync]
    Pip --> P1[~240s]
    Poetry --> P2[~130s]
    Uv --> P3[~18s]
    style P1 fill:#fee2e2,stroke:#b91c1c
    style P2 fill:#fef3c7,stroke:#d97706
    style P3 fill:#dcfce7,stroke:#15803d
```
How do the 3 tools compare on real AI workloads?
Numbers from a typical agent service with torch, transformers, langchain, langgraph, fastapi, pydantic, and 70 other packages. Cold cache, Linux x86_64, Python 3.12.
| Tool | First install | Rebuild after 1 pkg change | Lockfile |
|---|---|---|---|
| pip | 240s | 180s | requirements.txt (flat) |
| poetry | 130s | 80s | poetry.lock |
| uv | 18s | 3s | uv.lock |
uv is roughly 10x faster than pip on cold installs and 50x faster on incremental rebuilds. The speed comes from a resolver written in Rust, parallel downloads, and a global content-addressed cache that deduplicates wheels across projects.
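These numbers are easy to reproduce against your own stack. A minimal timing sketch, with a `true` placeholder where your real install commands go (`pip install -r requirements.txt`, `poetry install`, `uv sync`), best run inside a fresh container so caches start cold:

```shell
#!/bin/sh
# Wall-clock timer for install commands. Second-level resolution is
# plenty at the 15s-240s scale this comparison cares about.
bench() {
  label="$1"; shift
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  end=$(date +%s)
  echo "${label}: $(( end - start ))s"
}

# Placeholders -- replace `true` with the real commands:
#   bench pip    pip install -r requirements.txt
#   bench uv     uv sync
bench noop true
```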
What does a uv setup look like for an AI service?
Minimal. Define the project in pyproject.toml, run uv sync to install.
```toml
# filename: pyproject.toml
# description: Minimal pyproject.toml for a production agent service using uv.
[project]
name = "agent-service"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "fastapi>=0.115",
    "uvicorn[standard]>=0.32",
    "anthropic>=0.40",
    "langgraph>=0.2",
    "pydantic>=2.9",
    "sqlalchemy>=2.0",
    "httpx>=0.27",
]

[tool.uv]
dev-dependencies = [
    "pytest>=8.0",
    "ruff>=0.7",
]
```
Commands:
```sh
# filename: setup.sh
# description: Common uv commands for an AI service.
uv sync                    # install everything from the lockfile
uv add langfuse            # add a new dependency and update lockfile
uv run python -m app.main  # run a command inside the managed venv
uv lock --upgrade          # regenerate the lockfile with latest versions
```
uv sync is the daily command. It reads pyproject.toml and uv.lock and produces a ready-to-use .venv in seconds. The lockfile is deterministic, platform-aware, and much smaller than a pip freeze output.
For the Dockerfile pattern that uses uv for layered builds, see the Dockerizing AI systems layered approach post.
How do you use uv inside a Docker build?
Copy the lockfile first, then sync, then copy the rest of the code. Standard layer-caching trick, but the speedup is larger because uv is so fast.
```dockerfile
# filename: Dockerfile
# description: Agent service Dockerfile using uv for fast dependency install.
FROM python:3.12-slim AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project
COPY . .
RUN uv sync --frozen

FROM python:3.12-slim AS runtime
COPY --from=builder /app /app
WORKDIR /app
CMD ["/app/.venv/bin/uvicorn", "app.main:app", "--host", "0.0.0.0"]
```
3 decisions worth calling out. `--frozen` enforces that the lockfile is used as-is; no resolver runs. `--no-install-project` on the first sync skips your own source code so the dependency layer can be cached independently. The multi-stage build keeps the final image slim.
On a repeat build where only source code changed, the dependency layer is fully cached. Build time drops to under 20 seconds total.
When should you still use poetry or pip?
Poetry is still the right pick for:
- Python library authors publishing to PyPI. Poetry's build backend and dependency-group story are mature and the ecosystem understands poetry lockfiles.
- Teams already invested in poetry. The migration cost to uv is real, and if your existing tooling is working, the 3-minute build time savings may not be worth the rewrite.
Pip is still the right pick for:
- Single-file scripts or throwaway notebooks where you just `pip install` a couple of packages.
- Dockerfiles with extreme minimalism where adding uv as a dependency is more friction than it saves.
For production AI services in 2026, uv is the default. For libraries, poetry. For scripts, pip.
For the broader Python environment setup in production, see the Environment variable parsing for Python AI services post.
What are the 3 migration pitfalls?
If you are moving from pip or poetry to uv, watch for these.
- Platform-specific lockfiles. `uv.lock` itself is cross-platform by default, but `uv pip compile` output is platform-specific. If your CI is Linux and your laptops are Mac and you use the compile workflow, pass `--universal` to produce a cross-platform requirements file.
- Editable installs. `pip install -e .` becomes `uv pip install -e .`, and the mental model is slightly different (uv treats the project itself as a dependency in `pyproject.toml`).
- Private indexes. If you use a private PyPI, configure it in `pyproject.toml` under `[[tool.uv.index]]` or via `UV_INDEX_URL`. The old `pip.conf` is not read.
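For the private-index pitfall, the configuration is small. A sketch, where the index name and URL are placeholders for your own registry:

```toml
# filename: pyproject.toml (excerpt)
# description: Hypothetical private index entry for uv.
# The name and URL below are placeholders, not real endpoints.
[[tool.uv.index]]
name = "internal"
url = "https://pypi.internal.example/simple"
```

Credentials are best supplied via environment variables rather than committed config; check the uv docs for the exact variable names your version supports.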
None of these are dealbreakers. Budget half a day for the migration and the rest is gravy.
What to do Monday morning
- Time your current `pip install` or `poetry install` in a fresh container. Note the seconds.
- Install uv (`curl -LsSf https://astral.sh/uv/install.sh | sh`) and run `uv init` in a scratch project.
- Convert your `requirements.txt` to a `pyproject.toml` dependencies list. Run `uv sync` and note the seconds.
- If the speedup is more than 3x (it almost always is on an AI stack), update your Dockerfile to use uv as shown above.
- Commit `pyproject.toml` and `uv.lock` to git. Delete `requirements.txt` once the team has switched.
- Measure CI build time before and after. Share the numbers with the team; they will be convinced.
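The conversion step is mostly mechanical. A hedged sketch that turns a flat requirements.txt into a dependencies list you can paste into `pyproject.toml` (it skips blank lines, comments, and pip directives like `-r` or `-e`, which you would handle by hand; recent uv versions can also import a requirements file directly with `uv add -r requirements.txt`):

```shell
#!/bin/sh
# Turn a flat requirements.txt into a pyproject.toml dependencies list.
# Sample input for the demo; point it at your real file instead.
cat > /tmp/requirements.txt <<'EOF'
# core web stack
fastapi>=0.115
httpx>=0.27
EOF

echo 'dependencies = ['
grep -Ev '^[[:space:]]*(#|-|$)' /tmp/requirements.txt \
  | sed -e 's/^/    "/' -e 's/$/",/'
echo ']'
```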
The headline: uv is the fastest, most deterministic way to install Python dependencies on an AI stack in 2026. Half a day to migrate, 3 minutes saved per build, every build. The math compounds quickly.
Frequently asked questions
What is uv and why is it faster than pip?
uv is a Rust-based Python package manager from Astral that replaces pip, pip-tools, poetry, and virtualenv with one tool. It is 10-100x faster than pip because the resolver is written in Rust, downloads happen in parallel, and wheels are cached globally in a content-addressed store. On a typical AI stack with 80 packages, a cold install drops from 4 minutes to under 20 seconds.
Should I switch from poetry to uv?
For production services, yes. uv is faster, the lockfile is smaller and more deterministic, and the daily commands are simpler. For libraries you publish to PyPI, poetry is still mature and well-supported, and the speed difference matters less because you rarely rebuild. The migration path is half a day of work for most services.
Can I use uv with Docker?
Yes, and it is the biggest build-time win available for Python services. Use a multi-stage build, copy pyproject.toml and uv.lock first, run uv sync --frozen, then copy the rest of the source. Cached rebuilds take under 5 seconds. The official uv Docker image is at ghcr.io/astral-sh/uv:latest.
Does uv work with private package indexes?
Yes. Configure private indexes in pyproject.toml under [[tool.uv.index]] or via the UV_INDEX_URL environment variable. For build-time secret injection, use Docker BuildKit's --mount=type=secret to pass the token without baking it into the image.
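A sketch of that BuildKit pattern, assuming a secret named `pypi_token` (passed at build time with `docker build --secret id=pypi_token,src=...`) and a placeholder internal index URL:

```dockerfile
# filename: Dockerfile (excerpt)
# description: Sketch -- inject a private-index token at build time without
# baking it into any image layer. The URL and secret id are placeholders.
RUN --mount=type=secret,id=pypi_token \
    UV_INDEX_URL="https://__token__:$(cat /run/secrets/pypi_token)@pypi.internal.example/simple" \
    uv sync --frozen
```

Because the secret is mounted only for the duration of that `RUN` step, the token never appears in the image history.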
Is uv production-ready?
Yes as of 2026. Major AI teams at Anthropic, Replicate, and LangChain have migrated production services to uv. The lockfile format is stable, the resolver is deterministic, and the project is under active development by Astral (makers of ruff). The main caveat is that some legacy packages with unusual build systems may still have quirks, but the major AI libraries all work cleanly.
Key takeaways
- For production Python AI services in 2026, uv is the default. It is 10-100x faster than pip on cold installs and 50x faster on rebuilds.
- poetry is still the right pick for libraries you publish to PyPI. Its build backend and ecosystem integration are mature.
- pip is fine for single-file scripts and notebooks, but not for anything that gets rebuilt frequently.
- In Docker, copy `pyproject.toml` and `uv.lock` first, run `uv sync --frozen`, then copy source. Cached rebuilds take under 5 seconds.
- Migration from pip or poetry is usually half a day. The payback on CI and deploy times is almost immediate.
- To see uv wired into a full production agent stack with Docker, CI, and observability, walk through the Build your own coding agent course, or start with the AI Agents Fundamentals primer.
For the official uv documentation including migration guides from pip and poetry, see the uv docs.
Ready to go deeper?
Go beyond articles. Build production AI systems with hands-on workshops and our intensive AI Bootcamp.