pip install takes 4 minutes and you do it 15 times a day

You just added langchain to your production AI agent project. pip install runs for 4 minutes. You add anthropic and run it again for another 4. Your Docker build rebuilds the whole image because pip has no real lockfile. Your CI installs dependencies from scratch on every run. At 15 installs a day, you are burning an hour waiting on tooling whose core design dates back to 2008.

uv is the replacement. It is a pip, pip-tools, virtualenv, and poetry replacement in one binary, written in Rust, and it is 10 to 100 times faster than all of them depending on the operation. For AI services with heavy dependency trees (torch, langchain, transformers, plus their transitive closures), the difference is the difference between "rebuild the image" being a coffee break and being a lunch break.

This post covers why uv matters for AI services specifically, the minimal set of commands that replaces your current pip workflow, the pyproject.toml and lockfile pattern, and the 5-minute migration from a requirements.txt project.

Why is pip painful for production AI service dependencies?

Because AI services have massive dependency trees and pip resolves them slowly. A typical agent project pulls in langchain (200+ transitive deps), anthropic or openai, pydantic v2, fastapi, uvicorn, sqlalchemy, asyncpg, redis, and 20 others. The transitive closure is 400 to 600 packages. pip runs a backtracking resolver over all of them, which can take minutes.

3 specific pain points:

  1. Resolution speed. pip's backtracking resolver is slow partly by design: it has to handle every edge case in PyPA metadata, and it is implemented in Python. uv solves the same problem 10 to 100 times faster because it uses the PubGrub resolution algorithm and is written in Rust with aggressive caching.
  2. No real lockfile. requirements.txt is a flat list that does not reliably capture transitive versions across platforms. pip-tools helps, but it is a second tool to manage and compiles a separate pin set per platform.
  3. No virtualenv management. pip does not create or activate virtual environments. You use python -m venv, then source, then pip install. uv does all 3 in one command.

uv collapses pip, pip-tools, virtualenv, poetry, and pyenv into one tool that is measurably faster at every step.

graph LR
    Old[Old workflow] -->|venv| A[python -m venv]
    A -->|activate| B[source activate]
    B -->|install| C[pip install -r]
    C -->|compile| D[pip-tools compile]

    New[uv workflow] -->|one command| U[uv sync]

    style Old fill:#fee2e2,stroke:#b91c1c
    style New fill:#dcfce7,stroke:#15803d

5 commands become 1. And that 1 command runs in seconds, not minutes.

How do you set up a project with uv?

Create a new project with uv init, add dependencies with uv add, and let uv manage the virtualenv, lockfile, and pyproject.toml automatically.

# filename: setup.sh
# description: Bootstrap a new Python AI project with uv.
# Replaces python -m venv, pip install, pip freeze, and pip-tools.
uv init my-agent
cd my-agent

uv add fastapi uvicorn anthropic langchain langgraph
uv add sqlmodel asyncpg redis
uv add --dev pytest ruff mypy

uv sync

uv init creates a pyproject.toml with a minimal project block. uv add adds dependencies to the project and updates the lockfile. uv sync installs from the lockfile into a managed virtualenv (.venv/). You never manually create or activate the venv; uv handles it.

The resulting project has a pyproject.toml with your direct dependencies, a uv.lock file with the resolved transitive closure, and a .venv/ that contains the installed packages. Commit the pyproject and the lock, gitignore the venv.
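For orientation, this is roughly the shape of the file those commands leave behind. A sketch only: the package names and version bounds are illustrative placeholders, and the exact dev-dependency table can vary by uv version.

```shell
# Illustrative only: approximate pyproject.toml produced by uv init + uv add.
# Written to a demo filename so it does not clobber a real project file.
cat <<'EOF' > pyproject.example.toml
[project]
name = "my-agent"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "fastapi>=0.115",
    "anthropic>=0.40",
    "langchain>=0.3",
]

[dependency-groups]
dev = ["pytest>=8", "ruff>=0.7", "mypy>=1.13"]
EOF
cat pyproject.example.toml
```

The direct dependencies stay short and readable here; the hundreds of transitive pins live in uv.lock, not in this file.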

What is uv.lock and why is it better than requirements.txt?

uv.lock is a platform-aware, hash-verified lockfile that captures the exact resolved versions of every package in your dependency closure, including transitive dependencies, for every platform your project supports.

3 properties that requirements.txt does not have:

  1. Complete transitive resolution. Every package your direct dependencies depend on is pinned. Nothing is left to pip's next run to figure out.
  2. Per-platform resolution. Linux, macOS, Windows can have different wheels. uv.lock tracks the right wheel for each platform in one file.
  3. Hash verification. Every package entry includes a SHA256. Supply chain attacks that replace a package on PyPI cannot slip through unnoticed.
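To make those three properties concrete, here is the approximate shape of a single entry in uv.lock. Illustrative only: the field layout follows uv's TOML lockfile format, but the version, URL, and hash below are made-up placeholders.

```shell
# Illustrative only: the shape of one package entry in a uv.lock file.
cat <<'EOF' > uv.lock.example
[[package]]
name = "anyio"
version = "4.4.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "idna" },
    { name = "sniffio" },
]
wheels = [
    { url = "https://files.pythonhosted.org/packages/anyio-4.4.0-py3-none-any.whl", hash = "sha256:PLACEHOLDER" },
]
EOF
cat uv.lock.example
```

Note that the entry pins a transitive dependency (anyio is pulled in by fastapi, not listed by you), records which registry it came from, and attaches a hash to the exact wheel: all three properties in one record.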

If you need interop with tools that only understand flat requirements files, uv export --format requirements-txt regenerates a fully pinned requirements.txt from uv.lock. In practice, the ecosystem is rapidly converging on pyproject.toml plus a lockfile, so the export is rarely needed.

How do you use uv in Docker?

Replace the pip install step with uv sync --frozen. The --frozen flag tells uv to use the existing lockfile exactly, which makes the install deterministic across rebuilds and CI runs.

# syntax=docker/dockerfile:1.6
# filename: Dockerfile
# description: Dockerfile using uv for dependency installation.
# 10x faster than pip for agent workloads with heavy deps.
# Note: the syntax directive must be the very first line of the file;
# BuildKit ignores it if any comment or instruction precedes it.
FROM python:3.12-slim

RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/

WORKDIR /app

COPY pyproject.toml uv.lock ./
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-dev

COPY . .
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

3 things to notice. First, COPY --from=ghcr.io/astral-sh/uv:latest /uv pulls the uv binary from the official image without installing it through pip. Second, uv sync --frozen --no-dev is a single command that replaces create-venv + install + freeze + cleanup. Third, uv run is how you run a command inside the managed venv; it replaces source .venv/bin/activate && ....

Combined with the layer caching rules from the Docker Layer Caching: Faster Agent Image Builds post, this setup cuts image build times on AI services from 8 minutes to under a minute.
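One companion file worth adding: the Dockerfile's final COPY . . will happily drag the host's .venv/ and caches into the image, bloating it and shadowing the environment built inside the container. A minimal .dockerignore prevents that (the entries here are a baseline assumption; extend for your repo):

```shell
# Keep the host virtualenv, bytecode caches, and git history
# out of the Docker build context.
cat <<'EOF' > .dockerignore
.venv/
__pycache__/
*.pyc
.git/
EOF
cat .dockerignore
```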

How do you migrate from requirements.txt to uv?

5 minutes for most projects. Initialize uv, import your existing requirements, and remove the old files.

# filename: migrate.sh
# description: Migrate a requirements.txt project to uv.
# Preserves direct dependencies and lets uv re-resolve transitives.
uv init --no-readme

# Filter out comments, pip options, and blank lines; keep direct deps.
# Recent uv releases can also do this in one step: uv add -r requirements.txt
uv add $(grep -vE '^\s*(#|-|$)' requirements.txt | tr '\n' ' ')
uv add --dev $(grep -vE '^\s*(#|-|$)' requirements-dev.txt | tr '\n' ' ')

# Verify the new lockfile resolves the same packages you had before
uv pip list

# Delete the old files once you've confirmed the sync works
rm requirements.txt requirements-dev.txt

The uv add call takes your direct dependencies from the old requirements file and adds them to pyproject.toml while letting uv re-resolve the transitive closure. The new resolution might differ slightly (uv is better at picking compatible versions), so run your tests after migration to catch any regressions.
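If you want to see exactly which direct dependencies the migration will hand to uv add, the requirements-file filtering can be dry-run in isolation. A sketch with made-up file contents; nothing here invokes uv.

```shell
# Dry run: show what would be passed to uv add, without running uv.
cat > /tmp/requirements.demo.txt <<'EOF'
# web stack
fastapi==0.115.0
uvicorn[standard]>=0.30

-r requirements-extra.txt
anthropic
EOF

# Drop comments, pip options (lines starting with -), and blank lines.
deps=$(grep -vE '^\s*(#|-|$)' /tmp/requirements.demo.txt | tr '\n' ' ')
echo "uv add $deps"
```

The comment line, the blank line, and the -r include are filtered out; only the three real requirement specifiers survive. Nested -r includes need to be flattened by hand (or handled by uv add -r, which follows them).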

For the broader environment and deployment picture, see the .env.development vs .env: Environment Config for Agentic Systems post. For the production FastAPI stack, see FastAPI and Uvicorn for Production Agentic AI Systems.

When should you still use pip or poetry?

Rarely. 2 cases where pip or poetry is still the right choice:

  1. You are publishing a library, not a service, and the poetry community has conventions you need to follow. pyproject.toml standardization is nearly there, but some library workflows still favor poetry's publishing helpers.
  2. You have an existing CI or deploy pipeline that is tightly coupled to pip or poetry and migration cost is high. In this case, migrate when the pipeline is being refactored anyway, not as a standalone project.

For everything else (services, scripts, notebooks, internal tooling, agent projects), uv is the right default. The speed alone pays for the 5-minute migration within the first day of use.

For the full course that builds an agent project from scratch using modern tooling like uv, the Build Your Own Coding Agent course covers it end to end. The free AI Agents Fundamentals primer is the right starting point if the agent loop concept is still new.

What to do Monday morning

  1. Install uv: curl -LsSf https://astral.sh/uv/install.sh | sh. 10 seconds, no package manager dependencies.
  2. Pick one project with a painful pip workflow. Run the migration commands above. Expect the conversion to finish in under 5 minutes.
  3. Run your test suite with uv run pytest. Confirm nothing broke. uv's resolver is stricter than pip's, so small version mismatches can surface.
  4. Update your Dockerfile to use uv sync --frozen. Measure the build time before and after. Expect a 5x to 10x speedup on the dependency layer.
  5. Update your CI to install uv and run uv sync instead of pip install. Same measurement, same speedup, applied to every CI run.

The headline: uv is a pip, virtualenv, and pip-tools replacement that is 10 to 100 times faster and produces a real lockfile. 5-minute migration. Daily time savings that compound. No reason to stay on the old stack for new projects.

Frequently asked questions

What is uv and why should I use it for Python AI projects?

uv is a Rust-based replacement for pip, pip-tools, virtualenv, and poetry in one binary. It is 10 to 100 times faster at dependency resolution, produces a platform-aware lockfile, and manages virtual environments automatically. For AI services with heavy dependency trees like langchain and transformers, the speed difference is the difference between a coffee break and a lunch break on every rebuild.

How does uv.lock compare to requirements.txt?

uv.lock captures the full transitive closure, tracks per-platform resolution, and includes SHA256 hashes for supply chain verification. requirements.txt is a flat list of direct dependencies without reliable transitive pinning. The lockfile is the single biggest quality upgrade in uv over pip-based workflows.

Can I use uv inside Docker?

Yes, and it is where the speed benefit is most obvious. Pull the uv binary from the official image (COPY --from=ghcr.io/astral-sh/uv:latest), then use uv sync --frozen --no-dev as your dependency install step. Combined with BuildKit cache mounts, this typically cuts image build time on AI services from several minutes to under a minute.

How do I migrate from requirements.txt to uv?

Run uv init, then uv add with your existing direct dependencies. uv re-resolves the transitive closure and creates a lockfile. Run your tests to catch any regressions (uv's resolver is stricter than pip's), then delete the old requirements files. The whole migration typically takes 5 minutes for a service-sized project.

When should I still use pip or poetry?

For libraries being published to PyPI where poetry's publishing workflow has conventions your team relies on, or for existing CI pipelines where migration cost outweighs the speed benefit. For services, scripts, notebooks, and new projects, uv is the right default. The speed and lockfile alone pay for themselves within the first day.

Key takeaways

  1. pip is slow on AI dependency trees because its backtracking resolver predates AI-sized dependency graphs. uv is 10 to 100 times faster at the same problem.
  2. uv replaces pip, pip-tools, virtualenv, and poetry with one binary and one command. The workflow collapses from 5 steps to 1.
  3. uv.lock is a platform-aware, hash-verified lockfile that captures the full transitive closure. requirements.txt cannot match this.
  4. In Docker, uv sync --frozen --no-dev plus a cache mount cuts agent image build times from 8 minutes to under a minute.
  5. Migration takes 5 minutes. Run uv init, uv add, run tests, delete old files. The daily time savings compound immediately.
  6. To see uv wired into a full production agent stack with Docker, FastAPI, and observability, walk through the Build Your Own Coding Agent course, or start with the AI Agents Fundamentals primer.

For the full uv documentation and command reference, see the official uv docs. The migration guide there covers edge cases like workspaces, monorepos, and dependency groups.
