Docker secrets management for agentic AI services
Your OpenAI API key is baked into the Docker image and someone just cloned it
You needed to pass an API key to a build step. You ran docker build --build-arg OPENAI_API_KEY=.... The key landed in the image history, where anyone with docker pull access can extract it with docker history. You rotated the key. Next week, an engineer checks a secret into .env, you copy it into the image with COPY .env .env, and the same bug ships again.
The fix is to never put secrets into any layer, ever. BuildKit secrets for build-time credentials, runtime environment variables for anything the service needs at request time, and a secret manager (AWS Parameter Store, Doppler, Infisical, Kubernetes Secrets) as the source of truth.
This post is the Docker secrets pattern for agentic AI services: the 3 bad patterns that leak keys, BuildKit --mount=type=secret for build-time needs, runtime injection for request-time needs, and the secret manager integration that ties it together.
Why do secrets in Docker leak so often?
Because a Docker build has 4 places that store data, and 3 of them are persistent: image layers, image history, environment variables set via `ENV`, and the container filesystem. Only the last is ephemeral.
3 specific failure modes:
- `ENV OPENAI_API_KEY=sk-...` in the Dockerfile. The value is baked into the image layer. Anyone who pulls the image can read it with `docker inspect`.
- `--build-arg OPENAI_API_KEY=...` during build. The value lands in the image history. `docker history <image>` prints it. `docker save` embeds it in the tar archive.
- `COPY .env .env` into the image. The `.env` file is now a layer. Even if you `rm .env` in a later step, the earlier layer still contains it.
All 3 look like they work on day 1 and leak on day 2 when someone new runs docker history or pushes the image to a public registry by accident.
graph TD
BadEnv[ENV KEY=...] -->|bakes into layer| Leak1[docker inspect reveals key]
BadArg[--build-arg KEY=...] -->|bakes into history| Leak2[docker history reveals key]
BadCopy[COPY .env .env] -->|bakes into layer| Leak3[layer tar contains secret]
GoodBuild[BuildKit --mount=type=secret] -->|no layer| Safe1[Build uses secret, image has nothing]
GoodRuntime[docker run -e KEY=... from secret manager] -->|only in env| Safe2[No layer, no history]
style Leak1 fill:#fee2e2,stroke:#b91c1c
style Leak2 fill:#fee2e2,stroke:#b91c1c
style Leak3 fill:#fee2e2,stroke:#b91c1c
style Safe1 fill:#dcfce7,stroke:#15803d
style Safe2 fill:#dcfce7,stroke:#15803d
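The 3 failure modes, condensed into one illustrative Dockerfile (a negative example; the names and values are made up, do not ship this):

```dockerfile
# BAD: every step below persists the secret in an image artifact.
FROM python:3.12-slim

# Leak 1: ENV bakes the key into image metadata; docker inspect shows it.
ENV OPENAI_API_KEY=sk-do-not-do-this

# Leak 2: values passed via --build-arg survive in docker history.
ARG PIP_TOKEN
RUN pip config set global.index-url "https://${PIP_TOKEN}@pypi.internal.com/simple"

# Leak 3: the .env file becomes a layer. The rm removes it from the final
# filesystem view only; the earlier layer still contains the file.
COPY .env .env
RUN rm .env
```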
How do you inject a secret at build time without leaking it?
Use BuildKit's --mount=type=secret. The secret is made available to the RUN step as a file (or env var) but is never persisted to any layer.
# syntax=docker/dockerfile:1.6
# filename: Dockerfile
# description: BuildKit secret mount for private package installation.
# (The syntax directive must be the very first line, before any other comment,
# or BuildKit treats it as a plain comment.)
FROM python:3.12-slim
RUN --mount=type=secret,id=pip_index_token \
    PIP_INDEX_URL="https://$(cat /run/secrets/pip_index_token)@pypi.internal.com/simple" \
    pip install internal-package
Build the image passing the secret:
export PIP_INDEX_TOKEN="mysecret"
docker build --secret id=pip_index_token,env=PIP_INDEX_TOKEN -t myagent .
The token is available only inside that RUN step. It is not in the image, not in the history, and not in any cache layer. Confirm with `docker history myagent`; the token will not appear.
How do you inject a secret at runtime?
Runtime secrets come from environment variables set by the container orchestrator, NEVER from the Dockerfile. The orchestrator reads from a secret manager.
# filename: deploy.sh
# description: Kubernetes example injecting secrets at runtime.
kubectl create secret generic agent-secrets \
  --from-literal=openai-api-key="$(aws ssm get-parameter --name /prod/openai_key --with-decryption --query Parameter.Value --output text)" \
  --from-literal=database-url="$(aws ssm get-parameter --name /prod/db_url --with-decryption --query Parameter.Value --output text)"
In the Kubernetes manifest:
# filename: deployment.yaml
# description: Agent deployment reads secrets from K8s secret object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent
spec:
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent
    spec:
      containers:
        - name: agent
          image: myagent:latest
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: agent-secrets
                  key: openai-api-key
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: agent-secrets
                  key: database-url
Inside the container, the agent service reads OPENAI_API_KEY via Pydantic Settings. The Dockerfile never knew the value; the image never had it; only the running container does.
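Inside the container, the startup read is nothing exotic. A minimal sketch using plain `os.environ` (the class and function names here are illustrative; the linked post shows the Pydantic Settings version):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Runtime configuration, read once at startup from the environment."""
    openai_api_key: str
    database_url: str


def load_settings() -> Settings:
    # Fail fast at startup if the orchestrator did not inject a secret,
    # rather than failing on the first request that needs it.
    missing = [k for k in ("OPENAI_API_KEY", "DATABASE_URL") if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return Settings(
        openai_api_key=os.environ["OPENAI_API_KEY"],
        database_url=os.environ["DATABASE_URL"],
    )
```

Reading the environment once at startup, rather than scattering `os.environ` lookups through the codebase, gives you a single place to validate and a single place to fake in tests.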
For the Pydantic Settings pattern that reads the env var, see the Environment variable parsing for Python AI services post.
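If you deploy with plain Docker rather than Kubernetes, the same runtime-injection rule applies. A hedged sketch, assuming the AWS CLI is configured and `myagent:latest` is the image built earlier:

```shell
# Fetch the key from the source of truth at deploy time. It lives only
# in this shell's environment, never in a Dockerfile or an image layer.
export OPENAI_API_KEY="$(aws ssm get-parameter \
  --name /prod/openai_key --with-decryption \
  --query Parameter.Value --output text)"

# Passing -e with no value copies the variable from the calling
# environment, so the key does not appear in shell history or ps output.
docker run -e OPENAI_API_KEY myagent:latest
```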
What are the 3 anti-patterns that leak secrets?
- `ENV` in the Dockerfile. Never set a secret with `ENV`. The value is a layer. Use runtime injection instead.
- `--build-arg` for secrets. Build args land in history. Use `--mount=type=secret` instead.
- `COPY .env .env`. Files in any layer are in the image forever, even if a later step deletes them. Never copy secret files; read secrets from env vars at runtime.
How do you validate that an image is secret-free?
Run 2 checks before pushing to a registry:
# Check 1: scan history for obvious secret patterns
docker history myagent --no-trunc | grep -iE "(sk-|api[_-]?key|secret|token|password)"
# Check 2: scan the image with a dedicated scanner
# (docker scout covers CVEs; trivy's secret scanner checks layer contents for credentials)
docker scout cves myagent
trivy image --scanners secret myagent
Add these to your CI pipeline. If either finds a secret, fail the build. A secret caught in CI is a rotation avoided.
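Check 1 can be wrapped into a small CI script so the grep logic lives in one tested place. A sketch (the pattern list is a starting point, not exhaustive, and the function names are illustrative):

```python
import re
import subprocess
import sys

# Patterns that commonly indicate a leaked credential in build history.
SECRET_PATTERNS = re.compile(
    r"(sk-[A-Za-z0-9]|api[_-]?key|secret|token|password)",
    re.IGNORECASE,
)


def find_secret_hits(history_text: str) -> list[str]:
    """Return the lines of docker-history output that look like secrets."""
    return [line for line in history_text.splitlines() if SECRET_PATTERNS.search(line)]


def scan_image(image: str) -> int:
    """Run docker history on the image; return a non-zero exit code on any hit."""
    out = subprocess.run(
        ["docker", "history", image, "--no-trunc"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = find_secret_hits(out)
    for line in hits:
        print(f"possible secret in image history: {line}", file=sys.stderr)
    return 1 if hits else 0  # non-zero exit fails the CI job


# In CI, after building the image: sys.exit(scan_image("myagent"))
```

Expect some false positives (the word "token" in a legitimate package name, for instance); an allowlist of known-safe lines is a reasonable extension.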
For the broader Docker security pattern including non-root user setup, see the Docker non-root user for agentic AI security post.
What secret manager should you use?
Depends on your platform:
- AWS: Parameter Store (cheap) or Secrets Manager (rotation features)
- GCP: Secret Manager
- Azure: Key Vault
- Kubernetes: native `Secret` objects sourced from any of the above via the External Secrets Operator
- Lightweight/multi-cloud: Doppler, Infisical, HashiCorp Vault
The rule: pick one, centralize every secret there, and make the orchestrator inject at runtime. Never ask developers to copy secrets between systems.
What to do Monday morning
- Run `docker history <your-image> --no-trunc | grep -iE "(key|secret|token|password)"` on your production image. If anything shows up, rotate that secret today.
- Delete any `ENV` line in your Dockerfile that sets a secret. Delete any `--build-arg` usage that passes secrets.
- If your build needs a secret (e.g., a private package repo), convert to BuildKit `--mount=type=secret`.
- Move runtime secrets into your orchestrator's secret mechanism (K8s Secret, ECS task env, Fly secrets, whatever your platform provides).
- Add a CI check that fails the build if `docker history` output contains anything matching secret patterns.
The headline: no secret belongs in any image layer. Build-time secrets go through BuildKit mounts. Runtime secrets come from the orchestrator. The secret manager is the source of truth. Rotate the one secret you probably baked in today.
Frequently asked questions
Why can't I just use ENV in the Dockerfile for secrets?
Because ENV values are baked into the image layer and are visible via docker inspect and docker history to anyone with pull access. If the image ever leaks (public registry, stolen credentials, CI log), the secret leaks too. ENV is for configuration that is not sensitive, like APP_ENV=production. Never for API keys or database passwords.
What is BuildKit's --mount=type=secret?
A Docker BuildKit feature that makes a secret file available inside a specific RUN step without writing it to any layer. The secret is mounted at /run/secrets/<id> during that step only. After the step completes, the mount disappears and the secret is not in any image artifact. Use it for build-time credentials like private package registry tokens.
How do I inject secrets at runtime in Kubernetes?
Create a Kubernetes Secret object sourced from your secret manager (AWS Parameter Store, GCP Secret Manager, Azure Key Vault, Doppler). Reference the secret in your deployment manifest via env.valueFrom.secretKeyRef. The container starts with the secret as an environment variable, which your Pydantic Settings loader reads on startup.
Which secret manager should I use?
Use your cloud provider's native option (AWS Parameter Store, GCP Secret Manager, Azure Key Vault) if you are single-cloud. Use Doppler or Infisical if you are multi-cloud or need a unified developer experience. Use HashiCorp Vault if you have serious compliance requirements. The specific choice matters less than the rule: one source of truth, never copy secrets between systems.
How do I audit an image for leaked secrets?
Run docker history <image> --no-trunc and grep for known secret patterns (sk-, api[_-]?key, token, password). Use docker scout or trivy for a more thorough scan that includes layer contents and known CVEs. Add the scan to your CI pipeline so a broken commit never reaches production.
Key takeaways
- Docker images are write-once: anything you bake into a layer is there forever, even if a later step deletes it. Never put secrets in any layer.
- Use BuildKit `--mount=type=secret` for build-time credentials. The secret is available during one RUN step and is not persisted.
- Use runtime environment variables injected by your orchestrator for request-time secrets. Pydantic Settings reads them on startup.
- 3 anti-patterns to eliminate: `ENV` for secrets, `--build-arg` for secrets, `COPY .env .env`. All 3 leak.
- Scan images in CI with a `docker history` grep or `docker scout`. Fail the build on any hit.
- To see Docker secrets wired into a full production agent stack with CI, auth, and observability, walk through the Build your own coding agent course, or start with the AI Agents Fundamentals primer.
For the official Docker BuildKit secrets documentation including the --mount=type=ssh and multi-secret patterns, see the Docker secrets guide.