How to Build Zero-CVE Container Images (Without Slowing Your Pipeline)

By
Minimus

Zero-CVE container images are what teams aim for when policy says no unacceptable findings on a defined bill of materials; they are not a promise that no CVE identifier will ever appear in any database. To approach that goal without stalling CI, use minimal bases, multi-stage builds, automated dependency updates, and vulnerability gates tied to severity and exploitability. Cache layers and parallelize jobs so scans add minutes, not hours.

Treat "zero" as a policy outcome (for example no critical or high in production namespaces) backed by SBOMs and signed artifacts, not as a one-time screenshot. If you have been told to ship zero-CVE container images, you already know the tension: scanners update continuously, development wants velocity, and security wants proof. The workable version of "zero" is almost always zero violations of an agreed policy on a defined bill of materials, plus a process to refresh images when the outside world changes.

Key takeaways

  • "Zero CVE" is a scope decision: it usually means zero findings that matter for your policy on the base image and declared dependencies, not a guarantee that no identifier will ever appear in a scan.
  • Speed and security both improve when you shrink the image, rebuild often, and fail builds on policy instead of debating CVSS in every pull request.
  • Scanners, SBOMs, and VEX work together: scans find candidates; SBOMs scope blast radius; VEX (where valid) separates noise from real exposure.
  • Hardened, source-built images with a defined remediation SLA get you closest to zero-CVE-at-delivery for the layers the provider owns; your app code and config still need their own gates.

What Does "Zero CVE" Mean for a Container Image?

For a container image, "zero CVE" in production usually means policy-zero, not literal-zero. A CVE (Common Vulnerabilities and Exposures) is a cataloged identifier for a known flaw in a specific version of a library or package. Containers inherit CVEs from every package in every layer. A bloated base can pull in hundreds of vulnerabilities, including many in code paths your application never executes.

A literal claim of no CVE ID ever associated with any file in the image is brittle: NVD entries change, scanners disagree, and databases update after deploy. That is why mature programs frame zero as policy-zero: for example no critical or high findings from an approved scanner, with documented exceptions, or zero known exploitable issues after VEX triage where your process supports it. NIST SP 800-190 treats container images as part of the software supply chain; the discipline is integrity, provenance, and repeatable builds, not a single green badge.

Document who approves exceptions, how long they last, and what compensating controls apply. Auditors and customers care about that trail as much as the scan result.

For why hardened, minimal bases change the starting point, see Hardened Container Images: The Foundation of Container Security.

Why Do Container Images Accumulate CVEs Faster Than You Patch?

Images are snapshots. When you build FROM debian:bookworm or a language stack image, you inherit the distribution's package set. CVEs are filed against those packages continuously. If you only rebuild on feature work, security debt compounds.

Upstream fixes land in package repositories first. Your image does not pick them up until you rebuild and redeploy. Tie refresh to a cadence (weekly, daily, or event-driven for criticals), not only to sprint planning.

Two scanners can report different counts for the same digest. Standardize on one tool for enforcement and use others for research if needed. Do not block the pipeline on conflicting labels without a written ruling process and an owner.

When a vendor publishes how they count and remediate findings, you can compare programs fairly. Minimus Has 95% Fewer CVEs—Here's How We Back That Up is an example of methodology transparency buyers should expect from any provider making quantitative claims.

How Do Minimal Base Images and Dependencies Move You Toward Zero?

Smaller bases mean fewer packages, which usually means fewer CVE rows. Multi-stage builds keep compilers, test runners, and devDependencies out of the runtime image and remove a large class of findings that exist only to support the build.
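As a sketch, a multi-stage build for a Go service might look like this (the image names, module layout, and ./cmd/app entry point are illustrative, not prescriptive):

```dockerfile
# Build stage: compilers, module caches, and test tooling stay here and never ship.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Runtime stage: a minimal base carrying only the binary, running as non-root.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The runtime layer contains no shell, package manager, or build toolchain, which removes the corresponding CVE rows before a scanner ever runs.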

Pin versions in Dockerfiles and lockfiles. Float tags only where automation catches breakage. Renovate, Dependabot, or internal bots that open merge requests on a schedule beat manual "we should update someday."
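As one illustration, a minimal Renovate configuration that pins image digests and batches updates on a schedule could look like this (config:recommended and docker:pinDigests are published Renovate presets; the schedule shown is an example, not a recommendation):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended", "docker:pinDigests"],
  "schedule": ["before 6am on monday"]
}
```

Digest pinning makes "what is running" a fact in version control; the bot's merge requests make drift visible instead of silent.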

Distroless or minimal runtimes trade convenience for surface area: no shell means less post-exploitation tooling; it also means debug moves to sidecars or break-glass images. Plan migration with staging namespaces and parity tests before you cut over production traffic.

Tactic and what it reduces:

  • Multi-stage build: build-time tools and dev packages in production
  • Minimal or hardened base: unused system packages and utilities
  • Pin + automate updates: drift between "known good" and what is running
  • SBOM at build: ambiguity about what you must defend

How Should You Scan and Gate Without Creating a Queue of Exceptions?

Shift left: run Trivy, Grype, or similar on every merge to main, not only before a release. Gate on rules the business can defend: block new criticals on production-bound branches, cap highs with a ticketed waiver, time-box lows.
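A merge gate along those lines can be sketched as a CI step (GitHub Actions syntax shown for illustration; the registry and digest variables are placeholders, and the severities should match your written policy):

```yaml
scan:
  runs-on: ubuntu-latest
  steps:
    - name: Gate on policy, not on per-PR debate
      run: |
        # Exit non-zero only on CRITICAL/HIGH findings that have a fix available.
        trivy image --exit-code 1 \
          --severity CRITICAL,HIGH \
          --ignore-unfixed \
          "$REGISTRY/app@$DIGEST"
```

The --ignore-unfixed flag keeps unfixable upstream issues out of the blocking path; they belong in the waiver and VEX process instead.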

False positives are a process problem. Tie overrides to named owners, expiry dates, and VEX where the upstream or vendor documents "not affected" for a given context. Otherwise teams learn to ignore the dashboard.

Generate an SBOM (Syft, build tooling, or registry features) so you have a stable answer to "what is in this digest?" When a new CVE drops, query the SBOM instead of grepping across dozens of Dockerfiles.
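A sketch of that workflow with Syft and Grype, again as an illustrative CI fragment (the registry path and file names are placeholders):

```yaml
sbom:
  runs-on: ubuntu-latest
  steps:
    - name: Generate an SBOM for the exact digest you ship
      run: syft "$REGISTRY/app@$DIGEST" -o cyclonedx-json > sbom.json
    - name: When a new CVE drops, query the stored SBOM instead of re-pulling images
      run: grype sbom:./sbom.json
```

Storing the SBOM alongside the digest turns "are we affected?" into a lookup rather than a fleet-wide rescan.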

For scanner behavior against minimal images, Using Open Source Vulnerability Scanners With Hardened Container Images covers what changes when the base is already thin. 

What Build Tools and Signing Add to the Zero-CVE Story?

BuildKit, Kaniko, and similar tools do not remove CVEs by themselves. They help with reproducibility, non-root builds, and cache so frequent rebuilds are cheap enough to become habit.

Signing (for example Cosign with Sigstore) ties an image digest to an identity. SLSA levels describe how defensible your build pipeline is. Buyers and regulators increasingly expect provenance, not only a passing scan. Executive Order 14028 accelerated SBOM expectations for vendors selling into federal systems; private-sector enterprises often mirror those asks.
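As an illustration, signing a digest and attaching the SBOM as an attestation with Cosign might look like this (keyless Sigstore flow shown; the registry path and sbom.json are placeholders):

```yaml
sign:
  runs-on: ubuntu-latest
  steps:
    - name: Sign the immutable digest, not the mutable tag
      run: cosign sign --yes "$REGISTRY/app@$DIGEST"
    - name: Attach the SBOM as a signed attestation
      run: cosign attest --yes --type cyclonedx --predicate sbom.json "$REGISTRY/app@$DIGEST"
```

Signing the digest rather than the tag matters: a tag can be repointed after signing, a digest cannot.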

You cannot sign your way out of a vulnerable package. You can prove which sources produced which digest, which makes rollback and incident response faster when you must swap a base.

Treat build provenance as part of the same story as CVE counts. Auditors ask how you know an image in production matches the output of your pipeline. Attestations answer that question; scans answer what is inside.

How Do You Keep the Pipeline Fast While Doing All of This?

Parallelize and cache

Run lint, unit tests, and image scan in parallel where possible. Run integration tests after the image artifact exists. Cache Docker layers and package indexes in CI so routine commits do not cold-build the world.

Put the vulnerability scan on the built artifact (the digest), not only on the Dockerfile text, so you catch issues in resolved dependencies and intermediate layers.
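One way to wire this up, sketched in GitHub Actions syntax (action versions, registry name, and cache settings are illustrative, not prescriptive):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      digest: ${{ steps.push.outputs.digest }}
    steps:
      - uses: docker/build-push-action@v6
        id: push
        with:
          push: true
          tags: registry.example.com/app:ci
          cache-from: type=gha        # reuse layers across CI runs
          cache-to: type=gha,mode=max
  scan:
    needs: build                      # scan the pushed digest, not the Dockerfile text
    runs-on: ubuntu-latest
    steps:
      - run: trivy image "registry.example.com/app@${{ needs.build.outputs.digest }}"
```

Because the scan job consumes the build job's digest output, it sees resolved dependencies and intermediate layers, not just what the Dockerfile declares.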

Match gates to risk

Separate "must pass to merge" from "must pass to deploy to prod." Not every branch needs the full registry promotion gate. Production promotion should include scan, sign, and SBOM attachment.

When scan time grows, first shrink what you scan (smaller images, fewer duplicate builds), then shard jobs. A five-minute scan on a slim image beats a fifteen-minute scan on a full OS base on every commit.

Rescan and notify

Registry-side rescanning catches CVEs that appear after build day. Your digest does not change; the NVD does. Scheduled rescans or webhook-driven rescans turn "we were clean at deploy" into "we know today's posture."
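A scheduled rescan can be as simple as a cron-driven job against the digest currently in production (illustrative; $PROD_DIGEST is a placeholder your deploy system would record):

```yaml
on:
  schedule:
    - cron: "0 6 * * *"   # rescan daily: the digest has not changed, but the CVE data has
jobs:
  rescan:
    runs-on: ubuntu-latest
    steps:
      - run: trivy image --severity CRITICAL,HIGH "registry.example.com/app@$PROD_DIGEST"
```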

When a gate fails, the notification should include package name, fixed version, and owner, not only a CVE ID. That reduces the odds that "slow pipeline" becomes code for "security team bottleneck."

Language-specific bases follow their own upstream CVE rhythm; the same pipeline discipline applies, but patch velocity follows the runtime's release cycle.

Conclusion

Building zero-CVE container images in production is almost always policy-zero on layers you control: base image, declared dependencies, and build configuration. Application code, third-party behavior, and future CVE disclosures still need process and ownership. Pair image work with runtime controls and clear governance for exceptions so "zero" stays credible under audit.

This article is technical guidance, not legal or compliance advice. Map your program to your frameworks and contracts with your security, risk, and legal stakeholders.

Browse images: images.minimus.io
Read the docs: docs.minimus.io
Get started · Get a demo

FAQ

Is "zero CVE" achievable for every container?
You can get to policy-zero (no unacceptable findings under agreed rules) for the layers you control. Application logic, dynamic dependencies, and future CVEs mean ongoing work, not a one-time build.

Will two scanners always agree?
No. Align on one tool for enforcement, document known deltas, and treat major disagreements as a data quality issue, not a debate club.

How often should production base images rebuild?
As often as your tolerance for stale packages requires; many teams align base refreshes with weekly or daily automation plus emergency rebuilds for critical CVEs.

Does signing an image fix vulnerabilities?
No. Signing proves source and integrity. Patching still requires an updated digest from a rebuild or vendor refresh.

What is the difference between "zero CVE" and "near-zero CVE" images?
Zero CVE usually names a strict policy bar or a vendor delivery claim for a defined scope. Near-zero CVE admits residual identifiers and focuses on prioritization, VEX, and unfixable upstream issues. Both are governance terms, not magic scanner labels.

How Minimus Approaches Zero CVE for the Base Layer

You can get far with discipline on public bases. The ceiling is still the large package graph many distribution images carry for general use.

Minimus builds hardened container images from upstream source, keeps only what the workload needs, publishes signed SBOMs, and applies VEX-aware workflows so teams spend time on exploitable risk. Remediation runs on published SLA expectations when upstream fixes land. That makes "zero CVE at delivery" a contractual target for the base layer Minimus owns, not a best-effort slogan. Validate any vendor claim against your own scanner, pinned digest, and contract.

Minimus is image- and supply-chain-centric. It does not replace runtime detection for in-memory attacks or fix bad application code. It addresses CVE debt in the image layer that scanners keep flagging after you have already dropped root and tightened seccomp.

If your security model assumes verified artifacts and least privilege, Zero Trust with Minimus: Hardened Images for a Secure Foundation ties those principles to concrete image choices. 


Avoid over 97% of container CVEs

Access hundreds of hardened images, secure Helm charts, the Minimus custom image builder, and more.