
When you deploy a container, something has to actually run it. That something is the container runtime, and choosing the right one (along with the right base image) has real consequences for security, performance, and portability.
This article breaks down how container runtimes work, what static vs. dynamic means in practice, and how Minimus bakes runtime correctness into every image we ship.
A container runtime is the software responsible for running containers, including pulling images, setting up namespaces, managing storage, and starting processes. It consists of two layers that work together:
High-level runtimes handle image management, networking, and orchestrator integration. Examples: containerd, CRI-O, Docker Engine.
Low-level runtimes do the actual execution work: creating Linux namespaces, setting up cgroups, and spawning the container process from an OCI bundle. Examples: runc, crun, gVisor, Kata Containers.
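The kernel primitives a low-level runtime manipulates are visible on any Linux host through procfs; a quick, illustrative look (no container required):

```shell
# Every Linux process already lives in a set of namespaces and a cgroup;
# a low-level runtime like runc simply creates fresh ones per container.
ls /proc/self/ns       # one entry per namespace type: pid, mnt, net, uts, ...
cat /proc/self/cgroup  # the cgroup hierarchy this process is accounted in
```

A container is, at this level, just an ordinary process placed into its own set of these namespaces with resource limits applied via cgroups.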
Kubernetes does not run containers directly, but communicates with container runtimes through the Container Runtime Interface (CRI).
The execution chain is: Kubernetes → CRI → high-level runtime → low-level runtime.
Historically, Kubernetes shipped with dockershim, a compatibility shim that allowed the kubelet to talk to Docker Engine. Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24. Containerd and CRI-O are now the primary supported runtimes in Kubernetes environments.
The most consequential runtime decision is often one made at compile time: whether your binary should be statically or dynamically linked. Getting this wrong will break your container at startup.
Static binaries are entirely self-contained, with no shared library dependencies. They can run on any Linux system with a compatible kernel and matching architecture (e.g., amd64 or arm64). They are ideal for minimal base images.
Dynamic binaries link against shared system libraries at runtime, most commonly glibc. This is required when your application calls C code (CGO_ENABLED=1). The runtime environment must provide glibc.
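A quick way to tell which mode a binary was built in is ldd; a small sketch (the path passed to `linkage` is a placeholder for your own binary):

```shell
# Classify a binary as statically or dynamically linked.
# If ldd can resolve shared libraries, the binary needs them at runtime;
# on a static binary, glibc's ldd reports "not a dynamic executable"
# and exits nonzero.
linkage() {
  if ldd "$1" >/dev/null 2>&1; then
    echo dynamic   # pair with a glibc-providing base image
  else
    echo static    # safe for a minimal/static base image
  fi
}

linkage /bin/sh   # /bin/sh is dynamically linked on most glibc systems
```

Running this against the binary you are about to COPY into a runtime image catches base-image mismatches before the container ever starts.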
Minimus provides two runtime base images designed specifically for these compilation modes.
Key rule: a statically linked binary goes on the static image, and a dynamically linked binary goes on the glibc-dynamic image. Mixing these will break your container at startup.
For Go applications:
- Pure Go (CGO_ENABLED=0): use FROM reg.mini.dev/static:latest
- cgo enabled (CGO_ENABLED=1): use FROM reg.mini.dev/glibc-dynamic:latest

See the full walkthrough: Slim Down an App Using a Runtime Base.
# ── Static: self-contained, no dependencies ──────────────
FROM reg.mini.dev/go:latest AS builder
WORKDIR /app
COPY . .
ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
RUN go build -o myapp .
FROM reg.mini.dev/static:latest
COPY --from=builder /app/myapp /usr/bin/
ENTRYPOINT ["/usr/bin/myapp"]
# ── Dynamic: requires glibc at runtime ───────────────────
FROM reg.mini.dev/go:latest AS builder
WORKDIR /app
COPY . .
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN go build -o myapp .
FROM reg.mini.dev/glibc-dynamic:latest
COPY --from=builder /app/myapp /usr/bin/
ENTRYPOINT ["/usr/bin/myapp"]
Beyond static vs. dynamic, runtimes differ in how strongly they isolate workloads from the host kernel.
Standard runtimes such as runc and crun use Linux namespaces and cgroups directly to isolate containers. They are fast and lightweight, but all containers share the host kernel. This is the default in virtually every cluster.
With microVM runtimes such as Kata Containers, each container runs inside a lightweight VM with its own kernel. This provides stronger isolation for multi-tenant or regulated workloads, but at the cost of higher overhead.
gVisor intercepts syscalls in user space and reimplements large parts of the Linux kernel interface. This provides better isolation than runc without full VM overhead, though some syscall compatibility gaps exist.
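Selecting a sandboxed runtime in Kubernetes is done through a RuntimeClass object; a hypothetical sketch for gVisor (this assumes the node's containerd is already configured with a runsc handler, and the names here are illustrative):

```shell
# Write a RuntimeClass manifest that maps the name "gvisor" to the
# runsc handler; pods opt in via spec.runtimeClassName: gvisor.
cat > runtimeclass-gvisor.yaml <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF
# Apply with: kubectl apply -f runtimeclass-gvisor.yaml
```

The same pattern works for Kata (a handler such as kata pointing at the Kata shim), so a single cluster can mix isolation levels per workload.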
WebAssembly (WASM) runtimes execute WebAssembly modules with built-in sandboxing. They are extremely lightweight and cross-platform, but require the application to be compiled to WASM.
Building an image is only half the work. Many teams verify that an image builds correctly but never verify that it actually runs correctly. Minimus makes runtime correctness a formal, automated phase of its CI/CD pipeline.
Every Minimus image is tested across all version tags and architectures (amd64 and arm64). Non-Kubernetes images are validated with Docker Compose and a Python TestClient that iterates every published version. Kubernetes workloads are validated inside real clusters using Helm deployments and automated service checks.
This matters in practice because a runtime failure detected in the pipeline never reaches production. If an image fails runtime validation, it is blocked from being published to the registry across every version and architecture.
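A minimal sketch of what per-tag runtime validation can look like (illustrative only, not Minimus's actual pipeline; `run_image` stands in for a real `docker run --rm myapp:$tag --version` invocation):

```shell
# Smoke-test every published tag: start the container, check that it
# responds as expected, and fail the pipeline on the first broken version.
run_image() {
  # Stand-in for: docker run --rm "myapp:$1" --version
  echo "myapp $1"
}

validate_tags() {
  for tag in "$@"; do
    if run_image "$tag" | grep -q '^myapp'; then
      echo "PASS $tag"
    else
      echo "FAIL $tag"
      return 1
    fi
  done
}

validate_tags 1.0 1.1 latest
```

Wiring a loop like this into CI, gated before the registry push, is what turns "the image builds" into "the image runs".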
The table below provides a quick reference for common container runtimes, their isolation models, and the typical workloads they serve.

| Runtime | Isolation model | Typical workloads |
| --- | --- | --- |
| runc / crun | Namespaces and cgroups; shared host kernel | Default general-purpose cluster workloads |
| gVisor | User-space kernel intercepting syscalls | Workloads needing stronger isolation without full VM overhead |
| Kata Containers | Lightweight VM per container with its own kernel | Multi-tenant or regulated workloads |
| WASM runtimes | WebAssembly sandbox | Lightweight, cross-platform WASM modules |
The container runtime stack has more layers than it first appears. Getting each layer right (high- vs. low-level runtimes, static vs. dynamic linking, namespace- vs. VM-level isolation) directly affects security, performance, and reliability.
Minimus addresses these concerns by aligning runtime base images with application compilation mode, enforcing runtime correctness through automated testing, and validating Kubernetes readiness before publication.
Get a demo to explore Minimus base images, or jump to the runtime base guide for Go to learn how to select the correct runtime image.