Understanding Container Runtimes: Static vs Dynamic and Runtime Isolation

By Yevgeni Bulichev
March 13, 2026

When you deploy a container, something has to actually run it. That something is the container runtime, and choosing the right one (along with the right base image) has real consequences for security, performance, and portability. 

This article breaks down how container runtimes work, what static vs. dynamic means in practice, and how Minimus bakes runtime correctness into every image we ship.

What Is a Container Runtime?

A container runtime is the software responsible for running containers, including pulling images, setting up namespaces, managing storage, and starting processes. In practice, the runtime stack consists of two layers that work together:

High-Level Runtimes

Examples: containerd, CRI-O, Docker Engine

High-level runtimes handle image management, networking, and orchestrator integration. 

Low-Level Runtimes

Examples: runc, crun, gVisor, Kata

Low-level runtimes do the actual execution work: creating Linux namespaces, setting up cgroups, and spawning the container process from an OCI bundle.
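To make the namespace and cgroup machinery less abstract, note that every Linux process already belongs to a set of namespaces and a cgroup, visible under /proc. The sketch below inspects the current shell's own membership; a low-level runtime like runc creates fresh entries of exactly these kinds for each container (standard Linux paths, no container required):

```shell
# Namespaces this process belongs to; runc creates new ones per container.
ls /proc/self/ns

# Each entry resolves to an inode ID; processes that share the ID
# share that namespace.
readlink /proc/self/ns/pid     # e.g. pid:[4026531836]

# cgroup membership, which the runtime uses to enforce CPU/memory limits.
cat /proc/self/cgroup
```

Two processes inside the same container report identical namespace IDs, while processes on the host report different ones.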

Kubernetes and the Container Runtime Interface (CRI)

Kubernetes does not run containers directly; instead, it communicates with container runtimes through the Container Runtime Interface (CRI), a gRPC API that any conformant runtime can implement.

The execution chain is: Kubernetes → CRI → high-level runtime → low-level runtime. 

Historically, Kubernetes shipped with dockershim, a compatibility shim that allowed the kubelet to talk to Docker Engine. Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24. containerd and CRI-O are now the primary supported runtimes in Kubernetes environments.

Static vs. Dynamic: Why It Determines Your Base Image

The most consequential runtime decision is often one made at compile time: whether your binary should be statically or dynamically linked. Getting this wrong will break your container at startup.

Static Binaries

Static binaries are entirely self-contained, with no shared library dependencies. They can run on any Linux system with a compatible kernel and matching architecture (e.g., amd64 or arm64). They are ideal for minimal base images. 

Dynamic Binaries

Dynamic binaries link against shared system libraries at runtime, most commonly glibc. For Go, this is required when your application calls C code (CGO_ENABLED=1). The runtime image must provide glibc and its dynamic loader.
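A quick way to tell which kind of binary you have is to inspect it with file or ldd. The sketch below uses /bin/sh purely as an example of a dynamically linked binary; substitute your own application:

```shell
# A dynamically linked binary: file reports the ELF interpreter
# (the dynamic loader), and ldd lists the shared libraries it needs.
file -L /bin/sh
ldd /bin/sh

# A static binary instead reports "statically linked", and ldd prints
# "not a dynamic executable".
```

If ldd shows any dependencies, the binary needs a glibc-capable base image; if it shows none, a minimal static base image is enough.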

Minimus Static vs. Dynamic Binaries

Minimus provides two runtime base images designed specifically for these compilation modes.

Key rule

  • Static binary: reg.mini.dev/static. 
  • Dynamic binary: reg.mini.dev/glibc-dynamic. 

Mixing these will break your container at startup: a dynamically linked binary on the static image fails with a misleading "no such file or directory" error, because the glibc loader it references does not exist in the image.

For Go applications:

  • Static binaries: compile with CGO_ENABLED=0 and use FROM reg.mini.dev/static:latest
  • Dynamic binaries: compile with CGO_ENABLED=1 and use FROM reg.mini.dev/glibc-dynamic:latest

See the full walkthrough: Slim Down an App Using a Runtime Base.

# ── Static: self-contained, no dependencies ──────────────
FROM reg.mini.dev/go:latest AS builder
WORKDIR /app
COPY . .
ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
RUN go build -o myapp .

FROM reg.mini.dev/static:latest
COPY --from=builder /app/myapp /usr/bin/
ENTRYPOINT ["/usr/bin/myapp"]

# ── Dynamic: requires glibc at runtime ───────────────────
FROM reg.mini.dev/go:latest AS builder
WORKDIR /app
COPY . .
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN go build -o myapp .

FROM reg.mini.dev/glibc-dynamic:latest
COPY --from=builder /app/myapp /usr/bin/
ENTRYPOINT ["/usr/bin/myapp"]

Runtime Isolation Models

Beyond static vs. dynamic, runtimes differ in how strongly they isolate workloads from the host kernel.

Native: runc / crun (OCI-based)

These runtimes use Linux namespaces and cgroups directly to isolate containers. They are fast and lightweight, but all containers share the host kernel. This is the default in virtually every cluster.
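One consequence worth internalizing: because runc and crun isolate only with namespaces and cgroups, there is no second kernel anywhere. A small sketch of how to see this (the docker command is shown as a comment since it assumes a running daemon):

```shell
# The kernel release seen by any process, containerized or not:
uname -r

# Running the same command inside a runc-backed container returns the
# identical value, because the container shares the host kernel:
#   docker run --rm alpine uname -r
```

This shared kernel is exactly what VM-based and user-space-kernel runtimes exist to avoid.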

VM-Based: Kata Containers

Each container runs inside a lightweight VM with its own kernel. This provides stronger isolation for multi-tenant or regulated workloads, but at the cost of higher overhead.

User-Space Kernel: gVisor

gVisor intercepts syscalls in user space and reimplements large parts of the Linux kernel interface. This provides better isolation than runc without full VM overhead, though some syscall compatibility gaps exist.

WASM Runtimes: WasmEdge / Wasmtime

These runtimes execute WebAssembly modules with built-in sandboxing. They are extremely lightweight and cross-platform compatible, but require the application to be compiled to WASM.

Runtime Correctness: How Minimus Tests Every Image

Building an image is only half the work. Many teams verify that an image builds correctly but never verify that it actually runs correctly. Minimus makes runtime correctness a formal, automated phase of its CI/CD pipeline.

Every Minimus image is tested across all version tags and architectures (amd64 and arm64). Non-Kubernetes images are validated with Docker Compose and a Python TestClient that iterates every published version. Kubernetes workloads are validated inside real clusters using Helm deployments and automated service checks. 

This matters in practice because a runtime failure detected in the pipeline never reaches production. If an image fails runtime validation, it is blocked from being published to the registry across every version and architecture.

Quick Reference

The table below provides a quick reference for common container runtimes, their isolation models, and the types of workloads they are typically used for in container environments.

Runtime           Layer   Isolation            Best For
Docker Engine     High    Namespace            Dev workflows
containerd        High    —                    Default in Kubernetes
CRI-O             High    —                    OpenShift, minimal setups
runc / crun       Low     Namespace / cgroup   Standard workloads
Kata Containers   Low     VM                   Multi-tenant, regulated
gVisor (runsc)    Low     Syscall intercept    Untrusted workloads
WasmEdge          Low     WASM sandbox         Serverless, edge

Putting It All Together

The container runtime stack has more layers than it first appears. Getting each layer right, from high- vs. low-level runtimes to static vs. dynamic linking to namespace- vs. VM-level isolation, directly affects security, performance, and reliability.

Minimus addresses these concerns by aligning runtime base images with application compilation mode, enforcing runtime correctness through automated testing, and validating Kubernetes readiness before publication.

Get a demo to explore Minimus base images, or jump to the runtime base guide for Go to learn how to select the correct runtime image.

Yevgeni Bulichev
Escalation Engineer