Docker Sandboxes, MicroVMs, and Kubernetes v1.36: The New Infrastructure for AI Agents

The infrastructure landscape is shifting fast — not because of yet another framework, but because AI coding agents are forcing a fundamental rethink of how we isolate, schedule, and secure workloads. Two major developments this month underscore this shift: Docker’s new sandbox architecture built on custom MicroVMs, and Kubernetes v1.36’s upcoming release packed with security hardening and DRA enhancements. Let’s break down what’s happening and why it matters for your stack.

The Problem: AI Agents Need Real Isolation

Over a quarter of all production code is now AI-authored, and developers using agents are merging roughly 60% more pull requests. But here’s the catch — those productivity gains only materialize when agents run autonomously. And autonomous agents need to docker build, docker run, install packages, and execute shell commands. Traditional sandboxing approaches break down fast.

Why Containers Alone Don’t Cut It

Containers are fast and familiar, but an autonomous agent needs to build and run its own Docker containers (which coding agents routinely do), and that means Docker-in-Docker, which requires elevated privileges that undermine the very isolation you set up in the first place. WASM and V8 isolates are fast to spin up, but your agent can’t install system packages or run arbitrary shell commands. Full VMs offer strong isolation but bring slow cold starts and heavy resource overhead.
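To make the Docker-in-Docker trade-off concrete, here’s a sketch using the standard docker:dind image from Docker Hub. The inner daemon typically fails to start without elevated privileges, which is exactly what pushes teams toward --privileged:

```shell
# Docker-in-Docker: the inner daemon needs --privileged, which disables
# most of the kernel-level isolation the container had in the first place.
docker run -d --privileged --name agent-dind docker:dind

# An unprivileged attempt typically fails at daemon startup, so teams
# reach for --privileged and quietly give the workload near-host power.
docker run -d --name agent-dind-unpriv docker:dind
```

Once the daemon runs with --privileged, a compromised agent inside it is one step from the host, not several.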

Running Without Sandboxing Is a Liability

One rm -rf, one leaked .env, one rogue network call — and the blast radius is your entire machine. As Docker’s team puts it: “An LLM deciding its own security boundaries is not a security model. The bounding box has to come from infrastructure, not from a system prompt.”

Docker Sandboxes: MicroVM Architecture Deep Dive

Docker’s answer, launched just this week, runs each agent session inside a dedicated MicroVM with three key architectural decisions:

  • Dedicated MicroVM per session. Each sandbox gets its own kernel — hardware-boundary isolation identical to a full VM. A compromised agent can’t reach the host, other sandboxes, or anything outside its environment.
  • Private, VM-isolated Docker daemon. Each agent gets its own Docker daemon running inside the MicroVM, fully isolated by the VM boundary. Full docker build, docker run, and docker compose support — no socket mounting, no host-level privileges.
  • No path back to the host. File access, network policies, and secrets are defined before the agent runs, not enforced by the agent itself.
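For contrast, this is the socket-mounting pattern that the private, VM-isolated daemon replaces. The agent image name here is a hypothetical placeholder; the socket path is the standard one:

```shell
# Today's common pattern: hand the agent the host's Docker socket.
# Anything the agent runs can now start privileged containers on the
# host, mount arbitrary host paths, and escape any container boundary.
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/workspace \
  my-agent-image   # hypothetical agent image

# In the sandbox model, the same docker build / docker run workflow
# talks to a daemon that lives entirely inside the MicroVM; the host
# socket is never exposed.
```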

Why Docker Built a Custom VMM

The most interesting engineering decision here is that Docker built a new VMM from scratch rather than using Firecracker. The reason: Firecracker was designed for cloud infrastructure (specifically Linux/KVM environments like AWS Lambda). It has no native support for macOS or Windows. But coding agents don’t run in the cloud — they run on developer laptops across all three platforms.

Docker’s custom VMM runs natively on macOS (Apple Hypervisor.framework), Windows (Windows Hypervisor Platform), and Linux (KVM) from a single codebase with zero translation layers. This means fast cold starts and consistent isolation guarantees regardless of host OS.

Kubernetes v1.36: What’s Coming (April 22, 2026)

While Docker tackles local agent isolation, Kubernetes is doubling down on cluster-level security and resource efficiency. v1.36 is planned for April 22 and brings several significant changes.

Ingress-NGINX Is Officially Retired

As of March 24, 2026, Ingress-NGINX is retired. No further releases, no bugfixes, no security patches. If you’re still running it in production, now is the time to migrate. The community has released ingress2gateway 1.0, a tool specifically designed to help teams migrate from Ingress to Gateway API safely.

Deprecation of Service ExternalIPs

The .spec.externalIPs field in Services is being deprecated in v1.36, with full removal planned for v1.43. This field has been a known security headache for years, enabling man-in-the-middle attacks on cluster traffic (CVE-2020-8554). If you’re still using it, migrate to LoadBalancer services, NodePort, or Gateway API.
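A migration sketch, assuming a Service named legacy-svc (hypothetical) and a cluster with a load-balancer provider available:

```shell
# Inspect the deprecated field on a suspect Service.
kubectl get svc legacy-svc -o jsonpath='{.spec.externalIPs}'

# Migration: remove the field, then let a LoadBalancer hand out the
# external address instead of kube-proxy routing an arbitrary IP.
kubectl patch svc legacy-svc --type=json \
  -p '[{"op": "remove", "path": "/spec/externalIPs"}]'
kubectl patch svc legacy-svc -p '{"spec": {"type": "LoadBalancer"}}'
```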

gitRepo Volume Driver Permanently Disabled

Deprecated since v1.11, the gitRepo volume plugin is now permanently disabled. It allowed attackers to run code as root on the node. Use init containers or git-sync tools instead.
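A common replacement is an init container that clones into a shared emptyDir. The Pod name and repository URL below are hypothetical; the alpine/git image’s entrypoint is git, so the args are the git subcommand:

```shell
# gitRepo replacement: clone in an init container, share via emptyDir.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: repo-consumer              # hypothetical Pod name
spec:
  initContainers:
  - name: clone
    image: alpine/git
    args: ["clone", "--depth=1", "https://example.com/repo.git", "/repo"]
    volumeMounts:
    - name: source
      mountPath: /repo
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /repo && sleep 3600"]
    volumeMounts:
    - name: source
      mountPath: /repo
  volumes:
  - name: source
    emptyDir: {}
EOF
```

Unlike gitRepo, the clone runs as an ordinary unprivileged container, not as kubelet-driven code on the node.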

DRA: Partitionable Devices and Taints

The Dynamic Resource Allocation (DRA) framework gets two major enhancements:

  • Partitionable Devices (KEP-4815): Split a single GPU or accelerator into multiple logical units shared across workloads. Critical for cost efficiency when dedicating an entire GPU to one pod is wasteful.
  • Device Taints and Tolerations (KEP-5055): DRA drivers can now mark devices as tainted, ensuring specialized hardware is only used by workloads that explicitly request it. Graduates to beta in v1.36.
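As a rough illustration of what a DRA request looks like, here is a minimal ResourceClaim. The API version and field names track the still-evolving resource.k8s.io group and the device class is hypothetical, so treat this as a sketch rather than a canonical manifest:

```shell
# Sketch of a DRA ResourceClaim (field names may vary by release).
kubectl apply -f - <<'EOF'
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: shared-gpu-slice           # hypothetical claim name
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com   # hypothetical DeviceClass
EOF
```

With partitionable devices, the driver can satisfy such a claim from a logical slice of a physical GPU rather than the whole card.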

SELinux Volume Labeling Goes GA

Recursive SELinux relabeling of volume files is replaced with mounting the volume using the -o context= option, which applies the correct label once at mount time instead of walking every file. This brings consistent performance and reduces Pod startup delays on SELinux-enforcing systems — particularly impactful for security-hardened production clusters with large volumes.
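Conceptually, the change swaps a recursive relabel for a mount option. The context value and device path below are just examples:

```shell
# New behavior (conceptually): label the whole mount in one operation.
mount -o context="system_u:object_r:container_file_t:s0:c123,c456" \
  /dev/sdb1 /mnt/data

# Old behavior was the moral equivalent of:
#   chcon -R system_u:object_r:container_file_t:s0:c123,c456 /mnt/data
# which walks every file and scales with volume size.
```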

Practical Takeaways: What You Should Do Now

1. Audit Your Ingress Setup

If you’re on Ingress-NGINX, start your Gateway API migration immediately. Use the ingress2gateway tool to convert your existing configs:

# Install ingress2gateway
go install github.com/kubernetes-sigs/ingress2gateway@latest

# Convert existing Ingress resources
ingress2gateway print --providers ingress-nginx \
  --input-file my-ingress.yaml > gateway-routes.yaml

2. Check for Deprecated API Usage

Scan your cluster for externalIPs usage and gitRepo volumes before upgrading to v1.36:

# Find services using externalIPs
kubectl get svc -A -o json | jq -r \
  '.items[] | select(.spec.externalIPs != null and .spec.externalIPs != []) |
  "\(.metadata.namespace)/\(.metadata.name): \(.spec.externalIPs)"'

# Find pods using gitRepo volumes
kubectl get pods -A -o json | jq -r \
  '.items[] | select(any(.spec.volumes[]?; .gitRepo != null)) |
  "\(.metadata.namespace)/\(.metadata.name)"'

3. Evaluate Docker Sandboxes for Agent Workflows

If your team is using AI coding agents (Claude Code, Codex, Cursor Agent, etc.), Docker Sandboxes provide the isolation model these tools need. Each agent session gets its own MicroVM with a private Docker daemon — no more sharing your host’s Docker socket with an autonomous AI.

The Bigger Picture

What we’re seeing is infrastructure adapting to a new class of workload. AI agents aren’t just another application — they’re autonomous actors that need real development environments with real isolation. Docker’s MicroVM approach solves this locally, while Kubernetes v1.36’s DRA enhancements and security deprecations address the cluster side.

The days of treating agent isolation as optional are over. Whether you’re running agents on developer laptops or scheduling GPU workloads in production clusters, the infrastructure is finally catching up to the security and efficiency requirements that AI workloads demand.

