If you’ve been heads-down in your CI/CD pipelines this week, you missed some seismic shifts in the infrastructure landscape. Kubernetes v1.36 is dropping on April 22 with breaking changes that will bite you if you’re still using externalIPs or gitRepo volumes. Docker just shipped sandboxes specifically designed to run AI agents safely. And the Trivy supply chain compromise exposed just how fragile our container security trust model really is.
I’ve been running containerized workloads in production since the Docker 0.9 days, and this is one of the most consequential weeks I can remember for the DevOps ecosystem. Let me walk you through what changed, why it matters, and what you need to do about it — with practical migration steps you can implement today.
Kubernetes v1.36: The Breaking Changes You Can’t Ignore
Kubernetes v1.36 is scheduled for release on April 22, 2026, and it brings two significant removals that will impact existing workloads. The release also graduates several important features to GA and beta status.
Ingress-NGINX Is Officially Dead
The biggest news isn’t even in the v1.36 changelog — it’s the formal retirement of Ingress-NGINX on March 24, 2026. SIG-Security pulled the plug after years of accumulating security issues. There will be no more bugfixes, no security patches, nothing. If you’re still running Ingress-NGINX in production, you are now operating without a safety net.
The migration path is Gateway API, and the community has shipped ingress2gateway 1.0 to automate the conversion. Here’s how to convert your existing Ingress resources:
# Install the migration tool
go install github.com/kubernetes-sigs/ingress2gateway@latest
# Convert your existing Ingress manifests
ingress2gateway print --providers ingress-nginx \
--input-file my-ingress.yaml > my-gateway.yaml
# Apply the Gateway API resources
kubectl apply -f my-gateway.yaml
# Verify the gateway is routing traffic
kubectl get gateway -A
kubectl describe gateway my-gateway
Don’t sleep on this migration. I’ve seen teams get burned by “it still works” complacency with deprecated Kubernetes features. The difference here is that when a CVE drops for Ingress-NGINX — and it will — there will be no patch coming.
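Before migrating, inventory what is still pointing at the retired controller. A quick audit sketch, assuming jq is installed and "nginx" is your ingress class name (check yours, since clusters vary):

```shell
# List Ingress resources still using the ingress-nginx controller,
# matched via spec.ingressClassName or the legacy annotation
kubectl get ingress -A -o json | jq -r '
  .items[]
  | select(.spec.ingressClassName == "nginx"
      or (.metadata.annotations["kubernetes.io/ingress.class"] // "") == "nginx")
  | "\(.metadata.namespace)/\(.metadata.name)"'
```

Anything this prints is running without a safety net today.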
Service externalIPs Deprecation (KEP-5707)
The .spec.externalIPs field in Services is being deprecated in v1.36, with full removal planned for v1.43. This field has been a known security liability since CVE-2020-8554, enabling man-in-the-middle attacks on cluster traffic. Starting now, you’ll see deprecation warnings, and the clock is ticking.
If your manifests reference externalIPs, migrate to one of these alternatives:
# BEFORE: Using externalIPs (deprecated)
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
externalIPs:
- 203.0.113.10
ports:
- port: 80
targetPort: 8080
# AFTER: Use LoadBalancer for cloud environments
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
# OR: Use Gateway API for flexible external traffic
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: my-route
spec:
parentRefs:
- name: my-gateway
rules:
- backendRefs:
- name: my-service
port: 8080
gitRepo Volume Driver Removed (KEP-5040)
The gitRepo volume type has been deprecated since v1.11 — yes, that’s how old this is. In v1.36, it’s permanently disabled. This is a security win: the old driver could allow attackers to execute code as root on the node. If you had any lingering gitRepo volumes, replace them with init containers:
# Modern replacement for gitRepo volumes
apiVersion: v1
kind: Pod
metadata:
name: git-sync-pod
spec:
initContainers:
- name: git-clone
image: alpine/git:latest
command:
- git
- clone
- https://github.com/myorg/repo.git
- /repo
volumeMounts:
- name: repo-volume
mountPath: /repo
containers:
- name: app
image: myapp:latest
volumeMounts:
- name: repo-volume
mountPath: /app/repo
volumes:
- name: repo-volume
emptyDir: {}
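The init container above is a one-shot clone at pod start. If you used gitRepo for content that changes over time, the long-standing community replacement is a git-sync sidecar. A sketch, with the image tag, repo URL, and sync period as illustrative values:

```yaml
# Sketch: continuous repo sync via a git-sync sidecar
# (image tag and repo URL are examples; pin versions you've vetted)
apiVersion: v1
kind: Pod
metadata:
  name: git-sync-pod
spec:
  containers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.2.3
    args:
    - --repo=https://github.com/myorg/repo.git
    - --root=/repo
    - --period=30s
    volumeMounts:
    - name: repo-volume
      mountPath: /repo
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: repo-volume
      mountPath: /app/repo
      readOnly: true
  volumes:
  - name: repo-volume
    emptyDir: {}
```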
Features Worth Upgrading For
DRA Partitionable Devices — Finally, GPU Sharing Done Right
The Dynamic Resource Allocation (DRA) framework in v1.36 introduces partitionable devices (KEP-4815), allowing a single GPU or accelerator to be split into multiple logical units shared across workloads. This is a game-changer for cost optimization. Instead of dedicating an entire A100 to a single inference workload, you can partition it and run multiple workloads with proper isolation.
Combined with the DRA device taints and tolerations feature (KEP-5055, graduating to beta), cluster administrators now have fine-grained control over which workloads can access specialized hardware.
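To make this concrete, here is a hedged sketch of claiming a device slice with the DRA v1 API; the device class name (gpu.example.com) is a placeholder, since it comes from whichever DRA driver you install:

```yaml
# Sketch: a workload claims one logical slice of a partitionable device.
# The deviceClassName is driver-specific and illustrative here.
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-slice-claim
spec:
  devices:
    requests:
    - name: gpu-slice
      exactly:
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: gpu-slice-claim
  containers:
  - name: inference
    image: myapp:latest
    resources:
      claims:
      - name: gpu
```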
SELinux Volume Labeling Goes GA
The recursive SELinux relabeling bottleneck that slowed pod startup on SELinux-enforcing systems is now GA (KEP-1710). Instead of walking every file in a volume to apply labels, Kubernetes now uses mount -o context=XYZ to apply the correct label at mount time. The result: dramatically faster pod scheduling on RHEL, Fedora, and other SELinux-enabled distributions.
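You don't invoke the mount option yourself; the kubelet applies it whenever a pod declares an SELinux context. A minimal sketch (the MCS level and claim name are illustrative):

```yaml
# Sketch: with the GA behavior, this volume is labeled via a single
# mount option at attach time instead of a recursive per-file walk
apiVersion: v1
kind: Pod
metadata:
  name: selinux-fastpath-pod
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
```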
Docker’s AI Agent Infrastructure Play
While Kubernetes is evolving its resource management, Docker is going all-in on AI agent infrastructure. Three announcements this week caught my attention:
Docker Sandboxes for AI Agents
Docker released Docker Sandboxes — isolated execution environments purpose-built for running AI agents autonomously. Over 25% of production code is now AI-authored, but the real productivity gains only emerge when agents can run without constant human approval. Docker Sandboxes give agents a contained environment where they can execute code, modify files, and run commands without risking the host system.
# Run an AI coding agent in a Docker sandbox
docker sandbox run --agent my-coding-agent \
--image python:3.12-slim \
--volume ./project:/workspace
# The agent operates within the sandbox
# with network isolation and resource limits
# while maintaining full tool access
Docker Offload Reaches GA
Docker Offload is now generally available, and it solves a real enterprise pain point: running Docker in locked-down VDI environments. Instead of fighting with IT departments over virtualization permissions, Offload lets developers use the full Docker experience — builds, runs, composes — on any machine, even those with restrictive security policies. For organizations managing hundreds of developer workstations, this removes a significant friction point.
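Getting started is a single session toggle. A sketch of the flow, assuming the docker offload CLI plugin that ships with current Docker Desktop builds (exact commands may evolve):

```shell
# Start an Offload session: subsequent builds and runs execute
# on a remote cloud engine while the local CLI behaves as usual
docker offload start

# These now run remotely; no local virtualization required
docker build -t myapp:latest .
docker run --rm myapp:latest

# End the session and return to the local engine
docker offload stop
```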
Docker Model Runner on DGX Station
Docker Model Runner now supports NVIDIA DGX Station GB300, bringing LLM inference to developer workstations with familiar Docker commands. Combined with the Gemma 4 models now available on Docker Hub, you can pull and run models locally:
# Pull and run Gemma 4 locally via Docker
docker model pull gemma4:4b
docker model run gemma4:4b
# Model Runner detects and uses the station's GPUs automatically,
# so no explicit --gpus flag is needed on DGX Station hardware
The Trivy Supply Chain Compromise: A Reality Check
On March 19, 2026, threat actors compromised Aqua Security’s CI/CD pipeline and pushed backdoored versions of the aquasec/trivy image to Docker Hub. A second wave followed on March 22. The malicious images contained an infostealer targeting CI/CD secrets, cloud credentials, SSH keys, and Docker configurations.
This is the supply chain attack everyone has been warning about, and it hit one of the most widely-used security tools in the container ecosystem. The irony of a vulnerability scanner being the attack vector isn’t lost on anyone.
If you pulled Trivy images between March 19 and 23, take these steps immediately:
# 1. Check for compromised images
docker images | grep trivy
# 2. Remove all local copies — the tags were overwritten, so treat
#    every cached aquasec/trivy image as suspect
docker rmi $(docker images -q aquasec/trivy)
# 3. Rotate any credentials that were in CI/CD environments
# where the compromised images ran
# 4. Pin image digests instead of tags going forward
docker pull aquasec/trivy@sha256:VERIFIED_DIGEST
# 5. Implement Cosign verification (keyless verification needs the
#    publisher's signing identity and OIDC issuer; the values below
#    are placeholders, so check the vendor's docs for the real ones)
cosign verify \
  --certificate-identity-regexp 'https://github.com/aquasecurity/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  aquasec/trivy:latest
This incident underscores why Docker’s new partnership with Mend.io for Docker Hardened Images matters. Using VEX (Vulnerability Exploitability eXchange) statements to differentiate between exploitable and theoretical vulnerabilities is becoming essential for maintaining signal-to-noise ratio in security alerts.
Action Items for This Week
Here’s your prioritized checklist based on this week’s developments:
- Audit for Ingress-NGINX usage — Install ingress2gateway, run it against your manifests, and start testing Gateway API in staging immediately. The retirement is effective now.
- Scan for externalIPs in Service specs — kubectl get svc -A -o json | jq '.items[] | select(.spec.externalIPs != null) | .metadata.name' will find them. Plan your migration to LoadBalancer or Gateway API.
- Verify Trivy image integrity — If you pulled Trivy between March 19 and 23, rotate credentials and verify image signatures going forward.
- Start using Cosign for image verification — The supply chain attack demonstrated that tags alone are insufficient. Pin to digests and verify signatures.
- Evaluate Docker Sandboxes for AI agents — If your team is using AI coding agents, sandboxed execution is the safe way to unlock autonomous operation.
The infrastructure landscape is moving faster than ever. Kubernetes is cleaning house with long-overdue removals, Docker is pivoting hard toward AI-native workflows, and supply chain security is no longer theoretical. The teams that adapt to these shifts quickly will be the ones shipping reliably while everyone else is firefighting.
What’s your migration plan for Ingress-NGINX? I’d love to hear how other teams are approaching the Gateway API transition — drop a comment below.