This week delivered three stories that, taken together, paint a vivid picture of where the tech industry is heading in mid-2026. A major open-source company reversed course and went closed. A new AI model demonstrated that cybersecurity is now fundamentally a spending problem. And the internet quietly crossed a milestone that’s been decades in the making.
Let’s break down what happened, why it matters, and what it means for those of us building production systems.
Cal.com Goes Closed Source: The AI Security Calculus
After five years as an open-source scheduling platform, Cal.com announced they’re moving their core codebase to a closed repository. Their reasoning is blunt and worth reading in full: AI can now systematically scan open codebases for vulnerabilities, making public source code “like giving attackers the blueprints to the vault.” They released a hobbyist fork called Cal.diy under the MIT license for community use.
This is a bellwether moment. Not because Cal.com is the first company to go closed after starting open (it happens), but because of why. The explicit argument is that AI has changed the security calculus of open source. When a determined attacker can point an LLM at your codebase and have it systematically enumerate attack surfaces, dependency chains, and authentication flaws in minutes rather than weeks, the old assumption—that more eyes on the code makes it more secure—starts to break down.
As engineers, we should be thinking about what this means for our own open-source dependencies. Are you running any self-hosted open-source tools that expose attack surfaces? Because if you are, you’re now in a race against automated vulnerability discovery.
“Cybersecurity Is Proof of Work Now”: The Mythos Wake-Up Call
The second story makes Cal.com’s decision look prescient. Anthropic’s unreleased “Mythos” model—an AI system so capable at offensive security that Anthropic chose not to release it publicly—was evaluated by the UK’s AI Security Institute (AISI). The results are sobering.
Mythos completed a 32-step corporate network attack (estimated at 20 hours for a skilled human) in 3 out of 10 attempts, at approximately $12,500 per attempt, consuming 100 million tokens with no diminishing returns. The AISI confirmed that the model showed consistent capability across the full attack chain—from initial reconnaissance to lateral movement to data exfiltration.
The essay “Cybersecurity Looks Like Proof of Work Now” (widely discussed on Hacker News at 385 points) crystallizes the implication: security is now a token-spending arms race. Attackers with access to frontier models can throw compute at your defenses. You must outspend them on the defensive side—automated hardening, continuous red-teaming with AI, and fundamentally different assumptions about what “secure” means.
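The arms-race framing is easy to quantify. Here’s a back-of-envelope sketch using the AISI figures above; the two inputs are the reported numbers, and everything else is illustrative:

```python
# Back-of-envelope attacker economics from the reported AISI evaluation:
# ~$12,500 per attempt, succeeding in 3 of 10 tries.
cost_per_attempt = 12_500   # USD in model tokens per attack attempt
success_rate = 3 / 10       # observed success rate on the 32-step scenario

# Expected spend for one successful end-to-end intrusion.
expected_cost_per_breach = cost_per_attempt / success_rate
print(f"${expected_cost_per_breach:,.0f}")  # prints "$41,667"
```

Roughly $42k for a breach that previously required 20 hours of a skilled operator’s time is the “proof of work” point in miniature: the bottleneck is budget, not talent, and budgets scale in ways talent never did.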
For engineering teams, this should trigger an immediate review of your security posture. The old model—“we’re not a target, we’re too small”—is dead. AI makes everyone a target because the marginal cost of scanning and attacking is approaching zero.
IPv6 Crosses 50%: The Infrastructure Milestone Nobody Celebrated
Meanwhile, Google’s IPv6 adoption statistics quietly crossed 50% this week—meaning more than half of the users reaching Google’s services now connect over IPv6. For those of us who’ve been in the industry long enough to remember the IPv6 “transition” that was always “two years away,” this is a genuine milestone.
What does this mean in practice? If you’re running production infrastructure, it’s time to stop treating IPv6 as a nice-to-have. IPv6-only deployments are now practical. AWS, GCP, and Azure all support IPv6-only VPCs. Mobile networks (T-Mobile in the US, Reliance Jio in India) have been IPv6-majority for years. Your users are increasingly connecting over IPv6, and if your service is IPv4-only, IPv6-only clients reach you through NAT64 translation, which adds latency and complexity.
Here’s a quick checklist for your services:
- DNS AAAA records for all public-facing services
- Load balancers configured for dual-stack (IPv4 + IPv6)
- Firewall rules audited for IPv6 (many “default allow” IPv6 configurations are invisible in security reviews)
- Application-layer support: ensure you’re not storing IP addresses as 32-bit integers
- Monitoring and alerting on IPv6 paths (not just IPv4)
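On the application-layer item specifically, Python’s stdlib `ipaddress` module handles both families uniformly. A minimal sketch (the `normalize_ip` helper name is my own, purely illustrative):

```python
import ipaddress

def normalize_ip(raw: str) -> str:
    """Canonical textual form that works for both IPv4 and IPv6.

    Storing this string (or addr.packed, a 4- or 16-byte value)
    avoids the 32-bit-integer trap entirely.
    """
    addr = ipaddress.ip_address(raw)
    return addr.compressed  # collapses zero runs, lowercases hex digits

print(normalize_ip("192.168.0.1"))                              # prints "192.168.0.1"
print(normalize_ip("2001:0DB8:0000:0000:0000:0000:0000:0001"))  # prints "2001:db8::1"
```

The same pattern applies at the database layer: a `VARCHAR(45)` or 16-byte binary column covers both families, where a 32-bit integer column silently can’t.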
OpenAI Agents SDK Gets Native Sandboxing
Also this week: OpenAI shipped a significant update to their Agents SDK, adding native sandboxing (agents run in controlled environments) and an “in-distribution harness” for deploying and testing agents on long-horizon tasks. This is directly relevant if your team is evaluating agent frameworks for production use.
The sandboxing feature means agents can execute arbitrary code without risking the host system—a prerequisite for any production deployment where agents interact with user-supplied input. The harness provides structured evaluation of agent performance over multi-step tasks, addressing the “it worked in the demo but fails on real workloads” problem that’s plagued agent deployments.
For teams building with LLM agents, this is worth evaluating alongside alternatives like LangGraph, CrewAI, or custom orchestrations. The native sandboxing is a differentiator—most other frameworks require you to bring your own containerization layer.
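If you do end up bringing your own isolation layer, the core idea is simple even if production-grade isolation is not. Here’s a deliberately crude sketch (separate process, isolated interpreter mode, empty environment, and a wall-clock timeout; there is no filesystem or network isolation, so this is not a real sandbox, and `run_untrusted` is an illustrative name, not any SDK’s API):

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run untrusted Python in a child process with a timeout.

    This only demonstrates the shape of the problem: real sandboxes
    add filesystem, network, and syscall isolation on top
    (containers, gVisor, microVMs, and so on).
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode (ignores env vars and user site-packages)
            capture_output=True, text=True, timeout=timeout_s, env={},
        )
        return proc.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(21 * 2)"), end="")  # prints "42"
```

The gap between this sketch and something production-safe is exactly the gap a framework with native sandboxing closes for you.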
Darkbloom: Private AI Inference on Idle Macs
A project that caught my eye on Hacker News (213 points): Darkbloom enables distributed, privacy-preserving LLM inference across idle macOS devices. Users contribute spare compute from their Macs to run AI inference without sending data to cloud providers.
It’s part of the growing “local AI” / edge inference trend, but with a twist: instead of running models locally on one machine, it distributes inference across a fleet of idle devices. Think of it as SETI@home for LLM inference. Whether it scales to production-quality performance is an open question, but the approach is interesting for organizations with data sovereignty requirements that can’t justify dedicated GPU infrastructure.
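The pipeline-parallel idea behind this kind of system can be sketched in a few lines. Everything below is a toy (the post doesn’t describe Darkbloom’s actual protocol): each “device” owns a contiguous slice of the model’s layers and forwards activations to the next.

```python
from typing import Callable, List

Layer = Callable[[float], float]

def make_stage(layers: List[Layer]) -> Callable[[float], float]:
    """One device runs its assigned contiguous slice of the model."""
    def stage(x: float) -> float:
        for layer in layers:
            x = layer(x)
        return x
    return stage

# A fake 4-"layer" model, split across two idle devices.
model: List[Layer] = [lambda x: x + 1, lambda x: x * 2,
                      lambda x: x + 3, lambda x: x * 4]
device_a = make_stage(model[:2])  # layers 0-1
device_b = make_stage(model[2:])  # layers 2-3

# End-to-end inference chains the stages, as a scheduler would over the network.
print(device_b(device_a(1.0)))  # prints "28.0"
```

The hard parts are the ones the toy omits: serializing activations across the network, handling peers that go from idle to busy mid-request, and keeping prompts private on untrusted hardware, which is presumably where the “privacy-preserving” claim does its work.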
What I’m Taking Away
Three themes emerge from this week’s news:
- AI is reshaping security assumptions — both for open-source licensing (Cal.com) and defensive infrastructure (Mythos). The threat model has fundamentally changed.
- The internet’s substrate is evolving — IPv6 at 50% means infrastructure decisions need to account for IPv6-first users.
- Agent tooling is maturing rapidly — OpenAI’s sandboxed SDK and projects like Darkbloom show that the ecosystem is moving from “can we build agents?” to “how do we deploy them safely at scale?”
What caught your attention this week? I’d be curious to hear what’s changing in your stack.