Remember 2022? Our jobs were safe, the pay was decent, and we had just discovered that ChatGPT could save us from the occasional Stack Overflow rabbit hole. Fast forward to today, and that little tool seems determined to promote us all from software developers to customers.
But here’s the thing: while some folks argue that the cost of writing code is now approaching zero, the laws of building good, reliable software haven’t changed one bit. If anything, they’re more important than ever — especially when a junior dev with an AI assistant can generate a thousand lines of code faster than you can say “undefined is not a function.”
So let’s take a tour through 13 principles and laws that explain why software projects are the way they are. And don’t worry — we won’t bore you with SOLID, DRY, or KISS. Those are fundamental, sure, but apparently only geeks care about fundamentals in an era where code is generated with all the predictability of a Vegas casino.
1. The 90-90 Rule — The Math That Doesn’t Math
The first and most important rule of modern software development states that you’ll spend 90% of your time building the first 90% of a product, and you’ll spend the other 90% building the remaining 10%. Yes, that’s 180%. No, it doesn’t make mathematical sense. But any developer who has shipped a product knows this is terrifyingly accurate.
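The arithmetic, such as it is, sketched in Python (the 100-day plan is invented purely for illustration):

```python
planned_days = 100  # hypothetical schedule for the whole project

# 90% of the time goes to the first 90% of the product...
first_90_percent = 0.9 * planned_days   # 90 days

# ...and the *other* 90% goes to the remaining 10%.
last_10_percent = 0.9 * planned_days    # another 90 days

total = first_90_percent + last_10_percent
print(f"{total / planned_days:.0%} of the original estimate")  # prints "180% of the original estimate"
```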
Attributed to Tom Cargill of Bell Labs and popularized in Jon Bentley’s Programming Pearls, this rule captures the essence of software development: the last mile is always the longest. That final 10% is where the edge cases live, the weird bugs breed, and your sanity goes to die.
The good news? With AI, we can now complete 180% of the work 10 times faster. Or at least that’s what my manager keeps posting on LinkedIn.
2. Brooks’s Law — Why Throwing People at the Problem Won’t Work
If you think adding another developer to your late project will speed things up, Fred Brooks has some bad news for you. In his classic The Mythical Man-Month (1975), Brooks observed that adding manpower to a late software project makes it later.
New team members need time to ramp up. They consume existing members’ time for training and coordination. Communication overhead grows quadratically with team size: n people form n(n−1)/2 pairwise communication channels. The project gets slower, not faster.
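That quadratic growth is just the handshake problem. A quick sketch shows why doubling a team more than quadruples the number of conversations that can go wrong:

```python
def communication_channels(team_size: int) -> int:
    """Number of pairwise communication channels in a team (the handshake problem)."""
    return team_size * (team_size - 1) // 2

# Doubling a team from 5 to 10 people more than quadruples the channels:
print(communication_channels(5))   # 10
print(communication_channels(10))  # 45
```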
In the modern AI era, we can update this: adding an AI agent to a late software project will probably make it both later and more expensive. The agents hallucinate, introduce subtle bugs, and someone still has to review every line they produce. Sound familiar?
3. Ringelmann Effect — The More, the Merrier? Think Again
Closely related to Brooks’s Law is the Ringelmann Effect, which states that individual productivity decreases as group size increases. First documented by French agricultural engineer Maximilien Ringelmann in 1913 (yes, this has been a problem for over a century), it’s the reason ten people pulling on a rope each pull less hard than one person alone — a close cousin of Brooks’s own observation that nine women can’t deliver a baby in one month.
Adding more people to a project doesn’t linearly speed it up. It just creates more Slack channels, more stand-ups, more “syncs,” and more time wasted in meetings where three people could have handled the work in half the time.
What’s truly impressive is that if you try hard enough, you’ll hit the point where adding people yields negative returns per person. More on that later when we get to complain about managers.
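A toy model makes the negative-returns point concrete. Assume (purely for illustration — the 10% figure is made up) that every extra member shaves a tenth off everyone’s individual output:

```python
def team_output(n: int, loss_per_member: float = 0.10) -> float:
    """Toy Ringelmann model: each extra member reduces everyone's productivity."""
    individual = max(0.0, 1.0 - loss_per_member * (n - 1))
    return n * individual

# Total output peaks around 5-6 people, then adding people actually shrinks it:
for n in range(1, 9):
    print(n, round(team_output(n), 2))
```

Under this (entirely invented) model, the sixth hire adds nothing and the seventh makes the team slower — negative returns per person, exactly as advertised.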
4. Hofstadter’s Law — The Recursive Nightmare
Speaking of estimates — we all know they’re fiction dressed up in Jira tickets. But Hofstadter’s Law (from Douglas Hofstadter’s Gödel, Escher, Bach, 1979) captures the frustration perfectly: it always takes longer than you expect, even when you take into account Hofstadter’s Law.
See? Recursion jokes. In software. Who would have thought.
We humans are simply bad at estimating complex tasks. Every sprint, we confidently commit to a pile of work, and every sprint, we carry half of it over. The irony is that knowing about this law doesn’t make you any better at estimating — it just makes you more painfully aware of how bad you are at it.
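In the spirit of the law, here’s a tongue-in-cheek estimator. The 1.5x correction factor is, of course, made up — and per the law itself, still too optimistic:

```python
def hofstadter_estimate(naive_days: float, corrections: int = 3) -> float:
    """Apply Hofstadter's Law: pad the estimate, then pad the padded estimate.

    Each pass multiplies by a made-up 1.5x factor; per the law,
    the result will still be too low.
    """
    estimate = naive_days
    for _ in range(corrections):
        estimate *= 1.5  # account for the law... which still applies afterwards
    return estimate

print(hofstadter_estimate(10))  # ~34 days, and you'll still miss the deadline
```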
5. Sturgeon’s Law — 90% of Everything Is Crap
Not meeting deadlines is a given in software. But once you do finally ship your product, the real fun begins. Science fiction author Theodore Sturgeon famously declared that 90% of everything is crap.
In software terms: for every product, there’s a tiny 10% of features that users actually care about, and a massive 90% that exists purely so people can stay busy. The real challenge of building software isn’t writing code — it’s figuring out which 10% matters.
Consider the App Store: roughly 2.2 million apps, nearly a million publishers, and about 25% of apps are abandoned after a single use. Retention after 3 days is ~12%, and by day 30 it collapses to under 4%. That’s Sturgeon’s Law in action, and it’s not pretty.
In the era of AI-generated code, I’m confident this number will rise. We’ll soon be able to proudly say that 99% of the features in 100% of the software we build are crap. Progress!
6. Amara’s Law — The Hype vs. Reality Check
Which brings us neatly to Amara’s Law, coined by Roy Amara of the Institute for the Future: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
In the short term, we’re all convinced AI will replace every developer in the famous “next six months.” In the long run, we’ll probably look back at this era the way we now look at the dot-com bubble — with a mix of nostalgia and secondhand embarrassment.
7. The Gartner Hype Cycle — We’ve Been Here Before
Amara’s observation maps perfectly onto the Gartner Hype Cycle, which describes how new technologies progress through five phases: Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity.
Sound familiar? We’re currently somewhere between “Peak of Inflated Expectations” and “Trough of Disillusionment” on the AI hype cycle. Every LinkedIn post promises that LLMs will replace developers, but somehow your backlog is longer than ever. The Trough is coming, and when it does, the developers who actually understand their code will be the ones still employed.
8. The Boy Scout Rule — The One Rule You Should Actually Follow
Okay, enough doom and gloom. Let’s talk about something positive. The Boy Scout Rule, popularized by Robert C. Martin in Clean Code, is simple and powerful: leave the code better than you found it.
That’s it. No complex framework, no methodology with a certification course. Just make the codebase a tiny bit cleaner every time you touch it. A cleaner codebase is easier to read, easier to work with, and easier to extend. When developers follow this principle, teams collectively feel responsible for code quality.
It’s the software equivalent of picking up a piece of trash on your way out of the park. Small individual actions compound into massive collective improvements.
9. Broken Windows Theory — Don’t Let It Slide
The Broken Windows Theory comes from criminology. In their famous 1982 Atlantic Monthly article, James Q. Wilson and George L. Kelling observed that a building with one broken window left unrepaired quickly ends up with all its windows broken. The signal that nobody cares is an open invitation for things to get worse.
In software, the same principle applies: if you let minor bugs, sloppy code, or bad practices slide, people assume quality doesn’t matter — and they’ll produce even messier code. One person takes a shortcut, the next person sees it and thinks “I guess that’s how we do things here,” and before you know it, your codebase looks like it was written by a caffeinated squirrel.
The NYC subway story is a fascinating real-world example. In the 1980s, the entire subway system was covered in graffiti, crime was rampant, and ridership was collapsing. The city started by cleaning the graffiti — pulling tagged cars out of service immediately. Then they went after fare evasion, discovering that a shocking percentage of fare dodgers also carried weapons or had outstanding warrants. Crime started dropping. Small fixes led to big changes.
Your codebase is that subway car. Fix the broken windows before they multiply.
10. Kernighan’s Law — Debugging Is Harder Than You Think
Co-author of The Elements of Programming Style (1974), Brian Kernighan gave us this gem: debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Now consider AI-generated code. If debugging human-written code is twice as hard as writing it, and AI-generated code is produced without anyone understanding it, then debugging that code is probably four times as hard as simply rewriting it from scratch. Most people don’t think about the implications of generating millions of lines of code without understanding them. The bugs are in there, lurking, waiting for production.
11. The Peter Principle — Promoted to Incompetence
All that technical talk probably scared the managers away, so now is the perfect time to discuss the Peter Principle. Formulated by Laurence J. Peter in 1969, it states that in a hierarchy, every employee tends to rise to their level of incompetence.
Good engineers get promoted. They keep getting promoted until they reach a position they’re bad at — at which point the promotions mercifully stop, and they remain there, stuck in a role they were never trained for, making decisions about things they don’t understand. If you’ve ever wondered why your tech lead can’t code their way out of a foreach loop, now you know.
12. The Dilbert Principle — Damage Control
Scott Adams took the Peter Principle one step further with the Dilbert Principle (1996): companies tend to promote their least competent employees to management to limit the damage they can do.
Instead of losing a terrible engineer (which is hard because, you know, HR), you promote them to middle management where they can only hurt morale, not the actual product. It’s not a bug in corporate culture — it’s a feature. The Dilbert Principle explains why your manager’s solution to every deadline crisis is “we need more stand-ups.”
13. The Dunning-Kruger Effect — Confidently Wrong
The cherry on top of this corporate sundae is the Dunning-Kruger Effect. In their landmark 1999 paper, psychologists David Dunning and Justin Kruger demonstrated that people with limited knowledge in a domain greatly overestimate their own ability, while true experts tend to underestimate theirs.
In software, this manifests as the junior dev who insists on rewriting the entire backend in Rust over the weekend, the manager who promises the client a full rewrite in two sprints, and the LinkedIn influencer who claims AI has made software architecture irrelevant. The less they know, the more confident they are. The more they know, the more they say “it depends.”
Wrapping Up
These 13 laws won’t magically make you a better software developer. They won’t help you 10x your productivity, and they definitely won’t save you from the next sprint planning disaster. But they will help you understand why software projects behave the way they do — and maybe, just maybe, help you navigate the office politics for the few months we all have left until LLMs take over and we’re all promoted to customers.
In the meantime, follow the Boy Scout Rule, fix your broken windows, and remember: it always takes longer than you expect. Even when you take that into account.