When one cannot think of how to do things better one simply makes things bigger.
April 4, 2026
The quote from Heinz Pagels in The Cosmic Code (1982) captures a recurring pattern in human endeavors: when genuine innovation or efficiency plateaus, people (or organizations) default to scale—making structures, systems, or efforts larger—as a substitute for true improvement. Pagels links this to historical examples like the escalating size of Egyptian pyramids signaling the decline of the Old Kingdom, or oversized cathedrals and dinosaurs as evolutionary or cultural dead ends. Bigger becomes a proxy for progress when “better” (smarter design, elegance, efficiency) proves harder to achieve.
This dynamic applies strikingly to software engineering, where “making things bigger” often manifests as complexity bloat, over-engineering, or resource-heavy scaling instead of elegant, optimized solutions. Here’s how it plays out across key areas:
1. Codebases and Architecture: Feature Creep and Over-Engineering
When teams struggle to refine core logic, add value through simplicity, or fix underlying design flaws, they often pile on more code, layers, or features.
- Microservices proliferation: A simple monolithic app works fine for many use cases. But instead of deeply optimizing it (better data models, cleaner abstractions, or ruthless pruning), teams “scale” by splitting everything into dozens of services, adding orchestration (Kubernetes, service meshes), distributed tracing, and event buses. This makes the system bigger and more resilient in theory—but often slower to develop, harder to debug, and more failure-prone due to network issues and consistency problems. The real difficulty in distributed systems isn’t always raw scale; it’s the messy reality of partial failures, but “bigger” (more components) frequently masks a failure to simplify.
- Framework and library bloat: Modern apps pull in heavy dependencies for trivial tasks (e.g., a full ORM or state management library for basic CRUD). Or they adopt “enterprise” patterns everywhere. Result: Electron apps that consume hundreds of MB of RAM for what could be a lightweight native tool, or web pages with megabytes of JavaScript for simple interfaces.
- Feature bloat: Software versions grow larger and slower over time despite faster hardware. New releases add rarely used features, UI clutter, telemetry, and ads instead of streamlining the core experience. The classic observation (Wirth's law: software gets slower faster than hardware gets faster) is that apps feel heavier today than leaner predecessors from decades ago, even as hardware has improved dramatically.

In short, when engineers can't think of a better algorithm, abstraction, or architecture, they add more code, more abstractions, or more services, making maintenance, onboarding, and reasoning progressively harder.
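To make the "heavy dependency for a trivial task" point concrete, here is a minimal sketch of the kind of basic CRUD that often gets a full ORM, migration framework, and connection-pool library pulled in. The `notes` table and function names are hypothetical; Python's stdlib `sqlite3` stands in for "the lightweight tool you already have":

```python
import sqlite3

# Hypothetical "notes" table: for basic CRUD, the stdlib is enough.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")

def create_note(body: str) -> int:
    cur = conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
    conn.commit()
    return cur.lastrowid

def read_note(note_id: int):
    row = conn.execute(
        "SELECT body FROM notes WHERE id = ?", (note_id,)
    ).fetchone()
    return row[0] if row else None

def update_note(note_id: int, body: str) -> None:
    conn.execute("UPDATE notes SET body = ? WHERE id = ?", (body, note_id))
    conn.commit()

def delete_note(note_id: int) -> None:
    conn.execute("DELETE FROM notes WHERE id = ?", (note_id,))
    conn.commit()

nid = create_note("keep it small")
print(read_note(nid))  # -> keep it small
```

None of this argues against ORMs in general; the point is that reaching for one by default, before the problem demands it, is "bigger" standing in for "better".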
2. Scaling Strategies: Vertical vs. Horizontal (and the Temptation of “Bigger”)
- Vertical scaling (bigger servers/machines) is often the simpler, “bigger” path: throw more RAM, CPU, or cloud instances at the problem. It’s quick and avoids rethinking the system.
- True horizontal scaling or optimization requires deeper thinking: better caching, sharding strategies, efficient data structures, or algorithmic improvements (e.g., moving from O(n²) to O(n log n)). Many teams default to vertical scaling when innovation stalls.

Cloud economics make it easy to "make things bigger" with auto-scaling, but this hides inefficiencies that surface as spiraling costs or fragility. Radical simplicity, staying monolithic or minimal longer, often scales better in practice than premature distribution.
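As a deliberately tiny illustration of the O(n²) to O(n log n) improvement mentioned above, consider duplicate detection: pairwise comparison versus sort-then-scan. The function names are mine, for illustration only:

```python
def has_duplicates_quadratic(xs: list) -> bool:
    # Brute force: compare every pair -- O(n^2) comparisons.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_sorted(xs: list) -> bool:
    # Sort once (O(n log n)), then scan adjacent elements (O(n)):
    # any duplicates end up next to each other after sorting.
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

print(has_duplicates_quadratic([3, 1, 4, 1]))  # -> True
print(has_duplicates_sorted([3, 1, 4, 1]))     # -> True
```

On a million elements the first version does roughly half a trillion comparisons and the second about twenty million; that gap is exactly what buying a bigger server papers over.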
3. Development Processes and Teams
- Team growth: When velocity drops, organizations hire more engineers instead of fixing process, tools, or technical debt. Larger teams bring coordination overhead (meetings, handoffs, communication tax), leading to slower progress—the “big team” trap.
- Process bloat: More frameworks, methodologies, governance layers, or CI/CD stages when the real issue is unclear requirements or poor prioritization.
- AI and tools as amplifiers: Modern coding assistants make developers faster, but they amplify existing habits. If your foundational thinking or code quality is poor, AI just produces more of the same mediocre (or bloated) output quicker. It doesn’t magically create “better”; it scales what you already do.
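The "communication tax" of team growth can be quantified with the classic pairwise-channel count from Brooks's The Mythical Man-Month: a team of n people has n(n−1)/2 potential communication channels, so doubling headcount roughly quadruples the coordination surface. A quick sketch:

```python
def comm_channels(n: int) -> int:
    # Pairwise communication channels in a team of n people:
    # each pair of people is one potential channel, so n*(n-1)/2.
    return n * (n - 1) // 2

for size in (5, 10, 20, 40):
    print(size, comm_channels(size))
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```

The headcount grows linearly while the coordination surface grows quadratically, which is why "hire more engineers" so often fails to restore velocity.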
4. Why This Happens in Software Specifically
Software is uniquely malleable and invisible. Unlike physical engineering (where bigger pyramids have obvious material costs), code “scale” feels cheap at first—add another dependency, spin up another service, or ship another feature. Hardware improvements (Moore’s Law and its successors) have long masked the downsides, allowing bloat to accumulate. But costs show up in:
- Maintenance burden and technical debt.
- Performance and resource waste (slower apps, higher cloud bills).
- Security risks (larger attack surface from unnecessary code/libraries).
- Developer experience (harder to understand, test, or change the system).

Bloat isn't just size: it's when code makes easy things hard. Simple changes require touching multiple services, untangling side effects, or navigating layers of indirection.
Counterexamples: Doing “Better” Instead of “Bigger”
The best software engineering pushes back against this:
- Simplicity as a discipline: Principles like KISS (Keep It Simple, Stupid), YAGNI (You Aren’t Gonna Need It), and ruthless elimination of waste (lean thinking). Prune features, delete dead code, choose minimal viable abstractions.
- Elegant algorithms and design: Investing in better data structures, caching, or concurrency models rather than brute-force scaling.
- Examples of restraint: Tools like SQLite (a powerful database in a small footprint), or projects that deliberately stay lean (e.g., certain embedded or performance-critical systems). Teams that vertical-scale first or refactor aggressively before distributing.
- Knowing when to optimize: Premature optimization may be "the root of all evil" (per Knuth), but so is never optimizing or simplifying. The art is knowing when "bigger" is necessary versus a symptom of stalled creativity.

Pagels' insight is a cautionary one for our field: software's rapid evolution and abundance of resources make the "make it bigger" temptation especially seductive. Great engineering is about elegance under constraint: finding smarter ways to solve problems with less, not just scaling up the mess. When you notice a system growing unwieldy without proportional gains in capability or maintainability, it might be time to ask: Are we innovating, or just building bigger pyramids?
This pattern is why debates around software “disenchantment,” website obesity, and bloat persist—hardware marches forward, but too often our thinking defaults to volume over refinement.