The Conquest Reflex
Are humans hardwired to innovate in ways that mimic bad habits from the past?
Picture the typical all-hands meeting at a tech company these days. The CEO goes on stage, animated, backlit by a slide that reads: “AI-Powered Transformation: 2,800 Roles Optimized.” The word “optimized” is doing a lot of heavy lifting. It means eliminated. Customer operations, content moderation, logistics coordination. Two thousand eight hundred people replaced by a stack of language models and robotic process automation. The room applauds. The stock ticks up by the close of trading.
Notice something that should be obvious but somehow isn’t. Not a single person on that stage is at risk. Not one of the executives who commissioned the transformation, approved the vendor contracts, or selected which departments to gut has designed the system to touch their own role. The automation moves in one direction only. Downward.
What bugs me is the shape of the decision. The unquestioned directionality of it. Why does “AI transformation” almost always mean transforming the people at the bottom? Why is the direction so predictable that nobody even remarks on it? And why do we treat this pattern as though it were a natural law rather than a choice made by the people with the power to choose differently?
In every organization I have watched, the conversation about what to automate follows the same script. The people in the room decide to automate the people not in the room. The org chart shrinks from the bottom. The people making the cuts get promoted for making them. I used to think this was just incentive misalignment. I have come to believe it is something more fundamental.
These questions are not about technology. They are about something older. Something running underneath the technology like an operating system we never consciously installed.
The Firmware
In 2023, the psychologists Shai Davidai and Stephanie Tepper published a review in Nature Reviews Psychology synthesizing decades of research on what they call “zero-sum beliefs.” These are the convictions, often held unconsciously, that one party’s gain must come at another’s expense. Your neighbor’s promotion threatens your standing. Another country’s prosperity diminishes your own. A colleague’s raise subtracts from a finite pool.
Their central finding is that these beliefs are not simply products of bad economics education or cultural conditioning. They are evolutionary inheritances. In the small-scale societies where our cognitive architecture was forged, resources were genuinely finite. Food, territory, mates, shelter. If another group’s share grew, yours shrank. The brains that survived were the ones hypersensitive to relative position, the ones constantly monitoring who was rising and who was falling in the local hierarchy. Over millions of years, this produced a cognitive default so deep it operates below the threshold of awareness: when resources appear, the first impulse is not to distribute but to control.
Evolutionary biologists call this dominance behavior. Primates that live in complex social groups show some of the most elaborate dominance architectures in the animal kingdom, and research surveyed in Minds and Machines confirms that the neural circuits for navigating rank, for making status discriminations, for recognizing who is above and below you, are among the most conserved features of the primate brain. We inherited them. We carry them into every boardroom, every funding round, every product roadmap. They shape our decisions without announcing themselves.
I call this the conquest reflex. Not because anyone is consciously plotting conquest, but because the reflex produces conquest-shaped outcomes. When a powerful new tool arrives, the default primate behavior is to use it in ways that increase the distance between the top and bottom of whatever hierarchy you occupy. Not because this is the best use of the tool. Because it is the easiest cognitive path, the one that requires no deliberate intervention to follow.
This is harder to fight than a conspiracy. A conspiracy has authors. The conquest reflex has firmware.
A Thought Experiment: The Zha’kri
Now, the aliens.
Imagine a civilization called the Zha’kri, roughly 10,000 years ahead of us technologically. They are not ethereal beings or hive minds. They are messy, biological, and competitive. Psychologically they are close cousins to humans: social, hierarchical by instinct, capable of cooperation and cruelty in roughly equal measure. They evolved on a planet with scarce resources, developed language, built cities, waged wars, invented bureaucracy. They had their own version of shareholders and org charts.
When the Zha’kri developed artificial superintelligence, their first move was identical to ours. A small group of elites, the ones who controlled the compute and the capital, used the technology to automate the labor of the many while preserving and amplifying the power of the few. They built systems of staggering capability, optimized entirely for the objectives of the beings who owned them.
They called this period “The Narrowing.”
It did not end in a machine uprising. It ended in something quieter and more devastating: the civilization went brittle. When you optimize a system to make a handful of beings maximally powerful, everyone else becomes an instrument. Not a participant but a resource. The creative potential of billions was bent toward serving the preferences of a few thousand, which meant the range of problems the civilization could even perceive narrowed to whatever the controlling group considered important. Edge cases were ignored. Novel threats went undetected. The system was simultaneously the most powerful thing the Zha’kri had ever built and the most fragile.
Three centuries into the Narrowing, a counter-movement emerged. Not revolutionaries exactly, but something closer to what I have called Bloomers in earlier writing: beings who refused the binary of catastrophism and accelerationism and instead asked a different kind of question.
Their question was this: What if the purpose of superintelligence is not to create a superintelligent entity, but to create superintelligent conditions?
The distinction changed everything.
Instead of building a god-mind wielded by a few, they redirected their AI infrastructure toward what translates roughly as “aggregate adaptive capacity.” The goal was not to make any individual Zha’kri all-knowing. It was to make the entire civilization better at handling surprise. This meant four things in practice.
They automated governance, not labor. Their AI systems were aimed at eliminating the information asymmetries that had historically justified centralized control. When every member of the civilization can access the same quality of strategic analysis, the case for concentrating decision-making collapses. They did not abolish leadership. They abolished the information monopoly that had made leadership synonymous with power.
They protected their most varied workers. The beings doing the most context-dependent, improvisational, edge-case-heavy work, their equivalent of caregivers, teachers, tradespeople, and small operators, were reclassified as the civilization’s sensory network. These were the roles that kept the system adaptive. Automating them would have been like cutting nerve endings to save on signal processing.
They changed their success metrics. Instead of measuring the capability of the strongest node, they measured the capability of the median. A policy that made ten beings extraordinary while leaving ten billion unchanged scored lower than a policy that made ten billion slightly more capable. This was not charity. It was systems engineering. A network with intelligence concentrated in a few nodes is fragile. A network with intelligence distributed across billions of nodes is the opposite. (A toy version of this scoring rule appears in the code sketch below, after the fourth practice.)
And they redirected competition. The Zha’kri still competed fiercely. Ambition did not vanish. But the currency changed. Evolutionary psychologists on Earth distinguish between two routes to status: dominance, which is status seized through force and control, and prestige, which is status earned through competence and the voluntary admiration of others. Research by Andrews-Fearon and Davidai has shown that zero-sum beliefs specifically amplify the taste for dominance but have no effect on prestige-seeking. The Zha’kri redesigned their incentive structures so that prestige paid better than dominance. You did not climb by accumulating resources. You climbed by distributing capability. The competition was just as intense. The game was different.
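The third of these practices, the change of metric, is concrete enough to sketch in code. Here is a minimal illustration in Python, in which the populations, the numbers, and the very unit of “capability” are invented stand-ins, not measurements:
```python
# A toy version of the two scoring rules: the god-mind metric versus the
# post-Narrowing metric. All numbers are invented for illustration.
import statistics

def strongest_node_score(population):
    """Score a policy by the capability of its most powerful member."""
    return max(population)

def median_node_score(population):
    """Score a policy by the capability of its typical member."""
    return statistics.median(population)

# Policy A: make ten beings extraordinary, leave everyone else unchanged.
policy_a = [100.0] * 10 + [1.0] * 9_990

# Policy B: make everyone slightly more capable.
policy_b = [1.5] * 10_000

print(strongest_node_score(policy_a), strongest_node_score(policy_b))  # 100.0 vs 1.5
print(median_node_score(policy_a), median_node_score(policy_b))        # 1.0 vs 1.5
```
Under the first rule, Policy A wins in a landslide. Under the second, it loses. The policies do not change; only the scoreboard does.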
The Field Notes
Now imagine a Zha’kri anthropologist in orbit around Earth, observing our civilization in 2026. She has seen our pattern before, in her own species’ history. She documents what she finds:
They have built generative tools of startling range. Systems that can synthesize information, plan multi-step strategies, and produce language across every domain their civilization has accumulated. And they are using these tools to eliminate the jobs of the beings who answer telephones, sort packages, and review insurance claims. The beings who decide which jobs to eliminate are not eliminating their own.
Their largest corporations measure success by something called “headcount reduction.” The concept is revealing. They are literally counting how many of their own members they can render unnecessary. No Zha’kri economist from the post-Narrowing era would recognize this as a coherent objective. It implies that the purpose of a civilization is to need fewer of its own participants.
Their fascination with what they call “superintelligence” is particularly telling. They do not mean distributed intelligence. They mean a singular, all-powerful mind. Their literature, their venture capital, their research budgets all point toward the construction of a god-entity: something that thinks faster, knows more, and dominates all others. This is the dominance drive in its purest form, projected onto silicon. They want to build the ultimate alpha.
Most striking is the inversion of their automation priorities. Their senior decision-makers perform tasks well-suited to AI augmentation: synthesizing reports, making pattern-based judgments, managing information flows. Their frontline workers perform tasks poorly suited to it: reading emotional states, navigating cultural nuance, improvising solutions to situations that have never occurred before. Yet they are automating the latter and protecting the former.
She pauses, then adds a line that I think captures the whole essay:
They are building the most powerful tools their universe has ever seen, and using them to replay their savanna. They automate their gatherers while their chieftains accumulate. They call this progress.
The Inversion
The Zha’kri anthropologist’s observation contains a genuine puzzle, and it is worth slowing down for.
If you designed an automation strategy from first principles, with no political constraints, you would start at the top of the organization, not the bottom. A CEO’s core activities are synthesizing information from multiple business units, evaluating strategic options, making risk-weighted decisions, and communicating with stakeholders. Much of this sits squarely within the capability envelope of current agentic AI systems. An AI with access to a company’s data infrastructure could plausibly generate strategic recommendations, run scenario analyses, and draft stakeholder communications at a quality that rivals the median Fortune 500 executive. It would do this without ego-protective reasoning, without sunk-cost fallacies, without the organizational tendency to surround the boss with agreeable people.
Now consider what a home healthcare aide does. She enters a patient’s apartment and within seconds reads a dense environment: the unwashed dishes suggesting a depressive episode, the way he holds his left arm suggesting a fall he has not reported, the photograph on the mantle that she knows from months of relationship will be a useful conversation anchor today. She adjusts in real time based on cultural context, emotional weather, and a thousand micro-signals that no sensor array captures. This is the hardest kind of intelligence there is. It is embodied, contextual, and irreducibly relational.
We automate the aide. We protect the CEO. Not because the aide’s work is simpler, but because the CEO writes the automation strategy.
The same inversion runs through industry after industry. A logistics coordinator at a shipping firm juggles weather patterns, driver availability, vehicle conditions, road closures, and customer urgency in combinations that never repeat exactly. She holds dozens of variables in dynamic tension and makes judgment calls every few minutes, each one drawing on years of accumulated pattern recognition that no training dataset fully captures. Her company classifies her as “operations support.” The executives who decide to automate her role classify themselves as “strategic leadership.” In practice, she does more real-time strategic thinking per hour than most of them do per quarter.
IBM recently announced it would triple entry-level hiring, with its chief human resources officer acknowledging that aggressively automating junior roles threatens the entire leadership pipeline, because future executives grow from the experience base of those early-career workers. Hollow out the entry level and you eventually hollow out the middle, and then the top. The Zha’kri had a phrase for this. It translates roughly as “eating your own roots.”
But the puzzle goes deeper than who holds the pen on automation decisions. It extends to the language we use. When a factory automates its assembly workers, we call it “efficiency.” When a hospital automates its intake staff, we call it “modernization.” When a newsroom replaces reporters with AI-generated summaries, we call it “scaling content.” In every case, the language implies a neutral, almost gravitational process. The technology simply does what it does.
But if the same logic were applied upward, we would talk about “optimizing the C-suite” or “automating strategic redundancy.” These phrases sound absurd. They sound absurd because we have never once framed leadership as a cost to be minimized. Leadership is always a value to be amplified. Labor is always a cost to be cut. This framing asymmetry is not economics. It is the dominance hierarchy expressing itself through the vocabulary of management consulting.
Why We Dream of God-Kings
This same reflex explains our fixation on superintelligence.
Every human civilization has produced myths of singular, all-powerful beings: Zeus, Vishnu, the Jade Emperor, Odin. These figures are not governance proposals. They are psychological projections of the dominance drive onto the cosmic scale. The biggest alpha imaginable. The mind that no competitor can challenge.
The dream of artificial superintelligence is, at its root, the same dream. Not a system that makes all of us smarter, but a system that is smarter than all of us. A digital Odin.
Look at how we benchmark AI progress. We measure it by contests. Can this model beat a human at chess, at Go, at the bar exam, at competitive programming? These are all zero-sum frames. Winner and loser. We have structured our entire evaluation of machine intelligence around the question “Who wins?” rather than the question “What improves?” We measure the height of the tallest individual rather than the health of the population. We are, in other words, still playing savanna games.
The Zha’kri, after their Narrowing, restructured their benchmarks. They stopped measuring the capability of the most powerful agent and started measuring the capability of the system as a whole. This change in measurement changed what they built, who they built it for, and what their civilization became.
It is an obvious move in hindsight. But it required overriding the firmware. And firmware does not go quietly.
The Colonial Echo
There is a historical pattern here that extends well beyond AI.
The British built railways across India not to improve Indian mobility but to move raw materials from the interior to the ports. The plantation system adopted the cotton gin not to give enslaved people more leisure but to process more cotton per unit of forced labor. The efficiency gains in each case were real. The distribution of those gains was entirely predictable.
I have lived in seven countries across four continents, and every one of them carries scars from some version of this pattern: a powerful group develops a capability, deploys it to extract more value from a less powerful group, and narrates the extraction as progress. In Italy, I saw what centuries of northern industrial consolidation did to the south. In the United States, I watched automation reshape entire regions of the Midwest into what economists politely call “declining communities.” In Singapore, where I live now, I see a society actively wrestling with the question of how to distribute the gains of automation rather than simply celebrating them. The tools change. The grammar does not. The group with the tool uses it on the group without.
Today the data confirms the continuity. In the United States, jobs paying less than $20 per hour face an estimated 83% risk of automation, while jobs paying more than $40 per hour face a 4% risk. Since 1978, CEO compensation at the largest firms has grown by over 1,000% while typical worker pay has grown by just 24%, a gap that widens with each wave of automation-driven “efficiency.” A recent study distributed by the National Bureau of Economic Research, surveying some 6,000 executives across four countries, found that the vast majority report little actual impact from AI on their operations, even as their companies celebrate AI-driven efficiency on earnings calls. The gains exist on slides. They have not materialized in the broader economy. This is the Narrowing’s signature: impressive metrics at the apex, stagnation everywhere else.
This is not because AI does not work. It is because of where it is being pointed. The same technology that eliminates a customer service team could instead give every employee in the company access to the analytical resources currently reserved for the C-suite. It could flatten the information gradient that makes hierarchy necessary. It could make the whole organization smarter instead of making the top thinner.
But that would change the shape of the hierarchy. And the conquest reflex resists changes in shape.
What the Garden Looks Like
I am not interested in moral arguments for redistribution. I have heard them. You have heard them. They do not move the people with the power to act. What interests me is the engineering argument, which is the argument the Zha’kri eventually found persuasive.
A system whose intelligence is concentrated in a few nodes is fragile. It is good at the specific problems those few nodes consider important and blind to everything else. A system whose intelligence is distributed across billions of nodes is adaptive. It can detect threats that the center never imagined, generate solutions the center never considered, and recover from shocks that would shatter a centralized architecture.
This is not an analogy. It is how complex systems actually work. Ecologists measure forest health by biodiversity, not by the height of the tallest tree. Immunologists evaluate immune function by the diversity of the antibody repertoire, not the potency of any single antibody. Network engineers build resilience through redundancy and distribution, not through concentrating all processing in a single server. Even in machine learning itself, the most robust models are ensembles, collections of diverse weak learners that together outperform any single powerful model. The principle keeps showing up because it is real: distributed intelligence outperforms concentrated intelligence over time, in every domain where we have studied the question carefully. The exception is the domain of human social organization, where we keep building single points of failure and calling them leaders.
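The machine learning version of this claim is easy to reproduce at home. Below is a minimal sketch using scikit-learn; the synthetic dataset is arbitrary and the exact numbers will vary from run to run, but on most runs a bag of shallow, diverse trees beats one fully grown tree:
```python
# A minimal demonstration of the ensemble claim above, using scikit-learn
# (1.2+). The dataset and model choices are arbitrary; the point is the
# pattern, not these particular numbers.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, n_informative=8,
                           random_state=0)

# One "tall tree": a single, fully grown decision tree.
tall_tree = DecisionTreeClassifier(random_state=0)

# A "living forest": 200 shallow, diverse trees trained on bootstrap samples.
forest = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=200,
    random_state=0,
)

for name, model in [("single tree", tall_tree), ("ensemble", forest)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```
The single deep tree memorizes; the forest of weak learners generalizes. The essay’s closing metaphor is, conveniently, also a standard machine learning result.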
In practical terms, redirecting AI toward distributed capability looks like this: AI tools that give a hawker stall owner in Kampong Glam the same quality of market analysis that Goldman Sachs provides its hedge fund clients. Diagnostic systems that make a rural nurse as medically effective as a specialist at a teaching hospital. Legal AI that gives a factory worker contesting a wrongful termination the same analytical depth as a white-shoe defense firm. Educational AI that gives a first-generation college student in Jakarta the same quality of tutoring that a prep school kid in Manhattan takes for granted.
In each case, the technology is identical to what currently exists. The difference is the direction of deployment. You can point a language model at a call center and eliminate 200 jobs. Or you can point the same model at 200,000 small businesses and multiply their strategic capability tenfold. The compute cost is comparable. The societal outcomes are not. The model does not care which way it is pointed. The objective function cares.
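The deployment arithmetic is worth seeing on paper, even in toy form. A back-of-the-envelope sketch, in which the 200 and the 200,000 come from the paragraph above and every other number, including the very idea of a scalar “capability unit,” is invented:
```python
# Back-of-the-envelope arithmetic for the two deployment directions.
# The 200 and 200,000 are the essay's figures; everything else is invented.

# Direction 1: point the model at one call center. The same calls get
# answered either way, so aggregate capability is roughly flat; the
# gain is a transfer to whoever owns the model.
displaced_workers = 200
direction_1_capability_created = 0.0

# Direction 2: point the same model at many small businesses, each made
# ten times more capable at strategy, from a baseline of 1.0 units.
small_businesses = 200_000
baseline, multiplier = 1.0, 10.0
direction_2_capability_created = small_businesses * baseline * (multiplier - 1.0)

print(direction_1_capability_created)  # 0.0
print(direction_2_capability_created)  # 1800000.0 new capability units
```
The numbers are fake; the asymmetry is the point. Whichever quantity the objective function rewards is the direction the deployment will go.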
This also means changing what we celebrate. Right now, the most admired figures in technology are the ones who have accumulated the most: the most users, the most capital, the most control. In the Zha’kri post-Narrowing era, admiration flowed to those who had contributed the most to collective capability. Their status competition was just as fierce as ours. The scoreboard was different.
The Zha’kri did not arrive at this through moral awakening. They arrived at it through system failure. Their concentrated-intelligence model broke in ways they could not fix from the top. The only path forward was to distribute what they had hoarded. They did not change their nature. They changed their game.
Building the Living Forest
I should be honest about something. The Zha’kri are fictional. I invented them for this essay. The best thought experiments are transparent about their construction, and I want to be clear that I am not claiming to channel alien wisdom. I am using an imaginary civilization as a mirror, because mirrors show us things we have learned to look past.
What the Zha’kri mirror shows is this: we are standing at exactly the fork they stood at. We have built tools of extraordinary capability. The question is not whether these tools are powerful. They are. The question is what objective function they serve.
Right now, the answer is that they serve capital. Not because capital is the only possible objective function, but because the people writing the functions are, by and large, the people who own the capital. The conquest reflex operates in every product roadmap and funding decision, not as a declared strategy but as an unexamined default. It shapes what gets built, who it serves, and who it replaces. It does this quietly, automatically, the way firmware does.
The Zha’kri have a saying. (I am inventing this too, but I think it holds up.) It translates roughly as:
“The tallest tree in a dead forest is still dying. The shortest tree in a living forest will outlive them all.”
The productive capacity now exists, for the first time in human history, to meet every person’s basic needs. The obstacle is not resources. It is the inherited zero-sum architecture that drives us to hoard what could be shared and concentrate what could be distributed. The post-scarcity world is not a fantasy. It is a design choice we keep failing to make, because the conquest reflex whispers that the point of abundance is to have more than the next person.
Back at that all-hands meeting, no one is asking the question that the Zha’kri eventually learned to ask: What if the point of all this intelligence is not to need fewer of us, but to need more of what each of us can do?
This is, in a small way, the question that drives my own work now. After years of building AI products inside corporations where the automation always flowed downward, I started building something aimed in the other direction. A platform designed to help burned-out technologists, the very people who built the automation machinery, redirect their skills toward solving problems that actually matter. Not because the technology demanded it but because someone finally asked who it should serve.
It is a small bet against the conquest reflex. One of many that will need to be made.
It took the Zha’kri three centuries of self-inflicted damage to figure this out. We have the advantage of their example. The disadvantage is that they are fictional, and we will have to learn it for ourselves.
The lesson is simple. The execution is not. Stop building the tallest tree. Start building the living forest.