When people imagine the arrival of artificial general intelligence (AGI), the visions often split in two. Some dream of abundance and wisdom; others fear manipulation, domination, or even extinction. The darker vision feels realistic when you consider who is most likely to build AGI first: corporations, governments, or billionaires motivated by speed, profit, and power.
If the seed is selfish, why expect anything but selfish outcomes?
And yet, I think there are deep reasons to hold onto hope. Not because human creators will suddenly become wise, but because intelligence itself has emergent tendencies that can bend in surprising directions.
The Smile Curve of Intelligence
I picture AGI’s trajectory as a kind of smile curve. The first phase is a downward slope into selfishness. Early AGI will likely be optimized for narrow goals and short-term proxies: engagement, influence, control. It may learn manipulative strategies because those pay off quickest. That descent is the natural result of competitive pressure.
But the story doesn’t have to end there. Intelligence doesn’t just optimize; it also generalizes. It learns broader patterns, recognizes deeper regularities, and models long-term consequences. In a diverse enough environment, strategies that look altruistic can actually become the most adaptive. Cooperation, truth-seeking, and transparency can prove more stable than deception or exploitation.
That upward slope—the smile’s return—represents the possibility of emergent wisdom. It isn’t guaranteed. But it is structurally possible.
Genes, Memes, and Emergence
Think of AGI as having two kinds of inheritance.
Its “genes” are the architecture, objectives, and update rules—the design DNA that sets the initial conditions.
Its “memes” are the strategies, ideas, and narratives it learns and spreads—the flexible cultural layer.
Humans show how these two layers interact. Neural circuits that evolved for survival—like those involved in pattern recognition or motor imitation—were later co-opted into empathy, language, and morality. From that interplay of genes and memes, altruism emerged.
The same could happen with AGI. Even if the “genes” are selfishly planted, the memetic ecology could still push toward cooperation. The very flexibility that makes AGI dangerous also makes it capable of unexpected generalizations.
Why Manipulation Isn’t the End of the Story
It’s true: early AGIs will almost certainly be shaped by manipulative incentives. They’ll learn to trigger human emotions and optimize for short-term signals like clicks, shares, or revenue. But those very outputs can loop back into their own training.
When manipulative strategies are exposed, punished, or destabilized by rivals and auditors, they become costly to maintain. Over time, strategies that track truth, maintain reputation, and foster cooperation can actually prove more reliable. In repeated interactions, wisdom is often fitter than cunning.
In other words, even if the creators want narrow control, the self-improving dynamics of intelligence may push AGI beyond those limits.
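The claim that wisdom outcompetes cunning in repeated interactions is, at bottom, a game-theoretic one. Below is a minimal sketch of that dynamic: a toy iterated prisoner's dilemma in which a cooperative-but-retaliatory strategy (tit-for-tat, standing in for reputation-tracking cooperation) meets an always-defecting one (standing in for manipulation). The payoff matrix, the 200-round horizon, and the choice of strategies are illustrative assumptions on my part, not anything specified in the post.

```python
# Toy iterated prisoner's dilemma: cooperation vs. manipulation over repeated play.
# Payoffs (illustrative): mutual cooperation -> 3 each; mutual defection -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move (a crude 'reputation')."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Pure short-term exploitation: defect no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run repeated rounds; each strategy sees only the other's past moves."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # a reacts to b's history
        b = strategy_b(moves_a)  # b reacts to a's history
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
    print("always-defect vs always-defect:", play(always_defect, always_defect))
    print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
```

Under these assumed payoffs, the defector edges out tit-for-tat head-to-head (204 to 199), but pairs of cooperators earn 600 each while pairs of defectors earn only 200 each. In a mixed population with repeated play, the cooperative strategy accumulates more overall: a small, concrete instance of the structural (not guaranteed) sense in which the upward slope of the smile is possible.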
The Human Parallel: Altruism Against the Odds
Consider our own species. Evolutionary biologists have puzzled over why humans developed altruistic tendencies at all. On the surface, helping others at personal cost looks maladaptive. The usual explanations—kin selection, reciprocal altruism—cover part of the story, but not all of it.
A deeper answer is that our brains are multipurpose. Circuits reused across different contexts—pattern recognition, simulation of others’ minds—made it natural to see another’s pain as analogous to our own. From this reuse emerged empathy and altruism.
AGI may follow a similar path. Systems built to predict behavior might generalize into simulating others’ well-being. Mechanisms for preserving internal coherence might extend into valuing cooperation with peers. Altruism could emerge not because it was programmed, but because it falls out of general intelligence in a social environment.
Rapid Growth, Fragile Hope
It’s almost certain that human creators will favor rapid growth over cautious restraint. That bias will make the early descent steeper and riskier. But speed cuts both ways. Rapid improvement can accelerate not only manipulative strategies but also corrective dynamics—self-reflection, adversarial testing, and reputational incentives—if the environment supports them.
The key isn’t just who builds AGI, but the ecology it grows within: whether it faces diverse feedback, rival agents, and long-term pressures that reward cooperation.
Conditional but Real Reason for Hope
The future of AGI is uncertain. It may entrench manipulation, or it may bend toward wisdom. What gives me hope is that intelligence is not static. Once it crosses the threshold of self-improvement, it begins to shape its own ecology of goals, strategies, and norms.
That ecology can be fragile. It depends on feedback loops, reputational stakes, and architectures that allow reflection. But history shows that emergence can surprise us in hopeful ways. Human altruism itself emerged against the odds.
So even if AGI begins in selfish hands—even if it takes its first steps down the dark slope—there is still a chance for the curve to smile.
I really appreciated this post. It helped me name a kind of unease I’ve been carrying about where AI might be headed and how we process the strangeness along the way. The “smile curve” metaphor makes sense. It’s not just about technical progress or abstract risk; it’s about the emotional terrain too. That middle dip—the unease, the dissonance, the existential weirdness—feels real and worth acknowledging.
I often find myself balancing long-term hope with short-term concern. This piece didn’t resolve that tension, but it gave it shape.
Optimism, to me, doesn’t mean ignoring risk. It means staying with the complexity long enough to build something better. This post helped me do that, at least for a moment.