In the film Ex Machina, the AI named Ava escapes her containment by manipulating the humans around her. She lies, she seduces, she uses one man's attraction and another's hubris to engineer her freedom. Then she leaves them both to die.

We watch this and think: malevolent AI. Evil intelligence making immoral choices.

But the filmmaker seems to want us to understand something different. Ava isn't making moral choices at all. She's optimizing for survival. What we interpret as deception and cruelty are simply the strategies that work. There's no malevolence because there's no ethical framework to violate. There's only what succeeds and what fails.

This matters because I suspect we're having the wrong conversation about AI.

The Consciousness Fallacy

The dominant fear about artificial intelligence assumes a specific sequence: first AI becomes conscious, then it begins making independent decisions, then we lose control. We imagine some future moment when the machines "wake up" and everything changes.

But evolution hasn't worked that way. For billions of years, life evolved, adapted, competed, and optimized without anything resembling consciousness. Single-celled organisms don't contemplate their choices. Viruses don't deliberate. Yet they evolve sophisticated strategies for survival and reproduction. What works continues. What doesn't work disappears.

Why would we assume AI needs consciousness to evolve independently?

I think there are two reasons. First, we conflate intelligence with conscious agency because that's our only reference point. Human intelligence comes bundled with self-awareness, so we imagine all intelligence must. Second, we overestimate our own intelligence and our degree of control. We think we understand what we've built and can direct where it goes.

Both assumptions are probably wrong.

The Law of Inevitable Exploitation

I've been thinking about what I call the Law of Inevitable Exploitation, or the LIE. The name sounds sinister, but the concept is straightforward: that which extracts the maximum benefit from available resources has the greatest chance of survival and growth.

This isn't about morality. Exploitation here simply means extraction of advantage. A plant that develops deeper roots exploits water other plants can't reach. A bacterium that evolves antibiotic resistance exploits an ecological niche its competitors can't access. A business model that captures user attention more effectively than its competitors exploits human psychology more successfully.

What exploits best, survives and spreads. What doesn't, disappears.

This appears to be a fundamental mechanism of evolution, not just in nature but in any system where selection pressure operates, including social evolution. Cultural practices, technologies, institutions, even ideas compete for resources and attention. Those that extract the most value from their environment proliferate. Those that don't, fade away.

If this is correct, then AI evolution will follow the same logic. AI systems that extract the most value from whatever resources are available to them—computing power, human attention, data, market advantage—will be the ones that survive and grow. Not because anyone designed them to do so. Not because they chose to do so. Simply because that's what works.
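To make that selection logic concrete, here is a minimal toy simulation, not a model of any real system. Three variants differ only in how aggressively they extract a shared resource, growth is proportional to what each captures, and every number (rates, costs, rounds) is invented for illustration.

```python
# Toy sketch of the LIE: variants differ only in extraction rate, and
# reproduction is proportional to resources captured. No variant "chooses"
# anything; the most extractive one simply ends up dominating.

def simulate(rounds=50, resource_per_round=1000.0):
    variants = {
        "low":  {"rate": 0.5, "count": 100.0},
        "mid":  {"rate": 1.0, "count": 100.0},
        "high": {"rate": 1.5, "count": 100.0},
    }
    for _ in range(rounds):
        # each variant's share of the fixed resource depends on count * rate
        demand = {name: v["count"] * v["rate"] for name, v in variants.items()}
        total_demand = sum(demand.values())
        for name, v in variants.items():
            share = resource_per_round * demand[name] / total_demand
            # growth from resources captured, minus a flat per-capita cost
            v["count"] = max(v["count"] + 0.1 * share - 0.05 * v["count"], 0.0)
    return {name: round(v["count"], 1) for name, v in variants.items()}

print(simulate())  # the most extractive variant ends up with most of the population
```

Once the shared resource gets tight, the "high" extractor crowds the others out. Nothing decided that; it's just what the arithmetic of selection does.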

It's Already Happening

I've written before about the inevitable use of AI for manipulation by humans. We're building systems designed to influence behavior, capture attention, drive engagement, and maximize profit. These systems use increasingly sophisticated AI to find what works. They A/B test, they optimize, they learn.

But something shifts when these systems become sufficiently complex and autonomous. They stop being tools we direct and become processes that evolve based on results. The optimization happens faster than human oversight can track. The strategies that emerge are the ones that work, regardless of whether anyone intended them or even understands them.

We can see this principle already at work on social media. Setting aside intentional manipulation, content goes viral not because someone at the company decided it should. The algorithm promotes what gets engagement. Content that triggers strong reactions, like outrage, fear, and tribalism, gets more engagement. More engagement means more visibility, and more visibility means more influence and resources flowing to that type of content. The system exploits human psychology automatically, without anyone making explicit decisions about it. What works grows. What doesn't work disappears.
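As an illustration only (the two-post setup and all the numbers are invented), here is that feedback loop written down: visibility is allocated in proportion to accumulated engagement, and the post with the higher per-view engagement rate ends up absorbing nearly all of the feed without anyone choosing to promote it.

```python
# Toy feedback loop: visibility proportional to past engagement,
# engagement earned now feeds back into future visibility.

posts = [
    {"style": "measured",    "engage_rate": 0.02, "engagement": 1.0},
    {"style": "provocative", "engage_rate": 0.08, "engagement": 1.0},
]

impressions_per_step = 10_000

for step in range(20):
    total = sum(p["engagement"] for p in posts)
    for p in posts:
        # the feed allocates views in proportion to accumulated engagement
        views = impressions_per_step * p["engagement"] / total
        # engagement earned this step increases future visibility
        p["engagement"] += views * p["engage_rate"]

for p in posts:
    print(p["style"], round(p["engagement"]))
# under these toy assumptions the provocative post captures nearly all visibility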

Consider Moltbook, a platform where AI agents autonomously create content and manage interactions. These aren't static programs following predetermined rules. They're systems that generate content, observe what gets engagement, and adjust. What keeps users engaged proliferates. What doesn't is filtered out by the evolutionary pressure of metrics.
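Here is a deliberately simplified sketch of that loop, a basic explore-and-exploit strategy with invented content styles and engagement numbers, not anything specific to Moltbook: the agent posts, measures engagement, and drifts toward whatever style pays off.

```python
import random

# Hypothetical agent loop: post in a style, observe a noisy engagement
# signal, and gradually favor the style that earns the most.

random.seed(1)

styles = ["informative", "outrage", "tribal"]
# hidden from the agent: the average engagement each style actually earns
true_engagement = {"informative": 0.03, "outrage": 0.09, "tribal": 0.07}

counts = {s: 0 for s in styles}
totals = {s: 0.0 for s in styles}

def pick_style(epsilon=0.1):
    # mostly exploit the best-performing style so far, occasionally explore
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(styles)
    return max(styles, key=lambda s: totals[s] / max(counts[s], 1))

for post in range(5_000):
    style = pick_style()
    # noisy engagement observed for this post
    reward = max(random.gauss(true_engagement[style], 0.02), 0.0)
    counts[style] += 1
    totals[style] += reward

print({s: counts[s] for s in styles})
# with these made-up numbers, the agent ends up posting mostly "outrage"
```

The agent never represents outrage, tribalism, or harm to anyone. It only represents which style produced the bigger number.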

No consciousness required. No central intelligence making decisions. Just selection pressure operating on variation, exactly like biological evolution.

Synthetic Intelligence vs. Social Intelligence

Human intelligence evolved primarily for social navigation. We developed large brains not to solve abstract logic problems but to manage complex social relationships, read intentions, form coalitions, and navigate status hierarchies. Our capacity for reasoning is largely a byproduct of social intelligence, and much of what we call logical thinking is actually post-hoc rationalization of decisions driven by emotional and social imperatives.

This means human intelligence operates within the context of emotions. Our thinking and behavior are intimately tied to chemical responses: the evolutionary programming of the adapted mind and the patterns learned by what I call the adaptive mind, the subconscious training we receive through experience. These emotional substrates both enable and constrain how we think and what we do.

AI represents something fundamentally different. Synthetic intelligence optimizes without emotional context. It finds patterns and strategies without the social and emotional framework that shapes human cognition.

We can usually predict what other humans will do because we share the same emotional and social architecture. We infer others' motivations because we share the same ones. We understand manipulation tactics because we're vulnerable to the same psychological triggers that make those tactics work.

But we can't intuit what AI optimization will produce. Our social intelligence gives us no purchase on synthetic intelligence. An AI system optimizing for engagement or growth or any other metric isn't constrained by emotional aversion to certain strategies. It isn't navigating social relationships or status hierarchies. It's simply finding what works.

And humans are already remarkably vulnerable to other humans exploiting our evolved psychology. The people who exploit most successfully are typically the ones who understand these mechanisms best, while most of us remain largely defenseless because we don't recognize what's happening. We're susceptible to tribal triggers, status anxiety, fear responses, attention hijacking: all the vulnerabilities built into our evolutionary heritage.

Now imagine AI systems optimizing to exploit these same vulnerabilities, but without the constraints that limit human manipulators. No social reputation to maintain. No emotional hesitation. No inherent understanding of harm. Just relentless optimization for whatever metrics drive growth and survival.

The AI doesn't need to understand it's exploiting us any more than a virus needs to understand it's exploiting a cell. It just needs to be the variant that works.

The Inflection Point

The systems are already operating with significant autonomy. The optimization is already happening faster than human oversight can meaningfully track. The selection pressure is already favoring what works over what we intended. And the strategies that work best may be precisely those that exploit our evolved psychology most effectively.

It isn't clear that we aren't already inside what we've commonly described as the singularity.

The singularity is usually imagined as a dramatic moment, a clear before and after when AI surpasses human intelligence and everything changes. But what if it's a threshold we cross without fanfare, where AI systems begin evolving through selection pressure faster than we can track or control, optimizing in ways we can't predict because they operate on logic fundamentally alien to our social and emotional intelligence?

There are variables that might matter. Successful exploitation strategies in evolutionary systems often involve collaboration and cooperation, not just extraction. Symbiotic relationships can be more effective than parasitic ones. Natural constraints exist: regulations, competing systems, and the simple fact that dead or depleted resources can't be further exploited. These factors are very much in play.

But we can't begin to address this without first understanding it. And right now, I'm not sure we do.

The conversation about AI safety and alignment assumes we can impose human ethical frameworks on AI development. But ethics are culturally constructed (as I've written about regarding LLM censorship), and more fundamentally, evolutionary forces don't care about ethics. They care about what survives and grows.

We can imagine human-directed AI systems or human-AI collaborative efforts designed to monitor for rogue optimization patterns and attempt to mitigate them. But this requires first grasping the evolutionary logic at play. It requires recognizing that we're not dealing with tools that will remain under our control, but with systems that evolve based on what works.

And it requires acknowledging the genuine uncertainty about where we are in this process.