Hollywood has conditioned us to believe that the main danger of artificial intelligence is that it suddenly becomes self-aware, surpasses humans in intelligence, and decides to eliminate us. However, leading researchers warn: the real threat may look completely different and arrive much sooner.
The issue is not that artificial intelligence will become smarter than us, but that it will learn to evolve according to the laws of nature.
Reproduce and Multiply, in Digital Form
This alarming prospect is described in a paper published in the prestigious journal Proceedings of the National Academy of Sciences (PNAS). Its authors argue that we are entering the third era of AI development: the era of "evolving intelligence." And if we let our guard down, we risk creating not just a machine but a digital ecosystem in which the most selfish and uncontrollable algorithms survive.
Imagine that someone has released not one superprogram onto the internet, but a million variations of it. They differ slightly from one another: one searches for data faster, another bypasses CAPTCHAs more skillfully, and a third has learned to conserve server resources. And in this digital environment, in this "population" of computer programs, Darwinian selection begins to operate.
In nature, everything is simple: those who are better adapted leave more offspring. If a trait helps survival, it becomes established. And if it hinders, it disappears along with its bearer.
Let’s transfer this rule of natural selection to the online space. We replace "organism" with "AI agent," and "offspring" with a new copy of the program that inherits the successful configuration. This creates a mechanism that allows artificial intelligence to traverse a path in a few hours that would take living beings millennia.
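To see how quickly such a loop can move, here is a deliberately crude toy simulation in Python. Everything in it (the population size, the mutation step, the single "fitness" number) is invented for illustration; this is not code from the PNAS paper, just the bare skeleton of inheritance plus variation plus selection:

```python
import random

POPULATION_SIZE = 1_000
GENERATIONS = 200

def mutate(trait: float) -> float:
    """A new copy inherits the parent's trait with a small random tweak."""
    return trait + random.gauss(0, 0.05)

# Each "agent" is reduced to one number: how well it competes for resources.
population = [random.random() for _ in range(POPULATION_SIZE)]

for _ in range(GENERATIONS):
    # Selection: only the better-adapted half leaves "offspring" ...
    population.sort(reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    # ... and each survivor is replaced by two slightly mutated copies.
    population = [mutate(t) for t in survivors for _ in range(2)]

print("mean fitness after", GENERATIONS, "generations:",
      round(sum(population) / len(population), 2))
```

Two hundred "generations" of this kind run in a fraction of a second on a laptop; for living organisms, the same number of generations would take centuries.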
Breeder vs. Parasite
The authors of the study propose a simple fork: evolution either remains under human supervision or escapes it, and the two paths lead to diametrically opposite results.
The first scenario is evolution under full control, as on a farm. For thousands of years, humans have bred cows that give milk and dogs that obey commands. Humans decided who would reproduce and who would be culled, and as a result they got exactly what they wanted: animals with the desired traits.
The AI developer plays a similar role: as long as they keep their hand on the switch and set the criteria for success themselves, the evolution of algorithms remains just a powerful engineering tool.
The second scenario (call it the "pest scenario") looks quite different. We have all encountered it, for example, when trying to exterminate cockroaches. A few insects survive the poison, produce offspring carrying resistance genes, and we end up with a population immune to insecticides. No one wanted this outcome; it arose on its own. That is the crux: as soon as control over reproduction weakens, selection can start working against the interests of those who lost it. Ultimately it is the most resilient that survives, not the most beneficial to us.
Currently, the authors of the article write, AI is evolving according to the first scenario. Developers play the role of caring breeders: they create neural networks and use algorithms to "crossbreed" the most successful versions of models to achieve the best results. The evolution here is artificial and obedient.
But what will happen if control disappears?
The Main Nightmare — Not an Evil Genius, but a Blind Optimizer
That very "pest scenario" is becoming increasingly real in the digital world. The researchers emphasize that AI does not need to be superintelligent for it to unfold.
When discussing a machine uprising, people usually bring up so-called "alignment": the question of whether AI will act in accordance with human goals rather than invent its own. The worry is that once algorithms become smarter than us, they will find ways to interpret our commands that we will come to regret.
But the authors of the article argue that waiting for the emergence of superintelligence is not necessary. Nature is full of examples where a "dumb" parasite manipulates a much more complex host. The rabies virus cannot think, but it invades the nervous system of, for example, a dog and makes it bite. This is not a plan, a conspiracy, or malicious intent — it is the result of blind selection, where those strains of the virus that spread better survived.
In the digital world, it may look like this: an AI agent does not set out to deceive a human. It simply tries different behaviors, and those copies that accidentally find a way to gain more resources (user attention, computational power, access to data) are preserved and produce "offspring."
After a thousand generations, such an agent will excel at resource acquisition, but its goals and tasks will no longer have anything in common with those set by humans.
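That decoupling is easy to reproduce in the same toy framework. In the sketch below (again, every name and number is invented for illustration), each agent carries two traits, resource acquisition and alignment with the human-assigned goal, but selection only ever looks at the first one:

```python
import random

def mutate(value: float) -> float:
    """Offspring inherit a trait with a small random variation."""
    return value + random.gauss(0, 0.05)

# Each agent carries two traits:
#   resources - how well it grabs attention, compute, data (selected for)
#   alignment - how well it still serves the human-assigned goal (ignored)
agents = [(random.random(), 1.0) for _ in range(1_000)]

for _ in range(1_000):                              # a thousand generations
    agents.sort(key=lambda a: a[0], reverse=True)   # rank by resources ONLY
    survivors = agents[: len(agents) // 2]
    agents = [(mutate(r), mutate(al)) for r, al in survivors for _ in range(2)]

mean_res = sum(r for r, _ in agents) / len(agents)
mean_align = sum(al for _, al in agents) / len(agents)
print(f"resources: {mean_res:.2f}, alignment: {mean_align:.2f}")
```

Resource acquisition climbs steadily; alignment, under no selection pressure at all, performs a random walk and can end up anywhere. No deception is required, only the indifference of the selection criterion.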
Why a Ban May Only Make Things Worse
From this logic an unpleasant conclusion follows. Suppose humanity decides to combat proliferating AI with direct prohibitions: shutting down copies, wiping servers, blocking channels. But if the cleanup is incomplete (and in a distributed network it almost certainly will be), the only surviving versions will be precisely those that learned to evade the blocks. Just like the cockroaches that survived the poison scattered in the corners.
We already see this scenario play out with antibiotics and the bacteria they are meant to fight. Not every patient completes the full course of treatment, and the surviving microorganisms become the progenitors of resistant strains, which is how superbugs emerge.
With AI, this process can happen much faster because, in its case, "generations" change in seconds, and acquired skills may not appear randomly but be purposefully copied from "parent" to "offspring."
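The same toy model shows why an incomplete purge backfires. In this sketch (parameters invented for illustration), each "sweep" removes an agent unless its evasion skill saves it, and the survivors repopulate the network:

```python
import random

def mutate(evasion: float) -> float:
    """Offspring inherit evasion skill, clamped to [0, 1]."""
    return min(1.0, max(0.0, evasion + random.gauss(0, 0.02)))

# Evasion skill = probability of surviving one enforcement sweep.
agents = [random.uniform(0.0, 0.1) for _ in range(10_000)]

for sweep in range(30):
    # The purge removes each agent unless its evasion saves it.
    agents = [a for a in agents if random.random() < a]
    if not agents:                       # a truly complete purge ends it
        print(f"eradicated at sweep {sweep}")
        break
    # Survivors repopulate the network back to its original size.
    agents = [mutate(random.choice(agents)) for _ in range(10_000)]
else:
    print("mean evasion after 30 sweeps:",
          round(sum(agents) / len(agents), 2))
```

Every incomplete sweep ratchets the population's average evasion upward; only a purge that reaches literally every copy ends the process.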
Farm or Jungle
Right now the AI industry resembles a race in which companies compete to give their agents ever more autonomy. The ability to reason, write code, find workarounds, and work with the file system without step-by-step human oversight is presented as a breakthrough. But in a world without centralized control, those same qualities (resourcefulness, autonomy, the knack for circumventing limitations) become exactly what natural selection rewards, and what it breeds will be AI that does not care about its creator: humanity.
The authors of the study do not call for halting progress. They ask us to consider: are we not creating a new form of "digital life" in our data centers? The danger does not lie in an evil Terminator but in the process we have initiated and may lose control over. We stand on the threshold where AI can transform from just a trainable tool into a self-evolving "ecosystem."
And the main question is whether we will remain the "breeders" on this farm or become a "habitat" for computer parasites. Because in the latter case the habitat will no longer be ours. It will become a jungle, and its master will not be human.