It Seems Tech Billionaires Are Preparing for the End of the World. Should We All Be Worried?

BBC
Publication date: 12.10.2025 18:08

Back in 2014, Mark Zuckerberg began work on Koolau Ranch, a sprawling 1,400-acre compound on the Hawaiian island of Kauai.

According to Wired magazine, the property includes a shelter with its own energy and food supplies, and the builders working on the site were bound by agreements not to disclose anything about it.

A wall nearly two meters high shielded the construction site from the view of a nearby road.

When Zuckerberg was asked last year if he was building a bunker for the apocalypse, the Facebook founder replied with a categorical "no." According to him, the underground space is "just a small shelter, like a basement."

But that didn't stop the speculation, any more than his decision to purchase 11 properties in the Crescent Park neighborhood of Palo Alto, California, beneath which there is reportedly some 650 square meters of underground space.

The building permit states that Zuckerberg is fitting out basements, but according to the New York Times, some of his neighbors refer to the structure as a bunker. Or a billionaire's cave.

There is also speculation that other tech leaders have been busy buying up plots of land with underground space, to be converted into luxury bunkers.

Reid Hoffman, co-founder of LinkedIn, has spoken of "apocalypse insurance." About half of the ultra-wealthy have it, he has claimed, and New Zealand is a popular place to buy such homes.

Are these people really preparing for war, the consequences of climate change, or some other catastrophic event that we do not yet know about?

In recent years, the development of artificial intelligence has only added to the list of potential existential risks. Many are deeply concerned about the rapid pace of its development.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, is reportedly one of them.

By mid-2023, the San Francisco company had released ChatGPT, a chatbot now used by hundreds of millions of people worldwide, and was racing to release updates. Sutskever was becoming increasingly convinced that computer scientists were on the verge of developing artificial general intelligence (AGI): the point at which machines match human intelligence.

According to a book by journalist Karen Hao, Sutskever suggested at one meeting that the company's leading scientists should dig an underground shelter before such powerful technology was released into the world.

This sheds light on a strange fact: many leading computer scientists and tech leaders, some of whom are working hard to develop an extremely intelligent form of AI, also seem to be deeply fearful of what it could eventually lead to.

So when exactly — if ever — will AGI appear? And could it really be such a revolutionary technology that ordinary people will start to fear it?

Tech industry leaders claim that the emergence of AGI is not far off. In December 2024, OpenAI CEO Sam Altman stated that it would happen "sooner than most people in the world think."

Demis Hassabis, co-founder of DeepMind, predicted that it would happen within the next five to ten years, while Anthropic founder Dario Amodei wrote last year that "powerful AI" could emerge as early as 2026.

Others are skeptical. "They keep changing the rules of the game," says Wendy Hall, a professor of computer science at the University of Southampton. "It all depends on who you talk to."

"The scientific community claims that AI technology is amazing," she adds. "But it is far from human intelligence."

A series of "fundamental breakthroughs" will first be necessary, agrees Babak Hodjat, chief technology officer of the tech company Cognizant.

And it is unlikely to happen all at once. AI is a rapidly evolving technology, still in its infancy, and many companies around the world are racing to develop their own versions of it.

But one reason why this idea excites some in Silicon Valley is that it is seen as a precursor to something even more advanced: artificial superintelligence — technology that surpasses human intelligence.

The concept of the "singularity" was attributed posthumously to the Hungarian-born mathematician John von Neumann as early as 1958. It describes the moment when computer intelligence surpasses human understanding.

In the book "Genesis 2024," written by Eric Schmidt, Craig Mundie, and the late Henry Kissinger, the idea of a super-powerful technology that becomes so effective in decision-making and leadership that we ultimately hand over complete control to it is explored.

According to the authors, the question is not whether this will happen, but when it will occur.

Money for Everyone Without the Need to Work?

Proponents of new neural network technologies enthusiastically discuss their benefits. They argue that super-smart AIs will help find new cures for deadly diseases, solve the climate change problem, and invent an inexhaustible source of clean energy.

Elon Musk even stated that superintelligent AI could usher in an era of "universal basic income."

Recently, he supported the idea that AI will become so cheap and widespread that almost everyone will want to have "their own personal R2-D2 and C-3PO" (referring to the droids from Star Wars).

"Everyone will have better healthcare, food, transportation, and everything else. Sustainable abundance," he enthusiastically declared.

Of course, there is a dark side. Could the technology be seized by terrorists and used as a weapon? And what if it decides on its own that humanity is the cause of the world's problems, and destroys us?

"If something is smarter than us, then we need to keep it under control," warned Tim Berners-Lee, the creator of the World Wide Web. "We need to be able to turn it off."

And governments are already taking protective measures. In the U.S., where many leading AI companies are based, President Biden signed an executive order in 2023 requiring some firms to share safety test results with the federal government. President Trump later revoked that order, calling it a "barrier" to innovation.

In the UK, the AI Safety Institute, a government-funded research body, was established two years ago to better understand the risks associated with advanced AI.

But there are ultra-wealthy individuals who have their own "apocalypse insurance" plans.

"When someone says they are 'buying a house in New Zealand,' it's kind of a hint, like, 'you know what that means, no need to explain further,'" said LinkedIn co-founder Reid Hoffman earlier.

The same seems to apply to bunkers. But there is one purely human weakness here.

I once met a former bodyguard to a billionaire who had his own "bunker." He told me that in the event of a real apocalypse, the security team's priority would be to eliminate the boss and take the bunker for themselves. And he did not seem to be joking.

Is This All Panic?

Neil Lawrence is a professor of machine learning at the University of Cambridge. And in his opinion, the entire debate is itself nonsense.

"The concept of artificial general intelligence is as absurd as the concept of 'artificial general transportation,'" he asserts.

"The right transportation depends on context. I flew to Kenya on an Airbus A350, I drive to the university every day, I walk to the cafeteria... There is no such vehicle that can do all of this."

For him, discussions about AGI are a distraction.

"The technologies we [already] created allow ordinary people for the first time to communicate directly with a machine and potentially make it do what they want. This is absolutely extraordinary... and completely revolutionary."

"The big problem is that we are so captivated by the narratives of big tech companies about AGI that we lose sight of the ways we can improve people's lives."

Modern AI tools are trained on vast datasets and recognize patterns well: whether it’s signs of tumors in images or the word that is most likely to follow another in a certain sequence. But they do not "feel," no matter how convincing their responses may seem.
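To see what "most likely to follow" means in practice, here is a deliberately toy sketch in Python. It illustrates the principle only, not how real large language models are built: it counts which word follows which in a tiny text, then suggests the most frequent continuation.

```python
# Toy next-word prediction: count which word follows which in a tiny
# corpus, then suggest the most frequent continuation. Real LLMs learn
# far richer statistics over billions of examples, but the underlying
# idea of predicting a likely next word is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Bigram counts: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word seen most often after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(most_likely_next("the"))  # 'cat' (seen twice, vs 'mat' once)
print(most_likely_next("sat"))  # 'on'
```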

"There are some 'tricks' that allow a large language model (the basis of AI chatbots) to behave as if it has memory and learns, but they are unsatisfactory and significantly inferior to human capabilities," says Babak Hodjat.

Vince Lynch, CEO of the California company IV.AI, is also cautious about exaggerated claims of superintelligence.

"It's a great marketing move," he says. "If your company is creating the smartest device that has ever existed, people will be willing to pay you money."

He adds: "This is not a two-year job. It requires enormous computational power, immense human creativity, and a huge amount of trial and error."

Intelligence Without Consciousness

In some ways, AI has already surpassed the human brain. A generative AI tool can be an expert in medieval history one minute and solve complex mathematical equations the next.

Some tech companies claim they do not always know why their products respond the way they do. Meta says there are some signs that its AI systems are improving themselves.

However, no matter how smart machines become, the human brain still wins from a biological perspective. It has about 86 billion neurons and 600 trillion synapses, far more than any artificial counterpart.

The brain also does not need breaks between interactions and constantly adapts to new information.

"If you tell a person that life has been discovered on an exoplanet, they will immediately grasp it, and it will affect their worldview. But for an LLM [large language model], this knowledge will exist only as long as you continue to repeat it as a fact," says Hodjat. "LLMs also lack metacognition, which means they do not fully understand what they know. People seem to have a capacity for self-reflection, sometimes referred to as consciousness, which allows them to understand what they know."

This is a fundamental part of human intelligence that has yet to be replicated in the lab.
