Are They Warning Us, or Selling Fear? The New Preppers of Silicon Valley

A woman stands in a server room, looking out at a landscape devastated by a biocybernetic apocalypse. Image generated by Gemini AI.

When an AI expert says the apocalypse is coming, you listen closely. But what if they also offer to sell you a bunker for $39,000? A new type of prepper has emerged in Silicon Valley. They have a bit of the visionary in them, but also a bit of the salesperson. Are these people truly warning us, or are they just trading in fear?

What Dangers Does AI Pose? These Preppers Know the Answer

Journalist Rob Price introduces us to the new preppers in his article for Business Insider. The piece opens with “Henry” (Price uses a pseudonym at the source’s request). Henry is an artificial intelligence researcher, and his work has made him acutely aware of the technology’s dangers. In his opinion, there is roughly a 50% chance that within the next few years AI will become so advanced that it begins to threaten humanity. That’s why he is already preparing for the apocalypse.

Henry wants to build a bio-shelter—equipped with high-quality HEPA filters and a three-year supply of food. The shelter is meant to protect him from harmful pathogens created by or with the help of AI. Entrepreneur Ulrik Horn shares similar concerns. In a conversation with Price, he brings up the idea of “mirror life”—artificially created mirror-image versions of existing organisms, built from molecules of reversed chirality. According to Horn, artificial intelligence will be able to create these life forms within 5 to 10 years, posing an enormous biological threat to humans.

That’s why Horn founded Fønix—a startup that produces bio-shelters designed to protect against pathogens. “The line between science fiction and reality is blurring. Mirror bacteria. Pandemics engineered by AI. Civilization is creating tools more powerful than nuclear weapons—and harder to control. Fønix is the answer,” reads the company’s website. One shelter costs $39,000, with the first shipments planned for 2026.


AI Existential Threat: A Great Danger or a Great Business?

Horn isn’t the only person looking to capitalize on the AI apocalypse. Entrepreneur James Norris describes himself on his website as an “advocate for existential safety.” In his view, the development of artificial intelligence is an “elephant in the room” ignored by humanity, which will lead to the extinction of our species. Consequently, Norris has built a “survival sanctuary” at an undisclosed location in Southeast Asia, and as part of his business, he offers assistance to others in creating similar places.

What frightens people like Norris is the idea of artificial general intelligence (AGI). This is a form of artificial intelligence that equals or surpasses human capabilities in virtually all cognitive processes. On his website, Norris calls for the creation of a global organization that would ensure AGI develops in a safe and controlled manner. “If an AI creator is unable to prove that their foreign/artificial superintelligence can be developed safely, then the development of advanced AI should be halted until it is possible,” reads his manifesto.

To Save or Not to Save? Two Visions of the Future of Work and Money

A threat to humanity is only one side of the coin, however. Other people who believe in the frighteningly rapid development of AI worry about their financial future, and even their relationships. Haroon Choudery, a former data integrity analyst at Facebook, tells Rob Price that he has only a few years left to build wealth for himself and his children. After that, AGI will become so competent at intellectual work that the labor of people like him will become irrelevant. “Freelancers,” who take on contracts with various companies, are particularly at risk. “If you don’t position yourself as a core member of a strategic firm… then you might be a target, and it will be harder. Those jobs are not coming back,” says Massey Branscomb, an executive at the AI AlphaFund hedge fund.

That’s why Trenton Bricken, a researcher at Anthropic, a company that competes with OpenAI, admitted during a podcast appearance that he has stopped saving for retirement because he believes the AGI era is near. “It’s hard for me to imagine a world where all this money is just sitting in an account, waiting for me to be 60, when everything will look completely different,” he admits.


Are AI Threats Overblown? Before You Quit Everything for the AGI Era

And what about relationships? Does AI threaten those too? Once artificial intelligence takes over such a large share of intellectual work, social life is expected to grow in importance: charisma and attractiveness will begin to outweigh intelligence when it comes to attracting people. In his report, Rob Price talks to Jason Liu, whose career as a software engineer was cut short by a chronic hand-strain injury. Liu turned instead to physical pursuits: he took up jiu-jitsu and ceramics, and he also works as an AI consultant. In his view, the AGI era will give humanity the chance to “truly immerse itself in entertainment.”

Is it true that in just a few years, artificial general intelligence will completely change our world? Another of Price’s sources, David Thorstad, has a different opinion. The assistant professor of philosophy at Vanderbilt University notes that these theories are mainly proclaimed by people within the AI industry. Thorstad himself has slightly increased his savings due to the development of artificial intelligence, but he recommends caution regarding sensational predictions.

“I think there are a lot of communities, especially in Silicon Valley, where groups of very intelligent, like-minded people live together, work together, read similar forums, listen to similar podcasts, and once they get deep into a particular, extreme worldview about AI, it’s hard to get out,” he says.


Read the original article: Ostrzegają czy sprzedają strach? Nowi preppersi z Doliny Krzemowej

Published by Maciej Bartusik

A journalist and a graduate of Jagiellonian University. He gained experience in radio and online media and has dozens of publications on new technologies and space exploration. He is interested in modern energy. A lover of Italian cuisine, especially pasta in every form.
