The End of Security: New AI Writes Bomb-Making Instructions

Xanthorox is an AI platform that openly advertises itself as a cybercrime tool. It requires no coding skills, just a few clicks. Although its effectiveness isn't proven, the mere existence of such systems raises serious questions about the future of digital security.

Artificial Intelligence with Criminal DNA

Xanthorox is an artificial intelligence platform designed and overtly advertised as a tool for cybercrime. The creator of this tool runs a YouTube channel and shares instructions on Telegram and Discord. You simply send them a message and pay in cryptocurrency to gain access to the software. So, what is Xanthorox? It’s an AI for fraud, capable of generating fake emails (phishing), creating malicious software, constructing deepfakes, and even writing bomb-making instructions. This sounds like bleak science fiction, but unfortunately, it is reality.

Recordings published by the tool’s creator show an interface resembling ChatGPT. But instead of the polite refusals a mainstream chatbot would give, the system delivers, for example, instructions for building an atomic bomb. Such a device cannot realistically be built outside professional laboratories, but the fact that the system answers such requests at all speaks volumes about its lack of safeguards.


AI for fraud. Photo: pexels/pixabay.

AI Impersonates Humans

Modern AI tools such as Xanthorox have reached a level where they can produce remarkably convincing forgeries, mimicking a person’s voice, appearance, and manner of speaking. A criminal no longer needs to know the victim at all. It is enough to know what the victim’s friend, partner, or child sounds like, and the tool can impersonate that person.

That is exactly what happened in a fraud case in Hong Kong. An employee of an international company received a message from someone posing as the company’s chief financial officer, then joined a video conference populated by fake colleagues generated by artificial intelligence. Within minutes, the impostors persuaded the employee to transfer $25 million to accounts in Hong Kong.

“Artificial intelligence makes cybercriminals’ lives significantly easier, allowing them to generate malicious code and phishing campaigns with very little effort,” emphasizes Yael Kishon, an expert at the threat intelligence firm KELA, in a statement to Scientific American.

Worse still, such tools mean that virtually anyone can carry out a hacking attack. AI for fraud lets attackers automate, accelerate, and personalize their campaigns: algorithms instantly analyze victim data and craft messages or conversations tailored to the target’s language, interests, and social relationships.


Does AI for Fraud Really Work?

We don’t know. The platform’s creator claims it is merely an educational project, yet he has already sold at least 13 subscriptions, and the price of monthly access has risen from $200 to $300. He has also turned the publicity into a marketing campaign of sorts. Is it all just advertising? Opinions differ. Yael Kishon puts it plainly: Xanthorox may continue to develop and evolve into a genuinely powerful platform. If that happens, AI for fraud will gain a new, even more dangerous dimension. The author, for his part, insists that his tool serves no malicious purpose. Quite the opposite.

“Xanthorox AI is NOT ‘dark AI’. It is an advanced AI assistant designed primarily to support ethical hacking, cybersecurity research, and the creation of innovative tools by developers. (…) Our goal is to offer safe, efficient, and private AI to users who want to realize their ideas and explore the boundaries of AI within ethical and legal frameworks,” reads the project’s website.

How to Defend Against AI for Fraud?

A response to these new threats is already emerging. Companies like Microsoft, Norton, Bitdefender, and Malwarebytes are developing software that can detect deepfakes, filter suspicious messages, and reverse the effects of ransomware attacks. Systems already exist that identify AI-generated voices and images, warning the user about potential fraud.

Perhaps the best way to fight AI is with other AI. Security systems based on artificial intelligence can detect attacks quickly, even before they reach the user. Not everyone has access to such advanced technology, however, which is why experts stress the importance of education, especially for seniors, who are frequent targets of these attacks.

Modern threats no longer require hackers from the dark web. It takes only minutes for someone using AI for fraud to generate a fake voice, email, or video convincing enough to mislead a victim.

A Familiar Voice Can Be a Scam

Xanthorox does not need to be the most advanced criminal tool in existence to mark the start of a new era. It shows how easily AI for fraud can now be created, bought, and put to use.

The biggest threat, however, is not the appearance of new attack forms. The problem is that old methods are now significantly better thanks to artificial intelligence. AI did not invent cybercrime. But it has made it possible for anyone to commit it today.

Published by Mateusz Tomanek


A Cracovian by birth, choice, and passion. He pursued radio and television journalism, eventually dedicating himself to writing for Holistic.news. By day, he is a journalist; by night, an accomplished musician, lyricist, and composer. If he's not sitting in front of a computer, he's probably playing a concert. His interests include technology, ecology, and history. He isn't afraid to tackle new topics because he believes in lifelong learning.
