AI Will Become a Dangerous Weapon in War, Targeting the Enemy’s Emotions

AI in warfare will become a dangerous weapon, say experts from NATO and the Atlantic Council. Photo: Maciej Bartusik / Copilot AI

AI in warfare will become a dangerous weapon, experts from NATO and the Atlantic Council indicate. They predict the emergence of AI capable of anticipating an adversary's moves by analyzing their emotions, ideology, and historical trauma.

Think Like the Enemy

Will machines learn to “think like the enemy”, not according to tactical handbooks, but by understanding the adversary’s culture, religion, philosophy, and prevailing narratives? The US Army, through its Mad Scientist program, is already training models that analyze commanders’ memoirs and enemy doctrine. NATO regards such AI-driven “strategic empathy” as crucial to future conflicts.

The goal is to understand how opponents perceive their interests, threats, and opportunities. Ordinary AI chatbots already spontaneously demonstrate an ability to detect beliefs that differ from our own. So can AI predict the enemy’s logic better than a human can?

Empathy-Based Consciousness and AI in Warfare

The key to simulating an adversary’s thinking is distinguishing between taught consciousness and nurtured consciousness. The first is built from structured knowledge, facts, and procedural logic. The second grows from culture, history, trauma, and identity: forces tied to ideology and legacy that shape how a group interprets risk.

“A commander of the Chinese People’s Liberation Army, influenced by the 1979 Sino-Vietnamese War, might display caution in mountainous terrain,” explain John James and Alia Brahimi of the Atlantic Council. “It is a detail invisible to most automated models, yet accessible to LLMs trained on memoirs, doctrine, and historiography,” the researchers add.

Good examples include ISIS and Boko Haram, groups that do not operate according to classic strategic logic. Their behavior is shaped by religious fanaticism, a sense of historical grievance, and the geopolitical narrative they have adopted. They use violence and ritualized fear to sustain their ideology.

A purely analytical model might focus on the number of fighters or the frequency of attacks, losing sight of the symbolic logic behind those numbers. An AI system trained on religious texts and ideological manifestos could instead simulate the decision-making logic of seemingly irrational organizations and factions; a toy sketch of the idea follows below. The continued development of AI and its application in war may change the shape of armed conflicts forever, and its impact is something to track closely.
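To make the contrast with purely statistical models concrete, here is a minimal, purely hypothetical sketch of how an adversary's "nurtured" context might be packaged into a prompt for an off-the-shelf LLM. None of these names or structures come from the article or from any system described by the Atlantic Council researchers; they are assumptions chosen for illustration.

```python
# Hypothetical illustration only. The profile fields, function names, and
# example data are assumptions made for this sketch, not any real system.
# A production effort would involve curated corpora, fine-tuning, and
# expert review, not a single prompt template.

from dataclasses import dataclass, field


@dataclass
class AdversaryProfile:
    """Background material meant to capture 'nurtured' context."""
    name: str
    doctrine_excerpts: list[str] = field(default_factory=list)   # doctrine, memoirs
    historical_events: list[str] = field(default_factory=list)   # formative traumas
    core_narratives: list[str] = field(default_factory=list)     # ideology, identity


def build_adversary_prompt(profile: AdversaryProfile, situation: str) -> str:
    """Compose a prompt asking an LLM to reason inside the adversary's
    interpretive frame, rather than from a neutral analyst's viewpoint."""
    background = "\n".join(
        f"- {item}"
        for item in (
            profile.doctrine_excerpts
            + profile.historical_events
            + profile.core_narratives
        )
    )
    return (
        f"You are simulating the decision-making of {profile.name}.\n"
        f"Interpret the situation through this background:\n{background}\n\n"
        f"Situation: {situation}\n"
        "Explain how this actor perceives its interests, threats, and "
        "opportunities, and which course of action its own logic favors."
    )


if __name__ == "__main__":
    # Toy example loosely echoing the PLA illustration quoted above.
    profile = AdversaryProfile(
        name="a PLA ground-force commander (hypothetical composite)",
        doctrine_excerpts=["Doctrine stresses avoiding protracted attrition."],
        historical_events=["Costly mountain fighting in the 1979 war with Vietnam."],
        core_narratives=["Terrain that punished the force before is treated warily."],
    )
    prompt = build_adversary_prompt(profile, "An advance through mountainous terrain.")
    print(prompt)  # In practice this string would be sent to an LLM API.
```

The point of the sketch is the shape of the input, not the code itself: where a statistical model would consume counts of fighters and attacks, a prompt like this conditions the model on doctrine, formative history, and narrative, which is where the "symbolic logic" described above would have to live.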

Worth reading: The End of Security: New AI Writes Bomb-Making Instructions


Read this article in Polish: AI stanie się groźną bronią na wojnie. Uderzy w emocje wroga

Published by Maciej Bartusik

A journalist and a graduate of Jagiellonian University, he gained experience in radio and online media and has dozens of publications on new technologies and space exploration. He is interested in modern energy. A lover of Italian cuisine, especially pasta in every form.
