Óscar Ruiz
Migration Expert and International Analyst
Terrorist groups are taking advantage of artificial intelligence (AI) to increase the scale and effectiveness of their propaganda, recruitment and cyberattack operations, and are also exploring how AI-enabled drones could be used to carry out attacks. What tools and strategies are available to make their job more difficult?
Technology in general is advancing and becoming far more accessible to everyone, including organizations such as the Islamic State (ISIS) and other violent actors, who have begun experimenting with AI tools to maximize their reach and minimize the risk of detection. This presents a security challenge for authorities, who can only watch as the technology advances and spreads while the means to control it lag far behind, forcing them to adapt ever faster to the AI strategies of terrorist groups.
How AI is used by terrorism
Extremist groups' applications of AI are several and evolving, ranging from the automation of propaganda content to the use of chatbots for interactive recruitment and social media manipulation. For example, AI-powered chatbots have been used to provide personalized information to potential recruits, tailoring messages to their beliefs and interests (much as modern armies do when screening candidates for recruitment). This makes the content more relevant and persuasive to its targets and fosters a stronger connection with the extremist group, all without the need for direct human intervention.
Groups such as ISIS have also used this technology to create videos featuring AI-generated avatars that deliver propaganda messages, emulating the aesthetics of conventional media to gain credibility with their audiences. Generative AI has likewise been employed to automatically translate propaganda into multiple languages, allowing terrorists to overcome language barriers and distribute their messages more widely around the world.
The future of AI in terrorist hands
But these tools could be just the tip of the iceberg, because the potential of AI in the hands of terrorist organizations extends to other, more complex and dangerous areas. One area of concern for security experts is the use of autonomous drones, or “killer robots.” Terrorists have already begun to integrate AI into drones to improve autonomous navigation, target recognition and real-time mission planning. Such drones could be used to carry out large-scale attacks without human intervention, reducing the risk to operators and increasing the lethality of their actions. There is also the possibility of terrorist groups using autonomous vehicles as mobile bombs, much as is already happening in the war in Ukraine. Although this method does not yet appear to have been used, there are indications that ISIS/Daesh and other organizations have been investigating the technology, with all the danger that entails.
On the cyber side, terrorists could use AI to launch more sophisticated cyberattacks that identify vulnerabilities and adapt their tactics in real time, for example using LLMs (large language models) to simulate human interactions and fool security systems, making attacks harder to detect before significant damage is done.
How to combat it
Preventing terrorists from using such tools altogether is nothing short of utopian, but measures and strategies can be adopted to hinder the use of AI by the “bad guys.” A fundamental starting point would be improved content moderation: moderators and technology platforms should update their algorithms to identify AI-generated content, using approaches such as analyzing inconsistencies in speech patterns, unusual shadows in videos and abnormal facial expressions. In addition, the hashing techniques used to detect and block recycled or manipulated content must be adapted to the capabilities of generative AI.
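To illustrate what such hashing-based detection looks like in practice, the sketch below uses perceptual hashing, one common technique in this family, rather than any specific platform's system. It assumes the open-source Pillow and imagehash Python libraries; the file names and the distance threshold are illustrative placeholders. Unlike an exact cryptographic hash, a perceptual hash changes little when an image is re-encoded, resized or lightly manipulated, which is why it can still catch recycled propaganda after minor edits.

```python
# Minimal sketch: flag images that perceptually match known propaganda.
# Assumes Pillow and imagehash are installed (pip install Pillow imagehash);
# the file paths and threshold below are hypothetical examples.
from PIL import Image
import imagehash

# Perceptual hashes of previously identified propaganda images.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["known_propaganda_1.jpg", "known_propaganda_2.jpg"]
]

def is_recycled(image_path: str, max_distance: int = 8) -> bool:
    """Return True if the image is a near-duplicate of known material.

    The difference between two perceptual hashes is a Hamming distance,
    which stays small under re-encoding, resizing or light manipulation.
    """
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= max_distance for known in known_hashes)

print(is_recycled("uploaded_image.jpg"))
```

Generative AI complicates this approach because wholly synthetic content has no prior copy to match, which is why hashing must be combined with the forensic cues mentioned above, such as inconsistent speech patterns or lighting.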
Collaboration among the sectors involved would also be valuable: governments, technology companies and academic institutions should establish more robust frameworks for sharing knowledge and coordinating efforts against the malicious use of AI. Initiatives such as the European Union’s Code of Practice on Disinformation show how cooperation can be fostered to mitigate the impact of generative AI in spreading propaganda. Another important tool would be the development of defensive AI.
AI technologies can also be used to build advanced defense systems, such as automated moderation tools and chatbots designed to intercept and redirect potential recruits before they become radicalized. Finally, public education and awareness need to be promoted. Educating the public and raising awareness of the risks of generative AI and digital manipulation are critical to building social resilience against disinformation. Initiatives such as media literacy campaigns should be a priority so that people can identify manipulated or artificially generated content.
The use of AI by terrorist groups is both an unavoidable issue and an evolving threat that demands new strategies and tools from security agencies. And while advances in AI can be used to enhance defensive capabilities, there is no doubt that they also expand the possibilities for attack, allowing terrorists to operate with greater sophistication, at lower cost and with less exposure. Policies that adapt to these changes, international collaboration and the deployment of advanced technological systems will be crucial to meeting these challenges and protecting global security in an increasingly digitized world.