Security experts: Free AI engines could be hacked to aid terrorists
Free AI models available on the internet have proven easy to hack, and security experts are now concerned that AI could be used to enhance the capabilities of terrorists.
In an article on SVT.se, the Swedish Security Service expresses concern that AI could be used to multiply terrorists' ability to do damage in attacks.
Swedish public broadcaster SVT recently aired a short news clip in which it met young AI entrepreneur Oliver Edholm, who hacked an AI model downloadable from the internet and removed its safety locks in order to ask it questions it would normally be prohibited from answering.
AI models are normally instructed not to answer questions about mass destruction or about how to harm people. In the clip, the unlocked model gives examples of how a mass-destruction attack against Stockholm could be carried out by poisoning the water supply.
"I think it is good if people are informed that this is possible," Edholm tells SVT.se.
"If there are terrorists, or state actors, who previously could not execute a complex threat, perhaps now, thanks to AI, they may gain increased capability," says Anders Arpteg, AI and Data Manager at the Swedish Security Service, Säpo.
AI could be used for harm in a variety of ways
In a 2024 article on the Combating Terrorism Center's website, Generating Terror: The Risks of Generative AI Exploitation, concerns are raised that AI can be used to generate and distribute propaganda faster and more efficiently than ever before. Such content can be used for recruitment or to spread hate speech and radical ideologies, and AI-powered bots can amplify it, making it harder to detect and respond to. The article also points to other risks:
Automated attacks: Terrorists can use AI to carry out attacks more efficiently and effectively—for example, by using drones or other autonomous vehicles.
Social media exploitation: AI can also be used to manipulate social media and other digital platforms to spread propaganda and recruit followers.
Cyber attacks: AI can be used by extremist groups to enhance their ability to launch cyber attacks against targets, potentially causing significant damage.
In the 2021 publication Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes, the United Nations expressed concern that as AI becomes more widespread, barriers to entry will fall, reducing the skills and technical expertise needed to employ it.
"... the risk of already developed sophisticated technologies ending up in the wrong hands is ever-present. With the growing interest in drones and their use in conflict zones, as well as in the development and deployment of increasingly autonomous weapons systems in combat environments, there is a concern that such weaponry could be seized or illegally purchased or acquired by non-State actors, like terrorist groups. The possibility of this is in fact one of the arguments often used by experts calling for prohibitions on the development of autonomous weapon systems", the report concludes.