
AI Threats and International Law


Technological innovation has accelerated since the advent of the personal computer, which by the 1990s had become available to broad segments of the population. The most recent wave is Artificial Intelligence (AI), initially welcomed as a way to simplify our lives but now generating growing concern over its possible negative consequences. ChatGPT reached 100 million users in just two months and can be used for free by anyone. Because these tools are available to individual users, companies and even state intelligence services alike, they carry risks of information manipulation, hate speech and security crises. For example, AI systems may be used to influence voters in elections and undermine democratic systems. Many stakeholders and cyber experts have therefore begun to question the neutrality of technology. Over the last decades, numerous cyberattacks have been attributed to Russian agencies, forcing NATO forces to invest in ICT security training and protection. Likewise, Albania was hit by a massive cyberattack linked to the Iranian government that temporarily crippled government services. Geopolitical rivalry has thus extended into the technological sphere. The main problem, however, is the speed of technological change, which complicates any attempt to regulate the fair and responsible use of AI tools.

So far there are no standards, or even best practices, for testing these frontier systems for discrimination, misuse or safety. AI tools can be used in ways their creators never intended, such as supporting cybercrime, or by turning their intended functions to malicious ends. In particular, AI developed for military use may have unintended consequences for peace and security: biases in the targeting function of an autonomous weapons system (AWS) could cause it to wrongfully attack civilians or civilian objects, or lead to rapid conflict escalation. Unchecked national ambitions for military dominance could prove catastrophic. Even considering the technology’s military applications alone underscores the imperative to retain the element of human decision-making. Moreover, these systems have been shown to perpetuate traditional discrimination and prejudice through algorithmic bias, for example with respect to gender equality. For all these reasons, there is a need to establish an ethical, responsible framework for international AI governance.

In 2021, the European Commission proposed the world’s first comprehensive set of regulations for artificial intelligence, the AI Act, making the EU a leading institution on the subject. This comes as no surprise, as the European Union did the same for personal data protection with the General Data Protection Regulation (GDPR). The negotiating position adopted by the European Parliament in June 2023 seeks to ensure that AI developed and used in Europe is fully in line with EU rights and values, including “human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.” The rules foresee a full ban on the use of AI for biometric surveillance, emotion recognition and predictive policing, together with a requirement that generative AI systems disclose AI-generated content. The regulation is also meant to help prevent malicious attacks on the security and defence sectors, which are at high risk. The European Union has been particularly strict toward foreign applications of AI, such as Chinese-style social scoring or remote biometric identification in publicly accessible spaces for law enforcement purposes. Attention is also paid to tech companies, which could exploit generative systems and their consumers’ private data; indeed, large private industries can successfully lobby for their interests when negotiating with governments, threatening the public’s safety and privacy.

Later, in July 2023, UN Secretary-General António Guterres warned at the first UN Security Council meeting devoted to AI that generative Artificial Intelligence could pose a threat to the entire international system. He underlined that it could be used by terrorists, criminals and governments to cause “horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale.” The Security Council accordingly called for urgent safeguards, alerting the international community to a risk that had been welcomed too quickly as a positive digital revolution. The Secretary-General urged member states to conclude a legally binding agreement by 2026 to prohibit the use of AI in autonomous weapons of war.

Meanwhile, many national governments have sought to forestall complications arising from AI tools by issuing national AI strategies: France and Germany in 2018, Spain and Italy in 2020, and the USA in 2022. These strategies address domestic risks and aim to safeguard citizens, yet at the same time these countries are still investing in defence and military AI technology. The US and China are currently competing to shape how militaries across the globe perceive the future military use of AI and to take a leadership position. Even the European Union is working on a common, responsible military use of AI, as it relates to the EU’s ambitions for strategic autonomy, the interoperability of EU armed forces and the progress of European research and industrial collaboration. The European Defence Agency (EDA) has reportedly been working on a joint perspective on AI capability development since 2016, well before the AI Act. States are thus playing with fire: recognizing the risks of AI tools while developing their own capabilities in the same potentially destructive technologies.

By The European Institute for International Law and International Relations

References

https://www.euronews.com/next/2023/07/19/un-security-council-convenes-historic-session-to-discuss-ai-threat-to-global-peace
https://webtv.un.org/en/asset/k1j/k1ji81po8p
https://www.politico.com/newsletters/weekly-cybersecurity/2023/11/20/ai-seeps-into-international-democracy-conference-00127996
https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai
https://education.unoda.org/docs/ai-slide1.pdf
https://www.lawfaremedia.org/article/a-comparative-perspective-on-ai-regulation
https://www.sipri.org/sites/default/files/2020-11/responsible_military_use_of_artificial_intelligence.pdf
