Artificial Intelligence Act: A Step Towards Digital Safety in Europe

In 2021 the European Commission came forward with a proposal for an Artificial Intelligence Act[1] to regulate the use of AI in Europe. AI has existed for decades, so why would now be the right time to regulate it? Over the last decade, AI has evolved extremely fast and is used across many industries that take advantage of this fast-paced environment to profit and drive innovation, yet it has also presented problems that have forced lawmakers to recognise the need for regulation. AI relies on algorithms that can be biased: Amazon, for example, built a recruiting algorithm to evaluate job applications that proved to be sexist, systematically favouring male candidates. Such situations leave policymakers with a real problem, because regulation is required to protect people from harm. Hence the AI Act identifies certain applications of AI as posing an 'unacceptable risk' and bans harmful practices considered detrimental to people's rights, livelihoods and safety, such as AI that manipulates people and, with narrow exceptions, real-time facial recognition in public spaces. Other applications, including AI used in education, law enforcement and critical infrastructure, are treated as high risk: their algorithms must be monitored and tested to make certain they do not discriminate against people. There is therefore a need to ensure that tech companies comply with these rules so that the resulting system protects human rights such as the right to privacy and the right to non-discrimination.

The EU continued its efforts in 2023, yet lawmakers have hit a wall: there is no clear method to ensure that these systems are well controlled and that their actions remain predictable under human supervision. For instance, if tech companies provide security systems that can be adopted by governments, there needs to be full disclosure and transparency about how the AI works. AI systems have emerged that claim to detect or predict the likelihood of an individual committing a crime; in Germany this was not well received, as the use of such systems has been challenged as unconstitutional. There is clearly a gap in legislation when it comes to regulating tech companies without also undermining their effectiveness. The Artificial Intelligence Act was proposed to protect personal and non-personal data across the EU while respecting the fundamental rights of individuals. Digitalisation is one of the most important tools for innovation, but it becomes an area of concern if it is allowed to operate as a free-riding system without proper administration. Research and development depend heavily on digital innovation and have grown exponentially; in response, legal measures are needed to regulate the technological sector. The EU's approach towards tech companies relies largely on restrictive regulation, which is necessary to ensure safety for all.

Policymakers are forced to address the concerns that come with the tech sector, such as consumer privacy, data protection and antitrust. Although this makes it harder for businesses in the technological sector to operate freely, it is the best course of action. Clear human oversight, with the ability to intervene, is necessary, and this responsibility falls on AI providers. An efficient approach would be to look at the interplay between the AI Act and the General Data Protection Regulation (GDPR), which together cover transparency, accountability, security and purpose limitation. AI providers must comply with the GDPR's transparency obligations, which set the standard for intelligibility. In this sense, the AI Act needs measures that safeguard fundamental rights through rules on the collection and use of sensitive data, especially post-market: after the system is sold, the provider must continue to monitor it to ensure that no further alterations have been made.

[1] https://artificialintelligenceact.eu/

By The European Institute for International Law and International Relations.
