
EU Introduces Proposal on Law Restricting “Unacceptable” Use of AI

The EU introduced a first-of-its-kind proposal to limit the use of Artificial Intelligence for risky purposes such as “social scoring” and facial recognition.

April 22, 2021
Margrethe Vestager, the executive vice president of the European Commission
SOURCE: EPA

On Wednesday, the European Commission tabled a proposal to impose strict regulations on the use of Artificial Intelligence (AI), once again illustrating the bloc’s role as a pioneer in tech regulation. The proposal stipulates restrictions on the use of AI by governments and companies and cracks down on systems that pose a “clear threat to the safety, livelihoods, and rights of people”. If passed, the law would restrict the use of AI in a wide range of activities, including bank lending, school enrolment, and hiring decisions. It also addresses the use of AI by courts and law enforcement bodies in the bloc, which it classifies as “high risk” because of its potential to threaten the safety and fundamental rights of affected citizens. The proposal further imposes a ban on the use of “remote biometric identification” in public places for purposes other than national security.

The 108-page document aims to regulate the use of AI, pre-empting its excessive and widespread use by companies and governments. The proposal would mandate companies to provide details on the use of AI in high-risk areas and to reassure regulatory bodies of its safety. They would also be required to provide evidence of human oversight of its creation and use. Failure to abide by the provisions of the proposal could attract fines of up to 6% of a company’s global sales.

The plan is likely to be opposed by tech giants such as Google, Facebook, and Amazon, which have already ramped up investments in AI development. Smaller companies that use such software for medical research, insurance policies, and other data-related services could also be affected by the law.

Several critics have raised concerns about the legislation, questioning both the clarity of its provisions and the burden it places on companies. Carly Kind, the Director of the Ada Lovelace Institute in London, said, “If it doesn’t lay down strict red lines and guidelines and very firm boundaries about what is acceptable, it opens up a lot for interpretation.” Furthermore, Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, said that the proposal could make AI development “prohibitively expensive or even technologically infeasible” in the bloc. This, he added, could benefit American and Chinese companies as the bloc “kneecaps its own startups”.


On the other hand, several digital rights advocates have criticised the law for not doing enough to curb the improper use of AI. Sarah Chander, a senior policy adviser at European Digital Rights, said that the exemptions stipulated in the proposal are broadly framed, which “defeats the purpose for claiming something is a ban”.

Nevertheless, celebrating the proposal, Margrethe Vestager, the executive vice president of the European Commission, said, “On artificial intelligence, trust is a must, not a nice-to-have … With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”

The European Union (EU) has emerged as a front-runner in the regulation of technology and data privacy. It has often found itself at loggerheads with tech giants, particularly while attempting to curtail their power through anti-trust laws and content-moderation policies. Several countries look to the bloc’s laws on technology-related issues and often use them as blueprints for their own policies. As a result, countries across the globe, including the United States (US) and the United Kingdom (UK), are debating policies on tech regulation. For instance, earlier this week, the American Federal Trade Commission warned against the use of racially biased algorithms in AI systems. Moreover, the UK is in the process of creating a body specifically tasked with regulating the industry.

However, the EU’s plan will still be subject to a long-drawn-out process, as it needs the approval of both the European Council, which comprises representatives of the member states’ governments, and the European Parliament. This could delay its implementation. Nevertheless, it marks a significant step towards strengthening the regulation of AI and limiting its exploitative use.