
Europe makes first move to put AI in check

As AI reshapes industries, Europe passes draft law to regulate the transformative technology

[Source photo: Chetan Jha/Press Insider]

The European Parliament this week became the first major legislative body globally to pass a draft law to regulate artificial intelligence (AI).

The European Union’s AI Act will apply to all products and services that use an AI system, and aims to protect individuals from potentially harmful AI applications while setting a global standard.

Categorizing AI risks

The draft law categorizes AI applications into three tiers based on the level of risk. 

First, applications and systems that create an unacceptable risk are explicitly banned: the draft legislation prohibits real-time remote biometric surveillance in public spaces and predictive policing systems. It also seeks to regulate the way companies train AI models.

Second, high-risk applications, including products covered by the EU’s product safety legislation, such as toys, aviation equipment, cars, medical devices, and lifts, are subject to specific legal requirements.

The draft rules also require certain AI systems, including CV-scanning tools that rank job applicants and migration, asylum and border control management tools, to be registered in an EU database.

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

Finally, applications not explicitly banned or listed as high-risk are largely left unregulated.

Deception point 

Companies may also be mandated to disclose when content is AI-generated and to design AI models to prevent the creation of illegal content.

“Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated,” the draft legislation says.

For context, a deepfake is a type of artificial media where a person’s likeness, such as their face or voice, is swapped or manipulated using AI. Deepfakes can make it appear as though someone has said or done something they never actually said or did, and are often realistic enough to deceive viewers.

The sting of noncompliance

The current draft could impose fines of up to 7% of a company’s global revenue for certain noncompliance cases, up from the 6% ceiling in the European Commission’s original proposal.

The legislation is expected to undergo further negotiations among European Parliament representatives, EU member states, and the European Commission, with officials aiming to reach an agreement by the end of the year.

Some AI researchers and tech executives, including Tesla CEO Elon Musk, earlier this year signed an open letter calling for a six-month pause on training next-generation AI tools, to allow time for regulators and industry to set safety standards.

While tech companies argue that the proposed rules could hinder innovation, some researchers and technologists support the regulations as necessary for establishing safety standards.

The European Commission first proposed the AI Act in 2021 in response to the rapid development of AI tools and systems.

European officials hope that the legislation will serve as a pioneering framework for the governance of AI, promoting innovation while mitigating risks.

India may follow EU lead

India is also moving to regulate AI. Minister of state for electronics and IT Rajeev Chandrasekhar said last week that the country would regulate the technology to protect its citizens.

“Our approach towards AI regulation is fairly simple. We will regulate AI as we will regulate Web3 or any emerging technologies to ensure that they do not harm digital citizens,” Chandrasekhar said.

To be sure, Web3 is often associated with the use of blockchain technology and cryptocurrency. The main idea behind Web3 is to create a more decentralized and peer-to-peer internet, where users have control over their own data.

Sam Altman, chief executive of OpenAI, the company behind the popular chatbot ChatGPT, called during his visit to India this month for regulation of big AI companies such as his own, but said smaller firms and startups should be left out, as regulation could stifle innovation.

In an interview with the Times of India last month, Union minister for electronics and IT Ashwini Vaishnaw hinted at India’s plans for a policy along the lines of the EU’s AI Act.

Vaishnaw highlighted concerns around intellectual property rights (IPR), algorithmic bias, copyright, and misinformation, while advocating a “cooperative framework” for AI alongside other nations.
