
Europe’s AI Act Stirs Mixed Reaction from Tech Companies


In this post:

  • The EU recently agreed on provisional rules to regulate the burgeoning AI sector.
  • The Act sets out a risk-based approach for regulating different AI systems operating in the region.
  • Many fear that some of the Act’s stringent requirements could stifle AI innovation in the region.

The European Union’s landmark AI Act has drawn a mixed reaction from the region’s tech industry: some welcome the legislation’s attempt to regulate the development and use of artificial intelligence, while others warn that it could stifle innovation.

EU Agrees on AI Act

The Act, which was provisionally agreed upon on December 8, sets out a risk-based approach to regulating AI, under which systems deemed riskier face stricter regulation.

Per the EU Commission, “minimal risk” AI systems, such as recommender systems and spam filters, will get a regulatory free pass. AI systems considered “high-risk” will be subject to strict requirements, while those posing an “unacceptable risk” will be banned.

The full details of the agreement have yet to be officially released, and so it remains uncertain what systems will be classified as high-risk. While many welcomed the regulatory approach, the tech industry was particularly concerned about the hefty requirements for systems deemed risky.

High-risk AI systems would be required to comply with several rules, including “risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.”

EU Tech Companies Oppose Stringent Rules for AI Systems

Many fear that such requirements would place a heavy burden on developers, potentially leading to an exodus of AI talent and making the EU unattractive for AI projects.


“The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers,” said Cecilia Bonefeld-Dahl, director general of DigitalEurope.

France Digitale, an independent organisation that represents European start-ups and investors, also stated that AI projects or systems classified as high-risk would have to obtain a CE mark, which involves a long and costly process.

“We called for not regulating the technology as such but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn’t make much sense,” France Digitale stated. 

Companies that fail to comply with the rules will be fined, per the Commission: €35 million or 7% of global annual turnover for violations involving banned AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information.


