Stakeholders in the tech industry are raising concerns over the potential consequences of the upcoming EU AI Act, warning that it could stifle innovation and drive AI startups out of existence.
The proposed regulations categorize AI models based on their risk factors, with startups developing high-risk foundation models facing additional requirements. Tech policy group DigitalEurope emphasizes the importance of allowing companies, especially startups, to harness foundation models for long-term innovation and competitiveness on a global scale.
Challenges for AI startups under the EU AI Act
DigitalEurope has issued a joint statement cautioning that the EU AI Act, in its current form, could adversely impact AI startups. The Act classifies AI models by risk level, from ‘minimal’ and ‘limited’ through ‘high’ to ‘unacceptable.’ Startups working with high-risk foundation models would be burdened with additional requirements, including regular reporting on the quality of their data and algorithms.
These obligations could raise costs and slow the pace of development, potentially hindering innovation and startups’ ability to compete with global rivals.
DigitalEurope argues that limiting the use of foundation models may impede the growth of new players in the AI industry. The statement underscores the need for Europe to foster innovation and become a global digital powerhouse.
The group contends that the ability of European companies to deploy AI in key sectors, such as green tech, health, manufacturing, and energy, is crucial for the continent’s competitiveness and financial stability.
Critics of the EU AI Act also highlight the potential financial burden on smaller enterprises. Data compiled by the European Commission indicates that compliance costs for a small or medium-sized business launching a single AI-enabled product could reach roughly €300,000. Detractors argue that such costs would be untenable for smaller firms, effectively pricing them out of the AI landscape.
DigitalEurope’s response to member states’ agreement
The pushback against the EU AI Act is part of an ongoing dialogue between regulators and industry stakeholders. The Act has faced intense criticism for its potential negative impact on innovation across Europe.
The open-source community, in particular, has expressed concerns about the regulation, warning that it could undermine open-source AI development. Companies including GitHub and Hugging Face have published a policy paper urging clearer definitions of AI components and greater support for open-source development.
In a recent development, three of the EU’s largest economies – France, Germany, and Italy – signed a joint agreement calling for the “mandatory self-regulation” of foundation models. The agreement suggests a divergence of opinion among EU member states on the best approach to AI regulation.
The trio’s joint paper emphasizes self-regulation through codes of conduct and a renewed focus on regulating how AI tools are used in society rather than regulating the underlying technologies themselves.
DigitalEurope has welcomed the joint agreement by the three major EU economies as a positive step, reading it as a move to limit the obligations on foundation models to transparency requirements. The group argues that the AI Act need not regulate every new technology and supports the law’s original scope: targeting high-risk uses rather than specific technologies.