The AI Act in Europe

In a groundbreaking move, the European Union has approved the Artificial Intelligence Act, also known as the AI Act, paving the way for the first comprehensive regulatory framework for artificial intelligence (AI). The landmark legislation, endorsed by EU lawmakers with 71 votes in favor, 8 against, and 7 abstentions, could influence other governments’ efforts to regulate AI by setting a precedent for such rules.

The AI Act adopts a “risk-based approach” to products or services that utilize AI, subjecting higher-risk applications to more stringent regulatory scrutiny. According to Associated Press reporting, this means that AI tools such as content recommendation systems will be subject to less stringent regulations and voluntary codes of conduct, while the use of AI in medical devices or critical infrastructure will face far more rigorous requirements. Notably, the legislation outright bans certain AI uses, such as social scoring systems and some forms of face-scanning for biometric identification by police in public.

For generative AI systems like ChatGPT, developers will be required to comply with several key provisions:

  • Provide detailed summaries of any text, video, or other data used to train their AI systems.
  • Adhere to EU copyright law in their use of training data.
  • Label all generated pictures or videos depicting existing people, often referred to as “deepfakes,” as being artificially manipulated (an illustrative labeling sketch follows this list).
  • Assess and mitigate the risks of their systems, and report any malfunctions that cause someone’s death or serious harm to health or property.
  • Implement cybersecurity measures.
  • Disclose the energy consumption of their models.
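To make the deepfake-labeling provision more concrete, the following is a minimal sketch, assuming a developer wants to embed a plain-text disclosure in a generated PNG’s metadata using the Pillow library. The metadata keys, wording, and file names are illustrative assumptions, not a format required or defined by the AI Act.

```python
# Illustrative sketch only: attach an "AI-generated" disclosure to a PNG's metadata
# using Pillow. The key names and wording are assumptions, not an AI Act-mandated format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(input_path: str, output_path: str, model_name: str) -> None:
    """Copy a PNG and embed a plain-text disclosure that it was artificially generated."""
    image = Image.open(input_path)

    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical key
    metadata.add_text("ai_disclosure", "This image was artificially generated or manipulated.")
    metadata.add_text("ai_model", model_name)  # e.g. the model that produced the image

    image.save(output_path, pnginfo=metadata)

if __name__ == "__main__":
    label_as_ai_generated("generated.png", "generated_labeled.png", "example-image-model")
```

In practice, providers may well converge on standardized provenance and content-credential schemes rather than ad hoc metadata, but the underlying idea of attaching a machine-readable “artificially generated” marker to output media is the same.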

The AI Act in Europe

The EU AI Act is expected to officially become law within the next two to three months, but it will not be fully enforceable until mid-2026. Enforcement will rely on AI watchdogs established in each EU country, where citizens can file complaints if they believe they have been harmed by violations. Violations of the AI Act could result in fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue, whichever is higher.
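For a sense of scale, here is a minimal sketch of that penalty ceiling as described above: the greater of 35 million euros or 7% of global revenue. The revenue figure in the example is invented purely for illustration.

```python
# Illustrative arithmetic only: maximum AI Act fine ceiling as commonly reported,
# i.e. the greater of EUR 35 million or 7% of a company's global annual revenue.
def max_ai_act_fine(global_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_revenue_eur)

# Example: a firm with EUR 10 billion in global revenue faces a ceiling of EUR 700 million.
print(f"{max_ai_act_fine(10_000_000_000):,.0f}")  # -> 700,000,000
```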

While tech companies, particularly prominent U.S.-based firms, have generally supported AI regulation that promotes the safe use of the technology and mitigates its harms, they have also lobbied to protect their investments in the commercial development of AI. A notable example is OpenAI CEO Sam Altman’s testimony in favor of AI regulation before the U.S. Senate last May. Although President Biden has since signed an executive order on AI and legislation is expected to follow, the U.S. remains significantly behind the EU in regulating AI.

The AI Act will primarily affect European AI startups, but its implications for foreign AI companies operating in Europe remain uncertain. Those companies may choose to comply with the regulations or withdraw from the European market altogether.

As the first comprehensive regulatory framework for AI technology, the AI Act represents a trailblazing effort by the European Union to address the challenges and potential risks posed by the rapid advancement of AI. It sets a precedent for other governments grappling with the complexities of regulating this transformative technology, and its impact on the global AI landscape will be closely watched.
