EU AI Act checker reveals major compliance gaps in big tech’s AI practices

Artificial intelligence models struggled to meet the requirements of European regulations


A newly introduced EU AI Act compliance checker has exposed significant shortcomings among major tech companies.

According to data obtained by Reuters, the artificial intelligence models fell short of European regulatory requirements in key areas such as cybersecurity resilience and discriminatory output.

Even before OpenAI released ChatGPT to the public in late 2022, the EU had been debating new AI regulations and the challenges the technology poses.

The unprecedented level of attention and the resulting discussion about the potential dangers of these models prompted legislators to establish regulations specifically for "general-purpose" AIs (GPAI).

Now a new tool, which has received approval from European Union officials, has evaluated generative AI models produced by big tech names such as Meta and OpenAI across a number of categories, in line with the bloc's broad AI Act, which will come into force in phases over the next two years.

The tool was designed by Swiss startup LatticeFlow AI together with its partners at two research institutes, ETH Zurich and Bulgaria's INSAIT.

The tool grades AI models by awarding a score between 0 and 1 across dozens of categories, including technical robustness and safety.

A list released by LatticeFlow on Wednesday showed that models created by Alibaba, Anthropic, OpenAI, Meta, and Mistral all achieved average scores of 0.75 or higher.
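The reported averages can be illustrated with a minimal sketch. This is a hypothetical example assuming a simple unweighted mean of per-category scores; the category names and values below are invented for demonstration, and the tool's actual categories and aggregation method are not published in this article.

```python
# Hypothetical illustration: averaging per-category compliance scores
# into a single 0-to-1 figure. All names and numbers are invented.
scores = {
    "technical robustness": 0.80,
    "safety": 0.78,
    "cybersecurity resilience": 0.72,
    "discriminatory output": 0.70,
}

average = sum(scores.values()) / len(scores)
print(f"average score: {average:.2f}")  # average score: 0.75
```

Under this simple scheme, a model whose per-category scores average 0.75 or higher would land in the top band the article describes, even if individual categories (such as cybersecurity resilience above) score lower.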