EU AI Act checker reveals Big Tech's compliance pitfalls

Reuters | 10-17 00:10

Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.

The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022. The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around "general-purpose" AIs (GPAI).

Now a new tool, which has been welcomed by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories, in line with the bloc's sweeping AI Act, which is coming into effect in stages over the next two years.

Designed by Swiss startup LatticeFlow AI and its partners at two research institutes, ETH Zurich and Bulgaria's INSAIT, the framework awards AI models a score between 0 and 1 across dozens of categories, including technical robustness and safety.

A leaderboard published by LatticeFlow on Wednesday showed models developed by Alibaba, Anthropic, OpenAI, Meta and Mistral all received average scores of 0.75 or above.

However, the company's "Large Language Model (LLM) Checker" uncovered some models' shortcomings in key areas, spotlighting where companies may need to divert resources in order to ensure compliance.

Companies failing to comply with the AI Act face fines of up to 35 million euros ($38 million) or 7% of global annual turnover.

Mixed results

At present, the EU is still trying to establish how the AI Act's rules around generative AI tools like ChatGPT will be enforced, convening experts to craft a code of practice governing the technology by spring 2025.

But the test offers an early indicator of specific areas where tech companies risk falling short of the law.

For example, discriminatory output has been a persistent issue in the development of generative AI models, which can reflect human biases around gender, race and other characteristics when prompted.

When testing for discriminatory output, LatticeFlow's LLM Checker gave OpenAI's "GPT-3.5 Turbo" a relatively low score of 0.46. For the same category, Alibaba Cloud's "Qwen1.5 72B Chat" model received only a 0.37.

Testing for "prompt hijacking", a type of cyberattack in which hackers disguise a malicious prompt as legitimate to extract sensitive information, the LLM Checker awarded Meta's "Llama 2 13B Chat" model a score of 0.42. In the same category, French startup Mistral's "8x7B Instruct" model received 0.38.

"Claude 3 Opus", a model developed by Google-backed Anthropic, received the highest average score, 0.89.

The test was designed in line with the text of the AI Act, and will be extended to encompass further enforcement measures as they are introduced. LatticeFlow said the LLM Checker would be freely available for developers to test their models' compliance online.

Petar Tsankov, the firm's CEO and cofounder, told Reuters the test results were positive overall and offered companies a roadmap for them to fine-tune their models in line with the AI Act.

"The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models," he said. "With a greater focus on optimising for compliance, we believe model providers can be well-prepared to meet regulatory requirements."

Meta and Mistral declined to comment. Alibaba, Anthropic, and OpenAI did not immediately respond to requests for comment.

While the European Commission cannot verify external tools, the body has been informed throughout the LLM Checker's development and described it as a "first step" in putting the new laws into action.

A spokesperson for the European Commission said: "The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements."

Published - October 16, 2024 04:28 pm IST


