
Big Tech is already lobbying to ease Europe's AI rules


European lawmakers are putting the final touches on a set of sweeping rules designed to govern the use of artificial intelligence, which, if passed, would make the EU the first major jurisdiction outside China to pass targeted AI regulation. This has made the upcoming law the subject of fierce debate and lobbying, with opposing sides battling to ensure that its scope is either wide or narrow.

Lawmakers are close to agreeing on a draft version of the law, the Financial Times reported last week. After that, the legislation will proceed to negotiations between the bloc’s member states and the executive branch.

The EU Artificial Intelligence Act could outright ban controversial uses of AI, such as social scoring and facial recognition in public, as well as force companies to declare whether or not copyrighted material was used to train their AIs.

The rules could set a global bar for how companies build and deploy their AI systems, as it may be easier for companies to comply with EU regulations globally than to build separate products for different regions – a phenomenon known as the “Brussels effect”.

“The EU AI Act is definitely going to set the regulatory tone: what will ubiquitous regulation of AI look like?” says Amba Kak, executive director of the AI Now Institute, a policy research group based at NYU.

One of the most contentious points of the act is whether so-called “general purpose AI” – the kind of model that ChatGPT is based on – should be considered high risk, and thus subject to stricter regulations and penalties for misuse. On one side of the debate are big tech companies and a conservative bloc of politicians, who argue that labeling general-purpose AI as “high risk” would stifle innovation. On the other side is a group of progressive politicians and technologists, who argue that exempting powerful general-purpose AI systems from the new rules would be like passing a social media regulation that does not apply to Facebook or TikTok.

Read more: A to Z of Artificial Intelligence

Those calling for regulation of general-purpose AI models argue that only the developers of general-purpose AI systems have real insight into how those models are trained, and therefore into the biases and pitfalls that can arise as a result. They say the big tech companies behind general-purpose AI – the only ones with the power to change how these systems are built – would be let off the hook if the onus for ensuring AI safety were shifted downstream to smaller companies.

In an open letter published earlier this month, more than 50 institutions and AI experts argued against exempting general-purpose AI from the EU regulation. “Considering [general purpose AI] as not high-risk would exempt the companies at the heart of the AI industry, which make exceptionally important choices about how these models are shaped during the development and calibration process, how they will work, and for whom they will work,” says Meredith Whittaker, president of the Signal Foundation and a signatory of the letter. “It would exempt them from scrutiny, even as these general purpose AIs are core to their business model.”

Big tech companies like Google and Microsoft, which have invested billions of dollars in AI, are arguing against the proposals, according to a report by the Corporate Europe Observatory, a transparency group. Lobbyists have argued that it is only when general-purpose AI is applied to “high risk” use cases – often by smaller companies tapping into it to build more niche, downstream applications – that it becomes dangerous, the Observatory’s report states.

“General-purpose AI systems are purpose neutral: they are versatile by design, and are not in themselves high-risk because these systems are not intended for a specific purpose,” Google argued in a document it sent to the offices of EU commissioners in the summer of 2022, which the Corporate Europe Observatory obtained through freedom of information requests and made public last week. Classifying general-purpose AI systems as “high risk,” Google argued, could harm consumers and stifle innovation in Europe.

Microsoft, the largest investor in OpenAI, has made similar arguments through industry groups of which it is a member. “There is no need for the AI Act to have a specific section on GPAI [general purpose AI],” states an industry group letter co-signed by Microsoft in 2022. “It is not possible for providers of GPAI software to comprehensively anticipate the AI solutions that will be built on the basis of their software.” Microsoft has also lobbied against the EU AI Act “unreasonably burdening innovation” through the Software Alliance, an industry lobby group it founded in 1998. Obligations under the upcoming rules, it argued, “should be assigned to the user that may put the general purpose AI to a high-risk use [case],” rather than to the developer of the general-purpose system.

A Microsoft spokesman declined to comment. Representatives for Google did not respond to requests for comment in time for publication.

Read more: The AI Arms Race Is Changing Everything

The EU AI Act was first drafted in 2021, at a time when AI mainly meant narrow tools applied to narrow use-cases. But in the last two years, big tech companies have begun to successfully develop and launch powerful “general purpose” AI systems that can perform innocuous tasks – like writing poetry – while equally having the capacity for much riskier behaviors. (Think OpenAI’s GPT-4 or Google’s LaMDA.) Under the business model that has since emerged, these large companies license their powerful general-purpose AIs to other businesses, which often adapt them to specific tasks and make them public through an app or interface.

Read more: The new AI-powered Bing is scaring users. It’s no laughing matter

Some argue that the EU has put itself in a bind by structuring its AI Act in a now-outdated way. “The underlying problem here is that the whole way they structured the EU Act, years ago at this point, was by having risk categories for different uses of AI,” says Helen Toner, a member of OpenAI’s board and director of strategy at Georgetown’s Center for Security and Emerging Technology. “The problem they are coming up against now is that large language models – general purpose models – don’t have an inherent use case. That’s a big change in how AI works.”

“Once these models are trained, they are not trained to do one specific task,” Toner says. “Even the people who make them don’t really know what they can and can’t do. I expect it’s going to be maybe years before we really know all the things that GPT-4 can and can’t do. That makes things very difficult for a piece of legislation that is structured around classifying AI systems according to their level of risk based on their use case.”


Write to Billy Perrigo at [email protected].