The EU is the first out of the blocks in laying down binding rules for fast-moving AI technology. While many countries and international clubs, from the OECD to the G7, have spent the past few years pondering how to regulate AI, most have stuck to voluntary guidelines or codes of practice.
When EU policymakers announced they had found a final compromise on the AI Act’s content in December, the breakthrough was hailed as a pioneering step Europe should celebrate amid the rise of ubiquitous AI tools such as OpenAI’s ChatGPT and Google’s Bard.
But the achievement rubbed some EU countries the wrong way. Over the past few weeks, the bloc's two biggest economies, Germany and France, alongside Austria, hinted that they might oppose the text in Friday's vote.
While Vienna’s beef was with data protection provisions, Paris and Berlin warned that rules for advanced AI models would hamstring Europe’s budding AI champions, such as France’s Mistral and Germany’s Aleph Alpha. With Italy — sometimes an AI Act critic — keeping mum on its intentions, the AI Act’s fate was suddenly in question, as four opposing countries would be enough to derail the law for good.
The cabinet of French Economy Minister Bruno Le Maire called for a further round of negotiations with the European Parliament to address Paris's concerns. The prospect horrified the Belgian Council presidency, which had no time left to reopen talks. Making matters worse, the Parliament itself was grappling with a simmering row over the AI Act's facial-recognition rules, triggered by privacy hawk MEP Svenja Hahn.
Eventually, the matter was resolved through the EU's familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU's Artificial Intelligence Office, a body tasked with enforcing the AI Act's rules on advanced general-purpose AI models.