On Thursday, the European Union published the first draft of its Code of Practice for General Purpose AI (GPAI) models. The document, which won't be finalized until May, lays out guidelines for managing risks and gives companies a blueprint for complying and avoiding hefty penalties. The EU's AI Act came into force on August 1, but it left room to nail down the specifics of GPAI regulation later. This draft (via TechCrunch) is a first attempt to clarify what's expected of those more advanced models, giving stakeholders time to submit feedback and refine it before it kicks in.
GPAI models are those trained with more than 10²⁵ FLOPs of total computing power. Companies expected to fall under the EU's guidelines include OpenAI, Google, Meta, Anthropic and Mistral, though the list could grow.
The document covers several core areas for GPAI makers: transparency, copyright compliance, risk assessment, and technical/governance risk mitigation. The 36-page draft covers a lot of ground (and will likely balloon further before it's finalized), but several highlights stand out.
The code emphasizes transparency in AI development, requiring AI companies to provide information about the web crawlers they used to train their models, a major concern for copyright holders and creators. The risk assessment section aims to prevent cyber offenses, widespread discrimination, and loss of control over AI (the "it's gone rogue" moment from a million bad sci-fi movies).
AI makers are expected to adopt a Safety and Security Framework (SSF) to break down their risk-management policies and mitigate them in proportion to their systemic risks. The rules also cover technical areas such as protecting model data, providing fail-safe access controls, and continually reassessing the effectiveness of those measures. Finally, the governance section aims for accountability within the companies themselves, requiring ongoing risk assessments and, where necessary, bringing in outside experts.
As with the EU's other tech-related regulations, companies that run afoul of the AI Act can face steep penalties: fines of up to 35 million euros (currently $36.8 million) or up to seven percent of their global annual turnover, whichever is higher.
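The "whichever is higher" cap is simple arithmetic; a minimal sketch of how the ceiling scales with company size (the function name is illustrative, and the figures are just the ones cited above):

```python
# Illustrative only: the AI Act's penalty ceiling as described in the article,
# i.e. the greater of a flat 35 million EUR or 7% of global annual turnover.

def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine under the cap described above."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# For a company with 1 billion EUR in turnover, 7% (70M EUR) exceeds the flat cap:
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a smaller firm with 100 million EUR in turnover, the 35M EUR floor applies:
print(max_fine_eur(100_000_000))  # 35000000
```

In practice this means the flat 35 million euro figure only binds for companies with turnover below 500 million euros; above that, the percentage term dominates.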
Interested parties are invited to submit feedback through the dedicated Futurium platform by November 28 to help shape the next draft. The code is expected to be finalized by May 1, 2025.
