General Purpose AI Code of Practice, Transparency Chapter

Proposed 2024-11-14 | Enacted 2025-07-10 | Official source

Summary

Requires general-purpose AI model providers to comply with AI Act obligations, particularly Articles 53 and 55. Obligates providers to maintain, disclose, and update model documentation for compliance assessment. Tasks the AI Office with overseeing adherence to these obligations.

  • This summary is awaiting validation (peer review by a second AGORA editor).

Key facts

🏛️ This document has been enacted by the European Union. For authoritative text and metadata, visit the official source.

🎯 This document primarily applies to the private sector, rather than the government.

📜 This document's name is General Purpose AI Code of Practice, Transparency Chapter.

Themes
  • Thematic tags for this document are awaiting validation (peer review by a second AGORA editor).

Governance strategies (8)

Full text

  • This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
Objectives

The overarching objective of this Code of Practice (“Code”) is to improve the functioning of the internal market, to promote the uptake of human-centric and trustworthy artificial intelligence (“AI”), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against harmful effects of AI in the Union, and to support innovation pursuant to Article 1(1) AI Act. To achieve this overarching objective, the specific objectives of this Code are:

A. To serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 AI Act, while recognising that adherence to the Code does not constitute conclusive evidence of compliance with these obligations under the AI Act.

B. To ensure providers of general-purpose AI models comply with their obligations under the AI Act and to enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code to demonstrate compliance with their obligations under the AI Act.
Recitals

Whereas:

(a) The Signatories recognise the particular role and responsibility of providers of general-purpose AI models along the AI value chain, as the models they provide may form the basis for a range of downstream AI systems, often provided by downstream providers that need a good understanding of the models and their capabilities, both to enable the integration of such models into their products and to fulfil their obligations under the AI Act (see recital 101 AI Act).

(b) The Signatories recognise that in the case of a fine-tuning or other modification of a general-purpose AI model, where the natural or legal person, public authority, agency or other body that modifies the model becomes the provider of the modified model subject to the obligations for providers of general-purpose AI models, their Commitments under the Transparency Chapter of the Code should be limited to that modification or fine-tuning, to comply with the principle of proportionality (see recital 109 AI Act). In this context, Signatories should take into account relevant guidelines by the European Commission.

(c) The Signatories recognise that, without exceeding the Commitments under the Transparency Chapter of this Code, when providing information to the AI Office or to downstream providers they may need to take into account market and technological developments, so that the information continues to serve its purpose of allowing the AI Office and national competent authorities to fulfil their tasks under the AI Act, and downstream providers to integrate the Signatories’ models into AI systems and to comply with their obligations under the AI Act (see Article 56(2), point (a), AI Act).

This Chapter of the Code focuses on the documentation obligations from Article 53(1), points (a) and (b), AI Act that are applicable to all providers of general-purpose AI models (without prejudice to the exception laid down in Article 53(2) AI Act), namely those concerning Annex XI, Section 1, and Annex XII AI Act. The documentation obligations concerning Annex XI, Section 2, AI Act, applicable only to providers of general-purpose AI models with systemic risk, are covered by Measure 10.1 of the Safety and Security Chapter of this Code.
Commitment 1: Documentation

LEGAL TEXT: Articles 53(1)(a), 53(1)(b), 53(2), 53(7), and Annexes XI and XII AI Act

In order to fulfil the obligations in Article 53(1), points (a) and (b), AI Act, Signatories commit to drawing up and keeping up-to-date model documentation in accordance with Measure 1.1, providing relevant information to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems (‘downstream providers’ hereafter), and to the AI Office upon request (possibly on behalf of national competent authorities upon request to the AI Office when this is strictly necessary for the exercise of their supervisory tasks under the AI Act, in particular to assess the compliance of a high-risk AI system built on a general-purpose AI model where the provider of the system is different from the provider of the model), in accordance with Measure 1.2, and ensuring quality, security, and integrity of the documented information in accordance with Measure 1.3. In accordance with Article 53(2) AI Act, these Measures do not apply to providers of general-purpose AI models released under a free and open-source license that satisfy the conditions specified in that provision, unless the model is a general-purpose AI model with systemic risk.
Measure 1.1: Drawing up and keeping up-to-date model documentation

Signatories, when placing a general-purpose AI model on the market, will have documented at least all the information referred to in the Model Documentation Form below (hereafter this information is referred to as the ‘Model Documentation’). Signatories may choose to complete the Model Documentation Form provided in the Appendix to comply with this commitment. Signatories will update the Model Documentation to reflect relevant changes in the information contained in the Model Documentation, including in relation to updated versions of the same model, while keeping previous versions of the Model Documentation for a period ending 10 years after the model has been placed on the market.
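The versioning and retention duties in Measure 1.1 can be pictured with a minimal record-keeping sketch. This is purely illustrative: the class, field names, and the placement date are assumptions for the example, not anything prescribed by the Code.

```python
from dataclasses import dataclass, field
from datetime import date

RETENTION_YEARS = 10  # previous versions kept until 10 years after market placement

@dataclass
class ModelDocumentation:
    """Illustrative store for versioned Model Documentation entries."""
    placed_on_market: date
    versions: list = field(default_factory=list)  # (version_label, content) pairs

    def update(self, version: str, content: dict) -> None:
        # Append the new version; all prior versions are retained alongside it.
        self.versions.append((version, content))

    def current(self) -> dict:
        # The most recently appended entry is the up-to-date documentation.
        return self.versions[-1][1]

    def retention_ends(self) -> date:
        # Retention period ends 10 years after the model was placed on the market.
        return self.placed_on_market.replace(
            year=self.placed_on_market.year + RETENTION_YEARS
        )

# Hypothetical usage with an illustrative placement date:
docs = ModelDocumentation(placed_on_market=date(2025, 8, 2))
docs.update("1.0", {"architecture": "transformer", "intended_use": "general"})
docs.update("1.1", {"architecture": "transformer", "intended_use": "general + code"})
```

The key design point the Measure implies is append-only versioning: an update never overwrites a prior entry, because earlier versions must remain retrievable throughout the retention period.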
Measure 1.2: Providing relevant information

Signatories, when placing a general-purpose AI model on the market, will publicly disclose via their website, or via other appropriate means if they do not have a website, contact information for the AI Office and downstream providers to request access to the relevant information contained in the Model Documentation, or other necessary information. Signatories will provide, upon a request from the AI Office pursuant to Articles 91 or 75(3) AI Act for one or more elements of the Model Documentation, or any additional information, that are necessary for the AI Office to fulfil its tasks under the AI Act or for national competent authorities to exercise their supervisory tasks under the AI Act, in particular to assess compliance of high-risk AI systems built on general-purpose AI models where the provider of the system is different from the provider of the model, the requested information in its most up-to-date form, within the period specified in the AI Office’s request in accordance with Article 91(4) AI Act.
Signatories will provide to downstream providers the information contained in the most up-to-date Model Documentation that is intended for downstream providers, subject to the confidentiality safeguards and conditions provided for under Articles 53(7) and 78 AI Act. Furthermore, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, Signatories will provide additional information upon a request from downstream providers insofar as such information is necessary to enable them to have a good understanding of the capabilities and limitations of the general-purpose AI model relevant for its integration into the downstream providers’ AI system and to enable those downstream providers to comply with their obligations pursuant to the AI Act. Signatories will provide such information within a reasonable timeframe, and no later than 14 days after receiving the request, save for exceptional circumstances. Signatories are encouraged to consider whether the documented information can be disclosed, in whole or in part, to the public to promote public transparency. Some of this information may also be required in a summarised form as part of the training content summary that providers must make publicly available under Article 53(1), point (d), AI Act, according to a template to be provided by the AI Office.
Measure 1.3: Ensuring quality, integrity, and security of information

Signatories will ensure that the documented information is controlled for quality and integrity, retained as evidence of compliance with obligations in the AI Act, and protected from unintended alterations. In the context of drawing up, updating, and controlling the quality and security of the information and records, Signatories are encouraged to follow established protocols and technical standards.
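The integrity safeguard in Measure 1.3 (protection from unintended alterations) is commonly implemented in practice with cryptographic checksums. A minimal sketch follows; the function names and the sample record are illustrative assumptions, and the Code itself does not mandate any particular mechanism:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a documentation record."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, recorded_digest: str) -> bool:
    """Check that a record still matches the digest computed when it was filed."""
    return sha256_digest(data) == recorded_digest

# Hypothetical usage: store the digest alongside the record when it is filed,
# then re-verify before relying on the record as evidence of compliance.
record = b"Model Documentation v1.1 (illustrative content)"
digest = sha256_digest(record)
assert verify_integrity(record, digest)             # untouched record passes
assert not verify_integrity(record + b"x", digest)  # any alteration is detected
```

Storing digests separately from the records themselves (for example, in an access-controlled log) is one conventional way to make unintended alterations detectable after the fact.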