ARTICLE
28 October 2025

The EU AI Act And General-Purpose AI: Navigating The Compliance Landscape

Ganado Advocates


The European Union's Artificial Intelligence Act (AI Act) carries significant implications for General-Purpose AI (GPAI) models.

Understanding General-Purpose AI

GPAI models are characterised by their ability to perform a wide range of distinct tasks and display significant generality. They are not designed to perform a specific task but, rather, can be adapted and integrated into various downstream applications.

Common examples include OpenAI's GPT series, which can generate text, translate languages, and answer questions on multiple topics, and image and video generation models, such as DALL-E and Midjourney.

The versatility of GPAI is both its strength and a source of regulatory concern, and it is precisely this breadth of application that prompted the AI Act's rules governing the development and deployment of these models.

The AI Act's Obligations on Providers of GPAI

The obligations for GPAI models came into effect on 2 August 2025; models placed on the market before that date have until 2 August 2027 to comply.

These obligations follow the AI Act's risk-mitigation approach, under which the majority of obligations fall on the providers of AI models rather than on mere deployers (i.e. users of a third-party model).

Moreover, the Act distinguishes between so-called standard GPAI models and those that pose systemic risk, i.e. models that meet certain technical criteria indicating that they could have a significant impact on society or the economy.

All GPAI models are subject to the following core obligations:

  • Transparency and Documentation: Providers of GPAI models must create and keep up-to-date detailed technical documentation. This includes information about the model's training data, testing processes, and capabilities. This documentation must be made available to the EU's AI Office and national competent authorities upon request.
  • Information for Downstream Providers: GPAI model providers must provide downstream AI system providers, who integrate these models into their own applications, with sufficient information to understand the model's capabilities and limitations.
  • Copyright Policy: GPAI model providers are required to have a policy in place to ensure compliance with EU copyright law. This includes respecting the opt-out mechanisms expressed by copyright holders whose data may have been used for training.
  • Summary of Training Content: A publicly available, sufficiently detailed summary of the content used for training the GPAI model must be provided.
  • Authorised Representative: GPAI model providers not established in the EU must designate an authorised representative within the EU.

To the extent that a GPAI model is considered to have systemic risk, it must comply with the following additional obligations:

  • Model Evaluation: Providers must conduct and document model evaluations to identify and mitigate systemic risks.
  • Risk Assessment and Mitigation: A thorough assessment of potential systemic risks must be carried out, and appropriate mitigation measures must be implemented.
  • Incident Reporting: Serious incidents and any corrective measures to address these incidents must be reported to the AI Office and relevant national authorities.
  • Cybersecurity: Providers must ensure an adequate level of cybersecurity protection for the GPAI model and its infrastructure.

To navigate this new regulatory reality, providers of GPAI models can look to the European Commission's voluntary Code of Practice, the Guidelines for providers of GPAI models and the detailed template for summarising training data.

Consequences of Non-Compliance

The focus has now shifted decisively from policy to active enforcement and compliance, shaping the future of responsible GPAI in the EU. Failure to adhere to the AI Act's obligations carries severe fines of up to €15 million or 3% of annual total worldwide turnover for the preceding financial year, whichever is higher. This underscores how crucial it is to fully grasp the risks and responsibilities involved in providing and deploying AI tools prior to their implementation.

This article was first published in "The Corporate Times" on 26 October 2025.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
