13 August 2025

Tech Law Bytes – July 2025

DSK Legal

DSK Legal is known for its integrity, innovative solutions, and pragmatic legal advice, helping clients navigate India’s complex regulatory landscape. With a client-centric approach, we prioritize commercial goals, delivering transparent, time-bound, and cost-effective solutions.

Our diverse and inclusive culture fosters innovative thinking, enabling us to craft exceptional legal strategies. Recognized for excellence, we attract top talent and maintain strong global networks, ensuring seamless support for cross-border matters and reinforcing our position as a trusted legal partner.


17th BRICS SUMMIT: BRICS LEADERS ISSUE STATEMENT ON THE GLOBAL GOVERNANCE OF AI

On July 06, 2025, at the 17th BRICS Summit in Brazil, the heads of state signed the "Statement on the Global Governance of Artificial Intelligence" ("Statement"). This marked the bloc's first unified agreement on broad principles to shape national Artificial Intelligence ("AI") governance frameworks.

Contextualizing the BRICS Statement: Global Precedents in AI Governance

In March 2024, the United Nations adopted a resolution promoting 'safe, secure and trustworthy' AI systems to support sustainable development for all. This was the first time the UN General Assembly adopted a resolution addressing the regulation of AI.

This milestone builds on earlier efforts, most notably in 2019 when the Organisation for Economic Co-operation and Development ("OECD") took an early lead in AI governance by releasing its Principles on Artificial Intelligence for the development of trustworthy AI. These principles guide AI actors in building systems that are transparent, fair, and accountable, and provide policymakers with recommendations for effective AI regulation. Today, the EU, the United States, the UN, and several other jurisdictions reference the OECD's definition of an AI system and its lifecycle in their legislative and regulatory frameworks.

Moreover, regional and global efforts on AI governance are moving in parallel and are increasingly complementary. In February 2024 and January 2025, ASEAN released the ASEAN Guidelines on AI Governance and Ethics and the Expanded ASEAN Guidelines on AI Governance and Ethics ("Guidelines"), respectively, offering clear and practical guidance for organizations developing and deploying both traditional and generative AI technologies.

Guiding Principles of the Statement

The Statement represents the latest effort by an intergovernmental body to set out broad principles for governing AI. It is also the first concerted attempt by the Global Majority to assert its stake in the international norm-building process for AI. It frames AI as a central issue in international relations, digital sovereignty, and equitable global development, while emphasizing AI's potential to drive sustainable development and economic growth and the need to address ethical, security, and equity concerns.

The Statement outlines the following four broad guiding principles for AI governance across BRICS nations:

1. On Multilateralism, Legitimacy and Digital Sovereignty

To avoid fragmented governance efforts, the Statement calls for unified and inclusive international coordination on AI through the United Nations, emphasizing the participation of the Global South. While acknowledging the challenges of multilateral cooperation, the Statement underscores the need for engagement with diverse stakeholders. The Statement also reaffirms each country's digital sovereignty to regulate AI, build capacity, safeguard rights, and promote technological autonomy.

2. On Market Regulation, Data Governance, and Access to Technology

The Statement underscores the need to protect the rights and responsibilities of states, users, and companies within national and international legal frameworks. Additionally, the Statement promotes the development of open-source AI and encourages international cooperation through Open Science and Open Innovation. The Statement also highlights that fair and inclusive data governance, alongside balanced protection of intellectual property rights, transparency, and accountability, are critical to ensuring equitable AI benefits, legal compliance, and responsible, secure use of data and technology.

3. On Equity and Sustainable Development

The Statement supports AI applications, including open-source solutions that address key development challenges across critical sectors, including health, education, and agriculture. Furthermore, the Statement endorses the deployment of AI solutions to advance climate action and environmental sustainability, thereby contributing to the achievement of the Sustainable Development Goals. Additionally, the Statement underscores the need for robust infrastructure, digital inclusion, and worker protections.

4. On Ethical, Trustworthy, and Responsible AI for Welfare of All

The Statement reaffirms the importance of ethical, transparent, and accountable AI frameworks, such as UNESCO's Recommendation on the Ethics of Artificial Intelligence. Additionally, the Statement acknowledges the need for robust tools to identify and mitigate algorithmic biases to ensure fairness. Further emphasis is placed on the importance of fostering a harmonious human-machine relationship, in which AI enhances human capabilities while remaining under human oversight and control.

EUROPEAN COMMISSION PUBLISHES GUIDELINES ON OBLIGATIONS FOR GENERAL PURPOSE AI-MODELS UNDER THE EU AI ACT

Exciting times lie ahead as the phased implementation of the EU AI Act ("AI Act") progresses. A key milestone was reached on August 2, 2025, with the enforcement of obligations for providers of General Purpose AI Models ("GPAI Models") under Chapter V of the AI Act.

Pursuant to Article 96(1) of the EU AI Act, the European Commission ("EC") is mandated to issue guidelines for the practical implementation of the Act. In line with this obligation, on July 18, 2025, the EC published the 'Guidelines on the Scope of the Obligations for General-Purpose AI Models Established by the AI Act' ("Guidelines"). The Guidelines, inter alia, clarify the classification criteria for GPAI Models, outline conditions for exemptions from certain obligations, and provide direction on compliance requirements applicable to providers of GPAI Models.

FLOPs and General-Purpose AI Models

The computing power of an AI model is commonly measured by the number of floating-point operations, or FLOPs, performed during training. FLOPs therefore serve as a practical metric of a model's computational capacity: models trained using more FLOPs generally have greater computational power and a correspondingly greater ability to perform complex tasks.

The EU AI Act and its accompanying Guidelines use FLOPs as part of a two-fold approach: first classifying a system as a GPAI model, and then determining whether it is a GPAI model with systemic risk. The Guidelines clarify the indicative criteria used by the EC to classify a model as GPAI, which include: (i) the computational resources used to train the model, measured in FLOPs, with models trained using at least 10^23 FLOPs presumed to be general-purpose; and (ii) the model's demonstrated ability to communicate, store knowledge, and reason across a wide range of distinct tasks. The Guidelines further clarify that even a model meeting the FLOPs threshold set out above will not be classified as GPAI if it lacks generality, that is, if it is incapable of competently performing a wide range of distinct tasks.

Article 51(1) of the AI Act also adopts this compute-based approach, classifying a GPAI model as one presenting systemic risk if either: (i) it has high-impact capabilities, which are presumed where the cumulative computation used for its training exceeds 10^25 FLOPs; or (ii) the EC, by decision, determines that it has capabilities or an impact equivalent to that threshold.

Another jurisdiction that uses compute-based thresholds is the United States. In October 2023, President Biden signed an executive order on the development and use of AI that imposes reporting obligations in respect of models trained using more than 10^26 FLOPs, or more than 10^23 FLOPs in the case of models trained primarily on biological sequence data.
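To make the compute thresholds above concrete, the following is a minimal sketch in Python. The constant and function names are hypothetical illustrations, not anything prescribed by the AI Act or the executive order, and the actual legal tests turn on more than raw training compute (notably the generality requirement discussed above):

```python
# Compute thresholds cited in the text (training FLOPs).
EU_GPAI_PRESUMPTION_FLOPS = 1e23   # EU: presumed general-purpose
EU_SYSTEMIC_RISK_FLOPS = 1e25      # EU: presumed high-impact / systemic risk
US_REPORTING_FLOPS = 1e26          # US EO: general reporting threshold
US_BIO_REPORTING_FLOPS = 1e23      # US EO: biological sequence data models


def classify_eu(training_flops: float, is_general_purpose: bool) -> str:
    """Indicative EU classification from training compute alone (a sketch)."""
    if not is_general_purpose:
        # Generality is required regardless of FLOPs, per the Guidelines.
        return "not a GPAI model"
    if training_flops > EU_SYSTEMIC_RISK_FLOPS:
        return "GPAI model with systemic risk"
    if training_flops > EU_GPAI_PRESUMPTION_FLOPS:
        return "GPAI model (presumed)"
    return "GPAI model (below presumption threshold)"


def us_reporting_required(training_flops: float,
                          bio_sequence_model: bool = False) -> bool:
    """Whether the US executive order's reporting threshold is crossed."""
    threshold = US_BIO_REPORTING_FLOPS if bio_sequence_model else US_REPORTING_FLOPS
    return training_flops > threshold


print(classify_eu(3e25, True))    # GPAI model with systemic risk
print(classify_eu(5e23, True))    # GPAI model (presumed)
print(classify_eu(5e23, False))   # not a GPAI model
print(us_reporting_required(2e26))                            # True
print(us_reporting_required(5e23, bio_sequence_model=True))   # True
```

Note how the same model (5e23 FLOPs) crosses the US biological-data threshold while falling well below the general US reporting threshold, and is only presumptively a GPAI model in the EU if it also displays generality.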

Providers of GPAI models

Importantly, the AI Act imposes, inter alia, enhanced compliance obligations on 'Providers' of GPAI models, including the requirement to maintain technical documentation. Pertinently, Article 3(3) of the AI Act defines a 'Provider' as any person or legal entity that places a GPAI model on the market. In this regard, the Guidelines clarify that 'placing on the market' includes making a GPAI model available through:

  • a software or library package;
  • an Application Programming Interface; and/or
  • uploading it to a public catalogue, hub, or repository for direct download.

Exemptions for Open-source GPAI Models

Under the AI Act, providers of certain open-source GPAI models may be exempt from the obligations under Articles 53 and 54, provided the model does not pose a systemic risk. The Guidelines set out three cumulative conditions that must be met to qualify for this exemption:

  • the model must be released under a free and open-source licence that permits its access, use, modification, and distribution;
  • no monetary compensation may be required for access, use, or modification of the model; and
  • key model components, including parameters such as weights, model architecture details, and usage documentation, must be made publicly available.

Meta Declines to Sign AI Code Amid Broader Industry Support

On July 10, 2025, following the release of the AI Code of Practice ("Code"), Meta issued a statement asserting that the Code introduces significant legal uncertainties for model developers and includes measures that exceed the scope of the AI Act. As a result, Meta confirmed that it would not be signing the Code. Notably, several other companies operating in the EU, including but not limited to Amazon, Bria AI, Microsoft, and OpenAI, have signed the Code and have not raised similar objections.

KERALA HIGH COURT ISSUES POLICY REGARDING THE USE OF ARTIFICIAL INTELLIGENCE TOOLS IN DISTRICT JUDICIARY

On July 19, 2025, the Kerala High Court ("KHC") issued its 'Policy Regarding the Use of Artificial Intelligence Tools in the District Judiciary' ("AI Policy"), marking a significant step toward regulating AI use within the judiciary. The AI Policy, introduced in response to the growing availability and adoption of AI tools, sets out guidelines for the responsible use of AI, particularly in judicial work. It emphasises that the role of such tools should be strictly supportive in nature.

The AI Policy applies to all members of the District Judiciary of Kerala, as well as interns and law clerks working with the District Judiciary in the state. The following guiding principles have been established under the AI Policy:

  • transparency, fairness, accountability, and confidentiality form the backbone of judicial administration and must not be compromised through the use of AI tools;
  • cloud-based generative AI tools, such as ChatGPT and DeepSeek, must not be used, as their use may result in serious confidentiality breaches; accordingly, all cloud-based services should be avoided, except for approved AI tools;
  • all outputs generated by approved AI tools, including legal citations and references, must be verified by judicial officers;
  • AI tools used to translate legal texts or case law, and the resulting translations, must be verified by qualified translators or by the judges themselves;
  • AI tools must not be used to arrive at any findings, reliefs, orders, or judgments; and
  • courts are required to maintain an audit trail of all instances in which AI tools are used.

Importantly, this is not the first instance of a court independently issuing guidelines on the use of AI tools. In October 2024, the Supreme Court of Delaware released an interim policy governing the use of generative AI by judicial officers and court personnel.

BOMBAY HIGH COURT RULES ON CHARGING SERVICE FEES ON ONLINE MOVIE TICKET BOOKING

On July 10, 2025, the Bombay High Court ("BHC"), in the case of PVR Ltd. v. State of Maharashtra, quashed two government orders that prohibited cinema operators from charging an additional service fee for the booking of online tickets. In this regard, the BHC held that the state had acted beyond its legal authority and violated the fundamental right to carry on business under Article 19(1)(g) of the Constitution by imposing restrictions without any statutory backing. The BHC further clarified that the choice of whether to book tickets online or purchase them offline lies entirely with the customer.

Background

Over the past two decades, multiplexes have embraced online ticketing to enhance consumer convenience and operational efficiency. While the base ticket price is subject to GST as 'admission to entertainment,' multiplexes typically levy an additional convenience fee to recover their investment in digital infrastructure. This fee varies across locations and platforms and supports the long-term sustainability of online booking services.

The Government of Maharashtra issued two government orders through the Revenue and Forest Department, dated April 4, 2013, and March 18, 2014 (collectively, the "Orders"). These Orders prohibited cinema exhibitors, owners, and agents from charging any additional amount for online ticket sales and further mandated that all cinema operators establish their own systems for online ticketing without levying service charges on viewers.

The Orders were subsequently challenged by multiplex cinema operators before the BHC. The petitioners contested the State's authority to restrict the levy of service charges, raising the question of whether the State, under the Maharashtra Entertainment Duty Act, 1923 ("MED Act") or otherwise, was empowered to regulate or prohibit multiplex cinema operators from charging such fees.

Ruling on Service Fees

In the present case, the petitioners argued that the Orders violated their fundamental right under Article 19(1)(g) of the Constitution to carry on a legitimate business and lacked any statutory basis under the MED Act. In this regard, the BHC examined Sections 7 and 10 of the MED Act and held that the said provisions did not empower the State to impose the restrictions contemplated under the Orders.

The BHC emphasized that any restriction on the right to conduct business must be backed by valid legislation and must meet the requirements of Article 19(6) of the Constitution. Since the Orders lacked a statutory basis and failed to satisfy these constitutional requirements, the BHC upheld the challenge and held that the Orders were unconstitutional to the extent that they prohibited the collection of service charges on online bookings.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
