Artificial intelligence has already proven its ability to generate value at speed, producing everything from software code and product prototypes to business strategies and creative designs. But as the tools mature and their adoption accelerates, a question is surfacing: who owns the outputs? Unlike the input problem, which deals with what goes into the AI tool (see our previous article in this regard, AI Input), the output challenge is about what comes out of the tool, and here the lines are even more blurred than with inputs.
Consider a business that deploys a generative AI tool to design a new product prototype. That output could hold significant commercial value, but unless agreements are clear, it is not obvious who has the right to commercialise it. Do the rights belong to the end user who used the AI tool, the developer who created it, the licensor who provided it, or another party who contributed data or intellectual property to the output?
Without certainty on the above, product launches can stall, collaborations can fracture and competitors or third parties may challenge the rights to use or enforce the output generated. The stakes are even higher if the AI-generated content resembles pre-existing materials, either because of the data the model was trained on or the way it reproduces patterns. In those cases, businesses may find themselves unable to defend the originality of their outputs or, worse, accused of infringement.
The question of outputs is not just about ownership. It extends to control, exclusivity, registrability and, ultimately, value realisation. If a business cannot say with confidence who owns what, it cannot license those rights, protect them from misuse or scale them into reliable revenue streams. Ambiguity erodes value, and in the high-velocity world of AI-driven innovation, delays in asserting ownership can translate into lost market share. In a recent matter that we advised on, a client's employee used an AI tool to generate a logo. The company embarked on a massive marketing campaign, expending millions of rands, only to discover that it was unable to register a trade mark in, or commercially exploit, the logo because the terms of service of the AI tool attributed ownership of the output (i.e. the logo) to the AI tool developer and not to our client. This was not only a legally tricky position to be in; our client was also negatively affected commercially and reputationally.
These concerns are no longer theoretical. As AI becomes embedded in product design cycles, software development and even day-to-day operations, the risks of unclear ownership are immediate. Teams are already encountering disputes over who controls the rights to AI-generated reports, code, designs or other outputs. The absence of clear frameworks not only slows down go-to-market strategies but also undermines a company's ability to protect its competitive edge.
The answer lies in coordination and governance. Technology, business, marketing, product teams and other stakeholders need to work hand in hand with legal and commercial functions to ensure that inputs are properly vetted, output ownership and rights are clearly addressed in enforceable agreements, and there is alignment on who owns, controls and can exploit AI-generated content. Beyond inputs and outputs, organisations also need to consider rights in training data itself, as well as ownership of algorithms in bespoke AI developments. These layers all play a role in shaping who truly benefits from the technology.
AI's power lies in its ability to generate value at scale, and often in unexpected ways. But if businesses cannot secure the outputs, or are unable to control how they are used or exploited, the innovation that AI promises may never translate into commercial advantage and may, worse, result in negative legal, reputational and commercial consequences. Managing this risk does not mean avoiding AI; it means adopting AI responsibly, with proper and pragmatic contractual and governance frameworks in place.
For businesses building, deploying or relying on AI-driven tools, clarifying ownership of both inputs and outputs is one of the most important steps toward protecting long-term value and building trust in their technology.
In Part 3 of this series, we will explore the practical measures companies should adopt to mitigate AI-related risks and embed responsible governance frameworks into their contracts and operations.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.