Andrea Viliotti

Copyright and Artificial Intelligence: Challenges and Opportunities for Creative and Technological Industries

The document “Copyright and Artificial Intelligence,” a consultation conducted by the United Kingdom’s Intellectual Property Office in collaboration with the Department for Science, Innovation & Technology and the Department for Culture, Media & Sport, explores the complex relationship between copyright protection and AI development. It addresses the delicate balance between protecting creative works and advancing AI-based technologies, and it highlights the need for new rules that support authors’ rights while promoting transparency and innovation. The topic draws businesses, governments, and users into a dialogue aimed at defining effective tools to protect human expression without hindering the growth of the tech sector, recognizing the importance of international cooperation and regulatory clarity.


Copyright and Artificial Intelligence: The Urgency of Legislative Reform

The consultation presents Artificial Intelligence as a potential engine for growth: in its 2024 “World Economic Outlook,” the International Monetary Fund maintains that the adoption of AI solutions could raise productivity by up to 1.5 percentage points per year. At the same time, the document underlines the value of the creative industries in the United Kingdom, which generate £124.8 billion in added value and employ thousands of professionals in sectors ranging from publishing to film. To understand the need for copyright reform, it is first necessary to consider the technological context in which generative AIs operate, as they increasingly rely on enormous datasets, often consisting of protected works.


In short, the debate focuses on the nature and scope of what copyright law should allow or prohibit during the “training” process, that is, the phase in which AI models learn rules, styles, and patterns from content available online or in databases. Many works hosted on the Internet—such as books, song lyrics, images, and illustrations—are retrieved by automated crawlers that gather resources to enrich the initial data lakes for AI training. On one hand, this process drives innovation by allowing developers to create more efficient and accurate algorithms, while on the other, it creates uncertainty for authors who do not know how their creations are being used, nor whether they will be entitled to proportional compensation.


The government consultation highlights the difficulty of conceiving a rule that is effective without slowing down AI development. Developers, on one hand, complain about the risk of sanctions and lawsuits, to the point that they might opt to train their models in countries with regulations considered more permissive or, in any case, clearer. On the other hand, rights holders claim they lack adequate means to verify or block unauthorized use of their works, resulting in an inability to control or monetize the use of protected material through proper licensing.


In this scenario, the British government proposes direct intervention through a series of measures that include dataset transparency, the adoption of targeted exceptions for text and data mining, and the option for creators to explicitly reserve their rights. The goal is to create a regulatory framework that fosters innovation while giving creators the tools to negotiate and manage the use of their works. The guiding idea centers on the introduction of a “rights reservation” mechanism, in which access to protected material for training is not automatic but subject to license clauses if the rights holders clearly indicate a desire for protection.


The success of these measures, however, depends on inclusive involvement: beyond creative industry operators and AI providers, the participation of consumers, authorities, and standard-setting bodies is crucial. Without a shared technical protocol and harmonization with other legislative systems, the new framework risks remaining theoretical, leaving the actual issue of control unresolved. This initial reflection therefore underscores how essential it is to act quickly and clearly, so as not to hamper AI advances and, at the same time, not to deprive artists of the deserved economic and moral recognition. The danger of major tech companies relocating from the United Kingdom underscores the gravity of the situation and what is at stake: a balanced intervention could turn into an opportunity for all parties involved, provided that the issues of transparency, regulatory clarity, and rights protection are addressed with a constructive and informed attitude.

 

Rights Reservation and Technical Standards: New Tools for Copyright in the AI Era

The core of the proposed measures lies in an exception to copyright law that would allow data mining of freely accessible works, but with the right of copyright holders to “reserve” their works and prevent their use without a license. Part of the inspiration comes from the European Union, which has already introduced the idea of an exception for text and data mining with an opt-out mechanism. However, the British document points out that the European model is still not without problems, partly because the system for reserving rights in many cases remains insufficiently standardized and difficult to implement on a large scale.

The hypothesis under consultation is to apply a “reserved rights” approach through technical solutions that make the rights holder’s intent clearly machine-readable.


An existing example is the robots.txt file used by various publishers to block the scanning of their content, though it is deemed inadequate for managing selective control over AI model training. Indeed, a robots.txt file applies to an entire domain and is intended for search engines, not for machine learning systems that may require more targeted constraints.
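The limitation described above is easy to see in practice: robots.txt rules are expressed per crawler and per path for a whole site, with no finer-grained notion of “usable for training.” A minimal Python sketch, using only the standard library and, for illustration, the crawler name “GPTBot” (the identifier OpenAI publishes for its web crawler):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks one AI crawler from a section of
# the site while leaving ordinary crawlers untouched. Note that the
# rule can only distinguish crawlers and paths, not intended uses.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /articles/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The named AI crawler is blocked from the reserved path...
print(parser.can_fetch("GPTBot", "/articles/my-novel-excerpt"))
# ...but any other crawler may still fetch the same work.
print(parser.can_fetch("SearchBot", "/articles/my-novel-excerpt"))
```

As the sketch shows, the protocol can keep a named crawler out of a directory, but it cannot say “index this for search, yet do not train on it,” which is exactly the selective control the consultation finds missing.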

There are private initiatives that let creators register their works in databases, indicating their desire for exclusion. Some platforms, such as Spawning.AI, offer “Do Not Train” options, though the adoption of these tools is still inconsistent and requires cooperation from AI developers. As a result, the British government is considering regulating transparency and respect for rights reservation, requiring AI companies to disclose data sources and comply with any metadata or technical signals that prohibit the use of specific content. The aim is to foster a licensing market—particularly for large catalogs of content—in which owners can freely negotiate terms with AI providers. This presupposes the adoption of universally accepted technical standards to avoid the current fragmentation, where different developers implement proprietary methods, forcing authors to block each platform manually.


It is noteworthy that the proposal also considers the needs of small innovative companies, which would be unable to negotiate individually with every rights holder or handle overly burdensome compliance processes. A system of collective licensing and shared codes could facilitate access to large datasets, spurring the development of high-quality AI products while respecting creators’ legitimate expectations.

Concretely, an author (or a major publisher) who does not object can simply leave their works within the exception; alternatively, they can reserve their rights, block any copying of their creations, and negotiate compensation with parties that need high-value datasets. This mix of freedom and control should serve as an incentive both for creators, who can offer licenses more transparently, and for AI developers, who will operate within a more secure framework.


Standardization and government support in developing technical solutions are critical: this ranges from the need for protocols to read and apply “rights reservation” signals to the promotion of metadata applicable to individual works. There is also talk of potential public funding for research on mass labeling tools to help less-structured creators manage the enormous flow of circulating data. Without such standards, the legislation could be rendered ineffective. Without an automated mechanism, rights holders would be forced to verify compliance on a case-by-case basis, and developers would lack certainty about meeting requirements. Clearly, then, rights reservation must be paired with a practical way to enforce it, without creating excessive burdens on industry operators. Achieving this balance will require additional dialogue and practical testing.
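To make concrete what a machine-readable, per-work reservation signal might look like, here is a minimal Python sketch. The manifest schema (a "train" field set to "allow" or "deny") is purely illustrative, since the consultation does not prescribe any field names; the deny-by-default policy mirrors the idea that access to protected material should not be automatic:

```python
# Illustrative manifest of works carrying per-work reservation signals.
# The "train" field and its values are hypothetical, not a standard.
works = [
    {"id": "book-001", "train": "allow"},
    {"id": "song-002", "train": "deny"},
    {"id": "image-003"},  # no signal recorded for this work
]

def usable_for_training(work, default="deny"):
    # A cautious pipeline treats a missing signal as a reservation
    # (deny by default), so unlabeled works are never ingested silently.
    return work.get("train", default) == "allow"

training_set = [w["id"] for w in works if usable_for_training(w)]
print(training_set)  # only the explicitly allowed work remains
```

The interesting design choice is the default: with deny-by-default, unlabeled works stay out of the dataset, which protects less-structured creators but shrinks the available data; allow-by-default would do the opposite. That trade-off is precisely what a shared technical standard would have to settle.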

 

Transparency and Collaboration between Creators and AI Developers

The document underlines that the effectiveness of any new exception for text and data mining will depend on transparency. A major obstacle to mutual trust lies in the scarcity of information about the sources used to train generative models. Many creators have complained that they cannot determine whether their online content—often posted for promotional purposes—has been secretly copied to build massive datasets. AI companies, for their part, argue that revealing in detail millions of specific sources can be complex, especially when operating with datasets managed by third parties or open source resources without a centralized archive.


This has led to the proposal of at least a “sufficiently detailed” obligation of disclosure, akin to what is prescribed in the EU AI Act, which asks developers to publish a list of the main databases and sources used in training while allowing for a certain degree of conciseness. The United Kingdom appears keen to coordinate with these international rules to avoid barriers to interoperability and maintain the attractiveness of its market.


When structuring these transparency requirements, there must be consideration of potential conflicts with trade secrets and confidential information protections. Some AI developers might treat their data collection methods as company know-how, seeing it as harmful to disclose every source. At the same time, creators have the right to verify any unlawful uses of their works. The consultation therefore opens space for proposals on how to balance transparency with industrial secrecy, through partial disclosure procedures or the creation of an independent oversight body that could verify data sources without making all the details public.


Another particularly relevant aspect is the so-called “labelling” of AI-generated content. Some platforms are already experimenting with automated tagging so that end users know that a text, image, or video was produced by a generative algorithm. This is important not only for copyright protection but also for reasons of proper information. For instance, if a reader does not realize that an article has been written by a natural language system, they might wrongly attribute to a flesh-and-blood journalist opinions and reflections that are generated by an automated process, potentially impacting reputation. In the realm of copyright, labeling content as “AI-generated” would enable quicker assessment of whether a model may have reproduced protected works.
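As a sketch of what such labeling could involve, the snippet below attaches a machine-readable provenance record to a generated text. The field names are illustrative assumptions, not an established provenance standard such as C2PA:

```python
import hashlib

def label_generated(text: str, model_name: str) -> dict:
    # Attach a provenance record alongside the content so downstream
    # systems (and readers) can tell the text is machine-generated.
    # The field names here are illustrative, not a standard schema.
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            # A content hash lets a verifier detect later tampering
            # with the labeled text.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

labeled = label_generated("An example paragraph.", "demo-model-v1")
print(labeled["provenance"]["ai_generated"])  # True
```

Even a simple record like this would let a platform flag AI-generated material automatically, which is the practical point of the labeling proposals discussed in the consultation.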


Internal traceability could also boost transparency: some algorithms, during the generation process, could save a “history” of how the content was created, helping prove that the text or image does not contain copyrighted material without authorization. In the consultation, the government emphasizes that it does not wish to impose heavy burdens on small businesses that use or develop AI; it seeks a balanced solution where the requirement for disclosure is proportional to the type of model, the scale of its use, and its purposes. If an application only generates brief excerpts for educational, non-commercial use, it would be unfair to require the same obligations as a major company distributing content to millions of users.

The debate remains open, and international coordination should not be overlooked, especially given that many AI services are trained outside the UK and then made globally available, underscoring the need for interjurisdictional cooperation.

 

AI-Generated Works: Legal Challenges and Human Creativity

A central point often discussed in the consultation concerns the protection of so-called computer-generated works, in which no genuine human creative contribution can be identified. Under Section 9(3) of the UK Copyright, Designs and Patents Act, such works without a human author receive a specific 50-year term of protection. However, this mechanism collides with the evolving concept of “originality” in case law, which has traditionally required intellectual input and creative decisions from a human author.

Nowadays, the rise of generative systems (texts, images, videos, music) questions the very notion of copyright protection. If an algorithm produces a musical composition without any human compositional input, does granting copyright protection make sense, and to whom should it be assigned? Some argue that such protection is unnecessary because automated creativity does not require the same incentives as copyright.


Others believe a basic form of protection could encourage companies or individuals to invest in the software, neural networks, and infrastructure that produce content, creating economic advantages. Critics respond that, aside from the related rights that already cover the fixation of a sound, a video, or a text (for example, the rights in a sound recording), it is not appropriate to treat AI programs as artists. Many countries, including the United States, do not recognize protection in the absence of a human author, and this does not appear to have slowed the spread of AI.

If an artist “assisted” by AI remains protected because their creativity, even when mediated by algorithmic tools, is still safeguarded under traditional copyright, purely automated creation is a separate matter. The consultation thus invites opinions on whether to abolish or amend Section 9(3) of the UK law, leaving other forms of protection—entrepreneurial in nature, such as the role of a “publisher” for a sound recording—to cover the economic interests of those who invest in AI projects.


Another issue arises from the possibility that some AI-generated content might inadvertently plagiarize or contain substantial portions of protected works. If an AI model, trained on famous songs, were to produce remarkably similar tracks, it could violate existing copyrights. Developers try to manage this risk by introducing filters and internal checks, though results are not always foolproof. It therefore remains essential to determine whether generated content directly and substantially derives from protected works.

This fourth section illustrates the complexity of defining and regulating computer-generated works. The choice to maintain, modify, or repeal legal protection for these creations has significant economic, legal, and cultural consequences and calls for extensive engagement from all stakeholders, including big tech companies, individual artists, universities, and small entrepreneurs who view AI as a competitive tool.

 

Digital Replicas and Responsibility: Strategies for a Balanced Ecosystem

Another emerging topic addressed by the official document is that of digital replicas, often called deepfakes: synthetic content that reproduces the voice or appearance of real people without authorization. This raises significant concerns in the creative sector, particularly among musicians and actors, since an AI model trained on audio or video recordings can recreate vocal or visual performances extremely similar to the original. While certain aspects of copyright, such as performance rights, can limit the use of specific vocal tracks or footage, these protections do not always suffice to deter the proliferation of synthetic imitations.


Some authors advocate stronger personality rights—like those in some parts of the United States—to block the unauthorized use of their image or voice. The consultation notes that introducing a new personality right in UK law would be a major shift, as it involves privacy, freedom of expression, and commercial strategies of record labels and film production companies. The UK does recognize certain protections—such as the tort of passing off or personal data protection—but many actors and singers fear these measures are insufficient in the face of AI capable of generating highly realistic vocal and visual “clones.”

Moreover, technologies like voice synthesis or 3D body modeling can now create “replicas” for advertising or marketing purposes without the person’s knowledge. Internationally, the consultation notes growing interest in ad hoc regulations, like those in California, and suggests that any decision in this area must consider this trend.


Another key point revolves around the inference process, i.e., the phase in which an already trained AI model uses protected data in real time to generate answers or new content. For example, a “retrieval-augmented generation” system might read online news behind a paywall or covered by copyright, integrating it into a synthesis offered to the end user. While copyright law clearly bans the substantial reproduction of protected portions, the complexity of these models, capable of swiftly analyzing and reproducing vast numbers of articles, cannot be overlooked. The consultation thus asks whether current regulations adequately protect creators and encourage healthy technological development.

Meanwhile, there are already forward-looking discussions regarding the use of synthetic data, specifically generated so as not to infringe copyright, potentially solving many issues. However, the actual market impact and how it affects the quality of AI solutions remains unclear.


In such a fast-changing landscape, the government document sets itself up as a starting point for ongoing dialogue. The intention is to adopt a legislative framework that, on one hand, gives creators better tools to control and monetize their works, and on the other, continues to encourage major tech companies to invest and conduct research in the UK. This entails a careful assessment of the needs of smaller businesses and the academic community, which often require flexible rules to keep pace with progress.

 

Conclusions

The reflections presented provide a realistic view of how difficult it is to strike a balance between innovation in the field of Artificial Intelligence and long-established rights within the creative industry. The United Kingdom, home to leading high-tech companies and boasting a significant cultural contribution, feels the need to clarify its legal framework to avoid uncertainty. Comparing its approach with other legal systems, such as those of the EU and the US, suggests that a text and data mining exception with the possibility of “rights reservation” is a feasible route—provided that technical tools and standardization protocols are implemented. Such measures could align creators’ interests, granting them the chance to opt for remunerative licenses, and AI developers’ interests, looking for secure access to vast amounts of data.


However, a strong commitment to transparency remains indispensable so that training datasets and generated content can be more easily understood and monitored, especially regarding sensitive issues such as plagiarism or unlawful use of protected works. It is also necessary to consider existing technologies for disclosing the automated nature of content (e.g., labeling systems), enabling business leaders and entrepreneurs to make informed decisions about their strategies. While synthetic data use may offer an alternative, a decisive intervention on licensing mechanisms and limits to digital replicas—which risk devaluing creative performances and escalating tensions in the entertainment world—appears crucial.

From a strategic perspective for businesses, it is essential to understand how this consultation and the related regulatory developments could affect investments. Clear rules and an effective licensing system might attract new AI enterprises, creating opportunities in publishing, music, and film. At the same time, creators need protections that reflect the value of their work and provide a balance between technological experimentation and economic return. The challenge lies in avoiding overly complex standards that would hamper smaller players, as well as overly simplified mechanisms that fail to provide solid rights protection.


The scenario thus makes clear that doing nothing is no longer an option, given the speed of AI’s expansion and the growing frustration of those who fear losing control over their content. The British government’s current direction seeks to promote an environment of mutual trust, where every stakeholder—from startups to major technology conglomerates, all the way to individual authors—understands how to contribute to building an ecosystem in which innovation does not conflict with fair remuneration for creative works. The questions raised in the consultation invite all parties to participate with constructive proposals, because the outcome will affect the resilience of both the industrial and cultural sectors, as well as the broader need to imagine a future in which AI and human creativity can collaborate in a positive way, governed by clear rules.


 
