What intellectual property and liability issues does generative artificial intelligence raise? – Osborne Clarke

The growth and transformation of artificial intelligence raises a number of legal issues that require adaptation and innovation

Artificial Intelligence (AI) applications are undergoing significant growth and change, requiring adaptation and innovation across industries. For example, to stay ahead of the curve in this rapidly evolving field, search engine providers are racing to release chatbots designed to improve their search capabilities. Generative AI – the technology that creates content on demand – can simplify complex queries and handle routine tasks.

This transformation also raises legal questions. The forthcoming AI regulation addresses some of these, but not all. One area of significant uncertainty – but high commercial relevance – is the intellectual property (IP) implications of AI systems.

Is the AI system protected?

While some aspects of AI systems can be protected by patents, copyright, and trade secrets, machine learning systems do not fit neatly into traditional IP categories. This is partly because the real investment and value in developing an AI solution goes into the training process. A “trained” system is not viewed differently in IP law from an “untrained” one, and yet one is significantly more valuable than the other. The training datasets themselves can be of immense value, yet it is unclear to what extent intellectual property can protect them (apart from protecting individual elements). In practice, the most effective protection is likely to come from a combination of intellectual property rights and a web of contractual provisions.

Is the output protected?

As currently written, many copyright laws around the world protect only human creation, excluding anything created by sheer automation or chance – or even by elephants provided with paint or, as has been reported, monkeys operating the shutter of a camera. While one may ask the philosophical question of whether, or at what point, artificial intelligence will become sufficiently similar to human intelligence to warrant an analogy, we are probably not quite there yet. Neither the monkey nor your AI chatbot owns any copyright.

Recently, the US Copyright Office reportedly first granted copyright registration for a comic book that uses AI-generated art and then took steps to revoke the registration on the grounds that only humans can be considered authors. However, since only the images were partially generated by AI while the text of the book was not, and the selection and arrangement of the images was also the work of the human author, the work as a whole may still be eligible for copyright protection. The same goes for an image or text created by AI and then further edited by a human creator.

Other cases are less clear, and questions may arise as to whether the prompt that produces a particular AI-generated output might itself be copyrighted. At least in the EU, the threshold for protecting text can be low, but with current AI systems the same prompt does not always produce the same result, so the practical value of this approach is likely to be limited.

Is the output infringing?

Generative AI relies on the input of large amounts of data. In practice, the data used to train an AI model may or may not be protected by IP rights. This depends on the underlying dataset being used; for example, animal sounds and historical weather data are likely to be unprotected, while the photos and text underlying some of the popular generative AI applications are likely to be protected. Whether the training set as a whole is protected may differ from one jurisdiction to another (e.g. the European Union has a specific database right) and depends on the specific circumstances in which it was created.

The output of a generative AI system contains traces of all these inputs, which can sometimes be quite conspicuously identifiable. A major visual media company – a provider of images for businesses and consumers – recently filed a lawsuit in the US against an AI company, alleging that it copied millions of photos from the company's collection in order to build a competing business without permission or compensation. The photo agency claims that the AI company scraped copyrighted images, used them to train its image generation model, and removed or altered the copyright management information.

Using copyrighted works to train AI models raises questions of infringement. While US companies invoke fair use as a defense in ongoing litigation, courts have yet to rule on whether that exception applies in the context of AI. However, US case law has previously held that digitizing copyrighted books for an online library project may be fair use – which could serve as a precedent for the use of copyrighted works in AI training.

In contrast, European copyright laws contain no broad and flexible exception like fair use. It is therefore more difficult to find copyright exceptions that justify the use of a third party's copyrighted work in the input or output of an AI system. In the context of machine learning, the EU Directive on Copyright in the Digital Single Market allows text and data mining without rightsholders' consent, but rightsholders can opt out, except where the mining is for non-profit research purposes. Limited exceptions apply to situations such as quotation, parody, or pastiche, but they often do not cover typical usage by AI systems.

Is the output (otherwise) illegal?

Other points to consider, particularly when dealing with images of real people, are publicity and privacy rights – under, for example, the General Data Protection Regulation – all of which can easily be violated when images are created, used, and reproduced without the consent of the persons concerned or another legal basis. Several European data protection authorities have imposed significant fines on companies using AI-enabled biometric analysis tools trained on images scraped from online sources without a sufficient legal basis.

Specific problems can arise from the search-engine use case discussed above: integrating an AI-supported auto-completion function into publicly available software tools can also increase a provider's liability risks. For example, Germany's Federal Court of Justice has held that a search engine must remove unfavorable autocomplete suggestions (the decision was made in a case where typing in a person's name automatically suggested searching for that person as a “cheater”).

Commentary by Osborne Clarke

The AI regulation currently being discussed in the European Union and beyond focuses primarily on safety risks, risks to the fundamental rights and freedoms of citizens, and individual aspects of liability disputes. It does not offer a new toolkit for dealing with IP and privacy rights in training data – issues made more complex by the cross-border nature of online data scraping and usage and the territorial nature of differing intellectual property laws.

Actors in the AI ecosystem should take this uncertainty into account and are well advised to review the legal and contractual basis of their use of input and output in the key jurisdictions of their own or their customers' operations. Additional guidance may emerge from the outcome of the legal proceedings highlighted above.