By Tim Pinto – a London-based partner in Taylor Wessing's IP and media team.
UK and EU tests compared
According to the authors of “Copinger & Skone James on Copyright” (17th Edition), a leading UK practitioner's text, “There is no difference of substance between ‘intellectual creation’ on the one hand and the exercise of ‘skill’ or ‘judgment’ in making choices on the other.” In practice, this may well be true for all or most traditional types of literary, dramatic, musical and artistic works. If an artist paints a portrait of a person, it is virtually impossible to imagine that one test is satisfied but the other is not. However, the use of AI in art may be the breaking point. This will be the case if one of the originality tests requires a human author but the other does not. As mentioned, the UK's definition of a "computer generated" work is where "the work is generated by computer in circumstances such that there is no human author of the work".
Does the EU test require a human author? It is submitted that the EU AOIC test requires the work to be the intellectual creation of a human author. This is because the Painer test requires inherently human attributes, such as “personality”, “free and creative choices” and the “stamp of the author's personal touch”. Whilst we may consider, perhaps based on anthropomorphism, that some computers have personality, e.g. a mobile or home voice assistant or certain robots presumably designed to have a personality, it is stretching these concepts to say that machines produce works as a result of their own “free and creative choices” giving such works the “stamp of the author's personal touch”.
We are more likely to say that certain animals have personality and, possibly, a personal touch. If animals can be authors, then could machines be authors? This analogy fails at the first hurdle. The US courts have already ruled that animals cannot own copyright. In the monkey selfie case, Naruto v Slater (2018), the plaintiff Naruto was a crested macaque from Indonesia. The human defendant Slater, a wildlife photographer, left his camera unattended and Naruto allegedly took several selfies with Slater's camera. Slater published the photos in a book in which he and his company claimed to be the copyright owners. Naruto sued for infringement, claiming to be the copyright owner. The US 9th Circuit Court of Appeals dismissed the claim holding that: (1) if an Act of Congress does not plainly state that animals have statutory standing to sue, then animals cannot sue; (2) the US Copyright Act does not expressly authorise animals to file infringement suits; and thus (3) Naruto's claim failed. It is likely that UK and EU courts would come to the same conclusion. In summary, to be original under EU law, there must be a human author.
Does the traditional UK test require a human author? In contrast, it is submitted that the traditional UK originality test allows copyright to subsist in AI generated art even where there is no human author. First, the UK 1988 Act expressly allows for authorship and ownership of copyright where "there is no human author of the work". Second, it is much easier to accept that a work originates from a machine which has exercised a modest amount of skill, labour and judgment.
Regarding moral rights and companies, does consideration of moral rights or the fact that companies can own and sue for copyright assist in the originality debate? As regards moral rights, the UK Act provides that the paternity and integrity rights do not apply to computer generated works (sections 79(2)(c) and 81(2)). The fact that companies can be the first owners of copyright and sue for infringement does not assist either. This is because UK law still requires the work to be original and for there to be an author.
In summary, there must be a human author to satisfy the EU originality test. Therefore, if a machine creates a work of art in circumstances where no human satisfies the AOIC test, then there will be no copyright. If, upon Brexit, the UK courts retain their traditional originality test, then even in the case of pure computer generated works with no human author, copyright may subsist. We next examine who owns this copyright and how long it lasts.
Who is the author of AI created art?
Where a human satisfies the UK and/or EU originality test, then that human will be the author: section 9(1). This would be the case for Type 1 and most probably Type 2 categories. In the case of an artistic work “which is computer generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”: section 9(3). This provision in the 1988 Act seems on its face to resolve the difficult problem of authorship. However, it will not always be clear who undertook the necessary arrangements for the creation of an AI generated artistic work. It could be the person(s) who:
- organised and/or funded the project;
- arranged for the coding to be done before the AI was taught;
- arranged for the teaching of the AI; or
- chose and/or selected any input(s) and/or settings for the creation of the work in question.
The arranger is most likely to be the person (presumably the artist) who conceived of and organised the project (rather than the programmer and/or artist's assistants). It is helpful to distinguish between (a) the direct creator of the actual resulting work – which may be the machine – and (b) the creator of the general idea and/or maker of the arrangements which enabled the machine to create the work of art. For example, imagine a spacecraft which records masses of data from Mars and, using machine learning, creates beautiful new images from the data. Here, the resulting works have no direct human author. Not only does the machine create the images, but such images may also be completely unpredictable and unimaginable. However, the project (from the general concept to building the spacecraft, including the AI aspect) would almost certainly have been arranged by an organisation.
There are likely to be disputes over who is the author of AI created works. There could be several people and/or companies claiming responsibility, e.g. for the underlying algorithms and coding, some of which may be open source. The ultimate question is whether a machine itself could ever be considered the author and/or own any legal rights, in the same way that corporations can own rights and enforce them. However, this seems unlikely.
If a work of art is partly created by a human and partly by a machine, could this be a work of joint authorship? Under section 10, a “work of joint authorship” means “a work produced by the collaboration of two or more authors in which the contribution of each author is not distinct from that of the other author or authors.” Whilst AI and humans will increasingly work more closely together over time, it seems difficult to envisage the court ruling that the machine (which is not a legal entity) and the human were collaborating. The human artist will likely be an author of works which were the expression of his or her own intellectual creation (EU test) or skill, labour and judgment (traditional UK test). The machine itself is not a legal entity and so cannot own any rights. However, copyright may be owned by the arranger (see above).
Who is the first owner of copyright (if any) in AI created art? The author is the first owner of copyright. Therefore, where the author is the arranger of the computer generated work, they will generally be the first owner: section 11(1). If the arranger is an employee, then that person's employer is generally the first owner, subject to any agreement to the contrary: section 11(2). If copyright subsists in a human authored work, the term of copyright protection is generally 70 years after the death of the author: section 12(2). Where the work is computer generated with no human author, the term is 50 years from when the work was made: section 12(7). After 50 years, the work would enter the public domain, meaning it can be copied without a licence.
When do third parties copying AI created art infringe copyright?
By their nature, machines can create works a lot faster than humans. Therefore, it is possible to envisage numerous machines churning out masses of new works of art. If copyright does not subsist (say because there is no human author and the AOIC test is therefore not satisfied), then there would be no copyright prohibition on people or machines copying and otherwise exploiting such works. Millions or even billions of AI generated works could then be available to copy for free (assuming that the works themselves do not infringe (as to which, see below)). This could potentially depreciate the value of art. It may also create a new market of curation, whereby humans (or other machines) would pick out the very best works from the billions being generated all the time. It may then be possible for copyright to subsist in the curation process – the free and creative choice involved in selecting the best images.
Where copyright subsists in AI-generated art, then the copyright owner can bring infringement proceedings against a third party which copies the work. This could be easier said than done. The claimant will need to prove subsistence (including originality), authorship, ownership, copying and substantiality (i.e. that enough of the earlier work has been copied). Boasts such as “look how amazing our new robot is – it created this picture all on its own” may count against the claimant.
When does the use of AI to create art infringe third party copyright?
Having examined the subsistence, authorship and ownership of works of art created by AI, it is necessary to analyse whether the AI process itself potentially infringes the copyright of a third party.
Teaching the machine
In order to teach an AI machine to create an artistic work, it is necessary to provide the machine with data. This data may or may not comprise works protected by copyright. If the input consists of works of art in which the copyright has expired because the works are so old, then any copying during the process will not infringe. The Next Rembrandt project started by feeding data from many of Rembrandt's paintings into a computer. Given that the paintings were over 350 years old, there is no danger that this process would have infringed copyright in the original paintings.
If the input is not capable of being a copyright work (or other protectable subject matter), then copying this would not infringe either. Data taken directly from the weather, Mars, oceans, plants, animals etc. would not amount to copyright works (assuming that the data is collected at source). Other types of data may or may not be protected, such as sports and financial data, or data which has previously been recorded by a third party who may own rights in it.
On the other hand, if the inputs comprise copyright works, the question arises whether the person teaching or operating the AI needs a licence. If the process of teaching requires that all or a substantial part of an earlier copyright work be reproduced (whether directly or indirectly), then this would likely infringe copyright unless the use was licensed or a defence applied. However, if teaching the machine does not actually involve any copying of the earlier work (say only certain statistics are recorded, such as the distance between the eyes and nose of a figure in a painting), then it is possible that there is no infringement. If none of the expression of the author's own intellectual creation (EU test) or any of the skill, labour and judgment (traditional UK test) has been taken, then there may be no infringement. This is a question of fact.
It seems unlikely that the AI machine itself, once taught, would infringe copyright because the machine probably only comprises an algorithm and network of weighted connections. If the machine contains the digital records of earlier copyright protected art, then this would probably require a licence.
Will the machine's output infringe an earlier copyright work? If the machine produces a work of art which is a direct or indirect copy of a substantial part of an earlier copyright work and there is no defence, this will be an infringement. However, it is important to note that if the work was independently created (i.e. it was not a direct or indirect copy), then there will be no infringement, however similar it is to a pre-existing work of art. So if the user, the machine and the programmers had not seen or used the claimant's work, then there should be no infringement, however similar the output is to the claimant's work. If AI produces billions of works of art, some of them are bound to resemble earlier works even if there was no copying. This is more likely to occur than the scenario in the infinite monkey theorem.
Even if the AI generated art resembles the claimant's work and causation is proved (e.g. because the claimant's work was part of the input data), infringement does not follow automatically. The claimant will need to prove subsistence and title, and sue the correct human and/or corporate defendant. Even if these things are proved, the question of substantiality can be difficult to determine with certainty, as the red bus case perhaps shows (see the images below, with the claimant's work on the left).
Conclusion
In conclusion, AI is increasingly being utilised by artists. This gives rise to fascinating legal questions which will either have to be resolved by the courts or new legislation. Whilst legal questions of subsistence and ownership will probably be resolved over time, it seems unlikely that machines will ever have their own legal rights.
Angus Brown of Cherwell, the University of Oxford newspaper, reports on a new exhibition of robotic art.
A robot’s art has gone on display in an exhibition at St John’s College in Oxford, as part of a project created to showcase the potential of artificial intelligence. The art is created by a humanoid robot named “Ai-Da” after 19th century mathematician Ada Lovelace. The robot uses a robotic arm and pencil to draw what it sees with a camera in its eye.
According to the BBC, the art from the robot’s first exhibition has already sold for more than £1 million. The robot Ai-Da was designed by gallerist Aidan Meller, who wanted to create a human-like robot capable of producing original art in order to showcase the potential of artificial intelligence. It absorbs visual information and interprets it using technologies developed at the University of Oxford.
The Telegraph described her paintings as looking “like the kind of kaleidoscopic synthetic-cubist paintings produced by artists in Paris in the Twenties, though they’re more complex in structure and more muted in colour.” Despite being billed as art produced solely by Ai-Da, some of the robot’s works were painted over by a human artist, Susie Emery, and at least seven collaborators are listed beneath each work.
In a video produced for the BBC, the robot said “I would like young people to realise that there is more to art than simply drawing. The context, the meaning, what you want to say.” The robot further said that: “It is wonderful to see people engaging, thinking and discussing this work.” The machine’s creator describes it as a “bespoke robot” which is “able to actually explore those questions, and engage audiences in the whole issue of ethics, and where AI is taking us”, adding: “We need to be able to ask ourselves, what are we actually developing in the future?”
Although The Telegraph gave the exhibition just two stars, they also wrote that “the art of our time – as represented by, say, the Turner Prize – already feels artificial and ‘unreal’ in so many elaborate ways, that creating a robot to produce art that looks ‘real’ feels an almost perversely quaint thing to do.”