This article is the third of three examining the challenges intellectual property (IP) law may face from the increased use of Artificial Intelligence (AI). It specifically considers whether AI can infringe copyright.
The flip side of AI-created works (or indeed AI-created inventions) is that AI may infringe the IP rights of others. For example, AI technology has been used by companies such as Aiva Technologies to compose sheet music through a method that involves processing large collections of existing songs. It is possible that the sheet music produced by the AI technology could contain a section identical to part of a pre-existing song protected by copyright. If produced by a human, this sheet music could be infringing, subject of course to the other requirements for infringement. Thus the question arises: what is the position where otherwise infringing sheet music is created by AI technology?
IP law is currently not equipped to deal with the notion of infringement by AI technology. In the case of copyright law, section 16(2) of the Copyright, Designs and Patents Act 1988 (CDPA) refers to “a person” infringing copyright. On a strict reading of these provisions, there could be no infringement by an AI system. This would be an unsatisfactory position since the owner of the copyright work would be denied a legal remedy for any damage suffered. Furthermore, if there is no primary act of infringement because the act is not by a person, any question of secondary infringement by a person would not arise.
Section 16(2) of the CDPA also provides that copyright is infringed by a person who authorises another to do a restricted act. Aside from whether a person can authorise a non-human entity in the relevant sense, the restrictive interpretation of “authorises” applied by the UK courts is unlikely to cover the use of AI. This is because the data supplied to an AI system can invariably be used in a non-infringing way, and the supply of that data merely facilitates copying, which does not amount to authorisation (CBS Songs v Amstrad [1988] UKHL 15). In any event, as AI technology becomes increasingly independent of human control, the existence of any ‘authorisation’ by a person becomes less likely. Currently, while AI can ‘learn’ through machine learning techniques by processing vast quantities of information and analysing the relationships within that information, it remains dependent on the data which a programmer provides to it.
It is therefore apparent that a change in the law is needed. It should also be clear that this change must identify a legal person with whom liability for infringement by AI rests in order to ensure that remedies can be enforced.
The issue of liability in the context of AI has received general consideration at both the UK Parliamentary level and by the European Parliament (see the Robotics and AI Report by the Science and Technology Committee, and the report on Civil Law Rules on Robotics adopted by the European Parliament). The European Parliament has suggested that sophisticated AI systems be given a separate legal status as “electronic persons” (p.16). Although legislatures have not yet considered the IP context specifically, these reports promise interesting developments to watch out for.