“Pennywise on a Bike” [pictured] is a drawing that appears quite ordinary, perhaps even slightly morbid, bearing all the hallmarks of a budding artist’s creation, including the signature in the bottom right corner. Yet, when the renowned horror novelist Stephen King shared this image on Twitter, he revealed a chilling detail: “This was done by an AI bot. I asked my tech friend, Jake, to put Pennywise on a bike and this came out.” (King, S., https://twitter.com/StephenKing/status/1560993085278019584/photo/1) And thus, the unsettling truth emerged.
This macabre artwork is the product of a text-to-image artificial intelligence (AI) generator, a technology that has increasingly infiltrated the digital realm. AI tools such as Midjourney, DALL-E 2, and Stable Diffusion have grown steadily more sophisticated, leaving many in awe of their output. The process of generating an image typically involves the user providing textual input, which the AI system, often relying on artificial neural networks, translates into a visual representation. Trained on a dataset containing millions of images and their corresponding captions, the neural network learns the relationships between words and images, enabling it to produce a virtually limitless array of generated images.
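The association between captions and images described above can be caricatured in a few lines of code. The sketch below is a deliberately simplified, hypothetical analogy: it merely retrieves the stored image whose caption best overlaps with the prompt, whereas real generators such as Stable Diffusion synthesise entirely new images with neural networks. All names and data here are invented for illustration.

```python
def score(prompt_words, caption_words):
    """Count the words a prompt shares with a stored caption."""
    return len(set(prompt_words) & set(caption_words))

def generate(prompt, dataset):
    """Return the stored image whose caption overlaps most with the prompt.

    A toy stand-in for a trained text-to-image model: real systems
    synthesise pixels rather than look them up.
    """
    words = prompt.lower().split()
    best = max(dataset, key=lambda item: score(words, item["caption"].lower().split()))
    return best["image"]

# A miniature caption–image "training set" (hypothetical filenames).
dataset = [
    {"caption": "a clown on a bike", "image": "clown_bike.png"},
    {"caption": "a dog in a park", "image": "dog_park.png"},
]

print(generate("clown riding a bike", dataset))  # clown_bike.png
```

The point of the toy is only to show why training data matters: whatever patterns recur in the dataset, signatures included, shape what comes out.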
The image shared by Stephen King, however, raises an unsettling question: why does this AI-generated artwork bear a signature? The answer, while simple, highlights the concerns of many critics. The AI generator, having analysed millions of images, has inferred that a signature is a necessary component of an artwork, because signatures feature in so many artists’ original creations. Consequently, should these artists not be recognised, or at least granted the option to choose whether their works are used in this manner? This quandary has led to numerous lawsuits, most notably between the stock image supplier Getty Images and Stability AI, the developer of Stable Diffusion.
Another disquieting aspect of AI-generated art is its capacity to emulate the unique styles of artists, both past and present. A striking example is ‘The Next Rembrandt’, a project that used similar technology to create an astonishingly accurate replica of a work by the Dutch master Rembrandt van Rijn. Contemporary artists are not immune to this phenomenon either: the digital artist Greg Rutkowski’s name has been used as a prompt for AI image generation nearly 93,000 times, according to the MIT Technology Review. The replication of artists’ works, upon which their livelihoods depend, could erode both the demand for and the value of their creations.
The advent of AI-generated art also raises thorny questions about copyright. Who owns the resulting work – the end-user or the AI system? Do the original owners of the images used to produce the final piece retain any moral rights over the work? The answers to these questions may vary depending on the specific processes employed by different AI image generators.
Intellectual property rights have traditionally centred on human creators, and the legislation surrounding them has been similarly human-centric. This leaves the legal landscape ill-prepared to address the complex issues arising from AI’s impact on intellectual property. Without a robust legislative framework to grapple with these challenges, the consequences could be dire.
In South Africa, the Copyright Amendment Bill of 2017 aims to protect the economic interests of creators while accommodating emerging technologies. However, the bill offers no guidance on AI and other technological developments that will fundamentally affect copyright. As AI continues to permeate the world of art, the urgency for meaningful legislation only grows. The consequences of inaction loom large, threatening to unleash far-reaching disruptions within the intellectual property law space.
By Viteshen Naidoo and Stefaans Gerber