ARTISTS VERSUS AI

“We like to say we’re trying to expand the imaginative powers of the human species. The goal is to make humans more imaginative, not make imaginative machines, which I think is an important distinction.”

It’s funny how, as technology rapidly evolves, so does its corresponding vocabulary, or, as it’s more popularly called, its ‘buzzwords’. In a relatively short period of time, we have been introduced to ‘bots’, the ‘metaverse’, ‘extended reality’ (‘XR’), the ‘digital immune system’ (‘DIS’), and ‘content scraping’. That last one? According to many observers, it’s just a newfangled way of saying ‘website shoplifting’.

As with any content lifting, it is, of course, illegal if done without the consent of the original creator. But with the advent of sophisticated AI tools known as text-to-image generators, the practice has become much easier. And now, artists whose works have been used to train AI image generators are suing to stop the alleged purloining of their creative works.

Understanding Diffusion

AI diffusion was developed in 2015 at Stanford University. It works by taking an image, or even non-image data, and progressively adding visual noise that degrades the original, step by step, until the image has been totally ‘diffused’ and is indistinguishable from random noise. The AI then backtracks over those same steps and removes the noise until a copy of the original image is reproduced. The process is loosely analogous to the way digital formats such as MP3 and JPEG compress copies of digital data by omitting certain details. Through diffusion, an AI program can thus reconstruct a copy of its training data by way of de-noising.
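
To make the step-by-step ‘noising’ concrete, here is a minimal Python sketch of the forward half of that process, assuming the image is a NumPy array of pixel values. The noise-schedule numbers are illustrative only and are not the settings of Stable Diffusion or any particular model.

    import numpy as np

    def forward_diffuse(image, num_steps=1000, beta_start=1e-4, beta_end=0.02):
        """Progressively mix an image with Gaussian noise until it is pure noise."""
        betas = np.linspace(beta_start, beta_end, num_steps)  # per-step noise amounts
        alphas_cumprod = np.cumprod(1.0 - betas)               # fraction of the original signal left
        steps = []
        for t in range(num_steps):
            noise = np.random.randn(*image.shape)
            # Closed-form sample of the image at step t, given the original image:
            x_t = np.sqrt(alphas_cumprod[t]) * image + np.sqrt(1.0 - alphas_cumprod[t]) * noise
            steps.append(x_t)
        return steps  # the final entries are indistinguishable from random noise

    # The reverse ("de-noising") half runs these steps backwards, with a trained
    # neural network estimating, at each step, the noise to subtract.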

In 2020, researchers at UC Berkeley took the technique one step further in order to produce high-fidelity copies of training images known as latent images, and, when those latent images were interpolated mathematically, a new derivative image was produced. In 2022, researchers in Germany developed a technique to shape the de-noising process with additional information (‘conditioning’), including using short text descriptions for conditioning. Stable Diffusion and other AI image generators (‘collage tools’) use a text prompt to create a new image.
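
As a concrete illustration of text conditioning, the sketch below shows how Stable Diffusion is commonly invoked through Hugging Face’s open-source diffusers library. The checkpoint name and prompt are examples only, and running it requires downloading the model weights and, realistically, a GPU.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load an example Stable Diffusion checkpoint (several gigabytes of weights).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example model ID, not the only option
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

    # The text prompt is the conditioning that steers the de-noising process.
    image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
    image.save("lighthouse.png")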

Creating AI Diffusion Datasets

Stability AI, a company founded in London, funds LAION (Large-scale Artificial Intelligence Open Network), a German non-profit organization that provides datasets, tools, and models to advance machine-learning research. Stability AI developed Stable Diffusion using a LAION dataset, and the company also released DreamStudio, a web interface for Stable Diffusion.
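
Conceptually, a LAION-style dataset is not a folder of images but an enormous table of image URLs paired with captions; training pipelines then download the images behind those URLs. The sketch below illustrates that pattern with a hypothetical CSV index and column names; real pipelines use bulk tools such as LAION’s img2dataset.

    import csv
    import os
    import requests

    def fetch_pairs(index_csv, out_dir="images", limit=100):
        """Download (image, caption) pairs listed in a URL/caption index file."""
        os.makedirs(out_dir, exist_ok=True)
        pairs = []
        with open(index_csv, newline="", encoding="utf-8") as f:
            for i, row in enumerate(csv.DictReader(f)):
                if i >= limit:
                    break
                url, caption = row["url"], row["caption"]  # assumed column names
                try:
                    resp = requests.get(url, timeout=10)
                    resp.raise_for_status()
                except requests.RequestException:
                    continue  # dead or blocked links are common
                path = os.path.join(out_dir, f"{i:06d}.jpg")
                with open(path, "wb") as img_file:
                    img_file.write(resp.content)
                pairs.append((path, caption))
        return pairs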

Another major player on the AI diffusion scene is DeviantArt, which was founded in 2000 and soon became one of the largest artist communities on the internet. Possibly millions of images in the LAION dataset were copied from DeviantArt for the purpose of training Stable Diffusion. Yet another organization, Midjourney, founded in San Francisco in 2021, offers text-to-image generation through its web app.

So, what do Stability AI, DeviantArt, and Midjourney all have in common? They are all defendants in a class action lawsuit brought by artists whose works were scraped in order to train the text-to-image generators.

Collage Tools and Copyrights

In January 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a class action lawsuit against Stability AI, DeviantArt, and Midjourney over their use of the collage tool Stable Diffusion, alleging that it remixes the copyrighted works of millions of artists worldwide, scraped from the web for AI training purposes without the artists’ consent, credit, or compensation.

Defendant Midjourney, although holding itself out as a ‘research laboratory’, has in fact monetized its image generator by promoting it to thousands of paying customers. Midjourney’s founder, David Holz, has acknowledged that Midjourney is trained on a ‘big scrape of the internet’. He maintains, however, that when it comes to the massive copying of training images, ‘there are no laws specifically about that.’ The Plaintiffs, on the other hand, intend to educate Mr. Holz and the other Defendants as to just what laws do in fact exist to protect artists from what they deem unfair and unethical conduct.

Stable Diffusion’s Impact

Critics have been particularly harsh on DeviantArt, accusing it of selling out the artist community rather than protecting it, even though such conduct appears to violate the company’s own terms of service and privacy policy. When confronted over the issue in November 2022, DeviantArt executives offered no explanation for the apparent conflict between policy and conduct. And when the company released DreamUp, another paid app built on the Stable Diffusion platform, the AI-generated art it produced crowded out human artists.

In China, an experienced freelance illustrator was hired to draw artwork for a novel, but after he submitted his drafts, the publisher expressed its admiration for his work and then proceeded to fire him. The company decided to replace him with an AI tool that would generate the needed artwork going forward, at a cost of only 2 cents per image.

A Fair and Ethical AI

One of the lawyers for the class action lawsuit described the action as ‘another step toward making AI fair and ethical for everyone’ while expressing the concern that the ability of AI collage tools such as Stable Diffusion ‘to flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists.’

Executive Summary

The Issue

What impact are AI collage tools having on the human artist community?

The Gravamen

The unauthorized, massive internet scraping of artists’ images for AI diffusion training purposes arguably amounts to copyright violation as millions—and perhaps billions—of images are copied without the knowledge or consent of the artists.

The Path Forward

Even though the resulting images may not actually resemble the training images they were derived from, derivative art is nevertheless protected art, and some observers predict that the collage tool vendors will ultimately have to fully license their AI diffusion tools.

Action Items

Boycotting the Technology:

Various artists are pledging never to use AI tools in their illustrations, vowing instead to create all of their works ‘one stroke at a time’.

Hidden Watermarks:

Some artists are putting AI collage tool companies on notice that they may not use their works to train AI; in addition, they are embedding hidden watermarks in their images in an effort to prevent those works from being used to develop new AI models.
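
As a toy illustration of the hidden-watermark idea, the sketch below hides a short text tag in the least-significant bits of an image’s red channel. This is not the technique used by any particular anti-AI tool, and a mark this simple would not survive resizing or lossy re-compression; it only shows how a marker can be embedded invisibly.

    import numpy as np
    from PIL import Image

    def embed_tag(in_path, out_path, tag="do-not-train"):
        """Hide a short UTF-8 tag in the lowest bits of the red channel."""
        pixels = np.array(Image.open(in_path).convert("RGB"))
        bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
        red = pixels[..., 0].flatten()
        red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite the lowest bit
        pixels[..., 0] = red.reshape(pixels.shape[:2])
        Image.fromarray(pixels).save(out_path, format="PNG")  # lossless format keeps the bits intact

    def read_tag(path, length=12):
        """Recover a hidden tag of `length` bytes from the red channel."""
        pixels = np.array(Image.open(path).convert("RGB"))
        bits = pixels[..., 0].flatten()[: length * 8] & 1
        return np.packbits(bits).tobytes().decode("utf-8")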

Industry Pushback:

The global artist community, including those working in film, can push back against the use of AI. It happened in Japan, where Netflix’s use of AI for the background art of a short film drew a strong backlash from animators, and again when Mexican filmmaker Guillermo del Toro slammed machine-generated animation as ‘an insult to life itself.’

Sue ‘Em!:

Just as the aforementioned artists have filed suit against AI collage tool companies, news publishers have brought lawsuits against OpenAI, alleging that their articles were used to train the ChatGPT service and that doing so amounts to pirating human work.

Further Readings

  1. https://www.forbes.com/sites/robsalkowitz/2022/09/16/midjourney-founder-david-holz-on-the-impact-of-ai-on-art-imagination-and-the-creative-economy
  2. https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart
  3. https://news.artnet.com/art-world/class-action-lawsuit-ai-generators-deviantart-midjourney-stable-diffusion-2246770
  4. https://www.vice.com/en/article/ake53e/ai-art-lawsuits-midjourney-dalle-chatgpt
  5. https://www.newyorker.com/culture/infinite-scroll/is-ai-art-stealing-from-artists
  6. https://www.radware.com/cyberpedia/bot-management/content-scraping/
