A fascinating new cycle for AI art techniques is emerging. I’ve alluded to this process in previous papers, but now that we’re seeing this cycle repeat over and over, it seems possible to describe it more concretely.

Because of Twitter and GitHub repos, we have a sort of worldwide collaboration of artists and technologists happening, at an extraordinary pace. While there is a long history of artists and technologists collaborating, and of individuals who innovate in both art and science simultaneously, this distributed international collaboration is—dare I say—completely unprecedented.

Some of the most significant AI algorithms that have fed this cycle are: DeepDream, StyleGAN, pix2pix, CycleGAN, BigGAN, and now DALL-E and CLIP.

The phases in this cycle seem to be:

  1. Computer science researchers release image generation code on GitHub, usually accompanying a technical paper posted to arXiv or published at a conference. Often these papers say little or nothing explicit about making art; they are focused on technical and algorithmic problems, even though the paper figures are often inspiring or delightful.

We researchers share our papers and code online simply as part of the technical publication process, to allow other researchers to understand and reproduce our work. There are some major downsides to the way this research has been conducted, but we can all agree that it has enabled technical development at a truly staggering pace, which has in turn driven rapid development of new ideas in digital art.

Update (May 2022): since this post was written in March 2021, more recent tools like GPT-3, DALL-E 2, and Midjourney have been kept proprietary and released only through limited-access APIs, which changes the dynamic.

  2. The links get widely shared on Twitter, sometimes together with news articles and press releases about the technology. Sometimes the researchers don’t make any announcement themselves, and the work surfaces only through the arXiv daily digest email. Often other researchers, hunting for hidden gems, find papers via the daily digest. Notably, AK frequently tweets highlights of the daily digest to 16K followers.

  3. Artists and other tinkerers download the code and kick its tires, experimenting with different ways to use the technology to make images. These are tech-savvy artists, skilled in coding with the ML tools and generally willing to play in the mud. They sometimes post their first experiments within days of the ML code release, and they share results, information, and ideas with each other on Twitter; most likely, they are also noticing which images get the most Likes and comments.

A lot of these experiments are whimsical, just playing around, and play is an important part of exploration. For example, here’s a review of Mario Klingemann’s feed of BigGAN experiments.
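To give a sense of how low the barrier to these first experiments is: a few lines of code suffice to start sampling images from a pretrained model. Here is a minimal sketch, assuming the community pytorch-pretrained-biggan PyTorch port (not the original DeepMind release); the class name, batch size, and truncation value are arbitrary choices for illustration.

```python
# Minimal sketch: sampling images from a pretrained BigGAN.
# Assumes: pip install torch pytorch-pretrained-biggan (a community PyTorch port).
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained("biggan-deep-256")  # downloads pretrained weights

truncation = 0.4  # lower truncation = more typical, less varied samples
class_vector = torch.from_numpy(
    one_hot_from_names(["soap bubble"], batch_size=4))  # any ImageNet class name
noise_vector = torch.from_numpy(
    truncated_noise_sample(truncation=truncation, batch_size=4))

with torch.no_grad():
    images = model(noise_vector, class_vector, truncation)  # (4, 3, 256, 256)

save_as_images(images)  # writes output_0.png, output_1.png, ...
```

Much of the early playful experimentation amounts to turning exactly these knobs: interpolating between noise vectors, blending class vectors, and pushing truncation outside its intended range.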

The way DeepDream’s inspiring peculiarities arose from its research history is an instructive example of how seemingly unimportant decisions made by researchers affect the artwork later on.

Only a handful of research papers get this attention from artists. And some, like GauGAN, see a lot of experimentation but never move much beyond that phase.

  4. Artists begin to release new work that uses these tools, showing it in galleries and exhibitions. DeepDream, StyleGAN, and BigGAN have all been used in fine art exhibitions. DALL-E and CLIP are so new that they haven’t appeared in exhibitions yet, but it’s only a matter of time.

  5. Enough artists use these tools in straightforward ways that the style of the technology becomes recognizable and predictable. DeepDream was amazing only briefly. GANs are a much richer space, but there is a lot of GAN-based art out there that all looks the same, and I, for one, have lost interest in much of it. GAN fatigue has set in. However, most of the world hasn’t seen GAN art, so there is still a considerable potential audience for it.

  6. The technology either matures or fades away. I’m still enjoying Helena Sarin and Sofia Crespo’s latest experiments with GANs, which are far more interesting than vanilla GAN renderings. Eventually, some exciting new algorithm is released, and we go back to step one.

But even when methods mature or fade away, they aren’t gone. The process is cumulative, and newer experiments mix and match newer and older ideas. Even though the cycle seems to be largely over for BigGAN and StyleGAN, the collective knowledge of these tools remains. For example, when CLIP was released, artists began to play with combining CLIP and BigGAN.
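The CLIP-and-BigGAN combination works, roughly, by using CLIP as a critic: BigGAN’s latent inputs are optimized so that CLIP scores the generated image as a better match for a text prompt. Here is a minimal sketch under those assumptions, using OpenAI’s clip package and the same community BigGAN port as above; the prompt, learning rate, and step count are arbitrary illustrative choices, not anyone’s published recipe.

```python
# Minimal sketch of CLIP-guided BigGAN: treat CLIP as a critic and optimize
# the GAN's inputs so the generated image matches a text prompt.
# Assumes: pip install torch pytorch-pretrained-biggan git+https://github.com/openai/CLIP
import torch
import torch.nn.functional as F
import clip
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep everything fp32 for simple autograd
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

# Embed the text prompt once; the prompt itself is an illustrative choice.
with torch.no_grad():
    text_feat = clip_model.encode_text(
        clip.tokenize(["a painting of a forest made of glass"]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# The variables being optimized: a noise vector and soft class weights.
truncation = 0.4
noise = torch.from_numpy(
    truncated_noise_sample(truncation=truncation, batch_size=1)
).to(device).requires_grad_(True)
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([noise, class_logits], lr=0.05)

# CLIP's published input normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):  # step count and learning rate are arbitrary
    img = gan(noise, torch.softmax(class_logits, dim=-1), truncation)  # in [-1, 1]
    img = F.interpolate((img + 1) / 2, size=224, mode="bilinear")  # CLIP wants 224x224
    img_feat = clip_model.encode_image((img - mean) / std)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()  # maximize cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This is roughly the idea behind the BigGAN-plus-CLIP experiments mentioned above; the versions artists actually share add many refinements, such as better image parameterizations, augmentations, and regularizers.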

AI artists are generative artists, part of a 60-year-old history of generative art. When these algorithms mature, they are no longer used in a gimmicky fashion, producing recognizable styles. They are, instead, additional tools in the toolbox available to artists, for them to combine and build from in new ways.

Many other combinations of tools have been shared in just the past few days.

There’s lots of great stuff that happens outside this cycle. For example, Tom White and Trevor Paglen use AI technology in very different ways that don’t fit this pattern. It’s also worth noting that credit and ownership can get confused in this open sharing environment; the way Robbie Barrat’s open sharing was abused for the Christie’s auction is an extreme example.

Watching new forms of art develop is wonderful, and it’s happening now, live, on social media, arXiv, GitHub, and beyond.