Here is the talk I gave at SIGGRAPH 2024, on the occasion of receiving the SIGGRAPH Computer Graphics Achievement Award. I talk about how my experiences in painting inspired my research in computer graphics, which then inspired my research in the science of art, to try to understand all the things about art that I’d been confused by. And how so many things—making art, doing research, and even a career trajectory—are unplanned and unpredictable.

At the bottom of this page are links for further reading and a list of things I thought of including in the talk but omitted for lack of time.

I’m really grateful for this award, and it means a lot coming from this community, which has been so important to me, and which has mentored, guided, supported, and inspired me. The community—the network of teachers, colleagues, collaborators, mentors, and friends—makes it all worthwhile.

I’m occasionally asked for advice from students who are, understandably, stressed about figuring out their future plans. One message here is that you can’t really plan your future in detail. What you can do is be curious, learn as much as possible, work hard, set yourself in a direction, and then be open to the opportunities that you can find or create.

My journey in art and computing (so far)

I want to tell a story about some of my experiences in art and computing. But first, I want to tell you about a book that changed my life.

In my first year of college, I took courses in many topics I found interesting: philosophy, literature, physics, and others. I took several art classes and enjoyed them.

But, since I planned to major in computer science, I was also required to take a generic Humanities-for-Non-Majors course. In that course, we were assigned to write an essay on The Princess of Cleves, a 17th-century French novel. I started reading the novel, but couldn’t do it. I didn’t want to read the book, I didn’t want to write the essay, and I didn’t like the class itself.

But I found a way out of it. All I had to do was to double-major in art, and take three more years of art classes—and then I wouldn’t have to read the book. So that’s what I did. To this day, I still haven’t read the book.

In my art classes, we learned intuitions, not principles. One day, I painted a watercolor of a house on a street, and, when I brought it into the studio, the professor said “Why did you paint that street sign? You didn’t have to.” My first response was that I painted it because it was there, and I’d wanted to paint what I saw. But the comment left all these unanswered questions in my head. He’d said it in a way that implied I shouldn’t have painted the sign. Why not? How do you make these decisions? When the professor liked our work, he’d say “That’s kinda interestin’!” and point out good uses of gesture, mark-making, or space. Of course I didn’t believe that there was a “right” or “wrong” way to do things, but those judgments—which appeared in almost every critique session in every lesson—seemed to imply some sort of principles.

I really enjoyed these classes, and felt like I came away from them with lots of good intuitions and experiences about painting. But I learned no theory at all, no principles to work from. I felt confused about some very basic questions, like how should I think about composition or perspective? What does it even mean to be an artist?


After graduation, I decided to go to graduate school in computer graphics. When I did, several people told me that I should go to SIGGRAPH, which I hadn’t heard of before. So I drove to New Orleans for SIGGRAPH 96.

It was amazing. Oh my god. All these new ideas and new algorithms for how to do things, new ways to create. But also the art: big feature films next to weird abstract experimental animation. New interactive experiences, art installations. A whole community of people making art and technology together. I loved all of it; this was where I wanted to be.

In grad school, I had to figure out what to do for my research. In the first year, I played around with some ideas in image-based rendering but didn’t find a concrete direction.

The pivotal moment came the next year, at SIGGRAPH 97, when I attended Pete Litwinowicz’s talk on his paper describing an algorithm that makes paintings from photographs. The algorithm, he said, placed down brush strokes in a grid over the photograph, “the way an artist would.” And I thought “that’s not how artists paint.” At least, that’s not how I paint.

So I went back to the lab, implemented his algorithm, and started modifying it, playing with different ideas. After a few months, a labmate said it looked pretty good, and I should submit it to SIGGRAPH.

So I did, and it got into SIGGRAPH 98, and it became my first SIGGRAPH paper: “Painterly Rendering with Brush Strokes of Multiple Sizes.” It grew directly out of my experience with real-world painting, applied to computing. It set me on the path to doing my PhD in non-photorealistic rendering.
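To give a flavor of the multiple-brush-sizes idea, here is a much-simplified Python sketch: paint with large brushes first, then refine with smaller ones, touching up only the regions where the canvas still disagrees with a blurred reference image. The circular dabs, parameter names, and threshold values here are my own simplifications for illustration, not the paper’s actual algorithm, which uses curved, spline-based strokes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def paint(source, radii=(8, 4, 2), blur_factor=1.0, threshold=25.0):
    """source: HxWx3 float array with values in [0, 255]. Returns a painted canvas."""
    canvas = np.full_like(source, 255.0)           # start from a blank white canvas
    for r in sorted(radii, reverse=True):          # largest brush first
        reference = gaussian_filter(source, sigma=(blur_factor * r, blur_factor * r, 0))
        paint_layer(canvas, reference, r, threshold)
    return canvas

def paint_layer(canvas, reference, r, threshold):
    h, w, _ = canvas.shape
    error = np.sqrt(((canvas - reference) ** 2).sum(axis=2))   # per-pixel color error
    yy, xx = np.mgrid[0:h, 0:w]
    for cy in range(r, h, r):                      # grid spacing tied to the brush size
        for cx in range(r, w, r):
            win = (slice(max(cy - r, 0), cy + r), slice(max(cx - r, 0), cx + r))
            if error[win].mean() > threshold:      # only touch up poorly matched regions
                # place a circular dab at the worst-matched pixel in this window
                iy, ix = np.unravel_index(error[win].argmax(), error[win].shape)
                py, px = win[0].start + iy, win[1].start + ix
                mask = (yy - py) ** 2 + (xx - px) ** 2 <= r * r
                canvas[mask] = reference[py, px]
```

Even this toy version makes the underlying point: every “painting” it produces depends on a pile of choices (brush sizes, blur, thresholds), none of which is dictated by the photograph itself.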

I’ll mention one other paper from my thesis.

At SIGGRAPH 99, I went to each of the computer animation festival screenings, because I loved them so much. In one of them, I saw a short film by Bob Sabiston called “Snack and Drink,” and I loved it. It had a really interesting rotoscoped look, and I could imagine what kind of tools the animators might have used to make it.

So, the next year, when David Salesin hired me for an internship and asked what I wanted to work on, I thought back to that animation. I suggested that we could make an assisted rotoscoping tool using computer vision: an artist could draw curves over video, and then we could generate new curves for new video frames based on motion tracking.
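As a concrete (and entirely hypothetical) sketch of that suggestion, here is what the curve-propagation step might look like, using off-the-shelf Lucas-Kanade point tracking from OpenCV as a stand-in for whatever tracker a real tool would use:

```python
import numpy as np
import cv2  # OpenCV

def propagate_curve(prev_gray, next_gray, curve_points):
    """Carry artist-drawn curve control points from one grayscale frame to the next.

    curve_points: N x 2 float array of (x, y) positions on prev_gray.
    """
    prev_pts = curve_points.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    tracked = next_pts.reshape(-1, 2)
    ok = status.reshape(-1).astype(bool)
    # Where tracking failed, keep the old position so the artist can correct it by hand.
    result = curve_points.astype(np.float32).copy()
    result[ok] = tracked[ok]
    return result
```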

As we discussed it, we thought: if we could do that for curves, then why not do it for images? And this became the final paper of my thesis, “Image Analogies,” published at SIGGRAPH 2001, with David, Chuck Jacobs, Nuria Oliver, and Brian Curless.

In this paper, we proposed to learn artistic styles from examples—for example, transferring a watercolor painting style to a photograph—rather than hand-coding styles the way we’d all been doing previously.
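To give a flavor of what “learning a style from an example” means here, below is a heavily simplified, single-scale sketch of the analogy idea: given an example pair (A, A′) and a new image B, synthesize B′ by copying, for each output pixel, the A′ pixel whose surrounding A neighborhood best matches the corresponding B neighborhood. The real algorithm works coarse-to-fine, also matches the partially synthesized B′, and adds a coherence term; none of that appears in this toy version.

```python
import numpy as np

def image_analogy(A, A_prime, B, patch=5):
    """All inputs are 2D grayscale float arrays; returns B_prime."""
    r = patch // 2
    A_pad = np.pad(A, r, mode='reflect')
    B_pad = np.pad(B, r, mode='reflect')
    # Gather every A neighborhood as a flat feature vector, remembering its location.
    feats, coords = [], []
    for y in range(A.shape[0]):
        for x in range(A.shape[1]):
            feats.append(A_pad[y:y + patch, x:x + patch].ravel())
            coords.append((y, x))
    feats = np.stack(feats)
    B_prime = np.zeros_like(B)
    for y in range(B.shape[0]):
        for x in range(B.shape[1]):
            query = B_pad[y:y + patch, x:x + patch].ravel()
            best = np.argmin(((feats - query) ** 2).sum(axis=1))  # brute-force nearest neighbor
            by, bx = coords[best]
            B_prime[y, x] = A_prime[by, bx]   # copy the corresponding stylized pixel
    return B_prime
```

The same machinery covers the guidance-image example described below: if A and B are label images describing scene layout and A′ is an example photograph, the synthesized B′ is a new photograph that follows the new layout.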

When you get immersed working on a problem, you begin to see connections everywhere. One day, I walked into a friend’s office, and saw a painting called “Changing Melody” on the wall, which depicted a meandering path through a forest. And I immediately thought: we can do that. I plugged it into the system with a guidance image, and then we had a way to generate new layouts of the same scenery, e.g., changing the path into a crop circle. Chuck implemented an interactive version, so you could paint new photograph layouts from example photographs.

We did not start out planning to write a paper on image stylization. The paper we published was almost totally different from the initial project idea. We started working on one thing, and published something almost unrecognizably different.

Even though I did my PhD in non-photorealistic rendering, for a long time after my PhD, I did not work on it much. In part, I was excited about other research areas, and worked on many projects in character animation, computer vision, graphic design, and others that I’m very proud of.

But I also didn’t have a lot of good ideas for non-photorealistic rendering. It really wasn’t clear to me how to make progress. In part, this may have been because of a lack of theory.

So I did not work on it very much—for nearly two decades.


Then, a few things happened that spurred me to start to think about art and computing again, but in a new way—and to start to figure out some of those questions I’d been confused about in my art education.

First, a new wave of AI art emerged, starting with DeepDream in 2015 and Neural Style Transfer in 2016, followed by Generative Adversarial Networks. New artists worked with AI tools, and lots of people in AI—and the media more broadly—started talking about AI art. I often found these conversations frustrating, when people talked about “the AI” as “the artist,” or asked whether AI art could even be art. We in computer graphics have been making art with computers for many decades, and we’ve seen it all before. I remember conversations where people had just assumed that Pixar movies were made by computers. In my PhD thesis back in 2001, I’d pointed out that using a computer tool to make art doesn’t make that computer an artist, just as in photography, where cameras had seemed like replacements for artists in the 19th century.

So I started studying books on art more concertedly: reading in the philosophy of art, the history of art, the sociology of art, and media studies. I started going to different kinds of art galleries, and spent a lot of time talking to different kinds of artists, both from the high end of contemporary art and from among emerging AI artists.

That led to a 2018 paper called “Can Computers Create Art?”, in which I presented a view of computers as artists’ tools that, in my view, represented the “conventional wisdom” of SIGGRAPH, and connected the philosophy of art with generative art and computer graphics.

I also wanted to communicate to a broader audience about the history of image stylization, since a lot of people seemed to think that this field only began in 2016. So I wrote my first blog posts, describing this history.

While doing so, I revisited the notion, from the SIGGRAPH literature, of line drawings as abstracted shading: you can create very effective line drawings by approximating a very specific shaded rendering style. I recalled some perceptual motivation for this, and wondered whether it had been picked up in the human vision literature.

It hadn’t, so I published a paper called “Why Do Line Drawings Work?” in a human vision journal, Perception. In this paper, I proposed an answer to the riddle of why line drawings are so easy to understand, when they look so different from anything we see in the real world. We perceive line drawings not as similar to real things that we do see, but as abstracted versions of realistic renderings that we could have seen. (This won’t make much sense until you’ve seen the relevant renderings from the papers.)


The other thing that happened was that I bought a new iPad, and a new Apple Pencil with it. I decided to try it out. I sat on my couch and sketched a houseplant.

It was so much fun. I was hooked. I had not painted in 20 years, but I soon started taking my iPad everywhere, and sketching and painting regularly on it.

And I started to notice that I held many assumptions about making art, and these assumptions were wrong.

For example, one day I sat in front of a building and decided to paint a realistic watercolor of it, like I would have in college. I was still learning the tools, and I became frustrated with the painting. It did not look realistic at all, and it did not look good.

So I switched directions. I drew black outlines over it, making a simple line drawing with watercolor underneath it. And that seemed to work.

According to my original goals, the painting was a failure. But, I was happy with it. I showed it to friends and got positive comments.

The final painting was a pleasant surprise, far from my original goals. But it wasn’t a fluke. Even now, with five more years of experience, I still can’t predict exactly how any painting will turn out. Each one is a surprise.

In our culture, we have a myth that an artist has a vision, and then executes on that vision. In our papers, we sometimes write that the goal of computer graphics tools is to help artists execute on their visions.

I no longer believe this. The more I’ve read the writing of famous artists, writers, and musicians, the more I see them describing the same thing: art is an exploration. You start with an initial idea, or plan, or inspiration. But then you start working, and then something happens, and you respond to that, and start working in a different direction, and your ideas change along the way. You might end up very very far from where you started.

This is the same thing that happened with the “Image Analogies” paper I described earlier: we published something very different from where we started.

And, my whole career can be described this way—nothing in my career was predictable or planned. Lots of other career retrospective talks make similar points, mentioning chance opportunities that seemed to unexpectedly change a person’s entire trajectory. One thing leads to another which leads to another and so on.

I like “open-endedness” as a term for describing this, as opposed to “planning” or “optimization.” Often, the most impactful art and research comes from exploration, not from sticking to a fixed plan.

When you’re exploring is when you can discover things. If you have a fixed plan, then you can’t discover anything unexpected.

For graduate students learning to do research, there are many important skills to cultivate, including the curiosity to learn lots of different things and to try things out to see what happens, and, most difficult of all, the intuitions for where to poke around and explore, how to proceed, and what to do when an idea does not seem to be working out.

All that being said, we do often—more or less—achieve our research plans. We often do publish a paper on the thing we’d set out to do. And this is ok too.

Most research is incremental. At SIGGRAPH, we often treat “incremental” like it’s a four-letter word. But incremental research is very important too.


The other thing I noticed happened when drawing landscapes. I often take a photo once I’ve finished the drawing, so I have it for reference later.

A few times, when I compared the drawing and the photo, my first reaction, my gut reaction was: “My drawing doesn’t look like the photo, therefore I’m a bad artist.” I didn’t believe that intellectually, but that’s how I felt.

But then I’d look at the photo on my smartphone screen, and compare it to the real world, and realize that they looked very different too. The photo didn’t look like what I was seeing.

I came to think of everything that happens in a drawing as a set of choices. What to draw—whether or not to draw that street sign—what colors to use, how to arrange objects on the page—all of these come from a continual stream of decision-making during painting.

And I came to think of photography as doing the same thing. I wrote a philosophical paper called “The Choices Hidden in Photography” that argues this point. I illustrated it with photography algorithms from the SIGGRAPH literature that our smartphones use every single time we take a picture. There’s no such thing as a “right” or “wrong” picture; photographs are not objective or correct. Every photograph is the product of many choices made by the photographer and the camera manufacturer.

Even so, different pictures create different perceptions, and I wanted to understand how, especially for perspective, like in these landscape views I was painting. So I went and read many books and papers in the psychology, philosophy, and art history of perspective, and spent the next three years reading and writing.

This work culminated in a paper called “Toward A Theory of Perspective Perception in Pictures,” and this is the work I’m most excited about right now. I know we always say this, but I feel like this is my very best work—or maybe just my most-recent.

In this work, I propose a theory of how we perceive perspective in pictures, inspired in part by the success of multiperspective projection in computer graphics, which shows that you can make very effective pictures with different perspectives in different parts of the picture. The main hypothesis I’m proposing is that human vision perceives shape in pictures separately in each eye fixation: each fixation is interpreted in terms of a separate perspective projection. In my opinion, it suggests answers not just to things I’ve always wondered about drawing, but to questions that people have wondered about for 500 years, since the Renaissance, when people first invented linear perspective and first discovered its limitations. But I don’t want to oversell it!

The work I’m doing now extends these ideas, and I’m thinking about how ideas from computer graphics, along with recent results in neuroscience and vision science, can help us understand human vision better. How do we understand what is going on in pictures? Conversely, I think that a better understanding of vision will lead to better graphics algorithms.


Based on these ideas, I want to suggest an idea of how to define computer graphics research.

Ever since I’ve been attending SIGGRAPH, I’ve sensed some insecurity about graphics research: are we just the R&D arm of the entertainment industry? It affects people’s motivations, as well as their ability to get jobs and research funding.

I very much believe in the Engineering Goal of SIGGRAPH: to build tools for visual art, communication, and scientific visualization. It’s been incredibly successful, and it’s what has motivated my work.

But there’s something else that has been happening, an overlapping Science Goal: scientific modeling of pictures, video, and other visual media. The last few papers I just described propose theories of human vision, heavily inspired by visual models and insights developed in computer graphics. Others have long argued for computer graphics as scientific modeling of photorealistic pictures; my work applies this to artistic pictures as well.

So computer graphics can help us create new kinds of art and expression. It can help us understand the science of art and human vision. And it can help us understand what it means to make art with computers, and to defend against the AI hype that treats computers as artists, since we have many decades of experience defining and making new kinds of art with computers.

Blog posts on these topics

In previous blog posts, I’ve written about many of these ideas in much more depth:

Papers mentioned/cited in the talk

Some things I left out

I had many ideas for things to talk about, and I had to leave a lot out, because I wanted to tell a coherent story in the available time. Each thing I did mention has its own whole story that I left out, and many other topics could each have been their own talk, or even future blog posts:

Adequate thanks. In this talk I spent very little time talking about other people: all the mentors, colleagues, family, and friends whom I’ve been blessed to know, learn from, and work with. A related theme I considered discussing is the importance of community: we focus on the technical topics that define the field, but a research area is, first and foremost, a community.

“Dead ends” and “failures.” In telling this story, I presented a clear, linear narrative of lots of successful experimentation and risk-taking, but I didn’t mention any of the dead ends and failures along the way, and you’re not really taking risks if there’s no possibility of failure.

Childhood experiences. I considered discussing relevant childhood experiences (and one friend who listened to a practice talk wanted this), but didn’t feel that what I would say was that interesting.

The importance of writing. I’ve come to view good writing as an essential skill for anyone (or at least, anyone doing research, and perhaps anyone doing “knowledge work”).

All my other research areas. I left out all the work I’ve done in other areas, such as character animation and computer vision, in order to tell a coherent story.

Previous award talks. My talk echoes themes of some previous award talks, several of which include their own discussion of the definition of computer graphics (e.g., Rob Cook, Michael Kass) and the unplannability of career trajectories.