<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://aaronhertzmann.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://aaronhertzmann.com/" rel="alternate" type="text/html" /><updated>2026-03-25T16:35:42+00:00</updated><id>https://aaronhertzmann.com/feed.xml</id><title type="html">Aaron Hertzmann’s blog</title><subtitle>Aaron Hertzmann&apos;s blog</subtitle><entry><title type="html">Why Drawing is Hard: Visual Limitations and the Skills to Overcome Them</title><link href="https://aaronhertzmann.com/2026/03/23/drawing-skills.html" rel="alternate" type="text/html" title="Why Drawing is Hard: Visual Limitations and the Skills to Overcome Them" /><published>2026-03-23T00:00:00+00:00</published><updated>2026-03-23T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2026/03/23/drawing-skills</id><content type="html" xml:base="https://aaronhertzmann.com/2026/03/23/drawing-skills.html"><![CDATA[<p>If asked to draw a picture of a tree or a person in front of them, many people would say “I cannot draw.”  Thirty years ago, two psychologists <a href="https://people.uncw.edu/cohend/research/papers/Cohen%20and%20Bennett%2097.pdf">pointed out that this should be surprising</a>. They reasoned that most people can trace over pictures easily, and most people can perceive the world accurately. So why can’t most people look at the world, perceive it, and then draw what they perceive?</p>

<p>Since then, <a href="https://www.sciencedirect.com/science/article/pii/S0149763415301986">researchers have extensively studied</a> which aspects of perceptual and other skills seem correlated with improved drawing ability, but they have yet to form a clear explanation for why drawing from life is hard.</p>

<p>In a new review paper, <a href="https://cogtoolslab.github.io/people.html">Judy Fan</a> and I propose a new way to understand how people draw from life, and why it is hard:</p>

<table>
  <tbody>
    <tr>
      <td>A. Hertzmann, J. E. Fan.  Artists’ Drawing Strategies Serve to Overcome Visual Processing Limitations. <em>Psychology of Aesthetics, Creativity, and the Arts</em>. 2026 (Advance online publication). [<a href="https://psyarxiv.com/pqkxh">Preprint</a>]</td>
    </tr>
  </tbody>
</table>

<p>In the paper, we argue that drawing is hard because we humans have very limited vision and memory—much more limited than we think. We just don’t perceive the world as a picture. And so, learning to draw realistic pictures is not about innate talent, it’s about learning skills to bypass these visual limitations that we all share.</p>

<p><strong>One important caveat:</strong> this blog post is about just one aspect of drawing: transferring information from what you see to the page, as in copying an existing drawing. We study copying because it isolates some crucial aspects of realistic drawing skill.  But <em>drawing pictures isn’t just copying</em>, for lots of reasons.  Our paper does survey some other aspects of drawing, and I’ve also written elsewhere about them, including <a href="/2024/10/16/perspective-as-arrangement.html">perspective</a>, <a href="/2020/10/23/planning-and-strategy.html">choice of subject</a>, <a href="/2020/09/12/how-to-draw-pictures-contours.html">picking which lines to draw</a>, <a href="/2020/11/02/abstract-painting.html">abstraction</a>, <a href="https://arxiv.org/abs/2205.01605">creativity</a>, <a href="/2020/10/26/time-and-speed.html">speed and sketching</a>, <a href="/2024/06/21/judgments.html">taking risks</a>, and <a href="/2024/10/18/adrift.html">the role of different choices along the way</a>.</p>

<h1 id="our-visual-limitations">Our visual limitations</h1>

<p>One might think that an artist looks at the world, and then draws their mental picture. But no one can draw accurate pictures this way. We do not perceive entire scenes at once, and we do not store complete, pixel-level pictures in our heads.</p>

<p>Human vision is severely limited, both in terms of what we see at any moment, and how much of it we remember, even over short periods of time. And, these limitations are not obvious—we are unaware of how much we do not see, until it is pointed out.</p>

<p>As an example, suppose you wanted to draw a picture of this houseplant.</p>
<center>
<img src="../../../images/drawingskills/houseplant.jpg" />
</center>

<p>Try looking at the picture, and then looking away from it. How many details do you remember when you look away: how many branches does the plant have? How many leaves does each branch have? Generally speaking, <em>these questions are easy to answer when looking at the picture, but nearly impossible once you’ve looked away</em>. Once you look away, the information is gone from your head. And getting the fine details purely from memory—the curvature of the stems, the precise outlines and shading on the leaves—is definitely not possible.</p>

<p>If you were to try to draw the picture from memory, you’d be making a lot of it up, based on what you know about what plants look like.</p>

<p>This demonstrates the first important limitation of human vision: <strong>we remember almost no fine visual details</strong>. We don’t store pictures in our heads like they were JPEGs. 
We do remember things about the pictures we’ve seen, but not at the level of detail necessary to accurately draw them.</p>

<p>Here’s the second thing to try: stare at one leaf on the plant, and then, without moving your eye from that leaf, try to answer those questions again (how many branches? How many leaves?). Again, if you fixate your eyes on only one spot on the plant, then you get very little detail about the rest of the plant. You couldn’t draw an accurate picture of the plant just by staring at one leaf.</p>

<p>This demonstrates the second important limitation of human vision: <strong>we get very little detail in peripheral vision</strong>, that is, in directions that we’re not looking. As another demonstration, try staring at one word in this paragraph, and, without moving your eyes, see how many other words you can read. The answer will be almost none.</p>

<p>Moreover, not only do we see and remember very little, we’re really unaware of this; we tend to think that we’re seeing everything in front of our eyes, but <a href="/2024/05/09/illusion-of-awareness.html">this is an illusion</a>. I’ve written much more about this illusion and provided surprising demonstrations of it at that link. I have also written about how these illusions determine <a href="/2024/10/07/picture-perception.html">how we use and perceive perspective in pictures.</a></p>

<p>Once we understand these limitations of human vision, then a whole lot of things about how people draw begin to make sense.</p>

<h1 id="eye-movements-in-drawing">Eye movements in drawing</h1>

<p>Here’s Henri Matisse drawing a picture of his son. He doesn’t just look at his son once and then draw from memory. Instead, watch how much he moves his eyes back and forth between the drawing and his son:</p>

<center>
<video width="480" height="352" controls="">
  <source src="../../../images/drawingskills/matisse.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video></center>

<p>What exactly are his eyes doing?</p>

<p>Here’s another video of a person copying a picture, but this time recorded with an eye-tracking device. The black rectangle shows where the artist’s eyes are looking at any moment.</p>

<center>
	<figure>
<video width="640" height="360" controls="">
  <source src="../../../images/drawingskills/stephen-target-locking.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video>
	<figcaption><i>Clip from <a href="https://vimeo.com/70552585">Capturing Life</a> by John Tchalenko</i></figcaption>
</figure>
</center>

<p>The artist’s eyes continually go back and forth between lines in the source picture and the corresponding lines in their copy.
In both videos we see that <em>drawing from observation involves near-continual eye movement between the source and the drawing</em>.</p>

<p>Why? The limitations of human vision that I demonstrated above make it obvious. Consider just copying a drawing. If the brain can store only very little detail at a time, then you can’t remember a detailed version of the source picture <em>or</em> your copy. You must continually move your eyes back and forth to gather information from the source and then see where to put it in the copy. It’s a bit like trying to transfer water between two pots using only a leaky teacup.</p>

<p>What happens if you restrict eye movements? One common drawing exercise is called “blind drawing,” in which you are not allowed to look at your drawing, only your subject. Here’s an example of a blind drawing that I made at a workshop, where we were given 30 seconds to draw the person next to us using a single line:</p>

<center>
	<figure>
         <p float="left">
		<img src="../../../images/drawingskills/blind-drawing-pair.jpg" width="80%" />
</p>
</figure>
</center>

<p>(I took the photo a moment later.) The “errors” are typical of blind drawing: individual shapes are accurate, but their relative proportions and positions are way off. I drew her eyes, nose, and jawline first, then her hairline, but, by the time I got to the hairline, I didn’t know where my pen was relative to her nose and eyes, and so I put her hairline right over her eyes.</p>

<p>When trying to do life drawings for the first time, beginners tend not to look at the source very much; they spend most of the time looking at their drawing. As a result, they can’t capture many details accurately, simply because they never see them. <a href="https://link.springer.com/article/10.3758/BF03193626">One study</a> found that requiring novices to look at the subject more frequently improved their copying accuracy. Blind drawing is also an exercise often given to beginning students.</p>

<p>Learning to draw, as they say, is partly about learning to look, and, to start, that can simply mean spending more time looking.</p>

<h1 id="drawing-is-a-set-of-skills">Drawing is a set of skills</h1>

<p>Everyone shares these visual limitations. Learning to draw is a matter of learning skills to work within these constraints. Artists develop a range of different techniques, strategies and skills for all parts of the drawing process. Here are just a few examples.</p>

<p>To draw simple lines and curves, the “target locking” technique is to position your pencil at the start of the line, fixate your eyes on the end of the line, and then draw to where you fixate:</p>

<center>
	<figure>
<video width="640" height="360" controls="">
  <source src="../../../images/drawingskills/target-locking.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video>
	<figcaption><i>Clip from <a href="https://vimeo.com/70552585">Capturing Life</a> by John Tchalenko</i></figcaption>
</figure>
</center>

<p>Nobody teaches you to do target locking, and nobody does it intentionally. In fact, almost no one knows about it, even if they do it all the time. Target locking is a description of behaviors observed in <a href="https://www.sciencedirect.com/science/article/pii/S0167945706000546">a few eye-tracking studies</a>; people are generally not very aware of how their eye movements operate. (Another behavior, from the same study, is “smooth pursuit,” where the eyes follow the pencil.)</p>

<p>When I read about ways that <a href="https://psycnet.apa.org/fulltext/2014-10106-001.html">artists use coordinated eye and hand motions simultaneously in copying</a>, I wondered if I drew that way. So I tried drawing a sketch at my desk, and—indeed—I was moving my eyes and hands in parallel just as they described. Before then, I’d had no idea.</p>

<p><strong>A more high-level example.</strong>  Here’s a simple fact:  a person’s eyes are half-way between the top and the bottom of their head. Yet, often our drawings don’t come out that way. Here’s a sketch that I recently drew for a paper figure:</p>

<figure><center>
	<img src="../../../images/drawingskills/flat_picture.jpg" width="40%" />
</center>
</figure>

<p>There are a few things to point out here:</p>
<ol>
  <li>The cartoon character’s eyes are nearly at the top of his head, which would look totally wrong in real life.</li>
  <li>You might not notice this anatomical mistake in the cartoon unless it is pointed out.</li>
  <li>I know this anatomical fact, and yet I still made the mistake.</li>
  <li>I don’t consider it a mistake, and I’m perfectly happy with this cartoon; it looks fine without anatomical accuracy.</li>
  <li>But if this were part of an initial sketch for a more-accurate drawing, then this would be a problem, and one might not notice the problem until much later.</li>
</ol>

<p>This knowledge can be useful for drawing: <a href="https://psycnet.apa.org/record/2016-28493-001">one study found</a> that informing novices that the eyes are in the middle of the head improved their drawing accuracy. This conceptual knowledge can help you make conscious judgments to improve your drawing. This is why realistic artists sometimes engage in careful studies of anatomy.</p>

<p>Here’s an explicit technique that you can use to draw faces and avoid this problem. First, draw an oval for the head, then a cross to indicate gaze direction, like in these examples:</p>

<center>
	<figure>
         <p float="left">
		<img src="../../../images/drawingskills/goeree.jpg" width="45%" />&nbsp;&nbsp;<img src="../../../images/drawingskills/veronese-detail2.jpg" width="45%" />
</p>
<figcaption>
<i>How to plan faces as ovals, by William Goeree, 1688, and use of this technique in sketches by Paolo Veronese, 1568</i>
</figcaption>
</figure>
</center>

<p>People have created all sorts of more elaborate rules like these for other kinds of drawing, including one-point and two-point perspective, and rules for drawing full bodies, which have led to some <a href="https://imgur.com/7HMbY">great parodies</a>.</p>

<p>In the paper we survey many other kinds of strategies and skills that artists learn.</p>

<h1 id="anyone-can-learn-to-draw">Anyone can learn to draw</h1>

<p>In Western culture, we’re taught that artists are some sort of special geniuses with innate talent—you’re either an artist or you’re not—a belief that only dates back to <a href="/2022/09/27/art-eras.html">18th-century Romanticism</a> and <a href="https://www.visuallanguagelab.com/2021/03/from-learning-to-draw-to-acquiring-a-visual-vocabulary.html">misguided educational philosophies</a>.</p>

<p>This mystical nonsense prevents people from learning to draw. But all you have to do is take a chance and try it out. I personally have known so many people who took a drawing class or worked from a book of exercises and went from “I can’t draw” to “Wow, I drew that?”</p>

<p>Our work suggests some reasons why people might mistake the inherent difficulty of drawing for an innate lack of talent. We’re unaware of how limited our own vision and visual memory are. So it’s as if there’s an invisible mental block—the limitations of vision and <a href="/2024/05/09/illusion-of-awareness.html">the illusion of awareness</a>—and we take it to mean “I’m not good.” But simple exercises can help you start to work around this mental block. Once you understand the block, it makes more sense.</p>

<p>Drawing is a skill you learn, not an innate talent.  I think there are only a few things you need to learn to draw: a little bravery, some exercises (especially from a book or from classes), and lots and lots of practice. I wrote a little more about this <a href="/2024/06/21/judgments.html">here</a>. Just a little time and practice can make drawing an engrossing pursuit that enriches your life and becomes its own reward.</p>

<center>
<figure>
	<img src="../../../images/ipad_paintings/pilea.jpg" width="75%" />
<figcaption><i><a href="/2020/10/05/art-is-a-process.html">My first iPad painting from 2019</a></i>
</figcaption>
</figure>
</center>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[If asked to draw a picture of a tree or a person in front of them, many people would say “I cannot draw.” Thirty years ago, two psychologists pointed out that this should be surprising. They reasoned that most people can trace over pictures easily, and most people can perceive the world accurately. So why can’t most people look at the world, perceive it, and then draw what they perceive?]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/drawingskills/blind-drawing-pair-large.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/drawingskills/blind-drawing-pair-large.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Technology and Social Isolation: From Cars to “AI”</title><link href="https://aaronhertzmann.com/2025/10/26/isolation.html" rel="alternate" type="text/html" title="Technology and Social Isolation: From Cars to “AI”" /><published>2025-10-26T00:00:00+00:00</published><updated>2025-10-26T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/10/26/isolation</id><content type="html" xml:base="https://aaronhertzmann.com/2025/10/26/isolation.html"><![CDATA[<p>In this blog post, I tell a story of how some technologies from the past century or so have, overall, led to increased social isolation in the United States, as we replaced in-person social interactions with technology.</p>

<p>We can divide social relationships into three categories: close ties, like close friends and family; weak ties, like coworkers and casual friends; strangers, like people you might pass on the street. All three kinds of social relationships are important, and this post is about ways that technology has transformed each of them in the past century.</p>

<p>Technology often benefits our social relationships, in ways that are usually obvious. The negative effects are less obvious, which is why I focus more on those. I use and enjoy every category of technology discussed in this post: I love listening to recorded music; I use my smartphone and laptop all the time; I love connecting with friends and colleagues on social media. I don’t believe any of these technologies are “bad,” but <a href="https://www.jstor.org/stable/3105385">they are not “neutral”</a> either.</p>

<p>Many different trends inspired me to write this post. The past decade has seen a lot of simplistic claims about new technologies being good or bad. Simplistic arguments over whether technology is good or bad are themselves harmful, <a href="https://journals.sagepub.com/doi/10.1177/1745691620919372">because they occupy resources and attention, distracting us from the underlying problems</a>. In my work <a href="/2025/09/30/menace-of-mechanical-music.html">I’ve tried to understand whether these arguments make sense</a>, and, the more I’ve dug into them, <a href="/2022/12/17/when-tech-changes-art.html">the more complexity I find, but also common themes</a>. I’ve previously written about the <a href="/2021/03/22/art-is-social.html">social nature of art</a> and how it impacts the role of <a href="https://www.mdpi.com/2076-0752/7/2/18">computers as art technology</a>, and <a href="/2025/09/30/menace-of-mechanical-music.html">the effect of recorded music on social interaction</a>; here I consider other technologies’ effects on social relationships. Most directly, this post is a response to simplistic claims like <a href="https://www.nature.com/articles/d41586-024-00902-2">“social media causes mental illness”</a>. Social media has its problems, but the story is a lot more complex than that.</p>

<center>
	<figure>
		<img src="../../../images/tech_isolation/reach-out.jpg" width="75%" />
<figcaption><i>1980s advertisement for technology to connect us</i></figcaption>
	</figure>
</center>

<h1 id="cars-from-villages-and-cities-to-highways-and-suburbs">Cars: from villages and cities to highways and suburbs</h1>

<p>Automobiles are the Original Sin of 20th-Century American social isolation.</p>

<p>Isolation has always existed throughout history. But, by the end of the 19th century, many people lived in communities where they saw their families, friends, and neighbors as part of their daily lives, whether in farming villages or larger cities. Even poor immigrants crowded into New York City tenements often lacked basic necessities but frequently had each other.</p>

<center>
	<figure>
		<table>
			<tr><td>
		<img src="../../../images/tech_isolation/town-square.jpg" width="60%" />&nbsp;</td><td><img src="../../../images/tech_isolation/mulberry1900.webp" width="35%" /></td></tr></table>
		<figcaption><i>Life at the start of the 20th-Century: <a href="https://www.reddit.com/r/OldPhotosInRealLife/comments/yy8cps/town_square_locations_midwest_usa_1908_to_1915/">Town square in Marion, Indiana</a>, and Mulberry St in New York City</i></figcaption>
	</figure>
</center>

<p>Henry Ford’s <a href="https://en.wikipedia.org/wiki/Assembly_line#20th_century">invention of the mass-produced automobile</a>—and the powerful automobile industry that grew around it—radically affected the 20th century development of American cities. Before the rise of the car, city streets were public spaces for everybody: pedestrians, horses, children playing.  <a href="https://cdn1.vox-cdn.com/uploads/chorus_asset/file/2934608/Norton_Street_Rivals.0.pdf">Automobile manufacturers began a program of remaking cities for cars, through ad campaigns to stigmatize pedestrians in roads as “jaywalkers”</a>, lobbying for laws against jaywalking, and lobbying for more roads. (“Jay” is old-fashioned slang for a country bumpkin; calling someone a jaywalker is like calling them an idiot.)  <a href="https://www.vox.com/2015/5/7/8562007/streetcar-history-demise">Public transportation</a> and walking became less-and-less practical, leading to a vicious cycle: more and more people drove cars, and then demanded more roads.  And, when cars hit pedestrians, pedestrians were blamed.</p>

<center>
	<figure>
		<img src="../../../images/tech_isolation/jaywalking2.webp" width="50%" />
		<figcaption><i>Cartoon from a 1923 campaign to convince people that walking in the street is bad.</i></figcaption>
	</figure>
</center>

<p>Automobiles offered freedom and independence: the convenient ability to travel far, whether for work or recreation. They signaled social status, wealth, independence, and manhood. They became popular, and then mandatory.</p>

<p>By the 1940s, a generation of urban planners favored cars as the primary mode of transportation. Most famously, New York’s transportation overlord Robert Moses grew up wealthy and never learned to drive, yet rode in cars his whole life. He built a staggering number of roads, highways, and bridges around New York City and Long Island, while blocking decades of public transportation projects.</p>

<p>Throughout the US, a new kind of population center arose: the “suburb.” Suburbs allowed people to have big houses outside the cities where they worked. Zoning separated houses from businesses, and large houses were separated from each other, creating massive <a href="https://en.wikipedia.org/wiki/Urban_sprawl">urban sprawl</a> and making driving mandatory for most activities in many suburbs.</p>

<center>
	<figure>
		<a href="../../../images/tech_isolation/urban-sprawl-nevada-christoph-gielen.webp">
		<img src="../../../images/tech_isolation/urban-sprawl-nevada-christoph-gielen.webp" width="50%" /></a>
		<figcaption><i>Urban sprawl in <a href="https://twistedsifter.com/2010/07/urban-sprawl-aerials-christoph-gielen/">a Nevada housing subdivision.</a></i></figcaption>
	</figure>
</center>

<p>In the 1950s, federal funding such as the <a href="https://en.wikipedia.org/wiki/Federal-Aid_Highway_Act_of_1956">Federal-Aid Highway Act</a> led to the construction of highways throughout US cities. Many cities routed highways right through urban centers, <a href="https://www.segregationbydesign.com/about">often destroying vibrant communities (often poor and/or Black)</a> and creating urban blight. Today, some cities devote an enormous fraction of their space to parking lots—<a href="https://www.atlasobscura.com/articles/parking-lots-in-cities-usa">some cities have more parking than housing</a>.</p>

<center>
	<figure>
		<a href="../../../images/tech_isolation/houston.webp">
		<img src="../../../images/tech_isolation/houston.webp" width="75%" /></a>
		<figcaption><i><a href="https://www.segregationbydesign.com/about">Downtown Houston, 1982:</a> mostly parking</i></figcaption>
	</figure>
</center>

<p>By the 1970s, the terrible consequences of our car-first policies were clear: intense traffic jams on highways; smog that considerably shortened peoples’ lifespans; racial segregation; urban blight; an unhealthy lack of exercise when people drive everywhere; and dependence on foreign oil, as highlighted by the <a href="https://en.wikipedia.org/wiki/1970s_energy_crisis">1970s energy crisis</a>.</p>

<center>
	<figure>
		<table>
			<tr><td>
		<img src="../../../images/tech_isolation/la_traffic.jpg" width="45%" />&nbsp;</td><td><img src="../../../images/tech_isolation/la_smog.jpg" width="45%" /></td></tr></table>
		<figcaption><i>Los Angeles traffic and smog in the 1970s</i></figcaption>
	</figure>
</center>

<p>Since then, we have undone some of these ills. Emissions regulation mostly eliminated smog; alternative energy sources have reduced dependence on oil; many cities now favor mixed-use urban planning, while investing in bike lanes and public transportation. A few urban highways have been removed, like San Francisco’s <a href="https://en.wikipedia.org/wiki/California_State_Route_480">Embarcadero Freeway</a> and Seattle’s <a href="https://en.wikipedia.org/wiki/Alaskan_Way_Viaduct">Alaskan Way Viaduct</a>. But the flywheel of highway demand still drives much of the politics: car drivers, stuck in traffic, continually demand more roads, following the <a href="https://en.wikipedia.org/wiki/Induced_demand">intuitive but utterly wrong idea that adding roads reduces traffic</a>, and fight against any attempts to share the roads with more-efficient transportation modes.</p>

<h1 id="car-culture-and-social-isolation">Car culture and social isolation</h1>

<p>But when it comes to social isolation, I think most people still don’t realize just how much our car-first culture has hurt our connections with other humans.</p>

<p>After all, cars seem empowering: if you want to go see friends, go to a restaurant or bar, go to see live theatre or sports… why, you just get in your car and drive there. The car makes it possible!  But American urban planning made the car <em>necessary</em> for most of these activities.</p>

<p>Car-first culture reduces meaningful encounters with strangers. As Joe Keohane describes in the surprisingly deep <a href="https://www.penguinrandomhouse.com/books/608695/the-power-of-strangers-by-joe-keohane/"><em>The Power of Strangers</em></a>, interacting with strangers and acquaintances creates a sort of societal glue: not only can it improve your own mental well-being and help you make connections; it ties us together in a community. Even walking down the street, you have passing interactions with strangers; depending on how segregated the society is, you could be passing a cross-section of humanity. It’s much easier to feel some level of empathy for people that you interact with casually, in person. Interactions with strangers have played an important role in human evolution and in many ancient cultures.</p>

<p>When we travel by car, we segregate ourselves from all other humans, with virtually no direct interaction beyond fleeting eye contact. On the road, our interactions with others tend to create stress, and more anger than empathy.</p>

<p>For most of human history, social interactions outside the home were built into daily life: daily activities naturally involved seeing people. In the suburbs, you can drive to places to interact with people, say, churches, meetups, bars, or public events. But you have to make an effort to have those interactions, as a separate part of your day, like driving to the gym to exercise.</p>

<p>My father was born in San Francisco, as was his father. After World War II, they moved to the suburbs, like many of that era. My father stayed in the suburbs until retirement, and then moved into a retirement community back in San Francisco. He said ruefully that it was the first time in his life that he felt like he lived in a “neighborhood:” a community where he interacted frequently with his neighbors.</p>

<p>Many seniors choose to stay in their suburban homes throughout their old age, becoming <a href="https://www.sfchronicle.com/opinion/openforum/article/waymo-uber-self-driving-seniors-bay-area-21146272.php">increasingly isolated as their mobility diminishes</a>, leading to increased loneliness, in turn worsening dementia and other health problems.</p>

<p>Social isolation is particularly bad for children, as I’ll come to in a bit.  But first, some parallel developments in entertainment.</p>

<center>
	<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/HO17B-ACRn0?si=zqEQZBJD4cXOfrY-" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
<figcaption><i>Telephone technology (and cars) presented as a solution to the problem of friends that live far away</i></figcaption>
</figure>
</center>

<h1 id="entertainment-from-performance-to-recordings-to-video-games">Entertainment: From performance to recordings to video games</h1>

<p>Throughout most of human history, social groups made music together: families played instruments together in their living rooms and on their porches; city dwellers sang together in taverns; churchgoers sang hymns and gospel together.</p>

<p>In fact, social musicmaking predates human history. Savage et al. survey a range of anthropological evidence that <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/music-as-a-coevolved-system-for-social-bonding/F1ACB3586FD3DD5965E56021F506BC4F">social bonding is <em>the</em> reason why we have music</a>. Music performs important social functions for how we relate to our fellow humans around us.</p>

<p>In the late 19th century, Edison invented the first music recording device: the phonograph, which recorded onto wax cylinders. You no longer had to be in the same room as a performer to listen to music. Recorded music grew in popularity, and, through many formats since (tape reel, vinyl record, cassette tape, CD, mp3, streaming), remains a huge force in our culture today.</p>

<p>
<center>
	<figure>
		<a href="https://www.flickr.com/photos/archivesnz/30816015791/in/photostream/">
		<img src="../../../images/tech_isolation/record_listener.jpg" width="65%" /></a>
	</figure>
</center>
</p>

<p>Recordings allow you to develop a sort of relationship with a performer. Sometimes we feel like they’re directly communicating with us. It’s a one-sided, parasocial relationship, but a relationship nonetheless.</p>

<p>With the development of music recording, radio, cinema, and, then, television, communal music-making diminished. Why go through the trouble of learning instruments, getting your amateur friends together, picking a song, and playing it, when you can listen to a recording of some of the best music in the world? Or watch amazing artists perform on television or online?</p>

<p>Many 19th-Century living rooms centered on a piano. By the time I grew up, most living rooms centered on the television; the living room of one of my high school friends was dominated by an enormous television that we called “The Shrine.” Watching television together can still be a communal activity. Sometimes the show is a reason to get together. But you can also enjoy music and TV alone.</p>

<p>Around the 1980s, home computers and video game consoles appeared. These could also be social—going to a friend’s house to play games together—but one could also spend many hours alone with computers and video games. You no longer needed another person with you for playtime; the computer could be your play buddy. In the suburbs, this was a whole lot easier than actually finding a friend in person. Online gaming and the Internet added social elements, with email, BBSes, Usenet, and then the Web. So you could socialize and play alone, or with friends and strangers online.</p>

<p>
<center>
	<figure>
		<img src="../../../images/tech_isolation/atari.webp" width="65%" />
		<figcaption><i><a href="https://www.pcmag.com/news/time-capsule-a-look-back-at-my-familys-love-of-pcs-in-the-80s">Child using Atari computer in 1982</a></i></figcaption>
	</figure>
</center>
</p>

<p>With the Sony Walkman, recorded music went into our pockets: we could be out and about, with headphones separating us from the world.</p>

<p>In the early 2000s, computers went into our pockets: with mobile phones, and then smartphones. Now each person carries their own television, cinema, game console, and Internet device around in their pocket. Each person at home or out and about can be in their own separate world of games and passive entertainment.</p>

<h1 id="children-from-free-range-kids-to-social-media">Children: From free range kids to social media</h1>

<p>Let’s come back now to the car-first suburbs and their effect on children.</p>

<p>As I understand it, children in pre-1970s villages and big cities used to roam freely together for unsupervised play and exploration. Think of the adventures of Huck Finn and Tom Sawyer, or so many other fictional children who explored and played outdoors, unsupervised. The suburbs made this increasingly difficult for many kids.</p>

<center>
	<figure>
	<a href="../../../images/tech_isolation/footprints.jpg"><img src="../../../images/tech_isolation/footprints.jpg" width="75%" /></a>
		<figcaption><i><a href="https://www.tandfonline.com/doi/full/10.1080/21504857.2024.2356417#d1e207">Comic about a child sent on an errand, 1934</a></i></figcaption>
	</figure>
</center>

<p>As a suburban Gen X-er with working parents, I was a <a href="https://en.wikipedia.org/wiki/Latchkey_kid">latchkey kid</a> who, every afternoon, came home from school alone and stayed at home alone until dinnertime. My closest childhood friend lived over a mile away from me, which, for a kid, was quite a long distance. So we saw each other some days and not others. My other friends lived even further away, and it’s hard to be close with someone you don’t see often. I had a bike, but no interest in using it, which I attribute to a lack of role models, since adults never biked.</p>

<p>My social circle only became really active toward the end of my childhood, once we were old enough to drive, and suddenly groups of us could go do stuff together most evenings and weekends.</p>

<p>A series of “stranger danger” panics made things worse. We were told not to talk to strangers, who might take us away in a white van for unspoken purposes. In essence, we were taught to see every stranger as a mortal threat. But the threat was vastly overstated: <a href="https://en.wikipedia.org/wiki/Stranger_danger#Degree_of_risk">stranger abductions were quite rare</a>, much rarer than children dying in car crashes.</p>

<p>
<center>
<iframe width="560" height="315" src="https://www.youtube.com/embed/9mEQmrw6JpA?si=9TypChVwcTHayO_i" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
</center>
</p>

<p>Yet, this perceived threat led to seismic shifts in childhood for generations of children. Because of “stranger danger” fears, millennials were often not allowed out unsupervised at all, while helicopter parenting further reduced unsupervised play time. <a href="https://drive.google.com/file/d/1lkJQjywJ0wZT9-io-hY9IcY9KkTCrTJY/view">A compelling article by Gray et al.</a> (<a href="https://www.jpeds.com/article/S0022-3476(23)00111-7/abstract">official link</a>) surveys extensive sociological and psychological evidence that losing unsupervised play time led to an increase in mental illness among millennials.</p>

<p>Children raised in a bubble will struggle outside of the bubble, sometimes for their whole lives.</p>

<p>Since millennial kids had less opportunity to socialize in person, they turned to early social media sites like MySpace and Facebook. Parents responded with another moral panic, fearing the effects of social media on their kids. But, as <a href="https://www.danah.org/books/ItsComplicated.pdf">danah boyd found in her ethnographic studies</a>, these kids were just playing out all the usual teenage social dynamics online. In this case, technology replaced many of the social interactions that these kids were denied in person.</p>

<h1 id="internet-from-social-media-to-ai">Internet: From social media to “AI”</h1>

<p>In the 2000s, social media—Facebook, Twitter, Instagram—became one of the main ways that we interacted with other people in society. More and more, we interacted with our friends, with strangers, even with celebrities online. As newspapers fell to the one-two punch of <a href="https://www.politico.com/news/magazine/2024/02/18/is-wall-street-to-blame-for-the-collapse-of-newspapers-00141920">private equity</a> and Internet competition, we began to get more of our information about the world from social media sites.</p>

<p>Online affordances have long been blamed for a lack of online civility:</p>

<center>
<figure>
	<img src="../../../images/tech_isolation/penny-arcade.jpg" width="75%" />
</figure>
</center>

<p>When you don’t see someone in person, it’s harder to have empathy for them. It’s hard to communicate in a considerate way. It’s easier to be misinterpreted. And it’s a continuum: a Tweet is worse than a Zoom call, but a Zoom call is still worse than physically meeting in person.</p>

<p>By the 2010s, America’s polarization reached a crisis level of dysfunction, and many people blamed polarization on social media. I blame propaganda machines (<a href="https://en.wikipedia.org/wiki/Southern_strategy">Southern strategy</a>, <a href="https://en.wikipedia.org/wiki/Conservative_talk_radio">talk radio</a>, Fox News, etc.), but social media really greased the wheels. In these semi-anonymous online services, messages from your neighbors, from people who share your own values, from political operatives, and from Russian state sources could all look the same and carry the same weight.</p>

<p>By the COVID lockdowns in 2020, we had adopted online technologies so thoroughly that it was fairly straightforward to retreat indoors and conduct all of our interactions with other humans electronically. Electronic communication became a life-saver, literally. But we suffered for the isolation, especially children. And, when we did see each other in person during the pandemic, and, then, when we first came back out into the world, to our first restaurants, concerts, and parties, we briefly felt just how valuable and important it was to be out with other people.</p>

<p>As someone who has worked in “AI” research for many years, I find the new technologies amazing and wonderful; it’s so cool and fascinating that they work. A few of the times I’ve tried it, ChatGPT has given me correct answers to questions where all other avenues I’d tried had failed. I’m excited about the way <a href="/2022/12/17/when-tech-changes-art.html">new technologies continually revitalize art</a>.</p>

<p>But new “AI” technologies also extend and amplify all of the threats to our social relationships, mental health, and social fabric. In the most basic sense, even casual interactions with customer service are being replaced by chatbots.  We hear of professional writing being replaced by text generation (e.g., scientific paper reviews, law briefs); professors get messages from their students that look like personal communications but were obviously written by chatbots.  Even more, we’re hearing stories of people forming intense personal relationships with “AI” chatbots—in a sense, fully replacing human interactions with technological interfaces.</p>

<p>More and more, we won’t even know if the words and conversations we see online were written or spoken by a person, further diminishing our trust and sense of connection to the people in our society.</p>

<center>
	<figure>
		<img src="../../../images/tech_isolation/15_million_merits.jpg" width="75%" />
<figcaption><i>A dystopian future of isolation via technology from <a href="https://en.wikipedia.org/wiki/Fifteen_Million_Merits">Black Mirror</a>, echoing E. M. Forster's <a href="https://en.wikipedia.org/wiki/The_Machine_Stops">The Machine Stops.</a></i></figcaption>
	</figure>
</center>

<h1 id="does-modern-technology-generally-lead-to-isolation">Does modern technology generally lead to isolation?</h1>

<p>Did all of these technologies make us less social? Are there any common lessons and themes?  Do automobiles and social media have anything in common?</p>

<p><strong>First, it’s worth repeating that technology does not cause isolation. Technology does not <em>cause</em> anything. But it shifts the scales of what is easy and what is possible.</strong></p>

<p>Moreover, it’s worth repeating that these technologies have a complex impact on our social relationships. If you live in a suburb, then having a car will be very important to having in-person social relationships; you can’t get by without it. So, in that immediate sense, having a car enhances your relationships. Likewise, social media lets you connect with friends and family in a way that you might not otherwise, especially when you live far apart from them. But, by virtue of making it easy to connect when we live far apart, these technologies make it easier to live far apart, creating a vicious cycle.</p>

<p><strong>As technologies get better, they automate many interactions that previously required a human.</strong></p>

<p>A 19th century shopper might buy their meat from a butcher, and vegetables from various markets. They would have to travel to different sellers in their neighborhood, a time-consuming process; they might also form casual relationships with each of them. Then, as canned and packaged foods became commoditized, groceries were sold by grocery stores: you’d go to the grocer and tell them what you wanted, and they’d get your food from the shelves. In 1916, <a href="https://en.wikipedia.org/wiki/Grocery_store#Modernization">grocery stores became self-service</a>: you’d find what you needed and bring it to a human cashier with whom you’d exchange pleasantries.  Starting in the 1990s, self-checkout machines meant no human interaction was required at all. Then, delivery apps allowed you to order your groceries online: you could get all your food delivered without even leaving your house. No-contact deliveries in the pandemic meant you might not even see the delivery person.  Grocery shopping became fast, easy, and free of the least bit of human interaction. Some services are now testing and deploying <a href="https://en.wikipedia.org/wiki/Delivery_robot">food delivery robots</a>.</p>

<p>We can repeat that kind of story for many other kinds of activities that have, over decades, moved from time-consuming, in-person interactions to convenient, human-free apps.</p>

<p><strong>Our online social interactions often have a lot of the same feel and elements of real in-person interaction.</strong> Phone calls are a great way to keep in touch, and I still treasure my regular calls with close friends. Watching social media lets you enjoy and participate in a great online conversation; I’ve learned a lot and had many meaningful interactions with colleagues and acquaintances online.</p>

<p>But watching social activity online is a bit like eating junk food. Junk food has the sweetness, umami, and/or saltiness that tells our animal brains that we’re consuming something nourishing. But it is not nourishing. Likewise, social technologies give us some of the elements of social relationships, without the same nourishment.  Eating a bag of potato chips or scrolling influencers on TikTok may be pleasurable, but neither is likely to leave you feeling nourished.</p>

<p><strong>One could extend the argument backward through history.</strong>  When I recently described this thesis to some colleagues, one of them described books as socially isolating, and the others agreed (one described all the time her child spends reading). The industrial revolution was enormously alienating to generations of people who became mine workers and factory workers, toiling in inhuman conditions for most of their waking hours. One could surely say the same about war technologies and surveillance—or heck, even <a href="https://web.cs.ucdavis.edu/~rogaway/classes/188/materials/Diamond-TheWorstMistakeInTheHistoryOfTheHumanRace.pdf">the development of agriculture</a> thousands of years ago. If we view Pleistocene hunter-gatherer tribes as our ideal state, then every technological development has moved us away from that state, starting with agriculture. But this does not seem like a productive argument.</p>

<h1 id="how-to-counter-it">How to Counter It</h1>

<p>We are social creatures, evolved to live in small tribes of hunter-gatherers; all of our social behaviors, in some way, can be traced back to our Pleistocene ancestors. In-person interaction with neighbors and with strangers is one of our most basic needs, like food, health, and physical safety.  And yet, we live in a modern world, with its complex demands and structures, and we’re not going back to our hunter-gatherer lives.</p>

<p>I still enjoy television, video games, and social media; I use the telephone, email, and instant messaging daily to keep in touch with close friends, family, and coworkers; I sometimes order food and products with delivery apps; I ride in automobiles when necessary; and so on. In our modern world, each of these technologies can benefit our social relationships and other aspects of our lives. And, even if we wanted to give up these technologies, we live in a society in which doing so would be very hard. Their underlying problems are societal problems that we cannot solve alone.</p>

<p>However, I recommend taking stock of which relationships are valuable and/or nourishing. And what kinds of interactions support those values? How do you feel after an evening with friends, versus an evening scrolling social media? I believe you will find that the in-person interactions are the most valuable, and lead to the greatest connection and empathy.  These are worth prioritizing.  How can you organize your life to best support the things that matter (such as meaningful relationships) and not the things that don’t (sitting frustrated in traffic; scrolling social media)?</p>

<p>Moreover, in-person social interactions with strangers, though often fleeting, can really enhance our sense of connection with the larger society around us.</p>

<p>It takes conscious time and effort—and sometimes, resources—to avoid the noise and figure out what really matters.</p>

<h1 id="some-readings-i-recommend">Some readings I recommend</h1>

<p>Derek Thompson. <a href="https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/"><strong>The Anti-Social Century</strong></a>. The Atlantic. 2025.
<br />
<em>Thorough and well-researched article that makes many of these points, and more (sent to me after I posted this; thanks, Moshe Vardi).</em></p>

<p>Joe Keohane. <a href="https://www.penguinrandomhouse.com/books/608695/the-power-of-strangers-by-joe-keohane/"><strong>The Power of Strangers: The Benefits of Connecting in a Suspicious World</strong></a>. Penguin Random House. 2022.
<br />
<em>A surprisingly deep exploration of the  value of talking to strangers, through anthropology, sociology, and psychology, and how talking to strangers is important both personally and societally.</em></p>

<p>Amy Orben. <a href="https://journals.sagepub.com/doi/10.1177/1745691620919372"><strong>The Sisyphean Cycle of Technology Panics</strong></a>. <em>Perspectives on Psychological Science.</em> 2020.
<br />
<em>How moral panics recur over each new technology, and waste time and resources by misunderstanding the new technologies.</em></p>

<p>Peter Gray, David F. Lancy, David F. Bjorklund. <a href="https://www.jpeds.com/article/S0022-3476(23)00111-7/abstract"><strong>Decline in Independent Activity as a Cause of Decline in Children’s Mental Well-being: Summary of the Evidence</strong></a>. <em>The Journal of Pediatrics</em>. 2023.
<br />
<em>A compelling explanation for the rise in mental health issues.</em></p>

<p>Melvin Kranzberg. <a href="https://www.jstor.org/stable/3105385?seq=1"><strong>Technology and History: “Kranzberg’s Laws”</strong></a>. <em>Technology and Culture</em>. 1986.
<br />
<em>Is technology good or bad? No.</em></p>

<p>Jane Jacobs. <a href="https://en.wikipedia.org/wiki/The_Death_and_Life_of_Great_American_Cities"><strong>The Death and Life of Great American Cities</strong></a>. Random House. 1961.
<br />
<em>The influential and insightful account of how mid-century American urban planning fails, and how cities can function successfully.</em></p>

<p>Robert Caro. <a href="https://en.wikipedia.org/wiki/The_Power_Broker"><strong>The Power Broker: Robert Moses and the Fall of New York</strong></a>. Knopf. 1974.
<br />
<em>A gripping account of (among many other things) how car transportation took over New York City, and some of the harms that resulted.</em></p>

<p><a href="/2022/12/16/status-quo-bias.html"><em><strong>My blog post on status quo bias</strong></em></a>: people’s tendency to assume that new technologies and new kinds of art are bad.</p>

<p><a href="/2025/09/30/menace-of-mechanical-music.html"><em><strong>My blog post on recorded music</strong></em></a>. <em>Some pros and cons of recorded music technology.</em></p>

<hr />

<p><em>This post was inspired, in part, by conversations with Moshe Vardi.  Thanks to Steve DiVerdi and Obie Pressman for comments.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[In this blog post, I tell a story of how some technologies from the past century or so have, overall, led to increased social isolation in the United States, as we replaced in-person social interactions with technology.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/tech_isolation/record_listener.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/tech_isolation/record_listener.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How “AI”-Generated Imagery is Different From Previous Art Technologies</title><link href="https://aaronhertzmann.com/2025/10/25/indistinguishability.html" rel="alternate" type="text/html" title="How “AI”-Generated Imagery is Different From Previous Art Technologies" /><published>2025-10-25T00:00:00+00:00</published><updated>2025-10-25T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/10/25/indistinguishability</id><content type="html" xml:base="https://aaronhertzmann.com/2025/10/25/indistinguishability.html"><![CDATA[<p>When text-to-image generation became big in 2022, many people reacted in shock. I heard a lot of people saying that nothing like this had ever happened before.</p>

<p>They said that “AI” was going to “kill art.” But <a href="/2022/08/29/photography-history.html">people said that back when photography was invented too</a>. They said that it’s not art because it’s just “pressing a button.” But <a href="/2022/08/29/photography-history.html">people also said that about photography</a> for <a href="https://www.routledge.com/On-Photography-A-Philosophical-Inquiry/Costello/p/book/9780415684705">over a century</a>. They said that “AI” can’t make art because it doesn’t have soul. But <a href="/2025/09/30/menace-of-mechanical-music.html">people also said that about recorded music</a>.  People said that “AI” will take jobs from artists, which also happened with <a href="/2025/09/30/menace-of-mechanical-music.html">recorded music</a>, computer animation, and many other past art technologies. People said that it is “just copying”… but all art involves some level of copying and inspiration, especially artforms explicitly based on copying, like <a href="https://en.wikipedia.org/wiki/Photomontage">photomontage</a>, <a href="https://en.wikipedia.org/wiki/Sampling_(music)">sampled music</a> (including much hip-hop and <a href="https://en.wikipedia.org/wiki/Turntablism">turntablism</a>), and <a href="https://en.wikipedia.org/wiki/Appropriation_(art)">appropriation art</a>. (The “just copying” claim is factually not true for visual generation, with rare exceptions.)  People say that most art with the new technology is bad, but <a href="https://aaronhertzmann.com/2022/12/17/when-tech-changes-art.html">most art made at the dawn of any new art technology is not “good”</a>.  And so on, and so on.</p>

<p>Even if all new technologies have a lot in common, they also must have differences. But what are the relevant ones for “AI”?</p>

<p>Certainly the fact that many popular “AI” models are trained from  scraped datasets is important in some way, but that <em>alone</em> does not indicate its impact on the arts. Internet search engines are also built and trained from large scraped datasets in ways that <a href="https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.#Impact">can affect the arts</a>.</p>

<h1 id="the-value-of-historical-analogies-and-trends">The value of historical analogies and trends</h1>

<p>In my work, I have argued that <a href="/2022/12/17/when-tech-changes-art.html">new artistic technologies tend to appear in consistent ways</a>, good and bad, including the backlash. Teasing apart these historical analogies is important for a few reasons.</p>

<p>First, the historical analogies help us <a href="/2022/12/16/status-quo-bias.html">overcome cognitive biases</a>. Our gut reactions to change are often wrong, and historical examples help us see this, and help us better understand the real dangers.  For example, a natural response to protect existing artists is to expand copyright, but this requires caution, since overbroad copyright <a href="https://www.rightclicksave.com/article/ai-art-and-uncanniness-cory-doctorow-artificial-intelligence-copyright">has a long history of hurting artists</a>.</p>

<p>Second, if you want to convince people, then you need defensible arguments.  For example, <a href="https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art">Ted Chiang’s argument against “AI” as art</a> seems not to have been informed by any knowledge of the history of photography or conceptual art or computer graphics or the <a href="https://jov.arvojournals.org/Article.aspx?articleid=2783759">way algorithms make choices</a> or current developments in interactive tools. His piece will not persuade people who know about these things.  His piece could have been a good argument that text-to-image is not a good artistic tool, something I agree with, but that’s not the argument he claimed to be making.</p>

<p>One possible difference is that <strong>the tools became effective much faster</strong> than with previous technologies. Portrait photography took decades to become widespread, and consumer photography took even longer. But it’s hard to judge speed in the moment, amongst all the breathless hype and controversy; <a href="https://www.forbes.com/sites/annkowalsmith/2025/10/31/ai-didnt-layoff-14000-people-amazon-did/">some claims of the immediacy of “AI”’s actual impact may be overstated</a>.</p>

<h1 id="the-biggest-difference">The biggest difference</h1>

<p>After <a href="https://www.youtube.com/watch?v=c2YRC0Gk5Do">a recent talk that I gave</a>, a student in the audience pointed out a way to phrase the difference that I think may be important.</p>

<p><a href="https://nissmar.github.io/">Nissim Maruani</a> wrote to me that: “for the first time in history, human and machines are actually creating using the same art medium … what does art become if the spectator cannot distinguish generated and human-created content? More literally, what would be the incentive for a human artist to publish on Spotify if the public cannot distinguish their work from free, on demand, and even maybe personalized music?”</p>

<blockquote>
  <p>In short, the new tools are different in that, not long after their invention, <em><strong>it is often impossible to tell how generated imagery, video, or audio was created</strong></em>.</p>
</blockquote>

<p>Was it hand-drawn, digitally painted on a tablet, or generated by algorithm? There are some telltale signals (weird physics, extra fingers appearing) and various stylistic clues, but it’s often very hard to say for sure. To some extent, telling the difference may be possible with expertise and connoisseurship, although even experts can often be misled.</p>

<p>In contrast, early photography was never mistaken for painting. You would rarely listen to recorded music and think that it’s a live performance (exceptions include lip sync). Early computer animation looked nothing like live action film or hand-drawn animation.</p>

<p>The move from analog to digital techniques might be the closest analogy here, and each of these had some controversies in their time, e.g., <a href="https://www.jstor.org/stable/779094">digital cinema versus analog photography</a>, <a href="https://www.youtube.com/watch?v=OHbM4QJYVYM">digital image editing</a>. Nowadays <a href="https://www.youtube.com/watch?v=7ttG90raCNo">one cannot tell the difference between computer graphics and live-action cinematography</a>.</p>

<h1 id="implications">Implications</h1>

<p>So, unlike a lot of the gut-response objections, I believe that this <em>Indistinguishability</em> one is valid, and worth considering the implications of.</p>

<p>I am not claiming that the idea is new, but that this is a defensible way to <em>describe</em> how these technologies differ. People have been discussing the implications of this fact, even if they might have phrased it differently.  And, I think this is something that was missing from my previous discussions of the topic.</p>

<p>Here are some possible takes.</p>

<p>One take is that “AI slop” <a href="https://www.theatlantic.com/technology/2025/10/ai-slop-winning/684630">“crushes creativity”</a> by making all work meaningless when we can’t tell any of it apart.</p>

<p>Culture theorist W. David Marx has argued that <a href="https://www.penguinrandomhouse.com/books/659558/status-and-culture-by-w-david-marx/">cultural value has already stagnated for the past several decades</a>, and that <a href="https://culture.ghost.io/genai-is-our-polyester/">“AI” merely accelerates this trend</a>: “less value is created when all cultural artifacts are procurable with enough money, can be made anywhere by anyone, and offer no useful social distinctions between philistine and aesthete,” fulfilling an “extinction-level destruction of cultural value” predicted by the Postmodern philosophers decades ago. He further predicts <a href="https://bsky.app/profile/wdavidmarx.bsky.social/post/3m3nw7fe3fc25">“a re-evaluation of real-life experiences”</a>… maybe we’ll get away from our screens more, if online cultural activities offer less status or meaning?</p>

<p>I do believe that history shows that <a href="https://www.youtube.com/watch?v=c2YRC0Gk5Do">the best art requires human authorship, experience, and connection</a>; we care about art because it is made by humans. Improved tools raised the bar for the best art. But there are many areas where we may not care so much, and these are the areas that “AI slop” inhabits.</p>

<p>I also think the analogy to photography remains useful: photographs didn’t look identical to paintings, but they were close enough to radically challenge the existing notions of the value of creating realistic pictures. Once photography grew widespread, making realistic pictures was no longer valued in the way it had been when it required a painter’s skill. The subsequent Modern Art movements reversed the understanding of artistic skill and talent <a href="/2022/09/27/art-eras.html">in the contemporary art world</a>; in Van Gogh’s words, recreating reality was “just photography”.</p>

<p>While skill may still be valued in contemporary art, <a href="/2020/06/08/wica.html">it is nothing without a good origin story and ideas behind it</a>.  In response to the idea of connoisseurship, contemporary artist Jason Salavon likes to quote <a href="https://www.pnas.org/doi/abs/10.1073/pnas.0706929105">the study showing that wine enjoyment is affected by beliefs about the price of the wine in a way that can be measured neurologically</a>.</p>

<p>Mass culture still values skill in a way that the contemporary art world does not.  Perhaps mass culture will go through a similar transition.</p>

<hr />

<p><em>Thanks to Kabir Ahuja for comments and pointing out the link to <a href="https://www.jstor.org/stable/779094">digital cinema</a>.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[When text-to-image generation became big in 2022, many people reacted in shock. I heard a lot of people saying that nothing like this had ever happened before.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Menace of Mechanical Music: Was John Philip Sousa right?</title><link href="https://aaronhertzmann.com/2025/09/30/menace-of-mechanical-music.html" rel="alternate" type="text/html" title="The Menace of Mechanical Music: Was John Philip Sousa right?" /><published>2025-09-30T00:00:00+00:00</published><updated>2025-09-30T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/09/30/menace-of-mechanical-music</id><content type="html" xml:base="https://aaronhertzmann.com/2025/09/30/menace-of-mechanical-music.html"><![CDATA[<p>In 1877, Edison introduced the <a href="https://en.wikipedia.org/wiki/Phonograph_cylinder">phonograph cylinders</a>, the first musical recording media. For the first time in history, you could listen to music without being in the same room as a performing musician.  And, soon, some people hated it.</p>

<p>In a 1906 essay entitled <em>The Menace of Mechanical Music</em>, the popular composer <a href="https://en.wikipedia.org/wiki/John_Philip_Sousa">John Philip Sousa</a> warned “I foresee a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations, by virtue—or rather by vice—of the multiplication of the various music-reproducing machines.”  <strong>In short, he predicted, recorded music will be disastrous for music and our musical taste.</strong></p>

<p>Today, it’s easy to laugh at his bonkers predictions, like the claim that mothers would turn their children into soulless machines by playing them soulless recorded music instead of singing them to sleep. We take music recording for granted, and surely we still have pretty good taste. It’s hard to imagine wishing that records, CDs, and mp3s had never been invented.  So it’s very easy to dismiss his doom-saying.</p>

<p>But, the more I’ve thought about it, the more I think that he had some good points.</p>

<p>These days, many people predict artistic doom from our new technologies, and it can be easy to dismiss these predictions, sometimes by pointing back to older predictions of doom. But it’s helpful to look back at the ways that old technological changes did change everything, both in good and bad ways. Doing so can give hints as to how best to navigate new changes.</p>

<p>Was John Philip Sousa right in any way, and what lessons, if any, does his example contain?</p>

<center>
  <a href="../../../images/sousa-page1.jpg"><img src="../../../images/sousa-page1-v2.jpg" width="640" /></a>
</center>

<h1 id="soullessness">Soullessness</h1>

<p>Sousa first argues that music must express the performer’s “soul.” “The nightingale’s song is delightful because the nightingale herself gives it forth.”  Hearing music from a machine, and “reducing the expression of music to a mathematical system of megaphones, wheels, cogs, disks, cylinders,” robs music of its soul.</p>

<p>As someone who has spent their life listening to recorded music at home—and, often, fallen in love with and felt deeply connected to musical recordings—I just think he is wrong about this. It is true that there is something special about seeing music performed live, but recordings also create deeply meaningful experiences, and a sense of connection with the artist.</p>

<p>Musicians long ago began making records meant to be heard as records, whether we’re talking about <a href="https://en.wikipedia.org/wiki/Musique_concr%C3%A8te">musique concrète</a>, The Beatles and Pink Floyd composing with tape loops, <a href="https://en.wikipedia.org/wiki/Album-oriented_rock">AOR rockers</a>, densely-composed rap albums like “Fear of a Black Planet” or “Paul’s Boutique,” or EDM musicians like DJ Shadow and Amon Tobin sampling and reconstructing bits of other recordings, and so on.  <strong>Recording has enabled new kinds of music that we now cherish.</strong></p>

<p>Listening to a recording is more like reading a novel than listening to a storyteller read live. And, like a short story or novel, you can study a recording, and go back to it over and over, in a way that you cannot with a performance.</p>

<p>Sousa writes “I could not imagine that a performance by it would ever inspire embryotic Mendelssohns, Beethovens, Mozarts, and Wagners to the acquirement of technical skill, or to the grasp of human possibilities in the art.” <strong>In this he is utterly wrong.</strong></p>

<p>So many of our current musicians talk about being inspired by recordings. <a href="https://www.bbc.com/news/entertainment-arts-58876732">In the words of one artist</a>: “The first Velvet Underground album only sold 10,000 copies, but everyone who bought it formed a band.”</p>

<p>Whenever someone says that machine-made art is soulless and cannot compare to human art, they should consider whether they are basically making the same argument as Sousa’s, or <a href="/2022/08/29/photography-history.html">the complaints that artists once made against photography</a>, or against computer graphics, and so on.</p>

<center>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">In 1906 a famous composer warned recorded music would end lullabies and turn kids in human phonographs &quot;without soul or expression&quot; <br /><br />Hear that warning read by a voice actor below. (Via our podcast on recorded music: <a href="https://t.co/GpU9TYobzG">https://t.co/GpU9TYobzG</a>) <a href="https://t.co/6ys2goVTv2">pic.twitter.com/6ys2goVTv2</a></p>&mdash; Pessimists Archive (@PessimistsArc) <a href="https://twitter.com/PessimistsArc/status/1298678938705698817?ref_src=twsrc%5Etfw">August 26, 2020</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</center>

<h1 id="people-making-music-together">People making music together</h1>

<p>But, on the next point, I think Sousa was right on.</p>

<p>Sousa wrote “The time is coming when no one will be ready to submit himself to the ennobling discipline of learning music.” In short, he wrote that people would not learn to make music because they could just listen to recordings instead of making it themselves.</p>

<p>He throws out many instances of camaraderie in people making music together that individually seemed old-fashioned and absurd when I first read them.  People gathering around a campfire won’t tell stories or play music, they’ll just listen to recordings!  But, taken together, I think he did have a real point.</p>

<p>Before recorded music, friends, families and communities often made music together. Friends and family would gather in their living rooms; <a href="https://www.newyorker.com/magazine/2005/06/06/the-record-effect">the piano was the center of social life in saloons and taverns</a>.</p>

<p>In contrast, when I grew up, at the center of most living rooms was a television. I never learned an instrument, so I’ve never experienced it myself. I’ve only been around friends playing music on a handful of occasions, but these occasions seemed magical.  I’ve heard from several people that singing together in groups and choirs is a powerful bonding experience.</p>

<p>In fact, <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/music-as-a-coevolved-system-for-social-bonding/F1ACB3586FD3DD5965E56021F506BC4F">there’s good evidence that communal bonding is the evolutionary explanation for <em>why we have music at all</em></a>.  <a href="/2021/03/22/art-is-social.html">Like all arts, recorded music has its own social functions</a>, for example, fans playing music together, and trading and selling, but I’m not sure these make up for not making music ourselves.</p>

<p>For most of us, experiencing music is a passive activity, especially as compared to before recordings. Even for many amateurs who play instruments, social music-making remains rare.  He wildly overstated it  when he said “it will be simply a question of time when the amateur disappears entirely,” but I think <strong>Sousa basically had a good point here.</strong></p>

<h1 id="copyight">Copyright</h1>

<p>And now we come to the real reason for Sousa’s article: his call for copyright protections for composers. “Composers of the music now produced so widely by the mechanical players of every sort draw no profit from it whatever. … The composer of the most popular waltz or march of the year must see it seized, reproduced at will on wax cylinder, brass disk, or strip of perforated paper, multiplied indefinitely, and sold at large profit all over the country, without a penny of remuneration to himself for the use of this original product of his brain.”</p>

<p>At the time, the US had no copyright law granting composers’ rights, and Sousa’s article was written as part of a campaign to create one.  The campaign was successful, leading to the creation of statutory licensing for music. Every time a music recording is played publicly, whether in a dance club or in your local cafe, some amount of money must be paid to ASCAP, which in turn pays composers.  Licensing is compulsory, meaning that you do not need to obtain permission from the composers.</p>

<p>This system seems to have been a successful and uncontroversial way to compensate and incentivize music composition, so, in that sense, Sousa was “right.”</p>

<h1 id="is-it-good-or-bad-what-are-the-lessons">Is it good or bad? What are the lessons?</h1>

<p>Is recorded music good or bad?  I, for one, could not imagine giving it up, or going back in time and trying to prevent its creation, if such a thing were possible. Yet, today, we continue to struggle with the way recording has replaced human connection—to oversimplify a bit, we’ve gone from making music together to listening to a few enormously skilled and talented artists at home alone.</p>

<p>It’s hard to take Sousa’s claims of the purity of performance too seriously, since his own band sold recordings for profit. And, earlier in his career, he had used Gilbert and Sullivan’s music without compensating them, to which Arthur Sullivan had said, “It seems to be their opinion that a free and independent American citizen ought not to be robbed of his right of robbing somebody else.”</p>

<p>New music recording and distribution technologies have since led to many, many new copyright battles, including battles over <a href="https://en.wikipedia.org/wiki/Home_Taping_Is_Killing_Music">cassette tapes</a>, <a href="https://en.wikipedia.org/wiki/Sampling_(music)#Lawsuits">sampling in hip-hop</a>, <a href="https://en.wikipedia.org/wiki/File_sharing">file-sharing</a>, <a href="https://www.smithsonianmag.com/smart-news/andy-warhol-copyright-prince-fair-use-180982230/">pop art</a>, <a href="https://en.wikipedia.org/wiki/Appropriation_(art)#Examples_of_lawsuits">appropriation art</a>, and on and on.  New technologies lead to compensation battles as well: <a href="https://www.youtube.com/watch?v=ILaU78Oo7XM">video streaming technology led to the 2023 Hollywood writers’ strikes</a>, and <a href="https://en.wikipedia.org/wiki/Criticism_of_Spotify#Business_practices">compensation for musicians from online streaming remains bad</a>.</p>

<p>
<center>
<figure>
   <p float="left">
   <img src="../../../images/cinemahistory/robot_sings_of_love.jpg" alt="1930 advertisement against recorded music in cinemas: the robot sings of love" width="60%" />
</p>
</figure>
<figcenter><i>Advertisement from <a href="https://www.smithsonianmag.com/history/musicians-wage-war-against-evil-robots-92702721/">a 1927 campaign</a> against using recorded music in movies</i></figcenter>
</center>
</p>

<p>Each of these battles is accompanied by absolute claims about art like Sousa’s: not only does the new technology ruin compensation for artists, it’s also soulless, it’s bad for art.  A lot of campaigners against new technology don’t just say it’s bad for compensation; they say it’s bad, period.  But I think we should separate these issues.</p>

<p>I could not imagine wishing that these technologies themselves did not exist. You may consider yourself lucky if you have never had to go to a neighborhood video store to see what few movies you could bring home to watch, or flipped channels on a TV to see what was on—and ended up watching reruns and commercials. <a href="https://chokepointcapitalism.com/">Giblin and Doctorow make a compelling case</a> that the source of current artist precarity is not, primarily, technology or copyright, but near-monopoly market conditions.</p>

<p>With each new technology, the loudest voices make the most extreme arguments, for why the technology has nothing but benefits, or will ruin everything. In the words of historian Melvin Kranzberg, <a href="https://journals.sagepub.com/doi/abs/10.1177/027046769501500104">technology is neither good nor bad; nor is it neutral</a>.</p>

<p>We, as humans, are so bad at intuitively reasoning about the complexities of these long-term changes. But <a href="/2022/12/17/when-tech-changes-art.html">there are many common trends that occur again and again with new artistic technologies</a>; studying these kinds of examples provides a useful counter against simplistic thinking and <a href="/2022/12/16/status-quo-bias.html">status quo bias</a>.</p>

<center>
  <figure>
    <img src="../../../images/Home_taping_is_killing_music.png" width="280" height="231" />
    <figcaption><i>Logo of a 1980s <a href="https://en.wikipedia.org/wiki/Home_Taping_Is_Killing_Music">music-industry campaign against cassette tapes</a></i></figcaption>
  </figure>
</center>

<p><strong>Further reading</strong></p>

<ul>
  <li>
    <p><a href="https://ocw.mit.edu/courses/21m-380-music-and-technology-contemporary-history-and-aesthetics-fall-2009/18ab3aba9fe7aa1502a55cd049333659_MIT21M_380F09_read02_sousa.pdf"><em>The Menace of Mechanical Music</em></a> by John Philip Sousa. Read it for yourself!</p>
  </li>
  <li>
    <p><a href="https://www.cambridge.org/core/journals/journal-of-the-society-for-american-music/article/abs/john-philip-sousa-and-the-menace-of-mechanical-music/A9E621587BE7580ABD73AEF64D4B2DC8">Background on Sousa and his legal battles by Patrick Warfield</a></p>
  </li>
  <li>
    <p>My blog post on <a href="/2022/12/17/when-tech-changes-art.html">When Technology Changes Art</a></p>
  </li>
</ul>

<p><em>Thanks to <a href="https://drib.net/">artist Tom White</a> for recommending the Sousa essay to me.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[In 1877, Edison introduced the phonograph cylinders, the first musical recording media. For the first time in history, you could listen to music without being in the same room as a performing musician. And, soon, some people hated it.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/sousa-page1-v2.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/sousa-page1-v2.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Books That Changed The Way I See The World</title><link href="https://aaronhertzmann.com/2025/09/15/books-that-changed-me.html" rel="alternate" type="text/html" title="Books That Changed The Way I See The World" /><published>2025-09-15T00:00:00+00:00</published><updated>2025-09-15T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/09/15/books-that-changed-me</id><content type="html" xml:base="https://aaronhertzmann.com/2025/09/15/books-that-changed-me.html"><![CDATA[<p>Ever since I read <em>Guns, Germs, and Steel</em>, and then more books like it, I started a mental category of “books that changed the way I see the world.”  These books have a few things in common: they are non-fiction; they often span eras; they present grand theories based on solid evidence; they’re engagingly written for a relatively broad  audience; in most cases, I couldn’t put them down, finishing them in a short amount of time, and then buttonholed all my friends telling them to read the book; years later, I still find myself referring to ideas or tidbits that stuck out from these books, and sometimes thinking “how can you discuss topic X if you haven’t read this book?”  I aspire to write books like this about art, technology, and perception someday.</p>

<p>There are lots of other kinds of fiction, writing, TV and movies that changed the way I see the world, but those are much harder lists to curate, so this one is just about books. This is not a complete guide to my worldview, e.g., there are some topics for which fiction and documentary films may have shaped my worldview more than books (for example, I remember being profoundly affected by the novel <em>A Fine Balance</em> by Rohinton Mistry and the San Francisco documentary film “We Were Here.”)  However, relevant to this list, the TV series <a href="https://en.wikipedia.org/wiki/Adam_Ruins_Everything"><strong><em>Adam Ruins Everything</em></strong></a> deserves special mention for covering common misconceptions effectively across a range of topics; it’s the only TV show I’ve watched that included extensive citations.</p>

<p>I almost never reread books, so I don’t know if some books would seem hopelessly dated now.  A lot is about how the book hits you at the time you read it. Some books wouldn’t have meant anything to me when I was too young to appreciate them, and others would seem too obvious if I read them too late.</p>

<h1 id="history-of-science">History of Science</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/The_Age_of_Wonder"><em>The Age of Wonder</em></a>, Richard Holmes (2008)</strong><br />
Rip-roaring adventure, science, and art of the romantic era, and how we came to some of our key modern ideas, including the concept of the scientist (as with <em>The Invention of Art</em>, which describes where the concept of “artist” came from). Great fun, and really fascinating for understanding where a lot of our ideas came from, from an era before we made any distinctions between “art,” “science,” and “philosophy.”</p>

<p><strong><em>The Structure of Scientific Revolutions</em>, Thomas Kuhn (1962)</strong> <br />
The classic on paradigm shifts, and how “normal science” operates versus science in crisis.</p>

<p><strong><a href="https://archive.org/details/the-day-the-universe-changed-s01e01-the-way-we-are">“The Day The Universe Changed,”</a> James Burke (1985)</strong><br />
 The one TV show that I include in this list: a BBC history series that I watched as a child that I remember loving, and I do think it affected my worldview: how scientific and technological changes change how we understand the world and our worldviews. Some of the writing I do about how technology changes art might owe a debt to it. (And, being a simplified story for mass consumption, <a href="https://www.jstor.org/stable/232006">it has its problems</a>.)</p>

<h1 id="intelligence">Intelligence</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/From_Bacteria_to_Bach_and_Back"><em>From Bacteria to Bach and Back</em></a>, Daniel Dennett (2017)</strong><br />
I keep coming back to some of the ideas here, like “competence without comprehension” and the notion of “memes” (I couldn’t get through <em>The Selfish Gene</em>). (<a href="/2024/09/18/books-on-consciousness.html">Longer review here.</a>)</p>

<p><strong><em>Are We Smart Enough to Know How Smart Animals Are?</em>, Frans de Waal (2016)</strong><br />
The author describes how scientists have consistently underestimated animal intelligence, and the many forms it takes. One tidbit that spoke to me was his assertion that, while historically “scientific” definitions of intelligence have focused on logic and reasoning, emotional intelligence is far more important to survival, and far richer and more difficult to define or understand. Social animals need emotional and social awareness more than they need language.
(<a href="/2024/09/18/books-on-consciousness.html">Longer review here.</a>)</p>

<p><strong><em>Don’t Shoot the Dog!</em>, Karen Pryor (1984)</strong><br />
I learned a lot about positive reinforcement dog training while volunteering in an animal shelter, but I didn’t understand the philosophy and evidence behind it. This is the book that put it all together for me: how positive and negative reinforcement can be used to guide behaviors for all sorts of animals, including humans. At the same time I also enjoyed <em><strong>Plenty in Life is Free</strong></em> by Kathy Sdao (2012), a brief memoir and argument against dominance-based training even in positive reinforcement training.</p>

<h1 id="societies-and-cities">Societies and Cities</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/The_Death_and_Life_of_Great_American_Cities"><em>The Death and Life of Great American Cities</em></a>, Jane Jacobs (1961)</strong><br />
The book absolutely transformed the way that I understood city life, so much that it’s hard to even remember what I thought beforehand. I felt like a cipher had been unlocked to help me understand the city in ways I never had before. I wished very much that someone had given it to me to read before I moved to New York City. I only found out about her after I moved to Toronto and went on a city walk celebrating Jane Jacobs’ activist legacy in Toronto. I became a huge Jane Jacobs fan. For more about her history as an activist in New York, along with a bit of Robert Moses’ history, I really enjoyed the short history <em>Wrestling with Moses</em> by Anthony Flint (2009). (Update: I’ve since read <a href="https://en.wikipedia.org/wiki/The_Power_Broker"><em>The Power Broker</em></a>, and loved it. The length is intimidating, but once I started it was very easy to keep reading the whole thing.)</p>

<p><strong><a href="https://joekeohane.net/books"><em>The Power of Strangers</em></a>, Joe Keohane (2021)</strong><br />
A book about talking to strangers doesn’t sound like it would hold my attention for long, but this was a surprisingly deep exploration of the topic, through anthropology, sociology, and psychology, and many real-world experiences and practical tips. The author argues that talking to strangers is easier than we think, and an important glue in the social fabric that is in a poor state, and our country and world would be so much better if we did more of it.</p>

<p><strong><a href="https://www.penguinrandomhouse.com/books/659558/status-and-culture-by-w-david-marx/"><em>Status and Culture</em></a>, W. David Marx (2022)</strong><br />
The main thesis is that all culture, fashion, and taste arise from natural human status-seeking, and the book provides a dense analysis of the complex mechanics of both culture and status. Almost every page I was nodding as the book provided broad-strokes theoretical descriptions that perfectly matched my experience or understanding, together with a rich array of cultural examples to make it concrete. (Update: I’m also enjoying his follow-up book: <a href="https://www.penguinrandomhouse.com/books/769187/blank-space-by-w-david-marx/"><em>Blank Space</em></a>.)</p>

<h1 id="definitions-of-art">Definitions of Art</h1>

<p>A few years ago, I started reading a lot of books on definitions of art. <a href="/2020/05/04/art-book-reviews.html">Here are some of my longer reviews of those books</a>.</p>

<p><strong><em>The Art Instinct</em>, Dennis Dutton (2009)</strong><br />
A philosopher argues that art is an evolved behavior. Includes a clear discussion of definitions of art, and what it takes to argue that something is an evolutionary behavior.</p>

<p><strong><em>The Art Question</em>, Nigel Warburton (2002)</strong><br />
A compact survey of 20th-century approaches to definitions of art by a philosopher. If you want to understand definitions of art, and why no simple definition works (“art is about intent”, “art is about communication”, “art is about ideas”) then this is the book to read.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/The_Invention_of_Art"><em>The Invention of Art</em></a>, Larry Shiner (2001)</strong><br />
How the concept of “art” was invented in the Romantic era; our modern concepts of “art” would be unrecognizable to, say, Leonardo da Vinci or the ancient Greeks. Unlike other books on this list, I found it often a bit tedious, and skimmed many chapters, but it was really worth it.  It might be that <a href="https://www.jstor.org/stable/2707484">the paper that inspired</a> it is good enough (I haven’t read it yet).
(<a href="/2022/06/01/art-books-2022.html">longer review</a>).</p>

<h1 id="food-and-health">Food and Health</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/In_Defense_of_Food"><em>In Defense of Food</em></a>, Michael Pollan (2008)</strong><br />
I loved Michael Pollan’s <em>The Omnivore’s Dilemma</em>, which compared different approaches to food production, although some aspects of the writing bothered me (like the way he anthropomorphized evolution). In <em>In Defense of Food</em>, he distilled his ideas and a lot of research to a basic dietary credo that affects how I eat today.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Ultra-Processed_People"><em>Ultra-Processed People</em></a>, Chris Van Tulleken (2023)</strong><br />
A survey of the theories, evidence, and the broader context for why ultra-processed food is the source of so many of our dietary problems.  (And, some of the original research on ultra-processed foods was inspired by Michael Pollan’s book.)</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Trick_or_Treatment%3F"><em>Trick or Treatment?</em></a>, Simon Singh and Edzard Ernst (2008)</strong><br />
This book surveys the scientific method for medical knowledge, and applies it to alternative medicine. I had had a long list of confusing experiences and interactions around a variety of alternative medicine practices, and this book put it all into context. I still occasionally tell people about how <a href="https://en.wikipedia.org/wiki/Pseudoephedrine">pseudoephedrine</a> came from distilling the useful parts of <a href="https://en.wikipedia.org/wiki/Ephedra_(plant)">ephedra</a> (used in traditional Chinese medicine) and discarding the toxic parts.</p>

<h1 id="history">History</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/Guns,_Germs,_and_Steel"><em>Guns, Germs, and Steel</em></a>, Jared Diamond (1997)</strong><br />
An attempt to explain why Europe and Asia colonized the rest of the world, and not the other way around, through scientific and anthropological theories rather than racist ones.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/1491:_New_Revelations_of_the_Americas_Before_Columbus"><em>1491</em></a>, Charles Mann (2005)</strong><br />
Most accounts of pre-Columbian indigenous life in the Americas have painted it either as primitive savagery, or, in progressive fantasies, as indigenous populations wise and perfectly attuned to their environments. This book provides a more accurate account of what we now know about life before 1492: dense populations and civilizations that rose and fell, just like anywhere else.  A lot of our modern fantasies of pre-1492 life as being barely-populated hunter-gatherer tribes were shaped by the fact that 95 percent of the population of the Americas died from diseases brought unknowingly by European colonists, long before Europeans visited most of the affected areas. (I found the middle chapters of this book to be a bit of a slog and skimmed them.)</p>

<p><strong><a href="https://en.wikipedia.org/wiki/1493:_Uncovering_the_New_World_Columbus_Created"><em>1493</em></a>, Charles Mann (2011)</strong><br />  The global effects of the contact of the Americas and Europe. A fascinating account through history of post-1492 science, agriculture, and warfare. So many factoids stick in my head from this.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/A_Peace_to_End_All_Peace"><em>A Peace to End All Peace</em></a>, David Fromkin (1989)</strong><br />
All the ins and outs of how the British and French colonialists, when carving up the Ottoman Empire at the end of World War I, created the Middle East and set the stage for seemingly-endless conflict and tyranny.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Collapse:_How_Societies_Choose_to_Fail_or_Succeed"><em>Collapse</em></a>, Jared Diamond (2005)</strong><br />
I only read a magazine-article version of this, not the full book, because I find it too depressing to contemplate.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Season_of_the_Witch:_Enchantment,_Terror,_and_Deliverance_in_the_City_of_Love"><em>Season of the Witch</em></a>, David Talbot (2012)</strong><br />
History of San Francisco, from the 1960s to the 1980s. I find it helpful to counterbalance our current dramas with our historical dramas. The book seems a little sensational; after reading it, I asked my father if San Francisco life was as intense in the 1970s as the book portrayed it and he basically shrugged.  Another local-history book I loved was <a href="https://en.wikipedia.org/wiki/The_Mayor_of_Castro_Street"><strong><em>The Mayor of Castro Street</em></strong></a> by Randy Shilts (1982).</p>

<h1 id="economics-and-inequality">Economics and inequality</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/Evicted:_Poverty_and_Profit_in_the_American_City"><em>Evicted</em></a>, Matthew Desmond (2016)</strong><br />
The crushing effects of poverty, and the cruel systems that keep people poor. I also got a similar message from reading a magazine-length version of Barbara Ehrenreich’s <a href="https://en.wikipedia.org/wiki/Nickel_and_Dimed"><em>Nickel and Dimed</em></a> many years earlier, as well as some articles accompanying the <a href="https://en.wikipedia.org/wiki/Ferguson_unrest">Ferguson riots</a> in 2014, but Desmond’s is more far-reaching.  I also loved Emi Nietfeld’s riveting memoir <a href="https://www.eminietfeld.com/books"><em><strong>Acceptance</strong></em></a>, about childhood in foster care and homelessness.</p>

<p><strong><a href="https://byjustinfox.com/myth-of-the-rational-market/"><em>The Myth of the Rational Market</em></a>, Justin Fox (2009)</strong><br />
How motivated economists invented the myth of market rationality (“markets know best”) that supported the deregulation disasters of the 1980s and the financial crashes and consolidations that have happened since.</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction"><em>Weapons of Math Destruction</em></a>, Cathy O’Neil (2016)</strong><br />
This book was my first introduction to the harms of algorithmic decision-making as reinforcing inequality and racism, back in a time of optimism about machine learning. I don’t think she gets enough credit for sounding the alarm on AI, but she was writing when it wasn’t called “AI,” it was called “big data.”  (Safiya Noble’s 2018 <a href="https://en.wikipedia.org/wiki/Algorithms_of_Oppression"><strong><em>Algorithms of Oppression</em></strong></a> discussed this in the context of Google search, again predating today’s AI hype.)</p>

<p><strong><a href="https://en.wikipedia.org/wiki/Chokepoint_Capitalism"><em>Chokepoint Capitalism</em></a>, Rebecca Giblin and Cory Doctorow (2022)</strong><br />
How monopolies have created precarious conditions for artists (and also why copyright alone cannot save artists). This book was written before the current AI-art controversies and provides valuable context.</p>

<h1 id="other">Other</h1>

<p><strong><a href="https://en.wikipedia.org/wiki/Nonviolent_Communication"><em>Non-Violent Communication</em></a>, Marshall Rosenberg (2015 edition)</strong><br />
An invaluable approach to having more-productive and empathetic conversations, especially in the presence of conflict. (<a href="/2025/06/23/nvc-1-scicomm.html">Longer blog post</a>)</p>

<p><strong><a href="https://hyperboleandahalfbook.blogspot.com/2013/09/i-wrote-book_30.html"><em>Hyperbole and a Half</em></a>, Allie Brosh (2013)</strong><br />
Hilarious cartoon memoir that, while frequently making me laugh out loud, also helped me understand how absolutely debilitating clinical depression can be, for those that suffer it.</p>

<hr />

<p><em>Thanks to people who recommended some of the books on this list: Moshe Vardi, Matt Hoffman, Alyosha Efros, Jitendra Malik.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[Ever since I read Guns, Germs, and Steel, and then more books like it, I started a mental category of “books that changed the way I see the world.” These books have a few things in common: they are non-fiction; they often span eras; they present grand theories based on solid evidence; they’re engagingly written for a relatively broad audience; in most cases, I couldn’t put them down, finishing them in a short amount of time, and then buttonholed all my friends telling them to read the book; years later, I still find myself referring to ideas or tidbits that stuck out from these books, and sometimes thinking “how can you discuss topic X if you haven’t read this book?” I aspire to write books like this about art, technology, and perception someday.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Non-Violent Communication, and Technical Communication</title><link href="https://aaronhertzmann.com/2025/06/23/nvc-1-scicomm.html" rel="alternate" type="text/html" title="Non-Violent Communication, and Technical Communication" /><published>2025-06-23T00:00:00+00:00</published><updated>2025-06-23T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/06/23/nvc-1-scicomm</id><content type="html" xml:base="https://aaronhertzmann.com/2025/06/23/nvc-1-scicomm.html"><![CDATA[<p>A few years ago, an idea called <em>Non-Violent Communication</em> (NVC) became important to how I communicate, especially in situations of conflict or potential conflict. 
When I’ve mentioned the <a href="https://www.cnvc.org/store/nonviolent-communication-a-language-of-life">NVC book</a> to friends, I’ve found that many of them had already read it and use the ideas—one of my high school friends said “That book changed my life.”</p>

<p>In this blog post, I summarize some of the basic ideas of NVC, and advocate for the ideas in scientific and technical communication. I describe it in a way that makes sense to me, and might be more approachable for some people; the way <a href="https://www.cnvc.org/store/nonviolent-communication-a-language-of-life">the NVC book</a> is written really put me off when I first tried to read it. But, if you find this intriguing, I do encourage you read the book itself; this is not meant as a complete introduction to NVC.</p>

<p>Conflict is a natural part of scientific communication, not only because science involves resolving competing ideas, but also because it is a human process. Yet, I don’t think we’re often trained in how to deal with conflict.  NVC offers tools for managing conflict, and, more importantly, communicating what is important to us in a way that prevents conflict in the first place.</p>

<p>I also draw connections between NVC and scientific concepts like “umwelt” in this post.</p>

<p>I think that NVC can be useful to everyone; feel free to ignore the science stuff if it’s not interesting to you.</p>

<p>This post was spurred by a very nasty paper review I received recently. In the next blog post, I’ll apply some of these ideas to reviewing, including discussing that nasty review, and describe simple, easy-to-follow guidelines that greatly improve paper reviewing. See that post for an extended example of these concepts.</p>

<h1 id="non-violent-communication-nvc">Non-Violent Communication (NVC)</h1>

<p>NVC provides communication techniques for managing and preventing interpersonal conflict. NVC can generally improve empathy and connection between people.</p>

<p>A key concept is to separate <em>what you can observe</em> from your judgments and interpretations, the way that a scientist ideally separates the data they measure from their interpretations of the data.</p>

<p>Think about a recent time when you got angry at someone important in your life, or they got angry with you. Maybe someone was late, or talked over you, or broke a promise—and it made you mad.  In these cases, it’s tempting to describe your experience as objective facts about the world: “You’re always late” “That was really shitty of you.” “You didn’t even think about other people.”  Or, perhaps you know that saying these things won’t help, so you don’t say anything, and continue to feel bad when the same thing keeps happening again.</p>

<p>The problem with those statements is that they describe your own interpretations of events and judgements of people. They’re subjective. The other person may not agree with them, and may even feel insulted by your comments, and then things get worse.</p>

<p>NVC offers a recipe for communication in these difficult situations. A key idea is that you can only talk about things you yourself observe.  That is, you can talk about what you saw, what you did, and what you felt.</p>

<p>In NVC, statements about things that you couldn’t observe are off-limits. You are not allowed to make judgements about other people, to say what they thought or believed, or to describe who they are. In NVC, these are all called “violent communication:” a violent communication is any statement that imposes an unwanted opinion onto someone else.  Telling someone that they’re usually late, or that they’re rude or inconsiderate are violent communications.</p>

<p>Even compliments can be violent communications. For example, in some contexts, someone telling me that I’m very organized might sound to me like they’re saying that I’m uptight or uncreative. Moreover, in my experience, unsolicited advice is usually violent, since it often conveys a lot of wrong assumptions by the advice-giver about the unwitting advisee.</p>

<p>One goal is to find common ground, which means avoiding statements that the other person disagrees with. If you make judgements or interpretations about what happened—especially about them—they may not only disagree, they may get defensive or angry. On the other hand, if you state only things you observe, the other person almost certainly cannot disagree with them, and then hopefully you have at least improved your understanding of each other. Honestly describing your feelings in these situations is so hard, but so powerful.</p>

<p>NVC is about managing conflict, and just describing your feelings or hurt is not enough; you cannot expect the other person to know what you need. For this, once you’ve stated your observations, you can describe the more basic needs you have, and then make a request: something that you want the other person to do. The request needs to be concrete and actionable; not “treat me better” but something very specific that the person can do: “when I cook dinner, do the dishes without being asked.”</p>

<p>In summary, the recipe for NVC in a conflict is: 1. say what you saw and observed (observable facts), 2. how it made you feel (internal facts), 3. describe your personal needs, and 4. your precise and specific request for what you want the other person to do. “You said X, it made me feel Y, I need to feel respected, and I request that you not say that again.” They might say no to the request, but they should ideally have no reason to dispute the facts of what you experienced and what you need.</p>

<p>This is not easy—it’s not easy to dispassionately observe and discuss intense emotions. It takes a lot of practice and iteration to learn the skills of NVC: to filter out the judgements, to try to observe your feelings accurately, and to openly describe them. I still find it quite difficult.</p>

<p>I have not attempted to fully explain NVC, and I’m certainly not an expert. I recommend reading <a href="https://www.cnvc.org/store/nonviolent-communication-a-language-of-life">the first few chapters of the book</a> to learn more.  As with any skill, reading isn’t enough: you then need to practice difficult conversations in your own words, ideally together with someone else, and iterate.  I promise it’s worth it.</p>

<h1 id="talking-about-feelings">Talking about feelings</h1>

<p>At the core of NVC is something that sounds antithetical to science: talking about feelings.</p>

<p>Many of us were trained in childhood—perhaps unintentionally—to avoid talking about feelings and peoples’ needs. In technical, scientific, and pragmatic topics, feelings might seem irrelevant and distracting—one works hard to write objective, factual accounts and to make rational decisions.  In interpersonal conflict, it’s tempting to focus on what promises were made, who is “in the right” or what people “deserve”.</p>

<p>If so, feelings and needs end up being neglected, when they’re sometimes what need the most attention. Often conflicts that seem to be about logistics or fundamental disagreements are really about miscommunication leading to hurt feelings. Whenever I see someone get angry in a work meeting (including myself), I believe that there’s a real issue of their feelings of being threatened or hurt, or not getting some basic needs met—not whatever practicalities the discussion is superficially about.</p>

<p>In, say, a faculty meeting, you might see a lot of charged discussion, which indicates a lot of difficult feelings. Some junior faculty feel insecure in their status and some senior faculty feel insecure in their continued relevance, and insecurity leads to people attacking colleagues and faculty candidates.  Some faculty seem upset about everything, and this often comes from their own insecurity and fears.</p>

<p>At times, just saying how you feel can really help in tense situations. It feels so difficult to recognize your feelings at the core of a difficult situation, and then to acknowledge them out loud. Then when you do, you think “that wasn’t so hard.”   (Actually, it is hard, and I am not good at this, especially in the heat of the moment.)</p>

<p>Several years ago, I was frustrated after receiving several rounds of very highly-critical feedback on a manuscript from a mentor. I couldn’t tell if they thought my work was even worthwhile at all. I finally wrote back that “This project is very important to me, … and I found your email to be really discouraging.” While they did not specifically comment on this, I found their subsequent comments to be much more considerate and balanced, even including the occasional compliment. Their comments and support were absolutely crucial to me eventually getting the work completed and published.</p>

<p>One of my favorite comedians, James Acaster, tells stories in his “Hecklers Welcome” tour about confronting conflict and fears. The show culminates in a story of how he disarmed scary bullies on a train by honestly describing how scared he was. It seemed like a textbook use of NVC (although rewatching it now, he doesn’t talk about needs or requests). (At the theater where I saw it in San Francisco, he complained that the hecklers were too nice, which got a laugh.)  It’s a very funny show and <a href="https://play.max.com/movie/5206b44b-6be5-4165-aefa-aa60b0e3eda3">I highly recommend the comedy special</a>. (I also loved <a href="https://www.jamesacaster.com/cold-lasagne/">his prior special for different reasons</a>, <a href="https://vimeo.com/799500985">alternate link</a>).</p>

<p>Even in everyday conversation, we can include discussion of feelings. These are little things: “I’m happy to see you” “I’m disappointed that they’re out of dessert.” But I think a willingness to mention feelings in everyday situations is good practice for more difficult situations; conversely, people who find this hard in casual conversation may also struggle with it in more difficult conversations.</p>

<h1 id="judgments-as-opinions">Judgments as opinions</h1>

<p>NVC says to avoid all sorts of external judgements. But, in everyday conversation, we make judgments all the time, like “The show is funny” or “That’s a good idea.” These judgments are generally statements of opinion, not facts, and different kinds of judgments can function in different ways.  When I said that “Hecklers Welcome” is a funny show, what I really mean is that I laughed a lot and enjoyed it, and I think that other people would find it funny. That is, I used an objective-sounding judgment about the show as a shorthand for my own opinion of the show.</p>

<p>I think of most judgments as being shorthands for opinions. Explicitly expressing every opinion <em>as</em> an opinion would be painfully verbose and unfun.</p>

<p>Trouble arises when people treat such judgements as absolute. If you insist that a show is funny—contradicting anyone who says otherwise—then you seem to be saying there’s something wrong with people who didn’t laugh at the show. For this reason, I try to avoid such judgments when there’s potential disagreement.  For similar reasons, <a href="https://aaronhertzmann.com/2022/09/19/art-definitions-1.html">I advocate against evaluative definitions of art</a> (“This is good enough to be art”), and that <a href="https://aaronhertzmann.com/2024/06/21/judgments.html">unnecessary judgements impede creativity and artistic practice</a>.</p>

<p>One alternative is to replace generic judgments with specifics. Instead of “that’s a good movie,” you comment on aspects of it, e.g., “I liked the acting but not the special effects.”</p>

<h1 id="interpreting-sensations-umwelt-and-nvc">Interpreting sensations: <em>umwelt</em> and NVC</h1>

<p>Now I turn to how NVC relates to scientific theories.</p>

<p>It seems like we experience the world as it is: you see a picture and know what’s in it. A stranger is aloof and unfriendly, and refuses to make eye contact with you. A dog sniffs some boring weeds but misses a spectacular view from a trailhead. Much of the time, these kinds of descriptions work.</p>

<p>But perceptions can mislead us.  <a href="https://en.wikipedia.org/wiki/Checker_shadow_illusion">Visual illusions</a> are not what they appear.  Someone who seems aloof and unfriendly might just be shy.  A dog experiences a wealth of sensations from smells that we humans cannot experience, whereas <a href="https://www.petmd.com/dog/general-health/what-colors-can-dogs-see">dogs are red-green colorblind</a> and overall have worse visual acuity than humans.</p>

<p>This is because <strong>we don’t see the world as it is—we interpret our sensations.</strong> In biology, this idea is expressed by the concept of <a href="https://en.wikipedia.org/wiki/Umwelt"><em>umwelt</em></a>: no organism has access to the true facts of the world; it can only access its senses. We organisms interpret our sensations. Different organisms have different experiences—my dog and I have different senses, and so we have really different experiences when we’re out on a walk.</p>

<p>Scientific research has analogous limitations. A scientist running an experiment cannot know the true causes of what happened in the experiment; they can only take measurements and form theories based on them. A scientist aims to report, as carefully as possible, what they did, and what they measured, and then to cautiously describe their interpretation of the results. Conversely, some cognitive-science theories liken organisms to scientists, measuring the data of their senses, performing experiments, and forming theories in their head, such as “Bayesian brain” theories and Gopnik et al.’s <a href="https://alisongopnik.com/TheScientistInTheCrib.htm"><em>The Scientist in the Crib</em></a>.</p>

<p><strong>Here’s the complete analogy</strong> between umwelt, scientific communication, and non-violent communication. In each, there is an actor (an organism, a scientist, a person), who can observe specific things (senses, experimental data, conversations and feelings). Organisms and scientists form interpretations of their data (the state of the world, scientific theories). In communication, scientists report their data and their interpretations, at some dispassionate remove (separating their feelings from the data), in order to come to a shared understanding of science; people using NVC report their direct experiences dispassionately (including reporting on their feelings) in the hopes of coming to shared understanding.</p>

<p>This is just an analogy, not a literal equivalence; communication would be impossible if we always had to report every sensation on our retinas instead of “the clock said 3pm” or just “it was 3pm.”   We can take shortcuts when there is shared understanding, and no real uncertainty about the meaning of our sensations.</p>

<p>Scientific communication typically involves a posture of objectivity: the idea that one can make statements without a point-of-view or positionality. Even if we acknowledge that objectivity is technically impossible, the key idea here is that we always have some shared understanding. We can treat shared understanding as facts (no need to treat “the Earth is round” or “we went to a movie together yesterday” as opinion), and be more careful about uncertain or contested ideas.</p>

<h1 id="using-nvc-in-scientific-and-technical-communications">Using NVC in scientific and technical communications</h1>

<p>Having tried to incorporate NVC into my personal life—it’s difficult, but worth it—I find myself using those habits in some of my writing as well.  <strong>I don’t advocate employing the full NVC recipe in all scientific and technical communication, but I advocate using some of the ideas.</strong>  At the very least, one should be very careful with judgments, acknowledge feelings where relevant, and, in debates, avoid violent communication.</p>

<p>I’m currently co-authoring a technical paper on a controversial topic in “AI” art, and a draft used judgemental, emotionally-charged language. I found the language upsetting (I disagreed with some implications of it), and it did not seem possible to concisely summarize the issues—any judgments we made would likely be upsetting to some group. But we also didn’t need to take a position or to make these judgements. I reframed the introduction in factual terms that we could support with citations, so that we could motivate our technical problem without making controversial assertions.  This might sound just like careful technical writing, but to me it felt like NVC, since we avoided judgmental language and limited ourselves to factual observations.</p>

<p>Now, when I do discuss controversies directly, I do try to acknowledge the conflicting fears and anger of the different stakeholders, and the sources of those feelings.  For example, in <a href="/2022/12/17/when-tech-changes-art">a discussion of “AI art,”</a> I discussed the anger that traditional artists feel as a product of their fears for their own livelihoods and identities, many of which are quite understandable. (Although I’ve also discussed it in terms of the <a href="/2023/12/11/art-worlds">value systems of different kinds of artists</a>, which are themselves tied to feelings.) Some artists might find this discussion violent if they do not relate to it—different people have different opinions and feelings—but it seems better than not attempting to acknowledge the feelings.</p>

<h1><a href="../../../2025/06/23/nvc-2-reviewing.html">In my next blog post</a>, I discuss violent communications in paper reviewing (including some really bad reviews I've gotten), and describe a simple guideline for better, non-violent reviews</h1>

<hr />

<p><em>Many thanks to Rich Radke for comments on this blog post.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[A few years ago, an idea called Non-Violent Communication (NVC) became important to how I communicate, especially in situations of conflict or potential conflict. When I’ve mentioned the NVC book to friends, I’ve found that many of them had already read it and use the ideas—one of my high school friends said “That book changed my life.”]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Violent Communication in Scientific Paper Reviews</title><link href="https://aaronhertzmann.com/2025/06/23/nvc-2-reviewing.html" rel="alternate" type="text/html" title="Violent Communication in Scientific Paper Reviews" /><published>2025-06-23T00:00:00+00:00</published><updated>2025-06-23T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2025/06/23/nvc-2-reviewing</id><content type="html" xml:base="https://aaronhertzmann.com/2025/06/23/nvc-2-reviewing.html"><![CDATA[<p>In peer review for scientific and technical papers, peer reviews occasionally discuss the paper authors directly. In this post I argue that this behavior can be toxic: it harms the review process, the authors, and, potentially, the community as a whole. This particular problem can be prevented by one easy guideline, one already used in some communities: paper reviews should be about the paper, not the authors.</p>

<p>This follows <a href="/2025/06/23/nvc-1-scicomm.html">my previous blog post</a>, where I reviewed “non-violent communication” (NVC). NVC offers a way to communicate and find common ground by separating observable facts and feelings, while avoiding judgments of other people. For this discussion, a “violent communication” is any statement that imposes an unwanted opinion on someone else, like telling someone that they’re always late or rude. As I discuss in that blog post, even compliments and unsolicited advice can be violent in some contexts.  Additionally, I discuss the importance of talking about feelings, something many of us are trained to avoid, whether from childhood or scientific and technical training.</p>

<p>Even if you’re not specifically interested in scientific paper reviews, this post may provide an interesting discussion of how thinking about feelings and communication can be useful in a technical context.</p>

<p>Here, I begin by describing two “violent” paper reviews that I’ve gotten in the past few years.  The second one was especially nasty, and spurred this blog post.  In this discussion, I’ll particularly focus on feelings—mine, and those of the reviewers.  In a few places, I will speculate on the reviewers’ feelings and identities in order to make my points; this would itself constitute violent communication, and so would not be conducive to finding common ground in a discussion with these reviewers.</p>

<h1 id="my-first-toxic-review">My first toxic review</h1>

<p>Since 2019, I’ve been submitting papers to journals in vision science (i.e., the psychology and neuroscience of human vision), far outside of my background in computer science. Overall, my experience has been extremely positive, both in the review process, and with the vision scientists who have given me feedback and guidance on manuscripts. Reviewers have often misunderstood my submissions and written skeptical reviews, sometimes even leading to rejection. This is normal and useful: these misunderstandings give clues about problems in the paper, which I can then fix. But there have also been some toxic reviews.</p>

<p>My very first vision science submission got one violent review, based on misunderstanding the manuscript. Here’s a typical paragraph from it (typos included):</p>

<table>
  <tbody>
    <tr>
      <td><em>Hertzmann solves the problem by considering a world with no shadows or highlights or colour patches. ….  There is a logical problem here. How can the observer imagine the world and optical structure of the Hertzmann conjecture without solving the problem of distinguishing the shadow, highlight and colour borders from the borders due to « edges « – ie the depth and slant changes ? In short, he assumes the solution in order to reach the solution. This is a classic error in reasoning.</em></td>
    </tr>
  </tbody>
</table>

<p>Here, the anonymous reviewer has failed to understand the argument in the paper, and then made incorrect statements about my argument. The reviewer states these wrong judgments with total confidence.</p>

<p>Moreover, the review refers to me by name, and calls my conjecture by my name (I had not given it a name in this version). This makes the entire evaluation personal—they make it <em>about me</em>—for no apparent reason.</p>

<p>This reviewer concluded with some advice:</p>

<table>
  <tbody>
    <tr>
      <td><em>Hertzmann shows good understanding of a wide variety of techniques for analyzing images. He could call on this knowledge to create a good description of the problem he is trying to solve, and to bring to bear novel, new ideas in the literature, to see how far one can get with these today. He might look up the research on lines and shadows, now some ten years old. …</em></td>
    </tr>
  </tbody>
</table>

<p>Perhaps the reviewer thought they were being helpful by giving a compliment and advice. But, in so doing, the reviewer implies that I am ignorant of the literature, that I do not know how to choose research problems, or how to write a paper. What sounds like well-meaning advice is offensive and insulting.  I want to say back: who are you to tell me what to do?</p>

<p>The paper was rejected. I revised it based on the questions and misunderstandings in the reviews, and resubmitted it a month later. After a few more rounds of revision, <a href="https://journals.sagepub.com/doi/abs/10.1177/0301006620908207">the paper was published in <em>Perception</em> in 2020</a>. I did <em>not</em> follow any of that reviewer’s bad advice.  The fact that the paper was published feels vindicating: the reviewer was wrong about the core of the paper, and wrong about me.</p>

<p>But the review should not have been about me.</p>

<p>It’s normal for a reviewer to not understand a manuscript’s argument; seeing how readers misunderstand a paper is invaluable information for authors. I work very hard to understand reviewer comments and I heavily revise papers accordingly.</p>

<p>But it is not okay that the reviewer judges me, personally. This is violent communication. Not only were their comments inaccurate, they are <em>toxic</em>: they hurt, emotionally. They are insulting and painful to read.  They make me want to respond with my own violent communication—to say that the reviewer is an asshole—and to discount everything they say.</p>

<p>Violent communication is counter-productive: it’s hard to learn much from someone when they are hurting you. (This is also one reason that <a href="https://avsab.org/wp-content/uploads/2021/08/AVSAB-Humane-Dog-Training-Position-Statement-2021.pdf">dog trainers have moved to training based on positive reinforcement</a> instead of negative reinforcement.)</p>

<p>Toxic communication harms the community. When editors allow these kinds of comments, they send the message that this kind of reviewing is acceptable, even encouraged. Toxic communication poisons open communication and can even drive people away, especially students.  I feel insecure about my ability to participate in vision science research, and feel like an outsider.</p>

<p>Authors need to feel safe to take the risk to submit papers. If, when I first submitted vision science papers, all the reviews had been like this, then I surely would have given up writing vision science papers entirely.</p>

<h1 id="my-second-toxic-review">My second toxic review</h1>

<p>My coauthors and I recently received a far worse review from <em>Journal of Vision</em>:</p>

<table>
  <tbody>
    <tr>
      <td><em>These are three fine questions, but the authors, 1) seem to be largely ignorant of the vast literature on visual appearance (and in particular image appearance) that has for decades addressed and made progress on these issues, and 2) seem to be so tied to the beliefs articulated in their conclusion that they have designed experiments (if they can actually be called that) that are a confused mess.</em></td>
    </tr>
  </tbody>
</table>

<p>That is, they called us ignorant and described our beliefs. Later in the review they continue to describe our mental states and beliefs. They later commented directly on our training:</p>

<table>
  <tbody>
    <tr>
      <td>A quick review of the authors’ CVs show that they variously have training in Computer Science and Art History. … The authors badly need help in [vision science].</td>
    </tr>
  </tbody>
</table>

<p>The reviewer concluded with:</p>

<table>
  <tbody>
    <tr>
      <td><em>I have reached the end of my patience with this paper. It is a poorly conceived, designed, executed, and documented monstrosity and I regret that I will never get back the time I have spent reading and reviewing it. [The first author] can be forgiven for their ignorance, but [the senior authors and another uninvolved person] know better and should be embarrassed that they authorized the submission of this manuscript.</em></td>
    </tr>
  </tbody>
</table>

<p>Here the reviewer again describes us as ignorant and makes baseless claims about how the paper was written. Moreover, the reviewer says we <em>should</em> feel bad.</p>

<p>This review made us very angry and upset, and we are still mad and hurt. Even writing this blog post has made me upset.</p>

<p>I shared the review with some vision scientist colleagues. They sympathized, and said they’ve received reviews like this too. They speculated that these kinds of reviews come from scientists who did a lot of psychophysics in the 70s and 80s and now feel left behind by the field.</p>

<p>We have a ton of work ahead of us, to revise and overhaul the work, making use of the information in the reviews. We will certainly try to prevent these misunderstandings in our revisions. But it’s so much more difficult to study this review when it makes us angry and upset.  As I said before, it’s hard to learn much from someone when they’re hurting you.</p>

<center><b>The role of feelings</b></center>

<p>The reviewer made many useful criticisms; our submission had many problems.  But that doesn’t explain the intense emotions that the reviewer expresses.  The review gives the impression of someone suppressing rage.</p>

<p>Here’s my theory: the reviewer misunderstood one of our claims about the literature—they thought we were saying that no one had worked on these problems. They took it as an insult to their work. Instead of recognizing their hurt (or questioning their interpretation, or even noticing that we actually did cite papers and books from this literature), they lashed out at us.  The genuine flaws in our manuscript just added fuel to the fire. But that’s just one possibility.</p>

<p>The harsh reviewer might think that they’re justified in being harsh—as if having a paper rejected, and seeing that it needs a major overhaul, isn’t negative feedback enough. I could imagine legitimate arguments that some papers should not be submitted. But, if that’s what they believed, then the reviewer should have made those arguments, instead of just giving emotional abuse.</p>

<p>I have speculated about the reviewer’s emotions, state, and background in order to make a point. In an actual conversation, this would all be violent communication, and thus not constructive to finding common ground.</p>

<p>A reviewer feels threatened or insulted by a paper, so they write an angry review attacking the authors. The reviewer feels bad and tells the authors that they should feel bad. The reviewer discusses the manuscript in terms of the authors’ thought processes, making the review unnecessarily personal, and sometimes insulting (especially when they are wrong). The authors in turn feel angry and upset by the review, and struggle to incorporate the feedback and figure out the next steps. This does not seem healthy for anyone.</p>

<h1 id="reviewing-guidelines">Reviewing guidelines</h1>

<p>I believe that scientific reviewing should include guidelines to prevent these kinds of reviews. The simplest principle is:</p>

<table>
  <tbody>
    <tr>
      <td><center>Reviews should be about the paper, not the authors.</center></td>
    </tr>
  </tbody>
</table>

<p>The reasoning is simple: reviews are judging the paper for acceptance. Nothing about the authors is relevant to this decision. Moreover, directly discussing the authors makes the process personal, and can often be insulting to authors. Even compliments can come off as personal judgments.</p>

<p>Most places that I have published do have explicit rules about this.  For example, <a href="https://s2025.siggraph.org/technical-papers-reviewer-instructions-ethics/">the SIGGRAPH policy states</a>:</p>

<table>
  <tbody>
    <tr>
      <td>Belittling or sarcastic comments, or comments on authors’ personalities, have no place in the reviewing process. Please evaluate the work, not the authors. The most valuable comments in a review are those that help the authors understand the shortcomings of their work and how they might improve it. …</td>
    </tr>
  </tbody>
</table>

<p>(I might have contributed to this; I don’t remember.) <a href="https://cvpr.thecvf.com/Conferences/2025/ReviewerGuidelines">The CVPR policy uses almost the same wording</a>. These policies are enforced by the papers committee: if a review violates the policy, the reviewer will be required to fix it before it is sent to the authors.</p>

<p>This seems like it’s obviously the right thing to do.  The primary purpose of paper reviewing is to decide whether to publish the paper, and the secondary purpose is to give useful feedback on the paper manuscript to the authors. Discussing the authors does not further either goal. Moreover, the reviewer does not have enough information to say anything useful and meaningful about the authors. Such statements would just end up being violent communication.</p>

<p>Moreover, I do think that this rule leads to better reviews, because it reminds reviewers to focus on the paper and its contents, and not get distracted by the authors’ identities or by judging their competence. (Blind review is even better for this.)</p>

<p>The whole point of peer review is to judge a paper on its merits, not the authors’ supposed merits.</p>

<p>For as long as I can remember, I have always followed a stricter rule for my own paper reviewing:</p>

<table>
  <tbody>
    <tr>
      <td><center>Reviews should <i>never</i> mention the authors.</center></td>
    </tr>
  </tbody>
</table>

<p>I don’t ever remember seeing violent reviews in computer science, and I don’t remember ever having to enforce the rule myself as an Area Chair/Editor. I think that simply having the guideline creates a more civil review culture.</p>

<p>One corollary I follow is to not address the authors directly (“you”). I often see reviews written in second person (“you did X, why?”). This, too, feels unnecessarily personal, mismatched to the primary purpose of the review: to inform the editor/committee/reviewers about whether to accept the paper.</p>

<p>Obviously, making review processes double-blind (as they are in CS) would also improve these reviews; non-blind reviewing has been shown to create biased reviews in many studies (e.g., <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2435.14259">1</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.1707323114">2</a>). It seems bizarre to me that some non-blind reviewing processes remain.</p>

<p>Exceptions can be made for certain cases, e.g., opinion and perspective pieces often reflect an author’s viewpoint and their identity may be relevant for these.</p>

<h1 id="reviewing-style">Reviewing style</h1>

<p>As a reviewer, how can you best implement these guidelines? I find it very easy: instead of “the authors say X,” I write “the paper says X.” I use passive voice when needed (e.g., “in the first experiment, participants were asked to …”). And, of course, I avoid any temptation to mention the authors’ own motivation, knowledge, background, and so on.</p>

<p>For example, I might restate the initial quote from my first toxic review (above) as:</p>

<table>
  <tbody>
    <tr>
      <td><em>As I understand it, the paper solves the problem by considering a world with no shadows or highlights or colour patches. … But there seems to be a logical problem here. How can the observer imagine the world and optical structure of the paper’s conjecture without solving the problem of distinguishing the shadow, highlight and colour borders from the borders due to « edges » – i.e., the depth and slant changes? It appears the paper assumes the solution in order to reach the solution.</em></td>
    </tr>
  </tbody>
</table>

<p>There’s also some added uncertainty here—maybe I misunderstood the paper?  It’s a lot more likely that a manuscript is unclear than that the authors genuinely mean something nonsensical.</p>

<p>If you really feel compelled to discuss the authors in reviews, then it’s worth seriously examining why, and whether doing so is actually beneficial.</p>

<p>This advice does contradict other advice I would give, like avoiding passive voice. In this case, I think avoiding violent communication is more important. Also, when writing about a paper that is published, I might mention the authors as saying/writing whatever they claim; it feels different when it’s a published paper versus an unpublished manuscript. My only rationalization for this is that a manuscript is just a draft, and the power relationship is different between an anonymous reviewer and an author of a submitted manuscript, versus the author of a published paper and someone commenting on it afterward.</p>

<p>One other factor is that I sometimes write theoretical papers with some first person in order to avoid passive voice (including that <em>Perception</em> paper), which might encourage reviewers to discuss the author, but I still argue that the review should be about the arguments.</p>

<h1 id="reviewing-and-non-violent-communication-nvc">Reviewing and Non-Violent Communication (NVC)</h1>

<p>The core argument of this blog post could be expressed in terms of the four steps of <a href="/2025/06/23/nvc-1-scicomm.html">NVC</a>:</p>

<ol>
  <li><strong>What I observed</strong> in the paper reviews I received</li>
  <li><strong>How I felt</strong> after receiving the reviews</li>
  <li><strong>My needs</strong>. I need to feel “safe” from attack when submitting work for publication, and I believe other authors do too.</li>
  <li><strong>My request</strong>: create and enforce reviewing guidelines not to mention authors in reviews.</li>
</ol>

<p>I also claim that the rule leads to better reviews and better reviewing culture overall.</p>

<p>I am not advocating the full use of NVC in reviews. In this post I have discussed <a href="/2025/06/23/nvc-1-scicomm.html">NVC</a> as a way of understanding (a) how one’s feelings influence the process, and (b) what <em>not</em> to include in a paper review, because it becomes counterproductive.</p>

<p>I think similar ideas can be helpful in other kinds of scientific and technical communication as well.</p>

<hr />

<p><em>Thanks to Rich Radke and Maneesh Agrawala for comments on this blog post, and to the vision scientists who have given me advice and commiseration on submitting to vision science journals.</em></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[In peer review for scientific and technical papers, reviews occasionally discuss the paper authors directly. In this post I argue that this behavior can be toxic: it harms the review process, the authors, and, potentially, the community as a whole. This particular problem can be prevented by one easy guideline, one already used in some communities: paper reviews should be about the paper, not the authors.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/wica/bradford_viewers.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Choices During Painting: A Case Study</title><link href="https://aaronhertzmann.com/2024/10/18/adrift.html" rel="alternate" type="text/html" title="Choices During Painting: A Case Study" /><published>2024-10-18T00:00:00+00:00</published><updated>2024-10-18T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2024/10/18/adrift</id><content type="html" xml:base="https://aaronhertzmann.com/2024/10/18/adrift.html"><![CDATA[<p>There’s a myth that an artist begins with an intent or inspiration for a picture, and then just executes on that vision. <a href="/2023/03/06/open-ended-exploration.html">I’ve written a lot about how this is generally false</a>: art and science often involve discovering goals during a process of exploration. Moreover, <a href="/2020/10/23/planning-and-strategy.html">the process of painting involves a continual series of choices</a>. But I’ve found it hard to give detailed examples of these points from my own painting. 
During the process, it’s hard to make decisions, harder to be aware of making these decisions, and even harder to remember them. And there’s something about a finished piece that makes it seem inevitable, erasing all the uncertainty that went into making it.</p>

<p>During my recent sabbatical, I painted a picture that can finally illustrate some of this decision-making, because I remember enough of the choices I made, and there were enough changes of direction to be interesting. This isn’t my favorite painting, but it’s hopefully sufficient to demonstrate the circuitous nature of painting.</p>

<p><strong>The main point here is: I only started with a vague idea of what the painting would look like, and I changed my mind about it several times during the process.</strong> Moreover, the final style was unplanned, but influenced by paintings I’ve seen in the past.</p>

<p>I did it while looking at this view out of a hotel window:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-photo-wide.jpg" width="50%" />
</center>

<p>Here’s the full timelapse of the finished painting that I want to discuss:</p>

<center>
<video width="640" height="480" controls="">
  <source src="../../../images/ipad_paintings/adrift/adrift-timelapse.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video></center>
<p><br />
<br /></p>

<p>On the previous day, I’d drawn this sketch:</p>

<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-sketch.jpg" width="50%" />
</center>

<p>As a subject for the picture, the building naturally drew my eye. On the second day, I had the same impulse to center the building. But I realized that I didn’t want to do that. The building is not that interesting; it’s the open space that’s interesting. So I made a conscious decision to focus on the open space.</p>

<p>So I started by drawing the horizon and the building:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame65.jpg" width="50%" />
</center>
<p>But even after this I decided to move the building further to the side. So I moved it and then started painting the beach:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame155.jpg" width="50%" />
</center>
<p>I noticed that I was drawing simple, relatively clean strokes, with the horizon being a set of stripes. I got interested in the idea of doing the whole painting like that, similar to, say, <a href="https://www.google.com/search?sca_esv=339a018dea4bc811&amp;sxsrf=ADLYWILiBBvT_8Kh1URPzdE6MSTzAcvj0A:1729216641991&amp;q=georges+schwizgebel&amp;udm=2&amp;fbs=AEQNm0Aa4sjWe7Rqy32pFwRj0UkWwAFG7ranuZ26H8lR7pf_8AzBs6lnFFuPH6eU3OV27QJxgWOC2pYAfK469oms6rmuT0DrgO-z8kQP827bqZHgXPifb8ex7SqiUCGoRuZWJXeY1uNxcoAMC_f-nAraj5xf8COdvqs9MKDO76-egULliOzk-nkuJHDhQ2SAgJJATfRK6uvL6BjX8gUQdJLF4CMqAwDmwQ&amp;sa=X&amp;ved=2ahUKEwjUy7396ZaJAxVZCjQIHVUZHnkQtKgLegQIEhAB&amp;biw=1020&amp;bih=967&amp;dpr=1">a Georges Schwizgebel animation</a> or Diebenkorn: solid geometric regions, rather than the more sketchy stuff I normally do.  Maybe I’d do the whole painting with this simple, geometric look.</p>

<p>Then I started filling in more of the mid-ground region. I composed three main elements in the center, selecting them from the real scene and distributing them horizontally on the canvas:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame333.jpg" width="50%" />
</center>

<p>Then I added some texture, enough to get a sense of what it would look like:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame690.jpg" width="50%" />
</center>
<p>It would have been easy to just spend all my time filling in those details. But at some point I realized it was enough for now—time to move to the foreground to finish the initial composition.</p>

<p>The real scene had a lot of texture and detail in the foreground. Of course I couldn’t draw all of it. I drew rough versions of some of the shapes in the real scene:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame932.jpg" width="50%" />
</center>
<p>Then I thought I’d draw lots of detailed grass texture, lots of little blades of grass:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame1131.jpg" width="50%" />
</center>
<p>This is a really different style from the simple geometric idea I’d had a few minutes earlier.</p>

<p>But after a while, painting all these blades of grass seemed too time-consuming and not worth it. So I pressed “undo” a bunch of times:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame1334.jpg" width="50%" />
</center>
<p>Then I started to use a much larger brush to draw the grass texture:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-frame1857.jpg" width="50%" />
</center>
<p>As I started to draw these textures, I became conscious of how different they were from what I was seeing. But they also reminded me of some David Hockney iPad drawings I’d seen. This gave me a sort of “permission” to continue: the feeling that maybe this was something to explore rather than avoid.</p>

<p>Then, the rest of the time was just filling things in. Here’s the final painting:</p>

<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-painting.jpg" width="50%" />
</center>

<p>Here’s the portion of the view that contains the elements of the painting:</p>
<center>
	<img src="../../../images/ipad_paintings/adrift/adrift-photo-crop.jpg" width="50%" />
</center>

<p>In the painting, you can see that <a href="/2024/10/16/perspective-as-arrangement.html">elements of the view are arranged differently on the canvas than in the photo</a>. The painting contains a few different styles from different phases of the process: the relatively abstract beach and sky, the moderately-textured mid-ground, and the most-textured foreground.</p>

<p>The final painting has more distinct shapes, more like discrete objects: caterpillars, single-celled organisms, paisley, or abstract-art forms. This style was not intentional or planned; it’s something I stumbled upon, and I wouldn’t have found it had I not spent time looking at David Hockney paintings sometime in the past.</p>

<p>None of these choices, or the styles within, were planned.</p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[There’s a myth that an artist begins with an intent or inspiration for a picture, and then just executes on that vision. I’ve written a lot about how this is generally false: art and science often involves discovering goals during a process of exploration. Moreover, the process of painting involves a continual series of choices. But I’ve found it hard to give detailed examples of these points from my own painting. During the process, it’s hard to make decisions, it harder to be aware of making these decisions, and even harder to remember the decisions. And there’s something about a finished piece that makes it seem inevitable, erasing all the uncertainty that went into making it.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/ipad_paintings/adrift/adrift-painting.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/ipad_paintings/adrift/adrift-painting.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How I Paint Space: Multiperspective Arrangement</title><link href="https://aaronhertzmann.com/2024/10/16/perspective-as-arrangement.html" rel="alternate" type="text/html" title="How I Paint Space: Multiperspective Arrangement" /><published>2024-10-16T00:00:00+00:00</published><updated>2024-10-16T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2024/10/16/perspective-as-arrangement</id><content type="html" xml:base="https://aaronhertzmann.com/2024/10/16/perspective-as-arrangement.html"><![CDATA[<p><em>Note: This blog post has been completely rewritten and published as a paper. Please read and/or cite the paper, as appropriate:</em></p>

<table>
  <tbody>
    <tr>
      <td>A. Hertzmann. Comparing Perspective in Drawings, Photographs, and Perception. <em>Art &amp; Perception</em>. 2025. [<a href="https://brill.com/view/journals/artp/aop/article-10.1163-22134913-bja10069/article-10.1163-22134913-bja10069.xml?ebody=abstract%2Fexcerpt">Paper</a>] [<a href="https://psyarxiv.com/pq8nb/">Preprint</a>]</td>
    </tr>
  </tbody>
</table>

<p>This post describes how I think about perspective <em>when I’m drawing and painting</em>.  By <em>perspective</em>, I mean creating a sense of 3D space in a 2D picture. <a href="https://en.wikipedia.org/wiki/Perspective_(graphical)">Classical theories</a> treat perspective as a specific rigid principle that doesn’t accurately describe how artists or human vision actually work.  Since the Modern Art era, a lot of art instruction (including my own) treats perspective as freeform, without any particular rules or principles. I think there’s more to say about it.</p>

<p>When I draw and paint, perspective is about <em>arrangement</em>: about arranging shapes on the page. It’s not about reasoning about projections from 3D to 2D. Moreover, drawings are <em>multiperspective</em>: different parts of the page correspond to different viewpoints or eye directions.</p>

<p>The approach I’m describing here is based on <em>observations of my paintings</em>. That is, I paint pictures and then, afterwards, notice patterns or trends in how my pictures came out. These observations have informed <a href="/2024/10/07/picture-perception.html">my theories of perspective</a>, and those theories have, in turn, led to more nuanced analysis of my own drawings, and influenced the new experiments I try when drawing. (It’s hard to know influences for sure; e.g., David Hockney’s body of work is inspiring, as well as <a href="/2024/09/09/dvc-multiperspective.html">the history of multiperspective photography</a> and Rob Pepperell’s writing.)</p>

<p>I’ve been thinking about this for a long time, but, for various reasons, I haven’t had pictures that illustrated these concepts well. Only since my recent sabbatical do I finally have some pictures that I think can illustrate how these principles emerge in my own drawings.</p>

<h1 id="wide-angle-versus-narrow-angle-depiction">Wide-angle versus narrow-angle depiction</h1>

<p>With one initial exception, all of the pictures I discuss in this post are <a href="/2024/06/10/perspective-distortions.html"><em>wide-angle depictions</em></a>. This means that any normal photograph of the field of view that I’m capturing would create distortions. For wide fields of view, <a href="/2024/09/09/dvc-multiperspective.html">artists and photographers often end up making different kinds of pictures</a> from the way normal linear-perspective photography works.</p>
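As an aside on the geometry (my own sketch, not part of the original post): in an ideal pinhole camera, a scene point at angle θ off the optical axis lands at image position x = f·tan θ, so the local magnification dx/dθ = f/cos²θ grows rapidly toward the edges of a wide-angle frame. That growth is the "distortion" referred to above: marginal objects get stretched relative to central ones.

```python
import math

def image_offset(theta_deg, focal=1.0):
    """Pinhole-projection image-plane offset for a ray at angle
    theta from the optical axis: x = f * tan(theta)."""
    return focal * math.tan(math.radians(theta_deg))

def local_magnification(theta_deg, focal=1.0):
    """Derivative dx/dtheta = f / cos^2(theta): how much a small
    angular feature is stretched at eccentricity theta."""
    return focal / math.cos(math.radians(theta_deg)) ** 2

# At the edge of a 100-degree field of view (50 degrees off-axis),
# features are stretched roughly 2.4x relative to the center.
ratio = local_magnification(50.0) / local_magnification(0.0)
print(round(ratio, 2))  # → 2.42
```

For a narrow field of view, cos²θ stays close to 1 across the whole frame, which is consistent with the narrow-angle sketch below aligning well with its photo.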

<p>In contrast, here’s a narrow field-of-view sketch that I drew in July:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/ragusa_duomo.jpg" width="320" />
	</figure>
</center>

<p>To my surprise, it aligned quite well to a photo taken at the same time:</p>

<center>
	<figure>
<p align="center">
		<img src="../../../images/ipad_paintings/ragusa_photo.jpg" width="320" />
		<img src="../../../images/ipad_paintings/ragusa_overlay.jpg" width="320" />
</p>
	</figure>
</center>
<p>despite the fact that I’d not been the least bit methodical about it:</p>

<center>
<video width="640" height="480" controls="">
  <source src="../../../images/ipad_timelapse/ragusa_timelapse.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video></center>
<p><br />
<br />
(The above photos are wide-angle, but, if one takes just the center portion, then the centers are narrow-angle.)</p>

<p>I find this drawing makes me believe more in <a href="/2024/10/07/picture-perception.html">narrow field-of-view as a “perceptually-accurate” depiction technique</a>. It’s not conclusive evidence—perhaps I’ve trained myself to draw this way, but I doubt it.</p>

<h1 id="photos-are-misleading">Photos are misleading!</h1>

<p>Before going into any more specific examples, I want to make a point that is important to me.</p>

<p>In each case, I will show a photograph that I took after drawing the picture.  But I really do not like showing the reference photos, because I think that <strong>the reference photos are misleading: they create such compelling and seductive illusions of reality that we mistake them for reality.</strong></p>

<p>I often find myself comparing the drawing to the photo and thinking that my drawing doesn’t look right <em>because it doesn’t look like the photograph</em>. Then I have to remind myself that <em>the photograph doesn’t look like what I saw</em>. <a href="/2022/03/17/photography-is-not-objective.html">Photographs are not objective</a>; they are the product of <a href="https://jov.arvojournals.org/Article.aspx?articleid=2783759">an enormous number of choices made by camera manufacturers to make good-looking pictures</a>. In wide-angle pictures, i.e., all the examples below, <a href="/2024/06/10/perspective-distortions.html">they systematically create misleading shapes</a>.</p>

<p>For example, here’s a quick sketch I made while looking out of an airline window, together with a photo taken at the same time:</p>

<center>
<figure>
	<p align="center">
		<img src="../../../images/ipad_paintings/vueling.jpg" height="240" />
		<img src="../../../images/ipad_paintings/vueling_photo.jpg" height="240" />
	</p>
</figure>
</center>

<p>Bright sunlight shone right in my eyes, reflecting off the tarmac and suffusing everything in view. My painting shows this, whereas the photograph makes most of the scene look rather dark.</p>

<p>In each case, my painting and the photograph are two different ways to depict what I saw. Neither is “right” or “wrong.” There are lots of good reasons to prefer the photograph, or to describe it as “more accurate,” but the photograph is misleading in lots of ways. It is not the “ground truth” scene against which the drawing should be measured.</p>

<p>When I write about how the painting and the photo compare, I constantly find myself about to say something like “I’ve made the picture brighter than the photo” as though the photograph is the starting point, or the “true” picture. In writing this blog post, I’ve had to make a conscious effort to avoid this.</p>

<h1 id="morning-coffee-composition-and-simplification">Morning coffee: Composition and simplification</h1>

<p>I’ll begin this story with a relatively simple example, of my morning coffee while trying an Oxford coffee shop that my friend Amanda had recommended:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/broche.jpg" width="320" />
	</figure>
</center>

<p>For comparison, here’s a photo that I took at the same time.</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/broche-photo.jpg" width="320" />
	</figure>
</center>

<p>In the photo, we see a complex space, with a bunch of different objects and elements. In my drawing, I’ve simplified the scene to have a three-part design: the road on the left, the wall on the right, and the triangular floor-plus-table on the bottom. I’ve clearly left out many of the objects and details, e.g., the different cars and the trash can. It’s almost like three separate paintings, arranged in a 2D design.</p>

<p>Some of these choices are the result of <a href="/2020/10/26/time-and-speed.html">time constraints</a>: it’s simply too time-consuming to try to draw every detail. But it’s also <a href="/2020/09/15/painting-in-karies.html">not necessarily better to do so</a>—if you want every detail, take a photo.</p>

<p>Another common theme here—<a href="/2022/02/28/how-does-perspective-work.html">as well as in many professional artists’ work</a>—is the way that the ground slopes away from the viewer much less than in the wide-angle photograph, making distant objects larger in the drawing than in the photograph.</p>

<p>I also find that my drawings of objects (like coffee mugs) become fronto-parallel unless I really consciously work against it (and I rarely do).</p>

<p>In these latter two aspects, I see similarities to some of Matisse’s studio paintings, e.g.,:</p>

<center>
	<figure>
		<img src="../../../images/arthistory/matisse.jpg" width="320" />
	</figure>
</center>

<p>It seems like Matisse drew each object as a separate fronto-parallel portrait and placed them on a tilted plane, and I feel like my sketches and drawings often look like that too.</p>

<p>The same thing appears in some pre-Renaissance perspective, like <a href="https://www.metmuseum.org/art/collection/search/459131">this 14th-century painting</a>.</p>

<h1 id="railways-and-roads-simplification-and-foreshortening">Railways and roads: Simplification and foreshortening</h1>

<p>A few days later, I painted this picture, on a bridge over a railway by the Oxford train station:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/railway.jpg" width="320" />
	</figure>
</center>

<p>And here’s a photo that I took at the same time (the original; a cropped version appears further below):</p>

<center>
	<figure>
<p align="center">
		<img src="../../../images/ipad_paintings/railway_photo.jpg" width="320" />
</p>
	</figure>
</center>

<p>Again, comparing the two, you can see different tilts of the ground plane. Again, I did not consciously reason about this, I simply attempted to draw what I saw. You can also again see a considerable simplification of the contents, segmenting the scene into just a few distinct regions.</p>

<p>Interestingly, if you crop out the relevant part of the photo, and scale it to match the size of the canvas, it looks pretty similar to the painting. This would suggest that the canvas size itself is an important factor:</p>

<center>
	<figure>
<p align="center">
		<img src="../../../images/ipad_paintings/railway_photo_crop_rescale.jpg" width="320" />
</p>
	</figure>
</center>

<p>Here’s a more complex example, of Calle de Alfonso I in Zaragoza, where distant objects appear larger in my drawing than in the photograph, <a href="/2022/02/28/how-does-perspective-work.html">a common theme in perception versus photography</a>. Many objects are removed as well, and the drawing is much lighter—it was a very bright, sunny day.</p>

<center>
	<figure>
		<p align="center">
		<img src="../../../images/ipad_paintings/calle_de_alfonso.jpg" width="45%" />
		<img src="../../../images/ipad_paintings/calle_de_alfonso_photo.jpg" width="45%" />
	</p>
	</figure>
</center>

<center>
	<figure>
	</figure>
</center>

<h1 id="natural-history-museum-complex-3d-space">Natural History Museum: Complex 3D space</h1>

<p>Now I’m going to change gears a bit, to discuss more fine-grained 3D compositional choices. Here’s a picture that I drew in the Oxford Natural History Museum:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/natural_history_museum.jpg" width="320" />
	</figure>
</center>

<p>And here’s a photo that I took at the same time:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/natural_history_museum_photo.jpg" width="320" />
	</figure>
</center>

<p>There’s a lot to say about the changes that I made while drawing, both consciously and unintentionally. As you can see, the actual space is enormously complex and full of details, so I couldn’t draw everything. The photograph is misleading, in fact, in that it downplays just how many fine-scale details there are. When you’re in this museum—or any space—<a href="/2024/05/09/illusion-of-awareness.html">you can’t even truly see all the details around you</a>.</p>

<p>Many of the changes can be described in terms of the objects in the space. I reduced the number of pillars. There are lots of little details that I removed, like rows of bricks.</p>

<p>I wanted the dinosaur skeleton to be more prominent than it would have been, so I moved it and made it bigger relative to the pillars near it.</p>

<h1 id="la-seo-simple-but-detailed">La Seo: Simple but detailed</h1>

<p>Here’s my favorite picture from my sabbatical, drawn in <a href="https://en.wikipedia.org/wiki/Cathedral_of_the_Savior_of_Zaragoza">La Seo</a>:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/laseo.jpg" width="320" />
	</figure>
</center>

<p>Unfortunately, I didn’t take a photo at the same time. From looking online recently, it seems like the object in the lower left might be <a href="https://es.wikipedia.org/wiki/Retablo_mayor_de_la_Seo">the main altarpiece</a>.</p>

<p>When I first walked into this cathedral, the idea of depicting it seemed overwhelming. The spaces are vast and filled with details. When I search Google for photos of the cathedral, the results include a lot of extreme wide-angle shots.</p>

<p>In this picture I found a way to create a composition that seems comprehensible, organized around an arc, with additional details filling it out. Lots of details were removed: rows of bricks, fine filigree work, and so on.</p>

<h1 id="putting-pictures-together-compositional-multiperspective-space">Putting pictures together: compositional multiperspective space</h1>

<p>As I look over wide-angle pictures I’ve drawn over the past few years, I start to see them in terms of multiperspective compositions. And now, when I draw them, I think of them this way as well. That is, I’m consciously arranging parts of the scene each with their own perspective. Sometimes this approach can create an effective sense of 3D space, and other times the multiperspective nature is more visible.</p>

<p>Here’s a painting from a few years ago. Again, not my favorite drawing, but one that I think illustrates the main ideas:</p>

<center>
	<figure>
		<p align="left">
		<img src="../../../images/ipad_paintings/columbia_city.jpg" width="320" />
		<img src="../../../images/ipad_paintings/columbia_city_photo.jpg" width="320" />
	</p>
	</figure>
</center>

<p>Let me try to explain what I think is going on here:</p>

<ol>
  <li>I wanted to capture a wide vertical angle, spanning from the coffee (cortado) on the bottom, to the sky above. The photograph captures this same wide angle, but not in a way that matches perception.</li>
  <li>My drawing partitions that vertical range of space into a few zones, from bottom to top:
    <ul>
      <li>The table, and on it, the coffee</li>
      <li>The sidewalk with the coffeeshop’s sign</li>
      <li>The street intersection</li>
      <li>The buildings on the street</li>
      <li>The street and stop lights</li>
    </ul>
  </li>
  <li>The sizes of these zones are (a) proportional to perceptual sizes, but (b) squeezed to fit into the canvas. There’s no sense in which they’re “metrically” accurate.</li>
  <li>The zones fit next to each other, and the streets are continuous. Sometimes long shapes (like streets) bend in order to fit.</li>
  <li>Objects are drawn as “head-on”, rather than with perspective distortions. For example, linear perspective makes the vertical lines angled whereas they’re vertical in my drawing.</li>
</ol>

<p>If these points aren’t clear in my drawing, they may be clearer in some of the other examples below.</p>

<p>Something similar happened in this drawing from 2021, where the road slopes away from the table:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/mannys.jpg" width="320" />
	</figure>
</center>
<p>Unfortunately, I didn’t take a photo to go with it.</p>

<p>Sometimes the multiperspective nature is apparent, such as this 2021 drawing from a friend’s backyard:</p>

<center>
	<figure>
<p align="center">
		<img src="../../../images/ipad_paintings/backyard.jpg" width="55%" />
		<img src="../../../images/ipad_paintings/backyard_photo.jpg" width="30%" />
	</p>
<figcaption></figcaption>
	</figure>
</center>
<p>On the right is a photo taken with the iPhone panorama mode. I rotated the camera in panorama mode, so the picture on the right includes different viewing directions.
The eave over my head (on the top of the pictures) is shown as viewed from below, looking upward, whereas the lawn in the middle of the picture appears as viewed horizontally.</p>

<p>In the panorama, the changing view direction causes some straight lines to be curved. But this doesn’t happen in my drawing: in the drawing, the eave is a separate zone from the rest of the picture.</p>

<p>The ground plane in my picture also shows the same fronto-parallel tendency, as compared to the photo.</p>

<p>There are numerous other differences in the sizes and slopes of objects between the two pictures, some of which may also reflect differences between perception and photography.</p>

<p>For the record, here’s a wide-angle linear-perspective photo taken at the same time; it seems similar to the panorama in the regions it captures.</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/backyard-wide-angle.jpg" width="320" />
	</figure>
</center>

<p>Here’s a more subtle example:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/fourbarrel.jpg" width="320" />
	</figure>
</center>

<p>In this picture I see several zones/objects: the ceiling, the coffee bar in the middle of the picture, the windows in the back—each zone seems to be depicted with its own perspective, and the zones fit together into a coherent picture. Maybe I should go back and take a photo here to show how different it looks. Until then, here’s a photo I found online that looks like it was taken a few feet to the left:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/fourbarrel-photo.jpg" width="320" />
	</figure>
</center>

<p>One last drawing, which I made on a long car ride:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/backseat.jpg" width="320" />
	</figure>
</center>

<p>The backseat view is very wide-angle, and here I’ve squeezed a lot of stuff into a little space, arranging cartoon-like people and seats into a small picture. The view downward to my iPad on my knees is a different view direction from the view forward to the road.</p>

<h1 id="between-theory-and-practice">Between theory and practice</h1>

<p>Last year, I gave a talk on <a href="/2024/10/07/picture-perception.html">my perspective theories</a>, and someone asked how these theories had changed how I draw. It’s an interesting question, and hard to know for sure.</p>

<p><strong>Photographic correctness.</strong> First and foremost, I think that my earliest thoughts about perspective (and reading from <a href="https://brill.com/view/journals/artp/4/1-2/article-p1_1.xml">Pepperell and Koenderink</a>) freed me from notions of photographic correctness.  I never believed there was a “right and wrong,” but still I found I had some latent belief in the rightness of photos, like any deviation from photography is an artistic choice. I’ve <a href="/2022/03/17/photography-is-not-objective.html">written about this previously</a>.</p>

<p><strong>Knowing the deviations of photography.</strong> Occasionally I do draw pictures based on photos that I’d taken previously. For example, I took a photo on a beach, and then came home and painted this picture from the photo:</p>

<center>
	<figure>
		<p align="center">
		<img src="../../../images/ipad_paintings/ocean_beach_photo.jpg" width="45%" />
		<img src="../../../images/ipad_paintings/ocean_beach.jpg" width="45%" />
</p>
	</figure>
</center>
<p>While I couldn’t remember exactly what I’d seen on the beach (due to the fragmentary nature of vision), I knew from experience that the building in the distance would have seemed bigger than in the photo. I knew that, had I been able to do the painting while standing on the beach, I would have painted the building larger. And so the decision to make it bigger in the painting, too, felt easy and natural.</p>

<p>In a sense, making it bigger seemed more “perceptually correct,” even though I still don’t believe in correctness.</p>

<p><strong>Thinking of composition as multiperspective arrangement.</strong> As I’ve mentioned throughout this blog post, I now think of painting as multiperspective arrangement. I often consciously think about moving things in pictures left or right, or making them bigger or smaller, as choices for making the picture better. Maybe I was always doing it, but now it’s much more of a conscious set of choices, made more permissible by theoretical foundations.</p>

<hr />

<p><i>Thanks to Alyosha Efros for comments.</i></p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[Note: This blog post has been completely rewritten and published as a paper. Please read and/or cite the paper, as appropriate:]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/ipad_paintings/ocean_beach.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/ipad_paintings/ocean_beach.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Learning from Painting, Part 7: Finding Inspiration, Six Years Later</title><link href="https://aaronhertzmann.com/2024/10/14/six-years-later.html" rel="alternate" type="text/html" title="Learning from Painting, Part 7: Finding Inspiration, Six Years Later" /><published>2024-10-14T00:00:00+00:00</published><updated>2024-10-14T00:00:00+00:00</updated><id>https://aaronhertzmann.com/2024/10/14/six-years-later</id><content type="html" xml:base="https://aaronhertzmann.com/2024/10/14/six-years-later.html"><![CDATA[<p>In 2019, I started painting and drawing on my iPad frequently, especially during a month-long sabbatical in Europe, in which I often spent many hours just drawing. When I got home, <a href="/2020/10/05/art-is-a-process.html">I wrote a series of blog posts reflecting on those experiences</a>. Those experiences—and the reflections from them in my blog—<a href="/2024/08/19/journey.html">have driven my research since then</a>.</p>

<p>Over the years, my motivation to keep drawing has waxed and waned, for various reasons. I feel like I <a href="/2024/06/21/judgments.html">“should”</a> keep drawing, but often lack the drive.</p>

<p>I’ve recently finished another month-long sabbatical, including a lot of time traveling in Europe, where I again spent many hours drawing each day.</p>

<p>So here’s Part 7 in the series: finding inspiration, six years later.</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/drawing_pilar.jpg" />
		<figcaption><i>My friend Alyosha commented on this photo: "The Wikipedia page under 'sabbatical' should have this picture!"</i></figcaption>
	</figure>
</center>

<h1 id="still-at-it">Still at it</h1>

<p>I never knew if I would keep drawing and painting.  A lot of the ebb and flow of whether I keep drawing depends on how excited or inspired I feel, and that has varied a lot.</p>

<p>When I first picked up the stylus in 2019, I thought it was really fun. I didn’t pursue it immediately, other than a few idle sketches in an airport:</p>

<center>
	<figure>
   <p float="left">
		<img src="../../../images/ipad_paintings/airport_plant.jpg" width="30%" /> <img src="../../../images/ipad_paintings/airport_iphone.jpg" width="30%" />
	</p>
	</figure>
</center>

<p>Then, on the first day of my 2019 sabbatical, I did some sketching and found it so rewarding that I made it a conscious plan to put a lot of my sabbatical time into drawing and painting, and kept at it for the whole month.</p>

<p>I thought that maybe I’d lose interest when I got back home. I didn’t.  The first morning when I got home I took a photo on my morning walk and immediately went and <a href="https://www.instagram.com/p/B6KG6b4pp4D/?img_index=2">painted a picture from it</a>. My experiences and compulsions to paint and draw had come home with me, and coming home had provided a whole new set of things to try drawing with my new skills in tow.</p>

<p>Drawing has to feel a bit new each time, or I lose interest; I don’t want to feel like I’m doing something too predictable, or too similar to something I’ve done before. Nor do I want to try things that are too difficult.  The longer I spent around my neighborhood, the harder it became to find inspiration, especially during the 2020 pandemic lockdowns.  At first, while stuck at home, I drew still lifes of things around my house:</p>

<center>
	<figure>
   <p float="left">
		<img src="../../../images/ipad_paintings/yak.jpg" width="240" />
		<img src="../../../images/ipad_paintings/pilea_flat.jpg" width="240" />
	</p>
	</figure>
</center>

<p>and the sights while having drinks with friends in the parks or on our patios:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/hibiki.jpg" width="240" />
	</figure>
</center>

<p>In December 2020, I went on a hike with friends, on the eve of a new lockdown mandating that we not see other people for a while, and I drew this sketch of the SF skyline, just using my finger on my phone:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/skyline2020.jpg" width="320" />
	</figure>
</center>

<p>But, after enough drawings of still lifes, hikes, patio dinners, and a few other common themes, I ran low on inspiration at home and did not draw much; even the sights on our regular city hikes became too familiar.</p>

<p>After vaccination,</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/amanda_vax.jpg" width="320" />
	</figure>
</center>

<p>I started traveling again, which offered even more new sources of inspiration for a few years.</p>

<p>By 2023, however, I felt the inspiration ebb, until I started volunteering at an animal shelter, and found new motivation in drawing and sharing pictures of the dogs I worked with:</p>

<center>
	<figure>
		<p align="center">
		<img src="../../../images/ipad_paintings/penny.jpg" width="45%" />
		<img src="../../../images/ipad_paintings/bernardo.jpg" width="45%" />
	</p>
	<figcaption><i>❤️ Penny and Bernardo</i></figcaption>
	</figure>
</center>

<p>(<a href="https://www.blurb.com/bookstore/invited/10173468/a72d12f01ce73d7fd7d284a930d192e4d4cb11c8">Here’s a book I made of last year’s dog drawings</a>; perhaps it deserves its own blog post.) The dog pictures are always drawn from photos. And, while drawing from life is important to me, <a href="">I find drawing from photos to be useful too,</a> as in the vaccination photo above. Animals don’t sit still for portraits.</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/ziggy.jpg" height="320" />
	</figure>
</center>

<p>But after over a year of drawing shelter dogs, I’ve lost momentum there too.</p>

<p>Sketching beverages has somehow kept my interest, especially my morning espresso, and especially while traveling. It’s become a warm-up for days when I’m drawing a lot; it’s in a “sweet spot” of being visually interesting (to me), but not extremely complicated. There’s lots of complexity that can be drawn but needn’t be.</p>

<center>
	<figure>
   <p float="left">
		<img src="../../../images/ipad_paintings/barefoot2019.jpg" width="45%" />
				<img src="../../../images/ipad_paintings/odessa.jpg" width="45%" />
			</p>
<figcaption><i>A coffee from 2019, and one from 2023.</i></figcaption>
	</figure>
</center>

<p>On my recent sabbatical, I thought it would be interesting to go back and draw pictures at places in Oxford where <a href="/2020/10/05/art-is-a-process.html">I’d had some memorable drawing experiences</a> in my first sabbatical. But when I got to them I never found much desire to try drawing them again.</p>

<h1 id="reflections">Reflections</h1>

<p>Finding the excitement, inspiration, and space to draw is a continual challenge. It’s partly the sense that there’s something <em>interesting</em> about trying to draw this thing: it’s not too “easy” for me, or too difficult, but enough of a stretch.  Even when I have the time, it’s sometimes a tough decision whether to start—if I start on this, will it work out? Will it be better than something else I might find to draw instead, or doing something else, like, say, going for a walk or reading a book?</p>

<p>Over the years, these experiences with drawing have been a bit of a positive spiral: returning to familiar subjects, but only after I’d learned something from the previous iteration. But it only seems that way in retrospect; in the moment, it often feels like a difficult search for motivation, inspiration, and something worthwhile. I’ve left a lot of false starts and disappointing drawings out of these pages.</p>

<p>My painting experience in 2019 was amazing, full of new discoveries; this current one has been more incremental.  <a href="/2021/08/17/learning-skills.html">This fits my experience in learning other skills: you start out learning a lot quickly and being very excited, and then the pace slows as learning deepens</a>.</p>

<center>
<figure>
<img src="https://aaronhertzmann.com/images/skill_curve_plateau.jpg" alt="Curve of 
increasing skill, novice to excited to plateau to expert" />
<figcaption><i><a href="../../../2021/08/17/learning-skills.html">How it feels to learn a new skill</a></i></figcaption>
</figure>
</center>

<p><strong>A big factor is the responses from friends and acquaintances.</strong> Friends were very supportive of my initial scribblings; without that support, I might have stopped at several points along the way. <a href="/2021/03/22/art-is-social.html">I still maintain that art is fundamentally a social phenomenon</a>. When drawing dogs, positive feedback from that community and the sense of connection keeps me going, as well as wanting to spend more time with the pictures of the dogs themselves.  Positive feedback from acquaintances and colleagues helps too, and I do notice it, and it doesn’t all have to come from one place. For example, I’ve noticed that I’m not getting many “likes” on social media, and then, at conferences, colleagues tell me that they love my drawings on social media even if I’ve never seen them once “like” my posts. If all of these sources of feedback dwindled, then maybe I’d stop.  I’m continually grateful to everyone who offers support.</p>

<p><strong>My best new experience on this sabbatical</strong> came from drawing longer, larger pictures. In Fall 2019, I tried making one very-involved, large scale painting. I sketched it on-site and then painted in details over many hours, using a photographic reference. I was ultimately disappointed in <a href="https://www.instagram.com/p/B6bLjqdJlz1/?img_index=1">how it came out</a>.</p>

<p>One day during this year’s sabbatical, I spent the day wandering around, working in cafes and sketching in plazas. Later in the day, I wandered into a cathedral and, after exploring the place, sat down to begin a big drawing. The place was peaceful, with choral music lightly playing in the background; not too many people around, and no feeling of pressure to get things done quickly. Most of my drawings are relatively quick sketches, but I ended up spending 70 minutes working on this one:</p>

<center>
	<figure>
		<img src="../../../images/ipad_paintings/laseo.jpg" />
	</figure>
</center>

<p>I was very happy with it. When I walked out of the church, I felt serene.</p>]]></content><author><name>AaronHertzmann</name></author><summary type="html"><![CDATA[In 2019, I started painting and drawing on my iPad frequently, especially during a month-long sabbatical in Europe, in which I often spent many hours just drawing. When I got home, I wrote a series of blog posts reflecting on those experiences. Those experiences—and the reflections from them in my blog—has driven my research since then.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://aaronhertzmann.com/images/ipad_paintings/laseo.jpg" /><media:content medium="image" url="https://aaronhertzmann.com/images/ipad_paintings/laseo.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>