Book Reviews on Consciousness and Intelligence
Here are some responses to a few books I’ve read recently on intelligence and consciousness in animals, including humans.
Until recently, I’ve largely avoided scientific study of these topics, especially consciousness studies. Judging from the neuroscientists and philosophers that I follow on social media, there seems to be no scientific consensus at all about it. One survey paper I saw recently listed over 100 different scientific theories of consciousness, all of them “in play” enough to be worth surveying.
Then I got interested, for a few reasons. First, my research on pictures led me to topics that significantly overlap with consciousness studies. Second, my work on whether computers could be artists touches on the nature of computer intelligence. Third, some colleagues heartily recommended books on these topics. I read several and am glad that I did.
In the back of my mind, when reading these books, is the topic of artificial intelligence. More on this below.
Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal
The author describes the many forms that animal intelligence takes, and how scientists have consistently underestimated it.
“Intelligence” has often been treated as binary: either you have it or you don’t, and only humans have it. Moreover, in the traditional view, there is a single ladder of evolution, with humans at the top. He argues instead (and I believe him) that “intelligence” is not one single thing, but a range of skills. Nor is there a single “ladder”: the evolutionary tree is not a ladder, and some animals have greater skills than we do at some tasks, but that doesn’t make them more intelligent.
He spends considerable time on primates, since that’s his research area; much of the book describes how chimpanzees can perform various complex tasks of prediction and planning, as well as of social awareness and behavior, that had previously been attributed only to humans.
One tidbit that spoke to me was his assertion that, while historically “scientific” definitions of intelligence have focused on logic and reasoning, emotional intelligence is far more important to survival, and far richer and more difficult to define or understand. Social animals need emotional and social awareness more than they need language.
Metazoa by Peter Godfrey-Smith
In a short seminar talk, Peter Godfrey-Smith unlocked my interest in consciousness as a scientific concept.
He described octopus intelligence as one that, on the one hand, looks very different from our own—their brains are toroidal—but, on the other hand, exhibits many of the properties that we associate with consciousness. Other animals exhibit elements that seem conscious as well. When I asked whether this means that “consciousness” isn’t one single thing that you either have or you don’t, he agreed, and said that when he tries to point this out to other philosophers, some of them get really upset.
After the talk, I started reading this book. It goes through a range of different levels of animal consciousness and how it may have evolved.
Is a dog conscious? Is a prawn conscious? I come away from this book believing that the answer isn’t yes or no, but that these different animals have some elements of consciousness like ours, and lack some aspects of our consciousness. Prawns have something like minimal consciousness, with little awareness of the world and none of themselves. Dogs have a richer awareness: emotional states, awareness of other individuals and their states, and richer mental models of the world. But they lack some aspects of human consciousness, such as self-reflection, the ability to reason about the future, and, more generally, the capacity to formulate concepts or symbolic language.
In a way, the book is an excellent companion to the Frans de Waal one, because both replace a binary concept with one that varies across animals.
But I gave up halfway through the book, because I wasn’t enjoying it enough.
From Bacteria to Bach and Back by Daniel Dennett
I really enjoyed this book on the evolution of minds, which several colleagues had recommended. It’s an ambitious plan, outlining evolution from bacteria to human intelligence, and I learned a lot. The first 100 pages were enjoyable even though I didn’t feel like I was learning much; in hindsight, they stake out one key idea: competence without comprehension. The idea is seemingly obvious but very important: organisms, brains, and people can excel at tasks without understanding them; true understanding is rare, and limited to people.
His theories of cultural evolution and memes seemed most profound to me. I might have previously thought that we evolved language, then culture, but he reverses the order. Memes are, in their original definition, any patterns of behavior that can evolve through selection, and he uses words as the prototypical memes. I was completely persuaded by the evolutionary arguments for memes, including words: they may not be transmitted through DNA, but all the properties of natural selection (variation, heredity, and selection pressure) apply. So basic memes appeared, and we could transmit them—simple words, basic behaviors—which amounts to culture. From simple words, more complex concepts and words appeared, and then language.
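As a toy illustration of that point (my own sketch, not anything from the book), here is a little Python simulation in which hypothetical word variants replicate with occasional mutation, and easier-to-transmit variants spread through a population of speakers. The variants and their “transmissibility” scores are entirely made up; the point is only that selection dynamics need no DNA.

```python
import random
from collections import Counter

# Hypothetical word variants with made-up "transmissibility" scores.
VARIANTS = {"over-morrow": 0.2, "to-morrow": 0.6, "tomorrow": 0.9}

def step(population, mutation_rate=0.01):
    """One generation: each speaker copies a word, weighted by fitness."""
    weights = [VARIANTS[w] for w in population]
    next_gen = random.choices(population, weights=weights, k=len(population))
    # Mutation: occasionally a speaker adopts a random variant instead.
    return [random.choice(list(VARIANTS)) if random.random() < mutation_rate
            else w for w in next_gen]

population = [random.choice(list(VARIANTS)) for _ in range(1000)]
for _ in range(50):
    population = step(population)

print(Counter(population))
```

In this toy, the highest-transmissibility variant takes over in nearly every run, which is all “selection without DNA” requires: variation, copying, and differential spread.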
So he then puts language before intelligence and consciousness: first, animals communicated in simple ways; then, as communication became more complex, it became language; then we became able to talk about ourselves, which led to reflexivity (being able to observe our own inner states). At the same time, perception of the world is an “illusion”—in that things are not intrinsically “sweet” or “red”; these are constructs that exist only in perception. Consciousness is the same sort of illusion: like our illusory perception of the world, but directed at our own inner states.
I found the discussion of consciousness more confusing than earlier parts of the book, with fewer concrete examples and more difficult jargon; I’m not sure I’ve understood it or accurately summarized it. The book also felt repetitive at times; with some editing I bet it could be 100 pages shorter. But I’m still very glad I read it and it has already affected my thinking.
What does it mean for art?
I immediately found Dennett’s presentation useful for thinking about art, because it shows how biological and cultural evolution may be coupled, and, moreover, how culture evolves without guidance. The evolution of art—beginning in our evolutionary history, but continuing to the present—could be much like the evolution of language: a set of behaviors passed on, then affecting our biological evolution, and then evolving as behaviors without (initially) any conscious decision-making. Like language, art builds upon our physical constraints—we can’t use words in registers that we can’t hear, and we can’t perceive paint colors in light spectra that we can’t see—and the analogy could go much deeper into the particular kinds of art that we make and appreciate. Some visual artistic styles are easier to understand than others (and require less training and cultural fluency) when they are closer to our visual experience in the natural world.
Of course, the implications for art also depend on the implications for AI, since, if we could make AI that are “equivalent” to people, then we could make artificial artists. (Dennett even claims that an algorithm that created new things would be considered an artist, which I think has already been proven false.)
I find that Dennett’s framing of “strange inversions”—design comes from evolution; language comes from words, which come from behaviors—resonates with a lot of things I think about. We think of goals as coming at the beginning of creative processes, but often the goals come at the end. The common theme is that we often think of designs or goals as arising top-down, when things actually happen bottom-up, and then we retroactively interpret and structure them in a top-down way.
What does it mean for AI?
My fear with these books is that, read shallowly, they could be taken by people who argue that AI is intelligent or conscious as support for Artificial General Intelligence (AGI). (AGI has no fixed meaning, but here I’ll use it to mean “human-level intelligence” and/or consciousness.)
For example, Frans de Waal does not discuss AI at all. But when I read his defenses of animal intelligence against human-centric chauvinism, I can easily imagine AGI proponents seizing upon them: “See? We underestimate AI the same way we underestimate animals. It’s just more human chauvinism.” They could even reuse his book title: “Are we smart enough to know how smart AI is?”
Any time one describes the elements of intelligence, consciousness, or art-making, it becomes possible to create an algorithm that seems to have those elements. If consciousness is about perceiving one’s own mental states, then one could build a system that does that. But we’re not at a point where we can know whether that’s all there is to it; consciousness and intelligence remain mysterious, and so does art-making.
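To make that concrete, here is a deliberately trivial sketch (mine, and purely hypothetical): a program that “perceives” and reports its own internal state. It satisfies a naive reading of that one element of consciousness while plainly lacking anything like a mind; that is exactly why mimicking any single described element proves so little.

```python
# A trivial "agent" that monitors and reports its own internal state.
# It meets a naive reading of "perceives its own mental states"
# without anything resembling consciousness.

class SelfMonitoringAgent:
    def __init__(self):
        self.state = {"goal": None, "confidence": 0.0}

    def act(self, goal):
        # Update internal state while "working."
        self.state["goal"] = goal
        self.state["confidence"] = 0.5  # placeholder internal signal
        return f"working on {goal}"

    def introspect(self):
        # "Perceiving" its own state: reading it back and describing it.
        return (f"My current goal is {self.state['goal']!r} "
                f"(confidence {self.state['confidence']:.1f}).")

agent = SelfMonitoringAgent()
agent.act("summarize a book")
print(agent.introspect())
```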
Here’s a nice explainer summarizing the state of philosophical theories of consciousness, including some heavy-hitters. It says that it’s still too early to really have a sense of what consciousness is: “Like biology before the theory of evolution, neuroscience is ‘pre-paradigmatic.’ You can’t say where consciousness can and can’t arise if you can’t say what consciousness is.”
Shedding binaries
These books ought to lead the reader away from one of the biggest flaws in AI hype: the idea that “intelligence” and “consciousness” are binary, that either you have them or you don’t. That our current algorithms excel at some human-derived measures of intelligence does not mean they have the same intelligence as us. The ability to generate lots of plausible text does not indicate the presence of other aspects of intelligence (as demonstrated by certain narcissistic politicians). Indeed, I first learned about de Waal’s book from an excellent rant on this topic by Jitendra Malik.
(One tidbit I liked from Dennett: he cited an early machine-learning paper in which an algorithm scored more highly than humans on one test of intelligence, grading essays, even though the algorithm used only histogram features. I plan to look this citation up.)
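For flavor, here is a toy sketch of the general idea (my guess at the flavor of such a method, not the actual cited paper): grade essays with nothing but word-count histograms fed into a linear model. It assumes scikit-learn is available, and all the data below is invented.

```python
# Toy essay grader: word-count histograms + linear regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

essays = [
    "The theme of the novel is ambition and its costs.",
    "book good i liked it",
    "The author develops the argument with careful evidence.",
    "it was fine i guess",
]
human_scores = [5.0, 2.0, 5.0, 2.5]  # invented training labels

vectorizer = CountVectorizer()   # essay -> word-count histogram
X = vectorizer.fit_transform(essays)
model = Ridge().fit(X, human_scores)

new_essay = ["The novel explores ambition with vivid evidence."]
print(model.predict(vectorizer.transform(new_essay)))
```

That such shallow features can track human grades at all says more about the test than about the algorithm’s intelligence, which is the point of the tidbit.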
It seems that gaining nuanced understanding of a subject often means shedding one’s binaries. This has been most important to me in defining art, but, whether it’s art, beauty, good and bad, intelligence, consciousness, gender … any concept in the world seems dependent on various contexts and continuums. Does Chop Suey count as Chinese food (invented by an Italian immigrant in California but served in Chinese restaurants)? What about Hakka Chinese food (invented by Chinese immigrants in India)? Is a burrito a sandwich? etc. Unlike Wittgenstein, however, I don’t mean to abandon definitions, but to abandon simple binary definitions.
Materialism, and the lure of false equivalences
I believe, like all of these authors, that human intelligence and consciousness aren’t magical things, impenetrable to reason and scientific explanation. All phenomena are the product of physical interactions. Perhaps, with a powerful enough computer, everything in human behavior could be fully simulated and predicted.
But some people seem to take materialism to mean that building AGI is easy, or even a foregone conclusion: all we need to do is optimize algorithms the way evolution has optimized humans.
Whenever I give talks on why I don’t believe computers—as we currently understand them—can be artists, there’s at least one person in the audience who claims that obviously they can: the real world could hypothetically be simulated by a powerful enough computer, this computer could simulate humans, and thus computers can be artists. Ipso facto. No matter how much I try to disentangle this knot of false equivalences, this person (usually a computer scientist) is unwilling to admit any ambiguity, or to consider the absurd consequences of their reasoning, e.g., if computers are equivalent to humans, then it’s morally abhorrent to destroy your computer. If I ask, “Why isn’t it OK to kill people if it’s OK to destroy your computer?”, they say, “Because evolution has biologically programmed us that way,” without further reflection on the consequences of this statement. This statement—which I basically agree with—indicates that there is an important difference between people and computers, even if it’s just a product of our biological programming, and it’s not a difference that can be overcome by any technique we currently understand, or are close to understanding. It’s still science fiction.
Other books not on this list
I also liked Dennett’s autobiography, I’ve Been Thinking, but it’s probably only of interest if you care about the academic cultures of philosophy and cognitive science. I’ve also seen some interesting reviews and recommendations for books by Ed Yong (“An Immense World”), and for two books on the existence of free will (with opposite conclusions), by Kevin Mitchell and Robert Sapolsky.
(Thanks to Matt Hoffman and Alyosha Efros for recommending From Bacteria to Bach and Back.)