By the late 19th century, physicists had produced physical laws that matched experimental data very well—so well that some physicists thought they might soon be done. Albert Michelson wrote, “it seems probable that most of the grand underlying principles have been firmly established … the future truths of physical science are to be looked for in the sixth place of decimals.” All that remained in physics was to dot a few i’s, cross a few t’s, and measure a few things more precisely. Other physicists weren’t so sure, like Lord Kelvin, who spoke of two “clouds” obscuring the sky of physics.
In order to fill in some of these gaps, Michelson and Morley conducted experiments measuring the speed of light in different directions, hoping to detect the Earth’s motion through the ether. Their experiments ultimately led to relativity theory, which completely toppled our previous understanding of physics. It was as if, in trying to close the last few open windows in a chilly house, they discovered that an entire side of the house was missing.
What is the difference? We do not know, but the difference is important. When people say that we’re just computers, and thus computers could be artists, they sound to me like Albert Michelson, underestimating how little we know about how brains work and what makes us people. Intelligence is much harder than many people seem to think.
What does it mean to be a person?
We care about art because we care about people and our human relationships. We care about the art that people make. If a natural process—like a tidal wave or an insect colony—creates something visually arresting, it shares some features of art but it’s not art.
Why do you care about other people? Why do people have lifelong friends and get married? Why do you give gifts and spend time with people? We care about relationships because we are social beings, as a product of our evolution. And we care about other people because they’re people.
What, scientifically, makes someone a person and not merely a machine? No one knows. We have all sorts of interesting theories. Our social behaviors arose from evolution, but that’s not an answer. The answer surely involves evolution, biology, and psychology—and not just computing. We don’t even know what question we’re asking… what exactly is a “person” anyway?
When someone says we’re just computers, I say: we have moral obligations to other people, but not to computers. Why? If people are just computers, then why isn’t it okay to kill people, when it is okay to turn off computers, to start programs and stop them, to spawn processes and then “kill” them? Someone who thinks that other people are just bags of meat carrying computers in their skulls, with no more moral status than a laptop, is a sociopath. Someone who thinks that a computer has the same rights and moral status as a person, and that turning off a computer is murder, is a nutjob.
These are new versions of old questions, like free will versus determinism. If you believe in materialism (as I do)—the notion that we live in a world governed solely by physical laws—then we are all just groups of atoms bouncing around and obeying physics. How can we have free will? Theologians have debated versions of this question too: if God created us, then isn’t he responsible for everything we do? Even though we can’t answer these questions, we still go about our lives believing in our own and others’ ability to make decisions, and in the importance of our interpersonal relationships.
We are like computers in many ways, but whether we are computers depends entirely on how you define “computer”. Computation provides an immensely useful model for understanding people and brains, and it has led to many breakthroughs in neuroscience and cognitive science. I’ve used some of these ideas in my own research, such as artificial neural network models of the sensory cortex and Bayesian models of cognition. But computation is largely an analogy or model for the brain, not an equivalence.
One can make many similar analogies, with varying degrees of usefulness and absurdity. In the 19th century, with the development of thermodynamics, we began to understand that people are like steam engines: you feed in fuel that can be burned, and the potency of that fuel is measured in units of heat, i.e., calories. Now we are careful about how many calories we eat. But that doesn’t mean we’re just engines.
Human consciousness is a complex system that seems to emerge from simpler electrochemical processes. But, even as a theory of consciousness, emergence is so vague as to be uninformative; it’s a “then a miracle occurs” leap from low-level circuitry to creativity, intelligence, and culture. Neural networks are also complex systems with emergent behaviors, but this doesn’t mean they’re the same as people. The work of Zeiler and Fergus and of David Bau provides useful analyses of what kinds of functions these networks learn: things like image feature detectors and generators, not how to be people.
Some theories suggest that cognition isn’t just neuronal; it’s a much more distributed biological process.
To many people, saying that we’re just computers suggests that we’re like today’s desktop computers, and that risks dehumanizing us.
It’s science fiction
The idea that we will have sentient, intelligent, and conscious machines fascinates us, after decades of science fiction stories depicting it. But we are nowhere near creating such systems; it remains science fiction.
We don’t know enough to think through the implications of intelligent computers, because we don’t know enough about what that world will look like. Imagine if Jules Verne tried to envision a world with computers. He might picture mechanical Difference Engines, carried about by fashionable Victorian dandies in their horse-drawn carriages, with society otherwise unchanged. It’s hard to imagine how one technology can transform our world so radically as to make it unrecognizable, and even harder to anticipate all the technological and societal changes that happen together. Virtually no science fiction writers, when imagining a future with computers, foresaw social media or smartphones transforming our world the way they have today.
Working out the nuances of intelligent or art-making computers is like trying to work out the political system for our future space colonies. Maybe it’s a fun science fiction exercise, but we know so little about what a space colony might be like and what our technology and society will be like then—or even if it’s possible at all—that it’s an exercise unconnected to the reality of whatever that distant hypothetical situation might be.
And all this talk of conscious, artistic machines distracts from the urgent societal and ethical questions around the use of these software algorithms, misleadingly called “artificial intelligence,” in our real world.
Thanks to Rif A. Saurous for comments.