We’re an hour into an unexpectedly choppy catamaran ride up the Nā Pali Coast of Kauaʻi when the captain kills the engine. The current and the wind are at odds, and we’ve been slapping the waves crosswise since we left Port Allen, but the spinner dolphins that have materialized alongside the hull are blissful, diving and flipping in what remains of our wake. The captain, who has been steering through the chop with his toes, makes an announcement over a crackling PA system: “Folks, they’re just as curious about us as we are about them.”
Nature documentaries, park rangers, and tour guides have all offered me some variation on this, insisting that they—bees, beach crabs, elk—are just as afraid of me as I am of them, and that they—squirrels, dolphins, and wild birds—are just as curious. This has always struck me as a bit self-serving. Why must we frame the agency of nonhuman beings in such terms? Is it only by virtue of being just like us that animals can be seen, respected, or left alone?
I suspect, as I watch the spinner dolphins perform neat barrel rolls on the horizon, that the nature of our encounter is far more complex. It goes, at least, three ways: I experience them; they experience me; we experience each other experiencing each other. This is true enough of my encounters with most people. Someday it may describe my encounters with machines as well.
On a reporting trip not too long ago, I met a commercial programmable robot called NAO. It responded to my verbal commands, and even did a little tap dance on the tabletop. The routine delighted me, but it also felt forced, like watching a trained dolphin jump through hoops. Last year, a San Francisco-based animatronics company called Edge Innovations announced a bottlenose dolphin robot realistic enough to take the place of captive dolphins in ocean theme parks. As I watch the spinners dance across the Pacific waves, I wonder: what is captivity, to a creature with an AI brain and silicone skin? Might robots too, someday, deserve to be free?
Speculating about dolphin culture, the media historian John Durham Peters explores the tension between their high intelligence and their untethered existence underwater, living in a world where material artifacts are impossible. Dolphins, he writes, have “parliaments but no pyramids; memory but no history; poetry but no literature; religion but no scripture; education but no textbooks; law but no constitution; counting but no chalk, paper, or equations, and thus no mathematics; music but no scores; weather reports but no almanacs; navigation but no ephemerides; culture but no civilization.” In that sense, they’re the inverse of the nascent machine intelligences of our world, which—trained as they are on massive corpuses of human data—have textbooks without education, literature without poetry. For now, anyway.
My responsibility to a machine is not like my responsibility to a dolphin. One is sentient and unknowable; the other is not sentient, but a product of knowledge. Still, we speak of “AI ethics,” which revolves around two essential questions. The most urgent concerns the risk Artificial Intelligence poses to us. The thoughtless development of AI harms vulnerable communities of human beings by eroding privacy, driving labor exploitation, and imposing structures of surveillance. It can reinforce historical biases in all kinds of malevolent, unexpected ways. Ethicists who point this out are troublesome to large technology corporations; they keep getting fired.
The other is the risk we pose to them. This is the stuff of science fiction: enslaved robots and oppressed replicants in the crosshairs of a blade runner. Of course, in the real world, there’s no such thing as a sentient machine. AI can’t feel; AI is just extremely complicated math. Robots can barely unscrew a jar of mayo. Concern for how we treat AI systems and robots seems utterly misplaced in light of present-day human suffering. Kick the Boston Dynamics robot dog: there are people who still don’t have human rights! Artificial people don’t exist! Real people exist!
Of course, these risks needn’t be mutually exclusive (though you should probably still kick the Boston Dynamics robot dog; it’s a cop). In her recent book The New Breed, robot ethicist Kate Darling argues that we need to stop thinking of robots and AI agents as artificial people. Doing so puts us on edge, creates confusion about what robots really are, and blinds us to the harms they could actually cause. The common refrain, “the robots are coming for our jobs,” gives rise to an unnecessary moral panic about personhood. We urgently need a new metaphor. Robots aren’t people, she says. If anything, they’re animals.
“Just [as] the variety of animal intelligence in our world is so different from ours, artificial intelligence will be different from human intelligence,” Darling writes. Machines, like animals, perceive the world in different ways than we do; research labs often borrow from the natural world to design robots that slither, creep, and swarm. Rather than competing with us for the mantle of humanity, AI agents and robots can be our collaborators—just as messenger pigeons, truffle-hunting pigs, and guide dogs help us perform different tasks today. Robots and animals alike can serve as mediators, facilitating human-to-human interactions. Our tendency to ascribe agency to anything that appears animate seems to be hard-wired; we can’t stop ourselves from projecting life onto machines, but that life needn’t be human.
Bêtes-Machines
Think of the machines, then, as beasts. It’s not a new comparison, just the inversion of an old one. René Descartes compared the cries of a wounded dog to the sound of a malfunctioning machine; he called animals bêtes-machines. He dispensed, entirely and influentially, with the inner lives of non-human creatures. “They eat without pleasure, cry without pain, grow without knowing,” wrote the French natural philosopher Nicolas Malebranche, one of Descartes’ inheritors. Around the turn of the twentieth century, behaviorists further reduced animal behavior to stimulus and response, exemplified memorably by Ivan Pavlov’s experiments on salivating dogs. The ethologist Frans de Waal writes that, for much of the twentieth century, scientists largely shared this mechanistic view of animals, seeing them as either “stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts.”
Today, however, scientists have largely concluded that animals feel as we do. Animals learn, suffer, experience joy, and grieve their dead; dolphins call each other by name, recognize themselves in the mirror, and exhibit altruism. And yet, as Darling points out, we still treat many animals like machines, extracting everything we can from them for our own gain. Factory farming practices condemn sentient creatures to truncated, nightmarish lives; cows, chickens, and pigs are born into dark, crowded conditions, torn from their mothers, and brutally killed.
Farms and slaughterhouses also traumatize their human employees, who are often from highly vulnerable communities. Will we continue these cycles of exploitation into the robot age?
“Robot rights” are having a moment. A recent review of academic publications on the moral consideration of artificial entities suggests that interest in this question is growing exponentially. In published papers and symposia, scholars debate “suffering subroutines” and the ethical treatment of reinforcement learning algorithms. But these thought experiments, although fascinating in the abstract, aren’t very useful when it comes to imagining how the rights of sentient machines will play out in practice. The history of animal advocacy is more instructive. As Darling points out, our treatment of nonhumans has never conformed to philosophical frameworks. Rabbits and hamsters used in laboratory experiments are covered by animal welfare law, but mice are not; “killer whales” didn’t become cultural icons worthy of protection until they were rebranded as “orcas.” Human empathy is emotional and illogically selective. Fortunately, it’s also malleable.
Expanding the Moral Circle
We’re still decades away from the kind of Artificial General Intelligence depicted in science fiction films like Her, Ex Machina, or 2001: A Space Odyssey, but the process of developing such an AI is iterative. The science fiction writer Ted Chiang recently observed in an interview with Ezra Klein that “long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering…in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering.” Artificial sentience, if it emerges, will not flip on like a light switch. Rather, sentience will evolve, as it has with life on Earth. Along that continuum, it will be difficult to draw hard lines.
Even now, delineating where sentience begins and ends is tricky stuff. Is an oyster, which has no central nervous system, more or less sentient than a cabbage? Is an adult dog more sentient than a human fetus? Is an octopus more sentient than a crow? The utilitarian philosopher Jeremy Bentham, often cited in animal rights circles, proposed that the buck stops with pain. He wrote, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” The present-day philosophical and ethical movement of Sentientism, however, identifies sentience with the capacity to experience things. By this rubric, human and non-human animals are sentient; plants, sponges, and sea cucumbers are not. Notably, Sentientists leave room for the possibility that artificial and even alien intelligences might be—or become—sentient.
Sentientism’s most vocal advocate, Jamie Woodhouse, is a mild-mannered consultant who lives in North London; he runs the sentientism.info website and several Facebook groups, and hosts a regular podcast series exploring the nuances of this worldview. Sentientism, he explains to me, can be summarized as evidence, reason, and compassion for all sentient beings. Some see it as an alternative to secular humanism that shares its naturalism but nudges the moral circle outwards. He asks, “If morality is about concern for others—and by others, we mean any other being that has a perspective, can suffer, can flourish, can experience things—then why wouldn’t we have compassion or moral consideration for all of those sentient beings?” He’s quite pleased with the logo he designed for his website: a circle containing clip-art renderings of a person, a pig, a robot, and an alien.
This may seem like an odd overlap of moral philosophy, veganism, and far-out speculation, but Woodhouse is keen on following the science. On his podcast, he regularly chats with animal rights advocates and computer scientists invested in AI safety. The latter, he says, are working to align Artificial Intelligence with human values. But what values, exactly? “If we were to try and align an artificial intelligence with current human ethics, that would involve explaining to them that it’s completely ethically acceptable to farm and kill less powerful sentient beings for your own resource needs,” he muses.
Woodhouse is quick to note that concern for artificial sentience does not supplant any of our responsibilities to the living world, or to the people within it. In his understanding, which he shares with the philosopher Peter Singer, a foundational thinker in modern animal rights, “excluding any groups of sentient beings based on species is just as arbitrary as excluding other groups of humans.” The “speciesism” that entitles humans to cause suffering to animals is inextricable from the sexism, racism, and classism that humans have used as justification to oppress other humans for centuries—to treat them, in other words, like animals. Indeed, genocidal regimes often use comparisons to “subhuman” creatures like apes, pigs, or cockroaches to dehumanize marginalized groups. If fascists treat people like animals, then shouldn’t we do the opposite—treat animals like people?
The Relational Approach
Recently, Woodhouse has begun using a new word. That word is “substrate.” As in, he is considering what sentiences might exist in different substrates—say, silicon. “I don’t think all information processing is sentient,” he says, “but I do think all sentience is just information processing.” Ultimately, what matters is the pattern, not the hardware it’s running on. Talking to Woodhouse, I very quickly slip into dorm-room mode. I tell him about my own pet theory: that the Turing test doesn’t measure the sentience of an AI, but rather the human interlocutor’s willingness to extend personhood to it. It’s more of a test of us.
“Oh, the relational approach,” he says.
The argument goes like this: whether or not a machine is sentient, we should treat it as though it is. Our instinct to anthropomorphize has been well documented; numerous studies have shown that people respond emotionally to robots and treat computers as social actors. As a consequence, it’s likely that “real bonds” will form between people and robots. When we violate a real bond, we lower our own moral standards. In this sense, protecting robots protects us from ourselves. Further, we exist in a socio-technological ecosystem—our relations with robots and other people form an interdependent, irrevocably entangled web. We can’t ascribe moral status in isolation because societies, too, make people. It’s difficult to untangle moral consideration from this web of social relations. As difficult, it seems, as it is to imagine ourselves outside of nature.
Perhaps moral status shouldn’t be dependent on properties like consciousness, the capacity for reason, or the capacity for suffering—since sentient machines, like animals, are likely to experience the world differently than we do, possibly in ways we won’t understand. While traditional moral philosophy tends to be written from the point of view of privileged insiders choosing, benevolently, to extend rights to others, in this social-relational approach, as the scholar David J. Gunkel writes, “what the entity is does not determine the degree of moral value it enjoys.” Instead, the very existence of the other—be it meat or machine—interrupts our own sense of moral superiority. How we choose to behave in relation to this other is a test of us, not them. The question is not only: Can they suffer? It’s also: Do we want to cause suffering?
Judith Donath, founder of the Sociable Media Group at the MIT Media Lab, offers the example of the humble Tamagotchi. Imagine a child who is spending too much time tending to a virtual pet. Is it better to encourage the child to kill their Tamagotchi and move on, or does keeping the Tamagotchi alive teach them care and responsibility? The Tamagotchi is not alive in any biological sense, but it models life, invites anthropomorphic attachment, and in doing so, elicits compassion from its young owner. “Compassion is not a finite good that you use up when you care for something,” Donath writes. “Instead, it is a practice that grows stronger with use.”
Cultivating the practice of care with machines can only benefit fellow travelers at the edge of the moral circle. Caring for a Tamagotchi can be a stepping-stone to caring for a hamster, a dog, or even another person; carrying a wounded bird to safety can be a gesture not dissimilar in spirit to repairing a broken machine rather than sending it to the landfill. Treating things with dignity, even if they are not alive, imbues our own actions with meaning and underscores the power of our choices to affect others. Care is something we carry with us. Those who advocate for the rights of sentient machines often consider their speculations to be a form of “future-proofing” human ethics, but the act of imagining them is reflective, too. “As we develop artificial intelligences, we’re being forced to crystallize and codify and improve our own ethics,” Jamie Woodhouse tells me. “Which is something you’d hope we’d be doing anyway.”