Let the Martian Die

Matt Damon is the Martian to be saved in the film adaptation of Andy Weir’s book, but neither book nor movie addresses the ethics of allocating resources between exploring the universe and saving lives. To save the Martian, China agrees to redirect its rocket from scientific exploration to a rescue mission. It shouldn’t have.

Primitive societies make choices balancing the short term and the long, gathering food for today and building tools for tomorrow. Technology allows such choices to become more extreme, and Weir’s scenario adds a twist. China must have neglected millions of poor people (seeking better safety, food, shelter, and education) in favor of a bold scientific mission to space. Then, for political reasons (the public relations value of doing something the United States could not was lost between book and movie), China redirects its mission to save a single life. In effect, millions are sacrificed for the one. On the world stage, those millions were faceless and the one was personal.

Humans evolved in small groups to protect family and tribe. That behavior benefited our genes because we shared them with many in our small group. Technology can make the most remote person (even on Mars!) feel like part of our tribe and worthy of great sacrifice to save. It is instinctive to identify with someone whose story we know and whose face grows familiar. Yet even technology fails to connect our instincts with millions of people. Contrast the one life in the iconic photograph of an emaciated child watched by a vulture with the millions of lives represented by a chart of malaria deaths:

[Image: photograph of an emaciated child watched by a vulture]
[Image: chart of malaria deaths]

Our instinct to protect each other is good. So is our instinct to invest in our collective future. But instinct that evolved in a tribal world with primitive technology is insufficient today. As Daniel Kahneman shows in his brilliant book Thinking, Fast and Slow, our intuitive, fast mode of thinking is nearly effortless and very effective in familiar, practiced situations, but it fails miserably at evaluating novel ones, especially those involving probability or statistics.

Technology gives us choices ever farther removed from the familiar. If we don’t take the time to “think slow,” the evolutionary tail wags the technological dog: we feel good about saving a person we think we know, using resources withheld from millions we’re sure we don’t. If one of our explorers is ever stranded on Mars, the wise and humane choice is to let the Martian die.

Who or what pulls the trigger?

A landmine is autonomous. However indiscriminate its choice of target, it requires no intervention by a human. Deadly weapons that do not require human intervention date back to pits lined with spikes, but something new is on the horizon: weapons that select their own targets. Concern that artificial intelligence will facilitate a new arms race has mobilized prominent scientists to publish an open letter warning of the dangers.

MQ-9 Reaper (Photo by Ethan Miller – Getty Images)

What are autonomous weapons? Technology that selects its own targets to destroy. This goes beyond the proximity fuse of World War II, a secret the Allies guarded as closely as the atomic bomb. The fuse caused an anti-aircraft shell to detonate when close to an airplane, but the shell was still aimed and fired by humans. In 2015, the U.S. Army developed guided bullets. These so-called “smart bullets” follow the target at which they are shot, but they, too, are aimed and launched by humans. Weaponized drones that can independently identify targets and attack them are, perhaps, the first manifestation of autonomous weapons. The category, however, could stretch our imagination.

Why would we use autonomous weapons? For many of the same reasons we automate anything: accuracy, speed, economy, and scale. While humans are still better, and often faster, at recognizing patterns such as an appropriate target, technology is rapidly catching up. And technology has long been cheaper and easier to replicate than humans. Train a robot to do something, then make copies.

One driver of autonomous technology is space exploration. Sending humans to other planets is both expensive and dangerous, so we send robotic explorers instead. But the distances mean that even speed-of-light communication is slow: Mars is minutes away, and Saturn is over an hour. So some decisions have to be made by the technology itself. Imagine driving a car and feeling the lane-separating bumps only an hour after your tires touch them.
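
Those delays follow from simple arithmetic: time equals distance divided by the speed of light. Here is a minimal sketch of that calculation; the closest and farthest distances used below are approximate published ranges from Earth, assumed for illustration rather than taken from this essay:

```python
# One-way light-travel delay: time = distance / speed of light.
C_KM_PER_S = 299_792.458  # speed of light, km/s

# Approximate closest and farthest distances from Earth, in km
# (rounded figures assumed for illustration).
ranges_km = {
    "Mars": (54.6e6, 401e6),
    "Saturn": (1.20e9, 1.66e9),
}

for planet, (nearest, farthest) in ranges_km.items():
    lo = nearest / C_KM_PER_S / 60   # delay in minutes at closest approach
    hi = farthest / C_KM_PER_S / 60  # delay in minutes at farthest separation
    print(f"{planet}: {lo:.0f} to {hi:.0f} minutes one way")

# Output:
# Mars: 3 to 22 minutes one way
# Saturn: 67 to 92 minutes one way
```

A round trip doubles these numbers, so a rover near Saturn would wait two hours or more between asking a question and hearing the answer.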

How could autonomous weapons change us? As the open letter imagines, they could make it easier for dictators to control their populace, for warlords to murder by ethnicity, or for terrorists to target their attacks. Terrorists already deploy a poor man’s autonomous weapon: the suicide bomber. Someone clad in explosives can enter an area and choose the time and place of detonation. Autonomous weapons could bring economy and scale to this threat.

How can we change autonomous weapons? By thinking critically, we can understand and evaluate them; this essay has applied some of KnowledgeContext’s ICE-9 questions. Once we understand and evaluate, one possible act is the open letter. Another would be a treaty like the Ottawa Treaty (the Anti-Personnel Mine Ban Convention), which banned landmines. Civilization can choose its direction.