https://www.nytimes.com/2018/06/13/magazine/veterans-ptsd-drone-warrior-wounds.html

excerpted from

The Wounds of the Drone Warrior

Even soldiers who fight wars from a safe distance have found themselves traumatized. Could their injuries be moral ones?

By Eyal Press, June 13, 2018.

...

It has been almost 16 years since a missile fired from a drone struck a Toyota Land Cruiser in northwest Yemen, killing all six of its passengers and inaugurating a new era in American warfare. Today, targeted killings by drones have become the centerpiece of U.S. counterterrorism policy. Although the drone program is swathed in secrecy — the C.I.A. and the military share responsibility for it — American drones have been used to carry out airstrikes in at least eight different countries, analysts believe. Over the past decade, they have also provided reconnaissance for foreign military forces in half a dozen other countries. According to the Bureau of Investigative Journalism, a London-based organization that has been tracking drone killings since 2010, U.S. drone strikes have killed between 7,584 and 10,918 people, including 751 to 1,555 civilians, in Pakistan, Afghanistan, Yemen and Somalia. The U.S. government’s figures are far lower. It claims that between 64 and 116 noncombatants outside areas of active hostilities were killed by drones between 2009 and 2016. But as a report published last year by the Columbia Law School Human Rights Clinic and the Sana’a Center for Strategic Studies noted, the government has failed to release basic information about civilian casualties or to explain in detail why its data veers so significantly from that of independent monitors and NGOs. In Pakistan, Somalia and Yemen, the report found, the government officially acknowledged just 20 percent of more than 700 reported strikes since 2002.

“Kill chain” operations expanded under Barack Obama, who authorized roughly 500 drone strikes outside active conflict zones during his presidency, 10 times the number under George W. Bush. (This number does not include strikes carried out in Iraq, Afghanistan and Syria.) These operations have continued to grow under President Trump, who oversaw five times as many lethal strikes during his first seven months in office as Obama did during his last six months, analysts believe. According to the Bureau of Investigative Journalism, last year U.S. airstrikes more than tripled in Yemen and Somalia, where the Trump administration circumvented restrictions on operations outside war zones that were put in place in 2013. The administration has also made these operations even less transparent than under Obama, who received widespread criticism on this score.

The escalation of the drone wars has been met with strikingly little congressional or popular opposition. Unlike the policy of capturing and interrogating terrorism suspects that was adopted after Sept. 11, which fueled vigorous debate about torture and indefinite detention, drone warfare has been largely absent from public discourse. Among ordinary citizens, drones seem to have had a narcotizing effect, deadening the impulse to reflect on the harm they cause. Then again, the public rarely sees or hears about this harm. The sanitized language that public officials have used to describe drone strikes (“pinpoint,” “surgical”) has played into the perception that drones have turned warfare into a costless and bloodless exercise. Instead of risking more casualties, drones have fostered the alluring prospect that terrorism can be eliminated with the push of a button, a function performed by “joystick warriors” engaged in an activity as carefree and impersonal as a video game. Critics of the drone program have sometimes reinforced this impression. Philip Alston, the former United Nations special rapporteur on extrajudicial executions, warned in 2010 that remotely piloted aircraft could create a “PlayStation mentality to killing” that shears war of its moral gravity.

But the more we have learned about the experiences of actual drone fighters, the more this idea has been revealed as a fantasy. In one recent survey, Wayne Chappelle and Lillian Prince, researchers for the School of Aerospace Medicine at Wright-Patterson Air Force Base in Fairborn, Ohio, drew on interviews that they and other colleagues conducted with 141 intelligence analysts and officers involved in remote combat operations to assess their emotional reactions to killing. Far from exhibiting a sense of carefree detachment, three-fourths reported feeling grief, remorse and sadness. Many experienced these “negative, disruptive emotions” for a month or more. According to another recent study conducted by the Air Force, drone analysts in the “kill chain” are exposed to more graphic violence — seeing “destroyed homes and villages,” witnessing “dead bodies or human remains” — than most Special Forces on the ground.

Because the drone program is kept hidden from view, the American public rarely hears about the psychic and emotional impact of seeing such footage on a regular basis, day after day, shift after shift. Compared with soldiers who have endured blasts from roadside bombs — a cause of brain injuries and PTSD among veterans of the wars in Iraq and Afghanistan — the wounds of drone pilots may seem inconsequential. But in recent years, a growing number of researchers have argued that the focus on brain injuries has obscured other kinds of combat trauma that may be harder to detect but can be no less crippling. Drone warfare hasn’t eliminated these hidden wounds. If anything, it has made them more acute and pervasive among a generation of virtual warriors whose ostensibly diminished stress is belied by the high rate of burnout in the drone program.

As the volume of drone strikes has increased, so, too, have the military’s efforts to attend to the mental well-being of drone warriors. Last year, I visited Creech Air Force Base in Nevada to interview drone pilots about their work. Forty minutes north of Las Vegas, Creech is a constellation of windswept airstrips surrounded by sagebrush and cactus groves. It is home to some 900 drone pilots who fly missions with MQ-9 Reapers in numerous theaters. Creech also has a group of embedded physiologists, chaplains and psychologists called the Human Performance Team, all of whom possess the security clearances required to enter the spaces where drone pilots do their work, in part so that they can get a glimpse of what the pilots and sensor operators experience.

...


http://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html

The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own

The United States has put artificial intelligence at the center of its defense strategy, with weapons that can identify targets and make decisions.

By Matthew Rosenberg and John Markoff, Oct. 25, 2016.

CAMP EDWARDS, Mass. — The small drone, with its six whirring rotors, swept past the replica of a Middle Eastern village and closed in on a mosque-like structure, its camera scanning for targets.

No humans were remotely piloting the drone, which was nothing more than a machine that could be bought on Amazon. But armed with advanced artificial intelligence software, it had been transformed into a robot that could find and identify the half-dozen men carrying replicas of AK-47s around the village and pretending to be insurgents.

As the drone descended slightly, a purple rectangle flickered on a video feed that was being relayed to engineers monitoring the test. The drone had locked onto a man obscured in the shadows, a display of hunting prowess that offered an eerie preview of how the Pentagon plans to transform warfare.

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.

The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans.

“If Stanley Kubrick directed ‘Dr. Strangelove’ again, it would be about the issue of autonomous weapons,” said Michael Schrage, a research fellow at the Massachusetts Institute of Technology Sloan School of Management.

Defense officials say the weapons are needed for the United States to maintain its military edge over China, Russia and other rivals, who are also pouring money into similar research (as are allies, such as Britain and Israel). The Pentagon’s latest budget outlined $18 billion to be spent over three years on technologies that included those needed for autonomous weapons.

“China and Russia are developing battle networks that are as good as our own. They can see as far as ours can see; they can throw guided munitions as far as we can,” said Robert O. Work, the deputy defense secretary, who has been a driving force for the development of autonomous weapons. “What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.”

Just as the Industrial Revolution spurred the creation of powerful and destructive machines like airplanes and tanks that diminished the role of individual soldiers, artificial intelligence technology is enabling the Pentagon to reorder the places of man and machine on the battlefield the same way it is transforming ordinary life with computers that can see, hear and speak and cars that can drive themselves.

The new weapons would offer speed and precision unmatched by any human while reducing the number — and cost — of soldiers and pilots exposed to potential death and dismemberment in battle. The challenge for the Pentagon is to ensure that the weapons are reliable partners for humans and not potential threats to them.

At the core of the strategic shift envisioned by the Pentagon is a concept that officials call centaur warfighting. Named for the half-man and half-horse in Greek mythology, the strategy emphasizes human control and autonomous weapons as ways to augment and magnify the creativity and problem-solving skills of soldiers, pilots and sailors, not replace them.

The weapons, in the Pentagon’s vision, would be less like the Terminator and more like the comic-book superhero Iron Man, Mr. Work said in an interview.

“There’s so much fear out there about killer robots and Skynet,” the murderous artificial intelligence network of the “Terminator” movies, Mr. Work said. “That’s not the way we envision it at all.”

When it comes to decisions over life and death, “there will always be a man in the loop,” he said.

Beyond the Pentagon, though, there is deep skepticism that such limits will remain in place once the technologies to create thinking weapons are perfected. Hundreds of scientists and experts warned in an open letter last year that developing even the dumbest of intelligent weapons risked setting off a global arms race. The result, the letter warned, would be fully independent robots that can kill, and are cheap and as readily available to rogue states and violent extremists as they are to great powers.

“Autonomous weapons will become the Kalashnikovs of tomorrow,” the letter said.

The Terminator Conundrum

The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them. Gen. Paul J. Selva of the Air Force, the vice chairman of the Joint Chiefs of Staff, said recently that the United States was about a decade away from having the technology to build a fully independent robot that could decide on its own whom and when to kill, though it had no intention of building one.

Other countries were not far behind, and it was very likely that someone would eventually try to unleash “something like a Terminator,” General Selva said, invoking what seems to be a common reference in any discussion on autonomous weapons.

Yet American officials are only just beginning to contend with the implications of weapons that could someday operate independently, beyond the control of their developers. Inside the Pentagon, the quandary is known as the Terminator conundrum, and there is no consensus about whether the United States should seek international treaties to try to ban the creation of those weapons, or build its own to match those its enemies might create.

For now, though, the current state of the art is decidedly less frightening. Exhibit A: the small, unarmed drone tested this summer on Cape Cod.

It could not turn itself on and just fly off. It had to be told by humans where to go and what to look for. But once aloft, it decided on its own how to execute its orders.

The software powering the drone has been in development for about a year, and it was far from flawless during the day of trials. In one pass over the mosque, the drone struggled to decide whether a minaret was an architectural feature or an armed man, living up to its namesake, Bender, the bumbling robot in the animated television series “Futurama.”

At other moments, though, the drone showed a spooky ability to discern soldier from civilian, and to fluidly shift course and move in on objects it could not quickly identify.

Armed with a variation of human and facial recognition software used by American intelligence agencies, the drone adroitly tracked moving cars and picked out enemies hiding along walls. It even correctly figured out that no threat was posed by a photographer who was crouching, camera raised to eye level and pointed at the drone, a situation that has confused human soldiers with fatal results.

The project is run by the Defense Advanced Research Projects Agency, known as Darpa, which is developing the software needed for machines that could work with small units of soldiers or Marines as scouts or in other roles.

Unlike the drones currently used by the military, all of which require someone at a remote control, “this one doesn’t,” said Maj. Christopher Orlowski of the Army, a program manager at Darpa. “It works with you. It’s like having another head in the fight.”

It could also easily be armed. The tricky part is developing machines whose behavior is predictable enough that they can be safely deployed, yet flexible enough that they can handle fluid situations. Once that is mastered, telling them whom or what to shoot is easy; weapons programmed to hit only certain kinds of targets already exist.

Yet the behavioral technology, if successfully developed, is unlikely to remain solely in American hands. Technologies developed at Darpa do not typically remain secret, and many are now ubiquitous, powering everything from self-driving cars to the internet.

Chess Champions

Since the 1950s, United States military strategy has been based on overwhelming technological advantages. A superior nuclear arsenal provided the American edge in the early days of the Cold War, and guided munitions — the so-called smart bombs of the late 20th century — did the same in the conflict’s final decade.

Those advantages have now evaporated, and of all the new technologies that have emerged in recent decades, such as genomics or miniaturization, “the one thing that has the widest application to the widest number of D.O.D. missions is artificial intelligence and autonomy,” Mr. Work said.

Today’s software has its limits, though. Computers spot patterns far faster than any human can. But the ability to handle uncertainty and unpredictability remains a uniquely human virtue, for now.

Bringing the two complementary skill sets together is the Pentagon’s goal with centaur warfighting.

Mr. Work, 63, first proposed the concept when he led a Washington think tank, the Center for a New American Security. His inspiration, he said, was not found in typical sources of military strategy — Sun Tzu or Clausewitz, for instance — but in the work of Tyler Cowen, a blogger and economist at George Mason University.

In his 2013 book, “Average Is Over,” Mr. Cowen briefly mentioned how two average human chess players, working with three regular computers, were able to beat both human chess champions and chess-playing supercomputers.

It was a revelation for Mr. Work. You could “use the tactical ingenuity of the computer to improve the strategic ingenuity of the human,” he said.

Mr. Work believes a lesson learned in chess can be applied to the battlefield, and he envisions a military supercharged by artificial intelligence. Brilliant computers would transform ordinary commanders into master tacticians. American soldiers would effectively become superhuman, fighting alongside — or even inside — robots.

Of the $18 billion the Pentagon is spending on new technologies, $3 billion has been set aside specifically for “human-machine combat teaming” over the next five years. It is a relatively small sum by Pentagon standards — its annual budget is more than $500 billion — but still a significant bet on technologies and a strategic concept that have yet to be proved in battle.

At the same time, Pentagon officials say that the United States is unlikely to gain an absolute technological advantage over its competitors.

“A lot of the A.I. and autonomy is happening in the commercial world, so all sorts of competitors are going to be able to use it in ways that surprise us,” Mr. Work said.

The American advantage, he said, will ultimately come from a mix of technological prowess and the critical thinking and decision-making powers that the United States military prioritizes. The American military delegates significant decisions down its chain of command, in contrast to the more centralized Chinese and Russian armed forces, though that is changing.

“We’re pretty confident that we have an advantage as we start the competition,” Mr. Work said. “But how it goes over time, we’re not going to make any assumptions.”

Experts outside the Pentagon are far less convinced that the United States will be able to maintain its dominance by using artificial intelligence. The defense industry no longer drives research the way it did during the Cold War, and the Pentagon does not have a monopoly on the cutting-edge machine-learning technologies coming from start-ups in Silicon Valley, and in Europe and Asia.

Unlike the technologies and material needed for nuclear weapons or guided missiles, artificial intelligence as powerful as what the Pentagon seeks to harness is already deeply woven into everyday life. Military technology is often years behind what can be picked up at Best Buy.

“Let’s be honest, American defense contractors can be really cutting edge on some things and really behind the curve on others,” said Maj. Brian Healy, 38, an F-35 pilot. The F-35, America’s newest and most technologically advanced fighter jet, is equipped with a voice command system that is good for changing channels on the radio, and not much else.

“It would be great to get Apple or Google on board with some of the software development,” he added.

Submarines and Civilians

Beyond the practical concerns, the pairing of increasingly capable automation with weapons has prompted an intensifying debate among legal scholars and ethicists. The questions are numerous, and the answers contentious: Can a machine be trusted with lethal force? Who is at fault if a robot attacks a hospital or a school? Is being killed by a machine a greater violation of human dignity than if the fatal blow is delivered by a human?

A Pentagon directive says that autonomous weapons must employ “appropriate levels of human judgment.” Scientists and human rights experts say the standard is far too broad and have urged that such weapons be subject to “meaningful human control.”

But would any standard hold up if the United States was faced with an adversary of near or equal might that was using fully autonomous weapons? Peter Singer, a specialist on the future of war at New America, a think tank in Washington, suggested there was an instructive parallel in the history of submarine warfare.

Like autonomous weapons, submarines jumped from the pages of science fiction to reality. During World War I, Germany’s use of submarines to sink civilian ships without first ensuring the safety of the crew and passengers was seen as barbaric. The practice quickly became known as unrestricted submarine warfare, and it helped draw the United States into the war.

After the war, the United States helped negotiate an international treaty that sought to ban unrestricted submarine warfare.

Then came the Japanese attack on Pearl Harbor on Dec. 7, 1941. That day, it took just six hours for the United States military to disregard decades of legal and ethical norms and order unrestricted submarine warfare against Japan. American submarines went on to devastate Japan’s civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime.

“The point is, what happens once submarines are no longer a new technology, and we’re losing?” Mr. Singer said. He added: “Think about robots, things we say we wouldn’t do now, in a different kind of war.”