Human Judgment in Remote Warfare

This is an excerpt from Remote Warfare: Interdisciplinary Perspectives. Get your free download from E-International Relations.

Remote warfare describes ‘intervention that takes place behind the scenes or at a distance rather than on a traditional battlefield’ (Knowles and Watson 2017). But in remote warfare operations, who or what remains at a distance? The impetus for policymakers to pursue policy objectives abroad at low cost and low risk is not a new phenomenon. But the means by which policymakers seek to achieve those objectives does change with technological developments. In the twenty-first century, one of the most noticeable developments in these means has been the advent of remotely piloted aircraft – some call them ‘drones’. This development is especially significant because in previous generations, policymakers established mission objectives from home while their agents – diplomats, soldiers, intelligence officers and others – went out into the operational environment to attempt to achieve those objectives. The advent of remotely piloted aircraft has allowed – in at least some cases – both the policymakers and most of their agents to remain at home while attempting to achieve mission objectives abroad. This supposed removal of the warfighter from the battlespace has raised important ethical questions that have, in turn, spawned a mountain of literature (e.g., Killmister 2008; Strawser 2010; Royakkers and van Est 2010; Galliott 2012; Gregory 2012; Chamayou 2013; Enemark 2014; Kaag and Kreps 2014; Rae and Crist 2014; Gusterson 2015; Himes 2016).

An understudied element of the literature is the role of human judgment in remote warfare. To address this gap, this chapter looks at the relationship between remotely piloted aircraft and human judgment, specifically as it pertains to targeting decisions. The chapter argues that, despite the great physical distances between aircrews and targets, this relatively new technology nevertheless enables crews to apply human judgment in the battlespace as if they were much closer to their weapons’ effects.

The Ethics of Remotely Piloted Aircraft

Much of the literature on the ethics of remotely piloted aircraft has focused on concerns at the strategic, or policy, level. There are at least two concerns in this category that continue to arise. First, many have argued that voters in liberal democracies are likely to reject military action that results in casualties to their own forces. If remote weapons provide policymakers with military options that will not likely result in casualties to their own forces, then policymakers might have strong political reasons to resort to military force by remote means – perhaps even in cases in which they have strong moral reasons not to. This is often referred to as the ‘moral hazard’ argument. It suggests that political leaders are perversely incentivised to commit unethical or illegal actions when those actions generate little domestic political risk. Though this argument appears throughout the literature on the ethics of remote weapons, its strongest formulation is in John Kaag and Sarah Kreps’ Drone Warfare (see Kaag and Kreps 2012, 2014, 107; Galliott 2012; Chamayou 2013, 189; Brooks 2016, 111).

Another common concern at the strategic level is that remote warfare has enabled powerful states such as the US to employ military force outside areas of active hostilities with relatively little political resistance either domestically or internationally. One possible result is that while al-Qaeda fighters in Afghanistan and Islamic State (ISIS) fighters in Iraq and Syria are lawful combatants, it is not clear whether members of terrorist organisations outside areas of active hostilities (e.g., in Yemen, Somalia, or Libya) are lawful combatants. Though this discussion is about combatant status and not about remote weapons per se, it is closely related to the above concern. The ethical concern is that by reducing risk to crews, and therefore reducing political risk to policymakers, remote weapons might incentivise policymakers to resort to the unethical use of military force outside areas of active hostilities (see Chamayou 2013, 58; Kaag and Kreps 2014, 2; Enemark 2014, 19–37; Gusterson 2015, 15–21).

These two categories of argument are grounded in the reduced risk to remotely piloted aircraft crews, and this reduction in risk is grounded in the physical distance between the crew and their weapons’ effects. If the pilot is seven thousand miles from the enemy, she is at no risk of being killed. Because she is at no risk of being killed, policymakers do not face the normal domestic political barriers to the use of military force. Finally, because these strikes are possible without deploying a large force into the country in question, states that employ these systems can potentially conduct violent military actions in a given state without entering into a large-scale war with that state. Much of the literature mentioned above, therefore, is ultimately grounded in the physical distance between crews and targets.

A secondary focus has arisen more recently in a body of literature that distinguishes between physical distance and psychological distance (Asaro 2009; Fitzsimmons and Sangha 2013; Sparrow 2013; Wagner 2014, 1410; Heyns 2016, 11; Lee 2018a). Psychologists as well as ethicists have become increasingly aware that psychological distance is conceptually distinct from physical distance and the two can come apart. Though at great physical distance from their weapons’ effects, Predator and Reaper crews, for example, can experience psychological effects as if they were much closer (see Chappelle, Goodman, et al. 2019; Chappelle, McDonald, et al. 2012; Fitzsimmons and Sangha 2013; Maguen, Metzler, et al. 2009). As US Air Force Colonel Joseph Campo (2015) has put it, ‘the biggest issue society failed to comprehend was the ability for technology to both separate and connect the warrior to the fight.’ In Peter Lee’s (2018a) analysis of his interviews with British Royal Air Force Reaper crews, he similarly points to what he calls the ‘distance paradox.’ Though RAF Reaper crews are physically further from their targets than at any time in the RAF’s 100-year history, they are nevertheless emotionally quite close. In his own words, ‘aircraft crews had never been so geographically far away from their targets, yet they witnessed and experienced events on the ground in great detail’ (Lee 2018a, 113).

Remotely piloted aircraft, however, also raise questions about a third and hitherto under-researched sense of distance in war. It might be possible that remotely piloted aircraft crews are able to apply human judgment in the battlespace as if they were quite close, despite the great physical distance between crews and their weapons’ effects.

The ethics literature’s two-fold focus on physical distance and psychological distance obscures questions about where remote warfare operators can apply human judgment in the battlespace. Psychological distance is a useful conception, but it is limited in that it refers only to the effect violent actions have on the aircraft crews. What I have in mind here complements, but is crucially distinct from, that conception. Just as the war might affect crews in intimate ways despite the great physical distances involved, those who employ remote weapons might apply human judgment from a relatively close epistemic position despite the great distances involved. In other words, if psychological distance is about the effect the war might have on the crews, the conception of human judgment I have in mind here refers to the effect the crews might have on the war.

One US Air Force Reaper pilot, Lt Clifton, put the relationship between remote crews and their ability to impose human judgment this way.

[It’s] a huge bonus to having that over-the-horizon look – being in the [ground control station] vs. being actually in an airplane [in] the skies. It’s a lot easier to stay calm and stay focused on an actual big picture concept instead of just tunnelling in on what you see out the window of a fighter jet or what you see in the pod of a fighter jet. By physically not being in that environment, it keeps the communication between the pilot, sensor [operator], and the intel [analysts] a lot smoother, a lot more direct, and a lot less hectic to make good decisions and I think that’s a huge benefit to actually being in [remotely piloted aircraft] than being in a manned asset.

(Clifton 2019)

Before going further, it is important to bound the scope of this chapter. Those who study ‘drones’ have sought to keep up with rapid development and proliferation. For instance, a 2017 Center for New American Security study reports that more than 30 countries either have or are developing ‘armed drones’ (Ewers, Fish, et al. 2017). Likewise, a 2019 New America study finds that 36 countries have ‘armed drones’ (Bergen, Sterman, et al. 2019). The claims that I make in this chapter are not equally applicable across all of these instances, for two reasons. The first is that the ability of the pilot or crew to impose human judgment depends upon a number of factors about the weapons system in question. ISIS, for example, has employed low-cost quadcopters with 40mm grenades attached after purchase (Gillis 2017; Rassler 2018; Clover and Feng 2017). Suppose a Western military organisation employed such a weapon for local base defence. Such a system does properly fall into the category of ‘armed drones,’ but it is not at all clear that such a system would provide the operator with sufficient situational awareness to adequately employ human judgment in response to battlefield dynamics.

The second reason is that, because I am here concerned with the relationship between physical distance and human judgment, many of the claims I make will apply directly to systems that a military organisation employs abroad from within its own territory. As Ulrike Franke reports, as of 2017, only a few states – the US, the UK, and China – conduct armed remotely piloted aircraft operations in this way (Franke 2018, 29). At the moment, therefore, my arguments apply most directly to the US and the UK because China’s remotely piloted aircraft program is more opaque (see Kania 2018). Moreover, the first-hand narrative accounts I have collected, to which I refer below, came from US Air Force MQ-9 Reaper crew members and support personnel.[1] The conclusions in this chapter, however, will likely become more widely applicable as more states begin to operate remotely piloted aircraft from within their own territories.

There is one additional terminological point. ‘Physical distance’ and ‘psychological distance’ are less cumbersome than ‘distance as it pertains to human judgment’ in large part because ‘physical’ and ‘psychological’ are such simple and widely understood adjectives. The word ‘judgment’ does not offer a ready adjective. I propose the more manageable term ‘phronetic distance,’ which harks back to Aristotle’s term ‘phronesis,’ often translated as ‘practical wisdom’ or ‘prudence’ (Aristotle and Crisp 2000, 107; Aristotle and Irwin 2000, 345). ‘Phronesis’ is, for Aristotle, neither knowledge of how to perform a specific task nor scientific knowledge. It is a virtue of thought that relies upon reason and enables the one who possesses it to determine what is best for a human being in a wide array of circumstances (Aristotle and Irwin 2000, 89–93). Courage is the character trait that enables a virtuous person to act courageously. Temperance is the character trait that enables a virtuous person to act temperately. Phronesis is the trait that enables a virtuous person to know what to do under the circumstances. By ‘phronetic distance,’ I mean the relative distance between the battlefield and the point of application of human judgment. As I argue below, phronetic distance and physical distance ought to remain conceptually distinct. Though remotely piloted aircraft crews might physically be several thousand miles from the battlefield, their phronetic position is often much closer.

The bin Laden Case

Understanding human judgment and distance in remotely piloted aircraft operations is difficult because physical and phronetic distance come apart. In many cases of military technological development, increases in physical distance between warfighter and weapons’ effects correlate with increases in phronetic distance. In the oft-cited example of early remote weapons, King Henry V’s longbowmen at Agincourt were able to engage French knights at a distance. This came at a marginal increase in phronetic distance: during the fleeting seconds that the weapon is in the air, the longbowmen do not maintain control over it – they have no means of imposing judgment upon where it will impact. Many military technological developments since Agincourt have followed this model, with increases in physical distance producing increases in phronetic distance. Remotely piloted aircraft break the pattern: they have resulted in tremendous increases in physical distance but in relative decreases in phronetic distance.

To see that this is so, consider two cases of modern remote warfare independent of remotely piloted aircraft – namely, the two US attempts on Osama bin Laden’s life. In these two cases, the physical distance between the warfighter and the target correlates with phronetic distance.

In 1998, US President Clinton authorised a cruise missile strike against Osama bin Laden following al-Qaeda’s bombings of US embassies in Kenya and Tanzania. The US Navy prosecuted the attack against what the US believed to be bin Laden’s location near Khost, Afghanistan, with ship-fired Tomahawk cruise missiles from the Arabian Sea (Kean, Hamilton, et al. 2004, 116–⁠117). Bin Laden had indeed planned on going to Khost, where he likely would have been killed in the strike. But, as Lawrence Wright (2011, 321–⁠322) describes, on the way there, in a car with his friends, bin Laden said:

‘Where do you think, my friends, we should go … Khost or Kabul?’

His bodyguard and others voted for Kabul, where they could visit friends.

‘Then, with God’s help, let us go to Kabul,’ bin Laden decreed – a decision that may have saved his life.

In this case, the naval surface warfare officer in the Arabian Sea responsible for launching the cruise missile was some five hundred miles from the target area. Though considerably closer than the remotely piloted aircraft pilot thousands of miles away, the officer was still applying military force while remaining outside the theatre of operations. But, crucially in the bin Laden case, the surface warfare officer had no means of imposing his or her judgment after the missile was launched. Just as King Henry’s longbowmen accepted an increase in phronetic distance, the cruise missile also imposes an increase in phronetic distance. For the longbowmen, this increase was marginal – the arrow’s flight time is a matter of single-digit seconds. Like the longbowmen’s arrows, the cruise missile can neither be recalled nor redirected once launched, and its flight time is four to six hours (Navy 2018; Shane 2016). And, of course, even if the missiles could have been redirected, the surface warfare officer had no intelligence feedback loop to alert him to the fact that the intelligence reporting was mistaken. Though the physical distance was significant, the application of human judgment in response to real-time dynamics on the ground was completely absent. In this case, the increase in physical distance entails an increase in phronetic distance.

Compare this 1998 event with the US’s 2011 raid that killed Osama bin Laden in Abbottabad, Pakistan. US President Obama opted for a ‘capture or kill’ mission conducted by special operations forces that ultimately led to bin Laden’s death. What is crucial for the present discussion is that the forces in the helicopters and on the ground had both the capability and the authority to employ their judgment in response to real-time dynamics. The raid provides two important examples. The first helicopter to arrive at the compound had planned to hover while the operators inside fast-roped into the compound. But the solid walls of the compound affected the airflow differently from the chain-link fence within which the team had practiced. In response to the unexpected and debilitating air currents, the pilot had to put the helicopter down in the compound, ultimately in a forced landing that severely damaged the aircraft. The pilot of the second helicopter saw the first helicopter’s landing and was unsure whether the landing and damage were the result of enemy fire or mechanical problems. The second pilot, therefore, decided to land outside the compound, forcing the SEALs to run into the compound from there – both were major deviations from the original plan (Schmidle 2011; Swinford 2011). In this instance, the team relied neither upon scripted orders from higher headquarters nor upon communications reach-back. They employed human judgment in response to real-time battlefield dynamics.

Second, and more importantly, the room from which US Cabinet and other officials – including President Obama – watched the raid lost communications with the raid force for some 20–25 minutes (Swinford 2011) – over half the time the team was on the ground (Schmidle 2011). During this crucial period of the ‘kill or capture’ mission, the raid team chose, based on real-time dynamics on the ground, to kill rather than to capture bin Laden. Again, they relied upon human judgment. Leon Panetta, then Director of the CIA, told reporters that ‘It was a firefight going up that compound. And by the time they got to the third floor and found bin Laden, I think it – this was all split-second action on the part of the SEALs’ (Swinford 2011).[2] Here, the fact that the special operators were in close physical proximity to the battlefield and to their target enabled them to apply human judgment from a relatively close epistemic position. Their decreased physical distance to the target entails a decrease in phronetic distance.

In each of these two cases, physical distance correlates with phronetic distance. The naval surface warfare officer responsible for the 1998 cruise missiles was physically 500 miles from his intended target, and the point of application of human judgment remained at his fingertips aboard ship rather than in the target area. His ability to apply human judgment in response to real-time dynamics was constrained by the technological limitations of the weapon and by his physical dislocation from the target area. The special operators in the Abbottabad raid, by contrast, were able to perceive real-time battlefield dynamics and apply human judgment in response because, among other things, they were physically present in the target area.

The task in the remainder of this chapter is to show that unlike in these two examples, in remotely piloted aircraft operations, increases in physical distance do not necessarily correlate with increases in phronetic distance.

Phronetic Distance and Remotely Piloted Aircraft

At first glance, it might look as though the phronetic distance from which remotely piloted aircraft crews apply human judgment is similar to phronetic distance in the standoff cruise missile case. Our intuitions in response to this question have unfortunately been primed by widespread misconceptions in both the popular and scholarly literature on ‘drones.’ We are often told that these systems are robotic (Schneider and Macdonald 2017; Coeckelbergh 2013, 90; Royakkers and van Est 2010, 289; Sharkey 2013, 797) and that they fall into the class of autonomous or semi-autonomous weapons (Kaag and Kreps 2014, vii; Brunstetter and Braun 2011, 338). These descriptors, ‘robotic’ and ‘semi-autonomous,’ are more apt for the cruise missile, which flies a pre-planned route toward a pre-designated target and permits no human intervention post-launch. Neither claim obtains for remotely piloted aircraft.

Unfortunately, published first-hand accounts from remotely piloted aircraft crews that might either confirm or rebut these claims are few. There are just two US pilot memoirs of which I am aware and a third account written by a US intelligence analyst (Martin and Sasser 2010; McCurley 2017; Velicovich and Stewart 2017).[3] Peter Lee has also helpfully collected first-hand accounts from British Royal Air Force Reaper crews in his 2018 book, Reaper Force (Lee 2018b). Campo’s study provides an important, if often overlooked, insight here. Though the primary focus of his study was the psychological effects on remote warfare crews, he did ask US Air Force Predator and Reaper crews about instances in which they had intervened to stop or delay a strike. Among his more than one hundred interviewees, twenty-two subjects provided narrative accounts in which they applied human judgment to intervene to stop or delay a strike. In Campo’s words,

All twenty-two stories were remarkably similar. In each story, the aircrew were directed to strike a target, but something just ‘did not feel right’ to them regarding the situation, the target identification, or the surrounding area. In every case, the aircrew took positive steps to understand the situation, develop their own mental model of the battlespace, and then recommend (or demand) a different course of action besides immediate weapons engagement via [remotely piloted aircraft]. All twenty-two individuals steadfastly believe that had they simply followed directions without delay or critical inquiry, collateral damage or civilian casualties were nearly assured.

(Campo 2015, 7–8)

Though Campo does not use the words ‘human judgment,’ his description is relevantly similar to my description of human judgment above. The theme Campo observed reappeared anecdotally in my own discussions with Predator and Reaper crews. One US Air Force pilot, Captain Andy, told me about a case in which the Airman attached to the ground team who was directing the strike – a joint terminal attack controller, or JTAC (pronounced ‘jay-tack’) – was confused and disoriented while taking enemy fire:

The friendlies were getting shot at. Both sides were, I think, 75 meters apart. We got a 9-line [attack briefing from the JTAC] to shoot friendly forces. The sensor [operator] was like, “holy crap. This is just not right.” The hairs on the back of the neck stood up, then we correlated more, and then we told the JTAC, “hey, you gave us a 9-line for yourself,” […] “the grids are over here.” You’re not going to get that with a robot. […] You’ll give [the robot] a grid and tell them to shoot it and [it’s] going to shoot it.

This account, and others like it, runs counter to the received wisdom on how remote warfighters will respond to battlefield dynamics. For example, in his 2013 chapter, ‘War Without Virtue,’ Rob Sparrow (2013, 100–⁠101) anticipates that ‘since the [remote] operators are not in any danger, it is more plausible to expect them to follow orders from other people who may be geographically distant and also to wait for orders to follow.’

Lt Clifton, a Reaper pilot and former sensor operator, disagrees. He described three occasions on which he ‘pushed back’ against a JTAC’s instructions:

Those three strikes would have been legal based on actions, locations, and what was observed, but because of other factors which I voiced up (I wasn’t comfortable with the shot) […] You just don’t have a warm fuzzy because you don’t have all the details necessary. […] I’ve had three specific occasions where I voiced it up and the JTAC said, ‘‘copy that, we’ll hold off’’.

(Clifton 2019)

Lt Clifton went on to say that ‘it’s a two-way process between JTACs and aircrew. JTACs can tell us “cleared hot” all day long, and give us orders to strike, but of course as aircrew we don’t have to because the weapon is ultimately our responsibility’ (Clifton 2019).

An instructor sensor operator, Technical Sergeant Megan, put it this way:

There [have] been several situations where I would say the conversation between the pilot in command or the crew and the JTAC […] is – I don’t want to say “heated,” but they feel like this is what needs to be done and the crew [says], ‘we’re not comfortable with that’ for whatever reason. … At the end of the day, this is [the pilot’s] weapon. This is our aircraft. This is what we’re comfortable with doing and this is what we’re not comfortable with doing. […] At the end of the day, I’d say most of our crews are very good at standing up for that.

(Megan 2019)

I asked another instructor sensor operator named Master Sergeant Sean if he had ever experienced a moral dilemma in the seat. He said:

I wouldn’t say that I’ve ever had a moral dilemma […] Just because typically we work so well as a crew just between myself as the sensor operator and the pilot, that we’re able to come to a reasonable solution […] JTACs are pretty receptive when we push back on them and say, “hey, we’re just not comfortable with the strike. Can we just, you know, hold off a little bit?”

He went on to say:

I’ve had several [instances] where we weren’t comfortable with a certain strike just because we were worried about CIVCAS [civilian casualties] and things like that so we pushed back to the JTAC and ended up waiting, and lo and behold, we were able to eliminate the target in clear terrain with no CIVCAS.

(Sean 2019)

Though the US Reaper crewmembers interviewed resoundingly claim that they have the capability to apply human judgment, there are still constraints on the crews’ ability to do so.

Phronetic Distance in Traditionally Piloted Aircraft

The above quotations do not suggest that the remote crews can impose human judgment to the same degree that the special operators did in the bin Laden raid. One of the most significant differences between the two is the difference between their epistemic positions. To see a target through a targeting pod at 20,000 feet does provide the aircrew with greater awareness than was available in the 1998 cruise missile case. But the remotely piloted aircraft crew’s epistemic state is still far different from that of the soldier on the ground. Retired US Army General Stanley McChrystal, former commander of coalition forces in Afghanistan, put the epistemic concern this way: ‘Because if you see things in 2D, a photograph or a flat screen, you think you know what’s going on, but you don’t know what’s going on, you only know what you see in two dimensions’ (quoted in Kennebeck 2017). So how are we to understand phronetic distance in remotely piloted aircraft? If the phronetic distance that is relevant in remotely piloted aircraft operations is neither like that of previous generations of long-distance weapons nor like that of traditional warfighters on the ground, perhaps the more apt point of comparison is traditionally piloted aircraft. That is, though this relatively recent technological development has had profound impacts on physical distance and psychological distance, perhaps phronetic distance in air operations is more continuous.

I spoke with Captain Shaun and Technical Sergeant Megan in a ground control station while they flew an operational mission over Afghanistan. Captain Shaun has experience both as a Reaper pilot and as a MC-12 Liberty pilot – an unarmed, traditionally piloted, propeller-driven airplane used for intelligence, surveillance and reconnaissance. While he was flying the MC-12 in Afghanistan, the numerous intelligence analysts and ground personnel watching his video feed reported two people emplacing an improvised explosive device (IED) in a culvert under a road. The various participants in the operation started preparing an attack briefing for another aircraft. Captain Shaun and his crew were not convinced that what they saw was an IED emplacement and repeatedly intervened in the momentum that was building toward a strike. In Captain Shaun’s words, ‘it didn’t feel right. We stalled the kill chain multiple times.’ The ‘kill chain’ is the US military’s shorthand for the dynamic targeting process, consisting of the six steps ‘find, fix, track, target, engage, and assess’ (USAF 2019). Eventually, Captain Shaun said:

The two people we were watching ended up walking up to two full-grown adults. Once we saw the relative size, we knew the two people we had been watching were kids. They [had been] pulling sticks out of a culvert to get the water to flow. If we hadn’t stalled the kill chain, who knows what would have happened?

(Shaun 2019)

In my view, this is undoubtedly a case in which the crew applied human judgment in the battlespace. In this case, the phronetic distance correlates with physical distance. Captain Shaun’s physical and phronetic position is 15,000 feet above the target, and he is capable of observing and intervening from that position. Had he been a soldier on the ground, his epistemic position would have been different, the fact that the two people were children would have been more obvious, and his ability to apply human judgment would have been stronger.

When I asked Captain Shaun about the differences between his ability to apply human judgment in the traditionally piloted MC-12 and in the remotely piloted Reaper, he said that interrupting the kill chain is even easier in the Reaper because he is now responsible, not just for the camera providing the situational awareness, but also for the weapon: ‘I can say “I’m the A-code [the pilot in command]. It’s my weapon. My sensor operator doesn’t like it. We’re not doing it.”’ Sgt Megan added, ‘You have to have that level of respect that it’s a human life you’re taking. I’ll still do it for the right reasons, but it has to be for the right reasons.’ But as we have already seen, there are some conditions under which one’s position half a world away might improve one’s epistemic position, perhaps especially when friendly forces are taking fire.

Conclusion: Empowering Judgment

If the first limitation on the remotely piloted aircraft crew’s application of human judgment in the battlespace is their epistemic position, the second is the organisational constraint on their autonomy. This is a question, not of technological capability, but of organisational culture, doctrine, and training. The technological capability – the visualisation of the battlespace via high-resolution cameras in multiple segments of the light spectrum; the long loiter times over the target area; and the integrated network of operators, intelligence analysts, and commanders – is a necessary but insufficient condition for applying human judgment in the battlespace.

For the last few decades, many Western militaries, including NATO on the whole, have moved toward a concept of ‘mission command’ according to which commanders issue mission-type orders that emphasise the commander’s intent, ‘thereby empowering agile and adaptive [subordinate] leaders with freedom to conduct operations’ (Roby and Alberts 2010, xvi; Scaparrotti and Mercier 2018, 2017, 6, 18, 37; Storr 2003). The freedom to conduct operations that is so central to mission command consists in the freedom to employ human judgment in the battlespace. In this approach, subordinate commanders, to include pilots in command, retain the authority required to apply human judgment even in complex and difficult circumstances.

A recurring, though not universal, theme in my interviews with Reaper crews was that commanders at the squadron level and above would support pilots’ decisions when those pilots employed human judgment – and especially restraint – in the battlespace. Though my interviews were with American Reaper crewmembers, it is noteworthy that Reaper crewmembers from the UK, France, Italy, Australia, and The Netherlands train alongside one another – perhaps inculcating this empowered approach to human judgment (Tran 2015; Murray 2013; Stevenson 2015; Fiorenza 2019). As these systems continue to proliferate, however, it is not yet clear whether all the states that will operate them will continue to value aircrew autonomy.

Finally, as military technology continues to develop, it will be important to compare the application of human judgment in remote weapons employment with the potential future use of autonomous weapons. In many instances, it has been human judgment, rather than targeting systems, that has identified errors and prevented catastrophic strikes. As militaries continue to develop artificial intelligence systems and apply them in the targeting process, they risk eroding this crucial application of human judgment in some situations. If nothing else, this discussion of human judgment in the battlespace should motivate developers and military commanders not merely to ask which military tasks can be automated, but also to ask where in the battlespace human judgment ought to be preserved.

*The views expressed in this chapter are those of the author and do not necessarily reflect those of the US Air Force, the Department of Defense, or the US Government.

Notes

[1] In March of 2019, I interviewed 31 MQ-9 Reaper aircrew members and support personnel at Creech and Shaw Air Force Bases. The interviews were anonymous at the interviewees’ request and were intended to provide first-hand perspectives rather than to draw qualitative or quantitative conclusions. The result was more than eight hours of recorded audio and shorthand notes.

[2] This is a contested point. In Schmidle’s account, he cites a special operations officer who claims that ‘There was never any question of detaining or capturing him—it wasn’t a split-second decision. No one wanted detainees.’ Because I am after the conceptual distinction between physical and phronetic distance, this disagreement can be set to one side.

[3] Martin’s memoir is particularly contentious within the US Air Force Reaper (and formerly Predator) community. See, for example, Byrnes, C. M. W. 2018. Review: ‘We Kill Because We Can: From Soldiering to Assassination in the Drone Age.’ Air and Space Power Journal, 32.

References

Alberts, David S., Reiner K. Huber and James Moffat. 2010. NATO NEC C2 Maturity Model. SAS-065. Washington: DoD Command and Control Research Program.

Aristotle. 2000. Nicomachean Ethics. Translated by Roger Crisp. Cambridge: Cambridge University Press.

Aristotle. 2000. Nicomachean Ethics. Translated by Terence Irwin. Indianapolis: Hackett Publishing Company.

Asaro, Peter. 2009. Modeling the moral user. IEEE Technology and Society Magazine, 28: 20–⁠24.

Bergen, Peter, David Sternman, Alyssa Sims, Albert Ford, and Christopher Mellon. 2016 (updated 2019). ‘World of Drones: Examining the Proliferation, Development, and Use of Armed Drones.’ New America.

Brooks, Rosa. 2016. How everything became war and the military became everything: tales from the Pentagon. New York: Simon and Schuster.

Brunstetter, Daniel, and Megan Braun. 2011. ‘The Implications of Drones on the Just War Tradition.’ Ethics and International Affairs, 25: 337–⁠358.

Byrnes, Captain Michael W. 2018. ‘Review: We Kill Because We Can: From Soldiering to Assassination in the Drone Age.’ Air and Space Power Journal, 32.

Campo, Joseph L. 2015. ‘Distance in War: The Experience of MQ-1 and MQ-9 Aircrew.’ Air and Space Power Journal.

Catalano Ewers, Elisa, Lauren Fish, Michael C. Horowitz, Alexandra Sander and Paul Scharre. 2017. ‘Drone Proliferation: Policy Choices for the Trump Administration.’ Papers for The President. Washington: Center for New American Security.

Chamayou, Gregoire. 2013. A Theory of the Drone. New York: The New Press.

Chappelle, Wayne, Tanya Goodman, Laura Reardon and Lillian Prince. 2019. ‘Combat and operational risk factors for post-traumatic stress disorder symptom criteria among United States air force remotely piloted aircraft “Drone” warfighters.’ Journal of Anxiety Disorders, 62: 86–⁠93.

Chappelle, Wayne, Kent McDonald, Billy Thompson, and Julie Swearengen. 2012. Prevalence of High Emotional Distress and Symptoms of Post-Traumatic Stress Disorder in US Air Force Active Duty Remotely Piloted Aircraft Operators: 2010 USAFAM Survey Results. Final Technical Report. Wright Patterson Air Force Base, OH: Air Force Research Laboratory.

Clifton, Lt. 2019. ‘Interview with MQ-9 Reaper Personnel.’ By Chapa, Joseph. 14 March.

Clover, Charles, and Emily Feng. 2017. ‘Isis use of hobby drones as weapons tests Chinese makers.’ Financial Times. 11 December.

Coeckelbergh, Mark. 2013. ‘Drones, information technology, and distance: mapping the moral epistemology of remote fighting.’ Ethics and Information Technology, 15: 87–98.

Enemark, Christian. 2014. Armed drones and the ethics of war: military virtue in a post-heroic age. London: Routledge.

Fiorenza, Nicholas. 2019. ‘RNLAF Reaper operators train in US.’ Jane’s Defence Weekly, 21 January.

Fitzsimmons, Scott, and Karina Sangha. 2013. ‘Killing in High Definition: Combat Stress among Operators of Remotely Piloted Aircraft.’ Technology, 12: 289–292.

Franke, Ulrike. 2018. The unmanned revolution: how drones are revolutionising warfare. ProQuest Dissertations Publishing.

Galliott, Jai C. 2012. ‘Uninhabited Aerial Vehicles and The Asymmetry Objection: A Response to Strawser.’ Journal of Military Ethics, 11(1): 58–66.

Gillis, Jonathan. 2017. ‘In over their heads: US ground forces are dangerously unprepared for enemy drones.’ War on The Rocks. 30 May.

Gregory, Derek. 2012. ‘From a View to a Kill.’ Theory, Culture and Society, 28(7–8): 188–215.

Gusterson, Hugh. 2015. Drone: Remote Control Warfare. Cambridge, MA: MIT Press.

Heyns, Christof. 2016. ‘Human rights and the use of autonomous weapons systems (AWS) during domestic law enforcement.’ Human Rights Quarterly, 38: 350–378.

Himes, Kenneth R. 2016. Drones and the Ethics of Targeted Killing. Lanham: Rowman and Littlefield.

HM Government. 2017. ‘Future of Command and Control.’ The Development, Concepts and Doctrine Centre.

Kaag, John, and Sarah Kreps. 2012. ‘The Moral Hazard of Drones.’ The New York Times, 22 July.

———. 2014. Drone Warfare. Cambridge: Polity.

Kania, Elsa. 2018. ‘The PLA’s Unmanned Aerial Systems: New Capabilities for a “New Era” of Chinese Military Power.’ In China Aerospace Studies Institute, edited by Brendan Mulvaney. Montgomery: Air University.

Kean, Thomas H., Lee Hamilton, Richard Ben-Veniste, Bob Kerrey, Fred F. Fielding, John F. Lehman, Jamie F. Gorelick, Timothy J. Roemer, Slade Gorton and James R. Thompson. 2004. The 9/11 Commission Report. National Commission on Terrorist Attacks upon the United States.

Kennebeck, Sonia. 2016. National Bird. Washington, DC: Ten Forward Films.

Killmister, Suzy. 2008. ‘Remote Weaponry: The Ethical Implications.’ Journal of Applied Philosophy, 25(2): 121–133.

Knowles, Emily, and Abigail Watson. 2017. All Quiet on the ISIS Front: British Secret Warfare in an Information Age. Remote Control Project, Oxford Research Group.

Lee, Peter. 2018a. ‘The distance paradox: reaper, the human dimension of remote warfare, and future challenges for the RAF.’ Air Power Review, 21(3): 106–130.

Lee, Peter. 2018b. Reaper Force: The Inside Story of Britain’s Drone Wars. London: John Blake Publishing.

Maguen, Shira, Thomas J. Metzler, Brett T. Litz, Karen H. Seal, Sara J. Knight, and Charles R. Marmar. 2009. ‘The impact of killing in war on mental health symptoms and related functioning.’ Journal of Traumatic Stress, 22(5): 435–443.

Martin, Matt J., and Charles W. Sasser. 2010. Predator: The remote-control air war over Iraq and Afghanistan: A pilot’s story. Zenith Press.

McCurley, Mark T. 2017. Hunter Killer: Inside the Lethal World of Drone Warfare. Atlantic Books.

Megan, T. ‘Interview with MQ-9 Reaper Personnel.’ By Chapa, Joseph. 14 March 2019.

Murray, Airman 1st Class Leah. ‘Italians learn to fly RPAs at Holloman.’ News release. 13 June, 2013, https://www.holloman.af.mil/Article-Display/Article/317370/italians-learn-to-fly-rpas-at-holloman/

Navy. 2018. ‘Tomahawk Cruise Missile.’ News release. 26 April. https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2169229/tomohawk-cruise-missile/

Rae, James Deshaw. 2014. Analyzing the drone debates: targeted killing, remote warfare, and military technology. Basingstoke: Palgrave Macmillan.

Rassler, Don. 2018. The Islamic State and Drones: Supply, Scale, and Future Threats. West Point: Combating Terrorism Center at West Point.

Royakkers, Lambèr, and Rinie Van Est. 2010. ‘The cubicle warrior: the marionette of digitalized warfare.’ Ethics and Information Technology, 12(3): 289–296.

Scaparrotti, Curtis M., and Denis Mercier. 2018. ‘Framework for Future Alliance Operations.’ Keeping The Edge. NATO.

Schmidle, Nicholas. 2011. ‘Getting Bin Laden.’ The New Yorker, 8 August.

Schneider, Jacquelyn, and Julia Macdonald. 2017. ‘Why Troops Don’t Trust Drones: The “Warm Fuzzy” Problem.’ Foreign Affairs. 20 December.

Sean, Major. 2019. ‘Interview with MQ-9 Reaper Personnel.’ By Chapa, Joseph. 14 March.

Shane, Scott. 2016. Objective Troy: A Terrorist, A President, and The Rise of The Drone. Seal Books.

Sharkey, Noel E. 2013. ‘The evitability of autonomous robot warfare.’ International Review of the Red Cross, 94(886): 787–799.

Shaun, Captain. 2019. ‘Interview with MQ-9 Reaper Personnel.’ By Chapa, Joseph. 15 March.

Sparrow, Robert. 2013. ‘War Without Virtue?’ In Strawser, Bradley J. (ed.) Killing by Remote Control. Oxford: Oxford University Press.

Stevenson, Beth. 2015. ‘RAAF begins Reaper training in USA.’ Flight Global, 23 February.

Storr, Jim. 2003. ‘A Command Philosophy for the Information Age: The Continuing Relevance of Mission Command.’ Defence Studies, 3(3): 119–129.

Strawser, Bradley J. 2010. ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.’ Journal of Military Ethics, 9(4): 342–368.

Swinford, Steven. 2011. ‘Osama bin Laden Dead: Blackout During Raid on bin Laden Compound.’ The Telegraph. 4 May.

Tran, Pierre. 2015. ‘UK, France Discuss Reaper Pilot Training.’ Defense News. 3 June.

USAF. 2019. Annex 3-60: Targeting. Curtis E. LeMay Center for Doctrine Development and Education, Maxwell Air Force Base, AL.

Velicovich, Brett, and Christopher S. Stewart. 2017. Drone Warrior: An Elite Soldier’s Inside Account of the Hunt for America’s Most Dangerous Enemies. HarperCollins.

Wagner, Markus. 2014. ‘The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems.’ Vanderbilt Journal of Transnational Law, 47: 1371.

Wright, Lawrence. 2011. The Looming Tower: al-Qaeda’s Road to 9/11. London: Penguin.
