Automated Weaponry and Artificial Intelligence: Implications for the Rule of Law

Supreme and Federal Court Judges' Conference, Perth

Justice Melissa Perry* 22-25 January 2017

1. Introduction

On 28 July 2015, over a thousand experts in autonomous technologies wrote an open letter to the United Nations.[1] The letter warned of the dangers posed by lethal autonomous weapons (LAWS). These weapons select and engage human targets independently of human intervention, replacing human soldiers with machines. Artificial intelligence technology, we are told, "has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades …". The message conveyed is that today's science fiction may be tomorrow's reality and that limits must be imposed on that technological progress to ensure that such technology is applied only so as to benefit humanity.

The growth and development of new technologies, even in our daily lives, bears out that message. We are already living in a technological world beyond our imaginings when we were young. Yet, as the letter warns, with such change comes responsibility. The challenge for lawyers is to ensure that the development and use of new technologies, especially those designed to maim and kill, occurs lawfully and in a manner that ensures transparency and accountability. No less is required for compliance with fundamental principles of the rule of law endorsed in the global Sustainable Development Goals (SDGs)[2] by which member States of the UN resolved in September 2015 to "… promote the rule of law at the national and international levels …".[3]

Against that background, I will start by explaining what we mean by automated systems, and the contexts and manner in which automated weapons are currently employed or are in the process of development. I will then outline existing principles governing the use and development of automated and potentially autonomous weapons systems, before identifying a number of key issues surrounding their use and development. As Justice Logan has flagged, this provides a context in which the issue of accountability, in particular for the use of such systems under criminal and defence disciplinary law, may arise in domestic courts, as well as accountability in international fora.

2. What is an automated system?

Automated weapons are so described because they employ automated systems. An automated system is, broadly speaking, a computerised process which uses coded logic or algorithms to make a decision or part of a decision, or to make recommendations. But not all machines are created equal.[4] Automated decision-making systems can be used across a broad spectrum of government, civilian and military applications. They also vary considerably both in their processing capacities and in the extent to which their operational capabilities are autonomous of, or reliant upon, human input. Decisions can be wholly or partially automated, as the process of automation is "characterized by a continuum of levels rather than as an all-or-none concept".[5] Some of these systems have human involvement at the point of making a decision; others operate autonomously without further human involvement beyond the programming stage. In other words, in some systems humans are "kept in the loop" of decision-making to varying degrees while, in others, humans are "outside the loop", as it is colloquially known.
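The distinction can be illustrated schematically. The following sketch, in Python, contrasts a system in which a human makes the final decision with one whose output simply is the decision; the names, data and logic are invented for exposition and model no real weapons or government system.

```python
# Purely illustrative sketch of the automation continuum; all names and
# logic are invented and describe no real system.

def system_output(data: dict) -> str:
    """The automated component: coded logic applied to inputs."""
    return "engage" if data["threat_score"] >= 0.9 else "hold"

def human_in_the_loop(data: dict, human_decision: str) -> str:
    """The system only recommends; a human makes the final decision."""
    recommendation = system_output(data)
    print(f"System recommends: {recommendation}")
    return human_decision  # the human may accept or reject the recommendation

def human_out_of_the_loop(data: dict) -> str:
    """No human involvement beyond the programming stage:
    the system's output simply is the decision."""
    return system_output(data)

reading = {"threat_score": 0.95}
print(human_in_the_loop(reading, "hold"))  # human countermands: "hold"
print(human_out_of_the_loop(reading))      # machine decides: "engage"
```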

Precursors to automated weapon systems which detect and react to incoming "threats" without the need for a human to "pull the trigger" have existed for generations. One example is land mines. These were first used widely in the First World War but are now prohibited under international law due, among other things, to their indiscriminate nature.[6] Analogous but more complex issues attend the potential development of lethal autonomous robotics with artificial intelligence. Robotics of this kind would be programmed to make independent and complex decisions, to adapt to their environment and to act on those decisions – i.e. to independently select and attack targets.

There is considerable interest in and funding of relevant research into this kind of weaponry.[7] This has prompted increasing international concern and debate. For example, States party to the Convention on Certain Conventional Weapons (CCW)[8] have met on a number of occasions in ongoing discussions concerning the issues that lethal autonomous weapons pose in terms of their legality and potential threat to the rule of law.[9]

So far as we know, lethal autonomous weapons do not yet exist. Nonetheless, weapons systems with significant autonomy are already in use. Their use tends to be in defensive rather than offensive operations in simple, static and predictable environments. For example, a number of defence systems are currently employed with human-supervised autonomous modes which detect, track and guide weapons to intercept targets so as to protect civilian areas against short range rocket attacks.[10] However, Australian unmanned systems are not currently fitted with the capability to deliver lethal force. An example of the use of unmanned platforms by the Australian Defence Force is the Heron-1 UAS – a remotely piloted aircraft deployed by the ADF in Afghanistan to provide commanders with real-time intelligence, surveillance and reconnaissance information.[11] Nonetheless, a Parliamentary Committee recommended in 2015 that the ADF procure armed unmanned platforms subject to the safeguards that they be used only by ADF personnel, in accordance with international law and with appropriate transparency measures.[12] There are also reports of ADF personnel training in the US with the Reaper drone.[13] The Reaper is weaponised but retains a "human in the loop", meaning that ultimate control rests with a human operator.

Technology utilising autonomous systems is not limited to defence, but is widely used in civilian contexts. Indeed, this is a field where there may be a ready cross-over between technological advances initially developed for civilian use, on the one hand, and those for the military sphere or law enforcement applications, on the other.[14] As a simple example, one can readily envisage that technology currently in development for self-driving cars might be employed to improve the capabilities of remotely operated ground combat vehicles or other unmanned platforms.[15] In other words, the implications of developments in one sphere for the other cannot be left out of the equation.

In common with automated weapons systems, automated decision-making systems in non-military contexts vary in their processing capacities and in the extent to which their operational capabilities are autonomous of, or reliant upon, human input beyond the software design stage. This technology is not only employed in the private sector. Governments in many countries now extensively employ automated systems to make decisions that impact daily on the rights of individuals. These range, for example, from systems assessing a claimant's entitlement to welfare benefits or assessing tax, to screening devices at an airport which decide whether a person should be permitted to enter the country or poses a security risk. A timely reminder of the kinds of challenges which these automated systems may raise is afforded by the recent attention on Centrelink's automated debt recovery system[16] (described by one commentator somewhat colourfully as a "Weapon of Math Destruction"[17]). This example also reminds us that the implementation of automated processes has potentially serious human implications even outside the military sphere.
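One reported source of error in that system was the averaging of income reported annually to the tax office evenly across all 26 fortnights of the year, an approach which can wrongly suggest an overpayment to a person who in fact earned all of their income in only part of the year. The following is a simplified, hypothetical sketch of that failure mode; the figures and rules are invented for illustration and are not drawn from the actual system.

```python
# Hypothetical sketch of how income averaging can produce a false debt.
# All figures and rules are invented; this is not the actual system.

FORTNIGHTS = 26
INCOME_FREE_LIMIT = 500.0  # assumed fortnightly income cut-off

# A person who earned $15,600 across 12 working fortnights, claimed no
# benefit then, and was correctly paid in the 14 fortnights without work.
actual_income = [1300.0] * 12 + [0.0] * 14

# The flawed step: spreading annual income evenly across every fortnight.
averaged = sum(actual_income) / FORTNIGHTS  # 600.0 per fortnight

truly_over_limit = sum(1 for i in actual_income if i > INCOME_FREE_LIMIT)
flagged_over_limit = FORTNIGHTS if averaged > INCOME_FREE_LIMIT else 0

print(truly_over_limit)    # 12: fortnights genuinely above the cut-off
print(flagged_over_limit)  # 26: averaging also flags the 14 fortnights
                           # in which benefits were correctly paid
```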

Other significant questions arise. These include the risk that discretionary or evaluative decisions may be replaced with rigid criteria of the kind necessary to enable such automated systems to be utilised,[18] and the risk that algorithms which learn from patterns of behaviour (so-called machine learning) may introduce gender, racial and other biases into decision-making.[19] The risks of machine-based learning were all too vividly illustrated when Microsoft released a software program, Tay, on the internet to learn how humans converse and to talk with them. Within 24 hours, however, Tay was tweeting racial slurs and calling for genocide.[20]
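The mechanism is simple to demonstrate. The following toy sketch, in which the scenario and data are wholly invented, shows how a system that merely learns from patterns in past decisions will faithfully reproduce any discrimination embedded in those decisions, without any bias being expressly coded.

```python
# A toy illustration of how machine learning can absorb bias from
# historical data; the scenario and data are wholly invented.

from collections import Counter, defaultdict

# Hypothetical past decisions in which equally qualified applicants
# from group "B" were systematically rejected.
history = [
    ("A", "qualified", "approve"), ("A", "qualified", "approve"),
    ("A", "unqualified", "reject"),
    ("B", "qualified", "reject"), ("B", "qualified", "reject"),
    ("B", "unqualified", "reject"),
]

# "Training": record past outcomes for each (group, merit) combination.
outcomes = defaultdict(Counter)
for group, merit, decision in history:
    outcomes[(group, merit)][decision] += 1

def predict(group: str, merit: str) -> str:
    """Follow the most common historical outcome for this profile."""
    return outcomes[(group, merit)].most_common(1)[0][0]

print(predict("A", "qualified"))  # approve
print(predict("B", "qualified"))  # reject: the bias was never coded,
                                  # it was learned from the data
```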

It follows that, while automated and autonomous weaponry systems raise most acutely the moral, ethical and legal issues associated with such technologies, they also provide a prism through which aspects of a broader debate as to the proper limits of computerised decision-making may be raised and considered.

3. Existing legal framework

Currently no international treaties deal specifically with automated or autonomous weapon systems. Nonetheless, while weaponry of these kinds was conceived only after the rules of international humanitarian law had come into existence, those rules govern their development and use during armed conflicts. Any other conclusion, as the International Court of Justice held in its advisory opinion on the legality of the threat or use of nuclear weapons, "would be incompatible with the intrinsically humanitarian character of the legal principles in question which permeates the entire law of armed conflict and applies to all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future."[21] Importantly, these obligations include that imposed by Art 36 of Additional Protocol I to the Geneva Conventions, which requires parties to determine whether a new weapon or method of warfare would be prohibited under the Protocol or any other rule of international law.[22]

Relevant also are international human rights. While human rights obligations exist irrespective of the existence of armed conflict,[23] during hostilities (and depending upon the nature of the right) their content may be impacted upon by international humanitarian law. Consequently, a consideration of whether the taking of a life is arbitrary,[24] for example, will fall to be determined by international humanitarian law as the applicable law during armed conflict.[25] If, in other words, the taking of a life during hostilities would not infringe international humanitarian law, it would not be arbitrary.[26] (I leave to one side for present purposes complex questions as to the extent of a State's obligations to secure human rights beyond the territorial boundaries of that State.[27])

I will turn now briefly to consider a number of key issues associated with the use of automated weapons systems.

4. Key issues

a. The lawfulness of automated weapons systems in international humanitarian law

First, to what extent is the use of robotics in warfare lawful?

Subject to the setting of proper limits, technologies may be, and are, employed to achieve legitimate military advantages and may even reduce casualties on both sides. For example, it has been argued that "the greater intelligence, surveillance and reconnaissance persistence that can be provided by current unmanned systems can facilitate better target discrimination and lead to less incidental injury to civilians and damage to civilian property."[28] It has also been suggested that the remote operators of unmanned systems may be less likely to resort to greater force to address threats, as opposed to troops in the battlefield who are personally at risk of harm.[29] The use of machines with autonomy short of full autonomy may also help to save lives in a manner preserving human dignity. An example given is where the rifles of snipers undertaking a hostage rescue might be connected by a computer which releases force only when all of the snipers have a clear shot at the armed hostage takers.[30]
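In logical terms, the interlock in that example is simply a conjunction: force is released only if every operator both has a clear shot and consents. The following minimal sketch makes the point; the classes and rules are invented for illustration and this is not a real fire-control system.

```python
# Minimal sketch of the coordinated-release example described above.
# Illustrative only: invented classes and rules, not a real system.

from dataclasses import dataclass

@dataclass
class SniperStation:
    operator: str
    has_clear_shot: bool    # reported from the operator's position
    consents_to_fire: bool  # the human decision stays in the loop

def release_authorised(stations: list[SniperStation]) -> bool:
    """Force is released only when every operator reports a clear shot
    and consents; any single 'no' holds the release."""
    return all(s.has_clear_shot and s.consents_to_fire for s in stations)

stations = [
    SniperStation("alpha", True, True),
    SniperStation("bravo", True, True),
    SniperStation("charlie", False, True),  # no clear shot yet
]
print(release_authorised(stations))  # False: the system waits
```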

What then of lethal autonomous robotics – machines which identify and select human targets and implement their targeting decisions independently of human intervention? Could machines of this character ever be developed and produced lawfully? There is general acceptance that caution and some form of control are needed beyond that already afforded under international law.[31] Controls that have been suggested include slowing their proliferation, limiting such weapons to defensive applications, and limiting their firepower.[32]

But there is also a case that even these measures may not suffice and that, by their nature, lethal autonomous robots could not meet the requirements of international law. Even if a decision by a human to take a life might be lawful under international humanitarian law, would such a decision be inherently arbitrary if entrusted to a machine? Would it be acceptable under Additional Protocol I to the Geneva Conventions, which provides that the acceptability of such systems should be examined according to the principles of humanity and the dictates of public conscience?[33] Such concerns led the European Parliament, for example, on 27 February 2014 to adopt a non-binding resolution on the use of armed drones which included support for a ban on "the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention."[34]

The twin rules of international humanitarian law of distinction and proportionality already present significant barriers to the use of lethal autonomous robotics. The rule of distinction, prohibiting the targeting of non-combatants or civilians,[35] requires complex assessments to be made of infinitely variable circumstances, including intentions, emotions and context, which demand a complete "situational awareness". The complexity of modern armed conflict where, for example, combatants are not necessarily identified by uniform and civilians may be taking a direct role in hostilities,[36] only adds to the potential ambiguity of determining a person's role, if any, in hostilities at any given point in time. Equally, determining whether civilian losses would be proportionate to the military objectives sought to be achieved requires an abstract and qualitative assessment made on a uniquely case-by-case analysis. Yet as one commentator has observed, "[t]hese are far from [the] algorithmic specifications for decision-making and action" which drive automated weaponry.[37]

The use of automated systems where humans are left out of the loop would also impact on what might be termed situational innovation. By this I mean the human capacity to respond by thinking "outside the square" in lateral and imaginative ways in response to particular circumstances. In this field, this may mean achieving not merely a lawful result, but a more compassionate one. Let me return to the movie "Eye in the Sky" to illustrate the point. The British and US authorities were advised that it was lawful to order an attack by a drone on a cell about to execute a suicide bombing. The likely killing of a little girl selling bread from a stall proximate to the attack was considered proportionate to the harm that would be caused if the suicide bombing were permitted to proceed. However, the agent on the ground took steps to avoid the child's death by bribing a local child to buy the last of her bread in the hope that she would leave before the assault.

b. The "accountability gap"

In the second place, individual responsibility is key to ensuring accountability for violations of criminal law, both international and domestic, and of military law and is therefore critical to the rule of law. Plainly a machine cannot be held responsible if humans are taken "out of the loop". Upon whom then would responsibility fall? Would it lie with the software programmers, those who design and produce the machines, military commanders, or political leaders? The existing paradigm does not necessarily assist. Responsibility depends upon the chain of command within military ranks. Yet under Protocol I to the Geneva Conventions, a commander is not absolved from penal or disciplinary responsibility for a subordinate's violation if she or he "knew or should have known that the individual planned to commit a crime" and failed to take all feasible measures to prevent the violation.[38] Among other things, this would require a sufficient level of understanding by the commander of the computer programming to attract liability under criminal law or military disciplinary law for its deployment in a given situation.[39]

c. Humans "in the loop" and meaningful human control

Finally, is it a sufficient answer to these challenges to ensure human involvement in the final decision on whether a person is a lawful target? While there is an emerging consensus in favour of a requirement that there must be "meaningful human control" over such decisions in order to comply with international law, there is a lack of clarity as to precisely what this requires.[40] Does this mean, for example, that humans must remain in the loop of decision-making, or is it sufficient if a human is armed with the capacity to override the machine (colloquially called "on the loop")?

The indications are that it is generally accepted that humans must remain in the loop. At the CCW meetings of experts on autonomous weapons systems in 2014 and 2015, a number of States indicated that human control was necessary.[41] The US Department of Defense has indicated that current policy is that "autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force".[42] The Australian Defence Force has taken the approach that it will "embrace semi-autonomous systems where that capacity can save lives or reduce expense … but where lethal force is involved a trained operator will remain responsible for the application of that force".[43] Similarly, the United Kingdom has taken the position that the operation of weapons systems will always be under human control, and that where a defensive system can operate in an automatic mode, there must be a person involved in setting the parameters of any such mode.[44]

In determining what is sufficient or "meaningful", it has been suggested that the capacity for human override or intervention to effectively reduce the risks of violations of international law would be largely illusory given that the processing speed of these machines is often measured in nanoseconds.[45] Further, even if delays were programmed into a system for "on the loop" intervention by humans, there is the added dimension that humans tend to trust the accuracy of automated systems in preference to their own judgment. This phenomenon, sometimes called automation bias, may impact upon whether human operators decide to countermand machine recommendations or decisions.[46]
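The timing problem can be put schematically as follows; the figures in this sketch are invented and serve only to illustrate the orders of magnitude involved.

```python
# Schematic sketch, with invented timings, of why an "on the loop"
# override can be illusory: if the machine acts faster than a human can
# perceive and react, the capacity to countermand exists only on paper.

HUMAN_REACTION_TIME_S = 1.0  # assumed time to perceive, assess and act

def override_is_meaningful(pause_before_action_s: float) -> bool:
    """A human can countermand the machine only if the programmed pause
    before the machine acts exceeds the human reaction time."""
    return pause_before_action_s > HUMAN_REACTION_TIME_S

print(override_is_meaningful(1e-9))  # False: nanosecond-scale decision
print(override_is_meaningful(0.5))   # False: even a built-in delay may
                                     # be shorter than human reaction
print(override_is_meaningful(5.0))   # True: only a deliberate, longer
                                     # pause makes override possible
```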

These and other concerns led the Special Rapporteur on extrajudicial, summary or arbitrary executions in 2013 to warn of the risks that the development of such robotics posed to the right to life, the rule of law and international security. He called upon "all involved – States, international organizations, and international and national civil societies – to consider the full implications of embarking on this road"[47] and to do so while it is still possible.

5. Conclusion

The imperative for the global community to define with precision the limits which constrain the development and deployment of automated and autonomous weapons is clear. In line with the emphasis in the United Nations Sustainable Development Goals on the rule of law, this requires the construction of a more precise legal framework within which such weapons may be developed, produced and used in a manner which will protect and preserve humanity.


* LL.B (Hons)(Adel), LL.M, PhD (Cantab), FAAL. This paper is a revised and updated version of a speech presented at the International Association of Women Judges' 13th Biennial Conference on 27 May 2016. The author gratefully acknowledges the assistance of her associate, Kate Mitchell, in researching the present paper and in providing helpful feedback. The views expressed in this paper are the reflections only of the author and do not represent the views of the Australian Defence Force or Australia.

[1] Gibbs, Samuel, Musk, Wozniak and Hawking Urge Ban on Warfare AI and Autonomous Weapons (2015) the Guardian <http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons>.

[2] Transforming our world: the 2030 Agenda for Sustainable Development (adopted 25 September 2015, UNGA Res A/RES/70/1) available <http://www.un.org/ga/search/view_doc.asp?symbol=A/RES/70/1&Lang=E> (accessed 24 January 2017). The SDGs were adopted at the United Nations Sustainable Development Summit in New York on 25-27 September 2015. The objective was to produce a set of universal goals that meet the urgent environmental, political and economic challenges facing our world. The SDGs replaced the Millennium Development Goals, which started a global effort in 2000 to tackle the indignity of poverty: <http://www.undp.org/content/undp/en/home/sustainable-development-goals/background.html> (accessed 24 January 2017).

[3] Transforming our world: the 2030 Agenda for Sustainable Development (adopted 25 September 2015, UNGA Res A/RES/70/1), Goal 16.

[4] W Marra and S McNeil, 'Understanding "The Loop": Regulating the Next Generation of War Machines' (2012) 36(3) Harvard Journal of Law and Public Policy 1139 at 1149.

[5] Raja Parasuraman and Victor Riley, 'Humans and Automation: Use, Misuse, Disuse, Abuse' (1997) 39(2) Human Factors 230, 232.

[6] Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction (adopted 18 September 1997, entered into force 1 March 1999) 2056 UNTS 211 (Ottawa Convention) art 1.

[7] Senate Standing Committee on Foreign Affairs, Defence and Trade, 'The potential use by the Australian Defence Force of unmanned air, maritime and land platforms' (25 June 2015) available <http://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Foreign_Affairs_Defence_and_Trade/Defence_Unmanned_Platform/Report/c05> (accessed 23 May 2016) (Australian Senate Report (2015)) at 44 [5.25].

[8] Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or to have Indiscriminate Effects (adopted 10 October 1980, entered into force 2 December 1983) 1342 UNTS 137.

[9] See '2016 CCW Meeting of Experts on LAWS' (United Nations Office at Geneva, 11-15 April 2016).

[10] Australian Senate Report (2015) at 43 [5.24].

[11] Department of Defence, Submission no. 23 to the Senate Foreign Affairs, Defence and Trade References Committee Inquiry: Use of Unmanned Platforms by the ADF (2015) available <http://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Foreign_Affairs_Defence_and_Trade/Defence_Unmanned_Platform/Submissions> (accessed 21 September 2016) (Defence Senate Inquiry Submission (2015)) at p.8; Royal Australian Airforce, 'Heron' available <http://www.airforce.gov.au/Technology/Aircraft/Heron/?RAAF-U3cQ7cNqUl7hOR9akHK4KUQKnbbWmZnX> (accessed 21 September 2016).

[12] See Foreign Affairs, Defence and Trade References Committee, Use of unmanned air, maritime and land platforms by the Australian Defence Force, Parliament of Australia, Canberra, 2015, Recommendation 2 available <http://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Foreign_Affairs_Defence_and_Trade/Defence_Unmanned_Platform/Report/b02> (accessed 24 January 2017).

[13] See J Mugg, 'DWP 2016: unmanned systems and the future ADF', The Strategist (16 March 2016): see <https://www.aspistrategist.org.au/dwp-2016-unmanned-systems-and-the-future-adf/> (accessed 24 January 2017).

[14] As observed in 2013: Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (9 April 2013), A/HRC/23/47, at [29].

[15] Government of Canada, '"Food for Thought" Paper: Mapping Autonomy' (Paper presented at the 2016 CCW Meeting of Experts on LAWS, 11-15 April 2016).

[16] See, for example, Tom McIlroy, 'Low-income earners slam automated debt system' (Sydney Morning Herald, 3 January 2017) page 5; Peter Martin, 'How Centrelink unleashed a weapon of math destruction' (Sydney Morning Herald, 7 January 2017) available <http://www.smh.com.au/comment/how-centrelink-unleashed-a-weapon-of-math-destruction-20170105-gtmsnz.html> (accessed 17 January 2017). The difficulties which Centrelink's system is said to have caused include problems with the online form and incorrect assessments said to have been produced by the automated system.

[17] Peter Martin, 'How Centrelink unleashed a weapon of math destruction' (Sydney Morning Herald, 7 January 2017) available <http://www.smh.com.au/comment/how-centrelink-unleashed-a-weapon-of-math-destruction-20170105-gtmsnz.html> (accessed 17 January 2017) (adopting the title of the book, Weapons of Math Destruction (2016), by data scientist Cathy O'Neil).

[18] See further Perry, M, and Smith, A, "iDecide: the legal implications of automated decision-making", paper presented at the Cambridge Centre for Public Law Conference 2014: Process and Substance in Public Law, University of Cambridge, 15-17 September 2014 (available <http://www.fedcourt.gov.au/publications/judges-speeches/justice-perry/perry-j-20140915>); for an example of how this may translate into a jurisdictional error, see e.g. Salama v Minister for Immigration and Border Protection [2017] FCA 2.

[19] Hannah Devlin, 'Discrimination by Algorithm: scientists devise test to detect AI bias' (Guardian, 19 December 2016) available <https://www.theguardian.com/technology/2016/dec/19/discrimination-by-algorithm-scientists-devise-test-to-detect-ai-bias> (accessed 17 January 2017).

[20] See e.g. BBC News, "Microsoft chatbot is taught to swear on Twitter", Jane Wakefield, Technology Reporter, 24 March 2016 available <http://www.bbc.com/news/technology-35890188> (accessed 24 January 2017).

[21] Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion [1996] ICJ Rep 226 (Nuclear Weapons Advisory Opinion) at [86].

[22] International Committee of the Red Cross (ICRC), Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, 1125 UNTS 3.

[23] Nuclear Weapons Advisory Opinion at [25]; Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion [2004] ICJ Rep 136 at [106]; DRC v Uganda [2005] ICJ Rep 168 at [216].

[24] International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 (ICCPR) art 6(1); European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14 (adopted 4 November 1950, entered into force 3 September 1953) ETS 5 (ECHR) art 2.

[25] Nuclear Weapons Advisory Opinion at [25].

[26] C Heyns, D Akande, L Hill-Cawthorne and T Chengeta, 'The Right to Life and the International Law Framework Regulating the Use of Armed Drones in Armed Conflict or Counter-Terrorism Operations', submission to the United Kingdom Joint Committee on Human Rights Parliamentary Inquiry on The Government's policy on the use of drones for targeted killing (10 December 2015) at [6].

[27] See by way of example the recent decision of the Court of Appeal in Al-Saadoon v Secretary of State for Defence [2016] EWCA Civ 811 concerning the reach of the European Convention on Human Rights into a war zone (relevantly Iraq).

[28] Henderson, I, Submission 20 at p. 3 (cited in the Australian Senate Report (2015) at [5.10]).

[29] Ibid. See also K Anderson, D Reisner and M Waxman, 'Adapting the Law of Armed Conflict to Autonomous Weapon Systems' (2014) 90 International Law Studies 386, 393.

[30] Comments by C Heyns, United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Panel on Human Rights and LAWS, Informal Meeting of Experts on LAWS: Convention on Conventional Weapons, 16 April 2015, Geneva at p. 5.

[31] Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (9 April 2013), A/HRC/23/47 at 6 [32].

[32] Australian Senate Report (2015) at 45 [5.30].

[33] Australian Senate Report (2015) at 45 [5.28]-[5.29].

[34] European Parliament, Resolution on the use of armed drones (2014/2567) para 2(d) <http://www.europarl.europa.eu/sides/getDoc.do?type=TA&language=EN&reference=P7-TA-2014-0172> (accessed 27 May 2016) (emphasis added).

[35] Protocol I additional to the Geneva Conventions, 1977, arts 51 and 57.

[36] Written Evidence from C Heyns, D Akande, L Hill-Cawthorne and T Chengeta (DRO0024), 'The Right to Life and the International Law Framework Regulating the Use of Armed Drones in Armed Conflict or Counter-Terrorism Operations' at [13], United Kingdom Parliamentary Inquiry into Drone Policy.

[37] L Suchman, "Situational awareness: Deadly bioconvergence at the boundaries of bodies and machines", (2015) vol. V Media Tropes eJournal 1 available <http://www.mediatropes.com/index.php/Mediatropes/article/view/22126/17971>.

[38] Protocol I, additional to the Geneva Conventions, 1977, art 86(2). See also ibid, art 87.

[39] Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (9 April 2013), A/HRC/23/47 at 15 [78].

[40] See for example the research compiled by CCW Meeting of Experts: 'Convention on Certain Conventional Weapons: Background- Lethal Autonomous Weapons Systems' <http://www.unog.ch/80256EE600585943/%28httpPages%29/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument> (accessed 23 May 2016); Australian Senate Report (2015) at 46 [5.32].

[41] See Human Rights Watch and Harvard International Human Rights Clinic, 'Killer Robots and the Concept of Meaningful Human Control' (April 2016) p. 7.

[42] US Department of Defense, Directive: Autonomy in Weapon Systems, No. 3000.09 (21 November 2012).

[43] Defence Senate Inquiry Submission (2015) 6; see further Australian Senate Report (2015) Chapter 5 [5.34]-[5.35].

[44] House of Lords and House of Commons Joint Committee on Human Rights, 'The Government's policy on the use of drones for targeted killing, Second report of Session 2015-2016' (10 May 2016) <http://www.publications.parliament.uk/pa/jt201516/jtselect/jtrights/574/57402.htm> (accessed 23 May 2016). UK Parliamentary Under-Secretary of State for Foreign and Commonwealth Affairs (Alistair Burt), 'Lethal Autonomous Robotics'  (Hansard Debates, 17 June 2013) Column 732, 734 <http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm130617/debtext/130617-0004.htm> (accessed 23 May 2016).

[45] Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (9 April 2013), A/HRC/23/47 at 8 [41].

[46] Linda Skitka, 'Does automation bias decision-making?' (1999) 51 International Journal of Human-Computer Studies 991, 992-993; Danielle Citron, 'Technological Due Process' (2008) 85 Washington University Law Review 1249, 1272.

[47] Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns (9 April 2013), A/HRC/23/47 at 6 [30].
