Explainable Artificial Intelligence (XAI): A reason to believe?
Copyright (c) 2022 Law in Context. A Socio-legal Journal

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
- Articles
- Submitted: April 26, 2022
- Published: April 26, 2022
Abstract
Artificial intelligence is an alluring technology from which companies and governments hope to benefit. In many circumstances a condition of its use is that humans can understand an explanation of why an AI system acted as it did. This has encouraged the development of a field of “explainable artificial intelligence”, or XAI. Much of the work in this field has been encouraged by the US Defense Advanced Research Projects Agency (DARPA) through its XAI program, initiated in 2016. This paper argues that an underacknowledged challenge of XAI is that, unlike most traditional technology, many AI systems contain inherent uncertainty. These systems are widely described as “black boxes”: they can be characterized only through their behavior, an approach described in the literature as post-hoc, rather than through an understanding of their functioning. Explaining such systems is akin to explaining the functioning of the natural world, rather than explaining the functioning of a known technology. While extensive work has been undertaken to explain the behavior of black box AI systems, there are limits to the certainty that a post-hoc method can bring. Recognizing this is an important part of understanding the limitations of post-hoc reasoning in the use of advanced AI systems. Far simpler technologies have already caused significant social damage: the UK Post Office Horizon system and the Australian federal government's Robodebt program. Turning to advanced AI systems, two recent prestigious reports on AI and the law display an unreasoned enthusiasm for AI explainability. AI researchers should acknowledge that many advanced AI systems remain black boxes, that post-hoc explanations of these systems are inferences describing how an AI system may function, not how it does function, and that the application of these technologies should be managed accordingly. Otherwise, the search for explanations may simply become a reason to believe.
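As an illustration of the post-hoc approach described above, the following minimal sketch (not drawn from the paper; it assumes Python with scikit-learn and an arbitrary synthetic dataset) explains an opaque model by fitting an interpretable surrogate to the model's observed outputs alone. The surrogate is an inference about how the black box may function, and its fidelity score marks the limit of the certainty such a behavioral account can offer.

```python
# A post-hoc, behaviour-based "explanation" of a black box model:
# fit an interpretable surrogate to the black box's own predictions.
# The surrogate describes how the system appears to behave on sampled
# inputs, not how it actually works. (Illustrative only; the dataset
# and model choices here are arbitrary assumptions.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in for an opaque AI system whose internals we treat as unknown.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc step: learn a small decision tree that mimics the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity" measures how often the surrogate's account matches the
# black box; any shortfall is behaviour the explanation cannot capture.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```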
References
doi.org/10.1145/3173574.3174156.
2 Adamson, Greg 2020, “Can We Use Non-Transparent Artificial Intelligence Technologies for Legal Purposes?” in 2020 IEEE International Symposium on Technology and Society (ISTAS) (2020) 43-52, doi: 10.1109/ISTAS50296.2020.9462204.
3 Agarwal, Ranjan, Faiz Lalani and Misha Boutilier, 2018. “Lessons from Latif: Guidance on the Use of Social Science Expert Evidence in Discrimination Cases” 96(1) Canadian Bar Review 37.
4 Anon 1946, “Radar for Airlines” (2 May 1946) Flight 434
5 Atkinson, Robert D. 2016 “It's Going to Kill Us!” and Other Myths about the Future of Artificial Intelligence, Washington: Information Technology & Innovation Foundation.
6 Australian Human Rights Commission 2021. Human Rights and Technology: Final Report.
7 Barredo Arrieta, Alejandro, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera 2020. "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI." Information Fusion 58 (2020): 82-115
8 Baumgartner, Frank, Derek Epp and Kelsey Shoub, 2018. Suspect Citizens, What 20 Million Traffic Stops Tell Us about Policing and Race. Cambridge: Cambridge University Press,
9 Benjamins, Richard 2021 “A Choices Framework for the Responsible Use of AI” 1(1) AI and Ethics 49.
10 Bowling, Ben and Coretta Phillips, “Disproportionate and Discriminatory: Reviewing the Evidence on Police Stop and Search” (2007) 70(6) Modern Law Review 936.
11 Brown, Lesley and William Little (eds) 1993, The New Shorter Oxford English Dictionary on Historical Principles. Oxford: Clarendon Press.
12 Burtis, Michelle, Jonah B Gelbach and Bruce H Kobayashi 2017, “Error Costs, Legal Standards of Proof and Statistical Significance” 25(1) Supreme Court Economic Review 1-57.
13 Caruana, Rich, Yin Lou, Johannes Gehrke, Paul Koch and Noemie Elhadad 2015, “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Association for Computing Machinery, 2015) 1721.
14 Chopra, Samir and Laurence F White 2011, A Legal Theory for Autonomous Artificial Agents Ann Arbor: University of Michigan Press.
15 Cicilline, David 2020, “Antitrust Subcommittee Chair Cicilline Statement for Hearing on ‘Online Platforms and Market Power, Part 6, Examining the Dominance of Amazon, Apple, Facebook, and Google’”, U.S. House Judiciary Committee (29 July 2020).
16 Colombo, N. 2018, “Virtual Competition” 2(1) European Competition and Regulatory Law Review 11.
17 DARPA 2016, Explainable Artificial Intelligence (XAI). Web:
18 Dicey, AV 1979, Introduction to the Study of the Law of the Constitution. UK: Palgrave Macmillan (1889).
19 Dodd, Vikram 2009, “Jean Charles de Menezes' Family Settles for £100,000 Met Payout”, The Guardian. Web:
20 Doshi-Velez, Finale, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Kate Scott, Stuart Shieber, James Waldo, David Weinberger, Adrian Weller, and Alexandra Wood 2017, “Accountability of AI Under the Law: The Role of Explanation”. Web:
21 Dunn, Christopher and Michelle Shames 2019, Stop and Frisk in the de Blasio Era New York: Civil Liberties Union.
22 Dutilh Novaes, Catarina and Erich Reck 2017, “Carnapian Explication, Formalisms as Cognitive Tools, and the Paradox of Adequate Formalization” 194(1) Synthese 195.
23 Eberle, Edward J. 2011, “The Methodology of Comparative Law” 16(1) Roger Williams University Law Review 51
24 Epp, Charles R, Steven Maynard-Moody and Donald Haider-Markel 2014, Pulled over: How Police Stops Define Race and Citizenship Chicago: University of Chicago Press.
25 Faruqi, Osman 2020, “Compliance Fines under the Microscope”, The Saturday Paper Web:
26 Foster, Lorne, Les Jacobs and Bobby Siu 2013, “Race Data and Traffic Stops in Ottawa, 2013-2015 : A Report on Ottawa and the Police Districts” Web.
27 Fridell, Lorie A, “By the Numbers: A Guide for Analyzing Race Data from Vehicle Stops” 1
28 Gal, Michal S 2019, “Algorithms as Illegal Agreements” 34(1) Berkeley Technology Law Journal 67
29 Gilpin, Leilani H., David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal 2018, “Explaining Explanations: An Overview of Interpretability of Machine Learning” in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) 80
30 Gordon, Ian 2012, 2nd Expert Report of Ian Gordon, Filed in Haile-Michael & Ors in the Federal Court of Australia on 13/11/2012.
31 Gordon, Ian 2012, First Report of Professor Gordon (Redacted), Filed in Haile-Michael v Konstantinidis VID 969 of 2010 on 11 September 2012.
32 Goris, Indira 2009, Profiling Minorities: A Study of Stop and Search Practices in Paris. Web: Open Society Justice Initiative.
33 Graves, Robert 1960, The Greek Myths London: Penguin Books.
34 Hardin, Tim 1966, “Reason to Believe”. Columbia.
35 van Hoecke, Mark 2015, “Methodology of Comparative Legal Research” Law and Method Web:
36 Hoosen, Shaheen 2019, “Sole Witness: Should the Hearsay Rule Be Amended to Make Room for AI?”, Lawpath (7 March 2019).
37 Hopkins, Tamar 2017, Monitoring Racial Profiling: Introducing a Scheme to Prevent Unlawful Stops and Searches by Victoria Police. A Report of the Police Stop Data Working Group. Web: Police Stop Data Working Group.
38 IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems 2016, Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1 IEEE.
39 Kidd, DA 1957, Collins Latin Gem Dictionary. Glasgow: Harper Collins.
40 Kissinger, Henry A. 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence” 321(5) The Atlantic Monthly 11.
41 Kroll, Joshua A. 2018 “The Fallacy of Inscrutability” 376(2133) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 1.
42 Lipton, Zachary C 2017, “The Mythos of Model Interpretability” Web:
43 Liu, Benjamin and John Selby 2019, “Directors' Defence of Reliance on Recommendations Made by Artificial Intelligence Systems: Comparing the Approaches in Delaware, New Zealand and Australia” 34(2) Australian Journal of Corporate Law 141-159.
44 Lynskey, Orla 2017, “Regulating ‘Platform Power’” in LSE Law, Society and Economy Working Papers. London: LSE Law Department.
45 McGinnis, John O. 2018, “Accelerating AI” in Ugo Pagallo (ed) Research Handbook on the Law of Artificial Intelligence. Northampton, MA: Edward Elgar Publishing, 40.
46 Miller, Tim 2018, “Explanation in Artificial Intelligence: Insights from the Social Sciences”. Web:
47 Milli, Smitha, Ludwig Schmidt, Anca Dragan and Moritz Hardt 2019, “Model Reconstruction from Model Explanations” in Proceedings of the Conference on Fairness, Accountability, and Transparency 1-9.
48 Morgan, Charles 2019, Responsible AI: A Global Policy Framework International Technology Law Association.
49 Naughton, John 2019, “To Err Is Human – Is That Why We Fear Machines That Can Be Made to Err Less?”, The Guardian (14 December 2019). Web:
50 Nemitz, Paul 2018 “Constitutional Democracy and Technology in the Age of Artificial Intelligence” 376(2133) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20180089.
51 OED Online Web: www.oed.com.
52 Office of the Australian Information Commissioner 2014, Australian Privacy Principles Guidelines: Privacy Act 1988. Web: Australian Government Publishing Service,
53 Office of the Information Commissioner UK 2020, Explaining Decisions Made with AI Web: ICO, 21 May 2020.
54 Office of the Victorian Information Commissioner 2018, Artificial Intelligence and Privacy: Issues Paper. Web:
55 Pasquale, Frank 2015, The Black Box Society: The Secret Algorithms That Control Money and Information Cambridge: Harvard University Press,
56 Posner, Richard A 1988, “The Jurisprudence of Skepticism” 86(5) Michigan Law Review 827.
57 Selbst, Andrew D and Solon Barocas 2018, The Intuitive Appeal of Explainable Machines Web:
58 Sentas, Vicki 2014, Traces of Terror: Counter-Terrorism Law, Policing, and Race. Oxford: Oxford University Press.
59 Shiner, Michael, Zoe Carre, Rebekah Delsol and Niamh Eastwood 2018, The Colour of Injustice: ‘Race’, Drugs and Law Enforcement in England and Wales. London: Stopwatch.
60 Shiner, Michael and Paul Thornbury 2019, Regulating Police Stop and Search: An Evaluation of the Northamptonshire Police Reasonable Grounds Panel. New York: Open Society Justice Initiative
61 Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2017. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm” Web:
62 Skolnick, Jerome H and James J Fyfe 1993, Above the Law, Police and the Excessive Use of Force New York: The Free Press.
63 Snow, C. P. 1956, “The Two Cultures” (6 October 1956) New Statesman 413.
64 Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani and Fabio Vitali 2021, “A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act”. Web:
65 Stuart, Forrest 2016, Down, out and under Arrest. Policing and Everyday Life in Skid Row Chicago: University of Chicago Press.
66 Taylor, Andrew and Matthew Jones 2000, “Banks Pull Plug on Independent Energy”, Financial Times (10 September 2000) 16.
67 Tschider, Charlotte A. 2018, “Deus Ex Machina: Regulating Cybersecurity and Artificial Intelligence for Patients of the Future” 5(1) Savannah Law Review 177.
68 Turek, Matt 2018, “Explainable Artificial Intelligence (XAI)”, DARPA Defense Advanced Research Projects Agency Web:
69 Wachter, Sandra, Brent Mittelstadt and Chris Russell 2017, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”. Web:
70 Wiener, N. 1961, Cybernetics, or Control and Communication in the Animal and the Machine (2nd ed.). Cambridge, Mass.: The MIT Press.
71 Wiener, N. 1953, Ex-Prodigy: My Childhood and Youth. Cambridge, Mass.: The MIT Press.
72 Wiener, N. 1954, The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.
73 Wiener, Norbert 1960, “Some Moral and Technical Consequences of Automation” 131(3410) Science 1355
74 Wortley, Scot 2019, Halifax, Nova Scotia: Street Checks Report Halifax: Nova Scotia Human Rights Commission.