Legal Studies in International, European and Comparative Criminal Law, Volume 4

Serena Quattrocolo
Artificial Intelligence, Computational Modelling and Criminal Proceedings
A Framework for A European Legal Discussion

Editor-in-Chief
Stefano Ruggeri, Department of Law, University of Messina, Messina, Italy

Editorial Board Members
Chiara Amalfitano, University of Milan, Milan, Italy
Lorena Bachmaier Winter, Faculty of Law, Complutense University of Madrid, Madrid, Spain
Martin Böse, Faculty of Law, University of Bonn, Bonn, Germany
Eduardo Demetrio Crespo, University of Castile-La Mancha, Toledo, Spain
Giuseppe Di Chiara, Law School, University of Palermo, Palermo, Italy
Alberto Di Martino, Sant'Anna School of Advanced Studies, Pisa, Italy
Sabine Gleß, University of Basel, Basel, Switzerland
Krisztina Karsai, Department of Criminal Law, University of Szeged, Szeged, Hungary
Vincenzo Militello, Dipartimento di Scienze Giuridiche, della Società, University of Palermo, Palermo, Italy
Oreste Pollicino, Comparative Public Law, Bocconi University, Milan, Italy
Serena Quattrocolo, Department of Law, University of Piemonte Orientale, Alessandria, Italy
Tommaso Rafaraci, Department of Law, University of Catania, Catania, Italy
Arndt Sinn, Faculty of Law, University of Osnabrück, Osnabrück, Germany
Francesco Viganò, Bocconi University, Milan, Italy
Richard Vogler, Sussex Law School, University of Sussex, Brighton, UK

The main purpose of this book series is to provide sound analyses of major developments in national, EU and international law and case law, as well as insights into court practice and legislative proposals in the areas concerned. The analyses address a broad readership, such as lawyers and practitioners, while also providing guidance for courts. In terms of scope, the series encompasses four main areas, the first of which concerns international criminal law and especially international case law in relevant criminal law subjects. The second addresses international human rights law, with a particular focus on the impact of international jurisprudence on national criminal law and criminal justice systems, as well as their interrelations. In turn, the third area focuses on European criminal law and case law; here, particular weight will be attached to studies on European criminal law conducted from a comparative perspective. The fourth and final area presents surveys of comparative criminal law inside and outside Europe. By combining these various aspects, the series especially highlights research aimed at proposing new legal solutions, while focusing on the new challenges of a European area based on high standards of human rights protection. As a rule, book proposals are subject to peer review, which is carried out by two members of the editorial board in anonymous form.
More information about this series at http://www.springer.com/series/15393

Serena Quattrocolo
Artificial Intelligence, Computational Modelling and Criminal Proceedings
A Framework for A European Legal Discussion

Serena Quattrocolo
Department of Law, Politics, Economics and Social Science
University of Eastern Piedmont
Alessandria, Italy

ISSN 2524-8049    ISSN 2524-8057 (electronic)
Legal Studies in International, European and Comparative Criminal Law
ISBN 978-3-030-52469-2    ISBN 978-3-030-52470-8 (eBook)
https://doi.org/10.1007/978-3-030-52470-8

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © Maria Isabel Ruggeri

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To the beloved memory of Daiana, the most curious, lively and unlucky of my students: may your intellectual enthusiasm live in those who read this book.

Foreword: Let a Lawyer Write of AI

There are three different ways in which we can appreciate the role of Artificial Intelligence (AI) and of further emerging technologies in the field of criminal law. AI may entail either new forms of criminal responsibility, or new loopholes in the criminal law field, or new challenges for the rights of criminal defendants to a fair trial. The sequence of this list follows a chronological order: scholars first debated, since the early 1990s, whether AI could ever be considered an accountable agent in the criminal law field, due to its mens rea, i.e. the mental element of an offence. A decade later, in the mid-2000s, scholars increasingly focused on the material elements of a crime, in order to determine to what extent AI could ever trigger a new generation of actus reus. Finally, in the mid-2010s, the attention progressively shifted to AI either as a collector of evidence in criminal investigations, or as a human substitute in the judicial decision-making process, or as a proxy of the principle of fair trial in criminal proceedings, thereby affecting the discretion of courts.

The first kind of debate, on the criminal personality of AI, started with the father of "robotics", Isaac Asimov, in 1941.
In the legal domain, this kind of debate became particularly popular fifty years later, when some brilliant scholars, such as Justice Curtis Karnow and Professor Lawrence Solum, discussed new forms of accountability and personhood for "distributed artificial intelligence", as in Karnow's 1996 paper, or for artificial intelligences, i.e. "AIs", as in Solum's seminal work from 1992. Two decades later, the advancement of technology turned this academic discussion into a hot political and ideological debate. A crucial role in this shift was played by the European Parliament's proposal of February 2017, in which the EU institution invited the European Commission "to explore, analyze and consider the implications of all possible legal solutions, (including) … creating a specific legal status for robots in the long run". Some reckon that AI can actually be considered "aware" and capable of fulfilling the mental requirements of what criminal lawyers discuss as intentional and negligent offences (e.g. Gabriel Hallevy's thesis). Others claim that cognition and awareness, volition and intention or reason responsiveness can be attributed to an AI system in a way that is meaningful for criminal lawyers, although no reference is necessary to human-like properties (e.g. Giovanni Sartor's stance). Most scholars note, however, that nobody would bring AI before judges today in order to declare AI "guilty" in criminal courts. This is not to say that further forms of legal agenthood for AI make no sense, for example in such fields as business and corporate law. Yet it seems fair to admit, according to the phrasing of the European Parliament, that the criminal personality of AI, if ever, could only develop "in the long run".

The second kind of debate, on a new generation of AI actus reus, started with the second Gulf War in Iraq (and in Pakistan), in the mid-2000s. Years later, in the 2010 Report to the UN General Assembly, the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, urged the then Secretary-General Ban Ki-moon to convene a group of experts in order to address "the fundamental question of whether lethal force should ever be permitted to be fully automated". A hot debate revolved around what AI can justly do in war (ius in bello) and when and how resort to war via AI can be justified (ius ad bellum or bellum iustum). In addition, scholars examined a wider impact of AI on a tenet of criminal law, which is traditionally summed up with the idea that "everything which is not prohibited is allowed". Similar to what occurred with the new generation of computer crimes in the early 1990s, scholars have progressively stressed that AI may soon trigger a new generation of actus reus, thereby affecting the principle of legality as enshrined in, for example, Article 7 of the 1950 European Convention on Human Rights. A recent literature review has proposed five areas of foreseeable threats of AI in criminal law: (i) new offences against the person, e.g. harassment; (ii) sexual offences; (iii) trafficking, selling, buying and possessing harmful or dangerous banned drugs; (iv) commercial insolvency and further issues of financial markets, such as price-fixing and collusion; and (v) theft and non-corporate fraud. As I like to say, these new scenarios for AI crimes are limited only by human imagination. In the field of computer crimes, legal systems started amending their own laws from the early 1990s, e.g.
the Italian Law 547 of December 1993; then the international legislator intervened, so as to formalise such legal experiences through the general provisions of the 2001 Budapest Convention. I think something similar will happen with a new generation of AI crimes.

The third kind of debate, on how the rights of criminal defendants to a fair trial can be affected by AI, has increasingly drawn the attention of experts in the 2010s to the role that AI may play as an evidence collector, as a human substitute or as a proxy of the fair trial principle in criminal proceedings. The US Supreme Court's case law is particularly instructive. In United States v. Jones (565 U.S. 400 (2012)) and then in Carpenter v. United States (585 U.S. _ (2018)), the rulings concerned the protection of the "reasonable expectation of privacy" vis-à-vis the collection of criminal evidence through the use of GPS and cell phone location techniques, respectively. In 2017, criminal proceedings' safeguards in an age of AI ignited a popular debate in journals and newspapers because of the use of risk assessment programs by the court in the Loomis v. Wisconsin case (881 N.W.2d 749 (Wis. 2016)). As the New York Times was keen to inform us on 1 March 2017, Loomis' claim was that "his right to due process was violated by a judge's consideration of a report generated by the software's secret algorithm, one that Mr. Loomis was unable to inspect or challenge". In June 2017, the Supreme Court denied Loomis' petition for a writ of certiorari.

All in all, against the backdrop of such case law, we can suspect that this kind of debate on how AI may impinge on the rights of criminal defendants will become more and more urgent. Three reasons suggest this conjecture. The first has to do with the rights of defendants to examine the algorithms of AI: some features of AI, such as the inscrutability of machine learning techniques, add a layer of complexity to traditional digital forensics. The second regards the role AI plays in the decision-making of judges: it is still unclear how courts should exercise their discretion when striking the balance between the fair trial or due process arguments of criminal defendants and the value of AI risk assessments. The third hinges on the interplay between fair trial principles and the protection of personal data and individual privacy rights. In Europe, for example, the European Court of Human Rights has so far subordinated some of the legal safeguards of Article 6 of the Convention on fair criminal trials to a preliminary violation of Article 8 on the right to privacy: AI will likely exacerbate the weaknesses of this stance on the interplay between criminal safeguards and privacy rights, since the right to a fair trial can be strengthened, but not replaced, by the protection of the right to privacy (and data protection). In the USA, a new generation of AI collectors of evidence will similarly stress some shortcomings of the Supreme Court's third-party doctrine, namely the idea that secrecy is a prerequisite for the protection of privacy rights under the Fourth Amendment to the Constitution. Further, the right to a reasonable expectation of privacy, both societal and individual, may end up in a "chicken or the egg" causality dilemma: such reasonable privacy expectations rest on the assumption that individuals and society have developed a stable set of privacy expectations, and yet AI can dramatically change such beliefs.
Therefore, it is vital that scholars properly address the urgent issues of AI forensics, judicial discretion and data protection in an age of increasingly smart AI. By comparison with the previous debates on the criminal personality of AI or on a new generation of AI crimes, it should be noted, however, that there are still few works on this subject matter. One reason may lie in the recent developments of law and technology; another in the complexity of the issues that are at stake with AI. The direct and indirect impact of computational modelling on evidence gathering, much like the challenges of AI to the judicial decision-making process in criminal proceedings, does not only concern legal expertise, but also scientific knowledge and technological know-how. In particular, the focus should be on criminal investigations that hinge on the use of computational techniques and AI systems, in order to understand how they may affect the principles of fair trial and the equality of arms through automatically generated evidence. Likewise, attention should be drawn to the distinction between deciding and predicting, between criminal justice and predictive justice. AI impacts the different steps of the criminal justice decision-making process when tackling, for example, violent behaviour and recidivism.

Few investigators, however, could have ever set the proper theoretical framework in which to address these complex sets of issues on philosophy of law and criminal justice, legal informatics and digital forensics, machine learning and deep learning. Professor Quattrocolo's book fills this gap. Her book is timely and badly needed, because it provides an in-depth analysis of some of the most urgent legal threats brought forth by AI today. Moreover, the book is solid and yet provocative, because it will generate a multidisciplinary discussion on how the law could be used to safeguard the rights of all parties involved as AI spreads ever more deeply into society. Lawyers can learn a lot about their own field by working together with computer scientists and AI developers, much as AI developers and computer scientists can reflect on the normative constraints of their work by collaborating with law professors, judges, attorneys and other legal experts. This monograph is the fruit of this essential interaction. Let the author of this book, a lawyer, write important things about AI.

Ugo Pagallo
Law Department, University of Torino
Torino, Italy