This report critically assesses the use of machine learning algorithms for policing and provides practical recommendations intended to inform the fast-moving debate over policy and governance in this area.
It explores the applications of machine learning algorithms to police decision-making, specifically in relation to predictions of individuals’ proclivity for future crime, and examines the legal, ethical and regulatory challenges posed by the deployment of such tools within an operational policing environment.
In the UK, the use of machine learning algorithms to support police decision-making is in its infancy, and there is a lack of research examining how the use of an algorithm influences officers’ decision-making in practice. Moreover, there is a limited evidence base on the efficacy and efficiency of different systems, their cost-effectiveness, their impact on individual rights and the extent to which they serve valid policing aims. Limited, localised trials should be conducted and comprehensively evaluated to build such an evidence base before moving ahead with large-scale deployment of such tools.
There is a lack of clear guidance and codes of practice outlining appropriate constraints on how police forces should trial predictive algorithmic tools. This gap should be addressed as a matter of urgency, so that forces can trial new technologies in accordance with data protection legislation, human rights obligations and administrative law principles.
While machine learning algorithms are currently used for limited policing purposes, the technology has the potential to do much more, and the absence of a regulatory and governance framework for its use is concerning. A new regulatory framework is needed, one which establishes minimum standards around issues such as transparency and intelligibility, the potential effects of incorporating an algorithm into a decision-making process, and relevant ethical issues. A formalised system of scrutiny and oversight, including an inspection role for Her Majesty’s Inspectorate of Constabulary and Fire and Rescue Services, is necessary to ensure adherence to this new framework.
There are various issues concerning procurement contracts between the police and private sector suppliers of predictive policing technology. All relevant public procurement agreements for machine learning algorithms should explicitly require that the algorithm can be retroactively deconstructed in order to assess which factors influenced the model’s predictions. Suppliers should also be required to provide an expert witness who can give details of the algorithm’s operation where needed, for instance in an evidential context.
The legal and ethical issues concerning the use of machine learning algorithms for policing are complex and highly context-dependent. Machine learning algorithms require constant attention and vigilance to ensure that the predictions they provide are as accurate and unbiased as possible, and that any irregularities are addressed as soon as they arise. For this reason, multidisciplinary local ethics boards should be established to scrutinise and assess each case of algorithmic implementation for policing. Such boards should consist of a combination of practitioners and academics, and should provide recommendations to individual forces on practice, strategy and policy decisions relating to the use of algorithms.
A collaborative, multidisciplinary approach is needed to address the complex issues raised by the use of machine learning algorithms for decision-making. At the national level, a working group drawn from the fields of policing, computer science, law and ethics should be tasked with sharing ‘real-world’ innovations and challenges; examining operational requirements for new algorithms within policing, with a view to setting out the relevant parameters and requirements; and considering the appropriate selection of training and test data.
Officers may need to be equipped with a new skill set to understand, deploy and interpret algorithmic tools effectively in combination with their professional expertise, and to assess risk using an algorithmically generated forecast. It is essential that officers using the technology are sufficiently trained to do so fairly and responsibly, and are able to act on algorithmic predictions in a way that preserves their discretion and professional judgement.
Dr Marion Oswald MBE