CEPOL Research & Science Conference 2022 MRU, Vilnius

Intelligence Led Policing and the Risks of Artificial Intelligence
06-09, 13:30–13:50 (Europe/Vilnius), Panel Room III - I-408

The datafication of different spheres of our societies has led to an increasing reliance on data, even more so with the advances of artificial intelligence (AI) methods – a development that can also be observed in law enforcement and crime analysis. Tools like predictive policing, which rely on the quantitative analysis of past crime data, are used to inform police operations – ideally, as the name suggests, by predicting future criminal behaviour and occurrences. Other AI-supported tools are also increasingly used in a policing context: facial recognition, gunshot detection, and crime scene analysis tools, to name just a few. What these tools have in common is that they are built to make police work and crime fighting more efficient, to improve the routines and practices of policing, and to deliver faster and more pre-emptive insights.
That being said, the use of these AI tools does not come without caveats. A broad range of examples shows the unintended side effects of AI use in policing, from a shift of financial investment into intelligence-led policing at the expense of other areas of law enforcement, to highly biased and discriminatory decisions. The former creates problems within the police as an organisation; the latter risks solidifying discriminatory practices through the materialisation of AI technologies. In this paper, I provide an analysis of these problems, which have similarly emerged in other areas of society, such as (health) care or consumer research. I first expand on how these problems emerge and indicate their causes; in a second step, I discuss some ideas on how these issues can be acknowledged and – at least partially – resolved.

Roger von Laufenberg is a Senior Researcher at the Vienna Centre for Societal Security (VICESSE), where he critically researches how artificial intelligence systems are developed for a wide range of use cases (policing, care, marketing, etc.), how norms, beliefs and ideologies tend to be embedded in these systems, and the effects AI systems have on individuals, groups and society at large. He holds a PhD from the University of St Andrews, School of Management (United Kingdom) and an MA from the University of Vienna (Austria).