The International Centre for Comparative Criminology (CICC), attached to the Université de Montréal and the Université du Québec à Trois-Rivières, organized, together with the Observatoire des profilages, the "CICC Scientific Season 2022-2023," which took place from October 24 to 28. The event opened with a series of conferences on the theme "Artificial Intelligence and Profiling." The conference "AI and profiling: ethical and legal risks" was given by Céline Castets-Renard, Professor at the Faculty of Civil Law of the University of Ottawa, where she holds the Research Chair on Responsible AI in a Global Context as well as a law chair at ANITI (Artificial and Natural Intelligence Toulouse Institute).
The CICC, born in 1969 from a scientific partnership between the Université de Montréal and the International Society of Criminology, is one of the most important criminological research centers in the world. In addition to 63 regular researchers, it also brings together 104 collaborators from Quebec, Canada and abroad. With doctorates in criminology, psychology, political science, law, sociology, anthropology, social work, history, economics, forensic sciences, biology and chemistry, the CICC’s regular researchers and collaborators seek to understand, from a multidisciplinary perspective, the processes by which criminal behaviour is regulated, as well as the various intervention modalities deployed by public, private and community institutions to deal with it. The Centre regularly organizes debates and international conferences on criminal and security-related issues.
For this event, the CICC joined forces with the Observatoire des profilages (ODP), which brings together more than thirty researchers, twenty community and institutional partners, and some forty master's and doctoral students. Their work focuses on profiling practices and experiences in the police, justice, corrections, youth protection, health and social services, social assistance and migration sectors.
The conference "AI and profiling: ethical and legal risks," presented by Céline Castets-Renard
The conference first addresses key issues raised by AI and profiling, then presents several problematic uses of facial recognition in Canada, examines the case of Clearview AI, and finally considers the responses of Canadian law and their limitations, including Bill C-27.
The challenges of AI and profiling
Céline Castets-Renard opens with an overview of automated decision-making systems, their uses and their impact on society: predictions (weather, terrorism, security, etc.), recommendations, and decision-support. These systems, whether supervised or not, are trained on data such as age or race and produce classifications that can lead to profiling, or even discrimination. The first issue is therefore the choice of training data.
Training data
Biases and discrimination often stem from poor-quality or insufficient data. While some data are over-represented, there are many cases of under-representation, such as data on feminicides, particularly in Mexico.
Céline Castets-Renard cites the COMPAS case as an example of predictive justice: a score designed to predict recidivism for certain categories of the population. The overall error rate was roughly the same for Black and white individuals, but what was not stated is that the errors did not cut the same way: they tended to work in favor of the latter and against the former.
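This point is easy to miss, so here is a minimal sketch of the underlying arithmetic: two groups can share an identical overall error rate while one group suffers mostly false positives (wrongly flagged as high risk) and the other mostly false negatives (wrongly flagged as low risk). The data below are hypothetical illustrations, not COMPAS figures.

```python
def rates(y_true, y_pred):
    """Return (false positive rate, false negative rate, overall error rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives, (fp + fn) / len(y_true)

# Group A: both errors are false positives (harmless people flagged high-risk).
a_true = [0, 0, 0, 0, 1, 1, 1, 1]
a_pred = [1, 1, 0, 0, 1, 1, 1, 1]
# Group B: the same number of errors, but they are false negatives.
b_true = [0, 0, 0, 0, 1, 1, 1, 1]
b_pred = [0, 0, 0, 0, 0, 0, 1, 1]

fpr_a, fnr_a, err_a = rates(a_true, a_pred)
fpr_b, fnr_b, err_b = rates(b_true, b_pred)
print(err_a == err_b)   # True: both groups have a 25% overall error rate
print(fpr_a, fnr_a)     # 0.5 0.0 -> group A is harmed by false alarms
print(fpr_b, fnr_b)     # 0.0 0.5 -> group B benefits from missed flags
```

Reporting only the overall error rate, as the score's vendors did, hides exactly this asymmetry.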
Because humans tend to trust machine decisions more than their own expertise, such errors can have serious consequences. The speaker gave another example of misleading success rates: facial recognition, often announced at around 85% accuracy. She presented the work of Joy Buolamwini on gender and racial bias and discrimination in this technology, pointing out that Black women were under-represented in the training data, which led to a high error rate for that group. In the case of Amazon's facial recognition system, the reported accuracy was 100% for white men but dropped to 68.6% for Black women.
Facial recognition, authentication and identification
For Céline Castets-Renard, facial recognition is an intrusive technology for individuals. She questions the lack of transparency in public action, citing the case of two young women claiming Somali origin who were granted refugee status but whose status was later revoked by the Refugee Protection Division (RPD) on the grounds that their photos closely resembled those of two Kenyan students.
Regarding identification, she referred to the case of an Indigenous man arrested in a store for a theft allegedly committed a few months earlier. The man had reportedly been singled out for surveillance because of his origins.
Legal responses
In Canada, personal data is governed by different laws in the private and public sectors. For the private sector, Bill C-27 is being drafted to reform the PIPEDA of 2000, while the public sector remains governed by the Privacy Act of 1985, which does not address AI systems. The Treasury Board has, however, adopted a directive on automated decision-making systems aimed at assessing their risks.
The presenter raised the issue of the use of Clearview AI's facial recognition technology by non-police agencies, including the immigration service. She highlighted the weakness of the sanctions imposed on the company by Canadian authorities compared with those of Australia, some European countries and France's CNIL, before turning to the lack of transparency of automated decisions within the Canadian immigration service.
Translated from Focus sur la conférence « IA et profilages : risques éthiques et juridiques » de Céline Castets-Renard