
Unpacking Equity is a collaboration between the Public Policy and Governance Review and the Gender, Diversity and Public Policy Initiative (GDPP) at the Munk School of Global Affairs and Public Policy. This series aims to explain equity-related policy issues and break down complicated topics involving equity, diversity and inclusion. Policy professionals can gain a better understanding of these complex issues in order to incorporate an equity lens into their practice. To learn more, please get in touch with the GDPP.
By Mackenzie Rice

The use of artificial intelligence (AI) and machine learning (ML) to improve public sector efficiency is both exciting and controversial. While these technologies have the power to present new solutions to some of the toughest global challenges, their use has also called attention to serious concerns relating to data privacy, human rights, and equity. In Canada, controversies associated with using AI and ML in the public sector have been focused on the integration of these technologies into immigration and refugee systems.
Publicly available information indicates that the Canadian government has been using automated decision-making technologies in the country’s immigration and refugee system since 2014, although the full extent of the integration of AI in this space is unknown. In 2018, several government agencies, including Immigration, Refugees and Citizenship Canada (IRCC), issued a Request for Information (RFI) seeking private sector guidance on the acquisition of innovative and emerging technologies. The RFI signalled the federal government’s interest in expanding the use of automated decision-making in the immigration and refugee system, stating that “IRCC will explore the possibility of whether or not the AI/ML powered solution(s) could also be used by front-end IRCC administrative decision-makers across the domestic and international networks to aid in their assessment of the merits of an application.”
The information revealed by the RFI prompted a joint 2018 report by the University of Toronto’s International Human Rights Program and the Citizen Lab at the Munk School of Global Affairs and Public Policy, entitled Bots at the Gate. The report acknowledged the arguments supporting the use of these technologies while also raising serious concerns about the application of AI and ML in the highly discretionary context of immigration and refugee decisions. The report made national headlines, calling on the government to release complete information about the use of automated decision-making systems in Canada’s immigration and refugee systems.
One of the arguments supporting the use of AI and ML in such fields is that these technologies could reduce or even eliminate the conscious and subconscious forms of human bias that lead to discriminatory outcomes. While Canada’s immigration and refugee system is often revered on the international stage, discriminatory screening practices remain a prevalent issue at Canadian borders. For example, the RCMP faced intense scrutiny after a 2017 investigation revealed that racially and religiously charged questionnaires were being distributed to Muslim migrants crossing the border between the United States and Quebec. While AI and ML technologies run the risk of learning the very biases that the government is trying to circumvent, they also have the potential to create more standardized data collection processes that could help ensure all people entering Canada are treated with fairness and impartiality.

Additionally, the benefits of using AI in Canada’s immigration and refugee system may include faster processing of applications. By using AI and ML systems to assess cases quickly and at low cost, the system, which has been extremely backlogged, could become more efficient. Greater efficiency could be most beneficial for persons claiming protected status in Canada, as faster determinations could provide quicker access to protection for people fleeing persecution who would otherwise wait months or years for the results of their applications. The need for new ways to process refugee claims in particular is growing more urgent: documents released by the Immigration and Refugee Board in 2017 indicated that the agency’s budget falls millions of dollars short of what is required to manage the increasing number of refugee claims.
However, the risks of AI and its learned biases have been well-documented. Embedded biases in AI and ML systems can arise from many sources including the prejudice of human coders, skewed training data, or incomplete input data, which can lead automated systems to make decisions that reinforce and exacerbate human biases. Additionally, studies have revealed that even when demographic variables are not explicitly coded into AI systems, a number of seemingly neutral variables can work as proxies for race. For example, in areas with highly segregated communities, the use of variables such as postal codes in an algorithm that does not include a variable for race can still lead the system to produce racially biased outcomes. As a result, while an algorithm may appear to be neutral, its outcomes may still be discriminatory.
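To make the proxy problem concrete, the sketch below uses entirely synthetic data (it does not describe any real immigration or government system) to show how a model trained only on a “neutral” variable such as postal code can still reproduce group-level disparities when residential segregation ties postal codes to a protected attribute. The variable names, numbers, and scenario are hypothetical illustrations.

```python
# Hypothetical illustration with synthetic data: a model never sees the protected
# attribute, yet learns biased outcomes through a correlated proxy (postal code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (withheld from the model's training features).
group = rng.integers(0, 2, n)

# Segregation assumption: 90% of people live in the area associated with their group,
# so postal code is a strong proxy for group membership.
postal_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical decisions were themselves biased: group 0 approved 70% of the time,
# group 1 only 40% of the time.
historical_approval = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

# Train only on the seemingly neutral feature.
model = LogisticRegression().fit(postal_code.reshape(-1, 1), historical_approval)
predicted = model.predict(postal_code.reshape(-1, 1))

# Despite never seeing the protected attribute, predicted approval rates diverge by group.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {predicted[group == g].mean():.2f}")
```

Under these assumptions the model approves almost everyone from one postal code and almost no one from the other, so the apparent neutrality of the input data does nothing to prevent a discriminatory outcome.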
The potential for algorithms to produce discriminatory outcomes is exemplified by case studies on predictive policing and the use of risk assessment tools in the criminal justice system. For example, a 2016 study on predictive policing in Oakland, California found that African Americans were significantly more likely to be over-policed for drug crimes, despite relatively equal rates of drug use across racial groups.
Similarly, in the criminal justice system, the use of risk-assessment algorithms in sentencing has been found to misclassify black defendants as being at higher risk of recidivism more frequently than comparable white defendants, leading judges to impose longer sentences on minorities. Noting the discriminatory outcomes that AI has produced in these other settings, Petra Molnar, an immigration lawyer and co-author of Bots at the Gate, expresses concern about the implications of these technologies being “imported into the high-risk laboratory of immigration decision-making.”
The debate surrounding the IRCC’s use of AI and ML is particularly controversial because of the specific areas where these technologies could be integrated. In the 2018 RFI, the IRCC identified Pre-Removal Risk Assessments and Humanitarian and Compassionate Considerations as two areas where the agency is looking to pilot AI and ML systems. These are two of the most discretionary processes in Canada’s immigration system, and the outcomes of these cases carry some of the heaviest consequences: each involves assessing whether an individual is likely to face extreme hardship, discrimination, safety risks, or family separation if denied entry to Canada or removed from the country. Often considered the “avenues of last resort,” these applications rely on qualitative human evaluation.
In the near future, the increased integration of AI and ML in the public sector will have a profound impact on Canadian social, economic and political institutions. As a result, it is critical to ask whether these technologies can be used to promote equity. By drawing on the policy lessons learned from the use of AI in policing and criminal justice, policymakers can take a proactive approach to AI and ML technologies in Canada’s immigration systems by designing comprehensive measures to mitigate and eliminate sources of embedded bias. Following the publication of Bots at the Gate, Petra Molnar commented, “We are beyond the question of whether AI is being used. The question is if AI is here to stay, we want to make sure it is done right.” As Canada navigates the new technological frontiers created by automated systems and AI, policymakers must consider how these technologies can be used to advance equity rather than reinforce bias.
Mackenzie Rice is a Master of Global Affairs student at the University of Toronto’s Munk School of Global Affairs and Public Policy. She is also a News Watch contributor for the magazine Global Conversations, where she writes short essays on emerging international news stories. At the Munk School, Mackenzie focusses on strategies for sustainable global development and international protections for human security.