Written as a paper for PUBPOL703 – Digital Governance with Professor Tony Porter for the McMaster University Master of Public Policy in Digital Society.
Introduction
Information technologies such as algorithms, artificial intelligence (AI), and predictive data analytics are increasingly being used by governments in decision-making that affects citizens and immigrants, often with underdeveloped or no oversight. The public sector’s use of AI to make government administrative processes more efficient is changing the relationship between the state and individuals. These systems are known as “Automated Decision-Making” (ADM) systems: AI technologies adopted by the public sector “that either assists or replaces the judgement of human decision-makers… us[ing] techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.” ADMs hold much promise in “offer[ing] novel and more timely services to citizens and other users” and in providing “better user experiences to make their services easier to use.” They can also work faster and more efficiently than humans on routine tasks and can run outside normal work hours, making them well suited to processing large volumes of applications, reducing backlogs, and improving response times. However, particularly when decisions are “complex and value-laden,” ADMs raise significant concerns regarding transparency, fairness, accountability, due process, public trust, bias and discrimination, privacy, and human rights. ADMs can make flawed decisions based on flawed algorithmic design or flawed data, and, absent a clear governance framework, it can be difficult for affected individuals to appeal, seek redress, or even know that their application was processed wholly or in part by an algorithm.

In this author’s previous research, it was found that Immigration, Refugees and Citizenship Canada (IRCC) was using ADM processes as early as 2015 for permanent residency applications through the Express Entry program, Temporary Resident Visa applications from China, and spousal/common-law partner sponsorship applications from Manila, Philippines and New Delhi, India. Petra Molnar and Lex Gill of the Citizen Lab and the University of Toronto’s International Human Rights Program conducted an in-depth study of how ADMs used by IRCC and the Canada Border Services Agency (CBSA) affect the human rights of immigrants and refugees. Regulation of these processes is beginning to be developed but has yet to catch up to the technologies’ real impact on people.
Canadian governments are beginning to address the impacts of ADMs. The federal Treasury Board of Canada Secretariat (“TBS”) issued the Directive on Automated Decision-Making (“Federal Directive”) in 2019. The Federal Directive was one of the earliest attempts internationally to regulate ADMs. It has drawn praise for its novel attempt to govern ADMs based on principles of administrative law, but it has also been criticized for gaps in its approach and scope. The Ontario government is also considering the development of a governance framework for AI through normative principles and guiding documents, as well as the modernization of privacy legislation. However, the Beta principles for the ethical use of AI and data enhanced technologies in Ontario (“Ontario Principles”) and the Transparency Guidelines for Data-Driven Technology in Government (“Ontario Guidelines”) amount to soft law without proper enforcement mechanisms, and the Modernizing Privacy in Ontario White Paper (“Ontario White Paper”) has yet to lead to enacted legislation.
Thus far, there is no comprehensive, harmonized, and effective Canadian approach to governing ADMs. Building on the federal and Ontario initiatives, ADM governance needs to go beyond prioritizing the administrative law principles of fairness, accountability, and transparency to also enshrine principles of justice, human rights, and dignity, and to build public trust in the public interest. This means strengthening accountability by increasing external scrutiny through multi-stakeholder consultations and effective enforcement mechanisms, expanding the scope of regulation to close loopholes, moving from a self-reported survey towards more complete assessments, creating mechanisms of remedy for wrongful decisions, and imposing stronger responsibilities on ADMs that “significantly affect” an individual.
Algorithmic Impact Assessments and the Federal Directive
The concept of “Algorithmic Impact Assessments” (AIAs) is emerging as an accountability tool for AI governance. They are “mechanisms intended for public agencies to better understand, categorize and respond to the potential harms or risks posed by the use of algorithmic systems, usually prior to their use.” These tools take a risk-based approach, conducting an ex-ante evaluation that identifies potential impacts so that mitigation measures can be put in place. However, as the Ada Lovelace Institute warns, “AIAs are not a complete solution for accountability on their own: they are best complemented by other algorithmic accountability initiatives, such as audits or transparency registers.” The AIA is a central component of the Federal Directive.
Principles of administrative justice, fairness, and risk management are at the core of the Federal Directive, which aims to ensure that the use of ADMs “reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.” (Emphasis by author) The Federal Directive expects that decisions by ADMs “are data-driven, responsible, and comply with procedural fairness and due process requirements” and that “Impacts of algorithms on administrative decisions are assessed and negative outcomes are reduced…” (Emphasis by author)
There have been many criticisms of the Federal Directive from scholars. The policy applies only to “external services,” excluding internal decisions such as those in human resources. This limitation can leave out ADMs whose decisions indirectly affect external individuals. Additionally, the policy “does not apply to any National Security Systems.” TBS recognizes that, though the priority is that “AI systems should be deployed in the most transparent manner possible,” there is also a “need to protect privacy and national security.” However, the definition of “National Security System” in the Policy on Management of Information Technology can still be interpreted broadly to justify the use of ADMs in immigration and refugee assessments, policing, and surveillance with little oversight. The lack of clarity in this exemption could thus put classified ADMs that significantly affect individuals outside the scope of the Directive. Lastly, the Federal Directive applies only to ADMs “developed or procured after April 1, 2020,” and not to prior technologies. This leaves out ADMs already in use, which go unchecked, particularly if they have not garnered public attention, unless their departments voluntarily complete an AIA or an audit of these systems is mandated.
At the heart of the Federal Directive is a self-reported AIA, described as “a mandatory risk assessment tool” consisting of 81 questions that score a proposed ADM on risk and mitigation measures. The AIA investigates risk areas concerning the system and algorithm, the nature of the decision being made, the impact it will have on individuals, and the data being collected and used. It also asks about the measures the operator of the ADM has taken to mitigate risks, such as ensuring data quality that is “representative and unbiased,” procedural fairness, including auditing and recourse processes, as well as privacy measures to safeguard data. The ADM is then scored into one of four impact levels based on how its decisions will affect certain factors, imposing a sliding scale of responsibilities: ADMs that score at higher impact levels must abide by additional requirements meant to increase accountability and transparency (a minimal sketch of this sliding-scale logic follows the list below). These factors include:
- the rights of individuals or communities,
- the health or well-being of individuals or communities,
- the economic interests of individuals, entities, or communities,
- the ongoing sustainability of an ecosystem.
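
To make the mechanics concrete, the following is a minimal, hypothetical Python sketch of this kind of sliding-scale assessment. The score thresholds and per-level obligations are illustrative assumptions for the sketch, not the actual scoring rules or requirements TBS publishes with the AIA tool.

```python
# Hypothetical sketch of an AIA-style sliding-scale assessment.
# Thresholds and obligations below are illustrative assumptions,
# not the actual TBS scoring rules.

def impact_level(raw_risk: int, mitigation: int) -> int:
    """Map a questionnaire risk score, offset by points for documented
    mitigation measures, onto one of four impact levels
    (1 = little impact, 4 = very high impact)."""
    score = max(raw_risk - mitigation, 0)
    for cutoff, level in [(25, 1), (50, 2), (75, 3)]:  # assumed cut-offs
        if score < cutoff:
            return level
    return 4

# Escalating responsibilities per level (loosely paraphrased from the
# Directive's tiered requirements; exact wording and triggers differ).
OBLIGATIONS = {
    1: ["plain-language notice of automation"],
    2: ["plain-language notice", "meaningful explanation of the decision"],
    3: ["plain-language notice", "meaningful explanation",
        "human intervention point", "peer review"],
    4: ["plain-language notice", "meaningful explanation",
        "final decision made by a human", "external peer review"],
}

level = impact_level(raw_risk=62, mitigation=10)
print(level, OBLIGATIONS[level])  # 3, with the assumed cut-offs
```

The key design feature the sketch illustrates is that obligations attach automatically to the computed level, so a higher-risk system cannot opt out of the heavier requirements.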
The Federal Directive prioritizes transparency as a principle, requiring ADMs to provide notice before decisions and explanations after decisions as part of procedural fairness. ADMs are specifically required to provide notices “in plain language,” which is important because many individuals find it difficult to understand how algorithms work. Of particular importance, ADMs must “Provid[e] a meaningful explanation to affected individuals of how and why the decision was made.” When an algorithm makes a decision, it can be opaque how that decision was reached and what factors contributed to the outcome. Even for decisions made by humans, meaningful explanations can be difficult to obtain; it is thus all the more important for individuals to receive one when an ADM is making the decision.
It is also important for the government to have access to the algorithm, its components, and its data in order to monitor and test ADMs. The Federal Directive ensures that even if the algorithm is sourced from an external vendor under a proprietary license, the government “retains the right to access and test the Automated Decision System.” ADMs must also have processes to “monitor the outcomes… to safeguard against unintentional outcomes and to verify compliance,” as well as testing for “unintended data biases and other factors that may unfairly impact the outcomes.” Lastly, depending on impact level, ADMs should “[allow] for human intervention” and “provid[e] clients with any applicable recourse options that are available to them to challenge the administrative decision.”
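The Directive does not prescribe how such bias testing should be done. As one illustration, a simple outcome-level check could compare approval rates across demographic groups using the “four-fifths” disparate-impact heuristic; the data layout and the 0.8 threshold below are assumptions for this sketch, not tests mandated by the Directive.

```python
# A sketch of one form an outcome-bias test could take: comparing
# approval rates across groups with the "four-fifths" heuristic.
# The 0.8 threshold is a common rule of thumb, assumed here.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the rate of the best-off group."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(flag_disparate_impact(sample))  # {'B': 0.5}
```

A check like this only catches disparities in outcomes; it would complement, not replace, audits of the training data and the model itself.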
Ontario’s Principles and Guidelines
Ontario is taking a different approach from the Federal Directive by outlining Principles and Guidelines for “ethical considerations and values” in “the use of data enhanced technologies within government processes, programs and services.” This approach is akin to what the Ada Lovelace Institute, AI Now Institute, and Open Government Partnership describe as “A number of policy documents [that] provide non-binding normative guidance, in the form of principles and values, for public agencies to follow” and that “generally identify high-level policy goals.” The Ontario Principles are meant to “complement” the federal approach “by addressing a gap concerning specificity” and “not [clash] with existing best practices, principles and frameworks.” The Principles state that “This approach references and harmonizes with known standards, principles and tools to create clarity rather than barriers for innovation that is safe, responsible and beneficial.” However, because this soft-law approach merely outlines normative behaviour, there is little clarity on the “source of legitimacy” and, thus, on “effective enforcement of accountability mechanisms.” Nonetheless, the strength of Ontario’s approach is its flexibility and iterative nature in addressing novel technologies as they emerge. The Ontario approach also goes beyond the administrative law principles enshrined in the Federal Directive. Moreover, outlining normative principles can still be valuable in establishing what counts as acceptable behaviour by actors.
The Ontario Principles outline six key principles for the responsible use of AI in the Ontario government:
- Transparent and explainable
- Good and fair
- Safe
- Accountable and responsible
- Human centric
- Sensible and appropriate
The Ontario Principles emphasize the importance of transparency as “the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies.” In practical terms, this means providing meaningful explanations that include “relevant information about what the decision was, how the decision was made, and the consequences.” Notably, the Principles recognize that ADMs can have specific negative impacts on “historically disadvantaged groups.” The Principles also go beyond administrative law by recognizing that ADMs must “[respect] the rule of law, human rights, civil liberties, and democratic values… includ[ing] dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.” ADMs must be safe, ensuring adequate safeguards against unintended consequences and adverse outcomes through “ongoing monitoring and mitigation planning.” ADMs must also be “Accountable and responsible,” identifying clear roles of responsibility with “a public and accessible process for redress… with input from a multidisciplinary team and affected stakeholders.” The Principles recognize the importance of defining clear lines of accountability and distributing ethical responsibilities, so as to avoid situations where no party wants to take responsibility for an action. Human-centric design is also an increasingly important aspect of government decision-making, and the Principles affirm that ADMs must have “a clearly articulated public benefit… that enables meaningful dialogue early with affected groups and allows for measurement of success later.” Lastly, ADMs must be “Sensible and appropriate” for the contexts in which they are used, with the benefits proportional to the potential risks to individuals.
Challenges
While the Federal Directive and the Ontario Principles and Guidelines are a good start to addressing ADMs, there are still many gaps that prevent these documents from constituting a comprehensive and effective governance framework. Both the Federal Directive and the Ontario documents suffer from a lack of legitimacy, the absence of an effective and enforceable legal framework, and limited scope. (Emphasis by author) While the Federal Directive was issued by TBS and is thus mandatory and binding on federal government departments, scholar Teresa Scassa argues that “Directives do not create actionable rights for individuals or organizations.” The Directive lacks external accountability, whether parliamentary or from civil society; obligations and enforcement are internal. Similarly, it is unclear what the source of legitimacy is for the Ontario documents and what enforcement mechanisms exist, or which central agency will take the lead in enforcing these principles.
Both the federal and Ontario approaches fail to adequately address how humans fit into the governance framework. This includes concerns about how individuals can exercise their rights when faced with the technical complexity of ADMs, as well as the absence of multi-stakeholder consultation with those most affected.
A large concern about the governance of ADMs is the “Inaccessible language or lack of transparent explanations [that] can make it hard for… the public to understand the technologies and their uses, undermining public scrutiny and accountability.” It is important, then, that the Federal Directive mandates that information about the ADM be posted in “plain language.” However, if the notice is posted at the bottom of a webpage or behind a separate link, for example to a “Frequently Asked Questions” page, it may amount to “checking off the box” to meet legal obligations rather than giving individuals meaningful notice of how the ADM will affect them. Additionally, while the Federal Directive mandates that individuals receive a “meaningful explanation” for higher-impact-level ADMs, it is unclear what qualifies as such an explanation. In this author’s experience, a decision maker may provide the bare minimum in written reasons, relying on template language divorced from the individual’s case rather than a detailed explanation of why the decision was made. When decisions significantly affect an individual through an opaque ADM, meaningful explanations matter all the more. While the Federal Directive notes that “any applicable recourse options” should be made available to affected individuals, it is unclear what kind of legal remedy, whether appeal, human review, judicial review, or otherwise, is actually available. Lastly, a greater onus may fall on the affected individual to prove the harm and to show flaws in the algorithm or its data, which is made even more difficult by the complexity and opaqueness of ADMs and by barriers to accessing the resources and expertise needed to conduct such an analysis.
In practice, this creates a barrier to justice. If a lack of understanding and a high onus to prove harm make it difficult for an individual to access rights of appeal or recourse, then justice is denied, quite apart from the fact that the Federal Directive does not “create actionable rights.” Moreover, it is marginalized communities and individuals from historically disadvantaged groups, including Indigenous people, racialized individuals, people with disabilities, the poor, and queer individuals, who “are often both the earliest and the most severely impacted by the implementation of automated decision-making systems,” making ADMs a “significant justice and rule of law [issue].” Access to justice is already difficult for these groups, and further obstacles to recourse mean that “judicial review places an undue burden on individuals to identify and successfully challenge decisions,” or even becomes “a costly and undue form of punishment.” Given these obstacles, further reform of ADM governance must reconsider recourse avenues and shift the onus of proof onto the operators and developers of the ADM to prove the system is safe and accurate.
For there to be meaningful rights for individuals affected by ADMs, Canadian governments can learn from Article 22 of the European Union’s General Data Protection Regulation (“GDPR”), which creates “the right not to be subject to a decision based solely on automated processing, including profiling, where that decision will have either a legal impact or similarly significant effects.” They can also learn from France’s Loi pour une République numérique, under which a meaningful explanation includes information about the following (see the sketch after the list):
- “the degree and the mode of contribution of the algorithmic processing to the decision-making;
- the data processed and its source;
- the treatment parameters, and where appropriate, their weighting, applied to the situation of the person concerned;
- the operations carried out by the treatment”
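
As a concrete illustration, these four elements could be captured as a structured explanation record attached to each automated decision. The sketch below is hypothetical; the class and field names are assumptions for illustration, not anything mandated by the French law or by Canadian policy.

```python
# Hypothetical sketch of a structured "meaningful explanation" record
# mirroring the four elements of the French law. All names are
# illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class DecisionExplanation:
    # Degree and mode of the algorithm's contribution to the decision
    algorithmic_contribution: str
    # The data processed and its source (field name -> source)
    data_used: dict
    # Treatment parameters and, where appropriate, their weighting
    parameters: dict
    # The operations carried out by the treatment
    operations: list = field(default_factory=list)

example = DecisionExplanation(
    algorithmic_contribution="advisory: application triaged for officer review",
    data_used={"travel_history": "application form", "income": "tax records"},
    parameters={"income": 0.4, "ties_to_home_country": 0.6},
    operations=["eligibility screen", "risk triage", "queue assignment"],
)
print(example.algorithmic_contribution)
```

A record like this would give an affected individual, or a reviewing court, something concrete to scrutinize, rather than a bare statement that an algorithm was involved.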
Full accountability requires a forum in which stakeholders can learn about, analyze, and scrutinize an ADM. A report by Data & Society entitled “Assembling Accountability” borrows sociologist Mark Bovens’ definition of accountability as “a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences.” The Federal Directive and the Ontario Principles and Guidelines, while good first steps, do not constitute effective accountability. The Federal Directive relies on a self-reported survey as its AIA, which by its nature does not meet the standard of an independent, impartial review. Moreover, it is unclear how the Ontario documents will compel parties to commit to transparency and accountability mechanisms. The “Assembling Accountability” report “argue[s] that voluntary commitments to auditing and transparency do not constitute accountability. Such commitments are not ineffectual—they have important effects, but they do not meet the standard of accountability to an external forum.”
For there to be real accountability, particularly to the communities most affected by an ADM’s potential harms, Canadian governments should mandate that ADM operators engage the public through multi-stakeholder consultations and “incorporat[e] community advocates into the rule-making process, to ensure that the assessment practices take into account the experience of being subject to algorithmic systems, and to protect the public interest.” Currently, there is no avenue for impacted communities to provide feedback about an ADM that can significantly affect their livelihoods. Even if the Federal Directive produced meaningful explanations and transparency about an ADM, “without a formal avenue for expressing their concerns there is a hard limit as to what the public can do with this information.” Transparency without an avenue for scrutiny is not accountability.
Next Steps and Conclusion
In developing a comprehensive and effective Canadian approach to governing ADMs, there must be a strong mandate for enforceable accountability and transparency that goes beyond the administrative law principles of procedural fairness to also uphold justice, human rights, and human dignity, and to use ADMs in the public interest by building public trust. This starts with a strong source of legitimacy that can mandate effective regulation. At both the federal and Ontario levels, this means going beyond normative principles, guidelines, and central agency directives and instead passing legislation. While this may be politically difficult, legislation would place legitimate constraints on government to abide by accountability mechanisms and would create legal, actionable rights for affected individuals. To be more effective, ADM regulation must also cover “National Security Systems.” Exempting these particularly invasive technologies would preclude true accountability for the ADMs with the greatest potential for harm, particularly systems used to surveil, police, or deny asylum seekers refuge. Procedures can be put in place to maintain the integrity and confidentiality of “National Security Systems” while allowing the scrutiny, transparency, and accountability needed to mitigate risks and minimize adverse outcomes. ADM governance must also welcome greater external scrutiny by independent and impartial assessors: a self-reported survey score should not determine an ADM’s risk and impact level. While independent assessments may carry greater operational costs, the additional accountability and transparency are worth it. The AIA is not meant merely to fulfill legal obligations and minimize liability; it is meant to be one (but not the only) tool for holding governments accountable for the extraordinary nature of ADMs and their potential for significant harm.
Federal regulation can also learn from the Ontario Principles by integrating human-centric perspectives. To build public trust, individuals should know that ADMs are being used not just to make the lives of public servants easier or to save money, but for “a clearly articulated public benefit” proportional to the additional risk. It is also important to clarify what a meaningful explanation means for ADMs and to clearly outline avenues for redress and legal remedy that do not put an undue burden on individuals who lack the expertise or resources to scrutinize an opaque algorithm. A truly effective Canadian approach would pursue multi-stakeholder consultation that engages the most affected and vulnerable community groups. This is particularly important given Canada’s commitment to reconciliation with Indigenous peoples, as ADMs can significantly affect their livelihoods. A comprehensive governance document would recognize the disproportionate negative impact that ADMs could have on marginalized communities and historically disadvantaged groups, and would commit operators of ADMs to working with community leaders to develop these systems, monitor for unintended adverse impacts, and improve their performance throughout their lifecycles.
ADMs need to earn a special level of public trust because their emergence changes the power dynamic and relationship between the state and the individual. Emerging technologies start off as powerful tools, but they can grow to become deeply integrated into our lives, and not always in beneficial ways. As TBS recognized in its white paper, it is important to “balanc[e] the potential for gains in efficiency and effectiveness of government with the risk of misuse.” Canadian policymakers have to ask themselves, “Do concerns for large-scale efficiencies and service enhancements justify a mitigation or adjustment of expectations of procedural fairness?” ADMs do not just have individual impacts; they raise systemic issues of justice, particularly for already disadvantaged populations. Canadian governments must therefore be careful about the use of ADMs and ensure that they are used responsibly, with proper safeguards and enforceable mechanisms to prevent harm. At its core, a comprehensive and effective Canadian approach to governing ADMs must recognize that “People should always be governed – and perceive to be governed – by people.”
Bibliography
Ada Lovelace Institute. “Algorithmic Impact Assessment: A Case Study in Healthcare,” February 2022. https://www.adalovelaceinstitute.org/report/algorithmic-impact-asssessment-case-study-healthcare.
Ada Lovelace Institute, AI Now Institute, and Open Government Partnership. “Algorithmic Accountability for the Public Sector Executive Summary.” Ada Lovelace Institute, August 2021. https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/.
Government of Ontario. “Beta Principles for the Ethical Use of AI and Data Enhanced Technologies in Ontario.” Government of Ontario. Accessed March 6, 2022. http://www.ontario.ca/page/beta-principles-ethical-use-ai-and-data-enhanced-technologies-ontario.
———. “Modernizing Privacy in Ontario: Empowering Ontarians and Enabling the Digital Economy – White Paper,” 2021.
Karanicolas, Michael. “To Err Is Human, to Audit Divine: A Critical Assessment of Canada’s AI Directive.” SSRN Electronic Journal, 2019. https://doi.org/10.2139/ssrn.3582143.
Lindsay, Susie. “Re: Ontario Government Alpha Guidance Documents Concerning AI Use by Government – LCO Comments,” June 30, 2020.
Molnar, Petra, and Lex Gill. “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.” International Human Rights Program and Citizen Lab, 2018.
Moss, Emanuel, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” Data & Society, June 2021.
———. “Assembling Accountability Policy Brief.” Data & Society, June 2021.
OECD. OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 § (2019). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
Office of the Privacy Commissioner of Canada. “Guidance on Inappropriate Data Practices: Interpretation and Application of Subsection 5(3),” May 24, 2018. https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/consent/gd_53_201805/.
Scassa, Teresa. “Administrative Law and the Governance of Automated Decision Making: A Critical Look at Canada’s Directive on Automated Decision Making.” U.B.C. Law Review 54, no. 1 (2021).
Treasury Board Secretariat of Canada. “Algorithmic Impact Assessment Tool,” March 22, 2021. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html.
———. “Directive on Automated Decision-Making.” Government of Canada – Treasury Board Secretariat, February 5, 2019. https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.
———. “Responsible Artificial Intelligence in the Government of Canada (Version 2.0) – Digital Disruption White Paper Series,” 2018.
