Written as a discussion paper for PUBPOL707 – Architectures of Digital Ecosystems for the McMaster University Master of Public Policy in Digital Society
“there is absolutely no inevitability as long as there is a willingness to contemplate what is happening” – Marshall McLuhan
“This is superconnected, it’s time to leave.” – Broken Social Scene
“All media work us over completely.” Marshall McLuhan’s words aptly describe how AI has taken hold of and become part of our daily lives, whether we are aware of it or not. AI is developing rapidly and emerging in every aspect of our lives. When you ask Google Assistant, “What is the weather today?” or have Spotify recommend new music, AI can be a wonderful thing that benefits our lives. But there are also many issues and flaws with AI.
However, it is not inevitable that AI will always negatively impact us – as long as we are willing to work towards building a more “trustworthy AI” framework that reflects our values, centres our individual agency, benefits our communities, and refrains from using AI in sensitive cases that affect vulnerable people.
When we talk about “AI” and algorithms, different people will give different answers about what these terms mean.
Today’s AI does not mean HAL from 2001: A Space Odyssey (Kubrick, 1968) or Samantha, the AI assistant voiced by Scarlett Johansson, with whom Joaquin Phoenix’s character falls in love in the movie Her (Jonze, 2014).
In practical terms, according to Prediction Machines, AI usually means machine learning algorithms that take in data – particularly large data sets (“big data”) – and “learn” to recognize patterns in order to make predictions. There are two aspects to AI: 1) prediction, where data is fed into the algorithm, which quickly interprets it; and 2) decision-making, where the algorithm’s insights from the data are used to inform a decision.
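To make this distinction concrete, the short sketch below – a purely hypothetical illustration, not drawn from Prediction Machines or any of the readings – separates the prediction step from the decision step. The data, feature names, and 0.5 threshold are invented for the example.

```python
# A minimal, hypothetical sketch of the prediction/decision split described above.
# The data, feature names, and threshold are invented for illustration.
from sklearn.linear_model import LogisticRegression

# 1) Prediction: the algorithm "learns" patterns from historical data.
#    Each row is a made-up applicant: [income_in_thousands, years_employed]
X_train = [[35, 1], [80, 6], [50, 3], [20, 0], [95, 10], [40, 2]]
y_train = [0, 1, 1, 0, 1, 0]  # synthetic past outcome: 1 = repaid, 0 = defaulted

model = LogisticRegression()
model.fit(X_train, y_train)

# The model's output is a prediction: an estimated probability of repayment.
new_applicant = [[45, 2]]
p_repay = model.predict_proba(new_applicant)[0][1]

# 2) Decision-making: a human-chosen rule turns the prediction into an action.
#    The 0.5 cut-off is a policy choice, not something the model "knows".
decision = "approve" if p_repay >= 0.5 else "deny"
print(f"Predicted probability of repayment: {p_repay:.2f} -> {decision}")
```

Notably, both the historical data and the approval threshold are human choices – which is precisely where the biases discussed below can enter.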
However, when AI is tasked with making decisions based on its predictions, there can be negative consequences. Many people recognize fundamental issues on both the prediction and the decision-making side, often concerning the data that is fed into the algorithm. Data sets can contain biases that reflect our own societal and cultural biases, resulting in decision-making outcomes that adversely affect certain people. In a relatively harmless example, Twitter’s image previews favour white faces – even blank white backgrounds – over black faces. In a more critical example, studies have shown that facial recognition technology is less accurate for racialized people’s and women’s faces. This becomes dangerous when such technology is used in sensitive contexts like policing, employment insurance, hiring, and applying for bank loans.
But, as Danks and London describe in their taxonomy of algorithmic biases, biases can also be designed into an algorithm, and in some cases this can be beneficial or even necessary. Awad et al.’s Moral Machine experiment is an example of ethics as a designed, conscious, and necessary bias in autonomous vehicles, based on surveying which lives society is willing to spare over others. Additionally, Danks and London recognize that algorithms can be designed to correct and compensate for flawed input data and so address negative outcomes.
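As a purely illustrative sketch of this kind of designed, compensating bias – reusing the same invented loan data as above, with hypothetical group labels – one simple technique is to reweight the training examples so that an under-represented group carries equal total weight in the model’s learning:

```python
# A hypothetical sketch of a compensating, deliberately designed "bias":
# reweighting training examples so an under-represented group is not
# effectively ignored by the model. Data and group labels are invented.
from collections import Counter
from sklearn.linear_model import LogisticRegression

X_train = [[35, 1], [80, 6], [50, 3], [20, 0], [95, 10], [40, 2]]
y_train = [0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B"]  # group B is badly under-represented

# Weight each example inversely to its group's frequency, so both groups
# contribute the same total weight during training.
counts = Counter(groups)
weights = [len(groups) / (len(counts) * counts[g]) for g in groups]

model = LogisticRegression()
model.fit(X_train, y_train, sample_weight=weights)
```

Whether such a correction is appropriate is itself a value judgment, which is why Danks and London insist that some biases are not only acceptable but necessary by design.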
The bigger question around AI today concerns its use by consumers and their relationship to the corporations that develop AI applications.
As Katz and Shapiro argue, technology markets tend to “tip” towards monopolies. With AI in particular, the top tech companies amass large amounts of data and create network effects in which users are compelled to use or buy the dominant platforms or risk being left behind, which in turn allows those companies to collect even more data. This centralization of data consolidates power in a handful of companies and restricts new entrants to the market. This is why antitrust law is emerging as a potential solution to the issue.
This data collection raises concerns about how personal data is gathered, shared, and used, often without explicit and unambiguous consent. When companies do legally obtain consent, it is often ambiguous, vague, and buried in labyrinthine Terms of Service agreements. And when that data is used for content recommendation and programmatic advertising, platforms can push discriminatory content – as observed with YouTube’s algorithm promoting problematic videos. Despite Facebook implementing changes to stop ad targeting based on “ethnic affinity,” advertisers can use other attributes associated with racialized groups, echoing “dog whistle politics” and Lee Atwater’s “Southern strategy.”
In addressing the flaws of AI, this author believes that a key strategy must be changing industry norms. The current mindset in the tech industry is “disrupt first, ask for forgiveness later.” Any potential solution, in this author’s opinion, must involve changing those norms so that our values – such as transparency, justice and fairness, a duty not to commit harm, responsibility, privacy, and human well-being (see the Harvard and Nature Machine Intelligence survey) – are pressed upon industry players. If a company’s central goal is profit, it is incentivized to pursue manipulative techniques that increase revenue and has no incentive to develop AI that is beneficial for all.
Multiple approaches must be taken to force changes in industry norms. Consumer demand through purchasing choices and public pressure from stakeholders like journalists can be effective. However, monopolies may disregard these concerns because they understand that consumers have no viable competitors to move to. Public pressure must therefore be accompanied by government pressure. This can be done through new policies, enforcement of older legislation, and updating outdated legal frameworks – such as antitrust law – to reflect the nature of today’s technologies. Government can use policy not as an end in itself but as leverage – both beneficial incentives and threats of penalties – to push tech companies to act on their own. Tech companies are beginning to feel the pressure: Apple is adopting more privacy features, like iOS 14.5’s App Tracking Transparency (“Ask App Not to Track”) prompt, which has in turn pressured Google to include a similar feature in the upcoming Android 12.
What the readings do not touch upon is the use of AI by government through automated decision systems (the terms ADS and ADM are used interchangeably).
In some ways, this issue is more dangerous, as it fundamentally changes the citizen–government relationship (Lindgren et al.). However, we are seeing efforts by governments, such as the European Union’s GDPR, whose Articles 21 and 22 establish “the right to object at any time to processing of personal data concerning him or her for such marketing” and “the right not to be subject to a decision based solely on automated processing.”
One of the areas in which ADS is being used is immigration and refugee decision-making and enforcement. Petra Molnar is a prominent scholar on this issue; she co-authored (with Lex Gill) the report “Bots at the Gate,” which uncovered how “predictive analytics” and AI are being used in the Canadian immigration system. This is problematic because immigrants and refugees are not afforded the same Charter rights as citizens, and there are administrative law issues where ADS do not afford these vulnerable populations procedural fairness. Additionally, Molnar’s current research examines the Greek government’s use of facial recognition technology and AI lie detector systems on refugees in the Mediterranean.
Canadian governments have begun to address AI issues. In 2019, the federal Treasury Board Secretariat issued the “Directive on Automated Decision-Making,” which requires departments using AI applications to complete an Algorithmic Impact Assessment. However, Karanicolas and Scassa have written critical assessments of the Directive’s effectiveness, pointing to its exemptions for areas like national security and to potential problems with enforcement. In recent months, Ontario has also begun its own discussions on building a framework for regulating AI in the province, including hosting public consultations and releasing a white paper titled “Modernizing Privacy in Ontario.”
Conclusion
As we continue to use AI in our daily lives, it becomes an extension of ourselves, as Marshall McLuhan theorized. And in doing so, we build up what Ursula Franklin calls a “social mortgage,” in which we cede control to algorithms – “All media work us over completely.” We let them make decisions for us, bypassing human judgment and “principled decisions,” and perhaps failing to reflect our social and cultural values.
What is required is a radical re-imagining of the design and use of AI to better reflect our values as a society. In changing the norms surrounding AI, we must embed human agency, transparency, accountability, and care for marginalized communities. We must understand when AI can be used for good, when it raises concerns and how we can address them, and, especially, when AI should not be used at all in order to protect the most vulnerable. “The future’s not what it used to be,” as Broken Social Scene sings. But we have to create our own future so that AI does not own us.
References
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. 2018. “The Moral Machine Experiment.” Nature 563 (7729): 59–64. https://doi.org/10.1038/s41586-018-0637-6.
Ricks, Becca, and Mark Surman. 2020. “Creating Trustworthy AI: A Mozilla White Paper on Challenges and Opportunities in the AI Era.” Mozilla Foundation.
Danks, David, and Alex John London. 2017. “Algorithmic Bias in Autonomous Systems.” In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 4691–97. Melbourne, Australia: International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2017/654.
“Directive on Automated Decision-Making.” 2019. Government of Canada – Treasury Board Secretariat. https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.
European Union. 2016. General Data Protection Regulation (GDPR). Regulation (EU) 2016/679.
Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades.” http://gendershades.org/index.html.
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. “Machine Bias.” ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Karanicolas, Michael. 2019. “To Err Is Human, to Audit Divine: A Critical Assessment of Canada’s AI Directive.” SSRN Scholarly Paper ID 3582143. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3582143.
Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8 (2): 93–115. https://doi.org/10.1257/jep.8.2.93.
Lindgren, Ida, Christian Østergaard Madsen, Sara Hofmann, and Ulf Melin. 2019. “Close Encounters of the Digital Kind: A Research Agenda for the Digitalization of Public Services.” Government Information Quarterly 36 (3): 427–36. https://doi.org/10.1016/j.giq.2019.03.002.
Surman, Mark. 2021. “The Real World of AI.” Forthcoming.
McLuhan, Marshall, Quentin Fiore, and Jerome Agel. 2001. The Medium Is the Massage: An Inventory of Effects. Corte Madera, Calif: Gingko Press.
“Modernizing Privacy in Ontario.” 2021. Government of Ontario, Ministry of Government and Consumer Services (MGCS).
Molnar, Petra, and Lex Gill. 2018. “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.” International Human Rights Program and Citizen Lab.
Scassa, Teresa. 2020. “Administrative Law and the Governance of Automated Decision-Making: A Critical Look at Canada’s Directive on Automated Decision-Making.” SSRN Scholarly Paper ID 3722192. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3722192.
