Canada’s research expertise in artificial intelligence (AI) has led to significant recent investment. But far fewer resources have been dedicated to the governance, ethics or social responsibilities of this new technology, leaving many different local initiatives to try to fill the gap.
AI already affects our everyday lives. With each like, swipe and comment on social media, most of us contribute data used to improve AI applications that recommend videos online or help us manage our email.
Our ability to ask a smartphone a question results from a long history of research into how computers can automate human labour and improve decision-making. Faster processors, better algorithms and big data, often gathered from our online interactions, have driven major advances in deep learning, a branch of AI.
Advances in these fields have been heralded as a net benefit to all humankind.
But treating AI as inherently good overlooks the important research and development needed for ethical, safe and inclusive applications. Poor data, opaque code or rushed deployment can easily lead to AI systems that are not worth celebrating.
For example, automated systems can encode bias. A ProPublica investigation found that COMPAS, a program used in the United States to score defendants' risk of reoffending, was biased against African Americans.
Findings like this reveal two important realities. First, the impacts of AI are very real and affect real lives and families. Second, this is happening right now; these are not far-fetched scenarios that might occur in the future.
Assessing the consequences
Despite these risks, interest in AI in Canada — and around the world — has not slowed.
In Canada, the federal government has invested heavily in artificial intelligence research in the past year. The 2017 budget included $125 million for a Pan-Canadian Artificial Intelligence Strategy, administered by the Canadian Institute for Advanced Research. A Quebec-based proposal for research on AI and supply chains won a share of the $950 million federal "superclusters" fund in February.
But without proper outreach to sectors with expertise in the social, cultural and political consequences of new technologies, these initiatives will likely widen the gap between technology firms and the public interest and disconnect research from critical reflection.
Fortunately, the social implications of AI are beginning to appear on the federal government’s radar.
The Treasury Board Secretariat (TBS) of Canada is finalizing the first round of public consultation on the responsible use of AI in government. It has worked closely with the AI community, piloting an open online consultation that has allowed us and others to collaborate on the resulting guidelines.
That report reviews many potential applications of AI, such as chatbots to help users navigate government services and programs to create website content. The initiative demonstrates how the federal government can lead by adopting strong guidelines for its own use of AI.
These concrete, narrow applications are a sign the government understands the risks of automation in its own work. We hope the final report will result in strong guidelines that guarantee transparency, accountability and fairness.
In addition, Global Affairs Canada is leading a multi-university collaboration on artificial intelligence and human rights. For example, graduate students in Fenwick McKelvey's Media Policy seminar at Concordia University are contributing to a broad scan of AI's policy implications for human rights. McKelvey aims to link debates around AI to expertise in communication and cultural studies, fields that have long questioned the cultural, social and political dimensions of media and technology.
McKelvey’s research connects with these initiatives by exploring the implications of algorithms and AI on internet governance and broadband performance.
Abhishek Gupta is evaluating the impacts of AI-enabled automation in the financial services industry and how to tweak university curricula to better train practitioners to meet evolving job needs.
Gupta also founded the Montreal AI Ethics group, where conversations have focused on developing the Montreal Declaration for a Responsible Development of Artificial Intelligence. The document sets out clear principles for ensuring that AI respects well-being, autonomy, justice, privacy, knowledge and democracy, and that it is developed responsibly.
AI as a data policy
An AI strategy does not need to be developed in isolation.
A national approach to AI should learn from the philanthropic and nonprofit sectors' emerging discussions around data. Montreal's Powered by Data has helped lead important conversations on how philanthropic organizations and nonprofits can become more data-driven and serve as models for the inclusive and ethical use of data.
Data is just one area in which the sector is integrating digital innovation into its operations and developing digital strategies that keep the focus on its missions.
Part of this shift requires reassessing questions of inequality and discrimination in light of digital automation. The Brookfield Institute, a policy institute for innovation and entrepreneurship, has studied the potential benefits and threats of automation to the labour market. Its assessment indicates these effects will not be shared evenly across Canada, with some communities more vulnerable to job losses.
But these initiatives lack the size and scope of interventions elsewhere.
In the United States, the Ford Foundation and the MacArthur Foundation fund the AI Now Institute, hosted at New York University. The group has released two major reports on AI governance, leading to an international debate about how AI should be developed and used — and demonstrating that these sectors can play an important role.
AI Now's success shows that the philanthropic and nonprofit sectors can inform the development and application of AI. By creating an institute or committing to long-term initiatives, these sectors could ensure that Canada's AI strategy connects with important discussions about rising inequality, precarious labour, data discrimination and reconciliation.
Putting Canada in the lead
Canada has a clear choice. Either it embraces the potential of being a leader in responsible AI, or it risks legitimating a race to the bottom where ethics, equity and justice are absent.
Better guidance for researchers on how the Canadian Charter of Rights and Freedoms relates to AI research and development is a good first step. From there, Canada can create a just, equitable and stable foundation for a research agenda that situates the new technology within longstanding social institutions.
Canada also needs a more coordinated, inclusive national effort that prioritizes otherwise marginalized voices. These consultations will be key to positioning Canada as a beacon in this field.
Without these measures, Canada could lag behind. Europe is already drafting important new approaches to data protection. New York City launched a task force this fall to become a global leader on governing automated decision making. We hope this leads to active consultation with city agencies, academics across the sciences and the humanities as well as community groups, from Data for Black Lives to Picture the Homeless, and consideration of algorithmic impact assessments.
These initiatives should provide helpful context as Canada develops its own governance strategy and works out how to include Indigenous knowledge within it.
If Canada develops a strong national approach to AI governance that works across sectors and disciplines, it can lead at the global level.
Fenwick McKelvey, Assistant Professor in Information and Communication Technology Policy, Concordia University and Abhishek Gupta, AI Ethics Researcher, McGill University
This article was originally published on The Conversation.