Good morning, subscribers, and thanks for tuning in to this edition of the Morning Brief!
Our theme for the week is the ethics of fast-developing technologies like artificial intelligence and machine learning. Recalling Microsoft’s disastrous Tay Twitter experiment of last year, how do we ensure that new technology doesn’t reflect and perpetuate the worst in our human society?
This week’s Morning Brief was prepared by Cindy Liu. Sign up here to have the Morning Brief delivered directly to your inbox.
The fairness of technology
- Let’s begin by understanding the difference between equality and equity, and how that distinction helps us make sense of what a truly equal society would look like. Focusing on equality alone, without accounting for the inequities already built into society, can undermine the very goal it sets out to achieve. Kicking off the Gender Diversity Public Policy Initiative’s Unpacking Equity series, which aims to break down complicated topics such as equity, diversity and inclusion, [Sahota/PPGR] explores these terms through the Canadian healthcare system, where “free to all” does not mean equal access for all.
- Machine learning is the science of getting computers to recognize patterns in data. The technology is already being used by companies to sort through resume submissions and make hiring recommendations (a minimal sketch of such a screening model appears after this list). Although it has obvious benefits such as efficiency and consistency, is machine learning actually equitable and free from discrimination? [Kumar/PPGR] works through some striking examples that raise red flags about the use of this technology.
- For policymakers, machine learning can involve a difficult trade-off between accuracy and fairness. The technology can improve procedural justice by making government procedures more accurate and consistent, but questions of fairness and substantive justice are harder to answer. We can tell a machine not to take race into consideration, but unless we also tell it to ignore every factor that correlates with race as a result of historical discrimination, such as criminal records, we may continue to perpetuate the same biases (the second sketch below illustrates this proxy problem). [Schlabs/The Regulatory Review].
- Alternatively, we can approach the problem of ethics in developing technologies through machine learning itself: we can teach robots about ethics by programming in principles (e.g. “avoid suffering”, “promote happiness”) and then having them learn, from particular scenarios, how to apply those principles in new situations (see the third sketch below). Carebots, robots designed to assist the sick and elderly, are prime candidates for this kind of learning, since they are likely to face difficult choices. We can’t forget the potential drawbacks, though: machines have been shown to import human biases, and may evolve to a point where humans can no longer predict their future actions [Edmonds/BBC News].
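As promised above, here’s a minimal sketch of the kind of resume-screening model [Kumar/PPGR] describes. It’s written in Python with scikit-learn, and every feature and data point is made up; the point is simply that the model learns whatever patterns are present in past hiring decisions, which is exactly how it can inherit their biases.

```python
# A hypothetical resume screener: a classifier trained on past hiring
# decisions learns to score new resumes. All features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up resume features: years of experience (scaled), keyword match
# rate, and an education score.
X = rng.random((200, 3))
# Past hiring decisions the model learns from (1 = hired). If these
# past decisions were biased, the model inherits that bias.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new, hypothetical applicant.
new_resume = np.array([[0.8, 0.6, 0.5]])
print("recommend interview:", bool(model.predict(new_resume)[0]))
print("estimated probability:", round(model.predict_proba(new_resume)[0, 1], 2))
```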
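Next, a toy illustration of the proxy problem raised in the [Schlabs/The Regulatory Review] piece. The data is synthetic and the “neighbourhood” proxy is hypothetical; the takeaway is that a model told nothing about race can still reconstruct a racial pattern from any feature that correlates with it.

```python
# A "race-blind" model that rediscovers race through a proxy feature.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Protected attribute and a strongly correlated proxy
# (think postal code in a highly segregated city).
race = rng.integers(0, 2, n)
neighbourhood = race + 0.3 * rng.random(n)

# Historically biased outcomes that track the protected attribute.
outcome = (race + rng.normal(0, 0.2, n) > 0.5).astype(int)

# Race is explicitly excluded from the model's features...
model = LogisticRegression().fit(neighbourhood.reshape(-1, 1), outcome)
pred = model.predict(neighbourhood.reshape(-1, 1))

# ...yet the predictions still split almost perfectly along racial lines.
print("agreement between predictions and race:", (pred == race).mean())
```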
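Finally, a deliberately simplified sketch of the principle-based approach described in [Edmonds/BBC News]: a carebot scores candidate actions against two programmed principles and picks the best one. The actions, effect estimates, and weights are all invented; in the article’s framing, those weights and their application would be refined by learning from example scenarios judged by humans, rather than fixed by hand.

```python
# A toy carebot that ranks actions against two programmed principles:
# "avoid suffering" and "promote happiness". All numbers are invented.

# Estimated effect of each candidate action on suffering and happiness (0-1).
actions = {
    "fetch medicine now": {"suffering": 0.1, "happiness": 0.9},
    "wait until morning": {"suffering": 0.6, "happiness": 0.4},
    "call a nurse":       {"suffering": 0.2, "happiness": 0.7},
}

# Principle weights; the learning step would tune these from labelled
# example scenarios instead of hard-coding them.
weights = {"suffering": -1.0, "happiness": 1.0}

def score(effects: dict) -> float:
    """Weighted sum of an action's effects under the two principles."""
    return sum(weights[k] * v for k, v in effects.items())

best = max(actions, key=lambda name: score(actions[name]))
print("chosen action:", best)  # -> fetch medicine now
```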
We hope this week’s articles have piqued your interest in equity and ethics in technology, and make you think a little the next time Netflix recommends a movie or Facebook suggests another event to attend. The next edition of the Brief will make its way to your inboxes on November 8th.