This marks the halfway point in our series on the frontiers of learntech. Our hope is that you have already begun to consider the implications this explosion in learning technology will have for your learning business. When it comes to those implications, we want to focus on some of the major themes that emerged from our earlier conversations with Donald Clark and Sae Schatz, particularly those related to bias and equity.
Bias and artificial intelligence, or, more specifically, bias in AI, is not a new concern. But it's one that has been garnering more and more attention in recent years, and it feels appropriate to focus on it now because of the rise of social justice movements we've experienced in the United States.
In this fourth episode of the series, we explore the potential harm of bias in AI, drawing on research from Joy Buolamwini and Cathy O'Neil, both featured in the documentary Coded Bias, available on Netflix. We also discuss the difference between interpretability and explainability when it comes to understanding AI and why looking for bias in data is just as important as looking for bias in AI processes.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Listen to the Show
Access the Transcript
Download a PDF transcript of this episode’s audio.
Read the Show Notes
[00:18] – A summary of what we’ve covered up to this point in the series and a preview of what’s to come in our conversations with Sam Sannandeji, founder and CEO of Modest Tree, and Ashish Rangnekar, co-founder and CEO of BenchPrep.
Harm from Bias
We recently watched Coded Bias, a 2020 documentary available on Netflix. The film investigates bias in algorithms and features the work of MIT Media Lab researcher Joy Buolamwini, who uncovered flaws in facial recognition technology. The technology was very good at recognizing the faces of white men but far less accurate with the faces of women and people of color. Because of her work, Google and other tech companies have worked to improve their AI, and it has gotten better at recognizing faces of all types.
Joy founded the Algorithmic Justice League, which “combines art and research to illuminate the social implications and harms of artificial intelligence. AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms.”
Watch Joy Buolamwini’s TED Talk below about the need for accountability in coding and how she’s fighting bias in algorithms.
Cathy O'Neil is also featured in the Coded Bias documentary. Cathy wrote Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). In the documentary, Joy and Cathy focus on the use of AI in policing, surveillance, credit and lending decisions, insurance, advertising, and more. In the film Cathy O'Neil says, "People are suffering algorithmic harm."
Both Cathy and Joy are focused on the harm. Harm is in the quote from Cathy, and the mission of AJL also mentions harm. They have real concerns, and there is real reason for them: lost opportunities in accessing money through lending, a greater likelihood of being stopped by police, higher interest rates, and more. There are enough real instances of harm that many are clamoring for legislation, regulation, and standards. In fact, as we're recording, the European Commission is expected to unveil a proposal on artificial intelligence regulations in the European Union.
One concern covered in Coded Bias involved a teacher in Houston who had won numerous teaching awards over many years but received a poor evaluation when the district implemented an algorithmic approach to assessing teachers. He and other teachers sued, and part of their argument (they won the case) was that they didn't know why they'd received the poor evaluation: the algorithm was a black box they couldn't question, so they couldn't contest the result because its premises weren't known.
Black Boxes, Explainability, and Interpretability
[05:45] – The black box argument is interesting. As a society, we use a lot of technology we don't understand—for example, our laptops, smartphones, and Google Search. We have a pretty crude understanding of how all of those work, but we aren't likely to want to give any of them up.
We recently came across a really helpful distinction from Christopher Penn in a Marketing Over Coffee podcast episode. He says when we want to understand how software arrived at a particular outcome, we choose between explainability and interpretability. “Interpretability is the decompilation of the model into its source code. We look at the raw source code used to create the model to understand the decisions made along the way,” per Penn. “Explainability is the post-hoc explanation of what the model did, of what outcome we got, and whether that outcome is the intended one or not.”
Christopher uses an analogy to make explainability and interpretability more digestible. Explainability is tasting a cake—we can taste it and get a general idea of what went into making it. We might not get 100% of the ingredients right—is that vanilla extract or almond extract?—but it's a fast, easy way of testing. Interpretability, though, is looking over the recipe for the cake. We look at the list of ingredients and the steps, which allows us to verify that the recipe makes sense and the ingredients are good. This is a more rigorous way of validating results, so it makes sense in high-stakes situations—if someone has a severe allergy, if harm could come from eating the cake, then we want interpretability, not just explainability.
But if the stakes aren’t very high, explainability usually is the go-to. Interpretability is costly, and it’s often operationally difficult to do a thorough review. “For more complex systems like neural networks, interpretability is a massive, expensive undertaking. It slows down systems like crazy, and in the most complex models, we might never fully unravel all the details. Interpretability also reveals the secret sauce, to the extent there is any, in the process,” says Christopher. So AI software makers don’t really want interpretability—at least not publicly available interpretability.
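For the technically inclined, here is a minimal sketch of our own (not Christopher Penn's) that makes the distinction concrete, using scikit-learn and synthetic data; the feature names and model choices are purely illustrative. The small decision tree can be read end to end like a recipe (interpretability), while the random forest is treated as a black box and described after the fact with permutation importance (explainability).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative data: 500 "learners," 4 made-up features, a binary outcome.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["feature_0", "feature_1", "feature_2", "feature_3"]

# Interpretability: a small decision tree whose full "recipe" can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Explainability: a black-box ensemble described after the fact by measuring how
# much shuffling each feature hurts accuracy (a quick "taste" of the model).
forest = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Even in this toy example, the trade-off is visible: reading the tree is only practical because the model is tiny, while the importance scores are quick to compute but only summarize what the model did, not how it did it.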
But, if you’re the teacher in Houston whose job is suddenly in jeopardy, you want interpretability, not just explainability. Or if you’re not getting job interviews because of AI screening or you can’t get access to a loan because of your zip code. There’s a power differential that comes into play—and this is mentioned in Coded Bias.
Cathy O’Neil makes the point that it’s very hard for an individual to push back against large-scale AI-driven decisions because a lot of them are invisible and are happening in a black box. Many people concerned about bias in AI are focused at this point on simply making the issue known, calling our attention to these often invisible systems, and raising awareness of the potential for misuse, whether that misuse is intentional or incidental.
Weighing Benefits and Risks
[09:39] – In our conversation with Donald Clark earlier in the series, he said he feels the bias-in-AI discussions lean too hard toward the harm side. He makes the point that calling out bias in AI is problematic because the alternatives aren't bias-free. In the case of learning, human teachers and facilitators are rife with bias. So eliminating AI does not eliminate bias.
We think Donald and Joy agree on this point. Joy says in the documentary, "The past dwells within our algorithms." They both acknowledge that past and current biases are reflected in AI and its algorithms. Donald just wants to make sure the baby doesn't get chucked with the bathwater. Because there's bias everywhere and because the current educational system isn't working—he calls current educational systems "far too expensive, clumsy, and slow"—he sees hope in AI. AI can be taught and audited, probably more effectively than humans can, so over time we can hopefully catch and remove biases. In the meantime, though, we have to act carefully, since AI can scale bias.
AI has the potential to do much more harm than a single biased teacher, but it also has the ability to do great good. If we can scale unbiased AI, it can help learning be the great equalizer it’s often been held up to be in the past.
– Jeff Cobb
Though, as Donald points out, learning is often not an equalizer but something exclusive. Learning is not cheap enough or fast enough to be equally useful and accessible to all. On the Pollyanna-to-Doomsday spectrum, it seems we’re somewhere in the middle. Artificial intelligence feels like a both/and at this point—it has dystopian and utopian possibilities.
Rating the Risk of AI
[11:53] – Tied up in Donald’s baby-and-bathwater comment is an idea of the risk involved. Donald used cars as an example. Tens of thousands of people die in car crashes in the U.S. every year, but we still drive cars. We’re not talking about banning the use of cars. We’ve collectively concluded that the good outweighs the bad. The same is likely true for AI—we won’t ban it, but what rules and regulations, what speed limits do we need in place to make it as safe as possible?
We'd argue that AI for personalizing learning is at the lower end of the risk scale, especially if the AI is not a gatekeeper, giving access to content to some and keeping others out, but more of a guide on the side, recommending content and helping learners find what is relevant and useful.
In our conversation with Sae Schatz, she homed in on the fact that we can't be satisfied just because everyone has Internet access and a computer—i.e., the tools for access aren't enough. For equity in learning, everyone needs access to high-quality opportunities and experiences—and that's something AI can help with, if done right. If not, we run the risk of exacerbating existing inequalities and of creating what Sae called "hidden haves and have-nots."
In his book Artificial Intelligence for Learning, Donald Clark points out the risk in AI for learning. He writes, “The danger is that AI could deliver narrow, deterministic, prescribed pathways, not allowing the learner to breathe and expand their horizons, and apply critical thought.” So, “We need to be careful that the learner retains the curiosity and critical thinking necessary to become an autonomous learner.” But he also points out, “The degree to which human agency is included in AI-driven systems is a design issue.”
So it comes down to human designers. Do we design AI as a guide and nudge, with lots of room to still explore or even ignore recommendations? Or do we make it a gatekeeper with tight control over access to learning resources? As long as AI is a guide and not a gatekeeper, the risk of getting AI wrong feels minimal, and the potential for getting it right seems huge.
Sponsor: BenchPrep
[14:53] – If you’re looking for a partner to help you realize the possibilities of learning technology, check out our sponsor for this series.
BenchPrep is a pioneer in the modern learning space, digitally transforming professional learning for corporations, credentialing bodies, associations, and training companies for over a decade. With an award-winning, learner-centric, cloud-based platform, BenchPrep enables learning organizations to deliver the best digital experience to drive learning outcomes and increase revenue.
The platform's omnichannel delivery incorporates personalized learning pathways, robust instructional design principles, gamification, and near real-time analytics that allow organizations across all industries to achieve their goals. More than 6 million learners have used BenchPrep's platform to attain academic and professional success. BenchPrep publishes regular content sharing the latest in e-learning trends.
To download BenchPrep's latest e-books, case studies, white papers, and more, go to www.benchprep.com/resources.
Data as Dangerous as AI
[16:12] – In Coded Bias, Joy Buolamwini says, "The past dwells within our algorithms." The good and the bad of our past are on display in the algorithms and in AI. The past is there because AI needs data to learn from. That means the datasets used to teach AI can be troublesome—gaps in the data or over-representation of particular groups can skew results. So even if the algorithm is unbiased, the data might be biased. We have to look for and audit for bias in the data as well as in the AI processes.
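As a rough illustration of what auditing the data (as opposed to the algorithm) can look like, here is a minimal sketch assuming a hypothetical training dataset with a demographic group column and a pass/fail label; the file and column names are ours, not from the episode.

```python
import pandas as pd

# Hypothetical training data with a demographic "group" column and a "passed" label.
df = pd.read_csv("training_data.csv")

# Representation: is any group badly under- or over-represented in the data?
print(df["group"].value_counts(normalize=True))

# Base rates: do the outcomes we're training on already differ sharply by group?
# A model trained on this data will learn and reproduce any gaps it finds here.
print(df.groupby("group")["passed"].mean())
```

Checks like these won't catch every problem, but lopsided representation or sharply different base rates are exactly the kinds of gaps that get baked into whatever is trained on the data.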
If you think about the term systemic racism and then consider that learntech is itself made up of systems, learning management systems among them, you can see the potential for bias and injustice in learntech.
We're struck by how everyone we've talked to for this series has emphasized the importance of data. Data is the lifeblood that makes AI, personalization, recommendations, and more work. When we spoke with Celeste Martinell, vice president of customer success at BenchPrep, she emphasized the essential role data will play in learning going forward.
Data was a topic we spent time on in episode 265, and it was a refrain that both Donald and Sae returned to again and again. Sae mentioned the conventional wisdom summed up in the cliché that data is the new oil. Oil is arguably a 20th-century point of comparison—a more 21st-century view might argue that data is the new solar or the new wind. But the point is that data is necessary for powering and enabling other types of activity.
Donald Clark outlines four levels of use of data in his book Artificial Intelligence for Learning: describe, analyze, predict, and prescribe. The levels move up in difficulty. Using data to describe who completed which course and when is much easier than reaching the fourth level, prescribe, where we use data to help us understand not what has happened but what should happen. What should this learner study next? Prescribing gets into recommendations and true personalization.
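To sketch the distance between the first and fourth levels, here is a minimal example of our own assuming hypothetical LMS completion records; the column names and the popularity-based recommendation are illustrative, not Donald Clark's, and true prescription would rely on a trained model rather than simple popularity.

```python
import pandas as pd

# Hypothetical completion records exported from an LMS.
records = pd.read_csv("completions.csv")  # columns: learner_id, course_id, completed_at

# Level 1, describe: who completed which course, and how many learners per course.
print(records.groupby("course_id")["learner_id"].nunique())

# Level 4, prescribe (very crudely): suggest popular courses a learner hasn't taken.
# Real personalization would replace popularity with a trained predictive model.
def recommend(learner_id, n=3):
    taken = set(records.loc[records["learner_id"] == learner_id, "course_id"])
    popularity = records["course_id"].value_counts()
    return [course for course in popularity.index if course not in taken][:n]

print(recommend("learner-123"))
```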
AI plays a significant role in marketing today, and AI in marketing is pretty good at getting beyond the lower levels (describe and analyze) and into the upper levels (predict and prescribe). We feel like many learning businesses are still in the describe and analyze levels with their learning, even while marketing is further along. We know our listeners have to market their learning offerings, so marketing’s use of AI is important and relevant in its own right but also because what happens with martech is often a bellwether for what happens with learntech.
This fits with Donald Clark’s assertion that consumer tech drives learntech. It’s also a point that Joe Miller, vice president of learning design and strategy at BenchPrep, made when we spoke to him. He likes to look at what the other “x” techs are doing—fintech or retail tech for example. Looking at how they’re approaching problems and opportunities can be instructive for organizations looking to get the most out of their learntech.
[20:34] – Wrap-up
Reflection Questions
Below are two questions to explore and re-evaluate as your learning business adopts AI and increased automation in its use of learntech.
- How risky are the ways in which you use or plan to use artificial intelligence and automation? Riskier uses are those where the technology acts as a gatekeeper, allowing access to some and keeping others out. Less risky uses are those where the technology makes suggestions and recommendations that learners can act on or ignore.
- For each of your uses of automation and AI, is the technology interpretable or explainable? And does that match the risk? For riskier uses that might be tied to job promotion, salary increases, etc., you’ll want to lean towards the interpretability side, being able to check out the cake recipe, so to speak.
To make sure you don’t miss the new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also helps us get some data on the impact of the podcast. Data’s the lifeblood of podcasts too.
We'd also appreciate it if you'd give us a rating on Apple Podcasts by going to https://www.leadinglearning.com/apple. We personally appreciate your rating and review, but, more importantly, reviews and ratings play a big role in helping the podcast show up when people search for content on leading a learning business.
We encourage you to learn more about the sponsor for this series by visiting benchprep.com/resources.
Finally, consider following us and sharing the good word about Leading Learning. You can find us on Twitter, Facebook, and LinkedIn.
[23:10] – Sign-off
Other Episodes in This Series:
- Learntech: The Next Generation
- AI, Data, and Optimism with Donald Clark
- The Future Learning Ecosystem with Sae Schatz