We all recognize the transformative power of AI, yet we often overlook its flaws.
AI, like a child, learns from the data it is given. This learning process is not without its problems – notably, the inheritance of biases from its training data.
This issue is particularly critical in educational settings where the impact is profound. We must address these biases before fully embracing AI in our classrooms.
Inherent Biases in AI:
AI’s learning process is analogous to a child’s – it absorbs what it’s exposed to. But what if the data it has absorbed is inherently biased?
Consider AI plagiarism detectors, often biased against non-native English speakers due to their training on native-speaker datasets.
How?
According to James Zou, a Stanford professor and co-author of a recent paper entitled “GPT detectors are biased against non-native English writers”:
“If you use common English words, the detectors will give a low perplexity score, meaning my essay is likely to be flagged as AI-generated. If you use complex and fancier words, then it’s more likely to be classified as ‘human written’ by the algorithms.”
So these AI plagiarism detectors treat sophisticated language as a marker of human writing, and that marker correlates with native English fluency. Because non-native speakers tend to use simpler vocabulary and grammar, the detectors are far more likely to misclassify their writing as AI-generated.
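The mechanism Zou describes can be sketched with a toy model. Real detectors score text with a large language model; the unigram model, the tiny "training" corpus, and both sample sentences below are invented purely to illustrate why common words yield low perplexity:

```python
import math
from collections import Counter

def unigram_perplexity(text, freqs, total, vocab_size):
    """Per-word perplexity under an add-one-smoothed unigram model."""
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (freqs.get(w, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

# A tiny stand-in for the "native speaker" text the model was trained on.
corpus = ("the cat sat on the mat and the dog ran in the park "
          "while the sun was bright and the sky was blue").split()
freqs = Counter(corpus)
total = len(corpus)
vocab_size = len(freqs) + 1000  # head-room for out-of-vocabulary words

pp_simple = unigram_perplexity("the cat ran in the park",
                               freqs, total, vocab_size)
pp_fancy = unigram_perplexity("the feline luxuriated upon verdant meadows",
                              freqs, total, vocab_size)

# Under the detector logic, the lower-perplexity (simpler) text is the
# one flagged as AI-generated.
simple_is_flagged = pp_simple < pp_fancy  # True in this toy setup
```

Because every word in the plain sentence appears in the "training" text, its perplexity is lower, and low perplexity is exactly the signal such a detector reads as "AI-generated": the same property that penalizes non-native writers' simpler vocabulary.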
How stark is this bias? A snippet from a Stanford article shows:
“While the detectors were ‘near-perfect’ in evaluating essays written by U.S.-born eighth-graders, they classified more than half of TOEFL [Test of English as a Foreign Language] essays (61.22%) written by non-native English students as AI-generated.”
It gets worse. According to the study, “all seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (19%) as AI-generated and a remarkable 89 of the 91 TOEFL essays (97%) were flagged by at least one of the detectors.”
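The proportions quoted above follow directly from the essay counts in the study (91 TOEFL essays in total); a quick back-of-the-envelope check, using only those counts:

```python
# Sanity-check the proportions quoted from the study (91 TOEFL essays).
unanimous = 18 / 91      # flagged by all seven detectors (~19.8%)
at_least_one = 89 / 91   # flagged by at least one detector (~97.8%)

print(f"unanimously flagged:      {unanimous:.1%}")
print(f"flagged by >= 1 detector: {at_least_one:.1%}")
```

In other words, for all but two of the 91 essays, at least one detector called a human-written essay machine-generated.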
And it doesn’t stop with language. AI’s biases can be found everywhere.
There are facial-analysis tools that misidentify people of color due to a lack of diversity in their training images, and the errors compound when gender is considered.
The numbers uncovered by Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group, make the disparity concrete:
Her research found that an “examination of facial-analysis software shows error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women.” When she applied three commercial facial-analysis systems from major technology companies to her newly constructed data set, “[a]cross all three, the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects.”
The solution?
We need broader, more diverse datasets and, crucially, the inclusion of educators in AI development.
AI tools shouldn’t be solely in the hands of engineers; educators must have a say in their design. This approach will help create ethical, representative AI tools.
Addressing Biases at the Classroom Level:
Systemic changes take time. But at the classroom level, we can foster critical thinking to address biases embedded in AI. This involves:
- Educating Ourselves and Students: We must be aware of our biases and how they perpetuate through technology.
- Understanding Algorithmic Bias: Recognizing how algorithms can reinforce our biases.
- Promoting Critical Thinking: Encouraging students to question information critically: Is it true? Is it biased? Where does it come from?
- Media literacy: Understanding where information comes from and evaluating claims critically. Teach students to ask: What evidence supports this? Whose voices are included or excluded?
- Algorithmic literacy: Recognizing how algorithms can perpetuate biases and questioning their recommendations. Students can examine: What data trained this algorithm? What assumptions or biases shaped its development?
- Multiple perspectives: Seeking out diverse voices and considering issues from different standpoints. Ask students: How might other people see this differently? Whose perspectives haven’t we heard?
Even young students can engage in these conversations. Instead of dictating what to think, we should ask, “What do you think?” or “Why do you think that?”
Now What?
To effectively harness AI in education, confronting and correcting its inherent biases is imperative. This involves not only diverse data and educator involvement in AI development but also fostering critical thinking and AI literacy in the classroom. By taking these steps, we can ensure that AI becomes a beneficial tool in education, rather than a source of perpetuated biases.