Baked-in bias or sweet equity: AI’s role in motivation and deep learning

Key points:

It is critical to monitor AI systems for bias and impact on student groups

AI use guide helps students navigate AI in learning

AI’s transformative role in accessibility

In the quickly evolving landscape of AI, education stands at the forefront. New AI tools are emerging daily for educators and students: from AI tutors to curriculum creators, the AI education market is surging.

However, the long-term impact of AI use on students is unknown. As educational AI research tries to keep up with AI development, questions remain about how AI use affects student motivation and overall learning. These questions are particularly significant for students of color, who consistently encounter more systemic barriers than their white peers (Frausto et al., 2024).

AI surged into classrooms in the wake of the COVID-19 pandemic and the related declines in student learning and motivation. The term refers to a broad range of technologies, including tools such as ChatGPT, that draw on vast data repositories to make decisions and solve problems. Because these tools can assist with assignments, such as generating essays from prompts, students quickly integrated them into the classroom. Educators and administrators were slower to adopt the technology, but they have started using AI both to manage unregulated student usage and to streamline their own work with AI-powered grading tools. While the use of AI in education remains controversial, it is clearly here to stay and, if anything, is rapidly evolving. The question remains: Can AI enhance students’ motivation and learning?

A recent rapid review of research concluded that students’ motivation is impacted by their experiences in and out of the classroom. The review highlights how student motivation is shaped by more than just individual attitudes, behaviors, beliefs, and traits, but it does not comprehensively address the effects of AI on student motivation (Frausto et al., 2024).

To understand how AI may impact the motivation and learning of students of color, we need to examine the nature of AI itself. AI learns and develops from preexisting datasets, which often reflect societal biases and racism. This reliance on biased data can lead to skewed and potentially harmful outputs. For example, AI-generated images are prone to perpetuating stereotypes and clichés, such as exclusively depicting leaders as white men in suits. Similarly, if we were to use AI to generate a leadership curriculum, it would be prone to create content that aligns with this stereotype. Not only does this reinforce the stereotype and subject students to it, but it can also produce unrelatable content, leading students of color to disengage from learning and lose motivation in the course altogether (Frausto et al., 2024).

This is not to say that AI is a unique potential detractor. Discrimination is a persistent factor in the real world that shapes students’ motivational and learning experiences, and similar bias has previously appeared in non-AI learning and motivation tools built on research centering predominantly white, middle-class students (Frausto et al., 2024). If anything, AI only serves as a reflection of the biases that exist within the broader world and the education sphere; AI learns from real data, and the biases it perpetuates reflect societal trends. The biases of AI are not mystical; they are very much a mirror of our own. Teachers, for example, demonstrate levels of bias comparable to those of the world around them.

When we think about current AI use in education, these baked-in biases are already cause for concern. On the student side, AI systems have demonstrated subtle racism in the form of dialect prejudice: students using African American Vernacular English (AAVE) may find that the AI they communicate with offers them less favorable recommendations than it offers their peers. On the teacher side, similar bias may affect the grades AI-powered programs assign, favoring the phrasing and cultural perspectives found in white students’ essays over those of students of color. These are just a few examples of the biases present in current AI use in education, but they already raise alarms. Comparable human-to-human discrimination, such as from teachers and peers, has been linked to decreased motivation and learning in students of color (Frausto et al., 2024). In this way, AI and its biases may become yet another obstacle that students of color are required to face; AI learning tools and supports designed for, and tested to positive effect on, white students may negatively affect students of color because of these inbuilt biases.

For humans, we recommend anti-bias practices to overcome these biases. With AI, we may yet have an opportunity to incorporate similar bias awareness and anti-discriminatory practices. Such training for AI has been a prominent point in the conversation around responsible AI creation and use for several years, with companies such as Google releasing AI guidelines that emphasize addressing bias in AI systems development. Approaching the issue of AI bias with intentionality can help circumvent discriminatory outputs, for example by selecting large, diverse datasets to train AI on and rigorously testing systems with diverse populations to ensure equitable outcomes. However, even after these efforts, AI systems may remain biased toward certain cultures and contexts. Even good intentions to support student learning and motivation with AI may lead to unintended outcomes for underrepresented groups.
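As one illustration of what such testing can look like in practice, the brief sketch below (written in Python, with entirely hypothetical data, column names, and a made-up threshold) shows how a school team might compare an AI grader's scores against human rubric scores across student groups and flag any group where the gap is large. It is a minimal example of the kind of audit described above, not a prescribed or validated method.

    # Minimal sketch of a group-level audit for an AI grading tool.
    # Hypothetical records: each holds a student's group label, the score
    # assigned by the AI grader, and the score from a human-applied rubric.
    from collections import defaultdict

    records = [
        {"group": "A", "ai_score": 88, "human_score": 90},
        {"group": "A", "ai_score": 92, "human_score": 91},
        {"group": "B", "ai_score": 78, "human_score": 89},
        {"group": "B", "ai_score": 80, "human_score": 88},
    ]

    GAP_THRESHOLD = 5  # points of AI-vs-human divergence that triggers review (assumed)

    def audit_by_group(rows):
        """Average the AI-minus-human score gap within each group and flag large gaps."""
        gaps = defaultdict(list)
        for r in rows:
            gaps[r["group"]].append(r["ai_score"] - r["human_score"])
        for group, diffs in sorted(gaps.items()):
            mean_gap = sum(diffs) / len(diffs)
            flag = "REVIEW" if abs(mean_gap) >= GAP_THRESHOLD else "ok"
            print(f"group {group}: mean AI-human gap {mean_gap:+.1f} points [{flag}]")

    audit_by_group(records)

Run on the sample data, the script would flag group B for review because the AI grader's scores diverge from the human scores by nearly ten points on average; real audits would, of course, use much larger samples and locally appropriate group definitions.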

While AI-education integration is already occurring rapidly, there is an opportunity to address and understand the potential for bias and discrimination from the outset. Although we cannot be certain of AI’s impact on the motivational and educational outcomes of students of color, research sets a precedent for bias as a detractor. By approaching the implementation of AI in education with intentionality, an inclusive range of perspectives, and an awareness of potential harm, we can work to avoid what might otherwise seem inevitable and instead create an AI-powered learning environment that enhances the learning experiences of all students.
