No One Is Going To Be Surprised That Research Finds ChatGPT Use Can Hurt Student Learning, But Study Misses Huge Point

Jill Barshay at The Hechinger Report has shown again why she is THE person to read to keep up on education research.

She’s just posted Kids who use ChatGPT as a study assistant do worse on tests, which covers a big new study of students using AI (the study is NOT behind a paywall).

The study’s findings shouldn’t surprise anybody, and I have to say that, given the way the study appears to have been structured, its conclusions should have been obvious to everyone involved before it was even conducted.  In fact, it seems to me, at least, that it should have been so obvious that they did a disservice to the students in the study who were allowed to use AI.

You can certainly put me in the skeptics’ corner when it comes to AI having a huge positive impact on education (see WHAT I THINK ARTIFICIAL INTELLIGENCE WILL DO – & WHAT IT WON’T DO – IN K-12 EDUCATION).

However, it seems to me that if you truly want to get an accurate picture of what AI can do in general education classes, you would want to do it in one of two ways:

One, is there really anybody besides Bill Gates and Sal Khan who believes that if you tell your students, “Hey, I’m going to teach the class in basically my usual way.  The only change is that you can use AI whenever you want,” this process is going to lead to learning gains?  It seems to me that you would need to redesign your entire course around using AI if you wanted to see whether it was going to help students, rather than doing them a disservice by doing something you already knew would hurt their learning (which is what I think was done to students in this study).

Two, the other way to fairly measure AI’s impact would be to first spend time teaching about AI, its potential benefits and disadvantages, and providing students with guidelines on how to use it.  In my case, I do a lot of that now not because I think AI is going to help students, but as a defensive measure to minimize its damage.  Here are posts about some of the things I do in my IB Theory of Knowledge classes:

HERE’S THE AI GUIDANCE I’M GIVING TO MY TOK STUDENTS THIS YEAR.

HERE’S A DRAFT WEEK-LONG UNIT ON ARTIFICIAL INTELLIGENCE I’M USING TO FINISH THE YEAR – HELP ME MAKE IT BETTER

THE BEST RESOURCES FOR HELPING STUDENTS SEE THE BENEFITS OF WRITING (IN THE AI AGE)

Some final points:

Reading this study reminded me of two things.

First, critics of inquiry learning often claim it is “unassisted discovery learning” in which students are not given any responsible guidelines about what to do in a lesson.  As I’ve said before, I think that’s a “straw man” because no good teacher is going to use that kind of method.  Yet, in many ways, that’s exactly what was done in this ChatGPT study.

This ChatGPT study also reminds me of ones on loss aversion where researchers literally first gave eight-year-old students prizes and then, to quote one of the researchers, had to “rip a trophy out of the hands of an eight year old” when they didn’t meet academic standards (see The Best Posts On “Loss Aversion” & Schools).

Researchers, and the teachers who cooperate with them, would do well to remember the medical profession’s oath, “First, do no harm.”

 

 

 
