Are Human Beings Necessary in the Life Cycle of Knowledge?

A few nights ago I spent 90 minutes online with the staff of an elite public school in Shenzhen, China. I gave an hour-long talk about generative AI and its uses in education, then engaged my audience in a 30-minute Q&A.

Near the end of our chat, one of the teachers made a passing comment that has made me, for the first time, think about a potential crisis in our implementation of AI in schools. I call it, with very little hyperbole, the Classroom AI Doom Loop. 

The teacher in question strode to the front of the room to grab the microphone and then concisely explained the tremendous pressure he and his colleagues were facing. Students, parents, administrators, government officials, industry leaders, and the public were demanding that they increase students’ AI literacy while training these young learners to use the tools in ethical and effective ways.

This implementation was to occur in a learning environment bereft of guidelines, policy, laws, regulations, research, models, or proven strategies. He and his teaching colleagues were left on their own to face the most invasive disruption to education since, well, paper.

Students in his high school history class were turning in 25-page reports, perfectly written, clearly reasoned, and adorned with pages of citations. Did he really want to tell influential parents that their cherished offspring had cheated by using AI? Even if he did, could he prove it? He had no clue how to assess the work. 

The teacher glared at me, frustration penetrating the screen and miles, and asked a two-part question: “Where do you see this going? Best-case scenario and worst-case scenario?” 

Worst-Case Scenario: A Screenplay

I, and all the other talking heads in this field, have answered the best-case scenario question countless times, but I had never really pondered the idea of a worst-case scenario. Three days later, I have one. I’ll call it Johnny and Mr. Bennett Enter the Classroom AI Doom Loop, with apologies to the team who wrote the Twilight Zone intro … 

You’re traveling through another dimension — a dimension not only of sight and sound but of mind, both human and artificial. A journey into a wondrous land whose boundaries are that of imagination. There’s a signpost up ahead: You are now entering the Classroom AI Doom Loop …

Mr. Bennett, who teaches 10th-grade history, asks ChatGPT to create five standards-based writing prompts for a compare-and-contrast essay on the root causes of the French and American Revolutions. ChatGPT pops out five prompts a moment later. Mr. Bennett, who is the father of a newborn, skims the options and quickly chooses prompt No. 2:

“Examine how economic hardship—especially unfair taxation and national debt—fueled the American and French Revolutions. How did the similarities and differences in each country’s financial pressures affect the direction of revolt?”

Mr. Bennett pastes the prompt into a new assignment in Google Classroom and schedules it for release the following day at 8 a.m. He goes back to caring for his new baby. 

Johnny sees the assignment in the morning, copies it, pastes it into Claude, and asks Claude to write a response in the style of a 10th-grade student. Claude generates a passable essay, which Johnny scoops off the screen and pastes into a blank email in an attempt to remove any digital watermarks. He then copies the scrubbed text and pastes it into the AI Humanizer Twixify, which makes the essay read even more like a 10th-grader's work by eliminating the mechanical tropes of AI-generated text.

Johnny, who is thinking only about his daily schedule (track practice, a few hours spent working at Best Buy, a late dinner, League of Legends, homework, sleep) doesn’t even read the output from the humanizer before downloading it as a Word doc and sending it via email attachment to Essay Grader AI for automatic grading.

Essay Grader AI assesses the essay using a system-generated rubric, giving Mr. Bennett advice for improving the assignment and Johnny advice for improving his writing. The AI outputs the score back into the Google Classroom grade book, which ports that grade into the district grade book (PowerSchool).

Mr. Bennett, consumed by his fatherly duties, should really look at the feedback from Essay Grader AI, but he doesn’t have time. He supplements his salary by coaching the girls’ basketball team after school, which means he gets home late enough to irritate his wife, who also works outside the home. For understandable reasons, he trusts the AI to relieve him of the onerous task of grading 125 essays every week. 

“We know that a scenario can be real, but who ever thought that reality could be a scenario? We exist, of course, but how, in what way? As we believe, as flesh-and-blood human beings, or are we simply parts of an AI’s automated workflow? Think about it, and then ask yourself, do you live here, in this classroom, in this school, or do you live, instead, in the Classroom AI Doom Loop?”

The Mechanics of the Doom Loop

Let’s follow the workflow (sketched in code after the list).

  • Teacher prompts AI, which generates assignment →
  • Teacher posts assignment in Learning Management System →
  • Student copies text of assignment and prompts AI to create response →
  • AI generates a response, which student pastes into an AI Humanizer →
  • AI Humanizer outputs revised response, which student uploads to automated AI grader →
  • AI grader evaluates response, then posts grade/feedback to LMS →
  • LMS outputs grade into Student Information System
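
To make the loop concrete, here is a minimal Python sketch of the workflow above. It is purely illustrative: the llm() stub stands in for whatever model each party calls (ChatGPT, Claude, a humanizer, a grader), and every function name here is hypothetical, not any real product's API. The point is structural: each step's output is the next step's input, and no line requires a human to read anything.

```python
# Hypothetical sketch of the Classroom AI Doom Loop.
# llm() is a placeholder for any chat-model call; swap in a real client
# to make the loop fully operational. No real product API is implied.

def llm(prompt: str) -> str:
    """Stand-in for a generative-AI call."""
    return f"[model output for: {prompt[:50]}...]"

def teacher_creates_assignment(topic: str) -> str:
    # Step 1-2: Teacher prompts AI, then posts the result to the LMS.
    return llm(f"Write a standards-based essay prompt on: {topic}")

def student_responds(assignment: str) -> str:
    # Step 3-5: Student prompts AI for a response, then "humanizes" it.
    draft = llm(f"Answer in the style of a 10th-grader: {assignment}")
    return llm(f"Rewrite this to sound less AI-generated: {draft}")

def auto_grade(response: str) -> str:
    # Step 6: AI grader evaluates the response against a rubric.
    return llm(f"Grade this essay against a rubric: {response}")

def post_grade(grade: str) -> None:
    # Step 7: LMS records the grade and ports it to the SIS.
    print("LMS gradebook:", grade)
    print("SIS record:", grade)

# The full loop, end to end, with no human in it.
assignment = teacher_creates_assignment(
    "root causes of the French and American Revolutions")
response = student_responds(assignment)
post_grade(auto_grade(response))
```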

No humans were harmed in this process because humans were only ancillaries to the process. And this is today’s technology. By the beginning of the next school year, agentic AIs such as Manus, Convergence, or OpenAI’s Responses API will be able to eliminate humans from any involvement in the knowledge transmission cycle. If the last 100 years of technological innovation have taught us anything, it’s that if something can be automated, it will be automated.

Is this scenario really that far-fetched? Students and parents are busy and stressed. They hate homework because they have to give up their evenings and weekends to do it or monitor it. Teachers are busy and stressed. They hate homework because they have to give up their evenings and weekends to create it and then grade it. School districts are desperate to please parents and desperate to improve abysmal teacher retention rates. There is no conspiracy here; humans will simply choose to use AI for similar reasons. Refugees vote with their feet; learners and educators vote with their time.

Wouldn’t it be ironic if the solution to the industrial model of knowledge transmission is in fact automation? 

Do Not Blame the Tech Sector

The tech sector did not create AI so that students could cheat on history homework or teachers could relieve themselves of the drudgery of grading assignments. These are unintended consequences of AI’s functionality. Generative AI tools are large language models. Teaching and learning is an even larger language model. They fit together naturally.

In 1951, the philosopher Bertrand Russell wrote an essay entitled “Are Human Beings Necessary?” in response to the new field of cybernetics. He pondered the consequences of an automated society long before generative AI as we know it was even imagined. Russell concluded that an automated society might not necessarily require human beings to function, but that it probably wasn’t a good idea.

We have come to the inflection point where we can automate most elements of knowledge transmission. I’m not sure if that is a good idea. But before we take up residence in the Classroom AI Doom Loop, we should probably have a serious policy discussion about the purpose of education. If a major function of education can be automated, it’s probably not human enough. 

I can illustrate this point by advising you to walk through the aisles of your local Costco. You will see families that almost always include a toddler sitting in the cart, eyes glued to a tablet, oblivious to the chaotic bounty of capitalism that surrounds them. We have trained a generation of children to be entertained and taught by staring at a screen. Or if you prefer to learn via screen, watch the passionate lament by a teacher named Ema, who shares the same sentiment much more eloquently on TikTok.

The current iteration of AI tutors is really good at teaching students content. If children prefer to learn that way, let them. For thousands of years teaching has consisted of equal parts transmission of knowledge and human development. Let the machines handle the transmission chores. 

There is not a school on earth that doesn’t have a poster or plaque in the office that says something like the following: “Our mission is to develop lifelong learners who communicate effectively, think critically, collaborate, and use their creativity to succeed in college and career while improving their community as active citizens.” These competencies, often found in Portrait of a Graduate documents, are the components of education that humans excel at modeling, explaining, coaching, and teaching.

Let’s live up to those promises by letting the humans focus on human development and the array of skills that are described in frameworks as varied as 21st Century Skills, Socio-Emotional Learning, Employability Skills, Character Education, the Whole Child Approach, and Global Competence. They are all based on lived experience, and that’s something humans currently hold a monopoly on. 
