Balancing Bloom, assessment, and AI

Key points:

It’s possible to maintain academic rigor and revise assessments in response to generative AI


Early this school year, faculty had a conversation about student teachers pulling all-nighters to complete their lesson plans. Most of the faculty commiserated, recalling the long hours they themselves had spent developing high-quality lesson plans.

One faculty member spoke up and asked why we are not teaching student teachers to write a strong prompt for lesson plan development and encouraging them to use a generative AI tool to create lesson plans. The key is how the lesson plan is implemented to address student needs within the classroom, not the creation of the lesson plan itself. Alternatively, the student teachers could be asked to critique the AI-generated lesson plan, demonstrating their ability to analyze it. Several of the faculty pushed back, saying it is essential for student teachers to write those lesson plans themselves. However, the majority may be wrong.

Educators have long been encouraged to focus on higher-level thinking skills. Two key tools educators have been using for decades are Bloom’s Taxonomy and Costa’s Levels of Questioning. Now, more than ever, educators need to focus on the four upper levels of Bloom’s 1956 taxonomy (with evaluation at the top) and the processing and applying levels of Costa’s levels.

In a generative AI-rich world, we need to rethink how we view assessment. With generative AI now capable of handling lower-level cognitive tasks such as remembering and understanding, assessments need to challenge students to engage in higher-order thinking. This includes analyzing data, evaluating scenarios, and creating new solutions, which AI cannot easily replicate.

It is time for educators to ensure that all assessments beyond the most basic formative assessments focus on the top four of Bloom's six levels. Basic knowledge (Level 1) in Bloom's original (1956) version of the taxonomy can now be generated via AI. For instance, creating a state report covering its capital and basic history would be simple for Claude.ai. Similarly, at Bloom's comprehension level (Level 2), the suggested verbs include organize, summarize, translate, and paraphrase. Most generative AI tools can easily be prompted to organize and paraphrase. Translation is a task computers have handled for years with tools such as Google Translate, and a wide range of summarization tools now exists, including one integrated into Adobe Acrobat. Educators therefore need to take these tools into consideration when developing lessons and assessments. The gathering level of Costa's questioning taxonomy is similar: rewrite, restate, recall, locate, and describe are all tasks that generative AI can master.

Educators need to return to the original version of Bloom's taxonomy, in which evaluation, synthesis, analysis, and application are the top four levels. When a previous generation of technology tools made creation easier, the taxonomy was revised to place creating at the top. However, the simple development of new materials can now be done with generative AI. Effectively and efficiently applying those creations, analyzing them, and evaluating how they integrate into existing systems or thought processes must be where educators focus going forward. As technology has again shifted the landscape, it is time to move evaluation back to the top of the taxonomy.

This is not to say students should never be asked to perform tasks that align with Costa's gathering level of questioning, or to work at the knowledge and comprehension levels of Bloom. However, assessments, particularly quizzes, tests, and papers, need to be developed to focus on the higher levels of Bloom. When teachers do want to assess the lower levels, they should consider returning to oral assessments.

The rise of generative AI in educational contexts necessitates a strategic revision of assessment methodologies to maintain the integrity and relevance of classroom instruction. By shifting focus toward higher-order thinking skills such as analysis, synthesis, and evaluation, educators can ensure that assessments challenge students to engage deeply with content, fostering originality and critical thinking.

Emphasizing the application of knowledge in diverse contexts helps students develop practical skills that transcend academic environments and prepare them for real-world challenges. Moreover, by integrating tasks that require unique, reflective, and personalized responses, educators will cultivate digital literacy among their students, an essential competency in an increasingly AI-integrated world. This shift combats the potential for academic dishonesty and should enhance educational outcomes by promoting essential 21st-century skills.

Ultimately, revising assessments in response to generative AI technologies is about maintaining academic rigor and preparing students to be thoughtful, innovative, and ethical contributors to a technology-rich society.
