By Judi Fusco and Jeremy Roschelle
Throughout this article, Judi Fusco and Jeremy Roschelle discuss what the educational research community observed in 2023 regarding how students and educators were impacted by generative artificial intelligence technologies. Read on to see their conversation.
Judi Fusco: Over the past year, a lot of people wanted to talk and think about generative AI (GenAI). Here at EngageAI, we did a lot of thinking about it from many different angles, while carefully considering ethics and equity. I’m curious, Jeremy, what are your thoughts about GenAI in 2023?
Jeremy Roschelle: When GenAI became available about a year ago, many people wondered what educators would do. We saw educators consistently express cautious optimism—they were excited to dig into the opportunities, but also conscious of the risks.
Judi: Yes, and there are many risks to discuss. I’m hoping this new year we can really dig into these in conversations with others. Recently, when talking about ChatGPT with Menko Johnson, a colleague who works at a school district, I realized that one reason teachers and administrators like to use it is that it’s a “task completion engine.”
Jeremy: Task completion engine? Interesting phrase. Say more, please.
Judi: Imagine you’re a principal and you have to write yet another memo about the school parking lot to remind parents, staff, and students to follow the rules. You turn to a GenAI system and ask it to write a memo; in your prompt, you mention the rules it needs to cover and some specifics about your parking lot, and you ask it to use humor and be creative. It feels like you’ve written roughly 1,000 of these over your career, so you can quickly paste in the relevant information from past memos, and you’d like to see something different. Within seconds you have a memo that is satirical and meets all of your requirements. A GenAI completed the task and you didn’t have to expend mental energy; hence the name, task completion engine. (Of course, you fact-check that it wrote out the rules correctly; because of the way its algorithms work, GenAI sometimes chooses a word that isn’t quite right and ends up being inaccurate.)
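For anyone curious about the mechanics, that whole request is just a prompt sent to a model. Here’s a minimal sketch in Python, assuming the openai package and an API key; the model name and the parking-lot rules below are illustrative placeholders, and any GenAI chat tool works the same way from an ordinary prompt box.

```python
# A minimal sketch of asking a GenAI system to draft the parking-lot memo.
# Assumes the `openai` Python package and an API key in the environment;
# the model name and the rules below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rules = """
- Drop-off only in the front loop between 7:30 and 8:00 a.m.
- Staff parking is in the west lot; visitors must sign in at the office.
- The fire lane must stay clear at all times.
"""

prompt = (
    "Write a short, humorous, slightly satirical memo to parents, staff, "
    "and students reminding them of our school parking lot rules. "
    f"Cover these rules accurately:\n{rules}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # fact-check the rules before sending!
```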
Jeremy: I like the metaphor of a task completion engine, and I think I see how it might not lead to learning. I’m wondering whether we should think about GenAI for learning. Let’s talk a little more about this.
Judi: Sure. Here’s what I have discussed with some educators. Let’s start with a quick version of how people learn (a simplified learning theory) that I find useful. From one perspective, learning is about processing information; learning can happen when a person does something active with information.
Jeremy: Agreed. So if a “task completion engine” does most of the processing and gives a student the output, the student will not have learned from that activity. And because the student still produced something, others may mistake that for learning.
Judi: Exactly! Let’s think a little more about levels of processing and learning. Factual recall requires only shallow processing, which could be as simple as repeating a fact and its definition over and over to remember them. Let’s define a medium level of processing as integrating and making connections between what you already know and the new information. A deeper level of processing requires cognitive activities like synthesizing, expanding, and elaborating, so that you can really think about things from different perspectives and understand them.
Jeremy: Aha! Yes, I did a TV interview this year, which was fun, and I told the audience in New York City to evaluate what the student is doing. If, when students use AI, they are explaining, justifying, critiquing, comparing, arguing, and so on, then I’d feel comfortable that they are probably learning. But if they are just tweaking the knobs and passing along the output of a GenAI system, not so much.
Judi: Right, that’s a really good point. If a student creates a document on a topic with a GenAI system and then makes “minor changes” to it, they most likely won’t learn. We need to really think about what kinds of interactions help learning. The early 2023 story was “cheating,” and while those fears seem to have diminished, I’m worried about students being cheated out of opportunities to dig in and really learn because of GenAI use. Teenagers are overworked! Some may look for shortcuts. And to be fair, sometimes the assignments they are given don’t grab their interest.
Jeremy: I know how busy teenagers can get. I know sometimes the emphasis in assignments is on the product, not the process of learning. OK, so cheating 2.0—that’s what we should be thinking about as we go into 2024?
Judi: Well, it’s a little more than that. Let’s take another minute to talk about how GenAI tools are trained and why they can’t do certain things to support learning. These tools are trained on finished documents (and increasingly images, videos, etc.) found on the Internet. In training, GenAI derives a statistical model of the characteristics of those documents. In GenAI systems that produce writing, these models are called large language models (LLMs). The models capture relationships, such as which words tend to occur together and which words are similar enough to be used in place of one another. What we see in the output of GenAI is that the model generates novel texts with characteristics similar to the documents the LLM was given when it was trained. In schools, we’re concerned about human learning and need to support the process of learning. GenAI never considers the process of how humans learn; it just produces output similar to what it was trained on.
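To make “statistical model” concrete, here’s a toy sketch in Python of the core idea at a vastly smaller scale: count which word tends to follow which in the training text, then sample new text with similar characteristics. A real LLM learns billions of parameters with neural networks rather than simple counts, and the training sentences below are made up for illustration. Notice that the output can look fluent even though nothing in the model represents how a learner is actually processing ideas.

```python
# Toy illustration of the statistical idea behind language models:
# count which word tends to follow which, then generate "novel" text
# with similar characteristics. Real LLMs are vastly more sophisticated,
# but the core move (model the training text, then sample from the model)
# is the same. Training sentences here are made up for illustration.
import random
from collections import defaultdict

training_text = (
    "students learn by connecting new ideas to prior knowledge . "
    "students learn by explaining ideas to one another . "
    "teachers guide students as students learn ."
)

# Build a bigram model: for each word, count the words that follow it.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def generate(start="students", length=10):
    """Sample a short text by repeatedly picking a likely next word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate())
# e.g. "students learn by explaining ideas to prior knowledge . students learn"
# Fluent-looking output, but no model of *how* a learner processes ideas.
```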
Jeremy: Interesting. Are you saying that GenAI hasn’t seen examples of the process or flow of people who are engaged in learning? It makes sense, because there just aren’t many public examples that show how people integrate new information with prior knowledge, how they work to integrate a range of related concepts into a coherent structure, or how they engage collaboratively to critique and elaborate complex information. If there’s nothing from which GenAI can learn about the process of learning, there’s no way a task completion engine can help with learning. This is yet another issue with training data that we and others have discussed a great deal over the year. [1]
Judi: Yes, and I’m not sure an LLM can support all that there is in the learning process; it can model relationships among words, but it can’t do it all. One of the new things we need to think about in 2024 is the limits of GenAI. It does not know what great teachers know about how to guide a group of students through a learning process, because it has so few examples of the process of learning. You’re right: it may not be possible to go from a “task completion engine” to a “learning process guide.” We would need completely different data, and then we would need to ensure that the data is representative of the learning process in many different situations, with learners of diverse backgrounds. We don’t want to rush into the use of AI and harm students. I fear that there is too much pressure to rush in and use AI. I hope we can talk with educators a great deal this year.
Jeremy: To summarize, educators in 2023 may have seen some productivity gains in their practice from GenAI. Now, in 2024, it sounds like they should be on the lookout for overly ambitious claims that GenAI can guide or support the processing that students need to be actively doing in order to learn.
Judi: That’s the message from today’s conversation: Task Completion Engine ≠ Learning Process Support. Let’s take some time to really think about whether and how AI systems can help your students learn.
Jeremy: Maybe you could add this to the message to educators: keep looking for ways to engage your students actively and socially in deeply processing information, so that they are immersed in powerful learning processes.
Judi: I like that. Let’s get together again soon and talk more about GenAI and issues important to practice with the EngageAI Practitioner Advisory Board. Signing off for now with Happy New Year, Jeremy and all!
Jeremy: Happy New Year, Judi and all!