Composing Scientific Ideas in Narrative Learning Environments

Interviewed by Jeremy Roschelle

In this post, Jeremy Roschelle, executive director of Learning Sciences Research at Digital Promise, chats with Gautam Biswas and Mohit Bansal about their work developing artificial intelligence (AI) agents and the power of emerging technologies that can build narratives.

Key Ideas:

  • Solely memorizing facts often keeps students from engaging in powerful science learning. AI agents offer a potential way to keep students interested and engaged.
  • Composing multiple ideas is often difficult for science students, and it is also a priority problem in foundational AI research.
  • To compose ideas correctly, both people and AI systems need context.
  • Experts envision a future where students and AI agents can build science concepts through summarizing, clarifying, questioning, and predicting.

Gautam Biswas: I’m a professor of computer science at Vanderbilt University, and I apply artificial intelligence (AI) and machine learning to develop intelligent learning environments for K-12 science instruction. I help students construct models of real-world scenarios, which they can use to study how scientific laws play out in the real world.

Mohit Bansal: I’m a professor of computer science as well, at the University of North Carolina at Chapel Hill, where we develop artificial intelligence systems that understand human language and generate human-like language. In the Engage AI Institute, we want to create better AI agents to support classroom experiences: for example, how can we give students or teachers verbal feedback with explanations?

GB: That’s extremely interesting, because these science learning tasks are challenging, and students may disengage when a task becomes too difficult for them. When that happens, an AI agent like the one Mohit describes could summarize a student’s recent progress and help them re-engage by giving hints and suggestions on how to move forward. Enabling AI to explain its actions is crucial to creating a useful pedagogical agent.

AI Agents in Education

Jeremy Roschelle: How did each of you become so passionate about building AI agents for education?

GB: Well, we have some similarities in our backgrounds. As an undergraduate, I attended one of the leading institutes of technology in India, where we were first taught scientific topics—physics, chemistry, math—in a very abstract way. I learned much better when I applied those science concepts to understand and construct systems and had opportunities to make sure my design solutions worked.

MB: Similarly, the bottom line for me was the failure of memorization-based education and the value of a more constructive approach, what I describe as more “compositional.” Compositional is an important word in my AI work today because it is very, very challenging for AI deep learning systems to compose information or ideas across different times or settings. For example, take an assistant device you might find in your house. It can answer your question by finding one relevant bit of information but will struggle to combine multiple relevant sources of information to formulate a more human-like answer. Composing ideas is essential to supporting meaningful conversations, like the conversations between a parent and a student or a teacher and a student.

JR: So in Gautam’s learning environment, building a model probably involves conversations and actions that have this creative, compositional quality, right?

GB: Yes, that’s exactly what I picked up on in what Mohit said. To support a student who is building a scientific model, we need to compose a conversation based on the multiple cognitive processes the student has going on: what they know about the scientific topic, what related work they have done in mathematics or another class, and what question is on their mind in the modeling or problem-solving task they are currently working on.

JR: For example, in my own early learning sciences work, I found students built models by composing everyday metaphors like “pulling” along with their observations of how objects move (e.g., “faster and faster”), and relationships between variables (“making velocity bigger makes the object move faster”).

MB: And that kind of reasoning is like the Holy Grail for AI agents today. It’s so interesting and challenging! And it’s not only that, but also the problem of ambiguity. There are two ways to interpret “eat the cake with the spoon” — is it eating a cake that has a spoon inserted, or using a spoon to eat the cake? Basically, when you try to apply metaphors compositionally, you need to use a lot of context to choose the right meaning.

GB: And that gets us right to the big idea for the Institute: narrative-centered learning. One superpower of narratives is that they engage students and agents in a rich context, which can help students make the right sorts of connections among ideas. And building connections among related ideas is foundational to strong science learning. Not only that, but when you have a narrative context, the satisfaction a student feels in composing ideas to solve a problem is so much higher.

MB: And that’s why memorization-based learning makes me so unhappy. When there’s no narrative context, yes, I might use a fact on an exam, but then it is just erased from my head. Narratives make you care and help you ground the new concepts in what you care about. This is so important for education and so hard for AI today, and that’s why it is going to be so productive for our Institute to push on it.

GB: Scientists using AI for education need to consider this deeply. A good example is Douglas Lenat’s work to build general commonsense reasoning into a system called Cyc. It is extremely hard to do that in general. But it does not have to be so hard in specific educational contexts. Our bet is that with a story or narrative context, the problem of using context to resolve ambiguities will be easier to make progress on.

MB: Not only that, but when we want to generate a natural language response, having a narrative story context will make it easier to generate one that is really relevant to the context the student is already in — or to suggest an action to the student that will help them build that science simulation they are working on.

AI’s Abilities to Explain and Clarify

JR: You know, when I hear you both talk about supporting the student conversationally and also what you say AI can now do, it makes me think of some classic learning sciences research on reciprocal teaching, where students were given index cards with roles: summarizer, clarifier, questioner, predictor. These roles really helped students. And you are telling me that you are working on AI agents that can do these things: summarize, clarify, generate questions, and predict.

MB: Yes, that’s very close to my main passion. Not only do I want to summarize or ask a question, but I want the AI agent to be able to explain WHY it made that summary or asked that question. Explainability is extremely important in an educational setting and is also at the frontier of AI research today.

GB: And I’m excited about our Institute because we have such a great group of scientists who can come at this from different perspectives. Maybe in a small group of students working on a scientific simulation, none of the students is thinking about an important ambiguity—like whether a “bigger” value is positive or negative. Could that be where an AI agent specialized in the “clarifier” role comes in and helps?

MB: That’s where AI is going today — toward clusters of specialized agents that work together to support more complex reasoning.

GB: And our Institute is like that, too. We’ve got the many different educational, learning sciences, and foundational AI perspectives needed to create the next generation of powerful learning environments: learning environments that go beyond memorizing scientific facts to supporting students in using science to model phenomena in their world.