Interviewed by Pati Ruiz
In this conversation, Pati Ruiz, Senior Research Scientist on Digital Promise’s Learning Sciences Research team, interviews Cindy Hmelo-Silver and Tiffany Barnes about how equity will be woven into the Engage AI Institute’s work.
Key Ideas:
- Being able to do research at scale is an important part of supporting equitable learning environments and understanding how narrative-centered learning environments need to be adapted for and with different contexts and understandings.
- Generating narratives is both exciting and difficult.
- One goal is to develop ambitious learning practices where students and teachers all have agency in learning by doing.
Pati Ruiz: What inspires your interests today?
Tiffany Barnes: I’m not a neuroscientist or anything, but I always thought it was neat that we could get computers to do things that people can do. What really inspires a lot of my research is solving problems and understanding how others got past the impasses that I might be experiencing.
Cindy Hmelo-Silver: When I was in graduate school, my advisor said, ‘Hey, I hear there’s this thing called problem-based learning they’re doing in the medical school; why don’t you go check it out?’ And I was really excited to see students arguing over knowledge as they collaboratively tried to solve a problem that they didn’t necessarily know how to solve, one they had to learn new information about as they worked on it. One of the things that was also really exciting to me was the role of the teacher as a facilitator of learning rather than somebody who conveys information. If you had asked me when I started doing research whether artificial intelligence (AI) could ever have a role in orchestrating problem-based learning, I would have said no. I feel like both the advances in AI and the deeper understanding we have of how to facilitate learning are starting to make me see those possibilities.
PR: What’s fueling your passion for this work?
CHS: I think it’s a combination of colleagues who are doing really exciting technical work and understand learning, and colleagues who are interested in working with folks who understand education really well. With the collaborations these groups can have, some of the possibilities that I’ve thought about throughout my career can happen. I think we have possibilities to bring about these kinds of ambitious learning practices, where kids and teachers all have agency in learning by doing, by engaging with interesting subject matter and interesting applications.
TB: I think what’s exciting about AI today is that we can do a lot more things now than we used to be able to.
Making a Difference in the Future of AI
PR: Over the next five years, how do you see yourself making a difference?
CHS: Right now, classrooms look a lot like they did 20-30 years ago. I guess I would like to think that if we can figure out how to create these narrative-centered learning environments, along with the supports to make them locally relevant, to help teachers, and to support groups in collaborating, we can do some work that will be really transformative.
TB: What I’m really excited about is helping individual teachers and learners learn more, and collaborate to learn more. I think AI should be built for and scaled to more people. I am excited to help translate between research and teachers.
PR: How do you hope the work in the Engage AI Institute will support equitable learning environments for students at districts across the U.S.?
TB: I think our understanding of what equity means has evolved over time. What we mean by equity now is that every student should see themselves reflected in the stories that are told, and that’s really hard to do, even with AI. We have to think of new paradigms for what it means to make AI for education, and I think we have to figure out ways that these narrative-centered environments can actually help kids create their own narratives. But the equity work is also making sure we’re not just doing research on, but research with, teachers and students and parents as we build these stories.
CHS: I think being able to do this work at scale is an important part of supporting equitable learning environments and understanding how narrative-centered learning environments need to be adapted both for and with different contexts. We must ask how these learning environments are used, under what circumstances they might be effective, and what effective really means, whether that’s content learning, self-efficacy, or identity.
Storytelling with Generative AI
PR: This institute focuses on stories and narratives through the use of generative AI. What excites you about that focus?
CHS: One of the things that got me really excited was when we started talking about the possibility of generating narratives and adapting them to a local context. One of the interesting possibilities, if we’re thinking about problem-based learning, is adapting not at the individual level but at the classroom level. We’d use banks of narrative elements to construct a narrative problem context for a particular classroom, school, or museum. And how do we make sure that kids see kids who look like them, and kids who don’t look like them, as protagonists who have important roles? It sounds like we will have some interesting discussions over the next few years.
TB: Using generative AI to create new problems and contexts is very interesting to me as a tool for enabling people to see themselves in the problems and problem-solving they are learning to do.
PR: What are some fears and misconceptions that the public might have around AI in education?
CHS: Certainly one fear is that we’re trying to replace teachers, which we absolutely are not doing. I think Tiffany’s point about working with teachers, along with other stakeholders, is a really, really important one.
TB: I think another thing is privacy. People are quite afraid of anyone getting their data for any purpose. Parents and caregivers are rightly very protective of their children’s data, so I think that’s actually a big challenge for doing AI right and ethically. If we want to do a good job of building adaptive systems and stories for different kids, we need data about how they interact with what we make for them. We also need to ensure we are acting safely and responsibly. What we really want is for people to engage with the AI Institute in authentic and meaningful ways. They have to trust us enough to do that, and we have to be transparent and ethical to earn that trust.
PR: What gives you hope that the Engage AI Institute can address some of these fears and misconceptions over the next five years?
CHS: I think we’ve got an interdisciplinary group of people who are talking to each other and working together, combined with a commitment to equity and ethics from the beginning.
TB: What gives me hope is that the Institute is involving teachers, students, and young researchers to imagine what’s possible.