Toward Inclusive, Generative, and Transparent AI for Learning

Interviewed by Josh Weisgrau

Throughout this post, Josh Weisgrau, Digital Promise’s Chief Learning Officer for Experience Design, chats with Krista Glazewski, Department Chair and Professor of Instructional Systems Technology at Indiana University, and Corey Brady, Assistant Professor of the Learning Sciences at Vanderbilt University, about the potential for applications of AI in K-12 education that are deliberately designed to support learner variability and equitable outcomes for learners.

Key Ideas:

  • AI designed and applied with inclusive principles at its core can challenge students’ and teachers’ assumptions and help frame differences as assets
  • AI can free up teachers’ time, allowing them to prioritize the parts of their role that computers cannot fill, such as building relationships with their students and personalizing their learning experiences
  • As an early-stage technology, AI will not always work as intended; classrooms can embrace those failures as opportunities to learn about AI tools, not just from them

Josh Weisgrau: Tell me about what brings you to this Institute, and to AI and education in general.

Corey Brady: The reason I have hope for AI is the same reason that I have hope for diversity in people; that is, there is a chance to think about AI as a new kind of participant that challenges our assumptions about what participation, insights, and valid contributions to group thinking can look like. 

Recently, my younger daughter had major surgery to address a chronic condition. One of the positives from that experience was that it allowed her to breathe well enough to do things like shouting and singing. And so, she decided that since we had moved to Nashville, Music City, she wanted to know everything about music and start to sing. Her voice coach had never coached a teenager who had never sung before. So her voice, her instrument, was like a young child’s.

One of the things that’s been foregrounded for me in my daughter’s experiences is how, when a learning environment welcomes the unexpectedly different, it can be structured so that the differences are assets. And so I think the design work we do around AI can accommodate a different kind of intelligence in the AI itself, which can open us to different kinds of intelligence in the groups that we’re studying. I’m hopeful about that, but I also recognize that there’s a chance for it to turn into a re-inscription of existing horrible patterns.

Krista Glazewski: One of the things that I just care really passionately about, and that has shaped all of my ideas about education, is that I am a brown, but white-passing, individual who was adopted into a white family. From a very young age, I was always keying into how things were said to me when people assumed me to be one race, how relationships were formed, and what people expected of me. The way that plays out, even in my research today, is that I have always cared as much about what isn’t said in classrooms as what is said. This speaks to how we can make our assumptions about teaching and learning clear and explicit, so that we’re not operating from beliefs about what people do and don’t know that are not informed by the learners themselves.

Corey, you mentioned a kind of hesitation, or maybe even a fear, about re-instilling existing patterns of inequity, and I think that should be in the foreground of this Institute: ensuring that the things we scale are equitable practices, rather than scaling the technology itself or harmful practices. The worry with some of these technologies is that we will end up scaling things that cause harm or that scale inequities.

Generative AI

JW: What’s the first word that comes to your mind when you think of the intersection of AI and education?

KG: The word generative comes to mind for me, but only because that’s what I imagine we are aspiring to: work that is generative, that involves creative approaches that engage kids in productive work and thinking in ways that account for their own interests, communities, and goals.

JW: I think when most people think of AI they think of maybe a recommendation engine, or in education, an automated scoring engine; neither feels particularly generative. But this AI Institute is about stories and narratives, so why do you think that’s important?

KG: I do think most people think of it as automating processes that computers are better at than humans. I actually think that automation is a ticket to doing things that can be generative and supportive. The promise, or the potential, is that we might be able to take a learner at a moment of need and really push their understanding in key, deep, and important ways. I think so much about this moment of need, because it’s at the moment of need that students need an intervention the most, and when we can identify and target it. When we can plan for and detect that moment in real time, then I think we have done something that we haven’t been able to do before.

Another thing happening in this project that is really exciting to me is the ability to change the way educators think about engaging with individuals and classes of learners. What I mean is that if we could offload so much of the administrative work onto the technology, we would create space for the teacher to have one-on-one conversations, or to take a small group of learners deeper. When that space is created because so many other things have been offloaded to AI, then I think we are starting to understand the intersection of teachers and technology.

CB: I like the way you’re talking about it, Krista. You described one of the themes as the computer doing things more efficiently, or better, than a human can, and I think that’s obviously true in some cases, but I think computers are going to suck at narrative. A masterful storyteller wins the rhetorical argument, although the Greek [philosophers] worried about people being so good at storytelling that they would make the evil sound better than the good. But the computer is not going to trick us in that way with its stories. It’s not going to be so wonderful at telling stories that we’re compelled by its vision. Instead, it’s going to be bad enough, but different enough, that we’ll have to make sense of its stories. As a result, we’ll be called to tell better and more inclusive stories that can hold the truth that what [the AI] is seeing is different from what we’re seeing.

I’m kind of fantasizing about the inverse: instead of the computer freeing the teacher to focus on certain things, the teacher is dealing with more things, like more diversity in the classroom, than they could otherwise attend to. It’s not either/or, but there are moments, I think, when the teacher is going to have to broaden their sense of “how do I even make sense across this huge gulf in perspectives?” It’s going to be almost like the computer will make space for human diversity. I love the idea of people trying to help others understand what the computer might mean; I feel like I’ve seen that in physical interactions with robotics. If the kids are interacting with a robot, it’s like they are trying to include the robot, but when it’s a sense-making activity I think it’s even more compelling, even more of a requirement, to integrate it.

Opening the Black Box

JW: We’ve framed this Institute around learning with AI, not learning about AI, but it seems from what you are saying that this may really be a false distinction?

KG: I think what’s important is that there are opportunities to ask, “How is this technology working, why isn’t it working well, and what’s behind it?” One of the tenets of ethics in AI is transparency and accountability: we have to be accountable for how these data sets are derived and what they’re doing, and we have to be transparent about how they have been generated and what they’re used for. If we can’t do those things, then I think we are creating systems where people will put too much trust and faith in the technologies themselves. So to this question of whether this should be learning about technology, and whether, in a way, we should open up the box for them: 100 percent yes, we should.