Interviewed by Judi Fusco
In this post, Judi Fusco, Director of Emerging Technologies and Learning Sciences at Digital Promise, chats with David Crandall, Professor of Computer Science and Director of the Luddy Center for Artificial Intelligence at Indiana University Bloomington, as the two get to know each other while working across disciplinary boundaries in the National Science Foundation AI Institute for Engaged Learning.
Key Ideas:
- Interdisciplinary work that brings together classroom experts, computer scientists, and learning scientists is crucial as we pursue foundational and use-inspired research on future technologies.
- Artificial Intelligence (AI) may be a different kind of technology than we have encountered before, and our research needs to respond accordingly.
- It is a priority to preserve human connection as we develop new technologies.
Please tell us a bit about yourselves.
David Crandall: I’m David Crandall. I’m a professor of computer science at Indiana University. As a computer scientist, I work in computer vision specifically, and in machine learning and AI more broadly. I’m excited to work on core computer science problems that push the technology forward while also pursuing interdisciplinary collaborations and applications. That’s why I’m really excited about being part of this team.
Judi Fusco: I’m Judi Fusco, and I’ve been thinking about emerging technologies for classrooms with teachers since the late 90s. The technologies keep changing, and we are still sometimes creating new technologies for teachers rather than with them, and only afterward determining if and how they fit in classrooms. I want us to think with practitioners throughout the design and implementation cycle. I use a learning sciences lens as I think with practitioners.
JF: What’s an event or experience that inspired your research interests today?
DC: I’ve always been excited to be here at Indiana University, where I get to think broadly not just about the technology itself, but about how AI could be used to benefit people. To make that happen, it’s going to require many different disciplines across the university – sciences, humanities, arts, and education. I guess, in a sort of happy accident, I am in a place that is so supportive of interdisciplinary collaboration. I continue to be very excited about that.
JF: I’ve spent a lot of time with practitioners and I’ve learned so much from them. We need computer and technology experts, subject matter experts (in this case, teachers and students), and researchers all involved as we work to create new technologies. I’m here because of all the great experiences I’ve had with practitioners, and I want to bring them into the conversations we’re having.
The Transformation of AI Over Time
JF: What makes AI so exciting today? What makes this center so exciting?
DC: We could interpret the word exciting in a positive way or a negative way. AI is working well enough on real problems that people care about that it’s suddenly useful; that wasn’t the case for most of the 60-70 years that people had been working on AI previously. On the other hand, it also introduces lots of challenges. We’re seeing some of the ways things can go wrong, like biased algorithms, manipulation of information on social media (e.g., deep fakes), and self-driving cars that make mistakes and cause accidents, so we could talk about either kind of exciting.
JF: I agree that deep fakes and the wild mistakes that AI makes are very scary. One of the things I hope we can do is determine how much time it takes to understand new technology and what it is doing before we just release it. What are your thoughts?
DC: Over the centuries, other technologies have been introduced whose ramifications we didn’t fully understand until they were already out there being used. Then the policy and ethical conversations and the legal frameworks caught up. Maybe AI will be the same, but it does feel different, especially because the technology is advancing so rapidly. I think that part of our work in this institute is figuring out what things are essential for humans to do and what things we can – and want to – automate. We really need to involve and preserve the human connection.
JF: Yes. That’s where I think having practitioners involved will help because only they can tell us what they need. I can’t speak for them, nor can any other person who is not in the classroom.
What Role We Play
JF: Do you have other thoughts on the makeup of the institute?
DC: One of the things that I think about a lot is the role of the university in AI research, given how much exciting research is happening in industry. I think the university is an important institution of humanity that brings together experts from many different disciplines, who can think about these issues and who are hopefully not biased by profit or the whims of capitalism.
JF: Over the next five years, how do you see yourself making a difference in the AI world and education?
DC: To use the National Science Foundation’s terms, foundational and use-inspired research, I think the great thing about this Institute is that it brings together people who are looking at the same problem from many different angles. Some are thinking more foundationally, some in more use-inspired ways, and others in terms of societal implications or ethics. I am hoping that instead of developing my techniques in isolation and risking having no impact on people’s lives, here we’re really inspired by a gap to fill and can identify the right technical components to solve the right problems. I hope we can have a big impact without negative consequences, or at least that we can think through the negative consequences together. I think that’s the most exciting part – technical work that is directly impactful and that improves education.
JF: I hope to help connect the research and practitioner worlds so we can work together. I’m excited to see what new technologies emerge from our use-inspired work.