Ethical Considerations for AI in Education

Interviewed by Jeremy Roschelle

Jeremy Roschelle chatted with Ole Molvig and Collin Lynch about ethical considerations associated with designing and using artificial intelligence (AI) in education, in the context of their shared work at the Engage AI Institute.

Key Ideas

  • Ethical considerations should be front and center throughout the development of any new AI innovation, and ethics should be central to our definition of success for AI.
  • Policies and guidelines from the government, accreditation requirements in education, and standards of professional ethics are all needed to reinforce ethics in AI.
  • Public education is also important so that end-users can make informed decisions based on a full understanding of key issues such as transparency and privacy.

Interview

Introductions

Collin Lynch: I’m an Associate Professor of Computer Science at North Carolina State University. I work in AI, computer science education, and educational data mining. I’m doing a lot of work right now looking at how students engage in problem-solving in computer science education and, as part of that, thinking about questions of ethics and policy around AI and education.

Ole Molvig: I’m an Assistant Professor of History and Communications of Science and Emergent Technology at Vanderbilt University. My work has been a mixture of hard science and very cultural elements, looking at cartoons as well as equations, and trying to make connections between those things. I am interested in bringing humanist sensibilities and values to collaborations with scientists.

Examples of Ethical Considerations in AI

Jeremy Roschelle: Can you give us an example of something you’re working on and the kinds of ethical issues that arise in that work?

CL: So here’s a simple example from a current project. We are collecting students’ writing data, which puts us in an interesting position of having real-time information on things they type that they don’t necessarily intend to share. You know, things that are personal, things that may or may not give insight into risk or harm, personal things that may have legal implications now, especially in some states. I have what I term the “data miner’s dilemma” on my hands: Every additional piece of information I collect gives me new insights, which can be used to help or to detect warning signs. But the question we are asking is: What’s our responsibility here? Are we supposed to be looking for these things, or are we supposed to not be looking for these things? If we detect something, then there might be a requirement to act, right? And yet I could be wrong with my detection, in which case I could be creating more problems. At some point, you are making a conscious choice: either to know you could have detected something and done nothing about it, or to know that you are making recommendations that are imperfect. And so that’s provoking questions for us. And in fact, there aren’t clear guidelines on this, but it’s a question being raised.

OM: That becomes really interesting. And I think about the context of our Institute and how to build trust, even when the process is totally fair. Even when we’ve spent a lot of time in the engineering phase of the work. When we’re not collecting the wrong kind of data. When we are very transparent about how we are using those data. When we do a great job of communicating what we are doing to our users and to the public. Even when we do everything right, people are still going to come to these experiences with a host of other kinds of emotional dispositions and cultural and historical backgrounds that are really going to determine whether or not they trust us. And that’s going to be a continued challenge, I think, for the final work we do with this Institute.

CL: A lot of it comes down to how much you can verify. In reality, you should always carry a healthy dose of skepticism, because once you give data away you never get it back, right? But for me, it raises the question of how we would even verify whether a company is “doing everything right.” Even once you get the source code, for example, how do you verify whether there is bias or not? We don’t really have an objective standard, and we don’t have a guarantee that the code or the models, realistically the models after they’ve been trained, can be analyzed other than by looking at cases. We’re in a situation where we really lack methods and standards for analyzing trained AI models.

OM: I mean, this is one of the things I really like about being a historian in these situations. These new data techniques that we’re encountering, around artificial intelligence or keystroke monitoring or eye tracking, feel very pressing and new. They are also, when abstracted enough, things we’ve been dealing with for a very long time, and often they are things we have solved in one context or another. Of course, different cultures at different times choose to prioritize different concerns over others in these issues. With privacy, for example, one of the huge problems is that our society has definitely valued privacy differently for different members of society. Privacy has been inequitable.

Guiding an Ethical Practice

JR: What do you find works to get people attuned to ethics in AI? What is needed to guide ethical practice?

OM: What seems to have worked is very public outcry about specific widespread uses of untested AI that contained encoded biases. The biases reinforced concerns that society already had about unfair practices. We’ve seen this outcry particularly around some of the racially discriminatory elements found in facial recognition and the gender bias in job hiring algorithms. These scenarios reinforce the fear that AI is going to recapitulate all of the problems we see already embedded in society. We tend to think that technology is one of our solutions to improve the society that we currently have. And yet here is such a prominent example of technology not only not capturing the improvement that we have made, but really kind of codifying and encoding some of the practices that we have been working really hard to eliminate from our social practices. These clear examples of things that have gone wrong, that everybody could understand, and that had such clear analogs to other issues around civil rights…they woke up a lot of practitioners, researchers, companies, government agencies, and certainly academics, to pay a lot more attention. So that’s an example of how we see things moving from a public outcry or public movement into a codified set of government regulations.

CL: We also have a mandate now for professional ethics. If you want to get something taught to every undergrad, you incorporate it into accreditation requirements because that’s what makes the difference between something everybody has to learn and an idea that just sounds great. So for accreditation purposes, they have increased the requirement for ethics over the past several years from “this is a good idea” to “you really need it” to “okay, you have a class in it, but it’s a one-credit course they all take at the end” to “this should be throughout the curriculum.”

OM: Yeah. What’s probably most needed right now is education. I mean, transparency on AI models doesn’t do much if you’re not familiar with some key basics: What is AI? When is AI being used? What is the difference between an algorithmic system and a data system? What does privacy mean? There are so many barriers for different segments of society to have a really good understanding of what we mean by all of these terms. So an important first step is to have a much better definition of terms. That’s why I am somewhat hopeful about this AI Bill of Rights. I really like that it articulates an inherent right to challenge algorithmic systems. Having those be a priori expectations instead of ad hoc questions afterward is a good step forward.

A Focus on the Future

JR: What do you want to see people spending more time thinking about and talking about?

OM: The struggle is how you actually implement ethics into practice. I know for many small projects the goal is just to get something that works, and the mindset is that the ethics “polish” can happen later. But ethics shouldn’t be a polish in my way of thinking about it. Ethics should be one of the fundamental questions that you think about from the beginning.

CL: Yeah, and so in a sense, in order to change that, you need to change the definition of “get it to work” from “get it to run” to “get it to run ethically.” And so getting your tool to work should mean getting it to work normatively within the bounds of the law. We have to change the bounds of success or the definition of success.

OM: And guidelines, like the AI Bill of Rights, that require transparency will help move those concerns from polish into core needs. But those guidelines are still so far from being enactable and enforceable without crushing the explosion of creativity and interest that we do want to see in these areas.

CL: I would agree with that, and I would say that another thing we should really think about is policy. Because to my mind, ethical norms are great, but if you want people to be ethical you really have to create policies that compel them to act ethically. You have to turn ethical decisions into practical requirements, practical guarantees, or at the very least, clear best practices. We need to consider not just “What are the ethical implications?” but also “What are the policies that should go along with this?” Anytime we build something, there is the real-world context in which it’s used. You as the developer have very little control over that, but it has enormous implications. And so at the same time that we’re building these tools, we also really need to think about what policies are needed to ensure that they are being designed and used ethically.