By Jeremy Roschelle
Key Ideas:
- Grassroots educational policy development starts with boosting AI literacy
- Co-design leads to better educational uses and guidelines for AI
- Educators, researchers and community members organized around the concept of an AI Bill of Rights for Education
On August 9, 2023, the National Science Foundation-funded EngageAI Institute invited a national group of 80 educational practitioners, researchers, developers and funders to a day-long forum held at the Computer History Museum in Mountain View, California. As the day began, educators voiced excitement, uncertainty, and concern about the risks of Artificial Intelligence (AI), and structured their conversations around a central question: how can generative AI empower educators as learning designers? After listening to their experiences, collaborative teams of educators, researchers and community members gathered at roundtables. Participants collectively identified issues and imagined solutions, and although bringing their perspectives together was challenging, the level of engagement stayed high throughout the day. Overall, the event revealed the importance of joining forces to work towards safe, responsible and equitable use of AI. We share three insights below.
Grassroots policy development starts with boosting AI literacy
One Forum attendee summarized a theme as “the need for professional development for teachers about the definition, use, and existing types of AI tools that can be used in the classroom (teaching and learning), for both students and teachers.”
During the day, leading educators shared how they are called upon to explain AI to students, parents, fellow teachers, and their communities. A computer science teacher described helping fellow teachers make sense of the technology. A special education teacher described how AI could lead to greater inclusiveness in art, music and writing, and how she was sharing that insight across a large urban school district. A school superintendent shared that in her community, parents come to school leaders to make sense of what AI is and how it will impact their students. One educator summarized the day by writing “I want Congress to know that there should be a concerted effort in supporting and educating school leaders [to learn about AI].”
Co-design leads to better educational uses and guidelines for AI
A theme highlighted in the reflections of another Forum attendee was “the importance of co-designing AI solutions with educators and students. This means involving them in the development of AI tools and ensuring that they are designed in a way that meets their needs and respects their values.”
A school leader described how much their school learned about AI by allowing students to lead conversations. A researcher discussed how students became highly engaged when given opportunities to discuss AI ethics. This aligns with existing EngageAI research identifying how student contributions to the design process influence a tool’s effectiveness. Together, researchers and teachers called for efforts to understand the transformative potential of AI. Indeed, many participants expressed how AI could be a positive force for needed re-thinking of school, but only with equal attention to the related risks of AI systems.
As they thought about policy development, participants expressed a tension. On the one hand, inconsistency of policies and guidelines across classrooms and schools is confusing; centralized policies would help. On the other hand, they don’t believe governments or companies know enough about the reality of AI use in schools to come up with policies that can work. For example, it’s not just that schools will adopt a list of AI-enabled resources; schools will also have to decide what to allow among the tools that students bring. The “allow” category may have greater consequences than the “adopt” category. Participants called for policy development to involve co-design with educators and researchers; as one put it, there is a need “for more collaboration between different stakeholders in education to ensure that AI is used in a responsible and ethical way.”
Organizing around an AI Bill of Rights for Education
In a Forum workshop, small groups reviewed and elaborated upon principles in the White House Blueprint for an AI Bill of Rights. One educator commented “AI raises a number of ethical concerns, such as bias, privacy, and fairness. It is important to carefully consider these concerns before deploying AI in schools.”
Participants worried that the Blueprint’s idea of “giving notice” (of how AI is used in a vendor’s product) will not suffice for the reality of schools; educators will have to “take notice” of what is happening and anticipate consequences beyond those imagined by a vendor. Another table expanded upon concerns with data privacy. Compared to older educational technology products, they characterized AI-enabled products as having more data, more personal information, more ability to process images and video, more transformation between inputs and outputs, and more potential for misuse. Equity issues specific to education were also highlighted, such as growing inequalities based on differences in parents’ ability to pay for generative AI subscriptions. Forum participants saw value in a year-long project to elaborate principles for safe, responsible and equitable AI, which could lead to guidelines that are specific to educational situations.
Next steps
The EngageAI Institute has formed a practitioner advisory group to inform its explorations of how AI can increase student engagement and enable deeper learning. More generally, the Institute plans to continue catalyzing grassroots efforts among practitioners, researchers and community members through future forums, webinars, and working meetings. We plan to keep the AI Bill of Rights as an aspirational theme for this work. To learn more, sign up for the EngageAI newsletter.