
Ethical Integration of AI: Asking all the Questions


I like to ask a lot of questions. One of my core values is curiosity, and I lean into that heavily, believing that embodying flexibility and a growth mindset leads to new ideas and new ways of doing things. While, yes, it annoys people from time to time, I find that approaching with curiosity (and asking lots of questions) helps me to gather the information needed to make informed decisions, refrain from harmful assumptions, and move forward with a “what might be?” mentality – ultimately benefitting the whole process (and our kiddos!).


When it comes to the use of artificial intelligence, so many questions arise. When it comes to integrating artificial intelligence into education, we get even more questions because of the tremendous opportunity and responsibility we have when it comes to our students.


And I would encourage you to sit with the questions and really consider them. Often, when faced with questions surrounding something new or unknown, our instinct of self-preservation kicks in, and we lean towards options that maintain and reinforce the status quo. We cannot take that way of thinking and apply it to questions surrounding AI.


There are absolutely challenges when it comes to working with AI. But there is no challenge that isn’t already present in some form, and nothing we can’t overcome with intention. And the potential benefits of using AI far outweigh the challenges.


So, about those questions. When I think about the ethical use of artificial intelligence in education, I think about it on several levels: systemic questions about how the platforms are developed and put into the world; educational philosophy questions about why and how education should occur within an organization; policy questions that ensure safety and integrity; and practical implementation questions about what and how teachers are using AI in their classrooms.


While this list certainly isn’t exhaustive, here are some of the questions I’ve been considering as I reflect on what constitutes “ethical use of AI.”


Systemic questions

  • How can we steer future advancements of AI towards benefitting humankind?

  • How can we ensure that access to AI and its benefits is equitable?

  • How can we work to train AI to cut down on bias, discrimination, and misinformation?


Educational philosophy questions

  • How can we ensure the use of AI is being driven by educational goals and what is best for students?

  • How can we ensure we are using AI as a complement to human interaction, not a replacement?

  • How are we teaching our students to use AI in a way that focuses on fostering their own skills like critical thinking and creativity, rather than replacing these skills?

  • How can we use AI to support student development of emotional intelligence, fostering dispositions such as empathy, responsibility, and grit?

  • How are we using AI to develop and foster student independence, self-awareness, and self-efficacy, rather than an over-reliance on AI?


Policy questions

  • How are we protecting student data and privacy?

  • How are we rethinking standards and protocols in light of AI?

  • How can we align our organizational honor code to reflect ethical AI usage for both learners and teachers?

  • How can we engage in ongoing learning about developments in AI, and integrate that learning into our work, so that our policies stay current?


Implementation questions

  • How can we teach our students to be literate in the skills and tools of AI?

  • How can we work with our students to use AI as a powerful tool for maximizing learning, personalized to individual student needs?

  • How can we partner with parents for consistent messaging?

  • How can we use AI to shift education to spaces that weren’t previously possible?


I don’t have answers to any of these questions – in part because AI technology is advancing every day, so any answer I gave today would need to be adjusted by the time you read this; in part because there are no objectively “right” answers to any of these questions; and, in part, because these questions should be addressed on a deeply personal level, considering your organization, your goals, and your students.


So, to guide you in your own exploration of these questions, here are a few more questions:

  • Where do you have influence and/or authority regarding these questions?

  • How would you address this in theory? How does it look in practice?

  • Who are the stakeholders who need to be involved in this conversation?

  • What support do those in your organization need to address this area so that you are in alignment?


As always, our ultimate motivation should be doing what is best for students (even when it’s hard or scary or new or unknown), and this commitment to doing what is best for students should be guiding how we think about these and any other AI-related questions, ultimately using one overarching question as our guiding lens: “What might be?”



