There are challenges involved in the use of artificial intelligence, especially when it comes to the rapidly advancing, human-like generative AI that we’ve seen of late. Real challenges that are a little scary.
I am not a tech insider. I don’t have advance information related to the path any of this is taking. I haven’t been a part of the development team. I can’t even begin to hypothesize the possibilities of where we could be headed. I’m fully aware that many of the people who do fall into these categories have sounded alarms about the potential threat.
So this is a different kind of conversation.
I am an educator who is deeply committed to the future of our kids. I have a degree in sociology and am fascinated by the implications for our society. I am a change-averse human in a world that is rapidly developing.
And that’s the conversation I want to have.
When we think about the more advanced forms of generative AI (think ChatGPT, Bard), so many benefits and potential benefits come to mind.
And there are inherent challenges. Real challenges that could have a major (negative) impact if not addressed. But how many of them are actually new and unique to AI? I would argue none.
Challenge: Privacy and security implications
The problem: As it should be, one of the first questions we ask when it comes to the introduction of anything new to our students is, “Is it safe?” For AI specifically, lots of questions come to mind – What are the dangers to our minor children, and how can we protect them? How much data is being collected about us? How could this technology and the data collected be used in a way to hurt our kids?
Is this a new problem? No. AOL chatrooms were a big thing when I was in middle school (a really long time ago), and many of these challenges really started there. And we’ve all seen the memes about how the internet is listening in on our conversations in order to give us targeted ads.
The solution: It seems overly simple, but education and advocacy. The more we know, the better equipped we are to address any challenge, including taking necessary steps towards policy. So there are other questions we should be considering: What does digital literacy need to look like? What should privacy look like in today’s world? What are the real dangers involved?
Challenge: Equity considerations
The problem: Those who can and do fully embrace AI technologies will be at a significant advantage over those who do not. They will have access to better and more powerful resources. They will be able to complete tasks more effectively and efficiently. They will have advantages we haven’t even considered. And, yet, in order to fully embrace AI, you need access to computers or devices, time to explore, and guidance on how to develop skills in these areas. Not everyone has these resources, and the disparity will become very evident very soon as segments of the population are left out of these developments.
Is this a new problem? No. We have never had equity when it comes to education or the distribution of resources. It’s just that this lack of equity will become visible much faster.
The solution: Increase our commitment to equity (in education) including real action to address the sources and implications of the lack of equity. If we don’t, we will very soon have two very distinct classes of people – those with access to AI and those without.
Challenge: Promotion of bias and misinformation
The problem: Generative AI can and does provide incorrect and/or biased responses at times. Sometimes it is drawing on missing or incomplete information. Sometimes it is going beyond the scope of its training data. And sometimes it is just making something up with no real explanation (hallucinations).
Is this a new problem? No. Bias and misinformation are so prevalent in our world that they are actually why they exist within AI. In order to create generative AI like ChatGPT, it was fed millions of pieces of existing information so that it could learn. The fact that what it generates is sometimes not quite accurate or impartial is purely a reflection of our reality. My daughter once told me that the moon landing never happened and was all a scheme from NASA. Because she watched a YouTube video. I have certain friends with whom I have had to deem certain topics off-limits because they have latched onto one news story or article without considering the source and how one-sided it might be. This isn’t a new challenge brought on by ChatGPT.
The solution: Critical thinking has been an important skill for some time, and now it is essential. We need to expose students to the questions they should be asking: What is the difference between information being biased and being wrong? How can we be aware of our own bias and where it comes into play? What are the dangers of accurate information and data being taken out of context or misrepresented? How can we fact-check any source before we accept it as truth?
Challenge: Potential for replacing humans
The problem: We’ve all seen the warnings about AI and its potential for “replacing” humans – everything from the predicted loss of jobs to the potential end of humankind as we know it.
Is this a new problem? Maybe, but the solution is what we’ve needed for some time.
The solution: Lean into what makes humans uniquely human. I have often talked about (and written about) the need for education to emphasize dispositions and mindsets over the learning of content, and now that’s more important than ever. We need to consider what makes humans human and how that differentiates us from computers -- and do that better. Fostering creativity, empathy, and kindness will set our students up for success. While AI has the potential to replace jobs and entire industries, there are areas it can’t touch, and that needs to be our focus.
I believe the rise of AI can and should be the catalyst we have needed for some time to dramatically shift education, and examining the major challenges inherent in the technology helps to highlight exactly why. None of the challenges that I have included are new; they have all existed in some form or another for quite some time – and, yet, it’s AI that is bringing them to the forefront now.
Good.
All of these challenges need to be addressed, and if the threat of machines taking over the world is what it takes to get us to actually think about equity or focus on empathy, great. But we need to actually do it.
The solution here is not to block ChatGPT because it might provide false information or to stop developing technology because it is too human.
We have an incredible opportunity (and obligation) to take control and make this technology work for us in a way that makes us all better, and that starts with embracing the challenges inherent in generative AI for what they are – a push for something different.