Artificial Intelligence as a Catalyst for Critical Thinking in Schools
- Sarah Levy
- Sep 17
- 12 min read

Introduction
Since the launch of ChatGPT in late 2022, conversations about artificial intelligence in education have swung between excitement and alarm. In classrooms and living rooms alike, students, teachers, and parents are grappling with what this technology means for learning. On one hand, critics warn that students using AI to write essays or solve problems risk bypassing essential cognitive work. A widely cited MIT Media Lab study, for example, found that students who relied on ChatGPT to write essays showed lower brain engagement, less originality, and weaker memory recall than those who worked independently or even with Google Search. Jill Barshay of The Hechinger Report echoed these concerns, noting that students often struggled to explain or replicate work completed with AI assistance.
On the other hand, early adopters see how these tools, when used with intention, can spark creativity, accelerate feedback, and expand access to personalized learning. At one experimental school in Austin, Texas, for example, AI tutors have been used to streamline the academic day into just two hours, freeing afternoons for outdoor learning and entrepreneurship; school leaders there claim that students are learning at two to six times the rate of traditional classrooms. Similarly, research highlighted by BigThink suggests that AI can serve as a “thinking partner” when students are encouraged to question and iterate on its responses, making learning more dialogic and reflective.
At the center of this debate is a single, vital question: does AI weaken or strengthen students’ ability to think critically? Critical thinking (the disciplined process of questioning assumptions, weighing evidence, and reflecting on one’s own reasoning) remains one of the most essential skills for life and work in the 21st century. We argue that AI, far from replacing these skills, can serve as a catalyst for them when guided by thoughtful teaching practices.
Several organizations are beginning to chart what this intentional use looks like. The Association to Advance Collegiate Schools of Business (AACSB), representing more than 1,900 institutions worldwide, emphasizes that the most effective classroom uses of AI are those that require students to interrogate outputs, reflect on their own assumptions, and iterate through multiple rounds of prompting. Meanwhile, science educators writing for the National Science Teaching Association caution that without structured scaffolds, students risk “outsourcing” their critical thinking to AI and losing valuable opportunities to wrestle with complexity.
As co-authors, we bring complementary perspectives to this conversation. Ethan, a current high school student, Coursera-certified AI practitioner, and founder of an edtech consultancy working with global schools and Fortune 500 companies, has spent years studying how students are penalized not for misusing AI, but for lacking guidance. Sarah is a scholar-practitioner with over two decades of experience in education as a classroom teacher, school leader, and parent. She holds both a master’s and a doctorate in education, along with certificates in Responsible AI usage and Elements of AI. Together, we draw on both research and lived experience to argue that the outcomes of AI use hinge not on the technology itself, but on how it is implemented.
This paper explores how AI can foster, rather than diminish, critical thinking. We do so by examining three guiding questions:
1. How does guided use of AI (e.g., teacher-supported prompting) impact student engagement and learning compared to unguided use?
2. Can generative AI tools be structured to support, not replace, critical thinking and problem-solving?
3. What roles do scaffolding and personalized learning play in mitigating the risks of over-reliance on AI?
By exploring these questions, we aim to provide educators, school leaders, and parents with practical insights and strategies for aligning AI tools with sound learning science.
Guided AI Use Enhances Engagement and Learning
Since the release of generative AI tools like ChatGPT, educators have observed a wide range of student behaviors, from heightened creativity and curiosity to passive, shortcut-seeking habits. The determining factor is not the technology itself, but how it is used. When students interact with AI in unstructured, unguided ways, they often bypass critical thinking processes. In contrast, when teachers scaffold the experience – by modeling how to prompt, encouraging students to question outputs, and requiring them to refine AI-generated responses – the technology becomes a partner in deeper learning rather than a replacement for thought.
The risks of unguided use are well documented. Researchers at the MIT Media Lab compared students writing essays under three conditions: independently, with Google Search, and with ChatGPT. The group relying on ChatGPT showed the lowest cognitive engagement, produced the least original writing, and struggled to recall their own work afterward. Similarly, Jill Barshay described how college students who used AI to complete assignments often could not explain or replicate their results later, raising concerns about “outsourcing” the struggle that builds cognitive stamina.
Yet evidence also shows that guided use can flip this dynamic. A BigThink article noted that when students are prompted to treat AI as a “sparring partner” rather than a crutch, they engage in more iterative, reflective learning. This principle echoes what cognitive scientists call “desirable difficulty”: the idea that some struggle is not only unavoidable but essential for deep learning. When AI is framed as a tool to challenge, provoke, and extend student thinking, it can heighten engagement rather than dampen it.
Organizations are beginning to codify what this looks like in practice. The AACSB has emphasized that the most effective classroom use of generative AI requires students to interrogate AI outputs, reflect on their own assumptions, and revise their work across multiple prompt-response cycles. Science educators writing for the National Science Teaching Association (NSTA) echo this caution, warning that students must be taught not just how to “get an answer” from AI, but how to critically evaluate whether the answer makes sense and how it connects to disciplinary thinking.
In short, the distinction between passive use and guided engagement will shape the intellectual development of today’s students. Left unchecked, AI can encourage cognitive offloading and shallow thinking. But when implemented with structured pedagogy, intentional prompts, and teacher support, AI has the potential to strengthen, not weaken, the very critical thinking skills it is so often accused of eroding.
Generative AI Tools Supporting Critical Thinking
The concern that generative AI will replace thinking altogether is not unfounded. Left unguided, these tools can produce ready-made answers that tempt students to disengage from the messy work of reasoning, questioning, and problem-solving. As The New Yorker warned, there is a growing risk that AI could homogenize thought, producing polished but predictable responses that mask complexity rather than illuminate it. The challenge, then, is to design learning experiences where AI is structured to extend thinking rather than substitute for it.
One promising approach is to position AI as a thinking partner rather than a final answer engine. For example, students might be asked to generate an AI response, then critique it for accuracy, logic, and bias. Teachers can push students further by requiring them to revise the AI’s work, cite alternative evidence, or defend why their approach differs. This kind of dialogic use draws on what BigThink described as AI’s potential to act as a “sparring partner,” a role that encourages iteration, argumentation, and reflection.
The principle of desirable difficulty is critical here. As discussed in Time’s reporting on AI and learning, not all uses of AI are equally effective. Students who rely on it to bypass challenges may gain speed but lose depth. However, when AI is used to make the work productively harder – by surfacing multiple perspectives, offering counterarguments, or requiring justification – students actually strengthen their cognitive stamina. In this sense, the “struggle-first” method described in recent podcasts and studies is instructive: students must first wrestle with a problem, then use AI to test, refine, and extend their thinking.
Case studies point to how this structured use can work in practice. At Alpha School in Austin, Texas, for example, AI tutors streamline basic instruction into two hours a day, but the school deliberately frees afternoons for wilderness activities, entrepreneurship, and collaborative projects. The result is not simply faster content delivery, but a reallocation of time toward the kinds of open-ended, higher-order tasks where critical thinking thrives. Rather than eliminating struggle, AI is used to clear space for more meaningful intellectual challenges.
Educators, therefore, play a central role in shaping AI use. Without explicit structures, students will default to using AI as a shortcut. With thoughtful design, however, generative AI can become a catalyst for deeper inquiry: a source of alternative perspectives, a platform for testing reasoning, and a partner in the iterative process of problem-solving. The difference lies not in the technology, but in how educators frame its purpose and integrate it into the fabric of learning.
The Roles of Scaffolding and Personalized Learning
If critical thinking is to remain at the heart of education in the age of AI, students need more than access to tools; they need the right supports around how to use them. Two of the most effective approaches, long recognized in learning science, are scaffolding and personalized learning.
Scaffolding refers to the instructional supports that teachers provide to help students engage productively with challenging material. In the context of AI, scaffolding might mean modeling how to generate effective prompts, showing students how to evaluate the reliability of AI responses, or asking guiding questions that require students to connect AI outputs to their prior knowledge. As the National Science Teaching Association has cautioned, “students cannot be left to assume that AI tools think for them.” Teachers must explicitly demonstrate how to move from a generated answer toward deeper questioning, analysis, and synthesis.
Scaffolding also addresses the risk of students’ outsourcing too much of their thinking. By structuring tasks so that AI is one step in a larger cognitive process (such as brainstorming, critiquing, or revising), teachers can ensure that students remain the active thinkers. For example, an assignment might require students to use AI to produce three different explanations of a scientific concept, then choose the most accurate one and justify their reasoning. In this design, the thinking does not end with the AI output; it begins there.
Personalized learning is the second key element. AI tools are uniquely capable of providing feedback and tailoring explanations to individual learners. As noted in a recent Springer article on AI in education, adaptive AI tutors can accelerate mastery by adjusting the pace, difficulty, or mode of explanation to fit each student’s needs. But this personalization must be framed carefully: if the AI provides too much support, it risks over-scaffolding, leaving students dependent rather than independent. When designed well, however, personalization can give students “just enough” guidance to stretch their thinking while still requiring them to grapple with new ideas.
Organizations like the AACSB have argued that personalization through AI should not be confused with passivity. Instead, it can free up teachers to engage students in higher-order thinking tasks by reducing the time spent on routine review. Teachers can then focus on facilitating discussions, mentoring students in inquiry-based projects, and fostering reflection – spaces where critical thinking is most likely to develop.
In short, scaffolding and personalization are not optional add-ons; they are the mechanisms that make AI-enhanced learning both rigorous and equitable. Without them, AI risks becoming a shortcut to shallow answers. With them, it can become a bridge toward deeper engagement, ensuring that every student (regardless of their starting point) has the opportunity to strengthen their critical thinking skills.
Takeaways and Next Steps
As educators, school leaders, and parents consider how to navigate AI in education, the path forward lies not in banning or ignoring these tools, but in teaching students how to use them wisely. The following recommendations offer concrete steps for each group.
For Students
Use AI as a sparring partner, not a shortcut. Generate ideas with AI, but then critique, revise, and defend your own thinking.
Practice effective prompt engineering. Focus on crafting thoughtful prompts and follow-ups, since the quality of AI output depends on the quality of your inquiry.
Explore AI’s range. Go beyond information retrieval: use it to generate practice tests, create visual representations, or rehearse presentations.
Compare and contrast. Don’t accept the first answer. Seek out multiple perspectives (your own, your peers’, and the AI’s) and weigh them against each other.
Know AI’s limits. It can hallucinate, show bias, and act like a “yes-man,” so use it with caution.
Reflect on the process. Keep a journal or portfolio where you explain how you used AI, what you learned, and where you disagreed with its responses.
Collaborate with peers. Use AI together for practice tests, note synthesis, or idea generation.
For Schools and Educators
Integrate AI intentionally into the curriculum. Structure activities where AI is one step in a larger process of analysis, synthesis, or problem-solving, rather than the end product.
Model critical evaluation. Demonstrate in class how to question AI outputs, test their accuracy, and identify bias.
Scaffold student learning. Teach prompting strategies, require students to show their reasoning alongside AI outputs, and design assignments that demand revision and reflection.
Balance efficiency with struggle. Use AI to streamline routine tasks (like drafting outlines or providing examples) so that students can spend more time on higher-order thinking.
Invest in professional learning. Ensure teachers have training not just in how AI works, but in how to align its use with sound pedagogy and assessment practices.
For Parents
Encourage curiosity, not shortcuts. Ask your child to explain how they used AI on an assignment, what they agreed or disagreed with, and what they learned in the process.
Try AI yourself. Experiment with the tools so you can have informed, active conversations with your child.
Set boundaries at home. Reinforce that AI should be a tool for brainstorming, testing ideas, or practicing skills – not for bypassing the work entirely.
Talk about digital literacy. Discuss how AI, like any online source, can be biased or inaccurate, and why it’s important to verify information.
Celebrate thinking, not just answers. Praise effort, reasoning, and reflection more than polished outputs, especially when those outputs are AI-assisted.
Conclusion
Artificial intelligence is not going away, nor should it. For today’s students, learning to use AI wisely is as essential as learning to write a strong thesis statement or collaborate in a team environment. The question is not whether AI will destroy the school system, but how we will adapt education so that it strengthens, rather than weakens, the skills vital to students’ lifelong success.
The evidence and examples presented here point to a clear answer: outcomes hinge on guidance, scaffolding, and purpose. Unguided use encourages passivity; guided use fosters engagement. Poorly structured assignments risk cognitive offloading; well-designed ones spark reflection and problem-solving. Over-reliance can flatten curiosity; intentional scaffolding and personalization can deepen it.
For students, the call is to engage AI as a partner in thinking, not a substitute for it. For educators and schools, the task is to design learning experiences that require questioning, iteration, and analysis. For parents, the role is to encourage curiosity and ensure that AI use at home builds – not bypasses – intellectual effort.
When used thoughtfully, AI can do more than answer questions; it can provoke them. It can challenge students to wrestle with ideas, evaluate evidence, and see multiple perspectives. In other words, it can help cultivate the very critical thinking skills that will remain essential in a world where information is abundant but wisdom is scarce.
The future of learning is not about choosing between human thinking and artificial intelligence. It is about building an education system where the two work together, preparing students for a future where AI is everywhere, so that they grow not only more efficient but also more thoughtful, creative, and capable of navigating complexity. That is the promise of AI in education, and it is within our reach.

Ethan Mei is a student based in the Washington, D.C. metropolitan area with a strong academic foundation in Artificial Intelligence. He has completed advanced coursework, including Stanford University’s reinforcement learning course on Coursera, and continues to deepen his expertise in AI fundamentals.
Over the past two years, he has collaborated with schools worldwide to explore and improve the integration of AI technologies into educational systems, focusing on innovative approaches to enhance learning environments.

Dr. Sarah Rubinson Levy empowers excellence and innovation in education, partnering with clients to achieve the level of quality and distinction they desire for their schools, organizations, and students. Her approach centers on the belief that doing what is best for students should always be our driving force.
With over 20 years of experience, Sarah holds a master’s and a doctorate as well as certificates in Day School Education, Jewish Leadership Studies, Jewish Educational Leadership, and Executive Coaching. Her focus on AI includes certificates in Elements of AI and Responsible AI, and she has delivered a TEDx talk on the subject. Sarah helps educational leaders look toward the future, elevating their in-house teams and resources to achieve goals faster and more easily through a process that focuses on clarity, strategy, and execution.
References
Association to Advance Collegiate Schools of Business (AACSB). (2024). How to evaluate critical thinking in the age of AI. Retrieved from https://www.aacsb.edu/insights/articles/2024/02/how-to-evaluate-critical-thinking-in-the-age-of-ai
Barshay, J. (2024, February 26). Proof Points: Students using AI struggle to recall and explain their work. The Hechinger Report. Retrieved from https://hechingerreport.org/proof-points-offload-critical-thinking-ai/
BigThink. (2024, March). Artificial intelligence and critical thinking: How AI can become a thinking partner. Retrieved from https://bigthink.com/thinking/artificial-intelligence-critical-thinking/
Carey, B. (2024, March 25). Think or not to think? The impact of AI on critical thinking skills. National Science Teaching Association Blog. Retrieved from https://www.nsta.org/blog/think-or-not-think-impact-ai-critical-thinking-skills
Kamenetz, A. (2024, April 18). AI in schools: Will ChatGPT destroy or revolutionize education? Time. Retrieved from https://time.com/7295195/ai-chatgpt-google-learning-school/
MIT Media Lab. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task (Preprint). Retrieved from https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
Perelman, D. (2024, July 1). AI is homogenizing our thoughts. The New Yorker. Retrieved from https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts
SpringerOpen. (2024). AI in education: Impacts on personalized learning and student outcomes. Smart Learning Environments, 11(16). https://slejournal.springeropen.com/articles/10.1186/s40561-024-00316-7
Mollick, E. (2025). Against “brain damage”: Cognitive offloading and AI. One Useful Thing. Retrieved from https://www.oneusefulthing.org/p/against-brain-damage