
In recent years, artificial intelligence (AI) use in schools has become a controversial topic. As students become more dependent on AI, the question arises: Is AI enhancing education, or replacing learning opportunities for students? Recent advancements in AI have encouraged students to use it to their advantage. Students are actively using AI tools such as ChatGPT, Google Gemini, and even the AI features embedded in Quizlet to help with their schoolwork. While some students use AI to improve their work, many are beginning to abuse these tools, undermining their own learning and raising concerns among educators. Although AI can be used to create personalized learning for students, its use in schools must be limited, as it can enable academic dishonesty, discourage students from exercising critical thinking skills, and contribute to bullying and harassment in the educational setting.
One reason AI use in schools must be carefully monitored is that it often serves as a gateway for students to cheat. More and more students are relying on AI to complete their schoolwork. For example, according to Ilana Hamilton, a writer for Forbes, many students compose essays using AI, which can supply answers in an instant, making it easy to cheat. If students constantly rely on AI to write their school essays, this not only devalues education but also creates a disadvantage for students who do not cheat.
Additionally, when students depend on AI and do not participate in the classroom, they are prevented from truly learning the material, which can lead to poor test performance. As the University of Illinois College of Education explains, “AI can facilitate academic dishonesty and has the potential to interfere significantly with students’ educational outcomes.” If students use AI to cheat, they will not fully understand the concepts being taught. When those concepts appear on tests, students often cannot do the work without relying on AI. The cheating catches up with them and results in lower achievement, reduced confidence, and gaps in knowledge that compound over time. These “gaps in knowledge” carry into future education and even the workforce, where basic comprehension and task completion without assistance are expected.
Classrooms are not only spaces to do work. They are spaces where students can ask questions, share ideas, and collaborate with others. The lack of face-to-face interaction that comes with heavy reliance on AI can make it more difficult for students to collaborate, express creative ideas, and navigate social environments. This limits interpersonal relationships within classrooms and leaves students unprepared for future settings where communication and collaboration are vital skills. AI reliance in classrooms reduces valuable human interaction and weakens key thinking skills.
Students who constantly use AI to do their schoolwork are also limiting their ability to think critically. Students who depend heavily on AI in their education become overly reliant on it. Education Week states, “Some teachers have big concerns that students could use AI tools to cheat […] some teachers wrote in open-ended responses that they’re concerned that allowing student use of AI could make students more ‘lazy’ and ‘lead to further degradation of critical thinking skills.’” AI’s ability to generate answers and prompts in seconds has negatively affected students. When AI generates answers in seconds, it limits students’ ability to think for themselves and develop essential skills like problem-solving and creativity. Furthermore, AI often generates surface-level responses that prevent students from fully understanding the concept.
Furthermore, many students overuse AI unintentionally. They may not recognize when AI use crosses the line into academic dishonesty. For example, writing tools like Grammarly can suggest phrases and words that can be changed in one click. Often, when these tools are used, the resulting passage sounds different from the student’s original work, limiting the authenticity and individuality of students’ writing.
Even more concerning is that AI technology is being used in harmful ways to intensify bullying, harassment, and emotional abuse among students. For example, AI can imitate the voices of specific people, blurring the line between what is real and what is not. In turn, this worsens the problem of bullying and makes it harder for victims to defend themselves. Worse still, an article from the Cyberbullying Research Center reports that AI is now being used to create nude deepfakes for blackmail, which are often sent from anonymous phone numbers. These attacks are designed to humiliate and intimidate the target. The anonymity of the sender and the realism of the images make this kind of harassment especially terrifying. Students are now at greater risk of having their private data accessed and distorted into harmful falsehoods or spread without their consent.
Moreover, AI tools can analyze a person’s social media, online activity, and personal information to create targeted threats. These tools allow bullies to tailor their attacks to a person’s specific vulnerabilities, making the attacks more personal and severe. This kind of violation can lead to emotional distress, social isolation, and mistrust of digital spaces where students are expected to feel safe. AI systems can also become toxic when exposed to repeated hate speech, as seen in the case of Microsoft’s Tay chatbot. Designed to learn from interactions with real people, Tay was targeted with offensive messages on Twitter and quickly began posting harmful and discriminatory content. When AI learns harmful language patterns, it can amplify that hate, turning into a weapon for spreading discrimination and cruelty. This shows how easily AI can be hijacked and used for dangerous purposes if left unchecked. The misuse of AI creates a greater risk of severe bullying in schools.
Although there may be benefits to using AI in school, there are many ways it can be harmful. AI can enable academic dishonesty, limit critical thinking, and promote deeper levels of bullying. Therefore, schools and educators must set clear boundaries on AI use to ensure that it supports, rather than replaces, authentic learning and student growth.
