Student Guest Column: Atrophying Academia: Effects of AI Use in College
By Name Withheld at Author’s Request: a rising sophomore at a public research university.
Identity, institutional affiliation, and conflicts of interest have been verified by the editors.
Over the past few years, generative AI has become ubiquitous at every level of academia. When I graduated from high school, I naively imagined that its presence in college would be more muted. After all, while K-12 education is non-negotiable for most people, college is where students spend enormous amounts of money to study specialized disciplines they are, one hopes, genuinely interested in. Why would anyone pay exorbitant sums to spend four years letting a machine engage with the material for them? Yet generative AI seems even more popular in college than it was in high school, and its use is causing an abundance of problems.
There are plenty of reasons why one could be opposed to using generative AI. For instance, it has a negative environmental impact. An article from MIT News describes how generative AI models—the very ones students use to summarize articles, write essays, and search the web—require enormous amounts of electricity to run and water to cool hardware (Zewe 2025). The resulting carbon dioxide emissions and strain on water supplies pose both local and global threats to ecosystems. Another concern involves its sources. Generative AI cannot create anything from nothing. It pulls information and wording from others’ writing in order to provide students with quick, easy, and sometimes completely inaccurate essays—all without citing any sources (Cheney 2023). Environmental harm and theft of intellectual property aside, generative AI directly threatens the effectiveness of collegiate education.
Peers have offered a variety of explanations for why they use generative AI in academia. Some say they only use it for general education assignments. Since these required courses are not within their preferred area of study, they want to spend as little time and effort on them as possible. This is a common sentiment I have heard repeated across multiple classes in STEM and the humanities, often attached to the suggestion, “Just ChatGPT it!” However, I have also witnessed fellow English majors in classes required for our degree using ChatGPT to brainstorm, organize, write, and edit entire essays for them. One contended that whatever they were supposed to write—an essay analyzing one of Shakespeare’s plays, for example—was boring, too difficult, and not very important in the grand scheme of things.
It should be alarming that students of any discipline are so eager and willing to hand off work in their chosen subject for ChatGPT to handle. This eagerness carries worrying implications for the future of professionals in almost every field. Will doctors be capable of accurately diagnosing and treating patients without the help of AI? Will computer programmers actually know how to read and write code if they used AI to complete their projects throughout college? In 2023, a federal judge sanctioned lawyers who used ChatGPT to write a legal brief riddled with “non-existent court opinions and fake quotes” (Mangan 2023). Perhaps they had also found their chosen discipline too boring, too difficult, and that particular legal brief not very important in the grand scheme of things. Whether driven by apathy or a desire for ease, more and more students are coasting through preparation for their future careers without recognizing the gravity of their indifference.
College teaches content and skills, but using generative AI causes students to struggle with absorbing either. Both general education courses and major-specific courses are intended to broaden students’ knowledge and worldview. Lectures, readings, and projects are meant to help students engage with content so they can understand and remember it with greater ease. When writing an essay on a Shakespearean play, the goal is not to produce a revolutionary interpretation never before seen in the annals of literary analysis. Instead, the goal is to practice reading comprehension, making connections, and articulating one’s ideas fluently. These are skills essential to almost every profession, and they are skills students neglect when they turn instead to platforms like ChatGPT.
Generative AI is not simply hindering education; it is actively harming critical thinking abilities. MIT recently published a study comparing the brain activity of participants writing essays with and without generative AI. Participants who used AI demonstrated the least brain connectivity and “consistently underperformed at neural, linguistic, and behavioral levels” (Kosmyna 2025). In contrast, the groups writing their essays without AI exhibited greater brain connectivity, which correlated with better memory and semantic accuracy. The researchers asserted, “Brain-only group, though under greater cognitive load, demonstrated deeper learning outcomes and stronger identity with their output” (Kosmyna et al. 2025). Time published an article on the same study in which psychiatrist Dr. Zishan Khan corroborated the findings. He maintained that overreliance on generative AI can weaken “neural connections that help you in accessing information, the memory of facts, and the ability to be resilient” (Chow 2025). None of these outcomes bode well in an academic setting. Using AI may secure a passing grade with ease, but it leaves students with fewer skills and less knowledge than if they had relied on their own efforts. In this way, the widespread use of generative AI renders education superficial.
I have heard some people argue that since the AI train has already left the station, they have no reason to refuse to jump aboard. I understand that this new technology will not simply disappear. However, capitulating to its invasion of every aspect of our lives, including education, will have severe consequences. Human intellect is remarkable and has achieved so much. Enlisting ChatGPT and letting one’s brain atrophy during the very years when that intellect is meant to be nurtured and cultivated sets a dangerous precedent of stagnation and a loss of humanity in the world of academia and beyond.
Works Cited
Cheney, Rebecca. “Generative AI Has an Intellectual Property Problem.” Harvard Business Review, 7 April 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem. Accessed 9 July 2025.
Chow, Andrew R. “ChatGPT's Impact On Our Brains According to an MIT Study.” Time, 23 June 2025, https://time.com/7295195/ai-chatgpt-google-learning-school/. Accessed 9 July 2025.
Kosmyna, Nataliya. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” MIT Media Lab, 10 June 2025, https://www.media.mit.edu/publications/your-brain-on-chatgpt/. Accessed 10 July 2025.
Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” arXiv, 2025, doi:10.48550/arXiv.2506.08872.
Mangan, Dan. “AI: Judge sanctions lawyers over ChatGPT legal brief.” CNBC, 22 June 2023, https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html. Accessed 9 July 2025.
Zewe, Adam. “Explained: Generative AI's environmental impact.” MIT News, 17 January 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117. Accessed 9 July 2025.
The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of Academic Observer or its affiliates.