
Large language models are demons. But not like that

Popular forms of artificial intelligence present themselves as teachers following a genuine train of thought, but that couldn’t be further from the truth.

ChatGPT is a demon. Yes, a demon. Now, when I call a large language model (LLM) demonic, I don’t mean to conjure up fallen angels or other biblical imagery. However, the way artificial intelligence and LLMs present themselves is inherently evil.

LLMs hide behind a facade of thought. They appear to genuinely grapple with the information or task you feed them, formulating a line of reasoning to reach their responses.

The theatrics of an LLM pretending to reason over your input aren’t reasoning; they’re demonic. The ancient Greeks understood demons as “any invisible being using reason, as if knowing.” Anton Strezhnev, a University of Wisconsin-Madison political science professor, used this definition to illustrate the reality of LLMs’ internal models. Wouldn’t Plato be surprised to know that man created demons?

According to researchers at Google DeepMind, the only “reasoning” these models actually perform is compressing your input to predict which words to respond with. That process is not a form of thinking. It is simply a statistical next-word machine, the same internal model underlying most AI chatbots.
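To make that concrete, here is a minimal sketch of next-token prediction in Python, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. It illustrates the general technique behind these chatbots, not ChatGPT’s private internals.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available model (illustrative of the technique,
# not of ChatGPT's internals).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One forward pass produces a score for every token in the vocabulary.
    logits = model(**inputs).logits

# The model's entire "thought": pick the most likely next token and append it.
next_id = logits[0, -1].argmax()
print(tokenizer.decode(next_id))  # typically " Paris"
```

Run that loop over and over and you get paragraphs of fluent text; at no point does anything resembling deliberation occur.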

As part of that compression, LLMs break your input into tokens and convert each one into a vector, a list of numbers called an embedding. These vectors are learned during training and fixed thereafter, with words that appear in similar contexts ending up with similar vectors. The model’s transformer then uses those relationships to predict the next token: the next word it will churn out and append to your original prompt.
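As a toy illustration of those vectors, here is a sketch using three made-up 3-dimensional embeddings. The numbers are invented purely for demonstration; real models learn vectors with hundreds or thousands of dimensions during training.

```python
import numpy as np

# Invented toy embeddings; real ones are learned during training.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two vectors point the same way (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words sit close together in the vector space; unrelated ones don't.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.30
```

Closeness in this space is the model’s only notion of “meaning.”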

When you contemplate the beauty of a sunset or the eloquence of Shakespeare, do you instinctively convert that experience into vectors and measure which stored thought sits closest, just to predict the next thought you’ll have as you look at that sunset? No, you don’t. Because you are intelligent. You actually think and reason. Artificial intelligence… is dumb.

Now, none of this is to say LLMs are a complete scourge of malice and misguidance. They have real-world applicability if used correctly. From serving as a search engine outside the classroom to correcting grammar, these uses of AI can benefit students. But too often, AI has become the fallback when coursework loses students’ attention, a quick fix for mediating and parsing assigned readings or lectures. That mediation not only dilutes students’ understanding of the subject matter, it also inhibits their ability to reason with it, to form their own opinions and to pull unique perspectives from what they’ve read.

Political scientist and Georgetown professor Paul Musgrave highlighted in a Substack post that this lack of attention from students has produced a form of non-literacy. Students make little effort to engage with dense texts, movies are designed for distracted audiences and TikTok has compressed attention spans. These cognitive deficits have pushed students to pawn off their engagement with scholarship onto AI, at the cost of their own reasoning.

AI’s internal model has become our own, and we must rectify this before we cease to grapple with new information entirely. Students need to start making real attempts at discerning difficult texts and concepts, formulating logical arguments, asking questions and paying attention. The easy way out through AI and LLMs is a roadblock to developing the research and reasoning skills required in professional careers.
