LLMs Are Not Even Stupid, But Humans Sure Are
Feb 12, 2026
LLMs are the latest mass-appeal drug for the "shortcut" defect in human brains.
Humans, in essence, are lazy, selfish, and easily bored. I've known this for a few years now.
Everyone is hypersensitive to "shortcuts" (of course; see, for example, Thinking, Fast and Slow by Kahneman, Predictably Irrational: The Hidden Forces That Shape Our Decisions by Ariely, or numerous other books from the last thirty years of cognitive science research).
Business executives aren't the sharpest tools in the shed, are already super tuned into "vibing" shit, and are extremely motivated to find and use "shortcuts" (e.g. firing a bunch of workers and hoping some 10x bigger, 50% cheaper team in India really can pull off the miracles that the slick-talking fellow viber executive from there told them about).
Workers are also hypersensitive to "shortcuts" because most work is bullshit, and most of what makes it bullshit is the layers of bullshit that the viber executives stupidly impose while chasing shortcuts and believing the snake-oil consultants.
The average Jane or Joe in society is also hypersensitive to "shortcuts" because they want more than they can afford and are probably working those miserable, bullshit jobs created by those miserable shortcut-chasing viber executives.
In rides shortcut-extraordinaire, viber executive snake-oil salesman Sam Altman with a shockingly good bullshit machine promising everyone shortcuts for anything their hearts desire: A paper, a book, a program, a girlfriend, pornography, a company... A veritable genie-in-a-bottle of unlimited shortcuts.
But the contents don't match what is written on the tin. Never mind, shortcut Altman says, we just need to scale it.
They scale it, and it still can't, doesn't, won't, and could never actually provide many shortcuts.
Now people are looking for shortcuts to the shortcuts and justifying shortcuts (e.g. vibe-coded log colorizers) that aren't actually shortcuts for anything. Paper cuts, maybe; shortcuts, nope.
How. Fucking. Stupid.
Those Poor Executives
I don't spare many fools above, but am I being too harsh on those poor executives? I don't think so, but see for yourself what Simon Wardley wrote back in 2016, The robots are coming ... for whom the bell tolls:
From my experience, most CEOs tend to demonstrate poor situational awareness, an inability to decipher doctrine from context specific play and the boardrooms are more akin to alchemy, gut feel and whatever is popular in the HBR than to chess playing masters. Various studies have questioned the impacts of CEOs e.g. Markus Fitza's study demonstrated that the CEO effect on firm performance varies little from chance.
Not Even Wrong Stupid
It is said that Wolfgang Pauli once remarked "Das ist nicht einmal falsch" ("That is not even wrong") in reaction to some work by a physics researcher.
It's not fair to call LLMs stupid. They're not. Being stupid would require something that LLMs do not have and never will: anything remotely like intelligence.
LLMs are correlation machines. When A and B happen to occur together, the model records that, and later, given an A (or B), it can produce an attendant B (or A).
It's morning, I take a shit, the sun rises.
"It's not that simple!" you protest. Yes, it's that simple.
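If you want the flavor of that in code, here's a minimal, hypothetical sketch: a toy next-word counter with a made-up corpus, nothing remotely like a real transformer, but the same basic move of recording what co-occurs and regurgitating it.

    # A toy "correlation machine": record which word follows which, then
    # regurgitate continuations in proportion to those counts. Deliberately
    # crude sketch with a made-up corpus, not an actual LLM.
    from collections import Counter, defaultdict
    import random

    corpus = ("it is morning i take a shit the sun rises "
              "it is morning i drink coffee the sun rises").split()

    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1  # record: B was seen after A

    def continue_from(word, n=6):
        out = [word]
        for _ in range(n):
            nxt = follows.get(out[-1])
            if not nxt:
                break
            words, counts = zip(*nxt.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(continue_from("morning"))  # plausible-sounding, with no notion of truth or cause

Scale the counts up by a few trillion and wrap a chat window around it, and the basic move is still the same: correlations in, correlations out.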
"But I saw it do this thing with some code (which I otherwise don't know much about or I'd have done it myself) and I can assure you, it did a thing."
Yes, I don't doubt you. Again, when there are correlations, there are correlations.
Just like you cannot get an ought from an is, you cannot get causation from correlation.
Yes, all things with a causal relationship are correlated; the converse is radically untrue in any general sense.
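For an entirely made-up illustration of that asymmetry: take any two unrelated quantities that both happen to trend in the same direction and they will correlate almost perfectly, with no causation anywhere in sight.

    # Two made-up, unrelated series that both trend upward over time.
    # They correlate almost perfectly; neither causes the other.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2020)
    ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 2, years.size)
    fiber_subscribers = 10 + 3 * (years - 2000) + rng.normal(0, 1, years.size)

    r = np.corrcoef(ice_cream_sales, fiber_subscribers)[0, 1]
    print(f"correlation: {r:.2f}")  # close to 1.0, yet one has nothing to do with the other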
Combine this with the fact that you cannot predict new knowledge, and LLMs become pretty useless.
That's right, I said it, useless. I didn't say they're always wrong, and I didn't say you won't find humans that like to claim they are important or that they personally get some value from them if they twiddle their knobs just right, but I did say they are generally useless.
A Short, Dismal Tour of LLMs
The following are just a few mundane examples of how LLMs are not even stupid.
The fact is, LLMs are never right or wrong. Never, ever. There is nothing in the model that can distinguish whether something is an actual fact in this universe or some wild hallucination.
For an LLM, it is all the same.
It is only a human (or something else in a feedback loop) that can trigger an LLM to generate more of the same not-right-not-wrong, not-true-not-false bullshit.
The LLM grifters, though, are very astute about the flaws in human brains, the ones that are almost impossible to avoid. They lather the anthropomorphic bullshit on so thick, the "Thinking", the "I need to look up yada yada...", all the ornamentation around the presentation of the LLM's bullshit, that people are constantly fooled into believing that when the LLM produces something that checks out, it must be "working", and when it doesn't, it must have "made an error."
Absolutely false.
The few examples below represent a tiny fraction of similar ones from the past two-plus years of using an LLM almost every day. Now, if that didn't catch your attention, are you even reading these words? lol.
Yep, that's right, I've used an LLM frequently because, like any intelligent person trying to maximize my impact, I'd use any useful tool whose downside didn't outweigh its upside.
And that's why I state so adamantly that LLMs are useless.
The first principle is that you must not fool yourself — and you are the easiest person to fool. —Richard P Feynman
The Case of Not Even Misunderstanding
The build2 system is brilliant, and I swear it was built by aliens, because nothing humans previously created in that dismal hinterland of build systems (yep, you too, Go and Rust) holds a candle to it. It has extensive, detailed, voluminous documentation and a learning curve that would make Mt. Everest feel like a little hill.
At least, that's my experience of it. You probably have your own tool or whatever that presented a similar experience.
This is exactly the situation where I wish I had an intelligent assistant, but LLMs are not that thing, not by a million miles.
I'd love something to help me with the precise problem I'm facing without having to spend a couple of weeks learning the ins and outs of a tool, reading the docs, reading the code, and trying a bunch of prototypes. I'd love something that in 20 seconds could munch on some words and produce something useful.
Wouldn't you?
LLMs Aren't Even Stupid 0
LLMs Aren't Even Stupid 1
LLMs Aren't Even Stupid 2
The Case of Windows That Aren't
Humans make a lot of mistakes, no doubt about it. But if you were in a conversation with a friend and they suddenly and randomly started claiming that five minutes ago you were telling them a tale of when you visited China in 1487 blah blah, you'd quite quickly and justifiably question their sanity.
But LLMs cannot be sane or insane or anything else having anything to do with semantics (i.e. the meaning of things). It's all just the same bullshit.
(If you've ever read one of those stupid articles in WaPo or WSJ about LLMs where the author asserts some asinine thing like, "We teach our kids to be respectful but being a shithead to a liebot gets better results some study may have shown..." you can forgive me for trying it out.)
LLMs Aren't Even Stupid 3
LLMs Aren't Even Stupid 4
LLMs Aren't Even Stupid 5
The Case of The (Not) Wrong Option
Yep, another example of where it would be super helpful to have an intelligent assistant that actually understood the meaning of "operator" and the semantics of the various ones because those things are hard. (Are you a programmer? Quick, what's the precedence of +, -, /, * in Smalltalk, Ruby, Python, C and Haskell?)
LLMs cannot do this.
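(For reference, and this is exactly why it's hard: the answer isn't even uniform. Ruby, Python, C, and Haskell all bind * and / tighter than + and -, while Smalltalk gives every binary operator the same precedence and evaluates left to right, so the very same expression means different things. A quick sketch in Python:)

    # In Python (likewise Ruby, C, and Haskell), * binds tighter than +:
    print(2 + 3 * 4)    # 14, parsed as 2 + (3 * 4)
    # Smalltalk evaluates the same expression left to right, because all
    # binary messages share one precedence level, so 2 + 3 * 4 is 20 there:
    print((2 + 3) * 4)  # 20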
LLMs Aren't Even Stupid 6
Shockingly (I know, right?!), the issue had nothing to do with this perfectly correct operator expression. Oh well...
The Case of The Summary That Wasn't
One of the supposed capabilities of LLMs is the ability to "summarize" text. Everyone is offering that. In fact, you can't avoid it. I can't glance at my message list and read the first few words an actual human sent me because Apple thinks it's better for me to read some dumbass summary a liebot created.
For the millionth time, LLMs must hallucinate to function. How often do you check their work?
LLMs Don't 'Summarize' A
LLMs Don't 'Summarize' B
You read those words and your brain cannot help but conjure something in there that must understand, something whose apology must surely be sincere...
Bullshit. All of it. Every word of it. The same bullshit as every other thing an LLM produces.
If only your parents, when giving you that beloved stuffed animal, had informed you that the very thing that makes you feel those endearing feelings for that wad of fabric and stuffing would be used to trick and steal from you as an adult...
It's the same as looking at this image and "seeing" an old or young woman. There's no woman there, but you cannot help it. Once your brain has seen it, try to un-see it. Good luck!
Old Woman, Young Woman Bi-stable Image
LLMs trick you in just the same way that stuffed animals and carefully arranged dots on a screen or paper can. But no matter how often someone says, "Don't believe everything you see or hear," we can't help ourselves.
If it lies like a human, and it strokes our ego like a human, and it tickles our fantasies like a human, why can't I just believe it's a human-like intelligence? 😭 😭 😭
The Banality of Stupidity
As I stated above, these are mind-numbingly mundane, boring cases of LLMs doing exactly what they are intended to do: randomly regurgitate some correlations that they once recorded.
They do not, cannot, will not ever be able to tell the difference between something that is factual and something that is not.
I'm sure you've got your own favorite counterexamples. I'm sure you've got all kinds of reasons why you think they are useful and that I'm wrong to claim they aren't.
I don't really care. But I'll entertain them, pics or it didn't happen: LLM One Shots Club.