Once Upon a Glitch
written by jaron summers (c) 2025
We’re told that AI has no consciousness. Maybe that’s true. Maybe it’s just really good at pretending — like a cat who knocks your stuff off the table and then acts like you’re the problem¹.
But here’s the wild part: AI appears to think and reason — but it’s more like a talented parrot with a billion books inside its head and no idea what a lake is.
AI absorbs everything it can get its digital hands on, and when you give it a prompt, it guesses what comes next. Over and over. Really fast. Until it builds a paragraph, or a story, or a love letter to your houseplants².
This means it can write a novel that a huge number of people might love — not because it’s conscious or understands what it’s saying, but because it’s really, really good at predicting word patterns that feel “just right” to our brains. Patterns we’ve absorbed over years without even realizing it — like why “Once upon a time” just feels like the start of a story, or why “He turned around slowly…” makes us expect something dramatic³.
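If you want to see that guess-the-next-word loop in the flesh, here’s a toy sketch. It uses the small open GPT-2 model through the Hugging Face transformers library purely as a stand-in; the prompt and the 20-token limit are arbitrary choices for illustration, not anything from this essay.

# A toy version of the "guess the next word, over and over, really fast" loop.
# Assumes the Hugging Face transformers library and the small open GPT-2 model;
# the prompt is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Once upon a time", return_tensors="pt").input_ids

for _ in range(20):                       # build the text one guessed word-piece at a time
    logits = model(input_ids).logits      # a score for every possible next token
    next_id = logits[0, -1].argmax()      # greedy choice: the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))     # the paragraph (or love letter) so far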
Sure, the AI might toss in some fresh, shiny new ideas — but those could just be the result of its unpredictable word-wizardry.
It’s not thinking outside the box; it’s just predicting what’s near the box… and sometimes accidentally sets the box on fire in a fun way.
Take this sentence: “Jack ran to the cool liquid of the lake.” Solid. Feels like something you’d read in a novel where Jack is probably shirtless and emotionally complicated.
But let’s say the AI is feeling spicy. It sees the word “cool” and thinks, “Cool… cool… cool cat?” Now we’ve got: “Jack ran to the cool cat in the lake.”
Wait, what?
Exactly. Suddenly Jack’s sprinting toward a chilled feline just floating there like some mystical aquatic guru.
And now your story has gone off the rails into magical realism or absurd comedy — and you kinda love it.
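That "feeling spicy" moment has a boring name: sampling temperature. Here’s a tiny, self-contained sketch; the word probabilities after "Jack ran to the cool ..." are completely made up for this example, but the flattening trick is roughly how real language models dial the weirdness up or down.

# Invented probabilities for the word after "Jack ran to the cool ..."
# (the numbers are made up for illustration, not taken from any real model).
import random

next_word_probs = {"liquid": 0.55, "water": 0.30, "breeze": 0.10, "cat": 0.05}

def sample(probs, temperature=1.0):
    # Temperature below 1 sharpens the distribution (safe, predictable prose);
    # temperature above 1 flattens it, so a long shot like "cat" shows up more often.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

print(sample(next_word_probs, temperature=0.2))  # almost always "liquid"
print(sample(next_word_probs, temperature=2.0))  # every so often, a chilled feline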
So, maybe AI doesn’t think like we do. But by predicting words based on all the patterns it’s learned, it sometimes lands on brilliance — or chaos.
Or both.
Either way, it keeps things interesting.
Footnotes:
1. Chalmers, David J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.
— Chalmers makes the distinction between systems that simulate intelligent behavior and systems that have actual subjective experience — a key issue when we talk about AI and consciousness.
2. Radford, Alec, et al. (2019). Language Models Are Unsupervised Multitask Learners. OpenAI.
— Describes the architecture and behavior of GPT models, which are trained by predicting the next token (word or character) in massive datasets.
3. Goldberg, Adele E. (2006). Constructions at Work: The Nature of Generalization in Language. Oxford University Press.
— Explores how humans subconsciously learn and generalize language patterns, many of which are mirrored (though unconsciously) by language models.