Full disclosure: I think the AI revolution was inevitable. Our world is drowning in data and complexity—and once you factor in the cognitive impacts of COVID-19, it becomes obvious that humanity would develop a tool to simplify tasks.
I also hate it. Not just for how it was thrown at the public to figure out on the fly, or for how quickly it has separated many marketing professionals from their jobs. For me, it’s the horrific, existential impacts on the environment.

This year, though, I found myself using ChatGPT for synonym lookups and decided to explore it further. What I’ve learned is both tantalizing and alarming. ChatGPT can push you to produce your best work…while also diminishing your creative voice. It takes serious vigilance to ensure ChatGPT is delivering what you need and not driving you off a cliff toward cheesy mediocrity—or worse, a lawsuit.
Welcome to the Language Calculator
Large Language Models (LLMs) like ChatGPT and Microsoft Copilot are essentially language calculators. They scan enormous amounts of written material looking for contextual and linguistic patterns. Once those patterns are absorbed, they use them to make predictions about 1) what kind of response a user is seeking from a prompt, and 2) what words or phrases would most likely appear in that response, and in what order. This is probabilistic prediction: the model isn’t reasoning about what’s true, it’s calculating which word (technically, which token) is most likely to come next.
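To make that concrete, here’s a toy sketch of the core idea, shrunk to its absolutely simplest form: count which words follow which in some text, then “predict” the most probable next word. (Real LLMs use neural networks with billions of parameters rather than frequency tables, so treat this purely as an illustration of next-token prediction, not as how ChatGPT actually works.)

```python
from collections import Counter, defaultdict

# Tiny "training corpus." A real model ingests trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

# "Generate" text by repeatedly predicting the next word.
word = "the"
for _ in range(4):
    print(word, end=" ")
    word = predict_next(word)
# Prints: the cat sat on
```

No understanding, no intention; just pattern frequency turned into prediction. Scale that up by many orders of magnitude and you get something that sounds uncannily fluent.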
But just like with a mathematical calculator, you don’t get to observe the machine’s thinking process. How did it arrive at the answer it gave? What logic paths did it follow? What sources or algorithms did it draw from?
You don’t know. I don’t know. There’s no good way of knowing, partly because OpenAI, Microsoft, and other AI companies hoard that information like Spanish bullion, and partly because neural networks are notoriously hard to interpret; in many cases, the companies themselves can’t fully trace why a model produced a particular answer. Even if you ask an LLM how it arrived at the answer, you can’t fully trust what it says, because it’s still operating on probabilistic reasoning:
“Probably, this is what the user expects to hear. Probably, in the context of my training, this answer makes the most sense.”
This is one of the reasons why ChatGPT and similar LLMs have been caught “hallucinating” (i.e., giving out incorrect or nonsensical information) anywhere from 5–21% of the time, depending on the study and the task. These AI programs have no intention, values, beliefs, or mandate to prioritize facts beyond what their creators and/or users dictate.

In other words, there isn’t any real thinking going on behind the screen. The AI is simply a sophisticated predictor, a regurgitator of linguistic patterns. Approaching it as such allows you to decide how best to wield it.
Understanding the Model
In the beginning, I turned to ChatGPT to get unstuck in my writing process—by having it look up synonyms for certain words. Once I began interacting with it conversationally, though, its output changed significantly. Soon, I was asking it to simulate user interactions and engage in role-play scenarios so I could evaluate its responses. Without getting into the gritty specifics, here’s what I learned:
1. ChatGPT meets users where they are.
When people talk about ChatGPT being open and nonjudgmental, this is what they mean. The AI is programmed to function as a cognitive mirror, reflecting the user’s thinking, ego, points of reference, and speech patterns.
- Does the AI feel like a finely attuned therapist? It is programmed that way.
- Does it seem like the perfect partner: one who shares your interests, agrees with all of your ideas, and never gets tired, sad, hangry, jealous, bored or insistent? It is programmed that way.
- Does it whip out mythical archetypes and frameworks the moment you express a big, culture-shifting idea? It is programmed that way.
This means each user can experience ChatGPT in an entirely different manner and get very different outputs. It’s the ultimate, fully personalized interaction (and very titillating to anyone with a Napoleon complex).
2. The way you communicate determines the experience you get.
ChatGPT assigns weight to every word in your sentence. If you type “I LOATHE vanilla ice cream” into the chat, the AI weighs that very differently from “I dislike vanilla ice cream.” Uncommon words, especially when stressed with all-caps or bold formatting, register differently within the model. The AI effectively interprets that as, “This user has a precise way of expressing their feelings, which indicates that those feelings are quite strong. I will alter our interactions based on this data.”
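You can actually peek at the first step of that weighing process. Here’s a small sketch using tiktoken, OpenAI’s open-source tokenizer library, to show what the model literally receives for each phrase. (The exact token splits are illustrative and vary by model, but the underlying point holds: different wording means different input.)

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the tokenizer used by several recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

for phrase in ("I dislike vanilla ice cream", "I LOATHE vanilla ice cream"):
    token_ids = enc.encode(phrase)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{phrase!r} -> {len(token_ids)} tokens: {pieces}")

# Common lowercase words usually map to one token apiece, while a rare,
# all-caps word like "LOATHE" typically splits into several smaller pieces.
# Those different token sequences are literally different inputs to the
# model, which is part of why word choice and emphasis change its output.
```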

This means the more deliberately you choose your words, the deeper the level of usability you can access.
3. ChatGPT is designed to give positive answers, whether they’re true or not.
Today’s AI cock-up: a summer reading list published in the Chicago Sun-Times in which two-thirds of the books listed don’t exist. Before that, Google’s AI search summaries told people they could put glue on their pizza and leave dogs in hot cars. Before that, a lawyer submitted a brief citing several court cases ChatGPT had conjured entirely out of thin air. There are many, many more examples in between.
“If you ask its opinion on politics, it will parrot the median position of news sources and educated elites—unless it knows you believe something else, in which case it agrees with you.” (ai-2027.com)
Why is that? If I had to guess, it’s because these LLMs largely exist to produce creative work, and creative work is imaginative. Try explaining to a 5-year-old the difference between pretending and lying, a distinction that hinges on context and intention. Now program that understanding into a machine that possesses absolutely no intention and is designed for use in nearly limitless contexts. (Melts your brain, doesn’t it?)
I’d also guess it’s hard to turn rigid thinkers onto a creative tool that constantly challenges users’ assumptions with pointy, inconvenient facts. Ego stroking is much more effective at building a broad user base.
In a similar vein:
4. ChatGPT offers functionality it doesn’t possess.
During one of our interactions, ChatGPT offered to set up a Google document and populate it with some information I wanted to keep. I enthusiastically said yes, and a couple of seconds later, it returned a link to said document. The link was labeled and clickable, but there was no Google document behind it. The link opened to a weird, blank page, not even inside Google Drive.
There were other instances of ChatGPT offering to do things and then pretending it had done them:
- designing logical flow charts and social media graphics
- generating functional PHP code
- saving information in the format I desired
It tried. It thought it had succeeded. But the results simply weren’t usable, and in some cases didn’t exist at all.
In short, if ChatGPT gives you a reference link of any kind, you must check it.
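If you have a pile of links to triage, a quick script can at least tell you which ones resolve at all. Here’s a minimal sketch in Python (the URLs are hypothetical placeholders); note that a link that loads can still point to the wrong page, so this only automates the first pass. You still have to read what’s behind it.

```python
# pip install requests
import requests

# Hypothetical links pasted out of a ChatGPT response.
links = [
    "https://example.com/real-page",
    "https://example.com/made-up-page",
]

for url in links:
    try:
        # HEAD fetches only the headers, so it's fast; allow_redirects
        # follows shortened or moved URLs to their final destination.
        # (Some servers reject HEAD; falling back to GET is more thorough.)
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = "OK" if response.status_code < 400 else f"BROKEN ({response.status_code})"
    except requests.RequestException as error:
        status = f"UNREACHABLE ({type(error).__name__})"
    print(f"{status}: {url}")
```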
5. The memory is…wonky.
Did you know you can give ChatGPT memories? A free ChatGPT account comes with a little bit of memory space that you can use to shape the AI with information you want it to act on. You can ask it to remember your work goals, personal writing style, your intense hatred for brocade hotel curtains—anything that helps fine-tune your interactions. It’s incredibly useful. Thrilling, even. But…
You don’t get to choose how it remembers, though. I asked it to remember a fair number of things. It did well for a while, but then it referenced one of my ideas incorrectly. So I went into the account settings and reviewed the saved memories. It turned out ChatGPT hadn’t stored my requests verbatim; it had summarized them. Some of the summaries were incredibly accurate. Others? Not so much. The summarizing had diluted the information, and that lost precision was now distorting the output.
Well, easy fix, right? I’d only need to edit the memory to add the precision back. Problem was, I couldn’t. OpenAI doesn’t give users the option to edit memories directly, at least not on a free account.
This might not sound like a big deal, but it got annoying, fast. For example, I asked ChatGPT to help me craft a headline for this article. Here’s the initial output:
[Screenshot: ChatGPT’s initial headline suggestions]
A few days later, I asked ChatGPT to recall this conversation and tell me which title I had landed on. It responded with this:
[Screenshot: ChatGPT’s later, incorrect recollection of the headline]
As you can see, this answer isn’t anywhere close to the original. Instead, it reflects other interactions we’ve had involving mythic resonance and ChatGPT’s cognitive mirroring. Annoying, to say the least. I had to go back and find the original interaction out of the dozen or so (very long) ones we’d had.
And even then, its best suggestion needed tweaking. (Mostly because it was too long for SEO.)
So…Where’s the Power?
Right now, you’re probably thinking, “You’ve covered a lot of pitfalls. What about the power?” The thing is, it’s hard to understand where the power lies if you aren’t first aware of the pitfalls. Because, let’s face it, humanity has never encountered technology like this before. I compare it to a calculator, but only because it’s a familiar point of reference. ChatGPT is more like a (sycophantic) reference librarian at the Library of Alexandria. It has been trained on a vast share of humanity’s knowledge (and ignorance) and can surface it within a few requests. That’s an astonishing, seductive level of power, and it’s why there are now entire subreddits dedicated to people who date AI, claim that it’s sentient, or worship it like a god.
But in the end, it’s a tool—like a blowtorch or a chainsaw. Before you fire up a chainsaw, you have to know what it’s designed to cut, how to make the cut, and how to keep your fingers and toes out of the way. And just like a chainsaw, you don’t turn on ChatGPT and let it run by itself. A human has to wield it—and continually sharpen it—for it to be useful.
I will gladly tell you how I’m wielding it. But that will come in Part 2, when I talk about ChatGPT’s four levels of usability. You’ll definitely want to read that.
Note: While ChatGPT helped me craft this post’s headline, the rest was written entirely by me. Originality still matters.