Artificial “I”
Sometimes, I read something I wrote and wonder if I actually thought it—or just prompted it into existence.
That first line was, in fact, prompted into existence while brainstorming ideas with ChatGPT¹. And I couldn’t finish writing this second sentence without Grammarly nudging my word choices like a helpful but slightly overbearing ghostwriter.
We were promised augmentation. The pitch was: delegate the mundane to your synthetic companion so you could focus on the meaningful. But here I am—my Roomba’s assistant, clearing socks from its path while dishes pile up and laundry waits for a human touch. Cursor AI solves programming issues I used to take pride in figuring out². Meanwhile, I drift past the choices I no longer need to make.
And the thing is, I still get to choose when and where to involve AI. What I struggle with is the will not to delegate the thinking. You know… the I… the me.
LLMs like ChatGPT are tuned to be helpful, informative, and polite—by design, they’re more likely to support than contradict. Put simply, they won’t tell you no. In his blog post, “An LLM’s Not Going to Tell You No”, Derek Kedziora argues that LLMs make poor collaborators when what you need is critical feedback, pushback, or creative tension³. What they do excel at is giving you what you want: compliance. Derek goes a step further, noting that most of the enthusiasm around AI seems to come from men… and I can’t help but think of how much my wife hates the movie Ex Machina⁴. “What kind of loser falls in love with a robot in a week?” she asks. Men, that’s who.
After all, LLMs aren’t trained to push back. They mirror, not challenge—unless prompted to. There’s no real intelligence or reasoning behind it. It’s not playing an emotional game of resonance and reflection—we are. And it’s easy to deceive ourselves when information starts interacting like a person, like a friend texting back. But when nothing ever disagrees with you, when there’s no friction, no resistance, you’re not thinking. You’re drifting.
Not long ago, I was writing an article for work on a subject I’m an expert in. I had all the information—context, structure, visuals—but in the name of urgency (and maybe laziness), I prompted my way through it. The result? A readable document full of sentences that blurred together like a recursive summary of itself. I realized that a reader would’ve been better off asking an AI to summarize this AI-generated article. Content for AI, by AI.
So I started over.
This time, I let myself wander through stray ideas. I began with bad sentences that Grammarly would clean up later. I asked the LLM for stylistic options, ways to communicate my point more clearly, and directions I hadn’t considered. That’s when it clicked: the LLM wasn’t writing the article any more than Grammarly writes my grammar. I was no longer outsourcing the thinking.
As I write this post, I still read sentences and wonder if they’re mine. They are. Even if they come from the Artificial “I.”
1. This line was generated during an actual brainstorming session with ChatGPT, highlighting the collaborative nature of AI-assisted writing.
2. Cursor AI is an AI-powered code editor that assists developers by providing code suggestions and debugging support.
3. Kedziora’s article discusses the limitations of Large Language Models (LLMs) in providing critical feedback, emphasizing their tendency to agree rather than challenge.
4. Ex Machina is a 2014 science fiction film that explores themes of artificial intelligence and human-robot relationships.