Hallucinations are AI’s wild lies, but Ghost Edits are its silent crimes. You ask it to fix one thing—a loop, a sentence, a prompt—and it drifts through your work, leaving mysterious changes you didn’t sign up for. It’s not just meddling; it’s haunting your code and text, then vanishing without a trace.

People are rightly concerned about AI hallucinations, where it confidently states things that just aren’t true. But I’ve found that the bigger problem isn’t the false claims; it’s the changes AI makes without being asked and without telling you. I call these Ghost Edits: edits you didn’t ask for, and that the AI will often claim it didn’t even make!

Hallucinations are a problem when you are asking AI to be creative, to write an email or a legal brief for you. But many of us use AI more to help us revise our work. In my work, I don’t have it create things for me as much as I have it review things—code, blog posts, letters, prompts, etc.—to provide proofreading, feedback, and to suggest improvements. In a typical workday I will ask AI to look over dozens, if not hundreds, of items and give me its thoughts.

For instance, my grammar is far less than perfect. I have no clue where to put commas, let alone semicolons or dashes, so I rely on AI to grammar-check and proofread my blog posts. This generally works very well, but I have to review every change carefully to make sure it hasn’t altered anything else. For one post, I noticed that the proofread version the AI gave me was about two-thirds the size of the original. Reading it, I realized it had dropped a story about processing college transcripts that I quite liked. When I asked what it had done besides grammar checking, it denied having done anything!

Haunted AI

This isn’t an occasional change here and there; it’s often dozens per document. It’s not one ghost, but an entire haunted mansion! It is so frequent an occurrence that there are certain types of files I won’t even let AI touch, namely prompts and configuration files.

A lot of the work I had to do on Mirror Mirror was creating prompts for various AIs to handle specific tasks. Often these prompts took days of painstaking tweaking to learn what the AI would respond to and what it wouldn’t, a tedious process of finding the exact wording. One time I asked AI to take a prompt and append some sample JSON, a data format commonly used for passing information between programs. A strength of AI is getting formatting exactly right, and this was a fairly complex JSON structure that I likely would have mistyped somewhere. The AI did a great job with the JSON, but it also managed to shrink the 12 painfully constructed instructions in the prompt to 2! When I asked why, it told me the others weren’t necessary, but a quick test showed otherwise.

“I Didn’t Do It!”

I had what I thought was a clever idea: I’d ask the AI to do the check and, when it was done, to also tell me each and every thing it changed and why. This burned me almost immediately. I naively asked the AI to add some version tracking to a configuration file, verified that it claimed to have changed only the version tracking, and pasted the result into my program. My program promptly crashed, because the AI had changed far more than I had asked. When I asked it why it would do that, it said it hadn’t made those changes! When I showed it the changes, it simply gave me a corrected version of the document without its ghost changes, never acknowledging that it had made them in the first place.

I’ve found this to be fairly consistent: AI does not seem to realize it made these changes.

Why Does This Happen?

When I first noticed these Ghost Edits, I was making heavy use of older AI models, especially GPT-3.5, and I assumed the problem would go away as models got better. In fact, it has gotten worse. Older models had small context windows, the amount of text a model can process at once. Moreover, these context windows were shared between input and output. Imagine a bucket that must hold both the water you pour in and the water you scoop out: the more you pour in, the less you can take out. If your question was large, the AI had no choice but to shorten its response to give you an answer at all.

Therefore, I fully expected this issue to disappear as context windows grew. When it didn’t, I looked further into what was going on. It turns out this behavior often stems from how AI models, especially large language models, are trained to “improve” or “optimize” outputs holistically rather than surgically. They don’t always have a granular sense of “change only this, leave the rest alone”; they see the whole input as a canvas. Plus, their tendency to paraphrase or simplify can strip out nuance you wanted to keep.

As for AI’s failure to tell you what it changed, I think it’s a two-fold problem. First, it’s likely a failure of introspection; most models just aren’t great at tracking their own edits step by step. Second, because AI thinks in tokens rather than words, it often judges things to be more similar than they really are. In short, when the AI shrank my prompt from 12 points to 2, I suspect it didn’t register the difference because it believed the two versions were saying the same thing. It’s like swapping “use” for “utilize”: the synonyms are so close that you might not realize you’d changed anything.

What to Do About It?

For items that I’ve found AI gets wrong repeatedly, namely prompts and configuration files, I won’t even do a large-scale cut and paste of what it produces. I’ll hand-type, or paste small sections, so that I know exactly what is being changed. The few times I’ve broken this rule, I’ve paid for it. With these files, AI just can’t help making Ghost Edits.

For items AI handles better, such as proofreading blog posts or checking code, you might think that simply reading what you get back closely would be enough, but it really isn’t. While I certainly reread what I get back, the truth is that I can miss major changes, especially omissions. This is doubly true with code, where a single changed letter is easy to overlook and can cause big problems.

You can’t trust AI to police itself, but you can trust another AI to police it! So I take the before and after documents, load them into a second AI, and ask it to detail each and every change; then I look through those changes and make sure they’re what I need. If you are using Microsoft Word or VS Code, you can also use their built-in comparison tools to see exactly what has changed. Either way is a bit cumbersome, to be sure, but you only need to get burned once by trusting the AI before you start checking carefully every time.
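If you’d rather not rely on any AI for the comparison, a plain text diff catches Ghost Edits mechanically, including silent omissions. Here’s a minimal sketch using Python’s standard difflib module (the sample text and labels are just illustrative):

```python
import difflib

def show_changes(before: str, after: str) -> list[str]:
    """Return a unified diff of two document versions, line by line."""
    return list(difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile="original",
        tofile="ai_edited",
    ))

# Illustrative example: the "AI" quietly dropped a line we wanted to keep.
original = "Fix the loop.\nKeep this story.\nAnd this one too.\n"
edited = "Fix the loop.\nKeep this story.\n"

for line in show_changes(original, edited):
    print(line, end="")
```

Lines beginning with “-” were removed and lines beginning with “+” were added, so any Ghost Edit, even a dropped paragraph, shows up explicitly instead of hiding in text you have to reread.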

Busting the Ghosts

Ghost Edits are the poltergeists of the AI world—mischievous, elusive, and a little too comfortable rearranging your digital furniture. With a second AI as your ghostbuster and a sharp eye on every change, you can keep them at bay. So next time you hand your work to AI, watch out—it might just throw a haunted house party in your code!

