The importance of editing AI content

Generative AI arrived in the public consciousness a couple of years ago with all the subtlety of a car bomb. Its exponential growth has been driven by three factors – AI companies’ desperation to outgrow their competitors, staggering sums of external investment, and a desire to stake a claim in a rapidly evolving market which as yet has no boundaries or limits.

Along the way, generative AI has caused much mirth with its hallucinations and absurdities. It’s briefly titillated us on video platforms like the short-lived Sora, where everything you saw was entirely fictional. And it’s also upended the jobs market, with companies abolishing everything from entry-level roles to advanced positions previously requiring decades of experience. Some analysts predict a high percentage of the world’s jobs may be lost to AI, yet in contrast to previous technological revolutions, there seems little scope for new jobs to be created as old ones are rendered obsolete.

A copywriting canary in the coalmine

As a copywriter, I’ve been at the sharp end of this. In 2024, five of my clients ceased providing work to freelancers because of AI and search engine changes. In 2025, I lost another three. The impact on my bottom line has been dramatic. So when I was offered the chance to edit AI content for a major automotive client, it felt like a valuable opportunity to examine my ‘competition’.

There is no doubt that AI engines like Claude, Perplexity and Jasper can generate reams of highly technical copy far more quickly than any human writer. I’ve also acknowledged in a previous blog how effective Microsoft Copilot can be when it’s dealing purely with content inside your company’s data ecosystem. The problems come when AI engines start drawing information from the wider internet, with all its inconsistencies and inaccuracies, and presenting it as fact.

In the process of editing AI-generated articles, I’ve seen some truly bizarre issues arise:

  1. An AI engine kept trying to apply the same Q&A questions to different topics, leading it to pose increasingly absurd questions which it was then unable to answer.
  2. One AI-generated paragraph made perfect sense, right up until it hallucinated – mid-sentence – something that had never happened.
  3. An AI article stated quite confidently that something which took place five years ago had first happened in autumn 2026.
  4. Another AI article made something up – but then included a link to an unrelated Wikipedia page, which made it look at first glance as though this statement had been corroborated.

I could give many more examples, but it might give away who I’m working for, so I will draw a discreet veil over my other findings.

Why does this matter?

Humans make mistakes, and even authoritative platforms like Wikipedia (a favoured resource of AI engines) contain errors. People aren’t infallible, so why should we expect machines to be?

The short answer is that machines should be logical, which hallucinations clearly aren’t.

The longer answer is that the engines present their content as authoritative, with no need for human input. Yet when a broadsheet newspaper makes a mistake, IPSO can force it to publish a correction. ChatGPT and its ilk have no reason to care if they make a mistake, but if you publish that content and get in trouble for it, it’s your reputation on the line. Don’t expect the AI company whose content was responsible to submit a witness statement on your behalf in court.

For this reason, sense-checking AI-generated content is essential before a business publishes it on any public or client-facing portal. Human oversight from a qualified and experienced writer will also deliver a number of other benefits, beyond accuracy:

  1. Content can be restyled to match corporate house writing styles, from language and use of key terminology through to consistent punctuation and formatting.
  2. An editor can add in some of the things AI engines can’t generate – humour, personal opinion, emotion, lived experience.
  3. AI copy is prone to repetition, often making the same point two or three times with very similar (or even identical) wording. Human oversight will identify and remove unnecessary duplication.

It’s also worth considering that AI engines are engaged in plagiarism and copyright theft of digital content on a scale which makes Napster and its peer-to-peer ilk resemble a few people trading cassette tapes. Innumerable lawsuits from enraged content creators – writers and artists through to film studios and media outlets – are underway around the world. Legal wheels grind slowly, but these cases may prove to be cataclysmic for generative AI brands. Some firms are already hundreds of billions of dollars in the red due to start-up costs, plus the vast energy and environmental costs of running all the servers and processors needed to meet daily demand for their services.

The best things in life are rarely free

You might wonder why you should pay a writer to edit something AI has produced for free, but that’s the point. If you’re not paying for something, it’s unlikely to be high quality – and it won’t be free for much longer anyway. Even if generative AI companies successfully defend every legal case in the next ten years (which seems inconceivable), the incalculable sums being invested in generative AI tools will eventually need to generate returns as dividends or profits. The only realistic way for this colossal debt to be settled is for AI companies to start charging for the use of their tools. And once you have to pay £1,000 a month for a subscription, it’d be cheaper to use a freelance writer.

In the meantime, if you’re thinking about commissioning AI content for your business, ensure it’s accurate and authentic by asking G75 Media to provide our award-winning editing services. Get in touch with us to discuss a quote.