Picture this: You’ve just wrapped up a big project, relying on an AI writing assistant like ChatGPT to speed things along. It’s done wonders – no more staring at a blank page or struggling with phrasing. But as you read through your final draft, something feels off. It’s not that the grammar is wrong or the writing isn’t fluent; it’s that the content sounds… well, like everyone else’s.
This is a common scenario as more authors and editors turn to AI tools. While these tools can be powerful allies, they come with a caveat: there’s a sameness to AI-generated text, a slightly robotic tone often described as “bland and wordy” or “generic corporate-speak”. This is partly down to how casually we use these assistants – as a lightning-fast way to polish content that hasn’t been fully thought through. But it also speaks to the technology’s limitations.
The mechanics of AI style
To understand why AI content often sounds so similar, let’s look at how large language models work. Tools like ChatGPT are trained on massive datasets from the internet – articles, blogs, forums and so on. They rely on statistical patterns in that data, along with preset guidelines, to predict the most likely next word at each step. But without the contextual knowledge and creativity of a human author, they tend to produce a generic and overly formal voice that lacks the natural variability of human expression.
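To make that concrete, here’s a minimal sketch of next-word prediction in Python, using a toy bigram model. A real language model is vastly more sophisticated, and the corpus and counts below are purely illustrative, but the underlying principle – choosing the statistically most likely continuation – is the same:

```python
from collections import Counter

# A toy corpus standing in for the web-scale data a real model is trained on.
corpus = (
    "we delve into the data we delve into the details "
    "we look at the data we delve into the results"
).split()

# Count which word follows which – a bigram model, vastly simpler than a
# transformer, but built on the same principle of next-word prediction.
follows: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

# Ask for the statistically most likely word after "we":
print(follows["we"].most_common(1))  # [('delve', 3)] – the frequent pattern wins
```

Because the most common pattern in the data always wins, the same well-worn phrasings keep coming back – which is exactly the sameness readers notice.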
While AI tools are highly reliable for spelling and grammar, tell-tale signs can make AI content easy to spot – from formatting quirks to predictable vocabulary choices and repetitive sentence structure. If your goal is to engage readers and build trust, being aware of these stylistic pitfalls can make all the difference.
The look
When it comes to formatting, AI assistants can be impressively consistent, whether handling heading levels, organising citations or laying out complex lists. But this convenience brings a set of built-in stylistic preferences that are neither neutral nor subtle. For example, ChatGPT defaults to US English, sets headings in title case and frequently formats content in its own distinct list style.
If you need to adhere to a corporate style guide or follow a specific manual, such as Chicago or APA, you must either craft your prompts with this in mind or carefully review the AI’s output. Even when prompted to use a custom style, ChatGPT may occasionally revert to its default settings. For readers accustomed to spotting AI-generated text, these seemingly minor formatting choices can be unmistakable signs of AI involvement, suggesting a lack of human oversight or revision.
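If you work through the API rather than the chat interface, one way to reduce this drift is to pin your style rules in a system message so they accompany every request. Here’s a minimal sketch using the OpenAI Python SDK – the model name and rules are illustrative, and even this doesn’t guarantee the defaults won’t creep back in:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Pin your style rules in a system message so every request carries them,
# rather than relying on the model's built-in defaults. The rules below
# are illustrative; substitute your own style guide.
style_rules = (
    "Use British English spelling. Set headings in sentence case. "
    "Format citations in APA style. Prefer plain words over buzzwords."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": style_rules},
        {"role": "user", "content": "Edit the following draft: ..."},
    ],
)
print(response.choices[0].message.content)
```

Even with rules pinned this way, the output still needs a human review pass – which is rather the point of this article.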
The vocabulary
Word choice is another common giveaway. Rather than selecting words for their best fit in the context at hand, AI tools rely on complex predictive algorithms trained on vast datasets, offering up words that are statistically likely to follow one another. Perhaps surprisingly, this big-data approach often leads to a repetitive vocabulary.
So, what are some of the culprits to look out for? As language professionals and others who work with AI-generated text regularly will know, words like “utilize”, “encompass”, “delve”, “underscore”, “showcase” and “streamline” have surged in popularity at the hands of ChatGPT. While there’s nothing inherently wrong with these words, their overuse can feel forced in contexts where simpler words – such as “use”, “cover”, “look” or “help” – might do a better job. This tendency towards overly formal, buzzword-heavy language gives the writing an artificial quality.
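If you want a quick, rough check of your own drafts, a few lines of Python can flag these giveaway words. The word list below is illustrative rather than exhaustive, and a match is a prompt for human judgement, not proof of AI involvement:

```python
import re

# Words that have surged in AI-generated text (from the list above);
# the selection is illustrative, not exhaustive.
tells = ["utilize", "encompass", "delve", "underscore", "showcase", "streamline"]

def flag_ai_tells(text: str) -> dict[str, int]:
    """Count occurrences of common AI giveaway words in a draft."""
    counts = {}
    for word in tells:
        # \w* also catches inflections such as "delves" or "streamlined".
        hits = re.findall(rf"\b{word}\w*\b", text, flags=re.IGNORECASE)
        if hits:
            counts[word] = len(hits)
    return counts

draft = "We delve into strategies that showcase and streamline your workflow."
print(flag_ai_tells(draft))  # {'delve': 1, 'showcase': 1, 'streamline': 1}
```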
Algorithm aversion
“Who really notices these stylistic quirks?” you might wonder, especially if you’re focused on getting the message across without too much fuss. But research suggests your readers may feel differently. As more writers lean on AI for a quick content fix, readers are developing a knack for spotting these worn-out verbal flourishes.
When it comes to content that requires focus and engagement, readers prefer human writing. Simply suspecting AI involvement can change their perception and cause them to lose interest or trust. Emerging research on “algorithm aversion” and “human favouritism”, such as a recent MIT study, shows that people tend to favour human writing over AI content – even when the AI has done a great job.
The upshot
Readers can distance themselves from your content simply because they sense it wasn’t crafted with human judgement. This reaction need not reflect a balanced assessment of the text’s quality; it stems from a natural bias towards content created by a human. Ultimately, there’s a simple quid pro quo at play: if your readers are to invest their time and attention in your writing, they’ll expect you to have done the same.