Why Is Content Written by Humans Better than AI-Produced Content?
Artificial intelligence has been around in various forms for many years, with Alan Turing often credited with being the first to consider the idea. His seminal paper, Computing Machinery and Intelligence, published in 1950, asked a very simple question: can machines think?
Turing is perhaps best known for his role in helping develop Britain’s codebreaking capabilities during the Second World War. Parts of his fascinating life were captured in the film The Imitation Game, the name deriving from a test Turing devised to assess the extent to which a machine could display intelligent behaviour and thus imitate a human. Now known as the Turing Test, it was initially called the imitation game by Turing himself.
However, in a rather human way, we have drifted from the topic a little there. The point is that AI has been around for many years, at least in the imaginations of visionaries like Turing. But in truth, it is only really since ChatGPT was released that it has fully captured the imagination of the wider public. AI existed in films and on TV, and may have been used behind the scenes for a whole host of functions, but with the launch of ChatGPT in 2022, it is fair to say AI went mainstream.
All of a sudden, or so it seemed, artificial intelligence was going to change the world in the way that the Internet and smartphones have. Or, scary thought, alter life even more radically than either of those things. We had suddenly become years, or even months, away from the glorious sunlit uplands of a world where AI would solve the issue of climate change, eradicate cancer and maybe, just maybe, make it possible to get Oasis tickets.
Alternatively, depending on your point of view, the robots would take over the world and unintentionally lead us to a nuclear Armageddon. Or at the very least AI would bring mass unemployment, destroying jobs and altering society beyond recognition.
AI Still Evolving
Perhaps unsurprisingly, some two years on from the launch of ChatGPT, now widely – dare we say – imitated, not all that much has changed for most people. It is thought that around $100 billion has been invested in AI in recent years, with that figure expected to double by 2026. That massive investment has no doubt had an impact and improved the capabilities of several AI tools.
However, in day-to-day life, few people really experience much that is significantly different. There seems little doubt that artificial intelligence will produce some really big scientific and technological breakthroughs and to a degree it already has. However, it is still very much evolving and nobody quite knows where it will lead.
Can AI Write Content for My Website?
Whilst AI may not quite have been able to broker peace in the Middle East or eradicate malaria just yet, one area where it has certainly made strides is the world of copywriting and content provision. There have been various studies and reports into which jobs are most at risk due to AI’s growth, and copywriting generally features.
ChatGPT and other similar iterations of AI are based on large language models (LLMs). In simple terms, this type of AI can analyse how language is used and replicate it in a way that (in theory at least) provides accurate and coherent answers to just about any question you might ask it.
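As a very rough illustration of the idea of learning from existing text and then generating new text, here is a toy sketch in Python. It is emphatically not how a real LLM works – modern models use neural networks trained on billions of documents – but this simple bigram model captures the core intuition in miniature: record which word tends to follow each word, then generate text by repeatedly predicting a plausible next word. All names and the sample sentence here are our own illustrative choices.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # fixed seed so the output is repeatable
    out = [start]
    for _ in range(length - 1):
        options = following.get(out[-1])
        if not options:
            break  # no known follower; stop generating
        out.append(rng.choice(options))
    return " ".join(out)

sample = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(sample)
print(generate(model, "the"))
```

Notice that the output can only ever recombine patterns seen in the training text – which hints at both the fluency and the limitations of this whole family of approaches.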
Its responses are improved and honed with some human assistance and, at first glance, AI copy often seems to be very good. Naturally enough, many webmasters have sought to use this technology to generate content, believing that it offers major advantages over copy written by humans.
There are two areas in which humans simply cannot compete with LLMs: speed and price. If you simply want lots of cheap content then AI can provide that. There are certain scenarios where this might be a viable option, for example if you need lots of very simple, generic copy and you are not overly concerned with the quality of the work.
If you do not have any sort of budget or do not have access to reliable writers in whatever language you require for your content, then AI will certainly be able to help you. Equally, if you need a huge amount of copy in a very short space of time, using an LLM-based tool could be the only way to fulfil that requirement.
Ultimately though, even the most advanced, paid-for AI copywriting tools are currently no match for even an average human writer. That individual, or in our case a small content agency, cannot go head-to-head with AI on cost or speed, but equally, artificial intelligence is simply unable to come close to the quality of a half-decent human writer. Of course, in the many areas in which we specialise, we are confident that you will find our highly educated, experienced and skilled writers to be significantly better than average too. Frankly, those poor robots don’t stand a chance.
Why We Beat the Bots
There are several reasons why a skilled human can produce better work than AI. We believe some of these are beyond question, others are more debatable, and some may be true right now but will not always hold.
Accuracy
The biggest issue with AI is that it rather likes telling fibs: it regularly provides incorrect information. It would not take you long to discover this for yourself with just a little playing around on any of the major AI tools. Take a look at the exchange below, for example, concerning footballers who had played for any four of the “Big Six” sides in England (Manchester United, Manchester City, Arsenal, Tottenham, Liverpool and Chelsea).
Raheem Sterling is a glaring omission from this answer, which was given several months after his move to Chelsea, his fourth Big Six side. But perhaps more importantly, it claims Emmanuel Adebayor played for Chelsea, and this is categorically incorrect. In addition, it offers William Gallas as an answer, even though the Frenchman only played for three of the relevant sides – as have many other players, so quite why ChatGPT opted to bring him to our attention we have no idea!
We subsequently asked for England players who had played exactly 49 or 50 times for the Three Lions. It suggested two names, Kevin Keegan and Michael Owen, as having earned exactly 49 caps. Keegan, however, played 63 times for his country. Owen won 89 caps, and rather bizarrely ChatGPT said, “Michael Owen – Although he earned more caps later in his career, he had precisely 49 caps at a key point in his England journey.” Not very helpful, to say the least.
What’s more, as well as providing entirely incorrect and rather strange responses, it ignored several correct answers. There were four players who could have been listed but were all overlooked, one of the quartet being World Cup final hat-trick hero Geoff Hurst, no less.
If the information provided by AI tools such as ChatGPT or Google’s Gemini cannot be trusted, what use is it? Moreover, the examples we have given relate to very specific questions about factual matters. If it cannot get those right, we can be certain that errors will abound if it is asked to produce a whole article of 1,000 words or more, especially where more complex, less clear-cut issues are involved.
AI Fabricates Sources
As well as making up information, AI actually has the temerity to create fictitious sources to support its erroneous claims. There are countless examples of this, and the issue has garnered quite a lot of attention in the media. However, we have one personal example which is particularly amusing.
AI poses so many potential problems and issues, and in truth, students using it to write essays for them is not a global concern in the same way that terrorists being able to create novel and deadly viruses would be. Nonetheless, for those who work in that world, as one of our close friends does, it is a serious and concerning issue.
Said friend was particularly surprised to see that a student had cited a work by him, a university lecturer, in support of an argument. Unfortunately for the student, the lecturer had never produced the journal article that AI was attempting to use as a source. Spotting the use of AI in such scenarios is not always easy but in this instance it certainly was!
No Good if You Need Immediacy
One of the biggest issues that we doubt AI will ever be able to fully overcome is that it cannot react as quickly as expert copywriters to unfolding events. If you want your content to beat the competition, being first to the punch can sometimes be crucial. Artificial intelligence simply cannot deliver a football match report, a summary of a new betting offer, information about a breaking news story or anything that requires immediacy in the same way that we can. If required, we can have your content with you within just a few hours, whilst AI is still waiting to try and find out what is going on.
No Sense of Humour
AI is good at many things but we doubt it will ever be able to match the humour and ability to make others laugh that humans are fortunate enough to possess, or certainly not for a long time. Of course, in general, online content, or certainly the sort that we provide, is not created solely or even mainly for that purpose. But often webmasters and businesses do want to illustrate a sense of humour or a lighter approach, or even just show that they are human and have a bit of character.
AI is certainly not (yet) amazing at showing character or giving off the warmth and authenticity of a human author. It may be able to pass the Turing test to a certain degree but it cannot drop in subtle yet vital puns, innuendo, wordplay, satire, in-jokes and other comedic touches that may be required.
And that is where we come in. We can create content that is dotted with the sort of unique, authentic, human content that other humans can connect with. If you want something humorous, we can provide that too, but more than our sense of humour, it is our sense of being human that AI cannot compete with.
Google Might Just Change Their Mind
Technically speaking, Google’s current stance on AI-generated content could be summarised in one word: “neutral”. In their guidelines, they explain that to them, content is just content, and all that matters is that it is high in quality. They say they aim to reward “original, high-quality content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness.”
They explain that using AI and automation is not against their guidelines or regulations, provided it is not used to manipulate rankings – which would be a violation of their spam policies. Specifically, they say “If you see AI as an inexpensive, easy way to game search engine rankings” then you will struggle.
They explicitly state that using AI “doesn’t give content any special gains”. To them, content is just content, and how it is created is irrelevant – all they care about is its quality and helpfulness to their customers.
However, this standpoint could very easily change, for a host of reasons. Google might decide that its commercial interests and stated aims are best served by entirely excluding content they believe has been created by AI. They want to deliver the best results, and it is easy to see they might decide that a simple way of doing that is to favour real, human content, written by experts.
After all, a webmaster who has taken the time and effort (or money) to use human content is more likely to deliver a high-quality, accurate and considered site. In comparison, someone who has taken a day to create a site full of AI-generated content might well be deemed less trustworthy and authentic.
So if you’d like to get a quote for high-quality content that’s written and edited by experienced – and human! – content professionals, get in touch today.