Hallucination is a term that stems from human psychology: a person sees, hears or senses things that are not there. It may be something of a hallucination in itself to apply the term to an artificial entity rather than a human. When asked, ChatGPT cannot tell who coined the term. The chatbot points instead to Neuro-Linguistic Programming, a communication approach that, interestingly enough, shares its acronym with Natural Language Processing, the technology used in AI training.
Well, enough of the coffee table talk for now. Let’s see what AI hallucinations are and how we can work around them.
What are hallucinations in AI?
An AI writer can produce credible-sounding text that states complete nonsense with great authority. It can mix up data, use the wrong source information or even make up facts and generate fake URLs. ChatGPT could, for example, come up with a nonexistent paper on the origin of the Covid-19 virus, claim that Hamlet was the king of Sweden, or quote Napoleon as saying: ‘Words, my friend. Nothing but words.’
Roughly speaking, there are three types of hallucinations:
1. Prompt reading mistakes
The AI writer answers a completely different question from the one you asked. It is not yet clear what triggers this. Be as clear as possible in your prompt and point the AI to the right information source.
2. Mishandling or making up facts
ChatGPT was trained on enormous amounts of web text, including content gathered via Reddit, a source full of non-factual opinions and conspiracy theories. This could explain part of the hallucination problem, because AI answers a prompt by predicting the most likely continuation based on patterns in its training data rather than by checking facts (see the toy sketch after this list).
3. Biased texts
Training methods and training data are most likely at fault for biases around age, gender and other personal attributes. As users we cannot prevent them, so we have to check for these hallucinations afterwards.
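To make the pattern-matching idea from point 2 concrete, here is a deliberately simplified toy sketch in Python. It is not how ChatGPT works internally, but it illustrates the same principle: the model produces whatever continuation is statistically most common in its training text, with no notion of whether that continuation is true.

```python
# Toy "next word" predictor: picks the statistically most common follower
# of each word in a tiny training text. Real language models are vastly
# more sophisticated, but the core principle is the same: likely, not true.
from collections import Counter, defaultdict

training_text = (
    "napoleon was emperor of france . "
    "hamlet was prince of denmark . "
    "hamlet was written by shakespeare ."
)

# Count which word follows which (bigram counts).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the training text."""
    return follows[word].most_common(1)[0][0]

# Complete the prompt "hamlet" one word at a time.
completion = ["hamlet"]
for _ in range(4):
    completion.append(most_likely_next(completion[-1]))

# Prints a fluent but possibly false statement, along the lines of
# "hamlet was emperor of france": pattern matching, not fact checking.
print(" ".join(completion))
```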
The risks of AI hallucinations
Don’t put a disclaimer on your website saying that all your content is AI-generated and that you’re not responsible for any possible misinformation: you are! Unreliable website content will lead to:
- losing credibility
- exposing you to legal action
- undermining equity and inclusion
Workarounds for AI hallucinations
At the moment we cannot prevent AI hallucinations: that is a task for AI developers and trainers.
The only things we can do right now are:
- Be as clear as possible in our prompts (see the example after this list)
- Check all AI output before publishing
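To illustrate the first point: a clear prompt names the task, the audience and the source the AI should stick to. The wording below is just an illustrative sketch, not a guaranteed fix, but the more specific the prompt, the less room the model has to wander off to a different question.

```python
# Two ways to ask for the same text. The clearer prompt names the task,
# the audience and the source the AI should stick to, leaving less room
# for the model to answer a different question than the one you asked.
vague_prompt = "Write something about our new product."

clear_prompt = (
    "Write a 150-word product announcement for the HR managers who read "
    "our newsletter. Use only the facts in the product sheet pasted below, "
    "and do not add any claims that are not in it.\n\n"
    "PRODUCT SHEET:\n"
    "<paste your product sheet here>"
)
```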
Checking AI output
Always fact-check your output. Ask ChatGPT to mention the source of the information and to give you names, dates and a URL. Click the URL to check whether it really exists. Ask your old friend Google for the same information or consult experts in the field. When in doubt, don’t publish.
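If ChatGPT gives you URLs as sources, you can automate the first sanity check. Below is a minimal sketch, assuming Python and the widely used requests library; the URLs are placeholders for whatever the chatbot gave you. A link that does not resolve or returns an error is a strong hint that the source was hallucinated, while a link that does load still needs a human read to confirm it actually supports the claim.

```python
# Minimal sanity check for AI-generated source URLs.
# Assumes the third-party "requests" library: pip install requests
import requests

# Placeholder list: paste in the URLs your AI writer gave you.
urls_to_check = [
    "https://example.com/a-source-the-chatbot-gave-you",
    "https://this-domain-probably-does-not-exist-12345.com/paper",
]

for url in urls_to_check:
    try:
        # A HEAD request is enough to see whether the page exists; fall back
        # to GET if the server does not support HEAD.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            response = requests.get(url, allow_redirects=True, timeout=10)
        if response.status_code < 400:
            print(f"OK ({response.status_code}): {url}")
        else:
            print(f"SUSPECT ({response.status_code}): {url}")
    except requests.RequestException as error:
        # Connection errors, timeouts, invalid domains: likely hallucinated.
        print(f"FAILED: {url} ({error})")
```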
A check on biases is easier and super fast: there is a built-in bias check in the Textmetrics software. Here you cannot go wrong!