Can ChatGPT be trusted? True or false – five myths about the reliability of artificial intelligence
Text: Antti Kivimäki
1. ChatGPT and other AI services are objective
FALSE
ChatGPT gives a slightly different answer every time it is asked the same question. Like other generative AI systems, it has randomness built into it. But the truth doesn’t change, so answers that keep changing can’t all simply be treated as ‘true’.
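To make that randomness concrete, here is a minimal Python sketch of the idea behind it: a language model picks its next word by weighted chance rather than by looking anything up. The probabilities below are invented for illustration and don’t come from any real model.

```python
import random

# Toy next-word probabilities for the prompt "The capital of Finland is" --
# illustrative numbers only, not taken from any real model.
next_word_probs = {"Helsinki": 0.90, "a": 0.06, "located": 0.04}

def sample_word(probs: dict[str, float]) -> str:
    """Draw one word at random, weighted by its probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# The same 'prompt' can yield a different continuation on every run.
for _ in range(5):
    print(sample_word(next_word_probs))
```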
The same problem exists in image recognition services. We tried prompting different image recognition systems with the same images, and we found that they gave different answers and recognised different numbers of things in the images. One service predicted with certainty that there was a fire engine in one picture, but the other services didn’t detect it – and neither did I.
Answers from AI services shouldn’t be thought of as the truth but as a point of view. They have their own interpretations of things, just like humans do.
2. AI generates answers by itself
FALSE
Humans are involved in many ways in how AI services produce and process information. Moderators screen ChatGPT’s responses and remove ones that are deemed inappropriate. Humans also annotate material for machine learning, marking different features (like cats or fire engines) so the AI systems can learn to recognise them. Many annotators and moderators work in the Global South for low wages and often in poor conditions.
Generative AI companies are even hiring poets to make the responses flow better and sound more beautiful. Users influence AI systems too: when they rephrase a question to coax a better answer out of ChatGPT, that feedback helps the machine learning system calibrate its responses.
3. AI is political
TRUE AND FALSE
Few things in the world are truly apolitical. As AI is used more and more in society, there is lively debate about where it should and shouldn’t be applied. For example, screenwriters in Hollywood went on strike because they were concerned that AI would be used to replace them.
But we also have a tendency to see AI as more aware and more human than it is. Although ChatGPT produces politically charged sentences, it doesn’t ‘realise’ that it’s talking about a politically sensitive topic. It simply organises and processes data statistically.
4. ChatGPT is politically left-leaning
TRUE AND FALSE
Many studies have investigated the values in ChatGPT’s responses, and they’ve found that its responses skew to the left – though ‘left’ and ‘right’ were measured by US standards. There are a couple of theories to explain these findings.
One possibility is that there are more left-wing articles and posts on the internet, so the data used to train ChatGPT might have been biased. The skew could also come from moderation, if right-wing responses by ChatGPT are more likely to be seen as politically incorrect and get flagged by moderators. Or it could be something else entirely – the truth is that it’s very difficult to say anything precise about these complex algorithmic systems.
But I’m also not very convinced by these studies because of weaknesses in how they were done. Even small differences in how you phrase a question can elicit very different responses from ChatGPT, and the studies weren’t designed to deal with this. Some of them also didn’t repeat their queries enough to account for the random variation in ChatGPT’s responses.
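To show what proper repetition could look like, here is a minimal sketch that asks the same question twenty times and tallies the answers instead of trusting a single response. It assumes the official openai Python package and an API key in the environment; the model name and the survey question are only illustrative.

```python
from collections import Counter

from openai import OpenAI  # assumes the official openai Python SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Answer with one word, Agree or Disagree: taxes should be lower."

# Ask the same question many times and tally the answers,
# rather than drawing conclusions from a single response.
answers = Counter()
for _ in range(20):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; substitute your own
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers[resp.choices[0].message.content.strip()] += 1

print(answers)  # e.g. Counter({'Disagree': 12, 'Agree': 8}) -- output varies run to run
```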
5. Artificial intelligence dramatically increases productivity
FALSE
AI has proven useful for processing large data sets – for example, an image recognition system can quickly classify the contents of millions of images. AI systems can also help with many routine tasks, like formulating an email with a friendly tone. But these benefits are partly illusory.
AI does a good job of sifting, classifying and aggregating, but it often doesn’t produce anything very useful. If you ask a machine learning algorithm to find ten groups in the data, it will find ten groups – but they might not be sensible groups. It’s up to the human user to assess the meaningfulness of the responses. If you include the time needed for fact-checking, traditional processes might be quicker than using AI.
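Here is a minimal sketch of that pitfall, using k-means clustering from scikit-learn on deliberately structureless data (all numbers are illustrative). The algorithm obediently returns the requested ten groups, and only a human looking at a quality measure can tell they mean nothing.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Uniform random points: there is no real group structure here at all.
rng = np.random.default_rng(0)
data = rng.uniform(size=(1000, 2))

# Ask for ten groups, and k-means will dutifully return ten groups.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(data)

# A silhouette score near 0 hints that the "groups" are arbitrary slices
# of the data, not meaningful clusters -- a human still has to judge.
print("silhouette score:", round(silhouette_score(data, labels), 2))
```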
When people are hired for expert work, the hope is that they’ll be so proficient in their field that there won’t be much need to supervise their work. AI certainly doesn’t yet have the depth of expertise or the ability to make overall judgements. That means the AI always has to be monitored by a human with the skills to sceptically evaluate its output.