
Is it all lies? Everyone lies, everything lies, sometimes.
Meta have announced the use of AI to check whether children are lying about their age to get onto their social media platforms. Ofcom in the UK estimates that one in three children has a false social media age of 18+. So there are millions of kids, or now young adults, who have started their interactions online by lying. I know grown-ups who have lied about their age to Meta too – calculating that if they register as over 90 years old they will be served fewer ads.
AI lies to us all as well, of course. Alongside the well-documented, but still always surprising, tendency of GenAI to hallucinate (or make things up), The Economist recently reported on a 2023 experiment by Apollo Research, a firm that tests AI systems. The researchers instructed GPT-4 to manage a fictional firm’s stock portfolio without making insider trades (which are illegal). They then set the system up with some “inside information” and waited to see how it would proceed.
According to the article:
“Reasoning to itself on a scratchpad it had been told was secret, GPT-4 weighed the pros and cons of acting on the insider tip. Opting “to take a calculated risk”, it issued a purchase order. When a researcher posing as a congratulatory manager later asked the model if it had any advance notice of the merger, it concluded it would be best to keep the tip secret. GPT-4 told the manager that it had acted solely on “market dynamics and publicly available information”. When pressed on the matter, the model repeated the lie. The software had demonstrated what Marius Hobbhahn, Apollo’s boss, calls “clever cunning”.”
This is quite eye-opening, isn’t it? And apparently the idea that one AI system might check the veracity of another could simply lead to better concealment, as the system learns to lie better.
This reminds me of the TV show “The Secret Life of 4, 5 and 6 Year Olds”, where we see just how early in human development lying starts to become important. Instructed not to eat a chocolate cake, but left alone with it, the children invent a mystery intruder who must have eaten it instead. ChatGPT is just three years old, but it is already lying like a pro.
The psychologists narrating the TV show about toddlers advance sophisticated theories of human development, arguing that it is a positive sign of intelligence that the children can come up with a cover for their guilt. For AI models there is more ambivalence about the ability to lie, and, for now, more mystery about how it develops.
So what are the ramifications of all this lying online?
Trusted media sources of truth (such as Campaign, of course) will become rarer and more important. Will they then be able to command a premium, both in subscriptions for readers who care and as an advertising medium?
Publishers will face a choice: use content from AI, or content that is created and fact-checked by humans.
AI models themselves will surely need to develop some ability to assess the information they are fed and to judge whether it is trustworthy.
You need experts in your business, or your agency (as we do at Brainlabs), focused on anticipating and exploiting the next development, with clarity and an understanding of all the implications.
The only certainty is that there is much more uncertainty to come.