The Media Week Awards are back
The awards, which Campaign calls “the most highly prized awards in UK commercial media”, are now open for entries, with deadlines looming in June and July.
I’m honoured and delighted to have been asked to judge again, and over the years I have seen the growth of professionalism and rigour in the judging. But a new book by Nobel prize winner Daniel Kahneman makes for grim reading on the effects of what the authors categorise as “noise” on human judgement.
The book (co-authored with Olivier Sibony and Cass Sunstein) is packed with evidence casting significant doubt on nearly every kind of judgement that underpins business and society. For example, a study of 208 federal judges in 1981, all exposed to the same 16 hypothetical cases, found agreement on the verdict in only three of them. There was also huge variation in sentencing: in one case where the average sentence was a year, one judge recommended 15 years in prison.
In real life (as opposed to hypothetical cases), judges have been found more likely to grant parole at the beginning of the day or after a food break; hungry judges are tougher. One study, which examined 1.5m judgements over three decades, showed that when the local football team loses a game at the weekend, judges make harsher decisions on the Monday. A study of six million decisions made by French judges found that defendants are given more leniency on their birthdays. And when it is hot outside, people are less likely to be granted asylum, according to evidence on the effect of temperature on 207,000 immigration court decisions.
This is shocking, of course, and as you read through the book the evidence piles up for the unreliability of human judges and juries.
More evidence, then, that evidence-based decisions, using rigorous modelling, are so important in media and advertising thinking, and why the IPA Databank is so useful.
Are robotic judgements better? Not by much, according to the book. Partly, of course, because the algorithms are trained on history (past judgements delivered by humans, and therefore subject to bias) or on a set of rules (created by humans, and equally subject to bias). Machine learning is not as noise-free as it seems.
Winning an award is important and can help your career path, but your career also depends in other ways on the judgements of others. Studies based on 360-degree performance reviews find that actual performance accounts for no more than 20-30% of the variance in scores. The rest is system noise. And the noise may have absolutely nothing to do with you: it could be down to a row the rater had at home, bad weather spoiling their plans for the evening, or, on the other hand, the fact that they have just had a generous review from someone else.
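The 20-30% figure has a simple statistical reading. A hedged back-of-envelope sketch (the numbers below are illustrative assumptions, not taken from the book): if true performance explains only a quarter of the variance in review scores, the correlation between a score and true performance is the square root of that share, about 0.5.

```python
import math

# Illustrative sketch (signal_share is an assumption for this example):
# if true performance explains only `signal_share` of the variance in
# 360-degree review scores, the remainder is system noise, and the
# correlation between a score and true performance is sqrt(signal_share).
signal_share = 0.25              # within the 20-30% range quoted above
noise_share = 1 - signal_share   # everything the score measures that isn't you

correlation = math.sqrt(signal_share)
print(f"Noise share of variance: {noise_share:.0%}")
print(f"Score vs performance correlation: {correlation:.2f}")
```

In other words, even on the generous end of the quoted range, a review score is only weakly correlated with the thing it claims to measure.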
We can’t delegate career decisions to machines anyway as the authors write: “Creative people need space. People aren’t robots… people need face to face interactions and values are constantly evolving. If we lock everything down we won’t make space for this.”
What should we do to account for noise in decision making (aside from hoping for good weather and a winning football team)?
Kahneman, Sibony and Sunstein advocate appointing a “decision observer”: someone with no skin in the game whose role is to identify and point out bias. This is common on major boards, in the shape of non-executive directors and chairs, but non-existent in many reviews and on awards judging panels, and should be welcomed (at least as a trial).
In addition, high-performing teams need, as a matter of course, to understand how to reach agreement when they disagree in a way that steps aside from who is most forceful or charming. We all need to develop a way of working through disagreements that is transparent in approach. In Belonging: The Key to Transforming and Maintaining Diversity, Inclusion and Equality at Work, we say this: “Understand that there are 3 kinds of disagreement: a) we are using different facts and evidence to reach our conclusions; b) we are interpreting the facts and evidence differently; c) we actually fundamentally disagree.” We detail how to do this in chapter 6.
Start with this, and at least some of the noise in collective decision making will quieten, ensuring better outcomes for everyone.
Do you believe in data magic?
Friday, June 25th, 2021
Are we mixing up magic and science (again)?
It’s always been true that people can manipulate data to fool others. (An index chart with a scale that starts at 50 rather than zero, for instance, is a classic and somewhat disappointing feature of some awards entries, used to exaggerate impact.)
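The arithmetic behind the truncated-axis trick is worth spelling out. A minimal sketch, using made-up index scores of 55 and 60: with the axis starting at zero the second bar is drawn about 9% taller, but start the axis at 50 and it is drawn twice as tall.

```python
# Sketch with hypothetical numbers: how truncating a chart's axis
# exaggerates a small difference between two index scores.
before, after = 55, 60

def apparent_lift(before, after, baseline):
    """Ratio of bar heights as drawn when the axis starts at `baseline`."""
    return (after - baseline) / (before - baseline)

honest = apparent_lift(before, after, baseline=0)      # axis starts at zero
truncated = apparent_lift(before, after, baseline=50)  # axis starts at 50

print(f"Axis from 0:  second bar drawn {honest:.2f}x taller")
print(f"Axis from 50: second bar drawn {truncated:.2f}x taller")
```

The underlying difference is identical in both charts; only the drawn proportions change, which is exactly why judges (and readers) should check where the axis starts.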
Now data may be manipulating us as AI takes control.
The consequences of this are far-reaching and profoundly dark. We should not believe what we see without interrogation. For some, this has echoes of the pre-Enlightenment mass belief in magic.
Until the late seventeenth century in the West, magic and science were pretty much the same thing. Isaac Newton “discovered” gravity, but he also worked hard at alchemy, trying to turn metal into gold. Queen Elizabeth I sponsored the magician/mathematician Dr Dee, who cast spells and also taught Drake and Raleigh how to navigate the globe. Dee conversed with angels, and wrote algorithms (they aren’t anything new) to explain the solar system.
Magic fell from grace as an endeavour for scholars and scientists in the Enlightenment, replaced by cold, hard data. Today, one leading commentator on data science believes that we are in danger of thinking about it in terms that are magical.
Edelman pointed to one example of “magic” in tech: deepfakes (where machine learning and artificial intelligence are supercharging the ability to fake content). The fun side of this is compelling; the dark side is yet to be fully understood, or accounted for.
Artificial intelligence (AI) is climbing towards the “Peak of Inflated Expectations” on the Gartner Hype Cycle (though it is nowhere near the “Plateau of Productivity”), but it is cropping up widely, and often usefully. Outside our industry, Edelman cited an education experiment in which two years of progress was made in just six weeks as a result of personalised, AI-driven online learning programmes for schoolchildren. AI is also saving lives in breast cancer screening.
For our industry, Edelman warns, AI is in its infancy, and there are dangers if we don’t guide it ethically and responsibly, and monitor its progress.
Edelman advises five key questions to ask when using AI. Crucially, this means asking specifically who designed it and whose reputation is damaged if it goes wrong. Many systems are designed for the current status quo, by the current leaders of that status quo. Yet at the same time we are trying to change the status quo, to make our businesses stronger and better for the disruptions to come.
As industry changemakers, we need to interrogate AI carefully. Can we, on the one hand, make pledges about being more inclusive in our work and our management, and on the other allow AI to make decisions based on the biases of the past?
One glance at the current situation shows that the status quo is not OK. The Economist has looked at how AI is working in Google Open Images and ImageNet. It found that just 30-40% of photos are of women (who are, of course, 50% of the population), and that men are more likely to appear as skilled workers while women appear in swimsuits or underwear. Frequent labels for men include “business”, “vehicle” and “management”; the equivalents for women include “smile”, “toddlers” and “clothing”.
Edelman reminded the conference audience of a key episode in America’s history: the Salem witch trials, which took place in the seventeenth century in Massachusetts, where Edelman lives. He warned that if we allow the narrative about AI to become magical, then we are in danger of behaving like the residents of Salem: becoming uninformed, credulous children and allowing unfair and even harmful practices to become the norm, failing to challenge the systems in a way that would create a better world. In Salem, being an outsider was harmful at the very least to your prospects of flourishing (most of the victims were misfits in a strict Puritan society). We need to design AI now, actively, to encourage more diversity, bring outsiders into our systems, and drive change and difference.
Edelman says: “Don’t just build AI for performance, but also for opportunity, for justice and for inclusion”.
As WPP UK country manager and GroupM CEO Karen Blackett wrote in the foreword to our book Belonging: “Diversity is not a problem to fix. Diversity is the solution.”
When it comes to the development of revolutionary new systems and ways of working we all need to pay attention to ethics, to inclusion and belonging.