Archive for June, 2021

Do you believe in data magic?

Friday, June 25th, 2021

Are we mixing up magic and science (again)?

It’s always been true that people can manipulate data to fool others. (An index chart with a scale that starts at 50 rather than zero, for instance, is a classic and somewhat disappointing feature of some awards entries, used to exaggerate impact.)

Now data may be manipulating us as AI takes control.

The consequences of this are far-reaching and profoundly dark.  We should not believe what we see without interrogation.  For some this has echoes of the pre-Enlightenment mass belief in magic.

Until the late seventeenth century in the West, magic and science were pretty much the same thing.  Isaac Newton “discovered” gravity, but he also worked hard to turn metal into gold through alchemy.  Queen Elizabeth I sponsored the magician/mathematician Dr John Dee, who cast spells and also taught Drake and Raleigh how to navigate the globe.  Dee conversed with angels, and wrote algorithms (they aren’t anything new) to explain the solar system.

Magic fell from grace as an endeavour for scholars and scientists in the Enlightenment, replaced by cold, hard data.  Today, one leading commentator on data science believes that we are in danger of thinking about it in magical terms.

David Edelman is a director at the Massachusetts Institute of Technology. A one-time advisor to Barack Obama, he joined me for a keynote at NextM Austria, in a fascinating conference session chaired by Omid Novidi, CEO of MediaCom Austria.

Edelman pointed to one example of “magic” in tech: deepfakes, where machine learning and artificial intelligence are supercharging the ability to fake content. The fun aspect of this is compelling; the dark side is yet to be fully understood, or accounted for.

Artificial Intelligence (AI) is climbing towards the “Peak of Inflated Expectations” on the Gartner Hype Cycle (though it is nowhere near the “Plateau of Productivity”), but it is cropping up widely, and often usefully.  Outside our industry, Edelman cited an education experiment in which 2 years of progress was made in just 6 weeks as a result of personalised, AI-driven online learning programmes for schoolkids.  AI is also saving lives in screening for breast cancer.

For our industry, Edelman warns that AI is in its infancy.  And there are dangers if we don’t guide AI ethically and responsibly, and monitor its progress.

Edelman advises asking 5 key questions when using AI.  Crucially, this means asking specifically who designed it and whose reputation is damaged if it goes wrong.  Many systems are designed for the current status quo by the current leaders of that status quo.  Yet simultaneously we’re trying to change the status quo, to make our businesses stronger and better placed for the disruptions to come.

As industry changemakers we need to interrogate AI carefully.  Can we, on the one hand, make pledges about being more inclusive in our work and in our management, and at the same time allow AI to make decisions based on the biases of the past?

One glance at the current situation shows that the status quo is not OK.  The Economist has taken a look at how AI is working in Google Open Images and ImageNet.  It found that just 30-40% of photos are of women (who, of course, make up about 50% of the population), and that men are more likely to appear as skilled workers while women appear in swimsuits or underwear.  The most frequent labels for men include “business, vehicle, management”; the equivalents for women include “smile, toddlers, clothing”.
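For anyone tempted to run a similar sanity check on their own training data, here is a minimal sketch of that kind of label audit.  It is an illustration only: the annotations file, its columns and the gender tags below are hypothetical assumptions, not the real Open Images or ImageNet formats.

# A minimal sketch (illustration only) of auditing label frequency by gender
# in an image dataset. Assumes a hypothetical "annotations.csv" with
# "gender" and "label" columns; real Open Images / ImageNet formats differ.
import csv
from collections import Counter

def label_frequencies(path):
    # Count how often each label co-occurs with images tagged male vs female.
    counts = {"male": Counter(), "female": Counter()}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gender = row["gender"].strip().lower()
            if gender in counts:
                counts[gender][row["label"].strip().lower()] += 1
    return counts

if __name__ == "__main__":
    for gender, counter in label_frequencies("annotations.csv").items():
        print(gender, counter.most_common(10))

If the ten most common labels for one group are about work and the ten for the other are about appearance, that is exactly the bias Edelman warns about, baked into the training data before any model has been built.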

Edelman reminded the conference audience of a key episode from America’s history: the Salem Witch Trials, which took place in the seventeenth century in Edelman’s home state of Massachusetts.  He warned that if we allow the narrative about AI to become magical, then we are in danger of behaving like the residents of Salem, of becoming uninformed, credulous children and allowing unfair and even harmful practices to become the norm.  We will fail to challenge the systems in a way that would create a better world.  In Salem, being an outsider was harmful, at a minimum, to your prospects of flourishing (most of the victims were misfits in a strict Puritan society).  We need to actively design AI to encourage more diversity, to bring outsiders into our systems now, and to drive change and difference.

Edelman says: “Don’t just build AI for performance, but also for opportunity, for justice and for inclusion”.

As WPP UK Country Manager and GroupM CEO Karen Blackett wrote in the foreword to our book Belonging: “Diversity is not a problem to fix.  Diversity is the solution.”

When it comes to the development of revolutionary new systems and ways of working we all need to pay attention to ethics, to inclusion and belonging.

Is your career suffering because of all the noise?

Monday, June 14th, 2021

The Media Week Awards are back

The awards, which Campaign calls “the most highly prized awards in UK commercial media”, are now open for entries, with deadlines looming in June and July.

I’m honoured and delighted to have been asked to judge again.  I have seen the growth of professionalism and rigour in the judging over the years.  But a new book by Nobel prize winner Daniel Kahneman makes for grim reading on the subject of judgement, and on the effects of what the authors categorise as “noise” on human decisions.

The book (co-authored with Olivier Sibony and Cass Sunstein) is packed full of evidence casting significant doubt on nearly every kind of human judgement, including many that underpin business and society.  For example, a 1981 study of 208 federal judges, all given the same 16 hypothetical cases, found agreement on the verdict in only 3 of them.  There was also huge variation in sentencing: in one case where the average sentence was a year, one judge recommended 15 years in prison.

In real life (as opposed to hypothetical cases), judges have been found more likely to grant parole at the beginning of the day or after a food break: hungry judges are tougher.  One study, which examined 1.5m judgements over 3 decades, showed that when the local football team loses a game at the weekend, judges make harsher decisions on Monday.  A study of 6 million decisions made by French judges found that defendants are given more leniency on their birthdays.  And when it is hot outside, people are less likely to be granted asylum, according to evidence on the effect of temperature on 207,000 immigration court decisions.

This is shocking, of course, and as you read through the book the evidence piles up for the unreliability of human judges and juries.

More evidence, then, that evidence-based decisions using rigorous modelling are so important in media and advertising thinking, and why the IPA Databank is so useful.

Are robotic judgements any better?  Not by much, according to this book.  Partly, of course, because they are based either on history (past judgements delivered by humans and therefore subject to bias) or on a set of rules (created by humans and subject to bias).  Machine learning is not as noise-free as it seems.

Winning an award is important and can help your career path, but your career also depends in other ways on the judgements of others.  Studies based on 360-degree performance reviews find that the variance in scores attributable to the person’s actual performance accounts for no more than 20-30% of the total.  The rest is system noise.  And the noise may have absolutely nothing to do with you: it could be down to a row the rater had at home, bad weather spoiling their plans for the evening, or, on the other hand, the fact that they have just had a generous review from someone else.
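To make that 20-30% figure concrete, here is a back-of-the-envelope simulation with made-up numbers (not the book’s data): each person has a “true” performance level, each rater adds their own noise, and we then ask how much of the spread in scores the true performance actually explains.

# A toy illustration (made-up numbers, not the book's data) of how little of
# the variance in review scores may be down to actual performance once
# rater noise is added.
import random

random.seed(1)
N_PEOPLE, N_RATERS = 200, 8

# Each person's "true" performance, plus noisy scores from several raters.
performance = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
scores = [p + random.gauss(0, 1.7) for p in performance for _ in range(N_RATERS)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

share = variance(performance) / variance(scores)
print(f"share of score variance explained by performance: {share:.0%}")

With the rater noise set at roughly 1.7 times the spread of real performance, performance explains only about a quarter of what the scores show, which is the territory the book describes.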

We can’t delegate career decisions to machines anyway as the authors write: “Creative people need space.  People aren’t robots… people need face to face interactions and values are constantly evolving.  If we lock everything down we won’t make space for this.”

What should we do to account for noise in decision making (aside from hoping for good weather and a winning football team)?

Kahneman, Sibony and Sunstein advocate appointing a “decision observer”: someone with no skin in the game whose job is to identify and point out bias.  This is common on major boards, in the form of non-executive directors and chairs, but non-existent in many reviews and on awards judging panels, and it should be welcomed (at least as a trial).

In addition, high-performing teams need, as a matter of course, to understand how to reach agreement when they disagree in a way that sets aside who is most forceful or charming.  We all need to develop a way of working through disagreements that is transparent in approach.  In Belonging: The Key to Transforming and Maintaining Diversity, Inclusion and Equality at Work we say this: “Understand that there are 3 kinds of disagreement: a) we are using different facts and evidence to reach our conclusions; b) we are interpreting the facts and evidence differently; c) we actually fundamentally disagree.”  We detail how to do this in chapter 6.

Start with this, and at least some of the noise in collective decision making will quieten, ensuring better outcomes for everyone.

What if we did less?

Tuesday, June 1st, 2021

The power of minus.

The power of “and” has been well documented.  Best-selling author Martin Sharp has spoken about the power of a “life of combinations”.  He exhorts people to replace “but” with “and” for a richer existence.

The recent business book The Power of And: Responsible Business Without Trade-Offs, by Edward Freeman, Bidhan Parmar and Kirsten Martin, argues that the business of business is “responsible action, not simply profit seeking”.

“Yes, and” is a creative technique born in improv comedy and translated into idea generation, where you build on each idea rather than dismissing anything.

But what if instead we did less?  If instead of “Yes, And” we said “No, subtract”?

Top designer Thomas Heatherwick (creator of the epic 2012 Olympic Cauldron and the lovable new Routemaster bus) thinks subtraction can be as powerful as addition, if not more so.  He recently said that in his design studio they always ask “Do we need this element?”, and that subtraction and simplification have a huge effect.  Less, for Heatherwick, is frequently much more.

When you are working on a project, critiquing and quality controlling, how often do you remove elements?  I would observe that most people’s tendency is to use their experience and smartness to ask for more, dig further and add work, rather than have an instinct to strip things away and do less.

It turns out that this is a quirk of human nature, and it is statistically substantiated.  The Economist points to a study in Nature which suggests that humans struggle with “subtractive” thinking.  When asked to improve something, anything from a Lego model to a golf course, our tendency is to add more things rather than strip things out.  In one test involving a Lego model, most people added to it and only between 2% and 12% of respondents removed bricks.  When asked to improve a piece of writing, 80% added more words and only 16% cut the article back.

The research shows that when there is an increased cognitive load (which could be the stress of a new business pitch or a big approval meeting), people are even less likely to remove features to improve the work.

In the spirit of keeping this article short, simple and free of extra features, I will end by saying that it is very useful to be conscious of this newly identified cognitive bias.  If your tendency is to add more complications and features, then don’t.  Ask instead what the minimum viable plan is (a key feature of Agile ways of working), and remember that when Dr Frasier Crane said “but if less is more, then think how much more more is”, he was almost certainly wrong.