AI, Bias, Riot Police and Recruitment

I recently read another article asking whether AI used in recruiting software introduces or removes bias. It’s a question I’ve seen written about, almost relentlessly, for the last few years, and I’m going to be bold enough to go out on a limb and say that I actually know the answer. I’ve got this one, folks. It’s doing both at the same time. We keep presenting it as a debate, but there is no real argument to be had: it’s doing both at the same time.

We like to think that ‘a thing does a thing’ – that there is healthy food and unhealthy food. That our world can broadly be reduced, with clarity, to whether something is moving us towards or away from our goals. The world simply refuses to engage with us on those terms. In the article the author looks at several well-known providers of recruitment solutions and highlights how incorrect or uncomfortable decisions can be reached. The author is absolutely correct that this is the case. However, the overall impact on fairness could be hugely different from the impact on individuals.

Do police in riot gear make you feel safe? There’s always a danger of crime, so riot police should feel like a good thing. They are there to protect the public, like normal police, and they are even better equipped and trained to deal with trouble. But my guess is that if you turned the corner and saw 500 riot police, the net positive impact of the above might be lost on you as an individual. You might feel uneasy. Similarly, if you are misidentified as a criminal by those riot police, then I’m guessing the argument that the area is net safer probably isn’t a clincher for you. I’m guessing you are significantly more negatively affected than before.

We know we have bias in recruitment when it is carried out only by humans. We know that we have a range of biases that go into recruitment decisions. We are immensely flawed and biased software. We get this stuff wrong. Therefore we need to accept the reality of the current solution – and it seems smarter to attempt to use software to correct for that bias than to correct for a combination of evolutionary and societal flaws every time we make a decision. It would, in fact, probably be the height of arrogance to believe that we could do so.

Software can probably make things better (overall), but the problem is that we are attracted to and sold ‘solutions’. And nobody likes a solution that doesn’t actually solve the problem. In this case the solution makes things a bit better overall – and possibly much better over time – but still has the same kind of flaws in it as when you started. And that is hugely problematic, as Earl Wiener identified with a series of ‘laws’ addressing the problem of automation in aviation.

I’ll pick out some and then leave you to go back to vendor selection.

17. Every device creates its own opportunity for human error.

18. Exotic devices create exotic problems.

19. Digital devices tune out small errors while creating opportunities for large errors.

20. Complacency? Don’t worry about it.

22. There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.

23. Invention is the mother of necessity.

25. Some problems have no solution. If you encounter one of these, you can always convene a committee to revise some checklist.

28. Any pilot who can be replaced by a computer should be.

29. Whenever you solve a problem you usually create one. You can only hope that the one you created is less critical than the one you eliminated.

Fairness, Silos & The Proximity Problem

There’s a type of discrimination that we don’t talk about enough.

The Proximity Problem is the term I use to describe a very human problem that dogs the efficiency and productivity of organisations. I’d love to see research on it, because to the best of my knowledge, it’s a guilty secret. It’s the thing that leaders see elsewhere – but would never admit to themselves. It’s a genuine ethical challenge, but rarely framed as such.

The problem at hand is that, for all the talk of breaking down organisational silos and flattening structures, leaders still tend to protect those closest to them. Or, to put it more bluntly, leaders discount the emotional cost and impact of their decisions – action and inaction – on people who aren’t in their immediate sphere. As an equation it looks a bit like this:

(contact time with senior team + amount they know about your family) x closeness in reporting line = level of undue favour

Or, if you are lower in the org it might look like this:

Number of levels between you and senior team x number of times your peers have been mentioned positively in a meeting at a senior level = chance of redundancy or poor bonus or missing out on advancement
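For the spreadsheet-minded, the two tongue-in-cheek heuristics above can be sketched as code. Every name, unit and weighting here is purely illustrative – this is the blog post’s joke formalised, not a real model of anything:

```python
# Illustrative sketch of the two "Proximity Problem" heuristics above.
# All inputs and units are invented for the example.

def undue_favour(contact_hours: float,
                 family_facts_known: int,
                 reporting_line_closeness: float) -> float:
    """(contact time with senior team + amount they know about your
    family) x closeness in reporting line = level of undue favour."""
    return (contact_hours + family_facts_known) * reporting_line_closeness

def career_risk(levels_from_senior_team: int,
                peer_positive_mentions: int) -> float:
    """Levels between you and the senior team x positive mentions of
    your peers at senior level = chance of redundancy, poor bonus or
    missing out on advancement."""
    return levels_from_senior_team * peer_positive_mentions
```

The point of writing it out is only to show the asymmetry: the first score rises with closeness, the second with distance.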

The Proximity Problem is a contributory reason, in my experience, to why it’s easier to fail and remain in certain parts of an organisation than others. For instance – we all know it tends to be better to be a direct report to a senior leader when it comes to bonus time.

Now, some of you may be thinking ‘Congratulations, Sherlock, of course senior people get more money, that’s not news’, but the point I’m trying to make is that it is much harder for them to get less money. Because if they are allocated less bonus or a lower performance rating, that:

i) necessitates a conversation with them to deliver that news. And the person delivering it knows their family and has possibly spent time bonding with them

ii) necessitates the senior leader confronting the issue of either poor hiring or poor performance management of the individual

So taking 1 percent off the bonuses of a group of people that you don’t know the names of and only pass in the corridor beats taking 2 percent off the person you are trapped in a room with twice a week.

When it comes to organisational changes the Proximity Problem comes to the fore again. You’ve had a raft of complaints about a team member – but you tend to believe the team member over people you don’t know as well. Why? Well, you don’t want your team member to fail and if they were to fail that would be a host of tough conversations. So you’d rather discount the views and emotions of people you haven’t met. It’s very human and it’s very costly for those not in your team. It’s why any significant clash between departments tends to end up with managers saying ‘there is fault on both sides’ – but then not dealing with the fault that sits on theirs.

It’s part and parcel of some really narrow messaging that has been given around leadership through the years:

“Have the people in your team’s backs”
“Fight their corner”
“Put your people first”

They seem like the things leaders should do, but leading is about the organisation, not just your team. And that is where we fail.

What does the failure look like? It looks like cynicism about pay processes, it looks like people feeling that failure is tolerated for some and not for others and it looks like a fundamentally undermined culture lacking in trust and lacking in proportionate action and fairness.

And yet that happens almost everywhere you look. So if you are leading people then give them a role model who considers no member of the organisation more easily disposable than others, who anyone in the organisation can look to for a fair hearing and who would never be accused of playing favourites. Give them a great human being.

If you work in HR and care about culture, these are some of the hardest conversations to have: conversations with senior stakeholders, directly questioning their decisions.

But they are also some of the conversations most required to make organisations fair and productive.

Understanding why your directors excuse behaviour or performance from their immediate team that they wouldn’t tolerate from elsewhere cuts to the heart of one of the core, unpleasant yet understandable, biases of leadership.

The Proximity Problem.

Please note: the author is less guilty of this than most over the course of his career (and his teams will testify to that), but by no means innocent either. I am, at least, aware of it. Also I’ve just seen this by Mark Eltringham over at Workplace Insight. Very much worth a read.

The Surprising Truth About Obvious Truths

I regularly talk and write about the need for a more evidence-based approach to creating work that works better for more people. Less guff. There is too much faddishness and too many poorly thought out and poorly joined-up initiatives. I’m therefore naturally grumpy when people attempt to sell solutions packaged with overclaims or rubbish evidence to back them up (‘Our product has used neuroscience to improve 107% of orgs we worked with’). I’m the one that says ‘prove it’, because not enough time is spent really reflecting on what is most likely to work. We rush to action.

That said… it’s worth making sure that we don’t throw the baby out with the bathwater when taking this approach, and I’ve seen some of that recently. In an effort to, rightly, avoid overclaims it is easy to undermine legitimate claims in the same space. Or maybe more obvious truths just become collateral damage. I thought I’d share a couple of examples:

1. Growth mindset

Carol Dweck’s work on growth mindset (the idea that a belief in possible improvement is central to performance, expanded on in Bounce by Matthew Syed) has come in for criticism. Some of the criticism centres on falsifiability (if any failure can be explained away as the technique not being applied properly, then the claim is hard to disprove) and some on applicability (much of the work in this area has been done with children, not in work environments). And there is absolutely validity in the criticism. The challenge, however, is that it would be perverse to believe that a willingness to persevere and practise isn’t strongly linked to the ability to improve. If you remove the research and just think it through, it must to some extent be true. It would be theoretically possible to fundamentally disagree with the research (methodology or conclusions) and still hold that practice makes perfect (or at least better) and that people who give up aren’t likely to get better at things. It’s a surprisingly obvious truth.

2. Engagement

For years we’ve been told that better levels of engagement guarantee business success, or are inescapably linked to it. But the academic evidence for this is weak. Rob Briner provides an excellent overview here, and was commissioned by Engage For Success to do an evidence review which concluded… there isn’t a lot of credible evidence for what are often quite incredible claims (made by a host of providers). Yet having said that, if you ignore all of the overclaims and pseudoscience and reduce the engagement case from something akin to organisational magic to something more mundane, then the claim is simply that ‘people who want your organisation to succeed are more likely to contribute than people who really don’t care’. That seems relatively uncontroversial. To what extent it makes a difference might be debatable, but not that a difference exists. The packaging is the problem, not the potentially mundane but important truth.

Where people are making claims we should examine them, but we should also remember that our own experiences and those of others are a type of evidence. And a valid type of evidence. A little more joined-up common sense and a little less ‘studies show organisations that do one thing succeed’ might get us a long way.

If you are interested in taking a more evidence based approach then I’d recommend