Wednesday, August 17, 2011

The Perils of Proportions As Proxies for Performance

Suppose you set a minimum performance standard: you want your employees to do at least 98% of their work correctly. So for each person you select a sample of 50 items of their completed work to check, as a proxy for how they are actually performing across all of their work.

In Worker A's sample less than 98% are correct. So you obviously have a performance problem, right?

Wrong!

If across all of the work they are doing Worker A is actually performing right at the standard (98% correct), then statistically there is a 26.4% chance that the sample will show that they aren't meeting the standard! And even if they are getting 99% correct, there is still an 8.9% chance that the sample will show that they aren't meeting the standard! In other words, a false positive.

Now consider Worker B, who across all of the work they are doing is only getting 96% of it right. Statistically, there is a 40% chance that the sample will show that they are meeting the standard! And even if their actual performance is as low as 94% correct, there is still a 19% chance that the sample will show that they are meeting the standard! In other words, a false negative.
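For the statistically inclined, these figures fall straight out of the binomial distribution. Below is a minimal sketch in Python (standard library only) that reproduces them; it assumes the pass rule is "at least 98% of the sampled items correct", i.e. at least 49 of the 50.

```python
from math import ceil, comb

def sample_fails_standard(true_accuracy, n=50, standard=0.98):
    """Probability that a sample of n items makes the worker look
    below the standard, given their true per-item accuracy.
    The sample 'passes' when at least ceil(standard * n) items are
    correct (49 of 50 under the defaults)."""
    min_correct = ceil(standard * n)
    pass_prob = sum(comb(n, k) * true_accuracy**k * (1 - true_accuracy)**(n - k)
                    for k in range(min_correct, n + 1))
    return 1 - pass_prob

# Worker A: actually at or above the standard, yet flagged (false positives)
print(f"{sample_fails_standard(0.98):.1%}")      # ~26.4%
print(f"{sample_fails_standard(0.99):.1%}")      # ~8.9%

# Worker B: actually below the standard, yet passes (false negatives)
print(f"{1 - sample_fails_standard(0.96):.1%}")  # ~40.0%
print(f"{1 - sample_fails_standard(0.94):.1%}")  # ~19.0%
```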

What these two examples show is that, with even a moderately large number of employees, sampling like this makes it almost certain that you will end up wasting time managing the performance of someone who is already achieving the standard you have set, while not managing the performance of someone who isn't.

So what's the answer?

Some might say: increase the sample size. But that is the wrong answer for two reasons.

Firstly, increasing the sample size won't remove the problem; it will only reduce the risk of it happening, and for a large employee base the problem would still continue, albeit to a reduced degree.
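To put a rough number on this, here is a small extension of the earlier sketch, showing how slowly the false-positive risk falls for a worker whose true accuracy is 99% (still assuming the pass rule scales as "at least 98% of the sample correct"):

```python
# Reuses sample_fails_standard() from the earlier sketch.
for n in (50, 100, 200):
    print(f"n = {n}: {sample_fails_standard(0.99, n=n):.1%}")
# Falls only gradually: ~8.9%, ~7.9%, ~5.2% -- reduced, never removed.
```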

And secondly, and perhaps more importantly, if you increase the sample size then you have to divert more resources from doing the work to checking the work which reduces productivity and increases your costs.

My answer is different, because the above approach is fundamentally flawed: there should be no minimum acceptable standard of error. If your workers process 250,000 items of work a year, then even an error rate of 1% means that 2,500 of your customers are experiencing problems of one kind or another. The appropriate standard to aim for is ZERO errors.

But that doesn't mean hitting your employees over the head for every error they make. What it means is having a big enough sample size to capture the most common errors and then making the necessary adjustments, whether that be training staff, improving systems or whatever else it takes to eliminate those sources of error.
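How big is "big enough"? One back-of-the-envelope way to frame it (my framing, not a formal standard): if a type of error occurs in a fraction r of all items, a sample of n items contains at least one instance with probability 1 - (1 - r)^n, so catching it with probability q needs n >= ln(1 - q) / ln(1 - r). A minimal sketch:

```python
from math import ceil, log

def sample_size_to_catch(error_rate, detect_prob=0.95):
    """Smallest n such that a random sample of n items includes at
    least one instance of an error occurring at error_rate, with
    probability detect_prob: solves 1 - (1 - r)**n >= q for n."""
    return ceil(log(1 - detect_prob) / log(1 - error_rate))

for r in (0.05, 0.02, 0.01, 0.005):
    print(f"error type in {r:.1%} of items: check {sample_size_to_catch(r)} items")
# 59, 149, 299, 598 items respectively for 95% detection
```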

In other words, checking the quality of work isn't about punishing staff or setting some arbitrary level of acceptable error. It's not as if anyone sets out each day to deliberately make errors.

Instead, it's about looking at the errors that have occurred and taking action to see that they are not repeated. And where one person's error is due to lack of information, misunderstood instructions or lack of training, the chances are that other people are making similar errors which may not have been detected in their particular sample of work. So it's also about sharing more widely the information about what errors are occurring.

In other words, it's not about the past, it's about the future and how your workers can do better in their jobs and your customers can get better service.

Tuesday, August 16, 2011

Black Box or Diagnosis: how do you improve performance?

Where an employee is not meeting some kind of performance target, there are at least two approaches that can be taken.

One approach could be called the 'Black Box' approach. In this approach, the manager isn't really concerned with why the person isn't performing; they are only interested in the fact that the person isn't performing. So if, say, the worker is expected to process one item of work on average every 9 minutes and they are currently taking 12 minutes, the person is monitored and pressured to work faster, without any attempt being made to determine why they are not meeting the target. It assumes that the task is a single, indivisible unit: there are inputs, there are outputs, and what happens in between is solely the responsibility of the worker. If the worker just applies themselves, then they can do better.

A second approach is the Diagnostic Approach.

It doesn't make the same assumptions as the 'Black Box' approach. It assumes that the task is made up of elements, that a person may be more or less skilled in each of those elements, and that workers may differ both in their skill level for each element and in the time it takes them to complete it. It assumes that two people may apply the same effort and yet, because of their individual differences, achieve different levels of performance. And it assumes that by identifying what it is that the worker finds difficult, and by providing support to the worker in developing their skills in that element, their performance can improve.

For instance, suppose a worker is assigned a task which requires them to:
  • use a computer system to access and process the work they are assigned
  • read copies of letters sent by clients and identify the issues they are raising
  • make a decision regarding the client's case based on the information provided
  • compose a letter to the client advising them of the decision and addressing the issues raised
The worker might have any of the following problems in completing the task:
  • they may not be very confident in using the system or may have poor keyboard skills
  • their reading-comprehension skills may be poor
  • they may find it hard to make the decision due to lack of confidence, or due to weak knowledge of the guidelines or other information necessary for making it
  • they may have trouble with their written communication skills
In the Diagnostic Approach, a manager would talk to the worker, watch them process and find out what problem the worker is actually experiencing in processing the work. And then they would target support to helping the worker to improve in that specific skill.
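As a toy illustration of that diagnosis (the element names and timings here are entirely hypothetical), the manager is in effect breaking a 12-minute average down by element and comparing it with a 9-minute benchmark to see where the time is actually going:

```python
# Hypothetical per-element timings in minutes for the task above.
benchmark = {"access and process in system": 2,
             "read letter and identify issues": 2,
             "make decision on the case": 3,
             "compose letter to client": 2}    # 9 minutes in total
worker    = {"access and process in system": 2,
             "read letter and identify issues": 3,
             "make decision on the case": 3,
             "compose letter to client": 4}    # 12 minutes in total

# The diagnosis: which elements account for the extra 3 minutes?
for element, target in benchmark.items():
    gap = worker[element] - target
    if gap > 0:
        print(f"{element}: {gap} min over benchmark -> target support here")
```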

The 'Black Box' approach is equivalent to talking to someone who doesn't speak English and expecting that they will somehow understand if you speak louder or more slowly. The Diagnostic Approach on the other hand is equivalent to identifying that the problem is lack of English skills and either finding an interpreter or teaching the person basic English.

It doesn't take much effort to speak louder or more slowly, but it doesn't achieve any worthwhile results either. Addressing the actual skills deficit on the other hand may take longer but it results in genuine improvement.

More importantly it empowers the worker since they now see that it lies within their power to improve if only they can identify their barriers to improvement.

Monday, August 15, 2011

Small Bets

To get better you have to take risks. There is no other path.

You have to try things where you can't necessarily predict the outcome, evaluate your results and then either kill or modify the things that didn't work as well as expected.

One of the problems in many organisations, particularly public sector organisations, is that they only want to bet on sure things and no-one wants to be associated with something that fails. So nothing truly original is ever tried and those things that are tried are never evaluated as failures and killed off as they deserve to be. Instead they become institutionalised.

There is almost a 1984-style doublethink that goes on, where things that have clearly failed to live up to expectations are instead lauded as outstanding successes, even while the average worker looks on like the small child in The Emperor's New Clothes, wondering where the magnificent clothes that the management are crowing about actually are. This does nothing for either the effectiveness of the organisation or the credibility of management.

In Little Bets, Peter Sims argues that organisations need to make what he calls 'little bets', small experiments which may well fail, but which may well also point us in the direction of what will work.

In his own words,
...little bets are concrete actions taken to discover, test, and develop ideas that are achievable and affordable. They begin as creative possibilities that get iterated and refined over time, and they are particularly valuable when trying to navigate uncertainty, create something new, or attend to open-ended problems. When we can't know what's going to happen, little bets help us learn about the factors that can't be understood beforehand. (p.8)
The problem is that when new processes are introduced in organisations, they are often presented as faits accomplis rather than as experiments or works-in-progress. They are prematurely frozen before anything has actually been learned from them, and promulgated as 'this is the way things are going to be from now on' rather than 'this is how things are going to be for the time being, until we have learned whether it works as expected'.
And once they are cemented in as part of the status quo, they aren't revisited to see whether they have even been cost-effective in achieving the desired objective.

So what is the answer?

Firstly, I think that honest evaluation needs to be built into the process, and that includes what has been learned from what has been attempted and what the obstacles to success were, particularly unforeseen obstacles and obstacles created by individuals. In other words, instead of a whitewash there should be scrutiny to see what can be learned, either to improve the existing process or to underpin future attempts at change.

Secondly, a kill date needs to be set upfront. That is, unless the new process has shown demonstrable benefits that exceed its costs by a particular date, it will be killed. This avoids things that don't work becoming institutionalised and ensures that there has to be some justification for the process to continue.

Thirdly, provided the rationale and data on which a new process was based were properly documented, there should be no stigma attached to the failure of a new process. The only stigma should attach to those who try nothing new (timidity or inertia) and to those who try new things without adequate analysis (foolhardiness or laziness).

If an organisation is prepared to be satisfied with mediocrity then it can continue to try the same old conventional no-risk solutions, solutions that have low risk of failure but zero risk of outstanding success.

Or an organisation can try making small bets, learn from the results, and advance towards excellence.

Friday, August 5, 2011

Inert knowledge - when what is 'learned' isn't learned

When a person studies for a coursework degree, then in order to obtain the degree they need to demonstrate that they have mastered some minimal proportion of the content of each subject that counts towards it. Universities, being the conservative institutions that they are, use things such as essays or exams as proxy measures for whether a student has achieved an acceptable level of mastery. And those teaching such courses have not necessarily put the concepts that they are teaching to work in the real world. (I once argued with regard to a particular university that if what its business school taught was in fact correct, then why was it not applied within the university itself; or conversely, if it wasn't correct, then why was it being taught?)

The problem is that what these proxy measures primarily show is that the student is able to repeat back what they have 'learned' and use terminology in an appropriate way. What they don't show is whether the student actually understands what they have learned well enough to be able to recognise when it is applicable, how it must be modified for varying contexts and how it should be applied in a specific context.

And because of this, we find information which was poorly understood in the first place being misapplied to situations in the real world. When knowledge has only been acquired superficially and cannot be applied to real problems, it is sometimes referred to as 'inert knowledge', which means effectively 'knowledge which does no work'. However, the problem runs deeper than this, since misapplication of what has been misunderstood can actually be worse than doing nothing at all, because it can result in costs with no corresponding benefits. And the fact that those doing so have passed a course may give them an unjustified confidence in the correctness of their actions.

Coupled with this is the issue that people are more likely to remember what accords with their preconceptions. So a person with an authoritarian personality will most likely remember those facts, concepts and theories that they were exposed to that support their own world view. In other words, even the little they learn may be biased in a particular way.

The problem is made worse by the fact that when people gain a degree such as an MBA, they may feel that they can now rest on their laurels and simply apply what they have learned. However, reality is dynamic: societies change, the ways in which organisations operate change, and shifts in the external environment or in cultural values may throw up challenges unanticipated when the person was studying. So not only may knowledge be inert, it may also become stale and outmoded over time.

While a tertiary education, and particularly a post-graduate education, is supposed to imbue the student with a passion for life-long learning, in many cases the learning stops as soon as the piece of paper is awarded and the person enters the 'real' world, where apparently much of what they learned doesn't actually apply, or where the person fails to make an appropriate match between what they have learned and the particular situations they face. Theoretical models which look so great on paper may not fare so well in the trenches, where multiple problems may be inextricably entwined, and what seemed so neat and clean becomes messy and muddy.

An education is only an education if it makes a difference to behavior, if it exposes us to ideas which we may see merit in (even if we initially disagree with them) and if it fills us with a sense of the dynamic and contingent nature of knowledge so that we don't ever feel that we have reached final certainties, but instead hold our views lightly, ever willing to change them if faced with new information or a new reality.

In other words, an education should endow us with intellectual humility. No-one should think that having a piece of paper turns them into some sort of genius or that the views of a person without that piece of paper are somehow inferior.

After all, virtually none of the richest business people in the world have MBAs or other tertiary qualifications. A credential only points to what you may have learned sometime in the past, not what your value to a business is now. Credentials are great, especially for opening doors for someone just entering the world of business, but what matters in the end is achievement.

Monday, August 1, 2011

The Workaround Audit - Identifying where problems are already known to exist

In many organisations, senior managers try to distance themselves from the 'pain' of problems that they have the power, resources and authority to solve, pushing them down through the lower levels of the organisation until they reach someone who has no choice but to try and cope with the fall-out of a problem.

So what the person lower down does is develop a workaround: a way to circumvent or ameliorate the problem which is less efficient or effective than a proper solution would be, but which works well enough to get the job done (albeit at a continuing cost to the organisation). Where an organisation has not previously had any commitment to continuous improvement, layers of workarounds may have built up over years.

However, one of the problems with workarounds is that they are only as good as the person who conceived them. Under pressure to get the work through, the person is just as likely to settle on a quick-and-dirty approach that may not even be the best of the feasible workarounds that could have been considered. As a result, there are costs to the organisation not just from the workaround itself, but also from the fact that the specific workaround is more costly than it needs to be.

The problem can be compounded when senior managers who have chosen to remain oblivious of the problem the workaround was designed to solve then try to eliminate the workaround as a source of waste without putting a permanent solution or even any solution in place.

One of the ways in which an organisation can find rapid improvements is to do a workaround audit. That is, to catalogue every workaround that exists in the business: why it exists, what resources it consumes on an ongoing basis, what its existence has historically cost, and whose problem it was to solve in the first place.

Once this is known, you can list the unsolved problems in the business that are costing resources and then start to look for permanent solutions. If a permanent solution isn't available, a second line of attack would be to see if there is a more efficient workaround that could do the job better.
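As a toy sketch of what such a catalogue might look like (the entries and costs are entirely hypothetical), the audit boils down to a table that can be sorted by ongoing cost, so the most expensive unsolved problems get attention first:

```python
# Hypothetical catalogue entries; the fields mirror the audit above.
workarounds = [
    {"what": "re-key orders into the legacy system",
     "why": "no interface was ever built", "annual_cost": 42_000,
     "whose_problem": "IT programme manager"},
    {"what": "weekly manual reconciliation spreadsheet",
     "why": "duplicate customer records", "annual_cost": 18_500,
     "whose_problem": "data quality manager"},
    {"what": "print, sign and re-scan approvals",
     "why": "workflow tool has no delegation", "annual_cost": 7_200,
     "whose_problem": "operations manager"},
]

# Prioritise permanent solutions by what each workaround costs per year.
for w in sorted(workarounds, key=lambda w: w["annual_cost"], reverse=True):
    print(f"${w['annual_cost']:>7,}/yr  {w['what']} "
          f"(unsolved: {w['why']}; owner: {w['whose_problem']})")
```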

However, we also need to consider why the workarounds existed in the first place. And this comes down to the connection between pain and authority. The organisation needs to maintain a memory of what problems have been raised with senior managers and what actions, if any, they have taken to solve them. Even where the problem is 'delegated', the onus should remain on the responsible senior manager to see that the problem is solved. And if it is solved by someone lower in the hierarchy putting in place a workaround, then the senior manager should have to justify why a permanent solution wasn't put in place and why they abdicated responsibility.

Pain and authority should be bound together so that something is done rather than things being swept under the rug, where eventually they build up enough for people to start tripping over them.

The problem is that the power to compel accountability and the desire to evade accountability are often in the same person's hands. So short of regime change workarounds are likely to remain for some time to come.