Friday, December 30, 2011

Tiger Rocks, Rain Dances and Cautious Cats

The cat, having sat upon a hot stove lid, will not sit upon a hot stove lid again. But he won't sit upon a cold stove lid, either.
~ Mark Twain

When a manager has had to deal with a difficult situation, or often merely cannot stand the thought of having to deal with an imagined one, they may put processes in place to ensure that it never happens, or never happens again.

In doing so, they frequently fail to consider:
  • how the situation arose in the first place
  • how common it is or whether it is even likely to recur
  • whether the precautions they have put in place are even likely to be effective, and
  • what the cost to their organisation is in time, money and efficiency in applying a general process to an unlikely event.
Like Twain's cat, they are more intent on not sitting on a hot stove again than on thinking through how often the stove is actually lit. In some cases, what they put in place is merely a 'tiger rock'. To understand what I mean by this, come with me on a detour through the world of the Simpsons:

Homer: Not a bear in sight.  The Bear Patrol must be working like a charm.
Lisa: That's specious reasoning, Dad.
Homer: Thank you, dear.
Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn't work.
Homer: Uh-huh.
Lisa: It's just a stupid rock.
Homer: Uh-huh.
Lisa: But I don't see any tigers around, do you?
        [Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.
        [Lisa refuses at first, then makes the exchange]
~ The Simpsons (Episode 3F20 http://www.snpp.com/episodes/3F20.html )

Maybe there is some likelihood of a tiger attack; perhaps a tiger could have escaped from a zoo. But building processes around the least likely possibility is a recipe for waste. And once a manager has their 'tiger rock', which may in actuality be little more than a security blanket, they may hold onto it like grim death.

They may then be afraid to discontinue the process in case an undesired consequence occurs. Like Lisa Simpson's 'Tiger Repelling Rock', an ineffectual and unnecessary process is continued at a cost in time, money, and efficiency to the organisation. For example, a manager may believe that unless they constantly monitor their workers then the workers will slack off and work won't get done. Yet in fact their workers might actually get more work done if someone wasn't constantly looking over their shoulder. And conversely the constant monitoring may actually promote the behavior that it is intended to reduce, since the workers may live down to the low expectations of the manager.

It isn't rational to put a procedure in place to prevent something that is extremely unlikely to happen and then to conclude that, because it doesn't happen, the procedure prevented it.

There are three consequences of doing so:
  • the organisation carries the cost of carrying out an unnecessary procedure
  • the manager may become complacent that their 'tiger rock' will prevent the problem and so become less vigilant,
  • in the rare event that the feared event happens, the procedure doesn't prevent it anyway, any more than Lisa's rock would keep away a tiger that escaped from a zoo.
So it may provide a false sense of security. The fact that we do procedure A and event B doesn't happen, does not mean that procedure A prevented event B. There are an indefinitely large number of things that didn't happen - did procedure A prevent all of those things as well?

We see something similar in corporate strategic planning. "Everyone else has one, so we must have one, otherwise we may be criticised". However, as Richard Rumelt points out in his book Good Strategy Bad Strategy, many if not most strategic plans fail to incorporate sound strategic principles. As a result, they become what Russell Ackoff refers to as 'rain dances': people dancing around thinking they are doing something important when what they are doing is totally ineffectual. And then congratulating themselves when the rain comes, as ultimately it must.

What tiger rocks and rain dances have in common is a failure to consider cause and effect and a failure to consider costs.

In both cases, putting in place and maintaining ineffective processes while superstitiously believing in their efficacy takes resources away from work that might achieve more valued outcomes, while often promoting the occurrence of what is most feared.

Monday, September 12, 2011

Change versus Improvement

In the media, I often read or hear about tax 'reform', education 'reform' or health care 'reform' etc. And whenever I see this I think that whoever is pushing this latest reform is confusing their intention or desired outcome with the process. Whatever they are doing isn't a reform unless it results in an improvement by some measure. Until then it is simply a change, no more, no less. And change in itself is not intrinsically valuable. Change always involves costs, so unless there is a counter-balancing benefit, it is actually bad.

This is equally true in business and in the public sector. Managers often think that there is something praiseworthy just in doing 'something'. A new manager commences, thinks they have to prove themselves, makes a lot of changes and then congratulates themselves on how 'proactive' and 'innovative' they are, regardless of the costs to the business or the paucity of benefits.

Somehow organisations always manage to fill their strategic plans with dozens if not hundreds of different actions (i.e. changes) regardless of the need for them. Activity replaces improvement as a measure of value-added.

However, I contend that the best manager is the one who achieves the best results with the minimum of change and whose major changes are the elimination of unnecessary activity. But such a manager would be unlikely to be rewarded by the organisation for which they work. No-one is likely to be impressed if there is no actual activity they can point to which has led to the results they are getting.

This problem is exacerbated by the fact that there is often no measurement of the baseline costs and benefits, so if a manager makes a change there is nothing to compare it with to determine whether the change was beneficial or damaging. This means that managers with a propensity for changing things can often point to lots and lots of changes they have made, but would be hard put to demonstrate any actual benefits from those changes. But at least their own bosses are impressed by the flurry of activity.

I have said elsewhere that a prerequisite for excellence in any activity is stability. It is difficult to become a master of anything that is constantly changing, where part of the effort that you could be putting into doing is diverted into learning the latest changes. This doesn't rule out change completely. What it does highlight is that change needs to be judicious: it needs to be well thought out, it needs to be properly implemented and, finally, if it doesn't yield the expected results it needs to be scrapped.

It is only when changes are made in this way that they are likely to result in improvements rather than just pointless activity.

Wednesday, August 17, 2011

The Perils of Proportions As Proxies for Performance

Suppose you set a minimum performance standard requiring your employees to do at least 98% of their work correctly. So for each person, you select a sample of 50 items of work they have done to check, as a proxy for how they are actually performing across all of their work.

In Worker A's sample less than 98% are correct. So you obviously have a performance problem, right?

Wrong!

If across all of the work they are doing they are actually meeting the standard then statistically there is a 26.4% chance that the sample will show that they aren't meeting the standard! And even if they are getting 99% correct, there is still an 8.9% chance that the sample will show that they aren't meeting the standard! In other words, a false positive.

Now consider Worker B who across all of the work they are doing is only getting 96% of their work right. Then statistically there is a 40% chance that the sample will show that they are meeting the standard! And even if their actual performance is as low as 94%, there is still a 19% chance that the sample will show that they are meeting the standard! In other words, a false negative.
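These figures follow directly from the binomial distribution. Here is a quick sketch in Python of how they can be reproduced, assuming as above a sample of 50 items and a 98% standard (the function name is mine, for illustration):

```python
from math import comb, ceil

def p_sample_meets_standard(p_correct, n=50, standard=0.98):
    """Probability that a random sample of n items shows at least
    `standard` proportion correct, given the worker's true accuracy
    p_correct. This is a binomial tail probability."""
    k_min = ceil(standard * n)  # minimum correct items needed to 'pass' the sample
    return sum(comb(n, k) * p_correct**k * (1 - p_correct)**(n - k)
               for k in range(k_min, n + 1))

# Worker A truly meets the 98% standard, yet...
print(1 - p_sample_meets_standard(0.98))  # ~0.264: sample says they fail
print(1 - p_sample_meets_standard(0.99))  # ~0.089: even at 99% accuracy

# Worker B is truly below standard, yet...
print(p_sample_meets_standard(0.96))      # ~0.400: sample says they pass
print(p_sample_meets_standard(0.94))      # ~0.190: even at 94% accuracy
```

Note that with n = 50 a worker 'passes' the sample only with 49 or 50 correct items, so the outcome hinges on one or two items either way, which is exactly why the false positive and false negative rates are so high.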

What these two examples show is that if you have a moderately large number of employees then it is almost certain that by sampling like this you would end up wasting time managing the performance of someone who is already achieving the standard you have set, while not managing the performance of someone who isn't.

So what's the answer?

Some might say: increase the sample size. But that is the wrong answer for two reasons.

Firstly, increasing the sample size won't remove the problem; it will only reduce the risk of it happening. For a large employee base the problem would still continue, albeit to a reduced degree.

And secondly, and perhaps more importantly, if you increase the sample size then you have to divert more resources from doing the work to checking the work which reduces productivity and increases your costs.

My answer is different, because the above approach is fundamentally flawed: there should be no minimum acceptable standard of error. If your workers process 250,000 items of work a year, then even an error rate of 1% means that 2,500 of your customers are experiencing problems of one kind or another. The appropriate standard to aim for is ZERO errors.

But that doesn't mean hitting your employees over the head for every error they make. What it means is having a big enough sample size to capture the most common errors and then making the necessary adjustments, whether that be training staff, improving systems or whatever else is needed to eliminate those sources of error.

In other words, checking the quality of work isn't about punishing staff or setting some arbitrary level of acceptable error. It's not as if anyone sets out each day to deliberately make errors.

Instead, it's about looking at the errors that have occurred and taking action to see that they are not repeated. And where one person's error is due to lack of information, misunderstood instructions or lack of training, the chances are that other people are making similar errors which may not have been detected in their particular sample of work. So it's also about sharing more widely the information about what errors are occurring.

In other words, it's not about the past, it's about the future and how your workers can do better in their jobs and your customers can get better service.

Tuesday, August 16, 2011

Black Box or Diagnosis: how do you improve performance?

Where an employee is not meeting some kind of performance target, there are at least two approaches that can be taken.

One approach could be called the 'Black Box' approach. In this approach, the manager isn't really concerned with why the person isn't performing; they are only interested in the fact that the person isn't performing. So if, say, the worker is expected to process one item of work on average every 9 minutes and they are currently taking 12 minutes, the person is monitored and pressured to work faster, without any attempt being made to determine why they are not meeting the target. It assumes that the task is a single indivisible unit. There are inputs, there are outputs, and what happens in between is solely the responsibility of the worker. If the worker just applies themselves then they can do better.

A second approach is the Diagnostic Approach.

It doesn't make the same assumptions as the 'Black Box' approach. It assumes that the task contains elements, and that workers may differ both in their skill level for each element and in the time it takes them to complete it. It assumes that two people may apply the same effort and yet, because of their individual differences, achieve different levels of performance. And it assumes that by identifying what it is that the worker finds difficult, and by providing support to the worker in developing their skills in that element, they can improve their performance.

For instance, suppose a worker is assigned a task which requires them to:
  • use a computer system to access and process the work they are assigned
  • read copies of letters sent by clients and identify the issues they are raising
  • make a decision regarding the client's case based on the information provided
  • compose a letter to the client advising them of the decision and addressing the issues raised
The worker might have any of the following problems in completing the task:
  • they may not be very confident in using the system or may have poor keyboard skills
  • their reading-comprehension skills may be poor
  • they may find it hard to make the decision due to lack of confidence or a weak knowledge of the guidelines or other information necessary for making the decision.
  • they may have trouble in their written communication skills
In the Diagnostic Approach, a manager would talk to the worker, watch them process and find out what problem the worker is actually experiencing in processing the work. And then they would target support to helping the worker to improve in that specific skill.

The 'Black Box' approach is equivalent to talking to someone who doesn't speak English and expecting that they will somehow understand if you speak louder or more slowly. The Diagnostic Approach on the other hand is equivalent to identifying that the problem is lack of English skills and either finding an interpreter or teaching the person basic English.

It doesn't take much effort to speak louder or more slowly, but it doesn't achieve any worthwhile results either. Addressing the actual skills deficit on the other hand may take longer but it results in genuine improvement.

More importantly it empowers the worker since they now see that it lies within their power to improve if only they can identify their barriers to improvement.

Monday, August 15, 2011

Small Bets

To get better you have to take risks. There is no other path.

You have to try things where you can't necessarily predict the outcome, evaluate your results and then either kill or modify the things that didn't work as well as expected.

One of the problems in many organisations, particularly public sector organisations, is that they only want to bet on sure things and no-one wants to be associated with something that fails. So nothing truly original is ever tried and those things that are tried are never evaluated as failures and killed off as they deserve to be. Instead they become institutionalised.

There is almost a 1984-style doublethink that goes on, where things that have clearly failed to live up to expectations are instead lauded as outstanding successes, even while the average worker looks on like the small child in The Emperor's New Clothes, wondering where the magnificent clothes are that the management are crowing about. This does nothing for either the effectiveness of the organisation or the credibility of management.

In Little Bets, Peter Sims argues that organisations need to make what he calls 'little bets', small experiments which may well fail, but which may well also point us in the direction of what will work.

In his own words,
...little bets are concrete actions taken to discover, test, and develop ideas that are achievable and affordable. They begin as creative possibilities that get iterated and refined over time, and they are particularly valuable when trying to navigate uncertainty, create something new, or attend to open-ended problems. When we can't know what's going to happen, little bets help us learn about the factors that can't be understood beforehand. (p.8)
The problem is that when new processes are introduced in organisations, they are often presented as faits accomplis rather than as experiments or works-in-progress. They are prematurely frozen before anything has actually been learned from them and promulgated as
'this is the way things are going to be from now on'
rather than         
'this is how things are going to be for the time being until we have learned whether it works as expected'.
And once they are cemented in as part of the status quo, they aren't revisited to see whether they have even been cost-effective in achieving the desired objective.

So what is the answer?

Firstly, I think that honest evaluation needs to be built into the process: what has been learned from what was attempted, and what were the obstacles to success, particularly unforeseen obstacles and obstacles created by individuals. In other words, instead of a whitewash there should be scrutiny to see what can be learned, either to improve the existing process or to underpin future attempts at change.

Secondly, a kill date needs to be set upfront. That is, unless the new process has shown demonstrable benefits that exceed its costs by a particular date, it will be killed. This avoids things that don't work becoming institutionalised and ensures that there has to be some justification for the process to continue.

Thirdly, provided that the rationale and data on which a new process was based were properly documented, there should be no stigma attached to the failure of a new process. The only stigma should be attached to those who try nothing new (timidity or inertia) and to those who try new things without adequate analysis (foolhardiness or laziness).

If an organisation is prepared to be satisfied with mediocrity then it can continue to try the same old conventional no-risk solutions, solutions that have low risk of failure but zero risk of outstanding success.

Or an organisation can try making small bets, learn from the results, and advance towards excellence.

Friday, August 5, 2011

Inert knowledge - when what is 'learned' isn't learned

When a person studies for a coursework degree, then in order to obtain the degree they need to demonstrate that they have mastered some minimal proportion of the content of each subject that counts towards the degree. Universities, being the conservative institutions that they are, use things such as essays or exams as proxy measures for whether a student has achieved an acceptable level of mastery. And those teaching such courses have not necessarily put the concepts that they are teaching to work in the real world. (I once argued, with regard to a particular university, that if what its business school taught was in fact correct, why was it not applied within the university itself; and conversely, if it wasn't correct, why was it being taught?)

The problem is that what these proxy measures primarily show is that the student is able to repeat back what they have 'learned' and use terminology in an appropriate way. What they don't show is whether the student actually understands what they have learned well enough to be able to recognise when it is applicable, how it must be modified for varying contexts and how it should be applied in a specific context.

And because of this, we find information which has been poorly understood in the first place being misapplied to situations in the real world. When knowledge has only been acquired superficially and cannot be applied to real problems, it is sometimes referred to as 'inert knowledge', which means effectively 'knowledge which does no work'. However, the problem runs deeper than this, since misapplication of what has been misunderstood can actually be worse than doing nothing at all, because it can result in costs with no corresponding benefits. And the fact that those doing so have passed a course may give them an unjustified confidence in the correctness of their actions.

Coupled with this is the issue that people are more likely to remember what accords with their preconceptions. So a person with an authoritarian personality will most likely remember those facts, concepts and theories that they were exposed to that support their own world view. In other words, even the little they learn may be biased in a particular way.

The problem is made worse by the fact that when people gain a degree such as an MBA they may feel that they can now rest on their laurels and simply apply what they have learned. However, reality is dynamic: societies change, the way in which organisations operate changes, and shifts in the external environment or in cultural values may throw up challenges unanticipated when the person was studying. So not only may knowledge be inert, it may also become stale and outmoded over time.

While a tertiary education, and particularly a post-graduate education, is supposed to imbue the student with a passion for life-long learning, in many cases the learning stops as soon as the piece of paper is awarded and the person enters the 'real' world, where apparently much of what they learned doesn't actually apply, or the person fails to make an appropriate match between what they have learned and the particular situations they face. Theoretical models which look so great on paper may not fare so well in the trenches, where multiple problems may be inextricably entwined, and what seemed so neat and clean becomes messy and muddy.

An education is only an education if it makes a difference to behavior, if it exposes us to ideas which we may see merit in (even if we initially disagree with them) and if it fills us with a sense of the dynamic and contingent nature of knowledge so that we don't ever feel that we have reached final certainties, but instead hold our views lightly, ever willing to change them if faced with new information or a new reality.

In other words, an education should endow us with intellectual humility. No-one should think that having a piece of paper turns them into some sort of genius or that the view of a person without that piece of paper are somehow inferior.

After all, many of the world's most successful business people have no MBAs or other tertiary qualifications. A credential only points to what you may have learned sometime in the past, not what your value to a business is now. Credentials are great, especially for opening doors for someone just entering the world of business, but what matters in the end is achievement.

Monday, August 1, 2011

The Workaround Audit - Identifying where problems are already known to exist

In many organisations, senior managers try to distance themselves from the 'pain' of problems that they have the power, resources and authority to solve, pushing them down through the lower levels of the organisation until they reach someone who has no choice but to try and cope with the fall-out of a problem.

So what the person lower down does is develop a workaround: a way to circumvent or ameliorate the problem which is less efficient or effective than a proper solution would be, but which works well enough to get the job done (albeit at a continuous cost to the organisation). Where an organisation has not previously had any commitment to continuous improvement, layers of workarounds may have built up over years.

However, one of the problems with workarounds is that they are only as good as the person who conceived them. Under pressure to get the work through, it is just as likely that a quick and dirty approach will be settled on that may not even be the best of the feasible workarounds that could have been considered.  As a result, there are costs to the organisation not just from the workaround itself, but also from the fact that the specific workaround is more costly than it needs to be.

The problem can be compounded when senior managers who have chosen to remain oblivious of the problem the workaround was designed to solve then try to eliminate the workaround as a source of waste without putting a permanent solution or even any solution in place.

One of the ways in which an organisation can find rapid improvements is to do a workaround audit. That is to catalogue every workaround that exists in the business, why it exists, what resources it consumes on an ongoing basis, what its existence has historically cost and whose problem it was to solve in the first place.

Once this is known, you can list the unsolved problems in the business that are costing resources and then start to look for permanent solutions. If a permanent solution isn't available, a second line of attack would be to see if there is a more efficient workaround that could do the job better.

However, we also need to consider why the workarounds existed in the first place. And this comes down to the connection between pain and authority. The organisation needs to maintain a memory of what problems have been raised with senior managers and what actions if any they have taken to solve them. Even where the problem is 'delegated' the onus should remain on the responsible senior manager to see that the problem is solved. And if it is solved by someone lower in the hierarchy putting in place a workaround then the senior manager should have to justify why a permanent solution wasn't put in place and why they abdicated responsibility.

Pain and authority should be bound together so that something is done rather than things being swept under the rug, where eventually they build up enough for people to start tripping over them.

The problem is that the power to compel accountability and the desire to evade accountability are often in the same person's hands. So, short of regime change, workarounds are likely to remain for some time to come.

Sunday, July 31, 2011

On the pointlessness of meetings

I think that probably 80% of the meetings that I have ever attended have been a complete waste of time, time that could have been better spent on more value-adding activities.

There are so many dysfunctionalities with meetings that it is easy to list quite a few:
  • Often meetings are held simply because they have been scheduled and thus represent a rigid mindset. Since they are going to be held regardless, they often end up being discussions of trivia.
  • No agenda is circulated ahead of time or if an agenda is circulated the item titles are so brief as to give no clue as to what the item is actually about so that participants cannot do any preparation or prior investigation.
  • Meetings tend to be dominated by either the senior person present or the loudest, rudest or most opinionated person present neither of whom may have the best ideas or the most worthwhile things to say.
  • They may be held to give the appearance of consultation when regardless of what is said the manager who called it is still going to do what they always intended to do.
  • The timing of the meeting may take no account of more urgent priorities of participants.
  • They may be held so frequently that there is no room for items from one meeting to be progressed before the next is held.
  • They may be used to convey information which could more quickly and accurately be disseminated by other means.
In Meeting Analysis: Findings from Research and Practice, a meta-analysis of more than 100 research papers on meetings, Romano & Nunamaker concluded that:
...several decades of studies reveal meetings are indeed very costly in terms of both money and time. Studies also reveal that in general meetings are unproductive and wasteful. Studies find that meetings suffer from a myriad of problems, making managers and workers alike dissatisfied with the process and the outcomes in many cases.
And recent research (see Further reading below) has shown that unless they are exceptionally well-prepared, well-run and properly targeted they are simply a waste of time.

Jason Fried, in his book Rework, has gone so far as to conclude that meetings are toxic, and has argued in presentations that meetings and other interruptions at work have a deadly effect on productivity.

Given all of this research by literally hundreds of researchers it is hard to understand why managers continue to have lots of meetings since clearly they are for the most part unproductive.

Given that such meetings actually undermine the organisation, perhaps the simplest solution would be to make managers put their money where their mouth is: charge a cost against their salaries for every meeting they hold, for every person present and for every hour's duration. While it sounds like a drastic solution, it would soon get managers thinking about how essential a meeting really is, how to make it short, focused and effective, and what better alternatives exist.
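The arithmetic behind such a charge-back is simple: attendees multiplied by duration multiplied by an hourly cost. A minimal sketch, where the hourly rate is an assumed illustrative figure rather than anything from a real payroll:

```python
def meeting_cost(attendees, hours, hourly_rate=75.0):
    """Direct salary cost of a meeting: every attendee-hour has a price.
    hourly_rate is an assumed illustrative figure."""
    return attendees * hours * hourly_rate

# A single one-hour meeting with ten people:
print(meeting_cost(10, 1))       # 750.0 at the assumed rate

# The same meeting held weekly for 48 working weeks:
print(meeting_cost(10, 1) * 48)  # 36000.0 per year
```

Even at a modest assumed rate, a recurring meeting quickly accumulates a cost that would never survive scrutiny if it appeared as a line item in a budget.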


Further reading

Want to Improve Productivity? Scrap meetings Ray Williams (Psychology Today)



Monday, July 25, 2011

How much of training is really worthwhile? (and how can you tell?)

Training is an important component in ensuring that workers know how to do their jobs properly.

However:
  • the timing has to be right
  • the process in which the person is being trained needs to be stable
  • the systems that the person uses after training need to be free of bugs.
The ideal timing for training is when the person is motivated to learn and as close as possible to when the training will be used. If the person doesn't see the point of the training then it will be an uphill battle for them to learn anything. If the gap in time is too great between the training and its use then the person may forget critical information in the meantime because they have not had the opportunity to use it and reinforce it. Training also needs to be targeted to the person's needs. A person is more likely to absorb information that they need than to absorb information peripheral to their needs.

The process needs to be stable, otherwise the training may be out-of-date as soon as the trainee walks out the door. And the computer systems need to be bug-free since otherwise the trainees will be unable to distinguish between system issues and their own mistakes. Buggy systems make for noisy feedback which undercuts learning.

Kirkpatrick's training evaluation model can be an invaluable tool for determining whether training has been worthwhile.

It has four levels:  
  1. Reaction of student - what they thought and felt about the training
  2. Learning - the resulting increase in knowledge or capability
  3. Behavior - extent of behavior and capability improvement and implementation/application
  4. Results - the effects on the business or environment resulting from the trainee's performance

The problem is that many training evaluations stop at the first level. I recall going to a training course on change management; there is no doubt that the trainer knew their subject, that it was interesting and informative, and that most of those present enjoyed it and gave it good evaluations. Then they went back to their jobs and kept doing things the same way they had previously. The lesson is that the fact that trainees found the training enjoyable and interesting doesn't necessarily mean that it was worthwhile.

It used to be that when a person did training they were given pre-tests and post-tests, the theory being that the difference between their results on the two tests indicated an increase in knowledge. This seems to be out of fashion now, even though it at least had the advantage of showing that some learning had taken place. Without something comparable, it isn't clear that any of what was taught has stuck. One alternative would be to require trainees to complete assessment tasks on the job: this at least provides an indication of whether anything was remembered once the trainee left the training room, and it gets them integrating their learning into their performance.

Which brings us to performance: is the person applying what they have learned? This is another area that seldom seems to be followed up except tangentially. In fact, the whole purpose of training in some instances is so that management can claim that the person was trained, so that any subsequent deficit in performance is down to the employee not doing their job rather than to them not having received the required training (however ill-timed or inadequate it might have been).

Finally, if the worker did learn and apply what was covered in the training, what difference has this made to the organisation's performance: is work being done faster, more accurately, more effectively, with less waste or less downtime? If not, then the reason could be that the training wasn't properly matched to the organisation's needs and this is important feedback because it points to the need for better analysis before developing and rolling out training.

The bottom line is that training isn't about the trainees feeling good.

It is about the delivery of results and if it fails to deliver then it really wasn't worthwhile. Worse, it was a cost to the organisation without any commensurate benefit.


Resources

Official Site of the Kirkpatrick Model

Sunday, July 24, 2011

More Feet, Less Seat

At some time you may have seen one of those Black Forest Weather Houses where a little woman comes out of the house when the weather is sunny and dry, while a little man comes out to indicate rain.

A lot of managers are like the little man: the only time they come out of their offices is to deliver bad news, give negative feedback or dump a problem in someone's lap. The result is that their employees begin to associate their presence with pain and their absence with relief. (Or, to mix metaphors, perhaps they are like a groundhog, emerging from their burrow only long enough to announce six more weeks of winter.)

My theory is that managers need more feet time and less seat time. By this I mean that instead of spending so much time sitting isolated in their offices ("seat time") they should spend more time walking around catching up with their employees, listening to what they have to say, seeing what issues are emerging and what niggling sources of dissatisfaction are causing problems ("feet time").

In other words, they need to humanise their workplace so that the people who work for them see them not as a source of pain but as a human being who takes an interest in them and removes barriers to them doing their jobs well. In short, as someone on the same side.

But this should not be taken as a 'technique' to be applied to employees as a way to manipulate them into feeling better and thus becoming more productive. That just creates its own problems. Years ago, a book was published called The One Minute Manager, which basically taught managers to 'condition' their employees to perform better. In response followed The 59 Second Employee: How to Stay One Second Ahead of Your One Minute Manager. The application of 'techniques' is often transparent to workers, no matter how clever managers might think they are being, so an arms race develops between 'techniques' and defences against 'techniques' - one that wastes energy and creativity that could be better focused on the work itself. Plus it leads to increasing cynicism.

Richard Farson points out in his Management of the Absurd that applying techniques to people erodes respect for the very people to whom the techniques are applied: how can a manager respect someone they have managed to fool with their technique?

Farson says:
It is the ability to meet each situation armed not with a battery of techniques but with openness that permits a genuine response....If we genuinely respect our colleagues and employees, those feelings will be communicated without the need for artifice or technique. And they will be reciprocated.
But this means taking the time to see the person in front of you rather than the function on an org chart.

Managers need to be in for the long haul rather than the quick fix. And part of being in it for the long haul is getting out there and mixing with the workers and keeping it real. More feet, less seat.

Legacy processes - Don't let past solutions undermine future performance

Two stories to ponder:

Story #1:
A young girl is watching her mother prepare a ham for Thanksgiving. Before she puts the ham into the pan, her mother cuts about six inches off the end of it and throws it away. The girl asks her mother why she cuts the end off the ham. Her mother replies, "I'm not sure, but that is how my mother did it". So she approaches her grandmother and asks her why she cut the end off the ham before preparing it. Her grandmother replies, "I'm not sure, but that was how my mother did it." In one final attempt, she approaches her great-grandmother and asks why she cut the end off the ham before cooking it. Her great-grandmother replies, "We only had one pan to cook with in our day. I had to cut the ham so it would fit in the only pan we had."
Story #2:
The random wanderings of a calf formed a crooked trail through the countryside. Because this rough trail was slightly easier, people started to use it; it became a dirt track, then the main street of a small village, then the main road of a small town, and eventually, three hundred years later, the crooked meandering thoroughfare of a city. (For a more poetic account, see the poem Cow Path.)
What do these stories have in common? In one case a temporary limitation, and in the other a random unthinking act, determined the future actions and pathway of those who came later - people who followed without ever thinking about why they were following that path.

A less-than-ideal decision made years before by a manager or programmer - perhaps determined by a set of circumstances that was either misunderstood at the time or has long since ceased to obtain - may result in below-average performance by the organisation and its employees for years to come. It may also become the sub-structure for future decisions that end up locking it in place.

This is why at times processes may need to be completely re-engineered from the ground up since the assumptions underlying the current process may be fatally and irretrievably flawed.

There may be barriers to doing this however.

A process which has been around for a long time has the comfort of familiarity and an air of rightness about it, since it may be all that those performing it have ever known. Even if a change will make their lives easier, people may well resist one that seems to go against everything they have been taught about the right way to do something.

We can see some examples of this in sports:
  • the 4-minute mile was broken only because someone refused to believe the accepted wisdom that a human being couldn't run a mile in under 4 minutes.
  • high jump records were broken when the Fosbury Flop was introduced, challenging the accepted wisdom that the right way to go over the bar was forwards.
  • until 1844, the fastest European swimmers used the breaststroke. But in that year two Native American swimmers outclassed them in a competition in London using what eventually developed into, and became known as, the Australian Crawl (Surrounded by Geniuses, Dr. Alan Gregerman).
In each of these cases, the sports community concerned believed that they had achieved the best that was possible. And in a sense this was true: the best that was possible doing things the way they had always been done and assuming what had always been assumed. And yet it was possible to do better by questioning these assumptions and trying something different.

Dr. Gregerman asks:
What if there are more brilliant ways to do things? And if those more brilliant ways are all around us simply waiting for us to discover them?
One of the difficulties in making these discoveries is that nothing is more invisible than what we take for granted and the things we need to change may be almost impossible to see since they form the very background against which everything else is decided.

What this means is that sometimes we need to take a step back and seriously ask ourselves whether we would have designed a process this way if we were starting from scratch. We need to ask ourselves whether the basis for a process is grounded in the reality of what is required or is only what we have unquestioningly assumed was required.

Sometimes this may require new eyes, the eyes of someone who hasn't done the work before. Questions asked by someone being trained in a job may yield unexpected gold because when they ask "Why do we do that?" in relation to part of a process, it may be the very question that should have been asked long before.

So if you ask someone "Why do we do this?" and the answer is "I don't know" or "It's the way we've always done it" or "It's the way I was trained to do it", alarm bells should go off, since none of these answers justifies continuing to do it.

In an article in Bloomberg Business Week Rick Wartzman put it this way:
Many times, managers become preoccupied with how they are doing things. But what's equally important—maybe even more important—is what they are doing in the first place. As Drucker noted: "There is surely nothing quite so useless as doing with great efficiency what should not be done at all."
And I guess this is my main point. We need to look at how our processes square with the demands of current reality and ensure that we are doing the most effective thing given those demands.

Saturday, July 23, 2011

Unnecessary change and mediocrity

One of the pre-conditions for excellence in an activity is that the requirements of the activity remain relatively stable.

For instance, consider professional tennis. The sizes of the racquets, the balls, the courts, the height of the nets and the rules of the game remain relatively unchanged over time. And as a result, the players are able to become skilled in tennis as an activity.

Now suppose instead that the powers-that-be made random changes to the rules and the other parameters of the game at random intervals - what level of professionalism and excellence would result?

I think the answer is clear. We wouldn't end up with people who were excellent at the activity per se; we would end up with people who could rapidly adjust to the rule changes, but whose skill set was aimed at broad adjustments to keep their heads above water rather than at becoming genuinely good at the game. Because why try to be good at something that may well be superseded tomorrow?

What this means is that making unnecessary changes to the way work is done is a recipe for mediocrity. Adjusting to changes takes energy away from actually getting the work done. And the more frequently changes are made the greater the uncertainty that is created among those doing the work as to whether they are doing it in the expected way. Have they missed one of the changes that they were supposed to implement? It may be hard to tell. Especially if the changes are announced in different ways, from different authorised voices within the organisation, or if multiple announced changes are in conflict.

It can get really confusing if what was wrong last week is right this week and vice versa. And it can get even more confusing if a change is announced and then there are changes to the change when flaws in it become apparent, and then changes to those changes when further flaws are revealed. Depending on the frequency of changes, systems may have trouble keeping up, and workers may have to remember not just the change but the latest system workaround as well.

The take home lessons from this are:
  • If you want excellence, only change what absolutely needs changing
  • If you want clarity, only announce changes through a single channel and get them right the first time.
And what do you need to implement these lessons?

Three simple things: analysis, foresight and planning. Things that are simple, but hard to do well, and which, it seems, are in very short supply.

Tuesday, July 19, 2011

Managers as boundary riders

In Australia, a boundary rider is "a person employed to ride round the fences etc. of a cattle or sheep station and keep them in good order" (Australian Oxford Dictionary).

When I look at organisations I am surprised at how many senior managers make poor boundary riders. Confused? Well maybe I should explain.

There is an old saying that "Good fences make good neighbours", yet in many organisations there are serious problems with the fences or boundaries that senior managers put in place.

Firstly, there is the problem of poorly defined boundaries where no-one seems to know who is responsible for a particular kind of work or for solving problems in a particular area. As a result, the buck is continually passed from one work area to another, searching for someone to stop with. And this leads to some problems being solved poorly or not at all. The orphan problem just can't find a home.

Secondly, there is the problem of overlapping boundaries where more than one work area might perform a particular kind of work but where there are differences of opinion about how that work should be done. As a result inconsistencies arise in the doing of the work and conflict arises when each unit thinks the other is doing it wrong.

Then there is the problem of undefined boundaries. For example a number of work areas may share a bucket of work in some way, but no controls have been put in place so as to be able to tell: how much work is being done by each work area, the timeliness with which each work area completes the work, the quality of the work and so on. As a result, no-one has genuine overall responsibility and lack of performance by one work area is hidden in the mass of work being performed by the others.

Finally, there is the problem of breached boundaries, where one work unit invades and interferes with work that another unit does have clear responsibility for, and nothing is done to stop this interference. (Instead of 'passing the buck', this could perhaps be called 'stealing the buck'.)

The problems that this failure to adequately define boundaries causes could not be any worse if senior managers deliberately set out to create them: conflict, resentment, poor performance and passing the buck.

Senior managers need to become better boundary riders, to check out the fences that they have put in place and make sure that they are performing effectively in helping the organisation achieve its purposes.

Otherwise, energy that could be spent making progress is instead spent on problems of the organisation's own making - problems that would not even exist if a modicum of thought were given to what work should be done where and by whom, and to making sure that no work was 'orphaned'.

In the end people cannot be held accountable if they don't know what they are responsible for.

Micro-managers - why do they do it?

In a previous post, I discussed how micro-managers are overpaid. Here I want to talk more about why they micro-manage at all.

Part of it is fear and mistrust: they are afraid of something going wrong and not knowing about it and they don't trust their employees to do a good job. But there can be more to it than that.

As people move up the hierarchy, they are increasingly faced with predicaments and poorly defined problems - things which, unlike the problems they may have faced in a lower-level position, have no easy answers. The problems may be messy, it may be unclear precisely what the problem is, there may be constraints that make them hard to solve, and even the feasible options may be unclear. The situation they are dealing with may be ambiguous, there may be competing versions of what is going on, and not all of the information required to clarify things may be available.

This is one of the reasons that such problems (which are sometimes referred to as wicked problems) remain unresolved for months or years. It isn't that the management has a lack of will, but rather that they don't know where to begin in trying to solve them.

So how does this relate to micro-managers? Well, if a person is faced with something they find difficult or impossible to do, they may avoid the situation by turning to what they can do. And for a lot of managers, what they can do is solve the problems of their employees. This makes them feel that they are achieving something, and it shifts the spotlight away from them and onto the people they are micro-managing.

So micro-managing can be a response to a manager's anxiety about their competence to resolve the problems of their own job. (If they can, micro-managers might also delegate solving the problem to someone lower down the tree, claiming that they don't have time or that it would be a good 'development opportunity', when in reality they just don't have a clue how to solve it themselves - this becomes obvious when the person they have delegated it to asks for some guidance.)

One other possible reason why people micro-manage is simply that they don't have enough work to do at their own level. As the old saying goes, the devil makes work for idle hands to do, and in this case the devil's work is interfering with how their employees do their jobs. However, this lack of work may well be an illusion based on a poor understanding of their role. A new manager may be left to sink or swim and it may never be made clear what is expected of them in their new role. So they may not realise that they are now expected to look at the bigger picture and plan for the long term. As a result, the real work for which they are being paid may fall into a black hole, while they honestly believe they don't have anything to do.

So apart from fear and distrust, micro-managers may be motivated by the desire to avoid problems they can't solve at their own level and solve the problems that they know how to solve at the lower level. Or they may be motivated by not having enough to do. Either way, they are still being over-paid, since it is precisely the difficult problems that they are paid the big bucks for tackling and the big picture they are being paid to paint.

Monday, July 18, 2011

Job Crafting - a fresh way to encourage engagement

The work of the world is common as mud
Botched, it smears the hands, crumbles to dust
But the thing worth doing well done
has a shape that satisfies, clean and evident
...
The pitcher cries out for water to carry
and a person for work that is real

~ from "To be of use" by Marge Piercy

Job Crafting is a relatively new approach for workers to make their jobs more satisfying. Developed by researchers at the University of Michigan (see Useful Resources below), it generally involves working through some questionnaires/exercises in order to gain a greater sense of what you like or don't like about your job and how you can better deploy your strengths and motivators in order to enjoy your work more.

It is something that some people do instinctively. They take on a new role and then modify that role and the way they perform the related tasks to put their own stamp on it, often creating little efficiencies, job aids, systems and shortcuts that can be usefully employed by others doing similar work. Instead of passively complaining, they take the raw material with which they have to work and turn it into something that they are better able to live with.

What can workers do?

While job-crafting may be undertaken as a formal structured process, you can accomplish pretty much the same result as follows:

  • Identify wriggle room - There may be some things you have to do, but you may have a fair amount of flexibility in how you do them and there may be scope for doing them in a way that best suits your personal style, without adversely affecting the required outputs.
  • Identify what you yourself bring to the job - your strengths, your passions and the things that motivate you. See how you can tailor the job in a way that makes the best use of who you are as an individual.
  • Work out what you love and hate about the job - If you can, it may be possible to swap a hated task with a co-worker for something that you love and they hate, creating a win-win. Or you may be able to find a way to make the hated task take less time or in some other way become more bearable.
  • Don't just think about modifying your job, go ahead and do it - Most managers won't mind so long as you are providing the required outcomes. A lot of the time changes are made under the radar anyway and a happy productive employee is unlikely to be disturbed, after all: why mess with success?
In Flow, Mihaly Csikszentmihalyi recounts the experience of an assembly-line worker, Rico Medellin:
Most people would grow tired of this work very soon. But Rico has been at this job for over five years and he still enjoys it. The reason is that he approaches the task in the same way as an Olympic athlete approaches his event: How can I beat my record? Like a runner who trains for years to shave a few seconds off his best performance on the track, Rico has trained himself to better his time on the assembly line. With the painstaking care of a surgeon, he has worked out a private routine for how to use his tools, how to do his moves.
In other words, he has become totally engaged in a game of his own making. This is something almost all of us can aim for.

What can managers do?

If you are a manager, you can help your staff craft their jobs in a number of ways:
  • Support them in their efforts - they may find better ways to do things that can be used by other staff
  • Make it clear what outcomes you expect, but allow flexibility in how they are achieved
  • Don't get in the way by interfering
  • If it is possible, assist them in negotiating task swaps
Managers are always looking for ways to engage employees. But a lot of these methods assume that the job is unpleasant in some way and that as a result there need to be external incentives of some kind added, or that they need to create fun or social activities in the workplace to compensate. But these efforts, which often seem contrived and manipulative, only serve to reinforce the view that work is drudgery.

The job-crafting approach on the other hand assumes that the job itself can become fulfilling if the worker is permitted the flexibility to make adjustments that suit their strengths, passions and motives. And no small part of this is the experience of control that it yields. But the only way this will work is if the manager gives up some control and focuses on outcomes instead.

Through job-crafting we can increase the intrinsic satisfaction of the job, making workers happier while maintaining engagement and productivity.



Useful Resources

Fishbone or wishbone?

One problem that managers often have is grasping causality. When a problem occurs, they may leap to a 'cause' that is nearest in proximity or time without looking more broadly or deeply.

Fishbone diagrams are a rudimentary tool to get people thinking about possible causes for a problem and remind them not to focus on a single factor. The ribs of the fish are generally labelled with generic categories (such as People, Plant, Procedures, Policies) with sub-ribs that drill down to more specific potential causes.
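As a rough sketch of the structure such a diagram encodes (the categories come from the generic labels above, but the causes below are invented examples, not a real analysis), the ribs and sub-ribs amount to little more than a two-level outline:

```python
# An invented example of the structure a fishbone (Ishikawa) diagram
# encodes: generic "ribs" (People, Plant, Procedures, Policies) with
# sub-ribs drilling down to more specific potential causes.

fishbone = {
    "People":     ["insufficient training", "unclear responsibilities"],
    "Plant":      ["ageing equipment", "no preventive maintenance"],
    "Procedures": ["out-of-date work instructions"],
    "Policies":   ["approval bottlenecks"],
}

def print_fishbone(problem: str, ribs: dict) -> None:
    """Render the diagram as a simple two-level outline."""
    print(f"Problem: {problem}")
    for rib, causes in ribs.items():
        print(f"  {rib}:")
        for cause in causes:
            print(f"    - {cause}")

print_fishbone("late deliveries", fishbone)
```

Notice that a flat outline like this has nowhere to record interactions between causes, or things that failed to happen, which is exactly the limitation discussed next.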

They may be useful in getting people to think outside a narrow box when it comes to causality. Yet in some ways fishbone diagrams still promote a simplistic, linear idea of causality - a simple 'B was caused by A', full stop - rather than a more nuanced model in which multiple inputs, none of which may be sufficient to cause B on its own, jointly bring about B through their presence, absence or interaction.

Considered individually, none of these inputs may have been sufficient to cause the problem, and the absence of a salient input may be hard to detect, since we tend to look for what happened to cause the problem rather than for what failed to happen.

So although a fishbone diagram may show the most obvious linear causes (though causes that don't fit in one of the generic categories may still slip through the cracks), they ignore the radical complexity of how the world really works.

The 'causes' it identifies may be more indicative of wishful thinking: that reality is simple, that causes are easy to identify and that problems are easy to solve.

Saturday, July 16, 2011

Managers are human too...

It's easy to point out the flaws in different management styles, but it is important to realise that managers are human: they have good days and bad days, they have blind spots and pet ideas, likes and dislikes, personal problems...the whole gamut of normal human behaviors. So to expect perfection, or to expect that they will never make mistakes or that they will always be fair and reasonable, is just a little bit unrealistic.

In Boom: 7 Choices for Blowing the Doors off Business-As-Usual, Kevin and Jackie Freiberg put it this way:
In the blurry world of business, decisions are rarely clear-cut. Leaders caught in predicaments will make judgement calls, trade-offs and compromises that aren't always right. The decision-makers in your organisation are not all-knowing. They don't have a crystal ball with which to see the future. They struggle with uncertainty - just like you. They desire a better culture, less uncertainty, and more stability - just like you. They want more time with their families - just like you. When it comes to leading major change efforts, their actions may appear to be misguided or self-serving from where you sit, but they are probably doing the best they can with the information they have. 
While this realisation should lead to a certain amount of empathy for managers, it should also make you realise that you may have to work around these failings, defend yourself against some of them and fight against others. Being sympathetic shouldn't mean becoming a victim or tolerating unacceptable behavior.

Also remember that people aren't always managers because they are particularly good at the job. They may have been promoted because of their competence in a lower-level job with a totally different skill set and may be struggling, either publicly or secretly, to keep their head above water (the Peter Principle). In some cases, the field of applicants for a managerial job may have been restricted by requirements (such as travel) that better-qualified people were unwilling to accept, and so the successful applicant may have just been the best of a mediocre bunch. Either way, expecting excellence or even competence may be misguided.

One useful skill is being able to identify a person's baseline behavior, the way in which they usually conduct themselves. I once worked in a unit where the manager appeared to be constantly angry. Once I realised this, I was able to get along with him quite well and to detect when his behavior deviated from that baseline (for example, when he really was angry). The other guy in my unit never managed to achieve this and hated our boss with a passion.

So part of becoming more perceptive about people's behavior is so that you can work more productively with them.

Getting to know a manager's weaknesses can be useful in that you can compensate for them in order to further the organisation's aims. But, if necessary, you can also target them in order to further your own aims or to protect yourself.

The most important thing to remember is that you need to deal with reality as it is, not as it ideally should be. Do what you need to do to perform effectively in your own job, whether this means working with your manager, or working around them.


The website for Boom can be found here

Thursday, July 14, 2011

How Control-Freak Managers (CFMs) lose control

The manager who needs to be in control of everything (the control freak manager or CFM) is driven by a fear of things going wrong and possibly a fear of being blamed for things going wrong. The mindset is "If I don't know what is going on then something bad must be happening, so I MUST know".
They are also driven by a lack of trust. "My staff can't cope with problems, or won't tell me if there are problems, so I MUST know everything that is going on. And to make sure nothing bad happens, I must get all approvals channelled through me."

So a bottleneck is produced that reduces productivity.

This leaves workers with two choices:
  • They can co-operate with the control-freak manager and, as a result, become increasingly dependent and disempowered. They seek approvals for everything, even things for which they could use their own judgement. They are at a loss when the CFM isn't around. And productivity suffers.
  • Or they can rebel. They can deliberately conceal what they are doing from the CFM, since it's the only way they can get work done without the CFM's interference.
And it is this second choice that creates an irony: the more the CFM tries to control things, the more the workers keep them in the dark so that they can get their jobs done. And the more CFMs are kept in the dark, the more control they try to exert, because the more afraid they are that things are happening that they don't want to happen and that things are slipping out of their control. And so a reinforcing loop develops.

A further irony is that a manager who trusts his or her staff may be no more aware of what they are doing. But because they trust staff to exercise their judgement and do their jobs, their staff rise to the occasion, and as a result such a manager has no reason to be afraid. And the workers, because they are trusted to use their judgement, become better able to do their jobs and less likely to conceal emerging problems from their manager.

And so the ultimate paradox emerges: that fear and lack of trust generate the conditions which justify that fear and distrust - a self-fulfilling prophecy.
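The reinforcing loop can be sketched as a toy model (all the numbers here are invented purely for illustration; this is not a validated model of anything): control drives concealment, concealment drives control, and both ratchet upward together.

```python
# A toy model of the reinforcing loop: the more control the manager
# exerts, the more workers conceal; the more workers conceal, the more
# the manager fears losing control and tightens the screws.
# The growth factors below are invented purely for illustration.

def simulate(periods: int = 10, control: float = 1.0, concealment: float = 1.0):
    """Return (control, concealment) pairs for each period."""
    history = []
    for _ in range(periods):
        # Concealment grows in proportion to how controlling the manager is...
        concealment *= 1.0 + 0.2 * min(control, 5.0) / 5.0
        # ...and control grows in proportion to how much is being concealed.
        control *= 1.0 + 0.2 * min(concealment, 5.0) / 5.0
        history.append((round(control, 2), round(concealment, 2)))
    return history

trend = simulate()
# Each period, both quantities end up higher than the period before:
# the self-fulfilling prophecy in miniature.
```

The escape from the loop is visible in the model's structure: shrinking either variable (trust instead of control, openness instead of concealment) shrinks the growth factor applied to the other.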

Monday, July 11, 2011

Searching for Positive Deviants

...the theory of positive deviance holds that in every setting there’s at least one person who strays from the norm—a positive deviant—someone who has found a way to buck the status quo and solve a problem despite the same odds that are stacked against everyone...Positive deviance, the authors explain, “requires retraining ourselves to pay attention differently—awakening minds accustomed to overlooking outliers, and cultivating skepticism about the inevitable ‘that’s just the way it is.’”

In a working environment, there may be individuals or work units which have access to only the same resources as other individuals or work units and yet thrive where others struggle.

The basic idea of positive deviance is that if we can identify such individuals or work units, then we can study them to see how it is that they do this. So far, this sounds very similar to 'best practice'. However, the positive deviance methodology goes beyond this in a number of ways.

Firstly, in many cases it is peers who identify the positive deviants. And secondly, once the positive deviants are identified and their methods are articulated, it is generally the PDs themselves who are encouraged to share what they have learned with their peers.

This is empowering to workers. The changes being shared are not imposed from above by people who do not experientially understand the realities of the job, its stresses and constraints. Instead, they are shared by people who themselves face those same constraints every day and have found ways to deal with them. And as a result, they generate a positive perception among those who need to be influenced: "Hmmm...if they can do it with the same resources I have, then maybe I can do it too!"

In other words, positive deviants show what is possible and open the way for others to do the same and achieve similar results. And in doing so they become a force for positive change.

There are a few caveats:
  • You need to make sure that you have actually identified a positive deviant rather than just someone who has made an art form of 'massaging' their perceived results.
  • A positive deviant may have found a better way to utilise an under-utilised resource - but if everyone starts to use that resource then any benefits may evaporate.
  • There has to be an atmosphere of trust - if workers perceive that you are just going to harvest any savings or raise the performance bar then they may not feel like co-operating.
  • Similarly, the positive deviant may have found a better way of doing something which circumvents the usual organisational procedures, so they may need to be reassured that there will be no negative repercussions.
Notwithstanding these concerns, the positive deviance methodology has proven to be effective in a wide variety of different contexts and seems to be under-utilised as a tool for productive change in organisations.

Those who might find this most threatening are managers because positive deviance highlights solutions that have been found at the grassroots and below the radar, solutions that have evaded those in positions of power. So positive deviance may also democratise the workplace and empower workers to find solutions rather than always look to those above them in the hierarchy for answers. This can be scary for managers since it means giving up power and trusting their employees. But it can be empowering for employees since they can feel that what they do makes a difference and that they can help one another through the solutions that they have discovered and which they may themselves have taken for granted.

So, who are your positive deviants?



Resources

Video:

"Reflections on Positive Deviance" (Monique Sternin)



Jerry Sternin on Positive Deviance - Part 1
Jerry Sternin on Positive Deviance - Part 2

Websites:

Positive Deviance Initiative
Power of Positive Deviance (website for book below) 
Canadian PD Project

Books:

 Power of Positive Deviance (Pascale, Sternin & Sternin)
 Influencer - The Power to Change Anything (Patterson, Grenny, Maxfield, McMillan & Switzler)

Sunday, July 3, 2011

Fear, Control and Meaningful Work

To do some idiotic job very well is certainly not real achievement. What is not worth doing is not worth doing well.
~ Abraham Maslow

If you want people to do a good job, give them a good job to do
~Frederick Herzberg

In Greek mythology, Sisyphus was a king punished by being compelled to roll an immense boulder up a hill, only to watch it roll back down, and to repeat this throughout eternity. In Existentialism, Albert Camus used this myth as a metaphor for the futility of life, but it could equally be used as a metaphor for the futility of work that has been stripped of meaning.


How do managers strip meaning from work? 

One way is to eliminate opportunities for workers to use their minds, by developing complex sets of rules that eliminate discretion. Part of the problem with rule-obsessed management is that it becomes increasingly difficult for workers to make decisions.
  • The more rules there are, the harder it may be for the worker to determine whether there is a rule that applies and then to find that rule.
  • If workers are unable to find a rule and end up using their own discretion, they will still be left with doubt and uncertainty about whether they have complied with what was required, and with fear of reprisal if they haven't.
  • With workers getting fewer opportunities to actually use their judgement, even simple decisions may become problematic.
  • Workers end up applying the rules mechanically without using their common sense or judgement to determine whether a rule should be bent, broken or tweaked for a particular set of circumstances.
  • Maintaining a complex set of rules becomes a task in itself requiring resources that could otherwise be devoted to getting the work done. It also becomes increasingly difficult to determine whether the rules are consistent.
In Streetlights and Shadows, Gary Klein tells the story of a senior air traffic controller faced with a hijacking situation and also faced with a 15cm (6") thick manual of procedures to follow. Time was critical so he jettisoned the manual and instead drew on contacts in the security services and started to improvise. As the senior air traffic controller told Klein:
I was reprimanded for not following the procedure and commended for handling it with no civilian casualties. Then I was asked to re-write the procedure. It is still too thick.
Incidentally, rule-obsessed management isn't about getting the job done effectively. The person who makes the rules probably believes that they could do the job without the rules; they just don't trust anyone else to do so. So on the one hand there is arrogance ("I am smarter, have better judgement...than any of the people working under me") and on the other hand there is a desire to centralise power due to lack of trust ("I must control things through rules because I can't trust the workers to do the right thing"). And both of these attitudes send a demotivating message to workers, as well as resulting in delays, logjams and bottlenecks as workers end up seeking guidance on even the simplest decisions. Or conversely, it may lead to malicious compliance, where workers apply the rules to the letter regardless of the consequences.


A second and related way is to eliminate the exercise of skill in the job. For example, a manager may not trust the workers to write effective letters to clients, so they may put in place hundreds of different template letters for staff to use, with only limited discretion to vary them. Again, it is about centralised control and lack of trust. And paradoxically it achieves the opposite of the intended result. Since workers get less practice in wording their own letters, they lose confidence in their abilities, end up choosing the template that fits best even if it doesn't fit that well, and are afraid of modifying it. And the client who writes in more than once may receive exactly the same response, implying that the issues they have raised have not been given any genuine consideration.


I think you can see a pattern here:
  • management fear leads to
  • mistrust leading to
  • centralised controls leading to
  • skill reduction leading to
  • mechanical performance due to fear of deviating from the centralised controls leading to
  • poorer customer service.
The lesson here is that:
  • You don't build a skilled workforce by eliminating opportunities for the exercise of skills.
  • You don't build a motivated workforce by removing the more satisfying parts of the work.
  • You don't build trust and confidence by increasing fear.

Saturday, July 2, 2011

Manager or Cosmetician?

Sometimes being a manager appears to be a matter of making it look like something has been achieved, even when it hasn't. And that is one of the reasons why organisations may appear to achieve their strategic plans and yet never actually improve in any meaningful way.

To begin with, there is cosmetic change rather than actual change. Where a change would disrupt the status quo that managers are happy with, they may massage the details of the change, water it down and put their own controls in place in such a way as to make it appear that they have embraced the change when in fact they have sabotaged it. And as a result, it may appear that a change has been ineffective when in fact it has been poorly implemented if at all. If a person has a skin cancer then you can treat it medically or you can conceal it with makeup. Sadly, in many cases, managers will take the latter course rather than deal with the reality of a situation that requires serious attention, giving the appearance that something has changed when things have remained fundamentally the same.

Paradoxically some of the most enthusiastic adopters of the change may do the most (unintentionally) to ensure it is ineffective. Like Procrustes with his bed, they stretch or amputate aspects of the change to suit their particular view of how things should be and in the process eliminate critical factors required for its success. Even this may be an application of the cosmetic art: their enthusiastic support of the change may be more about being seen to support it than about truly being committed to implementing it (in fact they may not even understand what the change entails.)

Cosmetic skills also come into play in evaluating whether efforts towards a particular goal have been effective. The goal posts may be shifted so that what was actually achieved is now within an acceptable range. Or what the result is compared to may be changed to present the best outcome: comparison with the last quarter's figures, last year's figures, segmenting results so that it can be claimed that failure only occurred in one area...there are any number of tools in a manager's toolbox to make a bad result look good.

When it comes to the final evaluation of a project, the skills of a mortician may come into play: giving something that is actually dead the appearance of life. Where a program or project could be evaluated on a number of dimensions, even a failure may have succeeded after a fashion on a couple of them; those are elevated to being the important ones and publicized, while the important dimensions on which failure occurred are downplayed.

Such tactics create dangers for the business. The only way to protect against them is to define clearly and unambiguously, up front, which outcomes are important, how they are going to be measured, and how comparisons will be made. And where changes are going to be made, to require managers to document how they have implemented those changes, why they have implemented them in that way, and to justify their 'modifications' in terms of the intent of the change. You may even need to ensure up front that managers really do understand what the intent of the change is, that satisfying that intent must take precedence over anything else (protecting turf, preserving pet ideas etc.), and that in the end they will be judged by whether that intent has been met, not by how much 'activity' they have put into it.

Cosmetics may be about concealing flaws, about making the ugly appear beautiful, or about making what is merely pretty appear beautiful. But it is always about making something appear to be other than it is, to hide reality.

And when you hide from reality, the risk is that sooner or later reality will sneak up on you and stab you in the back.

Monday, June 27, 2011

Why employee recognition schemes fail - Part 3: The selection process

In the Simpsons episode "Brother, Can You Spare Two Dimes?", Mr. Burns awards Homer the "First Annual Montgomery Burns Award for Outstanding Achievement in the Field of Excellence" and a US$2,000 prize in exchange for a legal waiver against any claim for his radiation-induced sterility.

In most Employee Recognition Schemes there is a complete lack of transparency and no-one knows whether the selection of the winners is a matter of legitimately recognising excellence or whether it is more a matter of quid pro quo. (It isn't just Employee Recognition Schemes by the way: the same doubts concern honors lists and knighthoods in Commonwealth countries and comparable awards in other countries.)

This lack of transparency is even more problematic in organisations where there is not a lot of trust to begin with. And it is further multiplied when there is management involvement or interference in the selection process (as described in a previous post). And the fact that guidelines as to how the winners will be chosen are not made publicly available just raises more suspicions.

Then there are issues such as suspected quotas ("Last year we gave it to someone from that Division, so we can't give it to someone from that Division this year"), or the fact that an award may go to a person who - as everyone apart from the selection committee apparently knows - causes more problems for the organisation than they solve.

And the perception that the selection process is unfair and biased simply leads to disgruntlement and disengagement from the process.

The take home message from this is that where there is no trust to begin with a recognition scheme that is not open and transparent will simply serve to increase distrust and cynicism.

Complicators and Simplifiers

Managers can change processes in one of two ways: they can simplify them or they can complicate them. And they can do each of these things in two ways as well: smart and dumb.

dumb simplifying: where something is simplified but the manager doesn't know why it was complicated in the first place. Remember:
If you don't know why something is complicated then you may need to study it more before deciding to simplify it.
Don't ever take a fence down until you know why it was put up.
~Robert Frost

Example:
Suppose you have a set of offices where you employ a lawyer, a doctor and a dentist. Sometimes one has more work than they can deal with, while another may be twiddling their thumbs. Brilliant idea: why not combine their roles so that when a client comes in they can just see whoever is available? No wasted time. Great. Well, there would be increased time required for training, plus a drop in the level of expertise and a greater likelihood that something important would slip through the cracks - but hey, look at the idle time that is no longer being wasted! So much simpler than having three separate kinds of jobs.
...Yeah, I know this would never happen in real life, but managers who don't really understand what staff do may at times decide to combine functions into a single job type without realising the complexities of what their staff do.

dumb complicating: where something is made more complicated than it has to be.

In some cases this may be a matter of building a process that is explicitly designed to cope with every imaginable set of circumstances, even the unlikeliest ones, rather than a process that deals with what happens most of the time while allowing for exception handling. In the former case, the process is supposed to deal with everything without having to trust the judgement of employees, but as a result it is complex, rigid and hard to change, and it undermines employee initiative. Plus we pay the additional costs involved in a complex process for every case we process, even the simplest one. In the latter case, the process may be easier to modify and trust is shown in employees' use of their own discretion. In some cases, dumb complicating loses sight of why a process exists in the first place, and so the manager concerned may design it to conform to some abstract ideal rather than to what reality demands.

Remember Murphy's Corollary: It is impossible to make anything foolproof because fools are so ingenious.

So rather than trying to design a foolproof process, trust the judgement of your employees.
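The contrast can be made concrete with a minimal sketch (in Python; the names `process_request`, the request fields and the $1,000 threshold are all hypothetical, purely for illustration). The process handles the common case directly and refers anything unusual to an employee's judgement, instead of encoding a rule for every conceivable variation:

```python
# Hypothetical sketch: handle the common case, escalate the rest.
# The field names and threshold are invented for illustration.

def process_request(request):
    """Approve the requests that fit the standard pattern; refer
    anything unusual to a person rather than adding more rules."""
    if request.get("type") == "standard" and request.get("amount", 0) <= 1000:
        return {"status": "approved", "by": "process"}
    # Exception handling: trust the employee's judgement instead of
    # trying to write a foolproof rule for every contingency.
    return {"status": "referred", "by": "officer judgement"}

print(process_request({"type": "standard", "amount": 250}))
print(process_request({"type": "unusual", "amount": 250}))
```

The point of the sketch is how little it specifies: the simple path stays cheap for the vast majority of cases, and the "referred" branch is where employee discretion does the work that a thicket of rules would otherwise (badly) attempt.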

smart simplifying: where the complexities being removed don't serve any useful role in the process. To simplify something in a smart way you have to understand the purpose of the process and to see how each part of the process contributes (or not, as the case may be) to that purpose. It is slicing through the Gordian knot of complexity rather than trying to unravel it one snarl at a time.

Example:
A flowchart could be used to map every aspect of a given process, showing every decision point, running over several pages and looking like a big flat plate of spaghetti. Or it could be simplified to show the usual flow of the process, with notes explaining what happens when there is an exception. The former is unusable and remains unused, so it was effectively a waste of time developing it. While the latter doesn't comprehensively describe the process, it does give an overview of each of the major steps and makes it clear what the typical flow is. And once a person understands the typical flow, they are better placed to understand deviations from it.
However, remember:
Things should be made as simple as possible but no simpler.
~ Albert Einstein

smart complicating: where a process is made more complicated because of differences that actually do make a difference.

Example:
The type and dosage of medication may depend on a number of different factors: age, bodyweight, other medications a person is taking, allergies and other conditions. Taking all these factors into account when prescribing is smart complicating since it could be a matter of life and death.
The smart complicator recognises when differences matter enough to require specialised attention.
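As a rough illustration of smart complicating (the function name, drug names, numbers and rules below are all invented for the sketch - this is not medical guidance), each branch exists only because the difference it tests for genuinely changes the outcome:

```python
# Hypothetical dosage rules, purely illustrative - not medical advice.

def recommended_dose_mg(age_years, weight_kg, allergies, other_meds):
    """Smart complicating: every branch corresponds to a difference
    that actually makes a difference to the patient."""
    if "drug_x" in allergies:
        return 0               # contraindicated outright
    dose = 10 * weight_kg      # baseline scales with body weight
    if age_years < 12:
        dose *= 0.5            # paediatric reduction
    if "drug_y" in other_meds:
        dose *= 0.75           # known interaction lowers the safe dose
    return dose

print(recommended_dose_mg(30, 70, set(), set()))  # adult, no complications
print(recommended_dose_mg(8, 25, set(), set()))   # child, no complications
```

Note that none of the branches is there "just in case": remove any one of them and a real, known difference between patients stops being taken into account. That is the test that separates smart complicating from dumb complicating.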

There is a sense in which these approaches group into pairs of opposites:

smart simplifying vs dumb complicating
smart complicating vs dumb simplifying

The hard thing is working out which you are doing, and the only way to find out is to study the process and understand its purpose, and whether simplifying or complicating (in the right way) will better achieve that purpose.

If you don't properly understand a process, it may be that there is no reason for it at all; but it may also mean that you need to keep your hands off it until you have a better handle on it.

This can be a hard lesson for some managers to learn: some always try and complicate things, others may always try to simplify them even when it causes problems. There may be a tendency to think that doing something is better than doing nothing, even if you're doing the wrong thing.

Don't give in to this tendency!

Careful thought is a form of action too, and it may save the need for remedial action later.

Sunday, June 26, 2011

Why employee recognition schemes fail - Part 2: The nomination process

One of the reasons that Recognition Schemes fail is that there are many biases built in to the nominations process, even if the people doing the nominating are employees themselves.
Some of these biases include:
  • quiet achievers (people who come to work, do their job and do it well without any fuss) are unlikely to be nominated simply because they don't even make it onto the radar.
  • self-promoters - There are staff who actively self-promote, who draw attention to every good thing they do while minimizing or drawing attention away from the things they do less well or fail to do at all. This makes such staff more salient than others who may be bending all of their efforts to just doing a good job rather than to big-noting themselves. Self-promoters may be more likely to be nominated because everyone is aware of what they've done, even though they may have done no more than anyone else. (On the other hand, blatant self-promotion may also result in a lower likelihood of nomination because no-one likes them.)
  • friends nominating friends (self-explanatory)
  • mutual nominations or circular nominations (with staff nominating each other)
  • underperformers who have done one outstanding thing - People may perform well in a high visibility task, but have otherwise poor performance. In fact their neglect of their normal work may have contributed to their more public success. To reward them for this would send a poor message: that the day-to-day work is unimportant, that only high profile tasks matter.
  • people working on high profile projects - Staff are sometimes recognized for doing a task or project that they were selected to perform, so it was the job they were being paid to do. There is no way of knowing how well any other person might have performed in the role had they been given the opportunity or whether the person’s performance is outstanding relative to what could have been achieved. So there is an opportunity bias in recognising anyone for project type work.
  • workload bias - The people doing the nominating are the ones who have enough time on their hands to make such a nomination. So in general a smaller proportion of nominations would be expected from the busiest, highest volume work areas than from low volume work areas. To a degree, the proportion of nominations from an area could be more indicative of over-resourcing than of great performance, and it may be that what that work area considers great performance would be considered mediocre in a higher volume area.
  • recency bias - nominations may be made on the basis of things that have recently happened rather than on things that have happened throughout the period covered by the awards.
  • cynicism - work areas that are cynical about the whole thing may submit fewer nominations
Any, or more likely all, of these biases are likely to affect the quality of nominations received by any selection committee and to undermine the credibility of the whole process. Managers sometimes assume that staff aren't aware of such biases, but in general staff are more aware of them than the managers, and as a result recognition awards tend to become a fiasco.

However, if the employees do the nominating rather than the managers, there is at least a perception of some equity in the process and an absence of favoritism.