Monday, August 15, 2011

Small Bets

To get better you have to take risks. There is no other path.

You have to try things where you can't necessarily predict the outcome, evaluate your results and then either kill or modify the things that didn't work as well as expected.

One of the problems in many organisations, particularly public sector organisations, is that they only want to bet on sure things and no-one wants to be associated with something that fails. So nothing truly original is ever tried and those things that are tried are never evaluated as failures and killed off as they deserve to be. Instead they become institutionalised.

There is almost a 1984-style double-think that goes on where things that have clearly failed to live up to expectations instead are lauded as outstanding successes, even while the average worker looks on like the small child in The Emperor's New Clothes, wondering where are the magnificent clothes that the management are crowing about. This does nothing for either the effectiveness of the organisation or the credibility of management.

In Little Bets, Peter Sims argues that organisations need to make what he calls 'little bets', small experiments which may well fail, but which may well also point us in the direction of what will work.

In his own words,
...little bets are concrete actions taken to discover, test, and develop ideas that are achievable and affordable. They begin as creative possibilities that get iterated and refined over time, and they are particularly valuable when trying to navigate uncertainty, create something new, or attend to open-ended problems. When we can't know what's going to happen, little bets help us learn about the factors that can't be understood beforehand. (p.8)
The problem is that when new processes are introduced in organisations, they are often presented as faits accomplis rather than as experiments or works-in-progress. They are prematurely frozen before anything has actually been learned from them and promulgated as
'this is the way things are going to be from now on'
rather than
'this is how things are going to be for the time being until we have learned whether it works as expected'.
And once they are cemented in as part of the status quo, they aren't revisited to see whether they have even been cost-effective in achieving the desired objective.

So what is the answer?

Firstly, I think that honest evaluation needs to be built into the process: what has been learned from what was attempted, and what were the obstacles to success, particularly unforeseen obstacles and obstacles created by individuals. In other words, instead of a whitewash there should be scrutiny to see what can be learned, either to improve the existing process or to underpin future attempts at change.

Secondly, a kill date needs to be set upfront. That is, unless the new process has shown demonstrable benefits that exceed its costs by a particular date, it will be killed. This avoids things that don't work becoming institutionalised and ensures that there has to be some justification for the process to continue.

Thirdly, provided the rationale and data on which a new process was based were properly documented, there should be no stigma attached to its failure. The only stigma should attach to those who try nothing new (timidity or inertia) and to those who try new things without adequate analysis (foolhardiness or laziness).

If an organisation is prepared to be satisfied with mediocrity then it can continue to try the same old conventional no-risk solutions, solutions that have low risk of failure but zero risk of outstanding success.

Or an organisation can try making small bets, learn from the results, and advance towards excellence.
