Sunday, November 24, 2013

A common standard

Suppose there was no such thing as money and you needed to go shopping. Imagine how difficult and onerous it would be to have to establish exchange values for the things you had to trade and the things you wanted to buy. Money makes the whole thing so much easier. Money is, quite simply, a common standard of value.

Something similar to the bartering example occurs in businesses. Each type of work a person does may be measurable, but if you have people doing different kinds of work, or different people doing different mixes of work, it can be difficult to determine how productive everyone is.

The solution may be to create a common standard.

In one business I have been working with, staff performed a range of seven different kinds of work. The organisation knew how much time each person was at work (after deducting sick and other kinds of leave) and they knew how much of each type of work different people were doing, but they were stuck as to how to measure productivity.

They had six months' worth of data for each person, so a multiple linear regression was done to determine how much time each type of work was taking. As a model it turned out quite well, with an r-squared value of 0.91, indicating that time spent at work was strongly related to the amount of work completed (which is what you would hope!). The coefficients from the linear model were used to weight each type of work in terms of a 'standard work item', which for ease of calculation was defined as taking 10 minutes. If the model said that a particular kind of work generally took 22 minutes, then it was given a weight of 2.2 standard work items.
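
To make this concrete, here is a minimal sketch of how such weights might be derived, assuming a hypothetical file with one row per person per month containing a person identifier, the minutes worked, and a count of each of the seven work types (the file name and column names are illustrative, not the ones the business actually used):

    import numpy as np
    import pandas as pd

    # Hypothetical data: person_id, minutes_worked, type_1..type_7 counts.
    df = pd.read_csv("monthly_work.csv")
    work_types = [f"type_{i}" for i in range(1, 8)]

    X = df[work_types].to_numpy(dtype=float)        # work completed
    y = df["minutes_worked"].to_numpy(dtype=float)  # time at work

    # Fit minutes = sum(coef_i * count_i) by least squares.
    # No intercept here, on the assumption that zero work takes zero time.
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Goodness of fit (coefficient of determination).
    residuals = y - X @ coefs
    r2 = 1 - residuals @ residuals / ((y - y.mean()) ** 2).sum()

    # Express each work type as a multiple of a 10-minute standard item.
    weights = coefs / 10.0
    for name, minutes, w in zip(work_types, coefs, weights):
        print(f"{name}: ~{minutes:.0f} min = {w:.1f} standard work items")
    print(f"r-squared: {r2:.2f}")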

What this meant was that, for the first time, it was possible to express everyone's performance in terms of a common standard, by weighting the work they actually completed to give a total number of standard work items. Since the time each person was at work was also known, a measure could be calculated for each person (minutes per standard work item) and compared to the same measure for the unit as a whole.
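
Continuing the sketch above (again with made-up column names), the per-person measure might look like this:

    # Aggregate the six months of data to one row per person.
    person = df.groupby("person_id").sum(numeric_only=True)

    # Total output in standard work items, then minutes per item.
    person["std_items"] = person[work_types].to_numpy() @ weights
    person["min_per_item"] = person["minutes_worked"] / person["std_items"]

    # Productivity relative to the unit as a whole: staff who are
    # faster than average score above 100%.
    unit_rate = person["minutes_worked"].sum() / person["std_items"].sum()
    person["productivity_pct"] = 100 * unit_rate / person["min_per_item"]

    print(person[["std_items", "min_per_item", "productivity_pct"]])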

The results were enlightening, to say the least! Some staff had productivity as low as 60% of the average while others were as high as 170%, with most staff falling within the 80%-120% range. Of course, quantity without quality would be worse than useless, but a separate measure of quality showed that the most productive staff also performed quality work.

Supervisors in this business had long suspected that some of their staff were unproductive, but they had no objective measure with which to quantify this. Now they could tell exactly how productive each member of their team was, both in absolute terms and compared to the other sixty staff in the business. Since the same data sources for time worked and work completed were used for all staff, and the work was weighted in the same way for everyone, there was also no scope for anyone to argue that they were being singled out or treated unfairly.

By creating a common objective measure, it was now possible to start having discussions with less productive staff to determine whether there were training or behavioural issues that needed to be addressed. However, it was also now possible to look at the high performers, to see how they achieved their results and whether anything they did could be transferred to less productive staff. As a result, there was increased scope for improving the collective productivity of the business.

One side effect was that, now that staff were getting regular feedback on their relative productivity, everyone had both the information and the motivation to try to improve their performance.

Sunday, November 17, 2013

Information must be actionable - the importance of context

Sometimes we ask for information but once we have it we realise that there is nothing we can do with it because we didn't capture enough context to interpret it.

Here are three examples from one organisation:
  • An open forum was held to report to staff on some organisational changes, and part of the forum was spent capturing staff ideas for improvement. Ideas came thick and fast and were noted as quickly as possible on butcher's paper for collation and later consideration. When managers later met to consider the merit of the ideas, they spent most of their time trying to work out what the ideas meant: there hadn't been time to clarify with staff during the forum, insufficient detail had been gathered, and most of the ideas had been summarised in fewer than ten words.
  • A survey was placed on the organisation's website for voluntary, anonymous completion by clients. For each question where a client indicated they were dissatisfied with some aspect of the service, the client was offered the option of elaborating in their own words on what they saw as the problem. Twenty-five clients indicated that letters did not give enough information explaining the decision. None of them elaborated. As a result the information was useless: since the survey was anonymous, it wasn't possible to track back to the letters sent to those clients to see what they meant, and there was insufficient information to determine how the organisation's written communication could be improved. (Ironically, the respondents did exactly what they claimed the organisation was doing.)
  • In staff surveys, one issue which regularly comes up is poor communication by management. Yet senior management still don't know what staff mean when they talk about poor communication: is there too much? Too little? Is it too detailed? Is it confusing? Is the problem limited to specific managers, or is it general? What specifically is meant by 'poor communication'? No one seems to know.
In all three cases, the lack of context resulted in resources being wasted gathering information that didn't inform and wasn't actionable in any way. Then further resources were wasted trying to mind-read what was meant.

Most of the time it would be better to capture stories. For example, in the case of a staff survey, we could ask staff to describe an example of poor communication within the past three months: why they thought it was bad, what its effects were, and how it could have been done better. Collecting stories about poor communication would let us narrow the problem down to something we could deal with, rather than a nebulous, amorphous 'something' which we could fix if only we knew what it was.

But even more important is to clearly understand what sort of information will be useful and then to properly design a methodology for getting it. Survey design is sometimes thought of as a no-brainer, something anyone can do. And that is why so many surveys are a waste of time, money, and effort. Thinking it through at the beginning is more likely to yield a payoff than trying to be a psychic at the end! Use foresight rather than hindsight. If you settle for 'garbage in' then you are also settling for sorting through the 'garbage out' in the hope of finding something useful.

Not a terribly exciting prospect is it?

Thursday, November 14, 2013

Compared to What? (Pt.2)

In one organisation I have been working with, a number of staff complained that they were suffering from 'sinusitis' following a change in the air-conditioning contractor.

Sick leave statistics were requested from the HR department, and sure enough, the number of days lost to 'sinusitis' had increased significantly over the same figure for the previous year. So it seemed like an open-and-shut case, right?

Wrong! Sinusitis can easily be confused by the layman with the common cold, hay fever, influenza or certain kinds of headache, and most of these cases were not medically diagnosed. Even where the person sees a doctor, the chances are that if the patient claims it is down to the air-conditioning, this will influence the diagnosis, especially in the absence of any medical tests.

So what was the actual situation? A business analyst (himself a regular sufferer of sinusitis since childhood) took another look at the data and found three interesting anomalies:

  • firstly, the total time taken for sick leave from all causes had not increased since the previous year;
  • secondly, the number of cases of cold and influenza had dropped significantly compared with the previous year; and
  • finally, the seasonal pattern of 'sinusitis' in the current year was remarkably similar to the seasonal pattern of influenza in the previous year.
What this suggested was that the number of actual cases of sinusitis had not increased, but that staff had changed their definition of what constituted 'sinusitis'. In epidemiology, this is known as a classification error. Staff may have seen something on TV about sinusitis, or their doctors may have changed what they classed as sinusitis. In any case, what this pointed to was that the apparent increase was an artefact of the change in classification.
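
As a sketch of how those three checks might be run, assuming a hypothetical table of sick-leave records with year, month, cause and days columns (the file name, cause labels and years are illustrative):

    import pandas as pd

    leave = pd.read_csv("sick_leave.csv")  # hypothetical file

    # 1. Has total sick leave from all causes changed between years?
    print(leave.groupby("year")["days"].sum())

    # 2. Have cold/influenza days fallen as 'sinusitis' days rose?
    by_cause = leave.pivot_table(index="cause", columns="year",
                                 values="days", aggfunc="sum")
    print(by_cause.loc[["sinusitis", "influenza", "cold"]])

    # 3. Does this year's monthly 'sinusitis' pattern track last
    #    year's influenza pattern? A high correlation suggests a
    #    classification error rather than a real increase.
    monthly = leave.pivot_table(index="month", columns=["year", "cause"],
                                values="days", aggfunc="sum").fillna(0)
    corr = monthly[(2013, "sinusitis")].corr(monthly[(2012, "influenza")])
    print(f"seasonal correlation: {corr:.2f}")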

The general principle we can derive from this is that you can't look at things in isolation. You need to look at the larger picture, and where you notice anomalies, see how those anomalies relate to the event or situation which prompted the initial investigation.

In the case above, it seemed reasonable to compare the current year's cases of what staff were calling 'sinusitis' with the previous year's. Often it does make sense to compare like with like. But where an apparent change bucks a trend, it requires investigation. Where a totality hasn't changed, the only way one part of that totality can increase is for another part to decrease; and where the part which has increased is easily confused with the part that has decreased, you may be facing a classification error.

In this instance, accepting the initial data at face value could have led to corrective measures that weren't necessary. Because someone looked beneath the surface, this didn't happen.

Tuesday, November 12, 2013

You can't hitch a rooster to a wagon...

...well, you can, but it would have to be a little wagon, and it would be more of a joke than a way to get any useful work done! A horse and a rooster are excellent animals in and of themselves, but you wouldn't want a horse in a henhouse or try to saddle up a rooster; it would just be silly.

Yet we sometimes do the same thing when we come across new ideas. In one organisation I know, the flavour of the month is Behavioural Insights. Now, this is a useful concept in itself and has the potential to work well where it is applicable.

But in this organisation it has become an end in itself: without really understanding its scope of application, one of the managers is trying to apply it where it doesn't really work. It isn't that the manager has seen an opportunity to exploit; rather, they have fallen in love with a new tool which they don't really understand, and they are itching to use it. Of course, they can't point to how it can be used, but that isn't their problem: they have delegated it to someone else, who is tearing their hair out trying to see its relevance.

In an episode of the Canadian sitcom "Corner Gas", Brent and his father are having an argument about whether Brent should rent out videos from the gas station. Brent appeals to his mother for support: "Ma, Dad doesn't know what he's talking about", and his father responds, "I don't want to know what I'm talking about". Sometimes this is what happens in management: a manager is so keen to apply an idea that they don't take the time to "know what they are talking about."

And that is the take-home: without looking at and understanding the full context of a technique or process, you can generate a lot of wasteful activity but not much progress. Understand first, then investigate where, or if, it can be applied.

Otherwise, you will end up with a rooster trying to pull a very big wagon, and unless the rooster is Foghorn Leghorn, getting nowhere.