Saturday, September 08, 2012

A new approach to evaluation

Over the last few months, aided by our developers, we have been building a machine that converts training events into behavioural change. We expect to plug it in and switch it on soon, so perhaps now is the time to talk about it.

The problem was never evaluating training. Anyone with a scientific background will know that an 'independent measures' design will do the job: two groups are created at random, one receives training and the other does not, and the difference in outcomes is measured. Business ethos generally precludes this approach, but 'matched pairs' would work just fine: find a set of people similar to those who have received training and draw comparisons.
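
As a rough sketch of what that comparison might look like in practice - assuming you had some per-person performance measure to compare, which is of course the hard part - the test itself is trivial. All data here are invented purely for illustration.

```python
from scipy import stats

# Independent-measures comparison: one trained group, one untrained,
# compared on some post-training performance metric (invented data).
trained   = [72, 68, 75, 80, 77, 71]
untrained = [65, 70, 63, 68, 66, 64]

t, p = stats.ttest_ind(trained, untrained)
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p-value suggests a real effect
```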

The truth is that we don't want to do evaluation: it will only reveal what we already suspect - that there is no significant impact. It's much easier to fall back on the mantra. And in this fog we miss the glaringly obvious: at school we learned not because we attended lessons but because we prepared for exams - we learned in readiness for the test. No tests, no learning.

But tests were never a great call to learning; they are artificial challenges - artificial concerns - a means of enforcing learning, and they result in token learning. Performance support differs precisely in that it seeks to respond to the challenges people already encounter.

I don't doubt for a second the value of learning events, just as I don't doubt the value of dinner parties, weekend breaks or trips to the movies. But businesses require more tangible justifications.

It's common knowledge that line managers have a key role to play in the outcomes of training. Where line managers clarify expectations and monitor progress pre- and post-learning, the measurable impact is greater. Why? It's a phenomenon similar to the 'Hawthorne Effect': the mere act of observing experimental subjects leads to a change in their behaviour. But line managers are not the most powerful influence on individual behaviour: peers are.

The mechanism relies on 'iterative peer review': prior to training, attendees select the peers they would like feedback from, on the behavioural outcomes selected by the programme manager. The attendees rate themselves, and the difference between their self-rating and their peers' ratings (their difference score) is returned to them before they attend the event. So far so good - something like a '360-lite'.
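
To make the arithmetic concrete, here is a minimal sketch of how a difference score might be computed - assuming numeric ratings (say 1-5) keyed by behavioural outcome. The names and data shapes are illustrative, not our actual implementation.

```python
from statistics import mean

def difference_scores(self_ratings, peer_ratings):
    """Self-rating minus average peer rating, per behavioural outcome."""
    return {
        behaviour: self_ratings[behaviour] - mean(scores)
        for behaviour, scores in peer_ratings.items()
    }

self_ratings = {"gives clear feedback": 4, "delegates effectively": 3}
peer_ratings = {
    "gives clear feedback": [3, 2, 3],   # one score per chosen peer
    "delegates effectively": [4, 4, 3],
}
print(difference_scores(self_ratings, peer_ratings))
# A positive score suggests the attendee rates themselves more highly
# than their peers do - useful to see before walking into the event.
```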

After the event, however, the system automatically prompts the chosen peer group to reassess those same behaviours at intervals of, say, one, three and six months. At the end of this period a 'change score' is calculated: an average value representing the amount of observed behavioural change that has taken place. Knowing that they are living up to the expectations of their peers, people make an effort to change. By coupling a meaningful challenge to the event, learners will endeavour to practise what they have learned - and we can skip directly to robust 'level 3' results.
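
Again, purely as a sketch: the change score could be as simple as the average shift in peer ratings across the follow-up rounds, relative to the pre-event baseline. The one-, three- and six-month intervals come from the design above; the rest is assumption.

```python
from statistics import mean

def change_score(baseline, follow_ups):
    """Average shift in peer ratings across follow-up rounds.

    baseline:   {behaviour: [peer scores before the event]}
    follow_ups: list of {behaviour: [peer scores]}, one dict per
                round (e.g. at one, three and six months)
    """
    shifts = [
        mean(scores) - mean(baseline[behaviour])
        for round_ratings in follow_ups
        for behaviour, scores in round_ratings.items()
    ]
    return mean(shifts)

baseline   = {"gives clear feedback": [3, 2, 3]}
follow_ups = [
    {"gives clear feedback": [3, 3, 4]},   # one month on
    {"gives clear feedback": [4, 4, 4]},   # three months on
]
print(change_score(baseline, follow_ups))  # positive => observed improvement
```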

That's the theory, anyway. I will let you know how we get on.

2 comments:

  1. Great idea, will put it into action ourselves too and let you know how we get on.

    We've been working for a while with a similar concept of self and peer assessment as a novel and highly effective way of leaving the responsibility for learning with the learner - not always easy in corporate settings, where senior executives often want to sit back and be entertained.

    It is based on some work that was pioneered by James Kilty in the 1970s through the Human Potential Research Project at Surrey University. Can't find much on the web about it but it looks like he has written a bit about 'co-counselling' here http://www.kilty.demon.co.uk/index.htm. You might also be interested in this on peer-reviewed bonus systems that was tweeted recently from LDRB http://ldrlb.co/2012/09/is-it-time-for-a-peer-reviewed-bonus-system/.

  2. Just found your blog - really good stuff. Thanks. It also occurs to me that your system here is similar to social signals like reviews on Amazon. I could see future workers and learners adding to globally outsourced value chains in a real-time, fluid marketplace, being reviewed or scored by the people they contract to, their peers, suppliers, the people they subcontract to, and so on. This could drive performance and learning. Thanks again, great stuff.
