A colleague kindly sent me the output from the recent APM Assurance Specific Interest Group, which focuses on assessing project quality and performance. It’s quite nice, at least by the standards of the profession, though with occasional lapses into half-baked thinking. As usual with most would-be management experts, they are obsessed with turning everything into quantitative measurement. That’s not a bad idea in itself, though the uniqueness of individual projects and the fact (yes, fact) that metrics are only a means and never the end do suggest that the desire to be measurable is leading them into pointless and inconsequential quantification. This in turn means that each attempt to provide a solution results only in an ever more artificial definition of the problem.
Take, for example, their attempt to quantify RAG ratings. I’m firmly opposed to this on principle – RAG should define a qualitative difference in consequence, not just an arbitrary definition of the ‘Oh-well,-1-to-3-can-be-red-and-4-to-6-amber,-and-oh-how-can-we-say-that-10-is-really-special?-I-know,-let’s-make-it-blue!’ variety.
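To see how little such a scheme actually encodes, here is a minimal sketch of the kind of arbitrary threshold lookup being mocked above. The cut-offs are taken from the joke in the previous paragraph, not from the APM paper or any real standard:

```python
def rag_from_score(score: int) -> str:
    """Map a 1-10 score to a RAG(+blue) status by bare thresholds.

    Illustrative only: the thresholds are exactly the arbitrary
    '1-to-3-red, 4-to-6-amber, 10-is-blue' variety criticised above.
    """
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score <= 3:
        return "red"
    if score <= 6:
        return "amber"
    if score <= 9:
        return "green"
    return "blue"  # 10 is 'really special'
```

The point, of course, is that nothing in this function expresses a qualitative difference in consequence between a 3 and a 4; the boundaries are pure convention.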
Exactly how objective and rigorous this is becomes clear when they find that they can’t actually tell you what the difference between neighbouring scores is. Their scoring for 4 is ‘Better than a 3, but some elements required for a 5 rating are not in place’. And for 7? ‘Better than a 6, but some elements required for an 8 rating are not in place’. As a colleague immediately responded to this marvellous insight, ‘No shit, Sherlock…’
I suppose there is some kind of sense in this. It lets you deal with the all-too-familiar situation where you find yourself unable to decide between alternatives. Unfortunately, all that really means is that the scale you are trying to use is not defined objectively, rigorously or consistently enough (usually because, in my experience, it isn’t a single scale at all). Which is only to say that it is still too immature to be used. And yet here it is, being recommended as a professional standard. All of which leads me to refer the reader – and the APM – to my previous piece on professionalism.
The rest of the paper is riddled with the sort of inarticulacy and arbitrariness that suggests that project managers probably shouldn’t be allowed to write standards or even evaluate projects. I particularly despair at the description of what needs to be in place to get a 10: ‘Processes have been refined to be best practice. IT is used in an integrated way to automate the workflow, providing tools to improve quality and effectiveness. The project is demonstrating innovative techniques, thought leadership and best practice’.
There is no definition of best practice, so it starts from a completely meaningless idea. The assumption that the Nirvana of management is automation is also a bit scary: providing IT-based tools to manage workflow and improve quality and effectiveness, far from being best practice, is about as basic as it gets. Well, it is around here. As for ‘demonstrating innovative techniques, thought leadership and best practice’ (there it is again!), having led innovation management programmes and having routinely laughed/despaired at the quality of thinking that portrays itself as ‘leadership’ in most organisations, I am astonished at what the APM has been prepared to release under its banner.
(For what is, I think, a slightly more intelligent approach to RAG statuses – which is to say, one which is focused on action, not measurement – try here.)