I'm trying to tease out connections between Hal Macomber's blog post about being a pessimist (maybe!) when it comes to the chances of project failure, Martin Fowler's thoughts on failure and productivity, and Dale Emery's human-centered definition of project success. I'm not sure I've succeeded, but hopefully I can massage these wandering thoughts into something more concrete in the future.
On the whole I think that the word 'failure' has such a negative effect on those we apply it to that we should spray it around a little less. Is a 100% project success rate realistic? (And if not, then what is?)
- There are plenty of high-risk projects with a possible high return, e.g. where the technology is at a research stage. If we fail on one of these - when we were likely to fail - then is that 'bad'? (And when we succeed against the odds, do we recognise that either?)
- Some percentage of projects will suffer mishaps due to special causes rather than common causes (in Deming's sense of the terms), so is it fair to label those teams as failures?
Sometimes calling an IT initiative a failure is just part of the corporate blame game. It can be politically expedient to have a scapegoat, but perhaps some failures should be reframed in terms of their potential for future success?
- Throw one away: we didn't achieve X and Y but we still achieved Z (and learnt more about X and Y along the way), so we'll follow Fred Brooks's advice and start over. (And I know he really said we should plan to throw one away, but in the context of this reframing it still seems appropriate.)
- We live to fight another day: Project J didn't deliver, but the 11-person team have learnt more about this new technology, they aren't about to quit or take a month off sick, and they will be a real help to Project K next week. (The opposite of 'Success By Death March'.)
- The Law Of Unintended Consequences: version 1 of the application wasn't really what the customer wanted, but the business environment just changed and quite fortuitously that will make version 2 much easier to complete. (How likely is this? I once saw exactly this situation on a small scale, so it is possible - though I'll concede it's not going to happen often.) On a related note, the chequered initial history of an application like Excel shows that a change to the technology environment can also usher in success for a one-time software 'failure'.
At the other end of the spectrum are situations where the spectre of cancellation has to be invoked before the project gets the attention it needs (because for some reason cancellation is often seen as the worst way to fail).
- "At the end of our first iteration it seems likely that you won't be able to rely on the SnakeOil technology you want to roll out world wide so we're not going to be able to deliver" (subtext: but we think we know of some alternatives so let's talk about them.)
- "Your business representatives are always too busy to talk to us, our team is still short the six developers the IS department promised us, we can't get Security to give us access to the building where the users are, and quite frankly I don't think we have a chance" (subtext: but if you want us to succeed, then please demonstrate it by helping to sort these problems out.)
It's easy to say that projects fundamentally fail because they weren't set up for success, but until we develop 20/20 foresight that's not a particularly useful statement. It's almost as trite as saying that if none of my projects have failed so far, then none of my projects will ever fail. (And what are the chances of that?!) Perhaps the only way forward for the whole failure/success debate is to state up-front (and adjust as we go) what the binary tests are that tell us whether we're succeeding or failing on this project.
I got thinking about this because Industrial Logic are doing some fascinating work with management tests. From what I can tell (at a distance) my concept of project tests is similar - though I can imagine a suite of project tests that collectively plot a point somewhere on a two-dimensional scale, with one axis running from succeeding to not-succeeding and the other from failing to not-failing. (And if the project falls into the succeeding AND failing quadrant then I guess that means the tests must be contradictory!) Such tests would have a life (and usefulness) far beyond the duration of the initial software development effort.
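To make that two-dimensional idea a little more concrete, here's a minimal sketch in Python of what such a suite of project tests might look like. Everything in it is hypothetical - the test names, the thresholds, and the shape of the project state are my own illustrations, not Industrial Logic's actual management tests:

```python
from dataclasses import dataclass
from typing import Callable

# A project test is just a named yes/no check over the project's
# current state. Each test informs one of the two axes:
# 'succeeding' or 'failing'.
@dataclass
class ProjectTest:
    name: str
    axis: str                      # 'succeeding' or 'failing'
    check: Callable[[dict], bool]  # True when the test passes

# Hypothetical tests: names, thresholds, and state keys are all
# made up for illustration.
TESTS = [
    ProjectTest("customer attends iteration reviews", "succeeding",
                lambda s: s["reviews_attended"] / s["reviews_held"] >= 0.8),
    ProjectTest("features accepted last iteration", "succeeding",
                lambda s: s["features_accepted"] > 0),
    ProjectTest("team short-staffed versus plan", "failing",
                lambda s: s["developers_present"] < s["developers_promised"]),
    ProjectTest("core technology still unproven", "failing",
                lambda s: not s["tech_spike_passed"]),
]

def plot_point(state: dict) -> tuple[float, float]:
    """Return (succeeding, failing) scores in 0.0..1.0, each the
    fraction of that axis's tests that currently pass."""
    def score(axis: str) -> float:
        axis_tests = [t for t in TESTS if t.axis == axis]
        return sum(t.check(state) for t in axis_tests) / len(axis_tests)
    return score("succeeding"), score("failing")

# Example: a project that scores high on BOTH axes at once.
state = {
    "reviews_attended": 9, "reviews_held": 10,
    "features_accepted": 4,
    "developers_present": 5, "developers_promised": 11,
    "tech_spike_passed": False,
}
succeeding, failing = plot_point(state)
print(f"succeeding={succeeding:.2f}, failing={failing:.2f}")
```

The point of scoring the two axes independently is that a project really can land in the succeeding-and-failing quadrant, as the example above does - which, as suggested above, is a strong hint that the tests contradict one another and need revisiting.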