Reducing intellectual processes to metrics runs the risk of people working to the numbers not the goal
Whoever said that ‘whatever can be measured, will be managed’ was not lying. This principle can be seen in action everywhere you look, inside all sorts of companies as well as academia. Budgets are budgets and funding is funding, whether it’s coming from corporate profits or from a granting agency. Either way, someone is keeping track of the numbers.
That’s all fine – at least, until you get to the point of giving people grief about how many scintillation vials they’ve ordered or something. But that aside, the real damage in quantitative management comes as it gets applied to higher-level problems. In drug discovery there’s a slide that gets used over and over again, for example. It’s a funnel of some sort, and it shows, up at the wide part, how many projects are in the preliminary stage of research. That narrows as you get closer and closer to the clinic, narrows a great deal further at the point of clinical trials, and ends up with individual droplets coming out the far end for approved drugs. If you’re very fortunate, that is.
If you normalise these charts to figure out how many projects get started for every one that leads to a marketed product, the results are quite alarming. There’s no easy way around that. However, presenting the data in this format, with everything measured, invites every step of the process to be managed according to those numbers. So, we have to start x projects around here to get one drug, do we? Very well, that’s our new goal: we shall always make sure that we’ve started enough projects to provide for a stream of drugs coming out the back end of that funnel up there on the screen. If anyone appears bored or underemployed, we’ll start twice as many projects and get even more drugs out the door.
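That normalisation is just multiplied attrition. As a sketch only, with invented per-stage survival rates (these are illustrative numbers, not real industry figures), the projects-per-drug ratio falls out like this:

```python
# Hypothetical survival rates at each stage of the funnel.
# All figures are made up for illustration, not real attrition data.
stages = {
    "hit-to-lead": 0.50,
    "lead optimisation": 0.40,
    "preclinical": 0.60,
    "clinical trials": 0.10,
    "approval": 0.85,
}

# Chance that one started project survives the whole funnel.
overall = 1.0
for stage, survival in stages.items():
    overall *= survival

# Invert to get projects started per approved drug.
projects_per_drug = 1 / overall
print(f"Roughly {projects_per_drug:.0f} projects started per approved drug")
```

With these invented rates the answer comes out near a hundred to one, which is the sort of alarming ratio that tempts management to treat “projects started” as the lever to pull.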
But this is a fallacy. In fact, it’s worse than a fallacy, it’s an outright moral hazard. The incentive is now in place to simply start projects. And if those numbers start to flag, well, any old project might do to keep the total up. But the historical numbers used to set those targets did not come from people who were starting projects for the sake of it. No, they were people working on what they thought were worthwhile ideas. Starting projects just to be starting them is dangerously close to a cargo cult.
When you set up those project goals, you probably attached incentives to them as well, didn’t you? Bonuses, promotions and the like depend on everyone hitting their targets. This is where the moral hazard grows claws and fangs. Now the temptation will inevitably come when the organisation is one programme short of meeting its goals for the year. Just one. And it’s early November. There are several candidate projects, none of them particularly appealing and all of them needing more work to assess their potential (if any). Look at the calendar, though: what would it hurt to declare one of them a winner, just a bit early? Just this once? They’re not as bad as they look, are they? Why, that one over there, it’s looking quite promising from this angle, isn’t it? Oh dear.
The worst part of all this is that deadlines and goals are not in themselves evil. If you never have a concrete goal and a time you’d like to see it achieved by, you might be content to wander along for months (which might turn into years). These things really can make you think about priorities, or address crucial questions and experiments that you might have been neglecting. But setting them firmly enough to make you do the right things also increases the pressure to do the wrong things instead, and sometimes there’s very little space between those two choices.
This is where I would like to offer a foolproof method for avoiding the danger, but alas, I have none. In the end, nothing is truly foolproof, especially when we have the incentive to fool ourselves.