Measuring R&D productivity is a thorny issue. Dennis Lendrem urges the pharmaceutical industry to learn from its mistakes.
Metrics are great. ‘What gets measured gets done,’ as Peter Drucker said. But you need to be sure you’re measuring the right things – data-driven decisions are only as good as the data driving the decisions.
There is a big debate about this right now in the pharmaceutical industry, specifically around measuring R&D productivity. It all comes down to a choice of metrics: those who choose to ignore R&D costs point to the recent upturn in new therapeutic drug approvals, while those who don't point to the ever-increasing R&D spend per new therapeutic drug approved. As we have learned to our cost, the choice of metric has a profound effect on how R&D organisations behave.
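A toy calculation shows how both camps can claim support from the same numbers. The sketch below uses invented spend and approval figures, purely for illustration:

```python
# Invented figures for two five-year periods, to show how the choice of
# metric drives the story. These are not real industry data.
periods = {
    "2005-2009": {"spend_bn": 250, "approvals": 100},
    "2010-2014": {"spend_bn": 350, "approvals": 130},
}

for label, d in periods.items():
    cost_per_approval = d["spend_bn"] / d["approvals"]
    print(f"{label}: {d['approvals']} approvals, "
          f"${cost_per_approval:.2f}bn of R&D spend per approval")

# Approvals rise 30% (the optimists' metric) while spend per approval rises
# from $2.50bn to about $2.69bn (the pessimists' metric): same data, two stories.
```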
An expensive education
The ‘development speed’ initiatives of the 1990s are a great case study of what can go wrong. At the time, the pharmaceutical industry embraced ‘business process reengineering’. Companies strove to minimise the time spent bringing a drug to market, thereby maximising the period of patent protection.
The thinking was to take tasks normally performed in series and, except where subject to the laws of physics or nature, perform them in parallel. We could, for instance, bring forward some formulation development and stability testing while waiting for the early toxicology data to come in. These initiatives were extremely successful: companies halved the cycle time for successful molecules during the period 1990–2001.
However, there’s a catch. Bringing tasks forward and pushing more and more activities into parallel only works well if most of your molecules go on to become successful drugs. Sadly for the industry, the vast majority of molecules fail.
Optimising the R&D process around the tiny minority of molecules that make it to market was a costly mistake. At the time, some observers noted that increasing development speed would simply increase the R&D burn rate. It did.1
Worse still, pushing tasks into parallel meant we lost opportunities to terminate a project before incurring later-stage costs – the so-called ‘options’ value. Development speed changes may even have reduced R&D productivity – ‘the development speed paradox’: the faster you go, the longer it takes to get to market.2
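The arithmetic behind that paradox is easy to make explicit. The sketch below compares expected spend when stages run in series, with the option to stop after each failure, against starting everything in parallel; all costs and success rates are invented for illustration.

```python
# Expected-spend comparison: sequential stages (with the option to terminate
# after a failure) versus starting every stage in parallel up front.
# All costs ($m) and per-stage success rates are hypothetical.

def expected_cost_sequential(stage_costs, pass_probs):
    """Expected spend when each stage runs only if the previous one passes."""
    cost, p_reached = 0.0, 1.0
    for c, p in zip(stage_costs, pass_probs):
        cost += p_reached * c   # we only pay for stages the molecule reaches
        p_reached *= p          # probability of surviving to the next stage
    return cost

def expected_cost_parallel(stage_costs):
    """Expected spend when all stages start up front: every stage is paid
    for, whether or not the molecule was ever going to succeed."""
    return sum(stage_costs)

costs = [10, 40, 150]           # three development stages, $m each
probs = [0.6, 0.4, 0.5]         # probability of passing each stage

print(f"Sequential: ${expected_cost_sequential(costs, probs):.0f}m expected")
print(f"Parallel:   ${expected_cost_parallel(costs):.0f}m expected")
# Sequential: 10 + 0.6*40 + 0.24*150 = $70m. Parallel: $200m.
# Parallelising buys calendar time but forfeits the option to stop early,
# and the gap widens as failure rates climb.
```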
Progression-seeking behaviour – the tendency to push projects on to the next stage rather than terminate them – was already endemic within the pharmaceutical industry, and development speed initiatives built upon planning for success only exacerbated it.3
Maximising development speed clogged the development pipeline with marginal or failing medicines. ‘The industry became really slick at delivering late-stage failures to the marketplace,’ as Norman Einstein, chief executive of Scientific Radicals observed.
Extreme measures
Business process reengineering was a disaster for pharma. It increased both the average time to marketing authorisation and the average R&D costs per successful product launch, and reduced the average number of launches for every $1 billion spent on R&D. And we are still paying the price.
The plot above shows pharma R&D spend (in $ billions) per annum during the period 1990–2014. There are two breakpoints. The first is in 1997 – the year following the introduction of drug review charging, and about 3–4 years into business process reengineering, when the development speed initiatives began to bite. At this point R&D costs, already increasing, began to spiral out of control.
The second breakpoint is in 2008, when the industry began to focus on another single metric – R&D spend – ushering in the wholesale rationalisation of entire R&D organisations in a desperate bid to cap R&D costs.
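For readers who want to locate such breakpoints themselves, a simple approach is to fit piecewise-linear segments and search for the break years that minimise the residual error. The sketch below does this on made-up spend figures shaped like the series described above; the data are not the real numbers.

```python
import numpy as np

# Hypothetical R&D spend series ($bn), 1990-2014, built with breaks at 1997
# and 2008 to mimic the shape described in the text. Not real data.
years = np.arange(1990, 2015)
spend = np.concatenate([
    np.linspace(25, 35, 7),      # 1990-1996: steady growth
    np.linspace(38, 128, 11),    # 1997-2007: costs spiral upwards
    np.linspace(130, 135, 7),    # 2008-2014: spending capped
])

def sse_for_breaks(b1, b2):
    """Residual error of a three-segment linear fit with breaks at b1, b2."""
    total = 0.0
    for lo, hi in [(1990, b1), (b1, b2), (b2, 2015)]:
        mask = (years >= lo) & (years < hi)
        coeffs = np.polyfit(years[mask], spend[mask], 1)
        total += np.sum((spend[mask] - np.polyval(coeffs, years[mask])) ** 2)
    return total

# Grid search over breakpoint pairs, keeping at least two points per segment.
candidates = [(b1, b2) for b1 in range(1992, 2012)
                       for b2 in range(b1 + 2, 2013)]
best = min(candidates, key=lambda b: sse_for_breaks(*b))
print(f"Estimated breakpoints: {best}")   # roughly (1997, 2008) here
```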
The big question is: will this rationalisation improve R&D productivity? Probably not. If the lessons of the 1990s taught us anything, it is that an unhealthy focus on just one metric is likely to cause problems. The 2014 Summit on R&D Metrics in Boston offered a bewildering range of metrics; the trick is to capture a handful of key metrics on a single, highly visible dashboard. For no one metric is likely to capture an activity as complex, or as uncertain, as R&D.
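As a sketch of what such a dashboard might look like, here is a minimal set of complementary metrics reported side by side; the field names, thresholds and figures are my own invention, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class PortfolioDashboard:
    """A handful of complementary R&D metrics reported together, so that
    improvement on one cannot quietly mask deterioration on another."""
    median_cycle_time_years: float   # speed: candidate to approval
    spend_per_launch_bn: float       # efficiency: $bn of R&D per new drug
    launches_per_bn: float           # productivity: new drugs per $1bn spent
    late_stage_failure_rate: float   # quality: fraction failing late stage

    def flags(self) -> list[str]:
        """Crude warning rules; the thresholds are illustrative only."""
        warnings = []
        if self.late_stage_failure_rate > 0.5:
            warnings.append("pipeline clogged with marginal candidates")
        if self.spend_per_launch_bn > 2.0:
            warnings.append("burn rate outpacing output")
        return warnings

board = PortfolioDashboard(7.5, 2.6, 0.38, 0.55)
print(board.flags())   # both warnings fire for these invented numbers
```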
When the industry sought to reduce the cycle time of successful molecules, we were spectacularly successful at reducing cycle time. With just one metric, though, we lost sight of the fact that most molecules are not successful, and in the process we sub-optimised our research and development efforts. You may get what you ask for, but it may not be what you need. As the old adage goes: be careful what you wish for, because it might come true.
Dennis Lendrem is project manager, translational research at the Institute of Cellular Medicine, Newcastle University, UK.