Bringing a drug to market is an expensive and drawn-out process. Systems biology promises to make it more efficient. Philip Ball examines its potential.

If it seems outrageous that there are still incurable diseases in this age of pharmaceutical medicine, perhaps we should instead be amazed that there are any drugs for sale at all. Consider that it typically takes 14 years and $800m to bring a drug to market, that only one out of every 5000 potential drug candidates gets approval for medical use, and that even those that reach the stage of human trials fail to make the grade six times out of seven, and you begin to see what the drugs companies are up against.

But this expensive and drawn-out process will be greatly eased if the emerging science of systems biology achieves its potential. Systems biology promises to supply computer models of cells, organs and perhaps the entire human body that will enable drugs to be tested rapidly, cheaply and without risk by simulation - that is, in silico rather than in vivo. Computers are unlikely ever to completely replace human trials, but they could help to identify possible problems and eliminate poor candidates much more efficiently. This could reduce the timespan of drug development by two to three years and might knock over $200m off the eventual bill.

Although it could prove to be an invaluable tool for pharmaceutical companies, systems biology aspires to be far more than that. Indeed, it might even answer the question that the Human Genome Project claimed it would answer but ultimately did not: what does it mean to be alive? Rather than looking for the answer in the linear sequence of DNA bases, systems biology seeks out the interconnections between the component parts of cells and organisms, mapping out how these parts fit together and communicate with one another. It acknowledges that life is not a static picture but a dynamic, ever-changing process - not a book, but more like a society.

The fundamental and the applied aspects of cell biology can’t easily be separated, not least because you can’t predict what effect a drug will have unless you have a pretty good idea of how cells function. ’The two go hand in hand’, says bioengineer Douglas Lauffenburger of the Massachusetts Institute of Technology, US.

And the applications reach beyond pharmaceuticals. A systems-scale understanding of biological function might allow the re-engineering of microbes to perform new jobs, such as cleaning up pollution or producing biofuels or new materials. Designer bacteria might offer new ways to fight pathogens. Systems biology could even help combat the threat of biological warfare.

The traditional picture of molecular biology is a linear one: a gene encodes a protein, the protein carries out a particular function, and a fault in the gene shows up as a particular disease. Very often, however, geneticists and cell biologists find that life doesn’t fit this simple picture. Most diseases with a genetic component involve several genes, not just one. And signalling pathways have a tendency to branch; a protein thought to be involved in one function turns out to have apparently quite different functions too. In short, the cell is nonlinear.

So you can only get so far with a mere list of components, such as the Human Genome Project has provided. Now that this project has run its course, biologists are being forced to grasp the nettle of a systems approach to the cell. ’One of the most daunting challenges in biology and medicine is to begin to understand how all the parts of cells work together to create complex living organisms’, says Nobel laureate Alfred Gilman of the University of Texas Southwestern Medical Center in Dallas, US.

Some researchers argue that there is little fundamentally new about systems biology. There is a long tradition of building mathematical models of cell functions based on biochemical data, such as the rates of reaction between different enzymes and their target molecules. These models are comparable to the complex kinetic schemes devised by atmospheric chemists to understand the reactions of trace gases in the air. But systems biology is now attempting far grander things. One objective, pursued by initiatives such as the Virtual Cell Project at the University of Connecticut Health Center, US, and the E-Cell Project based at Keio University in Japan, is to create an entire virtual cell, which will enable researchers to predict the cell-wide response to a perturbation of some gene or protein. The international Physiome Project aims to do the same thing for particular tissues or organs.
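To give a flavour of what such a kinetic model looks like in practice, the sketch below (not drawn from any of these projects) integrates the textbook enzyme-substrate scheme as a set of rate equations; the rate constants and concentrations are purely illustrative.

```python
# A minimal sketch of the kind of kinetic model described above: a single
# enzyme E converting substrate S into product P via a complex ES.
# All rate constants and concentrations are illustrative, not measured values.
import numpy as np
from scipy.integrate import odeint

k_on, k_off, k_cat = 1.0, 0.5, 0.1   # binding, unbinding and catalysis rates

def rates(y, t):
    S, E, ES, P = y
    v_bind = k_on * S * E - k_off * ES   # net formation of the ES complex
    v_cat = k_cat * ES                   # conversion of the complex to product
    return [-v_bind, -v_bind + v_cat, v_bind - v_cat, v_cat]

t = np.linspace(0, 200, 500)
y0 = [10.0, 1.0, 0.0, 0.0]              # starting concentrations of S, E, ES, P
trajectory = odeint(rates, y0, t)
print("final product concentration:", trajectory[-1, 3])
```

A virtual cell is, in effect, thousands of coupled equations of this kind, with the added difficulty that many of the rate constants are unknown or must be inferred from experiment.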

These initiatives take their lead from the engineering sciences, and in particular from the way that electronic engineers plan and map the behaviour of circuits. Many of the concepts used in circuit design, such as feedback, modular structure, amplification and redundancy (alternative ways to do a certain task), seem also to be employed by nature in the way that cells are ’wired’ into a network of interacting genes, proteins and other molecules. This is understandable, because electronic engineering and biology have one thing in common: their designs must fulfil a function. In the former case, the design has been rationally planned at the drawing board; in the latter, apparent purpose has arisen spontaneously through the agency of natural selection.

Thus, one of the central concepts in systems biology is that of a network. In fact, there are several overlapping networks at play. Genes can be considered to regulate one another - for example, the activation of some genes may cause other genes to be ’switched off’, so that their protein products are no longer made. This interactive behaviour of the genome was first explored in the 1960s by the French biologists Jacques Monod and François Jacob, who talked about ’regulatory circuits’ which they called integrons. But genes interact via their proteins, and one can also think of a network of protein-protein interactions, as well as the influence of small molecules such as hormones and metabolites. Identifying all the components of these networks, and the connections between them, is the objective of post-genomics projects that are given labels such as proteomics and metabonomics.
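In computational terms, a regulatory network of this kind is often treated simply as a directed graph of who switches whom. The toy sketch below, with hypothetical gene names and interactions, shows how such a map can be queried to ask which genes might be affected by perturbing one of them.

```python
# Toy sketch of a gene-regulatory network as a directed graph.
# The gene names and interactions are hypothetical, chosen only for illustration.
regulates = {
    "geneA": ["geneB", "geneC"],   # geneA switches geneB and geneC on or off
    "geneB": ["geneD"],
    "geneC": ["geneD", "geneE"],
    "geneD": [],
    "geneE": ["geneA"],            # a feedback loop back to geneA
}

def downstream(gene, network):
    """Return every other gene that could be affected by perturbing `gene`."""
    seen, stack = {gene}, [gene]
    while stack:
        for target in network[stack.pop()]:
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen - {gene}

print(downstream("geneC", regulates))   # geneD, geneE, geneA and geneB
```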

That’s why one growth area in systems biology is the development of collective databases and software tools for organising such data and trawling through it to extract meaningful information about the architecture of cell circuits. For example, the US-based Alliance for Cellular Signalling is assembling a database of information about the signalling pathways defined by interactions between proteins. Gilman, the chairman of the project, now also heads a systems biology centre launched this February at the Southwestern Medical Center in Dallas. Several private companies are targeting the data-management and data-mining needs of systems biology. Ingenuity Systems, based in Mountain View, California, US, collects data from public and proprietary sources (when clients make the latter available) and organises it into a ’street map’ that establishes the connections. It has also developed ’knowledge management’ software called Enterprise which allows users to keep track of who has done what and how it fits into the big picture.

Some companies aim to piece together this sort of information into models of complete cells and organs, which clients will then use to test hypotheses about candidate drug targets and drugs. US company Gene Network Sciences in Ithaca, New York, is working on models of a colon cancer cell and of the heart, to aid drug development for treating cancer, inflammation and cardiovascular disease. Genomatica in San Diego, California, US, is focusing on genome-scale models of metabolism constructed on a ’software platform’ called SimPheny. Hitting the metabolic pathways of bacteria and other dangerous microbes could be a highly effective way of killing them, so models provided by SimPheny might help drug companies identify and validate targets and mechanisms of drug action. Modifications to a microbe’s metabolism could also enable it to produce therapeutic proteins, high-value chemicals, or biological fuels such as ethanol or hydrogen. Entelos, a company based in Foster City, California, US, is taking a more ’top-down’ approach by devising ’virtual patients’: computer models of human health that can be used for testing drugs and other therapeutic treatments on patients with different genetic profiles and lifestyles, with a focus on obesity, diabetes and asthma.

With many groups building their own models, there is also a risk of reinventing the wheel - duplicating results - which would make advances in understanding very inefficient. Colin Hill, chief executive officer of Gene Network Sciences, believes this is not yet a pressing problem: ’until models become worth something by becoming accurate, model exchange is a secondary issue.’ All the same, Hiroaki Kitano, head of the Kitano Symbiotic Systems Project funded by ERATO, the research programme of the Japanese government, has tried to anticipate this problem by developing a universal systems biology language in conjunction with his colleague Hamid Bolouri, now at the Institute for Systems Biology in Seattle, US. Systems Biology Markup Language is a computer language designed to allow different computational models to interface with one another. ’We hope eventually to utilise such a standard to easily import models developed by the greater academic community into our simulations,’ Hill says.
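To illustrate the model-exchange idea, the sketch below parses a simplified, SBML-flavoured description of a single reaction using only Python's standard library. It is not the real SBML schema, merely a cartoon of how a shared format lets different pieces of software read the same model.

```python
# A cartoon of the model-exchange idea: a simplified, SBML-flavoured XML
# description of one reaction, read with Python's standard library.
# This is NOT the real SBML schema; it only illustrates the concept of a
# shared format that any modelling tool could parse.
import xml.etree.ElementTree as ET

model_xml = """
<model name="toy_pathway">
  <species id="S" initial="10.0"/>
  <species id="P" initial="0.0"/>
  <reaction id="S_to_P" reactant="S" product="P" rate="0.1"/>
</model>
"""

model = ET.fromstring(model_xml)
for reaction in model.findall("reaction"):
    print(reaction.get("reactant"), "->", reaction.get("product"),
          "at rate", reaction.get("rate"))
```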

One goal of systems biology is to make pharmaceutical medicine more rational. In the past, researchers have used genetics and molecular biology to find likely targets for drugs, but have had little idea what effect hitting that target will have on physiological processes both ’upstream’ and ’downstream’. For example, if a drug is devised to interfere with the activity of a troublesome protein, the cell might simply respond by switching on another pathway that bypasses the problem and renders the drug useless. Alternatively, knocking out the target might also disrupt another pathway to cause unwelcome physiological side effects. Often the only way to discover such things is by clinical trial and error. If these unforeseen consequences don’t come to light until the late stages of human trials, millions of dollars may have been wasted. And even then, it might not be obvious why the drug is causing problems.

But by testing drug candidates on a reliable computer model, says Seth Michelson, vice-president of R&D at Entelos, researchers can deduce ’not only what is happening but why it is happening’. They can hope to spot potential problems in advance and see what needs to be done to fix them. For example, some carcinoma cells respond to radiation treatment not by dying but by proliferating. Mapping of the biomolecular pathways has shown that this is due to a positive feedback loop which, if snipped by drugs, ensures that radiotherapy has the desired result.
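The following toy calculation (not a real carcinoma model; all numbers are invented) illustrates the logic: a signal that feeds back positively on itself keeps growing after an insult, whereas cutting the feedback lets the same signal decay away.

```python
# Toy illustration (not a real carcinoma model; numbers are invented):
# a survival signal that reinforces itself keeps growing after an insult,
# whereas cutting the feedback loop lets the same signal die away.
def simulate(feedback_strength, steps=50, decay=0.3, insult=1.0):
    signal = insult
    for _ in range(steps):
        signal += feedback_strength * signal - decay * signal
    return signal

print("loop intact: ", simulate(feedback_strength=0.4))   # grows without limit
print("loop snipped:", simulate(feedback_strength=0.0))   # decays towards zero
```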

In a promising coup for systems-scale modelling, researchers at Entelos predicted that a potential drug for treating asthma, which had already entered clinical trials, would fail. The drug interfered with a signalling protein called interleukin-5, which can cause airway obstruction in asthma, and so it seemed likely that the drug would reduce asthmatic breathing problems. Animal trials seemed to support the idea. But the simulations of the Entelos team suggested that airway obstruction was also induced by other factors, and so the drug wouldn’t work in humans. This was confirmed in a subsequent clinical trial.

Finding potential drug targets is only the start of the problem. Candidate drugs then have to be tested in three phases of human clinical trials, with progressively larger patient groups. All are expensive and slow, and each phase can be a poor guide to what will happen in the next one. Over half of all candidates that reach Phase II trials fail at this hurdle, for example because of system-wide side effects or unexpected control mechanisms that circumvent the influence of the drug. These failures typically cost companies $50-100m per drug. ’The earlier ineffective targets are dropped’, says Michelson, ’the greater the savings in time and money’.

Simulations and models can’t eliminate the need for clinical trials, but they can make them more efficient by showing how to optimise protocols (such as the length of the study and the dosage) and how to identify the best patient groups. Eventually, systems biology models might be tuned to an individual patient’s genome to enable personalised medicine - a regime of treatment optimised for each individual.

One area where time really is of the essence is the treatment of biowarfare victims. It’s not just the short timescales that pose problems here. Because drugs against biological weapons may never be needed, there is no guaranteed market and so little incentive for private companies to develop them. Even for anthrax there is still only one vaccine approved for human use, and its efficacy has been tested only on animals, since there are no patients for human clinical trials. Entelos has begun to use its metabolic models to look for potential targets in the pathogen Bacillus anthracis and to study the human immune response the bacterium triggers. Studies like this are likely to have to rely on government support, as well as a willingness (already demonstrated by the US Food and Drug Administration) to streamline approval procedures relative to standard medical drugs.

One thing is clear: systems biology cannot rely on biologists and physicians alone. A systems-scale understanding of the functioning of cells will surely depend on collaboration between researchers from diverse backgrounds, involving in particular those with expertise in computational and mathematical modelling. Computer scientists are needed to organise and interpret the mass of data; engineers bring experience in the design and modelling of complex systems; physicists have a long history of studying the global behaviour of systems with many interacting components. Hamid Bolouri came to the Institute for Systems Biology, founded by genomics pioneer Leroy Hood, from a background in microelectronics and artificial intelligence. And chemists have an important part to play too. At the Bauer Center for Genomics Research at Harvard University, US, for example, which aims to uncover the ’design principles’ of biology, co-director Stuart Schreiber is striving to develop a ’pharmacological genetics’ in which small molecules might replace genetic manipulations to control the behaviour and interactions of genes. Peter Schultz at the University of California at Berkeley, US, has already reported molecules that can determine the genetic fate of undifferentiated cells - which tissue type they develop into. So chemists stand not only to benefit from the pharmaceutical applications of systems biology but also to play an active role in showing how the organisational principles of living systems can be redirected by non-natural agents: in other words, to explore and modify the chemistry of life.

Acknowledgements

Philip Ball is consultant editor for Nature, 4-6 Crinan Street, London N1 9XW, UK

Experimental advances
New experimental techniques for mapping pathways and networks of cells are driving systems biology. One of the key developments was the invention of DNA microarrays, which provide an instantaneous snapshot of many thousands of ’active’ genes in a cell.

Microarrays consist of a grid of spots of single-stranded DNA tethered to a surface, in which each spot contains the DNA corresponding to a particular gene. If a gene in a cell is active, its RNA transcript is present in the cytoplasm, ready to be translated into a protein. This RNA is reverse-transcribed to produce the corresponding DNA, which is then fluorescently labelled and allowed to bind to its complementary sequence in the microarray. So the spots that ’light up’ reveal the active genes.

Microarrays give an essentially cell-wide picture of changes in gene activity as cells change their behaviour. That, however, is not the same as knowing which genes interact with one another, so microarray experiments provide a vast amount of data that can be hard to interpret.
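As a rough illustration of the first step in such an interpretation, the sketch below (with invented intensity values) summarises a two-condition microarray comparison as log ratios and flags the genes whose activity changes most.

```python
# Sketch of one common way microarray results are summarised: the log2 ratio
# of fluorescence intensity between two conditions. All values are invented.
import math

intensities = {
    # gene:   (treated, untreated) fluorescence
    "geneA": (5200.0,  480.0),
    "geneB": ( 950.0, 1020.0),
    "geneC": (  60.0,  700.0),
}

for gene, (treated, untreated) in intensities.items():
    log_ratio = math.log2(treated / untreated)
    call = "up" if log_ratio > 1 else "down" if log_ratio < -1 else "unchanged"
    print(f"{gene}: log2 ratio {log_ratio:+.2f} ({call})")
```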

Researchers are now aiming to produce similar techniques that work for proteins - but this is harder, because it can be laborious to find receptors that bind specifically to each individual protein. (Antibodies might do the job.) Mass spectrometry offers a relatively crude measure of the molecular components in a cell - not only proteins but also small molecules such as metabolites. But it can be a daunting job to try to assign specific molecules to the forest of peaks in a cell’s mass spectrum.

As well as these ’global’ methods, there are now techniques for identifying particular interactions between proteins and DNA, other proteins or metabolites. For protein-DNA interactions, a method called chromatin immunoprecipitation forges covalent crosslinks between the two molecules and then cuts up the DNA strand and extracts the protein-bound segment by precipitation with antibodies. A similar antibody-based method called co-immunoprecipitation can identify specific protein-protein interactions.