It pays to know when taking shortcuts is acceptable – and which ones are safe to take
Ask any circle of chemists about the major problems in the field today, and it won’t take long for reproducibility to come up. Although not as pronounced as the crisis in fields like cell biology, every synthetic chemist has encountered literature procedures they are unable to match. The most common cause of this irreproducibility seems not to be mistaken product identity, illicit padding of yield, or outright fraud. More often, insufficient detail in published methods thwarts efforts at replication.
Admittedly, it’s not always obvious that details are lacking. Unexpected variables such as vial headspace, stirring rate and even ambient humidity can, on rare occasions, affect reaction outcomes. Or an unnoticed contaminant might skew results, as with ‘metal-free’ reactions that turned out to be catalysed by undetected trace metal residues,1 or when an arene fluorodesilylation was found to occur only in borosilicate flasks.2
Outside these extreme examples, sufficient attention to detail ought to allow reproduction within a small error margin – given access to appropriate equipment. Finicky apparatus can certainly affect reproducibility by adding new variables, some of which cannot easily be controlled. The advent of accessible equipment for traditionally specialist processes like photochemistry and electrochemistry is beginning to offer some respite from these issues.
However, there can also be a mismatch between the expectations of the original researchers and those of the chemists following them. Robert Bunsen once asserted that there are two types of scientist: ‘first, those who work at enlarging the boundaries of knowledge; and secondly, those who apply that knowledge to useful ends’. Researchers developing new methods strive to enlarge the bounds of pure knowledge. They attempt to obtain the highest yield of pure product, with the greatest selectivity. Their goal is to develop the method and showcase its usefulness for others. On the other hand, process chemists might tweak reaction conditions for robustness, cost-effectiveness and safety, until they are convinced a reaction is ready to scale up. Medicinal chemists tend to value speed at the expense of material cost and yield, telescoping reactions and dealing with impure materials, as long as they can prepare a few milligrams of fully purified product for biological testing.
Given the diversity of bench careers, the chemist following a literature procedure is unlikely to try it for the same reason it was originally published. So while it is the pinnacle of rigour to distil your starting material, it’s more useful for me to know whether I can use benzaldehyde straight out of the bottle. There has been a recent move to supplement the standard, rigorous techniques in new methodology papers with an open-flask test on an average-performing starting material. Not only can researchers without access to expensive equipment such as a glovebox be sure of their capability to run such a procedure, but this practice also offers insight into a method’s sensitivity.
The shortcuts that methodologists might see as cardinal sins can become valuable tools once the goal switches to applying chemistry to useful ends. Is my compound good enough, at 80% pure, to continue to the next step, or does it require further purification? This depends on the tolerance of the next reaction, and how precious the material is. So long as the compound is stable, column chromatography is the more stringent pathway – but there is such a thing as being too safe. Arguably, synthetic chemistry training does, and should, include developing expertise in where such shortcuts might be allowable.
Relatedly, laboratory safety teaches us to assess hazards and risks. The same principles apply to a synthetic shortcut: how significant is the hazard (the extra time taken, or reduction in yield or selectivity), and what is the risk it will occur? Balance these against the reward – the chance to make more compounds more quickly, or using cheaper materials – and then decide how to move forward.
If a method is so sensitive as to forbid any shortcuts, its real-world utility must be questioned. Unusual variables such as flask shape and unexpected impurities can be important. But if minute differences cause ‘technically not meaningless’ fluctuations in results, then the reaction can hardly be considered robust. Those seeking true impact that translates to the wider world must develop trustworthy methods that others can easily follow.
There is, of course, a need for strict rigour when developing new methodology, and trying to understand what makes a reaction work well. But when speed is critical and yield matters less, the game changes and there’s a new rulebook to follow.