The standard journal requirement of ±0.4% for carbon, hydrogen and nitrogen is not statistically sound


The strict elemental analysis limits imposed on researchers by journals have no clear justification 

As graduate students working side by side in a glovebox, two of us (Caleb Martin and Jason Dutton) experienced pressure to obtain elemental analysis data meeting strict journal standards. While we could acquire and analyse other spectral data (such as NMR, infrared and mass spectra) for new compounds on-site, elemental analysis was performed by a third party. If the results sent to us deviated from the expected values, we could do little more than shrug our shoulders, point to our clean NMR spectra, and try to synthesise or purify a ‘better’ sample, which, after a delay of several weeks, either gave suitable numbers or did not.

Moreover, the values returned had no error bars or corroborating raw data. There was also no direction on how to treat the data from a resubmitted sample nor, if the company ran duplicate analyses, on how the multiple numbers should be interpreted. Anecdotal comments from the synthetic community indicate that it is common practice in many groups to resend a sample from the same batch and simply select the best result. Journals generally required values within ±0.4% of those calculated for carbon, hydrogen and nitrogen in a proposed formula, but provided no rationale for this threshold. Years later, in our academic positions, we faced the situation from a different angle: elemental analysis was an expense against our limited budgets, and unsatisfactory results delayed publication.

We conceived a study to assess the reliability of elemental analysis results. Given the high stakes in questioning an analytical technique ingrained in journal guidelines for decades, we consulted two friends who had expressed similar concerns. Rebecca Melen had voiced frustration on Twitter about the difficulty of obtaining elemental analysis data during a Covid-19 lockdown in the face of an inflexible journal. Saurabh Chitnis had grown weary of relying on third-party laboratories for elemental analysis data and had recently purchased his own instrument to lower costs and ensure better quality control.

Elemental analysis machine

Source: © Saurabh Chitnis

Inside the Chitnis elemental analysis instrument used to check samples before sending them on to commercial providers

In a virtual meeting, we devised a plan to assess the reliability of third-party services. We selected five air-stable organic compounds – all of which passed on the freshly calibrated Chitnis instrument. Sending these samples to two external facilities returned results that varied wildly from each other, with several data points falling outside the ±0.4% margin. This was fortunate: had the data from these two providers been spot on, we might have abandoned the project.

Next, we sent the samples to 15 other academic or corporate elemental analysis services. Some gave perfect values across the board, while one company had a 30% failure rate, where a failure is a result outside the ±0.4% guideline. One carbon result came back 7% higher than the calculated value – this magnitude of variation in a real sample would mean an uncomfortable meeting with a supervisor!

We chemists were unsure how to analyse the aggregate data, so we included expert statistician Rupert Kuveke as a senior co-author to parse meaning out of nearly 500 CHN data points. From this analysis, over 16% of carbon results failed the ±0.4% requirement. The source of the variation cannot be determined from our study, but statistical analysis indicates it is essentially random. If one were to consider a 5% fail rate acceptable, the margin for carbon would need to be set at ±0.71% – and even then, 5% of all samples would fail for reasons outside the researcher’s control.
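The logic behind a figure like ±0.71% can be sketched empirically: given a set of deviations between measured and calculated %C, the margin that yields a chosen fail rate is simply a quantile of the absolute deviations. The snippet below is an illustration on simulated data only – the spread of the random deviations is an assumption, not the study’s actual results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical carbon deviations (measured %C minus calculated %C).
# The spread (0.3) is an illustrative assumption, not the study's data.
deviations = rng.normal(loc=0.0, scale=0.3, size=500)

# Fraction of results failing the conventional ±0.4% window
fail_rate = np.mean(np.abs(deviations) > 0.4)

# Margin that would pass 95% of samples: the 95th percentile
# of the absolute deviations
margin_95 = np.quantile(np.abs(deviations), 0.95)

print(f"fail rate at ±0.4%: {fail_rate:.1%}")
print(f"margin for a 5% fail rate: ±{margin_95:.2f}%")
```

Under a genuinely random error model, widening the window only trades off the fail rate; it never eliminates failures that lie outside the researcher’s control, which is the article’s central point.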

We hope our study causes journals to justify their threshold and requirements for elemental analyses

To our surprise, the study drew an overwhelmingly positive response, and we are grateful to ACS Central Science and to the synthetic community for being so receptive to a potentially controversial article. This likely reflects the systemic pain elemental analysis inflicts upon the community. We received only two critical emails, both stating that the technique is accurate if performed correctly. We fully agree – the issue is that one cannot verify the care taken when analyses are performed at third-party facilities. Several investigators have successfully used our values in rebuttal letters to reviewers demanding ‘better’ elemental analysis data, and some have cited the work in the experimental sections of their manuscripts.

Our manuscript provides the data and states its statistical meaning, inviting readers to reach their own conclusions. We recognise that this is not a complete study of the method, as false positives were not considered, but journal and reviewer attitudes towards elemental analysis data clearly require major amendment: the current, nearly blanket ±0.4% threshold is not scientifically justifiable. We hope our study prompts journals to justify their thresholds and requirements for elemental analyses, and educates reviewers about the false-failure rate – making synthetic chemistry less frustrating and more enjoyable for researchers at all career stages.