Writing a letter-to-the-editor is a long-established way to comment on a published article. In scientific journals, a letter is almost always used to register criticism (letters that appear in newspapers and magazines are often for other purposes).
Last year I was involved in two, both of which I’ve already posted about (here and here, with the latter letter to appear imminently). Working on both was very enjoyable. If you’ve never written one, it helps to know a bit about the process before deciding whether it’s worth the effort. As with any other publication, you are trying to achieve something in writing a letter.
Useful to know
First off, a letter does not count as ‘original research’. Our letter1 on Hanne Oberman and Gerko Vink’s excellent paper2 contained a little simulation study to demonstrate a point I wanted to make. It clearly was original research, but a letter is not classified as such. This is important because, had it counted as original research, it would have been subject to Plan S requirements for open-access publishing (my funder is in Plan S). Since it was a letter and therefore not ‘original research’, I could not use the open-access publishing funds that would otherwise have been available: they can only be used for ‘original research’. So the letter remains behind Biometrical Journal’s closed doors.
Second, when you write a letter, the authors have a right to reply. This is absolutely appropriate. The only letter I can remember receiving was on a reasonably forgettable paper3. The letter came from Stephen Senn4, a legend of the clinical trials world, and receiving it was a very intimidating experience for me (note: if you’re ever writing one, do try to be kind). His letter wasn’t particularly critical but started by saying our paper had not distinguished between random centre intercepts and random treatment-by-centre interactions5. Having the right to reply meant we could politely point out that we had done so.
The original authors are not obliged to reply (I don’t know if Oberman and Vink will reply to ours) but they usually do. The letter-authors don’t get to respond again, so the paper-authors get the last word. This means they may claim you misunderstood something or focus on a minor detail you got wrong. They usually say the letter doesn’t change their conclusions. They sometimes just say ‘agree to disagree’: Andrew Althouse wrote a great letter about the pointlessness of post hoc power6, and the authors’ response made it sound like it was just a difference of opinion rather than their whole paper being wrong (I now can’t access it, so I hope my recollection is right). It wasn’t a difference of opinion, but people who don’t know better could be fooled into thinking it was.
For both the above reasons, I view the BMJ’s system of having a conversation in the responses (see my post here for one) as far better than letters. The conversation happens much faster. It’s odd that more journals have not switched to it.
Types of letter
It’s useful to think about different types of letter. A letter might:
1. Highlight how awful a study is and say it’s too problematic to be counted in the evidence base (N.B. this does not necessarily mean the conclusions are wrong, a fallacy I can easily fall into).
2. Highlight a particular problem with an otherwise-good piece of work (e.g. our letter noted above about simulating the complete data).
3. Use a study to highlight a more general point about a field, whether an attitude or a problem. I think there are a few like this about dichotomous interpretations of p-values.
4. Correct authors’ misuse of one’s own research (e.g. the consequences of randomising schools). A cluster-randomised trial7 once used a paper by Brennan and me8 to justify ignoring the clustering in their analysis. Our paper had literally been about when you can and cannot do this, and clearly said that you cannot for cluster-randomised trials! So we wrote a letter to point this out, and Brennan reverse-engineered what the inference might have been under different levels of clustering9.
You can probably think of other types of letter. Perhaps a note about something being amiss. For more serious accusations like fraud, a letter doesn’t seem the appropriate avenue.
It’s worth considering what type of letter you want to write. If you’re intending to highlight a small problem in an otherwise great paper, you need to make sure that’s clear to the readers and authors, because authors will be tetchy.
With (1) and (3), a letter can come across as randomly picking on someone when loads of papers have the same problems or worse, so you probably need to explain why this specific paper (and be clear that it’s not just this one). I’ve repeatedly used a particular paper as the perfect example of something lots of people do; it’s almost a caricature, so it can seem like I’m picking on it. I hope that 1) introducing it as a nice example of something lots of people do alleviates that, and 2) it is published and therefore open to criticism. BTW, if you think your publication should not be publicly criticised, that’s a bit of a self-own: perhaps it should not have been published at all.
What’s the point?
The question I want you to consider is what you want to happen as a consequence of your letter. It’s easy to enjoy the righteous feeling when writing one but you need to keep perspective on what it’s going to achieve. I find thinking about this a good leveller.
In writing a letter, I’d hope to get a positive response from authors. This is not unheard-of but probably unrealistic. Too much rides on their research and it’s understandably hard to admit you’ve made some mistake that undermines what you poured your time and energy into for years.
Will other people – not the authors – read it and question the conclusions, and realise that they want to do better? Maybe. Hopefully! I think this is more realistic. If this is what you want then it makes sense to remember you are writing for these readers, so write accordingly.
Will your letter help a field turn its back on some mistake / poor practice? No. If you’re highlighting a problem with a whole field, it’s a delicate business not turning yourself into their common enemy. However right you are, you’re likely to come across as mad. ‘These statisticians [insert your field here] are always banging on about [thing we do] being wrong but they never engage with our field’s unique aims and challenges, which explain why we do that thing.’ I’m sure you’ve seen it.
In our forthcoming letter10, we’ve put the aim in the title: ‘Regulators and trial statisticians be aware!’ There are people advocating for deterministic single imputation approaches, and these can appear OK if you don’t consider self-efficiency; once you do, you spot the problems.
I suppose I’m writing all this to emphasise that classic advice from Dr Dre, PhD: ‘You better think of the consequence’. The consequence could be adverse, whether or not you and your letter are right.
1. Morris TP, White IR, Cro S, Bartlett JW, Carpenter JR, Pham TM. (2024). Comment on Oberman & Vink: Should we fix or simulate the complete data in simulation studies evaluating missing data methods? Biometrical Journal. 66: 2300085. https://doi.org/10.1002/bimj.202300085
2. Oberman HI, Vink G. (2024). Toward a standardized evaluation of imputation methodology. Biometrical Journal. 66: 2200107. https://doi.org/10.1002/bimj.202200107
3. Kahan BC, Morris TP. (2013). Analysis of multicentre trials with continuous outcomes: when and how should we account for centre effects? Statistics in Medicine. 32: 1136–1149. https://doi.org/10.1002/sim.5667
4. Senn S. (2014). A note regarding ‘random effects’. Statistics in Medicine. 33: 2876–2877. https://doi.org/10.1002/sim.5965
5. Interestingly, my reaction was self-doubt (‘I can’t believe we didn’t write that, I’m mortified’) while Brennan’s was just self-confidence (‘We absolutely wrote that, he must not have read the introduction’).
6. Althouse AD. (2021). Post Hoc Power: Not Empowering, Just Misleading. Journal of Surgical Research. 259: A3–A6. https://doi.org/10.1016/j.jss.2019.10.049
7. Cicutto L, To T, Murphy S. (2013). A randomized controlled trial of a public health nurse-delivered asthma program to elementary schools. Journal of School Health. 83(12): 876–884. https://doi.org/10.1111/josh.12106
8. Kahan BC, Morris TP. (2013). Assessing potential sources of clustering in individually randomised trials. BMC Medical Research Methodology. 13: 58. https://doi.org/10.1186/1471-2288-13-58
9. Kahan BC, Morris TP. (2014). The Consequences of Randomizing Schools Rather Than Children. Journal of School Health. 84(6): 349. https://doi.org/10.1111/josh.12155
10. Cro S, Morris TP, Roger JH, Carpenter JR. (2024, to appear). Comments on ‘Standard and reference-based conditional mean imputation’: Regulators and trial statisticians be aware! Pharmaceutical Statistics.