Firing at Target
The Target statement recently appeared in JAMA1, and I hereby confer on it two acronym-related awards: “Most dubious” and possibly “Longest gap between two successive letters of the acronym”2.
The whole idea of target trial emulation is clever and useful. The cleverness is as a communication tool: It helps avoid certain ambiguities by describing what researchers would have liked to do (and why), what was done, and the discrepancy between the two. This facilitates shared understanding and helps with critical appraisal. In that sense, target trial emulation was crying out for a reporting guideline.
People who do target trial emulation often testify that it clarifies their thinking about design (the time-zero stuff often gets emphasised), and the Target statement in particular provides a structure that avoids certain blind-spots in reporting. As with most reporting guidelines, it is nominally about reporting, but sometimes – usefully, IMO – it hints at what people should have done rather than how they should report it.
Personally, as someone whose bread-and-butter is randomised clinical trials, I find it helpful to understand what researchers would do if they could, deficiencies in the available data, and so on. I also like the emphasis on identifying assumptions.
Despite the above praise, I will register two gripes. One is about what I see done; the other is about the Target statement itself.
Gripe 1: Actual trials may not be target trials, but describing a target trial does require a good understanding of actual trials
Target trial emulation is IMO particularly useful for communication with actual3 trials people. The flip-side is that, to talk about a target trial, one must understand actual trials! In many of the descriptions of a “target trial” that I see, there is huge ambiguity. I suspect this is because many people doing target trial emulation don’t really know much about actual trials.
One example is outcomes4. In actual trials, a lot of deliberation goes into outcome definition, measurement, review, and so on5. This seems to be treated fairly superficially in target trial emulation (see for example items 6e and 12 in the Target statement). In a sense this is understandable: you have what you have; different sites might be using different definitions; you might not really know the measurement protocol; etc. But of course, actual trials have these in-depth discussions because it matters, so highlighting discrepancies is important. It’s exactly like the “time zero” thing, but for some reason does not receive much emphasis.
Gripe 2: Target statement leaves causal estimands translucent-to-opaque
Defining a target trial helps to know what people would like to do. It’s procedural. Some of its advocates claim that knowing this tells us exactly what people want to learn (the estimand). Nope.
An estimand is a natural-language description of the effect we wish to learn about. For methodological work, this is often compressed into the mathematical notation of potential outcomes because it’s useful for identification purposes (imagine writing out the words every time instead of Y¹ or Y(1) or whatever you write). Once we have a study design, we can work towards finding an identifying expression in terms of observable quantities (some people call this the “statistical estimand”, which is [i] reasonable and [ii] confusing; net-bad). I think the idea in target trials is that the identifying expression should be trivially simple, like a simple comparison of means or whatever. That’s fine, but trivially-simple identification still does not tell us which causal estimand it identifies.
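To make the estimand/identification distinction concrete, here is a minimal sketch in my own notation (not the Target statement’s), for a binary treatment A and outcome Y under perfect randomisation:

```latex
% Causal estimand (what we want to learn): an average treatment effect,
% compressed into potential-outcomes shorthand
\theta = E[Y(1)] - E[Y(0)]

% Identifying expression (what the design lets us compute from observables),
% valid because randomisation gives A \perp (Y(0), Y(1)):
\theta = E[Y \mid A = 1] - E[Y \mid A = 0]
```

The second line is the “trivially simple” comparison of means; the first line is the estimand it identifies. Different estimands can share the same identifying expression under different designs and assumptions, which is exactly why the expression alone does not pin the estimand down.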
Something statisticians used to do all the time in actual trials – no doubt some still do – was to censor data after an intercurrent event, then analyse the ITT set (e.g. a Cox model for censored time-to-event outcomes, or a mixed model for quantitative outcomes). Amazingly, this was sometimes accompanied by a statement that the trial tests the effects of randomisation (implying a treatment-policy estimand). This was not right: those censoring and estimation procedures targeted what we would now term a hypothetical strategy. There is often a discrepancy between what people claim they want (the estimand) and what they actually do (the analysis approach). Saying what they want does not resolve the discrepancy, but without it we would not even know there is one: they might think they are estimating one thing, you might assume something else. The point is, describing your target trial is great, but even that does not tell us the estimand.
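A toy simulation (my own, with entirely made-up numbers; the intercurrent event is assumed independent of prognosis purely for simplicity) makes the discrepancy concrete. Censoring at the intercurrent event and then comparing arms recovers something like the hypothetical-strategy effect, not the treatment-policy (effect-of-randomisation) estimand:

```python
# Toy sketch: censoring at an intercurrent event does NOT estimate the
# effect of randomisation (treatment policy). All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000          # patients per arm
true_effect = 2.0    # effect of treatment, if taken throughout (hypothetical)
p_event = 0.3        # probability of the intercurrent event (discontinuation),
                     # assumed independent of everything else for simplicity

event_t = rng.random(n) < p_event   # treated patients who discontinue
event_c = rng.random(n) < p_event

# Observed outcomes: treated patients who discontinue lose the benefit
y_treat = true_effect * (~event_t) + rng.normal(0.0, 1.0, n)
y_ctrl = rng.normal(0.0, 1.0, n)

# Treatment-policy estimand (effect of randomisation, events and all)
policy_est = y_treat.mean() - y_ctrl.mean()        # ~ 2.0 * 0.7 = 1.4

# "Censor at the intercurrent event" analysis: drop post-event data
censored_est = y_treat[~event_t].mean() - y_ctrl[~event_c].mean()  # ~ 2.0

print(f"treatment-policy estimate: {policy_est:.2f}")
print(f"censored-at-event estimate: {censored_est:.2f}")
```

In real data the intercurrent event would rarely be independent of prognosis, so the censored analysis would not even estimate the hypothetical effect this cleanly; the point is only that the two procedures target different estimands, whatever the accompanying text claims.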
Also, given Hernán’s semantic quibbles (mostly wrong) about E9(R1) Addendum-style estimands6 at the EFSPI workshop, I’m going to hit back and highlight a paragraph I’ve been referred to several times in the past7, each time held up as the way to precisely define a “causal contrast”.
Read what you will into this picture of my boy a few years ago (note where the arrow is, and the boar).
Disclaimer
The above are observations from listening to quite a lot of presentations about target trial emulation, but I’m certainly no expert. I want to re-emphasise that it can be a great communication tool and I will often jump to defend it. Just because something is frequently misunderstood or misused does not make it inherently a bad idea (see also: Statistics). Julia Rohrer wrote an excellent post on this topic, which I seem to send to people weekly.
Cashin AG, Hansford HJ, Hernán MA, et al. Transparent reporting of observational studies emulating a target trial—The TARGET Statement. JAMA. 2025; 334(12):1084–1093. doi:10.1001/jama.2025.13350
As far as I can see, they don’t define the acronym but I guess it’s TrAnsparent Reporting of observational studies emulating a tarGEt Trial. But who knows. It could be transparent reporting of observational studies emulating a TARGET trial, in which case I withdraw the second award… and also the first because in that case it’s not an acronym at all.
This is not intended to be patronising; I just need to distinguish actual trials from target trials (and have seen others use it).
Bernie Sanders: “I am once again asking you to stop calling outcomes endpoints.” As Max Parmar says, patients tend to dislike it when you talk about their “end point” (see also “subjects”).
Based on experience in trials in academia. So far I’ve not been involved in these discussions in my new job in industry.
Addendum on estimands and sensitivity analyses in clinical trials to the guideline on statistical principles for clinical trials, ICH E9(R1) https://database.ich.org/sites/default/files/E9-R1_Step4_Guideline_2019_1203.pdf
MA Hernán, JM Robins. Using big data to emulate a target trial when a randomized trial is not available. American Journal of Epidemiology. 2016; 183(8):758–764. doi:10.1093/aje/kwv254



