3 Comments
Neural Foundry

The progression from outcome simulation to Gaussian quadrature feels like watching Monte Carlo error get methodically squeezed out at each step. Potential outcome simulation is an underrated intermediate I hadn't considered before, and the fact that conditioning on identical covariate values makes it even more efficient than just doubling the sample size is a neat insight. The musical chairs between UCL, Novartis and Roche during manuscript completion adds a fun real-world twist. Curious how well the Gaussian quadrature approach scales when dealing with high-dimensional confounders or non-Gaussian setups.

Tim Morris

Thanks very much! You’re right: Gaussian quadrature does not scale well with high-dimensional covariates, where Monte Carlo integration can be faster. One point to note is that “Gaussian” here refers to the type of quadrature invented by Gauss, rather than to Gaussian distributions. Table 1 of the pre-print gives three other rules.
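To make the one-dimensional case concrete, here is a minimal sketch (illustrative only, not code from the pre-print) comparing Gauss-Hermite quadrature with Monte Carlo integration for E[f(X)] with X ~ N(0, 1). The integrand f, the node count and the number of draws are all assumptions chosen for the example.

```python
# Minimal sketch: Gauss-Hermite quadrature vs Monte Carlo for E[f(X)],
# X ~ N(0, 1). f is a hypothetical stand-in for an outcome-model
# prediction averaged over a normal covariate; nothing here is taken
# from the pre-print.
import numpy as np

def f(x):
    # expit of a linear predictor (arbitrary smooth integrand)
    return 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))

# Gauss-Hermite nodes/weights target the weight function exp(-x^2);
# the change of variables x -> sqrt(2) * x maps it onto the N(0, 1)
# density, with a 1/sqrt(pi) normalising constant.
nodes, weights = np.polynomial.hermite.hermgauss(20)
quad_est = np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

# Monte Carlo: a plain average over draws from N(0, 1).
rng = np.random.default_rng(1)
mc_est = f(rng.standard_normal(100_000)).mean()

print(quad_est)  # 20 function evaluations, essentially exact for smooth f
print(mc_est)    # 100,000 evaluations, still carries Monte Carlo error
```

The scaling problem is that a tensor-product rule for a d-dimensional covariate needs n^d nodes, which is why quadrature loses out to Monte Carlo as the number of confounders grows.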

Tim Morris

A comment from Enrico Giudice on this: ‘when you say that “we do not automatically know it [theta] because the covariate is normally distributed”, it reminded me how (surprisingly) complicated it is to get exact results even for non-Gaussian confounders.’ See the appendix of our pre-print for more info.