Discussion about this post

Neural Foundry

The progression from outcome simulation to Gaussian quadrature feels like watching Monte Carlo error get methodically squeezed out at each step. Potential outcome simulation is an underrated intermediate I hadn't considered before, and the fact that conditioning on identical covariate values makes it even more efficient than simply doubling the sample size is a neat insight. The musical chairs between UCL, Novartis and Roche during manuscript completion adds a fun real-world twist. Curious how well the Gaussian quadrature approach scales with high-dimensional confounders or non-Gaussian setups.
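
(A minimal sketch of the variance-reduction idea mentioned above, under an assumed linear-Gaussian outcome model with illustrative parameter values; it is not the authors' code. Simulating both potential outcomes on the same covariate draws removes the between-arm covariate variability from the Monte Carlo error of the effect estimate, which is why it beats simply doubling the sample size with independent arms.)

```python
# Illustrative sketch (assumed model): Y = beta0 + beta_a*A + beta_x*X + eps,
# X ~ N(0,1), eps ~ N(0, sigma^2). Compare the Monte Carlo SE of the treatment
# effect when both potential outcomes share the same covariate draws versus
# two independent simulated arms.
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000
beta0, beta_a, beta_x, sigma = 0.0, 1.0, 2.0, 1.0

# Potential outcome simulation: one covariate draw per subject, both outcomes
x = rng.normal(size=n)
y1 = beta0 + beta_a * 1 + beta_x * x + rng.normal(scale=sigma, size=n)
y0 = beta0 + beta_a * 0 + beta_x * x + rng.normal(scale=sigma, size=n)
diff = y1 - y0
print("shared X:      estimate %.4f, MC SE %.4f"
      % (diff.mean(), diff.std(ddof=1) / np.sqrt(n)))

# Two independent arms (separate covariate draws in each arm)
x1, x0 = rng.normal(size=n), rng.normal(size=n)
y1_ind = beta0 + beta_a * 1 + beta_x * x1 + rng.normal(scale=sigma, size=n)
y0_ind = beta0 + beta_a * 0 + beta_x * x0 + rng.normal(scale=sigma, size=n)
est = y1_ind.mean() - y0_ind.mean()
se = np.sqrt(y1_ind.var(ddof=1) / n + y0_ind.var(ddof=1) / n)
print("independent X: estimate %.4f, MC SE %.4f" % (est, se))
```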

Tim Morris

A comment from Enrico Giudice on this: ‘when you say that “we do not automatically know it [theta] because the covariate is normally distributed”, it reminded me how (surprisingly) complicated it is to get exact results even for non-Gaussian confounders.’ See the appendix of our preprint for more info.
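
(A hedged sketch of the quadrature point: if theta is a marginal estimand and the covariate is normally distributed, theta generally has no closed form, but the normal-weighted integral can be evaluated essentially exactly by Gauss-Hermite quadrature. The logistic outcome model and parameter values below are illustrative assumptions, not taken from the preprint.)

```python
# Assumed setup: confounder X ~ N(mu, sd^2), logistic outcome model
# P(Y=1 | A=a, X=x) = expit(alpha + beta_a*a + beta_x*x).
# The marginal risk under treatment a is E_X[expit(alpha + beta_a*a + beta_x*X)],
# computed here by Gauss-Hermite quadrature after a change of variables.
import numpy as np
from scipy.special import expit

def marginal_risk(a, alpha, beta_a, beta_x, mu, sd, n_nodes=50):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    x = mu + np.sqrt(2.0) * sd * nodes          # map nodes to N(mu, sd^2)
    risks = expit(alpha + beta_a * a + beta_x * x)
    return np.sum(weights * risks) / np.sqrt(np.pi)

# Example: marginal risk difference under illustrative parameter values
alpha, beta_a, beta_x, mu, sd = -1.0, 0.5, 1.0, 0.0, 1.0
theta = (marginal_risk(1, alpha, beta_a, beta_x, mu, sd)
         - marginal_risk(0, alpha, beta_a, beta_x, mu, sd))
print(f"marginal risk difference (quadrature): {theta:.6f}")
```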
