A new toolkit for checking simulation studies
17 Nov 2023
A new article sets out advice on how to check the quality of simulation studies used to evaluate statistical methods, along with tips for preventing errors and unexpected results. The article has recently been published, open access, in the International Journal of Epidemiology.
Simulation studies are a powerful tool in the design and analysis of clinical trials. They are computer experiments that generate artificial data, which can be used to inform the choice of sample size for complex trial designs, or to compare different statistical methods for analysing trial data. Complex simulation studies can be hard to carry out successfully, and sometimes they produce unexpected results.
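To make the idea concrete, here is a minimal sketch of a simulation study in Python. It compares two estimators of the centre of a normal distribution (the sample mean and the sample median) over repeated simulated datasets. All names and settings here are our own illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2023)

# Illustrative settings (our own, not from the article):
# 2000 repetitions, 50 observations per dataset, true centre of 1.0.
n_sim, n_obs, true_mu = 2000, 50, 1.0

mean_est = np.empty(n_sim)
median_est = np.empty(n_sim)
for i in range(n_sim):
    # Generate one simulated dataset, then analyse it with both methods.
    y = rng.normal(loc=true_mu, scale=1.0, size=n_obs)
    mean_est[i] = y.mean()
    median_est[i] = np.median(y)

# Summarise performance across repetitions: bias and empirical standard error.
print(f"bias(mean)    = {mean_est.mean() - true_mu:+.4f}")
print(f"bias(median)  = {median_est.mean() - true_mu:+.4f}")
print(f"empSE(mean)   = {mean_est.std(ddof=1):.4f}")
print(f"empSE(median) = {median_est.std(ddof=1):.4f}")
```

Because the true value is set by the researcher, the bias of each method can be measured directly, which is exactly what makes simulation useful for comparing analysis methods.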
To help address this problem, our team of methodologists has recently published an article with guidance on how to check the quality of a simulation study. Using a simple example to illustrate the process, the authors provide suggestions for each stage of running a simulation study: design, conduct and analysis.
The authors advise that simulation studies should be designed to include some settings in which the answers are already known. Researchers should write the computer code in stages, and check their approach for generating the simulated data before analysing it. They should carefully explore results from analyses of the simulated data, and the authors suggest some graphical tools for doing this. Any method failures and outliers should be identified and dealt with, either by changing the approach for generating the data or by considering an alternative analysis procedure. Finally, the article includes various ways of checking unexpected results, based on the team's experiences of running simulation studies.
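The checking steps above can be sketched in code. This hedged example, with settings and names of our own choosing rather than the article's, shows three of them: checking the data generator in isolation before any analysis, running the study in a setting where the answer is known (zero bias), and screening repetition-level results for failures and outliers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative known-answer setting (our own choice, not the article's):
# estimating the mean of a standard normal, so the true bias is zero.
true_mu, true_sd, n_obs, n_sim = 0.0, 1.0, 100, 1000

# Step 1: check the data-generating code in isolation. With a very large
# sample, the empirical mean and SD should sit close to the values we put in.
check = rng.normal(true_mu, true_sd, size=200_000)
assert abs(check.mean() - true_mu) < 0.02, "data generator: mean looks wrong"
assert abs(check.std(ddof=1) - true_sd) < 0.02, "data generator: SD looks wrong"

# Step 2: run the study in the known-answer setting.
est = np.array([rng.normal(true_mu, true_sd, n_obs).mean()
                for _ in range(n_sim)])

# Step 3: flag failures (non-finite estimates) and outliers
# (here, estimates more than 4 standard deviations from the average).
failed = ~np.isfinite(est)
outliers = np.abs(est - est.mean()) > 4 * est.std(ddof=1)
print(f"failures: {failed.sum()}, outliers flagged: {outliers.sum()}")
print(f"bias in known-answer setting: {est.mean() - true_mu:+.4f}")
```

If the observed bias in a known-answer setting drifts away from zero, or if failures and outliers appear, that points to a problem in either the data-generation code or the analysis procedure, which is the kind of signal the article's checks are designed to surface.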
This article demonstrates the importance of checking simulation studies. The authors hope their guidance will help prevent errors and improve the quality of published simulation studies.