Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases: A review and suggested reporting framework

dc.contributor.author: Hajiali Afzali, H.
dc.contributor.author: Gray, J.
dc.contributor.author: Karnon, J.
dc.date.issued: 2013
dc.description.abstract: Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess ‘best practice’ in the model development process, including the general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following a description of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation, which will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.
dc.description.statementofresponsibility: Hossein Haji Ali Afzali, Jodi Gray, Jonathan Karnon
dc.identifier.citation: Applied Health Economics and Health Policy, 2013; 11(2):85-93
dc.identifier.doi: 10.1007/s40258-013-0012-6
dc.identifier.issn: 1175-5652
dc.identifier.issn: 1179-1896
dc.identifier.orcid: Hajiali Afzali, H. [0000-0002-0198-8394]
dc.identifier.orcid: Gray, J. [0000-0002-1119-7078]
dc.identifier.orcid: Karnon, J. [0000-0003-3220-2099]
dc.identifier.uri: http://hdl.handle.net/2440/80826
dc.language.iso: en
dc.publisher: Adis International Ltd.
dc.rights: © Springer International Publishing Switzerland 2013
dc.source.uri: https://doi.org/10.1007/s40258-013-0012-6
dc.subject: Humans
dc.subject: Cardiovascular Diseases
dc.subject: Calibration
dc.subject: Decision Support Techniques
dc.subject: Models, Organizational
dc.subject: Benchmarking
dc.subject: Employee Performance Appraisal
dc.subject: Practice Guidelines as Topic
dc.title: Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases: A review and suggested reporting framework
dc.type: Journal article
pubs.publication-status: Published
