Not every credible interval is credible: evaluating robustness in the presence of contamination in Bayesian data analysis
Behavior Research Methods, 2017; 49(6):2219-2234
Lauren A. Kennedy, Daniel J. Navarro, Amy Perfors, Nancy Briggs
As Bayesian methods become more popular among behavioral scientists, they will inevitably be applied in situations that violate the assumptions underpinning typical models used to guide statistical inference. With this in mind, it is important to know something about how robust Bayesian methods are to the violation of those assumptions. In this paper, we focus on the problem of contaminated data (such as data with outliers or conflicts present), with specific application to the problem of estimating a credible interval for the population mean. We evaluate five Bayesian methods for constructing a credible interval, using toy examples to illustrate the qualitative behavior of different approaches in the presence of contaminants, and an extensive simulation study to quantify the robustness of each method. We find that the "default" normal model used in most Bayesian data analyses is not robust, and that approaches based on the Bayesian bootstrap are only robust in limited circumstances. A simple parametric model based on Tukey's "contaminated normal model" and a model based on the t-distribution were markedly more robust. However, the contaminated normal model had the added benefit of estimating which data points were discounted as outliers and which were not.
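A minimal sketch (not the authors' code) of two ideas the abstract mentions: simulating data from a Tukey-style contaminated normal model, and constructing a credible interval for the mean via the Bayesian bootstrap. The contamination fraction, component scales, and number of draws below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contaminated-normal data in the spirit of Tukey's model: most observations
# come from N(0, 1), a small fraction from a wide N(0, 10) contaminant
# component (the fraction and scales here are illustrative).
n, eps = 100, 0.1
is_outlier = rng.random(n) < eps
data = np.where(is_outlier, rng.normal(0, 10, n), rng.normal(0, 1, n))

def bayesian_bootstrap_interval(x, draws=4000, level=0.95, rng=rng):
    """Credible interval for the mean via the Bayesian bootstrap:
    each posterior draw is a weighted mean of the data, with weights
    drawn from a flat Dirichlet(1, ..., 1) distribution."""
    w = rng.dirichlet(np.ones(len(x)), size=draws)  # (draws, n) weight rows
    means = w @ x                                   # one weighted mean per draw
    lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

lo, hi = bayesian_bootstrap_interval(data)
print(f"95% credible interval for the mean: ({lo:.3f}, {hi:.3f})")
```

Because the Bayesian bootstrap reweights rather than discounts observations, a single large contaminant still pulls every posterior draw in its direction, which is consistent with the abstract's finding that this approach is only robust in limited circumstances.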
Keywords: Bayesian data analysis
© Psychonomic Society, Inc. 2017