Title: Too good to be true: when overwhelming evidence fails to convince
Citation: Proceedings of the Royal Society A, 2016; 472(2187):20150748-1-20150748-15
Publisher: The Royal Society
Author(s): Lachlan J. Gunn, François Chapeau-Blondeau, Mark D. McDonnell, Bruce R. Davis, Andrew Allison, and Derek Abbott
Abstract: Is it possible for a large sequence of measurements or observations, which support a hypothesis, to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence and (iii) cryptographic primality testing. In this paper, we investigate the effects of small error rates in a set of measurements or observations. We find that even with very low systemic failure rates, high confidence is surprisingly difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2⁸⁰.
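The effect described in the abstract can be illustrated with a minimal toy model (an illustrative sketch only, not the paper's own formulation): suppose that with some small probability f the measurement process has failed systemically and always reports "confirm", and otherwise each of n independent tests confirms a true hypothesis with probability p. Under these assumptions, confidence first rises with unanimous evidence and then falls back toward the prior, because perfect unanimity itself becomes evidence of systemic failure.

```python
# Toy Bayesian model of the "too good to be true" effect.
# Assumptions (hypothetical, chosen for illustration): prior P(H) = 1/2;
# with probability f the apparatus is broken and always says "confirm";
# otherwise each test confirms a true H with probability p and a false
# H with probability 1 - p.

def posterior(n, p=0.9, f=0.01):
    """P(H | n unanimous confirmations) under the toy model."""
    like_h = f + (1 - f) * p ** n            # P(n confirmations | H)
    like_not_h = f + (1 - f) * (1 - p) ** n  # P(n confirmations | not H)
    return like_h / (like_h + like_not_h)    # Bayes' rule, prior 1/2

# Posterior confidence rises with the first few unanimous confirmations,
# then declines toward 1/2 as n grows.
for n in (1, 5, 20, 50):
    print(n, round(posterior(n), 3))
```

With these parameters the posterior peaks after a handful of unanimous confirmations and then decays: beyond that point, each additional unanimous result makes the systemic-failure explanation relatively more plausible than the hypothesis itself.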
Keywords: Bayesian; cryptography; criminology
Rights: © 2016 The Author(s) Published by the Royal Society. All rights reserved.
Appears in Collections: Electrical and Electronic Engineering publications