Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes

dc.contributor.author: Scanlan, J.N.
dc.contributor.author: Lannin, N.A.
dc.contributor.author: Hoffmann, T.
dc.contributor.author: Stanley, M.
dc.contributor.author: Mcdonald, R.
dc.date.issued: 2018
dc.description.abstract: Background/aim: Scientific conferences provide a forum for clinicians, educators, students and researchers to share research findings. To be selected to present at a scientific conference, authors must submit a short abstract, which is then rated on its scientific quality and professional merit and accepted or rejected based on these ratings. Previous research has indicated that inter-rater variability can have a substantial impact on abstract selection decisions. For its 2015 conference, the Occupational Therapy Australia National Conference introduced a system to identify and adjust for inter-rater variability in the abstract ranking and selection process. Method: Ratings for 1340 abstracts submitted for the 2015 and 2017 conferences were analysed using many-faceted Rasch analysis to identify and adjust for inter-rater variability. Analyses of the construct validity of the abstract rating instrument and of rater consistency were completed. To quantify the influence of inter-rater variability on abstract selection decisions, comparisons were made between decisions made using Rasch-calibrated measure scores and decisions that would have been made based purely on raw average scores derived from the abstract ratings. Results: Construct validity and measurement properties of the abstract rating tool were good to excellent (item fit MnSq scores ranged from 0.8 to 1.2; item reliability index = 1.0). Most raters (24 of 27, 89%) were consistent in their use of the rating instrument. When comparing abstract allocations under the two conditions, 25% of abstracts (n = 341) would have been allocated differently if inter-rater variability had not been accounted for. Conclusion: This study demonstrates that, even with a strong abstract rating instrument and a small rater pool, inter-rater variability still exerts a substantial influence on abstract selection decisions. It is recommended that all occupational therapy conferences internationally, and scientific conferences more generally, adopt systems to identify and adjust for the impact of inter-rater variability in abstract selection processes.
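The comparison described in the abstract — ranking abstracts by raw average score versus ranking them after adjusting for each rater's severity — can be sketched in Python. This is a hypothetical, deliberately simplified mean-adjustment illustration, not the many-faceted Rasch model used in the study; all names, counts and parameters below are invented for illustration.

```python
import random

random.seed(0)

# Invented simulation parameters (not from the study).
N_ABSTRACTS, N_RATERS, RATERS_PER_ABSTRACT, TOP_N = 200, 9, 3, 50

# Each rater has a fixed "severity": harsh raters subtract from the score.
severity = [random.gauss(0, 0.5) for _ in range(N_RATERS)]
quality = [random.gauss(5, 1) for _ in range(N_ABSTRACTS)]

# Each abstract is scored by a random subset of raters.
ratings = {}  # abstract index -> list of (rater index, score)
for a in range(N_ABSTRACTS):
    for r in random.sample(range(N_RATERS), RATERS_PER_ABSTRACT):
        score = quality[a] - severity[r] + random.gauss(0, 0.5)
        ratings.setdefault(a, []).append((r, score))

def mean(xs):
    return sum(xs) / len(xs)

# Crude severity estimate: a rater's deviation from the grand mean
# (a stand-in for Rasch calibration of rater severity).
grand = mean([s for v in ratings.values() for _, s in v])
by_rater = {r: [] for r in range(N_RATERS)}
for v in ratings.values():
    for r, s in v:
        by_rater[r].append(s)
est_sev = {r: grand - mean(scores) for r, scores in by_rater.items()}

# Rank abstracts two ways: raw average vs severity-adjusted average.
raw = {a: mean([s for _, s in v]) for a, v in ratings.items()}
adj = {a: mean([s + est_sev[r] for r, s in v]) for a, v in ratings.items()}

top_raw = set(sorted(raw, key=raw.get, reverse=True)[:TOP_N])
top_adj = set(sorted(adj, key=adj.get, reverse=True)[:TOP_N])
changed = len(top_raw ^ top_adj) // 2
print(f"{changed} of {TOP_N} accepted abstracts differ between raw and adjusted ranking")
```

As in the study, the point of the comparison is that abstracts rated by harsher raters are penalised under raw averaging, so some acceptance decisions flip once rater severity is adjusted for.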
dc.identifier.citation: Australian Occupational Therapy Journal, 2018; 65(1):54-62
dc.identifier.doi: 10.1111/1440-1630.12440
dc.identifier.issn: 0045-0766
dc.identifier.issn: 1440-1630
dc.identifier.uri: https://hdl.handle.net/11541.2/130134
dc.language.iso: en
dc.publisher: Wiley-Blackwell Publishing Asia
dc.rights: Copyright 2017 Occupational Therapy Australia. Access Condition Notes: Accepted manuscript available after 1 January 2019
dc.source.uri: https://doi.org/10.1111/1440-1630.12440
dc.subject: inter-rater reliability
dc.subject: many-faceted Rasch model (MFRM)
dc.subject: peer review
dc.subject: scientific meetings
dc.title: Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes
dc.type: Journal article
pubs.publication-status: Published
ror.fileinfo: 12152828910001831 13151800090001831 Revised - Conf Rasch - 2015 and 2017 Data.pdf
ror.mmsid: 9916167883401831

Files

Original bundle
Name: 9916167883401831_12152828910001831_Revised - Conf Rasch - 2015 and 2017 Data.pdf
Size: 132.6 KB
Format: Adobe Portable Document Format
Description: Published version
