When the Past != The Future: Assessing the Impact of Dataset Drift on the Fairness of Learning Analytics Models

Date

2024

Authors

Deho, O.B.
Liu, L.
Li, J.
Liu, J.
Zhan, C.
Joksimovic, S.

Type

Journal article

Citation

IEEE Transactions on Learning Technologies, 2024; 17:1007-1020

Statement of Responsibility

Oscar Blessed Deho, Lin Liu, Jiuyong Li, Jixue Liu, Chen Zhan, and Srecko Joksimovic

Abstract

Learning analytics (LA), like much of machine learning, assumes that the training and test datasets come from the same distribution. LA models built on past observations are therefore (implicitly) expected to work well on future observations. However, this assumption does not always hold in practice, because the dataset may drift. Recently, algorithmic fairness has gained significant attention, yet fairness research has paid little attention to dataset drift. The majority of existing fairness algorithms are “statically” designed: LA models tuned to be “fair” on past data are expected to remain “fair” on current or future data. However, it is counter-intuitive to deploy a statically fair algorithm in a nonstationary world. There is, therefore, a need to assess the impact of dataset drift on the unfairness of LA models. For this reason, we investigate the relationship between dataset drift and the unfairness of LA models. Specifically, we first measure the degree of drift in the features (i.e., covariates) and the target label of our dataset. We then train predictive models on the dataset and evaluate the relationship between dataset drift and model unfairness. Our findings suggest a directly proportional relationship between dataset drift and unfairness. Further, we find that covariate drift has a greater impact on model unfairness than target drift, and that there is no guarantee that a once-fair model will remain fair over time. Our findings imply that fair LA models must be “robust” to dataset drift before deployment.
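The pipeline the abstract describes (quantify drift between past and future data, then measure model unfairness) can be sketched roughly as follows. The specific choices here are illustrative assumptions, not necessarily those of the authors: a two-sample Kolmogorov–Smirnov statistic stands in for the covariate-drift measure, and the demographic parity difference stands in for the unfairness measure.

```python
import bisect

# Hypothetical sketch of the two measurements the abstract mentions.
# The KS statistic and demographic parity difference are illustrative
# choices; the paper may use different drift and fairness measures.

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: the largest gap between the empirical
    CDFs of a "past" sample and a "future" sample of one feature.
    0 means identical distributions; 1 means complete separation."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of points in the sample that are <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in set(a) | set(b))

def demographic_parity_diff(preds, groups):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)| for binary
    predictions and a binary protected attribute."""
    rate = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate[0] - rate[1])

# Example: one strongly drifted feature, and predictions that favour
# group 1 over group 0.
past_feature = [0.1, 0.2, 0.3, 0.4]
future_feature = [1.1, 1.2, 1.3, 1.4]
print(ks_statistic(past_feature, future_feature))  # 1.0 (maximal drift)
print(demographic_parity_diff([1, 0, 1, 1],
                              [0, 0, 1, 1]))       # 0.5
```

In the paper's setting, one would compute the drift statistic per covariate (and for the target label) between cohorts, and compare it against the unfairness of models trained on the earlier cohort and evaluated on the later one.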

Rights

© 2024 IEEE. Personal use is permitted.
