MetaAug: Meta-data Augmentation for Post-training Quantization
Date
2024
Authors
Pham, C.
Hoang, A.D.
Nguyen, C.C.
Le, T.
Phung, D.
Carneiro, G.
Do, T.-T.
Editors
Ricci, E.
Leonardis, A.
Roth, S.
Russakovsky, O.
Sattler, T.
Varol, G.
Type
Conference paper
Citation
Lecture Notes in Computer Science, 2024 / Ricci, E., Leonardis, A., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds.), vol. 15085, pp. 236-252
Statement of Responsibility
Cuong Pham, Anh Dung Hoang, Cuong C. Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do
Conference Name
European Conference on Computer Vision (ECCV) (29 Sep 2024 - 4 Oct 2024 : Milan, Italy)
Abstract
Post-Training Quantization (PTQ) has received significant attention because it requires only a small calibration set to quantize a full-precision model, which is practical in real-world settings where full access to a large training set is unavailable. However, PTQ often overfits the small calibration dataset. Several methods have been proposed to address this issue, yet they still rely solely on the calibration set for quantization and do not validate the quantized model, owing to the lack of a validation set. In this work, we propose a novel meta-learning-based approach to enhance the performance of post-training quantization. To mitigate overfitting, instead of training the quantized model only on the original calibration set without any validation during learning, as in previous PTQ works, our approach both trains and validates the quantized model using two different sets of images. Specifically, we jointly optimize a transformation network and a quantized model through bi-level optimization. The transformation network modifies the original calibration data, and the modified data serve as the training set for the quantized model, with the objective that the quantized model performs well on the original calibration data. Extensive experiments on the widely used ImageNet dataset with different neural network architectures demonstrate that our approach outperforms state-of-the-art PTQ methods.
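The bi-level scheme described in the abstract (an inner loop that trains the quantized model on transformed calibration data, and an outer loop that adjusts the transformation so the quantized model performs well on the original data) can be sketched with a deliberately tiny toy. Everything below is an illustrative assumption, not the paper's method: a scalar linear model stands in for the network, a uniform rounding quantizer for the quantization scheme, a scalar shift for the transformation network, and a grid search for the gradient-based meta-optimization.

```python
import random

random.seed(0)

def quantize(w, n_bits=2, w_max=4.0):
    # Uniform quantization of a scalar weight to 2**n_bits levels in [-w_max, w_max].
    step = 2 * w_max / (2 ** n_bits - 1)
    return round((w + w_max) / step) * step - w_max

# Stand-in "full-precision model": y = 1.7 * x, plus a small calibration set.
calib_x = [random.uniform(-1, 1) for _ in range(8)]
calib_y = [1.7 * x for x in calib_x]

def inner_train(t, steps=50, lr=0.1):
    # Inner loop: fit a scalar weight w on the *transformed* data (x + t)
    # by gradient descent on squared error, then quantize the result.
    w = 0.0
    for _ in range(steps):
        g = sum(2 * (w * (x + t) - y) * (x + t)
                for x, y in zip(calib_x, calib_y)) / len(calib_x)
        w -= lr * g
    return quantize(w)

def outer_loss(t):
    # Outer objective: the quantized model must fit the *original* calibration data.
    wq = inner_train(t)
    return sum((wq * x - y) ** 2 for x, y in zip(calib_x, calib_y)) / len(calib_x)

# Gradient-free stand-in for the bi-level optimization: pick the transformation
# parameter t whose inner-trained quantized model has the lowest outer loss.
candidates = [i / 10 - 1.0 for i in range(21)]  # t in {-1.0, -0.9, ..., 1.0}
best_t = min(candidates, key=outer_loss)
```

In the paper, the transformation is a neural network and both levels are optimized with gradients on images; this grid search only mirrors the train-on-modified-data / validate-on-original-data structure.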
Rights
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG