Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/117310
Type: Conference paper
Title: A Bayesian data augmentation approach for learning deep models
Author: Tran, T.
Pham, T.
Carneiro, G.
Palmer, L.
Reid, I.
Citation: Advances in Neural Information Processing Systems, 2018 / Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (ed./s), vol.2017-December, pp.1-10
Publisher: Neural Information Processing Systems Foundation
Issue Date: 2018
Series/Report no.: Advances in Neural Information Processing Systems
ISSN: 1049-5258
Conference Name: 31st Conference on Neural Information Processing Systems (NIPS 2017) (4 Dec 2017 - 9 Dec 2017 : Long Beach, CA)
Editor: Guyon, I.
Luxburg, U.V.
Bengio, S.
Wallach, H.
Fergus, R.
Vishwanathan, S.
Garnett, R.
Statement of Responsibility: Toan Tran, Trung Pham, Gustavo Carneiro, Lyle Palmer and Ian Reid
Abstract: Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to acquire, store and process. A reasonable alternative is therefore to automatically generate new annotated training samples, a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear whether such transformations constitute a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation of data augmentation, where new annotated training points are treated as missing variables and generated according to the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm, generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show that our proposed method outperforms the dominant data augmentation approach mentioned above; the results also show that our approach produces better classification results than similar GAN models.
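The sketch below is not the authors' code; it is a minimal toy illustration of the loop structure the abstract describes, assuming a class-conditional Gaussian in place of the paper's GAN generator and a nearest-centroid rule in place of a deep classifier. Synthetic annotated points are treated as missing data, sampled from the current generative model (E-step), and then pooled with the real data to update both the generator and the classifier (M-step), in the spirit of generalised Monte Carlo expectation maximisation.

import numpy as np

rng = np.random.default_rng(0)

# Toy annotated training set: two 2-D classes (stand-in for MNIST/CIFAR data).
X_real = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                    rng.normal(3.0, 1.0, (50, 2))])
y_real = np.array([0] * 50 + [1] * 50)

# Generative-model parameters: one mean/std per class (the toy "generator").
means = np.array([[0.0, 0.0], [1.0, 1.0]])
stds = np.ones((2, 2))

for em_iter in range(20):
    # E-step (Monte Carlo): sample synthetic annotated points from the
    # current class-conditional generative model.
    X_synth, y_synth = [], []
    for c in range(2):
        X_synth.append(rng.normal(means[c], stds[c], (50, 2)))
        y_synth.append(np.full(50, c))
    X_aug = np.vstack([X_real] + X_synth)
    y_aug = np.concatenate([y_real] + y_synth)

    # M-step: refit the generator and the classifier (here, class centroids)
    # on the augmented set of real plus synthetic annotated samples.
    for c in range(2):
        Xc = X_aug[y_aug == c]
        means[c] = Xc.mean(axis=0)
        stds[c] = Xc.std(axis=0) + 1e-6

# Nearest-centroid prediction with the centroids learned from augmented data.
X_test = np.array([[0.2, -0.1], [2.8, 3.1]])
pred = np.argmin(((X_test[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
print(pred)  # expected: [0 1]

In the paper the generator is a GAN trained jointly with the deep classifier; the toy Gaussian here only serves to show how the E-step sampling and M-step updates interleave.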
Rights: Copyright status unknown
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/FL130100102
Published version: https://papers.nips.cc/paper/6872-a-bayesian-data-augmentation-approach-for-learning-deep-models
Appears in Collections:Aurora harvest 8
Australian Institute for Machine Learning publications
Computer Science publications


