Reinforcement learning with attention that works: a self-supervised approach
dc.contributor.author | Manchin, A. | |
dc.contributor.author | Abbasnejad, E. | |
dc.contributor.author | Van Den Hengel, A. | |
dc.contributor.conference | International Conference on Neural Information Processing (ICONIP) (12 Dec 2019 - 15 Dec 2019 : Sydney, Australia) | |
dc.contributor.editor | Gedeon, T. | |
dc.contributor.editor | Wong, K.W. | |
dc.contributor.editor | Lee, M. | |
dc.date.issued | 2019 | |
dc.description.abstract | Attention models have had a significant positive impact on deep learning across a range of tasks. However, previous attempts at integrating attention with reinforcement learning have failed to produce significant improvements. Unlike the selective attention models used in previous attempts, which constrain the attention via preconceived notions of importance, our implementation utilises the Markovian properties inherent in the state input. We propose the first combination of self-attention and reinforcement learning that is capable of producing significant improvements, including new state-of-the-art results in the Arcade Learning Environment. | |
dc.description.statementofresponsibility | Anthony Manchin, Ehsan Abbasnejad, and Anton van den Hengel | |
dc.identifier.citation | Communications in Computer and Information Science, 2019 / Gedeon, T., Wong, K.W., Lee, M. (ed./s), vol.1143 CCIS, pp.223-230 | |
dc.identifier.doi | 10.1007/978-3-030-36802-9_25 | |
dc.identifier.isbn | 9783030368012 | |
dc.identifier.issn | 1865-0929 | |
dc.identifier.issn | 1865-0937 | |
dc.identifier.orcid | Van Den Hengel, A. [0000-0003-3027-8364] | |
dc.identifier.uri | http://hdl.handle.net/2440/123724 | |
dc.language.iso | en | |
dc.publisher | Springer | |
dc.publisher.place | Switzerland | |
dc.relation.ispartofseries | Communications in Computer and Information Science; 1143 | |
dc.rights | © Springer Nature Switzerland AG 2019 | |
dc.source.uri | https://doi.org/10.1007/978-3-030-36802-9_25 | |
dc.subject | Reinforcement learning; Attention; Deep learning | |
dc.title | Reinforcement learning with attention that works: a self-supervised approach | |
dc.type | Conference paper | |
pubs.publication-status | Published |