Towards efficient deep reinforcement learning in complex and dynamic tasks

Date

2024

Authors

Chen, Bin

Type

thesis

Abstract

As an essential means of achieving generalised AI, deep reinforcement learning (DRL) has achieved beyond-human performance in many decision-making environments, such as competitive games and navigation tasks, by combining the representational power of deep learning with the decision-making power of reinforcement learning. However, DRL suffers from poor sampling efficiency under sparse rewards in complex and dynamic environments, because it requires massive interactions with the environment under partial observability. Owing to this inefficient data sampling, it is difficult to train a DRL agent to optimal performance even after millions of steps. Beyond the instability of complex and dynamic environments, training efficiency degrades further as the number of agents grows, making the problem even more severe in multi-agent DRL (MARL). This thesis focuses on sparse-reward issues in complex tasks, sampling-efficiency issues in dynamic tasks, and state-space explosion issues in large-scale multi-agent tasks.

School/Discipline

University of South Australia. UniSA STEM.

Dissertation Note

Thesis (PhD(Computer and Information Science))--University of South Australia, 2024.

Provenance

Copyright 2024 Bin Chen.

Description

1 e-thesis (xii, 150 pages) : colour illustrations.
Includes bibliographical references (pages 128-142).

Access Status

Unrestricted online access
