A deep hierarchical reinforcement learner for aerial shepherding of ground swarms

Date

2019

Authors

Nguyen, H.T.
Nguyen, T.D.
Garratt, M.
Kasmarik, K.
Anavatti, S.
Barlow, M.
Abbass, H.A.

Type

Book chapter

Citation

Event/exhibition information: 26th International Conference on Neural Information Processing (ICONIP 2019), Sydney, Australia, 12/12/2019-15/12/2019. Source details - Title: ICONIP 2019: Neural Information Processing, 2019, pp. 658-669

Abstract

This paper introduces a deep reinforcement learning method to train an autonomous aerial agent acting as a shepherd that guides a swarm of ground vehicles. The learner is situated within a high-fidelity Robot Operating System (ROS)-based simulation environment in which an Unmanned Aerial Vehicle (UAV) learns to guide a swarm of Unmanned Ground Vehicles (UGVs) to a target location. Our approach combines machine education, apprenticeship bootstrapping, and deep-learning-based methodologies to decompose the complex shepherding strategy into sub-problems requiring simpler skills, which are then fused to form the overall skill set required for shepherding. The proposed methodology is effective in training the UAV agent under multiple reward design schemes.
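To make the decomposition idea concrete, the following is a minimal illustrative sketch of a hierarchical shepherding controller: a high-level policy selects between two sub-skills (here called "collect" and "drive", a common decomposition in shepherding models), and each sub-skill computes a UAV waypoint behind the swarm relative to the goal. The skill names, thresholds, and geometry below are assumptions for illustration only, not the authors' trained networks or reward schemes.

```python
# Illustrative sketch of hierarchical skill decomposition for shepherding.
# Skill names ("collect"/"drive"), thresholds, and waypoint geometry are
# hypothetical; the paper learns these behaviours with deep RL instead.
from dataclasses import dataclass
import math


@dataclass
class SwarmState:
    sheep: list   # (x, y) positions of the UGVs
    goal: tuple   # (x, y) target location


def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def select_skill(state, cohesion_radius=2.0):
    """High-level policy: 'collect' stragglers if the swarm is
    dispersed beyond cohesion_radius, otherwise 'drive' it to the goal."""
    c = centroid(state.sheep)
    if any(dist(p, c) > cohesion_radius for p in state.sheep):
        return "collect"
    return "drive"


def skill_waypoint(state, skill, standoff=1.0):
    """Low-level skill: place the UAV behind the relevant point
    (furthest straggler for 'collect', centroid for 'drive') on the
    side opposite the goal, so the swarm is pushed toward it."""
    c = centroid(state.sheep)
    if skill == "drive":
        target = c
    else:
        target = max(state.sheep, key=lambda p: dist(p, c))
    d = dist(target, state.goal)
    ux, uy = (target[0] - state.goal[0]) / d, (target[1] - state.goal[1]) / d
    return (target[0] + standoff * ux, target[1] + standoff * uy)


state = SwarmState(sheep=[(0, 0), (1, 0), (0, 1), (5, 5)], goal=(10, 0))
skill = select_skill(state)
print(skill, skill_waypoint(state, skill))
```

In the paper's framework, each sub-skill and the high-level selector would be learned policies rather than the hand-coded rules above; the sketch only shows how fusing simpler skills can yield the overall shepherding behaviour.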

Rights

Copyright 2019 Springer Nature
