Path Following Optimization for an Underactuated USV Using Smoothly-Convergent Deep Reinforcement Learning
2021 (English). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 22, no. 10, p. 6208-6220. Article in journal (Refereed) Published
Abstract [en]
This paper addresses the path-following problem for an underactuated unmanned surface vessel (USV) using deep reinforcement learning (DRL). A smoothly-convergent DRL (SCDRL) method is proposed based on the deep Q-network (DQN) and reinforcement learning. In this method, an improved DQN structure is developed as a decision-making network to reduce the complexity of the control law for path following of a three-degree-of-freedom USV model. An exploring function based on adaptive gradient descent is proposed to extract training knowledge for the DQN from empirical data. In addition, a new reward function is designed to evaluate the output decisions of the DQN and thereby reinforce the decision-making network in controlling USV path following. Numerical simulations were conducted to evaluate the performance of the proposed method. The results demonstrate that the proposed SCDRL converges more smoothly than traditional deep Q-learning, while its path-following error is comparable to that of existing methods. Owing to its usability and generality, the proposed method can be applied to practical USV path-following tasks.
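The abstract outlines two ingredients of the SCDRL scheme: a reward function that scores the DQN's control decisions against the path-following error, and an exploring function that trades off exploration against exploitation during training. The paper's exact reward weights and adaptive-gradient-descent exploring function are not given here, so the sketch below is only illustrative: a hypothetical cross-track/heading-error reward and a standard decaying epsilon-greedy action selector as a stand-in for the exploring function. All names and constants (`w_e`, `w_psi`, `eps_start`, `decay`) are assumptions, not values from the paper.

```python
import math
import random

def reward(cross_track_error, heading_error, w_e=1.0, w_psi=0.5):
    """Hypothetical path-following reward: penalize the USV's
    cross-track error and heading error. Weights are illustrative,
    not taken from the paper."""
    return -(w_e * abs(cross_track_error) + w_psi * abs(heading_error))

def select_action(q_values, step, eps_start=1.0, eps_end=0.05,
                  decay=1e-3, rng=None):
    """Decaying epsilon-greedy selection over the DQN's Q-values,
    a common stand-in for the paper's adaptive exploring function:
    explore often early in training, act greedily later."""
    rng = rng or random.Random(0)
    eps = eps_end + (eps_start - eps_end) * math.exp(-decay * step)
    if rng.random() < eps:
        return rng.randrange(len(q_values))          # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

A larger tracking error yields a strictly lower reward, which is what drives the decision-making network toward the reference path; the exponential epsilon schedule makes the policy's behavior change gradually, which is in the spirit of the smooth convergence the method targets.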
Place, publisher, year, edition, pages
IEEE, 2021. Vol. 22, no 10, p. 6208-6220
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mau:diva-40282
DOI: 10.1109/TITS.2020.2989352
ISI: 000704117000013
Scopus ID: 2-s2.0-85101067706
OAI: oai:DiVA.org:mau-40282
DiVA id: diva2:1524417
Available from: 2021-02-01 Created: 2021-02-01 Last updated: 2024-02-05 Bibliographically approved