Autonomous Navigation and Obstacle Avoidance of UAVs Based on Deep Reinforcement Learning
DOI: https://doi.org/10.62381/ACS.SDIT2024.53
Author(s)
Xinyuan Wang*
Affiliation(s)
Department of Transportation, Nanjing University of Aeronautics and Astronautics, Nanjing, China
*Corresponding author
Abstract
This paper studies autonomous navigation and obstacle avoidance for UAVs based on deep reinforcement learning. It analyzes the limitations of traditional navigation and obstacle avoidance methods and proposes using deep reinforcement learning to optimize UAV path planning and obstacle avoidance in complex environments. Through autonomous learning, a deep reinforcement learning agent can perceive the surrounding environment, adjust the flight path in real time, avoid both static and dynamic obstacles, and significantly improve the autonomy and task execution efficiency of UAVs. The paper systematically reviews the core concepts and algorithms of deep reinforcement learning and their application to UAV navigation and obstacle avoidance. In particular, it analyzes environmental perception, the convergence and stability of dynamic obstacle avoidance, and the optimization of algorithm performance, emphasizes the importance of real-time performance for autonomous UAV flight, and demonstrates the technical advantages and broad application prospects of deep reinforcement learning in UAV navigation tasks.
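To make the reward-driven navigation idea described above concrete, the sketch below shows an agent learning to reach a goal while avoiding obstacles on a toy grid. It is not the paper's method: tabular Q-learning stands in for the deep (neural-network-based) reinforcement learning studied in the paper, and the grid size, obstacle layout, reward values, and hyperparameters are illustrative assumptions only.

```python
# Minimal illustrative sketch (not the paper's method): tabular Q-learning on a
# small grid, where collisions are penalized and reaching the goal is rewarded.
# All constants below are assumed for the example.

import random

GRID_W, GRID_H = 6, 6
OBSTACLES = {(2, 2), (2, 3), (3, 3), (4, 1)}   # hypothetical static obstacles
START, GOAL = (0, 0), (5, 5)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # right, left, up, down

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1          # assumed hyperparameters
q = {}                                          # (state, action index) -> value

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nx = min(max(state[0] + action[0], 0), GRID_W - 1)
    ny = min(max(state[1] + action[1], 0), GRID_H - 1)
    nxt = (nx, ny)
    if nxt in OBSTACLES:
        return state, -10.0, False              # collision penalty, stay in place
    if nxt == GOAL:
        return nxt, +10.0, True                 # goal reward ends the episode
    return nxt, -0.1, False                     # small step cost favors short paths

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = [q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    return values.index(max(values))

for episode in range(2000):
    state, done = START, False
    for _ in range(200):                        # cap episode length
        a = choose_action(state)
        nxt, reward, done = step(state, ACTIONS[a])
        best_next = max(q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = nxt
        if done:
            break

# Greedy rollout of the learned policy.
state, path = START, [START]
while state != GOAL and len(path) < 50:
    a = max(range(len(ACTIONS)), key=lambda b: q.get((state, b), 0.0))
    state, _, _ = step(state, ACTIONS[a])
    path.append(state)
print("learned obstacle-free path:", path)
```

In the setting the paper addresses, the lookup table would be replaced by a deep network mapping continuous sensor observations to action values or a policy, which is what allows the approach to scale to complex environments with dynamic obstacles.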
Keywords
UAV Navigation; Deep Reinforcement Learning; Obstacle Avoidance; Path Planning; Autonomous Flight