Paper: 2024, Vol. 42, Issue 2: 310-318
Cite this article:
KOU Kai, YANG Gang, ZHANG Wenqi, LIU Xincheng, YAO Yuan, ZHOU Xingshe. Exploring UAV autonomous navigation algorithm based on soft actor-critic[J]. Journal of Northwestern Polytechnical University, 2024, 42(2): 310-318

Exploring UAV autonomous navigation algorithm based on soft actor-critic
KOU Kai, YANG Gang, ZHANG Wenqi, LIU Xincheng, YAO Yuan, ZHOU Xingshe
School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
Abstract:
Existing deep reinforcement learning algorithms suffer from partially observable environments and insufficient perceptual information in UAV autonomous navigation tasks. This paper investigates end-to-end UAV autonomous navigation in unknown environments based on the non-deterministic-policy soft actor-critic (SAC) reinforcement learning algorithm. Specifically, the paper proposes a policy network with a memory enhancement mechanism, which integrates historical memory with the current observation to extract the temporal dependencies of the observation data, thereby strengthening state estimation under partially observable conditions and preventing the algorithm from falling into locally optimal solutions. In addition, a non-sparse reward function is designed to mitigate the difficulty that reinforcement learning policies have in converging under sparse reward conditions. Finally, several complex scenarios are trained and validated on the AirSim + UE4 simulation platform. The experimental results show that the proposed method achieves a navigation success rate 10% higher than that of the baseline algorithm and an average flight distance 21% shorter, effectively enhancing the stability and convergence of the UAV autonomous navigation algorithm.
Key words:    reinforcement learning    soft actor-critic (SAC)    unmanned aerial vehicle (UAV)    autonomous navigation
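As an illustrative aid only (not the authors' implementation, which is not reproduced on this page), the two ingredients described in the abstract, a memory-augmented stochastic SAC actor and a non-sparse shaped reward, can be sketched as follows. The recurrent cell choice (GRU), all layer sizes, reward coefficients, and function names are assumptions for the sketch:

```python
# Sketch of a memory-augmented SAC actor and a non-sparse reward,
# assuming PyTorch is available. Hyperparameters are illustrative.
import torch
import torch.nn as nn


class MemoryAugmentedActor(nn.Module):
    """Stochastic Gaussian actor that fuses a history of observations
    with a GRU, approximating state estimation under partial observability."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq: torch.Tensor, h0=None):
        # obs_seq: (batch, time, obs_dim), i.e. historical + current observations.
        z = self.encoder(obs_seq)
        out, h = self.gru(z, h0)
        feat = out[:, -1]  # recurrent state estimate at the current step
        mu = self.mu(feat)
        log_std = self.log_std(feat).clamp(-20, 2)  # usual SAC stability clamp
        eps = torch.randn_like(mu)
        action = torch.tanh(mu + log_std.exp() * eps)  # squashed stochastic action
        return action, h


def shaped_reward(prev_dist: float, cur_dist: float,
                  collided: bool, reached: bool,
                  step_cost: float = 0.01) -> float:
    """Non-sparse reward: dense per-step progress toward the goal plus
    terminal bonus/penalty. All coefficients here are made up."""
    if collided:
        return -10.0
    if reached:
        return 10.0
    # Positive when the UAV moved closer to the goal; small cost per step
    # discourages wandering.
    return (prev_dist - cur_dist) - step_cost
```

The dense progress term gives the policy a learning signal on every step, which is the usual remedy when a goal-only (sparse) reward makes convergence slow.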
Received: 2023-03-27     Revised:
DOI: 10.1051/jnwpu/20244220310
Foundation item: supported by the National Natural Science Foundation of China (62032018, 62141220, 61876151)
Corresponding author: YANG Gang (1974-), professor. E-mail: yeungg@nwpu.edu.cn
About the author: KOU Kai (1993-), Ph.D. candidate
