Paper: 2024, Vol. 42, Issue(3): 396-405
Cite this article:
YUE Qi, SHI Yifan, CHU Jing, HUANG Yong. Performance evaluation and improvement of deep Q network for lunar landing task[J]. Journal of Northwestern Polytechnical University

Performance evaluation and improvement of deep Q network for lunar landing task
YUE Qi1, SHI Yifan1, CHU Jing1, HUANG Yong2
1. School of Automation, Xi'an University of Posts & Telecommunications, Xi'an 710121, China;
2. School of Astronautics, Northwestern Polytechnical University, Xi'an 710072, China
Abstract:
Reinforcement learning methods based on the deep Q network (DQN) are being applied in an increasingly wide range of scenarios, but the performance of such algorithms is strongly affected by multiple factors. Taking the lunar lander as a case study, this paper examines how various hyper-parameters affect the performance of the DQN algorithm and, on that basis, tunes a model with improved performance. Whereas previously reported DQN models achieve an average reward of 280+ over 100 test episodes, the model in this paper reaches 290+. Its robustness is further verified by introducing additional uncertainty into the original problem during testing. In addition, the idea of imitation learning is incorporated: demonstration data are obtained with a heuristic-function-based model guidance method, which accelerates training and improves performance. Simulation results confirm the effectiveness of this method.
Key words:    deep reinforcement learning    DQN    imitation learning   
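The abstract combines two ingredients: an exploration schedule that is itself a tunable hyper-parameter, and a replay buffer seeded with transitions from a heuristic controller (the imitation-learning idea). A minimal sketch of both is given below; this is an illustration under stated assumptions, not the paper's implementation — the class and function names, the 8-dimensional placeholder states (matching the lunar lander's observation size), and all constants are assumptions for the example.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform-sampling replay buffer that can be seeded with demonstration data."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def linear_epsilon(step, eps_start=1.0, eps_end=0.01, decay_steps=50_000):
    """Linearly anneal the exploration rate from eps_start down to eps_end."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

# Seed the buffer with transitions produced by a heuristic controller
# (placeholders here); the agent then keeps filling it with its own experience.
buffer = ReplayBuffer(capacity=100_000)
demo_transitions = [((0.0,) * 8, 0, 1.0, (0.0,) * 8, False) for _ in range(64)]
for t in demo_transitions:
    buffer.add(t)

batch = buffer.sample(32)          # minibatch mixing demonstration/agent data
eps = linear_epsilon(step=25_000)  # exploration rate halfway through the decay
```

Because `deque(maxlen=...)` silently evicts the oldest transitions, early demonstration data is gradually replaced by the agent's own experience as training proceeds, which is one simple way to fade out the imitation signal.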
Received: 2023-05-30     Revised:
DOI: 10.1051/jnwpu/20244230396
Foundation items: Supported by the National Natural Science Foundation of China (61703336) and the Natural Science Foundation of Shaanxi Province (2023-JC-QN-0727)
Corresponding author: YUE Qi (b. 1981), e-mail: yueqi6@163.com
About the author: YUE Qi (b. 1981), associate professor

References:
[1] FRANÇOIS-LAVET V, HENDERSON P, ISLAM R, et al. An introduction to deep reinforcement learning[J]. Foundations and Trends in Machine Learning, 2018, 11(3/4): 219-354
[2] CHOI R Y, COYNER A S, KALPATHY-CRAMER J, et al. Introduction to machine learning, neural networks, and deep learning[J]. Translational Vision Science & Technology, 2020, 9(2): 14
[3] LYU L, SHEN Y, ZHANG S. The advance of reinforcement learning and deep reinforcement learning[C]//2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms, 2022: 644-648
[4] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533
[5] FU Yihao, BAO Hong, LIANG Tianjiao, et al. Research on decision algorithm for autonomous vehicle lane change based on vision DQN[J]. Transducer and Microsystem Technologies, 2023, 42(10): 52-55 (in Chinese)
[6] LIN Xinyou, YE Zhuoming, ZHOU Binhao. DQN reinforcement learning-based steering control strategy for autonomous driving[J]. Journal of Mechanical Engineering, 2023, 59(16): 315-324 (in Chinese)
[7] STREHL A L, LI L, WIEWIORA E, et al. PAC model-free reinforcement learning[C]//Proceedings of the 23rd International Conference on Machine Learning, 2006: 881-888
[8] ZHU J, WU F, ZHAO J. An overview of the action space for deep reinforcement learning[C]//Proceedings of the 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, 2021: 1-10
[9] JÄGER J, HELFENSTEIN F, SCHARF F. Bring color to deep Q-networks: limitations and improvements of DQN leading to rainbow DQN[M]//Reinforcement Learning Algorithms: Analysis and Applications. Cham: Springer International Publishing, 2021: 135-149
[10] HUSSEIN A, GABER M M, ELYAN E, et al. Imitation learning: a survey of learning methods[J]. ACM Computing Surveys, 2017, 50(2): 1-35
[11] KUANG Liqun, LI Siyuan, FENG Li, et al. Application of deep reinforcement learning algorithm on intelligent military decision system[J]. Computer Engineering and Applications, 2021, 57(20): 271-278 (in Chinese)
[12] ZHAI J, LIU Q, ZHANG Z, et al. Deep Q-learning with prioritized sampling[C]//23rd International Conference on Neural Information Processing, Tokyo, Japan, 2016: 13-22
[13] ZHANG B, RAJAN R, PINEDA L, et al. On the importance of hyperparameter optimization for model-based reinforcement learning[C]//International Conference on Artificial Intelligence and Statistics, PMLR, 2021: 4015-4023
[14] BERGSTRA J, BENGIO Y. Random search for hyper-parameter optimization[J]. Journal of Machine Learning Research, 2012, 13(2): 281-305
[15] ZENG D, YAN T, ZENG Z, et al. A hyperparameter adaptive genetic algorithm based on DQN[J]. Journal of Circuits, Systems and Computers, 2023, 32(4): 1-24
[16] WU J, CHEN X Y, ZHANG H, et al. Hyperparameter optimization for machine learning models based on Bayesian optimization[J]. Journal of Electronic Science and Technology, 2019, 17(1): 26-40