Paper: 2017, Vol. 35, Issue 2: 220-225
Cite this article:
Yang Honghui, Shen Sheng, Yao Xiaohui, Han Zhen. Underwater Acoustic Target Feature Learning and Recognition using Hybrid Regularization Deep Belief Network[J]. Journal of Northwestern Polytechnical University, 2017, 35(2): 220-225

Underwater Acoustic Target Feature Learning and Recognition using Hybrid Regularization Deep Belief Network
Yang Honghui, Shen Sheng, Yao Xiaohui, Han Zhen
School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China
Abstract:
Obtaining labeled underwater acoustic target (UAT) samples is difficult and costly, so UAT recognition is a small-sample-size problem. To address it, a UAT deep feature learning and recognition method based on a hybrid regularization deep belief network (HR-DBN) is proposed. First, a deep belief network combining two regularization strategies is used to learn deep features of UATs. The first strategy adds a maximum mutual information group regularization term to the objective function to increase the sparsity of the hidden layers; the second uses a large number of unlabeled samples to obtain prior knowledge about the general characteristics of UATs and to guide feature learning. Finally, a small number of labeled samples are used to fine-tune the network globally and build the recognition system, improving recognition accuracy. Validation experiments on measured ship-radiated noise from two classes of targets show that the proposed method extracts deep features that describe UATs and improves UAT recognition accuracy.
Key words: underwater acoustics; automatic target recognition; deep learning; unsupervised learning; deep belief network; mutual information; regularization
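The abstract outlines two regularization strategies: a group regularization term added to the RBM objective to sparsify the hidden layer, and unsupervised pre-training on a large unlabeled set followed by supervised fine-tuning on a small labeled set. The sketch below is not the authors' implementation; it assumes fixed hidden-unit groups with a plain L2-over-groups penalty as a stand-in for the paper's maximum mutual information grouping, illustrative layer sizes and hyper-parameters, and a simple logistic readout on the labeled data in place of the paper's global fine-tuning of the whole network.

```python
# Minimal sketch (not the authors' code) of group-sparse RBM pre-training on
# unlabeled data plus a supervised readout on a small labeled set.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GroupSparseRBM:
    """Bernoulli RBM trained with CD-1 plus a group-sparsity penalty on hidden activations."""
    def __init__(self, n_visible, n_hidden, n_groups=4, lr=0.05, lam=0.01):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.groups = np.array_split(np.arange(n_hidden), n_groups)  # fixed groups (assumption)
        self.lr, self.lam = lr, lam

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def fit(self, X, epochs=10, batch=32):
        for _ in range(epochs):
            for i in range(0, len(X), batch):
                v0 = X[i:i + batch]
                h0 = self.hidden_probs(v0)
                # CD-1: one Gibbs step to get a reconstruction
                h_s = (h0 > rng.random(h0.shape)).astype(float)
                v1 = sigmoid(h_s @ self.W.T + self.b_v)
                h1 = self.hidden_probs(v1)
                # Standard contrastive-divergence gradient
                dW = (v0.T @ h0 - v1.T @ h1) / len(v0)
                # Group penalty: gradient of sum_g ||h_g||_2, pushed through the sigmoid;
                # encourages group-wise sparse hidden activity (surrogate for the MI-based term)
                grad_h = np.zeros_like(h0)
                for g in self.groups:
                    norm = np.sqrt(np.sum(h0[:, g] ** 2, axis=1, keepdims=True)) + 1e-8
                    grad_h[:, g] = h0[:, g] / norm
                grad_h *= h0 * (1.0 - h0)
                dW -= self.lam * (v0.T @ grad_h) / len(v0)
                self.W += self.lr * dW
                self.b_v += self.lr * (v0 - v1).mean(axis=0)
                self.b_h += self.lr * ((h0 - h1).mean(axis=0) - self.lam * grad_h.mean(axis=0))

# Unsupervised pre-training on many unlabeled samples (prior knowledge / initialization)
X_unlab = rng.random((2000, 64))                      # illustrative unlabeled spectra
rbm1 = GroupSparseRBM(64, 32); rbm1.fit(X_unlab)
rbm2 = GroupSparseRBM(32, 16); rbm2.fit(rbm1.hidden_probs(X_unlab))

# A small labeled set is then used on top of the learned deep features
X_lab, y = rng.random((100, 64)), rng.integers(0, 2, 100)
feats = rbm2.hidden_probs(rbm1.hidden_probs(X_lab))
w = np.zeros(feats.shape[1])
for _ in range(200):                                   # plain logistic-regression readout
    p = sigmoid(feats @ w)
    w -= 0.1 * feats.T @ (p - y) / len(y)
print("training accuracy:", np.mean((sigmoid(feats @ w) > 0.5) == y))
```

In the paper's pipeline, the pre-trained stack is instead fine-tuned end-to-end with the small labeled set before the recognition system is built.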
Received: 2016-10-11
Foundation item: Supported by the Foundation of the Key Laboratory of Blind Signal Processing
Biography: Yang Honghui (born 1971), female, associate professor at Northwestern Polytechnical University; her research focuses on acoustic signal processing and pattern recognition.