Paper: 2019, Vol. 37, Issue 3: 503-508
Cite this article:
ZHANG Ke, ZHAO Xinbo, MO Rong. A Bioinspired Visual Saliency Model[J]. Journal of Northwestern Polytechnical University, 2019, 37(3): 503-508

A Bioinspired Visual Saliency Model
ZHANG Ke1, ZHAO Xinbo2, MO Rong1
1. Key Laboratory of Contemporary Design and Integrated Manufacturing Technology of Ministry of Education, Northwestern Polytechnical University, Xi'an 710072, China;
2. School of Computer Science, Northwestern Polytechnical University, Xi'an 710129, China
Abstract:
This paper presents a bioinspired visual saliency model. The end-stopping mechanism of the primary visual cortex is introduced to extract features, such as corners, line intersections and line endpoints, that characterize the contours of potential salient objects; these features are combined with brightness, color and orientation features to form the final saliency map. The model simulates the processing of visual signals from the retina and the lateral geniculate nucleus (LGN) to the primary visual cortex (V1). First, according to the characteristics of the retina and the LGN, the input image is decomposed into brightness and opponent-color channels. Then, simple-cell receptive fields are simulated with 2D Gabor filters, and the amplitude of the Gabor response represents the complex-cell response. Finally, the response of a V1 end-stopped cell is computed as the product of the responses of two complex cells with different orientations, and the outputs of V1 and the LGN together constitute the bottom-up saliency map. Experimental results on public datasets show that the model accurately predicts human fixations and performs on par with state-of-the-art bottom-up saliency models.
Key words: bottom-up saliency; visual attention; end-stopping
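To make the pipeline described in the abstract concrete, the sketch below implements a comparable bottom-up saliency computation in Python with NumPy and scikit-image. It is a minimal illustration, not the authors' implementation: the Itti-style opponent-color formulas, the quadrature Gabor energy model for complex cells, the pairing of orthogonal orientations for the end-stopped product, the use of the summed complex-cell energies as the orientation feature, the four orientations, the Gabor frequency of 0.15 and the equal-weight fusion are all assumptions made for this example.

```python
# Minimal sketch of a retina/LGN -> V1 bottom-up saliency pipeline.
# All parameter choices below are illustrative assumptions, not the paper's settings.
import numpy as np
from skimage.filters import gabor


def normalize(m):
    """Scale a feature map to [0, 1]."""
    m = m - m.min()
    return m / (m.max() + 1e-8)


def lgn_channels(rgb):
    """Retina/LGN stage: brightness and opponent-color channels (Itti-style assumption)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    brightness = (r + g + b) / 3.0
    rg = r - g                    # red-green opponency
    by = b - (r + g) / 2.0        # blue-yellow opponency
    return brightness, rg, by


def v1_maps(brightness, n_orient=4, frequency=0.15):
    """V1 stage: complex-cell energy per orientation and end-stopped responses."""
    thetas = [i * np.pi / n_orient for i in range(n_orient)]

    # Simple cells: quadrature (even/odd) 2D Gabor responses.
    # Complex cells: amplitude of the quadrature pair.
    complex_resp = []
    for theta in thetas:
        even, odd = gabor(brightness, frequency=frequency, theta=theta)
        complex_resp.append(np.sqrt(even ** 2 + odd ** 2))

    # End-stopped cells: product of complex-cell responses at two different
    # orientations (orthogonal pairs assumed here), strongest at corners,
    # line crossings and line endpoints.
    end_stopped = sum(
        complex_resp[i] * complex_resp[(i + n_orient // 2) % n_orient]
        for i in range(n_orient)
    )
    orientation = sum(complex_resp)
    return orientation, end_stopped


def saliency(rgb_uint8):
    """Fuse LGN and V1 outputs into a bottom-up saliency map (equal weights assumed)."""
    brightness, rg, by = lgn_channels(rgb_uint8.astype(np.float64) / 255.0)
    orientation, end_stopped = v1_maps(brightness)
    maps = [brightness, np.abs(rg), np.abs(by), orientation, end_stopped]
    return normalize(sum(normalize(m) for m in maps))


if __name__ == "__main__":
    test = (np.random.rand(96, 96, 3) * 255).astype(np.uint8)  # stand-in for a real image
    print(saliency(test).shape)  # (96, 96)
```

Multiplying complex-cell responses at two different orientations suppresses isolated straight edges and responds strongly where several orientations coincide, which matches the abstract's point that end-stopping emphasizes corners, line intersections and line endpoints.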
Received: 2018-02-12
DOI: 10.1051/jnwpu/20193730503
Foundation item: Supported by the National Natural Science Foundation of China (61871326, 61231016) and the Natural Science Basic Research Plan in Shaanxi Province (2018JM6116)
Biography: ZHANG Ke (born 1986), Ph.D. candidate at Northwestern Polytechnical University; his research focuses on eye tracking, visual attention and saliency models.