Table of Contents

[1] Deng Hong, Wang Zuming, Yin Chengle, et al. A lightweight model for small-object drosophila detection based on KSGM-YOLO[J]. Jiangsu Agricultural Sciences, 2025, 53(5): 213-223.

Jiangsu Agricultural Sciences [ISSN: 1002-1302 / CN: 32-1214/S]

Volume:
Vol. 53
Issue:
No. 5, 2025
Pages:
213-223
Column:
Intelligent Pest Detection
Publication date:
2025-03-05

Article Info

Title:
A lightweight model for small-object drosophila detection based on KSGM-YOLO
Authors:
Deng Hong¹, Wang Zuming¹, Yin Chengle², Li Yueqian¹, Huang Weiji¹, Gui Lu¹, Zhou Shuai¹, Peng Yingqiong¹
1. School of Software, Jiangxi Agricultural University, Nanchang 330000, Jiangxi, China; 2. University of Debrecen, Debrecen 4032, Hungary
Author(s):
Deng Hong, et al.
Keywords:
fruit fly; YOLO v7-tiny; KSGM-YOLO; small-object detection; lightweight
CLC number:
S126; TP391.41
DOI:
-
Document code:
A
Abstract:
Fruit fly pests reduce the yield of fruit and vegetable crops and thereby cut growers' returns. Because fruit fly species closely resemble one another and most collected samples are small objects, conventional pest recognition models identify them poorly. To address this, this study proposes KSGM-YOLO, a lightweight small-object detection model based on YOLO v7-tiny. First, an anchor clustering algorithm tailored to the fruit fly dataset generates more effective anchor boxes. Second, the SimAM attention mechanism is introduced into the Backbone to strengthen the extraction of fruit fly semantic features. In parallel, the GSCBL and GSELAN modules are designed for the Neck of the original model to reduce its parameter count and computational cost. Finally, the MPDIoU loss function is adopted for the localization loss, improving bounding-box regression for small-object fruit flies. In addition, this study builds the fruit fly dataset Drosophila-Four and conducts extensive experiments on it. The results show that, compared with the original model, KSGM-YOLO improves accuracy by 2.3 percentage points while reducing the parameter count and computation by 6.3% and 8.3%, respectively. The proposed model therefore meets the need for more accurate small-object fruit fly detection while achieving a tangible lightweighting effect. In summary, this work provides a more accurate lightweight method for detecting small-object pests in fruit and vegetable crops, demonstrates the feasibility of deployment on edge devices, and can help agricultural workers spot fruit fly infestations in time, improving crop yield and quality.
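The anchor-clustering algorithm is described here only at this level of detail, so the following is a minimal sketch of the general idea rather than the paper's exact procedure: k-means over the labelled box sizes with 1 - IoU as the distance, a standard way to derive dataset-specific anchors for YOLO models. The function names and the choice of k = 9 (three anchors per detection scale) are assumptions.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating all boxes as sharing one corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
            + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def cluster_anchors(boxes, k=9, iters=100, seed=0):
    """k-means over box sizes using 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # assign each box to its highest-IoU (i.e., nearest) anchor
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted by area

# usage: boxes is an (N, 2) array of labelled (width, height) in pixels
# anchors = cluster_anchors(boxes)
```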
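SimAM is attractive in a lightweight model precisely because it is parameter-free: it weights each activation by an inverse "energy" term instead of learning extra weights. A PyTorch rendering of the formulation published by Yang et al. (2021) might look as follows; the default λ = 1e-4 comes from that paper, and exactly where the module sits in the Backbone is not specified here.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention (Yang et al., 2021): activations that deviate
    more from their channel's spatial mean receive larger weights."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        n = h * w - 1
        # squared deviation from the per-channel spatial mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # per-channel variance estimate
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # inverse energy: more distinctive neurons get weights closer to 1
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)
```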
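GSCBL and GSELAN are this paper's own modules and are not detailed in the abstract. Their names suggest CBL- and ELAN-style blocks rebuilt around GSConv from Li et al.'s slim-neck work, which is where the parameter and FLOP savings in the Neck would come from; the sketch below therefore shows only GSConv, under that assumption.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """GSConv (Li et al., slim-neck): half the output channels come from a
    standard convolution, half from a cheap depthwise convolution on that
    result; a channel shuffle then mixes the two groups."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.depthwise = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.dense(x)
        y = torch.cat((a, self.depthwise(a)), dim=1)
        # channel shuffle: interleave the dense and depthwise halves
        n, c, h, w = y.shape
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```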
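For the localization term, MPDIoU (Ma and Xu, 2023) extends IoU with the squared distances between the two boxes' top-left corners and between their bottom-right corners, each normalized by the squared input-image dimensions; this keeps the gradient informative for the tiny, tightly overlapping boxes typical of small-object fruit flies. A direct rendering of the published formula:

```python
import torch

def mpdiou_loss(pred: torch.Tensor, target: torch.Tensor,
                img_w: int, img_h: int, eps: float = 1e-7) -> torch.Tensor:
    """1 - MPDIoU for (N, 4) boxes in (x1, y1, x2, y2) format.
    img_w, img_h: input image size used to normalize corner distances."""
    # plain IoU
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared corner distances, normalized by the squared image size
    norm = img_w ** 2 + img_h ** 2
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    return (1 - (iou - d1 / norm - d2 / norm)).mean()
```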


Memo

Received: 2023-12-27
Funding: National Natural Science Foundation of China (grant No. 62262028); Science and Technology Research Project of Jiangxi Provincial Department of Education (GJJ210438, GJJ210434, GJJ2200423).
First author: Deng Hong (born 1977), male, from Nanchang, Jiangxi; master's degree, associate professor, master's supervisor; research interests: agricultural informatization and computer vision. E-mail: jxaudh@jxau.edu.cn.
Corresponding author: Peng Yingqiong, master's degree, professor, master's supervisor; research interests: agricultural informatization and image processing. E-mail: jneyq@jxau.edu.cn.
Last Update: 2025-03-05