Publication Search

Query:

Scholar name: Wu Qunyong (邬群勇)


Results: 17 pages in total
Responses of surface ozone under the tropical cyclone circulations: Case studies from Fujian Province, China SCIE
Journal Article | 2025, 16 (1) | ATMOSPHERIC POLLUTION RESEARCH
WoS CC Cited Count: 1

Abstract :

The circulation of tropical cyclones (TCs) exerts a multifaceted influence on the spatial and temporal distribution of surface pollutants. This study investigates the response of surface ozone (O3) concentrations to TCs in Fujian Province from June to December 2022 by analyzing the contributions of atmospheric pollutants, meteorological conditions, and dynamical transport. Empirical orthogonal function (EOF) decomposition is used to analyze the spatio-temporal distribution patterns of the affected O3, and a Gradient Boosting Regression Trees (GBRT) machine learning model is employed to estimate surface O3 concentration and quantify the influence of each factor. The results indicate an anomalous increase in O3 concentration during this period, with photochemistry-related meteorological conditions being the primary influence, accounting for 66.9% of O3 variations and demonstrating the interpretability of the GBRT model for attributing changes in O3 concentration. Low relative humidity and high temperatures were identified as pivotal factors driving the rise in O3 concentrations. The presence of a TC undermines this predominant influence, amplifying the role of transport factors and other atmospheric pollutants. In the case studies of TCs Muifa and Nanmadol (2022), the slow-moving or stagnant TCs triggered persistent downdrafts in their peripheries and brought meteorological conditions favorable for photochemistry, such as clear skies and warm temperatures. TCs also enhance the impact of horizontal and vertical dynamic transport on O3 concentrations. This work provides vital insights into the complex interplay between TCs and surface O3 concentrations, highlighting the need for targeted environmental and air-quality management strategies in regions frequently impacted by TCs.
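The attribution workflow described above (a GBRT model fit to meteorological drivers, then per-factor importance via permutation analysis) can be sketched with scikit-learn; this is an illustrative reconstruction on synthetic data, not the paper's code, and the driver features and coefficients are hypothetical:

```python
# Illustrative only: synthetic stand-in for a GBRT-based attribution step.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical drivers: temperature, relative humidity, wind speed
X = rng.normal(size=(n, 3))
# Synthetic O3 signal dominated by temperature (+) and humidity (-)
o3 = 40 + 8 * X[:, 0] - 6 * X[:, 1] + rng.normal(scale=2, size=n)

model = GradientBoostingRegressor(random_state=0).fit(X, o3)
result = permutation_importance(model, X, o3, n_repeats=10, random_state=0)
# Photochemistry-related drivers (T, RH) should dominate the attribution
print(result.importances_mean)
```

Shuffling one feature at a time and measuring the drop in model skill is what lets the fitted model assign a percentage of O3 variation to each factor.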

Keyword :

Attribution analysis; Gradient boosting regression trees; Surface ozone concentration; Tropical cyclone

Cite:


GB/T 7714: Wang, Keyue, Zhao, Rui, Wu, Qunyong, et al. Responses of surface ozone under the tropical cyclone circulations: Case studies from Fujian Province, China [J]. ATMOSPHERIC POLLUTION RESEARCH, 2025, 16 (1).
MLA: Wang, Keyue, et al. "Responses of surface ozone under the tropical cyclone circulations: Case studies from Fujian Province, China." ATMOSPHERIC POLLUTION RESEARCH 16.1 (2025).
APA: Wang, Keyue, Zhao, Rui, Wu, Qunyong, Li, Jun, Wang, Hong, Lin, Han. Responses of surface ozone under the tropical cyclone circulations: Case studies from Fujian Province, China. ATMOSPHERIC POLLUTION RESEARCH, 2025, 16 (1).

Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy Scopus
Journal Article | 2025 | IEEE Geoscience and Remote Sensing Letters

Abstract :

Semantic change detection (SCD) in high-resolution (HR) remote sensing images faces two issues: (1) an isolated network branch for binary change detection (BCD) within a multi-task architecture results in suboptimal SCD performance; (2) false alarms or missed detections are caused by illumination differences or seasonal transitions. To address these issues, this study proposes a bi-temporal binary change enhancement network (Bi-BCENet). Specifically, we introduce a binary change enhancement (BCE) strategy based on multi-network joint learning to achieve superior SCD by improving change-area prediction. Within the network's reasoning process, we develop a cross-attention fusion module (CAFM) to enhance global similarity modeling via cross-network prompt fusion, and we employ a cosine similarity-based auxiliary loss to optimize the semantic consistency of non-change areas. Experiments on the SECOND and CINA-FX datasets demonstrate that Bi-BCENet outperforms representative SCD networks, achieving 62.08% and 84.95% in F_SCD and 66.88% and 83.10% in mIoU_SCD, respectively. Ablation analysis validates Bi-BCENet's effectiveness in reducing false alarms and missed detections in SCD results. Moreover, for the specific SCD of cropland, Bi-BCENet shows strong potential in single-to-multi SCD.
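The cosine similarity-based auxiliary loss for non-change semantic consistency can be sketched as follows; this is a plausible NumPy reading of the idea, not Bi-BCENet's actual implementation:

```python
# Illustrative only: auxiliary loss encouraging semantic consistency of
# bi-temporal features over pixels labelled as non-change.
import numpy as np

def nonchange_consistency_loss(feat_t1, feat_t2, nonchange_mask):
    """Mean (1 - cosine similarity) between per-pixel feature vectors of the
    two dates, restricted to the non-change mask."""
    f1 = feat_t1[nonchange_mask]             # (k, C) selected pixels
    f2 = feat_t2[nonchange_mask]
    num = (f1 * f2).sum(axis=1)
    den = np.linalg.norm(f1, axis=1) * np.linalg.norm(f2, axis=1) + 1e-8
    return float(np.mean(1.0 - num / den))

feat = np.random.default_rng(1).normal(size=(8, 8, 16))  # H x W x C features
mask = np.zeros((8, 8), dtype=bool)
mask[:4] = True                              # top half marked non-change
loss = nonchange_consistency_loss(feat, feat, mask)
print(loss)  # ~0: identical bi-temporal features are perfectly consistent
```

Minimizing this term pushes the two dates' features toward agreement wherever the binary change branch says nothing changed.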

Keyword :

binary change detection; High-resolution remote sensing; joint learning; semantic change detection

Cite:


GB/T 7714: Lin, H., Wang, X., Wu, Q., et al. Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy [J]. IEEE Geoscience and Remote Sensing Letters, 2025.
MLA: Lin, H., et al. "Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy." IEEE Geoscience and Remote Sensing Letters (2025).
APA: Lin, H., Wang, X., Wu, Q., Li, M., Yang, Z., Lou, K. Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy. IEEE Geoscience and Remote Sensing Letters, 2025.

CLANN: Cloud amount neural network for estimating 3D cloud from geostationary satellite imager SCIE
Journal Article | 2025, 318 | REMOTE SENSING OF ENVIRONMENT

Abstract :

Accurate information on the vertical structure of cloud amount is crucial for weather monitoring and understanding climate systems. Active satellite sensors can provide three-dimensional (3D) cloud structure but with limited geographical coverage, while passive satellite sensors offer expanded observational coverage but limited capability for profiling clouds. Combining active and passive satellite observations with atmospheric reanalysis data, this study proposes a machine learning approach (CLANN, CLoud Amount Neural Network) to construct 3D cloud amounts across the passive observational coverage. Independent validation is conducted for cloud amount estimates derived from the combined data of the Advanced Geostationary Radiation Imager (AGRI) onboard Fengyun-4A and ERA5, using the CALIPSO/CALIOP product as reference. The results indicate notable correlations (Pearson's r = 0.73). The cloud-amount-weighted height showed high consistency in height positioning between CLANN estimates and CALIOP data, with an RMSE of 1.88 km and a Pearson's r of 0.92. Key features such as water vapor band brightness temperature and upper-layer temperature significantly enhanced model accuracy, as revealed by permutation importance analysis. Sensitivity tests highlighted the critical role of the 1.375 μm band in cirrus altitude detection, justifying the model's reliance on daytime observations. Additionally, the 3D statistical results from CLANN for 2019 reveal seasonal variations in cloud distribution, further demonstrating its value for climate analysis.
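The cloud-amount-weighted height used for validation can be computed as below, assuming the standard weighted-mean definition (the paper's exact formulation may differ):

```python
# Cloud-amount-weighted height of a vertical profile:
# H_w = sum(CA_i * h_i) / sum(CA_i)
import numpy as np

def cloud_amount_weighted_height(cloud_amount, heights_km):
    ca = np.asarray(cloud_amount, dtype=float)
    h = np.asarray(heights_km, dtype=float)
    return float((ca * h).sum() / ca.sum())

# Profile with most cloud near 8 km
print(cloud_amount_weighted_height([0.1, 0.7, 0.2], [2.0, 8.0, 12.0]))  # 8.2
```

Comparing this single scalar per profile against the CALIOP reference is what yields the reported 1.88 km RMSE style of height-positioning check.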

Keyword :

3D cloud structure; Advanced geostationary imager; Cloud amount estimation; Cloud seasonal variation; Neural network

Cite:


GB/T 7714: Lin, Han, Li, Jun, Min, Min, et al. CLANN: Cloud amount neural network for estimating 3D cloud from geostationary satellite imager [J]. REMOTE SENSING OF ENVIRONMENT, 2025, 318.
MLA: Lin, Han, et al. "CLANN: Cloud amount neural network for estimating 3D cloud from geostationary satellite imager." REMOTE SENSING OF ENVIRONMENT 318 (2025).
APA: Lin, Han, Li, Jun, Min, Min, Zhang, Feng, Wang, Keyue, Wu, Qunyong. CLANN: Cloud amount neural network for estimating 3D cloud from geostationary satellite imager. REMOTE SENSING OF ENVIRONMENT, 2025, 318.

MSTDFGRN: A Multi-view Spatio-Temporal Dynamic Fusion Graph Recurrent Network for traffic flow prediction SCIE
Journal Article | 2025, 123 | COMPUTERS & ELECTRICAL ENGINEERING
WoS CC Cited Count: 2

Abstract :

Traffic prediction is an important component of smart city construction in the new era. Precise traffic flow prediction faces significant challenges due to spatial heterogeneity, dynamic correlations, and uncertainty. Most existing methods learn from a single spatial or temporal perspective, or at best combine the two in a limited dual-perspective manner, which limits their ability to capture complex spatio-temporal relationships. In this paper, we propose a novel Multi-view Spatio-Temporal Dynamic Fusion Graph Recurrent Network (MSTDFGRN) to address these limitations. The core idea is to learn dynamic spatial dependencies alongside both short- and long-term temporal patterns through multi-view learning. First, we introduce a multi-view spatial convolution module that dynamically fuses static and adaptive graphs in multiple subspaces to learn the intrinsic and latent spatial dependencies of nodes. Simultaneously, in the temporal view, we design both short-range and long-range recurrent networks to aggregate spatial domain knowledge of nodes at multiple granularities and capture forward and backward temporal dependencies. Furthermore, we design a spatio-temporal attention model that applies an attention mechanism to each node, capturing global spatio-temporal dependencies. Comprehensive experiments on four real traffic flow datasets demonstrate MSTDFGRN's excellent predictive accuracy. Specifically, compared to the Spatial-Temporal Graph Attention Gated Recurrent Transformer Network model, our method reduces MAE by 4.69% on the PeMS08 dataset.
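The fusion of static and adaptive graphs in a graph-convolution step can be sketched as below; the mixing weight `alpha` and the embedding-based adaptive graph are illustrative assumptions, not MSTDFGRN's exact design:

```python
# Illustrative only: one propagation step fusing a static and an adaptive graph.
import numpy as np

def gcn_step(X, A_static, E, alpha=0.5):
    """Row-normalized graph convolution over a blend of a static adjacency
    matrix and an adaptive graph built from node embeddings E (hypothetical)."""
    A_adapt = np.maximum(E @ E.T, 0.0)       # ReLU of embedding similarity
    A = alpha * A_static + (1 - alpha) * A_adapt
    A = A + np.eye(len(A))                   # add self-loops
    D_inv = np.diag(1.0 / A.sum(axis=1))     # degree normalization
    return D_inv @ A @ X                     # aggregate neighbor features

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                  # 5 nodes, 4 features each
A_static = (rng.random((5, 5)) > 0.6).astype(float)
E = rng.normal(size=(5, 3))                  # learnable embeddings in practice
out = gcn_step(X, A_static, E)
print(out.shape)  # (5, 4)
```

The adaptive term lets the model discover dependencies between nodes that the fixed road-network graph does not encode.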

Keyword :

Graph Convolutional Network; Multi-view learning; Spatio-temporal dependencies; Traffic flow prediction

Cite:


GB/T 7714: Yang, Shiyu, Wu, Qunyong, Wang, Yuhang, et al. MSTDFGRN: A Multi-view Spatio-Temporal Dynamic Fusion Graph Recurrent Network for traffic flow prediction [J]. COMPUTERS & ELECTRICAL ENGINEERING, 2025, 123.
MLA: Yang, Shiyu, et al. "MSTDFGRN: A Multi-view Spatio-Temporal Dynamic Fusion Graph Recurrent Network for traffic flow prediction." COMPUTERS & ELECTRICAL ENGINEERING 123 (2025).
APA: Yang, Shiyu, Wu, Qunyong, Wang, Yuhang, Zhou, Zhan. MSTDFGRN: A Multi-view Spatio-Temporal Dynamic Fusion Graph Recurrent Network for traffic flow prediction. COMPUTERS & ELECTRICAL ENGINEERING, 2025, 123.

Quantifying centrality using a novel flow-based measure: Implications for sustainable urban development SSCI
Journal Article | 2025, 116 | COMPUTERS ENVIRONMENT AND URBAN SYSTEMS
WoS CC Cited Count: 2

Abstract :

The flow of essential elements such as people, goods, and information through complex networks has become a critical factor in shaping urban dynamics and regional development. Quantifying location centrality plays an indispensable role not only in urban infrastructure planning but also in national central city planning. Two vital aspects should be considered for central nodes in flow-based complex networks: their impact on adjacent nodes and the diversity of the nodes they affect. In this paper, we present a centrality measure (C-index) that accounts for flow volume and flow direction, offering a high degree of interpretability. We applied the C-index to four public weighted complex networks, demonstrating that our method outperforms classical methods. Furthermore, we validated the effectiveness and advantages of the C-index in quantifying location centrality in both inter-city and intra-city population mobility networks. Centrality findings from the perspective of population mobility can reinforce guidelines for understanding national central cities and the polycentric structure of cities, thereby facilitating policy-making for sustainable urban development.
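A toy measure in the same spirit, combining a node's total flow volume with the diversity of its flow partners, illustrates why both aspects matter; this is not the paper's C-index, just a sketch:

```python
# Illustrative only: volume x partner-diversity centrality on a flow matrix.
import numpy as np

def flow_centrality(F):
    """F[i, j] is the flow volume from node i to node j. Score each node by
    total throughput weighted by the Shannon entropy of its flow partners."""
    n = len(F)
    scores = np.zeros(n)
    for i in range(n):
        flows = np.concatenate([F[i, :], F[:, i]])  # out-flows and in-flows
        total = flows.sum()
        if total == 0:
            continue
        p = flows[flows > 0] / total
        diversity = -(p * np.log(p)).sum()          # entropy over partners
        scores[i] = total * diversity
    return scores

# Hub 0 spreads flow over three partners; node 4 has one strong link of the
# same total volume but zero partner diversity.
F = np.zeros((5, 5))
F[0, 1:4] = 10.0
F[4, 3] = 30.0
c = flow_centrality(F)
print(c.argmax())  # 0: the diversified hub ranks highest
```

With equal total volume, the node touching many distinct partners scores higher, which is the intuition behind weighting flow by the diversity of affected nodes.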

Keyword :

Centrality measure; Interactions between locations; Population mobility networks; Sustainable urban development

Cite:


GB/T 7714: Yin, Yanzhong, Wu, Qunyong, Zhao, Zhiyuan, et al. Quantifying centrality using a novel flow-based measure: Implications for sustainable urban development [J]. COMPUTERS ENVIRONMENT AND URBAN SYSTEMS, 2025, 116.
MLA: Yin, Yanzhong, et al. "Quantifying centrality using a novel flow-based measure: Implications for sustainable urban development." COMPUTERS ENVIRONMENT AND URBAN SYSTEMS 116 (2025).
APA: Yin, Yanzhong, Wu, Qunyong, Zhao, Zhiyuan, Chen, Xuanyu. Quantifying centrality using a novel flow-based measure: Implications for sustainable urban development. COMPUTERS ENVIRONMENT AND URBAN SYSTEMS, 2025, 116.

SDSINet: A spatiotemporal dual-scale interaction network for traffic prediction SCIE
Journal Article | 2025, 173 | APPLIED SOFT COMPUTING
WoS CC Cited Count: 1

Abstract :

Accurate traffic forecasting is essential for smart city development. However, existing spatiotemporal modeling methods often face significant challenges, including limitations in handling complex temporal dependencies, capturing multiscale spatial relationships, and modeling the interaction between temporal and spatial features. These challenges arise from the reliance on extended historical data, fixed adjacency matrices, and the lack of dynamic spatiotemporal interaction modeling. To address these issues, we propose the Spatiotemporal Dual-Scale Interaction Network (SDSINet). SDSINet introduces an implicit temporal information enhancement method that embeds temporal identity information into feature representations, reducing computational overhead and improving the modeling of global temporal features. Additionally, SDSINet integrates a dynamic multiscale spatial modeling approach that combines adaptive and scale-specific graphs, enabling the model to capture both local and global spatial dependencies. Furthermore, SDSINet incorporates a dual-scale spatiotemporal interaction learning framework that captures short-term and long-term temporal dependencies as well as multiscale spatial correlations. Extensive experiments on real-world datasets for traffic flow (PeMS04), speed (PeMSD7(M)), and demand (NYCBike Drop-off/Pick-up) demonstrate that SDSINet outperforms existing state-of-the-art methods in prediction accuracy and computational efficiency. Notably, SDSINet achieves a 14.03% reduction in MAE on the NYCBike Drop-off dataset compared to AFDGCN, setting a new benchmark for traffic forecasting.

Keyword :

Graph convolutional network; Interactive learning; Spatiotemporal dependencies; Traffic prediction

Cite:


GB/T 7714: Yang, Shiyu, Wu, Qunyong. SDSINet: A spatiotemporal dual-scale interaction network for traffic prediction [J]. APPLIED SOFT COMPUTING, 2025, 173.
MLA: Yang, Shiyu, and Qunyong Wu. "SDSINet: A spatiotemporal dual-scale interaction network for traffic prediction." APPLIED SOFT COMPUTING 173 (2025).
APA: Yang, Shiyu, Wu, Qunyong. SDSINet: A spatiotemporal dual-scale interaction network for traffic prediction. APPLIED SOFT COMPUTING, 2025, 173.

Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting SCIE
Journal Article | 2025, 12 (11), 15057-15072 | IEEE INTERNET OF THINGS JOURNAL

Abstract :

Accurate traffic forecasting is one of the key applications within Internet of Things (IoT)-based intelligent transportation systems (ITS), playing a vital role in enhancing traffic quality, optimizing public transportation, and planning infrastructure. However, existing spatial-temporal methods encounter two primary limitations: 1) they have difficulty differentiating samples over time and often ignore dependencies among road network nodes at different time scales and 2) they are limited in capturing dynamic spatial correlations with predefined and adaptive graphs. To overcome these limitations, we introduce a novel temporal identity interaction dynamic graph convolutional network (TIIDGCN) for traffic forecasting. The central concept involves assigning temporal identity features to raw data and extracting distinctive, representative spatial-temporal features through multiscale interactive learning. Specifically, we design a multiscale interactive model incorporating both spatial and temporal components. This network aims to explore spatial-temporal patterns at various scales from macro to micro, facilitating their mutual enhancement through positive feedback mechanisms. For the spatial component, we design a new dynamic graph learning method to depict the changing dependencies among nodes. We conduct comprehensive experiments using four real-world traffic datasets (PeMS04/07/08 and NYCTaxi Drop-off/Pick-up). Specifically, TIIDGCN achieves a 16.46% reduction in mean absolute error compared to the Spatial-Temporal Graph Attention Gated Recurrent Transformer Network model on the PeMS08 dataset.

Keyword :

Adaptation models; Correlation; Data models; Dictionaries; Feature extraction; Forecasting; Graph convolutional network (GCN); Internet of Things; multiscale interaction; Roads; Time series analysis; traffic forecasting; Training

Cite:


GB/T 7714: Yang, Shiyu, Wu, Qunyong, Li, Mengmeng, et al. Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting [J]. IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (11): 15057-15072.
MLA: Yang, Shiyu, et al. "Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting." IEEE INTERNET OF THINGS JOURNAL 12.11 (2025): 15057-15072.
APA: Yang, Shiyu, Wu, Qunyong, Li, Mengmeng, Sun, Yu. Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting. IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (11), 15057-15072.

Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images SCIE
Journal Article | 2025, 18, 976-994 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING
WoS CC Cited Count: 2

Abstract :

Building type information indicates the functional properties of buildings and plays a crucial role in smart city development and urban socioeconomic activities. Existing methods for classifying building types often struggle to accurately distinguish between building types while maintaining well-delineated boundaries, especially in complex urban environments. This study introduces a novel framework, the CNN-Transformer cross-attention feature fusion network (CTCFNet), for building type classification from very high resolution remote sensing images. CTCFNet integrates convolutional neural networks (CNNs) and Transformers using an interactive cross-encoder fusion module that enhances semantic feature learning and improves classification accuracy in complex scenarios. We develop an adaptive collaboration optimization module that applies human visual attention mechanisms to enhance the feature representation of building types and boundaries simultaneously. To address the scarcity of datasets for building type classification, we create two new datasets, the urban building type (UBT) dataset and the town building type (TBT) dataset, for model evaluation. Extensive experiments on these datasets demonstrate that CTCFNet outperforms popular CNNs, Transformers, and dual-encoder methods in identifying building types across various regions, achieving the highest mean intersection over union of 78.20% and 77.11%, F1 scores of 86.83% and 88.22%, and overall accuracy of 95.07% and 95.73% on the UBT and TBT datasets, respectively. We conclude that CTCFNet effectively addresses the challenges of high interclass similarity and intraclass inconsistency in complex scenes, yielding results with well-delineated building boundaries and accurate building types.
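The cross-encoder idea, one branch's tokens attending to the other's, reduces to standard cross-attention; below is a minimal single-head NumPy sketch (a generic illustration, not CTCFNet's actual module):

```python
# Illustrative only: generic single-head cross-attention between two branches.
import numpy as np

def cross_attention(Q, K, V):
    """Queries from one encoder branch attend over keys/values of the other."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)     # softmax over the other branch
    return w @ V                             # weighted mix of the other branch

rng = np.random.default_rng(0)
cnn_tokens = rng.normal(size=(6, 8))         # e.g. CNN-branch tokens
vit_tokens = rng.normal(size=(10, 8))        # e.g. Transformer-branch tokens
fused = cross_attention(cnn_tokens, vit_tokens, vit_tokens)
print(fused.shape)  # (6, 8)
```

Each CNN token is replaced by a convex combination of Transformer tokens, which is how local texture features can be enriched with global context (and vice versa in the symmetric direction).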

Keyword :

Accuracy; Architecture; Buildings; Building type classification; CNN-transformer networks; cross-encoder; Earth; Feature extraction; feature interaction; Optimization; Remote sensing; Semantics; Transformers; very high resolution remote sensing; Visualization

Cite:


GB/T 7714: Zhang, Shaofeng, Li, Mengmeng, Zhao, Wufan, et al. Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images [J]. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18: 976-994.
MLA: Zhang, Shaofeng, et al. "Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images." IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 18 (2025): 976-994.
APA: Zhang, Shaofeng, Li, Mengmeng, Zhao, Wufan, Wang, Xiaoqin, Wu, Qunyong. Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18, 976-994.

AUHF-DETR: A Lightweight Transformer with Spatial Attention and Wavelet Convolution for Embedded UAV Small Object Detection SCIE
Journal Article | 2025, 17 (11) | REMOTE SENSING
WoS CC Cited Count: 1

Abstract :

Real-time object detection on embedded unmanned aerial vehicles (UAVs) is crucial for emergency rescue, autonomous driving, and target tracking applications. However, UAVs' hardware limitations create a conflict between model size and detection accuracy. Moreover, challenges such as complex backgrounds from the UAV's perspective, severe occlusion, densely packed small targets, and uneven lighting conditions complicate real-time detection on embedded UAVs. To tackle these challenges, we propose AUHF-DETR, an embedded detection model derived from RT-DETR. In the backbone, we introduce a novel WTC-AdaResNet paradigm that utilizes reversible connections to decouple small-object features. We further replace the original global attention mechanism with the PSA module to strengthen inter-feature relationships within each ROI, thereby resolving the deployment challenges posed by RT-DETR's complex token computations. In the encoder, we introduce a BDFPN for multi-scale feature fusion, effectively mitigating the small-object detection difficulties caused by the baseline's Hungarian assignment. Extensive experiments on the public VisDrone2019, HIT-UAV, and CARPK datasets demonstrate that, compared with RT-DETR-r18, AUHF-DETR achieves a 2.1% increase in AP_s on VisDrone2019, reduces the parameter count by 49.0%, and attains 68 FPS on an AGX Xavier, satisfying the real-time requirements for small-object detection on embedded UAVs.

Keyword :

AUHF-DETR; embedded UAV real-time detection; object detection; UAV images

Cite:


GB/T 7714: Guo, Hengyu, Wu, Qunyong, Wang, Yuhang. AUHF-DETR: A Lightweight Transformer with Spatial Attention and Wavelet Convolution for Embedded UAV Small Object Detection [J]. REMOTE SENSING, 2025, 17 (11).
MLA: Guo, Hengyu, et al. "AUHF-DETR: A Lightweight Transformer with Spatial Attention and Wavelet Convolution for Embedded UAV Small Object Detection." REMOTE SENSING 17.11 (2025).
APA: Guo, Hengyu, Wu, Qunyong, Wang, Yuhang. AUHF-DETR: A Lightweight Transformer with Spatial Attention and Wavelet Convolution for Embedded UAV Small Object Detection. REMOTE SENSING, 2025, 17 (11).

MSHF-YOLO: Cotton growth detection algorithm integrated multi-semantic and high-frequency features SCIE
Journal Article | 2025, 167 | DIGITAL SIGNAL PROCESSING

Abstract :

Accurate monitoring of cotton growth is essential for precision agriculture. However, existing deep learning-based object detection models often underperform in complex field environments due to challenges such as occlusion and low contrast. To address these limitations, we propose MSHF-YOLO, an improved detection framework based on YOLOv8. The model incorporates a Multi-Semantic Spatial and Channel Attention (MSCA) module in the backbone to enhance feature representation. Additionally, we replace traditional upsampling and downsampling operations in the neck with DySample and Adaptive Wavelet Down (AWD) modules to preserve high-frequency information. A High-frequency boost (HB) module is further introduced in the detection head to enhance detail sensitivity. Experimental results demonstrate that MSHF-YOLO achieves mAP@0.5 of 86.0% and mAP@0.75 of 68.2%, outperforming the baseline by 5.5% and 3.5%, respectively, while reducing model size by 12.5%. These results highlight the model's effectiveness and potential for robust cotton growth monitoring.
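For reference, the mAP@0.5 and mAP@0.75 scores reported above are mean average precision at IoU cutoffs of 0.5 and 0.75; a minimal IoU computation for axis-aligned boxes:

```python
# Minimal IoU for axis-aligned boxes in [x1, y1, x2, y2] form; a detection
# counts as a true positive at mAP@0.5 when IoU with a ground-truth box > 0.5.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 squares overlapping in a unit square: intersection 1, union 7
print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.142857
```

The stricter 0.75 threshold rewards tighter localization, which is why the two metrics move by different amounts.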

Keyword :

Cotton growth; Image recognition; Wavelet transform; YOLO

Cite:


GB/T 7714: Luo, Jiahuan, Wu, Qunyong, Wang, Yuhang, et al. MSHF-YOLO: Cotton growth detection algorithm integrated multi-semantic and high-frequency features [J]. DIGITAL SIGNAL PROCESSING, 2025, 167.
MLA: Luo, Jiahuan, et al. "MSHF-YOLO: Cotton growth detection algorithm integrated multi-semantic and high-frequency features." DIGITAL SIGNAL PROCESSING 167 (2025).
APA: Luo, Jiahuan, Wu, Qunyong, Wang, Yuhang, Zhou, Zhan, Zhuo, Zihao, Guo, Hengyu. MSHF-YOLO: Cotton growth detection algorithm integrated multi-semantic and high-frequency features. DIGITAL SIGNAL PROCESSING, 2025, 167.

Address: FZU Library, No. 2 Xuyuan Road, Fuzhou, Fujian, PRC (post code 350116). Contact: 0591-22865326.
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1