Publication Search

Query:

Scholar name: 陈羽中

Results: 19 pages

Expressway Network Congestion Monitoring for Secure Distributed Data Sharing (面向分布式数据安全共享的高速公路路网拥堵监测)
Journal Article | 2025, 41 (1), 11-20 | 福建师范大学学报(自然科学版)

Abstract:

Monitoring road conditions across expressway networks with artificial intelligence has become a research hotspot; however, data silos and privacy protection remain key challenges for intelligent decision-making over such networks. To enable secure sharing of distributed data alongside intelligent decision-making, and taking congestion as a case study, this paper proposes a federated-learning-based strategy for monitoring road congestion across an expressway network. Using real-time camera data, a congestion monitoring model based on road-interval optimization is built within a federated learning decision-making architecture that computes directly over homomorphically encrypted (ciphertext-state) data. Results show that the approach effectively monitors expressway congestion while guaranteeing secure sharing of the distributed data.
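
The ciphertext-state aggregation idea can be made concrete with a minimal sketch of additively homomorphic federated averaging. The python-paillier (phe) library, the toy three-parameter model, and the three camera sites below are illustrative assumptions, not the paper's implementation:

```python
# Each site trains locally and uploads only Paillier-encrypted weights;
# the aggregator averages ciphertexts without seeing any plaintext.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

local_weights = [            # hypothetical per-site model weights
    [0.12, -0.40, 0.75],     # camera site A
    [0.10, -0.38, 0.70],     # camera site B
    [0.15, -0.45, 0.80],     # camera site C
]

# Sites encrypt before upload.
uploads = [[public_key.encrypt(w) for w in site] for site in local_weights]

# Paillier is additively homomorphic, so ciphertexts sum directly.
encrypted_sum = [sum(column) for column in zip(*uploads)]

# Only the key holder decrypts; dividing yields the federated average.
average = [private_key.decrypt(s) / len(uploads) for s in encrypted_sum]
print(average)  # ~ [0.1233, -0.41, 0.75]
```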

Keywords:

homomorphic encryption (同态加密); secure data sharing (数据安全共享); intelligent decision-making (智能决策); federated learning (联邦学习); road congestion status (道路拥堵状态); expressway road network (高速公路路网)

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 李林锋 , 陈羽中 , 姚毅楠 et al. 面向分布式数据安全共享的高速公路路网拥堵监测 [J]. | 福建师范大学学报(自然科学版) , 2025 , 41 (1) : 11-20 .
MLA 李林锋 et al. "面向分布式数据安全共享的高速公路路网拥堵监测" . | 福建师范大学学报(自然科学版) 41 . 1 (2025) : 11-20 .
APA 李林锋 , 陈羽中 , 姚毅楠 , 邵伟杰 . 面向分布式数据安全共享的高速公路路网拥堵监测 . | 福建师范大学学报(自然科学版) , 2025 , 41 (1) , 11-20 .
Export to NoteExpress RIS BibTex

Version:

面向分布式数据安全共享的高速公路路网拥堵监测
Journal Article | 2025, 41 (01), 11-20 | 福建师范大学学报(自然科学版)
Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement SCIE
Journal Article | 2025, 11, 732-747 | IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING

Abstract:

It is a challenging task to obtain high-quality images in low-light scenarios. While existing low-light image enhancement methods learn the mapping from low-light to clear images, such a straightforward approach lacks the targeted design for real-world scenarios, hampering their practical utility. As a result, issues such as overexposure and color distortion are likely to arise when processing images in uneven luminance or extreme darkness. To address these issues, we propose an adaptive luminance enhancement and high-fidelity color correction network (LCNet), which adopts a strategy of enhancing luminance first and then correcting color. Specifically, in the adaptive luminance enhancement stage, we design a multi-stage dual attention residual module (MDARM), which incorporates parallel spatial and channel attention mechanisms within residual blocks. This module extracts luminance prior from the low-light image to adaptively enhance luminance, while suppressing overexposure in areas with sufficient luminance. In the high-fidelity color correction stage, we design a progressive multi-scale feature fusion module (PMFFM) that combines progressively stage-wise multi-scale feature fusion with long/short skip connections, enabling thorough interaction between features at different scales across stages. This module extracts and fuses color features with varying receptive fields to ensure accurate and consistent color correction. Furthermore, we introduce a multi-color-space loss to effectively constrain the color correction. These two stages together produce high-quality images with appropriate luminance and high-fidelity color. Extensive experiments on both low-level and high-level tasks demonstrate that our LCNet outperforms state-of-the-art methods and achieves superior performance for low-light image enhancement in real-world scenarios.
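
As a rough illustration of the "parallel spatial and channel attention mechanisms within residual blocks" that the MDARM description mentions, here is a self-contained PyTorch residual block; the layer sizes, kernel choices, and product-based fusion are assumptions, not the released LCNet code:

```python
import torch
import torch.nn as nn

class DualAttentionResBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: squeeze spatial dims, reweight each channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single map saying where to attend.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        # Parallel branches fused by element-wise product.
        feat = feat * self.channel_att(feat) * self.spatial_att(feat)
        return x + feat  # residual connection

x = torch.randn(1, 32, 64, 64)
print(DualAttentionResBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```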

Keywords:

adaptive luminance enhancement; Distortion; Feature extraction; high-fidelity color correction; Histograms; Image color analysis; Image enhancement; Lighting; Low-light image enhancement; luminance prior; Reflectivity; Signal to noise ratio; Switches; Transformers

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Niu, Yuzhen , Li, Fusheng , Li, Yuezhou et al. Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement [J]. | IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING , 2025 , 11 : 732-747 .
MLA Niu, Yuzhen et al. "Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement" . | IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 11 (2025) : 732-747 .
APA Niu, Yuzhen , Li, Fusheng , Li, Yuezhou , Chen, Siling , Chen, Yuzhong . Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement . | IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING , 2025 , 11 , 732-747 .
Export to NoteExpress RIS BibTex

Version:

Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement EI
Journal Article | 2025, 11, 732-747 | IEEE Transactions on Computational Imaging
Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement Scopus
Journal Article | 2025, 11, 732-747 | IEEE Transactions on Computational Imaging
Hierarchical fine-grained state-aware graph attention network for dialogue state tracking SCIE
Journal Article | 2025, 81 (5) | JOURNAL OF SUPERCOMPUTING

Abstract:

The objective of dialogue state tracking (DST) is to dynamically track information within dialogue states by populating predefined state slots, which enhances the comprehension capabilities of task-oriented dialogue systems in processing user requests. Recently, graph neural networks have become increasingly popular for modeling the relationships among slots as well as between the dialogue and slots. However, these models overlook the relationships between words and phrases in the current turn and the dialogue history. Specific syntactic dependencies (e.g., the object of a preposition) and constituents (e.g., noun phrases) have a higher probability of being the slot values that need to be retrieved at the current moment. Neglecting this syntactic dependency and constituent information may cause the loss of potential candidate slot values, thereby limiting the overall performance of DST models. To address this issue, we propose a Hierarchical Fine-grained State Aware Graph Attention Network for Dialogue State Tracking (HFSG-DST). HFSG-DST exploits syntactic dependency and constituent tree information, such as phrase segmentation and hierarchical structure in dialogue utterances, to construct a relational graph between entities. It then employs a hierarchical graph attention network to facilitate the extraction of fine-grained candidate dialogue state information. Additionally, HFSG-DST designs a Schema-enhanced Dialogue History Selector to select the turn of dialogue history most relevant to the current turn, and incorporates schema description information for dialogue state tracking. Consequently, HFSG-DST is capable of constructing the dependency and constituent trees on noise-free utterances. Experimental results on two public benchmark datasets demonstrate that HFSG-DST outperforms other state-of-the-art models.
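
The core mechanism, attention restricted to parse-graph neighbors, can be sketched with a generic single-head graph-attention layer; the shapes and the 0/1 adjacency built from dependency edges are assumptions, not the HFSG-DST architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGraphAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.att = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, dim) token features; adj: (n, n) 0/1 matrix whose edges
        # come from the parse (e.g., preposition -> object links).
        # adj must include self-loops so every row has a neighbor.
        n = h.size(0)
        z = self.proj(h)
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1), z.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        scores = F.leaky_relu(self.att(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)  # attend only to graph neighbors
        return alpha @ z

h = torch.randn(5, 16)
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0  # one dependency edge between tokens 0 and 1
print(TinyGraphAttention(16)(h, adj).shape)  # torch.Size([5, 16])
```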

Keywords:

Dialogue state tracking; Hierarchical graph attention network; Schema enhancement; Syntactic information

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Liao, Hongmiao , Chen, Yuzhong , Chen, Deming et al. Hierarchical fine-grained state-aware graph attention network for dialogue state tracking [J]. | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (5) .
MLA Liao, Hongmiao et al. "Hierarchical fine-grained state-aware graph attention network for dialogue state tracking" . | JOURNAL OF SUPERCOMPUTING 81 . 5 (2025) .
APA Liao, Hongmiao , Chen, Yuzhong , Chen, Deming , Xu, Junjie , Zhong, Jiayuan , Dong, Chen . Hierarchical fine-grained state-aware graph attention network for dialogue state tracking . | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (5) .
Export to NoteExpress RIS BibTex

Version:

Hierarchical fine-grained state-aware graph attention network for dialogue state tracking EI
Journal Article | 2025, 81 (5) | Journal of Supercomputing
Hierarchical fine-grained state-aware graph attention network for dialogue state tracking Scopus
Journal Article | 2025, 81 (5) | Journal of Supercomputing
Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization SCIE
Journal Article | 2025, 84 (2), 3371-3391 | CMC-COMPUTERS MATERIALS & CONTINUA

Abstract:

The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
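
The "select among scheduling rules" idea reduces to a small value network whose actions are rules rather than raw assignments. The sketch below is a hedged stand-in: the ten rule names, the state dimensionality, and the epsilon-greedy policy are illustrative, not Pathfinder's actual design:

```python
import random
import torch
import torch.nn as nn

# A fixed menu of classic dispatching rules (illustrative names).
RULES = ["FIFO", "LIFO", "SPT", "LPT", "EDD", "SRPT", "LRPT", "MOR", "LOR", "RND"]

class RuleSelector(nn.Module):
    """Maps a state summary (e.g., statistics of the execution-time,
    completion-time, and efficiency matrices) to one Q-value per rule."""
    def __init__(self, state_dim: int, n_rules: int = len(RULES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_rules),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def pick_rule(model: RuleSelector, state: torch.Tensor, eps: float = 0.1) -> str:
    # Epsilon-greedy: usually exploit the best-scoring rule, sometimes explore.
    if random.random() < eps:
        return random.choice(RULES)
    with torch.no_grad():
        return RULES[model(state).argmax().item()]

model = RuleSelector(state_dim=12)
print(pick_rule(model, torch.randn(12)))
```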

Keywords:

customization; deep reinforcement learning; multi-robot system; production scheduling; Smart factory; task allocation

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Lyu, Chenxi , Dong, Chen , Xiong, Qiancheng et al. Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization [J]. | CMC-COMPUTERS MATERIALS & CONTINUA , 2025 , 84 (2) : 3371-3391 .
MLA Lyu, Chenxi et al. "Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization" . | CMC-COMPUTERS MATERIALS & CONTINUA 84 . 2 (2025) : 3371-3391 .
APA Lyu, Chenxi , Dong, Chen , Xiong, Qiancheng , Chen, Yuzhong , Weng, Qian , Chen, Zhenyi . Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization . | CMC-COMPUTERS MATERIALS & CONTINUA , 2025 , 84 (2) , 3371-3391 .
Export to NoteExpress RIS BibTex

Version:

Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization Scopus
Journal Article | 2025, 84 (2), 3371-3391 | Computers, Materials and Continua
Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization EI
Journal Article | 2025, 84 (2), 3371-3391 | Computers, Materials and Continua
Skeleton-Boundary-Guided Network for Camouflaged Object Detection SCIE
Journal Article | 2025, 21 (3) | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS

Abstract:

Camouflaged object detection (COD) aims to resolve the tough issue of accurately segmenting objects hidden in their surroundings. However, existing methods suffer from two major problems: the incomplete interior and the inaccurate boundary of the object. To address these difficulties, we propose a three-stage skeleton-boundary-guided network (SBGNet) for the COD task. Specifically, we design a novel skeleton-boundary label to be complementary to the typical pixel-wise mask annotation, emphasizing the interior skeleton and the boundary of the camouflaged object. Furthermore, the proposed feature guidance module (FGM) leverages the skeleton-boundary feature to guide the model to focus on both the interior and the boundary of the camouflaged object. Besides, we design a bidirectional feature flow path with the information interaction module (IIM) to propagate and integrate semantic and texture information. Finally, we propose the dual feature distillation module (DFDM) to progressively refine the segmentation results in a fine-grained manner. Comprehensive experiments demonstrate that our SBGNet outperforms 20 state-of-the-art methods on three benchmarks in both qualitative and quantitative comparisons. CCS Concepts: • Computing methodologies → Scene understanding.
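
One plausible way to derive a skeleton-boundary style label from an ordinary pixel-wise mask is with standard morphology, sketched here with scikit-image; the paper's exact label construction may differ:

```python
import numpy as np
from skimage.morphology import skeletonize, binary_erosion

def skeleton_boundary_label(mask: np.ndarray) -> np.ndarray:
    """mask: HxW bool array. Returns 1 on the interior skeleton and on
    the object boundary, 0 elsewhere."""
    mask = mask.astype(bool)
    skeleton = skeletonize(mask)              # medial interior structure
    boundary = mask & ~binary_erosion(mask)   # one-pixel rim of the object
    return (skeleton | boundary).astype(np.uint8)

demo = np.zeros((9, 9), dtype=bool)
demo[2:7, 2:7] = True
print(skeleton_boundary_label(demo))
```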

Keywords:

Bidirectional feature flow path; Camouflaged object detection; Feature distillation; Skeleton-boundary guidance

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Niu, Yuzhen , Xu, Yeyuan , Li, Yuezhou et al. Skeleton-Boundary-Guided Network for Camouflaged Object Detection [J]. | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2025 , 21 (3) .
MLA Niu, Yuzhen et al. "Skeleton-Boundary-Guided Network for Camouflaged Object Detection" . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 21 . 3 (2025) .
APA Niu, Yuzhen , Xu, Yeyuan , Li, Yuezhou , Zhang, Jiabang , Chen, Yuzhong . Skeleton-Boundary-Guided Network for Camouflaged Object Detection . | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS , 2025 , 21 (3) .
Export to NoteExpress RIS BibTex

Version:

Skeleton-Boundary-Guided Network for Camouflaged Object Detection Scopus
Journal Article | 2025, 21 (3) | ACM Transactions on Multimedia Computing, Communications and Applications
Skeleton-Boundary-Guided Network for Camouflaged Object Detection EI
Journal Article | 2025, 21 (3) | ACM Transactions on Multimedia Computing, Communications and Applications
A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving EI
Journal Article | 2025, 37 (3), 1651-1672 | Neural Computing and Applications

Abstract:

Math word problem (MWP) solving represents a critical research area within reading comprehension, where accurate comprehension of the math problem text is crucial for generating math expressions. However, current approaches still grapple with unresolved challenges: grasping the sensitivity of math problem text, delineating the distinct roles of various clause types, and enhancing numerical representation. To address these challenges, this paper proposes a Numerical Magnitude Aware Multi-Channel Hierarchical Encoding Network (NMA-MHEA) for math expression generation. Firstly, NMA-MHEA implements a multi-channel hierarchical context encoding module to learn context representations in three different channels: an intra-clause channel, an inter-clause channel, and a context-question interaction channel. NMA-MHEA constructs hierarchical constituent-dependency graphs for different levels of sentences and employs a Hierarchical Graph Attention Neural Network (HGAT) to learn syntactic and semantic information within these graphs at the intra-clause and inter-clause channels. NMA-MHEA then refines context clauses using question information at the context-question interaction channel. Secondly, NMA-MHEA designs a number encoding module to enhance the relative magnitude information among numerical values and the type information of numerical values. Experimental results on two public benchmark datasets demonstrate that NMA-MHEA outperforms other state-of-the-art models.
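
The number encoding idea, making relative magnitude and number type explicit, can be illustrated with a simple featurization; the specific features below are assumptions, not NMA-MHEA's module:

```python
import math
import torch

def number_features(values: list[float]) -> torch.Tensor:
    """One row per number in the problem text: signed log magnitude,
    relative rank among the problem's numbers, and an is-integer flag."""
    ranks = {v: r for r, v in enumerate(sorted(set(values)))}
    rows = []
    for v in values:
        rows.append([
            math.copysign(math.log1p(abs(v)), v),  # compressed magnitude
            ranks[v] / max(len(ranks) - 1, 1),     # rank scaled into [0, 1]
            float(v == int(v)),                    # type: integer vs. decimal
        ])
    return torch.tensor(rows)

print(number_features([3.0, 12.0, 0.5]))
```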

Keywords:

Benchmarking; Encoding (symbols); Graph algorithms; Graphic methods; Graph neural networks; Network coding; Network theory (graphs); Semantics; Syntactics; Word processing

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Xu, Junjie , Chen, Yuzhong , Xiao, Lingsheng et al. A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving [J]. | Neural Computing and Applications , 2025 , 37 (3) : 1651-1672 .
MLA Xu, Junjie et al. "A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving" . | Neural Computing and Applications 37 . 3 (2025) : 1651-1672 .
APA Xu, Junjie , Chen, Yuzhong , Xiao, Lingsheng , Liao, Hongmiao , Zhong, Jiayuan , Dong, Chen . A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving . | Neural Computing and Applications , 2025 , 37 (3) , 1651-1672 .
Export to NoteExpress RIS BibTex

Version:

A numerical magnitude aware multi-channel hierarchical encoding network for math word problem solving Scopus
Journal Article | 2024, 37 (3), 1651-1672 | Neural Computing and Applications
Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis SCIE
Journal Article | 2025, 81 (1) | JOURNAL OF SUPERCOMPUTING
WoS CC Cited Count: 1

Abstract:

Aspect-level multimodal sentiment analysis aims to ascertain the sentiment polarity of a given aspect from a text review and its accompanying image. Despite substantial progress made by existing research, aspect-level multimodal sentiment analysis still faces several challenges: (1) Inconsistency in feature granularity between the text and image modalities poses difficulties in capturing corresponding visual representations of aspect words. This inconsistency may introduce irrelevant or redundant information, thereby causing noise and interference in sentiment analysis. (2) Traditional aspect-level sentiment analysis predominantly relies on the fusion of semantic and syntactic information to determine the sentiment polarity of a given aspect. However, introducing the image modality necessitates addressing the semantic gap in jointly understanding sentiment features in different modalities. To address these challenges, a multi-granularity visual-textual feature fusion model (MG-VTFM) is proposed to enable deep sentiment interactions among semantic, syntactic, and image information. First, the model introduces a multi-granularity hierarchical graph attention network that controls the granularity of semantic units interacting with images through a constituent tree. This network extracts image sentiment information relevant to the specific granularity, reduces noise from images and ensures sentiment relevance in single-granularity cross-modal interactions. Building upon this, a multilayered graph attention module is employed to accomplish multi-granularity sentiment fusion, ranging from fine to coarse. Furthermore, a progressive multimodal attention fusion mechanism is introduced to maximize the extraction of abstract sentiment information from images. Lastly, a mapping mechanism is proposed to align cross-modal information based on aspect words, unifying semantic spaces across different modalities. Our model demonstrates excellent overall performance on two datasets.
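
The single-granularity cross-modal interaction can be pictured as textual units querying image patches so each unit gathers only the visual evidence relevant to it. The sketch below is generic scaled dot-product attention under assumed shapes, not the MG-VTFM modules:

```python
import torch
import torch.nn.functional as F

def text_to_image_attention(text: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
    # text: (n_units, d) features of semantic units; patches: (n_patches, d).
    scores = text @ patches.T / patches.size(-1) ** 0.5
    alpha = F.softmax(scores, dim=-1)  # per-unit attention over patches
    return text + alpha @ patches      # units enriched with visual cues

units = torch.randn(6, 32)     # e.g., one row per phrase at some granularity
patches = torch.randn(49, 32)  # e.g., a 7x7 grid of image patch features
print(text_to_image_attention(units, patches).shape)  # torch.Size([6, 32])
```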

Keywords:

Aspect-level sentiment analysis; Constituent tree; Multi-granularity; Multimodal data; Visual-textual feature fusion

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Chen, Yuzhong , Shi, Liyuan , Lin, Jiali et al. Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis [J]. | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (1) .
MLA Chen, Yuzhong et al. "Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis" . | JOURNAL OF SUPERCOMPUTING 81 . 1 (2025) .
APA Chen, Yuzhong , Shi, Liyuan , Lin, Jiali , Chen, Jingtian , Zhong, Jiayuan , Dong, Chen . Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis . | JOURNAL OF SUPERCOMPUTING , 2025 , 81 (1) .
Export to NoteExpress RIS BibTex

Version:

Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis Scopus
Journal Article | 2025, 81 (1) | Journal of Supercomputing
Multi-granularity visual-textual jointly modeling for aspect-level multimodal sentiment analysis EI
Journal Article | 2025, 81 (1) | Journal of Supercomputing
Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features SCIE
Journal Article | 2025, 57 (2) | NEURAL PROCESSING LETTERS

Abstract:

Multimodal aspect-based sentiment analysis (MABSA) is an essential task in the field of sentiment analysis, which still confronts several critical challenges. The first challenge is how to effectively capture key information within both image and text features to enhance the recognition and understanding of complex sentiment expressions. The second challenge is how to achieve cross-modal alignment of multi-granularity text features and image features. The third challenge is how to narrow the semantic gap between image modality and text modality through effective cross-modal feature fusion. To address these issues, a framework that leverages external knowledge and multi-granularity image and text features (EKMG) is proposed. Firstly, an external knowledge enhanced semantic extraction module is introduced to fuse external knowledge with image features and text features, thereby capturing the key information from texts and images. Secondly, we design a multi-granularity image-text contrastive learning module. This module initially introduces a graph attention network and a novel cross-modal fusion mechanism to align image features and text features at multiple granularities. Additionally, the module employs an image-text contrastive learning strategy to narrow the semantic gap between different modalities. Experimental results on two public benchmark datasets demonstrate that EKMG achieves significant performance improvements compared to state-of-the-art baseline models.
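
The image-text contrastive strategy is commonly realized as a symmetric InfoNCE loss over matched pairs; the sketch below assumes that standard formulation, with the temperature and batch pairing as illustrative choices:

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img: torch.Tensor, txt: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    # img, txt: (batch, d); row i of each is a matched image-text pair.
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = img @ txt.T / temperature
    targets = torch.arange(img.size(0))
    # Symmetric: pull matched pairs together in both directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = image_text_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(float(loss))
```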

Keywords:

Contrastive learning; Cross-modal fusion; External knowledge; Multi-granularity; Multimodal aspect-based sentiment analysis

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Liu, Zhanghui , Lin, Jiali , Chen, Yuzhong et al. Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features [J]. | NEURAL PROCESSING LETTERS , 2025 , 57 (2) .
MLA Liu, Zhanghui et al. "Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features" . | NEURAL PROCESSING LETTERS 57 . 2 (2025) .
APA Liu, Zhanghui , Lin, Jiali , Chen, Yuzhong , Dong, Yu . Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features . | NEURAL PROCESSING LETTERS , 2025 , 57 (2) .
Export to NoteExpress RIS BibTex

Version:

Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features Scopus
Journal Article | 2025, 57 (2) | Neural Processing Letters
Multimodal Aspect-Based Sentiment Analysis with External Knowledge and Multi-granularity Image-Text Features EI
Journal Article | 2025, 57 (2) | Neural Processing Letters
Collaboratively enhanced and integrated detail-context information for low-light image enhancement SCIE
Journal Article | 2025, 162 | PATTERN RECOGNITION

Abstract:

Low-light image enhancement (LLIE) is a challenging task, due to the multiple degradation problems involved, such as low brightness, color distortion, heavy noise, and detail degradation. Existing deep learning-based LLIE methods mainly use encoder-decoder networks or full-resolution networks, which excel at extracting context or detail information, respectively. Since detail and context information are both required for LLIE, existing methods cannot solve all the degradation problems. To solve the above problem, we propose an LLIE method based on collaboratively enhanced and integrated detail-context information (CoEIDC). Specifically, we propose a full-resolution network with two collaborative subnetworks, namely the detail extraction and enhancement subnetwork (DE2-Net) and context extraction and enhancement subnetwork (CE2-Net). CE2-Net extracts context information from the features of DE2-Net at different stages through large receptive field convolutions. Moreover, a collaborative attention module (CAM) and a detail-context integration module are proposed to enhance and integrate detail and context information. CAM is reused to enhance the detail features from multi-receptive fields and the context features from multiple stages. Extensive experimental results demonstrate that our method outperforms the state-of-the-art LLIE methods, and is applicable to other image enhancement tasks, such as underwater image enhancement.
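
The detail/context split can be pictured as a full-resolution branch with small receptive fields next to a dilated branch with large ones, fused afterwards. The toy fusion below is a sketch under assumed widths, not the DE2-Net/CE2-Net subnetworks:

```python
import torch
import torch.nn as nn

class DetailContextFusion(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.detail = nn.Conv2d(3, ch, 3, padding=1)               # fine detail
        self.context = nn.Conv2d(3, ch, 3, padding=4, dilation=4)  # wide view
        self.fuse = nn.Conv2d(2 * ch, 3, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = torch.relu(self.detail(x))
        c = torch.relu(self.context(x))
        return self.fuse(torch.cat([d, c], dim=1))  # integrate both cues

print(DetailContextFusion()(torch.randn(1, 3, 64, 64)).shape)
```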

Keywords:

Collaborative enhancement and integration; Color/brightness correction; Detail reconstruction; Low-light image enhancement

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Niu, Yuzhen , Lin, Xiaofeng , Xu, Huangbiao et al. Collaboratively enhanced and integrated detail-context information for low-light image enhancement [J]. | PATTERN RECOGNITION , 2025 , 162 .
MLA Niu, Yuzhen et al. "Collaboratively enhanced and integrated detail-context information for low-light image enhancement" . | PATTERN RECOGNITION 162 (2025) .
APA Niu, Yuzhen , Lin, Xiaofeng , Xu, Huangbiao , Xu, Rui , Chen, Yuzhong . Collaboratively enhanced and integrated detail-context information for low-light image enhancement . | PATTERN RECOGNITION , 2025 , 162 .
Export to NoteExpress RIS BibTex

Version:

Collaboratively enhanced and integrated detail-context information for low-light image enhancement EI
Journal Article | 2025, 162 | Pattern Recognition
Collaboratively enhanced and integrated detail-context information for low-light image enhancement Scopus
Journal Article | 2025, 162 | Pattern Recognition
High-order diversity feature learning for pedestrian attribute recognition SCIE
Journal Article | 2025, 188 | NEURAL NETWORKS

Abstract:

Pedestrian attribute recognition (PAR) involves accurately identifying multiple attributes present in pedestrian images. There are two main approaches to PAR: part-based methods and attention-based methods. The former rely on existing segmentation or region detection methods to localize body parts and learn attribute-specific features from the corresponding regions, so their performance depends heavily on the accuracy of body region localization. The latter adopt embedded attention modules or transformer attention to exploit detailed features. However, while such attention can focus on certain body regions, it is often coarse, failing to capture fine-grained details, and the learned features may be interfered with by irrelevant information. Meanwhile, these methods overlook global contextual information. This work argues for replacing coarse attention with detailed attention and integrating it with global contextual features from ViT to jointly represent attribute-specific regions. To tackle this issue, we propose a High-order Diversity Feature Learning (HDFL) method for PAR based on ViT. We utilize a polynomial predictor to design an Attribute-specific Detailed Feature Exploration (ADFE) module, which constructs high-order statistics and captures more fine-grained features. The ADFE module is parameter-friendly and offers flexibility in deciding whether to use it during the inference phase. A Soft-redundancy Perception Loss (SPLoss) is proposed to adaptively measure the redundancy between features of different orders, which promotes diverse characterization of features. Experiments on several PAR datasets show that our method achieves new state-of-the-art (SOTA) performance. On the most challenging PA100K dataset, our method outperforms the previous SOTA by 1.69% and achieves the highest mA of 84.92%.
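
"High-order statistics" on top of first-order features can be illustrated by concatenating mean pooling with second-order (outer-product) pooling; this generic construction conveys the idea, not the ADFE module's polynomial predictor:

```python
import torch

def first_and_second_order(feats: torch.Tensor) -> torch.Tensor:
    # feats: (n_tokens, d) patch/token features for one pedestrian image.
    first = feats.mean(dim=0)                    # (d,) first-order statistic
    second = (feats.T @ feats) / feats.size(0)   # (d, d) covariance-like term
    return torch.cat([first, second.flatten()])  # joint descriptor

f = torch.randn(49, 16)
print(first_and_second_order(f).shape)  # torch.Size([272]) = 16 + 16*16
```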

Keywords:

High-order diversity feature learning; Pedestrian attribute recognition; Soft-redundancy perception loss

Cite:

Copy a citation from the list, or export it to your reference manager.

GB/T 7714 Wu, Junyi , Huang, Yan , Gao, Min et al. High-order diversity feature learning for pedestrian attribute recognition [J]. | NEURAL NETWORKS , 2025 , 188 .
MLA Wu, Junyi et al. "High-order diversity feature learning for pedestrian attribute recognition" . | NEURAL NETWORKS 188 (2025) .
APA Wu, Junyi , Huang, Yan , Gao, Min , Niu, Yuzhen , Chen, Yuzhong , Wu, Qiang . High-order diversity feature learning for pedestrian attribute recognition . | NEURAL NETWORKS , 2025 , 188 .
Export to NoteExpress RIS BibTex

Version:

High-order diversity feature learning for pedestrian attribute recognition Scopus
Journal Article | 2025, 188 | Neural Networks
High-order diversity feature learning for pedestrian attribute recognition EI
Journal Article | 2025, 188 | Neural Networks