WEAL: Weight-wise Ensemble Adversarial Learning with Gradient Manipulation EI
Journal Article | 2025, 309 | Knowledge-Based Systems

Abstract :

Adversarial training has emerged as a straightforward and effective defense against adversarial attacks, with ensemble adversarial learning (EAL) being a feasible branch to enhance the adversarial robustness of deep neural networks (DNNs). However, existing EAL methods either incur massive costs in multi-model ensemble training, leading to low adaptability, or overlook gradient conflicts in single-model self-ensemble learning, resulting in only limited improvement in robustness. To address these issues, in this paper we first analyze the importance of weight state information during network training, which plays a key role in ensemble learning, especially in adversarial settings. Then, we present a new gradient manipulation strategy that samples from a normal distribution to construct consensual gradients, alleviating gradient conflicts. Building on these, we propose a novel Weight-wise Ensemble Adversarial Learning (WEAL) method, which makes full use of the states of the weights and mitigates conflicts among gradients. It can greatly improve the adversarial robustness of the target model at a reasonable computational cost. Extensive experiments on benchmark datasets and models verify the effectiveness of the proposed WEAL: in defending against white-box and black-box adversarial attacks, it increases adversarial accuracy by an average of 5.4% and 4.2% over representative adversarial training methods, and by an average of 2.8% and 1.8% over the state-of-the-art ensemble adversarial learning method.
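The gradient manipulation idea above, sampling from a normal distribution to obtain a consensual update when gradients conflict, can be sketched as follows. This is a minimal NumPy illustration of one plausible reading of the strategy, not the paper's exact algorithm; the mixing-weight distribution and the rejection loop are assumptions:

```python
import numpy as np

def consensual_gradient(g1, g2, rng=None):
    """Blend two (possibly conflicting) gradients into one update.

    If the gradients agree (non-negative dot product), average them.
    Otherwise, rejection-sample a mixing weight from a normal
    distribution until the blend has a non-negative dot product with
    both inputs, i.e. a "consensual" direction. Hypothetical sketch.
    """
    rng = rng or np.random.default_rng(0)
    if g1 @ g2 >= 0:
        return (g1 + g2) / 2.0
    for _ in range(100):
        w = float(np.clip(rng.normal(loc=0.5, scale=0.2), 0.0, 1.0))
        g = w * g1 + (1 - w) * g2
        if g @ g1 >= 0 and g @ g2 >= 0:
            return g
    return (g1 + g2) / 2.0  # fallback: plain average

# Orthogonal gradients do not conflict, so they are simply averaged.
print(consensual_gradient(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # [0.5 0.5]
```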

Keyword :

Adversarial machine learning; Contrastive Learning; Federated learning; Generative adversarial networks

Cite:


GB/T 7714: Chen, Chuanxi, Wang, Jiaming, Tang, Yunbo, et al. WEAL: Weight-wise Ensemble Adversarial Learning with Gradient Manipulation [J]. Knowledge-Based Systems, 2025, 309.
MLA: Chen, Chuanxi, et al. "WEAL: Weight-wise Ensemble Adversarial Learning with Gradient Manipulation." Knowledge-Based Systems 309 (2025).
APA: Chen, Chuanxi, Wang, Jiaming, Tang, Yunbo, Fang, He, & Xu, Li. WEAL: Weight-wise Ensemble Adversarial Learning with Gradient Manipulation. Knowledge-Based Systems, 2025, 309.

V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges SCIE
Journal Article | 2024, 172 | NEURAL NETWORKS

Abstract :

Interictal epileptiform discharges (IED), large intermittent electrophysiological events, are associated with various severe brain disorders. Automated IED detection has long been a challenging task, and mainstream methods largely focus on singling out IEDs from background activity by waveform, leaving normal sharp transients/artifacts with similar waveforms almost unattended. An open issue remains: accurately detecting IED events that directly reflect abnormalities in brain electrophysiological activity while minimizing interference from irrelevant sharp transients that merely share similar waveforms. This study proposes a dual-view learning framework (namely V2IED) to detect IED events from multi-channel EEG by aggregating features from two phases: (1) Morphological Feature Learning: directly treating the EEG as a sequence with multiple channels, a 1-D CNN (Convolutional Neural Network) is applied to explicitly learn deep morphological features; and (2) Spatial Feature Learning: viewing the EEG as a 3-D tensor embedding the channel topology, a CNN captures the spatial features at each sampling point, followed by an LSTM (Long Short-Term Memory) network that learns the evolution of these features. Experimental results on a public EEG dataset against state-of-the-art counterparts indicate that: (1) compared with the existing optimal models, V2IED achieves a larger area under the receiver operating characteristic (ROC) curve in distinguishing IEDs from normal sharp transients, with a 5.25% improvement in accuracy; (2) the introduction of spatial features improves accuracy by 2.4%; and (3) V2IED also performs excellently in distinguishing IEDs from background signals, especially benign variants.
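The two "views" described above amount to reshaping the same recording: a (channels × time) matrix for the 1-D CNN branch, and a 2-D electrode frame per sampling point for the spatial CNN + LSTM branch. A small NumPy sketch; the 4×4 grid and identity channel ordering are illustrative assumptions, since a real montage needs an explicit channel-to-grid mapping:

```python
import numpy as np

# Toy EEG: 16 channels x 256 samples (random stand-in values).
eeg = np.random.default_rng(0).normal(size=(16, 256))

# View 1 (morphological): a multi-channel 1-D sequence; a 1-D CNN
# slides over the time axis of this (channels, time) matrix.
seq_view = eeg

# View 2 (spatial): one 2-D frame per sampling point, embedding the
# channels into a hypothetical 4x4 electrode grid; a 2-D CNN + LSTM
# processes these frames in temporal order.
spatial_view = eeg.T.reshape(256, 4, 4)
```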

Keyword :

Convolutional neural network; Dual-view learning; Electroencephalography; Interictal epileptiform discharge; Long short-term memories

Cite:


GB/T 7714: Ming, Zhekai, Chen, Dan, Gao, Tengfei, et al. V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges [J]. NEURAL NETWORKS, 2024, 172.
MLA: Ming, Zhekai, et al. "V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges." NEURAL NETWORKS 172 (2024).
APA: Ming, Zhekai, Chen, Dan, Gao, Tengfei, Tang, Yunbo, Tu, Weiping, & Chen, Jingying. V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges. NEURAL NETWORKS, 2024, 172.

Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition SCIE
Journal Article | 2024, 294 | KNOWLEDGE-BASED SYSTEMS

Abstract :

The capability of constructing structural features of an EEG stream has long been pursued to track events and abnormalities correlating multiple data domains at variant time scales, so that their evolution and/or causality may be better interpreted in connection with the EEG monitoring scenario. However, how to adapt to the increasingly uncertain complexity of an up-scaling EEG tensor remains an open issue in the derivation of the feature factors. This study develops a framework of Component-Increased Dynamic Tensor Decomposition (namely CIDTD) for this task, which centers on an algorithm fusing existing feature factors with the features of the increment at each examination point: (1) complementing missing feature factors (increase in rank), and (2) alternately optimizing the temporal factor matrix and the non-temporal factor matrices based on the increment regulated by the factor matrices of the other modes. Benchmark experiments validate CIDTD's ability to handle variable-length incremental windows within a single trial. In terms of performance, the results demonstrate that CIDTD outperforms its counterparts, achieving up to a 6.51% improvement in fitness and a faster average runtime per examination point than state-of-the-art algorithms. A case study on the CHB-MIT dataset shows that the feature factors constructed by CIDTD characterize epileptic EEG dynamics better than its counterparts do, in particular with emerging abnormalities well captured by new feature factors in an up-scaled examination. Overall, the proposed solution excels in (1) supporting general streaming tensor decomposition when the rank has to increase and (2) capturing abnormalities in EEG streams with high accuracy, robustness, and interpretability.
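The fitness figure reported above is the standard relative fit of a CP (CANDECOMP/PARAFAC) model; a component-increased method raises the rank when this fit degrades as the stream grows. A minimal NumPy sketch of the reconstruction and fit computation (the grow-rank policy itself is not reproduced here):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from CP factors A (I x R), B (J x R),
    C (K x R): X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def fitness(X, A, B, C):
    """Relative fit, at most 1; 1 means exact reconstruction.
    A component-increased scheme would grow the rank R when this
    value drops below a threshold (that policy is omitted here)."""
    return 1.0 - np.linalg.norm(X - cp_reconstruct(A, B, C)) / np.linalg.norm(X)
```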

Keyword :

EEG stream; Factorization; Streaming tensor; Structural feature construction; Variant time scale

Cite:


GB/T 7714: Wei, Su, Tang, Yunbo, Gao, Tengfei, et al. Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 294.
MLA: Wei, Su, et al. "Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition." KNOWLEDGE-BASED SYSTEMS 294 (2024).
APA: Wei, Su, Tang, Yunbo, Gao, Tengfei, Wang, Yaodong, Wang, Fan, & Cheng, Dan. Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition. KNOWLEDGE-BASED SYSTEMS, 2024, 294.

Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention Scopus
Journal Article | 2024, 28 (11), 1-14 | IEEE Journal of Biomedical and Health Informatics

Abstract :

Brain functional connectivity has been routinely explored to reveal the functional interaction dynamics between brain regions. However, conventional functional connectivity measures rely on deterministic models fixed for all participants, usually demanding application-specific empirical analysis, while deep learning approaches focus on finding discriminative features for state classification and thus have limited capability to capture interpretable functional connectivity characteristics. To address these challenges, this study proposes a self-supervised triplet network with depth-wise attention (TripletNet-DA) to generate the functional connectivity: 1) TripletNet-DA first utilizes channel-wise transformations for temporal data augmentation, where correlated and uncorrelated sample pairs are constructed for self-supervised training; 2) a channel encoder is designed with a convolutional network to extract deep features, while a similarity estimator is employed to generate the similarity pairs and the functional connectivity representations, with prominent patterns emphasized via a depth-wise attention mechanism; 3) TripletNet-DA applies a triplet loss with an anchor-negative similarity penalty for model training, where the similarities of uncorrelated sample pairs are minimized to enhance the model's learning capability.
Experimental results on pathological EEG datasets (Autism Spectrum Disorder, Major Depressive Disorder) indicate that 1) TripletNet-DA outperforms state-of-the-art counterparts in both ASD discrimination and MDD classification across various frequency bands: the connectivity features in the beta and gamma bands respectively achieve accuracies of 97.05% and 98.32% for ASD discrimination, 89.88% and 91.80% for MDD classification in the eyes-closed condition, and 90.90% and 92.26% for MDD classification in the eyes-open condition; and 2) TripletNet-DA can uncover significant differences in functional connectivity between ASD EEG and typically developing (TD) EEG, with the prominent connectivity links in accordance with the empirical findings that the frontal lobe demonstrates more connectivity links and significant frontal-temporal connectivity occurs in the beta band, thus providing potential biomarkers for clinical ASD analysis.
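The training objective, a triplet loss augmented with an anchor-negative similarity penalty, can be sketched in NumPy as follows. The penalty form and its weight `lam` are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def triplet_loss_with_an_penalty(a, p, n, margin=1.0, lam=0.1):
    """Triplet loss plus a penalty on positive anchor-negative cosine
    similarity (a hypothetical reading of the paper's 'anchor-negative
    similarity penalty'; `lam` is an assumed weight)."""
    d_ap = np.linalg.norm(a - p)           # anchor-positive distance
    d_an = np.linalg.norm(a - n)           # anchor-negative distance
    base = max(d_ap - d_an + margin, 0.0)  # standard triplet hinge
    cos_an = (a @ n) / (np.linalg.norm(a) * np.linalg.norm(n))
    return base + lam * max(cos_an, 0.0)   # push similarity toward zero
```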

Keyword :

Analytical models; Brain Functional Connectivity; Brain modeling; Correlation; Depth-wise Attention; Electroencephalography; Self-supervised Learning; Task analysis; Time series analysis; Training; Triplet Network

Cite:


GB/T 7714: Tang, Y., Huang, W., Liu, R., et al. Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention [J]. IEEE Journal of Biomedical and Health Informatics, 2024, 28 (11): 1-14.
MLA: Tang, Y., et al. "Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention." IEEE Journal of Biomedical and Health Informatics 28.11 (2024): 1-14.
APA: Tang, Y., Huang, W., Liu, R., & Yu, Y. Learning Interpretable Brain Functional Connectivity Via Self-Supervised Triplet Network With Depth-Wise Attention. IEEE Journal of Biomedical and Health Informatics, 2024, 28 (11), 1-14.

EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal SCIE
Journal Article | 2023, 27 (3), 1283-1294 | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
WoS CC Cited Count: 4

Abstract :

Artifact removal has been an open critical issue for decades in tasks centered on EEG analysis. Recent deep learning methods mark a leap forward from conventional signal processing routines; however, in general they still suffer from insufficient capability 1) to capture potential temporal dependencies embedded in EEG and 2) to adapt to scenarios without a priori knowledge of artifacts. This study proposes an approach (namely DuoCL) to deep artifact removal with a dual-scale CNN (Convolutional Neural Network)-LSTM (Long Short-Term Memory) model, operating on the raw EEG in three phases: 1) Morphological Feature Extraction: a dual-branch CNN utilizes convolution kernels of two different scales to learn morphological features (individual sample); 2) Feature Reinforcement: the dual-scale features are then reinforced with temporal dependencies (inter-sample) captured by the LSTM; and 3) EEG Reconstruction: the resulting feature vectors are finally aggregated to reconstruct the artifact-free EEG via a terminal fully connected layer. Extensive experiments have been performed to compare DuoCL with six state-of-the-art counterparts (e.g., 1D-ResCNN and NovelCNN). DuoCL reconstructs more accurate waveforms and achieves the highest SNR and correlation (CC) as well as the lowest error (RRMSEt and RRMSEf). In particular, DuoCL holds potential for high-quality removal of unknown and hybrid artifacts.
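The dual-branch, dual-kernel-scale idea can be illustrated with fixed averaging kernels standing in for learned convolutions; the kernel widths 3 and 7 are arbitrary choices for this sketch:

```python
import numpy as np

def dual_scale_features(x, k_small=3, k_large=7):
    """Stack two smoothed copies of a 1-D signal, one per kernel scale,
    mimicking the dual-branch morphological stage. Moving-average
    kernels are illustrative stand-ins for learned convolutions."""
    def smooth(sig, k):
        # 'same' keeps the output aligned with the input length.
        return np.convolve(sig, np.ones(k) / k, mode='same')
    return np.stack([smooth(x, k_small), smooth(x, k_large)])
```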

Keyword :

artifact removal; CNN; Electroencephalogram (EEG); end-to-end; LSTM

Cite:


GB/T 7714: Gao, Tengfei, Chen, Dan, Tang, Yunbo, et al. EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal [J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (3): 1283-1294.
MLA: Gao, Tengfei, et al. "EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal." IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 27.3 (2023): 1283-1294.
APA: Gao, Tengfei, Chen, Dan, Tang, Yunbo, Ming, Zhekai, & Li, Xiaoli. EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (3), 1283-1294.

Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning SCIE
Journal Article | 2023, 89 | BIOMEDICAL SIGNAL PROCESSING AND CONTROL

Abstract :

Localization of sleep apnea and hypopnea (SAH) events has routinely relied on expert visual inspection of polysomnography (PSG) recordings, a tedious task demanding a high level of professional skill. Automated detection methods have achieved remarkable success, especially with recent advances in machine learning and deep learning. However, a significant challenge remains on the way to clinical practice: how to accurately discriminate SAH events in PSG, with the onset and duration of each? This study develops an object detection framework for accurately identifying the position of SAH segments with varied durations (namely SAH-MOD) in three phases: (1) Dual-modal Feature Learning (DFL; dual-branch 1-D convolutional layers followed by a Concatenate Block): deep features are efficiently learned and then fused from two different respiration-related signals, i.e., nasal airflow and abdominal movement; (2) Feature Map Generation (FMG; cascaded 1-D convolutional layers): feature maps are generated with multi-scale hierarchical features at different depths of the network, catering to the needs of object (SAH event) detection; default anchors associated with the scales and receptive fields are tiled onto the corresponding detection feature maps; and (3) Multi-scale Object Detection (MOD): predictions are made on all available detection layers, with post-processing to accurately capture each SAH event. Experiments have been performed on the dataset of stroke unit recordings for the detection of Obstructive Sleep Apnea Syndrome (OSASUD dataset) comparing SAH-MOD against state-of-the-art counterparts, and the results indicate that: (1) SAH-MOD performs best, with a recall of 81.0% and an F1-score of 71.1%; and (2) it has significant advantages in localizing the onset and duration of each SAH event, with 91.9% of the IoU values between predicted and labeled events falling between 0.6 and 1.0.
Ablation experiments show that introducing dual-modal feature learning and hierarchical feature maps improves recall by 6.9% and 4.1%, respectively.
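The IoU statistic used above to score event localization is the ordinary 1-D interval intersection-over-union between a predicted (onset, offset) pair and a labeled one:

```python
def interval_iou(pred, label):
    """IoU between two 1-D events given as (onset, offset) pairs: the
    overlap length divided by the combined span length."""
    inter = max(0.0, min(pred[1], label[1]) - max(pred[0], label[0]))
    union = (pred[1] - pred[0]) + (label[1] - label[0]) - inter
    return inter / union if union > 0 else 0.0
```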

Keyword :

Dual-modal feature learning; Multi-scale object detection; Obstructive sleep apnea-hypopnea syndrome; Polysomnography; Sleep apnea and hypopnea

Cite:


GB/T 7714: Ji, Yifeng, Chen, Dan, Zuo, Yiping, et al. Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.
MLA: Ji, Yifeng, et al. "Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 89 (2023).
APA: Ji, Yifeng, Chen, Dan, Zuo, Yiping, Gao, Tengfei, & Tang, Yunbo. Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.

Source: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116). Contact: 0591-22865326.
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1