Search Results
V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges SCIE
Journal Article | 2024, 172 | NEURAL NETWORKS

Abstract :

Interictal epileptiform discharges (IEDs), as large intermittent electrophysiological events, are associated with various severe brain disorders. Automated IED detection has long been a challenging task, and mainstream methods largely focus on singling out IEDs from the background from the perspective of waveform, leaving normal sharp transients/artifacts with similar waveforms almost unattended. It remains an open issue to accurately detect IED events that directly reflect abnormalities in brain electrophysiological activity while minimizing interference from irrelevant sharp transients with similar waveforms. This study therefore proposes a dual-view learning framework (namely V2IED) to detect IED events from multi-channel EEG by aggregating features from two phases: (1) Morphological Feature Learning: treating the EEG directly as a multi-channel sequence, a 1D-CNN (Convolutional Neural Network) is applied to explicitly learn deep morphological features; and (2) Spatial Feature Learning: viewing the EEG as a 3D tensor embedding the channel topology, a CNN captures the spatial features at each sampling point, followed by an LSTM (Long Short-Term Memory) that learns the evolution of these features. Experimental results on a public EEG dataset against state-of-the-art counterparts indicate that: (1) compared with the existing optimal models, V2IED achieves a larger area under the receiver operating characteristic (ROC) curve in distinguishing IEDs from normal sharp transients, with a 5.25% improvement in accuracy; (2) the introduction of spatial features improves accuracy by 2.4%; and (3) V2IED also performs excellently in distinguishing IEDs from background signals, especially benign variants.
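
As an illustration of the dual-view idea described above, the following sketch (a non-authoritative approximation, assuming PyTorch, a hypothetical 19-channel montage mapped onto a 5x5 scalp grid, and illustrative layer sizes rather than the paper's configuration) pairs a 1D-CNN morphological branch over the raw channel sequence with a per-sample 2D-CNN plus LSTM spatial branch, then fuses the two views for classification.

```python
# Minimal dual-view sketch in the spirit of V2IED (illustrative sizes only).
import torch
import torch.nn as nn

class DualViewIEDDetector(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        # View 1: morphological features -- 1D CNN over the raw multi-channel sequence.
        self.morph = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # -> (B, 64, 1)
        )
        # View 2: spatial features -- a small 2D CNN per sampling point, then an LSTM
        # over time to track how the spatial pattern evolves.
        self.spatial_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (B*T, 16, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.classifier = nn.Linear(64 + 32, n_classes)

    def forward(self, seq, grid):
        # seq:  (B, n_channels, T)   raw EEG viewed as a channel sequence
        # grid: (B, T, H, W)         the same EEG embedded in the channel topology
        b, t, h, w = grid.shape
        morph_feat = self.morph(seq).squeeze(-1)                 # (B, 64)
        frames = grid.reshape(b * t, 1, h, w)
        spatial = self.spatial_cnn(frames).reshape(b, t, 16)     # (B, T, 16)
        _, (h_n, _) = self.lstm(spatial)
        return self.classifier(torch.cat([morph_feat, h_n[-1]], dim=1))

# Example usage with random data of the assumed shapes.
model = DualViewIEDDetector()
seq = torch.randn(4, 19, 256)
grid = torch.randn(4, 256, 5, 5)
print(model(seq, grid).shape)   # torch.Size([4, 2])
```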

Keyword :

Convolutional neural network; Dual-view learning; Electroencephalography; Interictal epileptiform discharge; Long short-term memories

Cite:


GB/T 7714 Ming, Zhekai, Chen, Dan, Gao, Tengfei, et al. V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges [J]. | NEURAL NETWORKS, 2024, 172.
MLA Ming, Zhekai, et al. "V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges". | NEURAL NETWORKS 172 (2024).
APA Ming, Zhekai, Chen, Dan, Gao, Tengfei, Tang, Yunbo, Tu, Weiping, Chen, Jingying. V2IED: Dual-view learning framework for detecting events of interictal epileptiform discharges. | NEURAL NETWORKS, 2024, 172.

Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition SCIE
Journal Article | 2024, 294 | KNOWLEDGE-BASED SYSTEMS

Abstract :

The capability of constructing structural features of an EEG stream has long been pursued to track events and abnormalities correlating multiple data domains at variant time scales, so that their evolution and/or causality can be better interpreted in connection with EEG monitoring scenarios. However, how to adapt to the increasingly uncertain complexity of an up-scaling EEG tensor remains an open issue in deriving the feature factors. This study therefore develops a framework of Component-Increased Dynamic Tensor Decomposition (namely CIDTD) for this task, which centers on an algorithm fusing the existing feature factors with the features of the increment at each examination point: (1) complementing missing feature factors (increase in rank), and (2) alternately optimizing the temporal factor matrix and the non-temporal factor matrices based on the increment regulated by the factor matrices of the other modes. Benchmark experiments have been conducted to validate CIDTD's ability to handle variable-length incremental windows within a single trial. In terms of performance, the results demonstrate that CIDTD outperforms its counterparts, achieving up to a 6.51% improvement in fitness and faster average runtime per examination point than state-of-the-art algorithms. A case study on the CHB-MIT dataset shows that the feature factors constructed by CIDTD characterize epileptic EEG dynamics better than the counterparts do, in particular with emerging abnormalities well captured by new feature factors in an up-scaled examination. Overall, the proposed solution excels in (1) supporting general streaming tensor decomposition when the rank has to increase and (2) capturing abnormalities in EEG streams with high accuracy, robustness, and interpretability.
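
A minimal streaming-CP sketch in the same spirit is given below, assuming NumPy and a plain CP model: the temporal rows of each incoming window are solved by least squares against fixed non-temporal factors, and the rank grows by one component when the fit degrades. The growth threshold, the residual-based initialization, and the update order are illustrative assumptions, not the paper's exact algorithm (which also alternately refines the non-temporal factors).

```python
# Streaming CP with optional rank increase -- a sketch, not CIDTD itself.
import numpy as np

def design_matrix(A, B):
    """Row (i*J + j) holds the elementwise product A[i, :] * B[j, :]."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def append_window(C, A, B, X_new, grow_tol=0.2):
    """Fold a new (t_new, I, J) window into the running CP model."""
    t_new, I, J = X_new.shape
    D = design_matrix(A, B)                                  # (I*J, R)
    Y = X_new.reshape(t_new, I * J).T                        # (I*J, t_new)
    C_new = np.linalg.lstsq(D, Y, rcond=None)[0].T           # (t_new, R)
    resid = Y - D @ C_new.T
    rel_err = np.linalg.norm(resid) / np.linalg.norm(Y)
    if rel_err > grow_tol:
        # Rank increase: add one component seeded from the residual's
        # leading singular direction, then re-solve the temporal rows.
        u, _, vt = np.linalg.svd(resid.reshape(I, J * t_new), full_matrices=False)
        a_new = u[:, :1]
        b_new = vt[0].reshape(J, t_new).mean(axis=1, keepdims=True)
        A = np.hstack([A, a_new])
        B = np.hstack([B, b_new])
        C = np.hstack([C, np.zeros((C.shape[0], 1))])        # pad old temporal rows
        D = design_matrix(A, B)
        C_new = np.linalg.lstsq(D, Y, rcond=None)[0].T
    return np.vstack([C, C_new]), A, B

# Example: stream three 50-sample windows of a synthetic 8-channel, 6-feature tensor.
rng = np.random.default_rng(0)
R, I, J = 3, 8, 6
A, B = rng.standard_normal((I, R)), rng.standard_normal((J, R))
C = np.zeros((0, R))
for _ in range(3):
    X_win = np.einsum('tr,ir,jr->tij', rng.standard_normal((50, R)), A, B)
    C, A, B = append_window(C, A, B, X_win)
print(C.shape, A.shape, B.shape)
```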

Keyword :

EEG stream; Factorization; Streaming tensor; Structural feature construction; Variant time scale

Cite:


GB/T 7714 Wei, Su, Tang, Yunbo, Gao, Tengfei, et al. Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition [J]. | KNOWLEDGE-BASED SYSTEMS, 2024, 294.
MLA Wei, Su, et al. "Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition". | KNOWLEDGE-BASED SYSTEMS 294 (2024).
APA Wei, Su, Tang, Yunbo, Gao, Tengfei, Wang, Yaodong, Wang, Fan, Cheng, Dan. Scale-variant structural feature construction of EEG stream via component-increased Dynamic Tensor Decomposition. | KNOWLEDGE-BASED SYSTEMS, 2024, 294.

EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal SCIE
Journal Article | 2023, 27 (3), 1283-1294 | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
WoS CC Cited Count: 4

Abstract :

Artifact removal has been an open critical issue for decades in tasks centering on EEG analysis. Recent deep learning methods mark a leap forward from the conventional signal processing routines; however, in general they still suffer from insufficient capabilities 1) to capture potential temporal dependencies embedded in EEG and 2) to adapt to scenarios without a priori knowledge of artifacts. This study proposes an approach to deep artifact removal (namely DuoCL) with a dual-scale CNN (Convolutional Neural Network)-LSTM (Long Short-Term Memory) model, operating on the raw EEG in three phases: 1) Morphological Feature Extraction: a dual-branch CNN utilizes convolution kernels of two different scales to learn morphological features (individual samples); 2) Feature Reinforcement: the dual-scale features are then reinforced with temporal dependencies (inter-sample) captured by the LSTM; and 3) EEG Reconstruction: the resulting feature vectors are finally aggregated to reconstruct the artifact-free EEG via a terminal fully connected layer. Extensive experiments have been performed to compare DuoCL with six state-of-the-art counterparts (e.g., 1D-ResCNN and NovelCNN). DuoCL can reconstruct more accurate waveforms and achieves the highest SNR & correlation (CC) as well as the lowest error (RRMSEt & RRMSEf). In particular, DuoCL holds potential for high-quality removal of unknown and hybrid artifacts.
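
The three phases can be pictured with the following minimal sketch, assuming PyTorch and a single-channel EEG segment; the kernel scales, channel widths, and hidden size are illustrative assumptions rather than the reported DuoCL configuration.

```python
# Dual-scale CNN-LSTM denoiser sketch (illustrative sizes only).
import torch
import torch.nn as nn

class DualScaleDenoiser(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Phase 1: two CNN branches with different kernel scales extract
        # fine- and coarse-grained morphological features per sample.
        self.branch_small = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.branch_large = nn.Conv1d(1, 16, kernel_size=11, padding=5)
        # Phase 2: an LSTM reinforces the fused features with inter-sample
        # temporal dependencies.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # Phase 3: a fully connected layer maps each time step back to an
        # amplitude, reconstructing the artifact-free EEG.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (B, 1, T) noisy EEG segment
        feats = torch.cat([torch.relu(self.branch_small(x)),
                           torch.relu(self.branch_large(x))], dim=1)   # (B, 32, T)
        seq, _ = self.lstm(feats.transpose(1, 2))                      # (B, T, hidden)
        return self.head(seq).transpose(1, 2)                          # (B, 1, T)

# Example: denoise a random 512-sample segment.
model = DualScaleDenoiser()
noisy = torch.randn(2, 1, 512)
print(model(noisy).shape)   # torch.Size([2, 1, 512])
```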

Keyword :

artifact removal; CNN; Electroencephalogram (EEG); end-to-end; LSTM

Cite:


GB/T 7714 Gao, Tengfei, Chen, Dan, Tang, Yunbo, et al. EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal [J]. | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (3): 1283-1294.
MLA Gao, Tengfei, et al. "EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal". | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 27.3 (2023): 1283-1294.
APA Gao, Tengfei, Chen, Dan, Tang, Yunbo, Ming, Zhekai, Li, Xiaoli. EEG Reconstruction With a Dual-Scale CNN-LSTM Model for Deep Artifact Removal. | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (3), 1283-1294.

Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning SCIE
Journal Article | 2023, 89 | BIOMEDICAL SIGNAL PROCESSING AND CONTROL

Abstract :

Localization of sleep apnea and hypopnea (SAH) events has routinely relied on expert visual inspection of polysomnography (PSG) recordings, a tedious task demanding a high level of professional skill. Automated detection methods have achieved remarkable success, especially with recent advances in machine learning and deep learning technologies. However, a significant challenge remains for methods aimed at clinical practice: how to accurately discriminate SAH events in PSG, together with the onset and duration of each? This study develops an object detection framework for accurately identifying the positions of SAH segments with varied durations (namely SAH-MOD) in three phases: (1) Dual-modal Feature Learning (DFL, dual-branch 1-D convolutional layers followed by a Concatenate Block): deep features are efficiently learned and then fused from two different types of respiration-related signals, i.e., nasal airflow and abdominal movement; (2) Feature Map Generation (FMG, cascaded 1-D convolutional layers): feature maps are generated with multi-scale hierarchical features at different depths of the network, catering to the needs of object (SAH event) detection; default anchors associated with the scales and receptive fields are tiled onto the corresponding detection feature maps; and (3) Multi-scale Object Detection (MOD): predictions are then made on all available detection layers, with post-processing to accurately capture each SAH event. Experiments have been performed on the dataset of stroke unit recordings for the detection of Obstructive Sleep Apnea Syndrome (OSASUD dataset) with SAH-MOD against state-of-the-art counterparts, and the results indicate that: (1) SAH-MOD performs the best, with a Recall of 81.0% and an F1-score of 71.1%; and (2) it has significant advantages in localizing the onset and duration of each SAH event, with 91.9% of the IoU values between predicted and labeled events falling between 0.6 and 1.0. Ablation experiments show that the introduction of dual-modal feature learning and hierarchical feature maps improves recall by 6.9% and 4.1%, respectively.
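
A compact sketch of the three phases is shown below, assuming PyTorch and two respiration-related traces; the layer widths, the two detection scales, and the three anchors per position are illustrative assumptions, and anchor decoding with non-maximum suppression is only indicated by a comment.

```python
# Dual-modal, multi-scale 1D detector sketch in the spirit of SAH-MOD.
import torch
import torch.nn as nn

class SAHDetectorSketch(nn.Module):
    def __init__(self, anchors_per_pos=3):
        super().__init__()
        # Dual-modal Feature Learning: one 1D branch per respiration-related
        # signal (nasal airflow, abdominal movement), fused by concatenation.
        self.flow_branch = nn.Sequential(nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU())
        self.abdo_branch = nn.Sequential(nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU())
        # Feature Map Generation: cascaded convolutions yield maps at two
        # depths/scales, onto which default anchors are tiled.
        self.stage1 = nn.Sequential(nn.Conv1d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv1d(64, 64, 3, stride=2, padding=1), nn.ReLU())
        # Multi-scale Object Detection: per-position heads predict, for each
        # anchor, an SAH confidence plus centre/length offsets.
        self.heads = nn.ModuleList([nn.Conv1d(64, anchors_per_pos * 3, 3, padding=1)
                                    for _ in range(2)])
        self.k = anchors_per_pos

    def forward(self, airflow, abdomen):
        # airflow, abdomen: (B, 1, T) respiratory traces
        fused = torch.cat([self.flow_branch(airflow), self.abdo_branch(abdomen)], dim=1)
        maps = []
        x = self.stage1(fused)
        maps.append(x)
        x = self.stage2(x)
        maps.append(x)
        preds = []
        for fmap, head in zip(maps, self.heads):
            out = head(fmap)                                  # (B, k*3, L)
            b, _, L = out.shape
            preds.append(out.view(b, self.k, 3, L).permute(0, 3, 1, 2).reshape(b, L * self.k, 3))
        # (B, total_anchors, [score, centre_offset, length_offset]);
        # anchor decoding and non-maximum suppression would follow here.
        return torch.cat(preds, dim=1)

# Example: a 30 s window sampled at 100 Hz from each modality.
model = SAHDetectorSketch()
print(model(torch.randn(2, 1, 3000), torch.randn(2, 1, 3000)).shape)
```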

Keyword :

Dual-modal feature learning; Multi-scale object detection; Obstructive sleep apnea-hypopnea syndrome; Polysomnography; Sleep apnea and hypopnea

Cite:


GB/T 7714 Ji, Yifeng, Chen, Dan, Zuo, Yiping, et al. Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning [J]. | BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.
MLA Ji, Yifeng, et al. "Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning". | BIOMEDICAL SIGNAL PROCESSING AND CONTROL 89 (2023).
APA Ji, Yifeng, Chen, Dan, Zuo, Yiping, Gao, Tengfei, Tang, Yunbo. Accurate apnea and hypopnea localization in PSG with Multi-scale object detection via Dual-modal Feature Learning. | BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.