Query:
Scholar name: Lian Sheng (连盛)
Abstract :
Semi-supervised medical image segmentation has recently garnered significant interest because it can alleviate the burden of densely annotated data. Substantial advancements have been achieved by integrating consistency-regularization and pseudo-labeling techniques, and the quality of the pseudo-labels is crucial in this regard: unreliable pseudo-labels introduce noise and lead the model to converge to suboptimal solutions. To address this issue, we propose learning from reliable pseudo-labels. In this paper, we tackle two critical questions in learning from reliable pseudo-labels: which pseudo-labels are reliable, and how reliable are they? Specifically, we conduct a comparative analysis of two subnetworks to address both challenges. First, we compare the prediction confidence of the two subnetworks; a higher confidence score indicates a more reliable pseudo-label. Second, we use intra-class similarity to assess how reliable a pseudo-label is: the greater the intra-class similarity of the predicted class, the more reliable the pseudo-label. Each subnetwork then selectively incorporates the knowledge imparted by the other, contingent on the reliability of the pseudo-labels. By reducing the noise introduced by unreliable pseudo-labels, we improve segmentation performance. To demonstrate the superiority of our approach, we conducted an extensive set of experiments on three datasets: Left Atrium, Pancreas-CT and BraTS-2019. The experimental results demonstrate that our approach achieves state-of-the-art performance. Code is available at: https://github.com/Jiawei0o0/mutual-learning-with-reliable-pseudo-labels.
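A minimal PyTorch sketch of the reliability-gated mutual learning described in this abstract; the tensor shapes, the confidence comparison, and the intra-class-similarity weighting are illustrative assumptions rather than the released implementation:

```python
# Sketch: one subnetwork learns from the other's pseudo-labels only where they
# appear reliable. Shapes: feat (B, C, N) voxel features, logits (B, K, N).
import torch
import torch.nn.functional as F

def intra_class_similarity(feat, prob):
    """Per-voxel cosine similarity to the prototype of its predicted class."""
    hard = prob.argmax(dim=1)                                   # (B, N)
    sim = torch.zeros_like(hard, dtype=feat.dtype)
    for k in range(prob.shape[1]):
        mask = (hard == k).unsqueeze(1).float()                 # (B, 1, N)
        denom = mask.sum(dim=2, keepdim=True).clamp(min=1.0)
        proto = (feat * mask).sum(dim=2, keepdim=True) / denom  # (B, C, 1) class prototype
        cos = F.cosine_similarity(feat, proto, dim=1)           # (B, N)
        sim = torch.where(hard == k, cos, sim)
    return sim                                                  # "how reliable are they"

def reliable_pseudo_label_loss(logits_a, logits_b, feat_b):
    """Subnetwork A learns from B where B is more confident, weighted by reliability."""
    prob_a, prob_b = logits_a.softmax(dim=1), logits_b.softmax(dim=1)
    conf_a, conf_b = prob_a.max(dim=1).values, prob_b.max(dim=1).values
    pseudo_b = prob_b.argmax(dim=1)                             # (B, N) pseudo-label from B
    reliable = (conf_b > conf_a).float()                        # "which pseudo-labels are reliable"
    weight = (reliable * intra_class_similarity(feat_b, prob_b).clamp(min=0.0)).detach()
    ce = F.cross_entropy(logits_a, pseudo_b.detach(), reduction="none")  # (B, N)
    return (weight * ce).sum() / weight.sum().clamp(min=1e-6)
```

The symmetric term for the other subnetwork would simply swap the roles of A and B.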
Keyword :
Intra-class similarity; Medical image segmentation; Pseudo-labels; Semi-supervised learning; Uncertainty
Cite:
GB/T 7714 | Su, Jiawei, Luo, Zhiming, Lian, Sheng, et al. Mutual learning with reliable pseudo label for semi-supervised medical image segmentation [J]. MEDICAL IMAGE ANALYSIS, 2024, 94.
MLA | Su, Jiawei, et al. "Mutual learning with reliable pseudo label for semi-supervised medical image segmentation." MEDICAL IMAGE ANALYSIS 94 (2024).
APA | Su, Jiawei, Luo, Zhiming, Lian, Sheng, Lin, Dazhen, Li, Shaozi. Mutual learning with reliable pseudo label for semi-supervised medical image segmentation. MEDICAL IMAGE ANALYSIS, 2024, 94.
Abstract :
Different brain tumor magnetic resonance imaging (MRI) modalities provide diverse tumor-specific information. Previous works have enhanced brain tumor segmentation performance by integrating multiple MRI modalities. However, multi-modal MRI data are often unavailable in clinical practice. An incomplete modality leads to missing tumor-specific information, which degrades the performance of existing models. Various strategies have been proposed to transfer knowledge from a full modality network (teacher) to an incomplete modality one (student) to address this issue. However, they neglect the fact that brain tumor segmentation is a structural prediction problem that requires voxel semantic relations. In this paper, we propose a Reconstruct Incomplete Relation Network (RIRN) that transfers voxel semantic relational knowledge from the teacher to the student. Specifically, we propose two types of voxel relations to incorporate structural knowledge: Class-relative relations (CRR) and Class-agnostic relations (CAR). The CRR groups voxels into different tumor regions and constructs a relation between them. The CAR builds a global relation between all voxel features, complementing the local inter-region relation. Moreover, we use adversarial learning to align the holistic structural prediction between the teacher and the student. Extensive experimentation on both the BraTS 2018 and BraTS 2020 datasets establishes that our method outperforms all state-of-the-art approaches. © 2024 Elsevier Ltd
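A rough PyTorch sketch of the two voxel-relation terms used for distillation; the flattened feature shapes, the cosine affinities, and the L2 matching loss are assumptions made for illustration, not the paper's exact formulation:

```python
# Sketch: the student mimics the full-modality teacher's voxel relations.
# feat: (B, C, N) flattened (and possibly sub-sampled) voxel features.
import torch
import torch.nn.functional as F

def class_agnostic_relation(feat):
    """CAR: global pairwise affinity between all voxel features -> (B, N, N)."""
    f = F.normalize(feat, dim=1)
    return torch.bmm(f.transpose(1, 2), f)

def class_relative_relation(feat, region_mask, num_regions):
    """CRR: affinity between tumor-region prototypes -> (B, R, R).
    region_mask: (B, N) integer region label per voxel."""
    protos = []
    for r in range(num_regions):
        m = (region_mask == r).unsqueeze(1).float()                  # (B, 1, N)
        protos.append((feat * m).sum(dim=2) / m.sum(dim=2).clamp(min=1.0))
    p = F.normalize(torch.stack(protos, dim=1), dim=2)               # (B, R, C)
    return torch.bmm(p, p.transpose(1, 2))

def relation_distill_loss(stu_feat, tea_feat, region_mask, num_regions):
    """Match student relations to the (frozen) teacher's relations."""
    car = F.mse_loss(class_agnostic_relation(stu_feat),
                     class_agnostic_relation(tea_feat).detach())
    crr = F.mse_loss(class_relative_relation(stu_feat, region_mask, num_regions),
                     class_relative_relation(tea_feat, region_mask, num_regions).detach())
    return car + crr
```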
Keyword :
Brain tumor segmentation; Incomplete modalities; Knowledge distillation; Structural relation knowledge
Cite:
GB/T 7714 | Su, J., Luo, Z., Wang, C., et al. Reconstruct incomplete relation for incomplete modality brain tumor segmentation [J]. Neural Networks, 2024, 180.
MLA | Su, J., et al. "Reconstruct incomplete relation for incomplete modality brain tumor segmentation." Neural Networks 180 (2024).
APA | Su, J., Luo, Z., Wang, C., Lian, S., Lin, X., Li, S. Reconstruct incomplete relation for incomplete modality brain tumor segmentation. Neural Networks, 2024, 180.
Abstract :
Automatic multi-organ segmentation in medical images is crucial for many clinical applications. State-of-the-art methods have reported promising results but rely on massive annotated data, which is hard to obtain because of the considerable expertise it requires. In contrast, single-organ datasets are relatively easy to obtain, and many well-annotated ones are publicly available. To this end, this work raises the partially supervised problem: can we use these single-organ datasets to learn a multi-organ segmentation model? In this paper, we propose the Partial- and Mutual-Prior incorporated framework (PRIMP) to learn a robust multi-organ segmentation model by deriving knowledge from single-organ datasets. Unlike existing methods that largely ignore the organs' anatomical prior knowledge, our PRIMP is designed around two key priors shared across different subjects and datasets: (1) partial-prior, each organ has its own character (e.g., size and shape); and (2) mutual-prior, the relative position between different organs follows a comparatively fixed anatomical structure. Specifically, we incorporate the partial-prior of each organ by learning from single-organ statistics, and inject the mutual-prior between organs by learning from multi-organ statistics. By doing so, the model is encouraged to capture the organs' anatomical invariance across different subjects and datasets, which guarantees the anatomical reasonableness of the predictions, narrows the domain gap, and captures spatial information across slices, thereby improving segmentation performance. Experiments on four publicly available datasets (LiTS, Pancreas, KiTS, BTCV) show that our PRIMP improves performance on both multi-organ and single-organ datasets (17.40% and 3.06% above the baseline model in DSC, respectively) and surpasses the comparative approaches.
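A simplified sketch of how such anatomical statistics could be turned into training penalties; the soft volume/centroid estimates, the 2-sigma margins, and the hinge-style losses are assumptions for illustration only, not PRIMP's actual formulation:

```python
# Sketch: penalize predictions whose organ size (partial-prior) or relative
# organ position (mutual-prior) falls outside dataset-level statistics.
# prob: (B, K, N) softmax prediction; coords: (N, 3) normalized voxel coordinates.
import torch

def organ_volume(prob, k):
    return prob[:, k].sum(dim=1)                                      # (B,) soft voxel count

def organ_centroid(prob, coords, k):
    w = prob[:, k]                                                    # (B, N)
    return (w.unsqueeze(-1) * coords).sum(dim=1) / w.sum(dim=1, keepdim=True).clamp(min=1.0)

def partial_prior_loss(prob, k, vol_mean, vol_std):
    """Organ k should keep a plausible size (statistics from single-organ datasets)."""
    dev = (organ_volume(prob, k) - vol_mean).abs()
    return torch.relu(dev - 2.0 * vol_std).mean()

def mutual_prior_loss(prob, coords, i, j, dist_mean, dist_std):
    """Organs i and j should keep a plausible relative distance (multi-organ statistics)."""
    d = (organ_centroid(prob, coords, i) - organ_centroid(prob, coords, j)).norm(dim=1)
    return torch.relu((d - dist_mean).abs() - 2.0 * dist_std).mean()
```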
Keyword :
Anatomical prior; Multi-organ segmentation; Partial supervision
Cite:
GB/T 7714 | Lian, Sheng, Li, Lei, Luo, Zhiming, et al. Learning multi-organ segmentation via partial- and mutual-prior from single-organ datasets [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 80.
MLA | Lian, Sheng, et al. "Learning multi-organ segmentation via partial- and mutual-prior from single-organ datasets." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 80 (2023).
APA | Lian, Sheng, Li, Lei, Luo, Zhiming, Zhong, Zhun, Wang, Beizhan, Li, Shaozi. Learning multi-organ segmentation via partial- and mutual-prior from single-organ datasets. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 80.
Abstract :
Multi-organ segmentation is a critical prerequisite for many clinical applications. Deep learning-based approaches have recently achieved promising results on this task. However, they heavily rely on massive data with multi-organ annotations, which are labor- and expertise-intensive and thus difficult to obtain. In contrast, single-organ datasets are easier to acquire, and many well-annotated ones are publicly available. This leads to the partially labeled issue: how can we learn a unified multi-organ segmentation model from several single-organ datasets? Pseudo-label-based methods and conditional-information-based methods make up the majority of existing solutions, where the former largely depends on the accuracy of the pseudo-labels and the latter has a limited capacity for task-related features. In this paper, we propose the Conditional Dynamic Attention Network (CDANet). Our approach is designed with two key components: (1) a multi-source parameter generator, which fuses conditional and multi-scale information to better distinguish among different tasks, and (2) a dynamic attention module, which promotes attention to task-related features. We have conducted extensive experiments on seven partially labeled, challenging datasets. The results show that our method achieves competitive performance compared with advanced approaches, with an average Dice score of 75.08% and a competitive Hausdorff Distance of 26.31.
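A compact sketch of a conditionally generated (dynamic) segmentation head; the layer sizes, the sigmoid channel gating, and the single dynamic 1x1 convolution are assumptions standing in for CDANet's actual parameter generator and attention module:

```python
# Sketch: a controller turns a task code into the weights of a per-sample 1x1
# conv head, and a task-conditioned gate re-weights the feature channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicHead(nn.Module):
    def __init__(self, feat_ch=64, num_tasks=7):
        super().__init__()
        self.num_tasks = num_tasks
        self.controller = nn.Linear(num_tasks, feat_ch + 1)  # 1x1 conv weights + bias
        self.gate = nn.Linear(num_tasks, feat_ch)            # task-conditioned channel attention

    def forward(self, feat, task_id):
        """feat: (B, C, H, W); task_id: (B,) long indices of the source dataset/task."""
        cond = F.one_hot(task_id, self.num_tasks).float()                 # (B, T)
        feat = feat * torch.sigmoid(self.gate(cond))[:, :, None, None]    # gate channels
        params = self.controller(cond)                                    # (B, C + 1)
        out = []
        for b in range(feat.shape[0]):                                    # per-sample dynamic conv
            w = params[b, :-1].view(1, -1, 1, 1)                          # (1, C, 1, 1)
            out.append(F.conv2d(feat[b:b + 1], w, params[b, -1:]))        # (1, 1, H, W)
        return torch.cat(out, dim=0)                                      # (B, 1, H, W) organ logits
```

Each single-organ dataset would supply its own task_id, so one network can serve all partially labeled datasets.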
Keyword :
dynamic attention; multi-organ segmentation; partial supervision
Cite:
GB/T 7714 | Li, Lei, Lian, Sheng, Lin, Dazhen, et al. Learning multi-organ and tumor segmentation from partially labeled datasets by a conditional dynamic attention network [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023, 36(1).
MLA | Li, Lei, et al. "Learning multi-organ and tumor segmentation from partially labeled datasets by a conditional dynamic attention network." CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE 36.1 (2023).
APA | Li, Lei, Lian, Sheng, Lin, Dazhen, Luo, Zhiming, Wang, Beizhan, Li, Shaozi. Learning multi-organ and tumor segmentation from partially labeled datasets by a conditional dynamic attention network. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2023, 36(1).
Abstract :
Recently, significant progress has been made in consistency-regularization-based semi-supervised medical image segmentation. Typically, a consistency loss is applied to enforce consistent predictions for input images under different perturbations. However, most previous methods miss two key points: (1) only a single weight is used to balance the supervised and unsupervised losses, which fails to account for the variance across samples; and (2) they only force the same pixel to have similar features under different data augmentations, ignoring its relationship with other pixels, which can serve as a more robust supervision signal. To address these issues, in this paper we propose a novel framework with two main components: a Dynamic Weight Sampling (DWS) module and a Class Agnostic Relationship (CAR) module. Specifically, our method contains one shared encoder and two slightly different decoders. Instead of a fixed weight, the DWS dynamically adjusts the weight for each sample based on the discrepancy between the predictions of the two decoders, balancing the supervised and unsupervised losses. A greater discrepancy implies that the sample is more challenging and its prediction less reliable, so a lower weight is assigned to such samples in order to learn from more reliable targets. In addition, to convey the relationship between pixels as supervision, the CAR imposes a relational consistency loss on class-agnostic feature regions. Extensive experiments on the Left Atrium and Pancreas-CT datasets show that our method achieves state-of-the-art results.
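A minimal sketch of the per-sample dynamic weighting; the exponential mapping from discrepancy to weight and the MSE consistency term are assumed forms, not the paper's exact DWS rule:

```python
# Sketch: the more the two decoders disagree on a sample, the smaller the
# unsupervised weight that sample receives. prob_*: (B, K, N) softmax outputs.
import torch
import torch.nn.functional as F

def dynamic_sample_weight(prob_1, prob_2, scale=5.0):
    disc = (prob_1 - prob_2).abs().mean(dim=(1, 2))   # (B,) per-sample discrepancy
    return torch.exp(-scale * disc)                   # (B,) weight in (0, 1]

def weighted_consistency_loss(prob_1, prob_2):
    w = dynamic_sample_weight(prob_1, prob_2).detach()
    per_sample = F.mse_loss(prob_1, prob_2, reduction="none").mean(dim=(1, 2))
    return (w * per_sample).mean()
```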
Keyword :
Class agnostic relationship; Dynamic weight; Semi-supervised segmentation
Cite:
GB/T 7714 | Su, Jiawei, Luo, Zhiming, Lian, Sheng, et al. Consistency learning with dynamic weighting and class-agnostic regularization for semi-supervised medical image segmentation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 90.
MLA | Su, Jiawei, et al. "Consistency learning with dynamic weighting and class-agnostic regularization for semi-supervised medical image segmentation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 90 (2023).
APA | Su, Jiawei, Luo, Zhiming, Lian, Sheng, Lin, Dazhen, Li, Shaozi. Consistency learning with dynamic weighting and class-agnostic regularization for semi-supervised medical image segmentation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 90.
Abstract :
In medical images, the edges of organs are often blurred and unclear, yet existing semi-supervised image segmentation methods rarely model edges explicitly, so most of them produce inaccurate predictions in target edge regions. In this paper, we propose a contour-aware consistency framework for semi-supervised medical image segmentation. The framework consists of a shared encoder, a vanilla primary decoder and a contour-enhanced auxiliary decoder. The contour-enhanced decoder is designed to enhance the features of the target contour region. The predictions from the primary decoder and the auxiliary decoder are combined to create pseudo-labels, enabling unlabeled data to be used for supervision. For regions where the predictions are inconsistent, we propose a self-contrast strategy that further improves performance by reducing the discrepancy between the two decoders for the same pixel. We conducted extensive experiments on three publicly available datasets and verified that our approach outperforms other methods in boundary quality. Specifically, with 5% labeled data on the Left Atrial (LA) dataset, our proposed approach achieved a Boundary IoU 3.76% higher than the state-of-the-art methods. Code is available at https://github.com/SmileJET/CAC4SSL.
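A small sketch of how the two decoders' outputs could be fused into pseudo-labels and how a self-contrast term could act on their disagreement; the averaging fusion and squared-error contrast are assumptions, not the paper's exact losses:

```python
# Sketch: fuse the primary and contour-enhanced decoder predictions into a
# pseudo-label for unlabeled data, and shrink their disagreement regions.
import torch
import torch.nn.functional as F

def dual_decoder_losses(logits_main, logits_contour):
    """logits_*: (B, K, H, W) outputs of the primary and contour-enhanced decoders."""
    prob_m, prob_c = logits_main.softmax(dim=1), logits_contour.softmax(dim=1)
    pseudo = ((prob_m + prob_c) / 2).argmax(dim=1).detach()            # fused pseudo-label (B, H, W)
    pseudo_sup = F.cross_entropy(logits_main, pseudo) + F.cross_entropy(logits_contour, pseudo)
    disagree = (prob_m.argmax(dim=1) != prob_c.argmax(dim=1)).float()  # inconsistent pixels
    self_contrast = (disagree * (prob_m - prob_c).pow(2).sum(dim=1)).mean()
    return pseudo_sup, self_contrast
```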
Keyword :
Medical image segmentation; Mutual learning; Semi-supervised
Cite:
GB/T 7714 | Li, Lei, Lian, Sheng, Luo, Zhiming, et al. Contour-aware consistency for semi-supervised medical image segmentation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.
MLA | Li, Lei, et al. "Contour-aware consistency for semi-supervised medical image segmentation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 89 (2023).
APA | Li, Lei, Lian, Sheng, Luo, Zhiming, Wang, Beizhan, Li, Shaozi. Contour-aware consistency for semi-supervised medical image segmentation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 89.
Abstract :
Few-shot learning can potentially learn target knowledge in extremely low-data regimes. Existing few-shot medical image segmentation methods fail to consider the global anatomical correlation between the support and query sets; they generally adopt a weak one-way information transmission that cannot fully exploit the knowledge needed to segment query data. To address this problem, we propose a novel Symmetrical Supervision network based on traditional two-branch methods. We make two main contributions: (1) a Symmetrical Supervision Mechanism is leveraged to strengthen the supervision of network training; (2) a transformer-based Global Feature Alignment module is introduced to increase the global consistency between the two branches. Experimental results on two challenging datasets (the abdominal segmentation dataset CHAOS and the cardiac segmentation dataset MS-CMRSeg) show remarkable performance compared to competing methods. © 2022 IEEE.
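A rough sketch of the two-way (symmetrical) supervision idea on a prototype-based few-shot segmenter; the cosine-prototype predictor and the back-supervision loss are illustrative assumptions and omit the transformer-based Global Feature Alignment module:

```python
# Sketch: the support prototype segments the query, then the query's own
# prediction builds a prototype that must segment the support image back,
# supervised by the known support mask.
import torch
import torch.nn.functional as F

def masked_prototype(feat, mask):
    """feat: (B, C, N); mask: (B, N) in [0, 1] -> (B, C) foreground prototype."""
    m = mask.unsqueeze(1)
    return (feat * m).sum(dim=2) / m.sum(dim=2).clamp(min=1.0)

def segment_by_prototype(feat, proto, tau=20.0):
    sim = F.cosine_similarity(feat, proto.unsqueeze(-1), dim=1)   # (B, N)
    return torch.sigmoid(tau * sim)                               # foreground probability

def symmetrical_supervision(sup_feat, sup_mask, qry_feat):
    qry_pred = segment_by_prototype(qry_feat, masked_prototype(sup_feat, sup_mask))
    sup_pred = segment_by_prototype(sup_feat, masked_prototype(qry_feat, qry_pred))
    back_loss = F.binary_cross_entropy(sup_pred, sup_mask)        # symmetric supervision signal
    return qry_pred, back_loss
```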
Keyword :
Computer vision; Image segmentation; Medical imaging
Cite:
GB/T 7714 | Niu, Yao, Luo, Zhiming, Lian, Sheng, et al. Symmetrical Supervision with Transformer for Few-shot Medical Image Segmentation [C]. 2022: 1683-1687.
MLA | Niu, Yao, et al. "Symmetrical Supervision with Transformer for Few-shot Medical Image Segmentation." (2022): 1683-1687.
APA | Niu, Yao, Luo, Zhiming, Lian, Sheng, Li, Lei, Li, Shaozi, Song, Haixin. Symmetrical Supervision with Transformer for Few-shot Medical Image Segmentation. (2022): 1683-1687.