Publication Search

Query:

Scholar name: Pan Lin

M2UNet: Multi-Scale Feature Acquisition and Multi-Input Edge Supplement Based on UNet for Efficient Segmentation of Breast Tumor in Ultrasound Images SCIE
Journal Article | 2025, 15 (8) | DIAGNOSTICS

Abstract :

Background/Objectives: The morphological characteristics of breast tumors play a crucial role in the preliminary diagnosis of breast cancer. However, malignant tumors often exhibit rough, irregular edges and unclear boundaries in ultrasound images. Additionally, variations in tumor size, location, and shape further complicate the accurate segmentation of breast tumors from ultrasound images. Methods: To address these difficulties, this paper introduces a breast ultrasound tumor segmentation network comprising a multi-scale feature acquisition (MFA) module and a multi-input edge supplement (MES) module. The MFA module incorporates dilated convolutions of various sizes in a serial-parallel fashion to capture tumor features at diverse scales. The MES module is then employed to enhance the output of each decoder layer by supplementing edge information. This process aims to improve the overall integrity of tumor boundaries, contributing to more refined segmentation results. Results: The mean Dice (mDice), Pixel Accuracy (PA), Intersection over Union (IoU), Recall, and Hausdorff Distance (HD) of this method on the publicly available breast ultrasound image (BUSI) dataset were 79.43%, 96.84%, 83.00%, 87.17%, and 19.71 mm, respectively, and on the dataset of Fujian Cancer Hospital, 90.45%, 97.55%, 90.08%, 93.72%, and 11.02 mm, respectively. On the BUSI dataset, compared to the original UNet, the Dice for malignant tumors increased by 14.59% and the HD decreased by 17.13 mm. Conclusions: Our method accurately segments breast tumors in ultrasound images and provides valuable edge information for the subsequent diagnosis of breast cancer. The experimental results show that our method makes substantial progress in improving accuracy.
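
As a concrete illustration of the serial-parallel dilated-convolution idea behind the MFA module, the following PyTorch sketch chains several dilated 3x3 branches and fuses all of their outputs. It is not the authors' released code; the channel sizes and dilation rates are assumptions.

```python
# Illustrative sketch of a serial-parallel multi-scale dilated-convolution block
# (in the spirit of the MFA module described above; not the paper's exact code).
import torch
import torch.nn as nn

class MultiScaleFeatureBlock(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        # One dilated 3x3 branch per rate; padding=rate keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        prev = x
        for branch in self.branches:
            # Serial: each branch refines the previous branch's output, so the
            # receptive field grows progressively.
            prev = branch(prev)
            # Parallel: every branch output is kept for concatenation.
            feats.append(prev)
        return self.fuse(torch.cat(feats, dim=1))
```

For example, `MultiScaleFeatureBlock(64)` applied to a `(1, 64, 128, 128)` feature map returns a tensor of the same shape.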

Keyword :

breast cancer; deep learning; multi-scale feature fusion; segmentation; ultrasound image

Cite:

GB/T 7714: Pan, Lin, Tang, Mengshi, Chen, Xin, et al. M2UNet: Multi-Scale Feature Acquisition and Multi-Input Edge Supplement Based on UNet for Efficient Segmentation of Breast Tumor in Ultrasound Images [J]. DIAGNOSTICS, 2025, 15(8).
MLA: Pan, Lin, et al. "M2UNet: Multi-Scale Feature Acquisition and Multi-Input Edge Supplement Based on UNet for Efficient Segmentation of Breast Tumor in Ultrasound Images." DIAGNOSTICS 15.8 (2025).
APA: Pan, Lin, Tang, Mengshi, Chen, Xin, Du, Zhongshi, Huang, Danfeng, Yang, Mingjing, et al. M2UNet: Multi-Scale Feature Acquisition and Multi-Input Edge Supplement Based on UNet for Efficient Segmentation of Breast Tumor in Ultrasound Images. DIAGNOSTICS, 2025, 15(8).

New scoring system for the evaluation obstructive degrees based on computed tomography for obstructive colorectal cancer SCIE
Journal Article | 2025, 17 (3) | WORLD JOURNAL OF GASTROINTESTINAL ONCOLOGY

Abstract :

BACKGROUND: The degree of obstruction plays an important role in decision-making for obstructive colorectal cancer (OCRC). The existing assessment still relies on the colorectal obstruction scoring system (CROSS), which is based on a comprehensive analysis of patients' complaints and eating conditions. The data collection relies on subjective descriptions and lacks objective parameters. Therefore, a scoring system for the evaluation of the computed tomography-based obstructive degree (CTOD) is urgently required for OCRC. AIM: To explore the relationship between CTOD and CROSS and to determine whether CTOD could affect short-term and long-term prognosis. METHODS: A total of 173 patients were enrolled. CTOD was obtained using k-means clustering, the ratio of proximal to distal obstruction, and the proportion of nonparenchymal areas at the site of obstruction. CTOD was integrated with CROSS to analyze the effect of emergency intervention on complications. Short-term and long-term outcomes were compared between the groups. RESULTS: CTOD severe obstruction (CTOD grade 3) was an independent risk factor [odds ratio (OR) = 3.390, 95% confidence interval (CI): 1.340-8.570, P = 0.010] in the multivariate analysis of short-term outcomes, while CROSS grade was not. In the CTOD-CROSS grade system, for the non-severe obstructive group (CTOD 1-2 with CROSS 1-4), the complication rate of emergency interventions was significantly higher than that of non-emergency interventions (71.4% vs 41.8%, P = 0.040). The postoperative pneumonia rate was also higher in the emergency intervention group (35.7% vs 8.9%, P = 0.020). However, CTOD grade was not an independent risk factor for overall survival or progression-free survival. CONCLUSION: CTOD was useful in preoperative decision-making to avoid unnecessary emergency interventions and complications.
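
The abstract does not spell out how the k-means step yields a grade. A hypothetical sketch of clustering the two CT-derived quantities it names into three CTOD grades might look like the following; the feature choice, k = 3, and the severity ordering are all assumptions, not the paper's protocol.

```python
# Hypothetical CTOD grading sketch: cluster two CT-derived measurements into
# three grades with k-means (scikit-learn). Everything here is illustrative.
import numpy as np
from sklearn.cluster import KMeans

def ctod_grades(prox_dist_ratio: np.ndarray, nonparenchymal_frac: np.ndarray) -> np.ndarray:
    # Stack the per-patient measurements into a (n_patients, 2) feature matrix.
    features = np.column_stack([prox_dist_ratio, nonparenchymal_frac])
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
    # Order clusters so grade 3 corresponds to the most severe centroid
    # (largest mean feature values); this ordering rule is an assumption.
    severity_rank = km.cluster_centers_.mean(axis=1).argsort().argsort()
    return severity_rank[km.labels_] + 1  # grades 1..3
```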

Keyword :

Colorectal obstruction scoring system; Computed tomography-based obstructive degree; Emergency intervention; Obstructive colorectal cancer; Scoring system

Cite:

GB/T 7714: Shang-Guan, Xin-Chang, Zhang, Jun-Rong, Lin, Chao-Nan, et al. New scoring system for the evaluation obstructive degrees based on computed tomography for obstructive colorectal cancer [J]. WORLD JOURNAL OF GASTROINTESTINAL ONCOLOGY, 2025, 17(3).
MLA: Shang-Guan, Xin-Chang, et al. "New scoring system for the evaluation obstructive degrees based on computed tomography for obstructive colorectal cancer." WORLD JOURNAL OF GASTROINTESTINAL ONCOLOGY 17.3 (2025).
APA: Shang-Guan, Xin-Chang, Zhang, Jun-Rong, Lin, Chao-Nan, Chen, Shuai, Wei, Yong, Chen, Wen-Xuan, et al. New scoring system for the evaluation obstructive degrees based on computed tomography for obstructive colorectal cancer. WORLD JOURNAL OF GASTROINTESTINAL ONCOLOGY, 2025, 17(3).

Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology EI
Conference Paper | 2025, 15548 LNCS, 96-105 | 1st MICCAI Challenge on Comprehensive Analysis and Computing of Real-World Medical Images (CARE 2024), held in conjunction with the 27th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2024)

Abstract :

Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide myocardial pathology information for patients with myocardial infarction. Precise myocardial structure and pathology segmentation is of significant importance for subsequent diagnosis and treatment. Nevertheless, traditional manual segmentation of myocardial structure and pathology is not only time-consuming and labor-intensive but also has a low accuracy rate; it becomes even more challenging when identifying pathologies such as scars and edema that are small in volume and have low contrast with the surrounding tissue. To address this issue, this paper proposes an improved nn-UNet for fully automatic segmentation of myocardial pathologies. In this network, building on nn-UNet, we use multi-modal data as input to compensate for the lack of information in a single modality. For the multi-modal data, we utilize cross normalization to improve generalization performance. Meanwhile, multi-scale attention modules are integrated to process features at different resolutions, thereby improving the feature representation capability of the neural network. Through feature fusion and attention weighting, the model can better capture the global and local information of myocardial pathologies and achieve more accurate segmentation. To verify the effectiveness of the proposed method, we conducted an evaluation using five-fold cross-validation on the dataset of the MyoPS++ challenge. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

Keyword :

Cardiology; Deep learning; Diagnosis; Image segmentation; Nuclear magnetic resonance; Pathology

Cite:

GB/T 7714: Tang, Mengshi, Li, Nuoxi, Pan, Lin. Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology [C]. 2025: 96-105.
MLA: Tang, Mengshi, et al. "Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology." (2025): 96-105.
APA: Tang, Mengshi, Li, Nuoxi, Pan, Lin. Improved nn-UNet: Generalizable Multi-scale Attention-Driven Segmentation of Multi-sequence Myocardial Pathology. (2025): 96-105.

Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation EI
Conference Paper | 2025, 15548 LNCS, 34-45 | 1st MICCAI Challenge on Comprehensive Analysis and Computing of Real-World Medical Images (CARE 2024), held in conjunction with the 27th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2024)

Abstract :

MyoPS (Myocardial Pathology Segmentation) supports the auxiliary diagnosis of myocardial infarction by accurately segmenting myocardial lesions (such as scars and edema). However, CMR images are complex, manual segmentation is time-consuming and relies on professional knowledge, and there are differences in imaging data from different centers, all of which increase the difficulty of segmentation. To this end, this study developed a domain generalization module that flexibly integrates LGE, T2-weighted, and cine sequences to improve cross-center and multi-sequence adaptability and robustness. Our method combines the domain generalization module with the nnUNet segmentation network and reduces the differences between data distributions by using the domain generalization module for mixing-based data augmentation, thereby enhancing the model's generalization ability and improving segmentation performance. In tests conducted on the dataset of the MyoPS++ Challenge, our network performed well in segmenting scars and edema and achieved a clear performance improvement over the native segmentation network, which verifies its effectiveness in handling multi-center, multi-sequence CMR data. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
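
For reference, the core of a MixUp-style mixing step, blending image/label pairs drawn from different centers, is only a few lines. This is a generic sketch under the assumption of soft (one-hot) segmentation targets, not the authors' exact augmentation.

```python
# Minimal MixUp-style data mixing sketch (illustrative, not the paper's code).
import torch

def mixup(x_a: torch.Tensor, y_a: torch.Tensor,
          x_b: torch.Tensor, y_b: torch.Tensor, alpha: float = 0.4):
    """Blend two image/one-hot-label pairs, e.g. drawn from different centers."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x_a + (1.0 - lam) * x_b      # mixed input image
    y = lam * y_a + (1.0 - lam) * y_b      # soft segmentation target
    return x, y, lam
```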

Keyword :

Diagnosis; Diseases; Image segmentation; Pathology

Cite:

GB/T 7714: Chen, Leyang, Tu, Yaosheng, Bai, Penggang, et al. Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation [C]. 2025: 34-45.
MLA: Chen, Leyang, et al. "Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation." (2025): 34-45.
APA: Chen, Leyang, Tu, Yaosheng, Bai, Penggang, Pan, Lin. Domain Generalization in Myocardial Pathology Segmentation with MixUp Augmentation. (2025): 34-45.

Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI EI
Conference Paper | 2025, 15548 LNCS, 149-157 | 1st MICCAI Challenge on Comprehensive Analysis and Computing of Real-World Medical Images (CARE 2024), held in conjunction with the 27th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2024)

Abstract :

Automatic segmentation of the left atrial cavity and scar in late gadolinium enhanced magnetic resonance imaging has important clinical significance for the diagnosis of atrial fibrillation. Owing to the inferior image quality, thin walls, surrounding enhancement regions, and complex morphology of left atrial scars, their automatic quantitative analysis is extremely challenging. Manual segmentation of either the left atrial cavity or the atrial scar is very time-consuming, and subjective errors may occur. In this work, a deep neural network named ResCEAUNet has been developed and validated for automatic segmentation of left atrial scars. We adopt nnUNet as the baseline. To enhance segmentation accuracy, we introduce two key improvements to our model: a lightweight Convolutional Block Attention Module (CBAM) and an edge attention module. The edge attention module significantly improves the model's ability to delineate the intricate boundaries of the atrial wall and scar tissue, which is particularly beneficial for thin structures like the left atrium. Simultaneously, CBAM sharpens the model's focus on relevant features, enabling more precise localization and identification of scar tissue without substantially increasing computational complexity. These synergistic enhancements result in a robust and efficient segmentation model, whose effectiveness is demonstrated by a Dice score of 0.6181 on the LAScarQS++ 2024 validation dataset. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
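
A compact CBAM-style block (channel attention followed by spatial attention) can be sketched as below; the reduction ratio and kernel size are assumptions, and the paper's edge attention module is not reproduced here.

```python
# Illustrative CBAM-style attention block (channel attention, then spatial attention).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Shared MLP for channel attention over pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 2-channel (mean, max) map -> 1-channel spatial attention map.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: average- and max-pooled descriptors share the MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: convolution over channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```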

Keyword :

Deep neural networks; Diagnosis; Diffusion tensor imaging; Dynamic contrast enhanced MRI; Image segmentation; Nuclear magnetic resonance

Cite:

GB/T 7714: Zhang, Yashuang, Cheng, Haiyan, Li, Douzhi, et al. Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI [C]. 2025: 149-157.
MLA: Zhang, Yashuang, et al. "Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI." (2025): 149-157.
APA: Zhang, Yashuang, Cheng, Haiyan, Li, Douzhi, Pan, Lin. Left Atrial Scar Segmentation and Quantification Using Residual CBAM-EAM Attention UNet for LGE MRI. (2025): 149-157.

LGENet: disentangle anatomy and pathology features for late gadolinium enhancement image segmentation SCIE
Journal Article | 2025, 63 (8), 2311-2323 | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING

Abstract :

Myocardium scar segmentation is essential for clinical diagnosis and prognosis of cardiac vascular diseases. Late gadolinium enhancement (LGE) imaging has been widely utilized to visualize left atrial and ventricular scars. However, automatic scar segmentation remains challenging due to the imbalance between scar and background and the variation in scar sizes. To address these challenges, we introduce an innovative network, LGENet, for scar segmentation. LGENet disentangles anatomy and pathology features from LGE images. Note that inherent spatial relationships exist between the myocardium and scarring regions; we therefore propose a boundary attention module that conditions scar segmentation on anatomical boundary features, which mitigates the imbalance problem. Meanwhile, LGENet can predict scar regions across multiple scales with a multi-depth decision module, addressing the scar size variation issue. In our experiments, we thoroughly evaluated the performance of LGENet using the LAScarQS 2022 and EMIDEC datasets. The results demonstrate that LGENet achieved promising performance for cardiac scar segmentation.
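
One common way to realize multi-scale prediction of this kind is deep-supervision-style fusion, where each decoder depth emits a prediction that is upsampled to full resolution and combined. The sketch below assumes a simple mean fusion rather than the paper's exact multi-depth decision rule.

```python
# Hedged sketch of multi-depth prediction fusion (deep-supervision style).
import torch
import torch.nn.functional as F

def fuse_multi_depth(logits_per_depth, out_size):
    """logits_per_depth: list of (B, C, h_i, w_i) tensors from different decoder depths.
    out_size: (H, W) of the full-resolution output."""
    probs = [
        F.interpolate(torch.sigmoid(l), size=out_size, mode="bilinear", align_corners=False)
        for l in logits_per_depth
    ]
    # Average the per-depth probability maps into one final prediction.
    return torch.stack(probs, dim=0).mean(dim=0)
```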

Keyword :

Adaptive decision; Boundary attention; Multi-depth network; Scar segmentation

Cite:

GB/T 7714: Yang, Mingjing, Yang, Kangwen, Wu, Mengjun, et al. LGENet: disentangle anatomy and pathology features for late gadolinium enhancement image segmentation [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2025, 63(8): 2311-2323.
MLA: Yang, Mingjing, et al. "LGENet: disentangle anatomy and pathology features for late gadolinium enhancement image segmentation." MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING 63.8 (2025): 2311-2323.
APA: Yang, Mingjing, Yang, Kangwen, Wu, Mengjun, Huang, Liqin, Ding, Wangbin, Pan, Lin, et al. LGENet: disentangle anatomy and pathology features for late gadolinium enhancement image segmentation. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2025, 63(8), 2311-2323.

Codebook prior-guided hybrid attention dehazing network SCIE
Journal Article | 2025, 162 | IMAGE AND VISION COMPUTING

Abstract :

Transformers have been widely used in image dehazing tasks due to their powerful self-attention mechanism for capturing long-range dependencies. However, directly applying Transformers often leads to coarse details during image reconstruction, especially in complex real-world hazy scenarios. To address this problem, we propose a novel Hybrid Attention Encoder (HAE). Specifically, a channel-attention-based convolution block is integrated into the Swin-Transformer architecture. This design enhances the local features at each position through an overlapping block-wise spatial attention mechanism while leveraging the advantages of channel attention in global information processing to strengthen the network's representation capability. Moreover, to adapt to various complex hazy environments, a high-quality codebook prior encapsulating the color and texture knowledge of high-resolution clear scenes is introduced. We also propose a more flexible Binary Matching Mechanism (BMM) to better align the codebook prior with the network, further unlocking the potential of the model. Extensive experiments demonstrate that our method consistently outperforms the second-best methods by a margin of 8% to 19% across multiple metrics on the RTTS and URHI datasets. The source code has been released at https://github.com/HanyuZheng25/HADehzeNet.
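
The codebook prior amounts to matching decoder features against a dictionary of entries learned from high-quality clear scenes. The plain nearest-neighbour lookup below illustrates that idea; the paper's Binary Matching Mechanism itself is not reproduced here, and the tensor shapes are assumptions.

```python
# Sketch of a vector-quantization-style codebook lookup: each feature vector is
# matched to its nearest codebook entry (illustrative; not the paper's BMM).
import torch

def codebook_lookup(features: torch.Tensor, codebook: torch.Tensor):
    """features: (N, D) feature vectors; codebook: (K, D) learned prior entries."""
    d = torch.cdist(features, codebook)   # (N, K) pairwise Euclidean distances
    idx = d.argmin(dim=1)                 # index of the nearest codebook entry
    return codebook[idx], idx             # matched entries and their indices
```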

Keyword :

Channel attention; Discrete codebook learning; Single image dehazing; Swin-transformer

Cite:

GB/T 7714: Huang, Liqin, Zheng, Hanyu, Pan, Lin, et al. Codebook prior-guided hybrid attention dehazing network [J]. IMAGE AND VISION COMPUTING, 2025, 162.
MLA: Huang, Liqin, et al. "Codebook prior-guided hybrid attention dehazing network." IMAGE AND VISION COMPUTING 162 (2025).
APA: Huang, Liqin, Zheng, Hanyu, Pan, Lin, Su, Zhipeng, Wu, Qiang. Codebook prior-guided hybrid attention dehazing network. IMAGE AND VISION COMPUTING, 2025, 162.

Dual-phase airway segmentation: Enhancing distal bronchial identification with anatomical prior guidance SCIE
Journal Article | 2025, 153 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract :

Airway segmentation and reconstruction are critical for preoperative lesion localization and surgical planning in pulmonary interventions. However, this task remains challenging due to the intrinsically complex tree structure of the airway and the imbalance in branch sizes. While current deep learning methods focus on optimizing model architecture, they underutilize anatomical priors such as the spatial correlation between pulmonary arteries and bronchi beyond geometric grading level III. To address this limitation, we propose a dual-decoding segmentation network (DDS-Net) integrated with a pulmonary-bronchial extension generative adversarial network (PBE-GAN), which explicitly embeds artery-bronchus adjacency priors to enhance distal bronchial identification. Experimental results demonstrate state-of-the-art performance, achieving a Dice Similarity Coefficient (DSC) of 88.46%, a Branch Detection rate (BD) of 88.31%, and a Tree Length Detection rate (TD) of 84.93%, with significant improvements in detecting peripheral bronchi near pulmonary arteries. This study confirms that incorporating anatomical relationships substantially improves segmentation accuracy, particularly for fine structures. Future work should prioritize clinical validation through multi-center trials and explore integration with real-time surgical navigation systems, while extending similar anatomical synergy principles to other organ-specific segmentation tasks.

Keyword :

Airway segmentation; Artery accompany; Generative adversarial network; Prior knowledge

Cite:

GB/T 7714: Zhang, Zhen, Zhang, Wen, Huang, Liqin, et al. Dual-phase airway segmentation: Enhancing distal bronchial identification with anatomical prior guidance [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 153.
MLA: Zhang, Zhen, et al. "Dual-phase airway segmentation: Enhancing distal bronchial identification with anatomical prior guidance." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 153 (2025).
APA: Zhang, Zhen, Zhang, Wen, Huang, Liqin, Pan, Lin, Zheng, Shaohua, Liu, Zheng, et al. Dual-phase airway segmentation: Enhancing distal bronchial identification with anatomical prior guidance. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 153.

Multi-Source Domain Adaptation for Medical Image Segmentation SCIE
Journal Article | 2024, 43 (4), 1640-1651 | IEEE TRANSACTIONS ON MEDICAL IMAGING
WoS CC Cited Count: 13

Abstract :

Unsupervised domain adaptation (UDA) aims to mitigate the performance drop of models tested on the target domain due to the domain shift between the source and target domains. Most UDA segmentation methods focus on the scenario of a single source domain. However, in practical situations, data with a gold standard may be available from multiple sources (domains), and such multi-source training data could provide more information for knowledge transfer. How to utilize them to achieve better domain adaptation remains to be explored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. First, we employ a multi-level adversarial learning scheme to adapt features at different levels between each of the source domains and the target, to improve segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validated the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results show that our method delivers promising performance and compares favorably to state-of-the-art approaches.
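
A multi-model consistency loss of this kind can be sketched as the mean pairwise disagreement between the source-specific models' predictions on unlabeled target images. The pairwise MSE below is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Hedged sketch of a multi-model consistency term for multi-source UDA.
import itertools
import torch
import torch.nn.functional as F

def multi_model_consistency(logits_list):
    """logits_list: one (B, C, H, W) prediction per source-specific model,
    all computed on the same batch of unlabeled target images."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    # Penalize disagreement between every pair of models.
    losses = [F.mse_loss(p, q) for p, q in itertools.combinations(probs, 2)]
    return torch.stack(losses).mean()
```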

Keyword :

Domain adaptation; medical image segmentation; multi-source; unsupervised learning

Cite:

GB/T 7714: Pei, Chenhao, Wu, Fuping, Yang, Mingjing, et al. Multi-Source Domain Adaptation for Medical Image Segmentation [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43(4): 1640-1651.
MLA: Pei, Chenhao, et al. "Multi-Source Domain Adaptation for Medical Image Segmentation." IEEE TRANSACTIONS ON MEDICAL IMAGING 43.4 (2024): 1640-1651.
APA: Pei, Chenhao, Wu, Fuping, Yang, Mingjing, Pan, Lin, Ding, Wangbin, Dong, Jinwei, et al. Multi-Source Domain Adaptation for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43(4), 1640-1651.

Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning SCIE
Journal Article | 2024, 14 (16) | DIAGNOSTICS

Abstract :

Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well aligned. However, this assumption falters when there are significant gaps between medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network. These components work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network is designed for the bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the outputs of the two segmentation networks, assessing pseudo-distances between domains to improve pseudo-label quality and thus enhance the overall learning efficiency of the framework. The method's success is demonstrated by notable advancements in segmentation precision across target domains for abdomen and brain tasks.
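
The basic cross-pseudo-supervision term, before the pseudo-distance enhancement described above, trains each segmentation stream on hard pseudo-labels produced by the other. A minimal sketch of that plain variant:

```python
# Minimal cross-pseudo-supervision sketch between two segmentation streams
# (the paper's enhanced, pseudo-distance-weighted variant is not reproduced).
import torch
import torch.nn.functional as F

def cross_pseudo_supervision(logits_a: torch.Tensor, logits_b: torch.Tensor):
    """logits_a, logits_b: (B, C, H, W) predictions from the two streams."""
    pseudo_a = logits_a.argmax(dim=1).detach()    # hard labels from stream A
    pseudo_b = logits_b.argmax(dim=1).detach()    # hard labels from stream B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B's pseudo-labels
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A's pseudo-labels
    return loss_a + loss_b
```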

Keyword :

cross modality segmentation; cross pseudo supervision; feature alignment; unsupervised domain adaptation

Cite:

GB/T 7714: Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning [J]. DIAGNOSTICS, 2024, 14(16).
MLA: Yang, Mingjing, et al. "Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning." DIAGNOSTICS 14.16 (2024).
APA: Yang, Mingjing, Wu, Zhicheng, Zheng, Hanyu, Huang, Liqin, Ding, Wangbin, Pan, Lin, et al. Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning. DIAGNOSTICS, 2024, 14(16).
