Abstract:
Unsupervised domain adaptation (UDA) generally learns a mapping that aligns the distributions of the source and target domains. The learned mapping can boost the performance of the model on target data, whose labels are unavailable for model training. Previous UDA methods mainly focus on domain-invariant features (DIFs) without considering domain-specific features (DSFs), which could serve as complementary information to constrain the model. In this work, we propose a new UDA framework for cross-modality image segmentation. The framework first disentangles each domain into DIFs and DSFs. To enhance the representation of DIFs, self-attention modules are used in the encoder, allowing attention-driven, long-range dependency modeling for image generation tasks. Furthermore, a zero loss is minimized to force the target (source) DSF information contained in the source (target) images to be as close to zero as possible. These features are then iteratively decoded and encoded twice to maintain the consistency of the anatomical structure. To improve the quality of the generated images and the segmentation results, several discriminators are introduced for adversarial learning. Finally, with the source data and their DIFs, we train a segmentation network that is applicable to target images. We validated the proposed framework for cross-modality cardiac segmentation using two public datasets, and the results showed that our method delivered promising performance and compared favorably to state-of-the-art approaches in terms of segmentation accuracy. The source code of this work will be released via https://zmiclab.github.io/projects.html once this manuscript is accepted for publication. (c) 2021 Elsevier B.V. All rights reserved.
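The "zero loss" described above penalizes any cross-domain DSF response, e.g. target-specific features extracted from a source image. A minimal sketch of such a penalty, using plain arrays and illustrative names (not the authors' released code):

```python
import numpy as np

def zero_loss(cross_domain_dsf):
    # Hypothetical L1-style penalty: drives the domain-specific
    # features extracted from the *other* domain's images toward zero.
    return float(np.mean(np.abs(cross_domain_dsf)))

# A feature map that is already near zero incurs almost no penalty,
# while an active one is penalized.
assert zero_loss(np.zeros((4, 8))) < zero_loss(np.ones((4, 8)))
```

In training, a term like this would be added to the overall objective for both directions (target DSFs from source images and source DSFs from target images), encouraging the encoder to route domain-specific information only through the matching DSF branch.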
Source:
MEDICAL IMAGE ANALYSIS
ISSN: 1361-8415
Year: 2021
Volume: 71
Impact Factor: 13.828 (JCR@2021); 10.700 (JCR@2023)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 106
JCR Journal Grade: 1
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 14
SCOPUS Cited Count: 15
ESI Highly Cited Papers on the List: 0
30 Days PV: 0