Abstract:
Automated medical image segmentation is a crucial step in clinical analysis and diagnosis, as it can improve diagnostic efficiency and accuracy. Deep convolutional neural networks (DCNNs) have been widely used in the medical field, achieving excellent results. However, the high complexity of medical images makes it difficult for many networks to balance local and global information, resulting in unstable segmentation outcomes. To address this challenge, we designed a hybrid CNN-Transformer network that captures both local and global information. More specifically, deep convolutional neural networks are introduced to exploit local information, while a trident multi-layer fusion (TMF) block is designed for the Transformer to dynamically fuse contextual information from higher-level (global) features. Moreover, considering the inherent characteristics of medical images (e.g., irregular shapes and discontinuous boundaries), we developed united attention (UA) blocks to focus learning on important features. To evaluate the effectiveness of the proposed approach, we performed experiments on two publicly available datasets, ISIC-2017 and Kvasir-SEG, and compared our results with state-of-the-art approaches. The experimental results demonstrate the superior performance of our approach. The code is available at https://github.com/Tanghui2000/HTC-Net.
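The abstract names three architectural ingredients: a CNN branch for local features, a trident multi-layer fusion (TMF) block that merges higher-level features, and united attention (UA) blocks that emphasize important regions. The sketch below is an illustrative PyTorch rendering of how such TMF and UA blocks might be composed; the class names, layer choices, and channel sizes are assumptions for illustration only, not the authors' implementation, which is available at https://github.com/Tanghui2000/HTC-Net.

```python
# Illustrative sketch only (assumed PyTorch); block internals are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UABlock(nn.Module):
    """Hypothetical 'united attention': channel gating followed by spatial gating."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                # squeeze spatial dimensions
            nn.Conv2d(channels, channels, 1),       # per-channel weights
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),   # one attention map over H x W
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                # re-weight channels
        return x * self.spatial_gate(x)             # re-weight spatial positions


class TMFBlock(nn.Module):
    """Hypothetical 'trident multi-layer fusion': merge three feature levels."""

    def __init__(self, in_channels, out_channels: int):
        super().__init__()
        # Project each of the three inputs to a common channel width.
        self.projs = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.fuse = nn.Conv2d(3 * out_channels, out_channels, 3, padding=1)

    def forward(self, feats):
        # Resize all levels to the resolution of the first (highest-resolution) input.
        target = feats[0].shape[-2:]
        resized = [
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.projs, feats)
        ]
        return self.fuse(torch.cat(resized, dim=1))


if __name__ == "__main__":
    # Three pyramid levels, e.g. from a CNN encoder at 1/8, 1/16, and 1/32 scale.
    feats = [
        torch.randn(1, 64, 32, 32),
        torch.randn(1, 128, 16, 16),
        torch.randn(1, 256, 8, 8),
    ]
    fused = TMFBlock((64, 128, 256), out_channels=64)(feats)
    refined = UABlock(64)(fused)
    print(refined.shape)  # torch.Size([1, 64, 32, 32])
```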
Source: BIOMEDICAL SIGNAL PROCESSING AND CONTROL
ISSN: 1746-8094
Year: 2023
Volume: 88
Impact Factor: 4.9 (JCR@2023)
JCR Journal Grade: 1
CAS Journal Grade: 3
Cited Count:
WoS CC Cited Count: 15
SCOPUS Cited Count: 18
ESI Highly Cited Papers on the List: 0