Abstract:
Gastric cancer is a serious malignant tumor. The gold standard for diagnosing gastric cancer is identifying cancer cells on pathological slides under microscopic examination. Although many approaches have been proposed for gastric cancer segmentation, it remains difficult to train large-scale segmentation networks with the scant gastroscopy data available. Recently, the Segment Anything Model (SAM) has attracted considerable interest for segmenting natural and medical images. However, its high computational complexity and cost limit its application in resource-limited embedded medical devices. In this paper, we propose GC-SAM, a lightweight model for tumor segmentation. Its prompt encoder and mask decoder are fine-tuned to better address the challenge of segmenting pathological images of gastric cancer tissue. Evaluated on an internal dataset, GC-SAM achieved state-of-the-art performance compared with classical image segmentation networks, with a Dice coefficient of 0.8186. In addition, external validation confirmed its superior generalization ability. This study demonstrates the great potential of adapting GC-SAM to pathological image segmentation tasks in gastric cancer tissue and opens the possibility of transferring deep learning image segmentation to embedded medical devices. © 2024 IEEE.
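The abstract reports segmentation quality as a Dice coefficient (0.8186). As a point of reference only, the sketch below shows how this metric is typically computed for binary segmentation masks; it is illustrative and not the authors' evaluation code.

```python
# Minimal sketch: Dice coefficient for binary segmentation masks.
# Dice = 2*|P ∩ T| / (|P| + |T|); values near 1 indicate strong overlap.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Compute the Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Example with two small, partially overlapping 4x4 masks.
pred = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[1, 0, 0, 0]] * 4)
print(f"Dice: {dice_coefficient(pred, target):.4f}")
```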
Year: 2024
Page: 1903-1908
Language: English
ESI Highly Cited Papers on the List: 0