Research Output Search

Query:

Scholar Name: Chen Xiumei

Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval [SCIE]
Journal Article | 2025, Vol. 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 3

Abstract:

Remote sensing image-text retrieval (RSITR) is a cross-modal task that integrates visual and textual information and has attracted significant attention in remote sensing research. Remote sensing images typically contain complex scenes with abundant details, which makes accurate semantic alignment between images and texts challenging; despite advances in the field, precise alignment in such intricate contexts remains a major hurdle. To address this challenge, this article introduces a novel context-aware local-global semantic alignment (CLGSA) method consisting of two key modules: the local key feature alignment (LKFA) module and the cross-sample global semantic alignment (CGSA) module. The LKFA module incorporates a local image masking and reconstruction task to improve the alignment between image and text features: it masks certain regions of the image and uses text context information to guide reconstruction of the masked areas, enhancing local semantic alignment and ensuring more accurate retrieval of region-specific content. The CGSA module employs a hard sample triplet loss to improve global semantic consistency; by prioritizing difficult samples during training, it refines the feature space distribution and helps the model better capture global semantics across entire image-text pairs. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves an mR score of 32.07% on the RSICD dataset and 46.63% on the RSITMD dataset, outperforming baseline methods and confirming the robustness and accuracy of the approach.

Keywords:

Accuracy; Cross modal retrieval; Feature extraction; Hard sample triplet loss; Image reconstruction; local image masking; Remote sensing; remote sensing image-text retrieval (RSITR); semantic alignment; Semantics; Sensors; text-guided reconstruction; Training; Transformers; Visualization

Cite:

Copy a citation from the list below, or export it to your reference manager.

GB/T 7714 Chen, Xiumei, Zheng, Xiangtao, Lu, Xiaoqiang. Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA Chen, Xiumei, et al. "Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA Chen, Xiumei, Zheng, Xiangtao, & Lu, Xiaoqiang. (2025). Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 63.