Publication Search
Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 3

Abstract:

Remote sensing image-text retrieval (RSITR) is a cross-modal task that integrates visual and textual information, attracting significant attention in remote sensing research. Remote sensing images typically contain complex scenes with abundant details, presenting significant challenges for accurate semantic alignment between images and texts. Despite advances in the field, achieving precise alignment in such intricate contexts remains a major hurdle. To address this challenge, this article introduces a novel context-aware local-global semantic alignment (CLGSA) method. The proposed method consists of two key modules: the local key feature alignment (LKFA) module and the cross-sample global semantic alignment (CGSA) module. The LKFA module incorporates a local image masking and reconstruction task to improve the alignment between image and text features. Specifically, this module masks certain regions of the image and uses text context information to guide the reconstruction of the masked areas, enhancing the alignment of local semantics and ensuring more accurate retrieval of region-specific content. The CGSA module employs a hard sample triplet loss to improve global semantic consistency. By prioritizing difficult samples during training, this module refines feature space distributions, helping the model better capture global semantics across the entire image-text pair. A series of extensive experiments demonstrates the effectiveness of the proposed method. The method achieves an mR score of 32.07% on the RSICD dataset and 46.63% on the RSITMD dataset, outperforming baseline methods and confirming the robustness and accuracy of the approach.
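The abstract names one fully concrete mechanism, the hard sample triplet loss used by the CGSA module. The sketch below shows a standard batch-hard formulation of that idea in PyTorch; it is an illustration under assumed L2-normalized embeddings whose i-th image and text rows form matched pairs, and the function name hard_triplet_loss is hypothetical, not the authors' code.

```python
# Batch-hard triplet loss sketch for image-text retrieval (illustrative only).
# Assumes img_emb, txt_emb are L2-normalized (B, D) tensors; row i of each
# forms a matched pair, so off-diagonal entries of the similarity matrix
# are negatives.
import torch
import torch.nn.functional as F

def hard_triplet_loss(img_emb, txt_emb, margin=0.2):
    sim = img_emb @ txt_emb.t()                        # (B, B) cosine similarities
    pos = sim.diag()                                   # matched-pair similarities
    diag_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(diag_mask, float("-inf"))    # hide the positives
    hardest_txt = neg.max(dim=1).values                # hardest negative text per image
    hardest_img = neg.max(dim=0).values                # hardest negative image per text
    loss_i2t = F.relu(margin + hardest_txt - pos)      # image-to-text hinge
    loss_t2i = F.relu(margin + hardest_img - pos)      # text-to-image hinge
    return (loss_i2t + loss_t2i).mean()
```

Mining only the hardest in-batch negative, rather than averaging over all negatives, concentrates the gradient on exactly the "difficult samples" the abstract says the module prioritizes.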

Keywords:

Accuracy; Cross modal retrieval; Feature extraction; Hard sample triplet loss; Image reconstruction; local image masking; Remote sensing; remote sensing image-text retrieval (RSITR); semantic alignment; Semantics; Sensors; text-guided reconstruction; Training; Transformers; Visualization

Cite:

Copy from the list or export to your reference manager.

GB/T 7714 Chen, Xiumei, Zheng, Xiangtao, Lu, Xiaoqiang. Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA Chen, Xiumei, et al. "Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA Chen, Xiumei, Zheng, Xiangtao, & Lu, Xiaoqiang. (2025). Context-Aware Local-Global Semantic Alignment for Remote Sensing Image-Text Retrieval. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 63.

Version:

Context-Aware Local–Global Semantic Alignment for Remote Sensing Image–Text Retrieval Scopus
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Context-Aware Local–Global Semantic Alignment for Remote Sensing Image–Text Retrieval EI
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
跨域遥感场景解译研究进展 (Research Progress in Cross-Domain Remote Sensing Scene Interpretation) CSCD PKU
Journal Article | 2024, 29(6), 1730-1746 | 中国图象图形学报 (Journal of Image and Graphics)

Abstract:

Remote sensing Earth observation routinely produces multi-source data from multiple platforms, sensors, and viewing angles, which provides complementary information for scene interpretation. However, existing interpretation methods must either train a new model for each remote sensing scene or standardize test data to fit an existing model; the resulting training cost and response time are too high for the new stage of collaborative multi-source interpretation. Cross-domain remote sensing scene interpretation instead transfers an already-trained model to a new application scene, reusing the model to accommodate scene changes and applying knowledge from known domains to problems in unknown ones. Taking cross-domain interpretation as its main thread, this paper surveys the domestic and international literature around two typical tasks, scene recognition and object recognition, reviews the state of the art, current hotspots, and future trends, and compiles the commonly used datasets and a unified experimental setup for cross-domain remote sensing scene interpretation. The public link to the experimental datasets and detection results is: .
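The survey's premise of reusing a trained model in a new application scene corresponds, in its simplest form, to freeze-and-fine-tune transfer learning. Below is a minimal sketch assuming a torchvision ResNet-50 pretrained on ImageNet and a placeholder target-domain loader; this is one generic instance of model reuse, not a method from the paper.

```python
# Freeze-and-fine-tune sketch of cross-domain model reuse (illustrative only).
# target_loader is a hypothetical DataLoader over the new scene's labeled data.
import torch
import torch.nn as nn
from torchvision import models

def adapt_to_new_domain(num_target_classes, target_loader, epochs=5):
    # Reuse source-domain knowledge: start from an ImageNet-pretrained backbone.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():
        p.requires_grad = False                     # freeze the transferred features
    model.fc = nn.Linear(model.fc.in_features, num_target_classes)  # new task head
    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in target_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```

Freezing the backbone keeps the response cycle short, the cost issue the abstract raises, since only the small task head is optimized on the target scene.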

Keywords:

Out-of-distribution generalization; diverse datasets; model generalization; adaptive algorithms; cross-domain remote sensing scene interpretation; transfer learning

Cite:

Copy from the list or export to your reference manager.

GB/T 7714 郑向涛, 肖欣林, 陈秀妹, et al. 跨域遥感场景解译研究进展 [J]. 中国图象图形学报, 2024, 29(6): 1730-1746.
MLA 郑向涛, et al. "跨域遥感场景解译研究进展." 中国图象图形学报 29.6 (2024): 1730-1746.
APA 郑向涛, 肖欣林, 陈秀妹, 卢宛萱, 刘小煜, & 卢孝强. (2024). 跨域遥感场景解译研究进展. 中国图象图形学报, 29(6), 1730-1746.

Version:

跨域遥感场景解译研究进展 CSCD PKU
Journal Article | 2024, 29(6), 1730-1746 | 中国图象图形学报
跨域遥感场景解译研究进展 Scopus CSCD PKU
Journal Article | 2024, 29(6), 1730-1746 | 中国图象图形学报
