Publication Search

Query: Scholar Name: Li Mengmeng (李蒙蒙)

Cross-modal feature interaction network for heterogeneous change detection SCIE
Journal Article | 2025 | GEO-SPATIAL INFORMATION SCIENCE
WoS CC Cited Count: 1

Abstract :

Heterogeneous change detection, which identifies change areas from remote sensing images acquired by different sensors or under different imaging conditions, is a task of considerable practical importance and significant challenge in remote sensing. Recent research has focused on deep-learning-based feature space translation methods for heterogeneous images. However, such methods often lose original image information, and the translated features cannot be compared efficiently, further limiting change detection accuracy. To address these issues, we propose a cross-modal feature interaction network (CMFINet). Specifically, CMFINet introduces a cross-modal interaction module (CMIM), which facilitates interaction between heterogeneous features through attention exchange. This promotes consistent representation of heterogeneous features while preserving image characteristics. Additionally, we design a differential feature extraction module (DFEM) to enhance the extraction of true change features from the spatial and channel dimensions, facilitating efficient comparison after feature interaction. Extensive experiments on the California, Toulouse, and Wuhan datasets demonstrate that CMFINet outperforms eight existing methods in identifying change areas in different scenes from multimodal images. Compared to the existing methods on the three datasets, CMFINet achieved the highest F1 scores of 83.93%, 75.65%, and 95.42%, and the highest mIoU values of 85.38%, 78.34%, and 94.87%, respectively. These results demonstrate the effectiveness and applicability of CMFINet in heterogeneous change detection.
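
The abstract describes the CMIM only at a functional level. As a rough illustration of what "attention exchange" between two sensor streams can look like, here is a minimal bidirectional cross-attention sketch in PyTorch; the class name, head count, and feature shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalExchange(nn.Module):
    """Hypothetical sketch of attention exchange between two modality streams:
    each stream queries the other, so both move toward a comparable space."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (batch, tokens, dim) features from the two sensors.
        # Each stream attends to the other; residuals keep the original
        # image characteristics, as the abstract emphasizes.
        a2b, _ = self.attn_a(feat_a, feat_b, feat_b)
        b2a, _ = self.attn_b(feat_b, feat_a, feat_a)
        return self.norm_a(feat_a + a2b), self.norm_b(feat_b + b2a)

x_opt = torch.randn(2, 256, 64)   # e.g. flattened optical features
x_sar = torch.randn(2, 256, 64)   # e.g. flattened SAR features
y_opt, y_sar = CrossModalExchange(64)(x_opt, x_sar)
```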

Keyword :

attention mechanisms; Change detection; CNN; feature interaction; heterogeneous remote sensing images

Cite:


GB/T 7714 Yang, Zhiwei, Wang, Xiaoqin, Lin, Haihan, et al. Cross-modal feature interaction network for heterogeneous change detection [J]. | GEO-SPATIAL INFORMATION SCIENCE, 2025.
MLA Yang, Zhiwei, et al. "Cross-modal feature interaction network for heterogeneous change detection." | GEO-SPATIAL INFORMATION SCIENCE (2025).
APA Yang, Zhiwei, Wang, Xiaoqin, Lin, Haihan, Li, Mengmeng, Lin, Mengjing. Cross-modal feature interaction network for heterogeneous change detection. | GEO-SPATIAL INFORMATION SCIENCE, 2025.

Version :

Cross-modal feature interaction network for heterogeneous change detection Scopus
Journal Article | 2025 | Geo-Spatial Information Science
Extraction buildings from very high-resolution images with asymmetric siamese multitask networks and adversarial edge learning SCIE
Journal Article | 2025, 136 | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION

Abstract :

Building extraction from very high-resolution remote-sensing images still faces two main issues: (1) small buildings are severely omitted, and the extracted building shapes show low consistency with ground truths; (2) supervised deep-learning methods perform poorly in few-shot scenarios, limiting their practical application. To address the first issue, we propose an asymmetric Siamese multitask network integrating adversarial edge learning, called ASMBR-Net, for building extraction. It contains an efficient asymmetric Siamese feature extractor comprising pre-trained convolutional neural network and Transformer backbones under pre-training and fine-tuning paradigms. This extractor balances local and global feature representation and reduces training costs. Adversarial edge learning automatically integrates edge constraints and strengthens the modeling of small and complex building shapes. To overcome the second issue, we introduce a self-training framework and design an instance transfer strategy to generate reliable pseudo-samples. We evaluated the proposed method on the WHU and Massachusetts (MA) datasets and a self-constructed Dongying (DY) dataset, comparing it with state-of-the-art methods. The experimental results show that our method achieves the highest F1-scores of 96.06%, 86.90%, and 84.98% on the WHU, MA, and DY datasets, respectively. Ablation experiments further verify the effectiveness of the proposed method. The code is available at: https://github.com/liuxuanguang/ASMBR-Net
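
The self-training framework is said to generate "reliable pseudo-samples" without the abstract specifying the selection rule. A common confidence-threshold filter is sketched below under that assumption; the 0.95 threshold, function name, and ignore-index convention are illustrative, not taken from the paper.

```python
import torch

@torch.no_grad()
def select_pseudo_masks(logits: torch.Tensor, conf_thresh: float = 0.95):
    """Hypothetical pseudo-label filter for self-training: keep per-pixel
    building/background labels only where the teacher model is confident.

    logits: (batch, 2, H, W) raw scores from the current model.
    Returns pseudo labels (batch, H, W) and a validity mask of the same shape.
    """
    probs = torch.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)          # per-pixel confidence and class
    valid = conf >= conf_thresh              # drop low-confidence pixels
    pseudo[~valid] = 255                     # 255 = ignore_index in the loss
    return pseudo, valid

# Usage: loss = F.cross_entropy(student_logits, pseudo, ignore_index=255)
```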

Keyword :

Adversarial learning; Building extraction; Multitask learning; Self-learning; VHR remote-sensing image

Cite:


GB/T 7714 Liu, Xuanguang, Li, Yujie, Dai, Chenguang, et al. Extraction buildings from very high-resolution images with asymmetric siamese multitask networks and adversarial edge learning [J]. | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2025, 136.
MLA Liu, Xuanguang, et al. "Extraction buildings from very high-resolution images with asymmetric siamese multitask networks and adversarial edge learning." | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION 136 (2025).
APA Liu, Xuanguang, Li, Yujie, Dai, Chenguang, Zhang, Zhenchao, Ding, Lei, Li, Mengmeng, et al. Extraction buildings from very high-resolution images with asymmetric siamese multitask networks and adversarial edge learning. | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2025, 136.

Version :

Extraction buildings from very high-resolution images with asymmetric siamese multitask networks and adversarial edge learning Scopus
Journal Article | 2025, 136 | International Journal of Applied Earth Observation and Geoinformation
Building types classification using MHGNN from high-resolution satellite images EI
Conference Paper | 2025, 226-229 | 6th International Conference on Geology, Mapping and Remote Sensing, ICGMRS 2025

Abstract :

Traditional methods for detecting urban building functional types from high-resolution remote sensing images rely mainly on hand-designed features and ignore spatial and structural features, making it difficult to recognize buildings with similar shapes. To solve this problem, this study constructs a multi-hop graph neural network (MHGNN) that integrates high-level structural and semantic information to recognize urban building functional types from high-resolution images. Experiments show that MHGNN outperforms traditional graph-based building classification methods and baseline models such as MLP, GIN, and GAT in multi-region building type classification. The approach achieves state-of-the-art metrics, including an IoU of 74.98%, an F1-score of 85.39%, and an OA of 89.56%, outperforming baseline methods by significant margins. © 2025 IEEE.
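
The abstract does not give the MHGNN formulation. One plausible reading of "multi-hop" is aggregating features over 1..K-hop neighbourhoods (powers of a normalized adjacency) before mixing them, sketched below; the class name, hop count, and concatenation scheme are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiHopConv(nn.Module):
    """Hypothetical multi-hop layer: features from 0..K-hop neighbourhoods
    (A^k X) are computed and concatenated before a shared linear mixing."""
    def __init__(self, in_dim: int, out_dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        self.mix = nn.Linear(in_dim * (hops + 1), out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (nodes, in_dim) building features; adj: (nodes, nodes), normalized.
        feats, h = [x], x
        for _ in range(self.hops):
            h = adj @ h                # one more hop of neighbourhood mixing
            feats.append(h)
        return torch.relu(self.mix(torch.cat(feats, dim=-1)))

x = torch.randn(10, 16)                            # 10 buildings, 16-d features
adj = torch.softmax(torch.randn(10, 10), dim=-1)   # stand-in normalized adjacency
out = MultiHopConv(16, 32)(x, adj)                 # (10, 32)
```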

Keyword :

Buildings; Classification (of information); Feature extraction; Image classification; Neural networks; Remote sensing; Satellite imagery; Semantics

Cite:


GB/T 7714 Cai, Xiaojiao, Li, Mengmeng, Zha, Ying. Building types classification using MHGNN from high-resolution satellite images [C]. 2025: 226-229.
MLA Cai, Xiaojiao, et al. "Building types classification using MHGNN from high-resolution satellite images." (2025): 226-229.
APA Cai, Xiaojiao, Li, Mengmeng, Zha, Ying. Building types classification using MHGNN from high-resolution satellite images. (2025): 226-229.

Version :

Building types classification using MHGNN from high-resolution satellite images Scopus
Other | 2025, 226-229 | 2025 6th International Conference on Geology, Mapping and Remote Sensing, ICGMRS 2025
TDFNet: twice decoding V-Mamba-CNN Fusion features for building extraction SCIE
Journal Article | 2025 | GEO-SPATIAL INFORMATION SCIENCE

Abstract :

Building extraction from remote sensing imagery is vital for various human activities, but it is challenging due to diverse building appearances and complex backgrounds. Research shows that both global context and spatial details are important for accurate building extraction, so methods integrating convolutional neural networks (CNNs) and vision transformers (ViTs) are currently popular. However, existing methods combining the two inadequately merge their features and perform decoding only once, leading to unclear boundaries, internal voids, and susceptibility to non-building elements in complex scenarios with low inter-class and high intra-class variability. To address these issues, this paper introduces a novel extraction method called TDFNet. We first replace the ViT with V-Mamba, which has linear complexity, and combine it with a CNN for feature extraction. A bidirectional fusion module (BFM) is then designed to comprehensively integrate spatial details and global information, enabling accurate identification of boundaries between adjacent buildings and maintaining the structural integrity of buildings to avoid internal holes. During decoding, we propose an encoder-decoder fusion module (EDFM) to initially merge features from different stages of the encoder and decoder, diminishing the model's susceptibility to non-building elements whose features resemble those of buildings and reducing erroneous extractions. Subsequently, a twice-decoding strategy is implemented to significantly enhance the learning of multi-scale features, mitigating the impact of tree occlusions and shadows. Our method yields state-of-the-art (SOTA) performance on three public building datasets.
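
The BFM is described only by its goal of integrating spatial details with global information. Mutual gating of a CNN stream and a V-Mamba stream is one generic way to realize that; the sketch below is an assumption about the general idea, not TDFNet's actual module.

```python
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    """Hypothetical mutual-gating fusion: each stream is reweighted by a
    gate computed from the other, then the two are merged by a 1x1 conv."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate_local = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_global = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_cnn: torch.Tensor, f_mamba: torch.Tensor) -> torch.Tensor:
        # f_cnn carries spatial detail, f_mamba carries global context.
        detail = f_cnn * self.gate_local(f_mamba)    # context gates detail
        context = f_mamba * self.gate_global(f_cnn)  # detail gates context
        return self.merge(torch.cat([detail, context], dim=1))

fused = BidirectionalFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```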

Keyword :

Building extraction; remote sensing; twice decoding; V-Mamba

Cite:


GB/T 7714 Wang, Wenlong, Yu, Peng, Li, Mengmeng, et al. TDFNet: twice decoding V-Mamba-CNN Fusion features for building extraction [J]. | GEO-SPATIAL INFORMATION SCIENCE, 2025.
MLA Wang, Wenlong, et al. "TDFNet: twice decoding V-Mamba-CNN Fusion features for building extraction." | GEO-SPATIAL INFORMATION SCIENCE (2025).
APA Wang, Wenlong, Yu, Peng, Li, Mengmeng, Zhong, Xiaojing, He, Yuanrong, Su, Hua, et al. TDFNet: twice decoding V-Mamba-CNN Fusion features for building extraction. | GEO-SPATIAL INFORMATION SCIENCE, 2025.

Version :

TDFNet: twice decoding V-Mamba-CNN Fusion features for building extraction Scopus
Journal Article | 2025 | Geo-Spatial Information Science
Dual Fine-Grained network with frequency Transformer for change detection on remote sensing images SCIE
Journal Article | 2025, 136 | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION
WoS CC Cited Count: 2

Abstract :

Change detection is a fundamental yet challenging task in remote sensing, crucial for monitoring urban expansion, land use changes, and environmental dynamics. However, compared with common color images, objects in remote sensing images exhibit minimal inter-class variation and significant intra-class variation in the spectral dimension, with obvious scale inconsistency in the spatial dimension. This complexity presents significant challenges, including differentiating similar objects, accounting for scale variations, and identifying pseudo changes. This research introduces a dual fine-grained network with a frequency Transformer (FTransDF-Net) to address these issues. Specifically, for small-scale ground objects with similar spectra, the network employs an encoder-decoder architecture consisting of dual fine-grained gated (DFG) modules. This enables the extraction and fusion of fine-grained information across two feature dimensions, facilitating a comprehensive analysis of their differences and correlations and achieving a dynamic fusion representation of salient information. Additionally, we develop a lightweight frequency Transformer (LFT) with minimal parameters for detecting large-scale ground objects that undergo significant changes over time. This is achieved by incorporating a frequency attention (FA) module, which utilizes the Fourier transform to model long-range dependencies and combines global adaptive attentive features with multi-level fine-grained features. Comparative experiments on four publicly available datasets demonstrate that FTransDF-Net achieves leading results; it outperforms the best comparison method by 1.23% and 2.46% in IoU on CDD and DSIFN, respectively. The efficacy of each module is further substantiated through ablation experiments. The code is accessible at https://github.com/LeeThrzz/FTrans-DF-Net.
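
The FA module is said to use the Fourier transform to model long-range dependencies. The simplest instance of that idea is FNet-style FFT token mixing, sketched below purely as an illustration of the mechanism; it is not claimed to be the paper's module.

```python
import torch
import torch.nn as nn

class FourierMixing(nn.Module):
    """Hypothetical frequency-domain mixing in the spirit of FNet: a 2-D FFT
    mixes every token with every other token at O(N log N) cost, giving
    global (long-range) interactions without quadratic attention."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim). Keep only the real part, as FNet does.
        mixed = torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real
        return self.norm(x + mixed)

y = FourierMixing(64)(torch.randn(2, 196, 64))
```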

Keyword :

Change detection; Dual fine-grained; Frequency transformer; Remote sensing

Cite:


GB/T 7714 Li, Zhen, Zhang, Zhenxin, Li, Mengmeng, et al. Dual Fine-Grained network with frequency Transformer for change detection on remote sensing images [J]. | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2025, 136.
MLA Li, Zhen, et al. "Dual Fine-Grained network with frequency Transformer for change detection on remote sensing images." | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION 136 (2025).
APA Li, Zhen, Zhang, Zhenxin, Li, Mengmeng, Zhang, Liqiang, Peng, Xueli, He, Rixing, et al. Dual Fine-Grained network with frequency Transformer for change detection on remote sensing images. | INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2025, 136.

Version :

Dual Fine-Grained network with frequency Transformer for change detection on remote sensing images Scopus
Journal Article | 2025, 136 | International Journal of Applied Earth Observation and Geoinformation
Urban Land Use Classification with Street View Image-Assisted Remote Sensing Images EI
Conference Paper | 2025, 230-233 | 6th International Conference on Geology, Mapping and Remote Sensing, ICGMRS 2025

Abstract :

Accurate land use classification is essential for promoting sustainable urban growth, preserving the environment, safeguarding public health, and enhancing socioeconomic prosperity. This paper explores street view image (SVI)-assisted remote sensing image (RSI) classification for urban land use. We introduce a dual-stream network integrating a large pre-trained model and a CNN for feature extraction, termed the Large-VGG Dual-Stream Network (LVDNet): RSI semantic features are extracted by a CNN backbone, while SVI semantic features are obtained from a large model trained on real-world data, with feature fusion achieved through cross-learning. We constructed a new dataset, ZZ, for the experiments, in which the spatial correspondence between street view images and remote sensing imagery is established. Our findings indicate that the proposed method performs effectively on this dataset and demonstrate that SVIs significantly enhance the accuracy of urban land use classification, particularly in identifying complex urban functional areas. © 2025 IEEE.
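
As a sketch of the dual-stream idea (a trainable CNN for the RSI patch, a frozen large pre-trained encoder for the co-located SVI, fused before classification), here is a minimal PyTorch skeleton. The simple concatenation head stands in for the paper's cross-learning fusion, and all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DualStreamClassifier(nn.Module):
    """Hypothetical dual-stream land-use classifier: a trainable CNN encodes
    the remote sensing patch while a frozen, pre-trained encoder embeds the
    co-located street view image; the two embeddings are fused for the label."""
    def __init__(self, rsi_encoder: nn.Module, svi_encoder: nn.Module,
                 rsi_dim: int, svi_dim: int, n_classes: int):
        super().__init__()
        self.rsi_encoder = rsi_encoder
        self.svi_encoder = svi_encoder.eval()
        for p in self.svi_encoder.parameters():   # the large model stays frozen
            p.requires_grad = False
        self.head = nn.Linear(rsi_dim + svi_dim, n_classes)

    def forward(self, rsi: torch.Tensor, svi: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            svi_feat = self.svi_encoder(svi)
        return self.head(torch.cat([self.rsi_encoder(rsi), svi_feat], dim=-1))

# Tiny stand-in encoders; in practice the SVI branch would be a foundation model.
rsi_enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
svi_enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = DualStreamClassifier(rsi_enc, svi_enc, 16, 16, n_classes=8)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```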

Keyword :

Classification (of information); Image classification; Land use; Public health; Remote sensing; Semantics; Semantic Web; Urban growth

Cite:


GB/T 7714 Yu, Zheyao, Li, Mengmeng, Cai, Xiaojiao. Urban Land Use Classification with Street View Image-Assisted Remote Sensing Images [C]. 2025: 230-233.
MLA Yu, Zheyao, et al. "Urban Land Use Classification with Street View Image-Assisted Remote Sensing Images." (2025): 230-233.
APA Yu, Zheyao, Li, Mengmeng, Cai, Xiaojiao. Urban Land Use Classification with Street View Image-Assisted Remote Sensing Images. (2025): 230-233.

Version :

Urban Land Use Classification with Street View Image-Assisted Remote Sensing Images Scopus
Other | 2025, 230-233 | 2025 6th International Conference on Geology, Mapping and Remote Sensing, ICGMRS 2025
Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy Scopus
Journal Article | 2025 | IEEE Geoscience and Remote Sensing Letters

Abstract :

Semantic change detection (SCD) in high-resolution (HR) remote sensing images faces two issues: (1) an isolated network branch for binary change detection (BCD) within a multi-task architecture results in suboptimal SCD performance; (2) illumination differences or seasonal transitions cause false alarms or missed detections. To address these issues, this study proposes a bi-temporal binary change enhancement network (Bi-BCENet). Specifically, we introduce a binary change enhancement (BCE) strategy based on multi-network joint learning that achieves superior SCD by improving the prediction of change areas. Within the network's reasoning process, we develop a cross-attention fusion module (CAFM) to enhance global similarity modeling via cross-network prompt fusion, and we employ a cosine similarity-based auxiliary loss to optimize the semantic consistency of non-change areas. Experiments on the SECOND and CINA-FX datasets demonstrate that Bi-BCENet outperforms representative SCD networks, achieving 62.08% and 84.95% in F_SCD and 66.88% and 83.10% in mIoU_SCD, respectively. Ablation analysis further validates Bi-BCENet's effectiveness in reducing false alarms and missed detections in SCD results. Moreover, for cropland-specific SCD, Bi-BCENet shows strong potential in single-to-multi SCD. © 2004-2012 IEEE.
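
The cosine similarity-based auxiliary loss is named but not written out. A plausible form, penalising bi-temporal feature drift only in pixels labelled as unchanged, is sketched below under that assumption; the function name and masking convention are illustrative.

```python
import torch
import torch.nn.functional as F

def nonchange_consistency_loss(feat_t1: torch.Tensor,
                               feat_t2: torch.Tensor,
                               change_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical auxiliary loss: in non-change pixels the bi-temporal
    semantic features should stay aligned, i.e. cosine similarity near 1.

    feat_t1, feat_t2: (batch, C, H, W) semantic features of the two dates.
    change_mask: (batch, H, W) binary mask, 1 = changed pixel.
    """
    cos = F.cosine_similarity(feat_t1, feat_t2, dim=1)   # (batch, H, W)
    nonchange = (change_mask == 0).float()
    # Average (1 - cos) over non-change pixels only.
    return ((1.0 - cos) * nonchange).sum() / nonchange.sum().clamp(min=1.0)
```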

Keyword :

binary change detection; High-resolution remote sensing; joint learning; semantic change detection

Cite:


GB/T 7714 Lin, H., Wang, X., Wu, Q., et al. Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy [J]. | IEEE Geoscience and Remote Sensing Letters, 2025.
MLA Lin, H., et al. "Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy." | IEEE Geoscience and Remote Sensing Letters (2025).
APA Lin, H., Wang, X., Wu, Q., Li, M., Yang, Z., Lou, K. Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy. | IEEE Geoscience and Remote Sensing Letters, 2025.


SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection SCIE
Journal Article | 2025, 22 | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS

Abstract :

Change detection in remote sensing images is crucial for assessing human activity impacts and supporting government decision-making. In practice, however, obtaining bitemporal remote sensing images under consistent conditions is highly limited, and existing change detection methods still face two main challenges: 1) in real-world scenarios, inconsistent sensor and lighting conditions cause significant style (visual appearance) differences between bitemporal remote sensing images, leading to false changes and reducing change detection accuracy; and 2) remote sensing images contain complex semantic information, and complex scenarios such as shadow occlusion and seasonal vegetation changes make it difficult for existing methods to capture features relevant to change areas. To address these challenges, we propose a style consistency enhanced differential network (SCEDNet) to eliminate style discrepancies between temporally distinct images and enhance the semantic information of change features. Specifically, we introduce a style consistency module (SCM) in the encoder to extract consistent features by computing the mean and variance of temporal features. We then introduce an enhanced differential module (EDM) to enhance change semantics, tackling mislocalization and incomplete regions in complex cases such as shadow occlusion and seasonal vegetation changes. In addition, we design a gate fusion upsampling (GFU) and a change refine module (CRM) in the decoder to integrate multilevel differential features carrying different semantic information and to highlight key changes, further improving change detection performance. Experiments on the CDD and GZ_CD datasets show that SCEDNet outperforms eight methods, achieving F1-scores of 95.59% and 90.41%, respectively. Code and datasets are available at https://github.com/Yzwfff/SCEDNet
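
The SCM is described as computing the mean and variance of temporal features to extract consistent ones, which is the recipe of instance-statistics (AdaIN-style) alignment. The sketch below re-normalises both temporal features to shared per-channel statistics; the shared-target choice and names are assumptions, not the paper's exact module.

```python
import torch

def align_styles(feat_t1: torch.Tensor, feat_t2: torch.Tensor, eps: float = 1e-5):
    """Hypothetical style alignment: re-normalise both temporal features to
    shared per-channel statistics so style (illumination/sensor) differences
    are suppressed while content is kept.

    feat_t1, feat_t2: (batch, C, H, W).
    """
    def stats(f):
        mu = f.mean(dim=(2, 3), keepdim=True)
        var = f.var(dim=(2, 3), keepdim=True)
        return mu, var

    mu1, var1 = stats(feat_t1)
    mu2, var2 = stats(feat_t2)
    mu, var = (mu1 + mu2) / 2, (var1 + var2) / 2   # shared "style" target
    z1 = (feat_t1 - mu1) / (var1 + eps).sqrt()     # strip each image's style
    z2 = (feat_t2 - mu2) / (var2 + eps).sqrt()
    return z1 * (var + eps).sqrt() + mu, z2 * (var + eps).sqrt() + mu
```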

Keyword :

Change detection; deep learning; enhanced differential features; remote sensing images; style consistency

Cite:


GB/T 7714 Yang, Zhiwei, Wang, Xiaoqin, Li, Mengmeng, et al. SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection [J]. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2025, 22.
MLA Yang, Zhiwei, et al. "SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection." | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 22 (2025).
APA Yang, Zhiwei, Wang, Xiaoqin, Li, Mengmeng, Long, Jiang. SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2025, 22.

Version :

SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection EI
Journal Article | 2025, 22 | IEEE Geoscience and Remote Sensing Letters
SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection Scopus
Journal Article | 2025, 22 | IEEE Geoscience and Remote Sensing Letters
Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model SCIE
Journal Article | 2025, 231 | COMPUTERS AND ELECTRONICS IN AGRICULTURE
WoS CC Cited Count: 1

Abstract :

Precise information on agricultural parcels is crucial for effective farm management, crop mapping, and monitoring. Current techniques often encounter difficulties in automatically delineating vectorized parcels from remote sensing images, especially in irregularly shaped areas, making it challenging to derive closed and vectorized boundaries. To address this, we treat parcel delineation as identifying valid parcel vertices from remote sensing images to generate parcel polygons. We introduce a Point-Line-Region interactive multitask network (PLR-Net) that jointly learns semantic features of parcel vertices, boundaries, and regions through point-, line-, and region-related subtasks within a multitask learning framework. We derived an attraction field map (AFM) to enhance the feature representation of parcel boundaries and improve the detection of parcel regions while maintaining high geometric accuracy. The point-related subtask focuses on learning features of parcel vertices to obtain preliminary vertices, which are then refined based on detected boundary pixels to derive valid parcel vertices for polygon generation. We designed a spatial and channel excitation module for feature interaction to enhance interactions between points, lines, and regions. Finally, the generated parcel polygons are refined using the Douglas-Peucker algorithm to regularize polygon shapes. We evaluated PLR-Net using high-resolution GF-2 satellite images from the Shandong, Xinjiang, and Sichuan provinces of China and medium-resolution Sentinel-2 images from the Netherlands. Results showed that our method outperformed existing state-of-the-art techniques (e.g., BsiNet, SEANet, and HiSup) in pixel- and object-based geometric accuracy across all datasets, achieving the highest IoU and polygonal average precision on the GF-2 datasets (e.g., 90.84% and 82.00% in Xinjiang) and on the Sentinel-2 dataset (75.86% and 47.1%). Moreover, when trained on the Xinjiang dataset, the model successfully transferred to the Shandong dataset, achieving an IoU score of 83.98%. These results demonstrate that PLR-Net is an accurate, robust, and transferable method suitable for extracting vectorized parcels from diverse regions and types of remote sensing images. The source code of our model is available at https://github.com/mengmengli01/PLR-Net-demo/tree/main.
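
The final regularization step uses the Douglas-Peucker algorithm, which Shapely's simplify() implements, so that step can be reproduced in a few lines. The tolerance value below is an assumed placeholder, not a setting from the paper.

```python
from shapely.geometry import Polygon

def regularize_parcel(vertices, tolerance: float = 1.5) -> Polygon:
    """Simplify a predicted parcel outline with Douglas-Peucker, as the paper
    does for its final polygons. Shapely's simplify() implements
    Douglas-Peucker; the tolerance (in pixels or metres) is an assumed value.

    vertices: list of (x, y) tuples tracing the parcel boundary.
    """
    poly = Polygon(vertices)
    return poly.simplify(tolerance, preserve_topology=True)

# Usage: a jagged, near-rectangular parcel collapses to a few vertices.
jagged = [(0, 0), (10, 0.2), (20, 0), (20.1, 10), (20, 20), (0, 19.8)]
print(list(regularize_parcel(jagged).exterior.coords))
```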

Keyword :

Agricultural parcel delineation; Multitask neural networks; PLR-Net; Point-line-region interactive; Vectorized parcels

Cite:


GB/T 7714 Li, Mengmeng, Lu, Chengwen, Lin, Mengjing, et al. Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model [J]. | COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2025, 231.
MLA Li, Mengmeng, et al. "Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model." | COMPUTERS AND ELECTRONICS IN AGRICULTURE 231 (2025).
APA Li, Mengmeng, Lu, Chengwen, Lin, Mengjing, Xiu, Xiaolong, Long, Jiang, Wang, Xiaoqin. Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model. | COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2025, 231.

Version :

Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model EI
Journal Article | 2025, 231 | Computers and Electronics in Agriculture
Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model Scopus
Journal Article | 2025, 231 | Computers and Electronics in Agriculture
Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting SCIE
Journal Article | 2025, 12 (11), 15057-15072 | IEEE INTERNET OF THINGS JOURNAL

Abstract :

Accurate traffic forecasting is one of the key applications within Internet of Things (IoT)-based intelligent transportation systems (ITS), playing a vital role in enhancing traffic quality, optimizing public transportation, and planning infrastructure. However, existing spatial-temporal methods encounter two primary limitations: 1) they have difficulty differentiating samples over time and often ignore dependencies among road network nodes at different time scales; and 2) they are limited in capturing dynamic spatial correlations with predefined and adaptive graphs. To overcome these limitations, we introduce a novel temporal identity interaction dynamic graph convolutional network (TIIDGCN) for traffic forecasting. The central concept is to assign temporal identity features to the raw data and extract distinctive, representative spatial-temporal features through multiscale interactive learning. Specifically, we design a multiscale interactive model incorporating both spatial and temporal components, which explores spatial-temporal patterns at various scales from macro to micro and facilitates their mutual enhancement through positive feedback mechanisms. For the spatial component, we design a new dynamic graph learning method to depict the changing dependencies among nodes. We conduct comprehensive experiments on four real-world traffic datasets (PeMS04/07/08 and NYCTaxi Drop-off/Pick-up). Notably, TIIDGCN achieves a 16.46% reduction in mean absolute error compared to the Spatial-Temporal Graph Attention Gated Recurrent Transformer Network model on the PeMS08 dataset.
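
"Temporal identity features" and a learned dynamic graph are described only abstractly. The sketch below shows the common recipe those phrases usually point to in the traffic-forecasting literature (learned time-of-day and day-of-week embeddings, plus an adjacency derived from learned node embeddings, as popularised by Graph WaveNet/AGCRN); it is an assumption about the general technique, not TIIDGCN itself.

```python
import torch
import torch.nn as nn

class TemporalIdentityAdaptiveGraph(nn.Module):
    """Hypothetical sketch: attach learned time-slot and day-of-week identity
    embeddings to each sample, and derive a dynamic adjacency from learned
    node embeddings (softmax over their similarity)."""
    def __init__(self, n_nodes: int, steps_per_day: int, id_dim: int, node_dim: int):
        super().__init__()
        self.tod = nn.Embedding(steps_per_day, id_dim)   # time-of-day identity
        self.dow = nn.Embedding(7, id_dim)               # day-of-week identity
        self.node_emb = nn.Parameter(torch.randn(n_nodes, node_dim))

    def forward(self, x, tod_idx, dow_idx):
        # x: (batch, nodes, feat); tod_idx, dow_idx: (batch,) integer indices.
        b, n, _ = x.shape
        ids = torch.cat([self.tod(tod_idx), self.dow(dow_idx)], dim=-1)
        x = torch.cat([x, ids.unsqueeze(1).expand(b, n, -1)], dim=-1)
        # Learned adjacency from node-embedding similarity (Graph WaveNet style).
        adj = torch.softmax(torch.relu(self.node_emb @ self.node_emb.T), dim=-1)
        return adj @ x          # one step of graph mixing with the learned graph

m = TemporalIdentityAdaptiveGraph(n_nodes=207, steps_per_day=288, id_dim=8, node_dim=16)
out = m(torch.randn(4, 207, 2), torch.randint(0, 288, (4,)), torch.randint(0, 7, (4,)))
```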

Keyword :

Adaptation models; Correlation; Data models; Dictionaries; Feature extraction; Forecasting; Graph convolutional network (GCN); Internet of Things; multiscale interaction; Roads; Time series analysis; traffic forecasting; Training

Cite:


GB/T 7714 Yang, Shiyu, Wu, Qunyong, Li, Mengmeng, et al. Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting [J]. | IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (11): 15057-15072.
MLA Yang, Shiyu, et al. "Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting." | IEEE INTERNET OF THINGS JOURNAL 12.11 (2025): 15057-15072.
APA Yang, Shiyu, Wu, Qunyong, Li, Mengmeng, Sun, Yu. Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting. | IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (11), 15057-15072.

Version :

Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting EI
Journal Article | 2025, 12 (11), 15057-15072 | IEEE Internet of Things Journal
Temporal Identity Interaction Dynamic Graph Convolutional Network for Traffic Forecasting in IoT-Based ITS Scopus
Journal Article | 2025, 12 (11), 15057-15072 | IEEE Internet of Things Journal