Results Search

Query:

Scholar name: 李蒙蒙 (Li Mengmeng)

基于面向对象孪生神经网络的高分辨率遥感影像建筑物变化检测 (Building change detection from high-resolution remote sensing images based on an object-oriented Siamese neural network) CSCD PKU
Journal article | 2024, 28 (02), 437-454 | 遥感学报

Abstract :

Building change detection plays an important role in applications such as urban environmental monitoring, land planning and management, and the identification of illegal or non-compliant buildings. To address the poor agreement between detected boundaries and actual boundaries that conventional Siamese neural networks exhibit in image change detection, this paper combines object-based image analysis and proposes a change detection method for high-resolution remote sensing images based on an object-oriented Siamese neural network (Obj-SiamNet). It uses fuzzy set theory to automatically fuse multi-scale change detection results and a generative adversarial network to transfer training samples. The method was applied to GaoFen-2 and GaoFen-7 high-resolution satellite images and compared with a spatio-temporal self-attention change detection model (STANet), a visual change detection network (ChangeNet), and a Siamese nested UNet model (Siam-NestedUNet). Results show that: (1) compared with single-scale segmentation, fusing object-based multi-scale segmentation results raised recall by up to 32%, raised the F1 score by up to 25%, and reduced the global total error (GTC) by up to 7%; (2) with a limited number of samples, sample transfer through the generative adversarial network raised recall by up to 16%, raised the F1 score by up to 14%, and reduced GTC by 9% compared with detection without sample transfer; (3) compared with the other change detection methods, Obj-SiamNet improved overall detection accuracy, raising the F1 score by up to 23% and reducing GTC by up to 9%. The method effectively improves the geometric and attribute accuracy of building change detection, makes effective use of open geographic datasets, lowers the cost of producing training samples, and improves detection efficiency and applicability.
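
To illustrate the fusion step described in the abstract, the following Python sketch fuses change-probability maps obtained at several segmentation scales through a weighted fuzzy-membership average and a decision threshold. The membership treatment, the per-scale weights, and the 0.5 threshold are illustrative assumptions, not the fusion rule actually used by Obj-SiamNet.

import numpy as np

def fuzzy_fuse_multiscale(prob_maps, weights=None, threshold=0.5):
    """Fuse change-probability maps from several segmentation scales.

    prob_maps : list of 2-D arrays in [0, 1], one per scale (hypothetical inputs).
    weights   : optional per-scale reliability weights; uniform if omitted.
    Returns a binary change map obtained by defuzzifying the fused membership.
    """
    stack = np.stack(prob_maps, axis=0)            # (n_scales, H, W)
    if weights is None:
        weights = np.ones(len(prob_maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()

    # Treat each probability map as a fuzzy membership to the class "changed"
    # and fuse by a weighted average (one simple fuzzy aggregation operator).
    fused_membership = np.tensordot(weights, stack, axes=1)   # (H, W)

    # Defuzzify: a pixel is labelled "changed" if its fused membership
    # exceeds the decision threshold.
    return (fused_membership > threshold).astype(np.uint8)

# Toy usage with random maps standing in for multi-scale network outputs.
maps = [np.random.rand(64, 64) for _ in range(3)]
change_mask = fuzzy_fuse_multiscale(maps, weights=[0.5, 0.3, 0.2])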

Keyword :

Siamese neural network; fuzzy set fusion; generative adversarial network; remote sensing change detection; object-oriented multi-scale analysis; high-resolution remote sensing imagery

Cite:

GB/T 7714 刘宣广 , 李蒙蒙 , 汪小钦 et al. 基于面向对象孪生神经网络的高分辨率遥感影像建筑物变化检测 [J]. | 遥感学报 , 2024 , 28 (02) : 437-454 .
MLA 刘宣广 et al. "基于面向对象孪生神经网络的高分辨率遥感影像建筑物变化检测" . | 遥感学报 28 . 02 (2024) : 437-454 .
APA 刘宣广 , 李蒙蒙 , 汪小钦 , 张振超 . 基于面向对象孪生神经网络的高分辨率遥感影像建筑物变化检测 . | 遥感学报 , 2024 , 28 (02) , 437-454 .

Where is tea grown in the world: A robust mapping framework for agroforestry crop with knowledge graph and sentinels images SCIE
Journal article | 2024, 303 | REMOTE SENSING OF ENVIRONMENT
WoS CC Cited Count: 2

Abstract :

Tea trees (Camellia sinensis), a quintessential homestead agroforestry crop cultivated in over 60 countries, hold significant economic and social importance as a vital specialty cash crop. Accurate nationwide crop data is imperative for effective agricultural management and resource regulation. However, many regions grapple with a lack of agroforestry cash crop data, impeding sustainable development and poverty eradication, especially in economically underdeveloped countries. The large-scale mapping of tea plantations faces substantial limitations and challenges due to their sparse distribution compared to field crops, unfamiliar characteristics, and spectral confusion among various land cover types (e.g., forests, orchards, and farmlands). To address these challenges, we developed the Manual management And Phenolics substance-based Tea mapping (MAP-Tea) framework by harnessing Sentinel-1/2 time series images for automated tea plantation mapping. Tea trees exhibit higher phenolic content, evergreen characteristics, and multiple shoot sprouting, resulting in extensive canopy coverage, stable soil exposure, and radar backscatter signals disturbed by frequent picking activities. We developed three phenology-based indicators focusing on phenolic content, vegetation coverage, and canopy texture, leveraging the temporal features of vegetation, pigments, soil, and radar backscattering. Characteristics of biochemical substance content and manual management measures were applied to tea mapping for the first time. The MAP-Tea framework successfully generated China's first updated 10 m resolution tea plantation map in 2022. It achieved an overall accuracy of 94.87% based on 16,712 reference samples, with a kappa coefficient of 0.83 and an F1 score of 85.63%. Tea trees are typically cultivated in mountainous and hilly areas with a relatively low planting density (averaging about 10%). Alpine tea trees exhibited a notably dense concentration and dominance, mainly found in regions with elevations ranging from 700 m to 2000 m and slopes between 2 and 18 degrees. The areas with low altitudes and slopes hold the largest tea plantation area and output. As the slope increased, there was a gradual decline in the dominance of tea areas. The results suggest good potential for knowledge-based approaches that combine biochemical substance content and human activities for national-scale tea plantation mapping in complex environmental conditions and challenging landscapes, providing an important reference for mapping other agroforestry crops. This study contributes significantly to advancing the achievement of the Sustainable Development Goals (SDGs), considering the crucial role that agroforestry crops play in fostering economic growth and alleviating poverty. The first 10 m national tea tree data product for China with good accuracy (ChinaTea10m) is publicly accessible at https://doi.org/10.6084/m9.figshare.25047308.
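
As a schematic of phenology-based indicators, the Python sketch below derives an evergreen-canopy flag from a monthly NDVI-like series and a disturbance flag from the variability of a Sentinel-1 VH series, then combines them with simple threshold rules. The band maths, the thresholds, and the decision rule are hypothetical placeholders rather than the MAP-Tea indicators themselves.

import numpy as np

def tea_candidate_mask(red, nir, vh, ndvi_min=0.45, ndvi_amp_max=0.25, vh_std_min=1.0):
    """Flag pixels whose annual behaviour is consistent with an evergreen,
    frequently picked plantation crop.

    red, nir : arrays of shape (T, H, W) with monthly optical reflectance.
    vh       : array of shape (T, H, W) with monthly Sentinel-1 VH backscatter (dB).
    All thresholds are hypothetical placeholders.
    """
    ndvi = (nir - red) / (nir + red + 1e-6)          # (T, H, W)

    # Evergreen canopy: NDVI stays green all year and varies little.
    evergreen = (ndvi.min(axis=0) > ndvi_min) & \
                (ndvi.max(axis=0) - ndvi.min(axis=0) < ndvi_amp_max)

    # Frequent manual picking: comparatively unstable radar backscatter.
    disturbed = vh.std(axis=0) > vh_std_min

    return evergreen & disturbed

# Toy usage with random arrays standing in for a 12-month image stack.
T, H, W = 12, 32, 32
mask = tea_candidate_mask(np.random.rand(T, H, W), np.random.rand(T, H, W),
                          np.random.randn(T, H, W))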

Keyword :

Agroforestry crop mapping; Phenology-based algorithm; Sentinel-1/2; Special cash crop; Tea plantation

Cite:

GB/T 7714 Peng, Yufeng , Qiu, Bingwen , Tang, Zhenghong et al. Where is tea grown in the world: A robust mapping framework for agroforestry crop with knowledge graph and sentinels images [J]. | REMOTE SENSING OF ENVIRONMENT , 2024 , 303 .
MLA Peng, Yufeng et al. "Where is tea grown in the world: A robust mapping framework for agroforestry crop with knowledge graph and sentinels images" . | REMOTE SENSING OF ENVIRONMENT 303 (2024) .
APA Peng, Yufeng , Qiu, Bingwen , Tang, Zhenghong , Xu, Weiming , Yang, Peng , Wu, Wenbin et al. Where is tea grown in the world: A robust mapping framework for agroforestry crop with knowledge graph and sentinels images . | REMOTE SENSING OF ENVIRONMENT , 2024 , 303 .

基于面向对象CNN和RF的不同空间分辨率遥感影像农业大棚提取研究 (Agricultural greenhouse extraction from remote sensing images of different spatial resolutions based on object-oriented CNN and RF) CSCD PKU
Journal article | 2024, 39 (02), 315-327 | 遥感技术与应用

Abstract :

Remote sensing has become an important means of rapidly and effectively obtaining information on agricultural greenhouse coverage, but the spatial resolution of remote sensing imagery has a two-sided effect on extraction accuracy, so selecting imagery of an appropriate resolution is of great significance. Taking agricultural plastic greenhouses in southern China as the study object, GF-1, GF-2, and Sentinel-2 data were used to build six image datasets with spatial resolutions between 1 m and 16 m. Based on Object-Based Image Analysis (OBIA), greenhouses were extracted with an object-oriented Convolutional Neural Network (CNN) method and a Random Forest (RF) method, and the extraction accuracy and the differences between the two methods were analyzed. Results show that: (1) for both the CNN and RF methods, extraction accuracy generally declines as image resolution decreases, yet greenhouses can still be detected on imagery from 1 m to 16 m; (2) compared with RF, the CNN method is more demanding of spatial resolution: at 1-2 m resolution it produces fewer omission and commission errors, whereas at 4 m and coarser resolutions RF is more applicable; (3) 2 m resolution imagery is the optimal spatial resolution for greenhouse extraction and enables cost-effective greenhouse monitoring.
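
The random forest branch of the comparison can be sketched as follows: a scikit-learn classifier trained on per-object features (for example, mean band values and texture statistics computed after segmentation). The feature table and hyperparameters here are assumed for illustration and do not reproduce the study's settings.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-object feature table: rows are image objects from the
# segmentation step, columns are spectral/texture statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # e.g. mean bands, std, GLCM statistics
y = rng.integers(0, 2, size=500)         # 1 = greenhouse object, 0 = other

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

print(classification_report(y_test, rf.predict(X_test)))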

Keyword :

Agricultural greenhouse extraction; spatial resolution; random forest; object-oriented CNN method; high-resolution remote sensing data

Cite:

GB/T 7714 林欣怡 , 汪小钦 , 汤紫霞 et al. 基于面向对象CNN和RF的不同空间分辨率遥感影像农业大棚提取研究 [J]. | 遥感技术与应用 , 2024 , 39 (02) : 315-327 .
MLA 林欣怡 et al. "基于面向对象CNN和RF的不同空间分辨率遥感影像农业大棚提取研究" . | 遥感技术与应用 39 . 02 (2024) : 315-327 .
APA 林欣怡 , 汪小钦 , 汤紫霞 , 李蒙蒙 , 吴瑞姣 , 黄德华 . 基于面向对象CNN和RF的不同空间分辨率遥感影像农业大棚提取研究 . | 遥感技术与应用 , 2024 , 39 (02) , 315-327 .

Characterizing Intercity Mobility Patterns for the Greater Bay Area in China SCIE
Journal article | 2023, 12 (1) | ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION
WoS CC Cited Count: 6

Abstract :

Understanding intercity mobility patterns is important for future urban planning, in which the intensity of intercity mobility indicates the degree of urban integration development. This study investigates the intercity mobility patterns of the Greater Bay Area (GBA) in China. The proposed workflow starts by analyzing intercity mobility characteristics, proceeds to model the spatial-temporal heterogeneity of intercity mobility structures, and then identifies the intercity mobility patterns. We first conduct a complex network analysis, based on weighted degrees and the PageRank algorithm, to measure intercity mobility characteristics. Next, we calculate the Normalized Levenshtein Distance for Population Mobility Structure (NLPMS) to quantify the differences in intercity mobility structures, and we use the Non-negative Matrix Factorization (NMF) to identify intercity mobility patterns. Our results showed an evident 'Core-Periphery' differentiation characterized by intercity mobility, with Guangzhou and Shenzhen as the two core cities. An obvious daily intercity commuting pattern was found between Guangzhou and Foshan, and between Shenzhen and Dongguan cities at working time. This pattern, however, changes during the holidays. This is because people move from the core cities to peripheral cities at the beginning of holidays and return at the end of holidays. This study concludes that Guangzhou and Foshan have formed a relatively stable intercity mobility pattern, and the Shenzhen-Dongguan-Huizhou metropolitan area has been gradually formed.
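
The network-analysis and factorization steps can be sketched with networkx and scikit-learn as below. The toy origin-destination matrix, the city subset, and the number of NMF components are illustrative assumptions, not the study's Baidu migration data or parameter choices; the NLPMS computation is omitted.

import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

cities = ["Guangzhou", "Shenzhen", "Foshan", "Dongguan"]
# Hypothetical daily origin-destination flows (rows: origin, cols: destination).
od = np.array([[0, 30, 80, 10],
               [25, 0, 5, 70],
               [75, 5, 0, 5],
               [10, 65, 5, 0]], dtype=float)

# Weighted directed mobility network: node importance via weighted degree and PageRank.
G = nx.DiGraph()
for i, src in enumerate(cities):
    for j, dst in enumerate(cities):
        if i != j and od[i, j] > 0:
            G.add_edge(src, dst, weight=od[i, j])
weighted_degree = dict(G.degree(weight="weight"))
pagerank = nx.pagerank(G, weight="weight")

# Non-negative matrix factorization of daily OD snapshots stacked as rows
# (here: five fake days) to extract a small number of recurring mobility patterns.
daily_od = np.stack([od.flatten() * (1 + 0.1 * d) for d in range(5)])
nmf = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
day_weights = nmf.fit_transform(daily_od)     # how strongly each day expresses a pattern
patterns = nmf.components_.reshape(2, 4, 4)   # each pattern is itself an OD matrix

print(pagerank)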

Keyword :

Baidu migration data; intercity mobility patterns; matrix factorization; spatial-temporal heterogeneity; urban integration

Cite:

GB/T 7714 Yin, Yanzhong , Wu, Qunyong , Li, Mengmeng . Characterizing Intercity Mobility Patterns for the Greater Bay Area in China [J]. | ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION , 2023 , 12 (1) .
MLA Yin, Yanzhong et al. "Characterizing Intercity Mobility Patterns for the Greater Bay Area in China" . | ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 12 . 1 (2023) .
APA Yin, Yanzhong , Wu, Qunyong , Li, Mengmeng . Characterizing Intercity Mobility Patterns for the Greater Bay Area in China . | ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION , 2023 , 12 (1) .

Height estimation from single aerial imagery using contrastive learning based multi-scale refinement network SCIE
Journal article | 2023, 16 (1), 2346-2364 | INTERNATIONAL JOURNAL OF DIGITAL EARTH
WoS CC Cited Count: 2

Abstract :

Height map estimation from a single aerial image plays a crucial role in localization, mapping, and 3D object detection. Deep convolutional neural networks have been used to predict height information from single-view remote sensing images, but these methods rely on large volumes of training data and often overlook geometric features present in orthographic images. To address these issues, this study proposes a gradient-based self-supervised learning network with momentum contrastive loss to extract geometric information from non-labeled images in the pretraining stage. Additionally, novel local implicit constraint layers are used at multiple decoding stages in the proposed supervised network to refine high-resolution features in height estimation. The structural-aware loss is also applied to improve the robustness of the network to positional shift and minor structural changes along the boundary area. Experimental evaluation on the ISPRS benchmark datasets shows that the proposed method outperforms other baseline networks, with minimum MAE and RMSE of 0.116 and 0.289 for the Vaihingen dataset and 0.077 and 0.481 for the Potsdam dataset, respectively. The proposed method also shows around threefold data efficiency improvements on the Potsdam dataset and domain generalization on the Enschede datasets. These results demonstrate the effectiveness of the proposed method in height map estimation from single-view remote sensing images.
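
The self-supervised pretraining stage relies on a momentum-contrastive objective. The snippet below shows a generic InfoNCE loss in PyTorch for a batch of query/positive-key embeddings against a memory queue of negatives; it is a textbook-style sketch under assumed tensor shapes, not the gradient-based network proposed in the paper.

import torch
import torch.nn.functional as F

def info_nce_loss(query, positive_key, queue, temperature=0.07):
    """Generic InfoNCE loss used in momentum-contrast style pretraining.

    query        : (N, D) embeddings from the online encoder.
    positive_key : (N, D) embeddings of the same images from the momentum encoder.
    queue        : (K, D) embeddings of negatives kept in a memory queue.
    """
    query = F.normalize(query, dim=1)
    positive_key = F.normalize(positive_key, dim=1)
    queue = F.normalize(queue, dim=1)

    l_pos = torch.sum(query * positive_key, dim=1, keepdim=True)   # (N, 1)
    l_neg = query @ queue.t()                                       # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature

    # The positive key sits at index 0 of every row.
    labels = torch.zeros(query.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(1024, 128))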

Keyword :

aerial imagery; contrastive learning; digital surface models; height estimation; local implicit constraint

Cite:

GB/T 7714 Zhao, Wufan , Ding, Hu , Na, Jiaming et al. Height estimation from single aerial imagery using contrastive learning based multi-scale refinement network [J]. | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2023 , 16 (1) : 2346-2364 .
MLA Zhao, Wufan et al. "Height estimation from single aerial imagery using contrastive learning based multi-scale refinement network" . | INTERNATIONAL JOURNAL OF DIGITAL EARTH 16 . 1 (2023) : 2346-2364 .
APA Zhao, Wufan , Ding, Hu , Na, Jiaming , Li, Mengmeng , Tiede, Dirk . Height estimation from single aerial imagery using contrastive learning based multi-scale refinement network . | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2023 , 16 (1) , 2346-2364 .

Integrating Spatial Details With Long-Range Contexts for Semantic Segmentation of Very High-Resolution Remote-Sensing Images SCIE
Journal article | 2023, 20 | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
WoS CC Cited Count: 4

Abstract :

This letter presents a cross-learning network (i.e., CLCFormer) integrating fine-grained spatial details within long-range global contexts based upon convolutional neural networks (CNNs) and a transformer, for semantic segmentation of very high-resolution (VHR) remote-sensing images. More specifically, CLCFormer comprises two parallel encoders, derived from the CNN and transformer, and a CNN decoder. The encoders use SwinV2 and EfficientNet-B3 as backbones, from which the extracted semantic features are aggregated at multiple levels using a bilateral feature fusion module (BiFFM). First, we used attention gate (ATG) modules to enhance feature representation, improving segmentation results for objects with various shapes and sizes. Second, we used an attention residual (ATR) module to refine the learning of spatial features, alleviating boundary blurring of occluded objects. Finally, we developed a new strategy, called auxiliary supervise strategy (ASS), for model optimization to further improve segmentation performance. Our method was tested on the WHU, Inria, and Potsdam datasets, and compared with CNN-based and transformer-based methods. Results showed that our method achieved state-of-the-art performance on the WHU building dataset (92.31% IoU), Inria building dataset (83.71% IoU), and Potsdam dataset (80.27% MIoU). We concluded that CLCFormer is a flexible, robust, and effective method for the semantic segmentation of VHR images. The codes of the proposed model are available at https://github.com/long123524/CLCFormer.
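
A minimal PyTorch sketch of fusing same-resolution CNN and transformer feature maps is given below: both maps are projected, concatenated, and re-weighted by a simple channel-attention gate. The channel sizes and the gate design are assumptions for illustration and do not reproduce the BiFFM, ATG, or ATR modules of CLCFormer.

import torch
import torch.nn as nn

class SimpleBilateralFusion(nn.Module):
    """Fuse same-resolution CNN and transformer features with a channel gate."""

    def __init__(self, cnn_channels, trans_channels, out_channels):
        super().__init__()
        self.proj_cnn = nn.Conv2d(cnn_channels, out_channels, kernel_size=1)
        self.proj_trans = nn.Conv2d(trans_channels, out_channels, kernel_size=1)
        self.gate = nn.Sequential(                    # per-channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * out_channels, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, cnn_feat, trans_feat):
        a = self.proj_cnn(cnn_feat)
        b = self.proj_trans(trans_feat)
        both = torch.cat([a, b], dim=1)
        w = self.gate(both)                           # (N, C, 1, 1) channel weights
        return self.fuse(both) * w                    # gated fused feature map

# Toy usage: 1/8-resolution feature maps from the two encoders.
fusion = SimpleBilateralFusion(cnn_channels=96, trans_channels=192, out_channels=128)
out = fusion(torch.randn(2, 96, 32, 32), torch.randn(2, 192, 32, 32))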

Keyword :

Auxiliary supervise; Buildings; CLCFormer; Convolution; Convolutional neural networks; convolutional neural networks (CNNs); Feature extraction; Semantics; semantic segmentation; Tiles; transformer; Transformers; very high-resolution (VHR) images

Cite:

GB/T 7714 Long, Jiang , Li, Mengmeng , Wang, Xiaoqin . Integrating Spatial Details With Long-Range Contexts for Semantic Segmentation of Very High-Resolution Remote-Sensing Images [J]. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2023 , 20 .
MLA Long, Jiang et al. "Integrating Spatial Details With Long-Range Contexts for Semantic Segmentation of Very High-Resolution Remote-Sensing Images" . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 20 (2023) .
APA Long, Jiang , Li, Mengmeng , Wang, Xiaoqin . Integrating Spatial Details With Long-Range Contexts for Semantic Segmentation of Very High-Resolution Remote-Sensing Images . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2023 , 20 .

Detecting Building Changes Using Multimodal Siamese Multitask Networks From Very-High-Resolution Satellite Images SCIE
Journal article | 2023, 61 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 7

Abstract :

Two main issues are faced when using very-high-spatial-resolution (VHR) satellite images for building change detection: 1) it is difficult to keep the boundaries of detected changes consistent with the ground truth, and 2) detected changes are easily affected by the different viewing angles of bitemporal images, leading to noticeable false changes. To deal with these issues, this study develops a new Siamese change detection network [i.e., Siamese multitask change detection network (SMCD-Net)] based on a multitask learning framework to improve building change detection, particularly in the geometric aspect. Boundary information is formulated as an auxiliary task to constrain the learning of high-level semantic features. To enhance the identification of real changes from false changes, we model the directional relationships between buildings and their shadows by fuzzy sets, and incorporate the relationship information into SMCD-Net, leading to a network variant, labeled as SMCD-Net-m. Experiments were conducted on three datasets: a publicly available dataset, a Chinese GaoFen-2 dataset, and a French Pleiades dataset. We compared our methods with seven other methods, i.e., object-based Siamese network, ChangeStar, ChangeFormer, BIT, STANet, FC-Siam-diff, and Siam-NestedUNet. Results showed that the proposed SMCD-Net obtained the best detection results, achieving the lowest global total errors on all datasets. By incorporating directional information, SMCD-Net-m markedly improved detection accuracy, particularly when using bitemporal images with a large viewing angle difference. The improvement was positively correlated with the accuracy of building shadows extracted from VHR images.
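
The multitask layout, a shared Siamese encoder whose bitemporal feature difference feeds a change-mask head and an auxiliary boundary head, can be sketched in PyTorch as follows. The tiny encoder, the heads, and the fixed auxiliary loss weight are hypothetical, and the fuzzy directional building-shadow modelling is omitted.

import torch
import torch.nn as nn

class TinySiameseMultitask(nn.Module):
    """Shared-weight encoder on both dates; the feature difference drives
    a change-mask head and an auxiliary boundary head."""

    def __init__(self, in_channels=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.change_head = nn.Conv2d(feat, 1, kernel_size=1)    # building change mask
        self.boundary_head = nn.Conv2d(feat, 1, kernel_size=1)  # auxiliary boundary map

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        diff = torch.abs(f1 - f2)
        return self.change_head(diff), self.boundary_head(diff)

# Toy training step with binary targets and an assumed auxiliary weight of 0.5.
model = TinySiameseMultitask()
bce = nn.BCEWithLogitsLoss()
t1, t2 = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
mask_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
edge_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
change_logits, edge_logits = model(t1, t2)
loss = bce(change_logits, mask_gt) + 0.5 * bce(edge_logits, edge_gt)
loss.backward()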

Keyword :

Building change detection; directional relationship modeling; multitask learning; Siamese multitask change detection network (SMCD-Net); Siamese neural network (SNN); very-high-resolution satellite images

Cite:

GB/T 7714 Li, Mengmeng , Liu, Xuanguang , Wang, Xiaoqin et al. Detecting Building Changes Using Multimodal Siamese Multitask Networks From Very-High-Resolution Satellite Images [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2023 , 61 .
MLA Li, Mengmeng et al. "Detecting Building Changes Using Multimodal Siamese Multitask Networks From Very-High-Resolution Satellite Images" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 61 (2023) .
APA Li, Mengmeng , Liu, Xuanguang , Wang, Xiaoqin , Xiao, Pengfeng . Detecting Building Changes Using Multimodal Siamese Multitask Networks From Very-High-Resolution Satellite Images . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2023 , 61 .

Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images SCIE
Journal article | 2023, 200, 24-40 | ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING
WoS CC Cited Count: 17

Abstract :

This paper presents a semantic edge-aware multi-task neural network (SEANet) to obtain closed boundaries when delineating agricultural parcels from remote sensing images. It derives closed boundaries from remote sensing images and improves conventional semantic segmentation methods for the extraction of small and irregular agricultural parcels. SEANet integrates three correlated tasks: mask prediction, edge prediction, and distance map estimation. Related features learned from these tasks improve the generalizability of the network. We regard boundary extraction as an edge detection task and extract rich semantic edge features at multiple levels to improve the geometric accuracy of parcel delineation. Moreover, we develop a new multi-task loss that considers the uncertainty of different tasks. We conducted experiments on three high-resolution Gaofen-2 images in Shandong, Xinjiang, and Sichuan provinces, China, and on two medium-resolution Sentinel-2 images from Denmark and the Netherlands. Results showed that our method produced a better layout of agricultural parcels, with higher attribute and geometric accuracy than the existing ResUNet, ResUNet-a, R2UNet, and BsiNet methods on the Shandong and Denmark datasets. The total extraction errors of the parcels produced by our method were 0.214, 0.127, 0.176, 0.211, and 0.184 for the five datasets, respectively. Our method also obtains closed boundaries in a single segmentation, giving it an advantage over existing multi-task networks. We showed that it could be applied to images with different spatial resolutions for parcel delineation. Finally, our method trained on the Xinjiang dataset could be successfully transferred to the Shandong dataset with different dates and landscapes. Similarly, we obtained satisfactory results when transferring from the Denmark dataset to the Netherlands dataset. We conclude that SEANet is an accurate, robust, and transferable method for various areas and different remote sensing images. The codes of our model are available at https://github.com/long123524/SEANet_torch.
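
The uncertainty-aware weighting of task losses can be written down compactly. The PyTorch module below implements the standard homoscedastic-uncertainty formulation (learnable log-variances s_i with total loss sum_i exp(-s_i)*L_i + s_i) as a sketch of the idea; whether SEANet uses exactly this form is not stated here, and the three task losses in the example are placeholders.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine several task losses with learnable homoscedastic-uncertainty
    weights: total = sum_i exp(-s_i) * L_i + s_i, where s_i = log sigma_i^2."""

    def __init__(self, num_tasks=3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Toy usage: placeholder losses for the mask, edge, and distance-map tasks.
combiner = UncertaintyWeightedLoss(num_tasks=3)
mask_loss, edge_loss, dist_loss = torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.4)
total_loss = combiner([mask_loss, edge_loss, dist_loss])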

Keyword :

Agricultural parcel delineation; Multi-task neural networks; SEANet; Semantic edge-aware detection; Uncertainty weighted loss

Cite:

GB/T 7714 Li, Mengmeng , Long, Jiang , Stein, Alfred et al. Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images [J]. | ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING , 2023 , 200 : 24-40 .
MLA Li, Mengmeng et al. "Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images" . | ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING 200 (2023) : 24-40 .
APA Li, Mengmeng , Long, Jiang , Stein, Alfred , Wang, Xiaoqin . Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images . | ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING , 2023 , 200 , 24-40 .

Building use and mixed-use classification with a transformer-based network fusing satellite images and geospatial textual information SCIE
Journal article | 2023, 297 | REMOTE SENSING OF ENVIRONMENT

Abstract :

Assigning detailed use categories to buildings is a challenging and relevant task in urban land use classification with applications in urban planning, digital city modelling and twinning. This study aims to provide the categorisation of buildings with detailed use information by considering the possibilities of mixed-use. Mixed-use combines different use forms, and serves as a new type of use category. We obtain attributive information by combining satellite imagery that reflects spatial information and textual information from publicly available point-of-interest data collected by citizens and available on online maps. We propose a multimodal transformer-based building-use classification method to capture and fuse these different data sources within an end-to-end learning workflow. We evaluate the effectiveness of our proposed method on four urban areas in China. Experiments show that the proposed method effectively maps building use according to eight types of fine-grain categories, with a Micro F-1 score equal to 80.9%, and a Macro F-1 score equal to 62% for the Wuhan research area. The proposed method is able to harness the relationship between the features obtained from the different data sources and results in higher accuracy than the state-of-the-art fusion-based multimodal integration methods. The proposed method can effectively increase the attributive grain of building use, resulting in high classification accuracy.
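
The core fusion idea, combining a building's image features with an embedding of the point-of-interest texts around it before classification, can be sketched as below. The mean-pooled token embeddings, the feature sizes, and the concatenation-plus-MLP fusion are illustrative assumptions; the paper's transformer-based fusion is more elaborate.

import torch
import torch.nn as nn

class ImageTextBuildingClassifier(nn.Module):
    """Classify building use from an image-patch feature vector and a pooled
    embedding of the POI texts around the building."""

    def __init__(self, img_dim=512, text_dim=300, hidden=256, num_classes=8):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.classifier = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Linear(2 * hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, img_feat, poi_token_embeddings):
        # poi_token_embeddings: (N, n_tokens, text_dim); mean-pool the tokens.
        text_feat = poi_token_embeddings.mean(dim=1)
        fused = torch.cat([self.img_proj(img_feat), self.text_proj(text_feat)], dim=1)
        return self.classifier(fused)   # logits over the eight use categories

# Toy usage with random features standing in for image and word embeddings.
model = ImageTextBuildingClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 20, 300))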

Keyword :

Building use classification; Data fusion; Mixed-use classification; Multimodal deep learning; Natural language processing; Remote sensing; Transformers

Cite:

GB/T 7714 Zhou, Wen , Persello, Claudio , Li, Mengmeng et al. Building use and mixed-use classification with a transformer-based network fusing satellite images and geospatial textual information [J]. | REMOTE SENSING OF ENVIRONMENT , 2023 , 297 .
MLA Zhou, Wen et al. "Building use and mixed-use classification with a transformer-based network fusing satellite images and geospatial textual information" . | REMOTE SENSING OF ENVIRONMENT 297 (2023) .
APA Zhou, Wen , Persello, Claudio , Li, Mengmeng , Stein, Alfred . Building use and mixed-use classification with a transformer-based network fusing satellite images and geospatial textual information . | REMOTE SENSING OF ENVIRONMENT , 2023 , 297 .

Early Identification of Tobacco Fields Based on Sentinel-1 SAR Images Scopus
Other | 2023

Abstract :

Tobacco is a significant revenue-generating crop in China. Early mapping of tobacco can be highly valuable for estimating yields and assessing real-time losses due to disasters. We use medium-resolution Sentinel-1 SAR time-series remote sensing images from Ninghua County in Fujian Province, China, covering the entire phenological period of tobacco from February to July 2020, as input data for an Attention Long Short-Term Memory Fully Convolutional Network (ALSTM-FCN) model. To determine the earliest identifiable timing (EIT) of tobacco, we conduct three experiments using VH, VV, and VV+VH data respectively, gradually increasing the length of the time series used in training until the tobacco is harvested, which yields forty-two models. Results show that the overall accuracy (OA) with dual-polarization VV+VH data stays above 0.85 once the data extend to April. With dual-polarization data, the EIT of tobacco can be set at the beginning of April, during the mid-growing period. The McNemar test results show an increasing trend as the time-series length increases. Compared with single-polarization data, the dual-polarization data performs better, with an OA of 90.76%. Overall, our study demonstrates the potential of ALSTM-FCN for early identification of tobacco and highlights the importance of using dual-polarization data for such applications. © 2023 IEEE.
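
The LSTM-FCN family of classifiers combines a recurrent branch and a 1-D convolutional branch over the same time series. The PyTorch sketch below shows that two-branch layout for a VV+VH pixel time series; the layer sizes are assumptions, and the attention component of ALSTM-FCN is omitted for brevity.

import torch
import torch.nn as nn

class TinyLSTMFCN(nn.Module):
    """Two-branch classifier for a multi-band SAR time series:
    an LSTM branch and a 1-D fully convolutional branch, concatenated."""

    def __init__(self, n_bands=2, n_classes=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_bands, hidden_size=hidden, batch_first=True)
        self.fcn = nn.Sequential(
            nn.Conv1d(n_bands, 64, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):
        # x: (N, T, n_bands), e.g. monthly VV and VH backscatter values.
        _, (h_n, _) = self.lstm(x)                           # h_n: (1, N, hidden)
        lstm_feat = h_n[-1]                                  # (N, hidden)
        fcn_feat = self.fcn(x.transpose(1, 2)).squeeze(-1)   # (N, 128)
        return self.head(torch.cat([lstm_feat, fcn_feat], dim=1))

# Toy usage: a 12-step VV+VH series for four pixels, tobacco vs. non-tobacco.
model = TinyLSTMFCN()
logits = model(torch.randn(4, 12, 2))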

Keyword :

earliest identifiable timing (EIT); Fully Convolutional Network (FCN); Long Short-Term Memory (LSTM); Sentinel-1A SAR; tobacco extract

Cite:

GB/T 7714 Liu, J. , Li, M. , Wang, X. et al. Early Identification of Tobacco Fields Based on Sentinel-1 SAR Images [Unknown].
MLA Liu, J. et al. "Early Identification of Tobacco Fields Based on Sentinel-1 SAR Images" [Unknown].
APA Liu, J. , Li, M. , Wang, X. , Feng, X. , Zhou, J. , Zhang, H. . Early Identification of Tobacco Fields Based on Sentinel-1 SAR Images [Unknown].
