Research Output Search

Query:

Scholar Name: 汪小钦

Three-Dimensional Green Volume Estimation of Urban Street Trees by Fusing Remote Sensing Imagery and Street View (融合遥感影像与街景的城市行道树三维绿量估算)
Journal Article | 2025, 53(2), 151-158 | 福州大学学报(自然科学版) (Journal of Fuzhou University, Natural Science Edition)

Abstract :

To reduce the high cost of acquiring tree parameters for three-dimensional (3D) green volume calculation, this study proposes a method for estimating the 3D green volume of urban street trees that fuses high-resolution remote sensing data with street view data. Taking the main urban districts of Fuzhou as the study area, the two-dimensional distribution of street trees was first derived from Gaofen-2 (GF-2) remote sensing imagery; tree parameters were then measured from street view maps; finally, the 3D green volume was estimated from the horizontal distribution and vertical characteristics of the street trees. The results show that street trees are unevenly distributed across the study area. Tree parameters obtained from street view measurement are highly accurate, with R² greater than 0.9 against field measurements. The 3D green volume per unit area is relatively high along Baima Road and relatively low along Fuma Road and similar sections; banyan trees contribute the most to the 3D green volume of the study area, accounting for 80% of the total green volume. Compared with two-dimensional indicators, the 3D green volume of urban street trees better reflects their three-dimensional structural differences and the actual ecological benefits of green space.
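
The abstract does not give the crown-volume formula used; purely as an illustration, the sketch below estimates per-tree 3D green volume from street-view-measured crown width and crown height under an assumed ellipsoid crown model and aggregates it over a hypothetical road segment.

```python
import math

def crown_volume_ellipsoid(crown_width_m: float, crown_height_m: float) -> float:
    """3D green volume of one tree under an assumed ellipsoid crown:
    V = (pi / 6) * width^2 * height (crown width is used for both horizontal axes)."""
    return math.pi / 6.0 * crown_width_m ** 2 * crown_height_m

# Hypothetical street-view measurements (crown width, crown height) in metres.
trees_on_segment = [(6.2, 5.8), (7.0, 6.5), (5.5, 5.0)]

total_volume_m3 = sum(crown_volume_ellipsoid(w, h) for w, h in trees_on_segment)
segment_area_m2 = 3000.0  # hypothetical road-segment area
print(f"3D green volume: {total_volume_m3:.1f} m^3 "
      f"({total_volume_m3 / segment_area_m2:.3f} m^3 per m^2)")
```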

Keyword :

3D green volume; Baidu Street View; three-dimensional landscape; virtual measurement; street trees

Cite:

GB/T 7714 孔令凤 , 汪小钦 , 周小成 . 融合遥感影像与街景的城市行道树三维绿量估算 [J]. | 福州大学学报(自然科学版) , 2025 , 53 (2) : 151-158 .
MLA 孔令凤 等. "融合遥感影像与街景的城市行道树三维绿量估算" . | 福州大学学报(自然科学版) 53 . 2 (2025) : 151-158 .
APA 孔令凤 , 汪小钦 , 周小成 . 融合遥感影像与街景的城市行道树三维绿量估算 . | 福州大学学报(自然科学版) , 2025 , 53 (2) , 151-158 .

Cross-modal feature interaction network for heterogeneous change detection SCIE
Journal Article | 2025 | GEO-SPATIAL INFORMATION SCIENCE
WoS CC Cited Count: 1

Abstract :

Heterogeneous change detection is a task of considerable practical importance and significant challenge in remote sensing. It involves identifying change areas using remote sensing images obtained from different sensors or imaging conditions. Recently, research has focused on feature-space translation methods based on deep learning for heterogeneous images. However, these methods often lead to the loss of original image information, and the translated features cannot be efficiently compared, further limiting the accuracy of change detection. To address these issues, we propose a cross-modal feature interaction network (CMFINet). Specifically, CMFINet introduces a cross-modal interaction module (CMIM), which facilitates the interaction between heterogeneous features through attention exchange. This approach promotes consistent representation of heterogeneous features while preserving image characteristics. Additionally, we design a differential feature extraction module (DFEM) to enhance the extraction of true change features from the spatial and channel dimensions, facilitating efficient comparison after feature interaction. Extensive experiments conducted on the California, Toulouse, and Wuhan datasets demonstrate that CMFINet outperforms eight existing methods in identifying change areas in different scenes from multimodal images. Compared to the existing methods applied to the three datasets, CMFINet achieved the highest F1 scores of 83.93%, 75.65%, and 95.42%, and the highest mIoU values of 85.38%, 78.34%, and 94.87%, respectively. The results demonstrate the effectiveness and applicability of CMFINet in heterogeneous change detection.
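
The abstract describes the CMIM only as "interaction between heterogeneous features through attention exchange"; the sketch below is a generic illustration of that idea in PyTorch, not the paper's actual module, and all class and tensor names are hypothetical.

```python
import torch
import torch.nn as nn

class CrossModalExchange(nn.Module):
    """Illustrative cross-attention exchange: each modality queries the other."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn_a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (batch, tokens, dim) features from the two sensors.
        a_enriched, _ = self.attn_a2b(query=feat_a, key=feat_b, value=feat_b)
        b_enriched, _ = self.attn_b2a(query=feat_b, key=feat_a, value=feat_a)
        # Residual connections keep the original image information.
        return feat_a + a_enriched, feat_b + b_enriched

# Example: 16x16 feature maps flattened to 256 tokens of dimension 64.
a = torch.randn(2, 256, 64)
b = torch.randn(2, 256, 64)
out_a, out_b = CrossModalExchange()(a, b)
print(out_a.shape, out_b.shape)  # torch.Size([2, 256, 64]) twice
```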

Keyword :

attention mechanisms; Change detection; CNN; feature interaction; heterogeneous remote sensing images

Cite:

GB/T 7714 Yang, Zhiwei , Wang, Xiaoqin , Lin, Haihan et al. Cross-modal feature interaction network for heterogeneous change detection [J]. | GEO-SPATIAL INFORMATION SCIENCE , 2025 .
MLA Yang, Zhiwei et al. "Cross-modal feature interaction network for heterogeneous change detection" . | GEO-SPATIAL INFORMATION SCIENCE (2025) .
APA Yang, Zhiwei , Wang, Xiaoqin , Lin, Haihan , Li, Mengmeng , Lin, Mengjing . Cross-modal feature interaction network for heterogeneous change detection . | GEO-SPATIAL INFORMATION SCIENCE , 2025 .

Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy Scopus
Journal Article | 2025 | IEEE Geoscience and Remote Sensing Letters

Abstract :

Semantic change detection (SCD) in high-resolution (HR) remote sensing images faces two issues: (1) an isolated network branch for binary change detection (BCD) within a multi-task architecture results in suboptimal SCD performance; (2) false alarms or missed detections arise from illumination differences or seasonal transitions. To address these issues, this study proposes a bi-temporal binary change enhancement network (Bi-BCENet). Specifically, we introduce a binary change enhancement (BCE) strategy based on multi-network joint learning to achieve superior SCD by improving the prediction of change areas. Within the network's inference process, we develop a cross-attention fusion module (CAFM) to enhance global similarity modeling via cross-network prompt fusion, and we employ a cosine similarity-based auxiliary loss to optimize the semantic consistency of non-change areas. Experiments on the SECOND and CINA-FX datasets demonstrate that Bi-BCENet outperforms representative SCD networks, achieving 62.08% and 84.95% in F_SCD and 66.88% and 83.10% in mIoU_SCD, respectively. Ablation analysis further validates Bi-BCENet's effectiveness in reducing false alarms and missed detections in SCD results. Moreover, for cropland-specific SCD, Bi-BCENet shows strong potential in single-to-multi SCD.
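
The cosine similarity-based auxiliary loss is only named in the abstract; the following is a hedged sketch of one plausible form, masking the similarity term with the non-change area of a binary change map (all tensor names are hypothetical, not the paper's formulation).

```python
import torch
import torch.nn.functional as F

def nonchange_consistency_loss(feat_t1: torch.Tensor,
                               feat_t2: torch.Tensor,
                               change_mask: torch.Tensor,
                               eps: float = 1e-6) -> torch.Tensor:
    """Penalise low cosine similarity between bi-temporal features
    where the binary change map says 'no change'.

    feat_t1, feat_t2: (B, C, H, W) semantic features for the two dates.
    change_mask:      (B, 1, H, W) with 1 = changed, 0 = unchanged.
    """
    cos = F.cosine_similarity(feat_t1, feat_t2, dim=1)   # (B, H, W)
    nonchange = 1.0 - change_mask.squeeze(1)              # (B, H, W)
    return ((1.0 - cos) * nonchange).sum() / (nonchange.sum() + eps)

loss = nonchange_consistency_loss(torch.randn(2, 64, 32, 32),
                                  torch.randn(2, 64, 32, 32),
                                  torch.randint(0, 2, (2, 1, 32, 32)).float())
print(float(loss))
```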

Keyword :

binary change detection; High-resolution remote sensing; joint learning; semantic change detection

Cite:

GB/T 7714 Lin, H. , Wang, X. , Wu, Q. et al. Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy [J]. | IEEE Geoscience and Remote Sensing Letters , 2025 .
MLA Lin, H. et al. "Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy" . | IEEE Geoscience and Remote Sensing Letters (2025) .
APA Lin, H. , Wang, X. , Wu, Q. , Li, M. , Yang, Z. , Lou, K. . Semantic Change Detection in HR Remote Sensing Images with Joint Learning and Binary Change Enhancement Strategy . | IEEE Geoscience and Remote Sensing Letters , 2025 .

SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection SCIE
Journal Article | 2025, 22 | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS

Abstract :

Change detection in remote sensing images is crucial for assessing human activity impacts and supporting government decision-making. However, in practice, obtaining bitemporal remote sensing images with consistent conditions is highly limited, and existing change detection methods still face two main challenges: 1) in real-world scenarios, inconsistent sensor and lighting conditions cause significant style (visual appearance) differences between bitemporal remote sensing images, leading to false changes and reducing change detection accuracy; and 2) remote sensing images contain complex semantic information, and complex scenarios such as shadow occlusion and seasonal vegetation changes make it difficult for existing methods to capture relevant features related to change areas. To address these challenges, we propose a style consistency enhanced differential network (SCEDNet) to eliminate style discrepancies between temporally distinct images and enhance the semantic information of change features. Specifically, we introduce a style consistency module (SCM) in the encoder to extract consistent features by computing the mean and variance of temporal features. Then, we introduce an enhanced differential module (EDM) to enhance change semantics, tackling issues such as mislocalization and incomplete regions in complex cases such as shadow occlusion and seasonal vegetation changes. In addition, we design a gate fusion upsampling (GFU) and change refine module (CRM) in the decoder to integrate multilevel differential features with different semantic information and highlight key changes, further improving change detection performance. Experiments on the CDD and GZ_CD datasets show that SCEDNet outperformed eight methods, achieving F1-scores of 95.59% and 90.41%, respectively. Code and datasets are available at https://github.com/Yzwfff/SCEDNet
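
The SCM is said to extract consistent features "by computing the mean and variance of temporal features"; one common way to realise that idea is to align the per-channel statistics of the two temporal feature maps, sketched below as an AdaIN-style illustration rather than the paper's exact module.

```python
import torch

def channel_stats(x: torch.Tensor, eps: float = 1e-5):
    """Per-sample, per-channel mean and std of a (B, C, H, W) feature map."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std

def align_style(x: torch.Tensor, y: torch.Tensor):
    """Re-normalise both feature maps to shared (average) channel statistics,
    reducing style (appearance) differences while keeping spatial content."""
    mx, sx = channel_stats(x)
    my, sy = channel_stats(y)
    m_shared, s_shared = (mx + my) / 2, (sx + sy) / 2
    x_aligned = (x - mx) / sx * s_shared + m_shared
    y_aligned = (y - my) / sy * s_shared + m_shared
    return x_aligned, y_aligned

x1, x2 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32) * 3 + 1
a1, a2 = align_style(x1, x2)
print(channel_stats(a1)[0].mean().item(), channel_stats(a2)[0].mean().item())
```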

Keyword :

Change detection; deep learning; enhanced differential features; remote sensing images; style consistency

Cite:

GB/T 7714 Yang, Zhiwei , Wang, Xiaoqin , Li, Mengmeng et al. SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection [J]. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2025 , 22 .
MLA Yang, Zhiwei et al. "SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection" . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 22 (2025) .
APA Yang, Zhiwei , Wang, Xiaoqin , Li, Mengmeng , Long, Jiang . SCEDNet: A Style Consistency Enhanced Differential Network for Remote Sensing Image Change Detection . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2025 , 22 .

Mapping the distribution of pine wilt disease based on selected machine learning algorithms and high-resolution Gaofen-2/7 remote sensing SCIE
Journal Article | 2025, 18(1) | INTERNATIONAL JOURNAL OF DIGITAL EARTH

Abstract :

Under the influence of human activities and climate change, pine wilt disease (PWD) has caused significant damage to Masson's pine (Pinus massoniana Lamb.) forests in subtropical China. Existing research has struggled to accurately capture the large-scale spatial distribution of PWD, particularly for precise extraction at the provincial level. This study focuses on Fujian Province and proposes a novel method for extracting PWD information at the sub-stand level. The approach uses forest age, canopy height, and temporal vegetation index (VI) data for deadwood-distribution sub-stands to identify suspected outbreak areas. In key counties and cities, high-resolution satellite imagery (GF-2 and GF-7) was used to construct a bi-level scale-set model (BSM) for efficient image segmentation, followed by selection of the best classification algorithm for data extraction. For non-key counties, Sentinel imagery with 10-m resolution was used on the GEE cloud platform with random forest (RF) classification. The results showed an overall annual extraction accuracy exceeding 90%, and statistical analysis revealed a significant reduction in the number of dead trees from 2021 to 2022, indicating effective control measures. This study demonstrated that multi-source remote sensing data can efficiently extract PWD distribution information, fill data gaps for provincial-level monitoring, and support forest pest management.
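
The random forest classification for non-key counties is described only at a high level; the sketch below shows the general per-pixel pattern with scikit-learn on spectral/vegetation-index features (the arrays are synthetic stand-ins, and this is not the study's GEE implementation).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical samples: rows = pixels, columns = band reflectances and index values.
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = rng.integers(0, 2, 1000)  # 1 = suspected dead pine, 0 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("Overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```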

Keyword :

bi-level scale-set model; meter-scale resolution imagery; Pine wilt disease; remote sensing; Sentinel imagery

Cite:

GB/T 7714 Wang, Yifan , Zhou, Xiaocheng , Chen, Chongcheng et al. Mapping the distribution of pine wilt disease based on selected machine learning algorithms and high-resolution Gaofen-2/7 remote sensing [J]. | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2025 , 18 (1) .
MLA Wang, Yifan et al. "Mapping the distribution of pine wilt disease based on selected machine learning algorithms and high-resolution Gaofen-2/7 remote sensing" . | INTERNATIONAL JOURNAL OF DIGITAL EARTH 18 . 1 (2025) .
APA Wang, Yifan , Zhou, Xiaocheng , Chen, Chongcheng , Wang, Xiaoqin , Wu, Hao , Tan, Fanglin et al. Mapping the distribution of pine wilt disease based on selected machine learning algorithms and high-resolution Gaofen-2/7 remote sensing . | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2025 , 18 (1) .

Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model SCIE
Journal Article | 2025, 231 | COMPUTERS AND ELECTRONICS IN AGRICULTURE
WoS CC Cited Count: 1

Abstract :

Precise information on agricultural parcels is crucial for effective farm management, crop mapping, and monitoring. Current techniques often encounter difficulties in automatically delineating vectorized parcels from remote sensing images, especially in irregular-shaped areas, making it challenging to derive closed and vectorized boundaries. To address this, we treat parcel delineation as identifying valid parcel vertices from remote sensing images to generate parcel polygons. We introduce a Point-Line-Region interactive multitask network (PLR-Net) that jointly learns semantic features of parcel vertices, boundaries, and regions through point-, line-, and region-related subtasks within a multitask learning framework. We derived an attraction field map (AFM) to enhance the feature representation of parcel boundaries and improve the detection of parcel regions while maintaining high geometric accuracy. The point-related subtask focuses on learning features of parcel vertices to obtain preliminary vertices, which are then refined based on detected boundary pixels to derive valid parcel vertices for polygon generation. We designed a spatial and channel excitation module for feature interaction to enhance interactions between points, lines, and regions. Finally, the generated parcel polygons are refined using the Douglas-Peucker algorithm to regularize polygon shapes. We evaluated PLR-Net using high-resolution GF-2 satellite images from the Shandong, Xinjiang, and Sichuan provinces of China and medium-resolution Sentinel-2 images from The Netherlands. Results showed that our method outperformed existing state-of-the-art techniques (e.g., BsiNet, SEANet, and Hisup) in pixel- and object-based geometric accuracy across all datasets, achieving the highest IoU and polygonal average precision on GF2 datasets (e.g., 90.84% and 82.00% in Xinjiang) and on the Sentinel-2 dataset (75.86% and 47.1%). Moreover, when trained on the Xinjiang dataset, the model successfully transferred to the Shandong dataset, achieving an IoU score of 83.98%. These results demonstrate that PLR-Net is an accurate, robust, and transferable method suitable for extracting vectorized parcels from diverse regions and types of remote sensing images. The source codes of our model are available at https://github.com/mengmengli01/PLR-Net-demo/tree/main.
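
The final polygon-regularisation step mentioned above, the Douglas-Peucker algorithm, can be illustrated with Shapely's simplify method (Douglas-Peucker based); the parcel coordinates and tolerance below are arbitrary examples, not the paper's settings.

```python
from shapely.geometry import Polygon

# A jagged, pixel-derived parcel outline (hypothetical coordinates in metres).
raw_parcel = Polygon([(0, 0), (10.2, 0.3), (20, 0), (20.1, 9.8),
                      (20, 20), (10, 20.4), (0, 20), (-0.2, 10), (0, 0)])

# Douglas-Peucker simplification; preserve_topology=True avoids self-intersections.
regularised = raw_parcel.simplify(tolerance=0.5, preserve_topology=True)

print(len(raw_parcel.exterior.coords), "->",
      len(regularised.exterior.coords), "vertices")
```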

Keyword :

Agricultural parcel delineation; Multitask neural networks; PLR-Net; Point-line-region interactive; Vectorized parcels

Cite:

GB/T 7714 Li, Mengmeng , Lu, Chengwen , Lin, Mengjing et al. Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model [J]. | COMPUTERS AND ELECTRONICS IN AGRICULTURE , 2025 , 231 .
MLA Li, Mengmeng et al. "Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model" . | COMPUTERS AND ELECTRONICS IN AGRICULTURE 231 (2025) .
APA Li, Mengmeng , Lu, Chengwen , Lin, Mengjing , Xiu, Xiaolong , Long, Jiang , Wang, Xiaoqin . Extracting vectorized agricultural parcels from high-resolution satellite images using a Point-Line-Region interactive multitask model . | COMPUTERS AND ELECTRONICS IN AGRICULTURE , 2025 , 231 .

Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images SCIE
Journal Article | 2025, 18, 976-994 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING
WoS CC Cited Count: 2

Abstract :

Building type information indicates the functional properties of buildings and plays a crucial role in smart city development and urban socioeconomic activities. Existing methods for classifying building types often face challenges in accurately distinguishing between building types while maintaining well-delineated boundaries, especially in complex urban environments. This study introduces a novel framework, the CNN-Transformer cross-attention feature fusion network (CTCFNet), for building type classification from very high resolution remote sensing images. CTCFNet integrates convolutional neural networks (CNNs) and Transformers using an interactive cross-encoder fusion module that enhances semantic feature learning and improves classification accuracy in complex scenarios. We develop an adaptive collaboration optimization module that applies human visual attention mechanisms to enhance the feature representation of building types and boundaries simultaneously. To address the scarcity of datasets in building type classification, we create two new datasets, the urban building type (UBT) dataset and the town building type (TBT) dataset, for model evaluation. Extensive experiments on these datasets demonstrate that CTCFNet outperforms popular CNNs, Transformers, and dual-encoder methods in identifying building types across various regions, achieving the highest mean intersection over union of 78.20% and 77.11%, F1 scores of 86.83% and 88.22%, and overall accuracy of 95.07% and 95.73% on the UBT and TBT datasets, respectively. We conclude that CTCFNet effectively addresses the challenges of high interclass similarity and intraclass inconsistency in complex scenes, yielding results with well-delineated building boundaries and accurate building types.
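
For reference, the reported mean intersection over union, F1 score, and overall accuracy can all be derived from a multi-class confusion matrix; the sketch below shows one standard computation (the tiny matrix is invented for illustration, and per-class scores are macro-averaged).

```python
import numpy as np

def metrics_from_confusion(cm: np.ndarray):
    """cm[i, j] = pixels of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    oa = tp.sum() / cm.sum()
    return iou.mean(), f1.mean(), oa

cm = np.array([[90, 5, 5],
               [4, 80, 6],
               [3, 7, 100]])
miou, f1, oa = metrics_from_confusion(cm)
print(f"mIoU={miou:.3f}  mean F1={f1:.3f}  OA={oa:.3f}")
```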

Keyword :

Accuracy; Architecture; Buildings; Building type classification; CNN-transformer networks; cross-encoder; Earth; Feature extraction; feature interaction; Optimization; Remote sensing; Semantics; Transformers; very high resolution remote sensing; Visualization

Cite:

GB/T 7714 Zhang, Shaofeng , Li, Mengmeng , Zhao, Wufan et al. Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images [J]. | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2025 , 18 : 976-994 .
MLA Zhang, Shaofeng et al. "Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images" . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 18 (2025) : 976-994 .
APA Zhang, Shaofeng , Li, Mengmeng , Zhao, Wufan , Wang, Xiaoqin , Wu, Qunyong . Building Type Classification Using CNN-Transformer Cross-Encoder Adaptive Learning From Very High Resolution Satellite Images . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2025 , 18 , 976-994 .

Harmonizing Landsat-8 OLI and Sentinel-2 MSI: an assessment of surface reflectance and vegetation index consistency SCIE
Journal Article | 2025, 18(1) | INTERNATIONAL JOURNAL OF DIGITAL EARTH
WoS CC Cited Count: 1

Abstract :

Normalization of satellite images collected under various atmospheric conditions is critical for the comprehensive, long-term global surveillance of terrestrial surface alterations. This study utilized remote sensing data from the Sentinel-2A Multispectral Instrument (MSI) in polar orbit and the Landsat-8 Operational Land Imager (OLI) sensors, with multispectral global coverage of 10-30 m, to derive reflectance products using inversion algorithms. Validation and assessment were conducted using synchronous surface measurement spectra collected from four sites across three Chinese provinces in 2019. We corrected surface reflectance and derived vegetation indices across blue, green, red, near-infrared (NIR), and two short-wave infrared (SWIR) bands and normalized discrepancies. The phenological spatial distribution map for late rice in Jiangxi Province was constructed using normalized data outcomes. A robust linear correlation in reflectance across corresponding bands of the two satellite sensors was observed. The NIR and SWIR bands showed the most significant difference because of differences in their spectral response functions. A high degree of congruence was observed between Landsat-8 OLI and Sentinel-2 MSI sensor reflectance products, with root mean square error values consistently below 0.05. The derived conversion equations were highly accurate for harmonizing data from both sensor systems.
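
The derived conversion equations are not reproduced in the abstract; as an illustration of the general approach, the sketch below fits a per-band linear transform from Sentinel-2 MSI to Landsat-8 OLI surface reflectance and reports the RMSE (the paired reflectance arrays are synthetic stand-ins).

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic paired NIR surface reflectance for the same locations and dates.
sr_msi = rng.uniform(0.05, 0.5, 500)
sr_oli = 0.97 * sr_msi + 0.01 + rng.normal(0, 0.01, 500)  # hypothetical relation

# Per-band linear conversion: OLI ~ slope * MSI + intercept.
slope, intercept = np.polyfit(sr_msi, sr_oli, deg=1)
pred = slope * sr_msi + intercept
rmse = np.sqrt(np.mean((pred - sr_oli) ** 2))
print(f"OLI = {slope:.3f} * MSI + {intercept:.3f}, RMSE = {rmse:.4f}")
```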

Keyword :

harmonization; Landsat-8 OLI; Sentinel-2 MSI; Surface reflectance (SR); vegetation index

Cite:

GB/T 7714 Zhang, Jiaqi , Zhou, Xiaocheng , Liu, Xueping et al. Harmonizing Landsat-8 OLI and Sentinel-2 MSI: an assessment of surface reflectance and vegetation index consistency [J]. | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2025 , 18 (1) .
MLA Zhang, Jiaqi et al. "Harmonizing Landsat-8 OLI and Sentinel-2 MSI: an assessment of surface reflectance and vegetation index consistency" . | INTERNATIONAL JOURNAL OF DIGITAL EARTH 18 . 1 (2025) .
APA Zhang, Jiaqi , Zhou, Xiaocheng , Liu, Xueping , Wang, Xiaoqin , He, Guojin , Zhang, Youshui . Harmonizing Landsat-8 OLI and Sentinel-2 MSI: an assessment of surface reflectance and vegetation index consistency . | INTERNATIONAL JOURNAL OF DIGITAL EARTH , 2025 , 18 (1) .

Integrating Segment Anything Model Derived Boundary Prior and High-Level Semantics for Cropland Extraction From High-Resolution Remote Sensing Images SCIE
Journal Article | 2024, 21 | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
WoS CC Cited Count: 1

Abstract :

Visual foundation models (VFMs) pretrained on large-scale training datasets show robust zero-shot adaptability across many vision tasks. However, there still exist limitations in remote sensing processing tasks due to the variety and complexity of remote sensing images. In this letter, we propose a two-flow network (TFNet) based on a multitask VFM to extract croplands with well-delineated boundaries from high-resolution remote sensing images. TFNet consists of a mask flow and a boundary flow. It first uses a VFM as the visual encoder to obtain universal semantic features regarding croplands and then aggregates them into the two flows. Next, a boundary prior-guided module (BPM) is developed to incorporate boundary semantics derived from the boundary flow into the mask flow, to refine the boundary details of croplands. We also develop a multibranch parallel fusion module (MPFM) that aggregates multiscale contextual information to improve the identification of croplands with varied sizes and shapes. Finally, a semantic consistency loss is introduced to further optimize the feature learning of cropland information. We conducted extensive experiments on the Shandong (SD) and Xinjiang (XJ) datasets collected from Gaofen-2 (GF-2) satellites and compared our method with five existing methods. Experimental results show that the croplands extracted by our method have the fewest omissions and errors, achieving the highest attribute accuracy (intersection over union (IoU) of 0.863 and 0.945) and the lowest geometric errors (global total classification (GTC) of 0.134 and 0.097) of all compared methods on the two datasets. Our method effectively distinguished croplands of varied sizes, shapes, and spectra, even in scenarios with limited samples. Code and datasets are available at https://github.com/long123524/TFNet.
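
The BPM is described only as incorporating boundary semantics into the mask flow; one simple way to picture such guidance is a sigmoid gate computed from boundary features, sketched below as an illustrative pattern rather than the module from the paper.

```python
import torch
import torch.nn as nn

class BoundaryGuidedGate(nn.Module):
    """Modulate mask-flow features with a gate derived from boundary features."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.Sigmoid())

    def forward(self, mask_feat: torch.Tensor, boundary_feat: torch.Tensor):
        # Emphasise mask features near predicted boundaries, keep a residual path.
        return mask_feat * self.gate(boundary_feat) + mask_feat

m = BoundaryGuidedGate()(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 64, 64))
print(m.shape)  # torch.Size([1, 64, 64, 64])
```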

Keyword :

Boundary prior; cropland extraction; high-resolution remote sensing images; limited samples; two-flow network (TFNet); visual foundation model (VFM)

Cite:

GB/T 7714 Long, Jiang , Zhao, Hang , Li, Mengmeng et al. Integrating Segment Anything Model Derived Boundary Prior and High-Level Semantics for Cropland Extraction From High-Resolution Remote Sensing Images [J]. | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2024 , 21 .
MLA Long, Jiang et al. "Integrating Segment Anything Model Derived Boundary Prior and High-Level Semantics for Cropland Extraction From High-Resolution Remote Sensing Images" . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS 21 (2024) .
APA Long, Jiang , Zhao, Hang , Li, Mengmeng , Wang, Xiaoqin , Lu, Chengwen . Integrating Segment Anything Model Derived Boundary Prior and High-Level Semantics for Cropland Extraction From High-Resolution Remote Sensing Images . | IEEE GEOSCIENCE AND REMOTE SENSING LETTERS , 2024 , 21 .

A novel ecological evaluation index based on geospatial principles and remote sensing techniques SCIE
Journal Article | 2024, 31(7), 809-826 | INTERNATIONAL JOURNAL OF SUSTAINABLE DEVELOPMENT AND WORLD ECOLOGY
WoS CC Cited Count: 1

Abstract :

The evaluation of regional ecological status has far-reaching significance for understanding regional ecological conditions and promoting sustainable development. Herein, a geospatial ecological index (GEI) was developed on the basis of Landsat data and the principles of soil lines and spatial geometry. Specifically, the GEI integrates four remote sensing indicators: the Perpendicular Vegetation Index (PVI) representing greenness, the Modified Perpendicular Drought Index (MPDI) representing drought, the Normalized Difference Built-up and Soil Index (NDSI) representing the dryness of the land surface, and Land Surface Temperature (LST) representing the hotness of the land surface. Two typical regions in Fujian Province, China, Fuzhou City and the Zijin mining area, were selected to evaluate regional ecological quality via the proposed GEI. The results show an improvement in the overall ecological quality of Fuzhou City, with an increase in the average GEI value from 0.49 in 2001 to 0.53 in 2020. In the case of the Zijin mining area, regions with poor ecological status are concentrated in the main mining areas. However, the average GEI value rose from 0.51 in 1992 to 0.57 in 2020, illustrating an improvement in its ecological conditions. The study demonstrates the robustness and effectiveness of the GEI, which objectively reveals the spatial distribution of ecological status across both study areas.
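
Of the four indicators combined in the GEI, the Perpendicular Vegetation Index has a standard soil-line formulation, PVI = (NIR - a*Red - b) / sqrt(1 + a^2); the sketch below computes it and min-max normalises the result (the soil-line coefficients are hypothetical, and the abstract does not state how the four indicators are weighted into the GEI).

```python
import numpy as np

def pvi(nir: np.ndarray, red: np.ndarray, a: float, b: float) -> np.ndarray:
    """Perpendicular Vegetation Index relative to the soil line NIR = a*Red + b."""
    return (nir - a * red - b) / np.sqrt(1.0 + a ** 2)

def minmax(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

rng = np.random.default_rng(2)
red = rng.uniform(0.02, 0.3, (100, 100))
nir = rng.uniform(0.1, 0.6, (100, 100))
pvi_norm = minmax(pvi(nir, red, a=1.2, b=0.02))  # hypothetical soil-line fit
print(pvi_norm.min(), pvi_norm.max())
```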

Keyword :

ecological evaluation; Fujian; GEI (Geospatial Ecological Index); geometric space; soil line

Cite:

GB/T 7714 Lin, Mengjing , Zhao, Yang , Shi, Longyu et al. A novel ecological evaluation index based on geospatial principles and remote sensing techniques [J]. | INTERNATIONAL JOURNAL OF SUSTAINABLE DEVELOPMENT AND WORLD ECOLOGY , 2024 , 31 (7) : 809-826 .
MLA Lin, Mengjing et al. "A novel ecological evaluation index based on geospatial principles and remote sensing techniques" . | INTERNATIONAL JOURNAL OF SUSTAINABLE DEVELOPMENT AND WORLD ECOLOGY 31 . 7 (2024) : 809-826 .
APA Lin, Mengjing , Zhao, Yang , Shi, Longyu , Wang, Xiaoqin . A novel ecological evaluation index based on geospatial principles and remote sensing techniques . | INTERNATIONAL JOURNAL OF SUSTAINABLE DEVELOPMENT AND WORLD ECOLOGY , 2024 , 31 (7) , 809-826 .