
author:

Li, Chuanlin [1] | Huang, Fenghua [2] | Hu, Wei [3] | Zeng, Jiangchao [4]

Indexed by:

EI PKU

Abstract:

To advance current research on building extraction based on deep learning and high-resolution remote sensing images, we propose an improved Unet network (Res_AttentionUnet) that combines the residual module of ResNet with an attention mechanism. Applying this network to building extraction from high-resolution remote sensing images effectively improves extraction accuracy. The optimization consists of three parts. Firstly, the ResBlock module is added to the convolution layers of the traditional Unet semantic segmentation network to enhance the extraction of both low-level and high-level features, and the attention mechanism module is added to the skip connections of the network. Secondly, throughout the network, the ResBlock module enables the convolved feature maps to retain more low-level information and enhances the robustness of the convolution structure, thereby preventing underfitting. Thirdly, the attention mechanism strengthens feature learning on building-area pixels, making feature extraction more complete and thus improving building-extraction accuracy. In this study, we use the open WHU Building Dataset, provided by Ji Shunping's team at Wuhan University, as the experimental data, and select three representative experimental areas with different building characteristics. We then preprocess the experimental areas (including sliding-window cropping, image enhancement, etc.). Finally, we use four network models (Unet, ResUnet, AttentionUnet, and Res_AttentionUnet) to extract buildings from the three experimental areas and cross-compare and analyze the results. The experiments show that, compared with the other three networks, the proposed Res_AttentionUnet achieves higher accuracy in building extraction from high-resolution remote sensing images.
The average extraction accuracy of Res_AttentionUnet is 95.81%, which is 17.94% higher than the original Unet network and 2.19% higher than ResUnet (the Unet with only the residual module). The results demonstrate that Res_AttentionUnet can significantly improve the effectiveness of building extraction in high-resolution remote sensing images. © 2021, Science Press. All rights reserved.
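The two building blocks named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it reduces each mechanism to per-pixel scalar arithmetic with made-up weights (`w_x`, `w_g`, `psi`, and the toy `conv` lambda are all hypothetical), purely to show the two ideas — a residual shortcut that adds the input back onto the convolution path, and an additive attention gate that computes a weight in (0, 1) from the skip feature and the decoder's gating signal, then re-weights the skip feature before concatenation.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def res_block(x, conv):
    # Residual block: convolution path plus identity shortcut, then activation.
    # The shortcut lets low-level information bypass the conv path unchanged.
    return relu(conv(x) + x)

def attention_gate(x_skip, g, w_x=0.5, w_g=0.5, psi=1.0):
    # Additive attention gate on a skip connection (Attention-Unet style):
    # combine the encoder skip feature x_skip with the decoder gating signal g,
    # squash to an attention coefficient alpha in (0, 1), and re-weight the skip.
    alpha = sigmoid(psi * relu(w_x * x_skip + w_g * g))
    return alpha * x_skip

# Toy per-pixel values: a skip feature and the decoder's gating signal.
y = res_block(2.0, conv=lambda t: 0.1 * t)   # 0.2 + 2.0 -> relu -> 2.2
gated = attention_gate(1.0, 1.0)             # alpha = sigmoid(1.0) ~ 0.73
print(y, gated)
```

In the full network these operations act on multi-channel feature maps via 1×1 and 3×3 convolutions rather than scalars, but the data flow is the same: the gate suppresses skip-feature pixels the decoder considers irrelevant (background) and passes building pixels through nearly unattenuated.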

Keyword:

Buildings; Convolution; Deep neural networks; Extraction; Image enhancement; Network layers; Open data; Remote sensing; Semantics; Semantic segmentation

Community:

  • [ 1 ] [Li, Chuanlin]Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou; 350108, China
  • [ 2 ] [Li, Chuanlin]National Engineering Research Centre of Geospatial Information Technology, Fuzhou University, Fuzhou; 350108, China
  • [ 3 ] [Li, Chuanlin]The Academy of Digital China, Fuzhou University, Fuzhou; 350108, China
  • [ 4 ] [Huang, Fenghua]Fujian University Engineering Research Center of Spatial Data Mining and Application, Yango University, Fuzhou; 350015, China
  • [ 5 ] [Hu, Wei]Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou; 350108, China
  • [ 6 ] [Hu, Wei]National Engineering Research Centre of Geospatial Information Technology, Fuzhou University, Fuzhou; 350108, China
  • [ 7 ] [Hu, Wei]The Academy of Digital China, Fuzhou University, Fuzhou; 350108, China
  • [ 8 ] [Zeng, Jiangchao]Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou; 350108, China
  • [ 9 ] [Zeng, Jiangchao]National Engineering Research Centre of Geospatial Information Technology, Fuzhou University, Fuzhou; 350108, China
  • [ 10 ] [Zeng, Jiangchao]The Academy of Digital China, Fuzhou University, Fuzhou; 350108, China

Reprint's Address:

Email:

Source:

Journal of Geo-Information Science

ISSN: 1560-8999

CN: 11-5809/P

Year: 2021

Issue: 12

Volume: 23

Page: 2232-2243

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count: 8

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 4
