Publication Search

Query:

Scholar name: Lu Xiaoqiang (卢孝强)

Multiscale Salient Alignment Learning for Remote-Sensing Image-Text Retrieval SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Remote-sensing image-text (RSIT) retrieval uses either textual descriptions or remote-sensing images (RSIs) as queries to retrieve the relevant RSIs or the corresponding text descriptions. Many traditional cross-modal RSIT retrieval methods overlook the importance of capturing salient information and establishing the prior similarity between RSIs and texts, leading to a decline in cross-modal retrieval performance. In this article, we address these challenges with a novel approach called multiscale salient image-guided text alignment (MSITA), which learns salient information by aligning text with images for effective cross-modal RSIT retrieval. MSITA first incorporates a multiscale fusion module and a salient learning module to extract salient information. It then introduces an image-guided text alignment (IGTA) mechanism that uses image information to guide the alignment of texts, capturing fine-grained correspondences between RSI regions and textual descriptions. Beyond these components, a novel loss function is devised to enhance the similarity across different modalities and reinforce the prior similarity between RSIs and texts. Extensive experiments on four widely adopted RSIT datasets confirm that MSITA significantly outperforms other state-of-the-art methods in cross-modal RSIT retrieval.
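
No implementation details are given in this record; purely as a hedged illustration, the PyTorch sketch below shows one way an image-guided text alignment step of this kind could look, with image regions attending over text tokens to produce a per-pair similarity score. Every module, name, and dimension here is assumed, not taken from the paper.

```python
import torch
import torch.nn as nn

class ImageGuidedTextAlignment(nn.Module):
    """Hypothetical sketch of an IGTA-style step: image regions act as
    queries over text tokens, so each region is matched with the text
    evidence most relevant to it before scoring similarity."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # image regions -> queries
        self.k_proj = nn.Linear(dim, dim)  # text tokens  -> keys
        self.v_proj = nn.Linear(dim, dim)  # text tokens  -> values
        self.scale = dim ** -0.5

    def forward(self, img_regions, text_tokens):
        # img_regions: (B, R, D); text_tokens: (B, T, D)
        q = self.q_proj(img_regions)
        k = self.k_proj(text_tokens)
        v = self.v_proj(text_tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        aligned_text = attn @ v  # (B, R, D): text re-expressed per image region
        # average region-to-aligned-text cosine similarity as the pair score
        return torch.cosine_similarity(img_regions, aligned_text, dim=-1).mean(-1)

scores = ImageGuidedTextAlignment()(torch.randn(2, 36, 512), torch.randn(2, 20, 512))
print(scores.shape)  # torch.Size([2])
```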

Keyword :

Cross-modal retrieval; image-guided text alignment (IGTA); prior similarity; salient learning

Cite:

GB/T 7714 Chen, Yaxiong, Huang, Jinghao, Li, Xiaoyu, et al. Multiscale Salient Alignment Learning for Remote-Sensing Image-Text Retrieval [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Chen, Yaxiong, et al. "Multiscale Salient Alignment Learning for Remote-Sensing Image-Text Retrieval." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Chen, Yaxiong, Huang, Jinghao, Li, Xiaoyu, Xiong, Shengwu, Lu, Xiaoqiang. Multiscale Salient Alignment Learning for Remote-Sensing Image-Text Retrieval. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

It is a challenging task to recognize novel categories with only a few labeled remote-sensing images. Currently, meta-learning addresses the problem by learning prior knowledge from another dataset whose classes are disjoint. However, existing methods assume the training dataset comes from the same domain as the test dataset. For remote-sensing images, test datasets may come from different domains, and it is impossible to collect a training dataset for each domain. Meta-learning and transfer learning are widely used to tackle few-shot classification and cross-domain classification, respectively. However, it is difficult to recognize novel categories from various domains with only a few images. In this article, a domain mapping network (DMN) is proposed to cope with few-shot classification under domain shift. DMN trains an efficient few-shot classification model on the source domain and then adapts the model to the target domain. Specifically, dual autoencoders are exploited to fit the source and target domain distributions. First, DMN learns an autoencoder on the source domain to fit the source domain distribution. Then, a target autoencoder is initialized from the source-domain autoencoder and further updated with a few target images. To ensure distribution alignment, cycle-consistency losses are proposed to jointly train the source and target autoencoders. Extensive experiments validate the generalizability and superiority of the proposed method.
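
As a rough sketch only, the snippet below writes out one plausible form of the dual-autoencoder training objective the abstract describes: per-domain reconstruction plus cycle terms that cross the two domains and require returning to the input. The network sizes, the exact loss composition, and the helper names are our assumptions, not the paper's.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_autoencoder(dim=256, latent=64):
    enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU(), nn.Linear(latent, latent))
    dec = nn.Sequential(nn.Linear(latent, latent), nn.ReLU(), nn.Linear(latent, dim))
    return enc, dec

enc_s, dec_s = make_autoencoder()
# target autoencoder starts as a copy of the source one, then adapts
# using only a few target-domain images
enc_t, dec_t = copy.deepcopy(enc_s), copy.deepcopy(dec_s)

def dmn_losses(x_src, x_tgt):
    # per-domain reconstruction
    rec = F.mse_loss(dec_s(enc_s(x_src)), x_src) + F.mse_loss(dec_t(enc_t(x_tgt)), x_tgt)
    # cycle-consistency: map through the other domain's autoencoder and back
    cyc_s = F.mse_loss(dec_s(enc_t(dec_t(enc_s(x_src)))), x_src)
    cyc_t = F.mse_loss(dec_t(enc_s(dec_s(enc_t(x_tgt)))), x_tgt)
    return rec + cyc_s + cyc_t

loss = dmn_losses(torch.randn(8, 256), torch.randn(4, 256))
```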

Keyword :

Adaptation models; Cross-domain classification; few-shot classification; Image recognition; Measurement; meta-learning; Metalearning; Remote sensing; remote sensing scene classification; Task analysis; Training; transfer learning

Cite:

GB/T 7714 Lu, Xiaoqiang, Gong, Tengfei, Zheng, Xiangtao. Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Lu, Xiaoqiang, et al. "Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Lu, Xiaoqiang, Gong, Tengfei, Zheng, Xiangtao. Domain Mapping Network for Remote Sensing Cross-Domain Few-Shot Classification. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Global-Group Attention Network With Focal Attention Loss for Aerial Scene Classification SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 1

Abstract :

Aerial scene classification, which aims to assign a specific semantic class to each aerial image, is a fundamental task in the remote sensing community. Aerial scene images contain highly diverse and complex ground features. Although convolution fits local image statistics well, it limits such models in capturing the global context hidden in aerial scenes. Furthermore, to optimize the feature space, many methods add class information to the feature embedding space, but they seldom combine model structure with class information to obtain more separable feature representations. In this article, we propose to address these limitations in a unified framework (i.e., CGFNet) from two aspects: focusing on the key information of input images and optimizing the feature space. Specifically, we propose a global-group attention module (GGAM) to adaptively learn and selectively focus on important information from input images. GGAM consists of two parallel branches: the adaptive global attention branch (AGAB) and the region-aware attention branch (RAAB). AGAB utilizes an adaptive pooling operation to better model the global context in aerial scenes. As a supplement to AGAB, RAAB combines grouping features with spatial attention to spatially enhance the semantic distribution of features (i.e., it selectively focuses on effective regions of features and ignores irrelevant semantic regions). In parallel, a focal attention loss (FA-Loss) is exploited to introduce class information into the attention vector space, which improves intraclass consistency and interclass separability. Experimental results on four publicly available and challenging datasets demonstrate the effectiveness of our method.
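
To make the two-branch idea concrete, here is a toy PyTorch module with an AGAB-like branch (adaptive pooling feeding a channel gate) and a RAAB-like branch (per-group spatial attention). It mirrors the described structure only loosely; all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class GGAMSketch(nn.Module):
    """Toy global-group attention: global channel reweighting followed by
    grouped spatial attention over channel groups."""

    def __init__(self, channels=64, groups=4, pooled=4):
        super().__init__()
        self.global_pool = nn.AdaptiveAvgPool2d(pooled)   # AGAB-like context summary
        self.global_gate = nn.Sequential(
            nn.Linear(channels * pooled * pooled, channels), nn.Sigmoid())
        self.groups = groups
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # RAAB-like map

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        gate = self.global_gate(self.global_pool(x).flatten(1))
        x = x * gate.view(b, c, 1, 1)                     # global reweighting
        xg = x.view(b * self.groups, c // self.groups, h, w)
        stats = torch.cat([xg.mean(1, keepdim=True), xg.amax(1, keepdim=True)], 1)
        mask = torch.sigmoid(self.spatial(stats))         # one spatial map per group
        return (xg * mask).view(b, c, h, w)

out = GGAMSketch()(torch.randn(2, 64, 32, 32))
```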

Keyword :

Aerial scene classification; attention; convolutional neural networks (CNNs); loss function; remote sensing

Cite:

GB/T 7714 Zhao, Yichen, Chen, Yaxiong, Rong, Yi, et al. Global-Group Attention Network With Focal Attention Loss for Aerial Scene Classification [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Zhao, Yichen, et al. "Global-Group Attention Network With Focal Attention Loss for Aerial Scene Classification." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Zhao, Yichen, Chen, Yaxiong, Rong, Yi, Xiong, Shengwu, Lu, Xiaoqiang. Global-Group Attention Network With Focal Attention Loss for Aerial Scene Classification. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Spectrum-Induced Transformer-Based Feature Learning for Multiple Change Detection in Hyperspectral Images SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 2

Abstract :

The multiple change detection (MCD) of hyperspectral images (HSIs) detects change areas and provides "from-to" change information for HSIs acquired over the same area at different times. HSIs have hundreds of spectral bands and contain a large amount of spectral information. However, current deep-learning-based MCD methods do not pay special attention to the interspectral dependency and the effective spectral bands of various land covers, which limits the improvement of HSI change detection (CD) performance. To address these problems, we propose a spectrum-induced transformer-based feature learning (STFL) method for HSIs. The STFL method includes a spectrum-induced transformer-based feature extraction module (STFEM) and an attention-based detection module (ADM). First, 3D-2D convolutional neural networks (CNNs) are used to extract deep features, and a transformer encoder (TE) calculates self-attention matrices along the spectral dimension in STFEM. Then, the extracted deep features and the learned self-attention matrices are dot-multiplied to generate more discriminative features that account for the long-range dependency of the spectrum. Finally, ADM mines the effective spectral bands of the difference features learned from STFEM through the attention block (AB) to explore the discrepancy among difference features, and uses the softmax function to identify multiple changes. The proposed STFL method is validated on two hyperspectral datasets, and the experiments illustrate its superiority over existing MCD methods.
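
The core step, computing self-attention along the spectral dimension and dot-multiplying it with CNN features, can be sketched in a few lines of PyTorch, shown below as a hedged illustration that treats per-band deep features as tokens; names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpectralAttentionSketch(nn.Module):
    """Toy step: self-attention across spectral bands, whose attention
    matrix is then dot-multiplied with the deep features."""

    def __init__(self, feat_dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

    def forward(self, feats):          # feats: (B, bands, feat_dim)
        tokens = self.embed(feats)
        _, weights = self.attn(tokens, tokens, tokens, need_weights=True)
        # weights: (B, bands, bands) band-to-band attention matrix
        return weights @ feats         # (B, bands, feat_dim), spectrum-aware

out = SpectralAttentionSketch()(torch.randn(2, 32, 64))
```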

Keyword :

Attention; deep learning; hyperspectral images (HSIs); multiple change detection (MCD); transformer

Cite:

GB/T 7714 Zhang, Wuxia, Zhang, Yuhang, Gao, Shiwen, et al. Spectrum-Induced Transformer-Based Feature Learning for Multiple Change Detection in Hyperspectral Images [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Zhang, Wuxia, et al. "Spectrum-Induced Transformer-Based Feature Learning for Multiple Change Detection in Hyperspectral Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Zhang, Wuxia, Zhang, Yuhang, Gao, Shiwen, Lu, Xiaoqiang, Tang, Yi, Liu, Shihu. Spectrum-Induced Transformer-Based Feature Learning for Multiple Change Detection in Hyperspectral Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

A Joint Saliency Temporal-Spatial-Spectral Information Network for Hyperspectral Image Change Detection SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Hyperspectral image change detection (HSI-CD) is a fundamental task in remote sensing (RS) observation, which exploits the rich spectral and spatial information in bitemporal HSIs to detect subtle changes on the Earth's surface. However, modern deep learning (DL)-based HSI-CD methods mostly rely on patch-based processing, which introduces spectral band redundancy and spatial noise within limited receptive fields, ignores the extraction and use of saliency information, and limits the improvement of CD performance. To address these issues, this article proposes a joint saliency temporal-spatial-spectral information network (STSS-Net) for HSI-CD. The principal contributions can be summarized as follows: 1) we design a spatial saliency information extraction (SSIE) module that denoises based on the distance from the center pixel and the spectral similarity of the substance, which increases attention to spatial differences between spectrally similar and spectrally distinct substances; 2) we design a compact high-level spectral information tokenizer (CHLSIT) for spectral saliency information, in which the high-level conceptual information of changes of spectral interest is represented by nonlinear combinations of spectral bands, and redundancy is removed by extracting high-level spectral conceptual features; and 3) we combine temporal-spatial-spectral information by exploiting the complementary advantages of CNN and transformer architectures. Experimental results on three real HSI-CD datasets show that STSS-Net improves CD accuracy and noticeably improves the detection of edges and complex regions.
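
For intuition on contribution 1), the snippet below sketches one way to weight a hyperspectral patch by spectral similarity to its center pixel combined with a spatial falloff; the paper's exact SSIE formulation may differ, and the function is hypothetical.

```python
import torch

def spatial_saliency_weights(patch, sigma=3.0):
    """Toy SSIE-style map for a (C, H, W) patch: each pixel weighted by
    (a) spectral cosine similarity to the center pixel and (b) a Gaussian
    falloff with spatial distance from the center."""
    c, h, w = patch.shape
    center = patch[:, h // 2, w // 2]                                  # (C,)
    spec_sim = torch.cosine_similarity(patch.reshape(c, -1),
                                       center[:, None], dim=0)         # (H*W,)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist2 = ((ys - h // 2) ** 2 + (xs - w // 2) ** 2).float()
    spatial = torch.exp(-dist2 / (2 * sigma ** 2)).reshape(-1)
    return (spec_sim * spatial).reshape(h, w)

weights = spatial_saliency_weights(torch.randn(30, 9, 9))
```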

Keyword :

Attention; change detection; convolutional neural networks (CNNs); hyperspectral image (HSI); saliency information; transformer

Cite:

GB/T 7714 Chen, Yaxiong, Zhang, Zhipeng, Dong, Le, et al. A Joint Saliency Temporal-Spatial-Spectral Information Network for Hyperspectral Image Change Detection [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Chen, Yaxiong, et al. "A Joint Saliency Temporal-Spatial-Spectral Information Network for Hyperspectral Image Change Detection." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Chen, Yaxiong, Zhang, Zhipeng, Dong, Le, Xiong, Shengwu, Lu, Xiaoqiang. A Joint Saliency Temporal-Spatial-Spectral Information Network for Hyperspectral Image Change Detection. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

The Zhuhai-1 (珠海一号) Hyperspectral Scene Classification Dataset CSCD PKU
Journal article | 2024, 28 (01), 306-319 | 遥感学报 (National Remote Sensing Bulletin)

Abstract :

Hyperspectral satellite data are trending toward higher spatial resolution, higher spectral resolution, wider swaths, and larger data volumes, and traditional pixel-level classification of hyperspectral images struggles to process such massive data or to efficiently extract the information hidden in complex, large-scale imagery. Recent studies have begun to focus on scene-level classification of hyperspectral images and to gradually build hyperspectral remote sensing scene classification datasets. However, current dataset construction mostly follows the methods used for high-spatial-resolution optical remote sensing scene datasets, interpreting scene categories mainly from the spatial information of the imagery while ignoring the spectral information of hyperspectral scenes. To build a remote sensing scene classification dataset for hyperspectral imagery, this article uses hyperspectral data of the Xi'an area captured by the Zhuhai-1 (珠海一号) hyperspectral satellite, applies unsupervised spectral clustering to assist in locating, cropping, and annotating candidate scene samples, and performs visual screening against high-resolution Google Earth imagery, yielding the Zhuhai-1 hyperspectral scene classification dataset with 6 scene classes and 737 scene samples. Scene classification experiments are conducted from both spectral and spatial perspectives, and benchmark results of methods such as bag of visual words and convolutional neural networks are used to analyze in depth the performance of different algorithms on existing multispectral and hyperspectral remote sensing scene classification datasets. This study provides strong data support for subsequent hyperspectral image interpretation research.
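
As a hedged sketch of the clustering-assisted screening step, the snippet below groups image tiles by their mean spectrum with k-means so an annotator can crop and visually verify candidates per cluster; the tile size, k, and the helper name are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_scenes_by_spectral_clustering(cube, tile=64, k=6):
    """Cluster non-overlapping tiles of an (H, W, B) hyperspectral cube by
    mean spectrum, returning (tile corner, cluster id) pairs for manual
    cropping and visual verification (e.g., against Google Earth imagery)."""
    h, w, b = cube.shape
    tiles, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(cube[y:y + tile, x:x + tile].mean(axis=(0, 1)))
            coords.append((y, x))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(np.stack(tiles))
    return list(zip(coords, labels))

assignments = candidate_scenes_by_spectral_clustering(np.random.rand(256, 256, 32))
```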

Keyword :

Scene classification; dataset; feature extraction; Zhuhai-1 (珠海一号); hyperspectral remote sensing

Cite:

GB/T 7714 刘渊, 郑向涛, 卢孝强. 珠海一号高光谱场景分类数据集 [J]. 遥感学报, 2024, 28 (01): 306-319.
MLA 刘渊, et al. "珠海一号高光谱场景分类数据集." 遥感学报 28.01 (2024): 306-319.
APA 刘渊, 郑向涛, 卢孝强. 珠海一号高光谱场景分类数据集. 遥感学报, 2024, 28 (01), 306-319.

Co-Enhanced Global-Part Integration for Remote-Sensing Scene Classification SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Remote-sensing (RS) scene classification aims to classify RS images with similar scene characteristics into one category. Many RS images have complex backgrounds, rich content, and multiscale targets, exhibiting both intraclass separation and interclass convergence. Therefore, discriminative feature representations designed to highlight the differences between classes are key to RS scene classification. Existing methods represent scene images by extracting either global context or discriminative part features from RS images. However, global-based methods often miss salient details in similar RS scenes, while part-based methods tend to ignore the relationships between local ground objects, weakening the discriminative feature representation. In this article, we propose to combine global context and part-level discriminative features within a unified framework called CGINet for accurate RS scene classification. Specifically, we develop a light context-aware attention block (LCAB) to explicitly model the global context, yielding larger receptive fields and richer contextual information. A co-enhanced loss module (CELM) is also devised to encourage the model to actively locate discriminative parts for feature enhancement. In particular, CELM is used only during training and is not activated during inference, so it adds little computational cost. Benefiting from LCAB and CELM, the proposed CGINet improves feature discriminability and thereby classification performance. Comprehensive experiments on four benchmark datasets show that the proposed method achieves consistent performance gains over state-of-the-art (SOTA) RS scene classification methods.
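
The train-only nature of CELM is the interesting engineering point; below is a hedged sketch of that pattern, an auxiliary part-classification loss computed only in training mode so inference pays nothing extra. The part-selection heuristic and all names here are our own, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainOnlyPartLoss(nn.Module):
    """Toy stand-in for a co-enhanced loss module: extra part-level
    supervision during training, a no-op at inference."""

    def __init__(self, channels=64, num_classes=10):
        super().__init__()
        self.part_head = nn.Linear(channels, num_classes)

    def forward(self, feats, labels=None):                # feats: (B, C, H, W)
        if not self.training or labels is None:
            return feats.new_zeros(())                    # inactive at inference
        b, c, h, w = feats.shape
        # crude "discriminative part": the most energetic spatial location
        idx = feats.pow(2).sum(1).flatten(1).argmax(1)    # (B,)
        parts = feats.flatten(2)[torch.arange(b), :, idx] # (B, C)
        return F.cross_entropy(self.part_head(parts), labels)

celm = TrainOnlyPartLoss()
aux_loss = celm(torch.randn(2, 64, 16, 16), torch.tensor([1, 3]))
```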

Keyword :

Attention; Context modeling; convolutional neural networks (CNNs); discriminative part discovery; Feature extraction; remote sensing (RS); scene classification; Semantics; Technological innovation; Training

Cite:

GB/T 7714 Zhao, Yichen, Chen, Yaxiong, Xiong, Shengwu, et al. Co-Enhanced Global-Part Integration for Remote-Sensing Scene Classification [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Zhao, Yichen, et al. "Co-Enhanced Global-Part Integration for Remote-Sensing Scene Classification." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Zhao, Yichen, Chen, Yaxiong, Xiong, Shengwu, Lu, Xiaoqiang, Zhu, Xiao Xiang, Mou, Lichao. Co-Enhanced Global-Part Integration for Remote-Sensing Scene Classification. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Oriented Object Detector With Gaussian Distribution Cost Label Assignment and Task-Decoupled Head SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Recently, oriented object detection in remote sensing images has garnered significant attention due to its broad range of applications. Early oriented object detectors adhered to established general object detection frameworks, using label assignment strategies based on horizontal bounding box (HBB) annotations or a rotation-agnostic cost function. Such strategies may not reflect the large aspect ratios and rotation of arbitrarily oriented objects in remote sensing images and require heavy parameter tuning during training, which ultimately harms detector performance. Furthermore, the localization quality of oriented objects depends on precise rotation angle prediction, exacerbating the inconsistency between the classification and regression tasks in oriented object detection. To address these issues, we propose the Gaussian distribution cost optimal transport assignment (GCOTA) and the decoupled layer attention angle head (DLAAH). Specifically, GCOTA uses a Gaussian distribution-based cost function for optimal transport (OT) label assignment during training, alleviating the impact of rotation angle and large aspect ratios in remote sensing images. DLAAH predicts the rotation angle independently and incorporates layer attention to obtain task-specific features from the shared FPN features, enhancing angle prediction and improving consistency across tasks. Based on these components, we present an anchor-free oriented detector, the Gaussian distribution and task-decoupled head oriented detector (GTDet), and a multiclass ship detection dataset captured in real scenarios (CGWX), which provides a benchmark for fine-grained object recognition in remote sensing images. Comprehensive experiments on CGWX and several challenging public datasets, including DOTAv1.0 and HRSC2016, demonstrate that our method achieves superior performance on oriented object detection tasks. The code is available at https://github.com/WUTCM-Lab/GTDet.
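
The abstract names a Gaussian-distribution-based assignment cost without spelling it out; as one plausible (assumed) instantiation, the sketch below converts rotated boxes to 2-D Gaussians and scores pairs with the Gaussian Wasserstein distance commonly used for rotated boxes.

```python
import torch

def rbox_to_gaussian(cx, cy, w, h, theta):
    """Rotated box -> 2-D Gaussian: mean = center,
    covariance = R diag((w/2)^2, (h/2)^2) R^T."""
    mu = torch.stack([cx, cy], -1)
    cos, sin = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([cos, -sin], -1),
                     torch.stack([sin, cos], -1)], -2)
    S = torch.diag_embed(torch.stack([(w / 2) ** 2, (h / 2) ** 2], -1))
    return mu, R @ S @ R.transpose(-1, -2)

def gwd_cost(box1, box2):
    """Squared 2-Wasserstein distance between the two box Gaussians,
    using the 2x2 closed form via trace and determinant."""
    mu1, S1 = rbox_to_gaussian(*box1)
    mu2, S2 = rbox_to_gaussian(*box2)
    loc = ((mu1 - mu2) ** 2).sum(-1)
    tr = torch.diagonal(S1 + S2, dim1=-2, dim2=-1).sum(-1)
    t = torch.diagonal(S1 @ S2, dim1=-2, dim2=-1).sum(-1)
    d = torch.det(S1) * torch.det(S2)
    cross = torch.sqrt(torch.clamp(t + 2 * torch.sqrt(torch.clamp(d, min=0)), min=0))
    return loc + tr - 2 * cross

cost = gwd_cost(torch.tensor([10., 10., 8., 4., 0.3]),
                torch.tensor([11., 9., 8., 4., 0.5]))
```

In an OT-style assignment, pairwise costs of this kind would populate the transport cost matrix between ground-truth boxes and candidate predictions.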

Keyword :

Anchor-free detector; deep convolution neural networks; oriented object detection; remote sensing images

Cite:

GB/T 7714 Huang, Qiangqiang, Yao, Ruilin, Lu, Xiaoqiang, et al. Oriented Object Detector With Gaussian Distribution Cost Label Assignment and Task-Decoupled Head [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Huang, Qiangqiang, et al. "Oriented Object Detector With Gaussian Distribution Cost Label Assignment and Task-Decoupled Head." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Huang, Qiangqiang, Yao, Ruilin, Lu, Xiaoqiang, Zhu, Jishuai, Xiong, Shengwu, Chen, Yaxiong. Oriented Object Detector With Gaussian Distribution Cost Label Assignment and Task-Decoupled Head. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Integrating Detailed Features and Global Contexts for Semantic Segmentation in Ultrahigh-Resolution Remote Sensing Images SCIE
Journal article | 2024, 62 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 2

Abstract :

Semantic segmentation of ultrahigh-resolution (UHR) remote sensing images is a fundamental task for many downstream applications. Achieving precise pixel-level classification is paramount for obtaining strong segmentation results, and the challenge is compounded by intricate segmentation boundaries and small objects in the imagery. Meeting these demands requires integrating two crucial components: global contextual information and spatial detail features. To this end, we propose the multilevel context-aware segmentation network (MCSNet), which models the global context while also extracting fine spatial detail features to optimize segmentation outcomes. The strength of MCSNet lies in two pivotal modules: the spatial detail feature extraction (SDFE) module and the refined multiscale feature fusion (RMFF) module. Moreover, to further harness the potential of MCSNet, a multitask learning approach is employed that integrates boundary detection with semantic segmentation, ensuring well-rounded segmentation capability. The efficacy of MCSNet is demonstrated through comprehensive experiments on two established International Society for Photogrammetry and Remote Sensing (ISPRS) 2-D semantic labeling datasets, Potsdam and Vaihingen. These experiments show that MCSNet delivers state-of-the-art performance, as evidenced by its mean intersection over union (mIoU) and mean F1-score (mF1) metrics. The code is available at: https://github.com/WUTCM-Lab/MCSNet.
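
The multitask coupling reduces, at its simplest, to a weighted sum of a segmentation loss and a boundary loss; the sketch below shows that shape. The paper's heads and weighting are not specified here, so lam and the tensor layout are assumptions.

```python
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_target, edge_logits, edge_target, lam=1.0):
    """Joint objective sketch: pixelwise cross-entropy for segmentation
    plus an auxiliary binary boundary-detection loss, weighted by lam."""
    seg = F.cross_entropy(seg_logits, seg_target)      # (B,K,H,W) vs (B,H,W)
    edge = F.binary_cross_entropy_with_logits(edge_logits, edge_target)
    return seg + lam * edge

loss = multitask_loss(torch.randn(2, 6, 64, 64),
                      torch.randint(0, 6, (2, 64, 64)),
                      torch.randn(2, 1, 64, 64),
                      torch.rand(2, 1, 64, 64))
```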

Keyword :

Cascade; multilevel fusion; multitask learning; remote sensing; Semantics; semantic segmentation

Cite:

GB/T 7714 Chen, Yaxiong, Wang, Yujie, Xiong, Shengwu, et al. Integrating Detailed Features and Global Contexts for Semantic Segmentation in Ultrahigh-Resolution Remote Sensing Images [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.
MLA Chen, Yaxiong, et al. "Integrating Detailed Features and Global Contexts for Semantic Segmentation in Ultrahigh-Resolution Remote Sensing Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024).
APA Chen, Yaxiong, Wang, Yujie, Xiong, Shengwu, Lu, Xiaoqiang, Zhu, Xiao Xiang, Mou, Lichao. Integrating Detailed Features and Global Contexts for Semantic Segmentation in Ultrahigh-Resolution Remote Sensing Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62.

Implementation of high thermal conductivity and synaptic metaplasticity in vertically-aligned hexagonal boron nitride-based memristor EI CSCD
Journal article | 2024, 67 (6), 1907-1914 | Science China Materials

Abstract :

Next-generation computing systems will need to perform 10^18 floating-point operations per second to address the exponential growth of data from sensory terminals, driven by advances in artificial intelligence and the Internet of Things. Even if a supercomputer possesses the capability to execute these operations, managing heat dissipation becomes a significant challenge when the electronic synapse array reaches a scale comparable to the human neural network. One potential solution to thermal hotspots in electronic devices is vertically aligned hexagonal boron nitride (h-BN), known for its high thermal conductivity. In this study, we developed textured h-BN films using the high-power impulse magnetron sputtering technique. The thermal conductivity of the oriented h-BN film is approximately 354% higher than that of the randomly oriented counterpart. By fabricating electronic synapses based on the textured h-BN thin film, we demonstrate various forms of bio-synaptic plasticity in this device. Our results indicate that orientation engineering can effectively enable h-BN to function as a suitable self-heat-dissipation layer, paving the way for future wearable memory devices, solar cells, and neuromorphic devices.

Keyword :

Boron nitride; Cell engineering; Digital arithmetic; III-V semiconductors; Magnetron sputtering; Nitrides; Supercomputers; Textures; Thin films

Cite:

GB/T 7714 Zhang, Haizhong, Ju, Xin, Jiang, Haitao, et al. Implementation of high thermal conductivity and synaptic metaplasticity in vertically-aligned hexagonal boron nitride-based memristor [J]. Science China Materials, 2024, 67 (6): 1907-1914.
MLA Zhang, Haizhong, et al. "Implementation of high thermal conductivity and synaptic metaplasticity in vertically-aligned hexagonal boron nitride-based memristor." Science China Materials 67.6 (2024): 1907-1914.
APA Zhang, Haizhong, Ju, Xin, Jiang, Haitao, Yang, Dan, Wei, Rongshan, Hu, Wei, et al. Implementation of high thermal conductivity and synaptic metaplasticity in vertically-aligned hexagonal boron nitride-based memristor. Science China Materials, 2024, 67 (6), 1907-1914.
