Query:
Scholar name: Lu Xiaoqiang (卢孝强)
Abstract :
Mobile edge caching (MEC) has grown substantially with the rapid development in scale and complexity of data traffic. By exploiting the expansive coverage of autonomous aerial vehicles (AAVs), MEC can serve massive numbers of vehicle users (VUs) simultaneously, which is promising for enhancing network transmission efficiency. Nonetheless, because AAVs' limited endurance and airborne capacity challenge the timeliness and freshness of content services, caching strategies that account for the real-time freshness of content in large-scale, dynamic Internet of Vehicles (IoV) environments remain an open problem. With this in mind, this article jointly optimizes the cache refreshing cycle and content placement in cache-enabled AAV-assisted vehicular integrated networks (CAVINs) to minimize the content Age of Information (AoI) and the energy consumption of the macro AAV. Since the joint optimization problem is variationally coupled with nonconvex binary constraints, it is decoupled and solved by a double-iteration method: the optimal cache refreshing cycle is derived in semi-closed form from the Karush-Kuhn-Tucker (KKT) conditions, and a locally optimal content placement is obtained through successive convex approximation (SCA). Simulation results corroborate the effectiveness and superiority of the proposed scheme.
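To make the double-iteration structure concrete, here is a minimal, self-contained sketch of alternating between a KKT-style closed-form cycle update and a placement update. The objective, the coefficient k, the utility scores, and the top-k rounding are illustrative assumptions standing in for the paper's actual AoI/energy model and SCA subproblem.

```python
import numpy as np

rng = np.random.default_rng(0)
utilities = rng.random(20)              # placeholder content popularities

def optimal_cycle(x, w_aoi=1.0, w_energy=1.0):
    # Toy semi-closed form: minimizing w_aoi*T/2 + w_energy*k/T over T > 0
    # gives T* = sqrt(2*k*w_energy/w_aoi) from the first-order (KKT) condition.
    k = 1.0 + x.sum()                   # assumed refresh-energy coefficient
    return np.sqrt(2.0 * k * w_energy / w_aoi)

def update_placement(T, cache_size=5):
    # Stand-in for the SCA step: score contents and keep the top-k. A real SCA
    # iteration would solve a convexified subproblem over a relaxed x in [0,1]^N.
    scores = utilities / (1.0 + T)      # assumed coupling between placement and T
    x = np.zeros_like(utilities)
    x[np.argsort(-scores)[:cache_size]] = 1.0
    return x

x = np.zeros(20)
x[:5] = 1.0                             # initial placement
for _ in range(10):                     # outer double-iteration loop
    T = optimal_cycle(x)                # step 1: refresh cycle via KKT formula
    x = update_placement(T)             # step 2: placement via (mock) SCA
print(f"cycle T = {T:.3f}, cached items = {int(x.sum())}")
```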
Keyword :
Age of Information (AoI); Autonomous aerial vehicles; Complexity theory; Energy consumption; Energy efficiency; Information age; Internet of Vehicles (IoV); mobile edge caching (MEC); Optimization; Real-time systems; Simulation; unmanned aerial vehicles (AAVs)-assisted networks; Vehicle dynamics
Cite:
GB/T 7714 | Xiao, Yang , Lin, Zhijian , Cao, Xiaoxiao et al. AoI Energy-Efficient Edge Caching in AAV-Assisted Vehicular Networks [J]. | IEEE INTERNET OF THINGS JOURNAL , 2025 , 12 (6) : 6764-6774 . |
MLA | Xiao, Yang et al. "AoI Energy-Efficient Edge Caching in AAV-Assisted Vehicular Networks" . | IEEE INTERNET OF THINGS JOURNAL 12 . 6 (2025) : 6764-6774 . |
APA | Xiao, Yang , Lin, Zhijian , Cao, Xiaoxiao , Chen, Youjia , Lu, Xiaoqiang . AoI Energy-Efficient Edge Caching in AAV-Assisted Vehicular Networks . | IEEE INTERNET OF THINGS JOURNAL , 2025 , 12 (6) , 6764-6774 . |
Abstract :
Under the guidance and impetus of China's coordinated implementation of the strategies of invigorating the country through science and education, strengthening the country through talent, and innovation-driven development, together with integrated policies for advancing education, scientific and technological innovation, and talent cultivation, the collaborative talent-cultivation mechanism among universities, enterprises, and research institutes has moved from its initial proposal and pilot stage into a period of implementation and in-depth development. The collaborative cultivation model that integrates science with education and industry with education will be further deepened, mainly through the diversification of cultivating entities, the elevation of cultivation levels, the optimization of cultivation mechanisms, and the innovation and upgrading of cultivation approaches. Taking the College of Physics and Information Engineering of Fuzhou University as an example, and targeting national strategic needs for technology and talent in the electronic information field, this paper explores how leading universities, top enterprises, and research institutions jointly train graduate students by setting up customized cohort programs, co-building research platforms, arranging targeted employment at leading enterprises, co-organizing academic exchange forums, co-building supervisor teams, and jointly evaluating training quality, thereby creating a new paradigm of all-factor integrated graduate education and striving to achieve the independent cultivation of innovative, high-level talent.
Keyword :
industry-education integration; university-enterprise joint cohort programs; graduate education; science-education integration
Cite:
GB/T 7714 | 杨晓丹 , 柯颖莹 , 郑志刚 et al. 全要素融合的研究生产学研协同培养机制构建研究 [J]. | 中国高校科技 , 2025 , (2) : 93-96 . |
MLA | 杨晓丹 et al. "全要素融合的研究生产学研协同培养机制构建研究" . | 中国高校科技 2 (2025) : 93-96 . |
APA | 杨晓丹 , 柯颖莹 , 郑志刚 , 魏金明 , 卢孝强 , 李福山 . 全要素融合的研究生产学研协同培养机制构建研究 . | 中国高校科技 , 2025 , (2) , 93-96 . |
Abstract :
In recent years, the rapid development of unmanned aerial vehicle (UAV) technology has generated large numbers of aerial images captured by UAVs. Consequently, object detection in UAV aerial images has emerged as a research focus. However, due to the flexible flight heights and diverse shooting angles of UAVs, two significant challenges arise in UAV aerial images: extreme variation in target scale and the presence of numerous small targets. To address these challenges, this article introduces a semantic information-guided fusion module specifically tailored for small targets. This module utilizes high-level semantic information to guide and align the underlying texture information, thereby enhancing the semantic representation of small targets at the feature level and improving the model's ability to detect them. In addition, this article introduces a novel global-local fusion detection strategy to strengthen small-target detection. We redesigned the foreground region assembly method to address the drawback of previous methods that required multiple inferences. Extensive experiments on the VisDrone and UAVDT datasets demonstrate that our two self-designed modules significantly enhance small-target detection compared with the YOLOX-M model. Our code is publicly available at: https://github.com/LearnYZZ/GLSDet.
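As an illustration of the guidance idea, the sketch below upsamples high-level (semantically rich) features and uses them to gate low-level (texture-rich) features. The layer choices, channel sizes, and the sigmoid gate are assumptions for illustration, not the module as implemented in GLSDet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedFusion(nn.Module):
    """Fuse low-level texture features under high-level semantic guidance."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.align = nn.Conv2d(high_ch, out_ch, 1)   # project semantics
        self.embed = nn.Conv2d(low_ch, out_ch, 1)    # project texture
        self.gate = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.Sigmoid())                            # semantic attention map

    def forward(self, low, high):
        high = F.interpolate(self.align(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        low = self.embed(low)
        return low * self.gate(high) + high          # guided fusion

low = torch.randn(1, 256, 80, 80)    # shallow, high-resolution features
high = torch.randn(1, 512, 20, 20)   # deep, low-resolution features
print(SemanticGuidedFusion(256, 512, 256)(low, high).shape)  # (1, 256, 80, 80)
```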
Keyword :
Accuracy; Assembly; Autonomous aerial vehicles; Decoupled head attention; Detectors; Feature extraction; feature fusion; Object detection; remote sensing image recognition; robust adversarial; robustness; rotational object detection; Semantics; Superresolution; Technological innovation; Transformers
Cite:
GB/T 7714 | Chen, Yaxiong , Ye, Zhengze , Sun, Haokai et al. Global-Local Fusion With Semantic Information Guidance for Accurate Small Object Detection in UAV Aerial Images [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 . |
MLA | Chen, Yaxiong et al. "Global-Local Fusion With Semantic Information Guidance for Accurate Small Object Detection in UAV Aerial Images" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) . |
APA | Chen, Yaxiong , Ye, Zhengze , Sun, Haokai , Gong, Tengfei , Xiong, Shengwu , Lu, Xiaoqiang . Global-Local Fusion With Semantic Information Guidance for Accurate Small Object Detection in UAV Aerial Images . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 . |
Abstract :
Solar-blind ultraviolet photodetectors (SBUV-PDs) are used in various military and civilian fields, including missile tracking, high-voltage detection, and fire warning systems. Ga2O3 emerges as a prime candidate for such PDs owing to its wide bandgap, remarkable thermal stability, and facile fabrication process. The metal-semiconductor-metal (MSM) structure attracts attention for its swift response time and straightforward preparation, making it a focal point among diverse PD architectures. Nevertheless, the metal surface impedes optical absorption, thereby diminishing the quantum efficiency of the PD. In this work, we introduce a nanograting onto the Ga2O3 surface, which yields a 747-fold increase in responsivity in the SBUV region compared to a normal grating-free MSM structure. Metal gratings can excite surface plasmon polaritons (SPPs), augmenting the optical absorption of the PD and stimulating hot electrons to increase the photocurrent. However, the broadband response introduced by metal gratings is a common problem. By optimizing the doping concentration of the Ga2O3 absorption layer and adjusting the incident light intensity and the reverse voltage, the broadband-response problem is resolved: the responsivity of the device in the non-SBUV region is suppressed 24-fold. This methodology holds promise as a reliable approach for fabricating high-performance SBUV-PDs.
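For reference, responsivity and external quantum efficiency are related by the standard expressions R = I_ph / P_opt and EQE = R·hc/(qλ). The quick check below uses illustrative numbers, not measurements from this work.

```python
# Standard photodetector figures of merit; the inputs are made-up examples.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def responsivity(i_photo_a, p_optical_w):
    return i_photo_a / p_optical_w             # R = I_ph / P_opt, in A/W

def external_qe(resp_a_per_w, wavelength_m):
    return resp_a_per_w * H * C / (Q * wavelength_m)

r = responsivity(1e-6, 1e-5)                   # 1 uA photocurrent at 10 uW
print(f"R = {r:.2f} A/W, EQE at 254 nm = {external_qe(r, 254e-9):.1%}")
```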
Keyword :
Ga2O3; metal-semiconductor-metal (MSM); nanograting; photodetector (PD); plasmon; responsivity
Cite:
GB/T 7714 | Li, Jialong , Yang, Dan , Lu, Xiaoqiang et al. Enhancing Ga2O3 Solar-Blind Photodetectors via Metal Nanogratings [J]. | IEEE SENSORS JOURNAL , 2025 , 25 (1) : 434-442 . |
MLA | Li, Jialong et al. "Enhancing Ga2O3 Solar-Blind Photodetectors via Metal Nanogratings" . | IEEE SENSORS JOURNAL 25 . 1 (2025) : 434-442 . |
APA | Li, Jialong , Yang, Dan , Lu, Xiaoqiang , Zhang, Haizhong , Zhu, Minmin . Enhancing Ga2O3 Solar-Blind Photodetectors via Metal Nanogratings . | IEEE SENSORS JOURNAL , 2025 , 25 (1) , 434-442 . |
Abstract :
In hyperspectral images (HSIs), different land cover (LC) classes have distinct reflective characteristics at various wavelengths. Therefore, relying on only a few bands to distinguish all LC classes often leads to information loss, resulting in poor average accuracy. To address this problem, we propose the Cascaded Spatial Cross-Attention Network (CSCANet) for HSI classification. We design a cascaded spatial cross-attention module, which first performs cross-attention on local and global features in the spatial context, then uses a group cascade structure to sequentially propagate important spatial regions across the different channels, and finally obtains joint attention features to improve the robustness of the network. Moreover, we design a two-branch feature separation structure based on spatial-spectral features to separate different LC tokens as much as possible, thereby improving the distinguishability of different LC classes. Extensive experiments demonstrate that our method achieves excellent performance in enhancing classification accuracy and robustness.
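The core operation, cross-attention from local queries to global keys/values, can be sketched as standard scaled dot-product attention. The token shapes below are assumptions, and the group cascade structure of CSCANet is omitted for brevity.

```python
import torch

def spatial_cross_attention(local_tok, global_tok):
    """local_tok: (B, N, C) queries; global_tok: (B, M, C) keys/values."""
    d_k = local_tok.shape[-1]
    attn = torch.softmax(
        local_tok @ global_tok.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return attn @ global_tok                 # (B, N, C) fused tokens

local_tok = torch.randn(2, 49, 64)    # e.g., 7x7 local patch tokens
global_tok = torch.randn(2, 196, 64)  # e.g., 14x14 global tokens
print(spatial_cross_attention(local_tok, global_tok).shape)  # (2, 49, 64)
```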
Keyword :
Accuracy; Artificial intelligence; Data mining; Feature extraction; group cascade structure; Hyperspectral image classification; Hyperspectral imaging; Image classification; Reflectivity; spatial cross-attention; spatial-spectral feature extraction; Sun; Technological innovation; Transformers
Cite:
GB/T 7714 | Zhang, Bo , Chen, Yaxiong , Xiong, Shengwu et al. Hyperspectral Image Classification via Cascaded Spatial Cross-Attention Network [J]. | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2025 , 34 : 899-913 . |
MLA | Zhang, Bo et al. "Hyperspectral Image Classification via Cascaded Spatial Cross-Attention Network" . | IEEE TRANSACTIONS ON IMAGE PROCESSING 34 (2025) : 899-913 . |
APA | Zhang, Bo , Chen, Yaxiong , Xiong, Shengwu , Lu, Xiaoqiang . Hyperspectral Image Classification via Cascaded Spatial Cross-Attention Network . | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2025 , 34 , 899-913 . |
Abstract :
Modern detectors are mostly trained under single, limited conditions. However, object detection in autonomous driving faces complex, open situations, especially in urban street scenes with dense objects and cluttered backgrounds. Because of the shift in data distribution, modern detectors do not perform well in real urban environments. Using domain adaptation to improve detection performance is one of the key ways to extend object detection from limited to open situations. To this end, this article proposes Domain Adaptation of Anchor-Free object detection (DAAF) for urban traffic. DAAF is a cross-domain object detection method that performs feature alignment in two respects. On the one hand, we design a fully convolutional adversarial training method for global feature alignment at the image level. Images can generally be decomposed into structural and texture information; in urban street scenes the structural information is largely similar, so the main difference between the source and target domains lies in texture. Therefore, during global feature alignment, this paper proposes a method called Texture Information Limitation (TIL). On the other hand, to handle the variable aspect ratios of objects in urban street scenes, this article adopts an anchor-free detector as the baseline. Since an anchor-free detector obtains neither explicit nor implicit instance-level features, we adopt Pixel-Level Adaptation (PLA) to align local features instead of instance-level alignment. Object size has the greatest impact on final detection performance, and urban scenes contain objects at a wide range of scales. Guided by attention mechanisms, a multi-level adversarial network called Scale Information Limitation (SIL) is designed to align output-space features at different feature levels. We conducted cross-domain detection experiments on various urban-streetscape autonomous-driving object detection datasets, covering adverse weather conditions, synthetic-to-real data, and cross-camera adaptation. The experimental results indicate that the proposed method is effective.
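Image-level adversarial alignment of the kind described above is typically built from a domain discriminator trained through a gradient reversal layer (GRL). The sketch below is a generic illustration of that ingredient, not the exact DAAF architecture; the channel size and the per-pixel discriminator head are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None    # reverse gradients into the backbone

class DomainDiscriminator(nn.Module):
    def __init__(self, ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1))        # per-pixel domain logit

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

disc = DomainDiscriminator()
feat = torch.randn(2, 256, 32, 32, requires_grad=True)
logits = disc(feat)                     # train with BCE: source = 0, target = 1
print(logits.shape)                     # (2, 1, 32, 32)
```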
Keyword :
Domain adaptation; Object detection; Urban traffic
Cite:
GB/T 7714 | Yu, Xiaoyong , Lu, Xiaoqiang . Domain Adaptation of Anchor-Free object detection for urban traffic [J]. | NEUROCOMPUTING , 2024 , 582 . |
MLA | Yu, Xiaoyong et al. "Domain Adaptation of Anchor-Free object detection for urban traffic" . | NEUROCOMPUTING 582 (2024) . |
APA | Yu, Xiaoyong , Lu, Xiaoqiang . Domain Adaptation of Anchor-Free object detection for urban traffic . | NEUROCOMPUTING , 2024 , 582 . |
Abstract :
In recent years, with the continuous advancement of remote sensing (RS) technology and text processing techniques, there has been a growing abundance of RS images and associated textual data. Combining RS images with their corresponding textual data allows for integrated analysis and retrieval, which holds significant practical value across multiple application domains, including geographic information systems (GIS), environmental monitoring, and agricultural management. RS images contain multiple targets at multiple scales, and the textual descriptions of these targets are not fully utilized, which reduces retrieval accuracy. Previous methods have struggled to balance intermodality information interaction and intramodality feature fusion, and they have paid little attention to the consistency of distributions within modalities. In light of this, this article proposes a symmetric multilevel guidance network (SMLGN) for cross-modal retrieval in RS. SMLGN first introduces fusion guidance between local and global features within modalities and fine-grained bidirectional guidance between modalities, allowing a common semantic space to be learned. Furthermore, to address the distribution differences of different modalities within the common semantic space, we design an adversarial joint learning framework and a multiobjective loss function to optimize SMLGN and achieve consistency in data distribution. The experimental results demonstrate that SMLGN performs well in cross-modal retrieval between RS images and textual data: it effectively integrates the information from both modalities, improving the accuracy and reliability of retrieval.
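A common way to learn such a shared semantic space is a symmetric contrastive objective over paired image and text embeddings. The sketch below uses a generic InfoNCE-style loss; it illustrates the idea of pulling matched pairs together, not SMLGN's actual multiobjective loss.

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(img.size(0))       # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

img_emb = torch.randn(8, 512)   # pooled RS image embeddings
txt_emb = torch.randn(8, 512)   # pooled caption embeddings
print(symmetric_contrastive_loss(img_emb, txt_emb).item())
```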
Keyword :
Adversarial learning; Adversarial machine learning; feature fusion; Green buildings; modality alignment; multisubspace joint learning; Remote sensing; remote sensing (RS) image-text (I2T) retrieval; Roads; Semantics; Sensors; Task analysis
Cite:
GB/T 7714 | Chen, Yaxiong , Huang, Jirui , Xiong, Shengwu et al. Integrating Multisubspace Joint Learning With Multilevel Guidance for Cross-Modal Retrieval of Remote Sensing Images [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 . |
MLA | Chen, Yaxiong et al. "Integrating Multisubspace Joint Learning With Multilevel Guidance for Cross-Modal Retrieval of Remote Sensing Images" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024) . |
APA | Chen, Yaxiong , Huang, Jirui , Xiong, Shengwu , Lu, Xiaoqiang . Integrating Multisubspace Joint Learning With Multilevel Guidance for Cross-Modal Retrieval of Remote Sensing Images . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 . |
Abstract :
In remote sensing of Earth observation, multi-source data can be captured by multiple platforms, multiple sensors, and multiple perspectives. These data provide complementary information for interpreting remote sensing scenes. Although they offer richer information, they also increase the demand for model depth and complexity. Deep learning plays a pivotal role in unlocking the potential of remote sensing data by delving into the semantic layers of scenes and extracting intricate features from images, and recent advancements in artificial intelligence have greatly enhanced this process. However, deep learning networks have two limitations when applied to remote sensing images. 1) They involve huge numbers of parameters, are difficult to train, and rely heavily on labeled training data, whereas remote sensing images are heterogeneous and difficult to annotate, so manual labeling cannot meet the training needs of deep learning. 2) Variations in remote sensing platforms, sensors, shooting angles, resolution, time, location, and weather all affect remote sensing images, so the images to be interpreted and the training samples rarely follow the same distribution. This inconsistency results in weak generalization in existing models, especially when dealing with data from different distributions. To address this issue, cross-domain remote sensing scene interpretation aims to train a model on labeled remote sensing scene data (source domain) and apply it to new, unlabeled scene data (target domain) in an appropriate way. This approach reduces dependence on target-domain data and relaxes the identical-distribution assumption of existing deep learning tasks. The shallow layers of convolutional neural networks can serve as general-purpose feature extractors, but deeper layers are more task-specific and may introduce bias when applied to other tasks; therefore, the migrated model must be modified to interpret the target domain. Cross-domain interpretation tasks aim to build a model that adapts to various scene changes by exploiting transfer learning, domain adaptation, and other techniques to reduce the prediction errors caused by changes in the data domain, thereby improving the robustness and generalization ability of the model. Interpreting cross-domain remote sensing scenes typically requires data from multiple remote sensing sources, including radar, aerial, and satellite imagery. These images may differ in viewpoint, resolution, wavelength band, lighting conditions, and noise level, and may originate from different locations or sensors. As Global Earth Observation Systems continue to advance, remote sensing images now span platforms, sensors, resolutions, and regions, which results in enormous distributional variance. Therefore, the study of cross-domain remote sensing scene interpretation is essential for the commercial use of remote sensing data and has both theoretical and practical importance. This survey categorizes scene interpretation tasks into four main types based on the label sets of the data: methods based on closed-set domain adaptation, partial domain adaptation, open-set domain adaptation, and generalized domain adaptation. Closed-set domain adaptation addresses tasks where the label set of the target domain is identical to that of the source domain. Partial domain adaptation addresses tasks where the label set of the target domain is a subset of the source domain's. Open-set domain adaptation studies tasks where the label set of the source domain is a subset of the target domain's, and generalized domain adaptation imposes no such restriction. This study provides an in-depth investigation of two typical tasks in cross-domain remote sensing interpretation: scene classification and target recognition. The first part draws on domestic and international literature to comprehensively assess the current research status of the four types of methods. Within the target recognition task, cross-domain tasks are further subdivided into cross-domain adaptation within visible-light data and from visible light to synthetic aperture radar (SAR) images. After a quantitative analysis of the sample distribution characteristics of different datasets, a unified experimental setup for cross-domain tasks is proposed. In the scene classification task, the datasets are organized according to their label-set categorization, and specific examples with corresponding experimental setups are given for the readers' reference. The fourth part discusses research trends in cross-domain remote sensing interpretation and highlights four challenging directions: few-shot learning, source-domain data selection, multi-source-domain interpretation, and cross-modal interpretation. These areas will be important directions for the future development of remote sensing scene interpretation and offer potential choices for readers' subsequent research. © 2024 Editorial and Publishing Board of JIG. All rights reserved.
Keyword :
adaptive algorithm; cross-domain remote sensing scene interpretation; diverse dataset; migration learning; model generalization; out-of-distribution generalization
Cite:
GB/T 7714 | Zheng, X. , Xiao, X. , Chen, X. et al. Advancements in cross-domain remote sensing scene interpretation; [跨域遥感场景解译研究进展] [J]. | Journal of Image and Graphics , 2024 , 29 (6) : 1730-1746 . |
MLA | Zheng, X. et al. "Advancements in cross-domain remote sensing scene interpretation; [跨域遥感场景解译研究进展]" . | Journal of Image and Graphics 29 . 6 (2024) : 1730-1746 . |
APA | Zheng, X. , Xiao, X. , Chen, X. , Lu, W. , Liu, X. , Lu, X. . Advancements in cross-domain remote sensing scene interpretation; [跨域遥感场景解译研究进展] . | Journal of Image and Graphics , 2024 , 29 (6) , 1730-1746 . |
Abstract :
High spatial resolution, high spectral resolution, wide swaths, and large data volumes are the development trends of hyperspectral satellite data. Traditional pixel-level classification of hyperspectral imagery struggles to process such massive data and cannot efficiently extract the information hidden in complex, massive images. Existing research has begun to focus on scene-level classification of hyperspectral images and to gradually build and refine hyperspectral remote sensing scene classification datasets. However, current dataset construction mostly follows the methods used for high-spatial-resolution visible-light remote sensing scene datasets, relying mainly on the spatial information of remote sensing images to interpret scene categories while neglecting the spectral information of hyperspectral scenes. To build a remote sensing scene classification dataset for hyperspectral imagery, this paper uses hyperspectral data over the Xi'an region captured by the Zhuhai-1 (珠海一号) hyperspectral satellite. Unsupervised spectral clustering assists in locating, cropping, and annotating candidate scene samples, and visual screening against Google Earth high-resolution imagery is applied to construct the Zhuhai-1 hyperspectral scene classification dataset, comprising 6 scene classes and 737 scene samples. Scene classification experiments are then conducted from both spectral and spatial perspectives, and benchmark results of methods such as bag of visual words and convolutional neural networks are used to analyze in depth the performance of different algorithms on existing multispectral and hyperspectral remote sensing scene classification datasets. This study provides solid data support for subsequent hyperspectral image interpretation research.
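A minimal sketch of the unsupervised spectral-clustering step used to locate candidate scene patches is given below. The cube shape, the number of clusters, the window size, and the purity threshold are illustrative assumptions, not the parameters used to build the dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

cube = np.random.rand(128, 128, 32)          # (H, W, bands) hyperspectral cube
pixels = cube.reshape(-1, cube.shape[-1])    # one spectrum per pixel

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)
label_map = labels.reshape(cube.shape[:2])   # per-pixel cluster map

# Flag a candidate scene wherever one cluster dominates a window.
win = 32
for r in range(0, cube.shape[0] - win + 1, win):
    for c in range(0, cube.shape[1] - win + 1, win):
        patch = label_map[r:r + win, c:c + win]
        purity = np.bincount(patch.ravel()).max() / patch.size
        if purity > 0.8:                     # assumed purity threshold
            print(f"candidate scene at ({r}, {c}), purity {purity:.2f}")
```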
Keyword :
scene classification; dataset; feature extraction; Zhuhai-1; hyperspectral remote sensing
Cite:
GB/T 7714 | 刘渊 , 郑向涛 , 卢孝强 . 珠海一号高光谱场景分类数据集 [J]. | 遥感学报 , 2024 , 28 (01) : 306-319 . |
MLA | 刘渊 et al. "珠海一号高光谱场景分类数据集" . | 遥感学报 28 . 01 (2024) : 306-319 . |
APA | 刘渊 , 郑向涛 , 卢孝强 . 珠海一号高光谱场景分类数据集 . | 遥感学报 , 2024 , 28 (01) , 306-319 . |
Abstract :
In this work, a novel gallium oxide (Ga2O3) vertical FinFET with an integrated Schottky barrier diode (SBD-FinFET), which achieves low conduction losses, is proposed. In the reverse conduction state, the integrated SBD provides an additional low-resistance path for reverse current, hence achieving low reverse conduction loss. In the switching state, the SBD-FinFET also reduces gate-drain capacitance and gate charge, thus featuring fast switching speed and low switching losses. Furthermore, in the other states, the integrated SBD of the SBD-FinFET is in the OFF state and does not significantly affect the device characteristics. Well-calibrated simulation results show that, compared with the state-of-the-art FinFET with an integrated Fin channel and ohmic contact diode (FOD-FinFET), the SBD-FinFET reduces reverse conduction loss, turn-on loss, and turn-off loss by 25%, 19%, and 22%, respectively, while other characteristics remain almost unchanged.
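For context, conduction and switching losses of a power switch are commonly estimated with the standard decomposition P = I_rms²·R_on + (E_on + E_off)·f_sw. The sketch below applies that textbook formula with made-up numbers; it is unrelated to the simulated SBD-FinFET values.

```python
def loss_breakdown(i_rms_a, r_on_ohm, e_on_j, e_off_j, f_sw_hz):
    p_cond = i_rms_a ** 2 * r_on_ohm        # conduction loss, W
    p_sw = (e_on_j + e_off_j) * f_sw_hz     # switching loss, W
    return p_cond, p_sw

p_cond, p_sw = loss_breakdown(5.0, 0.1, 50e-6, 60e-6, 100e3)
print(f"conduction: {p_cond:.2f} W, switching: {p_sw:.2f} W")
```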
Keyword :
beta-gallium oxide (Ga2O3); conduction losses; Electric breakdown; Field effect transistors; FinFET; FinFETs; Gallium; Logic gates; Schottky barrier diode (SBD); Schottky barriers; Schottky diodes
Cite:
GB/T 7714 | Xu, Xiaorui , Deng, Yicong , Li, Titao et al. Ga2O3 Vertical FinFET With Integrated Schottky Barrier Diode for Low-Loss Conduction [J]. | IEEE TRANSACTIONS ON ELECTRON DEVICES , 2024 , 71 (4) : 2530-2535 . |
MLA | Xu, Xiaorui et al. "Ga2O3 Vertical FinFET With Integrated Schottky Barrier Diode for Low-Loss Conduction" . | IEEE TRANSACTIONS ON ELECTRON DEVICES 71 . 4 (2024) : 2530-2535 . |
APA | Xu, Xiaorui , Deng, Yicong , Li, Titao , Xu, Xiaohui , Yang, Dan , Zhu, Minmin et al. Ga2O3 Vertical FinFET With Integrated Schottky Barrier Diode for Low-Loss Conduction . | IEEE TRANSACTIONS ON ELECTRON DEVICES , 2024 , 71 (4) , 2530-2535 . |