Research Output Search

Query:

Scholar name: 柯逍 (Ke, Xiao)

基于分频式生成对抗网络的非成对水下图像增强 (FD-GAN: Frequency-Decomposed Generative Adversarial Network for Unpaired Underwater Image Enhancement)
Journal Article | 2025 | 电子学报 (Acta Electronica Sinica)

Abstract :

Enhancing the quality of underwater images is of great significance to the development of underwater operations. Existing underwater image enhancement methods are usually trained on paired underwater images and reference images; in practice, however, it is difficult to obtain reference images corresponding to underwater images, whereas unpaired high-quality underwater images or on-land images are comparatively easy to acquire. In addition, existing underwater image enhancement methods can hardly handle various distortion types at the same time. To avoid the dependence on paired training data, further reduce the difficulty of obtaining training data, and cope with diverse underwater image distortion types, this paper proposes an unpaired underwater image enhancement method based on a Frequency-Decomposed Generative Adversarial Network (FD-GAN), and on this basis designs a high/low-frequency dual-branch generator to reconstruct high-quality enhanced underwater images. Specifically, a feature-level wavelet transform is introduced to separate features into low-frequency and high-frequency parts, which are then processed separately within a cycle-consistent generative adversarial network. The low-frequency branch adopts an encoder-decoder structure combined with a low-frequency attention mechanism to enhance image color and brightness, while the high-frequency branch applies parallel high-frequency attention mechanisms to enhance each high-frequency component, thereby restoring image details. Experimental results on several standard underwater image datasets show that, whether trained with unpaired high-quality underwater images alone or with some on-land images additionally introduced, the proposed method can effectively generate high-quality enhanced underwater images, and its effectiveness and generalization are superior to current mainstream underwater image enhancement methods.

Keyword :

Wavelet transform; underwater image enhancement; attention mechanism; generative adversarial network; high/low-frequency dual-branch generator

Cite:


GB/T 7714 牛玉贞 , 张凌昕 , 兰杰 et al. 基于分频式生成对抗网络的非成对水下图像增强 [J]. | 电子学报 , 2025 .
MLA 牛玉贞 et al. "基于分频式生成对抗网络的非成对水下图像增强" . | 电子学报 (2025) .
APA 牛玉贞 , 张凌昕 , 兰杰 , 许瑞 , 柯逍 . 基于分频式生成对抗网络的非成对水下图像增强 . | 电子学报 , 2025 .

Vision-Language Action Knowledge Learning for Semantic-Aware Action Quality Assessment CPCI-S
Journal Article | 2025, 15100, 423-440 | COMPUTER VISION - ECCV 2024, PT XLII

Abstract :

Action quality assessment (AQA) is a challenging vision task that requires discerning and quantifying subtle differences in actions from the same class. While recent research has made strides in creating fine-grained annotations for more precise analysis, existing methods primarily focus on coarse action segmentation, leading to limited identification of discriminative action frames. To address this issue, we propose a Vision-Language Action Knowledge Learning approach for action quality assessment, along with a multi-grained alignment framework to understand different levels of action knowledge. In our framework, prior knowledge, such as specialized terminology, is embedded into video-level, stage-level, and frame-level representations via CLIP. We further propose a new semantic-aware collaborative attention module to prevent confusing interactions and preserve textual knowledge in cross-modal and cross-semantic spaces. Specifically, we leverage the powerful cross-modal knowledge of CLIP to embed textual semantics into image features, which then guide action spatial-temporal representations. Our approach can be used in a plug-and-play manner with existing AQA methods, with or without frame-wise annotations. Extensive experiments and ablation studies show that our approach achieves state-of-the-art results on four public short- and long-term AQA benchmarks: FineDiving, MTL-AQA, JIGSAWS, and Fis-V.
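As a rough illustration of the cross-modal guidance described above, the following sketch shows CLIP-style text embeddings steering per-frame visual features through cross-attention. The shapes and module layout are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch (not the paper's exact module) of injecting text semantics
# into video features via cross-attention: text embeddings of action terminology
# act as keys/values, per-frame visual tokens act as queries.
import torch
import torch.nn as nn

class TextGuidedAttention(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, T, D) per-frame visual tokens from the video encoder
        # text_feats:  (B, K, D) text embeddings of K terminology prompts
        guided, _ = self.attn(query=frame_feats, key=text_feats, value=text_feats)
        return self.norm(frame_feats + guided)  # residual keeps the visual stream intact

# toy usage with random tensors standing in for encoder outputs
frames = torch.randn(2, 16, 512)   # 2 clips, 16 frames, 512-d features
prompts = torch.randn(2, 5, 512)   # 5 terminology prompts per clip
print(TextGuidedAttention()(frames, prompts).shape)  # torch.Size([2, 16, 512])
```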

Keyword :

Action quality assessment; Semantic-aware learning; Vision-language pre-training

Cite:


GB/T 7714 Xu, Huangbiao , Ke, Xiao , Li, Yuezhou et al. Vision-Language Action Knowledge Learning for Semantic-Aware Action Quality Assessment [J]. | COMPUTER VISION - ECCV 2024, PT XLII , 2025 , 15100 : 423-440 .
MLA Xu, Huangbiao et al. "Vision-Language Action Knowledge Learning for Semantic-Aware Action Quality Assessment" . | COMPUTER VISION - ECCV 2024, PT XLII 15100 (2025) : 423-440 .
APA Xu, Huangbiao , Ke, Xiao , Li, Yuezhou , Xu, Rui , Wu, Huanqi , Lin, Xiaofeng et al. Vision-Language Action Knowledge Learning for Semantic-Aware Action Quality Assessment . | COMPUTER VISION - ECCV 2024, PT XLII , 2025 , 15100 , 423-440 .

Quality-Guided Vision-Language Learning for Long-Term Action Quality Assessment Scopus
Journal Article | 2025 | IEEE Transactions on Multimedia

Abstract :

Long-term action quality assessment poses a challenging visual task since it requires assessing technical actions at different skill levels in a long video. Recent state-of-the-art methods incorporate additional modality information to aid in understanding action semantics, which incurs extra annotation costs and imposes higher constraints on action scenes and datasets. To address this issue, we propose a Quality-Guided Vision-Language Learning (QGVL) method to map visual features into appropriate fine-grained intervals of quality scores. Specifically, we use a set of quality-related textual prompts as quality prototypes to guide the discrimination and aggregation of specific visual actions. To avoid fuzzy rule mapping, we further propose a progressive semantic learning strategy with a Granularity-Adaptive Semantic Learning Module (GSLM) that refines accurate score intervals from coarse to fine at clip, grade, and score levels. The quality-related semantics we designed are universal to all types of action scenarios without any additional annotations. Extensive experiments show that our approach outperforms previous work by a significant margin and establishes new state-of-the-art on four public AQA benchmarks: Rhythmic Gymnastics, Fis-V, FS1000, and FineFS.
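The prototype-guided scoring idea can be illustrated with a short sketch; the prompts, score-interval centers, and temperature below are hypothetical stand-ins, not values from the paper.

```python
# A hedged sketch of scoring via quality prototypes: cosine similarity between a
# video feature and text embeddings of graded prompts ("very poor" ... "excellent")
# is softmaxed into bin probabilities, and the score is their expectation.
import torch
import torch.nn.functional as F

def prototype_score(video_feat, prototype_feats, bin_centers, temperature=0.07):
    # video_feat: (B, D), prototype_feats: (K, D), bin_centers: (K,) representative scores
    v = F.normalize(video_feat, dim=-1)
    p = F.normalize(prototype_feats, dim=-1)
    probs = F.softmax(v @ p.t() / temperature, dim=-1)  # (B, K) affinity to each quality grade
    return probs @ bin_centers                           # (B,) expected score

video_feat = torch.randn(4, 512)                    # stand-in for the video encoder output
prototypes = torch.randn(5, 512)                    # stand-in for text embeddings of 5 grades
centers = torch.tensor([10., 30., 50., 70., 90.])   # hypothetical mid-points of 5 score intervals
print(prototype_score(video_feat, prototypes, centers))
```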

Keyword :

Action quality assessment; human motion analysis; video understanding; vision-language learning

Cite:


GB/T 7714 Xu, H. , Wu, H. , Ke, X. et al. Quality-Guided Vision-Language Learning for Long-Term Action Quality Assessment [J]. | IEEE Transactions on Multimedia , 2025 .
MLA Xu, H. et al. "Quality-Guided Vision-Language Learning for Long-Term Action Quality Assessment" . | IEEE Transactions on Multimedia (2025) .
APA Xu, H. , Wu, H. , Ke, X. , Li, Y. , Xu, R. , Guo, W. . Quality-Guided Vision-Language Learning for Long-Term Action Quality Assessment . | IEEE Transactions on Multimedia , 2025 .

SFCE-Det: Sub-feature Fusion and Cross-layer Perceptual Enhancement Detector Scopus
Journal Article | 2025 | IEEE Transactions on Circuits and Systems for Video Technology

Abstract :

Edge devices face a pressing demand for low-cost object detection networks. However, because of limited computational resources, lightweight detectors often suffer significant performance degradation. In this paper, we propose SFCE-Det, an efficient object detector that achieves strong performance with remarkably few parameters and GFLOPs. The key contribution of our work lies in the novel sub-feature fusion and cross-layer perceptual enhancement block (SFCE-Block), which effectively extracts feature information from images at a very low computational cost. SFCE-Block can be seamlessly integrated into existing convolutional neural networks and serves as a plug-and-play component for lightweight upgrades to the network. SFCE-Block can not only be used to upgrade classic models but also has excellent lightweight effects on state-of-the-art models (e.g., YOLOv8). Additionally, we propose a dynamic label assignment strategy that leverages global label correlation to further enhance the performance of SFCE-Det. Experimental results demonstrate that SFCE-Det surpasses many state-of-the-art lightweight object detectors on multiple public datasets while maintaining an extremely low cost. For example, SFCE-Det-D2 achieves an impressive mAP of 83.4% on the PASCAL VOC dataset, comparable to YOLOv8-S. However, SFCE-Det-D2 requires only 26% of the parameters and 35% of the GFLOPs, which are 2.96M parameters and 9.9 GFLOPs, respectively.
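The quoted cost ratios can be sanity-checked against the commonly published YOLOv8-S reference figures (roughly 11.2M parameters and 28.6 GFLOPs at 640x640 input); these reference values are an assumption and are not stated in the abstract itself.

```python
# Quick sanity check of the quoted ratios, assuming the commonly published
# YOLOv8-S figures (~11.2M parameters, ~28.6 GFLOPs) as the reference.
sfce_params, sfce_gflops = 2.96, 9.9         # SFCE-Det-D2, as stated in the abstract
yolov8s_params, yolov8s_gflops = 11.2, 28.6  # assumed YOLOv8-S reference values

print(f"params ratio: {sfce_params / yolov8s_params:.0%}")   # ~26%
print(f"GFLOPs ratio: {sfce_gflops / yolov8s_gflops:.0%}")   # ~35%
```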

Keyword :

Convolution module design; lightweight network; model compression; object detection

Cite:


GB/T 7714 Ke, X. , Chen, W. . SFCE-Det: Sub-feature Fusion and Cross-layer Perceptual Enhancement Detector [J]. | IEEE Transactions on Circuits and Systems for Video Technology , 2025 .
MLA Ke, X. et al. "SFCE-Det: Sub-feature Fusion and Cross-layer Perceptual Enhancement Detector" . | IEEE Transactions on Circuits and Systems for Video Technology (2025) .
APA Ke, X. , Chen, W. . SFCE-Det: Sub-feature Fusion and Cross-layer Perceptual Enhancement Detector . | IEEE Transactions on Circuits and Systems for Video Technology , 2025 .

DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose CPCI-S
Journal Article | 2025, 8869-8877 | THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 8

Abstract :

The fair and objective assessment of performances and competitions is a common pursuit and challenge in human society. The application of computer vision technology offers hope for this purpose, but it still faces obstacles such as occlusion and motion blur. To address these hindrances, our DanceFix proposes a bidirectional spatial-temporal context optical flow correction (BOFC) method. This approach leverages the consistency and complementarity of motion information between two modalities: optical flow, which excels at pixel capture, and lightweight skeleton data. It enables the extraction of pixel-level motion changes and the correction of abnormal skeleton data. Furthermore, we propose a part-level dance dataset (Dancer Parts) and part-level motion feature extraction based on task decoupling (PETD). This aims to decouple complex whole-body parts tracking into fine-grained limb-level motion extraction, enhancing the confidence of temporal information and the accuracy of correction for abnormal data. Finally, we present the DNV dataset, which simulates fully neat group dance scenes and provides reliable labels and validation methods for the newly introduced group dance neatness assessment (GDNA). To the best of our knowledge, this is the first work to develop quantitative criteria for assessing limb and joint neatness in group dance. We conduct experiments on DNV and the video-based public JHMDB dataset. Our method effectively corrects abnormal skeleton points, embeds flexibly into existing pose estimation algorithms, and improves their accuracy.
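A minimal sketch of the underlying idea, flow-based propagation of unreliable joints from a neighbouring frame, is shown below; it is a simplified, unidirectional illustration under assumed inputs, not the paper's bidirectional BOFC module.

```python
# When a joint's detection confidence is low at frame t, re-estimate it by
# propagating its position from frame t-1 along the dense optical flow.
import numpy as np

def correct_joints(joints_prev, joints_curr, conf_curr, flow_prev_to_curr, conf_thresh=0.3):
    # joints_*: (J, 2) arrays of (x, y); conf_curr: (J,); flow: (H, W, 2) forward flow
    corrected = joints_curr.copy()
    h, w = flow_prev_to_curr.shape[:2]
    for j, (x, y) in enumerate(joints_prev):
        if conf_curr[j] >= conf_thresh:
            continue                                   # trust the detector when it is confident
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        corrected[j] = joints_prev[j] + flow_prev_to_curr[yi, xi]  # carry the joint along the flow
    return corrected

# toy example: a single "lost" joint (confidence 0) recovered from constant rightward flow
prev = np.array([[50.0, 60.0]])
curr = np.array([[0.0, 0.0]])                        # detector failed at frame t
flow = np.zeros((120, 160, 2)); flow[..., 0] = 3.0   # every pixel moves 3 px to the right
print(correct_joints(prev, curr, np.array([0.0]), flow))   # [[53. 60.]]
```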

Cite:


GB/T 7714 Xu, Huangbiao , Ke, Xiao , Wu, Huanqi et al. DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose [J]. | THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 8 , 2025 : 8869-8877 .
MLA Xu, Huangbiao et al. "DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose" . | THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 8 (2025) : 8869-8877 .
APA Xu, Huangbiao , Ke, Xiao , Wu, Huanqi , Xu, Rui , Li, Yuezhou , Xu, Peirong et al. DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose . | THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 8 , 2025 , 8869-8877 .

DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose EI
Conference Paper | 2025, 39 (8), 8869-8877 | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

Abstract :

The fair and objective assessment of performances and competitions is a common pursuit and challenge in human society. The application of computer vision technology offers hope for this purpose, but it still faces obstacles such as occlusion and motion blur. To address these hindrances, our DanceFix proposes a bidirectional spatial-temporal context optical flow correction (BOFC) method. This approach leverages the consistency and complementarity of motion information between two modalities: optical flow, which excels at pixel capture, and lightweight skeleton data. It enables the extraction of pixel-level motion changes and the correction of abnormal skeleton data. Furthermore, we propose a part-level dance dataset (Dancer Parts) and part-level motion feature extraction based on task decoupling (PETD). This aims to decouple complex whole-body parts tracking into fine-grained limb-level motion extraction, enhancing the confidence of temporal information and the accuracy of correction for abnormal data. Finally, we present the DNV dataset, which simulates fully neat group dance scenes and provides reliable labels and validation methods for the newly introduced group dance neatness assessment (GDNA). To the best of our knowledge, this is the first work to develop quantitative criteria for assessing limb and joint neatness in group dance. We conduct experiments on DNV and the video-based public JHMDB dataset. Our method effectively corrects abnormal skeleton points, embeds flexibly into existing pose estimation algorithms, and improves their accuracy.

Cite:


GB/T 7714 Xu, Huangbiao , Ke, Xiao , Wu, Huanqi et al. DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose [C] . 2025 : 8869-8877 .
MLA Xu, Huangbiao et al. "DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose" . (2025) : 8869-8877 .
APA Xu, Huangbiao , Ke, Xiao , Wu, Huanqi , Xu, Rui , Li, Yuezhou , Xu, Peirong et al. DanceFix: An Exploration in Group Dance Neatness Assessment Through Fixing Abnormal Challenges of Human Pose . (2025) : 8869-8877 .

MSP: Multimodal Self-Attention Prompt Learning SCIE
Journal Article | 2025, 34, 5978-5988 | IEEE TRANSACTIONS ON IMAGE PROCESSING

Abstract :

Multimodal prompt learning has emerged as an effective strategy for adapting vision-language models such as CLIP to downstream tasks. However, conventional approaches typically operate at the input level, forcing learned prompts to propagate through a sequence of frozen Transformer layers. This indirect adaptation introduces cumulative geometric distortions, a limitation that we formalize as the indirect learning dilemma (ILD), leading to overfitting on base classes and reduced generalization to novel classes. To overcome this challenge, we propose the Multimodal Self-Attention Prompt (MSP) framework, which shifts adaptation into the semantic core of the model by injecting learnable prompts directly into the key and value sequences of attention blocks. This direct modulation preserves the pretrained embedding geometry while enabling more precise downstream adaptation. MSP further incorporates distance-aware optimization to maintain semantic consistency with CLIP's original representation space, and partial prompt learning via stochastic dimension masking to improve robustness and prevent over-specialization. Extensive evaluations across 11 benchmarks demonstrate the effectiveness of MSP. It achieves a state-of-the-art harmonic mean accuracy of 80.67%, with 77.32% accuracy on novel classes (a 2.18% absolute improvement over prior methods) while requiring only 0.11M learnable parameters. Notably, MSP surpasses CLIP's zero-shot performance on 10 out of 11 datasets, establishing a new paradigm for efficient and generalizable prompt-based adaptation.
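The key/value prompt injection can be sketched as follows; the prompt length, dimensions, and initialization are illustrative assumptions rather than the paper's configuration.

```python
# A hedged sketch of the core idea: learnable prompt tokens are concatenated to
# the key/value sequences of an attention block, so the queries (and hence the
# token geometry of the pretrained model) are left untouched.
import torch
import torch.nn as nn

class KVPromptedAttention(nn.Module):
    def __init__(self, dim=512, heads=8, n_prompts=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.k_prompt = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        self.v_prompt = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)

    def forward(self, x):
        # x: (B, N, D) token sequence from the (frozen) backbone
        b = x.size(0)
        k = torch.cat([self.k_prompt.expand(b, -1, -1), x], dim=1)  # prepend prompts to keys
        v = torch.cat([self.v_prompt.expand(b, -1, -1), x], dim=1)  # and to values
        out, _ = self.attn(query=x, key=k, value=v)                 # queries stay unmodified
        return out

tokens = torch.randn(2, 16, 512)            # stand-in for CLIP patch/word tokens
print(KVPromptedAttention()(tokens).shape)  # torch.Size([2, 16, 512])
```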

Keyword :

Adaptation models; Distortion; Few-shot learning; Geometry; image classification; Optimization; prompt learning; Semantics; Training; transfer learning; Transformers; Tuning; Vectors; vision-language model; Visualization

Cite:


GB/T 7714 Lai, Xinyi , Ke, Xiao , Xu, Huangbiao et al. MSP: Multimodal Self-Attention Prompt Learning [J]. | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2025 , 34 : 5978-5988 .
MLA Lai, Xinyi et al. "MSP: Multimodal Self-Attention Prompt Learning" . | IEEE TRANSACTIONS ON IMAGE PROCESSING 34 (2025) : 5978-5988 .
APA Lai, Xinyi , Ke, Xiao , Xu, Huangbiao , Wu, Shanghui , Guo, Wenzhong . MSP: Multimodal Self-Attention Prompt Learning . | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2025 , 34 , 5978-5988 .

FD-GAN: Frequency-Decomposed Generative Adversarial Network for Unpaired Underwater Image Enhancement EI
Journal Article | 2025, 53 (2), 527-544 | Acta Electronica Sinica

Abstract :

Enhancing the quality of underwater images is crucial for advancements in the fields of underwater exploration and underwater rescue. Existing underwater image enhancement methods typically rely on paired underwater images and reference images for training. However, obtaining corresponding reference images for underwater images is challenging in practice. In contrast, acquiring high-quality unpaired underwater images or images captured on land is relatively more straightforward. Furthermore, existing techniques for underwater image enhancement often struggle to address a variety of distortion types simultaneously. To avoid the reliance on paired training data, reduce the difficulty of acquiring training data, and effectively handle diverse types of underwater image distortions, in this paper, we propose a novel unpaired underwater image enhancement method based on the frequency-decomposed generative adversarial network (FD-GAN). We design a dual-branch generator based on high and low frequencies to reconstruct high-quality underwater images. Specifically, a feature-level wavelet transform is introduced to separate the features into low-frequency and high-frequency parts. The separated features are then processed by a cycle-consistent generative adversarial network, so as to simultaneously enhance the color and luminance in the low-frequency component and the details in the high-frequency part. More specifically, the low-frequency branch employs an encoder-decoder structure with a low-frequency attention mechanism to enhance the color and brightness of the image. The high-frequency branch utilizes parallel high-frequency attention mechanisms to enhance various high-frequency components, thereby achieving the restoration of image details. Experimental results on multiple datasets show that the proposed method, trained with unpaired high-quality underwater images alone or together with on-land images, can effectively generate high-quality enhanced underwater images and is superior to state-of-the-art underwater image enhancement methods in terms of effectiveness and generalization.
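The feature-level frequency split can be illustrated with a plain Haar wavelet decomposition of a feature map into one low-frequency band and three high-frequency bands; this is a generic sketch of the transform itself, not the paper's generator.

```python
# Minimal feature-level 2D Haar wavelet split: LL carries the low-frequency content
# (color/brightness), LH/HL/HH carry the detail bands the high-frequency branch works on.
import torch

def haar_dwt(x):
    # x: (B, C, H, W) with even H and W
    a = x[..., 0::2, 0::2]   # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]   # top-right
    c = x[..., 1::2, 0::2]   # bottom-left
    d = x[..., 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2   # low frequency: local average
    lh = (c + d - a - b) / 2   # difference between rows (vertical variation)
    hl = (b + d - a - c) / 2   # difference between columns (horizontal variation)
    hh = (a + d - b - c) / 2   # diagonal variation
    return ll, lh, hl, hh

feats = torch.randn(1, 64, 32, 32)   # stand-in for an intermediate feature map
ll, lh, hl, hh = haar_dwt(feats)
print(ll.shape, hh.shape)            # each band is (1, 64, 16, 16)
```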

Keyword :

Color image processing; Image coding; Image compression; Image enhancement; Photointerpretation; Underwater photography; Wavelet decomposition

Cite:


GB/T 7714 Niu, Yu-Zhen , Zhang, Ling-Xin , Lan, Jie et al. FD-GAN: Frequency-Decomposed Generative Adversarial Network for Unpaired Underwater Image Enhancement [J]. | Acta Electronica Sinica , 2025 , 53 (2) : 527-544 .
MLA Niu, Yu-Zhen et al. "FD-GAN: Frequency-Decomposed Generative Adversarial Network for Unpaired Underwater Image Enhancement" . | Acta Electronica Sinica 53 . 2 (2025) : 527-544 .
APA Niu, Yu-Zhen , Zhang, Ling-Xin , Lan, Jie , Xu, Rui , Ke, Xiao . FD-GAN: Frequency-Decomposed Generative Adversarial Network for Unpaired Underwater Image Enhancement . | Acta Electronica Sinica , 2025 , 53 (2) , 527-544 .

MEFA-Net: A mask enhanced feature aggregation network for polyp segmentation EI
Journal Article | 2025, 186 | Computers in Biology and Medicine

Abstract :

Accurate polyp segmentation is crucial for early diagnosis and treatment of colorectal cancer. This is a challenging task for three main reasons: (i) the problem of model overfitting and weak generalization due to the multi-center distribution of data; (ii) the problem of inter-class ambiguity caused by motion blur and overexposure to endoscopic light; and (iii) the problem of intra-class inconsistency caused by the variety of morphologies and sizes of the same type of polyps. To address these challenges, we propose a new high-precision polyp segmentation framework, MEFA-Net, which consists of three modules: the plug-and-play Mask Enhancement Module (MEG), the Separable Path Attention Enhancement Module (SPAE), and the Dynamic Global Attention Pool Module (DGAP). Specifically, the MEG module regionally masks the high-energy regions of the environment and polyps, which guides the model to rely on only a small amount of information to distinguish polyps from background features, prevents the model from overfitting to environmental information, and improves its robustness. At the same time, this module effectively counteracts the 'dark corner phenomenon' in the dataset and further improves the generalization performance of the model. Next, the SPAE module alleviates the inter-class ambiguity problem by strengthening feature expression. Then, the DGAP module addresses the intra-class inconsistency problem by extracting invariance to scale, shape and position. Finally, we propose a new evaluation metric, MultiColoScore, for comprehensively evaluating the segmentation performance of the model on five datasets with different domains. We evaluated the new method quantitatively and qualitatively on five datasets using four metrics. Experimental results show that MEFA-Net significantly improves the accuracy of polyp segmentation and outperforms current state-of-the-art algorithms. Code is available at https://github.com/847001315/MEFA-Net.
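The high-energy masking idea behind MEG can be sketched as a simple training-time regularizer; the patch size, drop ratio, and energy measure below are illustrative assumptions, not the published module.

```python
# A hedged sketch: the patches with the largest feature energy are zeroed out so the
# model must rely on the remaining, less dominant cues to separate polyps from background.
import torch
import torch.nn.functional as F

def mask_high_energy(feats, patch=8, drop_ratio=0.25):
    # feats: (B, C, H, W); patch: side length of the square regions ranked by energy
    b, c, h, w = feats.shape
    energy = F.avg_pool2d(feats.abs().mean(1, keepdim=True), patch)  # (B, 1, H/p, W/p)
    k = max(1, int(drop_ratio * energy[0].numel()))
    thresh = energy.flatten(1).topk(k, dim=1).values[:, -1].view(b, 1, 1, 1)  # per-sample cut-off
    keep = (energy < thresh).float()                                  # 0 where energy is highest
    keep = F.interpolate(keep, size=(h, w), mode="nearest")
    return feats * keep

x = torch.randn(2, 32, 64, 64)
print(mask_high_energy(x).shape)   # torch.Size([2, 32, 64, 64])
```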

Keyword :

Endoscopy; Image coding; Image segmentation; Risk assessment

Cite:


GB/T 7714 Ke, Xiao , Chen, Guanhong , Liu, Hao et al. MEFA-Net: A mask enhanced feature aggregation network for polyp segmentation [J]. | Computers in Biology and Medicine , 2025 , 186 .
MLA Ke, Xiao et al. "MEFA-Net: A mask enhanced feature aggregation network for polyp segmentation" . | Computers in Biology and Medicine 186 (2025) .
APA Ke, Xiao , Chen, Guanhong , Liu, Hao , Guo, Wenzhong . MEFA-Net: A mask enhanced feature aggregation network for polyp segmentation . | Computers in Biology and Medicine , 2025 , 186 .

Zero-shot 3D anomaly detection via online voter mechanism SCIE
Journal Article | 2025, 187 | NEURAL NETWORKS

Abstract :

3D anomaly detection aims to solve the problem that image anomaly detection is greatly affected by lighting conditions. As commercial confidentiality and personal privacy become increasingly paramount, access to training samples is often restricted. To address these challenges, we propose a zero-shot 3D anomaly detection method. Unlike previous CLIP-based methods, the proposed method does not require any prompt and is capable of detecting anomalies on the depth modality. Furthermore, we also propose a pre-trained structural rerouting strategy, which modifies the transformer without retraining or fine-tuning for the anomaly detection task. Most importantly, this paper proposes an online voter mechanism that registers voters and performs majority voter scoring in a one-stage, zero-start and growth-oriented manner, enabling direct anomaly detection on unlabeled test sets. Finally, we also propose a confirmatory judge credibility assessment mechanism, which provides an efficient adaptation for possible few-shot conditions. Results on datasets such as MVTec3D-AD demonstrate that the proposed method can achieve superior zero-shot 3D anomaly detection performance, indicating its pioneering contributions within the pertinent domain.
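A toy sketch of an online, zero-start voting memory conveys the general flavour of the idea (it is not the paper's mechanism): features seen so far act as voters, each new feature is scored against its nearest voters and then registered, so the memory grows directly on the unlabeled test stream.

```python
# Hedged illustration only: anomaly score = mean distance to the k nearest
# previously registered voters; the bank starts empty and grows online.
import numpy as np

class OnlineVoterBank:
    def __init__(self, k=3):
        self.voters = []          # grows from zero, no training samples needed
        self.k = k

    def score_and_register(self, feat):
        if not self.voters:
            score = 0.0           # the first sample has nothing to vote against it
        else:
            bank = np.stack(self.voters)
            dists = np.linalg.norm(bank - feat, axis=1)
            k = min(self.k, len(dists))
            score = float(np.sort(dists)[:k].mean())   # mean distance to k nearest voters
        self.voters.append(feat)
        return score

rng = np.random.default_rng(0)
bank = OnlineVoterBank()
normals = rng.normal(0.0, 0.1, size=(20, 8))   # a stream of similar (normal) features
outlier = rng.normal(3.0, 0.1, size=8)         # one feature far from everything seen so far
scores = [bank.score_and_register(f) for f in normals]
print(round(scores[-1], 3), round(bank.score_and_register(outlier), 3))  # small vs. large
```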

Keyword :

Anomaly detection; Multimodal; Online voter mechanism; Pretrained model; Zero-shot

Cite:


GB/T 7714 Zheng, Wukun , Ke, Xiao , Guo, Wenzhong . Zero-shot 3D anomaly detection via online voter mechanism [J]. | NEURAL NETWORKS , 2025 , 187 .
MLA Zheng, Wukun et al. "Zero-shot 3D anomaly detection via online voter mechanism" . | NEURAL NETWORKS 187 (2025) .
APA Zheng, Wukun , Ke, Xiao , Guo, Wenzhong . Zero-shot 3D anomaly detection via online voter mechanism . | NEURAL NETWORKS , 2025 , 187 .
