Publication Search

Query:

Scholar name: Gao Qinquan (高钦泉)

Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing SCIE
Journal Article | 2025, 121 | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS

Abstract :

Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multimodal images, from different contrasts. However, noise reduces the quality of MR images and thus affects the doctor's diagnosis of diseases. Existing filtering methods, transform-domain methods, statistical methods, and Convolutional Neural Network (CNN) methods mainly aim to denoise individual sequences of images without considering the relationships between multiple different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and they struggle to maintain a good balance between preserving image texture details and denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross-Global Learnable Attention Network (MMCGLANet) for MR image denoising with arbitrary modal missing. Specifically, a weight-shared encoder is employed to extract the shallow features of the images, and Convolutional Long Short-Term Memory (ConvLSTM) is employed to extract the associated features between different frames within the same modality. A Cross Global Learnable Attention Network (CGLANet) is employed to extract and fuse image features both across modalities and within the same modality. In addition, a sequence code is employed to label missing modalities, which allows for arbitrary modal missing during model training, validation, and testing. Experimental results demonstrate that our method achieves good denoising results on different public and real MR image datasets.
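
As a sketch of the arbitrary-modal-missing mechanism described in the abstract: a weight-shared encoder processes every modality, and a binary "sequence code" zeroes out the features of absent modalities. Module sizes and names (SharedEncoder, present_mask) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming illustrative shapes; not the authors' code.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One weight-shared encoder applied to every MR modality."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.conv(x)

def encode_with_sequence_code(encoder, modalities, present_mask):
    """modalities: list of (B,1,H,W) tensors (zeros where missing);
    present_mask: binary 'sequence code' marking available modalities."""
    feats = []
    for i, m in enumerate(modalities):
        f = encoder(m) * present_mask[i]   # zero out features of missing modalities
        feats.append(f)
    return torch.stack(feats, dim=1)       # (B, num_modal, C, H, W)

encoder = SharedEncoder()
mods = [torch.randn(2, 1, 64, 64) for _ in range(3)]   # e.g. T1, T2, FLAIR
code = torch.tensor([1.0, 0.0, 1.0])                    # T2 missing in this sample
feats = encode_with_sequence_code(encoder, mods, code)
print(feats.shape)  # torch.Size([2, 3, 32, 64, 64])
```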

Keyword :

Arbitrary modal missing; Controllable; Cross global attention; Multimodal fusion; Multimodal MR image denoising

Cite:

GB/T 7714 Jiang, Mingfu , Wang, Shuai , Chan, Ka-Hou et al. Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing [J]. | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS , 2025 , 121 .
MLA Jiang, Mingfu et al. "Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing" . | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS 121 (2025) .
APA Jiang, Mingfu , Wang, Shuai , Chan, Ka-Hou , Sun, Yue , Xu, Yi , Zhang, Zhuoneng et al. Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing . | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS , 2025 , 121 .

Version:

Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing Scopus
Journal Article | 2025, 121 | Computerized Medical Imaging and Graphics
Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing EI
Journal Article | 2025, 121 | Computerized Medical Imaging and Graphics
MHAVSR: A multi-layer hybrid alignment network for video super-resolution SCIE
Journal Article | 2025, 624 | NEUROCOMPUTING

Abstract :

Video super-resolution (VSR) aims to restore high-resolution (HR) frames from low-resolution (LR) frames; the key to this task is to fully utilize the complementary information between frames to reconstruct high-resolution sequences. Current works tackle this by exploiting a sliding-window strategy or a recurrent architecture for single alignment, which either lacks long-range modeling ability or is prone to frame-by-frame error accumulation. In this paper, we propose a Multi-layer Hybrid Alignment network for VSR (MHAVSR), which combines a sliding window with a recurrent structure and extends the number of propagation layers on top of this hybrid structure. At each propagation layer, alignment operations are performed simultaneously on bidirectional neighboring frames and on hidden states from recursive propagation, which improves the alignment while fully utilizing both the short-term and long-term information in the video sequence. Next, we present a flow-enhanced dual-deformable alignment module, which improves the accuracy of deformable convolution offsets using optical flow and fuses the separate alignment results of the hybrid alignment to reduce the artifacts caused by alignment errors. In addition, we introduce a spatial-temporal reconstruction module to compensate for the representation capacity of the model at different scales. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches. In particular, on the Vid4 test set, our model exceeds IconVSR by 0.82 dB in terms of PSNR with a similar number of parameters. Code is available at https://github.com/fzuqxt/MHAVSR.
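
The hybrid propagation idea can be sketched briefly: at each time step, window neighbors and the recurrent hidden state are each aligned to the current frame and then fused. This is a single forward pass with a placeholder align(); the paper uses bidirectional, multi-layer propagation and flow-enhanced deformable alignment, so all names and shapes here are assumptions.

```python
# Minimal sketch of hybrid (sliding-window + recurrent) propagation.
import torch
import torch.nn as nn

class HybridPropagation(nn.Module):
    def __init__(self, c=16):
        super().__init__()
        self.fuse = nn.Conv2d(3 * c, c, 3, padding=1)  # two neighbors + hidden state

    def align(self, feat, ref):
        # Placeholder for flow-enhanced deformable alignment of `feat` to `ref`.
        return feat

    def forward(self, feats):
        """feats: list of T per-frame features, each (B,C,H,W)."""
        hidden = torch.zeros_like(feats[0])
        out = []
        for t in range(len(feats)):
            prev_n = self.align(feats[max(t - 1, 0)], feats[t])            # window neighbor
            next_n = self.align(feats[min(t + 1, len(feats) - 1)], feats[t])
            hidden = self.align(hidden, feats[t])                           # recurrent state
            hidden = self.fuse(torch.cat([prev_n, next_n, hidden], dim=1))
            out.append(hidden)
        return out

prop = HybridPropagation()
frames = [torch.randn(1, 16, 32, 32) for _ in range(5)]
outs = prop(frames)
print(len(outs), outs[0].shape)  # 5 torch.Size([1, 16, 32, 32])
```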

Keyword :

Deformable convolution; Hybrid propagation; Long-short term information; Multi-layer alignment; Video super-resolution

Cite:

GB/T 7714 Qiu, Xintao , Zhou, Yuanbo , Zhang, Xinlin et al. MHAVSR: A multi-layer hybrid alignment network for video super-resolution [J]. | NEUROCOMPUTING , 2025 , 624 .
MLA Qiu, Xintao et al. "MHAVSR: A multi-layer hybrid alignment network for video super-resolution" . | NEUROCOMPUTING 624 (2025) .
APA Qiu, Xintao , Zhou, Yuanbo , Zhang, Xinlin , Xue, Yuyang , Lin, Xiaoyong , Dai, Xinwei et al. MHAVSR: A multi-layer hybrid alignment network for video super-resolution . | NEUROCOMPUTING , 2025 , 624 .
A universal parameter-efficient fine-tuning approach for stereo image super-resolution SCIE
Journal Article | 2025, 151 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract :

Despite advances in the pre-training-then-fine-tuning strategy for low-level vision tasks, the increasing size of models presents significant challenges for this paradigm, particularly in terms of training time and memory consumption. In addition, unsatisfactory results may occur when pre-trained single-image models are directly applied to a multi-image domain. In this paper, we propose an efficient method for transferring a pre-trained single-image super-resolution transformer network to the domain of stereo image super-resolution (SteISR) using a parameter-efficient fine-tuning approach. Specifically, the concepts of stereo adapters and spatial adapters are introduced and incorporated into the pre-trained single-image super-resolution transformer network. Subsequently, only the inserted adapters are trained on stereo datasets. Compared with the classical full fine-tuning paradigm, our method reduces training time and memory consumption by 57% and 15%, respectively. Moreover, this method allows us to train only 4.8% of the original model parameters while achieving state-of-the-art performance on four commonly used SteISR benchmarks. This technology is expected to improve stereo image resolution in various fields such as medical imaging and autonomous driving, thereby indirectly enhancing the accuracy of depth estimation and object recognition tasks.
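
The freeze-backbone/train-adapters recipe the abstract describes is easy to illustrate. The sketch below uses a generic bottleneck adapter and a stand-in backbone; the actual stereo and spatial adapter designs differ, so everything beyond "freeze the pre-trained weights, train only inserted adapters" is an assumption.

```python
# Minimal sketch of parameter-efficient fine-tuning with adapters.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection."""
    def __init__(self, dim, hidden=8):
        super().__init__()
        self.down, self.up = nn.Linear(dim, hidden), nn.Linear(hidden, dim)
        self.act = nn.GELU()
    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class BlockWithAdapter(nn.Module):
    def __init__(self, block, dim):
        super().__init__()
        self.block, self.adapter = block, Adapter(dim)
    def forward(self, x):
        return self.adapter(self.block(x))

backbone = nn.Sequential(*[nn.Linear(64, 64) for _ in range(4)])  # stand-in for the SR transformer
for p in backbone.parameters():
    p.requires_grad = False                   # freeze all pre-trained weights
model = nn.Sequential(*[BlockWithAdapter(b, 64) for b in backbone])

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.1%}")  # only the adapters update
```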

Keyword :

Autonomous driving; Parameter-efficient fine-tuning; Stereo image super-resolution; Transfer learning

Cite:

GB/T 7714 Zhou, Yuanbo , Xue, Yuyang , Zhang, Xinlin et al. A universal parameter-efficient fine-tuning approach for stereo image super-resolution [J]. | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2025 , 151 .
MLA Zhou, Yuanbo et al. "A universal parameter-efficient fine-tuning approach for stereo image super-resolution" . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 151 (2025) .
APA Zhou, Yuanbo , Xue, Yuyang , Zhang, Xinlin , Deng, Wei , Wang, Tao , Tan, Tao et al. A universal parameter-efficient fine-tuning approach for stereo image super-resolution . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2025 , 151 .

Version:

A universal parameter-efficient fine-tuning approach for stereo image super-resolution EI
Journal Article | 2025, 151 | Engineering Applications of Artificial Intelligence
A universal parameter-efficient fine-tuning approach for stereo image super-resolution Scopus
Journal Article | 2025, 151 | Engineering Applications of Artificial Intelligence
HSINet: A Hybrid Semantic Integration Network for Medical Image Segmentation EI
Conference Paper | 2025, 2302 CCIS, 339-353 | 19th Chinese Conference on Image and Graphics Technologies and Applications, IGTA 2024

Abstract :

Medical image segmentation is crucial in medical image analysis. In recent years, deep learning, particularly convolutional neural networks (CNNs) and Transformer models, has significantly advanced this field. To fully leverage the abilities of CNNs and Transformers in extracting local and global information, we propose HSINet, which employs a Swin Transformer and the newly introduced Deep Dense Feature Extraction (DFE) block to construct dual encoders. A Swin Transformer and DFE Encoded Feature Fusion (TDEF) module is designed to merge features from the two branches, and the Multi-Scale Semantic Fusion (MSSF) module further promotes the full utilization of low-level and high-level features from the encoders. We evaluated the proposed network on a private familial cerebral cavernous malformation dataset (SG-FCCM) and the ISIC-2017 challenge dataset. The experimental results indicate that the proposed HSINet outperforms several other advanced segmentation methods, demonstrating its superiority in medical image segmentation. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
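
A dual-encoder design of this kind needs a point where CNN-branch and Transformer-branch features merge. The sketch below shows one plausible concatenation-plus-convolution fusion in the spirit of the TDEF module; the fusion form, channel counts, and class name are assumptions, not the paper's design.

```python
# Minimal sketch of fusing CNN-branch and Transformer-branch features.
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, c_cnn, c_trans, c_out):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_cnn + c_trans, c_out, 1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        )
    def forward(self, f_cnn, f_trans):
        # Both inputs are (B,C,H,W); reshape/upsample transformer tokens beforehand.
        return self.fuse(torch.cat([f_cnn, f_trans], dim=1))

fusion = DualBranchFusion(64, 96, 64)
out = fusion(torch.randn(1, 64, 56, 56), torch.randn(1, 96, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```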

Keyword :

Convolutional neural networks; Deep neural networks; Semantic Segmentation

Cite:

GB/T 7714 Zong, Ruige , Wang, Tao , Zhang, Xinlin et al. HSINet: A Hybrid Semantic Integration Network for Medical Image Segmentation [C] . 2025 : 339-353 .
MLA Zong, Ruige et al. "HSINet: A Hybrid Semantic Integration Network for Medical Image Segmentation" . (2025) : 339-353 .
APA Zong, Ruige , Wang, Tao , Zhang, Xinlin , Gao, Qinquan , Kang, Dezhi , Lin, Fuxin et al. HSINet: A Hybrid Semantic Integration Network for Medical Image Segmentation . (2025) : 339-353 .

Version:

HSINet: A Hybrid Semantic Integration Network for Medical Image Segmentation Scopus
Other | 2025, 2302 CCIS, 339-353 | Communications in Computer and Information Science
DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution SCIE
Journal Article | 2025, 623 | NEUROCOMPUTING

Abstract :

Although diffusion prior-based single-image super-resolution has demonstrated remarkable reconstruction capabilities, its potential in the domain of stereo image super-resolution remains underexplored. One significant challenge lies in the inherent stochasticity of diffusion models, which makes it difficult to ensure that the generated left and right images exhibit high semantic and texture consistency. This poses a considerable obstacle to advancing research in this field. Therefore, we introduce DiffSteISR, a pioneering framework for reconstructing real-world stereo images. DiffSteISR utilizes the powerful prior knowledge embedded in a pre-trained text-to-image model to efficiently recover the texture details lost in low-resolution stereo images. Specifically, DiffSteISR implements a time-aware stereo cross attention with temperature adapter (TASCATA) to guide the diffusion process, ensuring that the generated left and right views exhibit high texture consistency, thereby reducing the disparity error between the super-resolved images and the ground truth (GT) images. Additionally, a stereo omni attention control network (SOA ControlNet) is proposed to enhance the consistency of super-resolved images with GT images in pixel, perceptual, and distribution space. Finally, DiffSteISR incorporates a stereo semantic extractor (SSE) to capture view-specific soft semantic information and shared hard tag semantic information, thereby effectively improving the semantic accuracy and consistency of the generated left and right images. Extensive experimental results demonstrate that DiffSteISR accurately reconstructs natural and precise textures from low-resolution stereo images while maintaining high semantic and texture consistency between the left and right views.
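
The core coupling mechanism, cross-view attention with a learnable temperature, can be sketched as follows. This single-head formulation and the residual injection are assumptions made for illustration; TASCATA's actual time-aware design is more involved.

```python
# Minimal sketch of temperature-scaled cross-view attention between stereo views.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StereoCrossAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.log_temp = nn.Parameter(torch.zeros(1))  # learnable temperature

    def forward(self, left, right):
        """left/right: (B, N, C) token sequences from the two views."""
        q, k, v = self.q(left), self.k(right), self.v(right)
        scale = (q.shape[-1] ** -0.5) * self.log_temp.exp()
        attn = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
        return left + attn @ v   # inject right-view information into the left view

attn = StereoCrossAttention(64)
l, r = torch.randn(2, 100, 64), torch.randn(2, 100, 64)
print(attn(l, r).shape)  # torch.Size([2, 100, 64])
```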

Keyword :

ControlNet; Diffusion model; Reconstructing; Stereo image super-resolution; Texture consistency

Cite:

GB/T 7714 Zhou, Yuanbo , Zhang, Xinlin , Deng, Wei et al. DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution [J]. | NEUROCOMPUTING , 2025 , 623 .
MLA Zhou, Yuanbo et al. "DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution" . | NEUROCOMPUTING 623 (2025) .
APA Zhou, Yuanbo , Zhang, Xinlin , Deng, Wei , Wang, Tao , Tan, Tao , Gao, Qinquan et al. DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution . | NEUROCOMPUTING , 2025 , 623 .

Version:

DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution EI
Journal Article | 2025, 623 | Neurocomputing
DiffSteISR: Harnessing diffusion prior for superior real-world stereo image super-resolution Scopus
Journal Article | 2025, 623 | Neurocomputing
Contrastive Learning via Randomly Generated Deep Supervision EI
Conference Paper | 2025 | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025

Abstract :

Unsupervised visual representation learning has gained significant attention in the computer vision community, driven by recent advancements in contrastive learning. Most existing contrastive learning frameworks rely on instance discrimination as a pretext task, treating each instance as a distinct category. However, this often leads to intra-class collision in a large latent space, compromising the quality of the learned representations. To address this issue, we propose a novel contrastive learning method that utilizes randomly generated supervision signals. Our framework incorporates two projection heads: one handles a conventional classification task, while the other employs a random algorithm to generate fixed-length vectors representing different classes. The second head executes a supervised contrastive learning task based on these vectors, effectively clustering instances of the same class and increasing the separation between different classes. Our method, Contrastive Learning via Randomly Generated Supervision (CLRGS), significantly improves the quality of feature representations across various datasets and achieves state-of-the-art performance in contrastive learning tasks. © 2025 IEEE.
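
The random-supervision idea, fixed random vectors standing in for class prototypes, can be shown in a few lines. The vector length and the cosine-similarity pull loss below are assumptions; the paper's supervised contrastive objective may differ.

```python
# Minimal sketch of supervision via randomly generated, fixed class vectors.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 10, 128
# Fixed random "supervision signals", one vector per class (never trained).
class_vectors = F.normalize(torch.randn(num_classes, dim), dim=1)

def random_supervision_loss(features, labels):
    """Pull each normalized feature toward its class's random vector."""
    feats = F.normalize(features, dim=1)
    targets = class_vectors[labels]                   # (B, dim)
    return (1 - (feats * targets).sum(dim=1)).mean()  # 1 - cosine similarity

feats = torch.randn(32, dim, requires_grad=True)
labels = torch.randint(0, num_classes, (32,))
loss = random_supervision_loss(feats, labels)
loss.backward()
print(float(loss))
```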

Cite:

GB/T 7714 Wang, Shibo , Ma, Zili , Chan, Ka-Hou et al. Contrastive Learning via Randomly Generated Deep Supervision [C] . 2025 .
MLA Wang, Shibo et al. "Contrastive Learning via Randomly Generated Deep Supervision" . (2025) .
APA Wang, Shibo , Ma, Zili , Chan, Ka-Hou , Liu, Yue , Tong, Tong , Gao, Qinquan et al. Contrastive Learning via Randomly Generated Deep Supervision . (2025) .

Version:

Contrastive Learning via Randomly Generated Deep Supervision Scopus
Other | 2025 | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Weakly Supervised Classification for Nasopharyngeal Carcinoma With Transformer in Whole Slide Images SCIE
Journal Article | 2024, 28 (12), 7251-7262 | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract :

Pathological examination of nasopharyngeal carcinoma (NPC) is an indispensable factor for diagnosis, guiding clinical treatment, and judging prognosis. Traditional and fully supervised NPC diagnosis algorithms require manual delineation of regions of interest on gigapixel whole slide images (WSIs), which is laborious and often biased. In this paper, we propose a weakly supervised framework based on the Tokens-to-Token Vision Transformer (WS-T2T-ViT) for accurate NPC classification with only a slide-level label; the label of each tile image is inherited from its slide-level label. Specifically, WS-T2T-ViT is composed of a multi-resolution pyramid, T2T-ViT, and a multi-scale attention module. The multi-resolution pyramid is designed to imitate the coarse-to-fine process of manual pathological analysis and learn features at different magnification levels. The T2T module captures local and global features to overcome the lack of global information. The multi-scale attention module improves classification performance by weighting the contributions of different granularity levels. Extensive experiments are performed on an 802-patient NPC dataset and the CAMELYON16 dataset. WS-T2T-ViT achieves an area under the receiver operating characteristic curve (AUC) of 0.989 for NPC classification on the NPC dataset. The experimental results on the CAMELYON16 dataset demonstrate the robustness and generalizability of WS-T2T-ViT in WSI-level classification.
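
The scale-weighting step can be sketched as a small gated pooling over per-magnification slide features. The gating form and dimensions below are illustrative assumptions in the spirit of the multi-scale attention module, not the paper's exact design.

```python
# Minimal sketch of attention-weighted fusion across magnification levels.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, scale_feats):
        """scale_feats: (B, S, D) slide features from S magnification levels."""
        w = torch.softmax(self.gate(scale_feats), dim=1)  # (B, S, 1) scale weights
        return (w * scale_feats).sum(dim=1)               # weighted slide embedding

msa = MultiScaleAttention(256)
fused = msa(torch.randn(4, 3, 256))  # e.g. 5x, 10x, 20x magnifications
print(fused.shape)  # torch.Size([4, 256])
```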

Keyword :

Annotations; Breast cancer; Cancer; Digital pathology; Feature extraction; Hospitals; image pyramid; nasopharyngeal carcinoma; transformer; Transformers; Tumors; weakly supervised learning

Cite:

GB/T 7714 Hu, Ziwei , Wang, Jianchao , Gao, Qinquan et al. Weakly Supervised Classification for Nasopharyngeal Carcinoma With Transformer in Whole Slide Images [J]. | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS , 2024 , 28 (12) : 7251-7262 .
MLA Hu, Ziwei et al. "Weakly Supervised Classification for Nasopharyngeal Carcinoma With Transformer in Whole Slide Images" . | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 28 . 12 (2024) : 7251-7262 .
APA Hu, Ziwei , Wang, Jianchao , Gao, Qinquan , Wu, Zhida , Xu, Hanchuan , Guo, Zhechen et al. Weakly Supervised Classification for Nasopharyngeal Carcinoma With Transformer in Whole Slide Images . | IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS , 2024 , 28 (12) , 7251-7262 .

Version:

Weakly Supervised Classification for Nasopharyngeal Carcinoma with Transformer in Whole Slide Images Scopus
Journal Article | 2024, 28 (12), 1-12 | IEEE Journal of Biomedical and Health Informatics
Weakly Supervised Classification for Nasopharyngeal Carcinoma with Transformer in Whole Slide Images EI
Journal Article | 2024, 28 (12), 7251-7262 | IEEE Journal of Biomedical and Health Informatics
Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information SCIE
Journal Article | 2024, 255 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract :

Real-world stereo image super-resolution has a significant influence on enhancing the performance of computer vision systems. Although existing methods for single-image super-resolution can be applied to enhance stereo images, they often introduce notable modifications to the inherent disparity, resulting in a loss of disparity consistency between the original and the enhanced stereo images. To overcome this limitation, this paper proposes a novel approach that integrates an implicit stereo information discriminator and a hybrid degradation model. This combination ensures effective enhancement while preserving disparity consistency. The proposed method bridges the gap between the complex degradations of the real-world stereo domain and the simpler degradations of the real-world single-image super-resolution domain. Our results demonstrate impressive performance on synthetic and real datasets, enhancing visual perception while maintaining disparity consistency.
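
One practical detail implied by the abstract is that a synthetic degradation pipeline for stereo training data should apply the same degradation parameters to both views, so the LR pair keeps its disparity. The specific degradations and ranges below are illustrative assumptions, not the paper's hybrid model.

```python
# Minimal sketch: degrade a stereo pair with shared parameters.
import random
import torch
import torch.nn.functional as F

def degrade_pair(left, right, scale=4):
    """left/right: (B,3,H,W) HR views in [0,1]; returns a consistently degraded LR pair."""
    sigma = random.uniform(0.0, 10.0) / 255.0        # noise level shared by both views
    out = []
    for img in (left, right):                         # same parameters for both views
        lr = F.interpolate(img, scale_factor=1 / scale, mode="bicubic",
                           align_corners=False)
        lr = (lr + sigma * torch.randn_like(lr)).clamp(0, 1)
        out.append(lr)
    return out

hr_l, hr_r = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
lr_l, lr_r = degrade_pair(hr_l, hr_r)
print(lr_l.shape)  # torch.Size([1, 3, 32, 32])
```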

Keyword :

Disparity; Real-world; Stereo image super-resolution; Visual perception

Cite:

GB/T 7714 Zhou, Yuanbo , Xue, Yuyang , Bi, Jiang et al. Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information [J]. | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 255 .
MLA Zhou, Yuanbo et al. "Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information" . | EXPERT SYSTEMS WITH APPLICATIONS 255 (2024) .
APA Zhou, Yuanbo , Xue, Yuyang , Bi, Jiang , He, Wenlin , Zhang, Xinlin , Zhang, Jiajun et al. Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information . | EXPERT SYSTEMS WITH APPLICATIONS , 2024 , 255 .

Version:

Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information Scopus
Journal Article | 2024, 255 | Expert Systems with Applications
Towards real world stereo image super-resolution via hybrid degradation model and discriminator for implied stereo image information EI
Journal Article | 2024, 255 | Expert Systems with Applications
E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation SCIE
Journal Article | 2024, 40 (12), 8867-8880 | VISUAL COMPUTER

Abstract :

The goal of efficient and effective real-world image super-resolution (Real-ISR) is to recover a high-resolution image from a given low-resolution image with unknown degradation using limited computational resources. Prior research has attempted to design fully degradation-adaptive networks, where the entire backbone is a nonlinear combination of several sub-networks that handle different degradation subspaces. However, these methods heavily rely on expensive dynamic convolution operations and are inefficient in super-resolving images of different degradation levels. To address this issue, we propose an efficient and effective real-world image super-resolution network (E2-RealSR) based on partial degradation modulation, which consists of a small regression network and a lightweight super-resolution network. The former accurately predicts the individual degradation parameters of input images, while the latter modulates only part of its parameters based on the degradation information. Extensive experiments validate that our proposed method is capable of recovering rich details in real-world images with varying degradation levels. Moreover, our approach also has an advantage in terms of efficiency compared to state-of-the-art methods: it shows improved performance while using only 20% of the parameters and 60% of the FLOPs of DASR. The relevant code is made available as open source.
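
Partial degradation modulation can be sketched as an affine transform, driven by the predicted degradation vector, that touches only a slice of the feature channels. Channel counts and the split ratio below are assumptions for illustration.

```python
# Minimal sketch of partial degradation modulation.
import torch
import torch.nn as nn

class PartialModulation(nn.Module):
    def __init__(self, channels=64, mod_channels=16, deg_dim=4):
        super().__init__()
        self.mod_channels = mod_channels
        self.affine = nn.Linear(deg_dim, 2 * mod_channels)  # per-channel scale and shift

    def forward(self, feat, deg):
        """feat: (B,C,H,W) SR features; deg: (B,deg_dim) predicted degradation."""
        scale, shift = self.affine(deg).chunk(2, dim=1)
        m = self.mod_channels
        mod = feat[:, :m] * scale[..., None, None] + shift[..., None, None]
        return torch.cat([mod, feat[:, m:]], dim=1)  # remaining channels untouched

pm = PartialModulation()
out = pm(torch.randn(2, 64, 32, 32), torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```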

Keyword :

Degradation prediction; Efficient and effective super-resolution; Partial degradation modulation; Real-world image super-resolution

Cite:

GB/T 7714 Zhang, Jiajun , Zhou, Yuanbo , Tong, Tong et al. E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation [J]. | VISUAL COMPUTER , 2024 , 40 (12) : 8867-8880 .
MLA Zhang, Jiajun et al. "E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation" . | VISUAL COMPUTER 40 . 12 (2024) : 8867-8880 .
APA Zhang, Jiajun , Zhou, Yuanbo , Tong, Tong , Liu, Hongjun , Tian, Tian , Hu, Xingmei et al. E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation . | VISUAL COMPUTER , 2024 , 40 (12) , 8867-8880 .

Version:

E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation EI
Journal Article | 2024, 40 (12), 8867-8880 | Visual Computer
E2-RealSR: efficient and effective real-world super-resolution network based on partial degradation modulation Scopus
Journal Article | 2024, 40 (12), 8867-8880 | Visual Computer
Distance guided generative adversarial network for explainable medical image classifications SCIE
Journal Article | 2024, 118 | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS

Abstract :

Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods primarily rely on prior intra-domain knowledge. On the other hand, advanced generative adversarial networks (GANs) generate inter-domain samples with limited variety. These previous methods make limited contributions to describing the decision boundaries for binary classification. In this paper, we propose a distance-guided GAN (DisGAN) that controls the variation degrees of generated samples in the hyperplane space. Specifically, we instantiate the idea of DisGAN in two ways. The first is the vertical distance GAN (VerDisGAN), where inter-domain generation is conditioned on vertical distances. The second is the horizontal distance GAN (HorDisGAN), where intra-domain generation is conditioned on horizontal distances. Furthermore, VerDisGAN can produce class-specific regions by mapping the source images to the hyperplane. Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods with explainable binary classification. The proposed method can be applied to different classification architectures and has the potential to extend to multi-class classification. We provide the code at https://github.com/yXiangXiong/DisGAN.
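
The quantity that conditions generation here is a distance in the classifier's hyperplane space. For a linear classifier w·x + b, the signed perpendicular distance is (w·x + b)/||w||, as in the sketch below; the stand-in linear classifier and feature dimension are assumptions for illustration.

```python
# Minimal sketch of the signed distance to a classifier hyperplane.
import torch
import torch.nn as nn

classifier = nn.Linear(128, 1)   # hyperplane w·x + b = 0 in feature space

def hyperplane_distance(features):
    """Signed perpendicular distance of each feature vector to the boundary."""
    w, b = classifier.weight, classifier.bias
    return (features @ w.t() + b) / w.norm()

feats = torch.randn(8, 128)
d = hyperplane_distance(feats)   # (8, 1): a generator can be conditioned on this
print(d.squeeze(1))
```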

Keyword :

Binary classification; Data augmentation; Decision boundary; Explainability; Generative adversarial network; Hyperplane

Cite:

GB/T 7714 Xiong, Xiangyu , Sun, Yue , Liu, Xiaohong et al. Distance guided generative adversarial network for explainable medical image classifications [J]. | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS , 2024 , 118 .
MLA Xiong, Xiangyu et al. "Distance guided generative adversarial network for explainable medical image classifications" . | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS 118 (2024) .
APA Xiong, Xiangyu , Sun, Yue , Liu, Xiaohong , Ke, Wei , Lam, Chan-Tong , Chen, Jiangang et al. Distance guided generative adversarial network for explainable medical image classifications . | COMPUTERIZED MEDICAL IMAGING AND GRAPHICS , 2024 , 118 .

Version:

Distance guided generative adversarial network for explainable medical image classifications Scopus
Journal Article | 2024, 118 | Computerized Medical Imaging and Graphics
Distance guided generative adversarial network for explainable medical image classifications EI
Journal Article | 2024, 118 | Computerized Medical Imaging and Graphics