Query:
Scholar name: Zheng Qinghai
Abstract :
Federated learning encounters substantial challenges with heterogeneous data, leading to performance degradation and convergence issues. While considerable progress has been achieved in mitigating such an impact, the reliability aspect of federated models has been largely disregarded. In this study, we conduct extensive experiments to investigate the reliability of both generic and personalized federated models. Our exploration uncovers a significant finding: federated models exhibit unreliability when faced with heterogeneous data, demonstrating poor calibration on in-distribution test data and low uncertainty levels on out-of-distribution data. This unreliability is primarily attributed to the presence of biased projection heads, which introduce miscalibration into the federated models. Inspired by this observation, we propose the 'Assembled Projection Heads' (APH) method for enhancing the reliability of federated models. By treating the existing projection head parameters as priors, APH randomly samples multiple initialized parameters of projection heads from the prior and further performs targeted fine-tuning on locally available data under varying learning rates. Such a head ensemble introduces parameter diversity into the deterministic model, eliminating the bias and producing reliable predictions via head averaging. We evaluate the effectiveness of the proposed APH method across three prominent federated benchmarks. Experimental results validate the efficacy of APH in model calibration and uncertainty estimation. Notably, APH can be seamlessly integrated into various federated approaches while requiring less than 30% additional computation cost for 100× inferences within large models.
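A minimal PyTorch sketch of the head-assembling procedure described above: the trained head's parameters act as a prior, several perturbed copies are fine-tuned locally under different learning rates, and predictions are averaged across heads. The noise scale, learning rates, and loop structure are illustrative assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn as nn

def assemble_heads(trained_head: nn.Linear, num_heads: int = 4, noise_std: float = 0.01):
    """Sample several projection heads around the existing head's parameters (used as a prior)."""
    heads = []
    for _ in range(num_heads):
        head = copy.deepcopy(trained_head)
        with torch.no_grad():
            for p in head.parameters():
                p.add_(noise_std * torch.randn_like(p))  # perturb the prior parameters
        heads.append(head)
    return heads

def finetune_heads(heads, backbone, local_loader, lrs=(1e-3, 5e-4, 1e-4, 5e-5), epochs: int = 1):
    """Fine-tune each sampled head on locally available data under a different learning rate
    (lrs should have one entry per head)."""
    criterion = nn.CrossEntropyLoss()
    for head, lr in zip(heads, lrs):
        opt = torch.optim.SGD(head.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in local_loader:
                with torch.no_grad():
                    feats = backbone(x)           # keep the shared feature extractor fixed
                loss = criterion(head(feats), y)
                opt.zero_grad(); loss.backward(); opt.step()
    return heads

@torch.no_grad()
def predict(backbone, heads, x):
    """Reliable prediction via head averaging: average the softmax outputs of the assembled heads."""
    feats = backbone(x)
    probs = torch.stack([head(feats).softmax(dim=-1) for head in heads]).mean(dim=0)
    return probs
```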
Keyword :
Artificial intelligence; Reliability
Cite:
GB/T 7714 | Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, et al. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models [C]. 2024: 11329-11337. |
MLA | Chen, Jinqian, et al. "Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models". (2024): 11329-11337. |
APA | Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, Li, Zhongyu, Tian, Zhiqiang. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models. (2024): 11329-11337. |
Abstract :
Multi-view clustering leverages diverse information sources for unsupervised clustering. While existing methods primarily focus on learning a fused representation matrix, they often overlook the impact of private information and noise. To overcome this limitation, we propose a novel approach, the Multi-view Semantic Consistency based Information Bottleneck for Clustering (MSCIB). Our method emphasizes semantic consistency to enhance the information bottleneck learning process across different views. It aligns multiple views in the semantic space, capturing valuable consistent information from multi-view data. The learned semantic consistency improves the ability of the information bottleneck to precisely distinguish consistent information, resulting in a more discriminative and unified feature representation for clustering. Experimental results on diverse multi-view datasets demonstrate that MSCIB achieves state-of-the-art performance. Compared with the average performance of the other compared algorithms, our approach exhibits a notable improvement of at least 4%.
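A hedged sketch of the two ingredients the abstract highlights: semantic-consistency alignment of view representations and an information-bottleneck-style objective. The contrastive form of the alignment loss and the simple norm-based compression term are assumptions used only to illustrate the idea, not the published objective.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(z_views, temperature: float = 0.5):
    """Contrastive alignment in the semantic space: the same sample across two views is a positive pair."""
    loss = 0.0
    n = z_views[0].size(0)
    for i in range(len(z_views)):
        for j in range(i + 1, len(z_views)):
            zi = F.normalize(z_views[i], dim=1)
            zj = F.normalize(z_views[j], dim=1)
            logits = zi @ zj.t() / temperature            # (n, n) similarity matrix
            targets = torch.arange(n, device=zi.device)   # matching indices are positives
            loss = loss + F.cross_entropy(logits, targets)
    return loss

def information_bottleneck_loss(z_views, x_recons, x_views, beta: float = 1e-3):
    """Keep information needed to reconstruct the inputs while penalizing representation energy,
    a crude stand-in for the compression term of the bottleneck."""
    recon = sum(F.mse_loss(r, x) for r, x in zip(x_recons, x_views))
    compress = sum(z.pow(2).mean() for z in z_views)
    return recon + beta * compress
```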
Keyword :
Contrastive clustering; Information bottleneck; Multi-view clustering
Cite:
GB/T 7714 | Yan, Wenbiao, Zhou, Yiyang, Wang, Yifei, et al. Multi-view Semantic Consistency based Information Bottleneck for Clustering [J]. | KNOWLEDGE-BASED SYSTEMS, 2024, 288. |
MLA | Yan, Wenbiao, et al. "Multi-view Semantic Consistency based Information Bottleneck for Clustering". | KNOWLEDGE-BASED SYSTEMS 288 (2024). |
APA | Yan, Wenbiao, Zhou, Yiyang, Wang, Yifei, Zheng, Qinghai, Zhu, Jihua. Multi-view Semantic Consistency based Information Bottleneck for Clustering. | KNOWLEDGE-BASED SYSTEMS, 2024, 288. |
Abstract :
With the extensive use of multi-view data in practice, multi-view spectral clustering has received a lot of attention. In this work, we focus on the following two challenges, namely, how to deal with the partially contradictory graph information among different views and how to conduct clustering without parameter selection. To this end, we establish a novel graph learning framework, which avoids the linear combination of the partially contradictory graph information among different views and learns a unified graph for clustering without parameter selection. Specifically, we introduce a flexible graph degeneration with a structured graph constraint to address the aforementioned challenging issues. Besides, our method can be employed to deal with large-scale data by using the bipartite graph. Experimental results show the effectiveness and competitiveness of our method, compared to several state-of-the-art methods.
Keyword :
Bipartite graph; Circuits and systems; graph degeneration; Laplace equations; Multi-view data; Optimization; structured graph constraint; Task analysis; Time complexity; Vectors
Cite:
GB/T 7714 | Zheng, Q. Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering [J]. | IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (9): 1-1. |
MLA | Zheng, Q. "Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering". | IEEE Transactions on Circuits and Systems for Video Technology 34.9 (2024): 1-1. |
APA | Zheng, Q. Flexible and Parameter-free Graph Learning for Multi-view Spectral Clustering. | IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (9), 1-1. |
Abstract :
Multi-view clustering aims to improve the clustering performance by leveraging information from multiple views. Most existing works assume that all views are complete. However, samples in real-world scenarios cannot always be observed in all views, leading to the challenging problem of Incomplete Multi-View Clustering (IMVC). Although some attempts have been made recently, they still suffer from the following two limitations: (1) they usually adopt shallow models, which are unable to sufficiently explore the consistency and complementarity of multiple views; (2) they lack a suitable measurement to evaluate the quality of the recovered data during the learning process. To address these limitations, we introduce a novel method, Incomplete Multi-View Clustering via Inference and Evaluation (IMVC-IE). Specifically, IMVC-IE first adopts a contrastive learning strategy on features of different views to excavate the underlying information from existing samples. Subsequently, massive alternative simulated data are inferred for missing views, and a novel evaluation strategy is presented to select proper data for missing-view completion. Extensive experiments verify the effectiveness of our method.
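A hedged sketch of the infer-then-evaluate completion step: several candidate completions are generated for a missing view and scored by how consistent their embeddings are with the sample's observed views. The candidate generator is omitted, and the cosine-similarity score is an illustrative placeholder, not the paper's exact evaluation strategy.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def complete_missing_view(encoders, observed, missing_view_idx, candidates):
    """Pick, among candidate simulated data for a missing view, the one whose embedding
    agrees best (by cosine similarity) with the embeddings of the observed views.

    encoders: per-view encoder modules; observed: dict view_idx -> feature tensor for one sample;
    candidates: iterable of candidate tensors for the missing view.
    """
    obs_embs = [F.normalize(encoders[v](x.unsqueeze(0)), dim=1) for v, x in observed.items()]
    best_score, best_candidate = -float("inf"), None
    for cand in candidates:                               # massive alternative simulated data
        emb = F.normalize(encoders[missing_view_idx](cand.unsqueeze(0)), dim=1)
        score = sum((emb * e).sum().item() for e in obs_embs) / len(obs_embs)
        if score > best_score:
            best_score, best_candidate = score, cand
    return best_candidate, best_score
```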
Cite:
GB/T 7714 | Huang, Binqiang, Huang, Zhijie, Lan, Shoujie, et al. INCOMPLETE MULTI-VIEW CLUSTERING VIA INFERENCE AND EVALUATION [C]. 2024: 8180-8184. |
MLA | Huang, Binqiang, et al. "INCOMPLETE MULTI-VIEW CLUSTERING VIA INFERENCE AND EVALUATION". (2024): 8180-8184. |
APA | Huang, Binqiang, Huang, Zhijie, Lan, Shoujie, Zheng, Qinghai, Yu, Yuanlong. INCOMPLETE MULTI-VIEW CLUSTERING VIA INFERENCE AND EVALUATION. (2024): 8180-8184. |
Abstract :
In the era of smart cities, the advent of the Internet of Things technology has catalyzed the proliferation of multimodal sensor data, presenting new challenges in cross-modal event detection, particularly in audio event detection via textual queries. This paper focuses on the novel task of text-to-audio grounding (TAG), aiming to precisely localize sound segments that correspond to events described in textual queries within an untrimmed audio. This challenging new task requires multi-modal (acoustic and linguistic) information fusion as well as the reasoning for the cross-modal semantic matching between the given audio and textual query. Unlike conventional methods that often overlook the nuanced interactions between and within modalities, we introduce the Cross-modal Graph Interaction (CGI) model. This innovative approach leverages a language graph to model complex semantic relationships between query words, enhancing the understanding of textual queries. Additionally, a cross-modal attention mechanism generates snippet-specific query representations, facilitating fine-grained semantic matching between audio segments and textual descriptions. A cross-gating module further refines this process by emphasizing relevant features across modalities and suppressing irrelevant information, optimizing multimodal information fusion. Our comprehensive evaluation on the Audiogrounding benchmark dataset not only demonstrates the CGI model's superior performance over existing methods, but also underscores the significance of sophisticated multimodal interaction in improving the efficacy of TAG in smart cities.
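A compact PyTorch sketch of the cross-modal interaction described above: every audio snippet attends over the query words to obtain a snippet-specific query representation, and a cross-gating step emphasizes relevant features before matching. The number of attention heads, the gating form, and the dot-product matching are illustrative assumptions; the language-graph module is omitted.

```python
import torch
import torch.nn as nn

class SnippetQueryAttention(nn.Module):
    """For every audio snippet, attend over query word features to build a snippet-specific
    query representation, then fuse the two streams with a simple cross-gating step."""
    def __init__(self, dim: int):
        super().__init__()
        # dim is assumed to be divisible by the number of attention heads
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate_a = nn.Linear(dim, dim)   # gate for the audio stream, driven by text
        self.gate_t = nn.Linear(dim, dim)   # gate for the text stream, driven by audio

    def forward(self, audio, words):
        # audio: (B, T, D) snippet features; words: (B, L, D) query word features
        q_per_snippet, _ = self.attn(query=audio, key=words, value=words)   # (B, T, D)
        gated_audio = audio * torch.sigmoid(self.gate_a(q_per_snippet))     # emphasize relevant audio dims
        gated_query = q_per_snippet * torch.sigmoid(self.gate_t(audio))     # suppress irrelevant text dims
        scores = (gated_audio * gated_query).sum(dim=-1)                    # (B, T) snippet-query matching
        return scores
```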
Keyword :
Cross-modal learning; Graph neural network; Multimodal information fusion; Smart city; Text-to-audio grounding
Cite:
GB/T 7714 | Tang, Haoyu, Hu, Yupeng, Wang, Yunxiao, et al. Listen as you wish: Fusion of audio and text for cross-modal event detection in smart cities [J]. | INFORMATION FUSION, 2024, 110. |
MLA | Tang, Haoyu, et al. "Listen as you wish: Fusion of audio and text for cross-modal event detection in smart cities". | INFORMATION FUSION 110 (2024). |
APA | Tang, Haoyu, Hu, Yupeng, Wang, Yunxiao, Zhang, Shuaike, Xu, Mingzhu, Zhu, Jihua, et al. Listen as you wish: Fusion of audio and text for cross-modal event detection in smart cities. | INFORMATION FUSION, 2024, 110. |
Abstract :
Multi-view clustering has attracted widespread attention because it can improve clustering performance by integrating information from various views of samples. However, many existing methods either neglect graph information entirely or only partially incorporate it, leading to information loss and non-comprehensive representation. Besides, they usually make use of graph information by determining a fixed number of neighbors based on prior knowledge, which limits the exploration of graph information contained in data. To address these issues, we propose a novel method, termed Graph-Driven deep Multi-View Clustering with self-paced learning (GDMVC), which integrates both feature information and graph information to better explore information within the data. Additionally, based on the idea of self-paced learning, this method gradually increases the number of neighbors and updates the similarity matrix, progressively providing more graph information to guide representation learning. In this way, we avoid issues associated with a fixed number of neighbors and ensure a thorough exploration of graph information contained in the original data. Furthermore, this method not only ensures the consistency among views but also leverages graph information to further enhance the unified representation, aiming to obtain more separable cluster structures. Extensive experiments on real datasets demonstrate its effectiveness for multi-view clustering. Our code will be released at https://github.com/yff-java/GDMVC/.
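The self-paced use of graph information can be illustrated as below: the number of neighbors used to build the similarity matrix grows as training proceeds, so more graph information is introduced gradually. The schedule, the Gaussian conversion from distances to similarities, and the symmetrization are assumptions for illustration, not GDMVC's exact update.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def self_paced_similarity(features, epoch, k_start=5, k_step=5, k_max=50):
    """Rebuild the kNN similarity graph with a neighbor count that grows with the epoch index.

    features: (n, d) array of current representations; returns an (n, n) symmetric similarity matrix.
    """
    k = min(k_start + epoch * k_step, k_max, len(features) - 1)
    graph = kneighbors_graph(features, n_neighbors=k, mode="distance", include_self=False)
    dist = graph.toarray()
    sigma = dist[dist > 0].mean() + 1e-8
    sim = np.where(dist > 0, np.exp(-dist / sigma), 0.0)   # turn distances into similarities
    return 0.5 * (sim + sim.T)                             # symmetrize the similarity matrix
```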
Keyword :
Graph information; Multi-View Clustering; Representation learning; Self-paced learning
Cite:
GB/T 7714 | Bai, Shunshun, Ren, Xiaojin, Zheng, Qinghai, et al. Graph-Driven deep Multi-View Clustering with self-paced learning [J]. | KNOWLEDGE-BASED SYSTEMS, 2024, 296. |
MLA | Bai, Shunshun, et al. "Graph-Driven deep Multi-View Clustering with self-paced learning". | KNOWLEDGE-BASED SYSTEMS 296 (2024). |
APA | Bai, Shunshun, Ren, Xiaojin, Zheng, Qinghai, Zhu, Jihua. Graph-Driven deep Multi-View Clustering with self-paced learning. | KNOWLEDGE-BASED SYSTEMS, 2024, 296. |
Abstract :
Federated learning encounters a critical challenge of data heterogeneity, adversely affecting the performance and convergence of the federated model. Various approaches have been proposed to address this issue, yet their effectiveness is still limited. Recent studies have revealed that the federated model suffers severe forgetting in local training, leading to global forgetting and performance degradation. Although the analysis provides valuable insights, a comprehensive understanding of the vulnerable classes and their impact factors is yet to be established. In this paper, we aim to bridge this gap by systematically analyzing the forgetting degree of each class during local training across different communication rounds. Our observations are: (1) Both missing and non-dominant classes suffer similar severe forgetting during local training, while dominant classes show improvement in performance. (2) When dynamically reducing the sample size of a dominant class, catastrophic forgetting occurs abruptly when the proportion of its samples is below a certain threshold, indicating that the local model struggles to leverage a few samples of a specific class effectively to prevent forgetting. Motivated by these findings, we propose a novel and straightforward algorithm called Federated Knowledge Anchor (FedKA). Assuming that all clients have a single shared sample for each class, the knowledge anchor is constructed before each local training stage by extracting shared samples for missing classes and randomly selecting one sample per class for non-dominant classes. The knowledge anchor is then utilized to correct the gradient of each mini-batch towards the direction of preserving the knowledge of the missing and non-dominant classes. Extensive experimental results demonstrate that our proposed FedKA achieves fast and stable convergence, significantly improving accuracy on popular benchmarks.
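A hedged sketch of the knowledge-anchor idea: one shared sample per missing or non-dominant class is collected before local training, and its loss is combined with each mini-batch loss so that the update also preserves knowledge of under-represented classes. The dominance threshold and the simple additive combination are assumptions; the paper's exact gradient-correction rule may differ.

```python
import torch
import torch.nn as nn

def build_knowledge_anchor(shared_samples, local_class_counts, dominant_threshold=0.2):
    """Collect one sample per missing or non-dominant class.

    shared_samples: dict class -> (input_tensor, label_int); local_class_counts: dict class -> count.
    """
    total = sum(local_class_counts.values())
    anchor = []
    for cls, (x, y) in shared_samples.items():
        proportion = local_class_counts.get(cls, 0) / max(total, 1)
        if proportion < dominant_threshold:               # missing or non-dominant locally
            anchor.append((x, y))
    return anchor

def local_step(model, optimizer, batch, anchor, lam=1.0):
    """One corrected mini-batch step: the anchor loss pulls the update toward preserving
    knowledge of under-represented classes (additive combination assumed for illustration)."""
    criterion = nn.CrossEntropyLoss()
    x, y = batch
    loss = criterion(model(x), y)
    if anchor:
        ax = torch.stack([a for a, _ in anchor])
        ay = torch.tensor([c for _, c in anchor])
        loss = loss + lam * criterion(model(ax), ay)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```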
Keyword :
federated learning; knowledge preservation; non-iid
Cite:
GB/T 7714 | Chen, J., Zhu, J., Zheng, Q. Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor [unknown]. |
MLA | Chen, J., et al. "Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor" [unknown]. |
APA | Chen, J., Zhu, J., Zheng, Q. Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor [unknown]. |
Abstract :
Without valuable label information to guide the learning process, it is demanding to fully excavate and integrate the underlying information from different views to learn a unified multi-view representation. This paper focuses on this challenge and presents a novel method, termed Graph-guided Unsupervised Multi-view Representation Learning (GUMRL), which takes full advantage of multi-view graph information during the learning process. To be specific, GUMRL jointly conducts view-specific feature representation learning, which is under the guidance of graph information, and unified feature representation learning, which fuses the underlying graph information of different views to learn the desired unified multi-view feature representation. For downstream tasks, such as clustering and classification, classic single-view algorithms can be directly performed on the learned unified multi-view representation. The designed objective function is effectively optimized based on an alternating direction minimization method, and experiments conducted on six real-world multi-view datasets show the effectiveness and competitiveness of GUMRL compared to several state-of-the-art methods.
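GUMRL's objective is optimized by alternating direction minimization rather than gradient descent; the snippet below only illustrates the generic form of graph guidance, a Laplacian regularizer that encourages samples connected in a view's graph to share similar representations. It is an assumption-level illustration, not the method's actual objective.

```python
import numpy as np

def laplacian_regularizer(Z, W):
    """Graph-guidance term: nearby samples (large W_ij) are encouraged to have similar representations.

    Z: (n, d) representation matrix; W: (n, n) symmetric similarity graph of one view.
    Returns tr(Z^T L Z) with L the unnormalized graph Laplacian.
    """
    D = np.diag(W.sum(axis=1))
    L = D - W                                              # unnormalized graph Laplacian
    return np.trace(Z.T @ L @ Z)
```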
Keyword :
graph information; Multi-view learning; multi-view representation learning
Cite:
GB/T 7714 | Zheng, Qinghai, Zhu, Jihua, Li, Zhongyu, et al. Graph-Guided Unsupervised Multiview Representation Learning [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (1): 146-159. |
MLA | Zheng, Qinghai, et al. "Graph-Guided Unsupervised Multiview Representation Learning". | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.1 (2023): 146-159. |
APA | Zheng, Qinghai, Zhu, Jihua, Li, Zhongyu, Tang, Haoyu. Graph-Guided Unsupervised Multiview Representation Learning. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (1), 146-159. |
Abstract :
As the field of computer vision continues to develop, more flexible strategies are needed to cope with the large-scale and dynamic nature of real-world object categorization. However, most existing incremental learning methods ignore the rich information about previous tasks that is embedded in the trained model during continual learning. By combining model inversion with generative adversarial networks, this paper proposes a model inversion-based generation technique that makes the images produced by the generator more informative. Specifically, the information in the model trained on the previous task can be inverted into images, which are then added to the training process of the generative network. The experimental results show that the proposed method alleviates the catastrophic forgetting problem in incremental learning and outperforms other traditional methods.
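A generic sketch of the model-inversion step the abstract refers to: images are recovered from the frozen previous-task model by optimizing the inputs to be confidently classified as a target class. The optimizer, step count, and the simple pixel-energy prior are assumptions; feeding the inverted images into the generative-adversarial training is omitted here.

```python
import torch
import torch.nn.functional as F

def invert_class_images(old_model, target_class, num_images=8, image_shape=(3, 32, 32),
                        steps=200, lr=0.05):
    """Recover images that the frozen previous-task model strongly associates with a class,
    by optimizing the inputs directly (a generic inversion objective; extra regularizers omitted)."""
    old_model.eval()
    x = torch.randn(num_images, *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    labels = torch.full((num_images,), target_class, dtype=torch.long)
    for _ in range(steps):
        logits = old_model(x)
        loss = F.cross_entropy(logits, labels) + 1e-4 * x.pow(2).mean()  # small prior on pixel energy
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()
```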
Keyword :
Catastrophic forgetting; Deep learning; Incremental learning
Cite:
GB/T 7714 | Wu, Dianbin, Jiang, Weijie, Huang, Zhiyong, et al. Model Inversion-Based Incremental Learning [J]. | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153: 1228-1236. |
MLA | Wu, Dianbin, et al. "Model Inversion-Based Incremental Learning". | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022 153 (2023): 1228-1236. |
APA | Wu, Dianbin, Jiang, Weijie, Huang, Zhiyong, Zheng, Qinghai, Chen, Xiaodong, Lin, WangQiu, et al. Model Inversion-Based Incremental Learning. | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153, 1228-1236. |
Abstract :
Few-shot anomaly detection addresses anomaly detection when training samples are scarce. Previous anomaly detection methods perform poorly when samples are lacking, and current few-shot anomaly detection methods remain unsatisfactory, so we approach this problem from the perspective of few-shot learning. We utilize a pre-trained model for feature extraction and construct multiple sub-prototype networks on multi-scale features to compute an anomaly map at each scale; the final anomaly map is used for anomaly detection. Our method does not need to be trained for each category and is plug-and-play when a small number of normal-class samples is available as the support set. Experiments show that our method achieves excellent performance on the MNIST, CIFAR10, and MVTecAD datasets.
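A minimal sketch of the multi-scale prototype idea: support-set features from a pre-trained backbone are averaged into a prototype at each scale, and a query's anomaly map is the per-location distance to those prototypes, upsampled and summed over scales. Feature shapes and the averaging/summation choices are illustrative assumptions, not the paper's exact sub-prototype construction.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_prototypes(support_feature_maps):
    """support_feature_maps: list over scales of support-set features, each (N, C, H, W).
    Returns one prototype per spatial location per scale (mean over the support set)."""
    return [f.mean(dim=0) for f in support_feature_maps]   # list of (C, H, W)

@torch.no_grad()
def anomaly_map(query_maps, prototypes, out_size=(224, 224)):
    """Distance of each query location to the prototype at every scale, upsampled and summed."""
    maps = []
    for q, p in zip(query_maps, prototypes):                # q: (C, H, W) for one query image
        d = torch.norm(q - p, dim=0, keepdim=True)          # (1, H, W) per-location distance
        d = F.interpolate(d.unsqueeze(0), size=out_size, mode="bilinear", align_corners=False)
        maps.append(d.squeeze(0))
    return torch.stack(maps).sum(dim=0)                     # (1, H_out, W_out) final anomaly map
```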
Keyword :
Few-shot; Multi-scale features; Pre-trained model
Cite:
GB/T 7714 | Wu, Jingkai, Jiang, Weijie, Huang, Zhiyong, et al. Multi-scale Prototypical Network for Few-shot Anomaly Detection [J]. | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153: 1067-1076. |
MLA | Wu, Jingkai, et al. "Multi-scale Prototypical Network for Few-shot Anomaly Detection". | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022 153 (2023): 1067-1076. |
APA | Wu, Jingkai, Jiang, Weijie, Huang, Zhiyong, Lin, Qifeng, Zheng, Qinghai, Liang, Yi, et al. Multi-scale Prototypical Network for Few-shot Anomaly Detection. | ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153, 1067-1076. |