Query:
Scholar name: Zheng Qinghai (郑清海)
Abstract :
As a widely used method in signal processing, Principal Component Analysis (PCA) performs both the compression and the recovery of high-dimensional data by leveraging linear transformations. Regarding the robustness of PCA, how to discriminate between correct samples and outliers is a crucial and challenging issue. In this paper, we present a general model, which conducts PCA via a non-decreasing concave regularized minimization and is termed PCA-NCRM for short. Different from most existing PCA methods, which learn the linear transformations by minimizing the recovery errors between the recovered data and the original data in the least-squares sense, our model adopts a monotonically non-decreasing concave function to enhance the model's ability to distinguish correct samples from outliers. To be specific, PCA-NCRM increases the attention paid to samples with smaller recovery errors while diminishing the attention paid to samples with larger recovery errors. The proposed minimization problem can be efficiently addressed by an iterative re-weighting optimization. Experimental results on several datasets show the effectiveness of our model.
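To make the re-weighting mechanism concrete, here is a minimal sketch of the iterative scheme the abstract describes, assuming an illustrative concave regularizer g(e) = log(1 + e/sigma); the paper's exact regularizer, update rules, and the name `pca_ncrm_sketch` are not taken from the source.

```python
import numpy as np

def pca_ncrm_sketch(X, k, sigma=1.0, n_iter=20):
    """Iterative re-weighting for robust PCA-style subspace fitting.
    X: (n, d) data matrix; k: number of principal components."""
    Xc = X - X.mean(axis=0)            # center the data
    w = np.ones(Xc.shape[0])           # per-sample weights
    for _ in range(n_iter):
        # weighted covariance: low-weight (likely outlier) samples barely count
        C = (Xc * w[:, None]).T @ Xc / w.sum()
        _, eigvecs = np.linalg.eigh(C)
        W = eigvecs[:, -k:]            # top-k principal directions
        # per-sample squared recovery error ||x - W W^T x||^2
        err = ((Xc - Xc @ W @ W.T) ** 2).sum(axis=1)
        # derivative of the assumed concave g(e) = log(1 + e/sigma):
        # g'(e) = 1/(sigma + e), so small-error samples get large weights
        w = 1.0 / (sigma + err)
    return W, w
```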
Keyword :
Adaptation models; Dimensionality reduction; High dimensional data; Iterative algorithms; Iterative re-weighting optimization; Lagrangian functions; Minimization; Optimization; Principal component analysis; principal component analysis (PCA); Robustness; Signal processing algorithms; unsupervised dimensionality reduction
Cite:
GB/T 7714: Zheng, Qinghai, Zhuang, Yixin. Non-Decreasing Concave Regularized Minimization for Principal Component Analysis [J]. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 486-490.
MLA: Zheng, Qinghai, et al. "Non-Decreasing Concave Regularized Minimization for Principal Component Analysis." IEEE SIGNAL PROCESSING LETTERS 32 (2025): 486-490.
APA: Zheng, Qinghai, Zhuang, Yixin. Non-Decreasing Concave Regularized Minimization for Principal Component Analysis. IEEE SIGNAL PROCESSING LETTERS, 2025, 32, 486-490.
Abstract :
In real-world scenarios, missing views are common due to the complexity of data collection, so classifying incomplete multi-view data is inevitable. Although substantial progress has been achieved, two challenging problems remain in incomplete multi-view classification: (1) Simply ignoring the missing views is often ineffective, especially under high missing rates, and can lead to incomplete analysis and unreliable results. (2) Most existing multi-view classification models primarily focus on maximizing consistency between different views; however, neglecting specific-view information may decrease performance. To solve the above problems, we propose a novel framework called Trusted Cross-View Completion (TCVC) for incomplete multi-view classification. Specifically, TCVC consists of three modules: a Cross-view Feature Learning Module (CVFL), an Imputation Module (IM) and a Trusted Fusion Module (TFM). First, CVFL mines specific-view information to obtain cross-view reconstruction features. Then, IM restores each missing view by fusing the cross-view reconstruction features with weights guided by uncertainty-aware information, namely the quality assessment of the cross-view reconstruction features produced in TFM. Moreover, the recovered views are supervised by a cross-view neighborhood-aware constraint. Finally, TFM effectively fuses the completed data to generate trusted classification predictions. Extensive experiments show that our method is effective and robust.
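The weighted-fusion imputation step can be sketched as below, assuming the uncertainty-aware guidance arrives as simple per-view confidence scores; `impute_missing_view` and its signature are illustrative stand-ins, not the paper's API.

```python
import numpy as np

def impute_missing_view(cross_view_recons, confidences):
    """Weighted fusion of cross-view reconstruction features.
    cross_view_recons: list of (d,) reconstructions of the missing view,
    one per available view; confidences: quality scores per reconstruction
    (a stand-in here for the uncertainty-aware weights from trusted fusion)."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                                   # convex combination
    return np.sum([wi * r for wi, r in zip(w, cross_view_recons)], axis=0)

# e.g. two available views each reconstruct the missing third view:
# x3_hat = impute_missing_view([r_from_v1, r_from_v2], [0.9, 0.4])
```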
Keyword :
Cross-view feature learning; Incomplete multi-view classification; Uncertainty-aware
Cite:
GB/T 7714: Zhou, Liping, Chen, Shiyun, Song, Peihuan, et al. Trusted Cross-view Completion for incomplete multi-view classification [J]. NEUROCOMPUTING, 2025, 629.
MLA: Zhou, Liping, et al. "Trusted Cross-view Completion for incomplete multi-view classification." NEUROCOMPUTING 629 (2025).
APA: Zhou, Liping, Chen, Shiyun, Song, Peihuan, Zheng, Qinghai, Yu, Yuanlong. Trusted Cross-view Completion for incomplete multi-view classification. NEUROCOMPUTING, 2025, 629.
Abstract :
Multi-view clustering has attracted significant attention in recent years because it can leverage the consistent and complementary information of multiple views to improve clustering performance. However, effectively fusing this information and balancing the consistent and complementary information of multiple views are common challenges in multi-view clustering. Most existing multi-view fusion works rely on weighted-sum fusion or concatenation fusion, which cannot fully fuse the underlying information and do not consider balancing the consistent and complementary information of multiple views. To this end, we propose Cross-view Fusion for Multi-view Clustering (CFMVC). Specifically, CFMVC combines a deep neural network and a graph convolutional network for cross-view information fusion, which fully fuses the feature information and structural information of multiple views. To balance the consistent and complementary information of multiple views, CFMVC enhances the correlation among the same samples to maximize the consistent information while simultaneously reinforcing the independence among different samples to maximize the complementary information. Experimental results on several multi-view datasets demonstrate the effectiveness of CFMVC for the multi-view clustering task.
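The consistency/complementarity balance can be illustrated with a toy loss over paired view embeddings. This is an assumption-laden sketch (the loss form, weighting `lam`, and the name `balance_loss` are not from the paper), not CFMVC's actual objective.

```python
import torch

def balance_loss(z1, z2, lam=0.005):
    """Pull the two views of the same sample together (diagonal of S)
    and push different samples toward independence (off-diagonal of S).
    z1, z2: (n, d) embeddings of the same n samples from two views."""
    z1 = torch.nn.functional.normalize(z1, dim=1)
    z2 = torch.nn.functional.normalize(z2, dim=1)
    S = z1 @ z2.T                               # (n, n) cross-view similarities
    diag = torch.diagonal(S)
    consistency = ((diag - 1.0) ** 2).sum()     # same sample: high correlation
    off = S - torch.diag(diag)
    complementarity = (off ** 2).sum()          # different samples: independent
    return consistency + lam * complementarity
```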
Keyword :
Cross-view; deep neural network; graph convolutional network; multi-view clustering; multi-view fusion
Cite:
GB/T 7714: Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, et al. Cross-View Fusion for Multi-View Clustering [J]. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 621-625.
MLA: Huang, Zhijie, et al. "Cross-View Fusion for Multi-View Clustering." IEEE SIGNAL PROCESSING LETTERS 32 (2025): 621-625.
APA: Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, Yu, Yuanlong. Cross-View Fusion for Multi-View Clustering. IEEE SIGNAL PROCESSING LETTERS, 2025, 32, 621-625.
Abstract :
Multi-view clustering learns consistent information from multi-view data, aiming to achieve more significant clustering characteristics. However, data in real-world scenarios often exhibit temporal or spatial asynchrony, leading to views with unaligned instances. Existing methods primarily address this issue by learning transformation matrices to align unaligned instances, but learning differentiable transformation matrices is cumbersome. To address the challenge of partially unaligned instances, we propose Partially Multi-view Clustering via Re-alignment (PMVCR). Our approach integrates representation learning and data alignment through two-stage training and a re-alignment process. Specifically, our training process consists of three stages: (i) In the coarse-grained alignment stage, we construct negative instance pairs for unaligned instances and utilize contrastive learning to preliminarily learn the view representations of the instances. (ii) In the re-alignment stage, we match unaligned instances based on the similarity of their view representations, aligning them with the primary view. (iii) In the fine-grained alignment stage, we further enhance the discriminative power of the view representations and the model's ability to differentiate between clusters. Compared to existing models, our method effectively leverages information between unaligned samples and enhances model generalization by constructing negative instance pairs. Clustering experiments on several popular multi-view datasets demonstrate the effectiveness and superiority of our method. Our code is publicly available at https://github.com/WenB777/PMVCR.git.
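The re-alignment stage (ii) amounts to a matching problem over view representations. A sketch using Hungarian matching follows, assuming cosine similarity as the matching score; the paper's matching rule and the name `realign_to_primary` may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def realign_to_primary(z_primary, z_unaligned):
    """One-to-one match of unaligned secondary-view instances to the
    primary view by representation similarity.
    Returns perm with perm[i] = secondary index aligned to primary i."""
    a = z_primary / np.linalg.norm(z_primary, axis=1, keepdims=True)
    b = z_unaligned / np.linalg.norm(z_unaligned, axis=1, keepdims=True)
    sim = a @ b.T                           # cosine similarity matrix
    _, perm = linear_sum_assignment(-sim)   # maximize total similarity
    return perm
```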
Keyword :
Contrastive learning; Multi-view clustering; Partial view-aligned multi-view learning
Cite:
GB/T 7714: Yan, Wenbiao, Zhu, Jihua, Chen, Jinqian, et al. Partially multi-view clustering via re-alignment [J]. NEURAL NETWORKS, 2025, 182.
MLA: Yan, Wenbiao, et al. "Partially multi-view clustering via re-alignment." NEURAL NETWORKS 182 (2025).
APA: Yan, Wenbiao, Zhu, Jihua, Chen, Jinqian, Cheng, Haozhe, Bai, Shunshun, Duan, Liang, et al. Partially multi-view clustering via re-alignment. NEURAL NETWORKS, 2025, 182.
Abstract :
Zero-shot Natural Language Video Localization (NLVL) aims to automatically generate moments and corresponding pseudo queries from raw videos for the training of the localization model without any manual annotations. Existing approaches typically produce pseudo queries as simple words, which overlook the complexity of queries in real-world scenarios. Considering the powerful text modeling capabilities of large language models (LLMs), leveraging LLMs to generate complete queries that are closer to human descriptions is a potential solution. However, directly integrating LLMs into existing approaches introduces several issues, including insensitivity, isolation, and lack of regulation, which prevent the full exploitation of LLMs to enhance zero-shot NLVL performance. To address these issues, we propose BTDP, an innovative framework for Boundary-aware Temporal Dynamic Pseudo-supervision pairs generation. Our method contains two crucial operations: 1) Boundary Segmentation, which identifies both visual boundaries and semantic boundaries to generate the atomic segments and activity descriptions, tackling the issue of insensitivity. 2) Context Aggregation, which employs the LLMs with a self-evaluation process to aggregate and summarize global video information for optimized pseudo moment-query pairs, tackling the issues of isolation and lack of regulation. Comprehensive experimental results on the Charades-STA and ActivityNet Captions datasets demonstrate the effectiveness of our BTDP method.
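The visual half of the Boundary Segmentation operation can be sketched as a similarity-drop detector over frame features. This is one plausible detector under stated assumptions (cosine similarity, a fixed threshold, the name `visual_boundaries`), not the paper's exact procedure.

```python
import numpy as np

def visual_boundaries(frame_feats, thresh=0.8):
    """Mark a boundary wherever consecutive frame features diverge,
    i.e. their cosine similarity drops below thresh.
    frame_feats: (T, d) per-frame visual features."""
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    sim = (f[:-1] * f[1:]).sum(axis=1)       # cosine sim of adjacent frames
    return np.where(sim < thresh)[0] + 1     # indices starting new atomic segments
```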
Keyword :
Semantic Segmentation
Cite:
GB/T 7714: Deng, Xiongwen, Tang, Haoyu, Jiang, Han, et al. Boundary-Aware Temporal Dynamic Pseudo-Supervision Pairs Generation for Zero-Shot Natural Language Video Localization [C]. 2025: 2717-2725.
MLA: Deng, Xiongwen, et al. "Boundary-Aware Temporal Dynamic Pseudo-Supervision Pairs Generation for Zero-Shot Natural Language Video Localization." (2025): 2717-2725.
APA: Deng, Xiongwen, Tang, Haoyu, Jiang, Han, Zheng, Qinghai, Zhu, Jihua. Boundary-Aware Temporal Dynamic Pseudo-Supervision Pairs Generation for Zero-Shot Natural Language Video Localization. (2025): 2717-2725.
Abstract :
Multi-view clustering has attracted widespread attention because it can improve clustering performance by integrating information from various views of samples. However, many existing methods either neglect graph information entirely or only partially incorporate it, leading to information loss and non-comprehensive representations. Besides, they usually exploit graph information by fixing the number of neighbors based on prior knowledge, which limits the exploration of the graph information contained in the data. To address these issues, we propose a novel method, termed Graph-Driven deep Multi-View Clustering with self-paced learning (GDMVC), which integrates both feature information and graph information to better explore the information within the data. Additionally, based on the idea of self-paced learning, the method gradually increases the number of neighbors and updates the similarity matrix, progressively providing more graph information to guide representation learning. In this way, we avoid the issues associated with a fixed number of neighbors and ensure a thorough exploration of the graph information contained in the original data. Furthermore, the method not only ensures consistency among views but also leverages graph information to further enhance the unified representation, aiming to obtain more separable cluster structures. Extensive experiments on real datasets demonstrate its effectiveness for multi-view clustering. Our code will be released at https://github.com/yff-java/GDMVC/.
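A sketch of the self-paced neighbor schedule: rebuild a kNN similarity matrix with a growing k so that more graph information flows in at each round. The Gaussian affinity, scaling, and names here are assumptions, not GDMVC's exact construction.

```python
import numpy as np

def knn_similarity(Z, k):
    """Row-normalized kNN similarity over the current representations Z."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-neighbors
    idx = np.argpartition(d2, k, axis=1)[:, :k]   # k nearest neighbors per row
    S = np.zeros_like(d2)
    rows = np.arange(Z.shape[0])[:, None]
    scale = d2[rows, idx].mean() + 1e-12
    S[rows, idx] = np.exp(-d2[rows, idx] / scale)  # affinity on kNN edges only
    return S / S.sum(axis=1, keepdims=True)

# self-paced schedule: feed the model more graph information each round
# for k in range(k_start, k_max + 1, step):
#     S = knn_similarity(Z, k)   # then update representations Z guided by S
```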
Keyword :
Graph information; Multi-View Clustering; Representation learning; Self-paced learning
Cite:
GB/T 7714: Bai, Shunshun, Ren, Xiaojin, Zheng, Qinghai, et al. Graph-Driven deep Multi-View Clustering with self-paced learning [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 296.
MLA: Bai, Shunshun, et al. "Graph-Driven deep Multi-View Clustering with self-paced learning." KNOWLEDGE-BASED SYSTEMS 296 (2024).
APA: Bai, Shunshun, Ren, Xiaojin, Zheng, Qinghai, Zhu, Jihua. Graph-Driven deep Multi-View Clustering with self-paced learning. KNOWLEDGE-BASED SYSTEMS, 2024, 296.
Abstract :
Federated learning encounters substantial challenges with heterogeneous data, leading to performance degradation and convergence issues. While considerable progress has been achieved in mitigating such an impact, the reliability aspect of federated models has been largely disregarded. In this study, we conduct extensive experiments to investigate the reliability of both generic and personalized federated models. Our exploration uncovers a significant finding: federated models exhibit unreliability when faced with heterogeneous data, demonstrating poor calibration on in-distribution test data and low uncertainty levels on out-of-distribution data. This unreliability is primarily attributed to the presence of biased projection heads, which introduce miscalibration into the federated models. Inspired by this observation, we propose the 'Assembled Projection Heads' (APH) method for enhancing the reliability of federated models. By treating the existing projection head parameters as priors, APH randomly samples multiple initialized parameters of projection heads from the prior and further performs targeted fine-tuning on locally available data under varying learning rates. Such a head ensemble introduces parameter diversity into the deterministic model, eliminating the bias and producing reliable predictions via head averaging. We evaluate the effectiveness of the proposed APH method across three prominent federated benchmarks. Experimental results validate the efficacy of APH in model calibration and uncertainty estimation. Notably, APH can be seamlessly integrated into various federated approaches but requires less than 30% additional computation cost for 100× inferences within large models.
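A minimal sketch of head assembling and head averaging, assuming Gaussian perturbations around the trained head as the sampling step; the targeted fine-tuning at varying learning rates is omitted, and all names (`assemble_heads`, `ensemble_predict`) are illustrative rather than the paper's code.

```python
import copy
import torch

def assemble_heads(trained_head, n_heads=5, noise_std=0.01):
    """Sample projection-head parameters around the trained head,
    treated here as the prior (per-head local fine-tuning omitted)."""
    heads = []
    for _ in range(n_heads):
        h = copy.deepcopy(trained_head)
        with torch.no_grad():
            for p in h.parameters():
                p.add_(noise_std * torch.randn_like(p))  # perturb the prior
        heads.append(h)
    return heads

def ensemble_predict(backbone, heads, x):
    """Average softmax outputs over the head ensemble (head averaging).
    The backbone runs once; only the cheap heads are repeated."""
    z = backbone(x)
    probs = torch.stack([torch.softmax(h(z), dim=-1) for h in heads])
    return probs.mean(dim=0)
```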
Keyword :
Artificial intelligence; Reliability
Cite:
GB/T 7714: Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, et al. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models [C]. 2024: 11329-11337.
MLA: Chen, Jinqian, et al. "Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models." (2024): 11329-11337.
APA: Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, Li, Zhongyu, Tian, Zhiqiang. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models. (2024): 11329-11337.
GB/T 7714: Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, et al. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024: 11329-11337.
MLA: Chen, Jinqian, et al. "Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models." THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10 (2024): 11329-11337.
APA: Chen, Jinqian, Zhu, Jihua, Zheng, Qinghai, Li, Zhongyu, Tian, Zhiqiang. Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024, 11329-11337.
Abstract :
Multi-view Representation Learning (MRL) has recently attracted widespread attention because it can integrate information from diverse data sources to achieve better performance. However, existing MRL methods still have two issues: (1) They typically perform various consistency objectives within the feature space, which might discard complementary information contained in each view. (2) Some methods only focus on handling inter-view relationships while ignoring inter-sample relationships that are also valuable for downstream tasks. To address these issues, we propose a novel Multi-view representation learning method with Dual-label Collaborative Guidance (MDCG). Specifically, we fully excavate and utilize valuable semantic and graph information hidden in multi-view data to collaboratively guide the learning process of MRL. By learning consistent semantic labels from distinct views, our method enhances intrinsic connections across views while preserving view-specific information, which contributes to learning the consistent and complementary unified representation. Moreover, we integrate similarity matrices of multiple views to construct graph labels that indicate inter-sample relationships. With the idea of self-supervised contrastive learning, graph structure information implied in graph labels is effectively captured by the unified representation, thus enhancing its discriminability. Extensive experiments on diverse real-world datasets demonstrate the effectiveness and superiority of MDCG compared with nine state-of-the-art methods. Our code will be available at https://github.com/Bin1Chen/MDCG.
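The graph-label construction can be sketched as fusing per-view similarity matrices and thresholding the result into pairwise labels; the mean fusion, threshold value, and the name `graph_labels` are assumptions, not MDCG's exact recipe.

```python
import numpy as np

def graph_labels(view_similarities, thresh=0.5):
    """Fuse per-view similarity matrices into binary graph labels that
    mark which sample pairs count as positives in a contrastive loss.
    view_similarities: (V, n, n) stack of per-view similarity matrices."""
    fused = np.mean(view_similarities, axis=0)   # integrate all views
    labels = (fused > thresh).astype(float)      # 1 = related pair, 0 = not
    np.fill_diagonal(labels, 1.0)                # each sample relates to itself
    return labels
```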
Keyword :
Contrastive learning; Graph information; Multi-view representation learning; Semantic information
Cite:
GB/T 7714: Chen, Bin, Ren, Xiaojin, Bai, Shunshun, et al. Multi-view representation learning with dual-label collaborative guidance [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 305.
MLA: Chen, Bin, et al. "Multi-view representation learning with dual-label collaborative guidance." KNOWLEDGE-BASED SYSTEMS 305 (2024).
APA: Chen, Bin, Ren, Xiaojin, Bai, Shunshun, Chen, Ziyuan, Zheng, Qinghai, Zhu, Jihua. Multi-view representation learning with dual-label collaborative guidance. KNOWLEDGE-BASED SYSTEMS, 2024, 305.
Abstract :
Recently, multi-view clustering methods have garnered considerable attention and have been applied in various domains. However, in practical scenarios, some samples may lack specific views, giving rise to the challenge of incomplete multi-view clustering. While some methods focus on completing missing data, incorrect completion can negatively affect representation learning. Moreover, separating completion and representation learning prevents the attainment of an optimal representation. Other methods eschew completion but concentrate solely on either feature information or graph information, thus failing to achieve comprehensive representations. To address these challenges, we propose a graph-guided, imputation-free method for incomplete multi-view clustering. Unlike completion-based methods, our approach aims to maximize the utilization of existing information by simultaneously considering feature and graph information. This is realized through a feature learning component and a graph learning component. The former introduces a degradation network that reconstructs, from a unified representation, view-specific representations close to the available samples, seamlessly integrating feature information into the unified representation. Leveraging the semi-supervised idea, the latter utilizes reliable graph information from available samples to guide the learning of the unified representation. These two components collaborate to acquire a comprehensive unified representation for multi-view clustering. Extensive experiments conducted on real datasets demonstrate the effectiveness and competitiveness of the proposed method when compared with other state-of-the-art methods. Our code will be released at https://github.com/yff-java/GIMVC/.
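A sketch of the imputation-free reconstruction idea: the degradation decoders are scored only on available samples via masks, so missing views never need filling. Shapes, the squared-error choice, and the name `degradation_loss` are assumptions, not the paper's implementation.

```python
import torch

def degradation_loss(unified, degraders, views, masks):
    """Reconstruct each view from the unified representation, scoring
    only the available samples (mask == 1), so nothing is imputed.
    unified: (n, d); degraders: one decoder module per view;
    views: list of (n, d_v) tensors; masks: list of (n,) 0/1 tensors."""
    total = unified.new_zeros(())
    for dec, x, m in zip(degraders, views, masks):
        recon = dec(unified)                        # view-specific reconstruction
        per_sample = ((recon - x) ** 2).mean(dim=1)
        # masked mean: missing samples contribute nothing to the loss
        total = total + (per_sample * m).sum() / m.sum().clamp(min=1.0)
    return total
```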
Keyword :
Graph information; Incomplete multi-view clustering; Representation learning
Cite:
GB/T 7714: Bai, Shunshun, Zheng, Qinghai, Ren, Xiaojin, et al. Graph-guided imputation-free incomplete multi-view clustering [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 258.
MLA: Bai, Shunshun, et al. "Graph-guided imputation-free incomplete multi-view clustering." EXPERT SYSTEMS WITH APPLICATIONS 258 (2024).
APA: Bai, Shunshun, Zheng, Qinghai, Ren, Xiaojin, Zhu, Jihua. Graph-guided imputation-free incomplete multi-view clustering. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 258.