Abstract:
As a widely used method in signal processing, Principal Component Analysis (PCA) performs both the compression and the recovery of high-dimensional data by leveraging linear transformations. Regarding the robustness of PCA, how to discriminate between correct samples and outliers is a crucial and challenging issue. In this paper, we present a general model that conducts PCA via a non-decreasing concave regularized minimization, termed PCA-NCRM for short. Unlike most existing PCA methods, which learn the linear transformations by minimizing the recovery errors between the recovered data and the original data in the least-squares sense, our model adopts a monotonically non-decreasing concave function to enhance the model's ability to distinguish correct samples from outliers. Specifically, PCA-NCRM increases the attention paid to samples with smaller recovery errors while simultaneously diminishing the attention paid to samples with larger recovery errors. The proposed minimization problem can be efficiently solved by an iterative re-weighting optimization. Experimental results on several datasets show the effectiveness of our model.
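The iterative re-weighting scheme mentioned in the abstract can be illustrated with a short sketch. The snippet below is an assumed reconstruction, not the paper's actual algorithm: it uses a hypothetical log(1 + e) concave surrogate, whose derivative 1/(1 + e) supplies per-sample weights, and alternates a weighted eigen-decomposition with a weight update so that samples with larger recovery errors receive less attention.

import numpy as np

def pca_ncrm_sketch(X, k, n_iter=20, eps=1e-8):
    """Minimal sketch of PCA via iterative re-weighting.

    X : (n_samples, n_features) data matrix, assumed already centered.
    k : number of principal components to keep.

    The concave surrogate f(e) = log(1 + e) and its derivative
    w = 1 / (1 + e) are illustrative choices only; the paper's actual
    regularizer and update rules may differ.
    """
    n_samples = X.shape[0]
    w = np.ones(n_samples)                 # per-sample weights
    U = None
    for _ in range(n_iter):
        # Weighted covariance: low-weight (high-error) samples count less.
        C = (X * w[:, None]).T @ X / max(w.sum(), eps)
        # Top-k eigenvectors (eigh returns eigenvalues in ascending order).
        _, vecs = np.linalg.eigh(C)
        U = vecs[:, -k:]                   # (n_features, k) basis
        # Squared recovery error of each sample under the current basis.
        residual = X - X @ U @ U.T
        err = np.sum(residual ** 2, axis=1)
        # Re-weight: larger recovery error -> smaller weight.
        w = 1.0 / (1.0 + err)
    return U, w

# Toy usage: fit a 3-dimensional subspace to centered random data.
X = np.random.randn(200, 10)
X -= X.mean(axis=0)
U, w = pca_ncrm_sketch(X, k=3)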
Source: IEEE SIGNAL PROCESSING LETTERS
ISSN: 1070-9908
Year: 2025
Volume: 32
Page: 486-490
Impact Factor: 3.200 (JCR@2023)