Abstract:
Deep clustering aims to learn feature representations of the input data that benefit the clustering task. Although existing deep clustering methods have achieved encouraging performance in many research fields, some shortcomings remain, such as the lack of local structure preservation and the neglect of the sparse characteristics of the input data. To this end, we propose a deep stacked sparse embedded clustering method in this paper, which considers both local structure preservation and the sparse property of the inputs. The proposed network is trained to learn a feature representation of the input data under the guidance of a clustering loss and a reconstruction loss, where the reconstruction loss prevents corruption of the feature space and guarantees local structure preservation. In addition, sparsity parameters are added to the encoder to avoid learning meaningless features. By simultaneously minimizing the reconstruction loss and the clustering loss, the proposed network jointly learns clustering-oriented features and optimizes the assignment of cluster labels. We then conduct extensive comparative experiments involving six clustering methods and four publicly available data sets. Clustering accuracy, adjusted Rand index, and normalized mutual information are used as the three evaluation metrics for comparison. Comprehensive experiments validate the effectiveness of introducing the sparse property and preserving local structure in our method. © 2019 IEEE.
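The abstract describes a joint objective combining a reconstruction loss, a clustering loss, and a sparsity constraint on the encoder. The following is a minimal sketch of such an objective, not the authors' code: it assumes a DEC-style Student's t soft assignment for the clustering loss, a KL-based sparsity penalty on the encoder output, and illustrative layer sizes and loss weights (gamma, beta, rho).

```python
# Minimal sketch (assumed, not the authors' implementation) of a sparse
# embedded clustering objective: reconstruction + clustering + sparsity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseEmbeddedClustering(nn.Module):
    def __init__(self, in_dim=784, embed_dim=10, n_clusters=10, alpha=1.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, embed_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, in_dim),
        )
        # Learnable cluster centers in the embedded space.
        self.centers = nn.Parameter(torch.randn(n_clusters, embed_dim))
        self.alpha = alpha  # degrees of freedom of the Student's t kernel

    def soft_assign(self, z):
        # Student's t similarity between embeddings and cluster centers (DEC-style assumption).
        dist2 = torch.cdist(z, self.centers).pow(2)
        q = (1.0 + dist2 / self.alpha).pow(-(self.alpha + 1) / 2)
        return q / q.sum(dim=1, keepdim=True)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.soft_assign(z)

def target_distribution(q):
    # Sharpened target distribution used to self-train the cluster assignments.
    w = q.pow(2) / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

def sparsity_penalty(z, rho=0.05, eps=1e-8):
    # KL(rho || rho_hat) on mean encoder activations, pushing them toward a small target rho.
    rho_hat = torch.sigmoid(z).mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def total_loss(model, x, gamma=0.1, beta=1e-3):
    z, x_hat, q = model(x)
    p = target_distribution(q).detach()
    recon = F.mse_loss(x_hat, x)                          # reconstruction loss (local structure)
    clust = F.kl_div(q.log(), p, reduction="batchmean")   # clustering loss
    return recon + gamma * clust + beta * sparsity_penalty(z)
```

In this sketch, minimizing `total_loss` updates the encoder, decoder, and cluster centers together, which mirrors the joint optimization of clustering-oriented features and cluster assignments described in the abstract; the specific weights and sparsity formulation are assumptions for illustration.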
Year: 2019
Page: 1532-1538
Language: English
Cited Count:
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0
30 Days PV: 3