Indexed by:
Abstract:
Due to the heterogeneity gap in multi-view data, researchers have been attempting to learn a co-latent representation from such data to bridge this gap. However, multi-view representation learning still confronts two challenges: (1) it is hard to simultaneously consider the performance of downstream tasks and the interpretability and transparency of the network; (2) it fails to learn representations that accurately describe the class boundaries of downstream tasks. To overcome these limitations, we propose an interpretable representation learning framework, named the interpretable multi-view proximity representation learning network. On the one hand, the proposed network is derived from an explicitly designed optimization objective, which enables it to learn semantic co-latent representations while maintaining the interpretability and transparency of the network at the design level. On the other hand, the designed multi-view proximity representation learning objective encourages the learned co-latent representations to form intuitive class boundaries by increasing the inter-class distance and decreasing the intra-class distance. Driven by a flexible downstream-task loss, the learned co-latent representation adapts to various multi-view scenarios and is shown to be effective in experiments. As a result, this work provides a feasible solution toward a generalized multi-view representation learning framework and is expected to accelerate research and exploration in this field. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
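Note: the record does not include the paper's actual objective function, but the abstract describes a proximity-style criterion (pull same-class co-latent embeddings together, push different-class ones apart). Purely as an illustrative sketch of that general idea, and not the authors' implementation, the PyTorch snippet below shows one common way such a loss can be written; the function name proximity_loss, the Euclidean distance, and the margin hyperparameter are all assumptions rather than details from the paper.

```python
# Illustrative sketch only (not the paper's code): a generic proximity-style loss
# that shrinks intra-class distances and enlarges inter-class distances on a
# batch of co-latent representations.
import torch


def proximity_loss(z: torch.Tensor, labels: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """z: (N, d) co-latent representations; labels: (N,) integer class ids."""
    dists = torch.cdist(z, z, p=2)                          # (N, N) pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))      # True where the two samples share a class
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)

    intra_pairs = dists[same & ~eye]                        # same-class, off-diagonal pairs
    inter_pairs = dists[~same]                              # different-class pairs

    # Penalize large intra-class distances and inter-class distances below the margin;
    # fall back to zero if a batch happens to contain no pairs of one kind.
    intra = intra_pairs.pow(2).mean() if intra_pairs.numel() else z.new_zeros(())
    inter = (margin - inter_pairs).clamp(min=0).pow(2).mean() if inter_pairs.numel() else z.new_zeros(())
    return intra + inter


# Toy usage: 6 samples in a 4-dimensional co-latent space, 2 classes.
z = torch.randn(6, 4)
y = torch.tensor([0, 0, 0, 1, 1, 1])
print(proximity_loss(z, y).item())
```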
Keyword:
Reprint Author's Address:
Email:
Source:
Neural Computing and Applications
ISSN: 0941-0643
Year: 2024
Issue: 24
Volume: 36
Page: 15027-15044
Impact Factor: 4.500 (JCR@2023)
Cited Count:
SCOPUS Cited Count:
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
Affiliated Colleges: