Abstract:
Multi-view Representation Learning (MRL) has recently attracted widespread attention because it can integrate information from diverse data sources to achieve better performance. However, existing MRL methods still have two issues: (1) they typically enforce various consistency objectives in the feature space, which may discard complementary information contained in each view; (2) some methods focus only on inter-view relationships while ignoring inter-sample relationships that are also valuable for downstream tasks. To address these issues, we propose a novel Multi-view representation learning method with Dual-label Collaborative Guidance (MDCG). Specifically, we fully exploit the semantic and graph information hidden in multi-view data to collaboratively guide the learning process of MRL. By learning consistent semantic labels from distinct views, our method strengthens intrinsic connections across views while preserving view-specific information, which contributes to learning a unified representation that is both consistent and complementary. Moreover, we integrate the similarity matrices of multiple views to construct graph labels that indicate inter-sample relationships. Following the idea of self-supervised contrastive learning, the graph structure information encoded in the graph labels is effectively captured by the unified representation, thus enhancing its discriminability. Extensive experiments on diverse real-world datasets demonstrate the effectiveness and superiority of MDCG compared with nine state-of-the-art methods. Our code will be available at https://github.com/Bin1Chen/MDCG.
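As a rough illustration only (not taken from the paper; the function names, the averaging-and-thresholding fusion rule, and the loss form below are assumptions), graph labels could be built by fusing per-view similarity matrices and then used to select positive pairs in a contrastive loss over the unified representation:

    # Illustrative sketch, NOT the authors' implementation: names and design
    # choices (fuse_graph_labels, threshold, temperature) are assumptions.
    import torch
    import torch.nn.functional as F

    def fuse_graph_labels(similarity_matrices, threshold=0.5):
        """Average per-view similarity matrices and binarize them into a
        graph label matrix marking which sample pairs count as related."""
        fused = torch.stack(similarity_matrices).mean(dim=0)
        graph_labels = (fused >= threshold).float()
        graph_labels.fill_diagonal_(1.0)  # each sample is related to itself
        return graph_labels

    def graph_contrastive_loss(unified_repr, graph_labels, temperature=0.5):
        """Supervised-contrastive-style loss: pairs marked 1 in graph_labels
        are pulled together in the unified representation space."""
        z = F.normalize(unified_repr, dim=1)
        logits = z @ z.t() / temperature
        mask = torch.eye(z.size(0), dtype=torch.bool)
        logits = logits.masked_fill(mask, -1e9)      # drop self-comparisons
        log_prob = F.log_softmax(logits, dim=1)
        pos = graph_labels.masked_fill(mask, 0.0)    # positive-pair indicator
        # mean log-probability over each anchor's positive pairs
        loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
        return loss.mean()

    # Toy usage: two views, 8 samples
    views = [torch.randn(8, 16), torch.randn(8, 16)]
    sims = [F.cosine_similarity(v.unsqueeze(1), v.unsqueeze(0), dim=-1) for v in views]
    labels = fuse_graph_labels(sims)
    unified = torch.randn(8, 32)
    print(graph_contrastive_loss(unified, labels))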
Source:
KNOWLEDGE-BASED SYSTEMS
ISSN: 0950-7051
Year: 2024
Volume: 305
Impact Factor: 7.200 (JCR@2023)