Abstract:
To address a key challenge in fake news detection, namely that the intrinsic heterogeneity of multimodal data makes effective semantic representation difficult, this paper proposes a multimodal knowledge representation method for fake news detection. First, for the image data in fake news, visual features are extracted by slicing each image into multiple blocks and mapping the blocks to visual modal features through a linear projection layer; this simplifies feature extraction and reduces computational cost, which helps improve recognition performance. Second, to meet practical detection needs, a topic-word-based representation method is investigated for the long text data in fake news. Finally, the multimodal representation of each fake news item is optimized by establishing a connection between the visual and textual modalities and feeding the result into a BiLSTM-Attention network that fuses the multimodal features. The experiments use the same fake news data as the EANN model and apply four classical classification methods to verify the effect of the knowledge representation, comparing it with the ViLT fusion model, which is not optimized for long text. The results show that the detection accuracy of the proposed multimodal representation is 7.4% higher than the EANN model's and 9.3% higher than the ViLT representation's.
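The pipeline the abstract describes (patch slicing plus a linear projection for images, a topic-word representation for long text, and BiLSTM-Attention fusion) can be illustrated with a minimal PyTorch sketch. All names and dimensions below (MultimodalFakeNewsSketch, patch=16, d_model=256, hidden=128, vocab=30000) are illustrative assumptions, not the paper's values; in particular, the plain token embedding stands in for the paper's topic-word long-text encoder, whose details this record does not give.

```python
import torch
import torch.nn as nn

class MultimodalFakeNewsSketch(nn.Module):
    """Sketch of the described pipeline: image patches -> linear projection,
    text embedding, BiLSTM + attention fusion, binary fake/real classifier.
    All dimensions are illustrative assumptions, not the paper's values."""

    def __init__(self, patch=16, d_model=256, hidden=128, vocab=30000):
        super().__init__()
        self.patch = patch
        # Visual branch: each sliced image block is mapped by one linear layer
        self.visual_proj = nn.Linear(3 * patch * patch, d_model)
        # Text branch: embeddings over a topic-word-filtered token sequence
        self.text_emb = nn.Embedding(vocab, d_model)
        # Fusion: BiLSTM over the concatenated visual + text sequence
        self.bilstm = nn.LSTM(d_model, hidden, batch_first=True,
                              bidirectional=True)
        # Additive attention scores over BiLSTM outputs
        self.att = nn.Linear(2 * hidden, 1)
        self.cls = nn.Linear(2 * hidden, 2)  # fake / real logits

    def forward(self, image, token_ids):
        # image: (B, 3, H, W); token_ids: (B, T) topic-word indices
        B, C, H, W = image.shape
        p = self.patch
        # Slice into non-overlapping p x p blocks and flatten each block
        patches = image.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        vis = self.visual_proj(patches)                   # (B, Nv, d_model)
        txt = self.text_emb(token_ids)                    # (B, T, d_model)
        seq = torch.cat([vis, txt], dim=1)                # joint modality sequence
        out, _ = self.bilstm(seq)                         # (B, Nv+T, 2*hidden)
        w = torch.softmax(self.att(out), dim=1)           # attention weights
        fused = (w * out).sum(dim=1)                      # attention-weighted pooling
        return self.cls(fused)
```

As a usage check, a call like model(torch.randn(4, 3, 224, 224), torch.randint(0, 30000, (4, 64))) returns a (4, 2) tensor of fake/real logits. Concatenating the projected patches and text embeddings into one sequence before the BiLSTM is one simple way to realize the "connection between the two modalities" the abstract mentions.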
Source:
2024 4TH INTERNATIONAL CONFERENCE ON COMPUTER, CONTROL AND ROBOTICS, ICCCR 2024
Year: 2024
Page: 360-364
ESI Highly Cited Papers on the List: 0