Abstract:
Most learning models for knowledge graph representation take only the structural knowledge between entities and relations into account. Consequently, the capability of these models is limited by the knowledge already stored, and their performance on knowledge base completion is unstable. Existing knowledge representation methods that incorporate external information mostly model a single kind of external modal information, which restricts their scope of application. Therefore, a knowledge representation learning model, Conv-AT, is proposed. Firstly, two external modalities, text and images, are considered, and three schemes for fusing external knowledge with entities are introduced to obtain multimodal entity representations. Secondly, the representational power of the convolution is enhanced, and both the quality of the knowledge representation and the completion ability of the model are improved, by combining a channel attention module with a spatial attention module. Link prediction and triple classification experiments on public multimodal datasets show that the proposed method outperforms other methods. © 2021, Science Press. All rights reserved.
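The record gives no implementation details, so the following is only a minimal sketch of the kind of design the abstract describes: a convolution whose output is reweighted by a channel attention module and then a spatial attention module (a CBAM-style block). All module names and hyperparameters here are assumptions for illustration, not the authors' Conv-AT code.

```python
# Hypothetical sketch of a conv block with channel + spatial attention,
# as described at a high level in the abstract (not the paper's code).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, h, w); pool over spatial dims, weight channels.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool over the channel dim, then learn a per-position weight map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class AttentiveConvBlock(nn.Module):
    """Convolution followed by channel attention, then spatial attention."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(torch.relu(self.conv(x))))
```

In such a design, the attention modules cost little relative to the convolution itself but let the network emphasize informative feature channels and spatial positions, which is the stated motivation for adding them in Conv-AT.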
Source: Pattern Recognition and Artificial Intelligence
ISSN: 1003-6059
Year: 2021
Issue: 1
Volume: 34
Page: 33-43