Abstract:
This paper presents a self-training word embedding text classification model based on knowledge graph expansion. Current mixed word embedding methods depend heavily on the fastText pre-trained model, and words that carry rich semantic information may still be left unmapped. First, we propose a method for extracting missing nouns based on shape-near-word filtering. Second, we design a self-training word embedding method based on a knowledge graph, which is mixed with pre-trained word embeddings to obtain high-quality, semantically rich mixed word vectors. Third, we design a GRU model based on the improved mixed word embeddings to improve the quality of text classification. Experiments conducted on multiple text classification datasets demonstrate that our methods effectively improve text classification accuracy. © 2019 IEEE.
Year: 2019
Page: 1618-1623
Language: English