Abstract:
As text summarization research has developed, methods based on RNNs with the encoder-decoder architecture have gradually become mainstream. However, RNNs tend to forget earlier context, losing information from the original text and reducing the accuracy of the generated summaries. The Transformer model instead uses the self-attention mechanism to encode and decode historical information, so it learns contextual information better than RNNs. In this paper, a text summarization model based on the Transformer and switchable normalization is proposed; the model's accuracy is improved by optimizing the normalization layer. Compared with other models, the new model has a clear advantage in capturing word semantics and associations. Experimental results on the English Gigaword dataset show that the proposed model achieves high ROUGE scores and produces more readable summaries. © 2019 IEEE.
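The abstract describes replacing the Transformer's normalization layer with switchable normalization, but the record includes no code. Below is a minimal sketch of how such a layer could look, assuming the widely used switchable normalization formulation (Luo et al.), in which learned softmax weights blend instance-, layer-, and batch-norm statistics; the tensor shapes, class name, and hyperparameters here are illustrative assumptions, not the paper's published implementation.

```python
# Sketch of switchable normalization for Transformer activations (assumption:
# the standard Luo et al. formulation, not the indexed paper's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchableNorm(nn.Module):
    """Blend IN-, LN-, and BN-style statistics over a (batch, seq_len, d_model) tensor."""
    def __init__(self, d_model: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Affine parameters, as in standard LayerNorm.
        self.gamma = nn.Parameter(torch.ones(d_model))
        self.beta = nn.Parameter(torch.zeros(d_model))
        # Learnable mixing logits for the three mean / variance estimates.
        self.mean_logits = nn.Parameter(torch.zeros(3))
        self.var_logits = nn.Parameter(torch.zeros(3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # Instance-style stats: per sample, per channel, over the sequence.
        mu_in = x.mean(dim=1, keepdim=True)
        var_in = x.var(dim=1, keepdim=True, unbiased=False)
        # Layer-style stats: per token, over channels (what LayerNorm uses).
        mu_ln = x.mean(dim=-1, keepdim=True)
        var_ln = x.var(dim=-1, keepdim=True, unbiased=False)
        # Batch-style stats: over batch and sequence, per channel.
        mu_bn = x.mean(dim=(0, 1), keepdim=True)
        var_bn = x.var(dim=(0, 1), keepdim=True, unbiased=False)

        # Softmax weights let training choose the best mix of normalizers.
        w_mu = F.softmax(self.mean_logits, dim=0)
        w_var = F.softmax(self.var_logits, dim=0)
        mu = w_mu[0] * mu_in + w_mu[1] * mu_ln + w_mu[2] * mu_bn
        var = w_var[0] * var_in + w_var[1] * var_ln + w_var[2] * var_bn
        return self.gamma * (x - mu) / torch.sqrt(var + self.eps) + self.beta

# Usage: drop in wherever a Transformer block would normally apply LayerNorm.
x = torch.randn(8, 32, 512)          # (batch, seq_len, d_model)
print(SwitchableNorm(512)(x).shape)  # torch.Size([8, 32, 512])
```

At initialization the zero logits give each normalizer equal weight, so training is free to shift toward whichever statistics suit the task.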
Year: 2019
Page: 1606-1611
Language: English
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0