
Author:

Qiu, J.-Q. [1] | Zhang, C.-Y. [2] (Scholars: 张春阳) | Chen, C.L.P. [3]

Indexed by:

Scopus

Abstract:

Pre-trained language models (PLMs) have shown remarkable performance on question answering (QA) tasks, but they usually require fine-tuning that depends on a substantial quantity of QA pairs. Therefore, improving the performance of PLMs in scenarios with only a small number of training examples, also known as a few-shot setting, is of great practical significance. Current mitigation strategies for the few-shot QA task largely rely on pre-training a QA task-specific language model from scratch, overlooking the potential of foundational PLMs to generate QA pairs, particularly in the few-shot setting. To address this issue, we propose a prompt-based QA data augmentation method aimed at automating the creation of high-quality QA pairs. It employs the prompt-based fine-tuning method, adapting the question generation process of PLMs to the few-shot setting. Additionally, we introduce a dynamic text filling training strategy. This strategy simulates the progressive human learning process, thereby alleviating overfitting of PLMs in the few-shot setting and enhancing their reasoning capability to tackle complex questions. Extensive experiments demonstrate that the proposed method outperforms existing approaches across various few-shot configurations.  © 2020 IEEE.
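
As a rough illustration of the core idea in the abstract, the Python sketch below shows what prompt-based QA-pair generation can look like: a passage and a candidate answer span are wrapped in a textual prompt, and a generic seq2seq PLM decodes a question conditioned on it. The model name (t5-small), the prompt template, and the decoding settings are assumptions made for this sketch, not the paper's actual configuration; the paper's prompt-based fine-tuning and dynamic text filling training strategy are not reproduced here.

# A minimal sketch of prompt-based QA-pair generation with a seq2seq PLM.
# Assumptions: the "t5-small" backbone, the prompt template, and the decoding
# settings are illustrative only; the paper's actual choices are not given here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-small"  # hypothetical backbone chosen for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_question(context: str, answer: str) -> str:
    # Wrap the passage and a candidate answer span in a textual prompt,
    # then let the PLM decode a question conditioned on that prompt.
    prompt = f"generate question: answer: {answer} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    ctx = "The Amazon rainforest produces roughly 20 percent of Earth's oxygen."
    # The decoded question plus the answer span form one synthetic QA pair.
    print(generate_question(ctx, "20 percent"))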

Keyword:

data augmentation; few-shot learning; prompt learning; question answering

Affiliations:

  • [ 1 ] [Qiu J.-Q.] Fuzhou University, College of Computer and Data Science, Fuzhou, 350108, China
  • [ 2 ] [Zhang C.-Y.] Fuzhou University, College of Computer and Data Science, Fuzhou, 350108, China
  • [ 3 ] [Chen C.L.P.] South China University of Technology, School of Computer Science and Engineering, Guangzhou, 510006, China

Reprint Author's Address:

Email:


Related Keywords:

Source:

IEEE Transactions on Artificial Intelligence

ISSN: 2691-4581

Year: 2024

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

