Abstract:
Pretrained language models (PLMs) have shown remarkable performance on question answering (QA) tasks, but they usually require fine-tuning (FT) that depends on a substantial quantity of QA pairs. Therefore, improving the performance of PLMs in scenarios with only a small number of training examples, also known as the few-shot setting, is of great practical significance. Current mitigation strategies for the few-shot QA task largely rely on pretraining a QA task-specific language model from scratch, overlooking the potential of foundational PLMs to generate QA pairs, particularly in the few-shot setting. To address this issue, we propose a prompt-based QA data augmentation method aimed at automating the creation of high-quality QA pairs. It employs the PFT method to adapt the question generation process of PLMs to the few-shot setting. Additionally, we introduce a dynamic text filling training strategy. This strategy simulates the progressive nature of human learning, thereby alleviating overfitting of PLMs in the few-shot setting and enhancing their reasoning capability to tackle complex questions. Extensive experiments demonstrate that the proposed method outperforms existing approaches across various few-shot configurations.
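As a rough illustration of the prompt-based QA pair generation described in the abstract, the following Python sketch prompts an off-the-shelf seq2seq PLM (via Hugging Face Transformers) to produce a question for a given context and answer. The model name and prompt template are illustrative assumptions, not the paper's actual setup.

# Minimal sketch (assumptions, not the paper's method): generate a question
# for a given context/answer pair with a generic instruction-tuned PLM.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"  # assumed model; any seq2seq PLM could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "The Amazon rainforest spans nine countries in South America."
answer = "nine"

# Hypothetical prompt template for question generation; the paper's template may differ.
prompt = f"Generate a question whose answer is '{answer}' given the context: {context}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
question = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(question)  # e.g., "How many countries does the Amazon rainforest span?"

The generated question, paired with the known answer and context, would form one synthetic QA training example of the kind such an augmentation pipeline produces.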
Source:
IEEE Transactions on Artificial Intelligence
Year: 2025
Issue: 3
Volume: 6
Page: 589-603
ESI Highly Cited Papers on the List: 0