• Complex
  • Title
  • Keyword
  • Abstract
  • Scholars
  • Journal
  • ISSN
  • Conference
成果搜索

author:

Jiang, Yi (Jiang, Yi.) [1] | Yang, Yong (Yang, Yong.) [2] | Yin, Jiali (Yin, Jiali.) [3] | Liu, Xiaolei (Liu, Xiaolei.) [4] | Li, Jiliang (Li, Jiliang.) [5] | Wang, Wei (Wang, Wei.) [6] | Tian, Youliang (Tian, Youliang.) [7] | Wu, Yingcai (Wu, Yingcai.) [8] | Ji, Shouling (Ji, Shouling.) [9]

Indexed by:

EI

Abstract:

In recent years, large language models (LLMs) have emerged as a critical branch of deep learning network technology, achieving a series of breakthrough accomplishments in the field of natural language processing (NLP), and gaining widespread adoption. However, throughout their entire lifecycle, including pre-training, fine-tuning, and actual deployment, a variety of security threats and risks of privacy breaches have been discovered, drawing increasing attention from both the academic and industrial sectors. Navigating the development of the paradigm of using large language models to handle natural language processing tasks, as known as the pre-training and fine-tuning paradigm, the pre-training and prompt learning paradigm, and the pre-training and instruction-tuning paradigm, this article outlines conventional security threats against large language models, specifically representative studies on the three types of traditional adversarial attacks (adversarial example attack, backdoor attack and poisoning attack). It then summarizes some of the novel security threats revealed by recent research, followed by a discussion on the privacy risks of large language models and the progress in their research. The content aids researchers and deployers of large language models in identifying, preventing, and mitigating these threats and risks during the model design, training, and application processes, while also achieving a balance between model performance, security, and privacy protection. © 2025 Science Press. All rights reserved.

Keyword:

Anonymity Computational linguistics Data privacy Deep learning Learning systems Natural language processing systems Network security Security systems

Community:

  • [ 1 ] [Jiang, Yi]College of Computer Science and Technology, Zhejiang University, Hangzhou; 310007, China
  • [ 2 ] [Jiang, Yi]College of Renwu, Guizhou University, Guiyang; 550025, China
  • [ 3 ] [Yang, Yong]College of Computer Science and Technology, Zhejiang University, Hangzhou; 310007, China
  • [ 4 ] [Yin, Jiali]College of Computer Science and Big Data, Fuzhou University, Fuzhou; 350108, China
  • [ 5 ] [Liu, Xiaolei]Institute of Computer Application, China Academy of Engineering Physics, Sichuan, Mianyang; 621054, China
  • [ 6 ] [Li, Jiliang]School of Cyber Science and Engineering, Xi’an Jiaotong University, Xi’an; 710049, China
  • [ 7 ] [Wang, Wei]Beijing Key Laboratory of Security and Privacy in Intelligent Transportation (Beijing Jiaotong University), Beijing; 100091, China
  • [ 8 ] [Tian, Youliang]College of Computer Science and Technology, Guizhou University, Guiyang; 550025, China
  • [ 9 ] [Wu, Yingcai]College of Computer Science and Technology, Zhejiang University, Hangzhou; 310007, China
  • [ 10 ] [Ji, Shouling]College of Computer Science and Technology, Zhejiang University, Hangzhou; 310007, China

Reprint 's Address:

Email:

Show more details

Related Keywords:

Related Article:

Source :

Computer Research and Development

ISSN: 1000-1239

Year: 2025

Issue: 8

Volume: 62

Page: 1979-2018

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0 Unfold All

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 6

Affiliated Colleges:

Online/Total:1660/14066214
Address:FZU Library(No.2 Xuyuan Road, Fuzhou, Fujian, PRC Post Code:350116) Contact Us:0591-22865326
Copyright:FZU Library Technical Support:Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1