A lifecycle-oriented survey of emerging threats and vulnerabilities in large language models / De Maio, Carmen; Di Gisi, Maria; Fenza, Giuseppe; Gallo, Mariacristina; Loia, Vincenzo. - In: IEEE ACCESS. - ISSN 2169-3536. - 13:(2025), pp. 176482-176500. [10.1109/access.2025.3619764]

A lifecycle-oriented survey of emerging threats and vulnerabilities in large language models

Di Gisi Maria;
2025

Abstract

Large Language Models (LLMs) have become key enablers for a wide range of natural language tasks, spanning understanding, generation, and inference across various applications and industrial domains. However, their increasing adoption has brought to light numerous security and privacy vulnerabilities, some of which remain insufficiently studied. This paper aims to identify and examine underexplored vulnerabilities affecting LLMs, with particular attention to threats that remain underrepresented in current literature. The selection of vulnerabilities is guided by a systematic comparison of four major surveys, prioritizing in-depth analysis for vulnerabilities absent in at least three of these works. The methodological approach combines a targeted literature search across major academic databases with strict inclusion and exclusion criteria focused on relevance, novelty, and the selection of peer-reviewed journal and conference publications. The survey introduces a taxonomy based on the LLM lifecycle (i.e., training, inference, and deployment) through which 17 vulnerabilities are categorized and discussed. Eight emerging threats (e.g., model collapse, gradient leakage, denial-of-service, and dependency risks) receive comprehensive analysis, reflecting their growing relevance and limited prior exploration. The result is a structured taxonomy and in-depth analysis designed to guide both academic investigations and practical efforts toward the secure deployment of LLMs in real-world environments. In addition to mapping the current threat landscape, this survey highlights open challenges and outlines key directions for future research.
Deployment security
Inference vulnerabilities
Large language models (LLMs)
LLM risks
LLM taxonomy
Model lifecycle
Security vulnerabilities
Systematic survey
Training-time attacks
Files in this item:
File | Size | Format
A_Lifecycle-Oriented_Survey_of_Emerging_Threats_and_Vulnerabilities_in_Large_Language_Models.pdf

Open access

Description: A Lifecycle-Oriented Survey of Emerging Threats and Vulnerabilities in Large Language Models
Type: Publisher's Version (PDF)
License: Creative Commons
Size: 1.4 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/39658
Citations
  • Scopus 4