Toward a human-centered framework for trustworthy, safe and ethical generative artificial intelligence: a multi-level analysis of large language models’ social impact / Fernandez Nieto, Berenice. - (2024), pp. 505-509. (EASE 2024 - 28th International Conference on Evaluation and Assessment in Software Engineering, Salerno, Italy, 18-21/06/2024) [10.1145/3661167.3661177].

Toward a human-centered framework for trustworthy, safe and ethical generative artificial intelligence: a multi-level analysis of large language models’ social impact

Fernandez Nieto, Berenice
2024

Abstract

This research proposal aims to comprehensively explore the trustworthy, safe, and ethical use of Generative Artificial Intelligence (GAI), particularly Large Language Models (LLMs). To this end, we examine the risks and potential social hazards of LLMs, adopting a multidimensional approach, focused on society, human rights, and ethics, that involves various stakeholders, including the AI industry, governmental institutions, and regulatory organizations, among others. This strategy grounds the research proposal in both social and technological dimensions and supports a comprehensive diagnosis covering perceived challenges in the AI industry, the regulatory debate, and ethical dilemmas. By delving into these areas, we aim to design a post-audit tool to ensure models are trustworthy, socially responsible, and aligned with human rights. Additionally, we aim to encourage responsible AI innovation through ethics-driven incentives.
Supervisor: Prof. Danilo Caivano, [email protected], University of Bari "A. Moro"
Co-supervisor: Dr. Azzurra Ragone, [email protected], University of Bari "A. Moro"
ISBN: 9798400717017
Files in this item:
File: 3661167.3661177.pdf
Access: open access
Description: Toward a Human-Centered Framework for Trustworthy, Safe and Ethical Generative Artificial Intelligence: A Multi-Level Analysis of Large Language Models Social Impact
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 4 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/39842
Citations
  • Scopus: 2
Keywords: social impact