
Fostering human rights in responsible AI: a systematic review for best practices in industry / Baldassarre, Maria Teresa; Caivano, Danilo; Fernandez Nieto, Berenice; Gigante, Domenico; Ragone, Azzurra. - In: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE. - ISSN 2691-4581. - 6:2(2024), pp. 416-431. [10.1109/TAI.2024.3394389]

Fostering human rights in responsible AI: a systematic review for best practices in industry

Fernandez Nieto, Berenice
2024

Abstract

The recent rapid development of generative artificial intelligence (AI), and the resulting market growth, have introduced new challenges for social responsibility, an area where companies may need more guidance. In this regard, the literature covers a broad spectrum, from the impact of bias to the potential use of this technology to implement undemocratic surveillance. Another focus area discusses the AI industry's commitment to human rights and social responsibility, examining the diverse actors involved in this commitment and the context-dependent nature of their impact on human rights. This work performs a systematic review and a comparative analysis of the strategies and actions taken by four leading companies—OpenAI, Meta AI Research, Google AI, and Microsoft AI—with respect to five critical dimensions: bias, privacy, cybersecurity, hate speech, and misinformation. Our study analyzes 192 publicly available documents and reveals that, depending on the diversity and nature of their products, some companies excel in the research and development of technologies and methodologies for privacy preservation and bias reduction, offering user-friendly tools for managing personal data, establishing expert groups to research the social impact of their technologies, and possessing significant expertise in tackling hate speech and misinformation. Nonetheless, there is an urgent need for greater linguistic, cultural, and geographic diversity in research lines, tools, and collaborative efforts. From this analysis, we draw a set of actionable best practices aimed at supporting the responsible development of AI models, and foundation models in particular, that are aligned with human rights principles.
2024
Generative artificial intelligence (AI), Human centered artificial intelligence (AI), Human rights, Responsible artificial intelligence (AI), Trustworthy AI
Files in this product:
File  Size  Format
Fostering_Human_Rights_in_Responsible_AI_A_Systematic_Review_for_Best_Practices_in_Industry.pdf

not available

Description: Fostering Human Rights in Responsible AI: A Systematic Review for Best Practices in Industry
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 781.45 kB
Format: Adobe PDF
_Camera_Ready__RAIE__IEEE_Transactions_on_AI.pdf

open access

Description: Preprint - Fostering Human Rights in Responsible AI: A Systematic Review for Best Practices in Industry
Type: Pre-print
License: Creative Commons
Size: 2.04 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/39839
Citations
  • Scopus 7