Working Paper

Real-World Gaps in AI Governance Research: AI Safety and Reliability in Everyday Deployments

Ilan Strauss, Isobel Moure, Tim O’Reilly and Sruly Rosenblat

Description

Drawing on 1,178 safety and reliability papers from a corpus of 9,439 generative AI papers (January 2020 through March 2025), we compare the research output of leading AI companies (Anthropic, Google DeepMind, Meta, Microsoft, and OpenAI) with that of leading AI universities (CMU, MIT, NYU, Stanford, UC Berkeley, and the University of Washington). We find that corporate AI research increasingly concentrates on pre-deployment areas, namely model alignment and testing & evaluation, while attention to deployment-stage issues, such as model bias, has waned as commercial imperatives and existential-risk concerns have taken precedence. We identify significant research gaps in high-risk deployment domains, including healthcare applications, commercial and financial contexts, misinformation, persuasive and addictive features, hallucinations, and copyright usage in training and inference. Without concerted efforts to enhance external observability into AI’s deployment, the growing concentration of AI research within corporations could deepen knowledge deficits in these critical deployment areas. We recommend measures to expand external researchers’ access to deployment data and to improve systematic observability of AI systems’ in-market behaviors.

Citation Details

Ilan Strauss, Isobel Moure, Tim O’Reilly and Sruly Rosenblat. “Real-World Gaps in AI Governance Research: AI Safety and Reliability in Everyday Deployments.” SSRC AI Disclosures Project Working Paper Series (SSRC AI WP 2025-04), Social Science Research Council, April 2025. https://www.ssrc.org/publications/real-world-gaps-in-ai-governance-research/

ISSN: 3067-1361

DOI: 10.35650/AIDP.4112.d.2025
