Frontiers in Social Science features new research in the flagship journals of the Social Science Research Council’s founding disciplinary associations. Every month we publish a new selection of articles from the most recent issues of these journals, marking the rapid advance of the frontiers of social and behavioral science.
Foreign advocacy for Tibet after the Dalai Lama’s 1959 flight to India rarely defined Tibetans as nationalist claimants, a framing that both supported and constrained Tibetans’ pursuit of autonomy.
Through a study of the Indian Central Relief Committee for Tibetans and the American Emergency Committee for Tibetan Refugees, this article maps the multiple dimensions of Indian and American civil society advocacy on behalf of Tibet in the immediate aftermath of the Dalai Lama’s 1959 flight to India: anticommunism, imperialism, discourses of religious freedom and civilizational solidarity, domestic politics, and regional security interests. These contexts did not operate separately but formed layered interactions that eventually bound Tibetan autonomy. While the Dalai Lama and other Tibetan nationalists worked across the geographic and political spectrum to generate international support as a matter of practicality and necessity, the complex web from which this support came, and through which it operated, functioned as a constraint as well as a backing. Advocacy from such a disparate set of national, personal, religious, and political interests came with limitations that defined Tibetans as communist victims, an oppressed religious minority, and a humanitarian commodity, but not as nationalist claimants.
A new framework for auditing bias in AI-based decision tools defines principles of fairness in design and output and provides recommendations for auditors.
Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology’s more than a century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes; (b) legality, ethicality, and morality; and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components of audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs; (b) components related to how information about models and their applications is presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers; and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of individual research designs used to support all model developer claims.