Frontiers in Social Science features new research in the flagship journals of the Social Science Research Council’s founding disciplinary associations. Every month we publish a new selection of articles from the most recent issues of these journals, marking the rapid advance of the frontiers of social and behavioral science.
In a randomized trial conducted on an online job platform, imposing a minimum wage on firms increased wages but reduced hours worked, with no net effect on workers’ total earnings.
Firms posting job openings in an online labor market were randomly assigned minimum hourly wages. When facing a minimum wage, fewer firms hired, but those that did hire paid higher wages. Hours worked fell substantially, and treated firms shifted toward hiring more productive workers. Using the platform’s imposition of a market-wide minimum wage after the experiment, I find that many of the experimental results also hold in equilibrium, including the substitution toward more productive workers. However, there was also a large reduction in the number of posted jobs for which the minimum wage would likely bind.
In a series of survey experiments, exposure to uncivil or intolerant social media posts did not lead to majority support for content moderation by social media platforms.
When is speech on social media toxic enough to warrant content moderation? Platforms impose limits on what can be posted online, but also rely on users’ reports of potentially harmful content. Yet we know little about what users consider inadmissible to public discourse and what measures they wish to see implemented. Building on past work, we conceptualize three variants of toxic speech: incivility, intolerance, and violent threats. We present results from two studies with pre-registered randomized experiments (Study 1, N=5,130; Study 2, N=3,734) to examine how these variants causally affect users’ content moderation preferences. We find that while both the severity of toxicity and the target of the attack matter, the demand for content moderation of toxic speech is limited. We discuss implications for the study of toxicity and content moderation as an emerging area of research in political science with critical implications for platforms, policymakers, and democracy more broadly.