Measuring Susceptibility to Misinformation in Lower-Income Populations
In this post, Anwesha Bhattacharya (Harvard Kennedy School), Erik Jorgensen (Inclusion Economics at Yale University), Urvi Naik (Inclusion Economics India Centre), and Charity Troyer Moore (Inclusion Economics at Yale University) explain the challenges of doing research on susceptibility to misinformation, especially online, and particularly among women in a low-literacy, marginalized population in rural India. This context required developing new, locally appropriate measures and a particular sensitivity to the political situation of the moment.
Access to accurate information is an important aspect of economic opportunity, social inclusion, and wellbeing, but inequalities within households and communities can produce disparities in information access. Such disparities may interact with other forms of marginalization that constrain individuals’ ability to verify and validate the information they receive (e.g., lower education levels), amplifying susceptibility to misinformation. And while the costs of searching for high-quality information offline can be prohibitively high, increasing access to information online may also increase exposure to misinformation.
Across low- and middle-income countries, women are 15 percentage points less likely than men to access mobile internet, even as mobile devices continue to grow as the primary access point to the internet. In 2018, in response to this digital gender divide, the government of Chhattisgarh, a low-income central Indian state with a large rural population and significant representation of marginalized groups known as “Scheduled Castes” and “Scheduled Tribes,” launched the ambitious SKY program, which provided free smartphones to two million rural women across the state living in locations with more than 1,000 residents.
With new internet access, women may gain better access to information, including health information, that may be accurate or inaccurate. However, to assess whether access to more information means more misinformation in settings like Chhattisgarh, we need better measures of susceptibility to misinformation in low-literacy contexts. While researchers have studied the propensity to believe misinformation and to discern true from false information, existing measures assume relatively high literacy, an assumption that does not hold for many populations.
In Chhattisgarh, as in many parts of rural India, restrictive gender norms constrain women’s mobility, economic agency, and human capital. These inequities are reflected in women’s smaller social networks, lower literacy levels, and large gaps in phone ownership and use, including access to mobile internet. For more than five years, our team at Inclusion Economics (a multisite policy-engaged research initiative) has sought to better understand how women engage with mobile technology and what types of interventions can support women’s technological engagement.
With support from the Mercury Project, our project, A tough call, measured susceptibility to “fake news” among more than 20,000 rural residents in Chhattisgarh. In the general population we study, women have an average of 5.8 years of education, compared to men’s 7.9 years. Because most tools developed to study misinformation focus on high-income, high-literacy contexts, we developed our own approach to measuring susceptibility to misinformation among this population.
Quantitatively capturing whether differences in information access translate into greater susceptibility to misinformation in low-income, low-literacy contexts like rural Chhattisgarh is a difficult task. A data collection activity of this scale provided an important opportunity to map how susceptibility to misinformation varies across subpopulations and interacts with information sources. Developing a module that was succinct and manageable for field enumerators administering it to 20,000 respondents, yet returned an accurate assessment, required deep investment in piloting exercises with a diverse set of local men and women. In the process, we learned lessons that we hope are useful to researchers working to understand susceptibility to misinformation in similar contexts.
Learning about misinformation susceptibility
In-depth qualitative work is useful in establishing an understanding of the “misinformation environment.” In exploratory qualitative interviews, we learned that participants were aware of the existence of “fake news” and already had their own methods for identifying what they considered to be misinformation. Some reported skepticism about news shared through WhatsApp forwards, while others indicated that they only trusted vernacular newspapers—those printed (or reprinted) in local languages—and hyperlocal news apps to provide accurate information about nearby events. In addition, many respondents identified viral or sensational news as untrustworthy.
Despite individuals’ efforts to identify incorrect news, our formative work using semi-structured interviews indicated that respondents were indeed vulnerable to “fake news,” as they often reported believing that viral “fake news” headlines were true (e.g., that eating garlic helps prevent Covid-19). These interviews also pointed to Covid-19 misinformation being widely relevant at the time of our formative work, more so than financial scams or political information. Based on our findings, we decided to explore news consumption and trust in media and information sources in the context of Covid-19, with the aim of eliciting beliefs in specific inaccurate rumors. We defined misinformation susceptibility as holding beliefs in these rumors.
In moving from qualitative to quantitative description, addressing respondents’ barriers to comprehension was key to ensuring precise measurement. Using insights from the qualitative research, our team developed a survey module composed of meme-style infographics related to Covid-19. The qualitative work allowed us to develop a set of specific hypothetical headlines that respondents considered plausible but that would not agitate participants or elicit extreme reactions. During interviews, participants were shown these graphics and headline examples and asked to rate the likelihood that each was true.
This testing phase yielded four key takeaways about the module’s content that may be instructive for others interested in measuring susceptibility to misinformation among populations with similarly limited literacy and survey experience.
- First, we used simple, concise infographics that required minimal explanation by enumerators.
- Second, we asked respondents about news that was directly relevant in their state and avoided national-level content, since respondents were generally less familiar with national information. For example, nearly 30 percent of survey respondents reported not knowing the identity of the Indian prime minister.
- Third, we consulted local translators to ensure the wording was easily understood, using familiar, colloquial phrasing while preserving each infographic’s intended message.
- Fourth, for messages related to local politics, we monitored the state’s political situation throughout the fielding of the survey to ensure that none of the memes inadvertently touched on “hot-button” issues. Though LSE researchers have discussed the linkages between misinformation and important issues like political or religious violence in India, we ultimately steered entirely clear of any content that could be considered partisan because of upcoming elections in the state. This decision allowed us to capture susceptibility to an important form of misinformation while making clear to respondents that our objectives were purely research oriented.
We also pre-tested the most suitable answer choices to module questions, quickly seeing that abstract or ambiguous answers—like asking respondents to report a percentage likelihood that a meme was true—were challenging in this context. Likert scales were better understood, especially when there were fewer answer choices. As a result, we settled on a 5-point scale, ranging from 1 (indicating the respondent thought that the meme was completely false) to 5 (indicating the respondent believed that the information in the meme was completely true).
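For concreteness, the final scale could be represented in analysis code along the following lines. This is a minimal sketch: only the endpoints (1 = completely false, 5 = completely true) come from our instrument, while the intermediate labels and the cutoff treating a rating of 4 or 5 as belief are illustrative assumptions.

```python
# A minimal sketch of the 5-point Likert scale described above.
# Only the endpoint labels come from the survey; the intermediate
# labels and the belief cutoff are illustrative assumptions.
LIKERT_SCALE = {
    1: "completely false",
    2: "probably false",    # assumed label
    3: "unsure",            # assumed label
    4: "probably true",     # assumed label
    5: "completely true",
}

def believes(response: int) -> bool:
    """Treat a rating of 4 or 5 as belief that a meme is true (assumed cutoff)."""
    if response not in LIKERT_SCALE:
        raise ValueError(f"Expected a 1-5 Likert response, got {response!r}")
    return response >= 4
```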
To mitigate potential negative consequences of the survey module (e.g., people walking away believing inaccurate information), it was important to correct misperceptions. Our misinformation module included three “true” and four “fake” news infographics. To avoid introducing bias into this section of the survey, we did not mention misinformation in the materials we provided to respondents when inviting them to participate. Instead, we ended the module with the enumerator “debiasing” the respondents: informing them which infographics were true and which were false. This approach aimed to curtail the spread of misinformation, which was important for upholding the ethical standards of the research project.
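Given that structure, each respondent’s answers can be reduced to simple belief rates. The sketch below is again hypothetical: the item names are placeholders, and counting ratings of 4 or 5 as belief is our illustrative convention, in line with defining susceptibility as holding beliefs in the false rumors.

```python
# Hypothetical summary of one respondent's answers to the module's three
# "true" and four "fake" infographics (item names are placeholders).
TRUE_ITEMS = ["true_meme_1", "true_meme_2", "true_meme_3"]
FAKE_ITEMS = ["fake_meme_1", "fake_meme_2", "fake_meme_3", "fake_meme_4"]

def belief_rate(responses: dict[str, int], items: list[str]) -> float:
    """Share of the listed memes rated 4 or 5 (i.e., believed true)."""
    return sum(responses[item] >= 4 for item in items) / len(items)

def summarize(responses: dict[str, int]) -> dict[str, float]:
    """Belief in true items, belief in fake items (susceptibility), and the gap."""
    true_rate = belief_rate(responses, TRUE_ITEMS)
    fake_rate = belief_rate(responses, FAKE_ITEMS)
    return {
        "belief_in_true": true_rate,
        "belief_in_fake": fake_rate,           # susceptibility to misinformation
        "discernment": true_rate - fake_rate,  # one common summary in the literature
    }
```

For example, a respondent who rated every true meme a 5 and every fake meme a 1 would score 1.0 on belief in true items, 0.0 on belief in fake items, and 1.0 on discernment.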
(Mis)informational insights: Key findings
As anticipated, women respondents are at an informational disadvantage. Women report relying less on news from all types of sources, both digital and analog: institutional sources, such as healthcare workers or government officials; traditional sources, such as word of mouth, television, and newspapers; and digital sources, including Facebook, YouTube, and WhatsApp. Alongside less access to information, women report lower trust than men in a variety of information sources about Covid-19. For example, women are four to five percentage points more likely than men to report not trusting news they hear from local health workers or government officials; with 15–20 percent of men reporting distrust of these sources, that implies roughly 19–25 percent of women do. This lower trust may reflect women’s weaker confidence in their own ability to distinguish true news from false.
Furthermore, we see important gender differences in susceptibility to misinformation. Despite lower reported trust in news sources, women show a higher propensity to report that both true and false statements are accurate. While 62 percent of male respondents reported that the (true) statement “losing one’s sense of taste is a symptom of Covid-19” was likely to be true, 67 percent of women thought the same. But women are also more likely than men to believe false claims: 24 percent of women, versus 15 percent of men, reported that the claim that mobile phone towers spread Covid-19 was likely true, and 52 percent of women, versus 40 percent of men, said the same of the claim that smelling certain spices could help prevent Covid-19.
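For readers working with similar data, gaps like these can be tabulated in a few lines. Everything in the sketch below, including the toy data, the column names, and the cutoff treating ratings of 4 or 5 as “likely true,” is an illustrative assumption rather than our actual analysis code.

```python
import pandas as pd

# Toy respondent-level data: one row per respondent, with the 1-5
# Likert rating given to each meme (all names and values invented).
df = pd.DataFrame({
    "gender":       ["male", "female", "female", "male"],
    "taste_loss":   [4, 5, 4, 2],  # true statement
    "phone_towers": [1, 4, 2, 1],  # false statement
    "smell_spices": [2, 5, 4, 4],  # false statement
})

items = ["taste_loss", "phone_towers", "smell_spices"]

# Share of each gender rating each meme as likely/completely true (>= 4),
# the kind of comparison reported above.
belief_rates = (df[items] >= 4).groupby(df["gender"]).mean()
print(belief_rates)
```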
The contrast between women’s lower reported trust in information sources and their higher susceptibility to misinformation highlights the importance of testing and deploying a realistic misinformation survey module like the one we ran in rural Chhattisgarh. Knowing that women are more likely to believe both true and false information, policymakers could prioritize interventions that help women ascertain the quality of information sources, enabling them to better distinguish between true and false claims, or interventions that expand access to additional information sources, enabling women to cross-validate claims.
We thank the exceptional research team at Inclusion Economics India Centre for their support in this study, and we are grateful to IDinsight for their role in collecting survey data. This research was supported by funds from Pivotal Ventures, the Social Science Research Council’s Mercury Project, USAID Development Innovations Ventures, the National Science Foundation Grant Number 1949522, the Gender, Growth, and Labour Markets in Low Income Countries Programme, the Initiative for What Works to Advance Women and Girls in the Economy, and the Women’s Economic Empowerment and Digital Finance Initiative.