My research as a Marie Curie Fellow questioned the widely held assumption that generalized trust reflects a lack of prejudice towards out-groups. Generalized trust also does not inevitably correlate with in-group trust, nor do attitudes unequivocally correlate with behaviors. The faceless stranger one is asked about in a survey could well be an in-group member, just as a person known to the respondent (a friend, neighbor, or family member) can be an out-group member. This project thus aimed to return to some overlooked issues: the measurement of generalized trust and its validity as a proxy for intergroup attitudes and behavior. I have since developed these insights into a new project which, building on classical test theory, aims to predict the validity of answers to the generalized trust survey item using machine learning ensembles and large-scale social data.
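A minimal sketch of what such a prediction pipeline could look like is given below; the simulated respondents, the features, and the binary validity label are hypothetical placeholders rather than the project's actual data or criterion.

    # Hedged sketch: predicting whether a generalized-trust response is "valid"
    # (e.g., consistent with an external criterion) from respondent-level features.
    # The label and features are stand-ins; the real project would use large-scale
    # social data and a criterion derived from classical test theory.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                  VotingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder data: 5,000 respondents, 20 features (demographics, survey
    # metadata, etc.); y = 1 if the GT answer is judged valid.
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                               random_state=0)

    # A soft-voting ensemble of three heterogeneous learners.
    ensemble = VotingClassifier(
        estimators=[
            ("logit", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        voting="soft",
    )

    # Out-of-sample predictive performance via 5-fold cross-validation.
    scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
    print(f"Mean ROC AUC: {scores.mean():.3f} (+/- {scores.std():.3f})")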

Manuscript in preparation

I assessed how well self-reported items relate to behaviors that are less susceptible to social desirability bias by conducting large-N online and laboratory experiments. Most recently, I presented this project at the 2022 meetings of the European Consortium for Political Research and the Midwest Political Science Association. Earlier drafts were presented at the International Society of Political Psychology in 2021 and at the European Consortium for Political Research in 2020.

Generalized trust (GT) is a conspicuous double-barreled question, with ‘trust in most people’ and ‘being careful’ as its extremes, and it is used in many studies on ethnic diversity and intergroup relations. While the diversity-trust nexus is often debated, few have focused on systematic response bias in GT, and evidence on whether this survey item taps into prejudice or instead represents a more general measure of trust or risk attitudes is lacking. Across two large laboratory studies and two online studies (with Dutch, American, and British samples on Prolific), I examine the validity of GT as a proxy for intergroup attitudes (feeling thermometers and preference measures), implicit prejudice (an adapted Race IAT), or general risk taking. Moreover, I investigate the behavioral validity of GT by assessing its relation to a Trust Game, Social/Digital Distance, and an Investment Task. The lab studies allow the most control over participants’ behavior, while the online studies confirm the external validity of these findings. As data collection in the lab started prior to the COVID-19 pandemic, I leverage this source of random variation to investigate the impact of a natural hazard on GT and on Social Distance towards out-groups versus in-groups. I limit the samples to the white majority and focus on their attitudes and behavior towards stigmatized minorities: Dutch Moroccans or Black Britons and Americans. These constitute most-likely cases for finding the strongest effect sizes.
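As a hedged illustration of the behavioral-validity analysis, the sketch below relates a binary GT item to simulated behavioral outcomes (Trust Game transfers, social distance) via point-biserial correlations; the variable names and data are illustrative, not the studies' actual measures or models.

    # Hedged sketch: relating the dichotomous GT item to behavioral outcomes.
    # The simulated data stand in for the lab/online measures described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 800
    gt = rng.integers(0, 2, size=n)                       # 1 = "most people can be trusted"
    trust_game = 4 + 0.6 * gt + rng.normal(0, 2, size=n)  # amount sent in a Trust Game
    social_distance = 3 - 0.2 * gt + rng.normal(0, 1, size=n)

    # Point-biserial correlation between the binary GT item and each behavior.
    for name, outcome in [("Trust Game transfer", trust_game),
                          ("Social distance", social_distance)]:
        r, p = stats.pointbiserialr(gt, outcome)
        print(f"{name}: r = {r:.2f}, p = {p:.3f}")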

Manuscript in preparation

Together with Prof. Eldad Davidov (University of Cologne), I have examined the measurement invariance of generalized trust across 50 countries with the aid of latent variable modelling. Earlier drafts were presented at the Political Methodology Specialist Group of the Political Studies Association in 2020 and at the European Survey Research Association in 2019.

Generalized trust (GT) is a conspicuous survey question employed in many studies on social cohesion and ethnic diversity. The question asks whether one trusts most people or whether one should be careful in dealing with them. While prior research has debated the negative link between ethnic diversity and GT, only a handful of studies have so far focused on the underlying measurement issues of GT, and these studies have examined the measurement invariance (MI) properties of GT across countries only. This is unfortunate, because prior research has shown that there can be significant differences in the understanding of the GT question within countries as well: respondents report associating different categories of people with “most people”. This calls into question the validity of GT as an indicator of social cohesion. Studies of the MI of human values and political trust attribute the different ways respondents interpret attitudinal survey questions to varying educational levels. In this paper we similarly propose that education may influence the way the GT question is understood, and we attempt to close this gap with a mixed-methods approach combining British think-aloud data with the World Values Survey (WVS). First, we reanalyze the think-aloud data to examine how GT may be understood differently across individuals with varying education levels. Next, we conduct MI tests across education groups using the WVS data. We argue that measurement issues of GT across groups cannot be ignored, since the think-aloud results (as an external criterion) suggest there is no theoretically viable argument that ‘most people’ unequivocally refers to out-groups.
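For illustration, the MI tests follow the standard multi-group confirmatory factor analysis logic; assuming, as a simplification, that several trust indicators load on a single latent trust factor, the nested invariance levels can be written as

    x_{ig} = \tau_{ig} + \lambda_{ig}\,\xi_{g} + \delta_{ig},   for indicator i in education group g,

where configural invariance requires the same factor structure in every group, metric invariance additionally constrains \lambda_{ig} = \lambda_{i} for all g (permitting comparisons of relationships across groups), and scalar invariance further constrains \tau_{ig} = \tau_{i} for all g (permitting comparisons of latent means). Each level is tested as a successively constrained nested model.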
