Cornell University Department of Psychology

Judgment, Decision Making, & Political Behavior

Role: Research Assistant 

Team: Professor David Pizarro and doctoral student Rajen Anderson

As a Research Assistant with Professor David Pizarro and doctoral student Rajen Anderson, I conducted qualitative coding based on morality scales and worked on psychology experiments focused on emotion, morality, judgment, and cognition, and their effects on political behavior. I organized, ran, and collected human-subject data for the project published as Ruisch, B. C., Anderson, R. A., Inbar, Y., & Pizarro, D. A. (2020). A matter of taste: Gustatory sensitivity shapes political ideology. Journal of Personality and Social Psychology.

Objective

Previous research has shown that political attitudes are highly heritable, but the proximal physiological mechanisms that shape ideology remain largely unknown. Based on work suggesting possible ideological differences in genes related to low-level sensory processing, Ruisch, Anderson, Inbar, and Pizarro predicted that taste (i.e., gustatory) sensitivity would be associated with political ideology. In four studies (combined N = 1,639) they tested this hypothesis and found robust support for the association. In Studies 1-3, they found that sensitivity to the chemicals PROP and PTC, two well-established measures of taste sensitivity, was associated with greater political conservatism. In Study 4, they found that fungiform papillae density, a proxy for taste bud density, also predicted greater conservatism, and that this association was partially mediated by disgust sensitivity. This work suggests that low-level physiological differences in sensory processing may shape an individual’s political attitudes.

Excerpts from A matter of taste: Gustatory sensitivity shapes political ideology below (read in full here).

Procedure

Research assistants set up a table and invited passersby to participate in the study in exchange for a piece of chocolate. Participants were provided with a PTC taste strip (purchased from Nasco Precision Laboratories) and a paper survey packet. They were instructed to place the taste strip on their tongue for 30 s. After tasting the test strip, but before rating its intensity, participants were asked to indicate the taste of the strip, with the following response options: no flavor, bitter, salty, sour, or sweet. Participants then rated the intensity of the taste they experienced using the same general intensity scale from Study 1. They then indicated their political orientation and social and cultural liberalism/conservatism using the same scales from Study 1 and provided information about their age and sex. Participants also answered nine questions regarding their food preferences. As specified in our preregistration documentation, however, the results of these questions were not analyzed in relation to the current research question.

Results and Discussion

Two participants did not indicate their political ideology and therefore could not be included in analyses, leaving us with an analyzable sample of 398 participants. Seventy-one participants (17.75%) reported no taste from the taste strip and were therefore coded as “0” for the intensity measure. Additionally, six participants (1.5%) rated the strip as salty and four (1%) rated it as sweet, indicating a lack of ability to taste PTC. Following our preregistered analysis plan, we coded intensity as “0” for these participants. The remainder of the participants indicated that the strip tasted bitter or sour, indicating an ability to detect PTC. (Results are nearly identical if “sour” responses are also coded as indicating a lack of ability to taste PTC.) Replicating the results of Study 1, we found that greater taste sensitivity—indicated by the intensity of bitterness experienced from the PTC strip—was associated with greater general political conservatism (β = .19, t[396] = 3.80, p < .001) and greater social and cultural conservatism (β = .19, t[396] = 3.80, p < .001). This association remained significant (and in fact became slightly stronger) when controlling for participants’ age and sex (general conservatism: β = .21, t[390] = 4.16, p < .001; social/cultural conservatism: β = .21, t[390] = 4.18, p < .001), providing further evidence for the hypothesized association between taste sensitivity and political conservatism.
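The preregistered coding rule described above (treating “no flavor,” “salty,” and “sweet” responses as an inability to taste PTC) can be sketched as a small function. The data values below are hypothetical placeholders, not the study’s actual records; only the coding rule itself comes from the excerpt.

```python
# Sketch of the preregistered intensity-coding rule: participants whose
# taste response indicates they cannot taste PTC are coded as intensity 0,
# regardless of the intensity they rated.

def code_intensity(taste_response, rated_intensity):
    """Return the analyzed intensity under the preregistered coding rule."""
    non_taster_responses = {"no flavor", "salty", "sweet"}
    if taste_response in non_taster_responses:
        return 0  # coded as unable to taste PTC
    return rated_intensity  # "bitter" (and, in the main analysis, "sour")

# Hypothetical participant records: (taste response, rated intensity)
participants = [
    ("bitter", 4),
    ("no flavor", 2),  # coded 0 despite a nonzero rating
    ("salty", 3),      # coded 0: response indicates inability to taste PTC
    ("sour", 1),
]

coded = [code_intensity(r, i) for r, i in participants]
print(coded)  # [4, 0, 0, 1]
```

The coded intensity scores would then be entered as the predictor in the regressions of political conservatism reported above.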

Read more here

Role: Research Assistant

Team: Doctoral student Sebastian Deri

I also served as a Research Assistant for doctoral student Sebastian Deri. Sebastian studies deception, the internet’s effects on people and society, political polarization, social comparison, and misperceptions of social influence. He has also done research at Facebook and Nokia Bell Labs.

I organized, ran, and collected human-subject data on lie detection, judgment, and decision making for several research projects. One ongoing project of Sebastian’s is Hybrid Lie Detection, for which I conducted qualitative coding and rating of statements, assessing whether each was a lie or a truth. The project’s main question is one of performance: among humans, computers, and hybrid human-computer models, which type of decision-making agent achieves the highest lie detection accuracy?

Excerpts from Hybrid Lie Detection below (read in full here). 

Overview

My focus here is on performance. I want to know which type of decision-making agent achieves the highest levels of lie detection accuracy – humans, computers, or hybrid human-computer models. I believe that the best performance can be achieved by hybrid human-computer models. I expect this result because I expect the following three conditions to hold: (1) humans will perform better than chance, (2) computer models will perform better than chance, (3) the bases of human judgments and computer judgments will differ. While there is debate about human lie detection accuracy and how exactly to measure it (Vrij & Granhag, 2012), there is credible research which suggests that humans’ overall accuracy rate in truth-lie detection is better than chance (e.g. Bond & DePaulo, 2006 find an overall accuracy rate of 54% in an analysis of 24,483 judgments from 206 papers; see also: ten Brinke, Vohs, & Carney, 2016). Likewise, others have built computer models that are able to perform significantly better than chance at truth-lie detection (e.g. Mihalcea & Strapparava, 2009; Newman, Pennebaker, Berry, & Richards, 2003). Finally, it is certain that human and computer judgments are formed on different bases. Previous computer models have been trained on very rudimentary textual features which can be extracted from the words in a sentence (e.g. sentiment and parts of speech), as our model will be. In contrast, humans do not primarily attend to things like the number of adverbs in a sentence when making truth-lie judgments. They likely attend to a host of factors that computer models, as yet, cannot and do not incorporate – notably, they can contrast the claims put forth in statements with their general knowledge of the world and personal experiences (e.g. “why would a person in that situation do that? this seems like a lie”).
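The “rudimentary textual features” mentioned above can be illustrated with a minimal sketch. The feature names and word lists here are illustrative assumptions of mine, not the project’s actual feature set; real models of this kind typically use richer lexicons and part-of-speech taggers.

```python
# Hypothetical sketch of simple word-level features a text-based lie
# detection model might extract. Lexicons below are illustrative only.

def extract_features(statement):
    """Count a few surface-level properties of a statement."""
    words = statement.lower().split()
    negations = {"not", "never", "no"}       # assumed negation lexicon
    first_person = {"i", "me", "my", "we"}   # assumed pronoun lexicon
    return {
        "n_words": len(words),
        "n_negations": sum(w in negations for w in words),
        "n_first_person": sum(w.strip(".,!?") in first_person for w in words),
    }

feats = extract_features("I never said that, I promise.")
print(feats)  # {'n_words': 6, 'n_negations': 1, 'n_first_person': 2}
```

Feature vectors like these would then be fed to a standard classifier; the point of the contrast in the text is that no such feature captures the world-knowledge checks humans perform.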

In this section, I examine the accuracy of humans in truth and lie detection. To do this, I first needed people to judge the statements in our corpus. This was done with the help of three research assistants (Alexis Levine, Emem-Esther Ikpot, and Catherine Seita). I describe the procedure by which they rendered judgments in more detail below. This is followed by an analysis of their performance.

Procedure

To begin, I randomly sorted the full set of 5,004 statements. I then divided this randomly sorted list into three non-overlapping sets, assigned one RA to each segment, and asked them to go through the statements within their segment, one statement at a time. For each statement, they were asked to make two judgments. First, they made a binary judgment, a guess, about whether the statement was a truth or a lie. Second, they assessed how confident they were in their guess. The research assistants assessed their confidence by responding to the question “How confident are you in your guess?”, to which they could pick one of five responses: “0 = Not at all confident; 1 = Slightly confident; 2 = Somewhat confident; 3 = Fairly confident; 4 = Very confident”.
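The shuffle-and-split assignment above can be sketched as follows. The statement strings and random seed are placeholders; only the corpus size (5,004), the three non-overlapping segments, and the judgment format (binary guess plus 0-4 confidence) come from the text.

```python
import random

# Sketch of the assignment procedure: shuffle the full statement list,
# then split it into three non-overlapping segments, one per RA.
statements = [f"statement_{i}" for i in range(5004)]  # placeholder corpus
rng = random.Random(0)  # fixed seed so this sketch is reproducible
rng.shuffle(statements)

n_ras = 3
segment_size = len(statements) // n_ras  # 5004 / 3 = 1668
segments = [
    statements[i * segment_size:(i + 1) * segment_size] for i in range(n_ras)
]
print([len(s) for s in segments])  # [1668, 1668, 1668]

# Each recorded judgment pairs a binary truth/lie guess with a 0-4
# confidence rating, e.g.:
example_judgment = {"statement": segments[0][0], "guess": "lie", "confidence": 3}
```

Because 5,004 divides evenly by three, every statement is judged by exactly one RA and the segments share no statements.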

They were given the following general instruction about how they should orient their guessing.

“Each of these statements represent a person’s response to a question that was asked of them. Sometimes those people responded to the question truthfully (i.e. by telling the truth) and sometimes they responded to the question untruthfully (i.e. by telling a lie).

We would like for you to go through each of these statements, one at a time. First, read the statement thoroughly. And then, give us your best guess as to whether that statement is true (i.e. a case where the person responded to the question by telling the truth) or that statement is a lie (i.e. a case where the person responded to the question by telling a lie). Then, move on to the next statement and do the same.

For each statement, you may make this guess on whatever basis you choose (i.e. on intuition and “gut” feeling, careful deliberation, or any other basis of deciding). What is most important is that you give us your best guess as to what you think is more likely - that the person’s statement is a truth or that the person’s statement is a lie.”

You can read more about this project here.
