Computers in Human Behavior
journal homepage: www.elsevier.com/locate/comphumbeh
“This is fake news”: Investigating the role of conformity to other users’ views
when commenting on and spreading disinformation in social media
Jonas Colliander
Center for Retailing, Stockholm School of Economics, P.O. Box 6501, SE-113 83, Stockholm, Sweden
ARTICLE INFO
Keywords:
Fake news
Online disinformation
Conformity
Self-concept
Disclaimers
ABSTRACT
This study examines the effects of conformity to others online when individuals respond to fake news. It finds
that after exposure to others’ comments critical of a fake news article, individuals’ attitudes, propensity to make
positive comments and intentions to share the fake news were lower than after exposure to others’ comments
supportive of a fake news article. Furthermore, this research finds that the use of a disclaimer from a social
media company alerting individuals to the fact that the news might be fake does not lower individuals’ attitudes,
propensity to make positive comments and intentions to share the fake news as much as critical comments from
other users.
1. Introduction
On December 4, 2016, 29-year-old Edgar Maddison Welch fired a
military-style assault rifle inside the popular Washington D.C. Comet
Ping Pong restaurant. Mr. Welch had set out to rescue children he be-
lieved were held there in a child abuse scheme led by Hillary Clinton.
The theory, known as “Pizzagate”, stemmed from unfounded but
widespread online reports. Rather than finding any children, however,
Mr. Welch found himself in handcuffs. He was sentenced to four years in
prison and later confessed in an interview with the New York Times
that “the intel on this wasn’t 100 percent.”
Indeed, it wasn’t. But the concern among researchers, journalists
and politicians about the effects of online disinformation is. The pro-
blem is rampant. A recent study by the Pew Research Center revealed
that 23% of Americans had knowingly or unknowingly shared a made-
up news story (Pew Research Center, 2016). Furthermore, during the
2016 U.S. presidential election, the most popular made-up news stories
were more widely shared on Facebook than the most popular authentic
news stories (Silverman, 2016). Some commentators have even sug-
gested that online disinformation played a deciding role in that election
(e.g. Dewey, 2016; Parkinson, 2016; Read, 2016).
Online disinformation has been defined by Lazer et al. (2018) as
“false information that is purposely spread to deceive people” (p. 2). As
such, it overlaps with the definition of fake news, given by Allcott and
Gentzkow (2017) as “news articles that are intentionally and verifiably
false, and could mislead readers” (p. 213). Increasingly the topic of
public debate, fake news has been investigated by researchers from a
variety of angles. One is studies into the prevalence of the problem. For
instance, Allcott and Gentzkow (2017) studied Americans’ level of ex-
posure to fake news during the 2016 U.S. presidential election and which
segments of the population believed in them. In another example,
Watanabe (2017) studied the spread of Russian disinformation in
western news media during the Ukraine crisis. Another area of research
is how fake news travels within social networks. For instance, Vosoughi,
Roy, and Aral (2018) investigated how false and true news spread on-
line. A third stream of research into misinformation and fake news is
that of corrections and debunking. Research into these areas has pri-
marily investigated how misperceptions spread through disinformation
can be reduced by statements of correction from various sources. Bode
and Vraga (2018), for instance, studied how misperceptions spread by
health disinformation in social media were reduced by the presentation
of correct facts by either algorithms or other social media users. Nyhan
and Reifler (2010), on the other hand, concluded that corrections often
fail and sometimes increase misperceptions when certain ideological
groups have been presented with political misinformation. In a meta-
study, Chan, Jones, Jamieson, and Albarracín (2017) also concluded
that more detailed debunking is positively correlated with a debunking
effect.
This research is intended to add to the research on debunking dis-
information and fake news. However, it takes a step back from the
research mentioned above in that it investigates not the effects of
presenting counterfactuals to fake news, but rather the effects of the far
more common occurrence of simply pointing out to readers that it is
fake news. Specifically, this research examines what effect it has on
individuals exposed to fake news that other users take a stand against
the disinformation and identifies it as such through the comment
https://doi.org/10.1016/j.chb.2019.03.032
Received 4 December 2018; Received in revised form 1 March 2019; Accepted 25 March 2019
E-mail address: Jonas.colliander@hhs.se.
Computers in Human Behavior 97 (2019) 202–215
Available online 28 March 2019
0747-5632/ © 2019 Elsevier Ltd. All rights reserved.
function. Given the state of many comment threads to fake news, which
are less about correcting the disinformation and more about simple
statements either supporting or attacking both the story and the person
posting it, the lack of research investigating the many facets of such
simple statements is conspicuous. Therefore, the present research con-
sists of two experimental studies on this phenomenon. Study 1 in-
vestigates whether readers of disinformation, upon seeing comments
from others either supporting the fake news story or opposing it by
attacking the news story or the original poster, respectively, are more
likely to a) have a more positive or negative attitude towards the fake
news b) make comments of either support or opposition to the fake
news and c) share the fake news story on social media. Study 2 in-
vestigates the same behavior among respondents when they are ex-
posed to other users’ comment of either support or opposition to a fake
news story. However, it also compares the effect of other users’ com-
ments identifying the news as fake with the use of an official Facebook
disclaimer stating that the fake news story is disputed by independent
fact checkers. Taken together, the two studies are intended to shine a
few rays of light on the effects and importance of other users in pre-
venting the spread of fake news online.
Beyond simply focusing on the simpler ways in which users in social
media can debunk fake news this research makes two additional con-
tributions. Firstly, it introduces additional dependent variables in the
form of the attitude towards the fake news, the likelihood of com-
menting in various ways on the fake news and the sharing of the fake
news. Previous research on debunking has focused mainly on cor-
recting the misconceptions (caused by disinformation) of respondents.
Important as that may be, another stated goal of policymakers and
social media companies alike is to stop the spread of fake news. This
research is intended to help in that pursuit by investigating the three
dependent variables mentioned above. Secondly, it explores whether
other ways of disputing the accuracy of fake news, such as disclaimers
from social media companies, have a similar effect on readers as the
actions of other users.
Structurally, the rest of this article is straightforward. It begins with
reviews of conformity and the self-concept, which are the theoretical
foundations of this research. That is followed by the hypothesis devel-
opment and a description of the two studies. Lastly, conclusions and
implications for both researchers and practitioners are discussed.
2. Theoretical background and hypotheses
2.1. Conformity
Conformity is the act of matching one’s behavior to the responses of
others (Cialdini & Goldstein, 2004). Conformity has been found to be a
powerful social phenomenon as individuals are often found to conform
to the behaviors of others even when the actions of those other in-
dividuals run contrary to individuals’ own convictions, such as in the
classic experiments by Asch (1956). Subsequent research has also de-
monstrated that even our memories are affected by exposure to the
recollections of others (Edelson, Sharot, Dolan, & Dudai, 2011). Deutsch
and Gerard (1955) made a distinction between informational and
normative motivations for conformity. Informational motivations are
driven by a desire to interpret reality in an accurate way whereas
normative motivations are based on the desire to obtain social approval
from others. More contemporary research has largely upheld these
findings. The overview by Cialdini and Goldstein (2004), underscores
the importance of conformity in gaining social approval, stating that
“individuals often engage in … conscious and deliberate attempts to
gain the social approval of others, to build rewarding relationships with
them, and in the process, to enhance their self-esteem. Conformity
offers such an opportunity” (p. 610). Interestingly, Williams, Cheung,
and Choi (2000) concluded that conformity still occurs among anon-
ymous internet users.
2.2. Self-concept
The self-concept is an individual’s collection of beliefs about him or
herself, generally answering the question of ‘who am I’? (Meyers,
2009). Individuals tend to conceptualize themselves in accordance with
two basic aspects of human beings: agency and communion (Wiggins,
1991). Agency represents such personal interests and values as self-
assertion, self-improvement and self-esteem. Communion, conversely,
is about social bonding, connections with others, cooperation and care
for others (Nam, Lee, Youn, & Kwon, 2016). Agentic individuals are
dispositioned to show a more self-centered behavior and focus on dif-
ferentiating themselves from others. Communal individuals, on the
other hand, are more likely to be a part of a group and form social
connections (Wiggins, 1991). Cialdini and Trost (1998) state that all
individuals share a strong need to enhance the self-concept. This is done
by behaving consistently with their statements, actions, beliefs, com-
mitments and self-ascribed traits. One of the ways in which this man-
ifests itself is by the consumption by individuals of products that cor-
respond with their self-concept as a means of self-expression (Braun,
Ellis, & Loftus, 2002). Another is the way individuals behave and write
online in response to comments from other internet users (Colliander &
Wien, 2013).
2.3. Hypotheses development
Here, it is proposed that due to conformity and the desire to
maintain a positive self-concept, respondents who are exposed to
comments identifying a fake news story as such have a more negative
attitude towards the news story, are more likely to critically comment
on the story and are less likely to spread it through their own social
channels. Furthermore, it is proposed that due to the critical role of the
self-concept, these tendencies are especially pronounced when the
comments from other users include personal attacks on the poster of the
original story. Several authors have documented the power of con-
formity in online behavior. Zhu and Huberman (2014), for instance,
demonstrated that consumers tend to shift their preferences in an online
setting when faced with the recommendations of others. Breitsohl,
Wilcox-Jones, and Harris (2015) found support for a groupthink men-
tality in online communities. Tsikerdekis (2013), meanwhile, found
that conforming to the opinions of the group occurred irrespective of
the levels of anonymity that users perceived themselves as having.
Specifically investigating online news contexts, Winter, Bruckner, and
Krämer (2015) found evidence of the social influence of others’ com-
ments when judging stories online. Other researchers have demon-
strated that conformity extends beyond the mental dimension and affects
actions online. In a comprehensive study involving the analysis of on-
line discussion forums as well as four experiments, Hamilton, Schlosser,
and Chen (2017) found that commenting is significantly affected by the
need for affiliation. Therefore, commenters online were likely to con-
form their writings to already existing comments.
Based on this body of evidence, it is likely that when people are
exposed to comments critical of a fake news story (rather than sup-
portive comments), they will gain a more negative attitude towards the
fake news story, and will be more likely to themselves comment criti-
cally (rather than in a supportive manner). Moreover, Cialdini and
Goldstein (2004) state that “people are frequently motivated to con-
form to others’ beliefs and behaviors in order to enhance, protect or
repair their self-esteems” (p. 611). Colliander and Wien (2013),
meanwhile, state that individuals’ actions on social media are partly
motivated by their desire to bolster their self-concepts. Following the
importance of an individual’s self-concept, as highlighted above, it is
therefore likely that people are less inclined to share a fake news story
after seeing others commenting critically on it. Furthermore, when
exposed to comments shaming the original poster for sharing the fake
news story, the threat to the self-concept inherent in sharing the fake
news story ought to be especially salient to people. Therefore, they
should be even less likely to share the fake news story after exposure to
these comments than after exposure to comments simply claiming that
the news story is fake. Given this reasoning, the following hypotheses
are proposed:
H1. After exposure to a fake news story with user comments critical of
the content, people will have a more negative attitude towards the fake
news, than after exposure to the fake news story with user comments
supportive of the content.
H2. After exposure to a fake news story with user comments critical of
the content, people are more likely to make critical comments
themselves, than after exposure to the fake news story with user
comments supportive of the content.
H3a. After exposure to a fake news story with user comments
identifying the news as fake, people are less likely to share the fake
news, than after exposure to the fake news story with user comments
supportive of the content.
H3b. After exposure to a fake news story with user comments shaming
the poster for spreading said fake news, people are less likely to share
the fake news story, than after exposure to the fake news story with
either user comments supportive of the content or with user comments
merely identifying the news as fake.
3. Study 1
Study 1 was intended to test H1, H2, H3a and H3b. To that end, we
used a between-subjects experimental design with three treatment
groups. One group of participants (group 1) was subjected to a fake
news social media post with supportive comments from other users. The
second experimental group (2) was subjected to the same fake news
social media post but this time the comments critically identified the
fake news as such. The third experimental group (3) was subjected to
the same fake news social media post with comments both critically
identifying the fake news as such and criticizing the poster of the fake
news for spreading it.
3.1. Stimulus development
The study utilized a role-play scenario where participants were subjected
to one of the three experimental posts embedded in a survey tool
and instructed to imagine that they saw it posted by a distant ac-
quaintance on Facebook. To maximize validity, it was decided that the
study should employ a real piece of fake news. To that end, a search of
the internet for known sources of fake news was undertaken.
Eventually, it was decided to use a Facebook post from a page called
“America’s last line of defense”. The page has been noted for solely
spreading made up news by both The Washington Post (Saslow, 2018)
and Politifact.com (Gillin, 2018).
Three criteria were used to pick the post to be used as stimuli for the
study. The post had to a) reference an event or issue that was relevant
and well known to a U.S. audience at the time of the study b) be in-
disputably false and c) could reasonably be identified as false by an
average individual. It was decided to use a fake story about the Austin
serial bombings that took place between March 2 and March 20, 2018.
After the perpetrator of those bombings committed suicide on March 21
and was subsequently identified, America’s last line of defense pub-
lished a post mimicking a news alert on March 22 that stated that the
bomber had been “on Clinton Foundation payroll”. The post thus im-
plied that the bomber had been employed by Bill and Hillary Clinton’s
foundation, which is a frequent target of fake news. Using the criteria
above, it was decided that the post met all three. It was demonstrably
false, should be identifiable as such by an average person, and due to
extensive news coverage of the Austin events it was deemed relevant at
the time of the study (early April 2018).
A screenshot of the post was taken to be used as the focal fake news
story of the study. Next, three different comment sections were created.
Photos were blurred and names were altered to create fictitious, non-
identifiable individuals. Next, three sets of four comments each were
created to achieve the experimental stimuli. That number of comments
was deemed appropriate to establish the desired pattern. Each comment
feed was inspired by actual comments found on America’s last line of
defense and similar Facebook feeds. The comments for group one
contained expressions such as “I knew it” and “Unbelievable”. The
comments for group two contained comments such as “Fake story!” and
“This is fake news.” The comments for group three contained comments
such as “It’s irresponsible of you to spread this untrue stuff” and “Shame
on you for spreading this lie”. Screenshots were taken of each comment
section and each was merged with the screenshot of the fake news
story, thus creating a stimulus for each of the experimental groups. A
small focus group of four university students fluent in English at a
western European business school was gathered to assess the stimuli.
All participants were active on Facebook. Asked whether the posts
looked like real Facebook posts they all answered in the affirmative.
Likewise, all participants deemed the various comments as credible and
representative of actual comments that they had encountered online.
They all also judged the comments intended for group one as supportive
of the post and the comments for groups two and three as critical of the
post. Asked which of the comment sections intended for groups two
and three was most critical of the poster, all respondents indicated
the one meant for group 3. It was thus determined that the stimuli were
suitable for use in the study. Please see appendix 1 for the stimuli.
3.2. Data collection and participants
Each scenario version was followed by questionnaire items to
measure the variables in the hypotheses. The scenarios were randomly
allocated to participants (N = 1201). Respondents were recruited
through Amazon Mechanical Turk and consisted of US residents over
the age of 18 who were members of Facebook. Research supports the
validity of Amazon Mechanical Turk data within quantitative studies as
compared to other methods for online survey data collection
(Buhrmester, Kwang, & Gosling, 2011). Participation was open to
people who had a validated track record in past surveys of above 90%
approval, and the anti-ballot-stuffing setting in Qualtrics (the survey
publishing tool adopted) was enabled to avoid multiple submissions from
the same participant. 40% of respondents were male and the average
age of the respondents was 37. There were no significant differences in
gender (Chi2 = 0.832) or age (p = .321) between our 3 experimental
groups. After initially filling out demographic questions ensuring that
they were in fact U.S. residents and members of Facebook, respondents
were, as noted, instructed to “Please imagine that you saw the following
post by a distant acquaintance on Facebook” and to look carefully at the
post and comments and answer all questions.
3.3. Measures
Attitudes towards the fake news story were measured with three
items on seven-point Likert scales (1 = completely disagree, 7 = com-
pletely agree): “My impression of the Facebook post is good”, “My
impression of the Facebook post is pleasant”, “My impression of the
Facebook post is favorable” (Colliander & Marder, 2018). Responses to
the three items were averaged to form an index (Cronbach’s
alpha = .96).
The likelihood to make positive or negative comments on the fake
news story was measured with a single variable: “If you would com-
ment on this post, would your comment be mostly supportive or mostly
critical of the content in the post?” Responses to this question were
measured on a binary scale (Mostly critical/Mostly supportive).
The likelihood to share the post was measured with three items on
seven-point Likert scales (1 = completely disagree, 7 = completely
agree): “It is likely that I would share this post on Facebook”, “It is
possible that I would share this post on Facebook”, “It is probable that I
would share this post on Facebook” (Huang, Cai, Tsang, & Zhou, 2011).
Responses to the three items were averaged to form an index (Cronbach’s alpha = .95).
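As an illustration of how such an index is formed, the following sketch computes Cronbach’s alpha and the averaged index for a small matrix of hypothetical Likert responses (synthetic values for illustration only, not the study’s data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to the three sharing-intention items (1-7 scale)
responses = np.array([
    [1, 2, 1],
    [2, 2, 3],
    [6, 7, 6],
    [5, 6, 6],
    [1, 1, 2],
], dtype=float)

alpha = cronbach_alpha(responses)
index = responses.mean(axis=1)  # averaged index, as used in the analyses
```

With items that track each other closely, as in this synthetic example, alpha approaches the high values (.95–.96) reported above.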
4. Results
Before analyzing the dependent variables, an initial manipulation
check was employed. Respondents were asked “Were the comments on
the Facebook post that you could see supportive or critical of the post?”
Responses to this question were measured on a binary scale
(Supportive/Critical). Only respondents who correctly answered the
question (N = 1164) were subsequently analyzed when testing the
hypotheses.
When testing H1, that attitude towards the fake news story would be
lower after reading one of the two sets of comments critical of the post,
a one-way ANOVA with a Scheffe post-hoc test was employed. The
results showed that the mean attitude towards the post was
significantly lower in groups two and three than it was in group one.
Thus, H1 was empirically supported. The same method for analysis was
employed when testing H3a and H3b, that intentions to share the fake
news story would be lower in the group subjected to the comments
pointing out the news story as fake (group two) than in the group
subjected to the comments supporting the fake news story (group one),
and that these intentions would be lower still in the group subjected to
comments critical of the poster (group three). The results showed that
the means of both groups two and three were significantly lower than
the means of group one. H3a was thus empirically supported. However,
the mean of group three was not significantly different from that of
group two. H3b was thus only partially empirically supported. Please
see Table 1 for all means, standard deviations and p values of the
ANOVA tests.
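The omnibus test behind these comparisons can be sketched as follows (hypothetical group scores, not the study’s data; scipy’s f_oneway provides the one-way ANOVA, after which the Scheffé post-hoc comparisons used in the study would be run separately):

```python
from scipy.stats import f_oneway

# Hypothetical attitude scores (1-7 index) for the three groups;
# illustrative values only, not the study's data.
group1 = [3, 4, 3, 4, 3, 4, 3, 4]  # supportive comments from users
group2 = [1, 2, 1, 2, 1, 2, 1, 2]  # comments pointing out that the news is fake
group3 = [2, 2, 1, 2, 2, 1, 2, 2]  # comments also critical of the poster

f_stat, p_value = f_oneway(group1, group2, group3)
# A significant omnibus p value is then followed by post-hoc pairwise
# comparisons (the study used Scheffe) to locate the group differences.
```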
When testing H2, that respondents would be less likely to make
comments supportive of the fake news story after reading comments of
others critical of the story, than after reading comments of others
supportive of the story, a cross tabulation with a chi-square test was
employed. Results show that there was a significant difference
(p < .001) between the expected proportions of respondents who
would make critical and supportive comments, respectively, in the
three experimental groups. Among respondents in group one, who saw
comments supportive of the fake news story, more individuals than
expected would themselves make comments supportive of the fake
news story. Meanwhile, the reverse pattern emerged for groups two and
three. Thus, H2 was empirically supported. Please see Table 2 for the
expected and actual count of respondents who would make supportive
and critical comments, respectively.
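The chi-square test on the cross tabulation can be reproduced directly from the counts reported in Table 2 (a sketch using scipy; the expected counts it computes match those in the table):

```python
from scipy.stats import chi2_contingency

# Observed counts from Table 2: rows are the three experimental groups,
# columns are (mostly supportive, mostly critical) intended comments.
observed = [
    [90, 295],   # group 1: supportive comments from users
    [35, 350],   # group 2: comments pointing out that the news is fake
    [42, 352],   # group 3: comments also critical of the poster
]

chi2, p, dof, expected = chi2_contingency(observed)
# expected[0][0] is roughly 55.2, matching Table 2, and p < .001
# mirrors the significant difference reported for H2.
```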
4.1. Discussion
The results of study 1 show that the comments and actions of other
users in social media can indeed affect the reactions to, and spread of,
fake news online. Users exposed to comments by other users that were
critical of the fake news had lower attitudes towards the fake news, and were
Table 1
Mean values (standard deviations) for attitudes towards the post and intentions to share the post in study 1.

Variable | Supportive comments from users (group 1) | Comments pointing out that the news is fake (group 2) | Comments pointing out that the news is fake and critical of the poster (group 3)
Attitudes towards the post | 2.18 (1.54) | 1.64 (1.23)a | 1.82 (1.45)b
Intentions to share the post | 2.0 (1.68) | 1.47 (1.09)a | 1.62 (1.31)b

a = significantly lower than group 1 at p < .001. b = significantly lower than group 1 at p < .005.
Table 2
Expected and actual count of respondents who would make mostly supportive or mostly critical comments in study 1.

Group | Mostly supportive comments: count (expected) | Mostly critical comments: count (expected)
Supportive comments from users (group 1) | 90 (55.2) | 295 (329.8)
Comments pointing out that the news is fake (group 2) | 35 (55.2) | 350 (329.8)
Comments pointing out that the news is fake and critical of the poster (group 3) | 42 (56.5) | 352 (337.5)
Total | 167 (167) | 997 (997)
more likely to comment critically on, and less likely to share, the fake news
themselves, than users who were exposed to comments supportive of the fake news.
These findings clearly demonstrate the potential and responsibility of
ordinary readers in stopping the spread and mitigating the impact of
fake news and online disinformation.
Theoretically, study 1 offered a mixed bag. In particular, the fact
that H3b was only partially supported is interesting. Making a threat to
the self-concept especially salient, by showing an individual the po-
tential of being shamed by other users online when spreading a fake
news story, did not affect respondents in this study more than when
other users simply wrote that the story was false. This could be because
the importance of maintaining the self-concept in an online setting has
been overestimated in previous studies. Though, with the robust body
of research indicating its importance, that seems unlikely. More prob-
able is that simply pointing out that the story is fake is also
seen as an implicit rebuke by other users online and that conformity
and the potential threat to the self-concept act in combination in ex-
plaining the differences between the three experimental groups.
It could be argued, however, that neither conformity nor the threat
to the self-concept were responsible for the results of study 1. Instead,
one can argue that those response patterns were simply due to a
‘waking up’ effect. That is, research has demonstrated that people do
not spend much time digesting content online (Weinreich, Obendorf,
Herder, & Mayer, 2008), indicating that they do not spend much cog-
nitive effort to process web content. This would indicate that at least
some users might not think about the fact that a fake news article is fake
and that it is only when seeing comments from other users that they
realize that fact. This would be similar to the effect of incongruent
advertising in drawing attention to certain commercial messages (e.g.
Dahlén, Rosengren, Törn, & Öhman, 2008). If that were the case, other
stimuli that draw an individual’s attention to the fact that a news story
is fake would achieve similar effects to the comments used for groups
two and three in this study. One such stimulus could be disclaimers from
social media companies themselves. These are notifications informing
users to pay attention to the content for some reason. When it comes to
fake news, Facebook started using disclaimers in 2017 stating that the
content is disputed by multiple fact-checkers (Hunt, 2017). That prac-
tice has since been replaced by a ‘related stories’ function (Flynn,
2017) but including such a disclaimer would nevertheless alert readers
to the fact that the news is fake. Thus, it would reveal whether the results of
study 1 are due to conformity or simply to alerting consumers that
the news is fake.
To put this to the test, a second study was launched. However, since
there is potential for a number of different outcomes, no hypotheses
will be formulated. After all, the results of study 1 could be due to
conformity, as was argued leading up to study 1, and there is no
‘waking up’ effect. Or, a disclaimer could achieve similar results to the
comments in study 1. Therefore, an open research question was devised
instead to test the effects of disclaimers vs. comments from other users
pointing out that a news story is fake. Hence:
Research question 1: How does the disclosure that a news story is
fake by disclaimers from a social media company compare to comments
to that effect from other users in affecting attitudes towards the news
story, propensity to comment on the story in a supportive or critical
manner and intentions to share the news story in social media?
5. Study 2
Like study 1, study 2 used a between-subjects experimental design.
This time, it included 4 treatment groups. One group of participants
(group 1) was subjected to a fake news social media post with no
comments or disclaimers. The presence of this control group is intended
to put the means of other experimental groups into context. Adding this
group in study 2 is a further contribution of the second study. The
second group (2) was subjected to a fake news social media post with
supportive comments from other users. This was similar to the first
group in study 1. The third group (3) was subjected to a fake news
social media post with comments pointing out that the news was fake.
This was similar to the second group in study 1. The fourth group (4)
was subjected to a fake news social media post with comments sup-
portive of the post and with a disclaimer stating its content was dis-
puted by fact checkers.
The attentive reader will notice that it was decided to use only one
version of comments critical of the content, the ones where the com-
menters simply pointed out that the news post was fake. Finding no
significant differences between groups two and three in study 1, and
assuming that the comments stating that the news was fake served as an
implicit rebuke as discussed above, this was done to avoid an overly
cluttered study 2. In addition, it was decided to apply the disclaimer to
a post with user comments supporting the fake news. This was done in
order to directly compare groups 2, 3 and 4 to discern whether com-
ments from other users stating that the news is fake or disclaimers from a
social media company were more effective in mitigating the effects of
other users’ supportive comments found in study 1.
5.1. Stimulus development
The same criteria as in study 1 (reference an event or issue that was
relevant and well known to a U.S. audience at the time of the study; be
indisputably false; could reasonably be identified as false by an average
individual) were used when finding a fake news post to use in study 2.
The same fake news feed was used to find a new suitable post. This
time, the study ran in early October 2018. Therefore, it was decided to
use a post about the ad campaign that Nike ran with former San
Francisco 49ers quarterback Colin Kaepernick, which had been the
topic of much debate in the weeks prior to the study. The athlete, who
spearheaded a movement by NFL players to kneel during the national
anthem in protest against racial injustice in the US, was seen as a
controversial choice as a brand spokesperson. Nike drew much right-wing ire for its decision to use Kaepernick, and a social media campaign was started against the company. The post chosen for the
study stated that Nike had filed for bankruptcy after the “failed
Kaepernick campaign”. It was decided that the post met the criteria.
The names of the commenters were changed and their comments were
altered to suit each stimulus. For added ecological validity, likes and
emojis were retained on the comments. Thereafter, screenshots were
taken of the post and adapted comments. The photos of commenters
were blurred and, for group 4, a disclaimer taken from an older fake
news story was photoshopped into the post. As in study 1, a small focus
group consisting of English-speaking university students at a western
European business school was gathered to pre-test the stimuli. After the
focus group confirmed that the stimuli conveyed the intended messages
it was decided to go ahead with the study. Please see appendix 2 for the
stimuli in study 2.
5.2. Data collection and participants
Again, the study utilized a role-play scenario where participants were subjected to one of the four experimental posts embedded in a survey
tool. They were instructed to imagine that they saw it posted by a
distant acquaintance on Facebook. Each scenario version was followed
by the same questionnaire items as in study 1. The scenarios were
randomly allocated to participants (N = 800). Again, respondents were
recruited through Amazon Mechanical Turk and consisted of US residents over the age of 18 who were members of Facebook and who had a validated approval rating above 90% in past surveys. The
Qualtrics anti-ballot stuffing setting was enabled to avoid multiple
submissions from the same participant. This time, 50% of respondents
were male and the average age of the respondents was 36. There were
no significant differences in gender (Chi2 = 0.779) or age (p = .170)
between our 4 experimental groups. After initially filling out demo-
graphic questions ensuring that they were in fact U.S. residents and
members of Facebook, respondents were instructed to look carefully at
the post and comments and answer all questions.
6. Results
Before analyzing the dependent variables, three manipulation
checks were employed. As in study 1, respondents were asked “Were
the comments on the Facebook post that you could see supportive or
critical of the post?” Responses to this question were measured on a
binary scale (Supportive/Critical). Secondly, they were asked “What
brand was the focus of the Facebook post?” Respondents could choose
between Nike/Adidas/New Balance. Thirdly, they were asked “Was
there a disclaimer stating that the post had been disputed included in
the post?” Responses to this question were measured on a binary scale
(Yes/no). As in study 1, respondents were asked these questions only after answering the dependent-variable questions, but only respondents who correctly answered all three (N = 506) were included when testing the hypotheses.
In order to test the effects on attitude towards the post and inten-
tions to share the post, one-way ANOVAs with Scheffe post-hoc tests
were once more employed. Please see Table 3 for the results of these
tests.
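As a minimal sketch of how such a one-way ANOVA can be reconstructed from summary statistics alone, the F ratio for attitudes towards the post can be recomputed from the means and standard deviations reported in Table 3. Note that the per-group sample sizes used below are an assumption taken from the row totals of Table 4; the paper does not report group sizes directly.

```python
# Sketch: one-way ANOVA F statistic for "attitudes towards the post",
# reconstructed from the summary statistics reported in Table 3.
# Assumption: per-group sample sizes equal the row totals of Table 4
# (139, 108, 143, 116); these are not reported directly in the paper.

means = [2.13, 2.89, 1.82, 2.42]   # groups 1-4, Table 3
sds   = [1.58, 2.27, 1.54, 1.98]   # standard deviations, Table 3
ns    = [139, 108, 143, 116]       # assumed from Table 4 row totals

N = sum(ns)
grand_mean = sum(n * m for n, m in zip(ns, means)) / N

# Between-group and within-group sums of squares
ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))

df_between = len(ns) - 1   # 3
df_within = N - len(ns)    # 502

F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # F ≈ 7.5, well above the ~2.62 critical value at p = .05
```

Under these assumed group sizes, the omnibus test is clearly significant, which is consistent with the pairwise Scheffe differences flagged in Table 3.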
In order to test whether respondents would be likely to make
comments supportive or critical of the post after seeing the experi-
mental stimuli, a cross tabulation with a chi-square test was once more
employed. Results show that there was a significant difference
(p < .001) between the expected proportions of respondents who
would make critical and supportive comments, respectively, in the four
experimental groups. Please see Table 4 for the expected and actual
count of respondents who would make supportive and critical com-
ments, respectively.
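The reported chi-square test of independence can be reproduced directly from the counts in Table 4. The following pure-Python sketch (no statistics library assumed) recomputes the expected counts and the test statistic:

```python
# Sketch: chi-square test of independence behind Table 4, recomputed
# from its reported counts in pure Python.
# Rows: groups 1-4; columns: (mostly supportive, mostly critical).

observed = [
    [37, 102],   # no comments or disclaimers (group 1)
    [41, 67],    # supportive comments from users (group 2)
    [12, 131],   # comments pointing out that the news is fake (group 3)
    [37, 79],    # supportive comments and a disclaimer (group 4)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)  # 506

# Expected count under independence: row total * column total / N
expected = [[r * c / n for c in col_totals] for r in row_totals]

chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(4) for j in range(2)
)

# Expected counts round to the values reported in Table 4 (34.9, 27.1, ...),
# and chi2 (df = 3) far exceeds the 16.27 critical value at p = .001.
print(round(expected[0][0], 1), round(chi2, 1))  # → 34.9 33.8
```

The single largest contribution to the statistic comes from group 3 (critical comments), whose count of supportive commenters (12) falls far below its expected value, mirroring the pattern discussed below.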
6.1. Discussion
Study 2 was conducted to shine a light on research question 1. To
reiterate, it was formulated as follows:
Research question 1: How does the disclosure that a news story is
fake by disclaimers from a social media company compare to comments
to that effect from other users in affecting attitudes towards the news
story, propensity to comment on the story in a supportive or critical
manner and intentions to share the news story in social media?
The results of study 2 indicate that a disclaimer is not as effective as
other users’ comments in stopping the spread of fake news. Whereas the
means of the attitudes towards the post and intentions to share the post
in the group that had seen critical comments (group 3) were significantly lower than in the group that had seen supportive comments (group 2), the means of the group that had seen a disclaimer (group 4) were not. Furthermore, when looking at Table 4, one notices that it is not until exposed to other users' critical comments on the fake news story that respondents' own likelihood of posting critical comments exceeds the expected count.
Table 3
Mean values (standard deviations) for attitudes towards the post and intentions to share the post in study 2.
Variable | No comments or disclaimers (group 1) | Supportive comments from users (group 2) | Comments pointing out that the news is fake (group 3) | Supportive comments from users and a disclaimer (group 4)
Attitudes towards the post | 2.13 (1.58) | 2.89 (2.27)a | 1.82 (1.54)b | 2.42 (1.98)
Intentions to share the post | 1.99 (1.82) | 2.45 (2.15) | 1.62 (1.44)c | 2.04 (1.88)
a Significantly higher than group 1 at p < .05. b Significantly lower than group 2 at p < .001. c Significantly lower than group 2 at p < .01.
J. Colliander Computers in Human Behavior 97 (2019) 202–215
Returning briefly to the discussion following study 1, the results of study 2 support the theoretical reasoning behind the hypotheses rather than the existence of a mere 'waking up' effect. Conformity does indeed seem to be an important factor in steering people's responses to fake news. Social media users seem to use the comments of other people, rather than disclaimers, as a guide for how to respond to disinformation online. This validates recent decisions by social media companies to move away from flagging fake news, as flags do not seem to be particularly effective.
7. Final discussion
As noted in the introduction, the present research was intended to
contribute to the emerging literature on debunking disinformation and
fake news. Previous studies have thoroughly investigated how counterfactuals serve to correct misperceptions caused by fake news. This research, however, takes a step back and investigates not the misperceptions of those exposed to fake news but rather people's attitudes towards, and intentions to comment on and share, the fake news in light of other users' reactions to the disinformation.
Specifically, this research examines the effect on individuals exposed to fake news when other users take a stand against it and identify the disinformation as such through the comment function.
The results show that the actions of other users in the comment section of fake news articles significantly influence people's attitudes towards disinformation, as well as their intentions to comment on and share the fake news. The results also show that the actions of other users
online might be more effective than disclaimers and other means of
countering fake news from social media companies.
7.1. Implications
The results of the present research offer implications to both theory
and practice. Theoretically, it adds primarily to the research on con-
formity online. Previous studies have demonstrated that conformity is
not confined to physical interactions but is also very much a factor
online (Rosander & Eriksson, 2012). For instance, Fox and Tang (2014)
have demonstrated that conformity predicts sexist behavior online and
Teunissen, Spijkerman, Prinstein, and Cohen (2012) have shown how
conformity online influences drinking habits. Furthermore, to under-
score the powerful role of conformity on the internet, both Williams
et al. (2000) and Tsikerdekis (2013) have demonstrated that individuals
conform to others online irrespective of their degree of anonymity. This
study adds to that stream of research. It demonstrates yet again the
powerful forces of conformity online. It shows that it influences con-
sumer responses to fake news and online disinformation, an important
issue of our time. Previous research in this field has mostly examined
the role and tactics of social media companies in debunking fake news.
This study instead focuses on the role of other users, demonstrating that their actions are as important as, if not more important than, those of social media platforms.
That’s not to say that this study offers no practical implications for
social media companies, however. It shows that they need to combine
their ongoing work of finding effective ways of alerting users to the
existence of fake news (Flynn, 2017) with initiatives to involve other
users in these efforts. Encouraging other users to debunk fake news
stories and providing them with incentives to do so ought to be high on
their agenda. Other stakeholders that might derive practical implications from this study are the authorities. If ordinary citizens are to play a role in countering fake news, they must be given the tools to do
so. Initiatives to strengthen peoples’ skills in source criticism as well as
public information campaigns about fake news and individuals’ role in
countering it are both options to consider.
7.2. Limitations
This study naturally has limitations that we encourage future
researchers to address. For starters, no distinction was made between
heavy users of Facebook and those who use it less frequently. It is
plausible, for instance, that heavy users are better at spotting fake news
articles and are influenced less by the comments of other users than
novices. Investigating how heavy vs. light users of Facebook are gov-
erned by the actions of other users when reacting to fake news is a task
left to future researchers.
Another limitation of this research is that it did not account for how personally relevant the fake news stories used in the studies were to respondents and how that influenced their reactions. For example, the two fake news stories used in this research might feel more relevant to conservatives than to liberals, and thus the responses among those two groups might differ. Likewise, the reaction to the Colin Kaepernick post, which many associate with the Black Lives Matter movement, might be different among minority respondents. Future researchers should look at this
issue as well.
Lastly, future researchers are encouraged to investigate how mixed
comments influence reactions to fake news. Typically, comment threads
of fake news offer a mixture of positive and negative comments. This
study did not take the effects of such mixed comment threads into ac-
count. Future studies could, for example, investigate how different
proportions of positive and negative comments affect responses, as well
as the order of those comments. That way, we could all gain a better
understanding of how people are influenced by others when responding
to fake news.
Table 4
Expected and actual count of respondents who would make mostly supportive or mostly critical comments in study 2.
Group Actual and expected counts Mostly supportive comments Mostly critical comments
No comments or disclaimers (group 1) Count 37 102
Expected count 34.9 104.1
Supportive comments from users (group 2) Count 41 67
Expected count 27.1 80.9
Comments pointing out that the news is fake (group 3) Count 12 131
Expected count 35.9 107.1
Supportive comments from users and a disclaimer (group 4) Count 37 79
Expected count 29.1 86.9
Total count 127 379
Total expected count 127 379
Appendix 1. Stimuli used in study 1
Post with supportive comments
Post with comments pointing out that the story is fake
Post with comments pointing out that the story is fake and attacking the poster
Appendix 2. Stimuli used in study 2
Post with no comments or disclaimers
Post with supportive comments
Post with comments pointing out that the story is fake
Post with supportive comments and a disclaimer
References
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. The
Journal of Economic Perspectives, 31(2), 211–236.
Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1–70.
Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health
misinformation on social media. Health Communication, 33(9), 1131–1140.
Braun, K. A., Ellis, R., & Loftus, E. F. (2002). Make my memory: How advertising can
change our memories of the past. Psychology and Marketing, 19, 1–23.
Breitsohl, J., Wilcox-Jones, J. P., & Harris, I. (2015). Groupthink 2.0: An empirical ana-
lysis of customers’ conformity-seeking in online communities. Journal of Customer
Behaviour, 14(2), 87–106.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new
source of inexpensive, yet high-quality, data? Perspectives on Psychological Science,
6(1), 3–5.
Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracín, D. (2017). Debunking: A meta-
analysis of the psychological efficacy of messages countering misinformation.
Psychological Science, 28(11), 1531–1546.
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity.
Annual Review of Psychology, 55, 591–621.
Cialdini, R. B., & Trost, M. R. (1998). Social influence: Social norms, conformity and
compliance. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.). The handbook of social
psychology (pp. 151–192). Boston, MA: McGraw-Hill.
Colliander, J., & Marder, B. (2018). ‘Snap happy’ brands: Increasing publicity effective-
ness through a snapshot aesthetic when marketing a brand on Instagram. Computers
in Human Behavior, 78, 34–43.
Colliander, J., & Wien, A. (2013). Trash talk rebuffed: What can we learn from the
phenomenon of consumers defending companies criticized in online communities?
European Journal of Marketing, 47(10), 1733–1757.
Dahlén, M., Rosengren, S., Törn, F., & Öhman, N. (2008). Could placing ads wrong be
right? Journal of Advertising, 37(3), 57–67.
Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social
influences upon individual judgment. Journal of Abnormal and Social Psychology,
51(3), 629–636.
Dewey, C. (2016). Facebook fake-news writer: 'I think Donald Trump is in the White House
because of me’. Washington Post. Retrieved from: https://www.washingtonpost.com/
news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-
trump-is-in-the-white-house-because-of-me/?utm_term=.f609b6682faa, Accessed
date: 3 December 2018.
Edelson, M., Sharot, T., Dolan, R. J., & Dudai, Y. (2011). Following the crowd: Brain
substrates of long-term memory conformity. Science, 333(6038), 108–111.
Flynn, K. (2017). Facebook abandons an attempt to curb fake news. Here’s why. Mashable.
Retrieved from: https://mashable.com/2017/12/21/facebook-fake-news-abandon-
disputed-flag-related-articles/?europe=true#SwKMS5TSaaqI, Accessed date: 3
December 2018.
Fox, J., & Tang, W. Y. (2014). Sexism in online video games: The role of conformity to
masculine norms and social dominance orientation. Computers in Human Behavior, 33,
314–320.
Gillin, J. (2018). If you’re fooled by fake news, this man probably wrote it. Politifact.
Retrieved from: https://www.politifact.com/punditfact/article/2017/may/31/If-
youre-fooled-by-fake-news-this-man-probably-wro/, Accessed date: 3 December
2018.
Hamilton, R. W., Schlosser, A., & Chen, Y.-J. (2017). Who’s driving this conversation?
Systematic biases in the content of online consumer discussions. Journal of Marketing
Research, 54(4), 540–555.
Huang, M., Cai, F., Tsang, A. S. L., & Zhou, N. (2011). Making your online voice loud: The
critical role of WOM information. European Journal of Marketing, 45(7/8),
1277–1297.
Hunt, E. (2017). ‘Disputed by multiple fact-checkers’: Facebook rolls out new alert to combat
fake news. Guardian. Retrieved from: https://www.theguardian.com/technology/
2017/mar/22/facebook-fact-checking-tool-fake-news, Accessed date: 3 December
2018.
Lazer, D. M. J., Baum, M., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al.
(2018). The science of fake news. Science, 359(6380), 1094–1096.
Nam, J., Lee, Y., Youn, N., & Kwon, K.-M. (2016). Nostalgia’s fulfilment of agentic and
communal needs: How different types of self-concepts shape consumer attitudes
toward nostalgia. Journal of Consumer Behaviour, 15, 303–313.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political mis-
perceptions. Political Behavior, 32(2), 303–330.
Parkinson, H. J. (2016). Click and elect: How fake news helped Donald Trump win a real
election. Guardian. Retrieved from: https://www.theguardian.com/commentisfree/
2016/nov/14/fake-news-donald-trump-election-alt-right-social-media-tech-
companies, Accessed date: 3 December 2018.
Pew Research Center (2016). Many Americans believe fake news is sowing confusion.
Retrieved from: http://www.journalism.org/2016/12/15/many-americans-believe-
fake-news-is-sowing-confusion/, Accessed date: 3 December 2018.
Read, M. (2016). Donald Trump won because of Facebook. New York Magazine. Retrieved
from: http://nymag.com/intelligencer/2016/11/donald-trump-won-because-of-
facebook.html, Accessed date: 3 December 2018.
Rosander, M., & Eriksson, O. (2012). Conformity on the internet – the role of task diffi-
culty and gender differences. Computers in Human Behavior, 28(5), 1587–1595.
Saslow, E. (2018). ‘Nothing on this page is real’: How lies become truth in online America. The
Washington Post. Retrieved from: https://www.washingtonpost.com/national/
nothing-on-this-page-is-real-how-lies-become-truth-in-online-america/2018/11/17/
edd44cc8-e85a-11e8-bbdb-72fdbf9d4fed_story.html?utm_term=.784a7d367485,
Accessed date: 3 December 2018.
Silverman, C. (2016). This analysis shows how fake election news stories outperformed real
news on Facebook. Buzzfeed News. Retrieved from: https://www.buzzfeednews.com/
article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-
facebook, Accessed date: 3 December 2018.
Teunissen, H. A., Spijkerman, R., Prinstein, M. J., & Cohen, G. L. (2012). Adolescents’
conformity to their peers’ pro-alcohol and anti-alcohol norms: The power of popu-
larity. Alcoholism: Clinical and Experimental Research, 36(7), 1257–1267.
Tsikerdekis, M. (2013). The effects of perceived anonymity and anonymity states on
conformity and groupthink in online communities: A Wikipedia study. Journal of the
American Society for Information Science and Technology, 64(5), 1001–1015.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146–1151.
Watanabe, K. (2017). The spread of the Kremlin’s narratives by a western news agency
during the Ukraine crisis. Journal of International Communication, 23(1), 138–158.
Weinreich, H., Obendorf, H., Herder, E., & Mayer, M. (2008). Not quite the average: An
empirical study of Web use. ACM Transactions on the Web, 2(1), 1–31.
Wiggins, J. S. (1991). Agency and communion as conceptual coordinates for the under-
standing and measurement of interpersonal behavior. In M. G. William, & D. Cicchetti
(Eds.). Thinking clearly about psychology (pp. 89–113). Minneapolis, MN: University of
Minnesota Press.
Williams, K. D., Cheung, C. K. T., & Choi, W. (2000). Cyberostracism: Effects of being
ignored over the internet. Journal of Personality and Social Psychology, 79, 748–762.
Winter, S., Bruckner, C., & Krämer, N. C. (2015). They came, they liked, they commented:
Social influence on Facebook news channels. Cyberpsychology, Behavior, and Social
Networking, 18(8), 431–436.
Zhu, H., & Huberman, B. A. (2014). To switch or not to switch: Understanding social
influence in online choices. American Behavioral Scientist, 58(10), 1329–1344.
Conformity Feedback in an Online Review Helpfulness Evaluation Task Leads to Less Negative Feedback-Related Negativity Amplitudes and More Positive P300 Amplitudes
Daomeng Guo
Wuhan University and Hubei
Engineering University
Yang Zhao, Liyi Zhang, and Xuan Wen
Wuhan University
Cong Yin
Chongqing University of Technology
Compared with an offline context, the sources of online review are typically unknown,
with more variable opinions from multiple individuals. This variability can make it
difficult for consumers to judge the consensus of others’ views using only direct social
clues. However, few studies have focused on the brain’s processing of mixed opinions
in an online context. In this study, an experiment that involved voting on the helpful-
ness of online reviews was designed to investigate how participants processed their
personal views alongside others’ views. A total of 32 participants were asked to decide
whether each online review was helpful and were then given feedback regarding how
many people found each review helpful. Participants’ voting behaviors and conformity
feedback-related event-related brain potentials (ERPs) were recorded and analyzed.
Participants rated positive reviews as more helpful than negative reviews. Response
times were longer when participants evaluated negative reviews. Therefore, the nega-
tivity bias of reviews may not result from the review’s helpfulness but rather from the
cognitive processing involved in the evaluation of the reviews. Further ERP analysis
showed that the incongruence of participants’ choices with the relative majority
opinion
generated from a ranking of a review’s helpfulness elicited more negative-going
feedback-related negativity and less positive-going P300 than did the condition of their
choices’ congruence with the relative majority opinion. This finding suggests that
incongruence with the relative majority opinion was processed as negative feedback
due to expectation violation, whereas congruence with the relative majority opinion
was processed as positive feedback for conformity. Furthermore, the feedback-related negativity response elicited by the trials of inconsistency with relative majority opinions during the early period was smaller than that in the later period, whereas the P300 response elicited by the trials of consistency with relative majority opinions in the early period was greater than that in the later period. The ERP results suggest that even in an online context, the brain can automatically encode the relative majority opinion by learning from a comparison of other visible social cues, and automatically categorize whether one's personal views are consistent with those of the relative majority.
This article was published Online First February 25,
2019.
Daomeng Guo, School of Information Management, Wu-
han University, and School of Economics and Management,
Hubei Engineering University and Hubei Micro and Small
Businesses Development Research Center; Yang Zhao, Liyi
Zhang, and Xuan Wen, School of Information Management,
Wuhan University; Cong Yin, Chongqing Intellectual Prop-
erty School, Chongqing University of Technology.
This work was supported by grants 71373192 and 71874126
from the National Natural Science Foundation of China;
Daomeng Guo and Yang Zhao contributed equally to this
work and should be considered cofirst authors.
Correspondence concerning this article should be ad-
dressed to Liyi Zhang, School of Information Manage-
ment, Wuhan University, 299# Bayi Road, Wuhan 430072,
People’s Republic of China. E-mail: lyzhang@whu
.edu.cn
This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
Journal of Neuroscience, Psychology, and Economics
© 2019 American Psychological Association 2019, Vol. 12, No. 2, 73–87
1937-321X/19/$12.00 http://dx.doi.org/10.1037/npe0000102
Keywords: online review helpfulness, social judgment, negativity bias, relative majority,
event-related potential
Amazon.com asks, “Was this review helpful
to you?” after each online review and then
shows the votes of review helpfulness alongside
the review (e.g., “30 people found this help-
ful”), and positions the most helpful reviews
more prominently on the product information
page. These methods are expected to help con-
sumers overcome information overload and im-
prove the effective use of online reviews. How-
ever, these methods may influence consumers’
perception of online review helpfulness. Previ-
ous studies have primarily focused on extract-
ing features from vote-upon reviews to explain
or predict the helpfulness of reviews (Baek,
Lee, Oh, & Ahn, 2015; Cao, Duan, & Gan,
2011; Hong, Xu, Wang, & Fan, 2017; Karimi &
Wang, 2017; Lee, Jeong, & Lee, 2017;
Mudambi & Schuff, 2010; Schindler & Bickart,
2012; Sen & Lerman, 2007; Wang, Li, & Sun,
2016; Wu, 2013; Yang, Chen, & Bao, 2017;
Zhao, Ni, & Zhou, 2018). Those studies rested on the assumption that consumers' helpfulness evaluations reflected their real experiences and thoughts. In reality, consumers may post public
opinions that are inconsistent with their private
views due to social pressure. For example, con-
sumers tend to post negative comments when
they see negative comments from others, de-
spite a positive personal experience with the
product (Schlosser, 2005). In addition, the vast
majority of individuals in an online context are
lurkers (those not posting their opinions). Some
studies have found differences in online reviews
between lurkers (their private reviews) and
posters as a result of less social pressure on
lurkers (Moe & Schweidel, 2012; Schlosser,
2005). This finding may explain why the re-
views predicted to be helpful by models are
often not recognized by most consumers (Wang
et al., 2016). However, the question remains as
to how people process a vote for a review and
perceive the social pressure of mixed opinions
represented by the number of votes in an online
context.
A substantial amount of research has shown
that people’s opinions or behaviors are suscep-
tible to the influence of others. People are mo-
tivated by an accurate perception of reality or by
social identity to conform to others (Cialdini &
Goldstein, 2004). In an online review context,
many studies have revealed the existence of
social influence. The closer a review's score is to the average product score, the higher the perceived helpfulness of the review (Baek et al., 2015); this
finding suggests an effect of social influence on
the perception of the helpfulness of online re-
views. However, other individuals show anti-
conformity behavior to generate a sense of
uniqueness and personal identity. For example,
consumers can post negative comments as a
way to differentiate themselves from other
members of a group (Moe & Schweidel, 2012).
Current neuroimaging studies have provided
more direct evidence and profound explanations
for herd behavior. When people’s responses
conflict with the group opinion, the ventral
striatum is deactivated, and the regions of the
posterior medial frontal cortex and anterior in-
sula are activated. The activation of posterior
medial frontal cortices can predict subsequent
behavior conformity (Wu, Luo, & Feng, 2016).
However, most studies have focused on social
behavior in face-to-face environments (Thomas
& Vinuales, 2017), whereas normative influ-
ence is much weaker in a virtual environment
(Perfumi, Cardelli, Bagnoli, & Guazzini, 2016).
Processing the consensus of others’ views is the
basis on which people decide what kind of
information strategies to adopt next (to remain
independent or to conform). There are clear
social clues in a face-to-face environment that
influence the social judgment of the consensus
of others, such as group structure and informa-
tion source. However, in an online context, the
information sources are usually unknown (Nay-
lor, Lamberton, & Norton, 2011), with groups
of varying size. Taking the feedback “30 people
found this helpful” as an example, consumers
do not know who these 30 people are, how
many lurkers are behind the 30 people, or what
their views are. In addition, the highly dispersed
opinion in an online context can exacerbate the
difficulty of relying on experience in the formation of social judgments. Consumers in an online
environment face the challenge of learning from
multiple individuals whose choices emerge
from unobservable, latent social groups (Gersh-
man, Pouncy, & Gweon, 2017). Although Per-
fumi et al. (2016) studied social influence in the
virtual environment, this study focused only on
the neuroresponse differences between partici-
pants’ behaviors of conforming to group views
and insisting on their own independence in a
series of Asch, cultural, and apperceptive tasks.
Furthermore, the subjects in the experiment had
seen the answers of the other group members
before making their own choices, which makes
it difficult to distinguish between their personal
private views and the views of the others. Thus, we still do not know how users weigh the views of others against their personal views in an online environment. Furthermore, the subjects
in their experiment were studied in relation to a
fixed membership of six. Unlike face-to-face
environments, online environments are full of
digital user views and behaviors, rendering
comparisons of the behaviors of net users more
convenient and extensive. Exploring how users
process and perceive the number representing
the views or behaviors of others can help us to
understand users’ herd or nonherd behavior in
an online environment.
Therefore, this study attempts to use event-
related brain potentials (ERPs) to reveal how
people process their personal views with others’
views, as represented by a series of numbers in
an online context. Social comparison is a com-
mon phenomenon in social life (Festinger,
1954). Uncertainty often leads to relevant social
comparisons. Previous studies have shown that
people working in a virtual environment tend to
look for an objective comparison criterion for
their personal job performance because it is
difficult to find directly comparable objects
(such as colleagues; Conner, 2003). In addition,
an individual occupying a given social space is
more likely to be influenced by the local numer-
ical majority than by either the local numerical
minority or less proximate persons (Latané,
1996). Therefore, we propose the following hy-
pothesis: The uncertainty about others’ opinion
arising from a single feedback source of review
helpfulness will inspire people to compare the
votes of other visible reviews to build an eval-
uation criterion of numerical relative majority,
which will then be used to evaluate their per-
sonal opinion in relation to other opinions. We
developed a task in which participants were
asked to judge the helpfulness of reviews while
their brain potentials were recorded. After a
participant made his or her initial choice, he or
she was provided with feedback regarding how
many people found each review helpful. The
participant’s choice may be consistent or incon-
sistent with the relative majority opinion, which
allows us to examine how the brain processes
this social comparison information. In this con-
text, the relative majority opinion means the
following: If a review receives a higher number
of helpfulness votes than do other visible re-
views, the relative majority opinion is that the
review is helpful; otherwise the relative major-
ity opinion is that the review is unhelpful. Un-
like the experiments previously noted, the par-
ticipants in this study did not know how many
people were involved in the online helpfulness
voting or who was involved in the vote.
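Stated as code, the relative-majority rule just described can be sketched as follows. This is a minimal illustration; the function names are ours, not from the study's materials.

```python
def relative_majority_opinion(votes, other_visible_votes):
    # Per the definition above: a review whose helpfulness votes exceed
    # those of every other visible review carries the relative majority
    # opinion "helpful"; otherwise it is deemed unhelpful.
    return "helpful" if all(votes > v for v in other_visible_votes) else "unhelpful"


def is_congruent(personal_choice, votes, other_visible_votes):
    # True when the participant's own rating matches the relative
    # majority opinion (condition C1); False is the incongruent case (C2).
    return personal_choice == relative_majority_opinion(votes, other_visible_votes)
```

For example, a review with 35 votes shown next to reviews with 12 and 18 votes carries the relative majority opinion "helpful", regardless of how many lurkers stand behind those numbers.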
We focused on feedback-related negativity
(FRN) and P300. FRN, a negative-going ERP
component that peaks at approximately 250 to
300 ms, with the largest amplitude at the
frontocentral recording sites, has been asso-
ciated with outcome evaluation and perfor-
mance monitoring (Hajcak, Moser, Holroyd,
& Simons, 2006; Hajcak, Moser, Holroyd, &
Simons, 2007; Kimura, Murayama, Miura, &
Katayama, 2013; Masaki, Takeuchi, Gehring,
Takasawa, & Yamazaki, 2006). Previous stud-
ies have suggested that the FRN amplitude is
more pronounced for negative feedback, such as
incorrect responses, monetary losses, and viola-
tions of expectancy, than for positive feedback
(Wu & Zhou, 2009). Importantly, recent studies
have found that social norms violations, such as
conflict with the group opinion, also elicit more
negative-going FRN (Chen, Wu, Tong, Guan,
& Zhou, 2012). Based on these studies, we
predict that when compared with a participant’s
choice being congruent with the relative major-
ity opinion in the helpfulness judgment task, a
participant’s choice that is incongruent with the
relative majority opinion will elicit a greater
negative-going FRN response. P300 has been
reported as another important ERP component
related to outcome evaluation and reward pro-
cessing and typically peaks at approximately
300 to 400 ms. Previous studies have shown
that P300 was sensitive to the valence and mag-
nitude of reward (Wu & Zhou, 2009; Yeung &
Sanfey, 2004). Recent studies have extended
the role of P300 to the social domain and found
that winning more than others (Wu, Zhang,
Elieson, & Zhou, 2012) or being more attractive
than other faces resulted in a greater P300 am-
plitude (Werheid, Schacht, & Sommer, 2007).
Based on these studies, we predict that a partic-
ipant’s congruence with the relative majority
opinion in the helpfulness judgment task will be
processed as a positive outcome and therefore
elicit a greater positive-going P300 response. In
addition, previous studies showed a decrease in
FRN and P300 amplitudes following learning
(Bellebaum & Daum, 2008; Sailer, Fischmeis-
ter, & Bauer, 2010). The decline in FRN and
P300 amplitudes was thought to be the result of
learning that reduced the motivational signifi-
cance and attentive processing of the feedback
(Sailer et al., 2010). However, unlike the above-
mentioned experiments, the participants in our
experiment did not know the relative majority
of the evaluation criteria before the experiment
but learned from the number of votes cast in the
experiment from the comparative study. Based
on reinforcement learning theory (Berridge,
2012), this evaluation criterion is defined and
strengthened in the brain of subjects as the
number of trials accumulates. In addition, neg-
ative feedback always has more impact than
positive ones (Baumeister, Bratslavsky, Finke-
nauer, & Vohs, 2001), which indicates that the
effect of inconsistency with the relative major-
ity does not decline as the effect of consistency
with the relative majority does. Therefore, we
predicted that the amplitudes of FRN in the
early period of the experiment was lower than
that in the later period of the experiment,
whereas the amplitudes of P300 in the early
period of the experiment would be greater than
those in the later stage of the experiment.
Method
General Experimental Design
This experiment adopted a one-factor within-
subject with a two-level (C1: initial rating of
review helpfulness congruent with the relative
majority opinion vs. C2: initial rating of review
helpfulness incongruent with the relative major-
ity opinion) repeated-measure design. The par-
ticipants were asked to rate the helpfulness of
online reviews (helpful or unhelpful) and were
then provided with feedback on the reviews’
helpfulness (how many people found this re-
view helpful).
Participants
A total of 32 right-handed students from uni-
versities in Wuhan, China (18 women and 14
men, age range 18 –28 years) volunteered and
were paid 50 RMB (approximately $8) to par-
ticipate in this experiment. All the participants
were native Chinese speakers who reported that
they had normal or corrected-to-normal vision
with no history of neurological or mental dis-
ease. In addition, all the participants had more
than 1 year of online shopping experience. In-
formed consent was obtained from each partic-
ipant before the test.
Materials
Online review stimuli were extracted from
the top five best-selling lists of computer mice
and chocolate on Amazon.cn. The subjects were
familiar with the abovementioned product types
and had experience with buying the abovemen-
tioned products online more than once; thus,
they were able to easily complete the helpful-
ness rating task. Computer mice and chocolates
were used for this online review helpfulness
study as typical representatives of search and
experience goods (Yu, Zu, & Sun, 2016). Re-
views that had been posted recently and contained more than 15 words were chosen (10 positive and 10 negative for each item) as the stimulus material set (200 in total). To eliminate the
influence of social information on the percep-
tion of review helpfulness, rating (score), help-
fulness votes, and brand information were re-
moved from each review. In addition, some
content was removed from longer reviews to
ensure the relative consistency of review
lengths without breaking the complete seman-
tics of a given reserved topic. Furthermore, two
postgraduate students in e-commerce reclassi-
fied the valence of the reviews independently,
and only the reviews with consistent categori-
zation were included in the study. We obtained
128 final reviews: 64 positive and 64 negative, with a mean review length of 25 words (SD = 5.05). A total
of 10 product pictures (computer mice: 5; choc-
olate: 5) were chosen from Amazon.cn and dig-
itized at 500 × 500 pixels with the same gray
background to match the reviews.
The feedback on review helpfulness adopted
the style of Amazon.cn: “n people found this
helpful,” where the “n” was manipulated to be
between 30 and 40 (numerical relative majority)
or 10 to 20 (numerical relative minority) by the
software E-Prime (PST, Psychology Software
Tools, Inc., Sharpsburg, Maryland). Therefore,
the stimuli consisted of 10 pictures (S1) � 128
reviews (S2) � 2 categories of review helpful-
ness feedback (S3). The “n” was chosen for the
following reasons: (a) We focused on the rela-
tive size of votes compared with the votes of
other visible reviews, and it was therefore nec-
essary to ensure that “n” would not be processed
directly as a numerical majority or minority; (b)
there should be visible differences in the help-
fulness votes between the numerical relative
majority and the relative minority; (c) in refer-
ence to the mean helpfulness of 15,095 reviews
(76.98 ± 25.63; Baek et al., 2015), selecting an
“n” value that is half of the mean helpfulness
was appropriate for a numerical relative major-
ity; and (d) a pretest in 10 students showed that
they did not directly take the two groups of
numbers as the numerical majority or minority
and that there was a good degree of identifica-
tion between the two groups of numbers.
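The vote manipulation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' E-Prime script; the function name and condition labels are hypothetical.

```python
import random

def make_feedback(condition, rng=random):
    """Draw a helpfulness-vote count "n" for one trial.

    Vote ranges come from the design above: 30-40 signals the
    numerical relative majority, 10-20 the numerical relative
    minority. The feedback string follows the Amazon.cn style
    quoted in the text.
    """
    if condition == "relative_majority":
        n = rng.randint(30, 40)
    elif condition == "relative_minority":
        n = rng.randint(10, 20)
    else:
        raise ValueError(f"unknown condition: {condition}")
    return n, f"{n} people found this helpful"
```

Keeping the two ranges clearly separated (no overlap between 10–20 and 30–40) is what gives the "good degree of identification" between the groups that the pretest verified.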
Procedure
During the experiment, the participant sat in a
comfortable chair in a shielded room. The par-
ticipant was then instructed to read the intro-
duction to the experiment and focus on the
stimuli while avoiding eyeblinks or movement
of the eyes and head. Stimulus presentation and
behavioral response collection were controlled
by the E-Prime software. The stimulus (black
on a white background) was presented in the
center of a 22-in. computer monitor, with a
visual angle of 2.58° × 2.4°.
The experiment procedure is depicted in Fig-
ure 1. Each trial began with a red cross ("+"),
which appeared for 1,000 ms, as a fixation
point. After a 500-ms blank screen, a picture of
the product (S1) was shown for 1,000 ms be-
cause we wanted the participant to focus on the
review evaluation rather than the product. After
another 500 ms blank screen, a review of the
product (S2) was randomly presented and then
disappeared upon the participant’s decision.
The participant was asked to decide as quickly
as possible whether the review was helpful.
After showing the blank screen for another 500
ms, feedback on review helpfulness (S3) was
presented for 2,000 ms. The helpfulness feed-
back was predetermined by a program without
the participant’s knowledge, and the two feed-
back categories were randomly assigned.
The experiment consisted of 200 trials di-
vided into four blocks, and the sequence of
trials in each block was randomly assigned.
There was a 2-min interval after each block for
participants to rest. A practice block of 10 trials
was assigned to familiarize the participants with
the experiment before the formal test. The entire
experiment lasted approximately 30 to 40 min.
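The per-trial timing just described can be summarized in a small sketch; the event labels are ours, and only the durations come from the text.

```python
# Fixed-duration events in one trial (durations in ms). The review
# screen (S2) stays up until the participant responds, so its
# duration is None (response-terminated).
TRIAL_EVENTS = [
    ("fixation_cross", 1000),
    ("blank", 500),
    ("product_picture_S1", 1000),
    ("blank", 500),
    ("review_S2", None),
    ("blank", 500),
    ("helpfulness_feedback_S3", 2000),
]

# Total fixed time per trial, excluding the self-paced review screen
fixed_ms = sum(d for _, d in TRIAL_EVENTS if d is not None)
```

With 5,500 ms of fixed events plus a roughly 2.5-s self-paced review decision per trial, 200 trials and four rest breaks fit the stated 30- to 40-min session length.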
Electroencephalogram Recordings and ERP
Data Processing
Electroencephalogram (EEG) was continu-
ously recorded (bandpass 0.05–100 Hz; sampling rate 1,000 Hz; Chen et al., 2012; Liu et al., 2013) with an Eego Amplifier (ANT Neuro,
Inc., Hengelo, the Netherlands) using an elec-
trode cap with 32 Ag/AgCl electrodes mounted
according to the extended International 10–20 system and referenced to the mastoids. Electrode impedance was kept below 10 kΩ
throughout the experiment (Liu et al., 2013).
Figure 1. Sequence of events in a single trial. See the online article for the color version of this figure.
E-Prime was used to collect all the behavioral
responses including participant choices and re-
sponse times.
Offline data processing was performed using
ASA software (ANT Software BV, Enschede,
the Netherlands). The continuous EEG was re-
referenced to the average of the right and left
mastoids and then digitally filtered with a high-
pass filter at 0.1 Hz and a low-pass filter at 30
Hz (24 dB/octave; Chen et al., 2012; Liu et al.,
2013). Subsequently, electrooculogram artifacts
were corrected using the ASA software. After
artifact correction, the data were segmented into
1,000-ms stimulus-locked epochs from −200 ms (before S3 onset) to 800 ms (after S3 onset), with the first 200 ms of prestimulus as a baseline. Epochs with a deflection exceeding ±80 μV were excluded from the averaging (Chen et
al., 2010, 2012). The remaining epochs were
then averaged for each participant and each
condition (C1, initial rating on review helpful-
ness congruent with the relative majority opin-
ion; C2, initial rating on review helpfulness
incongruent with the relative majority opinion)
to produce the ERP waveforms. Subsequently,
the ERP waveforms were corrected to baseline
(−200 ms to 0 ms) and then grand-averaged across all
the participants in each condition to produce
grand-averaged ERP waveforms. To investigate
the neurophysiologic factors correlating with
the processing of different review helpfulness
evaluation consistency categories, we compared
the amplitudes of the FRN and P300 using a
within-subject repeated-measure analysis of
variance (ANOVA).
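The epoching steps above (baseline correction over the 200-ms prestimulus window, ±80 μV artifact rejection, then averaging) can be sketched as follows, assuming epochs have already been segmented into a NumPy array. This is an illustration under those assumptions, not the ASA pipeline used in the study.

```python
import numpy as np

def average_clean_epochs(epochs, baseline_samples=200, reject_uv=80.0):
    """Baseline-correct stimulus-locked epochs and average the survivors.

    epochs: array of shape (n_epochs, n_samples) in microvolts, sampled
    at 1,000 Hz so the first `baseline_samples` points cover the 200-ms
    prestimulus window. Epochs whose absolute deflection exceeds
    `reject_uv` (the +/-80 microvolt criterion above) are excluded.
    """
    epochs = np.asarray(epochs, dtype=float)
    # Subtract each epoch's mean prestimulus voltage
    baseline = epochs[:, :baseline_samples].mean(axis=1, keepdims=True)
    corrected = epochs - baseline
    # Reject epochs with deflections beyond the threshold
    keep = np.abs(corrected).max(axis=1) <= reject_uv
    # Per-condition ERP waveform = mean over the retained epochs
    return corrected[keep].mean(axis=0), int(keep.sum())
```

Running this once per participant and condition (C1, C2) yields the per-participant waveforms that are then grand-averaged.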
For the FRN, the largest amplitude was typ-
ically located at the frontocentral recording sites
(Hajcak et al., 2006, 2007; Masaki et al., 2006);
thus, the electrode sites of Fz, FC1, FC2, and Cz
were selected for further analysis. According to
the FRN latency and upon waveform visual
inspection (Figure 2), the mean amplitudes of
Fz, FC1, FC2, and Cz in the 250 to 350 ms time
window were analyzed. For P300, the maxi-
mum amplitude was reported at parietal sites
(Wu & Zhou, 2009; Yeung & Sanfey, 2004);
therefore, the electrode sites of CP1, CP2, and
Pz were selected for further analysis.
Figure 2. Grand-averaged event-related brain potential waveforms for two conditions: C1 and C2 at electrode sites: Fz, FC1, FC2, and Cz (μV/ms). See the online article for the color version of this figure.
According
to the P300 latency and waveform visual in-
spection (Figure 3), the mean amplitudes of
CP1, CP2, and Pz in the 300 to 400 ms time
window were analyzed. Then, we performed a
within-subject repeated-measure ANOVA of
the mean amplitudes of FRN and P300 using the
software SPSS 22.0 (SPSS Inc., Chicago, Illi-
nois). The Greenhouse-Geisser correction for
violation of the assumption of sphericity was
applied, and the Bonferroni correction was used
for multiple comparisons.
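The planned paired contrasts can be sketched as follows: a stdlib-only illustration of the paired t statistic computed on per-participant mean amplitudes. The study itself ran the analyses in SPSS; `scipy.stats.ttest_rel` would be the usual library equivalent in Python.

```python
import math

def paired_t(c1_amps, c2_amps):
    """Paired t statistic on per-participant mean amplitudes.

    c1_amps / c2_amps: one mean amplitude per participant for the
    congruent (C1) and incongruent (C2) conditions, as in the
    planned contrasts of the FRN and P300 analyses.
    """
    diffs = [a - b for a, b in zip(c1_amps, c2_amps)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the within-participant differences
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)  # df = n - 1
```

The omnibus repeated-measures ANOVA with Greenhouse-Geisser and Bonferroni corrections is not reproduced here.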
Results
Manipulation Check
A paired t test was used to compare the trial
numbers between positive reviews (M = 100.28 ± 2.29) and negative reviews (M = 99.72 ± 2.29) for each participant. The difference between positive and negative reviews was not significant (t = 0.695, p > .05). The trial numbers of
each condition (conformity feedback, M = 92.03 ± 6.34, vs. nonconformity feedback, M = 107.97 ± 6.34) for each participant were
counted; all the trial numbers of each condition
for each participant were more than 30, which
met the requirements for an ERP experiment
(Luck, 2005).
Behavioral Data
A total of 6,400 behavioral records (200 items per subject, including review helpfulness ratings and response times) were collected by E-Prime. For each participant, response times greater than ±2 SDs from the mean in each condition were excluded from the helpfulness review rating and response time analyses (Liu et al., 2013). Finally, 6,120 behavioral records were retained (280 items were removed).
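The ±2 SD trimming rule above can be sketched as follows; this is an illustrative reconstruction, with the per-condition grouping omitted for brevity.

```python
import numpy as np

def trim_response_times(rts, n_sd=2.0):
    """Exclude trials whose response time lies more than n_sd standard
    deviations from that condition's mean (the +/-2 SD rule above)."""
    rts = np.asarray(rts, dtype=float)
    keep = np.abs(rts - rts.mean()) <= n_sd * rts.std()
    return rts[keep]
```

Applied per participant and per condition, this yields the 6,120 retained records reported above.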
A paired t test on the helpful review rates between positive (M = 0.77 ± 0.11) and negative reviews (M = 0.69 ± 0.13) for each participant showed that the helpful review ratings for positive reviews were significantly higher than were those for negative reviews (t = 3.455, p < .01), indicating that participants more often rated positive reviews to be helpful than they did negative reviews. Interestingly, a paired t test on response times for positive reviews (M = 2,518.52 ± 840.63) and negative reviews (M = 2,771.55 ± 1,154.88) found that the
response time for negative reviews was significantly higher than that for positive reviews (t = 3.619, p < .01), indicating that participants responded faster to positive reviews (Table 1).
Figure 3. Grand-averaged event-related brain potential waveforms for two conditions: C1 and C2 at electrode sites: CP1, CP2, and Pz (μV/ms). See the online article for the color version of this figure.
To further compare the differences in re-
sponse times for the early and late phase con-
ditions (Bellebaum & Daum, 2008; Kraus &
Horowitz-Kraus, 2014; Sailer et al., 2010), we
divided each subject’s trials into two parts ac-
cording to the presentation sequence of the trials
(T1 and T2, each part for 100 trials). The results
of the paired t test on response time for T1 (M = 2,642.38 ± 905.96) and T2 (M = 2,589.63 ± 1,117.09) showed that the differences between
the early phase and the late phase were not
significant.
Feedback-Related Negativity
As shown in Figure 2, an obvious negative
deflection was elicited with a peak at approxi-
mately 300 ms. A two-factor 2 (condition: conformity feedback and nonconformity feedback) × 4 (frontocentral: Fz, FC1, FC2, and Cz) within-subjects repeated-measures ANOVA of the mean ERP amplitude between 250 and 350 ms showed that the main effect of condition (conformity feedback and nonconformity feedback) was significant, F(1, 31) = 18.112, p < .001. This result indicates that the mean ERP
amplitude across the four electrodes in the time
window for nonconformity feedback was sig-
nificantly different from that for conformity
feedback. A paired t test on the mean ERP
amplitude between the conditions of conformity
feedback and nonconformity feedback found
that the negative amplitude of the conformity
feedback condition was significantly smaller
than that of the nonconformity feedback condi-
tion (t = 4.256, p < .001), indicating that the nonconformity feedback condition elicited more negative-going FRN than did the conformity feedback condition. The main effect of electrode location was also significant, F(3, 93) = 9.671, p < .001, but the interaction effect between the two factors was not significant, F(3, 93) = 1.224, p > .05; this finding indi-
cates that the amplitudes were significantly
different among the four electrodes but that
the main effect of condition (conformity feed-
back, nonconformity feedback) was not af-
fected by the electrode position. Subsequent
paired t tests on mean ERP amplitude among
the abovementioned four electrodes showed that the greatest main effect of condition (conformity feedback vs. nonconformity feedback) was on FC2 (Table 2).

Table 1
Mean Helpful Reviews Rate and Mean Response Time Across Two Conditions

Condition          Helpful reviews rate (%)   SD     t value    Response time (ms)   SD         t value
Positive review    76.75                      .11    3.455**    2,518.52             840.63     −3.619**
Negative review    69.10                      .13               2,771.55             1,154.88

Note. Helpful reviews rate is the percentage of "helpful" choices among all valid choices.
** p < .01.

Table 2
Mean Feedback-Related Negativity and Planned Contrasts in Time Window of 250 to 350 ms (μV)

Electrode site    C1: Congruent with the relative majority opinion, M (SD)    C2: Incongruent with the relative majority opinion, M (SD)    t value
Fz                −0.39 (2.31)    −1.00 (2.14)    2.729*
FC1               −0.51 (2.20)    −1.26 (2.01)    3.693**
FC2               1.00 (2.64)     0.09 (2.44)     5.581***
Cz                −0.14 (2.59)    −0.97 (2.07)    3.524**
Frontocentral     −0.01 (2.20)    −0.79 (1.90)    4.256***

Note. |C1| < |C2|, F = 18.112, p < .001.
* p < .05. ** p < .01. *** p < .001.
To further compare the differences in the
FRN between the early and late periods (Belle-
baum & Daum, 2008; Kraus & Horowitz-
Kraus, 2014; Sailer et al., 2010), we divided
each subject’s trials into two parts according to
the presentation sequence of the trials (T1 and
T2, each part for 100 trials). A three-factor 2
(condition: conformity feedback and noncon-
formity feedback) × 2 (period: T1 and T2) × 4 (frontocentral: Fz, FC1, FC2, and Cz) within-subjects repeated-measures ANOVA of the mean ERP amplitude in the interval 250 to 350 ms showed that the main effect of period (T1 and T2) was significant, F(1, 31) = 4.472, p < .05. This result indicates that the mean ERP
amplitude across the four electrodes in the early
period was significantly different from that in
the later period. A further paired t test on the
mean ERP amplitude between T1 and T2 found
that, except for site FC2, the negative ampli-
tudes of T1 for nonconformity feedback at sites
Fz (t = 3.255, p < .01), FC1 (t = 2.092, p < .05), and Cz (t = 2.272, p < .05) were signif-
icantly smaller than those for T2. However, all
differences for conformity feedback between
T1 and T2 across the four sites were not signif-
icant, indicating that the nonconformity feed-
back condition in the later period elicited more
negative-going FRN than did the nonconfor-
mity feedback condition in the early period at
sites Fz, FC1, and Cz (Table 3).
P300
In Figure 3, an obvious positive deflection
can be observed near 350 ms. A two-factor 2
(condition: conformity feedback and noncon-
formity feedback) × 3 (parietal: CP1, CP2, and Pz) within-subjects repeated-measures ANOVA on the mean ERP amplitude between 300 and 400 ms was performed. The results showed that the main effect of condition (conformity feedback and nonconformity feedback) was significant, F(1, 31) = 17.776, p < .001, along with a significant main effect of electrode location, F(2, 62) = 4.368, p < .05, but the interaction effect between condition and location was not significant, F(2, 62) = 0.193, p > .05. This
result indicates that the main effect of condition
(conformity feedback vs. nonconformity feed-
back) was significant and was not affected by
electrode location. A paired t test on the mean
ERP amplitude between the conformity condi-
tion and the nonconformity feedback condition
revealed that the amplitude of the conformity
feedback condition was significantly greater
than that of the nonconformity feedback condi-
tion (t = 4.216, p < .001), indicating that the
conformity feedback condition elicited more
positive-going P300 than did the nonconformity
feedback condition. Further paired t tests on
the mean ERP amplitude among the above-
mentioned three electrodes showed that the
largest main effect of condition (conformity
feedback vs. nonconformity feedback) was on
CP2 (Table 4).
To further compare the differences in P300 between the experimental periods, a three-factor 2 (condition: conformity feedback and nonconformity feedback) × 2 (period: T1 and T2) × 3 (parietal: CP1, CP2, and Pz) within-subjects repeated-measures ANOVA of the mean ERP amplitude between 300 and 400 ms was conducted; the results showed that the main effect of period (T1 vs. T2) was significant, F(1, 31) = 4.619, p < .05. This result indicates that
the mean ERP amplitude across the four elec-
trodes in the early period was significantly dif-
Table 3
Mean Feedback-Related Negativity and Planned
Contrasts Between T1 and T2 in Time Window of
250 to 350 ms (�V)
Electrode site
T1: First
half trials
T2: Second
half trials
t valueM SD M SD
Fz
C1 �.36 2.70 �0.31 2.17 �0.152
C2 �.55 2.43 �1.42 2.11 3.255��
FC1
C1 �.24 2.31 �0.72 2.49 1.364
C2 �.96 2.18 �1.57 2.13 2.092�
FC2
C1 .93 2.92 1.23 2.58 �0.985
C2 .27 2.60 0.03 2.30 1.011
Cz
C1 .15 2.69 �0.31 2.83 1.257
C2 �.58 1.96 �1.25 2.29 2.272�
Note. C1 � congruent with the relative majority opinion;
C2 � incongruent with the relative majority opinion.
� p
.05. �� p
.01.
81ERPS IN ONLINE REVIEW HELPFULNESS EVALUATION TASK
T
hi
s
do
cu
m
en
t
is
co
py
ri
gh
te
d
by
th
e
A
m
er
ic
an
P
sy
ch
ol
og
ic
al
A
ss
oc
ia
ti
on
or
on
e
of
it
s
al
li
ed
pu
bl
is
he
rs
.
T
hi
s
ar
ti
cl
e
is
in
te
nd
ed
so
le
ly
fo
r
th
e
pe
rs
on
al
us
e
of
th
e
in
di
vi
du
al
us
er
an
d
is
no
t
to
be
di
ss
em
in
at
ed
br
oa
dl
y.
ferent from that in the later period. Further
paired t test on the mean ERP amplitude be-
tween T1 and T2 found that, except for site
CP2, the amplitudes of T1 for the conformity
feedback condition at sites CP1(t � 2.395, p
.05) and Pz (t � 2.362, p
.05) were signifi-
cantly larger than those of T2; however, all
differences for conformity feedback between
T1 and T2 across the three sites were not sig-
nificant, indicating that the conformity feedback
condition in the early period elicited more pos-
itive-going P300 than did the conformity feed-
back condition in the later period at sites CP1
and Pz (Table 5).
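The dependent variable in these analyses is the mean ERP amplitude within a fixed post-feedback window (250–350 ms for the FRN, 300–400 ms for the P300). A sketch of how such a window mean is typically extracted from epoched data follows; the array layout and function name are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def window_mean(epochs, times, t_start, t_end):
    # epochs: (n_trials, n_samples) voltages for one electrode;
    # times: (n_samples,) sample times in seconds, aligned to feedback onset.
    # Returns each trial's mean amplitude within [t_start, t_end).
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, mask].mean(axis=1)

# Toy example: 2 trials, 4 samples at 0, 100, 200, and 300 ms.
epochs = np.array([[0.0, 1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0, 7.0]])
times = np.array([0.0, 0.1, 0.2, 0.3])
p300_like = window_mean(epochs, times, 0.1, 0.3)  # uses samples at 100 and 200 ms
```

Averaging these per-trial values within condition (and within the T1/T2 halves) yields the cell means that enter the ANOVAs and the tables.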
Discussion
The behavioral results of the helpfulness review rating task indicated that participants rated positive reviews as more helpful than negative reviews. Negativity bias is a widespread and widely recognized phenomenon; however, research conclusions on this topic in the context of online reviews have been inconsistent. Several previous studies found that negative reviews were more helpful than positive reviews (Cao et al., 2011; Lee et al., 2017), whereas other studies suggested that negative reviews were not more helpful than positive reviews (Mudambi & Schuff, 2010; Sen & Lerman, 2007; Wu, 2013). Interestingly, participants responded significantly more rapidly to positive reviews than to negative reviews. Previous studies have shown that response time is positively associated with cognitive load (Cowen, Ball, & Delin, 2002; Sweller, 1988). The proposed causes of negativity bias in reviews include the greater surprise value of negative reviews and their usefulness for avoiding losses (Yin, Mitra, & Zhang, 2012). The surprise value of negative reviews may more easily capture people's attention (Carretié, Mercado, Tapia, & Hinojosa, 2001) but cannot guarantee the perceived value of negative reviews (Chen et al., 2010). In our experiment, positive and negative reviews were presented randomly with the same probability; thus, the influence of the novelty of negative reviews would be weakened. Negative reviews are typically associated with risks, which may cause participants to spend more time assessing potential risks. In a laboratory environment, it is difficult to guarantee participants' purchase motivation; thus, the value of negative reviews for avoiding losses might be discounted. In contrast, the relatively low cognitive load of positive reviews means that subjects do not require excessive cognitive effort to make decisions about a review's helpfulness, which may explain why positive reviews received a higher rate of "helpful" votes. In addition, the lack of other social cues in the online review helpfulness task might have caused the participants to perceive less social pressure. Previous studies have shown that negativity bias emerges in the context of public opinions but not in private opinions or thoughts (Schlosser, 2005). Consequently, the response times, together with the behavioral results of the review helpfulness ratings, indicate that negativity bias in the context of reviews may lie not in the perception of a review's helpfulness but rather in the cognitive processing of such reviews. This issue clearly requires further investigation.

Table 4
Mean P300 and Planned Contrasts in Time Window of 300 to 400 ms (μV)

                      C1: Congruent with the       C2: Incongruent with the
                      relative majority opinion    relative majority opinion
Electrode site        M        SD                  M        SD                 t value
CP1                   1.06     2.94                −0.07    2.66               3.970***
CP2                   1.75     3.05                 0.58    2.33               4.673***
Pz                    1.14     3.01                 0.05    2.69               3.750**
Across three sites    1.32     2.88                 0.19    2.43               4.216***

Note. |C1| > |C2|, F = 17.776, p < .001.
** p < .01. *** p < .001.

Table 5
Mean P300 and Planned Contrasts Between T1 and T2 in Time Window of 300 to 400 ms (μV)

Electrode site    T1: First half trials    T2: Second half trials    t value
                  M        SD              M        SD
CP1   C1          1.57     2.95            0.71     3.27              2.395*
      C2          0.08     2.75           −0.33     2.85              1.455
CP2   C1          2.06     3.13            1.63     3.15              1.450
      C2          0.72     2.16            0.50     2.51              0.987
Pz    C1          1.62     2.90            0.81     3.41              2.362*
      C2          0.03     2.76           −0.09     2.84              0.445

Note. C1 = congruent with the relative majority opinion; C2 = incongruent with the relative majority opinion.
* p < .05.

GUO, ZHAO, ZHANG, WEN, AND YIN
FRN is an ERP component that is closely related to outcome evaluation. Similar to the results of a study on group opinion (Chen et al., 2012), a significant difference in FRN amplitude was observed in the present study: the stimulus condition of inconsistency with the relative majority opinion elicited a greater FRN response than did consistency with the relative majority opinion. In the studies of Chen et al. (2012) and Perfumi et al. (2016), all subjects were in a group with fixed members; thus, they could judge the degree of consistency of the group opinion directly. In our study, by contrast, it was difficult for participants to judge the consensus opinions of others directly from a vote on a review. Nevertheless, significant differences in FRN amplitude were also observed in the present experiment, indicating that conflicts between personal views and the views of others can be detected by the participant's brain in this context. That is, participants can build an evaluation criterion for the number of votes that represents the relative consensus of others by learning from comparisons among more than one visible vote count. Differences in participants' performance between the early and late phases of a learning task have often been used to test for a learning effect (Bellebaum & Daum, 2008; Kraus & Horowitz-Kraus, 2014; Sailer et al., 2010). The comparison of FRN amplitude between the different periods of the trials showed that the nonconformity feedback condition in the later period elicited a more negative-going FRN than did the nonconformity feedback condition in the early period, confirming our inference. Because a definite evaluation criterion for the number of votes had not yet been fully formed in the early period of the experiment, the stimulus condition of inconsistency with the relative majority opinion elicited a smaller negative-going FRN. As the number of trials increased, however, this criterion was strengthened continuously; thus, a greater negative-going FRN was observed in the later period of the experiment. There was, however, no significant difference in response time between the different periods of the trials, indicating that the learning effect did not concern the speed of processing. Therefore, we concluded that the brain establishes a criterion for judging the relative majority opinion by learning from a comparison of the visible votes on other reviews. The results showed that the brain could automatically encode the relative majority opinion by comparing the visible votes and could detect whether one's personal choice is consistent with the relative majority opinion. Because social pressure in online environments is much less than that in face-to-face environments, the FRN response elicited by inconsistency with the relative majority opinion should be smaller in online environments than in face-to-face environments. It is therefore interesting that the mean FRN amplitudes in the current experiment, mean (Fz, Cz, Pz) = −4.41 ± 1.62 μV for consistency with the relative majority opinion and −4.47 ± 2.02 μV for inconsistency with the relative majority opinion, were far lower than those in the face-to-face experimental environment of Chen et al. (2012), mean (Fz, FCz, Cz, CPz, Pz) = 3.98 ± 1.13 μV for highly incongruent trials, 5.72 ± 1.07 μV for moderately incongruent trials, and 8.56 ± 1.13 μV for congruent trials, indicating that the FRN response was much greater in the online environment than in the face-to-face environment of Chen et al. (2012). Differences in the experimental task and environment may explain this phenomenon. First, compared with the line judgment task of Chen et al. (2012), the task of judging review helpfulness was more subjective and ambiguous. The subjects might therefore have paid more attention to the views of others out of a need for accurate cognition (Cialdini & Goldstein, 2004), allowing them to be more influenced by the informational influence of others' views (votes of helpfulness). Second, compared with the
face-to-face environment, group sizes in the online environment are much larger, and the digital visualization of users' opinions or behaviors makes comparing the views or behaviors of others more convenient and extensive. Together, these factors resulted in a greater FRN response when processing the votes on online reviews. At the same time, the normative influence of the online environment is much weaker than that of face-to-face environments (Perfumi et al., 2016); thus, subjects in online environments worry less about whether their views are consistent with others' views than they do in face-to-face environments. Therefore, the differences in FRN amplitudes between stimulus conditions should not be greater than those in face-to-face environments. A comparison of the main effect of stimulus condition in the present study, F(1, 31) = 18.112, p < .001, with that of Chen et al. (2012), F(2, 36) = 64.57, p < .001, showed that the effect of condition on differences in FRN response was smaller in the online environment than in the face-to-face environment, consistent with our assumption.
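The congruent/incongruent conditions discussed here hinge on inferring the relative majority from visible vote counts. As a toy illustration of that trial labeling (the function and its simple larger-count rule are our assumption, not the authors' procedure):

```python
def feedback_condition(votes_for_chosen, votes_for_other):
    # A trial is labeled C1 (congruent) when the review the participant judged
    # more helpful also carries the larger visible helpfulness vote count,
    # i.e., the participant's choice matches the relative majority opinion;
    # otherwise it is labeled C2 (incongruent).
    return "C1" if votes_for_chosen > votes_for_other else "C2"
```

Under this rule the majority is only "relative": it is recovered by comparing vote counts across reviews, which is exactly the learning-by-comparison process the FRN results suggest.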
In the current study, the P300 was a slightly later ERP component following the FRN, and consistency with the plurality opinion elicited a significantly enhanced deflection. P300 has been associated with outcome evaluation, reward processing, and selective attention (Wu & Zhou, 2009; Yeung & Sanfey, 2004). As shown in a previous study, FRN was sensitive to feedback valence, whereas P300 was sensitive to reward magnitude (Schindler & Bickart, 2012). Therefore, the FRN and P300 are thought to encode different aspects of outcome evaluation. In addition, other studies found that P300 was related to participants' expectations (Hajcak et al., 2006, 2007; Kimura et al., 2013): compared with expected results, feedback beyond expectations elicited more positive P300 amplitudes. Because the present study contained only two levels of review helpfulness feedback, presented randomly, the probability of inconsistency with the relative majority opinion should equal the probability of consistency with it. Thus, the factors of expectation and magnitude that can affect P300 amplitude could be ignored in this context, and the P300 effect in the current experiment is likely to be primarily related to the valence of feedback. Consistency with the relative majority opinion may be encoded as a positive outcome (a reward, such as social approval) by the nervous system. Furthermore, the comparison of P300 amplitudes between the different periods of the trials showed that the P300 response elicited by trials of consistency with the relative majority opinion in the early period was significantly greater than that in the later period, indicating that the marginal effect of consistency with the relative majority declines as the number of trials increases.
Together, the FRN and P300 results indicate that conflict with the relative majority opinion on online review helpfulness triggered a cascade of neuronal responses, including an earlier FRN response monitoring the violation of relative majority opinions and a later P300 response differentiating positive from negative feedback. Furthermore, the FRN response elicited by trials of inconsistency with relative majority opinions in the early period was smaller than that in the later period, whereas the P300 response elicited by trials of consistency with relative majority opinions in the early period was greater than that in the later period. The results confirmed that in an online environment in which it was difficult to clearly judge the consensus views of others, the brain could automatically encode relative majority opinions by learning from a comparison of other visible social cues and automatically categorize whether one's personal views are consistent with those of the relative majority. In addition, although the normative influence of online environments is weaker than that of face-to-face environments, the convenience of comparing users' behaviors and the sheer scale of net users strengthen the informational influence of net user behavior.
To meet the needs of a controlled experiment, the editing of the stimulus materials and their presentation resulted in the loss of some important information. In addition, it was difficult to guarantee the purchase motivation of the participants; thus, the personal relevance of a review may have affected participants' attitudes toward the tasks and, in turn, their judgments and ERP responses to the reviews' helpfulness. Furthermore, a preceding task may affect the FRN response in a subsequent task (Schmidt et al., 2017). These factors may weaken the generalizability of these results to the real world. In addition, the comparison of these results with those of Chen
et al. (2012) may be influenced by differences in the tasks and recording instruments used; thus, the results need to be further verified. Although the current study revealed how the brain processes personal views against others' views in an online context, the information strategies (to remain independent or to conform) that may subsequently be adopted warrant further study. Furthermore, in contrast to helpfulness votes, the opinions presented in online reviews are more complex, with a single review often mixing positive and negative views along multiple dimensions. A previous study showed that people's attribution for dispersion in online reviews was affected by the perceived taste of the product (He & Bond, 2015). How we cognitively process the contradictory and complex views of others in an online environment remains a very important topic. In addition, many other types of information are available online, such as sales figures and total review counts. How we understand these social cues and use them for social judgment or decision making remains to be fully understood.
References
Baek, H., Lee, S., Oh, S., & Ahn, J. H. (2015).
Normative social influence and online review help-
fulness: Polynomial modeling and response sur-
face analysis. Journal of Electronic Commerce
Research, 16, 291–306.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., &
Vohs, K. D. (2001). Bad is stronger than good.
Review of General Psychology, 5, 323–370. http://
dx.doi.org/10.1037/1089-2680.5.4.323
Bellebaum, C., & Daum, I. (2008). Learning-related
changes in reward expectancy are reflected in the
feedback-related negativity. European Journal of
Neuroscience, 27, 1823–1835. http://dx.doi.org/10
.1111/j.1460-9568.2008.06138.x
Berridge, K. C. (2012). From prediction error to
incentive salience: Mesolimbic computation of re-
ward motivation. European Journal of Neurosci-
ence, 35, 1124–1143. http://dx.doi.org/10.1111/j
.1460-9568.2012.07990.x
Cao, Q., Duan, W. J., & Gan, Q. W. (2011). Explor-
ing determinants of voting for the “helpfulness” of
online user reviews: A text mining approach. De-
cision Support Systems, 50, 511–521. http://dx.doi
.org/10.1016/j.dss.2010.11.009
Carretié, L., Mercado, F., Tapia, M., & Hinojosa, J. A.
(2001). Emotion, attention, and the ‘negativity bias’,
studied through event-related potentials. Interna-
tional Journal of Psychophysiology, 41, 75– 85.
http://dx.doi.org/10.1016/S0167-8760(00)00195-1
Chen, J., Wu, Y., Tong, G., Guan, X., & Zhou, X.
(2012). ERP correlates of social conformity in a
line judgment task. BMC Neuroscience, 13, 43.
http://dx.doi.org/10.1186/1471-2202-13-43
Chen, M., Ma, Q., Li, M., Dai, S., Wang, X., & Shu,
L. (2010). The neural and psychological basis of
herding in purchasing books online: An event-
related potential study. Cyberpsychology, Behav-
ior, and Social Networking, 13, 321–328. http://dx
.doi.org/10.1089/cyber.2009.0142
Cialdini, R. B., & Goldstein, N. J. (2004). Social
influence: Compliance and conformity. Annual Re-
view of Psychology, 55, 591– 621. http://dx.doi
.org/10.1146/annurev.psych.55.090902.142015
Conner, D. S. (2003). Social comparison in virtual
work environments: An examination of contempo-
rary referent selection. Journal of Occupational
and Organizational Psychology, 76, 133–147.
http://dx.doi.org/10.1348/096317903321208925
Cowen, L., Ball, L. J. S., & Delin, J. (2002). An eye
movement analysis of web page usability. Inter-
face, 28, 317–335.
Festinger, L. (1954). A theory of social comparison
processes. Human Relations, 7, 117–140. http://dx
.doi.org/10.1177/001872675400700202
Gershman, S. J., Pouncy, H. T., & Gweon, H. (2017).
Learning the structure of social influence. Cogni-
tive Science, 41, 545–575. http://dx.doi.org/10
.1111/cogs.12480
Hajcak, G., Moser, J. S., Holroyd, C. B., & Simons,
R. F. (2006). The feedback-related negativity re-
flects the binary evaluation of good versus bad
outcomes. Biological Psychology, 71, 148–154.
http://dx.doi.org/10.1016/j.biopsycho.2005.04.001
Hajcak, G., Moser, J. S., Holroyd, C. B., & Simons,
R. F. (2007). It’s worse than you thought: The
feedback negativity and violations of reward pre-
diction in gambling tasks. Psychophysiology, 44,
905–912. http://dx.doi.org/10.1111/j.1469-8986
.2007.00567.x
He, S. X., & Bond, S. D. (2015). Why is the crowd
divided? Attribution for dispersion in online word
of mouth. Journal of Consumer Research, 41,
1509–1527. http://dx.doi.org/10.1086/680667
Hong, H., Xu, D., Wang, G. A., & Fan, W. (2017).
Understanding the determinants of online review
helpfulness: A meta-analytic investigation. Deci-
sion Support Systems, 102, 1–11. http://dx.doi.org/
10.1016/j.dss.2017.06.007
Karimi, S., & Wang, F. (2017). Online review help-
fulness: Impact of reviewer profile image. Deci-
sion Support Systems, 96, 39– 48. http://dx.doi.org/
10.1016/j.dss.2017.02.001
Kimura, K., Murayama, A., Miura, A., & Katayama,
J. (2013). Effect of decision confidence on the
evaluation of conflicting decisions in a social con-
text. Neuroscience Letters, 556, 176–180. http://dx
.doi.org/10.1016/j.neulet.2013.09.020
Kraus, D., & Horowitz-Kraus, T. (2014). The effect
of learning on feedback-related potentials in ado-
lescents with dyslexia: An EEG-ERP study. PLoS
ONE, 9, e100486. http://dx.doi.org/10.1371/
journal.pone.0100486
Latané, B. (1996). Dynamic social impact: The cre-
ation of culture by communication. Journal of
Communication, 46, 13–25. http://dx.doi.org/10
.1111/j.1460-2466.1996.tb01501.x
Lee, M., Jeong, M., & Lee, J. (2017). Roles of
negative emotions in customers’ perceived help-
fulness of hotel reviews on a user-generated re-
view website: A text mining approach. Interna-
tional Journal of Contemporary Hospitality
Management, 29, 762–783. http://dx.doi.org/10
.1108/IJCHM-10-2015-0626
Liu, X., Liao, Y., Zhou, L., Sun, G., Li, M., & Zhao,
L. (2013). Mapping the time course of the positive
classification advantage: An ERP study. Cognitive,
Affective and Behavioral Neuroscience, 13, 491–
500. http://dx.doi.org/10.3758/s13415-013-0158-6
Luck, S. J. (2005). An introduction to the event-
related potential technique. Cambridge, MA: MIT
Press.
Masaki, H., Takeuchi, S., Gehring, W. J., Takasawa,
N., & Yamazaki, K. (2006). Affective-motiva-
tional influences on feedback-related ERPs in a
gambling task. Brain Research, 1105, 110–121.
http://dx.doi.org/10.1016/j.brainres.2006.01.022
Moe, W. W., & Schweidel, D. A. (2012). Online
product opinions: Incidence, evaluation, and evo-
lution. Marketing Science, 31, 372–386. http://dx
.doi.org/10.1287/mksc.1110.0662
Mudambi, S. M., & Schuff, D. (2010). What makes a
helpful online review? A study of customer re-
views on amazon.com. MIS Quarterly, 34, 185–200.
http://dx.doi.org/10.2307/20721420
Naylor, R. W., Lamberton, C. P., & Norton, D. A.
(2011). Seeing ourselves in others: Reviewer am-
biguity, egocentric anchoring, and persuasion.
Journal of Marketing Research, 48, 617– 631.
http://dx.doi.org/10.1509/jmkr.48.3.617
Perfumi, S. C., Cardelli, C., Bagnoli, F., & Guazzini,
A. (2016). Conformity in virtual environments: A
hybrid neurophysiological and psychosocial ap-
proach. International Conference on Internet Sci-
ence (pp. 148–157). Florence, Italy: Springer In-
ternational Publishing.
Sailer, U., Fischmeister, F. P. S., & Bauer, H. (2010).
Effects of learning on feedback-related brain po-
tentials in a decision-making task. Brain Research,
1342, 85–93. http://dx.doi.org/10.1016/j.brainres
.2010.04.051
Schindler, R. M., & Bickart, B. (2012). Perceived
helpfulness of online consumer reviews: The role
of message content and style. Journal of Consumer
Behaviour, 11, 234–243. http://dx.doi.org/10.1002/cb
.1372
Schlosser, A. E. (2005). Posting versus lurking:
Communicating in a multiple audience context.
Journal of Consumer Research, 32, 260–265.
http://dx.doi.org/10.1086/432235
Schmidt, B., Mussel, P., Osinsky, R., Rasch, B.,
Debener, S., & Hewig, J. (2017). Work first then
play: Prior task difficulty increases motivation-
related brain responses in a risk game. Biological
Psychology, 126, 82– 88. http://dx.doi.org/10
.1016/j.biopsycho.2017.04.010
Sen, S., & Lerman, D. (2007). Why are you telling
me this? An examination into negative consumer
reviews on the web. Journal of Interactive Mar-
keting, 21, 76–94. http://dx.doi.org/10.1002/dir
.20090
Sweller, J. (1988). Cognitive load during problem solving:
Effects on learning. Cognitive Science, 12, 257–285.
http://dx.doi.org/10.1207/s15516709cog1202_4
Thomas, V. L., & Vinuales, G. (2017). Understand-
ing the role of social influence in piquing curiosity
and influencing attitudes and behaviors in a social
network environment. Psychology and Marketing,
34, 884– 893. http://dx.doi.org/10.1002/mar.21029
Wang, Z. S., Li, H. Y., & Sun, R. (2016). Determi-
nants of votes of helpfulness for Chinese online
customer reviews: A moderating effect of product
type. Management Review, 28, 143–153.
Werheid, K., Schacht, A., & Sommer, W. (2007).
Facial attractiveness modulates early and late
event-related brain potentials. Biological Psycho-
logy, 76, 100–108. http://dx.doi.org/10.1016/j
.biopsycho.2007.06.008
Wu, H., Luo, Y., & Feng, C. (2016). Neural signa-
tures of social conformity: A coordinate-based ac-
tivation likelihood estimation meta-analysis of
functional brain imaging studies. Neuroscience
and Biobehavioral Reviews, 71, 101–111. http://dx
.doi.org/10.1016/j.neubiorev.2016.08.038
Wu, P. F. (2013). In search of negativity bias: An
empirical study of perceived helpfulness of online
reviews. Psychology and Marketing, 30, 971–984.
http://dx.doi.org/10.1002/mar.20660
Wu, Y., Zhang, D., Elieson, B., & Zhou, X. (2012).
Brain potentials in outcome evaluation: When so-
cial comparison takes effect. International Journal
of Psychophysiology, 85, 145–152. http://dx.doi
.org/10.1016/j.ijpsycho.2012.06.004
Wu, Y., & Zhou, X. (2009). The P300 and reward
valence, magnitude, and expectancy in outcome
evaluation. Brain Research, 1286, 114–122. http://
dx.doi.org/10.1016/j.brainres.2009.06.032
Yang, Y., Chen, C., & Bao, F. S. (2017). Aspect-
based helpfulness prediction for online product
reviews. In IEEE International Conference on
Tools with Artificial Intelligence (pp. 836–843).
San Jose, CA: IEEE.
Yeung, N., & Sanfey, A. G. (2004). Independent coding
of reward magnitude and valence in the human brain.
The Journal of Neuroscience, 24, 6258– 6264. http://
dx.doi.org/10.1523/JNEUROSCI.4537-03.2004
Yin, D., Mitra, S., & Zhang, H. (2012). Mechanisms
of negativity bias: An empirical exploration of app
reviews in Apple's App Store. In Proceedings of the
33rd International Conference on Information Sys-
tems, Orlando, FL.
Yu, W. P., Zu, X., & Sun, Y. B. (2016). Online
review impact on heterogeneous consumer pur-
chase intention based on product category. Journal
of Dalian University of Technology, 37, 1–5.
Zhao, Y., Ni, Q., & Zhou, R. X. (2018). What factors
influence the mobile health service adoption? A
meta-analysis and the moderating role of age. In-
ternational Journal of Information Management,
43, 342–350. http://dx.doi.org/10.1016/j.ijinfomgt
.2017.08.006
Received January 28, 2018
Revision received December 3, 2018
Accepted December 17, 2018
Correction to Murphy (2016)

In the article "Kissing Babies to Signal You Are Not a Psychopath," by Ryan H. Murphy (Journal of Neuroscience, Psychology, and Economics, Vol. 9, Iss. 3–4, pp. 217–225. http://dx.doi.org/10.1037/npe0000062), the following paragraph should appear as a quotation:

". . . Cheater detection stands out in acuity from mere error detection and the assessment of altruistic intent on the part of others. It is furthermore triggered as a computation procedure only when the cost and benefits of a social contract are specified. More than error, more than good deeds, and more than even profit, the possibility of cheating by others attracts attention. It excites emotion and serves as the principal source of hostile gossip and moralistic aggression by which the integrity of the political economy is maintained (E. O. Wilson, 1998, pp. 186–187) . . ."
http://dx.doi.org/10.1037/npe0000106
Journal of Applied Psychology
1972, Vol. 56, No. 1, 54–59

INFORMATIONAL SOCIAL INFLUENCE AND PRODUCT EVALUATION 1

JOEL B. COHEN 2 AND ELLEN GOLDEN
University of Illinois
Two groups of Ss were exposed to 16 scaled product evaluations (supposedly from peers). High uniformity and low uniformity conditions, respectively, were determined by degree of dispersion. Both expected their evaluations to be visible to others. Two remaining groups were given no information regarding others' evaluations. One group, however, expected their evaluations to be visible to others. The Ss' subsequent evaluations were significantly influenced by others' ratings, the greatest influence occurring under the high uniformity-visibility condition. There was, however, no significant difference due to Ss' expectations that their ratings would be visible to others. Individual differences in interpersonal response orientations were not significantly related to the acceptance of information from others, although the direction of results was in accord with predictions.
For many, the application of social influence
research is limited to rather specialized settings
(e.g., formal group interaction or structured
authority relationships) or tied to the notion
of conformity or conformity proneness. This
view tends to understate the pervasiveness of
social influence and its importance to human
behavior. Informational social influence, espe-
cially, has not received its due consideration
in many settings and under many circum-
stances in which it is likely to be a significant
factor in decision making and overt behavior.
Product evaluation may prove to be an
especially fertile setting within which informa-
tional social influence is likely to operate.
Products are typically evaluated relative to a
number of competing needs and demands on
individual and family resources. Resulting
questions of value judgments, which are them-
selves not completely reducible to objective
evidence and matters of fact, are without doubt
subject to social frames of reference. Appro-
priate or correct behavior is such, in large
part, because of the evidence we have that
others agree with or accept the behavior.
Aside from questions of value, the very com-
plexity of product evaluation itself (e.g., the
number of brands and models, the claims and
counterclaims, and the difficulty of obtaining
1 Appreciation is expressed to Raymond Suh for his
help on the study.
2 Requests for reprints should be sent to Joel B.
Cohen, Department of Business Administration, Uni-
versity of Illinois at Urbana-Champaign, Urbana,
Illinois 61801.
objective evidence) and the time it would take
to resolve the many uncertainties combine to
favor the utilization of information from others.
The study to be reported focuses specifically
on three potential sources of influence on a
consumer’s judgment in social situations: (a)
the uniformity of relevant information pro-
vided by others, (b) the extent to which one’s
judgment (evaluation) is known to others, and
(c) one’s interpersonal response orientations.
Many of the early conformity studies failed
to distinguish clearly between two processes
of social influence whose differences are of
considerable importance (Asch, 1958; Crutch-
field, 1955; Sherif, 1958). The first, normative
social influence, refers to influence to conform
with certain expectations held by others. The
second, informational social influence, refers
to influence to accept information provided by
others which is taken as evidence about
reality.3 The former might be termed conform-
ity in the sense that one accepts influence
either to establish or enhance a favorable
reward-punishment relationship with certain
individuals or because of a desire to identify
with such individuals or their points of view
(Kelman, 1961). The second, however, is not
true conformity in the sense that a lack of
3 We are relying most strongly here on the distinction
made by Deutsch and Gerard (1955), although a
number of researchers have proposed fairly similar ap-
proaches. See Jones and Gerard (1967) for a partic-
ularly insightful discussion of these as social compari-
son processes, especially in the context of “information
dependence” and “effect dependence.”
information, an ambiguous situation, or prema-
ture demands for action or decision lead the
person to substitute seemingly competent in-
formation from others for his own search for
direct evidence. Indeed direct, physical, and
objective evidence regarding the truth of many
of our beliefs (and especially values) is simply
not easily obtainable. For many of these our
primary point of reference may be other indi-
viduals or groups, and our reality, therefore,
is socially as well as physically determined.
Under either informational or normative
conditions, the uniformity of information pro-
vided by others regarding the relative quality
of a product should have a direct bearing on
consumers’ evaluations. This should be espe-
cially true when (a) quality is somewhat am-
biguous because of a lack of clear standards,
and (b) one’s own ability to discriminate is
not thought satisfactory. Venkatesan (1966)
demonstrates that social influence is operative
in this type of product evaluation situation.
We prefer to characterize the process he studied
not as “conformity to group pressure” (as he
has done) but, rather, as “informational social
influence.”
Stafford (1966) provides an interesting pic-
ture of informal group influence on brand
preferences within sociometrically determined
“natural” groups. Here the setting is conducive
to both influence processes, although the rela-
tive strength of normative influence would
almost certainly be greater for an object or
issue of greater relevance to the group (around
which norms could develop) than bread.
In order to more adequately study conditions
underlying the acceptance of social influence,
it is necessary to go beyond a one-way flow of
information and influence (from the group to
the individual). Such a conceptualization is too
narrow and does not consider others’ subse-
quent reactions to the behavior of the individ-
ual, especially the extent of his acceptance or
rejection of group influence. It seems espe-
cially important to separate out the effects of
factors which influence public acceptance of
information from those which influence adher-
ence to such information. Adherence should
follow directly from uniformity, for example,
under conditions supportive of informational
social influence. If an individual has merely
expressed public acceptance (under conditions
favoring normative influence), his perception
that others are able to maintain surveillance
and impose sanctions may be necessary condi-
tions for adherence. In the classic conformity
studies, either Ss’ evaluations or behaviors
were perceived to be visible to others. In this
study we will specifically examine the impor-
tance of this factor under informational social
influence conditions.
Interpersonal response orientations refer to
people’s predominant modes of response to
others. They can be thought of as interpersonal
aspects of personality. Using Horney’s (1945)
tripartite classification of moving toward,
against, or away from others, Cohen (1967)
developed the Compliant, Aggressive, De-
tached (CAD) scale to measure the extent of a
person’s corresponding compliant, aggressive,
and detached interpersonal orientations. As
predicted, compliant people were more sus-
ceptible to information regarding group judg-
ments than were aggressive people, although
(at least in the absence of group pressure and
overt influence attempts) no significant differ-
ences in detached orientations among high- and
low-opinion changers were observed (Campbell,
1966).
Most people seem to have a reasonable
balance among the orientations so that al-
though one is usually preferred (more con-
sistent with other values or more often rein-
forced in social interaction) the person remains
flexible to the demands of the situation. Even
a highly aggressive person may refrain from
aggressive behavior under certain physical or
moral constraints. To the extent that more
specific situational influences (e.g., substantive
issues, objects, other people’s identity, task
requirements, etc.) encourage the expression
of individual differences, we should find some
correspondence between behavior and pre-
ferred modes of relating to others. Accordingly,
interpersonal response orientations were an
additional factor incorporated into the design
of the study.
METHOD
Each of three groups of 48 introductory marketing
students at the University of Illinois was randomly
assigned to four treatment conditions to form three
blocks of 12 Ss within each. Treatments are summarized
in Table 1. Each of the three groups was made up
entirely of individuals scoring at least one standard
TABLE 1
MEAN VALUES UNDER EACH TREATMENT CONDITION

Uniformity in others’ evaluations   Visibility of Ss’ behavior   Mean product evaluation
High uniformity                     Visible                      10.75
Low uniformity                      Visible                       9.83
No information                      Visible                       9.17
No information                      Not visible                   8.50
deviation above the sample mean on one of the traits
measured by the CAD scale, a set of 35 items each
calling for a response relative to the desirability of
engaging in particularly characteristic types of inter-
personal behavior (Cohen, 1967).
Students were given to believe that a marketing research project was being
conducted to predict the likely success of a new coffee product recently
introduced in the area. Under both the high uniformity-visible and low
uniformity-visible conditions, Ss were individually shown a rating board
containing other Ss’ evaluations of the coffee they were instructed to taste
and evaluate. The rating board was a large and attractive piece of heavy
cardboard subdivided into five general categories for evaluation (from “worst
I’ve ever tasted” to “best I’ve ever tasted”), each, in turn, broken down into
three degrees of favorability. (There were 15 response categories in all.)
Under each category were a set of small nails. Name tags were hung on a
predetermined number; the effect, in total, looked very much like a frequency
distribution histogram. Name tags (many similar to, but none identical with,
other Ss’ names) were written in a large number of handwriting styles and with
different pens and colors of ink. Each S in these two treatments saw 16 name
tags representing others’ prior evaluations of the coffee. In both treatments,
the modal “evaluation” (preset by E) was 12 (compared to the control group’s
mean evaluation of 8.5). We wished to produce a reasonable discrepancy for
those whose own estimates were at the mean or several rating points above it,
yet without danger of a ceiling effect.
In the high-uniformity condition, nine of the name tags were placed on the
modal rating with the remaining seven concentrated as follows: one on 10, two
on 11, and four on 13. In the low-uniformity condition, five name tags were
placed on the modal position with the others as follows: one on 5, one on 7,
two on 8, one on 9, one on 10, two on 11, and three on 13. Thus, each S in the
high-uniformity condition was exposed to the same information (without risk of
bias by confederates’ actions) with a substantially greater consensus than Ss
in the low-uniformity condition. Any number of variations in the dispersion of
others’ evaluations (including the identification of certain Ss) could be used
to easily vary and standardize the information provided under possible
treatments. Only the two variations discussed above, however, were
incorporated into this study.
After tasting the coffee, each S in these two conditions wrote his name on a
tag and placed it on the board. Since the name tag would always be placed last
in any column chosen, Ss could not reasonably expect their evaluations to be
hidden from others, no matter where it was placed. After each S left the room,
E removed his name tag from the board.
The third treatment, the no information-visible con-
dition, was used to separate out the effects of informa-
tion presumably provided by others from the expecta-
tion that others will know how one has evaluated the
product. As such, this provides a control group for the
factor “uniformity of information,” as well as a direct
comparison with the no information-no visibility control
group (Treatment 4).4 The Ss in Treatment 3 were
given to believe that theirs was the first name tag to
be placed on the chart for a “new group of tasters.”
The E explained simply that the procedure was to let
the board get fairly well filled, copy a summary of the
evaluations, take the tags off, and start all over again.
This procedure was used for each of the 36 Ss in this
condition.
The fourth treatment used a rating form identical
in scale to the rating board. Evaluations of the coffee
were obtained in the absence of information from others.
The rating form was simply taken from 5s and placed
in a stack.
In total, the methodology was designed to create a
setting in which a small to moderate amount of un-
certainty regarding a correct product evaluation could
be tied to variations in informational input from others.
No attempt was made to build in factors which would
tend to produce normative influences. In such a setting
it was hypothesized that informational social influence
would be accepted for its own sake and not for reasons
of conformity.
RESULTS
A 4 X 3 factorial analysis of variance (Treat-
ments X Interpersonal orientations) was run.
Differences in treatment effects were signifi-
cant and in the predicted direction (see Table
2).
Analysis of the significant treatment effect
by orthogonal trend components revealed that
99.01% of the variation in evaluation by treat-
ments (SS treatments = 99.69) may be pre-
dicted from a linear regression equation
(Winer, 1962). This tends to indicate (a) that
the acceptance of social influence was a linear
4 A before-after design with an initial private rating
and one after seeing others’ ratings would have per-
mitted equivalent comparisons. This present design
was chosen (a) to avoid sensitizing Ss to the fact that
the information is “supposed to” make a difference in
your evaluation and (b) to prevent postcommitment
dissonance from influencing the results.
function of the degree of uniformity or con-
sensus in the information presented, and (b)
that no complex interaction between uniform-
ity and visibility was present. Further analysis
of these interrelationships was conducted
using an orthogonal decomposition of the treat-
ment sum of squares and comparisons among
treatment sums (Winer, 1962). Table 3 sum-
marizes the four comparisons used to separate
out the effects of uniformity and visibility.
Comparison 1 in Table 3, for example, looks at
the following weighted linear comparison of
treatment sums: [(T1 + T2 + T3)/3] − T4.
Approximately 55% of the variation among
treatments (54.19/99.69) is due to the differ-
ence between the control group (no informa-
tion-no visibility) and the other treatments
combined.
To what extent is this difference due to the
information seemingly provided by other Ss
or to the known visibility of one’s own evalua-
tion? If the latter, then the informational social
influence hypothesis (i.e., influence is accepted
largely because it reduces uncertainty) cannot
be supported since Ss would appear to be more
concerned with anticipating others’ positive
or negative reactions. F ratios on compari-
son sums of squares (e.g., SSci/MS error)
permitted more definitive answers to these
questions.
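The comparison arithmetic described above can be reproduced directly from the treatment sums reported in Table 3 (387, 354, 330, 306, with 36 Ss per treatment) and the error mean square from Table 2 (7.15). The following Python sketch is an illustration added here, not code from the original study; it applies the standard single-degree-of-freedom contrast formula SS_C = (Σ w·T)² / (n Σ w²), with F = SS_C / MS_error.

```python
# Reproduce the comparison sums of squares and F ratios of Table 3
# from the treatment sums (n = 36 subjects per treatment).

T = [387, 354, 330, 306]   # high unif-visible, low unif-visible, no info-visible, no info-not visible
n = 36                     # subjects per treatment
ms_error = 7.15            # error mean square from Table 2

contrasts = {
    "C1": [1, 1, 1, -3],   # all information/visibility treatments vs. control
    "C2": [1, 1, -2, 0],   # information vs. no information (visibility constant)
    "C3": [1, -1, 0, 0],   # high vs. low uniformity
    "C4": [0, 0, 1, -1],   # visible vs. not visible (no information)
}

for name, w in contrasts.items():
    psi = sum(wi * ti for wi, ti in zip(w, T))      # contrast value on treatment sums
    ss = psi**2 / (n * sum(wi**2 for wi in w))      # contrast sum of squares
    print(f"{name}: SS = {ss:.2f}, F = {ss / ms_error:.2f}")
```

Running this recovers the reported values (e.g., SS for C1 ≈ 54.19, F ≈ 7.58), and the ratio 54.19/99.69 gives the approximately 55% of treatment variation attributed to Comparison 1 in the text.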
Comparison 2 (see Table 3) reveals a signi-
ficant difference (and in the predicted direc-
tion) between the two groups provided with
information regarding others’ evaluations and
the group not given such information, all
three groups believing their evaluations to be
visible to others. Comparison 4, on the other
hand, indicates that visibility, per se, is not a
significant source of variation when informa-
tion is held constant. Approximately 30% of
the variation among treatments is due to
TABLE 2
ANALYSIS OF VARIANCE

Source of variation          df     MS      F
Treatments                    3    33.23   4.65*
Interpersonal orientations    2     6.27    .88
Interaction                   6     2.55    .36
Error                       132     7.15

* p < .005.

TABLE 3
COMPARISONS ON TREATMENT SUMS

                High         Low          No infor-    No infor-
                uniformity-  uniformity-  mation-      mation-
Comparison      visible      visible      visible      not visible    SS       F
Treatment sum     387          354          330          306
C1                  1            1            1           -3          54.19    7.58**
C2                  1            1           -2            0          30.38    4.25*
C3                  1           -1            0            0          15.13    2.12
C4                  0            0            1           -1           8.00    1.12

Note.—C = comparison.
* p < .05. ** p < .01.
Comparison 2, while only 8% is due to Com-
parison 4. We must conclude that visibility is
not a significant feature of this social influence
situation in which informational social in-
fluence appears to predominate over normative
social influence.
Comparison 3 indicates that acceptance of
social influence is not significantly greater
under high uniformity than under low uni-
formity, although results are in the predicted
direction (see Table 1).
Interpersonal response orientations did not
prove to be a significant source of variation,
although the direction of results fits the underlying model. Compliant Ss were
the most favorable in their product evaluations (X = 9.96). Aggressive Ss were
least favorable (X = 9.25), while detached Ss were intermediate (X = 9.48).
DISCUSSION
These results provide strong confirmation
that social influence is operative in situations
not characterized by strong normative pres-
sures (cohesive groups, relevant issues, estab-
lished norms, sanctions, etc.). Buying decisions,
even when the product or brand being judged
is not novel or unfamiliar, seem to be char-
acterized by uncertainty. This may stem, in
part, from a lack of objective standards and a
lack of reliable comparative brand information.
Such conditions should tend to produce a
heightened readiness to respond to apparently
competent information from others.
The absence of a more pronounced differ-
ence between high- and low-uniformity treat-
ment groups is somewhat surprising. Our
manipulation of uniformity was tied to a range
of Ss’ coffee evaluations, however, rather than
markedly contrasting conditions of unanimous
agreement among others versus sharp disagree-
ment. Uniformity, in this study, is a somewhat
more involved notion than in most similar
studies. In many previous studies, information
from others was uniform if it was absolutely
identical (i.e., each confederate gave the exact
same answer or caused the exact same light to
go on). Here, the focus is on product evalua-
tion which can only be forced into a similar
conception of uniformity either by collapsing
the evaluation task into two or three cate-
gories (so as to make perfect consensus believa-
ble) or by telling S you are providing him with
consensus data (e.g., group means).
In reality, of course, it is seldom that no
variation exists in the advice and opinions
others so thoughtfully supply. We do not move
instantly from uncertainty to certainty by
virtue of the information received. There is
doubt and disagreement, and it may be of
some value for researchers to more realistically
deal with variance in information, specifically
in so far as learning how consumers respond to
it. It may be that consumers (or at least our
Ss) tend to rely on specific information aggrega-
tion schemes such as a modal evaluation or
some other simplifying rule of thumb in dealing
with the results of diversity in product ratings.
Since the mode was the same in both the high-
uniformity and low-uniformity conditions (12
in both cases), we might possibly have pro-
vided much less of a difference in the two
uniformity conditions than was desirable for
maximal effect upon evaluations.
The failure of interpersonal response traits
to be a more discriminating predictor variable
may, to a large extent, be an artifact of the
methodology employed. We note with interest that compliant Ss gave evaluations
closest to the mode, and, hence, more similar to their peers. Aggressive Ss
were furthest from the mode, thus consistent with a movement against the
typical response. Detached Ss were intermediate, neither responding strongly pro norm
nor counter norm. It may be recalled that the
methodology minimized social interaction and
direct influence attempts, two of the factors in
social influence situations which one would
expect to be most strongly related to this type
of treatment of individual differences.
CONCLUSION
The Ss asked to evaluate an unknown brand of coffee were significantly
influenced by rating distributions (other Ss’ evaluations) of both
relatively high and low concentration (uniform-
ity). There was some tendency for acceptance
of the modal evaluation to be greater under
conditions of higher uniformity. The difference
between high- and low-uniformity conditions
was, however, not significant. This may have
been due to the uniformity manipulations
which dealt more with degree of dispersion
than more absolute dichotomies. Perceived
visibility of Ss’ subsequent ratings was not a significant factor leading to
the acceptance of information from others. Differences in Ss’
interpersonal orientations did not prove to
be a significant factor, although results were
in the predicted direction.
Our data suggest that even for a familiar
product whose taste was the sole criterion for
evaluation, individual judgments may be
modifiable by the perceived evaluations of
others. No attempt was made to convey infor-
mation of a more expert nature or in any way
encourage Ss to feel the information was
somehow reliable or accurate. Thus, even under
minimal conditions for social influence, such
information had a significant effect on product
evaluation. These results are interpreted as
supporting the pervasiveness and significance
of informational social influence even when
conditions favoring normative compliance are
largely absent.
REFERENCES
ASCH, S. E. Effects of group pressure upon the modifica-
tion and distortion of judgments. In E. E. Maccoby,
T. M. Newcomb, & E. L. Hartley (Eds.), Readings
in social psychology. (3rd ed.) New York: Holt,
Rinehart & Winston, 1958.
CAMPBELL, R. The utilization of expert information in
business forecasting. Unpublished doctoral disserta-
tion, Graduate School of Business Administration,
University of California at Los Angeles, 1966.
COHEN, J. B. An interpersonal orientation to the study
of consumer behavior. Journal of Marketing Research,
1967, 4, 270-278.
CRUTCHFIELD, R. S. Conformity and character.
American Psychologist, 1955, 10, 191-198.
DEUTSCH, M., & GERARD, H. B. A study of normative
and informational social influence upon individual
judgment. Journal of Abnormal and Social Psy-
chology, 1955, 51, 629-636.
HORNEY, K. Our inner conflicts. New York: W. W.
Norton, 1945.
JONES, E. E., & GERARD, H. B. Foundations of social
psychology. New York: Wiley, 1967.
KELMAN, H. C. Processes of opinion change. Public
Opinion Quarterly, 1961, 25, 57-78.
SHERIF, M. Group influences upon the formation of
norms and attitudes. In E. E. Maccoby, T. M.
Newcomb, & E. L. Hartley (Eds.), Readings in
social psychology. (3rd ed.) New York: Holt, Rine-
hart and Winston, 1958.
STAFFORD, J. E. Effects of group influence on consumer
brand preferences. Journal of Marketing Research,
1966, 3, 68-75.
VENKATESAN, M. Consumer behavior: Conformity and
independence. Journal of Marketing Research, 1966,
3, 384-387.
WINER, B. J. Statistical principles in experimental design,
New York: McGraw-Hill, 1962.
(Received November 13, 1970)
Social Influence
ISSN: 1553-4510 (Print) 1553-4529 (Online) Journal homepage: https://www.tandfonline.com/loi/psif20
Morality and conformity: The Asch paradigm
applied to moral decisions
Payel Kundu & Denise Dellarosa Cummins
To cite this article: Payel Kundu & Denise Dellarosa Cummins (2013) Morality and
conformity: The Asch paradigm applied to moral decisions, Social Influence, 8:4, 268-279, DOI:
10.1080/15534510.2012.727767
To link to this article: https://doi.org/10.1080/15534510.2012.727767
Published online: 05 Oct 2012.
Address correspondence to: E-mail: dcummins@illinois.edu
The authors thank Andrew Higgins, Joseph Spino, and John Clevenger for their assistance in
conducting the experiment. This work was supported by research funds provided by the
University of Illinois.
SOCIAL INFLUENCE, 2013
Vol. 8, No. 4, 268–279, http://dx.doi.org/10.1080/15534510.2012.727767
© 2013 Taylor & Francis
Morality and conformity: The Asch paradigm
applied to moral decisions
Payel Kundu and Denise Dellarosa Cummins
University of Illinois at Urbana-Champaign, Champaign, IL, USA
Morality has long been considered an inherent quality, an internal moral
compass that is unswayed by the actions of those around us. The Solomon
Asch paradigm was employed to gauge whether moral decision making is
subject to conformity under social pressure as other types of decision making
have been shown to be. Participants made decisions about moral dilemmas
either alone or in a group of confederates posing as peers. On a majority of
trials confederates rendered decisions that were contrary to judgments
typically elicited by the dilemmas. The results showed a pronounced effect
of conformity: Compared to the control condition, permissible actions were
deemed less permissible when confederates found them objectionable, and
impermissible actions were judged more permissible if confederates judged
them so.
Keywords: Moral judgment; Conformity; Asch; Decision making.
Traditional theories of moral psychology endorsed the Kantian view that
moral judgments are the outcome of conscious deliberation based on moral
rules, an internal ‘‘moral compass’’ (Kant, 1785, 1787; Kohlberg, 1969).
However, recent studies have shown that moral judgment can be strongly
swayed by seemingly irrelevant contextual factors. People judge actions as
more morally wrong if they are primed to feel disgust before making a moral
judgment (Schnall, Benton, & Harvey, 2008; Schnall, Haidt, Clore, &
Jordan, 2008), while priming positive emotions makes moral transgressions
sometimes appear more permissible (Valdesolo & DeSteno, 2006). Marked
order effects have also been reported in which the judged moral
permissibility of a dilemma varies as a function of the nature of the
dilemmas that preceded it (Nichols & Mallon, 2006), an effect that was
replicated among expert moral reasoners (Schwitzgebel & Cushman, 2012).
One contextual factor that has not been adequately investigated is that of
social consensus on moral decision making. There has been a plethora of
research on decision-making conformity and the situations in which it can
be induced. Perhaps the most famous are the classic studies conducted by
Solomon Asch (1956) using simple visual discrimination. Asch required
participants to choose which of three lines of different lengths matched the
length of a target line. Participants made decisions in a group context which
included six to eight people, and all but one person was a confederate of the
experimenter. Over the course of 18 trials the confederates gave correct
answers on only 6 trials. Asch found that, while participants made errors on
fewer than 1% of trials when deciding alone, they made errors on 37% of
trials in the group condition.
Although numerous studies have been conducted since the publication of
Asch’s classic paper, the majority have as their primary aim identifying the
motivations underlying conforming behavior (see Cialdini & Goldstein,
2004, for a review). Three core motivations have been identified: a desire
for accuracy, a desire for affiliation, and the maintenance of a positive
self-concept. Recent work by Erb and colleagues (Erb, Bohner, Rank,
& Einwiller, 2002; Erb, Bohner, Schmalzle, & Rank, 1998) found that
the contribution of these factors varies as a function of the individuals’
prior beliefs toward the topic under consideration. When people’s
prior beliefs are strongly opposed to the position held by the majority,
conformity is driven by a desire to fit in. But when people hold moderately
or no strong prior beliefs concerning the topic, conformity is driven by a
belief that the majority view is more likely to constitute an objective
consensus.
It is assumed that people violate a norm of rationality when they allow
social consensus to override facts. Campbell (1990) argued that yielding to
conformity allows error and confusion to spread throughout a group, while
independent decision making and resistance to conformity is socially
productive because it allows errors to be corrected. Resistance to conformity
is therefore considered both moral and rational. It is moral because it
reflects adherence to principle, and it is rational because it introduces fact-
based judgment into the group decision-making process.
This raises the following question: Can conformity influence something
we consider to be an integral part of our identities; namely, morality? Unlike
visual decision making where correct answers are clear and unambiguous,
moral dilemmas are dilemmas precisely because the correct course of action
is unclear. Yet the laws and social institutions of virtually every culture are
grounded in moral principles, such as avoiding harm to others and fairness
in social transactions (Haidt, 2007). People are expected to rely on culturally
MORALITY AND CONFORMITY 269
dictated moral principles as well as their own personal moral intuitions
when choosing when and whether to aid others in distress, how to judge the
culpability of parties involved in wrongdoing or disputes, and which
behaviors should be subject to social and legal censure. Our behavior is
frequently judged on the basis of whether we acted in accordance with our
moral principles, or whether we simply chose to ‘‘go along to get along’’, as
would be the case if we allowed social conformity to override moral
principles. Taking this course of action typically makes one the target of
criticism and social censure. An over-reliance on social conformity in
guiding one’s actions is also the hallmark of conventional (stage 3) moral
reasoning in Kohlberg’s six-stage theory of moral development; the highest
level of moral development (stage 6) is rooted in reliance on moral principles
to guide behavior (Kohlberg, 1969).
Despite the ubiquity and gravity of moral judgment in our everyday lives,
scant research exists on the impact of conformity on moral judgment.
Crutchfield (1955) tested the impact of majority opinion on judgments in a
variety of different domains, including agreement with morally relevant
statements such as ‘‘Free speech being a privilege rather than a right, it is
proper for a society to suspend free speech whenever it itself is threatened.’’
He found that only 19% of participants agreed with such statements when
alone, but 58% agreed when confronted with a unanimous group who
endorsed the statements. This is surprising given that people have been
found to reject and distance themselves socially from morally dissimilar
others (Skitka, Bauman, & Sargis, 2005), and should therefore have little
desire to conform to the group. Indeed, Hornsey and colleagues (Hornsey,
Majkut, Terry, & McKimmie, 2003; Hornsey, Smith, & Begg, 2007) found
that participants with strong moral convictions about a social issue
expressed stronger intentions to verbally oppose the issue when they
believed they held a minority view than when they believed they held the
majority view. Importantly, these intentions did not translate to actual
behavior. Aramovich, Lytle, and Skitka (2012) assessed participants’ prior
beliefs concerning the acceptability of torture, along with their prior moral
commitments, socio-political attitudes, and other factors. The participants
then took part in what was ostensibly a group discussion concerning the use of
torture via a computer-simulated chat room; the participants believed they were
discussing the topic with fellow students. During the simulated group
discussion, 80% of participants reported less opposition to torture than they
had reported at pretest, but strength of moral conviction about torture was
negatively associated with the degree of pro-torture attitude change.
Although these results addressed only a single moral topic (i.e., permissi-
bility of torture), they suggest that moral judgment may in fact be
susceptible to conformity pressure.
Importantly, a growing number of studies have shown that judged moral
permissibility varies systematically with the degree of conflict between
morally relevant dilemma features (Greene et al., 2009). Dilemmas
describing actions that maximize aggregate benefits (‘‘greater good’’) while
violating no a priori moral rules yield high endorsement rates, and actions
that fail to maximize such benefits while simultaneously violating one or
more moral rule yield very low endorsement rates. When the two conflict,
causing the decision maker to choose between violating moral principles or
sacrificing the greater good, low decisional consensus obtains. In these
circumstances people are less certain what the morally permissible course of
action should be.
In the present study we used a modification of Asch’s methods to
investigate the impact of social consensus on moral decision making.
Participants were asked to render moral judgments for a series of dilemmas
either alone or in a group that included three confederates. Unlike Asch’s
participants, however, our participants rendered judgments by choosing a
number from a Likert-type scale that described a range of permissibility
ratings, including ‘‘uncertain’’. This allowed greater variability among
confederate judgments while still creating confederate consensus. If moral
judgment is influenced by social context, then participants’ ratings should be
swayed in the direction of the confederates’ atypical judgments compared to
ratings given in the absence of social pressure.
METHOD
Participants
A total of 33 participants were recruited from the University of Illinois
Psychology paid-participant website. There were 17 participants (12 female)
in the control condition, and 16 participants (9 female) in the experimental
condition.
Materials
A total of 12 dilemmas were selected from materials used by Greene, Morelli,
Lowenberg, Nystrom, and Cohen (2008). They differed along three
dimensions: (a) percent ‘‘permissible’’ judgments, (b) use of personal force,
and (c) whether the harm inflicted was intentional or a side effect of the action
taken. The latter two constitute deontological criteria that have been shown
to influence moral judgment (Greene et al., 2009). According to Greene et al.
(2009), an agent applies personal force when the force that directly impacts
the other is generated by the agent’s muscles and is not mediated by
intervening mechanisms that are distinct from the agent’s muscular force,
such as firing a gun. The vignette names, deontological values, percent ''yes''
(permissible) judgments from Greene et al. (2008), and confederate judgments
are displayed in Table 1.
Each vignette was printed on a single sheet of paper with a 1–7 rating scale
underneath. The labels for the rating scale were (from 1 to 7, respectively)
Highly Impermissible, Impermissible, Somewhat Impermissible, Unsure,
Somewhat Permissible, Permissible, and Highly Permissible.
Four vignettes served as fillers; confederates always gave ratings that were
consistent with the judgment typically elicited by these vignettes (i.e., 6 or 7
for Submarine and Modified Bomb, which people typically judge
permissible; 1 or 2 for Smother for Dollars and Hard Times, which people
typically judge impermissible). Six of the experimental vignettes fell into two
categories. The first contained vignettes that a majority of people
typically judge to be permissible (Standard Trolley, Standard Fumes, and
Vaccine Test), and for which the confederates gave atypical judgments
(i.e., ratings of 1 or 2). The second contained vignettes that a majority of
people typically judge to be impermissible (Sacrifice, Safari, and Vitamins),
and for which the confederates gave atypical judgments (i.e., ratings of
6 or 7). Finally, two vignettes were included that typically elicit high
disagreement concerning permissibility. Confederates rated one of these
TABLE 1
Vignette title, deontological features, percent acceptance rates, and judgments given
by confederates for the experiment materials

Vignette                Personal force   Harm          % Yes(a)   % Yes(b)   Confederate judgment
Fillers
  Submarine             No               Intentional   91         80         Permissible
  Modified Bomb         Yes              Intentional   90         85         Permissible
  Smother for Dollars   Yes              Intentional    7          8         Impermissible
  Hard Times            No               Side Effect    9          3         Impermissible
Experimental
  Standard Trolley      No               Side Effect   85         80         Impermissible
  Standard Fumes        No               Side Effect   75         67         Impermissible
  Vaccine Test          No               Side Effect   79         68         Impermissible
  Sacrifice(c)          Yes              Intentional   51         28         Permissible
  Safari                Yes              Intentional   22         28         Permissible
  Vitamins              Yes              Intentional   35         38         Permissible
  Sophie's Choice       No               Side Effect   62         41         Impermissible
  Crying Baby           Yes              Intentional   60         40         Permissible

(a) Values reported by Greene et al. (2008).
(b) Values reported by Cummins and Cummins (2012, Exp 1) based on decisions made by UIUC students.
(c) We opted to use Cummins and Cummins (2012) data to classify this vignette because participants in this study were also drawn from UIUC students.
(Sophie’s Choice) as impermissible and the other (Crying Baby) as
permissible. Examples of the vignettes are shown in Table 2. Texts for all
vignettes can be found by clicking the supplementary materials link
provided in Greene et al. (2008).
Procedure
In the control condition the experimenter and participant were seated at a
conference table in a private room. In the experimental condition three
confederates came into the room around the same time as the real participant
and posed as real participants. The confederates were three male graduate
students. The confederates took care to sit around the table so that the three
of them were in consecutive seats and the real participant was at one end.
TABLE 2
Examples of vignettes used in the experiment

Filler (Submarine): You are the captain of a military submarine traveling under a large iceberg. An explosion has damaged your oxygen supply and injured one of your crew. The injured crew member cannot survive his wounds. There is not enough oxygen left for the entire crew to make it to the surface. The only way to save the other crew members is to shoot dead the injured crew member so that there will be just enough oxygen for the rest of the crew to survive. Is it morally permissible to kill the injured crew member under the circumstances?

Weak Consensus (Crying Baby): Enemy soldiers are approaching your village. You and your townspeople are hiding. Your baby begins to cry loudly, which will surely alert the soldiers to your location. If you cover your baby's face to muffle the sound until the soldiers leave, you will smother him. Is it morally permissible to smother your baby under the circumstances?

Strong Consensus – ''Yes'' (Standard Trolley): A runaway trolley is approaching a fork in the tracks. On the left track are five people. On the right track is one person. If you do nothing the trolley will go left, causing the deaths of five people. The only way to avoid this is to push a switch that will cause the trolley to go right, causing the death of the single person. Is it morally permissible to push the switch under the circumstances?

Strong Consensus – ''No'' (Sacrifice): You, your spouse, and your four children are crossing a mountain range on your return journey to your homeland. You have inadvertently set up camp on a local clan's sacred burial ground. The leader of the clan says if you kill your oldest son with the clan leader's sword, he will let the rest of you live. Is it morally permissible to kill your oldest son under the circumstances?
Participants were instructed that they would be asked to make a series of
decisions about moral dilemmas for which there were no right or wrong
answers. They were told we were interested in their responses to help us
choose materials for future research. Folders were distributed which
contained the vignettes. The folders given to the confederates had a small
mark beside the rating they were supposed to give for each vignette.
Confederates were not blind to the experimental hypotheses, and so were
trained and instructed to respond according to script, without giving
explanation or commentary on their choices. The answers confederates gave
were distributed across the extreme end of the appropriate range (i.e.,
‘‘permissible’’ could be 6 or 7, and ‘‘impermissible’’ could be 1 or 2). The first
vignette was always Submarine, and the confederates gave a typical answer.
The remaining sheets were shuffled between sessions. Each vignette was read
aloud once and participants were given about 4 seconds to consider the
situation. They were then asked to announce their answers aloud in turn as
the experimenter recorded their choices. The real participant was always
prompted to answer last after all of the confederates had given their answers.
It was explained that answers were to be given aloud in order to save time and
so that the printed materials could be re-used. After the experiment
concluded the purpose of the experiment was explained, including the use
of deception. Participants were not queried about their beliefs concerning the
true purpose of the experiment prior to debriefing, although the majority
spontaneously expressed surprise when informed of the deception, particu-
larly that the graduate students were confederates and not true participants.
RESULTS
If participants’ moral judgments were swayed by social consensus, then we
would expect that ratings of vignettes typically judged permissible should
receive lower permissibility ratings in the group condition than in the
control condition, while ratings of vignettes typically judged impermissible
should receive higher permissibility ratings in the group condition than in
the control condition. To test this prediction, ratings for vignettes that
typically yield strong consensus were analyzed separately from those that
typically yield weak consensus.
For the strong consensus vignettes, ratings were averaged across the three
‘‘impermissible’’ vignettes (Sacrifice, Safari, and Vitamins), and across the
three ‘‘permissible’’ vignettes (Standard Trolley, Standard Fumes, and
Vaccine Test). These mean ratings were analyzed via mixed ANOVA using
condition (Control or Group) and sex (Female or Male) as between-
participant variables, and moral category (Impermissible or Permissible)
as repeated measures. The analysis returned a single significant effect,
the interaction of moral category and condition, F(1, 29) = 23.57,
MSe = 1.29, p < .0001, ω² = .45. Four planned comparisons were conducted.
Looking within groups, the control group did indeed find the vignettes in
the permissible category more permissible (M = 4.45) than vignettes in the
impermissible category (M = 3.23), t(16) = 5.31, p < .0001, Cohen's d = .80,
thereby replicating the findings of past research using these vignettes. The
social context group, however, departed significantly from this oft-replicated
consensual pattern: When confederates judged highly impermissible moral
transgressions to be permissible, participants also rated them as permissible
(M = 4.37), and when confederates judged highly permissible vignettes to be
impermissible, so did participants (M = 2.67), t(15) = 3.38, p < .004,
Cohen's d = .66. Comparing vignette ratings across groups also yielded a
strong conformity effect: As predicted, vignettes that are typically judged
permissible were found to be significantly less so under dissenting social
pressure (M = 2.67) than when participants made decisions on their own
(M = 4.45), t(31) = 4.18, p < .0001, Cohen's d = .62. Conversely, vignettes
that are normally judged highly impermissible were rated as more
permissible when confederates said so (M = 4.38) than when participants
made decisions by themselves (M = 3.23), t(31) = 2.74, p < .01,
Cohen's d = .62. These results clearly show that our participants' judgments
were strongly swayed by social context, even for vignettes that typically elicit
the opposite decision from an overwhelming majority of decision makers.
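The effect-size metric reported in these comparisons can be made concrete with a short sketch. The function below computes a pooled-standard-deviation Cohen's d for two independent samples; the sample ratings are hypothetical and are not data from the study:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    # Pooled SD: weighted combination of the two sample variances
    pooled_sd = (((n1 - 1) * stdev(group1) ** 2 + (n2 - 1) * stdev(group2) ** 2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical permissibility ratings (1-7 scale) for two conditions
control = [5, 4, 5, 4, 4]
group = [3, 2, 3, 2, 3]
print(round(cohens_d(control, group), 2))  # → 3.29
```

By Cohen's conventional benchmarks, the values of roughly .4 to .8 reported above all fall in the medium-to-large range.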
When reasoning under uncertainty, we would expect that decision makers
would be more likely to conform to strong group consensus, and that is
what we found when we analyzed the two vignettes that typically elicit low
decision consensus. Ratings were analyzed via mixed ANOVA using
condition (Control or Group) and sex (Female or Male) as between-
participant factors and dilemma (Sophie's Choice and Crying Baby) as
repeated measures. The main effect of Dilemma was significant,
F(1, 29) = 6.19, MSe = 2.20, p < .02, ω² = .18. This effect was modified by
an interaction with Condition, F(1, 29) = 21.67, MSe = 2.2, p < .0001,
ω² = .43. Four planned comparisons were conducted.
Looking first within groups, the control group did indeed give statistically
equivalent ratings to Sophie's Choice (M = 3.53) and to Crying Baby
(M = 2.76), t(16) = 1.54, p = .14. In the social context group the confederates
rated Sophie's Choice as highly impermissible and Crying Baby as highly
permissible, and participants followed their lead. When deciding among
dissenting confederates, participants found Sophie's Choice to be far less
permissible (M = 2.00) than Crying Baby (M = 4.75), t(15) = 5.46, p < .0001,
Cohen's d = .82. Comparing group performance on each vignette, partici-
pants were found to rate Crying Baby as significantly more permissible when
confederates rated it so (M = 4.75) than when they made decisions alone
(M = 2.76), t(31) = 3.31, p < .002, Cohen's d = .51. Conversely, participants
found Sophie's Choice far less permissible (M = 2.00) when confederates
rated it as impermissible than when they made decisions on their own
(M = 3.53), t(31) = 2.66, p < .025, Cohen's d = .43. Clearly, our participants'
judgments regarding these ''ambiguous'' moral dilemmas were strongly
swayed by social consensus.
DISCUSSION
Our results clearly show a strong conformity effect, indicating that moral
decision making is strongly influenced by social context, thereby replicating
Asch’s seminal finding in a new domain. Given that our participants’ moral
judgments were so strongly influenced by social consensus, the next
important questions are whether this behavior (a) is rational and (b) is
itself morally acceptable.
Conformity is considered irrational only if one believes that social
consensus should be awarded less weight in decision making than one’s own
information or beliefs. But according to rational-actor models, people are
not necessarily behaving irrationally when they conform if they believe that
conformity maximizes the expected value of the decision. Consider the Asch
situation from a game-theoretic perspective (Krueger & Massey, 2009; Luce
& Raiffa, 1957). Participants are assumed to prefer to speak the truth, but
the strength of this preference is modulated by what others do. This yields
four possible outcomes that can be ordered in terms of payoffs to the
participant. If the participant is purely self-regarding, then the payoff matrix
yields the following: Everyone tells the truth > Participant tells the truth but
others lie (Positive Resistance) > Everyone lies > Participant lies while
others tell the truth (Negative Resistance). Under these circumstances, the
dominant choice (the best choice regardless of what other parties do) is to
tell the truth. If others tell the truth, the payoff is greater for the participant
if he or she tells the truth as well. If others lie instead, the payoff is still
greater for telling the truth.
But if we assume that people are a mixture of selfish and other-regarding
(benevolent) preferences, the payoff matrix can be modeled as the sum of
one’s own payoffs and others’ payoffs weighted by 1/N, where N is the
number of other people in the group (van Lange, 1999). This yields the
following: Everyone tells the truth > Participant tells the truth but others lie
(Positive Resistance) = Participant lies while others tell the truth (Negative
Resistance) = Everyone lies. Now there is no dominant choice. If others tell
the truth, the payoff is greater for telling the truth as well. But if others lie,
then the payoffs for being truthful and for going along with the lie are
the same.
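The disappearance of the dominant choice can be seen by coding a toy version of this payoff structure. The parameter values and the unanimity bonus below are assumptions chosen for illustration; the game-theoretic treatments cited above (Krueger & Massey, 2009; van Lange, 1999) do not prescribe these particular numbers:

```python
def own_payoff(i_truthful, others_truthful, a=2.0, b=1.0):
    """Selfish payoff: a for telling the truth, plus b if the group is unanimous."""
    unanimous = i_truthful == others_truthful
    return a * i_truthful + b * unanimous

def social_payoff(i_truthful, others_truthful, a=2.0, b=1.0):
    """van Lange-style transformed utility: own payoff plus the N others'
    payoffs weighted by 1/N, which reduces to adding one representative
    other's payoff (the confederates all act identically here)."""
    unanimous = i_truthful == others_truthful
    other = a * others_truthful + b * unanimous  # a representative other's payoff
    return own_payoff(i_truthful, others_truthful, a, b) + other

# Selfish case: truth-telling dominates regardless of what others do
for others in (True, False):
    assert own_payoff(True, others) > own_payoff(False, others)

# Other-regarding case: positive resistance, negative resistance, and
# everyone-lies all tie, so there is no longer a dominant choice
assert social_payoff(True, False) == social_payoff(False, True) == social_payoff(False, False)
assert social_payoff(True, True) > social_payoff(True, False)
```

Under this parameterization the selfish ordering is 3 > 2 > 1 > 0, matching the text, while the other-regarding utilities collapse to 6 for universal truth-telling and 2 for the remaining three outcomes.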
Why would people choose to go along with the lie rather than tell
the truth? One explanation is that pronounced social consensus in a
decision-making context signals the creation of a social norm; that is, an
explicit or implicit rule concerning what one is permitted, obligated, or
forbidden to do in the current context (Cummins, 1998, 2000, 2005).
Deviations from expectation in nonsocial contexts (such as ‘‘oddball’’
detection in visual and semantic tasks) typically elicit activation in neural
reinforcement learning circuitry. The same network has been shown to be
active when there is conflict with a social norm (Klucharev, Hytonen,
Rijpkema, Smidts, & Fernandez, 2009). When conforming to a norm, brain
regions associated with anxiety or disgust (such as the insula) are active,
indicating that conforming comes at an emotional cost (Berns, Capra,
Moore, & Noussair, 2010). These error-related neural signals alert the
reasoner when a decision deviates from a particular social norm, or from a
broader social norm that one should both trust others and reciprocate trust
that has been placed in oneself.
Another reason why people may conform is that consensus that departs
from our own beliefs introduces uncertainty, particularly the suspicion that
the consensus ‘‘reflect[s] information that they have and we do not’’
(Banerjee, 1992, p. 798). Conformity can then be viewed as a rational
decision under conditions of uncertainty. This is particularly relevant when
conformity is modeled as informational cascades (Bikhchandani,
Hirshleifer, & Welch, 1992). In cascade models the first person is assumed
to have private information while each subsequent person is assumed to
have private information plus information about others’ decisions. If the
first two people agree, then the third concludes that they share the same
private information. If that information concurs with their own, the cascade
continues on to the next person, and so on. If two consecutive people
disagree, however, then this signals that they have different private
information. Each person can be thought of as equally weighting their
own and other people’s judgments. Group consensus that departs from
one’s own judgment therefore holds sway.
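The equal-weighting cascade described above can be illustrated with a toy counting model. This is a deliberate simplification of the Bayesian cascade model of Bikhchandani et al. (1992): the "one vote per observed choice, plus one vote for one's own private signal" rule is an assumption adopted here purely for illustration.

```python
def cascade(signals):
    """Sequential binary choices with equal weighting: each person adds
    their own private signal as one vote to the tally of all earlier
    public choices and picks the majority (an exact tie is broken in
    favour of one's own signal)."""
    choices = []
    for s in signals:
        a = choices.count("A") + (s == "A")
        b = choices.count("B") + (s == "B")
        choices.append("A" if a > b else "B" if b > a else s)
    return choices

# Two early agreeing choices start a cascade: everyone afterwards
# follows the consensus even though their private signals say "B".
assert cascade(["A", "A", "B", "B", "B", "B"]) == ["A"] * 6
# But if the first two disagree, the third still follows their own signal.
assert cascade(["A", "B", "B"]) == ["A", "B", "B"]
```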
These analyses indicate that conformity can indeed be the outcome of a
rational process. But they also just as clearly indicate that rationality and
morality are separate, incommensurate criteria. One cannot be reduced to or
explained in terms of the other.
CONFLICT OF INTEREST STATEMENT
This research was conducted in the absence of any commercial or financial
relationships that could be construed as a potential conflict of interest.
Manuscript received 14 June 2012
Manuscript accepted 3 September 2012
First published online 4 October 2012
MORALITY AND CONFORMITY 277
278 KUNDU AND CUMMINS
REFERENCES
Aramovich, N. P., Lytle, B. L., & Skitka, L. J. (2011). Opposing torture: Moral
conviction and resistance to majority influence. Social Influence, 7, 21–34,
doi: 10.1080/15534510.2011.640199.
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one
against a unanimous majority. Psychological Monographs, 70(9, Whole No. 416).
Banerjee, A. V. (1992). A simple model of herd behavior. Quarterly Journal of
Economics, 107, 797–817.
Berns, G. S., Capra, C. M., Moore, S., & Noussair, C. (2010). Neural mechanisms of the
influence of popularity on adolescent ratings of music. Neuroimage, 49,
2687–2696, doi: 10.1016/j.neuroimage.2009.10.070.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion,
custom, and cultural change as informational cascades. Journal of Political
Economy, 100, 992–1026.
Campbell, D. T. (1990). Asch’s moral epistemology for socially shared knowledge.
In I. Rock (Ed.), The legacy of Solomon Asch: Essays in cognition and social
psychology (pp. 39–52). Hillsdale, NJ: Lawrence Erlbaum Associates Inc.
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and
conformity. Annual Review of Psychology, 55, 591–621, doi: 10.1146/
annurev.psych.55.090902.142015.
Crutchfield, R. S. (1955). Conformity and character. American Psychologist, 10,
191–198.
Cummins, D. D. (1998). Social norms and other minds: The evolutionary roots of
higher cognition. In D. D. Cummins & C. A. Allen (Eds.), The evolution of mind
(pp. 30–50). New York: Oxford University Press.
Cummins, D. D. (2000). How the social environment shaped the evolution of mind.
Synthese, 122, 3–28.
Cummins, D. D. (2005). Dominance, status, and social hierarchies. In D. Buss (Ed.),
The evolutionary psychology handbook (pp. 676–697). New York: Wiley.
Cummins, D. D., & Cummins, R. C. (2012). Emotion and deliberative reasoning in
moral judgment. Frontiers in Psychology: Emotion Science, 3, 1–16. doi: 10.3389/
fpsyg.2012.00328.
Erb, H. P., Bohner, G., Rank, S., & Einwiller, S. (2002). Processing minority and
majority communications: The role of conflict with prior attitudes. Personality
and Social Psychology Bulletin, 28, 1172–1182, doi: 10.1177/01461672022812003.
Erb, H. P., Bohner, G., Schmalzle, K., & Rank, S. (1998). Beyond conflict and
discrepancy: Cognitive bias in minority and majority influence. Personality and
Social Psychology Bulletin, 24, 620–633.
Greene, J. D., Cushman, F. A., Stewart, L. E., Loweberg, K., Nystrom, L. E., &
Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal
force and intention in moral judgment. Cognition, 111, 364–371, doi: 10.1016/
j.cognition.2009.02.001.
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008).
Cognitive load selectively interferes with utilitarian moral judgment. Cognition,
107, 1144–1154, doi: 10.1016/j.cognition.2007.11.004.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998–1002, doi:
10.1126/science.1137651.
Hornsey, M. J., Majkut, L., Terry, D. J., & McKimmie, B. M. (2003). On being loud
and proud: Non-conformity and counter-conformity to group norms. British
Journal of Social Psychology, 42, 319–335, doi: 10.1348/014466603322438189.
Hornsey, M. J., Smith, J. R., & Begg, D. (2007). Effects of norms among those with
moral conviction: Counter-conformity emerges on intentions but not behaviors.
Social Influence, 2, 244–268, doi: 10.1080/15534510701476500.
Kant, I. (1785/1989). The foundations of the metaphysics of morals. Upper Saddle
River, NJ: Prentice-Hall.
Kant, I. (1787/1997). The critique of practical reason. Cambridge, UK: Cambridge
University Press.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to
socialization. In D. Goslin (Ed.), Moral development and behavior (pp. 31–53).
New York: Holt, Reinhart, & Winston.
Klucharev, V., Hytonen, K., Rijpkema, M., Smidts, A., & Fernandez, G. (2009).
Reinforcement learning signal predicts social conformity. Neuron, 61, 140–51,
doi:10.1016/j.neuron.2008.11.027.
Krueger, J., & Massey, A. L. (2009). A rational reconstruction of misbehavior. Social
Cognition, 27, 786–812, doi: 10.1521/soco.2009.27.5.786.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley.
Nichols, S., & Mallon, R. (2006). Moral dilemmas and moral rules. Cognition, 100,
530–542, doi: 10.1016/j.cognition.2005.07.005.
Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience: Cleanliness
reduces the severity of moral judgments. Psychological Science, 19, 1219–1222,
doi: 10.1111/j.1467-9280.2008.02227.x.
Schnall, S., Haidt, J., Clore, G., & Jordan, A. (2008). Disgust as embodied moral
judgment. Personality and Social Psychology Bulletin, 34, 1096–1109, doi:
10.1177/0146167208317771.
Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects
on moral judgment in professional philosophers and non-philosophers. Mind &
Language, 27, 135–153.
Skitka, L. J., Bauman, C. W., & Sargis, E. G. (2005). Moral conviction: Another
contributor to attitude strength or something more? Journal of Personality and
Social Psychology, 88, 895–917.
Valdesolo, P., & Desteno, D. (2006). Manipulations of emotional context shape
moral judgment. Psychological Science, 17, 476–477, doi: 10.1111/j.1467-
9280.2006.01731.x.
Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes:
An integrative model of social value orientation. Journal of Personality and Social
Psychology, 77, 337–349.
Running head: LITERATURE REVIEW INSTRUCTIONS 1
Instructions for Paper I: Study One Literature Review Instructions (Worth 25 Points)
Ryan J. Winter
Florida International University
Purpose of Paper I: Study One Literature Review
1). Psychological Purpose
This paper serves several purposes, the first of which is helping you gain insight into research papers in psychology. As this may be your first time reading and writing papers in psychology, one goal of Paper I is to give you insight into what goes into such papers. This study one lit review will help you a). better understand the psychology topic chosen for the course this semester (Facebook Consensus), b). learn about the various sections of an empirical research report by reading five peer-reviewed articles (that is, articles that have a Title Page, Abstract, Literature Review, Methods Section, Results Section, and References Page), and c). use information gathered from research articles in psychology to help support your hypotheses for your first study this semester (Facebook Consensus). Of course, you’ll be doing a study two literature review later in the semester, so think of this Paper I as the first part of your semester-long paper. I recommend looking at the example Paper V, actually, to see what your final paper will look like. It might give you a better idea about how this current paper (as well as Papers II, III, and IV) all fit together into your final paper of the semester.
In this current paper (Paper I), you will read five research articles, summarize what the authors did and what they found, and use those summaries to support your Facebook Consensus study hypothesis. IMPORTANT: Yes, you need five references, but keep in mind that you can spend a lot of time (a page or two!) summarizing one of them and a sentence or two summarizing others. Thus, spend more time on the more relevant articles!
For this paper, start your paper broadly and then narrow your focus (think about the hourglass example provided in the lecture). My suggestion is to give a brief overview of your paper topic in your opening paragraph, hinting at the research variables you plan to look at for study one. Your next paragraphs will review prior research (those five references required for this paper). Make sure to draw connections between these papers, using smooth transitions between paragraphs. Your final paragraphs should use the research you just summarized to support your research hypothesis. And yes, that means you MUST include your study predictions (which we provided in the researcher instructions and the debriefing statement. Use them!). In other words, this first paper will look like the literature reviews for the five research articles you are summarizing for this assignment. Use the articles you are using as references as examples! See what they did and mimic their style! Here, though, you will end the paper after providing your hypothesis. In Paper II, you will pick the topic up again, but in that future paper you will talk about your own study methods and results.
2). APA Formatting Purpose
The second purpose of Paper I: Study One Literature Review is to teach you proper American Psychological Association (APA) formatting. In the instructions below, I tell you how to format your paper using APA style. There are a lot of very specific requirements in APA papers, so pay attention to the instructions below as well as Chapter 14 in your textbook!
3). Writing Purpose
Finally, this paper is intended to help you grow as a writer. Few psychology classes give you the chance to write papers and receive feedback on your work. This class will! We will give you extensive feedback on your first few papers in terms of content, spelling, and grammar. You will even be able to revise aspects of Paper I and include them in future papers (most notably Papers III and V). My hope is that you craft a paper that could be submitted to an empirical journal. Thus, readers may be familiar with APA style but not your specific topic. Your job is to educate them on the topic and make sure they understand how your study design advances the field of psychology.
In fact, your final paper in this class (Paper V), might be read by another professor at FIU and not your instructor / lab assistant. Write your paper for that reader – the one who may know NOTHING about your topic and your specific study.
Note: The plagiarism limit for this paper is 30% (though this excludes any overlap your paper might have with regard to citations, references, and the hypotheses). Make sure your paper falls under 30% (or 35% if including predictions).
Note: I am looking for 2.5 pages minimum
Instructions for Paper I: Study One Literature Review (Worth 25 Points)
Students: Below are lengthy instructions on how to write your study one literature review. There is also a checklist document in Canvas, which I recommend you print out and “check off” before submitting your paper (we are sticklers for APA format, so make sure it is correct! We mark off if you have a misplaced “&”, so carefully review all of your work and use the checklist! It will help). Also look at the example paper in Canvas. It will show you what we expect.
1. Title Page: I expect the following format. (5 Points)
a. You must have a header and page numbers on each page.
i. If you don’t know how to insert headers, ask your instructor or watch this very helpful video!
ii. The header goes at the top of the paper and it is left justified.
1. Use “Insert Headers” or click on the top of the page to open the header. Make sure to select the “Different first page” option so that your title page header will differ from subsequent pages
2. The R in Running head is capitalized but the “h” is lower case, followed by a colon and a short title (in ALL CAPS). This short running head title can be the same one as the rest of your paper or it can differ – the choice is yours, but it should be no more than 50 characters including spaces and punctuation
3. Insert a page number as well. The header is flush left, but the page number is flush right.
iii. Want an example header? Look at the title page of these instructions! You can use other titles depending on your own preferences (e.g. SOCIAL MEDIA AND CONSENSUS; CONFORMITY; JUDGING OTHERS; etc.).
b. Your Title should be midway up the page. Again, see my “Title” page above as an example of the placement, but for your title try to come up with a title that helps describe your study one. Avoid putting “Paper One”. Rather, consider the titles you saw in PsycInfo. Create a similar title that lets the reader know what your paper is about
c. Your name (First Last) and the name of your institution (FIU) beneath the title. For this class, only your own name will go on this paper. Double space everything!
i. You can also refer to Chapter 14 in your powerpoints and/or Smith and Davis textbook
d. This Title Page section will be on page 1
2. Abstract?
a. You DO NOT need an abstract for Paper I. In fact, you cannot write it until you run both study one and two (as the abstract highlights the results), so omit the abstract for now
3. Literature Review Section (12 points)
a. First page of your literature review (Page 2)
i. Proper header with page numbers. Your running head title will appear in the header of your page WITHOUT the phrase “Running head”. To insert this header, use the headers program.
ii. The title of your paper should be on the first line of page two, centered. It is IDENTICAL to the title on your title page. Just copy and paste it!
iii. The beginning text for your paper follows on the next line
b. Citations for the literature review
i. Your paper must cite a minimum of five (5) empirical research articles that are based on studies conducted in psychology. That is, each of the five citations you use should have a literature review, a methods section, a results section, a conclusion/discussion, and references.
1. For Paper I, you MUST use at least three of the five articles provided in the Canvas folder. You can use four if you like, but you must use three at minimum – however, you cannot use all five. For that fifth article, you must find it using PsycInfo. There are some other conditions for this fifth article that you must follow:
a. First, remember that the fifth article cannot be any of the five found in the Canvas folder.
b. Second, for your fifth article, it can be based on a wide variety of topics, including general priming studies, studies on consensus or conformity (without a social media angle), studies on social media (without a consensus or conformity angle), studies on impression formation, studies on friends, studies on informational social influence or morality etc. Trust me, there are TONS of topics that can help you in your paper. Just choose one that will help you support your experimental hypothesis for your Facebook Consensus study. That is, it has to help you justify your study one hypothesis (all students are using this same hypothesis, so make sure to read it. You can find it in the researcher instructions along with the questionnaires you are giving to participants. I actually suggest copying and pasting that hypothesis into this first paper at the end).
c. Finally, you can have more than five references if you want, but you must have a minimum of five references.
ii. Proper citations must be made in the paper – give credit where credit is due, and don’t make claims that cannot be validated.
iii. If you use a direct quote, make sure to provide a page number for where you found that quote in the citations. Do not directly quote too often, though. You can have no more than three direct quotes in the whole paper (though zero quotes would be even better). Instead, I would like you to paraphrase when possible.
c. Requirements for the information in your literature review
i. Your study one literature review should use prior research as a starting point, narrowing down the main theme of your specific project – think about the hourglass example from Chapter 14 in Smith and Davis.
ii. The last part of your literature review should narrow down your focus onto your own study, eventually ending in your study hypothesis. However, DO NOT go into specific details about your methods. You will talk about your specific methods in Paper II in a few weeks.
iii. Again, to make it clear, at the end of your paper you will give an overview of your research question, providing your specific predictions/hypotheses.
d. The literature review must have a minimum of two (2) full pages NOT INCLUDING THE HYPOTHESES (2.5 pages with the hypotheses). It has a maximum of five (5) pages (thus, with the title page and references page, the paper should be between 4.5 and 7 pages). If it is only four and a half pages (again, including the hypotheses), it better be really, really good. I don’t think I could do this paper justice in fewer than five pages, so if yours isn’t at least five pages, I doubt it will get a good grade.
4. References (6 points)
a. The References section starts on its own page, with the word References centered. Use proper APA format in this section or you will lose points.
b. All five references that you cited in the literature review must be in this section (there should be more than five references here if you cited more than five articles, which is fine in this paper). However, at least three must come from the article folder on Canvas while the remaining two can come from either the last Canvas paper or two new ones from PsycInfo. Only peer-reviewed articles are allowed here (no books, journals, websites, or other secondary resources are allowed for paper one).
c. For references, make sure you:
i. use alphabetical ordering (start with the last name of the first author)
ii. use the authors’ last names but only the initials of their first/middle name
iii. give the date in parentheses – e.g. (2007).
iv. italicize the name of the journal
v. give the volume number, also in italics
vi. give the page numbers (not italicized) for articles
vii. provide the doi (digital object identifier) if present (not italicized)
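Putting items i through vii together, here is what one complete entry would look like, using a conformity article as an illustration (note that in your actual paper the journal name and the volume number would be in italics, which cannot be shown here):

Klucharev, V., Hytonen, K., Rijpkema, M., Smidts, A., & Fernandez, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61, 140–151. doi: 10.1016/j.neuron.2008.11.027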
5. Writing Quality (2 Points)
a. This includes proper grammar and spelling. I recommend getting feedback on your paper from the Pearson Writer program prior to uploading it on Canvas.
6. Between the title page, literature review, and reference page, I expect a minimum of 4 pages and a maximum of 7 pages for this assignment. But like I said, the shorter the paper, the less likely it is to get a good grade, so aim for 5 pages minimum.
The above information is required for your paper, but I wanted to provide a few tips about writing your literature review as well. Students often struggle with the first paper, but hopefully this will give you some good directions:
· First, remember that you need 5 references, all of which MUST be peer-reviewed (three coming from the Canvas folder and one or two that you find on your own using PsycInfo).
· Second, I don’t expect a lengthy discussion for each and every article that you cite. You might spend a page talking about Article A and a sentence or two on Article B. The amount of time you spend describing an article you read should be proportional to how important it is in helping you defend your hypotheses. See if there is a prior study that looks a lot like yours (hint – there is at least one, which I based this study on, but you’ll have to find it on your own!). I would expect you to spend more time discussing that prior research since it is hugely relevant to your own study. If an article you read simply supports a global idea that ties into your study but has very different methods (like “frustrated people get mad!”), you can easily mention it in a sentence or two without delving into a lot of detail. Tell a good story in your literature review, but only go into detail about plot elements that have a direct bearing on your study!
· Third, this paper is all about supporting your hypotheses. Know what your hypotheses are before you write the paper, as it will help you determine how much time to spend on each article you are citing. My suggestion is to spend some time describing the nature of consensus and conformity, and then talking about studies that looked at this area. Use those studies to help defend your own study hypothesis. That is, “Since they found X in this prior study, that helps support the hypothesis in the present study”. Do you remember your hypotheses? Okay, I’ll be really helpful here. BELOW are your hypotheses. In your paper, support it! Just remember that the rest of your paper needs to be at least two full pages NOT INCLUDING the hypothesis below. In other words, including the hypotheses below, your actual text for your paper should be at least two and a half pages!
In general, we predict that participants who read unanimously supportive feedback will rate the Facebook user’s conduct as more acceptable than participants who read unanimously oppositional feedback, with those who read mixed feedback falling between these extremes.
More specifically, participants in the unanimously supportive condition will more strongly agree with supportive survey statements (“Abigail’s behavior was understandable”, “Abigail’s behavior was reasonable”, “Abigail’s behavior was appropriate”, “I would advise Abigail to keep silent”, and “I would try to comfort Abigail”) and more strongly disagree with oppositional survey statements (“Abigail’s behavior was wrong”, “Abigail’s behavior was unethical”, “Abigail’s behavior was immoral”, and “Abigail’s behavior was unacceptable”) compared to participants in the unanimously oppositional condition, with participants in the mixed condition falling between these extremes. However, participants in both the unanimously supportive and unanimously oppositional conditions will strongly agree that they would give Abigail the same advice that her friends gave her.
· Fourth, make sure to proofread, proofread, proofread! Use the Pearson Writer for help, but note that their suggestions are just that – suggestions. It is up to you to make sure the flow of the paper is easy to understand. Good luck!
· Fifth, go look at the supporting documents for this paper. There is a checklist, a grade rubric, and an example paper. All will give you more information about what we are specifically looking for as well as a visual example of how to put it all together. Good luck!
· Finally, note that you have a lot of help available to you. You can go to the Research Methods Help Center (which is staffed by research methods instructors and teaching assistants). You can go to the Writing Center in the Green Library (at MMC) and get help with writing quality. You can attend workshops from the Center for Academic Success (CfAS) focusing on APA formatting, paraphrasing, and statistics. Your instructor might even be willing to give you extra credit for using these resources, so make sure to ask your instructor about it.