An Agile Methodology for the Disaster
Recovery of Information Systems Under
Catastrophic Scenarios
COREY BAHAM, RUDY HIRSCHHEIM, ANDRES A. CALDERON,
AND VICTORIA KISEKKA
COREY BAHAM (corey.baham@okstate.edu; corresponding author) is an assistant
professor of management science and information systems (IS) at Oklahoma State
University. His research focuses on agility in IS development, systems recovery, and
firm dexterity. His work has been published in Communications of the AIS and major
IS conference proceedings.
RUDY HIRSCHHEIM (rudy@lsu.edu) is the Ourso Family Distinguished Professor of
Information Systems at Louisiana State University. He was previously on the
faculties of the University of Houston, Templeton College–Oxford, and the
London School of Economics. He was given the LEO Award for Lifetime
Achievement by the Association for Information Systems. He is senior editor for
the journal Information and Organization and on the editorial boards of Information
Systems Journal, Journal of Management Information Systems, Journal of Strategic
Information Systems, and others.
ANDRES A. CALDERON (calandres@gmail.com) has 25 years of diversified technology experience at both large and small enterprises, including technical personnel management at multiple levels of the organizational structure, project management, and enterprise system administration. He also has extensive experience in aligning technology with corporate vision and a broad background in business development.
VICTORIA KISEKKA (vkisekka@albany.edu) is an assistant professor at the State
University of New York at Albany. Her research interests include information
assurance, organizational resilience, health-care information technologies, and dis-
aster recovery and response. Her work has been published in Computers in Human
Behavior and various information systems conference proceedings.
ABSTRACT: We explore the use of an agile methodology for improving the recovery
of complex systems under catastrophic scenarios. Our adaptation of Kanban presents
a novel, agile approach to overcoming the unique challenges that organizations face
during disaster recovery. An action research study approach is employed to test the implementation of Kanban during a complex scenario at a large enterprise. The findings suggest that an adaptive and flexible methodology is required for efficient disaster recovery in confronting unintended and cascading consequences. This research offers several contributions. First, to our knowledge, this is the first study to detail an approach for disaster recovery using an agile methodology. Second, this study uses a new combination of classic, canonical, and dialogical action research approaches to conduct the first empirical test of the effectiveness of an agile approach during an actual disaster recovery event. Third, in response to this Special Issue, the aforementioned research approach discusses the relationships between information systems researchers and research clients, demonstrating how action research can lead to improved organizational situations.

KEY WORDS AND PHRASES: action research, agile project management, catastrophic scenario planning, IS disaster recovery, Kanban.

Journal of Management Information Systems / 2017, Vol. 34, No. 3, pp. 633–663.
Copyright © Taylor & Francis Group, LLC
ISSN 0742-1222 (print) / ISSN 1557-928X (online)
DOI: https://doi.org/10.1080/07421222.2017.1372996
Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/mmis
Disaster recovery (DR) and business continuity planning is one of the top concerns
for information technology (IT) executives [24] because of the increasingly detri-
mental effects of IT downtime on a firm’s reputation, its ability to conduct business,
and ultimately its survivability. According to a survey by Computer Associates
Technologies, IT downtime cost over $26 billion in lost revenue in 2010 [20].
One compelling reason for this loss is the increased dependency on technology
found in today’s firms. The increase in productivity and efficiency afforded by
technological tools has also increased a firm’s dependency on its technical infra-
structure, which has led to an increase in a firm’s sensitivity to IT interruptions.
Despite advances in technology such as high-availability storage area networks, self-
healing virtual machine environments, and cloud computing, which have drastically
reduced system recovery times [28], the IT DR practice still lacks a methodology to
recover complex information systems (IS) in the wake of a catastrophic event.1 The
DR practice, which dates back to military command and control (C2) doctrines (see
Online Appendix A), largely assumes that decision making and authority need to be
centralized, includes the tendency to over-detail, and treats adaptation as dysfunc-
tional or harmful. Moreover, the continued use of traditional DR approaches has
ignored the necessity of a distributed and adaptive response and recovery during a
catastrophe [15], despite the need for adaptive and agile processes for the changing
faces of disaster response. This raises the question: Have IT executives overemphasized the resiliency of the technologies while neglecting the need for a systematic
DR approach that can increase team readiness and expedite the recovery of complex
IS? In this study, we explore the use of agile methodologies in providing a systema-
tic and holistic approach to DR orchestration.
Within the organizational context, agile capabilities have been highlighted in
software development methodologies, which also have parallels to IS recovery
methodologies [3, 37]. As the complexity of IS increased with the growth of the Internet, software development methodologies have focused on becoming more agile in order to meet rapid changes in user requirements. Similarly, we
posit that dynamic organizational DR scenarios, which are characterized by cascad-
ing consequences in a complex environment, should consider approaches that are
more agile. Thus, the main research question that guides this study is: How can the
use of agile project management methodologies improve disaster response and
recovery (orchestration) efforts? To address the research question, we draw upon
the extant literature to identify useful agile methodologies for the DR practice, which
have the potential to improve the delivery of project requirements. In this study, a
new agile approach to DR was adapted and tested using an action research approach
to study the IT DR practice of a large enterprise.
Disaster Recovery Literature
We define disaster recovery as a subset of business continuity planning, which
focuses on the process of “creating and executing a plan for how an organization
will resume partially or completely interrupted IT, organizational, or business critical
functions within a predetermined time after a disaster or disruption has occurred”
[27, p. 1]. In this study, we focus on the DR of complex IS in organizations. Online
Appendix B contains key terms related to IS DR, including complex DR (C-DR),
which describes the environment in which the recovery process takes place. For
simplicity, we will refer to the recovery efforts that take place within the C-DR
context as DR.
Methods of IS recovery largely focus on the use of IS recovery tools and the
acquisition of sophisticated hardware, while neglecting the wider context concerning
how these tools integrate with existing processes. Within the DR context, the
complexities presented by the interdependencies among interrelated IS pose serious
challenges to DR orchestration. In particular, we identify the critical factors in
executing DR orchestration with efficiency as the need for teams to be flexible
and responsive to changing circumstances, maintain a shared common operating
picture, and maintain a strong focus.
In the extant literature, a few DR researchers attempt to study DR at the organizational
level by investigating the antecedents of effective DR.2 The demonstrated correlates of
effective DR include planning, organizational size, management support, internal and
external collaborations, an organization’s financial condition, severity of the disaster,
and economic climate [23, 25, 44]. Other research primarily focuses on developing
prototypes and modeling techniques for managing disasters [34, 43, 50]. Despite these
developments, there is still a dearth of seminal work in the area of DR. To our knowl-
edge, there are no comprehensive methodologies for effectively managing complex DR
efforts at the organizational level. This lack of holistic approaches for managing
complex disaster environments has been observed previously [3, 11, 31]. An observable
limitation in the discussed works is that they mainly focus on people or technology
without consideration of a systematic approach for understanding relationships or the
processes critical for successful DR. In particular, the linkage that exists between IT,
people, policies, and processes is not addressed. Prior observations also point out that
there is a general lack of interoperability of existing DR solutions [10, 11]. In fact, the
popularly referenced four-stage model of disaster management, which is often the basis
of existing solutions, has been shown not to be a good representation of reality [8]. With
this in mind, we refer to elements of the four-stage model only as a common language
for describing DR orchestration activities (response and recovery), which occur simul-
taneously, without suggesting that its isolated stages are an adequate representation of
the DR process. Lastly, while several disaster management IS have been proposed, to
our knowledge, there are no known deployments or usage of such IS in industry.
Agile Methodology Literature
To address the aforementioned needs of a DR environment for DR orchestration in
organizations, we began by identifying the need for agility in the DR practice.3 Extant
research indicates that the concept of agility first appeared in the mainstream business
literature in the early 1990s [18]. Prior studies explore the concept of agility in manufacturing, management, product development, and other areas of business research [45, 48]. Despite the contributions by these fields, the term “agile” became
widely popular after the advent of the Agile Manifesto [7] in 2001, which described a
new approach to building software. Although we recognize that the roots of agile project
management methodologies stem from fields both inside and outside of the business
literature [47], our motivation to study the recovery of complex IS leads us to examine
the concept of agility primarily within the software development context.
Theoretical Lens: Parallel Between DR Needs and Agile Practices
We examine agile methodologies for capabilities that might overcome the complex-
ities identified by a DR environment. There are several benefits of agile methodol-
ogies in completing organizational projects, including adaptability, flexibility, and
project visibility [26]. For the DR practice, the need for a highly adaptive methodol-
ogy with the ability to cope with sudden or frequent changes is critical to minimizing
the downtime of complex IS [19]. Unfortunately, traditional DR approaches have neglected the necessity of a distributed and adaptive response and recovery during a catastrophe [15]. The extant literature both inside and outside of the
military domain (e.g., flooding, natural disasters) emphasizes situational awareness
and collaboration to facilitate a response to dynamic recovery situations [22, 51].
Agile methodologies use short feedback cycles to provide timely and frequent
updates, which are critical during the DR response and recovery. The focus of effort
through a common operating picture is vital in facilitating situational awareness [13].
In addition, the ability to maintain focus on critical activities, allowing teams to self-
manage toward resolving bottleneck issues, and the inherent operating picture
presented is critical in DR and promoted by methodologies such as Kanban [1].
Many critical components, such as the ability to respond to ongoing emergency
conditions, the coordination of recovery teams, and access to adequate communica-
tion mediums, play a part in successful recovery efforts [34]. To rebuild entire
information systems quickly, coordination among multiple teams must align with
recovery opportunities and business priorities. Therefore, communication and colla-
boration between and within DR stakeholders at all levels, horizontally and verti-
cally, are vital to achieving high levels of DR orchestration efficiency. In summary,
agile principles complement DR needs of adaptability, situational awareness, focus
of effort, and orchestration efficiency. In this study, we use the twelve principles
behind the agile manifesto [7] along with Conboy’s [12] definition of information systems development (ISD) agility to ground our notion of agile project management.
Table 1 compares these twelve principles (represented by the numbers enclosed in
parentheses) with the four key challenges of DR orchestration defined above.
Thus, drawing on the extant literature, we seek to answer our research question as
posed in the introduction: How can the use of agile project management methodol-
ogies improve disaster response and recovery efforts? To address this question, we
studied the DR program of a large enterprise using an action research approach.
Action Research Approach
Action research (AR) has been defined as “research that involves practical problem
solving which has theoretical relevance” [33, p. 12]. AR is particularly helpful in
addressing both rigor and relevance by applying scientific research in the setting of a
real-world problem. In contrast to traditional research approaches, the action
researcher is actively engaged in the creation of organizational change. AR has its
roots in the philosophy of pragmatism where the focus is on practice, and in
particular, the outcomes or consequences of practice [39]. In the IS domain,
Baskerville and Myers [5] cogently articulate the connection between pragmatism
and AR. AR differs from case study research in that the former is directly involved
with helping the organization to learn by conducting one or more experimental
solutions [6]. In AR, findings are reflected upon and used in subsequent iterations.
The IS literature contains guidelines for conducting AR [2, 4, 6], including those
specific to several types of AR such as canonical [14, 29] and dialogical [30], which
we combined in this study.
A key feature of AR is its reflective and iterative cycle. In addition, AR
approaches commonly consist of two main parts: the action and the reaction. The action is the organizational intervention that attempts to remedy the real-world problem, and the reaction is the organization’s response to the experimental stimulus. In this study,
we incorporate dialogical AR [30] into Baskerville and Wood-Harper’s [6] AR
cycle, which has been further formalized as canonical AR [14]. First, Baskerville’s
five-phase AR cycle provides a framework by which practitioners and researchers
work together to remedy a problem. Using this five-phase cyclical process, the
research team (1) diagnoses the underlying causes of the organization’s desire for
change, (2) specifies organizational actions, (3) takes the action, (4) evaluates the
action’s outcomes, and (5) identifies knowledge gained during the process [4].
Second, dialogical AR recognizes the respective historical and social contexts of
the researcher and the practitioner [30]. We selected dialogical AR, among several
AR approaches in the extant literature [2, 6], because it is iterative, collaborative,
and separates expertise into two separate entities: the researcher’s expertise and
Table 1. A Comparison of Disaster Recovery Needs and Agile Principles
DR needs Agility definition and principles
Adaptation and agility Definition
Facilitate the creation of change (1).
Agile Principles
Welcome changing requirements, even late in development.
Agile processes harness change for the customer’s
competitive advantage (2).
Situational awareness Agile Principles
Business people and developers must work together daily
throughout the project (4).
At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly (12).
Efficient orchestration Definition
Contains a methodology component that also contributes to
perceived economy, quality, or simplicity but should not
perform poorly in any of the three (2).
Continual readiness of the methodology component (3).
Agile Principles
Deliver working software frequently, from a couple of weeks to
a couple of months, with a preference to the shorter
timescale (3).
Business people and developers must work together daily
throughout the project (4).
Agile processes promote sustainable development. The
sponsors, developers, and users should be able to maintain
a constant pace indefinitely (8).
Simplicity—the art of maximizing the amount of work not done
—is essential (10).
The best architectures, requirements, and designs emerge from
self-organizing teams (11).
At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly (12).
Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software (1).
Focus of effort Agile Principles
Continuous attention to technical excellence and good
design enhances agility (7).
Notes: There are twelve principles behind the agile manifesto [7]. The numbers in parentheses indicate which of the twelve principles is being referenced.
practitioner’s expertise. Based on Schutz’s [42] concept of the scientific attitude and the natural attitude of everyday life, dialogical AR treats these concepts as “qualitatively different categories of knowledge and reasoning, each category being distinguished by a dependence on its own social context” [30, p. 513]. Moreover, the
one-on-one dialogue between the researcher and practitioner fit well with the
research agreement and time schedules of the research team. Furthermore, dialogical
AR fits well with the elements found in the AR cycle [30]. We drew the conceptual
basis of our research approach from dialogical AR, while following the cyclical and
iterative process of canonical AR. There were additional and practical reasons why
an AR approach was appropriate for this study, which we discuss in Online
Appendix C. Following Mårtensson and Lee [30], we refer to theory (theoria) as
the world of scientific knowledge that is the basis of the researcher’s expertise. Here,
the scientific knowledge on agile methodologies is used in developing an action plan
in combination with the practitioner’s expertise (praxis). Such knowledge was used
as the basis of the DR intervention. The results of each DR intervention were
reflected upon and used in preparation for subsequent interventions as needed.
Research Setting
The research project emerged as part of the initiative of the host company, hereafter
referred to as Alpha, to assess the appropriateness of agile approaches to DR
orchestration. The wider IT unit of Alpha served as context to test the logic of a
generic, agile methodology for the DR of complex systems. Alpha offers health-care
services in the United States. Alpha has three data centers with an array of technol-
ogies such as mainframes, midranges, and WINTEL platforms to support its offer-
ings. Critical infrastructures at the data centers have N + 1 resilience that ensures
system availability in the event of component failure. Alpha’s IT team has over 300
staff members that manage over 2,000 servers and support over 300 applications.
Alpha has developed a mature DR practice with comprehensive DR plans.
The research team consists of a lead researcher, who was previously employed by
Alpha and was the primary driver of the project, a lead practitioner, and two
additional researchers, who provided insights from extant literature. The lead
researcher is an academic action researcher, who spent approximately 200 hours
working with Alpha’s DR department. The lead practitioner is Alpha’s DR manager,
who works full-time with the company and is responsible for developing DR plans
and leading all IT DR efforts. During the project, the DR manager hired contractors
to help with the DR planning.
Alpha’s IT division had largely neglected DR planning until a catastrophic event
decimated the region near Alpha’s headquarters. Executives soon raised concerns
over the company’s ability to survive a natural disaster. Despite hiring a DR
manager, the sheer size of Alpha’s IT infrastructure, the constant IS changes to
both hardware and software across multiple departments, and changing personnel
rendered the DR manager unable to keep documentation updated at the rate of
Alpha’s organizational change. The company still needed an effective way to coordinate the efforts of separate departments during a DR scenario. In search of methodologies that respected the changing nature of DR efforts, the DR manager began
to hear about and eventually research agile methodologies. Eventually, Alpha con-
tacted the lead researcher to learn about how agile methods might be leveraged and
implemented to improve the recovery time and efficiency of DR. A client–researcher
agreement was made between the lead researcher and Alpha, which committed to
developing a framework for the use of agile methodologies to improve DR orches-
tration. The lead researcher was expected to work with Alpha’s agile work group to
understand what works and what does not work at the company. This agreement
initiated a four-month effort, May 2014 through August 2014, to improve DR
orchestration using agile methodologies. This effort was later extended one month.
Data Collection Details
To produce a rich understanding of the project context, multiple data sources were
used to triangulate findings, including direct observation, DR documentation analysis, group meetings, semistructured interviews, and postmortem reports. The role of
the lead researcher was that of an observer. He observed the interactions between
Alpha’s DR department and other IT departments to gain insights into the behavior,
interactions, and information that may not have been reported during the interviews.
The lead researcher took notes of incidents and observed practices as well as direct
and indirect influencers of Alpha’s DR program. The lead researcher gained knowl-
edge of Alpha’s DR program by reviewing the DR department’s documents such as
reports, manuals, requirements documents, and lessons learned. The lead researcher
participated in group meetings both with the DR department and with IT executives,
who provided feedback on the framework as it was being developed. In addition, the
lead researcher conducted two formal interviews, which were recorded. The inter-
view script is shown in Online Appendix D. A postmortem was conducted after each
implementation of our DR framework.
The feedback gathered after each intervention was reflective in nature. Feedback
was gathered from IT staff who participated in the DR events, which the research
team used to refine the DR methodology. Both the lead researcher and the lead
practitioner reflected on and analyzed the data. A summary of the data sources is
presented in Table 2. The majority of the coding procedure was conducted at the
completion of each cycle. The coding scheme focused on a number of core themes
that related to DR needs and later evolved into additional themes [41]. The lead
researcher then wrote a case narrative based on the data sources and reflective
dialogue with the practitioners. Following Mårtensson and Lee’s [30] dialogical
AR, the case descriptions present different perspectives whereby readers are able
to make their own interpretation of the DR events described.
Table 2. Data Collection Information

Method | Source | Dates
Direct observation | DR manager, DR contractors | 4/15–9/30
DR documentation analysis | DR paperwork | 4/1–9/30
Group meeting—DR department | DR manager, DR contractors (1 hour each) | 6/6, 6/13, 7/24, 8/19, 9/30
Group meeting—DR department | Agile coach, DR manager (30 min.) | 7/24
Interview | Agile coach (1 hour) | 7/11
Postmortem—First event | DR manager; Director, IT Network Operations Center (NOC); Systems Engineering Architect; SUPV, IT-Computer and NOC Operations; NOC engineer; Sr. NOC engineer; Manager, IT Production Supply and NOC; Manager, IT-Systems Support | 8/9
Postmortem A—Second event | Manager, IT Server Engineering; IT manager; VP, Application Services | 9/21, 9/22
Group meeting—Executives (20 min.) | CIO; Director, Technology Office; VP, Enterprise Infrastructure | 10/10
Postmortem B—Second event | Manager, IT-Service Management; Director, IT Network Operations Center; DR manager | 10/29
Interview | DR manager (1 hour) | 11/6
The five-phase AR cycle, taken from Susman and Evered [46] and later Baskerville
[4], consists of diagnosing, action planning, action taking, evaluating, and specifying
learning.
First Cycle, 4 Months
Diagnosing: The entry point of this diagnosis came during the initial meeting
between the lead researcher and two of Alpha’s managers. The DR manager
provided an overview of the problem and the initial research that he conducted
regarding the use of agile methodologies for disaster management, emergency
management, business continuity, and DR. Through his initial research from
Gartner [26] on agile adoption, the DR manager believed that there was an alignment between the needs of DR orchestration and agile methodologies; however, no references were found in the related literature concerning this relationship.
Action Planning: Following the diagnostic phase, the lead researcher worked
closely with the IT leaders to conduct a systematic exploration of the opportunities
and benefits of applying agile methodologies to Alpha’s DR program, specifically,
the area of DR orchestration. The lead researcher synthesized findings from the
literature and consulted with certified agile coaches to identify best practices for
scaling agile methodologies. As a result, matching concepts between agile meth-
odologies and DR needs were identified and a high-level framework was drafted in
July 2014, which considered the challenges of large enterprises and catastrophic
scenarios. A number of scaled frameworks were identified and carefully examined
(e.g., Leffingwell’s Scaled Agile Framework). Ultimately, the lead researcher deter-
mined that adapting Kanban, an agile methodology, aligned best with the goals of
Alpha’s DR program. Although Kanban implementations may vary in complexity,
the Kanban methodology is based on three basic principles of visualizing workflow,
limiting the amount of work in progress, and managing workflow [1]. Additional
details concerning what the methodology entails can be found in the references
provided [1, 38, 45] as well as in the results section. The Kanban methodology fit well with Alpha’s methodological preferences of simplicity, project visibility,
adaptability, and flexibility. Online Appendix E provides a detailed justification for
choosing a Kanban-based methodology for DR orchestration. In response to the
research presented, the DR manager found an open-source Kanban board online, which he familiarized himself with in preparation for use during a disaster. Table 3 illustrates an example of how the perspectives of the researcher and practitioner
worked toward a mutual understanding. We discuss how the development of an
agile framework for DR was perceived and solved from a practical and research
perspective using the framework from Mårtensson and Lee [30]. Online Appendixes
F and G include additional illustrations.
Table 3. Developing an Agile Framework for Disaster Recovery
Practitioner’s perspective:

The practitioner saw the DR practice as an agile practice in the sense that DR requires a unified effort and adaptation to change.

From the DR manager’s perspective, the need for agility was self-evident to those who were heavily involved in DR efforts. Thus, he saw agile principles being overlaid onto an existing DR framework. Therefore, he challenged the researcher to gain an understanding of COBIT, ITIL, and other IT frameworks, which he thought would help ground the agile DR framework. These IT frameworks were considered best practices for IT processes by Alpha’s IT managers.

The DR manager showed the lead researcher a framework for DR orchestration that he had developed. Although he struggled to explain the inherent agility of the DR practice to the other IT managers, he felt that the agile project management literature would contain existing frameworks and the vocabulary to articulate what he had trouble communicating.

Researcher’s perspective:

The researcher saw the DR practice as one that could adapt waterfall, agile, or other project management methodologies [40].

From the researcher’s perspective, agile methodologies could be applied to DR to improve DR orchestration at Alpha. However, he saw a potential problem with introducing agile methodologies in Alpha’s DR practice. Attempting to fit agile approaches into traditional IT governance and service frameworks is problematic, because agile principles favor working software over comprehensive documentation [7]. According to the extant literature, the incompatibility between an organization’s cultural assumptions and the assumptions built into the methodology has negative impacts on implementing process improvements [9, 35].
Dialogue and Action
In the reflective dialogue between the researcher and the practitioner, they discussed their
different conceptualizations of applying agile methodologies to DR. From the practitioner’s
perspective, the agile framework needed to be comprehensive and provide clarification
concerning the stakeholder roles of a DR orchestration in a large enterprise, based on
industry best practices as defined by COBIT and ITIL. Such a framework would be respected
by Alpha’s managers who scrutinized the idea of applying agile approaches to DR. From the
lead researcher’s perspective, the agile practice aims to develop simple, lean frameworks
that guide organizational processes with flexibility. Because of the practitioner’s desire to
capture a high level of detail, the researcher suggested taking an incremental approach to
implementing agile instead of attempting to adopt a feature-rich agile methodology
wholesale. The practitioner found the Kanban methodology to be simple as the researcher
suggested, yet scalable enough to include important DR roles and responsibilities. Thus,
these two perspectives were merged into a mutual understanding through reflective
dialogue [30].
Notes: Although reflective dialogue between the practitioner and the researcher occurred throughout the
research cycle, we limited the number of reflective dialogue illustrations to three because of space
limitations.
DISASTER RECOVERY OF INFORMATION SYSTEMS 643
Action Taking (t = 1): On August 9, 2014, the secondary data center of Alpha
experienced an unexpected power loss during the testing of generators conducted by
the hosting provider. Immediately, the DR plan was activated and teams were notified
of the event, but the DR manager confronted some serious challenges. As he was
assessing the damages caused by the power loss, he was bombarded with requests for
updates. The chief information officer (CIO) wanted to know how long it would take
to get the systems up and running. The chief operations officer (COO) wanted to know
when operations would resume. The human resources (HR) director wanted to know
whether to tell employees to report to work the next day. The technical teams wanted
to know what the next steps were and where they could find team members to replace
those who were affected by the disaster. As calls, e-mails, and text messages poured
in, the DR manager realized that following the DR Plan in its original form was in
jeopardy because of the increasing uncertainty and reports concerning the extent of the
damages. Not only was the internal phone system down, but also cell phone reception
was poor in the data center. Moreover, e-mail was rendered unusable because alerts
generated by Alpha’s system monitoring service clogged the team’s inboxes.
In an attempt to streamline communication, the DR manager implemented Kanban
boards via an open-source Kanban application, in parallel with the set recovery
sequences established for the recovery of the secondary site. This application allowed
the DR manager to upload the recovery sequences from the DR cadence into a project
backlog from which work items were reprioritized as new information became
available. Although the open-source application was restricted to a few users, soon
separate calls, texts, and e-mails were replaced by a single Kanban board, which
helped those involved to visualize the workflow while maintaining situational aware-
ness across the IT department. The use of Kanban was limited to four participants and
the lanes of work (Kanban columns) were broken down into eight categories as shown
in Figure 1. The columns listed the actual tasks in the recovery sequences for the
secondary data center. Members of the DR team completed work items in accordance
with their roles and responsibilities. During the recovery, the DR manager added the
“Critical Activities” column for work items that needed special attention. As hardware
failures were discovered during recovery, recovery sequences were reevaluated and the
DR orchestration adjusted within the Kanban board. After all pertinent work items
were completed, power was restored to all systems.
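The board mechanics described above — a reprioritizable backlog, ordered lanes of work, and a “Critical Activities” column added mid-recovery — can be sketched as a minimal data structure. This is an illustrative model, not Alpha’s actual tooling; the task names and initial column set are hypothetical (the board’s actual eight categories are blurred in Figure 1).

```python
from collections import OrderedDict

class KanbanBoard:
    """Minimal model of a recovery board: ordered columns holding task names."""

    def __init__(self, columns):
        self.columns = OrderedDict((name, []) for name in columns)

    def add_task(self, task, column="Backlog"):
        self.columns[column].append(task)

    def move(self, task, to_column):
        # Remove the task from whichever column currently holds it,
        # then append it to the target column (reprioritization).
        for tasks in self.columns.values():
            if task in tasks:
                tasks.remove(task)
                break
        self.columns[to_column].append(task)

    def add_column(self, name, position=None):
        # Columns can be introduced mid-recovery, as with "Critical Activities".
        items = list(self.columns.items())
        position = len(items) if position is None else position
        items.insert(position, (name, []))
        self.columns = OrderedDict(items)

# Hypothetical recovery sequence loaded from the DR cadence into the backlog.
board = KanbanBoard(["Backlog", "To Do", "In Progress", "Done"])
for step in ["Verify UPS state", "Power on SAN", "Boot database cluster"]:
    board.add_task(step)

# Mid-recovery: a column is added for items needing special attention,
# and a task is reprioritized as new information arrives.
board.add_column("Critical Activities", position=1)
board.move("Power on SAN", "Critical Activities")
```

The point of the sketch is that the recovery sequence becomes ordinary mutable data: lanes and priorities can be changed on the fly, which a printed DR plan cannot do.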
Evaluating (Record Findings of t = 1): Following prior research, this phase
necessitated the comparison of pre- and post-intervention states [29]. The people,
processes, and technology involved in the DR effort were evaluated. In addition,
qualitative feedback was solicited from key informants in the form of postmortem
questionnaires and interviews. For instance, a one-hour formal interview was con-
ducted with the DR manager shortly after the power loss in order to understand how
useful the Kanban methodology was in orchestrating the recovery of the system.
Findings were discussed and recorded. Overall, the outcome of the intervention
seemed positive (as illustrated in Online Appendix F). Following the DR event, the
DR manager described the benefits of using Kanban to Alpha’s management: “The
644 BAHAM, HIRSCHHEIM, CALDERON, AND KISEKKA
Kanban Boards give the opportunity for resource optimization and alignment of
effort, allowing leadership to engage in a meaningful way, for the agile adaptation of
priorities, and to clearly communicate at a glance to all involved.”
Specifying Learning: While learning was ongoing across all phases, we specify
knowledge gained during this final phase. First, from the researcher’s perspec-
tive, not only did the use of agile methodologies seem to match DR needs, as the
researchers conceptualized from the literature, but the severity of the DR scenario
motivated the DR team’s sense of urgency. We learned that while the ability to
adapt to change and collaborate effectively were important in overcoming the
classic challenges of traditional approaches, a noteworthy advantage of applying
agile methods (Kanban) to DR was the facilitation of continuous delivery [1],
which was key for an efficient DR orchestration. Specifically, team members
found that the use of a web-based Kanban board improved project visibility,
monitoring, and the overall focus of effort in a DR scenario. Second, the use of a
web-based Kanban board overcame the spatial boundaries of using a physical
board. As such, a web-based Kanban board was found to be appropriate for the
distributed nature of the DR activities in a large enterprise, as not all pertinent
stakeholders were able to assemble around a single physical board in the after-
math of a catastrophic event. Third, each DR scenario presents unique chal-
lenges. Therefore, the methodology would have to strike a balance between
providing the guidance needed for a disciplined delivery and also being able to
adapt to different scenarios.
Figure 1. Kanban Board—Unexpected Power Loss (sensitive information blurred for confidentiality).

From the practitioner’s perspective, the DR manager found that Alpha’s printed
DR Plans were not as flexible as the Kanban boards to adjust to the realities of
the disaster. Thus, the value of the plans was limited to developing the initial
project backlog, while the Kanban boards were useful in orchestrating the recovery.
Additionally, the DR manager identified incidents during the recovery that
had not been clearly identified, for which exceptions or workarounds had to be
developed. Incidents in the critical path of the recovery had to be resolved
through a workaround. Resolution for incidents that were not in the critical
path could be postponed, generating an exception in the recovery sequence. An
example of an incident is a hardware failure such as a failed drive; failed drives that
were not in the critical path of the recovery were handled through normal
operations. As a result, the DR manager decided to make the following changes
to the application of the methodology. First, the Issues column was added to
capture all the tasks that would need to be executed for each incident. The DR
manager opted to use the Issues column to prevent confusion between DR
activities (Issues) and the Information Technology Infrastructure Library (ITIL)
operational activities (Incident). Once all DR tasks were completed, the identified
issues were entered into the IT Service Management System for resolution
through normal operations. Second, the Critical Activities column was moved
next to the To Do column, so participating teams could maintain focus on the
mission.
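The triage rule described above — immediate workarounds for critical-path incidents, deferred exceptions for everything else, with the Issues column handed off to the IT Service Management system once recovery completes — can be sketched as follows. The incident names and helper functions are hypothetical, not Alpha’s actual procedures.

```python
def triage_incident(incident, critical_path):
    """Route an incident discovered during recovery: critical-path incidents
    need an immediate workaround; the rest become exceptions deferred to
    normal operations (ITSM) after the recovery."""
    if incident in critical_path:
        return "workaround"      # resolve now to keep the recovery moving
    return "defer_to_itsm"       # track in the Issues column, hand off later

def close_out_recovery(issues_column, itsm_queue):
    # Once all DR tasks are complete, identified issues are entered into
    # the IT Service Management system for resolution through normal operations.
    itsm_queue.extend(issues_column)
    issues_column.clear()

# Hypothetical example: one critical-path incident, one deferrable one.
critical_path = {"restore SAN controller"}
routing = triage_incident("failed drive in spare array", critical_path)
issues, itsm = ["failed drive in spare array"], []
close_out_recovery(issues, itsm)
```

Keeping the two streams separate mirrors the DR manager’s distinction between DR activities (Issues) and ITIL operational activities (Incidents).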
Overall, implementing a visual artifact (e.g., a Kanban board) to guide
workflow led Alpha’s IT departments to feel more comfortable with using an agile
approach for future DR events. After the systems were restored, the company began
to critically discuss the advantages of using a Kanban system compared
to traditional DR approaches.
Second Cycle, 1 Month
The perceived success of the first intervention led to Alpha’s adoption of Kanban for
its next DR effort. The initial four-month client-research agreement was extended for
another month. The next event involved using Kanban to orchestrate the recovery of
Alpha’s IS after a planned shutdown of its entire system.
Diagnosing: On September 20, 2014, the replacement of one of two faulty automatic
transfer switches (ATS) required a full data center and corporate phone systems
shutdown for approximately five hours. Alpha had little to no lag time to replace the
ATS given the current threat profile of forecasted thunderstorms that could cause an
unexpected power loss. While Alpha maintains N + 1 redundancy through two
power switches, at this time both switches were faulty and required immediate
repair.
Action Planning: After soliciting feedback from the IT managers involved in the
systems recovery effort using Kanban, the DR manager sought a Kanban solution that
was embedded in one of the company’s existing systems. Meanwhile, he learned about
Trello [49], a software platform with Kanban functionality that was being piloted in
another department. Trello offered a few enhancements over the open-source
application used during the first event. These enhancements included: (1) the ability to add
vivid colors to differentiate tasks from one another, (2) a chat feature that allows team
members to communicate with one another, (3) the ability to install on a mobile
device, and (4) the ability to add more users. The DR manager decided to use Trello
[49] as a secondary backup to the notification system. He engaged IT
managers before the DR event began, and asked them to download the mobile version
of the application. The night before the event, the Kanban boards were loaded into
Trello using a modified version of the recovery steps listed in the DR plan. No Trello
training was provided because the application was found to be intuitive to the users
and familiar to those with prior experience with Kanban.
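Loading the recovery steps into boards the night before can be sketched against Trello’s public REST API, which creates a card via `POST /1/cards` with `idList`, `name`, and `pos` parameters authenticated by `key` and `token`. This is a hypothetical loader, not Alpha’s actual script; the list ID and credentials are placeholders, and the HTTP call is injected so the sketch runs without network access.

```python
API_URL = "https://api.trello.com/1/cards"

def card_payloads(recovery_steps, list_id, key, token):
    """Turn DR-plan recovery steps into Trello card-creation payloads.
    `pos` preserves the order of the cadence within the target list."""
    return [
        {"idList": list_id, "name": step, "pos": i + 1, "key": key, "token": token}
        for i, step in enumerate(recovery_steps)
    ]

def load_board(recovery_steps, list_id, key, token, post):
    """`post` is any HTTP POST callable (e.g. requests.post), injected so
    the loader can be exercised with a stub instead of a live request."""
    for payload in card_payloads(recovery_steps, list_id, key, token):
        post(API_URL, params=payload)

# Stubbed run: capture what would be sent the night before the shutdown.
sent = []
load_board(["Shut down phones", "Power down SAN"], "list123", "KEY", "TOKEN",
           post=lambda url, params: sent.append((url, params)))
```

With a real client, passing `post=requests.post` would create the cards in order on the prepared board.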
Action Taking (t = 2; Intervene Again Adjusting for Findings Revealed in t = 1):
As Alpha’s systems were shut down, the primary notification system unexpectedly
failed, which made Trello (secondary) the primary notification system. Fortunately,
Trello allowed the teams to share notifications across the leadership and staff,
maintain situational awareness, and allow the DR team to sustain their response
while the primary notification system was down. Users of the Kanban boards under
Trello started to update their own tasks and to document the progress of activities. In
an attempt to further maintain the team’s focus, the DR manager introduced new
columns when needed and removed them when no longer used. Colors were also
used to designate areas of responsibility to allow the teams to easily identify their
tasks. The use grew organically over the seven hours of the power-up sequence, with
over 10 active users and 30 subscribers who remained informed from various
locations. All systems were restored, tested, and validated. The operating environ-
ment was fully restored with no incidents affecting Alpha’s operations on the next
business day. The columns created for the recovery tasks are shown in Figure 2. All
transactions from this event were automatically logged by Trello and were later
reviewed by the teams as part of the DR lessons-learned process.
Evaluating (Record Findings of t = 2): The pre- and post-intervention states of the
second intervention (t = 2) were evaluated, and were adjusted for findings discovered in
the first intervention (t = 1). Again, qualitative feedback was solicited from key
informants including interviews with the DR manager, IT-service management director,
and IT Network Operations Center. Findings were discussed and recorded. Overall, the
adjustments from what was learned during the first intervention, which led to the use of a
more robust software platform, seemed to positively influence outcomes and improve
the DR orchestration. For example, the Network Operations Center supervisor com-
pared the use of the Kanban methodology to traditional DR approaches:
Overall, the time the PM [project manager] inputs into actually creating each
task and board are minimal because of the time savings from traditional
methods of manually keeping everyone updated and follow-up if issues
arise. The tool allows the PM to focus on keeping the upcoming tasks on
track, which helps the project progress smoothly.
We elaborate further on the improvements made over time in the next section.
Specifying Learning: While learning was ongoing across all phases, we specify
knowledge gained during the second cycle. First, from the researcher’s perspective,
engaging IT managers before the DR event began allowed them to familiarize them-
selves with the Kanban system. This allowed each department to load its own work
items, which encouraged team autonomy and responsibility. The combination of more
IT manager involvement, the use of the visual Kanban board, and the chat features in
Trello encouraged project visibility, collaboration, coordination, and thus a more
efficient DR orchestration. From the practitioner’s perspective, the DR manager’s
reuse of the Issues column was helpful for tracking incidents. Fortunately, Trello’s
chat feature provided central communication for the team, insofar as many members
had poor phone reception inside the data center. Thus, the DR manager was able to
orchestrate the recovery more efficiently through the Kanban system within the soft-
ware. After reflecting on the implementation, the practitioner determined that the
addition of a chat feature in one of Alpha’s newest software tools was the only
appropriate change needed for the software to function like Trello. Since this feature
was due in an upcoming software release, the DR manager concluded that no further
methodological changes were needed. After the completion of the second cycle, several
senior managers, including former skeptics, lauded the use of our Kanban methodology
and encouraged the team to present the findings of the project to all the IT managers.
During the presentation, the vice president of Enterprise Infrastructure stated:
“I have been with this company for 25 years, and I can say that this was the
smoothest process we have ever had during an outage.”
Online Appendix G illustrates an example of how the problem of presenting the
findings was solved.
Figure 2. Kanban Board—Replacement of the Automatic Transfer Switches (sensitive information blurred for confidentiality).
Agile Methodologies and Disaster Recovery Orchestration
Our research question is: How can the use of agile project management methodol-
ogies improve disaster response and recovery (orchestration) efforts? Following the
use, reflection, adaptation, and reapplication of our Kanban methodology during two
DR scenarios, we found that the use of agile methodologies can improve disaster
response and recovery (DR orchestration) efforts. Our results show that agile
methodologies improve disaster response and recovery efforts by improving adapt-
ability, situational awareness, orchestration efficiency, focus of effort, and overall
communication. In addition, the results suggest that the agile principles of project
visibility and continuous delivery also play an important role in DR orchestration.
Thus, our findings suggest a strong match between the application of the Kanban
methodology and the DR contexts. Part of the AR process is to finish with lessons-
learned principles [17]. These principles would provide the initial starting point for
further AR projects.
Operationalizing Kanban Methodology Principles
As previously noted, the Kanban methodology is based on three basic principles of
visualizing workflow, limiting the amount of work in progress, and managing
workflow [1]. We discuss these principles in relation to our DR implementation.
Kanban Principle #1—Visualize the Workflow: The implementation of a visual
artifact in the form of a Kanban board helped fulfill the DR team’s need for
improved situational awareness as well as project transparency and visibility. First,
the ability to provide situational awareness by providing frequent status updates as
the work progresses was an important factor in the success of using an agile
approach to DR. The Kanban board helped the DR team maintain a shared common
operating picture and a reasonable level of situational awareness. Regarding the use
of agile methodologies to improve situational awareness, the Network Operations
Center supervisor explained:
The [Kanban] board helps with a visual of the multiple tasks assigned and
interdependencies. It provides updates immediately if there are any issues that
arise. Because everyone assigned to the project/board can see these updates, you
instantly have all the support team available to help, which reduces time to
resolve the issues, which help[s] streamline our processes. Before the [Kanban]
board, this was a function of the project manager to contact and gather the
appropriate team members together should an issue come up which is a longer
process and takes longer to resolve issues with a traditional cadence project.
Second, project transparency and visibility [32] were emergent themes during our
implementation. Our adaptation of Kanban provided a platform for improving
situational awareness and focus of effort. The team was able to reap the benefits of
greater project visibility as noted by the Network Operations Center supervisor:
“[This process] was overall a great experience. The advantages of this process were
the ability to ‘login from anywhere,’ [which] allowed me to view the near real-time
status of the project from any device via Internet and the ability to provide feedback
and ask questions in near real-time.”
One of the IT managers added:
Just want to drop a note. Saturday ended up to be a very smooth power outage,
in my opinion. I was able to keep it up by logging into Trello several times
Saturday. Even without previously going over the cadence details, I was able
to guess from the task updates in Trello and let my team know that the
validation is coming (I guessed accurately two hours prior and sent the
notification to my team to stand-by).
Kanban Principle #2—Manage/Enhance Flow: Efficient Orchestration. Once the
steps for recovering the complex IS of Alpha were loaded into the software tool, the
DR manager was able to orchestrate the recovery and adjust the sequence as
necessary. Team members were able to see their assigned tasks and execute them
using Kanban’s “pull” system. Team members also had the ability to move their
tasks through the Kanban system and update their progress accordingly.
Collaboration and coordination were simplified by the Kanban board, which dis-
played the work backlog, the work in progress, and completed work. Because the team
used the communication feature within the tool, the usually cluttered phone line stayed clear,
enabling the DR manager to provide situational awareness to specific stakeholders in
a timely fashion. The Kanban tool improved upon previous efforts by enhancing the
communication between the business and project teams. The DR manager commen-
ted on the use of a Kanban board in improving the flow and team efficiency with the
following statements:
When I define efficiency, I look at the critical path to full recovery. I look at
the recovery time objective of our production environment. In this case, it
would be the recovery time capability you have based on the people, pro-
cesses, and the hardware that you have in place. The critical path for us kept
on growing because we could not coordinate the efforts. Having the board
allowed us to maintain the focus of what was the next critical step and this
focus was provided from the person doing the work, the management level,
and upper management—maintaining everybody aligned and informed with
what we are trying to accomplish. That allowed us to reduce the timeline
further even.
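The pull system described above — members take the next item matching their area of responsibility rather than waiting for assignment — can be sketched as follows. The role tags stand in for the color-coded areas of responsibility, and the task names are hypothetical.

```python
def pull_next(to_do, member_role):
    """Kanban 'pull': a team member takes the first To Do item matching
    their area of responsibility, rather than being pushed work by the
    DR manager, and moves it into progress."""
    for task in to_do:
        if task["role"] == member_role:
            to_do.remove(task)
            task["status"] = "In Progress"
            return task
    return None  # nothing available for this role yet

# Hypothetical backlog tagged by area of responsibility.
to_do = [
    {"name": "Validate network core", "role": "network"},
    {"name": "Start database cluster", "role": "server"},
]
task = pull_next(to_do, "server")
```

Because each member self-selects work, the DR manager can stay focused on adjusting the sequence rather than dispatching individual tasks.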
Quality Adaptation. Adaptation and agility were important factors of the DR team’s
success. Two aspects of agility were observed during both DR events. First, the
ability of the team to respond to environmental changes and quickly alter its course
of direction played a pivotal role in the orchestration process. During the
postmortem after the first DR event, the DR manager pointed to the team’s ability to
“change the cadence on the fly and to adapt to some of the realities that they were
seeing in the data center that we did not expect,” which led to a successful DR effort.
The ability to edit the Kanban boards played a key role in the team’s ability to
respond to environmental changes and continue to maintain the flow of the recovery.
The DR manager elaborated:
The visual boards allowed us to quickly change activities around and being
able to highlight the critical ones to all available. We had a column labeled as
critical that we used to do this. The people that were using the tool were able
to hone in on this. We were able to clear roadblocks that way, providing
visibility to upper management of our efforts. So that went very well.
Second, the ability to quickly implement the methodology itself proved to be an agile
process as well. Before the second event, the DR manager was able to set up a Kanban
board prior to the DR event and had all pertinent stakeholders access the tool successfully.
Using the tool, changes were managed systematically, users provided status reports, and
efforts were communicated in a manner that was visible to the entire team. When asked
about the team’s ability to adapt to change, the DR manager explained:
When we lost power unexpectedly on our secondary data center, we did not
have anything in place, the systems crashed. We had our detailed recovery
sequences ready to recover, but the testing of the generators was going to take
3 more hours, so we had a few hours to plan. That is when I decided to go
ahead and implement the visual board, mostly for my own sake—to be able to
manage the cadence in a more malleable way.
We were able to create task on the fly. We were able to remove and modify
task and break dependencies, and all of that we could do in a very malleable
way without having to make changes on a plan that we could not have shared
because our systems were down.
Singular Focus. The use of a Kanban board improved the team’s ability to maintain
a singular focus. During the second event, the DR manager used the chat feature on
Trello to communicate easily with the team members by alerting them whenever it
was time for them to act in accordance with the DR cadence. In similar previous
events, the DR manager was being bombarded with different types of communica-
tion from different types of stakeholders simultaneously. Calls (mostly voice mail
since phones were down), e-mails, and text messages would pour in. The result was
that the DR Plan in its original form became obsolete as the DR manager started the
task of orchestrating the recovery, identifying which personnel or vendors were
available to assist in restoring the IS, and gaining a better understanding of the
impact to the technology and operation. The single point of focus minimized the
normally chaotic event by providing a centralized hub for all the important informa-
tion concerning DR orchestration. As a result, the teams were able to focus on the
critical items as they saw them enter the queue. In addition, information concerning
any changes that were made were posted and shared across the board, so the teams
had awareness of those changes. The software provided the additional capability of
tracking the history of activities as the work proceeded. The DR manager explained:
We continued the flow of the cadence. Then the server team got to their part
of the cadence. They were very active in self-managing the steps. I had
created the up next lane to give 15 min. warning to the teams. The app gave
text notifications on their phone. Corporate phones systems were still not
available . . . system by system—until we started following the normal flow
that you would follow on a recovery event, but they continued to use the
boards.
Overall, our results suggest that the use of Kanban may improve adaptability,
situational awareness, orchestration efficiency, communications, focus of effort,
project visibility, and continuous delivery in disaster response and recovery
efforts.
Modifications
Although Kanban principles 1 and 2 were found to fit our DR context relatively
well, we found principle 3—limit work in progress—less applicable to DR.
Kanban Principle #3—Limit Work in Progress: The application of Kanban in
manufacturing sometimes uses Kanban cards, which signals to other team members
that a given team has additional capacity and is ready to pull in more items [38].
However, in our DR context, teams were signaled by notifications from the DR
manager according to the predefined cadence. In addition, most tasks were one-time
actions rather than stocks of duplicate items. Therefore, we did not attempt to signal
in the traditional sense of using a card system. Further, in ISD, work in progress (WIP)
limits are put in place to prevent bottlenecks. We removed WIP limits for two main
reasons. First, WIP limits are based on cycle times, which match the amount of WIP to a
team’s capacity. Cycle times are determined during early iterations of an ISD
implementation with a set team. In contrast, our DR efforts were one-time,
continuous efforts in which team members were constantly changing. Thus, it was
more advantageous to focus on enhancing flow through managing the cadence than
measuring unstable cycle times. Second, because of the uncertainties previously
discussed, the team was unable to determine the optimal WIP limit. Should it be
three, five, seven, or more? Instead, the DR manager allowed the team to pull in as
many work items as they felt comfortable with during a particular stage of the
recovery. In the end, we were able to divide the Kanban board into separate lanes for
each team using Trello, while allowing the DR manager to orchestrate the recovery
according to the preestablished cadence.
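The contrast between classic WIP-limited Kanban and the unlimited pull adopted here reduces to a single guard condition. A minimal sketch, with the limit value purely illustrative:

```python
def can_pull(in_progress, wip_limit=None):
    """Standard ISD Kanban gates new work on a WIP limit derived from cycle
    times; with the limit removed, as in the DR adaptation described above,
    the pull is always allowed and flow is managed through the cadence."""
    return wip_limit is None or len(in_progress) < wip_limit

# Five items already in progress, against a hypothetical limit of three.
in_progress = ["task-%d" % i for i in range(5)]
classic = can_pull(in_progress, wip_limit=3)  # blocked by the WIP limit
dr_mode = can_pull(in_progress)               # no limit: team pulls freely
```

Dropping the guard trades bottleneck protection for flexibility, which fit a one-time recovery with unstable cycle times and a changing team.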
Reflections on the Use of AR in DR
Upon reflecting on the details of our study, we believe that it is important to
highlight some of the key lessons we learned by using AR, particularly as they
relate our contribution to the following elements of this Special Issue: Discussions of
relationships between information systems researchers and research clients (e.g.,
DR practitioners and executives) demonstrating how action research can lead to
improved organizational situations.
First, we learned that AR enabled the researchers to gain trust with the stakeholders
through our visibility and knowledge contribution. By employing AR, we were able to
better understand and appreciate the company environment, which led to a much richer
and more robust dialogue with those involved in the DR project and in turn helped us in the
fulfillment of our research objectives. In particular, we observed that the direct knowledge
contribution of our framework for DR increased the relevance of IS research in the eyes of
Alpha’s practitioners. In addition, finding support in the extant literature, our framework
provided a more scientific approach to problem solving than traditional organizational
consulting, which is motivated more by commercial benefit than science [4]. In our study,
AR allowed us to examine Alpha’s DR practice within the constraints proposed by the
company. The constraints of the project, namely, its limited contract duration (initially four
months, later extended to five months) and its limited access to stakeholders, were
challenges likely to have been insurmountable using quantitative methods. The lead
researcher’s presence was felt during day-to-day operations and department meetings,
and led to his contributions being valued. Trust was fostered, which led to an extension of
the original contract, from four months to five months. AR may be a viable starting place
for academics seeking to establish the relevance of their work with organizations in which
they have not established legitimacy. In our case, the commitment to deliver business value
was more effective in establishing a relationship with Alpha’s stakeholders who were not
fond of completing surveys and had had no specific dealings with academics in the past.
Second, we learned that AR could be useful for developing new frameworks in
conjunction with practitioners. This is especially useful in areas where IS knowledge
is limited. In our study, we combined a more classic, canonical AR cyclic framework
with dialogical AR. This relatively new combination was useful, as we sought to
adapt and test the effectiveness of a framework commonly found in one IS domain
in a different IS domain, and investigate an area in which we had little specific
domain knowledge. The dialogical AR approach allowed us to leverage the DR
expertise of the practitioners and the research expertise of the academicians, who had
no a priori knowledge or hypotheses that could be used to solve the practical
problem presented in this study. During this study, we found that not only were
agile methodologies compatible with DR, but in many ways the DR teams showed
little resistance to agile approaches, which appears to run counter to the ways many
software development teams respond to new methodologies. AR allowed us to
collaborate with the practitioners using one-on-one dialogue, which enabled us to
evolve a framework that solved a practical problem and was theoretically relevant.
This collaborative process not only aided our understanding of Alpha’s corporate
context and the work conditions of its stakeholders, but this also gave us the
opportunity to revise our framework as we received new information. Overall, AR
provides a mechanism for more collaborative work with practitioners and the vetting
of an emergent theoretical model in a real-world setting. We believe that this kind of
research approach will be valuable in many settings and wish to promote its
acceptance in the IS research domain.
Third, the use of AR was not without its challenges. While rewarding, we found
conducting AR to be more time consuming than traditional quantitative methods. The
lead researcher not only had to interact with practitioners multiple times a week to
develop a framework but also needed to invest significant amounts of time under-
standing the organizational environment and its challenges. These interactions did not
lend themselves to the type of unbiased, objective, and distant examination of phenom-
ena that is so prominent within IS research. Instead, the researcher became recognized as
a “part of the team,” which was necessary to understand the organization and gain
further access. Furthermore, using AR is not the straightforward, template-driven, rule-
following research approach associated with the orthodox functionalist methods. It is
complicated, involves the need to be creative and adaptable in its application, and
requires both the researchers and practitioners to trust one another. As we noted
above, AR can facilitate the building of such trust; but concomitantly, it can blur the
distinction between “fact” and “opinion.” Furthermore, using AR within the confines of
this single organization leads to wondering whether the “results of the research” are
generalizable to other organizations, or are simply the product of this specific DR
experience. While we firmly believe that AR generated useful insights about DR, how
well these insights will transfer to other DR environments is, of course, an open question.
Theoretical Contributions
The effectiveness of dialogical AR can be evaluated based on whether (1) the
practitioner considers the real-world problem facing him or her to be solved or
satisfactorily remedied, (2) there is an improvement in the practitioner’s expertise,
and (3) there is an improvement in the scientific researcher’s expertise [30]. We have
demonstrated the effectiveness of our AR approach, which we will draw upon to
discuss our theoretical contributions. Beginning with our theoretical lens (see Table 1),
we summarize how our approach of AR in the context of DR impacts the initial AR
cycle. This helps us to illustrate the theoretical contributions of our work more clearly.
First, we have shown how our AR approach helped us address the unique needs
of Alpha's DR program and, consequently, how agile methodologies can improve DR
orchestration. We were able to address four fundamental needs of the DR practice,
identified in the literature, through agile principles. Agile principles provided a
theoretical lens to examine Alpha’s DR program in the sense that it not only helped
to describe DR needs but also questioned and identified contradictions in prior
654 BAHAM, HIRSCHHEIM, CALDERON, AND KISEKKA
approaches that led to subpar results [29]. As prior research points out, the fit among
the elements in a DR methodology may affect recovery efforts [8]. The Kanban
methodology provided visibility to work tasks while not requiring prescriptive roles
and responsibilities that may not be available during a disaster. Our results suggest
that agile principles are more compatible than those found in traditional C2 meth-
odologies in addressing DR needs of adaptability, situational awareness, orchestra-
tion efficiency, and focus of effort (as shown in Table 4). Both our cases show that
the fit among DR people, agile processes, and agile technology (i.e., electronic
Kanban board) positively affected recovery efforts.
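The coordination that the electronic Kanban board provided can be illustrated with a minimal sketch of a board with per-column work-in-progress (WIP) limits. The class, column names, and tasks below are hypothetical; Alpha used an off-the-shelf tool (Trello), not custom code:

```python
# Minimal sketch of an electronic Kanban board for DR tasks.
# Hypothetical structure, for illustration only; Alpha used Trello.

class KanbanBoard:
    def __init__(self, wip_limits):
        # Columns in workflow order; wip_limits maps column -> max cards.
        # None means unlimited (as when WIP limits were relaxed for DR).
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, task, column="To Do"):
        self._check_limit(column)
        self.columns[column].append(task)

    def move(self, task, src, dst):
        self._check_limit(dst)
        self.columns[src].remove(task)
        self.columns[dst].append(task)

    def _check_limit(self, column):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit reached for {column!r}")

board = KanbanBoard({"To Do": None, "In Progress": 3, "Done": None})
board.add("Restore DNS servers")
board.move("Restore DNS servers", "To Do", "In Progress")
```

The WIP check is what gives the board its flow discipline; setting a limit to `None` mirrors the relaxation of limits discussed in the first action cycle.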
Second, we observed improvements in the DR manager’s expertise (praxis) of exist-
ing agile methodologies, how they could be leveraged in DR, and how their benefits
could be presented to senior managers. For instance, the DR manager, who had little knowledge of agile methodologies at the start of the project, provided a helpful suggestion about how to present agile outcomes to senior managers (see Online Appendix G).
In addition, the DR manager identified the potential improvements that Trello provided
and engaged staff members to familiarize themselves with the software before the
planned outage. These examples demonstrate an improved understanding of agile
methodologies and their application in DR scenarios. Similarly, we observed improve-
ments in the DR manager’s understanding of AR throughout the study.
Third, we used pattern matching to test whether there was an improvement in
the scientific researcher’s expertise [53]. The match between the pattern antici-
pated by the theory refined after the first intervention and the pattern observed in
response to the action of the second intervention constituted an improvement in
the researcher’s expertise [53]. The key refinements applied after the first inter-
vention were relaxing the Kanban WIP limits and using a mobile application for
greater team flexibility. These results, which were accepted by Alpha's executives, support the validity of the researcher's theory that agile methodologies can
improve DR orchestration.
Other contributions relate to the existing literature on DR and agile project
management. Our DR approach adapts and extends Kanban as an agile methodology
for the DR context. Our DR approach addresses complexities introduced by com-
pany size and dependencies related to complex IS. The results of each scenario
indicated that tailoring formalized agile methodologies is appropriate for the DR practice, as doing so addresses many of the shortcomings that persist in traditional
methodologies. The implementation of Kanban served as a proof of concept [36] for
Alpha thus providing empirical evidence to support the parallel between DR needs
and agile principles. To our knowledge, this implementation was the first of its kind.
Thus, our work provides the DR practice with its first methodology that is tailored to
improve the efficiency and effectiveness of DR orchestration using agile project
management. Given the success of our agile implementation in a DR program, our
findings are valuable to both IS and DR practices as they demonstrate a new way to
think about DR orchestration. Using a visual inspired by Malaurent and Avison [29],
Table 5 summarizes our theoretical and practical contributions, which are discussed
in the next section.
DISASTER RECOVERY OF INFORMATION SYSTEMS 655
Table 4. Methodology Comparison

Disaster recovery | Agile methodologies | Waterfall methodologies
Prioritized delivery | Prioritized delivery (adaptive) | Prioritized delivery (plan driven)
Procedures written in a stepwise, sequential format | Light documentation | Heavy documentation
Responding to ongoing emergency conditions | Adaptive to change; responding to changing user requirements | Fixed scope
Coordinating teams | Self-organizing teams | Traditional project management
Role of team leader | Facilitator | Manager
Frequent feedback loops | Frequent feedback loops | Single pass
Cadence | Iterative work cycles; working rhythm | Sequential; linear process
Team communication | Daily standups; collaborative meetings; retrospectives | As needed
Teamwork environment | Collaborative | Siloed
Project visibility | High | Low
Continuous delivery | Continuous delivery | Single delivery
Table 5. Practical and Theoretical Contributions Gained from AR Project

Diagnosing: The formalization of this AR project was a great way to secure the mutual commitment of researchers and practitioners to develop a framework to improve DR orchestration. It was also necessary to understand the causes of the organization's desire for change and to establish the theoretical connection between DR needs and agile principles.

Theory: Agile principles provided an appropriate theoretical lens to examine Alpha's DR program in the sense that it not only helped to describe DR needs but also questioned and identified contradictions in prior approaches that led to subpar results. It also helped to question the consequences of prior DR approaches for Alpha. The Kanban methodology helped to coordinate DR people, processes, and technology throughout a dynamic DR effort.

Action planning: Given the lack of prior working history between the researchers and practitioners, AR was a great way to foster trust between the researchers and practitioners. AR also provided the structure needed for meaningful dialogue and reflection.

Dialogue: In addition, a dialogical AR approach was useful for investigating the DR domain, with which the researchers were unfamiliar. Through the iterations of dialogue and reflection, a Kanban-based agile framework for DR was developed that specified organizational actions after a disaster (1, 2).

Action taking: DR needs were analyzed using agile principles as a theoretical lens, and a new Kanban approach was implemented, first using a web-based application (1) and later using a mobile application (2).

Evaluating: The benefits (1, 2) of our approach to DR and the modifications (1) needed were identified. This led to modifications in Kanban workflow requirements (2).

Specifying learning: On reflection, we learned the following:
● That the DR context enhanced the team's urgency and focus, which helped with throughput in a Kanban system (1);
● How relaxing Kanban WIP limits increased DR efficiency and flow;
● How web (1) and mobile (2) applications can overcome a team's spatial boundaries;
● That familiarizing teams with the application beforehand improves their efficiency when using the application during a live disaster (2);
● How communication mechanisms built into the application overcome issues with traditional means of communication, which may be compromised during a disaster (2); and
● How a more feature-rich and well-designed user interface improved interteam communication (2).

Notes: Modified from Malaurent and Avison [29]. (1) = Action cycle 1; (2) = Action cycle 2.
Implications for Practice
This research also has several practical implications. First, the findings inform
disaster planning managers of the need to incorporate agile principles in DR plans.
Despite the unpredictable and catastrophic nature of disasters, DR practitioners have
focused on creating and applying static plans to meet the needs of DR. The use of
agile methodologies helps planners to conceptualize revisions to the set procedures
commonly found in static plans. For instance, a manager may ensure that multiple
team members are able to complete a single work item in the Kanban project
backlog.
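That planning suggestion can be sketched as a simple check over a DR backlog that flags work items only one person is able to complete (the task and member names below are hypothetical):

```python
# Sketch: verify that each DR backlog item has more than one capable owner,
# so recovery does not stall when some personnel are unavailable.
# All data here is hypothetical, for illustration only.

backlog = [
    {"task": "Fail over database", "capable": ["ana", "ben"]},
    {"task": "Rebuild web tier",   "capable": ["carl"]},
]

def single_points_of_failure(items):
    """Return tasks that only one person can complete."""
    return [i["task"] for i in items if len(i["capable"]) < 2]

print(single_points_of_failure(backlog))  # ['Rebuild web tier']
```

A manager running such a check while drafting the plan, rather than during the disaster, would surface staffing gaps before they become blocking.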
Second, during disaster response, it is not uncommon for DR plans to become outdated and hence ineffective. The use of agile methodologies facilitates revisions to DR plans that enable the response team to adapt to the
unstable, tumultuous disaster conditions. Using Kanban allowed Alpha’s DR teams
to adapt to having both limited personnel and evolving priorities. We recognize that
organizations will not have all their resources available. Therefore, it is essential that
they implement a solution that allows team members to join the DR as the situation
allows.
Historically, DR practitioners have relied on risk management, which by nature is
limited to the known risks that practitioners can anticipate, to address a wide range of
possible scenarios. However, this is not realistic because organizations’ environ-
ments are characterized by constant change due to changing business requirements,
customer needs, and technology advancements. Therefore, we developed our frame-
work to assist DR teams in determining the impacts of a disaster once the risks have
materialized rather than solely analyzing known risks.
Moreover, efforts to anticipate exact DR scenarios have been shown to be nearly
impossible, as recent disasters demonstrate. For instance, recent power outages due
to an unexpected “500-year flood” [52] in Baton Rouge, Louisiana, caused a wide-
spread AT&T cell outage affecting over 50,000 homes [21]. In light of such
disasters, we recognize that change and adaptability are requirements for every DR
effort and provide the DR practice with an agile approach.
Third, the use of agile methodologies in DR provides the ability to make updates
in real-time, which increases the likelihood of a successful recovery. Agile DR
methodologies provide the flexibility needed for effectively coping with the after-
math of an unplanned disaster. These methodologies may be applied to any disaster
scenario, including cybersecurity attacks. For instance, Britain’s National Health
System was recently the victim of a cybersecurity attack that paralyzed operations
across several hospitals [16]. An agile DR approach could be applied to this type of
disaster as follows: The web-based Kanban boards would need to be created; the
boards would detail specific information pertinent to the attack, such as a list of affected systems; the severity of impact to each system and to hospital operations (e.g., processes affected, number of users); the DR team and their responsibilities; and
contact information of stakeholders. The Kanban boards would need to be updated
throughout the DR with minute-by-minute details of the attack, resource needs and
allocations, as well as new incidents and their severity. This would facilitate situation
awareness, visibility of the overall DR process, and timely communication between
responders.
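As a rough illustration of the card contents described above, the following sketch shows one way an incident card and its minute-by-minute update log might be structured. The field names and values are illustrative assumptions, not part of the study:

```python
# Hypothetical sketch of an incident-response Kanban card for a
# cyberattack DR scenario; field names are illustrative, not prescriptive.

from datetime import datetime, timezone

def make_incident_card(system, severity, processes_affected,
                       users_affected, team, contacts):
    return {
        "system": system,
        "severity": severity,                   # e.g. "high", "medium", "low"
        "processes_affected": processes_affected,
        "users_affected": users_affected,
        "dr_team": team,                        # members and responsibilities
        "contacts": contacts,                   # stakeholder contact info
        "updates": [],                          # minute-by-minute log
    }

def log_update(card, note):
    """Append a timestamped note, supporting situation awareness."""
    card["updates"].append((datetime.now(timezone.utc).isoformat(), note))

card = make_incident_card("patient-records", "high",
                          ["admissions", "pharmacy"], 1200,
                          {"lead": "on-call sysadmin"},
                          {"security officer": "x1234"})
log_update(card, "System isolated from network")
```

Keeping the update log on the card itself is what gives responders a shared, timely view of the attack as it unfolds.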
Other implications relate to the disciplines of DR and project management. Given
that DR approaches are heavily based on those of project management, the results of
this study hold significant implications for both disciplines. The use of agile
principles in DR broadens the application of agile methodologies from a project
management perspective. The DR practice presents unique challenges for project
management and highlights the need for project management frameworks that
address continual readiness in a variety of ways. Our results suggest that the
emergent factors of project visibility and continuous delivery play an important
role in DR orchestration where teams with interchanging members work together
in a onetime effort. Similar to the shift from waterfall to agile ISD, companies that
develop project management systems should expand the features of such systems to
accommodate the unique challenges of DR scenarios.
Limitations and Recommendations for Future Research
Like any study, ours has several limitations. However, these limitations offer future
research opportunities. First, we only tested our DR agility framework at one U.S.
site within a single industry. Our adaptation of Kanban involved the tailoring of an
agile approach to fit the needs of a specific DR program. Future research can test our
methodology in multiple sites, contexts, or industries, which will allow for between-
case analyses. Future investigation is needed in order to understand the differences
in DR agility between firms of different attributes. Second, our study also contained
a limited number of participants. Although we were able to garner feedback from a
variety of IT managers, future studies could examine the phenomenon by including a
more comprehensive set of stakeholders. Third, our project teams were limited by the security requirements of Alpha, a highly privatized company. Teams were not able to load real information (e.g., actual server names and locations, employee information). Instead, they worked using a codified language. Given the sensitive nature of the information involved in DR activities, Alpha codified all server names, information on people, and proprietary information to avoid potential exposure.
Bringing the Kanban and agile tools in-house, a controlled environment, would
allow teams to fully integrate the solution into the DR orchestration activities and
further define best practices around other operational frameworks such as ITIL or
COBIT. Fourth, we answered the research question we proposed here by adapting a
simple, agile methodology for DR orchestration, primarily by visualizing the work
through a Kanban board. However, the software development practice has expanded
the Kanban methodology by introducing metrics to chart team progress and inte-
grating features from other agile methodologies like Scrum. Future research could
examine ways of enhancing agile DR orchestration by introducing agile techniques
such as scrum meetings and sprint retrospectives. In addition, the use of more
sophisticated tools might enable future teams to respond more efficiently. We
originally conceptualized using multiple Kanban boards in a client/server type of
relationship; however, we were not able to identify a software tool that would allow
us to test such a model. Future research could evaluate the effectiveness of scaling
the implementation described here using multiple Kanban boards. Despite these
limitations, we were able to explore the fit between DR needs and agile project
management methodologies, and to determine how agile methodologies could
improve disaster response and recovery efforts. The results suggest that agile
methodologies improve disaster response and recovery efforts by improving adapt-
ability, situational awareness, orchestration efficiency, focus of effort, and overall
communication. The results also suggest that the agile principles of project visibility
and continuous delivery also play an important role in DR orchestration. In conclu-
sion, this study extends the extant literature and assists future researchers in under-
standing DR orchestration.
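The multiple-board, client/server idea raised in the limitations could be sketched as a parent board that rolls up the status of per-team child boards. This is a purely hypothetical design; as noted above, no existing tool allowed the authors to test such a model:

```python
# Hypothetical sketch of scaling DR orchestration to multiple Kanban
# boards: a parent board aggregates per-team child boards, showing each
# team as "Done" only when all of that team's tasks are done.

class Board:
    def __init__(self, name):
        self.name = name
        self.tasks = {}              # task -> column name

    def set_status(self, task, column):
        self.tasks[task] = column

def aggregate(parent, children):
    """Roll each child board up as one card on the parent board."""
    for child in children:
        done = all(col == "Done" for col in child.tasks.values())
        parent.set_status(child.name, "Done" if done else "In Progress")

net = Board("network-team"); net.set_status("restore VPN", "Done")
db = Board("database-team"); db.set_status("fail over", "In Progress")
master = Board("DR-master")
aggregate(master, [net, db])
# master.tasks == {"network-team": "Done", "database-team": "In Progress"}
```

Future work evaluating such a scaled implementation would need to decide how conflicts and cross-team dependencies propagate between boards, which this sketch deliberately omits.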
Supplemental data for this article can be accessed on the publisher's website at https://doi.org/10.1080/07421222.2017.1372996
NOTES
1. We acknowledge that the term “information systems” broadly pertains to people,
processes, and technology that are used to handle and interpret information. However, the
term is also used in a restrictive sense to refer only to the computer networks, systems, and
software used in an organization. We adopt the latter approach in this study.
2. We are aware that there are many types of disasters, such as natural disasters, man-made disasters, and onset disasters. In this study, we focus on disasters at the organizational level.
3. The lead researcher was contacted by a company wanting to know more about agile
methodologies and how agile might be leveraged and implemented to improve the recovery
time and efficiency of DR. Although other capabilities were considered, the potential relation-
ship between DR needs and agile methodologies were of interest to the company and needed
to be resolved.
REFERENCES
1. Anderson, D., and Carmichael, A. Essential Kanban Condensed. Seattle, WA: Blue
Hole Press, 2016.
2. Avison, D.; Lau, F.; Myers, M.; and Nielsen, P. Action research. Communications of the
ACM, 42, 1 (1999), 94–97.
3. Baham, C.; Calderon, A.; and Hirschheim, R. Applying a layered framework to
disaster recovery. Communications of the Association for Information Systems, 40, 1
(2017), 277–293.
4. Baskerville, R. Investigating information systems with action research. Communications
of the Association for Information Systems, 2, 19 (1999), 1–32.
5. Baskerville, R., and Myers. M. Special issue on action research in information systems:
Making IS research relevant to practice: Foreword. MIS Quarterly, 28, 3 (2004), 329–335.
6. Baskerville, R., and Wood-Harper, A. A critical perspective on action research as a
method for information systems research. Journal of Information Technology, 11, 3 (1996),
235–246.
7. Beck, K.; Beedle, M.; Van Bennekum, A.; Cockburn, A.; Cunningham, W.; Fowler, M.;
Grenning, J.; Highsmith, J.; Hunt, A.; Jeffries, R.; and Kern, J. Manifesto for agile software
development, 2001.
8. Berke, P.; Kartez, J.; and Wenger, D. Recovery after disaster: Achieving sustainable
development, mitigation and equity. Disasters, 17, 2 (1993), 93–109.
9. Boehm, B., and Turner, R. Management challenges to implementing agile processes in
traditional development organizations. IEEE Software, 22, 5 (2005), 30–39.
10. Chen, R.; Sharman, R.; Rao, H.; and Upadhyaya, S. Coordination in emergency
response management. Communications of the ACM, 51, 5 (2008), 66–73.
11. Chen, R.; Sharman, R.; Rao, H.; and Upadhyaya, S. Data model development for fire
related extreme events: An activity theory approach. MIS Quarterly, 37, 1 (2013), 125–147.
12. Conboy, K. Agility from first principles: Reconstructing the concept of agility in
information systems development. Information Systems Research, 20, 3 (2009), 329–354.
13. Copeland, J. Emergency Response: Unity of Effort through a Common Operational
Picture. Carlisle Barracks, PA: Army War College, 2008.
14. Davison, R.; Martinsons, M.; and Kock, N. Principles of canonical action research.
Information Systems Journal, 14, 1 (2004), 65–86.
15. Dynes, R. Community emergency planning: False assumptions and inappropriate ana-
logies. International Journal of Mass Emergencies and Disasters, 12, 2 (1994), 141.
16. Erlanger, S.; Bilefsky, D.; and Chan, S. U.K. health service ignored warnings for
months. New York Times, May 12, 2017.
17. Fruhling, A., and Vreede, G. Field experiences with eXtreme programming: Developing
an emergency response system. Journal of Management Information Systems, 22, 4 (2006)
39–68.
18. Goldman, S.; Nagel, R.; Preiss, K.; and Dove, R. Iacocca Institute: 21st Century
Manufacturing Enterprise Strategy: An Industry Led View. Bethlehem, PA: Iacocca Institute,
1991.
19. Harrald, J. Agility and discipline: Critical success factors for disaster response. Annals
of the American Academy of Political and Social Science, 604, 1 (2006), 256–272.
20. Harris, C. IT downtime costs $26.5 billion in lost revenue. InformationWeek. 2010.
Available at: http://www.informationweek.com/it-downtime-costs-$265-billion-in-lost-
revenue/d/d-id/1097919?. (accessed on June 25, 2014)
21. Hasselle, D. Hours-long AT&T outage in Baton Rouge, Livingston areas undermines
rescue efforts, but some say service returning. New Orleans Advocate. 2016. Available at:
www.theadvocate.com/new_orleans/news/article_4fc33f0e-6255-11e6-9dd0-d3e354adba6e.
html. (accessed on August 14, 2016)
22. Ireson, N. Local community situational awareness during an emergency. In Proceedings
of the Third IEEE International Conference on Digital Ecosystems and Technologies. June 1–
3, 2009, pp. 49–54.
23. Ivancevich, D.; Hermanson, D.; and Smith, L. The association of perceived disaster
recovery plan strength with organizational characteristics. Journal of Information Systems, 12,
1 (1998), 31–43.
24. Kappelman, L.; Mclean, E.; Luftman, J.; and Johnson, V. Key issues of IT organizations
and their leadership: The 2013 SIM IT trends study. MIS Quarterly Executive, 12, 4 (2013),
227–240.
25. Kendall, K.; Kendall, J.; and Lee, K. Understanding disaster recovery planning through
a theatre metaphor: Rehearsing for a show that might never open. Communications of the
Association for Information Systems, 16, 1 (2005), 1001–1012.
26. Kenefick, S. Agile development methodologies (ID: G00211991). Available at: https://
www.gartner.com/doc/1694216/agile-development-methodologies. (accessed on June 1, 2014)
27. Lawler, C.; Szygenda, S.; and Thornton, M. Techniques for disaster tolerant information
technology systems. In Systems Conference, 2007 1st Annual IEEE (2007), 1-6.
28. Lenk, A., and Tai, S. Cloud standby: Disaster recovery of distributed systems in the
cloud, in service-oriented and cloud computing. In M. Villari, W. Zimmermann, and K. Lau
(eds.), European Conference on Service-Oriented and Cloud Computing. Lecture Notes in
Computer Science, vol. 8745. Berlin, Germany: Springer, 2014, pp. 32–46.
29. Malaurent, J., and Avison, D. Reconciling global and local needs: A canonical action
research project to deal with workarounds. Information Systems Journal, 26, 3 (2015), 227–257.
30. Mårtensson, P., and Lee, A. Dialogical action research at omega corporation. MIS
Quarterly, 28, 3 (2004), 507–536.
31. Mcentire, D., and Fuller, C. The need for a holistic theoretical approach: An examina-
tion from the El Niño disasters in Peru. Disaster Prevention and Management, 11, 2 (2002),
128–140.
32. Mchugh, O.; Conboy, K.; and Lang, M. Agile practices: The impact on trust in software
project teams. Software IEEE, 29, 3 (2012), 71–76.
33. Mumford, E. Advice for an action researcher. Information Technology and People, 14, 1
(2001), 12–27.
34. National Incident Management System (NIMS) Integration Center. National Incident
Management System. Washington DC: U.S. Department of Homeland Security, 2004.
35. Ngwenyama, O., and Nielsen, P.A. Competing values in software process improvement:
An assumption analysis of CMM from an organizational culture perspective. IEEE
Transactions on Engineering Management, 50, 1 (2003), 100–112.
36. Nunamaker, J. Jr.; Briggs, R.; Derrick, D.; and Schwabe, G. The last research mile:
Achieving both rigor and relevance in information systems research. Journal of Management
Information Systems, 32, 3 (2015), 10–47.
37. OpsCentre. Does the concept of agile recovery make sense? Disaster Recovery Journal,
2015. Available at: www.drj.com/industry/industry-hot-news/does-the-concept-of-agile-recov
ery-make-sense.html. (accessed on June 2, 2015)
38. Radigan, D. A brief introduction to Kanban. Atlassian, 2015. Available at: www.
atlassian.com/agile/kanban. (accessed on June 3, 2015)
39. Reason, P. Pragmatist philosophy and action research. Action Research, 1, 1 (2003),
103–123.
40. Rose, K. A Guide to the Project Management Body of Knowledge (PMBOK® Guide),
5th ed. Project Management Journal, 44, 3, (2013), e1.
41. Rubin, H., and Rubin, I. Qualitative Interviewing: The Art of Hearing Data. Thousand
Oaks: Sage, 1995.
42. Schultz, T. Reflections on investment in man. Journal of Political Economy, 58, 1
(1962), 1–8.
43. Shao, B. Optimal redundancy allocation for disaster recovery planning in the network
economy. In H. Chen, R. Moore, D. Zeng, and J. Leavitt (eds.) Intelligence and Security
Informatics. Lecture Notes in Computer Science, vol. 3073. Berlin, Germany: Springer, 2004,
pp. 484–491.
44. Skipper, J.; Hall, D.; and Hanna, J. Top management support, external and internal
organizational collaboration, and organizational flexibility in preparation for extreme events.
Journal of Information System Security, 5, 1 (2009), 32–60.
45. Sugimori, Y.; Kusunoki, K.; Cho, F.; and Uchikawa, S. Toyota production system and
Kanban system materialization of just-in-time and respect-for-human system. International
Journal of Production Research, 15, 6 (1977), 553–564.
46. Susman, G., and Evered, R. An assessment of the scientific merits of action research.
Administrative Science Quarterly, 23, 4 (1978), 582–603.
47. Sutherland, J., and Schwaber, K. Business object design and implementation workshop.
In Proceedings of the OOPSLA ‘95. Austin, Texas, October 15–19, 1995, pp. 170–175.
48. Takeuchi, H., and Nonaka, I. The new product development game. Harvard Business
Review, 64, 1 (1986), 137–146.
49. Trello. Trello Inc. Delaware, 2014. Available at: http://trello.com. (accessed on August
15, 2014)
50. Wang, K.; Li, L.; Yuan, F.; and Zhou, L. Disaster recovery system model with
e-government case study. Natural Hazards Review, 7, 4 (2006), 145–149.
51. Webb, G.; Tierney, K.; and Dahlhamer, J. Predicting long-term business recovery from
disaster: A comparison of the Loma Prieta earthquake and Hurricane Andrew. Global
Environmental Change Part B: Environmental Hazards, 4, 2 (2002), 45–58.
52. Yan, H. Louisiana’s mammoth flooding: By the numbers. CNN, 2016. Available at:
www.cnn.com/2016/08/16/us/louisiana-flooding-by-the-numbers/. (accessed on August 15,
2014)
53. Yin, R. Case Study Research: Design and Methods. Thousand Oaks, CA: Sage, 2008.
Copyright of Journal of Management Information Systems is the property of Taylor & Francis
Ltd and its content may not be copied or emailed to multiple sites or posted to a listserv
without the copyright holder’s express written permission. However, users may print,
download, or email articles for individual use.
Cyberbullying on Social Networking Sites:
The Crime Opportunity and Affordance Perspectives
TOMMY K. H. CHAN, CHRISTY M. K. CHEUNG, AND RANDY
Y. M. WONG
Tommy K. H. Chan (tommy.chan@northumbria.ac.uk; corresponding author) is
a Lecturer in Business Information Management at Northumbria University,
United Kingdom. He earned a Ph.D. in Information Systems and e-Business
Management from Hong Kong Baptist University. Dr. Chan’s research interests
include societal implications of information technology use, such as cyberbullying
and game addiction, and online consumer behaviors, such as customer engagement
and social media firestorm. His work has been published in such journals as
Information & Management, Industrial Marketing Management, Electronic
Commerce Research and Applications, Internet Research, and others.
Christy M. K. Cheung (ccheung@hkbu.edu.hk) is an Associate Professor at
Hong Kong Baptist University. She earned a Ph.D. in Information Systems
from the College of Business at City University of Hong Kong. Her research
interests include technology use and well-being, IT adoption and use, societal implications of IT use, and social media. She has published over one hundred refereed articles in scholarly journals and conference proceedings, including
Journal of Management Information Systems, Journal of Information
Technology, Journal of the Association for Information Science and
Technology, and MIS Quarterly, among others. Dr. Cheung is President of the
Association for Information Systems – Hong Kong Chapter. She also serves as
Editor-in-Chief of Internet Research.
Randy Y. M. Wong (rymwong@life.hkbu.edu.hk) is a Ph.D. candidate in the
Department of Finance and Decision Sciences at Hong Kong Baptist University.
Her research interests include social media and social networking, and societal
implications of technology use. Her work has appeared in Computers in Human
Behavior as well as in the proceedings of the International Conference on
Information Systems, European Conference on Information Systems, Pacific Asia
Conference on Information Systems, and Hawaii International Conference on
System Sciences.
ABSTRACT: Cyberbullying on social networking sites (SNS bullying) is an emerging
societal challenge related to the deviant use of technologies. To address the research
gaps identified in the literature, we draw on crime opportunity theory and the
affordance perspective to propose a meta-framework that guides our investigation
into SNS bullying. The meta-framework explains how SNS affordances give rise to
the evaluation of favorable SNS environmental conditions for SNS bullying, which,
Journal of Management Information Systems / 2019, Vol. 36, No. 2, pp. 574–609.
Copyright © Taylor & Francis Group, LLC
ISSN 0742–1222 (print) / ISSN 1557–928X (online)
DOI: https://doi.org/10.1080/07421222.2019.1599500
in turn, promote SNS bullying. The research model was empirically tested using
a longitudinal online survey of 223 SNS users. The results suggest that the evaluation of SNS environmental conditions predicts SNS bullying and that SNS affordances
influence the evaluation of these environmental conditions. This work offers a new
theoretical perspective to study SNS bullying, highlighting the critical impacts of
environmental conditions in shaping such behavior. It also provides actionable
insights into measures that combat SNS bullying.
KEY WORDS AND PHRASES: cyberbullying, SNS bullying, crime opportunity, affor-
dance, social networking sites, meta-framework, societal impacts of technology use,
IT deviant use.
Social networking sites (SNSs) have become increasingly popular vehicles for
individuals to communicate with their friends and family, anytime and any-
where. Despite their promising potential for online social interactions, SNSs are
also ripe for abuse because they provide perpetrators with an ideal venue for
cyberbullying—in other words, for harassing, threatening, and exploiting poten-
tial targets. Cyberbullying on social networking sites (SNS bullying) refers to
any form of aggressive behavior on SNSs conducted by a group or an indivi-
dual, repeatedly and over time, against targets who cannot easily defend them-
selves [88].
SNS bullying is a relatively recent phenomenon; however, researchers have
already devoted much attention to reporting and documenting its prevalence
and the adverse consequences associated with it. The Pew Research Center
[74] found that 40 percent of Internet users had experienced cyberbullying.
Facebook has been found to be the most common venue for SNS bullying:
54 percent of Facebook users reported that they have experienced cyberbully-
ing on Facebook [37]. Previous research has demonstrated that SNS bullying
incidents have adverse consequences for victims (e.g., Sticca et al. [91]), such
as depression, anxiety, low self-esteem, substance abuse, and in extreme cases,
self-harming behaviors and suicide attempts. Frequent news headlines report-
ing suicide cases linked to SNS bullying document the severity of this pro-
blem, including, for example, the recent case of an eighteen-year-old girl who
shot herself dead in front of her family after being relentlessly bullied for her
weight on Facebook [40].
Given its adverse consequences on individuals and society, SNS bullying has not
surprisingly become an important and emerging research topic across disciplines.
With roots in psychology, education, and public health research, most studies have
focused on individual traits and characteristics that lead to SNS bullying (or to
cyberbullying in general) [see 39, 47, for a review]. However, the research into SNS
bullying is still emerging in the information systems (IS) discipline. Only recently
did Lowry et al. [56] draw on social learning theory to examine how social media
CYBERBULLYING ON SOCIAL NETWORKING SITES 575
anonymity affects adults’ engagement in SNS bullying. In general, there have been
few investigations into the phenomenon within the IS discipline. How SNSs, as
a form of new information technology, shape and foster cyberbullying remains
relatively unexplored from a technological perspective.
Understanding SNS bullying from a technological perspective is vital in order
to shed light onto new measures that may effectively combat this emerging
societal challenge, given that existing research has mostly focused on identifying
individual characteristics associated with SNS bullying. Indeed, numerous social
science theories, such as social cognitive theory and crime opportunity theory,
have stressed the importance of the environment in shaping human behaviors.
Neglecting the environmental component in SNS bullying research is potentially
dangerous because it produces a lopsided view of the causes of the phenomenon.
Accordingly, our study aims to advance the scientific understanding of cyberbullying
by developing a meta-framework that explains how SNS affordances and the evalua-
tion of favorable SNS environmental conditions influence SNS bullying. We use crime
opportunity theory [30] to explain SNS bullying, considering both perpetrator
characteristics and the SNS environmental conditions that offer criminogenic
opportunities. We further adopt the affordance perspective [63] to delineate how SNS
affordances give rise to such a favorable evaluation of the environmental conditions for SNS
bullying. We endeavor to answer two primary research questions:
Research Question 1: What are the key factors driving SNS bullying?
Research Question 2: How do SNS affordances influence the evaluation of
SNS environmental conditions for SNS bullying?
This work responds to calls for research on the societal impacts of technology use
(e.g., Majchrzak et al. [61] and Tarafdar et al. [94]) and contributes to theory and
practice in three distinct ways. First, this work advances the scientific knowledge of
cyberbullying by investigating how the SNS environment drives SNS bullying from
the crime opportunity perspective. We test how the presence of suitable targets and
the absence of capable guardianships affect SNS bullying, and we explore how a
favorable evaluation of such environmental conditions intensifies the relationship
between inclination to bully and SNS bullying.
Second, this work enriches the IS literature by examining how users interpret SNSs
and the resultant deviant behaviors from the affordance perspective. We test four SNS
affordances (i.e., accessibility, information retrieval, editability, and association) that
influence perpetrators’ evaluation of SNS environmental conditions for SNS bullying.
Although prior research has focused on the positive connotation of SNS affordances,
our work breaks new ground for the study of unintended and negative acts afforded by
the SNSs.
Finally, for practitioners, the findings of this work could provide insights into how to
effectively combat SNS bullying. Based on the empirical results, SNS developers could
prioritize resources to rectify the criminogenic environmental conditions that
576 CHAN ET AL.
exacerbate SNS bullying. Meanwhile, government agencies could launch campaigns to
educate users on the appropriate use of SNSs. Together, the findings of this work offer
a more proactive approach to tackle cyberbullying and maintain a healthy social
networking environment.
Definition of Cyberbullying
Cyberbullying is a new form of bullying that involves the use of information technology.
Different terminologies have been used to describe the phenomenon, such as electronic
bullying [79], Internet bullying [106], and cyberbullying [98], with the last term being the
most popular and widely adopted. Most cyberbullying studies have derived definitions
from traditional bullying literature. For instance, cyberbullying was defined as willful
and repeated harm inflicted through the medium of electronic text [72]. Later, a more
refined definition was advocated [95], proposing that cyberbullying is an aggressive
online behavior that encompasses three characteristics: (1) it is performed by
individuals or groups using electronic or digital media; (2) hostile or aggressive
messages are repeatedly communicated; and (3) the behavior is conducted with the
intent to cause discomfort or inflict harm on the target. Research also suggested that
there are different types of cyberbullying behavior, such as flaming, harassment,
cyberstalking, denigration, masquerade, outing and trickery, exclusion, and
impersonation [48, 105]. At present, there is no
exhaustive list of the types of bullying behavior perpetrated on SNSs.
Nature of SNS Bullying
SNS bullying is a form of aggressive behavior on SNSs conducted by individuals or
groups, repeatedly and over time, against targets who cannot easily defend themselves. It
shares three definitional criteria with the related concepts of bullying and cyberbullying:
intentionality, repetition, and power imbalance [24]. SNS bullying is distinguished from
other forms of online deviant behavior, such as Internet trolling and flaming, because it is
deliberate, repeated, and involves exploitation of a power imbalance to intentionally harm
a target by leveraging the functionalities and capabilities of social networking platforms.
SNS bullying is often viewed as a form of deviant behavior fostered by the emergence
of information technologies [29, 47]. Specifically, the widespread deployment of
personal communication devices (such as smartphones, tablets, and laptops) and the
ease of connectivity to online platforms have led to individuals spending more time
with technologies. This shift in social activities, moving from offline venues to social
networking platforms, creates criminogenic opportunities for SNS bullying. In
particular, the rapid growth in SNS users has created a wealth of online profiles that
make it easy for perpetrators to identify vulnerable individuals. Guardianships of SNS
bullying behaviors (e.g., SNS self-reporting functions, laws, and regulations
prohibiting bullying) become ineffective because thousands to millions of social
interactions happen on SNSs every day. It is virtually impossible to monitor,
moderate, and control every use that violates community standards. Such a view is
consistent with crime opportunity theory [30], which asserts that social and
technological changes produce new opportunities for crime and deviance.
In some countries, individuals face criminal charges and prison time if found guilty of
SNS bullying. For instance, in the United Kingdom, Section 127 of the
Communications Act of 2003 makes it a criminal offense to send a message of
a grossly offensive, indecent, obscene, or menacing character via a public electronic
communications network, a provision that covers SNS bullying. The law states that a perpetrator can face
up to six months in jail, a fine, or both if found guilty [52]. Similarly, nearly half of the
states in America include cyberbullying as part of their broader bullying laws. The
nationwide trend is toward greater accountability for cyberbullying in general,
including criminal statutes [44]. For example, a bill recently passed in West Virginia
makes cyberbullying a misdemeanor offense with a maximum punishment of one year
in prison, a $500 fine, or both [14].
We use crime opportunity theory [30] and the affordance perspective [63] to develop
a meta-framework that guides our investigation into SNS bullying. Specifically, crime
opportunity theory posits that two primary components contribute to a crime being
committed: (1) a likely perpetrator, and (2) environmental conditions that offer
criminogenic opportunities. These are the building blocks of our meta-framework
explaining SNS bullying. We further incorporate the affordance perspective into
crime opportunity theory to explain how an SNS allows a perpetrator to evaluate
whether environmental conditions would facilitate an SNS bullying act. By integrating
the affordance perspective into well-established theoretical frameworks, prior research
has demonstrated that it is viable to obtain contextualized insights into a wide
spectrum of phenomena related to information technology use (e.g., Chatterjee et al.
[15], Seidel et al. [85], and Suh et al. [93]). For instance, the affordance perspective
has been integrated with the notion of virtue ethics to explain the effects of
organizational IT affordances on organizational virtues and innovation improvement
[15]. Hence, we expect that integrating crime opportunity theory and the affordance
perspective will provide a useful theoretical foundation for developing a
contextualized understanding of SNS bullying. Figure 1 depicts the meta-framework
of SNS bullying.
Crime Opportunity Theory
Crime opportunity theory [30] asserts that social and technological changes produce new
opportunities for crime and deviance. Opportunities play a central role in every category
of offense, regardless of its nature and severity. Subscribing to this perspective, we
stipulate that the shifts in social activities from offline venues to SNS platforms provide
opportunities for likely perpetrators to engage in SNS bullying. We argue that the rapid
growth of user populations creates ample opportunities for SNS bullying. Specifically,
perpetrators can easily identify vulnerable individuals through browsing their online
profiles on SNSs. The massive amount of information flow and social interactions also
makes it difficult to monitor and identify acts of SNS bullying, which, in turn, weakens
the capability of authorities and detection mechanisms to regulate such acts.
Crime opportunity theory further emphasizes that the occurrence of crime and
deviance is influenced not just by the perpetrators’ characteristics but also by the
environmental conditions that offer criminogenic opportunities. Our review of past
studies suggests that SNS bullying research has mainly investigated the “likely
perpetrator” component and has included aspects such as the perpetrators’
demographic characteristics (e.g., Cao and Lin [10] and Sengupta and Chaudhuri
[87]), their intensity of SNS usage (e.g., Kwan and Skoric [49]), their cyberbullying
victimization experience (e.g., Marcum et al. [62]), and their personality traits (e.g.,
Kokkinos et al. [46]) (see Appendix A for a review). The potential impacts of the
“environment” have only recently attracted attention in the literature. For instance, the
anonymous SNS environment has been found to be exploited by heavy SNS users to
bully others on the platform [56]. As Lowry et al. [56] noted, most cyberbullying
studies “have glossed
over the central issue: the role of information technology or social media artifacts
themselves in promoting cyberbullying” (p. 3).
Over the last two decades, researchers have been increasingly using opportunity
theories to investigate technology-related crime and deviance, such as data breaches
[86] and computer crimes [107]. Empirical studies have also illustrated the applic-
ability of crime opportunity theory for understanding bullying behaviors (e.g., Cho
et al. [17]). Hence, considering both the theoretical assumptions and empirical applica-
tions, together with the criminogenic nature of SNS bullying discussed in the previous
section, we believe that crime opportunity theory is a viable theoretical perspective for
explaining SNS bullying. Specifically, our study continues to advance the literature by
focusing on the “environment” component and by examining how the SNS environ-
ment fosters the development of SNS bullying. Building on prior criminology literature
[30, 100], we propose two SNS environmental conditions that offer the criminogenic
opportunities for a likely offender to engage in SNS bullying: (1) presence of suitable
targets and (2) absence of capable guardianships.

Figure 1. Meta-Framework of SNS Bullying
Affordance Perspective
An affordance refers to “the potential for behaviors associated with achieving
an immediate concrete outcome and arising from the relationship between an
artifact and a goal-oriented actor or actors” [92, p. 69]. Technological affor-
dance refers to “the mutuality of actor intentions and technology capabilities
that provide the potential for a particular action” [60, p. 39]. It arises when
one interprets a technology through his or her goals for action. The relational
view of affordance is advantageous for understanding technology use because
it allows researchers to consider the symbiotic relationship between the cap-
abilities of the technology and the actor’s goal and action [36], treating the
entanglement between them as a unit of analysis [60]. Research has further
shown that one technology can support different goal-oriented actions for
members of different social groups [20, 53]. In other words, it is individuals’
goals that shape what they come to believe the technology can afford them
[96], which in turn leads to a wide spectrum of desirable or undesirable—or
intended or unintended—behaviors [60]. For instance, Majchrzak et al. [60]
identified four affordances of social media that affect employees’ engagement
in group online workplace conversations. They suggest that some workers
believed metavoicing affordance (i.e., the action possibility enabled by social
media for users to engage in the ongoing online knowledge conversation by
reacting online to others’ presence, profiles, content, and activities) fostered
productive knowledge conversations, whereas some thought it inhibited pro-
ductivity by promoting potentially biased and inaccurate information.
Acting on this perspective, we argue that one could interpret an SNS differ-
ently depending on his or her goal [53]. The actualization of affordances occurs
when an actor takes advantage of one or more affordances of the SNS to
achieve immediate concrete outcomes that support their goals. In this study,
the artifact is an SNS, and the goal-oriented actor is a user who purposefully
uses an SNS to bully a target (i.e., a perpetrator). For general users, the
actualization of SNS affordances occurs when they make use of the SNS to,
perhaps, engage in self-disclosure and read their friends’ newsfeed in support of
their relationship maintenance and socialization [16]. However, for a likely
offender whose goal is to leverage the SNS to bully someone, the actualization
of affordances could be completely different. For instance, they might see the
SNS as affording them the ability to access information about the background
and activities of other users, which would help them to identify suitable targets,
giving rise to a favorable evaluation of SNS environmental conditions for SNS
bullying.
Based on the review of the literature on technological affordances [60, 96]
and social network research [45], we propose four types of SNS affordances
and suggest that they have the potential to influence how one evaluates the
SNS environmental condition for SNS bullying. These affordances include
accessibility, information retrieval, editability, and association. Table 1 sum-
marizes the definitions and illustrations of these affordances.
Table 1. SNS Bullying Affordances

Accessibility. Definition: The extent to which a user believes that an SNS offers the
opportunity to connect to another user on the platform. How the affordance relates to
SNS bullying: It allows a perpetrator to transcend time and spatial constraints in
identifying a target for SNS bullying. Related SNS affordances/SNS features:
Network-informed associating [60]; network transparency [45].

Information retrieval. Definition: The extent to which a user believes that an SNS
offers the opportunity to obtain information about a user on the platform. How the
affordance relates to SNS bullying: It allows a perpetrator to obtain contents created
by a target to understand his/her background, preferences, and daily activities for the
purpose of SNS bullying. Related SNS affordances/SNS features: Persistence [96];
search and privacy [45].

Editability. Definition: The extent to which a user believes that an SNS offers the
opportunity to manipulate the content that he/she posted, commented on, and shared
on the platform. How the affordance relates to SNS bullying: It allows a perpetrator to
deny his SNS bullying acts by erasing, editing, or hiding bullying-related contents and
identification cues. Related SNS affordances/SNS features: Editability [96]; digital
profile [45].

Association. Definition: The extent to which a user believes that an SNS offers the
opportunity to associate the responsibility for his/her post with other users who
interacted with the post on the platform. How the affordance relates to SNS bullying:
It allows a perpetrator to elude sole accountability for creating the bullying contents
by attributing the contents to other users. Related SNS affordances/SNS features:
Association [96]; relational ties [45].
Our meta-framework provides a theoretical basis to construct a research model
explaining SNS bullying. First, drawing on crime opportunity theory [30], we
propose that SNS bullying is driven by two primary components: (1) a likely
offender, which is conceptualized as one’s inclination to bully and (2) the evalua-
tion of SNS environmental conditions that offer the criminogenic opportunity,
which include presence of suitable targets and absence of capable
guardianships. Second, subscribing to the affordance perspective [63], we examine
how SNS affordances (i.e., accessibility, information retrieval, editability, and
association) influence the evaluation of environmental conditions for SNS bullying.
Figure 2 depicts the research model.
Likely Offender and SNS Bullying
According to crime opportunity theory, a likely offender refers to a person who
might commit a crime or engage in deviant behavior for any reason [30]. Crime
opportunity theory presumes that crimes would not happen without an offender;
therefore, the presence of a likely offender is a prerequisite for any crime or
deviance [30].
In this study, we conceptualize a likely offender as someone who has an inclina-
tion to bully on an SNS, which refers to one’s tendency to engage in SNS bullying
for any reason [42].

Figure 2. Proposed Research Model

Past studies have shown that positive inclinations toward
bullying (e.g., probullying beliefs and favorable attitudes toward cyberbullying)
predicted perpetrators’ engagement in cyberbullying behaviors (e.g., Lazuras et al.
[50] and Wiklund et al. [104]). For instance, adolescents’ inclination to cyberbully
was found to positively predict self-reported cyberbullying behaviors among teen-
agers [42] and secondary students [71]. Therefore, we hypothesize that:
Hypothesis 1: Inclination to bully positively influences SNS bullying.
Evaluation of SNS Environmental Conditions and SNS Bullying
Crime opportunity theory presumes that favorable environmental conditions play
a critical role in the occurrence of any crime or deviance [30]. In this study, we
propose two SNS environmental conditions that offer the criminogenic opportu-
nities for a likely offender to engage in SNS bullying: (1) presence of suitable
targets and (2) absence of capable guardianships [30, 100].
Presence of Suitable Targets
Crime opportunity theory [30] states that “targets of crime can be a person or an object,
whose position in space or time puts it at more or less risk of criminal attack” (p. 5). The
theory asserts that certain characteristics of a target will be of greater interest to a likely
offender, such as being visible (e.g., a valuable good is placed near windows) and
accessible (e.g., a house with doors left unlocked).
In this work, we define presence of suitable targets as the extent to which a perpetrator
believes there are suitable targets in the SNS environment available for SNS bullying. As
discussed earlier, the prevalence and popularity of SNSs create new opportunities for
SNS bullying [47]. In recent years, not only has the number of SNS users dramatically
increased, but so has the amount of personal information that users post and share online.
In 2017, 71 percent of Internet users had an SNS profile on one of the major SNS
platforms [90]. Of these, 92 percent used their real names on their profiles, 91 percent had
a picture of themselves on their profiles, and 82 percent had posted other personal
information on their profiles—such as birth date, gender, education background, occupa-
tion, or country of residence [59]. A large number of users and an ample amount of
sensitive personal information available provide a wealth of opportunity to identify
suitable targets for SNS bullying. Hence, the perception that the SNS environment is
a source of suitable targets is likely to attract more SNS bullying behaviors. This
prediction is also evident in the bullying research, which supports a link between suitable
targets and bullying behaviors. For instance, students who were perceived to be suitable
targets among the perpetrators were more likely to be victimized [76]. Therefore, we
hypothesize that:
Hypothesis 2: Presence of suitable targets positively influences SNS bullying.
Absence of Capable Guardianships
Crime opportunity theory suggests that in the absence of capable guardianships, crime
and deviance are more likely to occur [30]. According to the theory, guardianships are
not confined to government officials alone, but rather include “anybody whose pre-
sence or proximity would discourage a crime from happening” [30, p. 4].
In this work, we define absence of capable guardianships as the extent to which
a perpetrator evaluates that guardianships are incapable of fortifying SNS environ-
ments against SNS bullying. Guardianships here represent both offline authorities (e.g.,
laws and regulations) and online mechanisms (e.g., reporting systems and detection
algorithms) that aim to protect users from being victimized on SNSs. For instance,
Facebook has implemented a built-in reporting system that permits users to report any
content that is not commensurate with its community standards (such as nudity, hate
speech, or violence). The Facebook team regularly reviews the reported materials and
removes them if they are deemed inappropriate. These functions serve as
a guardianship, protecting general users against SNS bullying. However, with the
growing number of posts uploaded and shared on SNSs daily, it has become increas-
ingly challenging for these protective measures to effectively tackle bullying activities
on SNSs [5]. Though there have been initiatives to use more advanced techniques—
such as machine learning and natural language processing to detect SNS bullying—
their effectiveness is restricted by computers’ ability to interpret meanings, variations,
and metaphors in human language [11]. It remains difficult for guardianships to fortify
SNS environments against SNS bullying effectively. Past studies have found support
for the link between a lack of guardianships and bullying behaviors. For instance, social
guardianship was found to decrease victimization among young people [57].
Therefore, we hypothesize that:
Hypothesis 3: Absence of capable guardianships positively influences SNS
bullying.
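The detection limitation described above can be made concrete with a small sketch. The snippet below is a hypothetical toy example (the word list and messages are invented; no real platform's mechanism is this simple): a naive keyword filter flags a literal insult but misses a trivially obfuscated spelling and a sarcastic message, which is exactly the kind of variation and metaphor that automated guardianships struggle with.

```python
import re

# Invented toy word list; real systems use far richer models.
ABUSIVE_TERMS = {"loser", "ugly", "worthless"}

def flag_bullying(message: str) -> bool:
    """Flag a message if any alphabetic token matches a known abusive term."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return any(token in ABUSIVE_TERMS for token in tokens)

print(flag_bullying("You are such a loser"))          # True: literal match
print(flag_bullying("You are such a l0ser"))          # False: obfuscated spelling evades the filter
print(flag_bullying("Wow, what a *great* haircut."))  # False: sarcasm carries no flagged keyword
```

The gap between the second and third cases and genuine bullying is why the text notes that machine learning and natural language processing remain restricted by computers' ability to interpret meaning.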
SNS Affordances and the Evaluation of SNS Environmental
Conditions
Drawing on the affordance perspective [63], we further examine how the SNS affor-
dances outlined above (accessibility, information retrieval, editability, and association)
affect the evaluation of SNS environmental conditions (i.e., presence of suitable targets
and absence of capable guardianships), in which criminogenic opportunities for SNS
bullying are perceived.
Accessibility Affordance
Accessibility affordance refers to the extent to which a user believes that an SNS offers
the opportunity to connect with a user on the platform. In SNS bullying, accessibility
affordance allows a perpetrator to transcend time and spatial constraints to reach
potential targets. Kane et al. [45] suggested that network transparency is one of the
essential features of a social network; it allows users to view their connections within
a network and offers the opportunity to connect with each other. In SNSs, users are given
various opportunities to contact and connect with an unlimited number of users,
including friends, family members, acquaintances, and even strangers. For perpetra-
tors, however, accessibility affordance facilitates overcoming barriers of time and
space to connect with potentially suitable targets. In a recent SNS bullying case, for
example, a perpetrator used the hashtag (i.e., #hashtag) and handle (i.e., @username)
on Instagram to repeatedly bully a group of young people [66]. The unconstrained and
boundless accessibility afforded by SNSs may lead a perpetrator to evaluate that the
SNS provides an environment where suitable targets can be easily identified and
accessed. Therefore, we hypothesize that:
Hypothesis 4: Accessibility affordance positively influences presence of suita-
ble targets.
Information Retrieval Affordance
Information retrieval affordance refers to the extent to which a user believes that an
SNS offers the opportunity to obtain information about a user on the platform. In SNS
bullying, information retrieval affordance allows a perpetrator to access material
created by a potential target, which provides information about the background,
preferences, and daily activities of the potential target. SNS updates often include
new features that aim to entice users to continuously create and share information on
the platforms. For instance, Facebook’s “On This Day” feature shows old photos and
newsfeeds to a user and encourages the user to share these posts and stories with
their friends. Instagram, Twitter, and other SNSs often ask users to provide precise
information when uploading a photo. Such updates are part of an oversharing phenom-
enon, with a recent survey estimating that about 40 percent of users overshare sensitive
information on SNSs [64]. Such abundance of unrestricted information puts users at
risk for SNS bullying victimization. For instance, the Facebook timeline provides an
easy interface for quickly reading others’ activity logs. It is like a scrapbook, providing
snapshots of information that can be used to understand a particular user. It allows
a perpetrator to trawl back through a target’s history, gleaning information from shared
photos and statuses and eventually using them to create harassing materials or even to
impersonate the person identified as a suitable target [13]. Past studies have also shown
that individuals who did not restrict access to their online profiles or who disclosed too
much sensitive personal information online were considered more attractive and
vulnerable by perpetrators [65, 73]. Therefore, we hypothesize that:
Hypothesis 5: Information retrieval affordance positively influences presence
of suitable targets.
Editability Affordance
Editability affordance refers to the extent to which a user believes that an SNS offers
the opportunity to manipulate content that he or she posted, commented on, and/or
shared on the platform. In SNS bullying, editability affordance allows a perpetrator to
deny his or her SNS bullying acts by erasing, editing, or otherwise hiding
bullying-related contents and identification cues. In offline bullying, it is difficult for
a perpetrator to conceal his or her identity because the victim can at least recognize
the physical appearance of the perpetrator. Physical damage inflicted on the target is
also difficult to hide. In contrast, in SNSs, it is fairly easy for a perpetrator to modify, erase, or hide
identification cues in relation to the bullying and his or her identity. For instance,
Facebook allows users to edit descriptions of their posts or even delete contents
published on their walls. One can also register a new email domain and create an
alternative SNS account to engage in SNS bullying. As a result, this affordance
weakens the effect of guardianships on SNS because it is difficult for authorities to
track and punish SNS bullying behaviors. Therefore, we hypothesize that:
Hypothesis 6: Editability affordance positively influences absence of capable
guardianships.
Association Affordance
Association affordance refers to the extent to which a user believes that an SNS offers
the opportunity to share responsibility for his or her post with other users who interact
with the post on the platform. In SNS bullying, association affordance allows
a perpetrator to avoid accountability for the bullying act by inviting other SNS
members; that is, the perpetrator can deny sole responsibility for carrying out the
action. User engagement and cocreation are core values on most social networking
platforms. SNS providers not only entice users to share more information but also
encourage others to interact with these posts. For instance, Facebook now offers more
nuanced reactions to posts beyond the “like” reaction (i.e., “love,” “ha-ha,” “wow,”
“sad,” and “angry”) to encourage users to express themselves after reading a post. The
long-standing tag feature (@user name) allows users to invite others to respond to
a post and jointly develop the conversation. Recent statistics show that 44 percent of
Facebook users “Liked” content posted by their friends at least once a day, and
31 percent made comments on posts daily [89]. On the one hand, association affor-
dance fosters meaningful exchange among ordinary users. On the other hand, it allows
perpetrators to invite other users to view and participate in bullying posts, making it
difficult to designate responsibility for the hurtful contents [82], mitigating the effect of
guardianships. Therefore, we hypothesize that:
Hypothesis 7: Association affordance positively influences absence of capable
guardianships.
586 CHAN ET AL.
Control Variables
Past studies have demonstrated that demographic characteristics, computer usage,
and cyberbullying self-efficacy can influence cyberbullying [47]. Accordingly, we
include age, gender, education, SNS usage, SNS experience, SNS real name
registration, and self-efficacy in SNS bullying as control variables.
Research Design
We used an anonymous, self-reported, longitudinal online survey design with
Facebook users to test the proposed research model. The survey method has been
used to examine a broad range of deviant behaviors related to technology use, such
as online software piracy [43], information system misuse [19], and cyberbullying
[56]. The self-report questionnaire technique has been used to test crime opportu-
nity theory and the affordance perspective in both offline and online contexts, such
as bullying victimization [17], workplace sexual harassment [22], online hate on
SNSs [78], and gamification [93]. Using a longitudinal setting can also reduce the
threat of common method bias and enhance causal inference [75, 81]. We selected
Facebook as the research context because it is the leading SNS worldwide [28].
A recent survey also revealed that cyberbullying is most likely to take place on this
platform [23]. Therefore, we believed that Facebook represents a suitable context
for testing our proposed research model. To participate in the study, individuals had
to: (1) be users of Facebook; (2) live in the United States (this requirement ensured
a standardized perception of laws and norms regarding SNS bullying on Facebook
[56]).
Measures
The measurement items were adapted from the literature where possible (e.g.,
SNS bullying). Minor modifications were made to measurement items to fit the
current research context. When measurement items were unavailable (e.g., SNS
affordances and crime opportunity components), we followed the guidelines set
out in the instrument development literature [68] to develop new instruments to
measure the constructs. The instrument development process and the complete
list of measurement items for the focal constructs are shown in the online
supplement – section A. As the research context examines a socially undesirable
behavior, the social desirability scale was also included to detect for potential
response bias [80].
CYBERBULLYING ON SOCIAL NETWORKING SITES 587
Data Collection and Procedures
Respondents for the online survey were recruited from the Amazon Mechanical Turk
(MTurk). MTurk is an online crowdsourcing platform that allows people to participate
in Human Intelligence Tasks (HITs) for remuneration. The use of MTurk is appropriate
for the current research purpose, as suggested in recent cyberbullying research [e.g.,
83] and advocated in senior IS literature [e.g., 56]. Specifically, cyberbullying is
a sensitive issue and is socially unacceptable in most cultures. Hence, using MTurk
as a portal to reach the target sample helped ensure respondents’ anonymity, thereby
eliciting responses that are more honest and reducing social desirability bias.
Furthermore, since cyberbullying is a general topic that requires minimal expertise,
using MTurk to collect data is a good fit. It allows researchers to reach a huge pool of
potential respondents with SNS bullying experiences, which is virtually impossible
using other data collection methods. To ensure data quality, we followed guidelines as
described in the latest methodological literature on MTurk in designing and distributing
the survey study [34, 54]. For instance, we checked the workers’ location based on their
IP address to ensure they reside in the United States. We detected “super workers,” who
generally put less time and effort into a task, using their completion time and number of
tasks completed. We also included randomly appearing attention-check questions and
reverse-coded questions to affirm the accuracy of the responses.
The data collection consisted of two waves. At time t (Wave 1), HIT requests
were posted on MTurk. At this stage, responses related to independent variables
(i.e., SNS affordances and crime opportunity components) were collected. The
respondents in Wave 1 were then invited to answer another online questionnaire
at time t+1 (Wave 2), in which responses related to the dependent variables (i.e.,
SNS bullying behaviors) were collected. A unique code was used to match respon-
dents’ responses across the two waves of data collection.
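The two-wave matching step described above can be sketched as follows. This Python fragment is illustrative only; the field names (code, inclination, sns_bullying) are hypothetical placeholders, not the study's actual variables.

```python
# A minimal sketch of matching responses across the two waves by the unique
# code; the field names and values are hypothetical, not the study's data.

wave1 = [
    {"code": "A1", "inclination": 2.4, "suitable_targets": 3.1},
    {"code": "B2", "inclination": 4.0, "suitable_targets": 2.2},
    {"code": "C3", "inclination": 1.5, "suitable_targets": 1.8},
]
wave2 = [
    {"code": "B2", "sns_bullying": 3.6},
    {"code": "C3", "sns_bullying": 1.2},
]

def match_waves(w1, w2):
    """Retain only respondents who completed both waves, merged on the code."""
    lookup = {row["code"]: row for row in w2}
    return [{**r, **lookup[r["code"]]} for r in w1 if r["code"] in lookup]

matched = match_waves(wave1, wave2)   # respondent A1 drops out (Wave 1 only)
```

This inner-join logic explains why the analysis sample (223) is smaller than the Wave 1 sample (498): only respondents present in both waves are retained.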
At the beginning of the survey, respondents were asked to answer screening
questions to determine their eligibility to participate. In particular, they were
asked to indicate the three social networking platforms they had visited most
frequently during the past three months and asked to report their country of
residence. We filtered out respondents who did not pass these screening ques-
tions. Following the screening questions, respondents were asked to complete
a questionnaire that included measures of the variables of interest in each wave.
Finally, they were asked to answer the social desirability items. We collected
their demographic information at the end of the survey. We provided a monetary
incentive upon successful completion of the questionnaire. Ten randomly pre-
sented attention-check questions were included to detect any careless, random,
or haphazard responses that may have occurred as a result of the online survey
method. Responses from individuals who attempted to participate multiple times
(as identified through respondents’ MTurk ID and IP address), failed to pass the
attention-check questions, and from those who completed the survey in an
exceptionally short time (i.e., less than 15 minutes) were filtered out of the
sample to ensure data quality.
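The screening rules above can be expressed as a short filtering pass. In this sketch, only the 15-minute cutoff comes from the text; the field names and the order of duplicate detection are our assumptions.

```python
# A sketch of the data-quality screening described above: duplicate MTurk
# IDs or IP addresses, failed attention checks, and completion times under
# 15 minutes are filtered out. Field names are assumptions; only the
# 15-minute cutoff comes from the text.

def filter_responses(rows, min_minutes=15):
    seen_ids, seen_ips, kept = set(), set(), []
    for r in rows:
        duplicate = r["worker_id"] in seen_ids or r["ip"] in seen_ips
        seen_ids.add(r["worker_id"])
        seen_ips.add(r["ip"])
        if duplicate or not r["passed_checks"] or r["minutes"] < min_minutes:
            continue
        kept.append(r)
    return kept

responses = [
    {"worker_id": "w1", "ip": "1.1.1.1", "passed_checks": True, "minutes": 22},
    {"worker_id": "w1", "ip": "2.2.2.2", "passed_checks": True, "minutes": 25},  # repeat attempt
    {"worker_id": "w2", "ip": "3.3.3.3", "passed_checks": False, "minutes": 30}, # failed checks
    {"worker_id": "w3", "ip": "4.4.4.4", "passed_checks": True, "minutes": 9},   # too fast
]
valid = filter_responses(responses)   # only w1's first attempt survives
```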
Respondent Profile
We launched the online surveys in June 2018 (time t, Wave 1) and September 2018 (time
t + 1, Wave 2). 1,023 respondents attempted the survey in Wave 1, with 530 indicating
Facebook as their most visited SNS and the United States as their country of residence. 32
respondents failed to pass the attention-check questions or provided haphazard responses,
leaving 498 complete and valid responses. For Wave 2, we sent an invitation to respon-
dents who participated in Wave 1. 262 attempted the survey, and 39 respondents did not
pass the attention-check questions or provided haphazard responses, leaving 223 com-
plete and valid responses for subsequent analyses. Of the remaining respondents, 98
(43.9 percent) were male, and 125 (56.1 percent) were female. Most were young adults,
between the ages of 25 and 34 (45.3 percent). The majority visited Facebook at least once
a day (91.0 percent) and had more than five years of experience using Facebook
(85.2 percent). Table 2 presents the respondent profile.
Because survey methodologies may be plagued by common method bias (CMB) and social
desirability bias (SDB), we applied several procedural and statistical remedies to
minimize these threats. The results suggest that both CMB and SDB were
Table 2. Respondent Profile

                              No.   Percent
Gender
  Male                         98    43.9
  Female                      125    56.1
Age
  18–24                        15     6.7
  25–34                       101    45.3
  35–44                        51    22.9
  45–54                        24    10.8
  55–64                        17     7.6
  65 or above                  15     6.7
Education
  Less than high school         3     1.3
  High school                  49    22.0
  College degree               51    22.9
  Bachelor’s degree            79    35.4
  Master’s degree              31    13.9
  Doctoral degree               3     1.3
  Professional degree           7     3.1
SNS usage
  Once a week                   4     1.8
  2–4 times a week             12     5.4
  5–6 times a week              4     1.8
  Once a day                   52    23.3
  2–3 times a day              42    18.8
  4–5 times a day              25    11.2
  More than 5 times a day      84    37.7
SNS experience
  Less than a year              3     1.3
  1–2 year(s)                   7     3.1
  3–4 years                    23    10.3
  5–6 years                    48    21.5
  7–8 years                    43    19.3
  9–10 years                   36    16.1
  More than 10 years           63    28.3
negligible in this study [75, 84]. Detailed procedures are reported in the online
supplement – section B.
We assessed the reliability of the measurement items using Cronbach’s alpha
and examined the convergent and discriminant validity of the constructs using
factor analysis and pairwise chi-square tests. Specifically, all of the constructs
demonstrate internal consistency with Cronbach’s alpha values exceeding the
threshold [38]. Factor analysis showed that items load strongly on their
corresponding constructs with low cross-loadings with other constructs.
Furthermore, the chi-square tests showed that all chi-square differences for
each pair of constructs in the research model are statistically significant. An
examination of the variance inflation factors also suggested that the model
does not suffer from multicollinearity issues. Taken together, the measurement
model demonstrates sufficient convergent validity and discriminant validity
[38, 99]. Details of the assessment of reliability, validity, and multicollinearity
can be found in the online supplement – sections C and D.
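The internal-consistency check can be illustrated with a short computation of Cronbach's alpha for one multi-item construct. This is a generic sketch: the five respondents' scores below are made up, not the study's data.

```python
import numpy as np

# An illustrative computation of Cronbach's alpha for one multi-item
# construct; the five respondents' scores below are made up.
def cronbach_alpha(items):
    """items: respondents x items matrix of Likert-type scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 5, 5],
                   [1, 2, 1],
                   [3, 3, 4]])
alpha = cronbach_alpha(scores)   # values above ~.70 are conventionally acceptable
```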
We performed hierarchical regression analyses to test the hypotheses. To test
the direct effects of the crime opportunity components on SNS bullying, we ran
a control effect model and then a main effect model. Table 3 shows the results
of these analyses. We first tested the control variables. The control-only model
explains 29.5 percent of the variance for SNS bullying. After that, we tested the
effects of inclination to bully, presence of suitable targets, and absence of
capable guardianships on SNS bullying. The main effect model explains
Table 3. Results of Regression Analysis on Crime Opportunity Components
SNS Bullying
Dependent variable Control-only Main effect
Control variables
Gender −.165** −.095
Age −.237*** −.102*
Education .111 .051
SNS usage −.037 −.026
SNS experience −.396*** −.216***
SNS real name registration .008 .002
Self-efficacy in SNS bullying .173** .077
Main effects
Inclination to bully .443***
Presence of suitable targets .173***
Absence of capable guardianships .118**
R2 .295 .547
Δ R2 .252***
*p < .05; **p < .01; ***p < .001.
54.7 percent of the variance for SNS bullying. Specifically, inclination to bully
(β = .443, p < .001), presence of suitable targets (β = .173, p < .001), and
absence of capable guardianships (β = .118, p < .01), predict SNS bullying,
supporting Hypothesis 1, Hypothesis 2, and Hypothesis 3.
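The hierarchical step above (a control-only model followed by a main-effect model, compared on explained variance) can be sketched as follows. The data here are synthetic and the coefficients illustrative; only the logic of the R-squared change mirrors the analysis.

```python
import numpy as np

# A minimal sketch of the hierarchical (control-only vs. main-effect) OLS
# step on synthetic data; coefficients and predictor counts are illustrative.
rng = np.random.default_rng(0)
n = 200
controls = rng.normal(size=(n, 2))   # e.g., age, SNS experience
mains = rng.normal(size=(n, 3))      # inclination, suitable targets, absent guardianships
y = mains @ np.array([0.5, 0.2, 0.15]) + 0.1 * controls[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_control = r_squared(controls, y)
r2_full = r_squared(np.column_stack([controls, mains]), y)
delta_r2 = r2_full - r2_control   # variance added by the crime opportunity components
```

Because the models are nested, the full model's R-squared can never fall below the control-only model's; the size and significance of the increment is what the main-effect test evaluates.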
To test the effects of SNS affordances on the evaluation of SNS environmental
conditions, we ran a control effect model and then a main effect model. Table 4
shows the results of the analyses. The results indicate that information retrieval
affordance (β = .265, p < .001) predicts presence of suitable targets, supporting
Hypothesis 5. The model explains 13.3 percent of the variance for presence of
suitable targets. Furthermore, the analysis shows that editability affordance (β =
.233, p < .01) and association affordance (β = .182, p < .05) predict absence of
capable guardianships, supporting Hypothesis 6 and Hypothesis 7. The model
explains 13.4 percent of the variance for absence of capable guardianships.
However, accessibility affordance has no influence on presence of suitable
targets (β = -.098, p > .05), failing to support Hypothesis 4. Table 5 summarizes
the hypotheses test results.
Table 4. Results of Regression Analysis on SNS Affordances

                                   Presence of suitable targets    Absence of capable guardianships
Dependent variable                 Control-only    Main effect     Control-only    Main effect

Control variables
Gender                             −.064           −.034           .095            .098
Age                                −.146*          −.098           −.038           .007
Education                          .027            −.022           .049            .036
SNS usage                          −.053           −.060           .125            .135
SNS experience                     −.168*          −.112           −.029           −.036
SNS real name registration         −.011           .018            −.104           −.067
Self-efficacy in SNS bullying      .103            .086            .125            .075
Main effects
Accessibility affordance                           −.098
Information retrieval affordance                   .265***
Editability affordance                                                             .233**
Association affordance                                                             .182*
R2                                 .070            .133                            .134
Δ R2                                               .063**                          .094***

*p < .05; **p < .01; ***p < .001.
Comparison of Alternative Models
We performed a pseudo-F test to assess the effects of excluding the components
inclination to bully or evaluation of SNS environmental conditions from the model,
along with the resulting change in variance explained for SNS bullying. As shown
in Table 6, the exclusion of either of these components leads to a significant drop in
variance for SNS bullying. This result indicates that SNS bullying is better
explained by examining the likely offender and the environmental condition com-
ponents together, providing further support to crime opportunity theory.
Assessment of the Mediation Effects
We conducted bootstrapping analyses to examine the mediating effects using
PROCESS [41, 58]. We bootstrapped the effects of SNS affordances (i.e., accessibility,
Table 5. Summary of Hypotheses Test Results

Hypothesis                                                                        Result
Hypothesis 1: Inclination to bully positively influences SNS bullying.           Supported
Hypothesis 2: Presence of suitable targets positively influences SNS bullying.   Supported
Hypothesis 3: Absence of capable guardianships positively influences SNS
  bullying.                                                                      Supported
Hypothesis 4: Accessibility affordance positively influences presence of
  suitable targets.                                                              Not supported
Hypothesis 5: Information retrieval affordance positively influences presence
  of suitable targets.                                                           Supported
Hypothesis 6: Editability affordance positively influences absence of capable
  guardianships.                                                                 Supported
Hypothesis 7: Association affordance positively influences absence of capable
  guardianships.                                                                 Supported
Table 6. Results of the Pseudo-F Test

Comparison                                            R2 excluded   R2 full   ΔR2    ΔF          Cohen’s f2   Effect size
Inclination to bully excluded                         .411          .547      .135   63.270***   .156         Medium
Evaluation of SNS environmental conditions excluded   .495          .547      .052   12.137***   .055         Small

Note: f2 ≥ .02, f2 ≥ .15, and f2 ≥ .35 represent small, medium, and large effect sizes, respectively [18].
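The effect-size arithmetic behind the pseudo-F comparison can be made explicit. The reported f2 values are consistent with computing Cohen's f2 from the R-squared change as ΔR2 / (1 − ΔR2); the degrees of freedom in the pseudo-F call below are our assumptions (one added predictor, a residual df near n − p − 1 for n = 223), not figures stated in the text.

```python
# Effect-size arithmetic behind Table 6, as a minimal sketch. The reported
# values are consistent with computing Cohen's f2 from the R-squared change
# as dR2 / (1 - dR2); the degrees of freedom below are our assumptions.

def cohens_f2(delta_r2):
    """Cohen's f2 computed from the change in R-squared."""
    return delta_r2 / (1 - delta_r2)

def pseudo_f(r2_full, delta_r2, df_added, df_resid):
    """F statistic for the R-squared change between nested models."""
    return (delta_r2 / df_added) / ((1 - r2_full) / df_resid)

f2_inclination = cohens_f2(0.135)   # ~.156, a medium effect per the table note
f2_conditions = cohens_f2(0.052)    # ~.055, a small effect
```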
Table 7. Results of the Mediation Tests

SNS affordances (IV)    Evaluation of SNS environmental      Indirect effect     Bias-corrected 95%   Zero?   Mediation?   Direct effect      Bias-corrected 95%   Zero?   Type of
                        conditions (M)                       ab (SE)             CI for ab                                 c’ (SE)            CI for c’                    mediation
Accessibility           Presence of suitable targets         −.051 (.030)        [−.116, .004]        Yes     No           −.154 (.120)       [−.390, .083]        Yes     None
Information retrieval   Presence of suitable targets         .110 (.037)         [.051, .198]         No      Yes          −.071 (.062)       [−.193, .052]        Yes     Full
Editability             Absence of capable guardianships     .099 (.034)         [.047, .180]         No      Yes          −.167 (.097)       [−.359, .025]        Yes     Full
Association             Absence of capable guardianships     .048 (.021)         [.016, .103]         No      Yes          .204 (.061)        [.084, .324]         No      Partial
information retrieval, editability, and association) on the evaluation of SNS environ-
mental conditions (i.e., presence of suitable targets, and absence of capable guardian-
ships) (a1-4), the effects of the evaluation of SNS environmental conditions on SNS
bullying (b1-2), and the effects of SNS affordances on SNS bullying (c’1-4). Table 7
summarizes the mediation tests.
Full mediation is observed when the confidence intervals (CIs) of the indirect
effect (i.e., ab) does not involve zero but the direct effect (i.e., c’) does. In our
model, presence of suitable targets fully mediates the relationship between
information retrieval affordance and SNS bullying; and absence of capable
guardianships fully mediates the relationships between editability affordance
and SNS bullying. Furthermore, absence of capable guardianships partially
mediates the relationships between association affordance and SNS bullying.
However, there is no mediation effect found between accessibility affordance
and SNS bullying. The results indicate that whereas the effects of information
retrieval affordance and editability affordance are explained wholly by presence
of suitable targets and absence of capable guardianships, respectively, associa-
tion affordance has a direct positive effect on SNS bullying beyond the effect
that is mediated by absence of capable guardianships. In other words, being able
to associate one’s act with other SNS users may have psychological effects, such
as diffusion of responsibility, beyond simply perceiving an absence of capable
guardianships [97].
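The decision rule above (mediation when the CI of ab excludes zero; full mediation when the CI of c' additionally includes zero) can be sketched with a percentile bootstrap on synthetic data. PROCESS implements a bias-corrected variant of the same idea; the effect sizes, sample size, and variable names here are illustrative only.

```python
import numpy as np

# A minimal percentile-bootstrap sketch of the indirect-effect test on
# synthetic data; PROCESS implements a bias-corrected variant of the same
# idea. Effect sizes, sample size, and variable names are illustrative.
rng = np.random.default_rng(1)
n = 223
x = rng.normal(size=n)                        # affordance (IV)
m = 0.6 * x + rng.normal(size=n)              # environmental condition (mediator)
y = 0.5 * m + rng.normal(size=n)              # SNS bullying (DV); no direct path

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

boot_ab = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample respondents with replacement
    a = ols(x[idx], m[idx])[1]                # IV -> mediator
    b = ols(np.column_stack([m[idx], x[idx]]), y[idx])[1]  # mediator -> DV, controlling IV
    boot_ab.append(a * b)

lo, hi = np.percentile(boot_ab, [2.5, 97.5])
mediation = not (lo <= 0 <= hi)               # CI excluding zero indicates mediation
```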
Assessment of the Interaction Effects
Crime opportunity theory holds that offenders behave rationally and engage in
crime and deviance when the environment is favorable [30]. Accordingly, we
expect that the evaluation of SNS environmental conditions will not only have
a direct effect on SNS bullying but also exacerbate perpetrators’ inclination to
actually engage in SNS bullying behaviors.
Inclination to Bully × The Evaluation of SNS Environmental Conditions
We expect two two-way interaction effects between the inclination to bully and
the evaluation of SNS environmental conditions (i.e., presence of suitable
targets, and absence of capable guardianships). In traditional bullying, most
bullying takes place among primary and secondary students. In these popula-
tions, there is always a large pool of peers from which a perpetrator can easily
select a suitable target. Also, bullying often takes places after school, when
a vulnerable target is away from teachers’ supervision [24]. Based on this logic,
it is plausible that in SNS bullying, when one with an inclination to bully
evaluates the SNS environment as favorable, he or she would believe that the
effort involved in finding suitable targets or the chances of being caught would
be low. As a rational perpetrator, he or she would be more likely to translate the
594 CHAN ET AL.
inclination into action. Therefore, the relationship between inclination to bully
and SNS bullying will be stronger when the evaluation of the SNS environ-
mental conditions is favorable (i.e., high in terms of presence of suitable targets
or absence of capable guardianships).
Presence of Suitable Targets × Absence of Capable Guardianships
We expect a two-way interaction effect between these two environmental conditions.
Prior research reports that bullying incidents are less likely when teachers are attentive
to students at school [17] and that high levels of parental support reduce the risk of
cyberbullying victimization among adolescents [101]. These findings suggest that
the attractiveness of a target (i.e., the perception of suitability) could be greatly reduced
by the presence of capable guardianships. Based on this logic, it is plausible that when
the perpetrator perceives a high absence of capable guardianships, he or she would
likely estimate a higher number of suitable targets present in the SNS environment. For
instance, if a perpetrator perceives the detection mechanism of SNS bullying to be
ineffective, he or she would tend to believe that users are more vulnerable because there
is no one to protect them from being bullied. Conversely, if a perpetrator perceives that
guardianships are effectively filtering and removing bullying content quickly and
therefore safeguarding the potential targets, they may evaluate users on the SNS
platform as less suitable for bullying. Therefore, the relationship between presence of
suitable targets and SNS bullying is stronger when the perpetrator perceives a higher
degree of absence of capable guardianships.
Inclination to Bully × Presence of Suitable Targets × Absence of Capable
Guardianships
We expect a three-way interaction effect on SNS bullying between the inclina-
tion to bully, presence of suitable targets, and absence of capable guardianships.
Crime opportunity theory assumes that crime components (i.e., offender, target,
and guardians) are interrelated [35]. Crime and deviance are most likely to occur
when an offender is situated in favorable environmental conditions [30].
Therefore, when one with an inclination to bully perceives two favorable SNS
environmental conditions existing in time and space (i.e., a high degree of
presence of suitable targets and a high degree of absence of capable guardian-
ships), he or she expects minimal effort and risk when engaging in SNS bullying.
As a result, the perpetrator is more likely to act opportunistically and translate
the inclination into actual behavior.
We conducted bootstrapping analyses to examine the interaction effects using
PROCESS [41]. Table 8 summarizes the moderation tests. The results show two
significant two-way interactions among the crime opportunity components.
Specifically, presence of suitable targets (β = .185, p < .05) positively moderates the
relationship between inclination to bully and SNS bullying, whereas absence of capable
guardianships (β = .197, p < .001) positively moderates the relationship between presence
of suitable targets and SNS bullying. SNS bullying is more likely to occur when a likely
offender who is inclined to bully perceives a higher number of suitable targets. Targets
are also more prone to being perceived as vulnerable and suitable for an attack when
there is a higher degree of absence of capable guardianships. The significant moderating
effects provide additional support for the salience of environmental conditions in
exacerbating SNS bullying behaviors, supporting crime opportunity theory.
We conducted simple slope analyses to further understand the conditional
effects of the interaction among inclination to bully, presence of suitable targets,
absence of capable guardianships, and SNS bullying. We plotted the significant
interactions at one standard deviation above and below the mean of the variables
[1]. Figure 3a and b show the interaction plots. For the two-way interaction of
inclination to bully × presence of suitable targets, we observe a stronger and
significant positive relationship between inclination to bully and SNS bullying
when presence of suitable targets is high. Furthermore, we observe a stronger
and significant positive relationship between presence of suitable targets and
SNS bullying when there is a high degree of absence of capable guardianships.
Details of the conditional effects at values of the moderators can be found in the
online supplement – section E. These results imply that SNS bullying is more
likely to occur when there are favorable environmental conditions on SNSs. The
results, therefore, support crime opportunity theory, which posits that easy and
tempting environmental conditions attract more crime and deviance.
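The simple-slope procedure (evaluating the conditional effect at one standard deviation above and below the mean of the moderator) can be sketched on synthetic data; the coefficients below are illustrative, not the study's estimates.

```python
import numpy as np

# A sketch of the simple-slope logic on synthetic data: after a moderated
# regression, the slope of inclination on bullying is evaluated at one
# standard deviation above and below the mean of the moderator. All values
# here are illustrative, not the study's estimates.
rng = np.random.default_rng(2)
n = 223
incl = rng.normal(size=n)                     # inclination to bully
targets = rng.normal(size=n)                  # presence of suitable targets
bully = 0.4 * incl + 0.2 * targets + 0.3 * incl * targets + rng.normal(size=n)

# Moderated regression: bully = b0 + b1*incl + b2*targets + b3*(incl x targets)
X = np.column_stack([np.ones(n), incl, targets, incl * targets])
b = np.linalg.lstsq(X, bully, rcond=None)[0]

sd = targets.std(ddof=1)
slope_high = b[1] + b[3] * sd                 # simple slope at +1 SD of the moderator
slope_low = b[1] - b[3] * sd                  # simple slope at -1 SD
# A positive interaction yields a steeper inclination-bullying slope when
# suitable targets are perceived to be plentiful.
```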
Table 8. Results of the Interaction Effects of the Crime Opportunity Components

Dependent variable: SNS bullying

Interaction effects                                                   Coeff. (β) (SE)   t-value (sig.)
Inclination to bully × Presence of suitable targets                   .185 (.084)       2.207*
Inclination to bully × Absence of capable guardianships               .171 (.098)       1.750 (n.s.)
Presence of suitable targets × Absence of capable guardianships       .197 (.052)       3.822***
Inclination to bully × Presence of suitable targets × Absence of
  capable guardianships                                               .079 (.074)       1.067 (n.s.)

*p < .05; ***p < .001. Note: n.s. = not significant.
The objectives of this work are to (1) understand the key factors driving SNS bullying,
and (2) examine how SNS affordances influence the evaluation of SNS environmental
conditions. We build on crime opportunity theory and the affordance perspective to
develop a meta-framework that explains the occurrence of SNS bullying and delineates
the role of technology affordance. The research model was tested using a longitudinal
survey with 223 Facebook users. Empirical results provide strong evidence in support
of the research model, and the overall model explains a substantial amount of variance
for SNS bullying. In the following sections, we discuss implications for research and
practice, limitations, and avenues for future research.
Implications for Research
This work has significant implications for research. First, we offer a comprehensive
theoretical explanation and empirical investigation into SNS bullying that considers
factors associated with both individual characteristic and SNS environmental condi-
tions. We further identify and test the effects of SNS affordances that influence
perpetrators’ evaluation of SNS environmental conditions for SNS bullying. The
empirical results demonstrate strong support of the integration of the two theoretical
perspectives, which offer rich insights into the occurrence of SNS bullying. The meta-
framework also serves as a solid basis for future studies aiming to examine the effects
of technology affordance on technology-related crime and deviance.
Second, our empirical results enrich our scientific understanding of SNS bullying and
add to the knowledge accumulation of the cyberbullying literature. Crime opportunity
theory and its predictive power have been validated previously in offline and
Figure 3. (a) Two-way Interaction between Inclination to Bully and Presence of Suitable
Targets; (b) Two-way Interaction between Presence of Suitable Targets and Absence of
Capable Guardianships
organizational contexts. This work extends the generalizability of the theory to the SNS
bullying context, contributing to the cumulative tradition of scientific research and the
ongoing assessment of the theory. Specifically, our results show that crime opportunity
theory is a plausible theoretical lens for investigating technology-related crime and
deviance at an individual level. We further explore the interaction effects between the
components of crime opportunity theory and identify the combinations that exacerbate
SNS bullying.
Third, we enrich the IS literature by introducing the affordance perspective into the
study of SNS bullying research. Based on past research on technological affordances and
social network research, we identify four SNS affordances and examine their effects on
the environmental conditions conducive to SNS bullying. Our empirical results demon-
strate the salience of affordance in giving rise to the favorable evaluation of criminogenic
opportunities. Technological affordances have long been recognized as a useful concept
to explain the action possibilities perceived by users interacting with technologies.
However, previous work has tended to associate affordances with positive behaviors,
such as maintaining friendships and sharing useful content on social networks, with little
understanding of how technological affordances can enable deviant behaviors. Our
results offer a novel perspective on the far-reaching and unintended effects of technolo-
gical affordances as a potential enabler of technology-related crime and deviance.
Implications for Practice
A large body of research on SNS bullying has shown that online users with certain
characteristics are more vulnerable to both SNS bullying perpetration and victimi-
zation (e.g., Peluchette et al. [73]). Although these insights are valuable, we
contend that actionable and proactive measures can be better developed by focusing
on the rectification of SNS features and environmental conditions.
First, our work observes that SNS bullying could be enabled by SNS affordances. We
found that the information retrieval affordance significantly drives the perception of
suitable targets on SNSs. Educating SNS users to limit the amount of private and
sensitive information that they share on online platforms could help reduce their
attractiveness to potential perpetrators. For instance, educational videos that alert
users about the potential risks of “friending” strangers and disclosing sensitive personal
information could be developed and auto-played on social networking sites themselves.
To mitigate unintended uses of personal information, SNS developers should also
introduce more sophisticated options for users to control their preferences for informa-
tion disclosure. Such measures could help to reduce the attractiveness of users on social
networking platforms and keep them safe from SNS bullying.
Another potential means of reducing SNS bullying would be introducing and reform-
ing legislation that regulates deviant online behaviors. Recently, national governments
have started to engage in legislative action and other measures to protect users from SNS
bullying. For instance, the Prime Minister of the United Kingdom has urged social
networking giants Facebook and Twitter to tighten their rules to prevent cyberbullying
[21]. Such actions might align SNS bullying with higher potential costs, intensifying the
perception of capable guardianships present on the platform. As editability affordance
and association affordance are important drivers for evaluating the absence of capable
guardianships in the SNS environment, new legislation imposing heavier legal consequences
for SNS bullying could be useful in discouraging such deviant behavior. To complement
these legislative initiatives, SNS developers should establish zero-tolerance policies
toward SNS bullying behaviors and indicate clearly the punishment of deviant behavior
to site users. For instance, platforms should give warnings to users if any inappropriate
site use is detected, and temporary account suspension should be imposed if a user is
found guilty of violating the terms of use. It is also essential for SNS developers to be
cautious about their core design principles, which obviously favor maximizing social
interaction. Such design principles have constantly been abused by perpetrators who seek
to involve more accomplices in the incident, thereby allowing them to deny sole
culpability. Finally, SNS platforms should inform users that any information uploaded
onto the site will be stored and subject to investigation upon request by the proper
authorities.
Limitations and Future Research Directions
Our work does have some limitations that should be acknowledged—which, however,
also gesture toward several avenues for future research. First, care must be taken when
extrapolating the findings of this study to bullying on other SNSs and in other countries.
Specifically, we tested the research model using a single SNS platform with American
adult users. The homogeneity of the respondent profile may have affected the general-
izability of our conclusions. However, the sample did consist of respondents with
heterogeneous demographic characteristics—such as SNS usage experience, educational
background, and age—which may have helped to overcome sampling limitations. Future
research should replicate our research model and test whether users’ evaluation of SNS
environmental conditions can be generalized to different user groups (e.g., children),
other cultural contexts (e.g., Asia), or social networking platforms (e.g., Twitter).
Second, since we used an online survey to collect the data, our findings may be
influenced by response bias. To address these concerns, we used a third-party platform
and an anonymous survey setting to minimize the threat of response bias and used the
social desirability scale to detect biased responses. We also applied both procedural
remedies and statistical remedies to detect and mitigate concerns related to common
method bias. Nevertheless, our study may have been influenced by self-selection bias,
which is difficult to estimate when using an online survey design. It is also possible that
some respondents with SNS bullying experience left the survey after being exposed to
sensitive questions.
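A common statistical check of this kind is Harman's single-factor test, which flags concern when a single factor accounts for the majority of the variance across all survey items. The following is a minimal sketch with synthetic data; the item structure, sample size, and 50 percent threshold are illustrative assumptions, not details of our study:

```python
import numpy as np

def harman_single_factor(items: np.ndarray) -> float:
    """Return the proportion of total variance captured by the first
    principal component of the standardized item matrix."""
    # Standardize each item to mean 0 and unit variance.
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    # Eigenvalues of the correlation matrix give per-component variance.
    corr = np.corrcoef(z, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return float(eigvals.max() / eigvals.sum())

# Synthetic survey: 200 respondents, 8 items from two weakly related scales,
# so no single factor should dominate.
rng = np.random.default_rng(0)
scale_a = rng.normal(size=(200, 1)) + rng.normal(scale=0.8, size=(200, 4))
scale_b = rng.normal(size=(200, 1)) + rng.normal(scale=0.8, size=(200, 4))
items = np.hstack([scale_a, scale_b])

share = harman_single_factor(items)
print(f"First-factor variance share: {share:.2f}")
# A rule of thumb flags common method bias when share > 0.5.
```

The single-factor test is only a coarse diagnostic; more stringent remedies, such as marker-variable techniques, are generally preferred alongside it.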
Third, we consolidated four general SNS affordances from the literature and tested
their effects in our research model explaining SNS bullying. Although our study breaks new ground by investigating how SNS affordances unintentionally give rise to favorable environmental conditions for SNS bullying, future research should explore
CYBERBULLYING ON SOCIAL NETWORKING SITES 599
other SNS affordances associated with specific social networking platforms. For
instance, Snapchat allows photos to be viewable for a maximum of only 10 seconds.
Such a design can be further examined by introducing an “erasability” affordance, which
may affect the evaluation of capable guardianships on Snapchat and alter SNS bullying
behaviors and dynamics. Future research should also examine the technical objects that give rise to an affordance. In this study, we broadly considered the technical object to
be the “SNS” (i.e., Facebook). An experimental setup would, therefore, be beneficial
for future studies to better understand and test the exact technical features and char-
acteristics that give rise to these affordances.
Finally, because we used a typical variance model based on longitudinal online
survey design, we were only able to infer causation from the theoretical foundation
and research design. Despite this limitation, we prefer the survey method over other
alternatives. It allows us to maximize the predicted frequency of SNS bullying by
providing a snapshot of the relative effects and interaction effects among the various
crime opportunity components. Future research should use experiments, interviews,
and case studies to validate the research findings. However, these alternative research designs may expose participants to undesirable cyberbullying experiences and may conflict with participants’ ability to remain anonymous because identification is required. This may lead to new challenges in eliciting honest
responses while maintaining confidentiality.
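The interaction effects referred to above are typically estimated with moderated regression using mean-centered product terms, as described by Aiken et al. [1]. Below is a minimal sketch on simulated data; the predictor names, coefficients, and sample size are hypothetical and not taken from our model:

```python
import numpy as np

# Hypothetical crime-opportunity components: "exposure" and "guardianship"
# predicting bullying frequency, with a true interaction coefficient of -0.4.
rng = np.random.default_rng(1)
n = 500
exposure = rng.normal(size=n)
guardianship = rng.normal(size=n)
noise = rng.normal(scale=0.5, size=n)
bullying = (0.6 * exposure - 0.3 * guardianship
            - 0.4 * exposure * guardianship + noise)

# Design matrix: intercept, main effects, and a mean-centered product term.
X = np.column_stack([
    np.ones(n),
    exposure,
    guardianship,
    (exposure - exposure.mean()) * (guardianship - guardianship.mean()),
])
beta, *_ = np.linalg.lstsq(X, bullying, rcond=None)
print("interaction estimate:", round(beta[3], 2))
```

Mean-centering the product term reduces its collinearity with the main effects while leaving the interaction coefficient interpretable, which is why it is the conventional choice in this literature.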
Conclusion
Drawing on crime opportunity theory and the affordance perspective, we develop and
empirically test a research model to explain SNS bullying. The research model explains
a substantial amount of the variance for SNS bullying and highlights the imperative
role of technology affordance and SNS environment in shaping SNS bullying. We
believe that the results have significant implications for research on IT deviant use and
provide practical guidance for formulating preventive measures and educational pro-
grams to combat SNS bullying.
Acknowledgement: The authors wish to thank the Editor-in-Chief, Professor Zwass, and the
reviewers for their support and guidance throughout the review process.
The work described in this article was partially supported by a grant from the Research Grant
Council of the Hong Kong Special Administrative Region, China (Project No. HKBU
12511016).
Supplemental data for this article can be accessed on the publisher’s website.
600 CHAN ET AL.
https://doi.org/10.1080/07421222.2019.1599500
ORCID
Tommy K. H. Chan http://orcid.org/0000-0001-9930-8897
Christy M. K. Cheung http://orcid.org/0000-0003-4411-0570
Randy Y. M. Wong http://orcid.org/0000-0001-6585-9973
REFERENCES
1. Aiken, L.S.; West, S.G.; and Reno, R.R. Multiple regression: Testing and interpreting
interactions. Thousand Oaks: SAGE, 1991.
2. Alhabash, S.; McAlister, A.R.; Hagerstrom, A.; Quilliam, E.T.; Rifon, N.J.; and
Richards, J.I. Between likes and shares: Effects of emotional appeal and virality on the
persuasiveness of anticyberbullying messages on Facebook. Cyberpsychology, Behavior, and
Social Networking, 16, 3 (2013), 175–182.
3. Anderson, J.; Bresnahan, M.; and Musatics, C. Combating weight-based cyberbullying on
Facebook with the dissenter effect. Cyberpsychology, Behavior, and Social Networking, 17, 5
(2014), 281–286.
4. Bastiaensens, S.; Vandebosch, H.; Poels, K.; Van Cleemput, K.; DeSmet, A.; and De
Bourdeaudhuij, I. Cyberbullying on social network sites. An experimental study into bystan-
ders’ behavioural intentions to help the victim or reinforce the bully. Computers in Human
Behavior, 31, February 2014 (2014), 259–271.
5. Bayern, M. How AI became Instagram’s weapon of choice in the war on cyberbullying, 2017. https://www.techrepublic.com/article/how-ai-became-instagrams-weapon-of-
choice-in-the-war-on-cyberbullying/(accessed on July 7, 2017).
6. Bellmore, A.; Calvin, A.J.; Xu, J.-M.; and Zhu, X. The five W’s of ‘bullying’ on Twitter: Who, What, Why, Where, and When. Computers in Human Behavior, 44,
March 2015 (2015), 305–314.
7. Bowler, L.; Knobel, C.; and Mattern, E. From cyberbullying to well-being: A
narrative-based participatory approach to values-oriented design for social media. Journal
of the Association for Information Science and Technology, 66, 6 (2015), 1274–1293.
8. Brody, N.; and Vangelisti, A.L. Bystander intervention in cyberbullying.
Communication Monographs, 83, 1 (2015), 1–26.
9. Calvin, A.J.; Bellmore, A.; Xu, J.-M.; and Zhu, X. #bully: Uses of hashtags in posts
about bullying on Twitter. Journal of School Violence, 14, 1 (2015), 133–153.
10. Cao, B.; and Lin, W.-Y. How do victims react to cyberbullying on social networking
sites? The influence of previous cyberbullying victimization experiences. Computers in
Human Behavior, 52, November 2015 (2015), 458–465.
11. Cassidy, A. Are Facebook and Twitter doing enough to protect users?, The Guardian, 2016.
12. Chapin, J. Adolescents and cyber bullying: The precaution adoption process model.
Education and Information Technologies, 21, 4 (2016), 719–728.
13. Charles, C. 5 reasons why accepting strangers on Facebook is a bad idea, 2014. http://
www.thatsnonsense.com/5-reasons-why-accepting-strangers-on-facebook-is-a-bad-idea
/(accessed on July 7, 2017).
14. Charleston, W. UPDATE: ‘Cyberbullying’ bill to make online harassment a crime in W.Va. passes Senate. WSAZ, 2017.
15. Chatterjee, S.; Moody, G.; Lowry, P.B.; Chakraborty, S.; and Hardin, A. Strategic relevance of organizational virtues enabled by information technology in organizational innovation. Journal of Management Information Systems, 32, 3 (2015), 158–196.
16. Cheung, C.; Lee, Z.W.; and Chan, T.K. Self-disclosure in social networking sites: the role of
perceived cost, perceived benefits and social influence. Internet Research, 25, 2 (2015), 279–299.
17. Cho, S.; Wooldredge, J.; and Park, C.S. Lifestyles/routine activities and bullying among South Korean youths. Victims & Offenders, 11 (2016), 285–314.
18. Cohen, J. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum, 1988.
19. D’Arcy, J.; Hovav, A.; and Galletta, D. User awareness of security countermeasures
and its impact on information systems misuse: A deterrence approach. Information Systems
Research, 20, 1 (2009), 79–98.
20. Davern, M.; Shaft, T.; and Te’eni, D. Cognition matters: Enduring questions in cognitive
IS research. Journal of the Association for Information Systems, 13, 4 (2012), 273–314.
21. Davidson, L. STOP THE TROLLS! Theresa May warns social networking giants
Facebook and Twitter to tighten up their rules on preventing cyber-bullying, 2016. https://
www.thesun.co.uk/news/2202601/theresa-may-warns-social-networking-giants-facebook-and-
twitter-to-tighten-up-their-rules-on-preventing-cyber-bullying/(accessed on July 7, 2017).
22. de Coster, S.; Estes, S.B.; and Mueller, C.W. Routine activities and sexual harassment
in the workplace. Work and Occupations, 26, 1 (1999), 21–49.
23. Ditch the Label. The cyberbullying report 2013. http://www.ditchthelabel.org/research-
papers/the-cyberbullying-survey-2013/(accessed on July 8, 2017).
24. Dooley, J.J.; Pyżalski, J.; and Cross, D. Cyberbullying versus face-to-face bullying:
A theoretical and conceptual review. Journal of Psychology, 217, 4 (2009), 182–188.
25. Dredge, R.; Gleeson, J.; and de la Piedad Garcia, X. Cyberbullying in social networking sites:
An adolescent victim’s perspective. Computers in Human Behavior, 36, July 2014 (2014), 13–20.
26. Dredge, R.; Gleeson, J.; and de la Piedad Garcia, X. Presentation on Facebook and risk of cyberbullying victimisation. Computers in Human Behavior, 40, November 2014 (2014), 16–22.
27. Dredge, R.; Gleeson, J.F.M.; and de la Piedad Garcia, X. Risk factors associated with
impact of severity of cyberbullying victimization: A qualitative study of adolescent online
social networking. Cyberpsychology, Behavior, and Social Networking, 17, 5 (2014), 287–291.
28. ebizmba.com. Top 15 most popular social networking sites | December 2017, 2017.
http://www.ebizmba.com/articles/social-networking-websites (accessed on July 7, 2017).
29. Fay, L. New teen survey reveals cyberbullying moving beyond social media to email,
messaging apps, YouTube, 2017. https://www.the74million.org/new-teen-survey-reveals-
cyberbullying-moving-beyond-social-media-to-email-messaging-apps-youtube/(accessed on
August 29, 2017).
30. Felson, M.; and Clarke, R., “Opportunity makes the thief: Practical theory for crime
prevention,” The Policing and Reducing Crime Unit, London, 1998.
31. Freis, S.D.; and Gurung, R.A.R. A Facebook analysis of helping behavior in online
bullying. Psychology of Popular Media Culture, 2, 1 (2013), 11–19.
32. Gahagan, K.; Vaterlaus, J.M.; and Frost, L.R. College student cyberbullying on social
networking sites: Conceptualization, prevalence, and perceived bystander responsibility.
Computers in Human Behavior, 55, Part B (2016), 1097–1105.
33. Ging, D.; and Norman, O.H.J. Cyberbullying, conflict management or just messing?
Teenage girls’ understandings and experiences of gender, friendship, and conflict on Facebook
in an Irish second-level school. Feminist Media Studies, 16, 5 (2016), 805–821.
34. Goodman, J.K.; Cryder, C.E.; and Cheema, A. Data collection in a flat world: The
strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision
Making, 26, 3 (2013), 213–224.
35. Gottfredson, M.; and Hirschi, T. A General Theory of Crime. Stanford, CA: Stanford
University Press, 1990.
36. Grgecic, D.; Holten, R.; and Rosenkranz, C. The impact of functional affordances and
symbolic expressions on the formation of beliefs. Journal of the Association for Information
Systems, 16, 7 (2015), 580–607.
37. GuardChild.com. Cyber bullying statistics, 2016. http://www.guardchild.com/cyber-
bullying-statistics/(accessed on July 7, 2017).
38. Hair, J.F.; Black, W.C.; Babin, B.J.; and Anderson, R.E. Multivariate Data Analysis,
7th Ed. Upper Saddle River: NJ: Prentice-Hall International, 2009.
39. Hamm, M.P.P.; Newton, A.S.P.; Chisholm, A.B.; Shulhan, J.B.; Milne, A.M.;
Sundar, P.P.; Ennis, H.M.A.; Scott, S.D.P.; and Hartling, L.P. Prevalence and effect of
cyberbullying on children and young people: A scoping review of social media studies.
JAMA Pediatrics, 169, 8 (2015), 770–777.
40. Hassan, C. Teen who was relentlessly bullied kills herself in front of her family. CNN, 2016.
41. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis:
A Regression-based Approach. New York: Guilford, 2018.
42. Heirman, W.; and Walrave, M. Predicting adolescent perpetration in cyberbullying: An
application of the theory of planned behavior. Psicothema, 24, 4 (2012), 614–620.
43. Hinduja, S. Neutralization theory and online software piracy: An empirical analysis.
Ethics and Information Technology, 9, 3 (2007), 187–204.
44. Hinduja, S.; and Patchin, J.W. State cyberbullying laws: A brief review of state
cyberbullying laws and policies, 2015. https://cyberbullying.org/Bullying-and-
Cyberbullying-Laws (accessed on July 7, 2017).
45. Kane, G.C.; Alavi, M.; Labianca, G.; and Borgatti, S.P. What’s different about social
media networks? A framework and research agenda. MIS Quarterly, 38, 1 (2014), 275–304.
46. Kokkinos, C.M.; Baltzidis, E.; and Xynogala, D. Prevalence and personality correlates
of Facebook bullying among university undergraduates. Computers in Human Behavior, 55,
Part B (2016), 840–850.
47. Kowalski, R.M.; Giumetti, G.W.; Schroeder, A.N.; and Lattanner, M.R. Bullying in the
digital age: A critical review and meta-analysis of cyberbullying research among youth.
Psychological Bulletin, 140, 4 (2014), 1073–1137.
48. Kowalski, R.M.; Limber, S.P.; and Agatston, P.W. Cyber Bullying: Bullying in the
Digital Age. Oxford: Blackwell Publishing, 2008.
49. Kwan, G.C.E.; and Skoric, M.M. Facebook bullying: An extension of battles in school.
Computers in Human Behavior, 29, 1 (2013), 16–25.
50. Lazuras, L.; Barkoukis, V.; Ourda, D.; and Tsorbatzoudis, H. A process model of
cyberbullying in adolescence. Computers in Human Behavior, 29, 3 (2013), 881–887.
51. Lee, J.Y.; Kwon, Y.; Yang, S.; Park, S.; Kim, E.-M.; and Na, E.-Y. Differences in
friendship networks and experiences of cyberbullying among Korean and Australian
adolescents. The Journal of Genetic Psychology: Research and Theory on Human
Development, 178, 1 (2017), 44–57.
52. Legislation.gov.uk. Communications act 2003, 2018. https://www.legislation.gov.uk/
ukpga/2003/21/section/127 (accessed on August 8, 2017).
53. Leonardi, P.M. When does technology use enable network change in organizations?
A comparative study of feature use and shared affordances. MIS Quarterly, 37, 3 (2013), 749–776.
54. Lowry, P.B.; D’Arcy, J.; Hammer, B.; and Moody, G.D. “Cargo Cult” science in
traditional organization and information systems survey research: A case for using nontradi-
tional methods of data collection, including Mechanical Turk and online panels. The Journal
of Strategic Information Systems, 25, 3 (2016), 232–240.
55. Lowry, P.B.; Moody, G.D.; and Chatterjee, S. Using IT design to prevent
cyberbullying. Journal of Management Information Systems, 34, 3 (2017), 863–901.
56. Lowry, P.B.; Zhang, J.; Wang, C.; and Siponen, M. Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27, 4 (2016), 962–986.
57. Lwin, M.; Stanaland, A.; and Miyazaki, A. Protecting children’s privacy online: How
parental mediation strategies affect website safeguard effectiveness. Journal of Retailing, 84,
2 (2008), 205–217.
58. MacKinnon, D.P.; and Fairchild, A.J. Current directions in mediation analysis. Current
Directions in Psychological Science, 18, 1 (2009), 16–20.
59. Madden, M.; Lenhart, A.; Cortesi, S.; Gasser, U.; Duggan, M.; Smith, A.; and
Beaton, M., “Teens, Social Media, and Privacy,” Pew Research Center, 2013.
60. Majchrzak, A.; Faraj, S.; Kane, G.C.; and Azad, B. The contradictory influence of
social media affordances on online communal knowledge sharing. Journal of Computer-
Mediated Communication, 19, 1 (2013), 38–55.
61. Majchrzak, A.; Markus, M.L.; and Wareham, J. ICT and societal challenges. MIS
Quarterly, 37, 1 (2013), 1–3.
62. Marcum, C.D.; Higgins, G.E.; Freiburger, T.L.; and Ricketts, M.L. Exploration of the
cyberbullying victim/offender overlap by sex. American Journal of Criminal Justice, 39, 3
(2014), 538–548.
63. Markus, M.L.; and Silver, M.S. A foundation for the study of IT effects: A new look at
DeSanctis and Poole’s concepts of structural features and spirit. Journal of the Association
for Information Systems, 9, 10/11 (2008), 609–632.
64. McAfee, “2014 Teens and the screen study: Exploring online privacy, social network-
ing and cyberbullying,” 2014.
65. McHugh, B.C.; Wisniewski, P.; Rosson, M.B.; and Carroll, J.M. When social media
traumatizes teens: The roles of online risk exposure, coping, and post-traumatic stress.
Internet Research, 28, 5 (2018), 1169–1188.
66. Mcneel, B. Latest local cyberbullying case contains valuable lessons. The Rivard Report,
2017.
67. Meter, D.J.; and Bauman, S. When sharing is a bad idea: The effects of online social
network engagement and sharing passwords with friends on cyberbullying involvement.
Cyberpsychology, Behavior, and Social Networking, 18, 8 (2015), 437–442.
68. Moore, G.C.; and Benbasat, I. Development of an instrument to measure the percep-
tions of adopting an information technology innovation. Information Systems Research, 2, 3
(1991), 192–222.
69. Obermaier, M.; Fawzi, N.; and Koch, T. Bystanding or standing by? How the number of
bystanders affects the intention to intervene in cyberbullying. New Media & Society (2014), 1–7.
70. Pabian, S.; De Backer, C.J.S.; and Vandebosch, H. Dark Triad personality traits and
adolescent cyber-aggression. Personality and Individual Differences, 75 (2015), 41–46.
71. Pabian, S.; and Vandebosch, H. Using the theory of planned behaviour to understand
cyberbullying: The importance of beliefs for developing interventions. European Journal of
Developmental Psychology, 11, 4 (2014), 463–477.
72. Patchin, J.W.; and Hinduja, S. Bullies move beyond the schoolyard. Youth Violence
and Juvenile Justice, 4, 2 (2006), 148–169.
73. Peluchette, J.V.; Karl, K.; Wood, C.; and Williams, J. Cyberbullying victimization: Do
victims’ personality and risky social network behaviors contribute to the problem?
Computers in Human Behavior, 52, November 2015 (2015), 424–435.
74. Pew Research Center, “Online Harassment 2017,” July 11 2017.
75. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; and Podsakoff, N.P. Common method
biases in behavioral research: A critical review of the literature and recommended remedies.
Journal of Applied Psychology, 88, 5 (2003), 879–903.
76. Popp, A.M. The difficulty in measuring suitable targets when modeling victimization.
Violence and Victims, 27, 5 (2012), 689–709.
77. Rachoene, M.; and Oyedemi, T. From self-expression to social aggression:
Cyberbullying culture among South African youth on Facebook. Communicatio, 41, 3
(2015), 302–319.
78. Räsänen, P.; Hawdon, J.; Holkeri, E.; Keipi, T.; Näsi, M.; and Oksanen, A. Targets of
online hate: Examining determinants of victimization among young Finnish Facebook users.
Violence and Victims, 31, 4 (2016), 708–725.
79. Raskauskas, J.; and Stoltz, A.D. Involvement in traditional and electronic bullying
among adolescents. Developmental Psychology, 43, 3 (2007), 564–575.
80. Reynolds, W.M. Development of reliable and valid short forms of the Marlowe-Crowne
Social Desirability Scale. Journal of Clinical Psychology, 38, 1 (1982), 119–125.
81. Rindfleisch, A.; Malter, A.J.; Ganesan, S.; and Moorman, C. Cross-sectional versus
longitudinal survey research: Concepts, findings, and guidelines. Journal of Marketing
Research, 45, 3 (2008), 261–279.
82. Runions, K.C.; and Bak, M. Online moral disengagement, cyberbullying, and
cyber-aggression. Cyberpsychology Behavior and Social Networking, 18, 7 (2015), 400–405.
83. Schacter, H.L.; Greenberg, S.; and Juvonen, J. Who’s to blame?: The effects of victim
disclosure on bystander reactions to cyberbullying. Computers in Human Behavior, 57,
April 2016 (2016), 115–121.
84. Schwarz, A.; Rizzuto, T.; Carraher-Wolverton, C.; Roldán, J.L.; and Barrera-Barrera, R.
Examining the impact and detection of the urban legend of common method bias. ACM SIGMIS
Database: the DATABASE for Advances in Information Systems, 48, 1 (2017), 93–119.
85. Seidel, S.; Recker, J.C.; and Vom Brocke, J. Sensemaking and sustainable practicing:
functional affordances of information systems in green transformations. MIS Quarterly, 37, 4
(2013), 1275–1299.
86. Sen, R.; and Borle, S. Estimating the contextual risk of data breach: An empirical
approach. Journal of Management Information Systems, 32, 2 (2015), 314–341.
87. Sengupta, A.; and Chaudhuri, A. Are social networking sites a source of online
harassment for teens? Evidence from survey data. Children and Youth Services Review, 33,
2 (2011), 284–290.
88. Slonje, R.; Smith, P.K.; and Frisén, A. The nature of cyberbullying, and strategies for
prevention. Computers in Human Behavior, 29, 1 (2013), 26–32.
89. Smith, A., “6 New Facts about Facebook,” Pew Research Center, 2014.
90. Statista.com, “Number of Social Media Users Worldwide from 2010 to 2021 (in billions),”
2017.
91. Sticca, F.; Ruggieri, S.; Alsaker, F.; and Perren, S. Longitudinal risk factors for cyberbul-
lying in adolescence. Journal of Community & Applied Social Psychology, 23, 1 (2013), 52–67.
92. Strong, D.M.; Johnson, S.A.; Tulu, B.; and Trudel, J. A theory of organization-EHR
affordance actualization. Journal of the Association for Information Systems, 15, 2 (2014), 53–85.
93. Suh, A.; Cheung, C.M.; Ahuja, M.; and Wagner, C. Gamification in the workplace:
The central role of the aesthetic experience. Journal of Management Information Systems,
34, 1 (2017), 268–305.
94. Tarafdar, M.; Gupta, A.; and Turel, O. The dark side of information technology use.
Information Systems Journal, 23, 3 (2013), 269–275.
95. Tokunaga, R.S. Following you home from school: A critical review and synthesis of
research on cyberbullying victimization. Computers in Human Behavior, 26, 3 (2010),
277–287.
96. Treem, J.W.; and Leonardi, P.M. Social media use in organizations: Exploring the
affordances of visibility, editability, persistence, and association. Annals of the International
Communication Association, 36, 1 (2013), 143–189.
97. Vance, A.; Lowry, P.B.; and Eggett, D. Increasing accountability through the user
interface design artifacts: A new approach to addressing the problem of access-policy
violations. MIS Quarterly, 39, 2 (2015), 345–366.
98. Vandebosch, H.; and Van Cleemput, K. Cyberbullying among youngsters: Profiles of bullies and victims. New Media & Society, 11, 8 (2009), 1349–1371.
99. Venkatraman, N. Strategic orientation of business enterprises: The construct, dimen-
sionality, and measurement. Management Science, 35, 8 (1989), 942–962.
100. Vold, G.B.; Bernard, T.J.; and Snipes, J.B. Theoretical Criminology. New York: Oxford
University Press, 1998.
101. Wang, J.; Iannotti, R.J.; and Nansel, T.R. School bullying among adolescents in the
United States: Physical, verbal, relational, and cyber. Journal of Adolescent Health, 45, 4
(2009), 368–375.
102. Wegge, D.; Vandebosch, H.; Eggermont, S.; and Walrave, M. The strong, the weak,
and the unbalanced: The link between tie strength and cyberaggression on a social network
site. Social Science Computer Review, 33, 3 (2015), 315–342.
103. Whittaker, E.; and Kowalski, R.M. Cyberbullying via social media. Journal of School
Violence, 14, 1 (2015), 11–29.
104. Wiklund, G.; Ruchkin, V.V.; Koposov, R.A.; and af Klinteberg, B. Pro-bullying
attitudes among incarcerated juvenile delinquents: Antisocial behavior, psychopathic tenden-
cies and violent crime. International Journal of Law and Psychiatry, 37, 3 (2014), 281–288.
105. Willard, N.E. An educator’s guide to cyberbullying and cyberthreats, 2004. http://
cyberbully.org/(accessed on July 7, 2017).
106. Williams, K.R.; and Guerra, N.G. Prevalence and predictors of Internet bullying.
Journal of Adolescent Health, 41, 6 (2007), 14–21.
107. Willison, R.; and Backhouse, J. Opportunities for computer crime: considering systems
risk from a criminological perspective. European Journal of Information Systems, 15, 4
(2006), 403–414.
A
pp
en
di
x
T
ab
le
A
.
S
um
m
ar
y
of
P
ri
or
S
tu
di
es
on
S
N
S
B
ul
ly
in
g
S
tu
dy
O
bj
ec
ti
ve
T
he
or
et
ic
al
fo
un
da
ti
on
M
et
ho
d
S
am
pl
e
A
lh
ab
as
h
et
al
.
[2
]
T
o
ex
pl
or
e
th
e
pe
rs
ua
si
ve
ef
fe
ct
s
of
th
e
us
e
of
em
ot
io
na
la
pp
ea
l
an
d
m
es
sa
ge
vi
ra
lit
y
of
F
ac
eb
oo
k
st
at
us
up
da
te
s
as
a
co
rr
ec
tiv
e
to
ol
fo
r
cy
be
rb
ul
ly
in
g
D
id
no
t
sp
ec
ify
E
xp
er
im
en
t
U
ni
ve
rs
ity
st
ud
en
t
(n
=
36
5)
A
nd
er
so
n
et
al
.
[3
]
T
o
te
st
ho
w
so
ci
al
su
pp
or
t
fo
r
th
e
vi
ct
im
,
vi
a
di
ss
en
tin
g
co
m
m
en
ts
,
m
ay
af
fe
ct
by
s
t
an
de
rs
’
be
ha
vi
or
s
in
a
cy
be
rb
ul
ly
in
g
ep
is
od
e
D
id
no
t
sp
ec
ify
E
xp
er
im
en
t
U
ni
ve
rs
ity
st
ud
en
t(
n
=
18
1)
B
as
tia
en
se
ns
et
al
.
[4
]
T
o
ex
am
in
e
th
e
in
flu
en
ce
of
co
nt
ex
tu
al
fa
ct
or
s
on
by
st
an
de
rs
’
be
ha
vi
or
al
in
te
nt
io
ns
to
he
lp
th
e
vi
ct
im
or
re
in
fo
rc
e
th
e
bu
lly
in
ca
se
s
of
ha
ra
ss
m
en
t
on
F
ac
eb
oo
k
D
id
no
t
sp
ec
ify
E
xp
er
im
en
t
H
ig
h
sc
ho
ol
st
ud
en
ts
(n
=
45
3)
B
el
lm
or
e
et
al
.
[6
]
T
o
un
de
rs
ta
nd
cy
be
rb
ul
ly
in
g
us
in
g
so
ci
al
m
ed
ia
da
ta
D
id
no
t
sp
ec
ify
M
ac
hi
ne
le
ar
ni
ng
m
et
ho
ds
P
ub
lic
tw
ee
t
s
be
tw
ee
n
S
ep
te
m
be
r
20
11
,
an
d
A
ug
us
t
20
13
(n
=
97
64
58
3)
B
ow
le
r
et
al
.
[7
]
T
o
ge
ne
ra
te
a
va
lu
es
-o
rie
nt
ed
,
us
er
-g
en
er
at
ed
co
nc
ep
tu
al
fr
am
ew
or
k
fo
r
un
de
rs
ta
nd
in
g
an
d
gu
id
in
g
th
e
de
si
gn
of
so
ci
al
m
ed
ia
th
at
m
ig
ht
co
un
te
ra
ct
or
pr
ev
en
t
cy
be
rb
ul
ly
in
g
C
he
ng
an
d
F
le
is
ch
m
an
’s
va
lu
es
fr
am
ew
or
k
V
is
ua
l
na
rr
at
iv
e
in
qu
iry
U
ni
ve
rs
ity
st
ud
en
ts
an
d
T
ee
n
s
(n
=
9)
B
ro
dy
an
d
V
an
ge
lis
ti
[8
]
T
o
ex
am
in
e
va
ria
bl
es
th
at
w
er
e
ex
pe
ct
ed
to
in
flu
en
ce
th
e
pr
op
en
si
ty
of
a
by
st
an
de
r
to
ac
t
in
cy
be
rb
ul
ly
in
g
in
ci
de
nt
s
B
ys
ta
nd
er
ef
fe
ct
S
ur
ve
y;
E
xp
er
im
en
t
U
ni
ve
rs
ity
st
ud
en
ts
(n
=
26
5;
n
=
37
9)
606 CHAN ET AL.
C
al
vi
n
et
al
.
[9
]
T
o
un
de
rs
ta
nd
th
e
bu
lly
in
g
to
pi
cs
th
at
T
w
itt
er
us
er
s
po
st
ed
ab
ou
t
ac
ro
ss
20
12
by
st
ud
yi
ng
w
hi
ch
ha
sh
ta
gs
w
er
e
em
pl
oy
ed
an
d
ho
w
th
ey
w
er
e
ut
ili
ze
d.
D
id
no
t
sp
ec
ify
D
at
a
m
in
in
g
H
as
ht
ag
s
be
tw
ee
n
Ja
nu
ar
y
1,
20
12
an
d
D
ec
em
be
r
31
,
20
12
(n
=
55
28
31
)
C
ao
an
d
Li
n
[1
0]
T
o
in
ve
st
ig
at
e
ho
w
vi
ct
im
iz
at
io
n
ex
pe
rie
nc
es
,
in
flu
en
ce
te
en
ag
er
s’
re
ac
tio
n
st
ra
te
gi
es
w
he
n
w
itn
es
si
ng
cy
be
rb
ul
ly
in
g
on
S
N
S
s
D
id
no
t
sp
ec
ify
S
ur
ve
y
T
ee
ns
(n
=
62
2)
C
ha
pi
n
[1
2]
T
o
do
cu
m
en
t
ad
ol
es
ce
nt
s
us
e
of
F
ac
eb
oo
k
an
d
ex
pe
rie
nc
e
w
ith
cy
be
rb
ul
ly
in
g
T
he
pr
ec
au
tio
n
ad
op
tio
n
pr
oc
es
s
m
od
el
S
ur
ve
y
A
do
le
sc
en
ts
(n
=
14
88
)
D
re
dg
e
et
al
.
[2
5]
T
o
ex
am
in
e
ad
ol
es
ce
nt
vi
ct
im
s’
un
de
rs
ta
nd
in
g
of
cy
be
rb
ul
ly
in
g,
th
e
sp
ec
ifi
c
cy
be
rb
ul
ly
in
g
ev
en
ts
ex
pe
rie
nc
ed
in
S
N
S
an
d
im
pa
ct
s
D
id
no
t
sp
ec
ify
In
te
rv
ie
w
H
ig
h
sc
ho
ol
st
ud
en
ts
(n
=
25
)
D
re
dg
e
et
al
.
[2
6]
T
o
id
en
tif
y
th
e
fa
ct
or
s
th
at
af
fe
ct
th
e
im
pa
ct
of
cy
be
rb
ul
ly
in
g
up
on
ad
ol
es
ce
nt
vi
ct
im
s
w
ho
us
e
S
N
S
D
id
no
t
sp
ec
ify
In
te
rv
ie
w
H
ig
h
sc
ho
ol
st
ud
en
ts
(n
=
25
)
D
re
dg
e
et
al
.
[2
7]
T
o
in
ve
st
ig
at
e
th
e
as
so
ci
at
io
ns
be
tw
ee
n
se
lf-
pr
es
en
ta
tio
n
be
ha
vi
or
s
in
F
ac
eb
oo
k
an
d
cy
be
rb
ul
ly
in
g
vi
ct
im
iz
at
io
n
T
he
vi
ct
im
pr
ec
ip
ita
tio
n
m
od
el
C
on
te
nt
an
al
ys
is
F
ac
eb
oo
k
pr
of
ile
pa
ge
s
(n
=
14
7)
F
re
is
an
d
G
ur
un
g
[3
1]
T
o
de
te
rm
in
e
w
ha
t
w
ill
m
ak
e
a
pa
rt
ic
ip
an
t
in
te
rv
en
e
in
an
on
lin
e
bu
lly
in
g
si
tu
at
io
n,
an
d
to
m
ea
su
re
th
e
ty
pe
s
of
te
ch
ni
qu
es
pa
rt
ic
ip
an
ts
us
e
to
in
te
rv
en
e
D
id
no
t
sp
ec
ify
E
xp
er
im
en
t
U
ni
ve
rs
ity
st
ud
en
t
(n
=
37
)
G
ah
ag
an
et
al
.
[3
2]
T
o
in
cr
ea
se
un
de
rs
ta
nd
in
g
re
ga
rd
in
g
cy
be
rb
ul
ly
in
g
ex
pe
rie
nc
e
on
so
ci
al
ne
tw
or
ki
ng
si
te
s
am
on
g
co
lle
ge
st
ud
en
ts
D
id
no
t
sp
ec
ify
S
ur
ve
y
U
ni
ve
rs
ity
st
ud
en
t
(n
=
19
6)
G
in
g
an
d
N
or
m
an
[3
3]
T
o
ex
pl
or
e
ho
w
fr
ie
nd
sh
ip
,
co
nf
lic
t,
an
d
bu
lly
in
g
ar
e
ex
pe
rie
nc
ed
an
d
un
de
rs
to
od
by
Ir
is
h
te
en
ag
e
gi
rls
in
re
la
tio
n
to
F
ac
eb
oo
k
D
id
no
t
sp
ec
ify
S
ur
ve
y
H
ig
h
sc
ho
ol
st
ud
en
t
(n
=
11
6)
H
am
m
et
al
.
[3
9]
T
o
re
vi
ew
ex
is
tin
g
pu
bl
ic
at
io
ns
th
at
ex
am
in
e
th
e
he
al
th
-r
el
at
ed
ef
fe
ct
s
of
cy
be
rb
ul
ly
in
g
vi
a
so
ci
al
m
ed
ia
am
on
g
ch
ild
re
n
an
d
ad
ol
es
ce
nt
s
D
id
no
t
sp
ec
ify
Li
te
ra
tu
re
re
vi
ew
P
ee
r-
re
vi
ew
ed
jo
ur
na
l
ar
tic
le
s
(n
=
34
) (c
on
ti
nu
es
)
CYBERBULLYING ON SOCIAL NETWORKING SITES 607
T
ab
le
A
.
C
on
ti
nu
ed
S
tu
dy
O
bj
ec
ti
ve
T
he
or
et
ic
al
fo
un
da
ti
on
M
et
ho
d
S
am
pl
e
K
ok
ki
no
s
et
al
.
[4
6]
T
o
ex
am
in
e
th
e
pr
ev
al
en
ce
of
cy
be
rb
ul
ly
in
g
on
F
ac
eb
oo
k
an
d
its
as
so
ci
at
io
ns
w
ith
in
di
vi
du
al
ch
ar
ac
te
ris
tic
s
D
id
no
t
sp
ec
ify
S
ur
ve
y
U
ni
ve
rs
ity
st
ud
en
ts
(n
=
22
6)
K
w
an
an
d
S
ko
ric
[4
9]
T
o
ex
am
in
e
th
e
ph
en
om
en
on
of
cy
be
rb
ul
ly
in
g
on
F
ac
eb
oo
k
an
d
ho
w
it
is
re
la
te
d
to
sc
ho
ol
Table (continued). Overview of prior studies on cyberbullying.

Author(s) | Purpose | Theory | Method | Sample
… | …bullying among secondary school students | Did not specify | Survey | High school students (n = 1676)
Lee et al. [51] | To investigate the relationships between friendship networks and the experiences as victims, perpetrators, and bystanders of cyberbullying among young adolescents | Did not specify | Survey | Adolescents (n = 921)
Lowry et al. [56] | To study how the information technology artifact influences and why people are socialized to engage in cyberbullying | Social learning theory of crime | Survey | Adults (n = 1003)
Lowry et al. [55] | To explore system characteristics that prevent cyberbullying | Control balance theory | Factorial survey | Adults (n = 507)
Marcum et al. [62] | To explore the differences in male and female cyberbullying, as well as the victim-offender relationship experienced by each sex | Did not specify | Survey | University students (n = 1139)
Meter and Bauman [67] | To study the relationships between social network engagement and cyberbullying involvement over time | The social–ecological model | Survey | Students (n = 1272)
Pabian et al. [70] | To empirically investigate the relationships between the dark triad personality traits and cyber-aggression on Facebook | Did not specify | Survey | Adolescents (n = 324)
Peluchette et al. [73] | To examine the impacts of risky social network site practices and individual differences in self-disclosure and personality on cyberbullying victimization of Facebook users | Did not specify | Survey | Young adults (n = 572)
Obermaier et al. [69] | To examine the bystander effect in cyberbullying | Bystander effect | Experiment | University students (n = 85; n = 440)
608 CHAN ET AL.
Table (continued). Overview of prior studies on cyberbullying.

Author(s) | Purpose | Theory | Method | Sample
Rachoene and Oyedemi [77] | To examine online bullying among South African youth on Facebook | Did not specify | Digital ethnography | Facebook pages (n = 6)
Räsänen et al. [78] | To examine the determinants of online hate victimization on Facebook | Did not specify | Survey | Finnish Facebook users (n = 723)
Schacter et al. [83] | To understand the conditions under which bystanders will show increased support for victims of cyberbullying | Attribution theory | Experiment | Adults (n = 118)
Sengupta and Chaudhuri [87] | To identify the key factors associated with cyberbullying and online harassment of teenagers in the United States | Did not specify | Panel data from PEW | Teens (n = 935)
Wegge et al. [102] | To examine how young people's connections on SNSs are related to their risk of being involved in cyber-harassment and cyberbullying | Did not specify | Survey | High school students (n = 1458)
Whittaker and Kowalski [103] | To examine the prevalence rates of cyberbullying among college-age students | Did not specify | Survey; data mining | University students (n = 244; n = 197); Facebook posts (n = 2961)
CYBERBULLYING ON SOCIAL NETWORKING SITES 609
Estimating Network Effects in Two-Sided Markets
Oliver Hinza, Thomas Ottera, and Bernd Skierab
aFaculty of Business and Economics, Goethe University Frankfurt, Frankfurt am Main, Germany; bFaculty of Business and Economics, Goethe University Frankfurt (& Professorial Fellow at Deakin University, Australia), Frankfurt am Main, Germany
ABSTRACT
The proliferation of the Internet has enabled platform intermediaries to
create two-sided markets in many industries. Time-series data on the
number of customers on both sides of the markets allow platform
intermediaries to estimate the direction and magnitude of network
effects, which can then support growth predictions and subsequent
information technology (IT) or marketing investment decisions. This
article investigates the conditions under which this estimation of same-
side and cross-side network effects should distinguish between its
impact on the number of new customers (i.e., acquisition) and existing
customers (i.e., their activity). The authors propose an influx-outflow
model for doing so and conduct a simulation study to benchmark the
new model against the traditional model. Further, they compare the
models in an illustrative empirical study in which they study the growth
of an Internet auction platform. The results show that this separation of
effects is beneficial because the existing customers on both sides of the
market can influence the acquisition and dropout of other customers
asymmetrically. The paper thus makes an important contribution that
should impact the way researchers and business practitioners
measure network effects in two-sided markets.
KEYWORDS: Two-sided markets; electronic commerce; online intermediaries; customer churn; customer acquisition; platform economy
In two-sided markets, an intermediary provides a platform for interactions between two
distinct customer populations [35, 38]. For example, the intermediaries Amazon, Taobao.com, and eBay use their platforms to enable transactions between sellers and buyers; and
the intermediary Monster.com brings together employers and employees. These two-sided
markets are not an entirely new phenomenon: In medieval times, for example, city
councils provided marketplaces as platforms for farmers to offer their products to buyers.
Yet, the rise of what Shapiro and Varian [40] label the “network economy” has resulted in
a plethora of two-sided markets due to the widespread use of the Internet ([3, 5, 13]; for
comprehensive overviews on both online and offline two-sided markets, see Parker and
Van Alstyne [32]).
Such markets facilitate different kinds of network effects: Cross-side network effects
describe the situation whereby the presence of many sellers attracts more buyers to the
market (e.g., eBay) and vice versa [26, 42]. In contrast, same-side network effects capture
the interplay within one customer population. Same-side and cross-side effects can sometimes go in different directions: For example, more buyers make an auction platform less attractive for buyers because of the heightened competition, but more attractive for sellers because of the increase in demand.
CONTACT Oliver Hinz, ohinz@wiwi.uni-frankfurt.de, Faculty of Business and Economics, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 4, Frankfurt am Main 60323, Germany
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS, 2020, VOL. 37, NO. 1, 12–38
https://doi.org/10.1080/07421222.2019.1705509
© 2020 Taylor & Francis Group, LLC
Companies typically have access to data — in particular, time-series data — on the
development of the number of customers on the two market sides, which can help companies
estimate the direction and magnitude of network effects. Such knowledge can support growth
predictions, as well as the information technologies (IT) and marketing investment decisions
that follow. Yet, measuring network effects remains a troublesome task, and the literature to
date has examined, at best, 2 × 2 = 4 kinds of network effects, that is, a same-side and a cross-
side network effect for each of the two market sides.
However, network effects arise from a variety of mechanisms. For example, on the one
hand, a larger number of customers can lead to a wider range of offerings or more word-of-mouth within and across both market sides, which can increase the attractiveness of the
market. On the other hand, the same situation can also lead to a decrease in attractiveness
because of stronger competition among customers on one market side. Furthermore, such
effects can differ for new and existing customers. For example, word-of-mouth generated
by existing customers (hereafter called the installed base) might affect the acquisition of
new customers more strongly than the activity of existing customers. As another example,
disclosing a large number of buyers on an auction platform might attract new buyers
because such a large number serves as an indicator of the attractiveness of the market, but
existing buyers might churn because of the expected increase in competition that results from a higher number of buyers.
The research to date (as we will show in Table 1) has mainly investigated the sum of
these two effects by assessing the net change in the number of customers on one side of
the market. Thus, instead of examining changes in the number of newly acquired
customers and the number of churning customers separately, they simply examine the net of both, that is, the change in the total number of customers.
More technically speaking, the market grows on both sides because of an influx (which
constitutes the number of new customers) and shrinks because of an outflow (which
constitutes the dropout, or churn, of existing customers) [19]. However, investments in IT
can have asymmetric effects on influx and outflow; thus, jointly estimating them may
inaccurately summarize both effects because the growth in the number of new and existing
customers may differ across time. Yet, it is important to have knowledge of the separate
effects because organizations usually assign different units to acquire and retain customers
on the two market sides [4].
In this paper, we develop a new model, the influx-outflow model, which allows for
asymmetric network effects1; that is, dropout and acquisition present different effects on
each market side. This model is unique because it is the first to conceptually and empirically
estimate eight network effects (two kinds of same-side network effects, two kinds of cross-
side network effects, and two kinds of effects on influx and outflow). We show under which
circumstances this model should be preferred over the standard model (hereafter labeled the
“net change model”), which does not distinguish between network effects on the acquisition
and dropout of customers. We use a simulation study and an empirical study to compare the
influx-outflow model with the net change model, finding that the former performs signifi-
cantly better, on average, with respect to estimating the true parameters.
Table 1. Empirical studies estimating various kinds of network effects.

Author(s) | Main research topic(s) | Industry/data set(s) | Economic dependent variable(s) | Considers CNE | Considers SNE | Considers influx vs. outflow
Brynjolfsson and Kemerer [6] | Installed base on price | Spreadsheet software | Prices | Yes | No | No
Gandal, Kende, and Rob [17] | Hardware, prices, and software on diffusion | CD players and titles | Change in variety and sales | Yes | No | No
Shankar and Bayus [39] | Network strength in competition | Video game consoles | Network strength | Yes | No | No
Asvanund, Clay, Krishnan, and Smith [2] | Incremental value of additional users | Peer-to-peer networks | Network value | No | Yes | (Yes) using proxy
Nair, Chintagunta, and Dubé [31] | Indirect network effects in competition | PDAs and software | Hardware demand, software provision | Yes | No | No
Rysman [36] | Importance of cross-side network effects | Yellow Pages | Consumer and advertiser demand | Yes | No | No
Clements and Ohashi [11] | Indirect NEs, hardware diffusion | Video game systems | Hardware and software adoption | Yes | No | No
Ackerberg and Gowrisankaran [1] | NEs for banks and customers | ACH banking | Number of transactions | Yes | No | No
Mantrala, Naik, Sridhar, and Thorson [30] | Marketing investments on profits | Newspapers | Subscriptions, ad revenue, sales | Yes | No | No
Rysman [37] | Card usage and acceptance | Payment card transactions | Choice of favorite network | Yes | No | No
Wilbur [45] | Ads on audience size and vice versa | TV ads | Viewer and advertiser demand | Yes | No | No
Liu [28] | Pricing strategies | Video game consoles | Software and hardware demand | Yes | No | No
Tucker and Zhang [42] | Installed base on listing behavior | Classifieds platform | Number of listings | Yes | Yes | No
Sridhar, Mantrala, Naik, and Thorson [41] | Optimal marketing investments with cross-side network effects | Local newspaper | Demand from both sides | Yes | (Yes) | No
Chao and Derdenger [7] | Network effects on optimal price structure | Portable game consoles | Associated prices | Yes | No | No
Lee [27] | Effect of vertical integration | Video game industry | Demand from both sides | Yes | No | No
Voigt and Hinz [43] | Network effects on revenue; revenue-optimal user split | Online dating platform | Revenue | Yes | Yes | No
Chu and Manchanda [10] | Quantification of CNE and SNE | C2C platform | Growth of installed bases | Yes | Yes | No
This paper | Separation of influx and outflow with respect to network effects | B2C platform | Growth of installed bases | Yes | Yes | Yes

Notes: CNE, cross-side network effects; SNE, same-side network effects; PDA, personal digital assistant; ACH, automated clearing house. Influx, number of customers that flow to the market, i.e., are new to the market; Outflow, number of customers that drop out of the market, i.e., churn from the market.
Network effects exist if an additional user in a market (alternatively called a “platform”)
affects the value that existing customers derive from that market. If that value increases,
then network effects are positive, and vice versa. Products such as phones and e-mail
constitute one-sided markets that exhibit a positive network effect because the value of
those products for a user increases with the number of interactions that occur with other
users of the platform.
Two-sided markets are interorganizational information systems that provide two user
populations (e.g., buyers and sellers) with rules and processes to identify potential users
with whom to interact, select a specific trading partner, and execute transactions [9].
Through these interactions, user populations create value [38]. Unlike one-sided mar-
kets, two-sided markets offer interactions between two distinct user populations [14].
Usually, a user interacts only with users of the other market side (e.g., transactions
between a buyer and a seller), although they can also influence their own market side
either positively (e.g., by providing advice) or negatively (e.g., by increasing competition)
[8, 46]. Thus, researchers usually examine four network effects in two-sided markets:
one same-side network effect for each market side, as well as two cross-side network
effects.
As Table 1 depicts, researchers have intensively studied network effects in recent years.
Mainly finding that the estimated cross-side network effects are positive, scholars have then
derived their impact on demand [17, 31, 36] and prices [6, 7]. Chu and Manchanda [10]
contend that previous works have often concentrated on the benefits (or costs) that users
realize from the addition of users from either the same or the other market side, but not
simultaneously from both sides. As a consequence, many studies estimate cross-side net-
work effects but do not consider same-side network effects [6] or they instead rely on
a proxy such as lagged sales for a potential same-side network effect [41]. However, more
recent research underscores the strong influence of same-side network effects in the market.
The few studies that have investigated both types of network effects (see Table 1) focus on
their implications for sales [42] or revenue [2].
Our study builds upon Chu and Manchanda [10] by exploring the impact of same- and
cross-side network effects on a platform’s growth, while adding several important aspects.
First, Chu and Manchanda [10] examined network effects in a consumer-to-consumer
(C2C) market, whereas we investigate a business-to-consumer (B2C) market. Second, and
more importantly, we distinguish between network effects that affect the acquisition of
new users and those that affect the activity of existing users. Results from other settings
highlight the importance of this distinction: Iyengar et al. [23] analyzed peer effects among
physicians (which is similar to a one-sided market) and found that such effects have
a different impact on trial (comparable to our understanding of acquisition) and repeat
purchases (comparable to the activity of existing users). Although they mainly studied
social contagion processes, the distinction they made between the drivers might also be
applicable to markets with network effects. Finally, our study examines the circumstances
under which this distinction between different network effects presents benefits for plat-
form operators.
Table 1 underscores that most studies in the area of two-sided markets do not
distinguish between network effects on the acquisition of new customers and the dropout
of existing customers. Interestingly, studies such as Ackerberg and Gowrisankaran [1]
assumed that users will endlessly utilize a technology or platform once they have adopted
it. As such, their participation increases the network effects without any temporal limitation, which is a strong assumption according to Wattal et al. [44]. Researchers typically
make this assumption when their data is solely at the market level, such as with sales data
of video game consoles [27]. Clements and Ohashi [11] are among the few who consider
the possibility, at least in a robustness test, that the installed base depreciates at an annual
rate of 5 percent.
If analysts have access to individual customers’ transactional data — which is increas-
ingly the case — then they can consider the dropout of individual customers. Chu and
Manchanda [10] did so for one of the two market sides. They used an activity-adjusted
proxy for the number of sellers, but continued to use the cumulative number of registra-
tions as a proxy for the number of buyers, even though buyers who registered years ago
might have already churned or become inactive. Nevertheless, the authors’ use of an
activity-adjusted proxy for one market side is a substantial step forward for accommodat-
ing dropouts.
Table 1 shows that previous research in the area of two-sided markets provides
important insights, but does not distinguish between the eight kinds of network effects
outlined herein. The insights of Iyengar et al. [23] indicate that a subtler distinction might
be required in a setting in which different features affect the processes that determine
growth. For example, some features of a two-sided platform can create initial trust that
motivates new customers to join the platform. These features are highly important for the
acquisition of new customers, but less so for existing customers. Other features, mean-
while, may only affect existing customers who already use the platform. If a model to
predict platform growth does not separate between these effects, then the estimation
results can be biased and lead to less effective management decisions. In the next section,
we discuss the importance of separating the network effects in two-sided markets and then
analyze the conditions under which a separation should be preferred over a joint
consideration.
Theoretical Considerations for Separating Network Effects in Two-Sided
Markets and Results of a Simulation Study
Analysis of Importance of Separating Network Effects in Two-Sided Markets
Empirically inferring network effects requires econometric and causal identification, as
well as supporting statistical information. When conclusions depend upon the statistical
significance of measured effects — as they should — the amount of statistical information
in the system under study is important. Typically, platform intermediaries possess only
a limited number of observations (e.g., weekly observations of the number of customers
on both sides of the market; changes in the number of customers on a weekly basis). As
a result, the length of the observation period is limited.
Thus, in empirical, non-experimental studies of network effects, the data are, by
definition, outside the analyst’s control. However, in the context of two-sided markets,
the analyst has a choice between separately or jointly modeling the influx of new buyers
(sellers) and the outflow of existing buyers (sellers). Jointly modeling them requires just
two equations (one for the net change of buyers and one for the net change of the sellers)
that characterize the market dynamics and equilibrium market size. Modeling them
separately — our suggested approach — also yields the number of buyers and sellers,
but it requires four separate equations: two equations for the influx of new buyers and the
outflow of existing buyers, and two for the influx and outflow of sellers.
In the following, we discuss the advantages and disadvantages of modeling jointly
versus separately. We show that jointly considering the decisions may result in a loss of
statistical information under rather general conditions because summarizing the number
of new and lost buyers into a net change in the number of customers may decrease the
signal to noise ratio in the data. For example, if a measured cause exerts its influence such
that a larger number of new buyers tend to coincide with a larger number of lost buyers
and vice versa, then the systematic variance in the net change will be low. At the same
time, the error variance may increase by forming net changes. Thus, joint consideration
generally obfuscates the independent influence of causes on the number of new and lost
buyers. Consider the situation where the installed base of buyers attracts even more new
buyers (e.g., because of positive word-of-mouth created by existing buyers’ positive
experience with the platform), but buyers on the platform actually compete for the
same product, as is the case in an auction. In this example, an increase in the number
of acquired buyers and lost buyers may balance out to suggest that network effects are not
important in this market. Formally, we investigate the following proposition:
Proposition 1: The influx-outflow model better estimates network effects if the influx and
outflow of one market side correlate positively (i.e., they even out).
Example: Influx = 2 new customers, Outflow = 2 lost customers, thus no change in the
number of customers. In this example, the influx-outflow model would be superior.
In contrast, if network effects positively (negatively) influence influx, but negatively
(positively) influence outflow, then their joined effect will be better measured by analyzing
net changes. An example would be same-side network effects that may simultaneously
decrease the influx and increase the outflow of buyers, such as a gaming platform that
links game publishers and gamers. Existing gamers’ positive word-of-mouth attracts new
gamers, and the resulting increase in the number of gamers makes the platform more
valuable because gamers have more gamers to play with; this increase in value reduces the
outflow of gamers. Stated differently, a joint consideration leads to a stronger signal for the network effect of interest, leading to proposition 2:
Proposition 2: The net change model better estimates network effects if influx and outflow of one
market side correlate negatively.
Example: Some variable causes expected Influx = 2 of new customers and expected Outflow =
2 of lost customers to change to Influx = 4 and Outflow = 0, i.e., a change by 2 customers
each, and in opposite directions. The resulting change in the number of customers of course
increases from 0 to +4 customers. In this example, the net change model would be superior.
Note that the influx-outflow model and the net-change model are identical in the limiting
case of a deterministic growth-process. In this case, we can exactly solve for the parameters of
either model and compute the exact net-change model parameters from the influx-outflow
model parameters by differencing deterministic influx-outflow equations, assuming known
functional forms. However, the influx-outflow parameters generally cannot be recovered from
the net-change parameters. The practically important difference between the two formula-
tions arises when the growth-process is not deterministic. In this case, depending on the
relationship between unobservables affecting growth and the link between observed variables
and growth, either formulation may be more statistically efficient, that is, result in more
reliable inference given the data. In the following, we prove propositions 1 and 2 for the case of
linearly additive error structures. While a general proof is beyond the scope of this paper, we
first note that the empirical literature measuring network effects heavily relies on linear
models. Second, we generalize beyond linear additivity in our simulation study because we
also use multiplicative error terms.
Proof
For non-positive correlations between unobservables affecting influx (ε_In) and outflow (ε_Out) on either market side, the variance of the error process in the difference between influx and outflow is larger than the variance of the individual error processes by elementary covariance algebra: var(Δε) = var(ε_In - ε_Out) = var(ε_In) + var(ε_Out) - 2·cov(ε_In, ε_Out). At the same time, the covariance between an explanatory variable x (affecting influx linearly with coefficient δ_In and outflow with coefficient δ_Out) and the difference between influx and outflow, that is, (δ_In - δ_Out)·var(x), will decrease relative to the covariance with influx, δ_In·var(x) (outflow: δ_Out·var(x)), whenever δ_In·δ_Out > 0, that is, whenever x increases (or decreases) both influx and outflow. Now, because the statistical information about parameters is a function of the signal (explained variance) to noise (unexplained variance) ratio, the net change model will be less statistically efficient, that is, it will yield less reliable estimates of the influence of x, proving proposition 1.
However, for the covariates Buyers_{t-1} and Sellers_{t-1}, we often have δ_In·δ_Out < 0. For example, more buyers in the past may increase the inflow of (new) sellers, while decreasing the outflow of (existing) sellers. Thus, before considering unexplained variance from unobservables, the net change model results in a stronger signal about the influence of the installed base of buyers, which proves proposition 2. Thus, determining whether the influx-outflow approach or the net change model is more efficient depends on the data-generating values in the error process and the mean structure.
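The variance argument behind propositions 1 and 2 can be checked numerically. The following Python sketch (an illustration, not the authors' code; coefficient values and the sample size are arbitrary assumptions) regresses simulated influx, outflow, and their net change on a common covariate x and compares the t-statistics of the slope. With same-sign effects on influx and outflow (the proposition 1 case) the net change washes the signal out, while with opposite-sign effects (the proposition 2 case) it amplifies it.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5_000  # number of observations in one long synthetic market

def t_stat(y, x):
    """t-statistic of the OLS slope in y = b*x + e (no intercept)."""
    b = (x @ y) / (x @ x)
    resid = y - b * x
    s2 = (resid @ resid) / (len(y) - 1)
    return b / np.sqrt(s2 / (x @ x))

results = {}
# d_in, d_out: effect of a covariate x on influx and outflow.
# Same sign  -> influx and outflow even out in the net change (prop. 1);
# opposite   -> the net change amplifies the signal (prop. 2).
for d_in, d_out, label in [(0.5, 0.3, "prop1"), (0.5, -0.3, "prop2")]:
    x = rng.normal(size=T)
    influx = d_in * x + rng.normal(size=T)
    outflow = d_out * x + rng.normal(size=T)
    results[label] = (abs(t_stat(influx, x)), abs(t_stat(influx - outflow, x)))

for label, (t_sep, t_net) in results.items():
    print(f"{label}: |t| separate influx eq. = {t_sep:.1f}, "
          f"|t| net change eq. = {t_net:.1f}")
```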
Finally, classical statistical inference for parameters in four equations rather than two
increases the risk of false positives and false negatives, assuming everything else equal. For
example, repeated application of a particular criterion for statistical significance may result in
“significance by chance.” A more parsimonious description of the system (using only two
equations for modeling net changes) is more easily handled in a classical framework. Thus:
Proposition 3: The net change model is superior because it carries a lower risk of falsely detecting non-existent network effects, which is relevant if same-side network effects are not present on at least one market side.
In the following sections, we further investigate our propositions in a simulation study
with a multiplicative error structure and an empirical study.
Simulating Two-Sided Markets
Setup of Simulation Study
To test our theoretical considerations, we implemented a large-scale simulation in C#
and R. To this end, we created 84,672 markets by systematically varying the strength of the
different network effects and the error level, as shown in Table 2.
We assume that a decision-maker or data scientist uses weekly data from the past year
(from T – 52 to T) to calibrate both the net change and the influx-outflow models, with the aim of forecasting the development of the installed base (i.e., the number of customers
on both market sides) over the next 52 weeks (from T to T + 52). We then compared the
models’ performance. The results help us better understand when differences between the
modeling approaches occur and under which circumstances one approach outperforms
the other.
The number of sellers in each of the 104 weeks (two years) is given by:
Sellers_t = Sellers_{t-1} + InfluxSellers_t - OutflowSellers_t   (1)
with
InfluxSellers_t = (δ1 · Buyers_{t-1}) · (1 + E1) + (δ2 · Sellers_{t-1}) · (1 + E2)   (2)
OutflowSellers_t = (δ3 · Buyers_{t-1}) · (1 + E3) + (δ4 · Sellers_{t-1}) · (1 + E4),   (3)
where δ1 is the cross-side network effect from (existing) buyers on the number of acquired
(new) sellers; δ2 is the same-side network effect from (existing) sellers on the number of
acquired (new) sellers; δ3 is the cross-side network effect from (existing) buyers on the
outflow of sellers, and δ4 is the same-side network effect from sellers on the outflow of
sellers. E1–4 constitute errors given by random numbers that are ~N(0, x) distributed. We
systematically varied x to determine the influence of different sizes of the error on the
prediction accuracy.
Table 2. Experimental design of simulation study.

Experimental factors | Number of factor levels | Values for each factor level
Number of sellers in t = 0 | 1 | 50
Number of buyers in t = 0 | 1 | 500
Parameter of impact of buyers on seller influx δ1 | 2 | .001/.0015
Parameter of impact of sellers on seller influx δ2 | 6 | -.03/-.02/-.01/0/.01/.02
Parameter of impact of buyers on seller outflow δ3 | 2 | .001/.0015
Parameter of impact of sellers on seller outflow δ4 | 6 | .03/.02/.01/0/-.01/-.02
Parameter of impact of sellers on buyer influx δ5 | 2 | .01/.015
Parameter of impact of buyers on buyer influx δ6 | 7 | -.003/-.002/-.001/0/.001/.002/.003
Parameter of impact of sellers on buyer outflow δ7 | 2 | .01/.015
Parameter of impact of buyers on buyer outflow δ8 | 7 | -.003/-.002/-.001/0/.001/.002/.003
Random error (E1–E4, each drawn separately) | 3 | Low/Medium/High
Number of replications | 1 | 1
Number of simulated markets | | 1·1·2·6·2·6·2·7·2·7·3·1 = 84,672
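As a quick arithmetic check, the reported number of simulated markets is simply the product of the factor levels in Table 2:

```python
import math

# Factor levels from Table 2, in row order; their product gives the
# number of simulated markets reported by the authors.
factor_levels = [1, 1, 2, 6, 2, 6, 2, 7, 2, 7, 3, 1]
n_markets = math.prod(factor_levels)
print(n_markets)  # 84672
```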
The number of buyers is given by:
Buyers_t = Buyers_{t-1} + InfluxBuyers_t - OutflowBuyers_t   (4)
with
InfluxBuyers_t = (δ5 · Sellers_{t-1}) · (1 + E5) + (δ6 · Buyers_{t-1}) · (1 + E6)   (5)
OutflowBuyers_t = (δ7 · Sellers_{t-1}) · (1 + E7) + (δ8 · Buyers_{t-1}) · (1 + E8),   (6)
where δ5 is the cross-side network effect from (existing) sellers on the number of acquired
(new) buyers; δ6 is the same-side network effect from (existing) buyers on the number of
acquired (new) buyers; δ7 is the cross-side network effect from (existing) sellers on the outflow
of buyers, and δ8 is the same-side network effect from (existing) buyers on the outflow of
buyers. Again, E5–8 constitute errors given by random numbers that are ~N(0, x) distributed.
The net change, that is, the change in the respective number of sellers or buyers, is thus:
ΔSellers_t = Sellers_t - Sellers_{t-1} = InfluxSellers_t - OutflowSellers_t   (7)
ΔBuyers_t = Buyers_t - Buyers_{t-1} = InfluxBuyers_t - OutflowBuyers_t   (8)
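The data-generating process of equations (1) through (8) can be sketched in a few lines of Python (the authors' implementation was in C# and R; this is an illustrative re-implementation, not their code). The δ values below are taken from individual factor levels in Table 2; the standard deviation behind the "low" error level is not reported in this section, so 0.1 is assumed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# One cell of the Table 2 design (all delta values are actual factor levels);
# the sd behind the "low" error level is an assumption.
d1, d2, d3, d4 = 0.001, 0.02, 0.001, 0.01   # seller influx/outflow effects
d5, d6, d7, d8 = 0.01, 0.001, 0.01, -0.001  # buyer influx/outflow effects
noise_sd = 0.1

sellers, buyers = [50.0], [500.0]            # installed bases in t = 0
for t in range(104):                         # two years of weekly data
    S, B = sellers[-1], buyers[-1]
    E = rng.normal(0.0, noise_sd, size=8)
    influx_s  = d1 * B * (1 + E[0]) + d2 * S * (1 + E[1])   # eq. (2)
    outflow_s = d3 * B * (1 + E[2]) + d4 * S * (1 + E[3])   # eq. (3)
    influx_b  = d5 * S * (1 + E[4]) + d6 * B * (1 + E[5])   # eq. (5)
    outflow_b = d7 * S * (1 + E[6]) + d8 * B * (1 + E[7])   # eq. (6)
    sellers.append(S + influx_s - outflow_s)                # eq. (1)
    buyers.append(B + influx_b - outflow_b)                 # eq. (4)

print(f"week 104: {sellers[-1]:.1f} sellers, {buyers[-1]:.1f} buyers")
```

With these particular factor levels the net drift is positive on both sides, so the installed bases grow over the two simulated years.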
Both models use the observations of the first 52 weeks to estimate their parameters and
estimate all equations jointly; a seemingly unrelated regression (SUR) is used to account for
potential contemporaneous cross-equation error correlation. The influx-outflow model esti-
mates four equations with OutflowSellers, InfluxSellers, OutflowBuyers, and InfluxBuyers as
dependent variables.
The net change model likewise estimates the parameters of the following two equations:
ΔSellers_t = β1 ⋅ Buyers_{t–1} + β2 ⋅ Sellers_{t–1} + ε_t   (9)
ΔBuyers_t = β3 ⋅ Sellers_{t–1} + β4 ⋅ Buyers_{t–1} + ε_t   (10)
We then used the estimated parameters to predict the number of buyers and sellers in
each of the following 52 weeks and determine the mean absolute percentage error (MAPE)
in the last week (i.e., in week T + 52).
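The estimate-then-forecast procedure can be sketched as follows. This simplification fits each equation separately by ordinary least squares rather than jointly by SUR, so it illustrates the mechanics rather than replicating the paper's estimator.

```python
import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error.
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((a - p) / a)))

def fit_net_change(sellers, buyers):
    """OLS sketch of the net change model, Equations (9)-(10)."""
    X = np.column_stack([buyers[:-1], sellers[:-1]])
    beta_s, *_ = np.linalg.lstsq(X, np.diff(sellers), rcond=None)          # Eq. (9)
    beta_b, *_ = np.linalg.lstsq(X[:, ::-1], np.diff(buyers), rcond=None)  # Eq. (10)
    return beta_s, beta_b

def forecast_net_change(s0, b0, beta_s, beta_b, horizon):
    """Iterate the estimated net-change equations forward `horizon` weeks."""
    s, b = float(s0), float(b0)
    for _ in range(horizon):
        s, b = (s + beta_s[0] * b + beta_s[1] * s,
                b + beta_b[0] * s + beta_b[1] * b)
    return s, b
```

On data generated exactly by a linear net-change process, the fit recovers the parameters and the 52-week-ahead MAPE is essentially zero.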
Results
The results outlined in Table 3 demonstrate that the influx-outflow model leads to better
predictions, on average, than the net change model. The average values of the MAPE are
38.6 percent better for the buyer side (= 16.12 percent / 11.63 percent – 1) and 95.6 percent better for the seller side (= 48.02 percent / 24.55 percent – 1). We also compared the predictions in each of
the 84,672 markets. They were equally good in 39,087 markets (46.16 percent) for the number of
buyers, better in 24,570 markets (29.02 percent) and worse in 21,015 markets (24.82 percent).
The respective results for the number of sellers were equally good in 39,248 markets (46.35 per-
cent), better in 23,177 (27.37 percent) and worse in 22,247 markets (26.27 percent). Thus, the
20 HINZ ET AL.
influx-outflow model performs on average significantly better. The Wilcoxon signed-rank test
also supports this conclusion (p < .01 for both market sides).
Test of Hypotheses
We also tested our hypotheses by examining the determinants of a binary outcome
variable that is 1 if the MAPE of the net change model is as good as or better than that of the influx-outflow model and 0 otherwise. In contrast to a measure such as MAPE, outliers do not
influence this binary measure. We then estimated a logistic regression with robust
standard errors to examine the effect of asymmetric same-side network effects, nonexistent
same-side network effects, and the error levels on the prediction accuracy for the two
market sides. Table 4 presents the results.
Proposition 1 posits that if the outflow and influx of one market side are positively
correlated (i.e., δ2 and δ4 have the same signs so that δ2 ⋅ δ4 ≥ 0), then the influx-outflow
model is preferable. The negative parameters of the asymmetric same-side net effect on buyer/
seller side strongly support this proposition (p < .05 for both market sides). Moreover, the
positive parameter of the constant supports Proposition 2 (p < .01), which posits that the net
change model is superior if outflow and influx of one market side correlate negatively.
Furthermore, if same-side network effects are nonexistent (i.e., if δ6 + δ8 = 0 or δ2 + δ4 = 0),
then the standard net change model is better because of the positive parameters of the variable
Table 3. Comparison of predictions of net change model and influx-outflow model.
Model | Number of Observations | Avg. MAPE of Number of Buyers (percent) | Avg. MAPE of Number of Sellers (percent) | Better for Number of Buyers (percent) | Better for Number of Sellers (percent)
Net change model | 84,672 | 16.12 | 48.02 | 24.82 | 26.27
Influx-outflow model | 84,672 | 11.63 | 24.55 | 25.95 | 27.37
Notes: MAPE, mean absolute percentage error. The difference between 100 percent and the two cells reflecting the share of markets in which either the net change or the influx-outflow model is better gives the share of predictions that are equally good. For buyers, it is 49.23 percent = 100 percent – 24.82 percent – 25.95 percent; for sellers, it is 46.36 percent = 100 percent – 26.27 percent – 27.37 percent.
Table 4. Results of logistic regression that explains when the net change model predicts at least as well as the influx-outflow model.
Variable | (1) Buyer Side | (2) Seller Side
Asymmetric same-side net effect on buyer side (0/1) (= 1 if δ6 and δ8 have same signs, i.e., δ6 ⋅ δ8 ≥ 0; = 0 otherwise) | –0.190*** (0.016) | –0.037** (0.016)
Asymmetric same-side net effect on seller side (0/1) (= 1 if δ2 and δ4 have same signs, i.e., δ2 ⋅ δ4 ≥ 0; = 0 otherwise) | –0.145*** (0.015) | –0.359*** (0.016)
No same-side net effect on buyer side (0/1) (= 1 if δ6 + δ8 = 0; = 0 otherwise) | 0.425*** (0.062) | 0.122** (0.057)
No same-side net effect on seller side (0/1) (= 1 if δ2 + δ4 = 0; = 0 otherwise) | 1.292*** (0.071) | 0.374*** (0.055)
Error level | –0.012*** (0.000) | –0.006*** (0.000)
Constant | 1.966*** (0.029) | 1.643*** (0.029)
Wald Chi2 | 1,996.23 | 950.01
Notes: Robust standard errors in parentheses. *p < .1, **p < .05, ***p < .01. N = 84,672. The binary dependent variable is 1 if the MAPE of the net change model ≤ the MAPE of the influx-outflow model.
“no same-side net effect” on both buyer and seller sides (p < .01 for both market sides). This result supports Proposition 3. We also observe that the influx-outflow model is preferable if the error level increases (p < .01 for both market sides).
Robustness Check
In non-contractual settings, such as the one that we will cover in our empirical study, we have
to proxy the number of customers on both market sides by applying heuristics that make use
of the activity of each customer. In that setting, the proxies for the installed bases can suffer
from measurement errors, which can introduce either additional noise or even a systematic bias. Such a bias could lead to an under- or overestimation of the installed bases.
To assess the impact of these different types of measurement error, we extended our simulation and analyzed the following three scenarios:
(1) Measurement error E on number of buyers and sellers with E~N(0, 0.01) each
(=adding measurement noise)
(2) Measurement error E on number of buyers and sellers with E~N(0, 0.01) and
adding 5 percent of the installed base on each market side (=measurement noise
plus systematic overestimation)
(3) Measurement error E on number of buyers and sellers with E~N(0, 0.01) and
subtracting 5 percent of the installed bases on each market side (=measurement
noise plus systematic underestimation)
Table 5 shows that the existence of this type of measurement error does not favor either of the two competing models, and thus it should not affect our main conclusions. Please note that
an underestimation always leads to higher MAPEs because MAPE can take on values larger
than 1.
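The three perturbation scenarios can be sketched as follows. Whether N(0, 0.01) in the text denotes a variance or a standard deviation is ambiguous, so the `sigma` value here is illustrative, as is the example series.

```python
import numpy as np

def apply_measurement_error(installed_base, bias_share=0.0, sigma=0.01, seed=0):
    """Perturb an installed-base series as in the three scenarios above:
    multiplicative noise E ~ N(0, sigma) plus a systematic shift of
    +/- bias_share of the installed base."""
    rng = np.random.default_rng(seed)
    base = np.asarray(installed_base, dtype=float)
    noisy = base * (1.0 + rng.normal(0.0, sigma, base.shape))
    return noisy + bias_share * base

series = np.array([100.0, 110.0, 125.0])        # hypothetical installed base
scenario1 = apply_measurement_error(series)                    # noise only
scenario2 = apply_measurement_error(series, bias_share=0.05)   # overestimation
scenario3 = apply_measurement_error(series, bias_share=-0.05)  # underestimation
```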
Description of the Two-Sided Market
We used data from an intermediary that operates a two-sided market to illustrate the
difference between the two models. The intermediary — which we refer to here as
“Platform.com” because we cannot, for confidentiality reasons, disclose the actual name —
provides an e-commerce platform for buyers and sellers. On Platform.com, professional
sellers offer their products (e.g., consumer electronics, household appliances, jewelry, watches,
cosmetics, etc.) to buyers. All products offered by sellers are new and in original packaging,
Table 5. Comparison of average mean absolute percentage error (MAPE) in different scenarios for measurement error.
Scenario | Avg. MAPE, Number of Buyers: Net Change Model (percent) | Avg. MAPE, Number of Buyers: Influx-Outflow Model (percent) | Avg. MAPE, Number of Sellers: Net Change Model (percent) | Avg. MAPE, Number of Sellers: Influx-Outflow Model (percent)
Scenario 1 | 14.81 | 10.94 | 42.59 | 23.90
Scenario 2 | 14.01 | 10.29 | 39.66 | 22.59
Scenario 3 | 16.19 | 12.07 | 46.29 | 25.75
and the prices already include value-added tax and shipping costs. The professional sellers
must use a nickname profile on Platform.com rather than disclose their identity, so that buyers have no indication of where to find a seller's own online shop. This helps reduce cannibalization of the other sales channels that sellers use.
Platform.com charges sellers a fee of 3 percent of the transaction price; there are no listing
fees for sellers. Buyers can use the platform for free. Platform.com applies a continuous
double-auction pricing mechanism so that prices reflect the relation between demand and
supply. The product, however, is only sold if the highest bid surpasses the seller’s threshold.
This continuous double-auction pricing mechanism resembles that of stock exchanges and
makes Platform.com unique in the industry. Although Platform.com has had media coverage,
it does not invest in costly IT feature extensions or marketing activities such as promotions
or advertising. Instead, it relies on organic growth through network effects fostered by
improving its functionalities — particularly those listed in Table 6.
Description of Data
Our illustrative empirical study uses the data on all 102,096 transactions completed
between buyers and sellers on Platform.com over a time period of more than four
Table 6. Investments of Platform.com.
Investment | Targeted Market Side(s) | Release Date | Description
Introduction video | Buyers | t = 79 | The introduction video provides an easy first access for buyers by explaining the buying process on Platform.com, from searching for a product to completing the order.
New tools | Sellers | t = 89 | New tools for sellers include statistical functionalities to analyze the current market situation at Platform.com. For instance, sellers can compare products offered on the platform or automate trading activities.
Platform.com button | Buyers, Sellers | t = 98 | The Platform.com button is a logo of the intermediary, which professional sellers can integrate into their own online shops. If a potential buyer clicks on this button, a link will forward the buyer to the products this seller offers on Platform.com.
Automated processing | Sellers | t = 118 | A new API enables the automated processing of transactions. The API operates through XML messages exchanged between sellers and Platform.com via HTTP, based on the Representational State Transfer architecture. Thus, sellers using many different types of e-commerce shop systems can use the API, regardless of the operating systems or programming languages utilized.
Product news | Buyers, Sellers | t = 130 | Product news keeps buyers up-to-date regarding new products offered by sellers. The intermediary provides information and technical details for recently launched products that can be purchased on Platform.com.
"Trusted Shop" seal | Buyers | t = 165 | Platform.com is certified with the "Trusted Shop" seal. A company that is specialized in certifying e-commerce shops provides confirmation to buyers that the buying process via Platform.com is secure and reliable. The certification comprises more than 100 criteria, including data security, customer service, and price transparency. The certifying company also provides a money-back guarantee (e.g., in case of nondelivered products or credit card fraud).
Evaluation system | Buyers, Sellers | t = 183 | The evaluation system enables buyers to post their experiences with products and thereby facilitate the purchase decision for other buyers. On the web pages for each specific product, buyers can write comments and use a rating system.
Payment methods | Buyers, Sellers | t = 186 | The new payment methods include direct debit, instant bank transfer, payment via an online payment platform, and credit cards such as Visa, MasterCard, and American Express. Buyers can choose from these new payment methods after price negotiations have concluded successfully. Prior to the introduction of the new payment methods, buyers could only pay in advance via an account managed by Platform.com.
years. We used weekly data (covering 211 weeks) as the unit of analysis. Our proposed
model requires determining the number of (existing) buyers and sellers, the number of
new buyers and sellers (i.e., influx), and the number of lost buyers and sellers (i.e.,
outflow).
Influx, Outflow, and Number of Customers in Noncontractual Settings
Calculating the number of customers is straightforward if the platform intermediary
has contractual relationships with its customers, such as if buyers and sellers pay
a monthly or quarterly fee to use a platform. If, for example, women and men pay
a monthly fee for using a heterosexual dating platform, then we could easily determine
the number of customers by simply counting the number of contracts with women
and men.
The number of customers is less evident in non-contractual settings, where no recur-
ring fee governs the relationship between a platform intermediary and its customers. As
such, the outflow (i.e., the churn, sometimes also called "death") is not observed because customers are not required to inform the intermediary that they no longer want to use the platform [21]. This unobserved outflow occurs at Platform.com: Sellers pay per transaction, and buyers complete transactions without being charged by the platform. If, for
example, a buyer has not made any transactions for a long period of time, then the
intermediary has no knowledge of whether the buyer is still using the platform or has
become permanently inactive. Even if a buyer has not made a transaction for a long period of time, the buyer can still have a nonzero probability of making another transaction [34].
Still, analysts could use models of “customer base analysis,” which essentially model
that each customer is active (“alive”) for an (uncertain) number of periods and then
becomes permanently inactive (“dead”). Fader and Hardie [15] nicely describe
a company’s customer base as a “leaky bucket” whose contents are continually “dripping
away” and outline that customer base analysis models do an excellent job of capturing
customer “leakage” by estimating the probability of being active. The sum of the respective
probabilities across customers can then be used to determine the number of existing
customers (here also called active customers) in each period.
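This summation can be sketched with the closed-form P(alive) expression of the BG/NBD model (following Fader, Hardie, and Lee). The parameter values (r, alpha, a, b) and the customer histories below are purely illustrative, not estimates from the paper's data.

```python
def bgnbd_p_alive(x, t_x, T, r, alpha, a, b):
    """P(customer still active) under the BG/NBD model for a customer with
    x repeat transactions, the last at time t_x, observed for T periods."""
    if x == 0:
        return 1.0  # no repeat transaction yet, so no dropout opportunity
    ratio = (a / (b + x - 1.0)) * ((alpha + T) / (alpha + t_x)) ** (r + x)
    return 1.0 / (1.0 + ratio)

# The installed base in a week is the sum of per-customer probabilities:
customers = [(5, 40.0, 52.0), (1, 3.0, 52.0), (0, 0.0, 52.0)]  # (x, t_x, T), hypothetical
installed_base = sum(bgnbd_p_alive(x, tx, T, r=0.25, alpha=4.0, a=0.8, b=2.4)
                     for x, tx, T in customers)
```

A customer whose last transaction is more recent (larger t_x) gets a higher probability of still being active, which matches the intuition in the text.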
In settings where one can observe repeat-buying behavior but cannot observe customer
dropout, the BG/NBD approach offers excellent data-fitting capabilities [16] while being
easy to calculate [33]. Although the BG/NBD models were developed to determine the
number of buyers and explain their repeated purchases, these models can also serve to
determine the number of sellers and their repeated sales in a two-sided market.2 To do so,
we view the sales as repeated instances of transactions that follow certain characteristics
inherent in a given seller. A seller who has frequently made transactions in the past but
has not done so in some time has a higher probability of being permanently inactive than
another seller who has made at least a few transactions.3 Still, we must assess the
appropriateness of the BG/NBD model for this new application area, which we do in
the following section.
Results of the BG/NBD Model
We used the p(Alive)-function in the R-package “Buy-’Til-You-Die” (BTYD) [12], which
uses BG/NBD model parameters and a customer’s past transaction behavior to calculate
the probability that this customer will be alive at a given point. We summed the individual
probabilities to be alive in each week using the transaction data and the BG/NBD model to
determine the weekly number of customers for both market sides. The total number of
buyers rose from 5 in t = 1 to 60,444 buyers at the end of the observation period, and the
number of sellers increased from 3 at t = 1 to about 203 sellers at the end of the
observation period (Figure 1).
Thus, one seller serves an average of about 297 buyers in t = 211. Figure 1 also reveals
that the platform first focused on the development of the seller side, which paid off in later
phases. Figure 1 shows an acceleration of growth, especially on the seller side in the pre-
Christmas season (around t = 85, t = 137 and t = 190), when buyers and sellers have
a higher probability of making transactions. As a result, we controlled for this seasonal
effect.
We can derive the number of new buyers and sellers from the database because Platform.com assigns a specific ID number to each buyer and seller at the time of their first transaction.
Consequently, we can calculate the number of lost buyers (OutflowBuyerst) and sellers
(OutflowSellerst) by comparing the total number of buyers and sellers and the number of
new buyers InfluxBuyerst and sellers InfluxSellerst in different time periods, expressed with
the following equations:
OutflowBuyers_t = Buyers_{t–1} – Buyers_t + InfluxBuyers_t   (11)
OutflowSellers_t = Sellers_{t–1} – Sellers_t + InfluxSellers_t   (12)
We used Equations (11) and (12) to calculate the descriptive statistics for the number of
(existing) buyers and sellers, for the influx of (new) buyers and sellers, and for the outflow
of (lost) buyers and sellers. Table 7 shows the detailed descriptive statistics for a selected
number of weeks (i.e., for t = 50, 100, 150, and 200).
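Equations (11) and (12) translate directly into code; here is a small sketch with made-up weekly totals and influxes.

```python
import numpy as np

def outflow_series(total, influx):
    """Outflow in week t via Equations (11)-(12):
    Outflow_t = Total_{t-1} - Total_t + Influx_t."""
    total = np.asarray(total, dtype=float)
    influx = np.asarray(influx, dtype=float)
    return total[:-1] - total[1:] + influx[1:]

# Hypothetical example: totals 100 -> 105 -> 103 with influxes 8 and 1.
out = outflow_series([100.0, 105.0, 103.0], [0.0, 8.0, 1.0])
```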
Our data show that, on average, Platform.com gained 309 new buyers per week and about 20 buyers became permanently inactive each week. Therefore, the intermediary has been growing at a net rate of 291 buyers per week. On the seller side, Platform.com gained an average of 1.6 sellers per week while .7 sellers became permanently inactive. These numbers translate into a growth rate of about .9 sellers every week. Thus, the platform grew on both market sides.
[Figure 1. Development of the observed number of buyers and sellers over time. Left axis: number of buyers (0 to 70,000); right axis: number of sellers (0 to 250); x-axis: week.]
Evaluation of Goodness of Fit
Using simulations and an empirical application, Fader et al. [16] showed that the BG/NBD
model delivers good results. Likewise, we tested its goodness of fit for our data set by comparing its predictions with the number of buyers and sellers that actually buy and sell. We determined the number of buyers and sellers for every period t by checking whether each focal customer conducted at least one transaction in the periods from t + 1 through t = 211. We
compared this observed number of buyers (sellers) with the predicted numbers of the BG/
NBD model. Note that this forward-looking approach represents a heuristic because we
cannot observe an infinite time horizon; we can only observe 211 – t periods.
This heuristic yields more valid outcomes if t is small (i.e., for early weeks) because future
purchases are then observed in more weeks. Stated differently, the validity of our heuristic
to evaluate the goodness of fit decreases for estimates near the end of the observation
period due to the right truncation of our data.
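The forward-looking activity heuristic can be sketched as follows; the transaction histories are made up for illustration.

```python
def active_in_week(transaction_weeks, t, last_week=211):
    """Heuristic from the text: a customer counts as active in week t
    if at least one transaction falls in weeks t+1 .. last_week."""
    return any(t < w <= last_week for w in transaction_weeks)

def observed_active_count(all_customers, t, last_week=211):
    # all_customers: iterable of per-customer transaction-week collections.
    return sum(active_in_week(weeks, t, last_week) for weeks in all_customers)

customers = [{2, 10, 30}, {5}, {40, 200}]  # hypothetical transaction histories
```

As the text notes, the heuristic degrades near week 211 because fewer future weeks remain in which a transaction could be observed.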
The correlation between the numbers predicted by the BG/NBD model and our
heuristic is .9916 (p < .01) for sellers and .8035 (p < .01) for buyers. The observed numbers
of buyers and sellers drop dramatically when we move toward the end of our data set; this
drop is more severe for the buyer side because our buyers have a lower purchase frequency
than our sellers. If we restrict our correlation analyses to the first three years (weeks 1–156,
such that about one year of data is left for observing purchases), then the correlation
between the BG/NBD model and the heuristic is .9957 (p < .01) for sellers and .9484 for
buyers (p < .01). Thus, these results indicate that the BG/NBD model is a valid proxy for
the latent unobservable number of customers, which has the major advantage of being
able to handle right-truncated data.
Identification Strategy
In most industries, inferring network effects in two-sided markets is difficult because
researchers only have access to time-series data such as price and sales. The problem persists for technologically intensive goods because prices and costs generally decrease over
time due to technological advances, whereas quantity increases over time. These correla-
tions make it difficult to identify network effects because we cannot determine whether the
increasing quantities are due to positive network effects or simply due to lower prices [18].
Table 7. Description of the number of buyers and sellers in different weeks.
Period t (Week) | Buyers: Total (BG/NBD) | Buyers: Influx (ID Number) | Buyers: Outflow (Calculation) | Sellers: Total (BG/NBD) | Sellers: Influx (ID Number) | Sellers: Outflow (Calculation)
50 | 780.85 | +9.00 | –2.34 | 23.03 | +/–0 | –.51
100 | 3,049.47 | +80.00 | –.78 | 58.97 | +1.00 | –.64
150 | 16,518.58 | +425.00 | –40.03 | 129.29 | +2.00 | –2.07
200 | 53,052.47 | +666.00 | –85.51 | 203.08 | +2.00 | –4.54
Mean | 12,920.56 | +309.00 | –19.48 | 85.10 | +1.64 | –.70
It is even more difficult to identify the network effects, and thereby estimate the inter-
twined growth process, when time-varying factors can influence the size of both market
sides [29].
By disentangling dropout and acquisition on both market sides, and observing these
changes as weekly variables, we eliminate the problem of identification from
a mathematical point of view. We can use a simple time-series model with lagged variables of the number of customers (here, sellers N^S_{t–1} and buyers N^B_{t–1}). One problem that arises, however, is that unobservable variables can influence one or both market sides, which
however, is that unobservable variables can influence one or both market sides, which
could lead to an omitted variable bias that restricts causal identification. Thus, we dedicate
the following section to dealing with this possible bias and the resulting endogeneity.
Omitted Variable Bias and Endogeneity
The problems of endogeneity and omitted variable bias are quite common in these types of
time-series models [18]. For example, suppose the estimated model for new buyers ignores the
omitted variable γ⋅Xt such that the true model is Equation (13):
InfluxBuyers_t = α_{B,N} + β_{1,B,N} ⋅ Sellers_{t–1} + β_{2,B,N} ⋅ Buyers_{t–1} + γ ⋅ X_t + ε_{t,B,N}   (13)
If Equation (13) is the true model and the estimation omits γ ⋅ X_t, then the error term will be:
ε_{t,B,N} = γ ⋅ X_t + ω_{t,B,N}   (14)
Such an error term violates the assumption of a regression model that Xt and εt are
uncorrelated, which can lead to an omitted variable bias if Xt is correlated with included
regressors. In the case of buyer and seller growth in two-sided Internet platforms, there
could be time-varying factors that can be neither observed nor measured (e.g., trust
toward the online shopping industry).
Let us, for example, assume that the omitted variable Xt describes the number of users
on the Internet at time t, InfluxBuyerst describes the number of new buyers, and both
numbers grow over time due to the growing popularity of the Internet. This simultaneous
growth would lead to γ > 0. If we do not control for Xt, then this omission will lead to
biased parameter estimates. The parameters of other variables such as Sellerst–1 would
absorb this influence, which could result, for instance, in an overestimation of the cross-
side network effect’s positive influence.
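This bias mechanism can be illustrated numerically. In the simulated data set below (not the paper's data), both the installed seller base and an omitted trend X_t grow over time; omitting X_t then inflates the estimated cross-side effect of sellers on the buyer influx.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T, dtype=float)

# Illustrative data: true cross-side effect is 2.0, true gamma is 0.5.
sellers = 10.0 + t + rng.normal(0.0, 2.0, T)   # installed base, grows with t
X = t                                          # omitted trend, e.g., Internet users
influx_buyers = 2.0 * sellers + 0.5 * X + rng.normal(0.0, 1.0, T)

def ols(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

ones = np.ones(T)
beta_full = ols(np.column_stack([ones, sellers, X]), influx_buyers)
beta_omitted = ols(np.column_stack([ones, sellers]), influx_buyers)
# beta_omitted[1] absorbs gamma * X_t and overstates the cross-side effect.
```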
To address this problem of endogeneity, we added various control variables on the
market, industry and company levels that might also influence the acquisition of buyers
and sellers and their decisions to leave the platform. On the market level, we controlled for
the gross domestic product (GDP), which might influence both sides of the market.
Further, to control for strong seasonal effects around Christmas [25], we included
a variable that marks the Christmas trade season in the different years.
On the industry level, we included the weekly advertising expenditure of the leading
B2C auction platform eBay, which may directly affect the behavior of buyers on Platform.
com. We expect that high advertising expenditures of eBay may decrease influx and
increase outflow during that particular week on Platform.com. Because only professional
sellers utilize Platform.com, we do not expect eBay’s advertising behavior to exert such
a direct and immediate impact on the seller side. With respect to the company level, we
controlled for some media coverage that Platform.com received during the observation
period using binary variables for that particular week (note that the platform itself did not
engage in advertising or promotion activities during the observation period).
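These market- and company-level controls can be sketched as weekly dummy variables. The window width around the pre-Christmas peaks and the media-coverage weeks chosen below are illustrative assumptions, not the paper's exact coding.

```python
import numpy as np

n_weeks = 211
weeks = np.arange(1, n_weeks + 1)

# Christmas-season dummy: a short window around each pre-Christmas peak
# noted in the text (around t = 85, 137, and 190).
christmas = np.zeros(n_weeks, dtype=int)
for peak in (85, 137, 190):
    christmas[(weeks >= peak - 3) & (weeks <= peak + 1)] = 1

# Media-coverage dummy for particular weeks (hypothetical weeks here).
media_coverage = np.zeros(n_weeks, dtype=int)
media_coverage[[54, 120]] = 1

controls = np.column_stack([christmas, media_coverage])
```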
Although these control variables cover a wide range of potential influences, spurious
correlations can still occur and thus lead to an omitted variable bias. At this stage, data
analysts typically try to find instrumental variables that are correlated with the variable of
interest but are uncorrelated with the error term. However, suitable instrumental variables
are notoriously hard to find [22]. Furthermore, because we model being alive as
a probability that becomes continuously smaller with subsequent inactivity, we cannot
use any exogenous shock as an instrument. We therefore address the problem of poten-
tially omitted variables by equipping our model with additional proxy variables that
capture omitted influences and absorb them from the error term.
Conceptually, we suppose the following relationships. If there are omitted variables that
influence the number of buyers on Platform.com (e.g., number of online shoppers,
Internet connection speed, advances in technology, trust in online shopping), then these
variables must also influence the number of buyers of other noncompeting B2C online
shops. We therefore use the weekly number of orders from an unrelated (one-sided) B2C
online shop as a proxy variable to capture these omitted influences. If, for example, the
number of online shoppers (buyers) increases over time, then this effect would equally
influence the growth on Platform.com and the number of orders at the noncompeting
B2C online shop. The proxy variables can also capture negative influences like media
coverage about, say, general security concerns in e-commerce.
There might also be omitted variables on the seller side, such as variations in the
number of startup online shops due to regulatory or fiscal changes. We used the number
of sellers on a price comparison site as a proxy for omitted variables that may also
influence the seller side on Platform.com. These additional control and proxy variables
lead to the model illustrated in Figure 2.
Table 8 summarizes the descriptive statistics for the dataset.
Results of Influx-Outflow Model
After including our control and proxy variables, we used the influx-outflow model to
estimate the effect of the installed base, that is, the number of (existing) buyers and sellers
on the influx and outflow of (new and lost) buyers and sellers. Specifically, we estimated
all four equations (i.e., the effect on the influx of buyers and sellers as well as the outflow)
simultaneously, using seemingly unrelated regressions (SUR) with maximum-likelihood
estimation while correcting for both heteroskedasticity (using robust standard errors) and
autocorrelation.
Table 9 summarizes the results based on N = 210 observations (because we analyzed the week-to-week growth between the 211 time periods). The chi-squared statistics for all four equations
allow for rejecting the null hypothesis that the parameters are jointly zero (p < .001).
We observed a positive cross-side network effect of +6.374 (p < .01) from the number of sellers on the number of new buyers, which means that more sellers make Platform.com
28 HINZ ET AL.
more attractive for new buyers. More precisely, an additional seller in t – 1 led to the
weekly acquisition of six additional buyers. Furthermore, the results revealed a negative
same-side network effect of –.021 (p < .05) from the number of buyers on new buyers, in
accordance with theory.
For the second dependent variable, the outflow of buyers, we observed that an increase
in the number of buyers increased the outflow of buyers (+.002, p < .05). This network
effect can stem from a high level of competition among buyers.
On the seller side, the number of sellers decreased the number of acquired sellers (–0.062,
p < .05). Furthermore, the number of buyers had no significant effect on the acquisition of new
sellers (p > .1). This result indicates that, in this early phase of a startup, sellers are more
persuaded by other factors when deciding to try out this new platform, and thus management
is justified in first focusing on the acquisition of sellers. Table 9 also shows that a higher
number of sellers increased the outflow of sellers (+.169, p < .01) and more buyers decreased
the outflow of sellers (–0.002, p < .01).
Overall, these results demonstrate face validity. We found significant parameters for six
of eight network effects, and the missing two may conceivably play no role in the focal
Table 8. Descriptive statistics.
Variable | Mean | Std. Dev. | Min. | Max.
Number of sellers | 85.10 | 66.35 | 3 | 204.45
Number of buyers | 12,920.56 | 17,527.63 | 13 | 60,443.93
All IT investments (0/1) | n/a | n/a | 0 | 1
Media coverage (0/1) | .028436 | .1666102 | 0 | 1
eBay advertising | 637,245.8 | 498,280.2 | 0 | 2,548,116
Gross domestic product (GDP) | 107.4462 | 2.887706 | 102.37 | 111.88
Sellers' side, number of sellers of other platform | 602.15 | 477.34 | 0 | 1,388.42
Buyers' side, number of orders on other platform | 322.65 | 83.66 | 147 | 605
[Figure 2. Overview of the control and proxy variables in the model. Controls at the market level (GDP, seasonality), the industry level (competitor platform's marketing activities), and the company level (focal platform's media coverage) affect the buyer and seller sides. Proxy variable 1, buyer-side growth on a noncompeting platform, correlates with omitted influences on the buyer side; proxy variable 2, seller-side growth on a noncompeting platform, correlates with omitted influences on the seller side.]
market. The results indicate an interrelated growth process of customer populations in
two-sided markets through cross-side and same-side network effects.
Effect of Investments into Platform Functionality
The data set also allowed us to evaluate the effect of different investments in the platform's functionality, which has been found to be an important driver for platform
growth [24] and which could unearth valuable insights for other companies that aim to
grow a two-sided market in the B2C domain. The influx-outflow model makes it possible
to understand the subtle influences of the new functionalities on the acquisition and
dropout of customers on both market sides. This understanding is of particular impor-
tance when different organizational units are not only responsible for customer retention
and customer acquisition, but also serve as profit centers that must account for their
investments.
Table 9. Results of estimation of influx-outflow model.
Variable | Influx of Buyers in t: Coeff. (SE) | Outflow of Buyers in t: Coeff. (SE) | Influx of Sellers in t: Coeff. (SE) | Outflow of Sellers in t: Coeff. (SE)
Number of sellers in t – 1 | 6.374** (2.325) | .102 (.274) | –.062* (.022) | .169** (.031)
Number of buyers in t – 1 | –.021* (.009) | .002* (.001) | –.0000 (.0001) | –.0002** (.000)
Introduction video | –133.869 (69.805) | .450 (5.188) | 1.826* (.715) | –1.502* (.643)
New tools | 47.620 (54.659) | –4.646 (6.720) | .592 (.711) | –.799 (.697)
Platform.com button | –144.998** (54.568) | –.926 (6.395) | .163 (.677) | –2.496** (.617)
Automated processing | –51.380 (33.092) | –3.037 (7.921) | .156 (.694) | –1.136 (.764)
Product news | 375.284** (102.260) | –3.865 (11.320) | 1.427* (.706) | .047 (.094)
"Trusted Shop" seal | 406.761** (85.181) | –1.103 (9.569) | 2.876* (1.178) | –3.375** (1.019)
Evaluation system | 63.113 (103.101) | –17.731 (11.882) | 3.941** (.763) | .772 (1.050)
Payment methods | 161.475 (126.833) | –17.414 (14.493) | .443 (.965) | –1.552 (1.269)
Media coverage | 31.786 (40.664) | –3.101 (5.145) | .208 (.754) | .331 (.426)
eBay advertising | –.0000 (.0000) | –.0000 (.0000) | n/a | n/a
GDP | –1.625 (10.030) | –.715 (1.371) | .319* (.141) | –.504** (.129)
Sellers' side, number of sellers of other platform in t | n/a | n/a | .004 (.002) | –.006** (.002)
Buyers' side, number of orders on other platform in t | .505 (.272) | .008 (.003) | n/a | n/a
Intercept | –145.419 (985.517) | 72.854 (134.383) | –32.585* (14.355) | 49.700** (13.033)
Time controls | Yes | Yes | Yes | Yes
R2 | 90.40 percent | 80.22 percent | 47.30 percent | 40.97 percent
Adjusted R2 | 88.89 percent | 77.06 percent | 39.17 percent | 31.87 percent
Notes: SE, robust standard errors. N = 211. *p < .05, **p < .01, two-tailed significance.
30 HINZ ET AL.
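Each column of Table 9 corresponds to an ordinary least squares regression of an influx or outflow series on the lagged installed bases, the functionality dummies, and controls, reported with robust standard errors. The mechanics can be sketched as follows; the data, coefficients, and variable names below are synthetic stand-ins for illustration only, not the paper's series:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 211  # number of observation periods, as in the paper

# Synthetic stand-ins for the real series (illustrative assumptions)
sellers_lag = rng.normal(400, 20, T)        # installed base of sellers in t-1
buyers_lag = rng.normal(30000, 1500, T)     # installed base of buyers in t-1
seal = (np.arange(T) > 120).astype(float)   # dummy: feature live from period 121

# Influx of buyers generated with known coefficients plus noise
influx_buyers = (6.4 * sellers_lag - 0.02 * buyers_lag + 400 * seal
                 + rng.normal(0, 300, T))

# OLS estimation
X = np.column_stack([np.ones(T), sellers_lag, buyers_lag, seal])
beta, *_ = np.linalg.lstsq(X, influx_buyers, rcond=None)

# White (HC0) heteroskedasticity-robust standard errors
resid = influx_buyers - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = X.T @ np.diag(resid ** 2) @ X
robust_se = np.sqrt(np.diag(bread @ meat @ bread))

for name, b, se in zip(["intercept", "sellers t-1", "buyers t-1", "seal"],
                       beta, robust_se):
    print(f"{name:12s} {b:10.3f} ({se:.3f})")
```

With the real data, each of the eight columns in Tables 9 and 10 would repeat this estimation with its own dependent variable and the full regressor set.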
The product news functionality improvement involves presenting information and technical details for recently launched products that can be purchased on Platform.com. This feature evidently appeals to prospective customers on both market sides, as it increases the acquisition of both buyers (+375.284, p < .01) and sellers (+1.427, p < .05).
The “Trusted Shop” seal functionality improvement significantly boosts the acquisition of buyers (+406.761, p < .01) and sellers (+2.876, p < .05). In this case, a certification company confirms the security of making transactions on Platform.com, reducing information asymmetries regarding the security and reliability of the intermediary for both market sides. Moreover, the seal also decreases the likelihood of losing sellers (−3.375, p < .01). In short, it constitutes one of the most effective functionality investments.
By incorporating user feedback, the evaluation system attracts new sellers (+3.941, p < .01) by helping prospective sellers observe activity on the other market side before they put their products on Platform.com. This ability to observe activity reduces information asymmetry for sellers by making the number of buyers easier to assess.
In sum, it appears that investments in trust (whether in the products on sale, in the platform itself, or in the counterpart on the other market side) made the largest contribution to
market growth. Our results thus suggest that companies should invest in this area if they
want to achieve market growth.
The introduction of the Platform.com button turns out to be a double-edged sword: Professional sellers could integrate this optional button into their own online shops, but doing so revealed the seller’s identity to prospective buyers more easily. Buyers could then start to haggle with the seller and buy from the seller’s online shop directly, thereby bypassing Platform.com and its 3 percent selling fee. This feature made it more difficult for Platform.com to acquire new buyers (−144.998, p < .01), but reduced the outflow of sellers (−2.496, p < .01).
Our analysis of the various investments in the platform shows that they affected the
seller and buyer sides differently, as reflected in different impacts on the activity and
acquisition inherent to the two market sides. The standard net change model, in contrast, cannot reveal exactly how the new features affect the market population (i.e., whether they affect the influx of new customers or the outflow of existing customers).
While we controlled for the effect of media coverage and eBay advertising, we did not
find that these factors had any influence on the dependent variables. Meanwhile, our
control variable for the macroeconomic development revealed that Platform.com can
more easily acquire new sellers (+.319, p < .05) and decrease the outflow of sellers
(–.504, p < .01) when the economy is growing (i.e., when GDP increases). These results
seem plausible.
The proxy variables also point to some interesting findings. On the seller side, we observe that some latent effect must be occurring that simultaneously affects the seller side of Platform.com and the sellers on the price comparison site. If the number of sellers on
the price comparison site increases, then the weakly significant parameters suggest that it
is also easier for Platform.com to acquire new sellers (+.004, p < .1). Likewise, the seller
outflow at Platform.com also decreases with an increasing number of sellers on the price
comparison site (–.006, p < .01). These effects seem plausible and could be interpreted as
the general growth of e-commerce with a lower fluctuation.
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 31
We further observe an effect of the proxy variable on the buyers’ side: If the number of
orders at the noncompeting B2C shop goes up, then the number of new buyers on
Platform.com will increase (+.505, p < .1). The growth of the market can explain this
relationship.
The significant effects of the proxy variables reflect the presence of latent effects that
influence e-commerce in general. The significance of both proxy variables reveals that they
work as intended and capture some otherwise omitted influences. Although we acknowl-
edge that we cannot fully rule out other uncontrolled effects that might influence and
consequently bias our results to some extent, we also note that the core intent of this paper
is to provide an illustrative application of our arguments rather than to provide an exact
measurement.
Results of Net Change Model
In summary, Platform.com faces a market with asymmetric network effects on customer
influx and outflow. We conceptually argued that a summation of influx and outflow can
lead to problems in measuring network effects. To substantiate this claim, we also
estimated the net change model: Table 10 lists its estimates. As theoretically expected,
the installed base of buyers (–.023, p < .05) negatively affects the net change of buyers and
the installed base of sellers has a positive influence on the net change of buyers (6.299, p <
.01). The analysis does not reveal a significant negative effect of the installed base of buyers
on the net change of sellers; depending on the chosen cutoff level for significance, decision-makers could therefore wrongly conclude that the installed base of buyers
exerts no effect on the seller side. However, the results from the influx-outflow model in
Table 9 show that this conclusion is not true: A higher number of buyers actually leads to
a lower outflow of sellers.
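This masking is easy to reproduce with a few lines of simulation (purely synthetic numbers, not Platform.com data): when the installed base raises influx and outflow by the same amount, regressing each flow separately recovers both effects, while regressing the net change recovers neither.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 211
base = rng.normal(1000, 100, T)  # synthetic installed base in t-1

# The base raises influx AND outflow of the same market side equally
influx = 0.05 * base + rng.normal(0, 2, T)
outflow = 0.05 * base + rng.normal(0, 2, T)
net = influx - outflow

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(slope(influx, base))   # close to +0.05: effect visible
print(slope(outflow, base))  # close to +0.05: effect visible
print(slope(net, base))      # close to 0: the effects cancel in the net change model
```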
The low R2s of the net change model also indicate that the joint consideration
engenders a loss of important information, which becomes even more evident if we
calculate the growth from t = 12 to t = 211 using the estimates of the net change and
influx-outflow models.4 Figure 3 shows that, even over a period of nearly four years, the fit
of the growth estimates remained good when we used the influx-outflow model. The
MAPE on the buyer side was only 2.1 percent at the end of the observation period, and
although it is higher on the seller side, it is still tolerable (16.0 percent).
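The growth comparison behind Figures 3 and 4 iterates the accounting identity base_t = base_{t−1} + influx_t − outflow_t using each model's predicted flows, and then scores the predicted installed base against the observed one with the mean absolute percentage error (MAPE). A sketch with synthetic series; the constant-rate "model" here is a placeholder, not either of the paper's models:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
start_base = 1000

# Synthetic observed flows and the resulting observed installed base
observed_influx = rng.poisson(50, T)
observed_outflow = rng.poisson(45, T)
observed_base = start_base + np.cumsum(observed_influx - observed_outflow)

# Predicted flows from a placeholder model (constant average rates)
pred_influx = np.full(T, observed_influx.mean())
pred_outflow = np.full(T, observed_outflow.mean())
pred_base = start_base + np.cumsum(pred_influx - pred_outflow)

# Mean absolute percentage error of the predicted installed base
mape = np.mean(np.abs(pred_base - observed_base) / observed_base) * 100
print(f"MAPE: {mape:.1f} percent")
```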
The net change model’s fit of the growth is clearly inferior to that of the influx-outflow model,
as depicted by Figure 4. The growth estimates achieved a good fit for about 1.5 years, but
the lack of subtle information about network effects led to a substantial deviation between
the observed and predicted installed bases.
Our literature review shows that researchers have mainly focused on the cross-side net-
work effects of installed bases because they only had access to very aggregated data (e.g.,
sales data on market level; [6,17]). Consequently, these early works made two simplifications: First, they assumed that every sold unit adds one unit to the installed base and thus remains active in the market indefinitely. In reality, however, CD players or video game consoles break or get replaced with newer versions, and people sign up for
a platform and become inactive after some time. To account for this, some researchers
have relied on robustness checks in which the installed base is assumed to depreciate at different
annual rates (see [11] as an example). With better datasets that are usually available these
days, researchers should use more accurate measures for the installed bases.
Second, previous research only rarely accounted for same-side network effects, e.g., that
there is also competition on one or both market sides. In line with theory, our results
suggest that same-side network effects can indeed be negative and thus have an influence
on the growth of the market as well. This result supports the finding of Asvanund et al. [2] that a growing number of P2P network users can lead to network congestion, which constitutes a negative same-side network effect.
Furthermore, previous research does not consider the nuanced impact that network
effects can have on acquisition and churn/inactivity, that is, the influx and outflow. Only
Asvanund et al. [2] showed that a larger installed base can lead to an increased availability
Table 10. Results of estimation of net change model.

Variable | Net Change of Buyers | Net Change of Sellers
Number of sellers in t − 1 | 6.299** (2.273) | −.231** (.041)
Number of buyers in t − 1 | −.023* (.009) | .000 (.000)
Media coverage | −34.737 (39.383) | −.121 (.889)
Introduction video | −133.924 (72.056) | 3.321** (.910)
New tools | 51.313 (54.706) | 1.385 (.948)
Platform.com button | −144.301** (31.996) | 2.664** (.946)
Automated processing | −47.759 (31.996) | 1.305 (1.117)
Product news | 378.746** (101.934) | 1.392 (1.120)
“Trusted Shop” seal | 407.429** (84.300) | 6.239** (1.516)
Evaluation system | 79.603 (110.228) | 3.165** (1.122)
Payment methods | 178.352 (130.645) | 1.985 (1.450)
eBay advertising | .0000 (.0000) |
Gross domestic product (GDP) | −1.301 (10.137) | .822** (.176)
Sellers’ side, number of sellers on other platform in t | | .009** (.002)
Buyers’ side, number of orders on other platform in t | .515 (.271) |
Intercept | −182.393 (997.956) | −82.159** (17.871)
Time controls | Yes | Yes
R2 | 88.81 percent | 37.87 percent
Adjusted R2 | 87.05 percent | 28.35 percent

Notes: SE, robust standard errors, in parentheses. N = 211. *p < .05, **p < .01, two-tailed significance.
of songs in P2P communities, which can generate network growth as well as network
congestion (due to a higher number of users), which can cause existing users to churn.
In general, we expect that more detailed data sets will become available in the near
future and allow for more sophisticated analyses. In this article, we therefore propose
a model that not only distinguishes between cross-side and same-side network effects, but
also allows for network effects that can have an asymmetric impact on the acquisition of
new customers and the outflow of existing customers. We thereby contribute to the stream
of literature that empirically measures network effects and the growth of two-sided
markets. Our findings show that network effects can have an impact on the interrelated
growth process of the two customer populations. We find that the installed base of sellers
positively influences the acquisition of buyers (positive cross-side network effect), but
negatively influences the acquisition and activity of sellers (negative same-side network
effects). Meanwhile, the installed base of buyers decreases the outflow of sellers (positive cross-side network effect), but negatively influences the activity and acquisition of buyers, potentially due to greater competition (negative same-side network effect).

Figure 3. Comparison of predicted and observed number of sellers and buyers of influx-outflow model.
Figure 4. Comparison of predicted and observed number of sellers and buyers of net change model.
A large number of papers have measured positive cross-side network effects (see Table
1), but only Asvanund et al. [2], Tucker and Zhang [42], Voigt and Hinz [43], and Chu
and Manchanda [10] have provided empirical evidence for influential same-side network effects, which tend to be negative due to competition effects. This paper provides
more empirical evidence in this regard.
However, our results are more nuanced, as we examined not only the joint effects, but
the specific influences on acquisition and churn. By doing so, we found that the installed
base of buyers did not influence the acquisition of sellers, which could be expected when
taking subtle details on market mechanisms into account. In our empirical illustration,
prospective sellers who are considering joining the platform cannot, as “outsiders,” reliably assess the size of the other market side. Thus, there must be reasons other than cross-side network effects that spur sellers to join the platform early on.
A detailed analysis of the impact of IT investments reveals that investments that
increase trust (in the platform operator, in products, and in participants on the other
market side) can help to grow the platform. Such an analysis revives the idea proposed
by Nair et al. [31], who assessed the impact of investments in hardware and software for
growing networked markets.
Methodologically, we showed that separately modeling the influx of new customers and
the outflow of existing customers on each market side produces more reliable statistical
inferences, on average, than modeling the net changes in the numbers of buyers and
sellers. Thus, we advocate a model that employs more parameters to achieve greater
statistical efficiency.
Our results suggest that it is especially preferable to employ the influx-outflow model in
two-sided markets if one expects a positive (negative) same-side network effect on
acquisition, but a negative (positive) same-side network effect on the activity of that
market side. The theory we outlined suggests that such a setting is likely in markets
with competition among same-side customers (e.g., auction markets), in which case data
scientists should use the proposed influx-outflow model. Our empirical study also sup-
ports this recommendation.
In contrast, the net change model is preferable for markets in which the installed base of
the same side positively influences both acquisition and activity of the same side. This setting
is more common for two-sided markets of gamers and game publishers (i.e., more gamers
make it easier to acquire new gamers and may also increase the activity of all gamers).
The paper’s insights for two-sided markets can also be transferred to one-sided
markets, as there are special cases where the cross-side network effects are zero and the
analysis focuses just on one equation. Even for this special case, our analysis recommends
distinguishing between influx and outflow.
In sum, this paper, and in particular the influx-outflow model it presents, should change how researchers and business practitioners alike measure network effects in two-sided markets. Our analyses show that using detailed information, that is, the nuanced influx and outflow of customers, helps analysts arrive at informed measures of network effects in the platform economy.
Notes
1. In the context of this paper, “asymmetric network effects” mean that the installed base of
customers makes it easier to acquire new customers (inflow), but harder to keep existing
customers (outflow) and vice versa.
2. In principle, such models could also make use of other signals such as messages that buyers
share in forums or online social networks [20] or user-generated content in general.
Platform.com did not allow such interactions, in order to prevent bypassing of the platform. Purchases are, however, in all cases the strongest and most credible signal that can be used to
infer the installed bases for market participants. Therefore, the majority of models in the area
of “customer base analysis” use this information.
3. A platform operator could also use other signals of activity as an input for sellers’ churn
model, such as the creation of offers or activities in the back end of the system. Unfortunately,
we do not have access to such data, which prevents us from using this promising alternative
approach to model the number of sellers.
4. We assume constant network effects over time, which is a commonly made assumption.
Acknowledgments

We also thank Tim Kraemer for helping us to start this project and for his support throughout the earlier phases of this project.
Funding

This work has been co-funded by the DFG as part of the CRC 1053 MAKI and by the efl – the Data Science Institute at Goethe University Frankfurt.
References

1. Ackerberg, D.A.; and Gowrisankaran, G. Quantifying equilibrium network externalities in the ACH banking industry. RAND Journal of Economics, 37, 3 (2006), 738–761.
2. Asvanund, A.; Clay, K.; Krishnan, R.; and Smith, M.D. An empirical analysis of network
externalities in peer-to-peer music-sharing networks. Information Systems Research, 15, 2
(2004), 155–174.
3. Bakos, Y.; and Katsamakas, E. Design and ownership of two-sided networks: Implications for
Internet platforms. Journal of Management Information Systems, 25, 2 (2008), 171–202.
4. Blattberg, R.C.; and Deighton, J. Manage marketing by the customer equity test. Harvard Business Review, 74 (1996), 136–144.
5. Brunswicker, S.; Almirall, E.; and Majchrzak, A. Optimizing and satisficing: The Interplay
between platform architecture and producers’ design strategies for platform performance.
MIS Quarterly, 43, 4 (2019), 1249–1277.
6. Brynjolfsson, E.; and Kemerer, C.F. Network externalities in microcomputer software: An
econometric analysis of the spreadsheet market. Management Science, 42, 12 (1996),
1627–1647.
7. Chao, Y.; and Derdenger, T. Mixed bundling in two-sided markets in the presence of installed
base effects. Management Science, 59, 8 (2013), 1904–1926.
8. Chircu, A.M.; and Kauffman, R.J. Limits to value in electronic commerce-related IT
investments. Journal of Management Information Systems, 17, 2 (2000), 59–80.
9. Choudhury, V.; Hartzel, K.S.; and Konsynski, B.R. Uses and consequences of electronic
markets: An empirical investigation in the aircraft parts industry. MIS Quarterly, 22, 4
(1998), 471–507.
10. Chu, J.; and Manchanda, P. Quantifying cross and direct network effects in online
consumer-to-consumer platforms. Marketing Science, 35, 6 (2016), 870–893.
11. Clements, M.T.; and Ohashi, H. Indirect network effects and the product cycle: Video games
in the US, 1994–2002. Journal of Industrial Economics, 53, 4 (2005), 515–542.
12. Dziurzynski, L.; McCarthy, D.; and Wadsworth, E. BTYD-package: Implementing buy ’til you
die models. 2012. https://rdrr.io/cran/BTYD/man/BTYD-package.html.
13. Ellison, G.; and Fisher Ellison, S. Lessons about markets from the Internet. Journal of
Economic Perspectives, 19, 2 (2005), 139–158.
14. Evans, D.S.; and Schmalensee, R. Paying with Plastic: The Digital Revolution in Buying and
Borrowing. Cambridge: MIT Press, 2005.
15. Fader, P.S.; and Hardie, B.G.S. The Pareto/NBD is not a lost-for-good model. 2014. Accessed
on 17 January 2020: http://brucehardie.com/notes/031/.
16. Fader, P.S.; Hardie, B.G.S.; and Lee, K.L. “Counting your customers” the easy way: An
alternative to the Pareto/NBD Model. Marketing Science, 24, 2 (2005), 275–284.
17. Gandal, N.; Kende, M.; and Rob, R. The dynamics of technological adoption in hardware/
software systems: The case of compact disc players. RAND Journal of Economics, 31, 1 (2000),
43–61.
18. Gowrisankaran, G.; and Stavins, J. Network externalities and technology adoption: Lessons
from electronic payments. RAND Journal of Economics, 35, 2 (2004), 260–276.
19. Haenlein, M. Social interactions in customer churn decisions: The impact of relationship
directionality. International Journal of Research in Marketing, 30, 3 (2013), 236–248.
20. Heimbach, I.; and Hinz, O. The impact of sharing mechanism design on content sharing in
online social networks. Information Systems Research, 29, 3 (2018), 592–611.
21. Hinz, O.; Eckert, J.; and Skiera, B. Drivers of the long tail phenomenon: An empirical
analysis. Journal of Management Information Systems, 27, 4 (2011), 43–70.
22. Hinz, O.; Hill, S.; and Kim, J.-Y. TV’s dirty little secret: The negative effect of popular TV on
online auction sales. MIS Quarterly, 40, 3 (2016), 623–644.
23. Iyengar, R.; Van den Bulte, C.; and Lee, J.Y. Social contagion in new product trial and repeat.
Marketing Science, 34, 3 (2015), 408–429.
24. Jung, D.; Kim, B.C.; Park, M.; and Straub, D.W. Innovation and policy support for two-sided
market platforms: Can government policy makers and executives optimize both societal value
and profits? Information Systems Research, 30, 3 (2019), 1037–1050.
25. Kapoor, S.G.; Madhok, P.; and Wu, S.M. Modeling and forecasting sales data by time series
analysis. Journal of Marketing Research, 18, 1 (1981), 94–100.
26. Kauffman, R.J.; and Weber, T.A. Social influence and networked business interaction. Journal
of Management Information Systems, 36, 4 (2019), 1040–1042.
27. Lee, R.S. Vertical integration and exclusivity in platform and two-sided markets. American
Economic Review, 103, 7 (2013), 2960–3000.
28. Liu, H. Dynamics of pricing in the video game console market: Skimming or penetration?
Journal of Marketing Research, 47, 3 (2010), 428–443.
29. Manski, C.F. Identification Problems in the Social Sciences. Cambridge: Harvard University
Press, 1999.
30. Mantrala, M.K.; Naik, P.A.; Sridhar, S.; and Thorson, E. Uphill or downhill? Locating the firm
on a profit function. Journal of Marketing, 71, 2 (2007), 26–44.
31. Nair, H.; Chintagunta, P.; and Dubé, J.-P. Empirical analysis of indirect network effects in the
market for personal digital assistants. Quantitative Marketing and Economics, 2, 1 (2004),
23–58.
32. Parker, G.G.; and Van Alstyne, M.W. Two-sided network effects: A theory of information
product design. Management Science, 51, 10 (2005), 1494–1504.
33. Platzer, M.; and Reutterer, T. Ticking away the moments: Timing regularity helps to better
predict customer activity. Marketing Science, 35, 5 (2016), 779–799.
34. Reinartz, W.J.; and Kumar, V. The impact of customer relationship characteristics on profit-
able lifetime duration. Journal of Marketing, 67, 1 (2003), 77–99.
35. Rochet, J.-C.; and Tirole, J. Two-sided markets: A progress report. RAND Journal of
Economics, 37, 3 (2006), 645–667.
36. Rysman, M. Competition between networks: A study of the market for yellow pages. Review
of Economic Studies, 71, 2 (2004), 483–512.
37. Rysman, M. An empirical analysis of payment card usage. Journal of Industrial Economics, 55,
1 (2007), 1–36.
38. Rysman, M. The economics of two-sided markets. Journal of Economic Perspectives, 23, 3
(2009), 125–143.
39. Shankar, V.; and Bayus, B.L. Network effects and competition: An empirical analysis of the
home video game industry. Strategic Management Journal, 24, 4 (2003), 375–384.
40. Shapiro, C.; and Varian, H.R. Information Rules: A Strategic Guide to the Network Economy.
Boston: Harvard Business School Press, 1999.
41. Sridhar, S.; Mantrala, M.K.; Naik, P.A.; and Thorson, E. Dynamic marketing budgeting for
platform firms: Theory, evidence, and application. Journal of Marketing Research, 48, 6
(2011), 929–943.
42. Tucker, C.; and Zhang, J. Growing two-sided networks by advertising the user base: A field
experiment. Marketing Science, 29, 5 (2010), 805–814.
43. Voigt, S.; and Hinz, O. Network effects in two-sided markets: Why a 50/50 user split is not
necessarily revenue-optimal. Business Research, 8, 1 (2015), 139–170.
44. Wattal, S.; Racherla, P.; and Mandviwalla, M. Network externalities and technology use:
A quantitative analysis of intraorganizational blogs. Journal of Management Information
Systems, 27, 1 (2010), 145–174.
45. Wilbur, K.C. A two-sided, empirical model of television advertising and viewing markets.
Marketing Science, 27, 3 (2008), 356–378.
46. Yoo, B.; Choudhary, V.; and Mukhopadhyay, T. A model of neutral B2B intermediaries.
Journal of Management Information Systems, 19, 3 (2002), 43–68.
About the Authors
Oliver Hinz (ohinz@wiwi.uni-frankfurt.de; corresponding author) is Professor of Information
Systems and Information Management at Goethe University Frankfurt, Germany. He is interested
in research at the intersection of technology and markets. His work has been published in such
journals as Information Systems Research, Journal of Management Information Systems, MIS
Quarterly, Journal of Marketing, and Decision Support Systems, and in a number of proceedings of the leading IS conferences.
Thomas Otter (otter@marketing.uni-frankfurt.de) is Professor of Marketing at Goethe University.
His research focuses on Bayesian modeling with application to marketing. He has worked in the
areas of conjoint measurement, choice modeling, and assessing the effectiveness of marketing
actions when the actions are endogenous to the system. Dr. Otter’s papers have been published
in Journal of Marketing Research, Marketing Science, Quantitative Marketing and Economics, Journal
of Business & Economic Statistics, and other journals. He is co-editor of Quantitative Marketing and
Economics and a member of the editorial review boards of Marketing Science and other journals.
Bernd Skiera (skiera@skiera.de) holds the Chair of Electronic Commerce at Goethe University and is also a Professorial Fellow at Deakin University, Australia. His interests include e-commerce, marketing analytics, online marketing, customer management, and integration platforms as a service (iPaaS). Dr. Skiera has published in journals such as Management Science, Journal of Management Information Systems, Marketing Science, Journal of Marketing Research, and others. He was a recipient of an ERC Advanced Grant in 2019.
He was a recipient of an ERC Advanced Grant in 2019.
Impact of Cyberattacks by Malicious Hackers on the
Competition in Software Markets
Ravi Sen (a), Ajay Verma (b), and Gregory R. Heim (a)
(a) Texas A&M University, College Station, Texas, USA; (b) Guidance, Navigation & Control Engineer, Lockheed Martin Missiles and Fire Control, Grand Prairie, Texas, USA
ABSTRACT
The number of malicious hacking incidents in our increasingly IT-
enabled world has been increasing over the years. Conventional
wisdom focuses on negative impacts of these malicious hacker activ-
ities. We posit that malicious hacker activities also might lead to some
unintended consequences, specifically related to altering of software
market structure, and associated stakeholder consequences. In this
study, we model the competition between two software platforms in
the presence of malicious hackers who perform cyberattacks against
one or both software platforms. We compare a benchmark case
where malicious hackers are either absent, or if present do not target
the software platforms, against a first scenario where only one soft-
ware platform is targeted, and a second scenario where both soft-
ware platforms are targeted. Interestingly, we find the presence of
malicious hackers’ activities is not always detrimental to all software
industry stakeholders. In general, the results suggest that the pre-
sence of malicious hackers is more likely to result in a competitive
market, while their absence is more likely to result in a monopoly.
Furthermore, we show that under certain market conditions, the
unsecure software platform targeted by hackers potentially can
drive its more secure competitor out of the market.
KEYWORDS: Software competition; malicious hackers; software markets; cyberattacks; software platforms
This paper examines stakeholder implications of cyberattacks performed by malicious
software hackers on software platform vendors within a single software market sector. In
the good old days of the software industry, the term hacker was used as a compliment to
describe very clever programmers, without ascribing ethical or moral valence to the
actions of such individuals. More recently, the term has taken on negative connotations.
Dictionaries today define a hacker as “a person who uses computers to gain unauthorized
access to data” [13], and as “a person who secretly gets access to a computer system in order
to get information, cause damage, etc.” [32]. While there have been attempts to use other
terms to distinguish hackers having malicious intent (e.g., crackers, Black Hats, Grey
Hats,1 unethical hackers), the fact remains that the common understanding of the term
hacker is closer to the previously provided definitions. As such, in this paper, the term
hacker should be understood to mean malicious hackers. We do not focus on ethical/
white hat hackers, security consultants such as pen-testers, or vulnerability discoverers
CONTACT: Ravi Sen (rsen@mays.tamu.edu), Texas A&M University, 320S Wehner Building, 4217 TAMU, College Station, TX 77843
Supplemental data for this article can be accessed on the publisher’s website.
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
2020, VOL. 37, NO. 1, 191–216
https://doi.org/10.1080/07421222.2020.1705511
© 2020 Taylor & Francis Group, LLC
because the actions of these individuals are targeted at and requested by their clients.
Moreover, these non-malicious hackers do not exploit software users, release malware, or
steal data.
Hacking activities are by no means only a contemporary phenomenon, yet anecdotally
they often seem to be. With expanding use of new technology variants, the variety of
innovations in hacking modes continues to expand [40]. Technology hacking today produces a huge number of hacking incidents (over 64,000 in 2015) and verified data breaches [17, 38]. Hacker-related incidents affect a large proportion of individuals
and lead to large annual stakeholder costs [10, 11, 34]. Annually over the past decade,
verified data breaches have been most frequently caused by several variants of software
hacking attacks [20, 38]. Hacking activities are so pervasive globally that one can observe
the real-time generation of malicious hacking across the globe via resources such as the
Kaspersky Cyberthreat Real-Time Map.2 Yet, neither popular media, nor government
studies, nor academic literatures have investigated the role of this malicious hacking
activity in shaping the software industry.
In this study, we focus on the activities of malicious hackers and observe that perhaps
the net outcome of their hacking efforts is not always bad. The growth of malicious
hacking phenomena across major public, private, and governmental organizations moti-
vates the research questions behind this study: What is the impact of malicious hackers’
attacks on competition in software markets? Is the presence of malicious hacker attacks in
a marketplace all bad? Or are there some positive consequences (intended or unintended) of
malicious hackers’ activities? Among the possible consequences of malicious hacking, the
most obvious outcome generally concerns changes in the competitive market that may
affect stakeholder (e.g., end user, corporate client, or software vendor) utility in some
manner. Through a stylized model, we examine impacts of the presence of malicious
hackers who perform successful cyberattacks against one or several software vendors in
a software industry sector. Thus, from among the potential outcomes of malicious hack-
ing, this paper largely focuses on outcomes derived from resultant changes to competition
within a software industry sector. Studying this impact of malicious hacker attacks on
competition is important because of the wide-ranging and pervasive nature of malicious
hacker attacks today, affecting software industry vendors as well as major users of software
and, thus, the competitive structure of many industries.
We investigate these research questions in the context of a software market. We model
a market consisting of two competing software platforms in the presence of malicious
hackers. We set up a benchmark case where the hackers are either absent or do not target
any cyberattacks against the competing software platforms. We then compare the bench-
mark case against a first scenario where one software platform is targeted by cyberattacks,
and a second scenario where both software platforms are targeted by cyberattacks. We
assume these malicious hacker attacks are successful attacks, since unsuccessful attacks
will not lead to repercussions for software vendors or for software users. Through stylized
analytical models, we dig into the conventional wisdom that malicious hackers cause only
damages and losses for software industry stakeholders. Doing so fills a literature gap
pertaining to the social welfare implications for stakeholders arising from the actions of
malicious hackers, and it leads to several useful insights concerning the effects of software
hacker activities on software market competition.
192 R. SEN ET AL.
Interestingly, we find the presence of malicious hackers in a software market is
a double-edged sword and is not always detrimental to software industry stakeholders.
We find that the presence of such hackers makes the software market more competitive.
As a result, consumers may unintentionally benefit from the presence of malicious
hackers, largely because these hackers tend to direct their attacks toward the software
platforms with the largest user bases. We also observe that under
certain conditions an unsecure software platform targeted by hackers potentially can drive
its more secure competitor out of the software market. This finding illustrates context-
specific managerial insights useful to the long-term management of software platforms in
the presence of malicious hacker activities.
The paper is organized as follows. The second section provides a brief overview of
relevant literature. The third section develops the analytical model. The fourth section
presents our analysis of implications for software markets. The final section concludes
with contributions, limitations, and future directions.
Competition in a marketplace in general, and among software products in particular,
has been studied for a long time. Early studies investigated the impact of network
effects on competition between systems [26], competition between standards in
a software market [19], and on competition within software markets [8, 18]. More
recently, some studies explore the impact of network effects on software competition
between open source and proprietary software products [12, 24, 35]. Lin [29]
explores this same competition under the influence of network effects and varying
levels of users’ software skills. Lanzi [28] explores the issue under an assumption that
the competing software products are compatible. Since software markets experience
two-sided network externalities, researchers have incorporated this factor into studies
on competition in software markets [33]. In addition to the role of network effects on
software competition, researchers have also studied the impact of different channel
strategies and product heterogeneity on competition between software products. For
example, Bitzer [6] investigated the impact of product heterogeneity on the competition
between open source software (OSS) and commercial software, while Fan et al. [14]
analyze the competition between Software-as-a-Service (SaaS) and traditional shrink-
wrap software. Finally, some studies investigate the impact of piracy on software
markets [e.g., 31].
However, the existing literature on competition between software products or systems
has a major gap. To the best of our knowledge, no study investigates the impact of
malicious hacker attacks on the competition between software products or software plat-
forms. Without accounting for hacker actions, extant models examine a software
producer's decision about releasing vulnerable software or patching vulnerabilities so hackers
cannot attack them [2]. Models also study liability arising from software vulnerabilities [3,
27] and to what extent known software vulnerabilities can affect software vendor market
values [10, 37]. Studies model markets for the generation of software vulnerabilities [25]
and competitive market policy aspects of vulnerability discovery and disclosure activities
of hackers [9, 36]. Yet, to the best of our knowledge, no prior study examines competitive
statics that arise from malicious hacker activity within a software market structure. This
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 193
situation motivates the following question: Why is it important to understand the impact of
malicious hacker attacks on competition in software markets?
Malicious hacker attacks on technology are not a modern phenomenon. One of the
earliest examples of malicious hacking was the disruption of John Ambrose Fleming’s
public demonstration of Guglielmo Marconi’s wireless telegraphy technology by Nevil
Maskelyne [30]. In 1903, Maskelyne managed to “hack” the demonstration and send
insulting Morse code messages through the auditorium’s projector during this demonstra-
tion [30]. More recently, in January 2016 account information for at least 500 million
Yahoo users was hacked,3 and in May 2017, data on 143 million Americans was exposed
by Equifax.4
It has been established that security breaches caused by malicious hackers lead to
disruptions and downtime in the targeted systems, causing financial loss for the users of
these systems. The users of the software systems incur other costs as well. For instance,
industrial users may have to pay additional insurance premiums [22] for using software
known to have weak security. As a result, potential users are more likely to favor software
that has stronger security. Moreover, existing software users can punish a vendor of
vulnerable software by switching to a competing vendor, or delaying their software
upgrade purchases, while potential customers may avoid buying the vulnerable software
product [37]. Finally, Wright [39] argues that users of software will act rationally, which
implies that all else held equal (e.g., similar features and functionality), a user should
choose more secure software over less secure software. Since malicious hackers often
intentionally draw attention to software security (or a lack thereof), it is reasonable to
assume that the presence of malicious hackers should impact competition in a software
market.
The question remains as to the nature of the malicious hacker impact on market
competition. In this study, we address this issue. We contribute by modeling a software
market as a duopoly, with malicious hackers targeting either one or both of the competing
software platforms. The results show that in the absence of malicious hackers, the most
likely equilibrium market structure is a monopoly. Interestingly, when hackers target at
least one of the competing software products, the resulting market structure is more
competitive. As such, malicious hackers can be seen to play a potentially useful role in
ensuring more competitive software marketplaces. We contribute by demonstrating
potential software industry outcomes under different sets of possible market conditions.
In this section, we first develop a model of competition within a software market where
competing software platforms are targeted by malicious hackers. In the following section,
we then analyze this model for equilibrium outcomes, followed by a discussion of the
results.
Software Market
For any software category, we assume that two broad software platforms (e.g., Android vs.
iOS; Apache vs. IIS) reflect the software market structure sufficiently accurately. We will refer
to these software platforms by the terms Software X and Software Y, or just X and Y for
simplicity. Also, we will use the terms software, software product, and software platform
interchangeably. We assume X and Y compete for the same users (both individuals and
organizations). Competition ensures that X and Y vendors behave in such a manner as to
prevent the competitor from monopolizing the market. Finally, both X and Y are assumed to
contain the essential functional software features sought by potential users, and both software
platforms are assumed to meet the usability criteria of all users. This latter assumption is
required to ensure that any change in software industry market share that we observe over
the long run can be attributed primarily to hacker activities targeted at X and Y. These
assumptions are realistic when seen within the context of several software platform markets
(e.g., mobile operating systems, mobile communications networks, gaming platforms).
Market Share
The model assumes X's and Y's market share growth is a function of new users who decide to
adopt the software, switch from the competing software, and switch to the competing soft-
ware. This growth rate is restrained by the number of malicious hackers who target each of the
software platforms. The generic conceptual model for this growth rate is as follows:
(Rate of Change of Market Share) =
    (Rate of New Users Adopting the Software)
  − (Rate of Existing Users Switching to Competition)
  + (Rate of Users Switching from Competition)
  − (Restraining Effect of Hackers)                                        (1)
We now develop this equation by making some assumptions, and then under these assump-
tions, we mathematically describe the various components of the conceptual model.
Rate of New Users Adopting the Software
The net growth rate of any software is modeled as a function of its intrinsic growth rate (i.e.,
the growth rate in the absence of any competition), and any restraining effects of the market
share of the competing software and the total market size for this type of software. The
conceptual model for the rate of new users adopting each software platform is as follows:
(Rate of New Users Adopting the Software) =
    (Intrinsic Growth Rate of the Software)
  − (Restraining Effect of the Total Installation Base)
  − (Restraining Effect of the Market Share of Competing Software)         (2)
Intrinsic Growth in Market Share
Intrinsic growth is the growth rate of a software platform in the absence of any competi-
tion or malicious hackers. We assume this intrinsic growth to follow an exponential
function. We further assume that the population of potential software users for
a platform is sufficiently large. As a result, the random fluctuations between individual
users are small when compared against the whole population size. Therefore, it is safe to
assume each potential user has an equal chance of adopting, say for example, Software X.
We denote the probability that a new user adopts Software X as “a” and a new user adopts
Software Y as “b.” At any time t, the market share or user installation base (as a percentage
of the total market size) of Software X is X(t) and of Software Y is Y(t). To account for
positive network effects, we assume that the number of new users of any software platform
is proportional to its user installation base at any time “t.” Therefore, the intrinsic growth
rates of X and Y are defined as follows:
Intrinsic Growth Rate of Software X = aX(t)
Intrinsic Growth Rate of Software Y = bY(t)                                (2a)
Restraining Effect of the Maximum Market Potential
The growth rates of X and Y are adversely affected by a crowding effect. This crowding
effect is the effect of the total potential user installation base of Software X and Software
Y. The crowding effect results in a slower pace of adoption as the combined market
share of the two competing software platforms moves closer to the maximum market
potential. In fact, when the combined market share of the competing software platforms
reaches the maximum market potential, the growth of both of the software platforms
should be reduced to zero. This crowding effect is captured in the model as follows:
Restraining Effect of X(t) + Y(t) on growth of X = aX(t)(X(t) + Y(t))
Restraining Effect of X(t) + Y(t) on growth of Y = bY(t)(X(t) + Y(t))      (2b)
Restraining Effect of the Competing Software’s Market Share
The growth rates of the software platforms are adversely affected by the presence of
competition. Since the two software platforms are competing for the same pool of
potential software users, the market share of the competing software has a restraining
effect, proportional to its own market share, on the growth rate of the competing software.
If we denote the constant of proportionality (i.e., the competition coefficient) as "c" for Software
Y and "d" for Software X, then the restraining effect of Software Y on Software X is given
by [cY(t)]X(t) and the restraining effect of X on Y is given by [dX(t)]Y(t).
Restraining Effect of Y(t) on the growth of X = cX(t)Y(t)
Restraining Effect of X(t) on the growth of Y = dY(t)X(t)                  (2c)
Substituting Equations 2a, 2b, and 2c into Equation 2 gives us the rate of growth of
X and Y:
dX(t)/dt = [a − a(X(t) + Y(t)) − cY(t)]X(t)
dY(t)/dt = [b − b(X(t) + Y(t)) − dX(t)]Y(t)                                (2d)
Users Switching to or from Competition’s Software
Software users are known to benefit from network effects [8, 12, 15, 26]. Therefore, any
switching behavior is assumed to be a consequence of positive network effects. We assume
that the likelihood of a user switching to the competing software increases with the market
share of the competing software. For example, as the market share of Software X increases,
the probability of a user of Y switching to X increases. If the constant of proportionality of
a current user switching (or switching coefficient) is assumed to be s_I > 0, where I ∈ {X, Y},
then the market-share-dependent probability of a user switching from X to Y is s_X Y(t) and
that of a user switching from Y to X is s_Y X(t). Therefore, the expected number of users
switching from X to Y is s_X Y(t)X(t) and from Y to X is s_Y X(t)Y(t), as captured in
Equation 3.
Rate of Switching from Software X to Software Y = s_X Y(t)X(t)
Rate of Switching from Software Y to Software X = s_Y X(t)Y(t)             (3)
Combining Equations (2d) and (3) gives us the expression for the rate of change in the
market share of the two competing software platforms.
dX(t)/dt = [a(1 − X(t) − Y(t)) − cY(t) + sY(t)]X(t)
dY(t)/dt = [b(1 − X(t) − Y(t)) − dX(t) − sX(t)]Y(t)                        (4)

Here, s = s_Y − s_X, where −1 < s < 1.
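As a numerical illustration (ours, not the paper's), Equation 4 can be integrated with a simple forward-Euler scheme. The parameter values below are arbitrary assumptions chosen so that switching favors X (s > c):

```python
# Forward-Euler sketch of Equation 4 (two competing platforms, no hackers).
# Parameter values are illustrative assumptions, not taken from the paper.
def simulate_eq4(a, b, c, d, s, x0, y0, t_max=400.0, dt=0.01):
    """Integrate dX/dt = [a(1-X-Y) - cY + sY]X, dY/dt = [b(1-X-Y) - dX - sX]Y."""
    x, y = x0, y0
    for _ in range(int(t_max / dt)):
        dx = (a * (1 - x - y) - c * y + s * y) * x
        dy = (b * (1 - x - y) - d * x - s * x) * y
        x, y = x + dx * dt, y + dy * dt
    return x, y

if __name__ == "__main__":
    # s > c: net switching favors X, so the market tips toward an X monopoly.
    x, y = simulate_eq4(a=0.3, b=0.3, c=0.2, d=0.2, s=0.25, x0=0.1, y0=0.1)
    print(f"X share = {x:.3f}, Y share = {y:.3f}")
```

Starting from equal 10 percent shares, Y is driven out and X approaches the whole market, echoing the monopoly tendency the paper reports for hacker-free competition.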
Restraining Effect of Hackers
Malicious hackers could be individuals (e.g., grey hats, hacktivists), organized groups
(e.g., Lulz Security, LulzRaft, the Hacker Encrypters, Team Appunity, Swagg Security, or
other groups), loosely federated groups of like-minded individuals (e.g., Anonymous),
or organized criminals.
or organized criminals. We model the impact of any of these forms of malicious
hackers on the growth rate of competing software as follows. Z(t) represents the
number of malicious hackers at any time “t.” We assume “e” to be the probability
that the malicious hackers will successfully target Software X and “f” to be the prob-
ability that malicious hackers will successfully target Software Y. We will also call these
probabilities the Restraining Coefficients due to their restraining effects on the growth of
X and Y users, respectively. The value of these coefficients can be affected by factors
such as the type of vulnerability being targeted, the complexity of the exploit needed to
target the vulnerability, the skill set of the hacker, and the resources available to the
hackers. We do not explicitly include these factors in the model, because we are
interested in only modeling the expected number of malicious hackers actively targeting
X and Y. The expected number of hackers targeting X at time t is then eZ(t), while the
expected number of hackers targeting Y is fZ(t).
We assume that each successful malicious hack results in encouraging more individuals
to become malicious hackers. For the sake of simplicity, we assume this conversion rate to
be 1. Prior research has established that hacking attempts against a software product or
platform appear proportional to the software’s installed user base [21]. Therefore, we
assume that the success of each hack is proportional to the number of X and Y users (i.e.,
the growth in the total number of hackers at any time “t” is proportional to X(t) and Y(t),
respectively). Thus, the overall hacker growth rate at time t is the sum of eZ(t)X(t) and
fZ(t)Y(t). Furthermore, we assume malicious hackers will leave hacking with a probability
of w. Therefore, the number of malicious hackers leaving hacking is wZ(t). The overall
growth rate of these hackers is then given as follows:
dZ(t)/dt = [eX(t) + fY(t) − w]Z(t)                                         (5)
The restraining effect of malicious hackers on the overall market share growth rates of
Software X and Software Y is proportional to the number of hackers targeting X and Y and
the installation base of X and Y. For example, if more malicious hackers are targeting X,
and X has a higher installation user base, then a relatively higher number of X users will be
affected by these hacking attacks. Therefore, the market share growth of X will be
restrained. The restraining effect of malicious hackers on X is eX(t)Z(t) and on Y is
fY(t)Z(t). Substituting these values into Equation 4 gives the net rate of growth for
X and Y in the presence of these hackers.
dX(t)/dt = [a(1 − (X(t) + Y(t))) − cY(t) + sY(t) − eZ(t)]X(t)
dY(t)/dt = [b(1 − (X(t) + Y(t))) − dX(t) − sX(t) − fZ(t)]Y(t)              (6)
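The full system of Equations 5 and 6 can likewise be explored numerically. The sketch below is our own illustration, not code from the paper; the parameters are the Case 3, Example 1 values reported later in the paper (a = 0.32, b = 0.5, c = 0.4, d = 0.06, s = 0.437, e = 1, f = 0, w = 0.4), while the initial conditions are our assumptions. Note that dZ/dt = 0 with Z > 0 forces X = w/e at any interior equilibrium.

```python
# Forward-Euler sketch of the full model (Equations 5 and 6).
# Parameters follow the paper's Case 3, Example 1; the initial conditions
# are illustrative assumptions.
def simulate_full(a, b, c, d, s, e, f, w, x0, y0, z0, t_max=2000.0, dt=0.01):
    x, y, z = x0, y0, z0
    for _ in range(int(t_max / dt)):
        dx = (a * (1 - x - y) - c * y + s * y - e * z) * x   # Equation 6, X
        dy = (b * (1 - x - y) - d * x - s * x - f * z) * y   # Equation 6, Y
        dz = (e * x + f * y - w) * z                         # Equation 5
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

if __name__ == "__main__":
    x, y, z = simulate_full(0.32, 0.5, 0.4, 0.06, 0.437, 1.0, 0.0, 0.4,
                            x0=0.2, y0=0.2, z0=0.05)
    print(f"X = {x:.3f}, Y = {y:.3f}, Z = {z:.3f}")
```

With these values the trajectory should settle near X = 0.4 with Y and Z both positive: both platforms retain users while a hacker population persists, the coexistence outcome the paper later labels Equilibrium 6.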
The parameters used in Equations 5 and 6 are summarized in Table 1.
The set of equations (Equations 5 and 6) models the growth of Software X and
Software Y in the presence of malicious hackers. In Equation 7, we restate this combined
system, replacing X(t), Y(t), and Z(t) with X, Y, and Z for simplicity. The complex system
of equations presented in Equation 7 is analytically intractable, so we turn to equilibrium
and stability analysis.

Table 1. Variables used in Equations 5 and 6.

a: Intrinsic growth rate of X, or the probability of adoption of Software X (i.e., total number of new users divided by total number of users in a unit time period) in the absence of competition and hackers. (0 < a < 1)
b: Intrinsic growth rate of Y, or the probability of adoption of Software Y (i.e., total number of new users divided by total number of users in a unit time period) in the absence of competition and hackers. (0 < b < 1)
s: Switching Coefficient. s < 0 implies Y is gaining net users due to switching; s > 0 means that X is gaining net users due to switching; and s = 0 implies that neither X nor Y gains any additional users due to switching. (−1 < s < 1)
c: Competition Coefficient for the restraining effect of Y on the growth of X. (0 < c < 1)
d: Competition Coefficient for the restraining effect of X on the growth of Y. (0 < d < 1)
e: Restraining Coefficient for the restraining effect of hackers on the growth of X. (0 < e < 1)
f: Restraining Coefficient for the restraining effect of hackers on the growth of Y. (0 < f < 1)
w: Probability of hackers going mainstream, i.e., becoming legitimate: consulting, becoming white hats, etc. (0 < w < 1)

Note: A high Restraining Coefficient (i.e., e and f) implies poorly secured software.

Model Analysis — Equilibrium Outcomes
To analyze the stability of the system, we linearize the set of equations about an
equilibrium point (x_e, y_e, z_e), following Barnes and Fulford [5]:

d/dt [X − x_e, Y − y_e, Z − z_e]^T = J · [X − x_e, Y − y_e, Z − z_e]^T     (8)

where

J = [ a(1 − 2x_e) − (a + c − s)y_e − e·z_e    −(a + c − s)x_e                         −e·x_e            ]
    [ −(b + d + s)y_e                          b(1 − 2y_e) − (b + d + s)x_e − f·z_e    −f·y_e            ]
    [ e·z_e                                    f·z_e                                   e·x_e + f·y_e − w ]
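The stability conditions implied by this linearization can be checked numerically without symbolic algebra: approximate the Jacobian by finite differences and apply the Routh-Hurwitz criterion to its characteristic polynomial. The sketch below is ours; the interior equilibrium is computed for the paper's Case 3, Example 1 parameter values.

```python
# Routh-Hurwitz stability check of an interior equilibrium of Equations 5-6,
# using a finite-difference Jacobian. Illustrative sketch; parameters are the
# paper's Case 3, Example 1 values.
def field(v, a, b, c, d, s, e, f, w):
    x, y, z = v
    return [(a * (1 - x - y) - c * y + s * y - e * z) * x,
            (b * (1 - x - y) - d * x - s * x - f * z) * y,
            (e * x + f * y - w) * z]

def jacobian(v, args, h=1e-6):
    f0 = field(v, *args)
    cols = []
    for j in range(3):
        vp = list(v)
        vp[j] += h
        fp = field(vp, *args)
        cols.append([(fp[i] - f0[i]) / h for i in range(3)])
    # cols[j][i] = d f_i / d v_j; transpose so rows are d f_i / d v_j.
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def is_stable(J):
    # Characteristic polynomial lambda^3 + a1*lambda^2 + a2*lambda + a3.
    tr = J[0][0] + J[1][1] + J[2][2]
    m2 = sum(J[i][i] * J[k][k] - J[i][k] * J[k][i]
             for i, k in [(0, 1), (0, 2), (1, 2)])
    det = (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
           - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
           + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))
    a1, a2, a3 = -tr, m2, -det
    return a1 > 0 and a3 > 0 and a1 * a2 > a3  # Routh-Hurwitz for a cubic

if __name__ == "__main__":
    a, b, c, d, s, e, f, w = 0.32, 0.5, 0.4, 0.06, 0.437, 1.0, 0.0, 0.4
    x_e = w / e                                        # from dZ/dt = 0, z > 0
    y_e = 1 - x_e - (d + s) * x_e / b                  # from dY/dt = 0, y > 0
    z_e = (a * (1 - x_e - y_e) - (c - s) * y_e) / e    # from dX/dt = 0, x > 0
    print(is_stable(jacobian((x_e, y_e, z_e), (a, b, c, d, s, e, f, w))))
```

For these parameter values the check prints True, consistent with the stable coexistence equilibrium discussed for Case 3 below.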
We next analyze this linearized model (Equation 8) for three key scenarios: (a) when both
software platforms are secure and the hackers attack neither; (b) when the hackers attack
only one platform; and (c) when the hackers attack both platforms.

Scenario 1 (Benchmark): Both Fully Secured; Hackers Attack Neither Platform

The assumption that malicious hackers target neither software platform enables one to
set Z(t) = 0, which reduces the model to the two-platform competition system (Equation 9).
Solving the set of equations in Equation 9, we identify four equilibrium solutions in this
scenario. Equilibrium 1 is (x_e, y_e) = (0, 0); Equilibria 2 and 3 correspond to a monopoly
of X and a monopoly of Y, respectively; and Equilibrium 4 corresponds to a shared market.
The stability of the equilibrium values is analyzed in Appendix A. The results are summarized
in Table 2.

This result should not be surprising. Existing literature on software competition has
investigated similar winner-take-all outcomes under network effects. For example, simulation
of this equation system (Figure 1) with parameters (a = 0.36, …) illustrates the initial market
edge effect.

Table 2. Equilibrium values when both Software X and Software Y are secure.

Condition     Eq. 1      Eq. 2      Eq. 3      Eq. 4
s < −d        Unstable   Unstable   Stable     Unstable
−d < s < c    Unstable   Stable     Stable     Unstable
c < s         Unstable   Stable     Unstable   Unstable
Figure 1. Initial market edge effect when switching not enough to overcome competition.
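The initial market edge effect can be reproduced in a small simulation. The symmetric parameter values below are our own illustrative choice (not the paper's Figure 1 values): both monopoly outcomes are then stable, and whichever platform starts larger takes the whole market.

```python
# Tipping sketch for Equation 4: with symmetric, illustrative parameters,
# the platform holding the initial market edge wins the entire market.
def simulate_competition(a, b, c, d, s, x0, y0, t_max=1000.0, dt=0.01):
    x, y = x0, y0
    for _ in range(int(t_max / dt)):
        dx = (a * (1 - x - y) - c * y + s * y) * x
        dy = (b * (1 - x - y) - d * x - s * x) * y
        x, y = x + dx * dt, y + dy * dt
    return x, y

if __name__ == "__main__":
    p = dict(a=0.3, b=0.3, c=0.3, d=0.3, s=0.0)
    x1, y1 = simulate_competition(x0=0.2, y0=0.1, **p)  # X starts larger
    x2, y2 = simulate_competition(x0=0.1, y0=0.2, **p)  # Y starts larger
    print(f"X leads: ({x1:.3f}, {y1:.3f}); Y leads: ({x2:.3f}, {y2:.3f})")
```

The two runs differ only in the initial shares, yet they converge to opposite monopolies, which is the tipping behavior described above.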
In such models, tipping is reflected in equilibria where adoption of the losing system
simply stops.

Scenario 2: Only Software X Is Attacked by Hackers

The assumption that malicious hackers will only target one software platform eliminates
the hacker term from the untargeted platform's growth equation. With Software X targeted
(e = 1, f = 0), Equations 5 and 6 reduce to:

dX/dt = [a(1 − X − Y) − cY + sY − Z]X
dY/dt = [b(1 − X − Y) − dX − sX]Y
dZ/dt = [X − w]Z                                                           (10)

The linear model for this system (from Equation 8) is now given as:

J = [ a(1 − 2x_e) − (a + c − s)y_e − z_e    −(a + c − s)x_e                 −x_e      ]
    [ −(b + d + s)y_e                        b(1 − 2y_e) − (b + d + s)x_e    0        ]
    [ z_e                                    0                               x_e − w  ]
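Scenario 2's headline outcome, an unsecure platform that keeps the market despite being the only one attacked, can also be sketched numerically. The parameter values below are our own assumptions chosen so that switching toward X outweighs the hackers' restraining effect; they are not taken from the paper.

```python
# Forward-Euler sketch of Scenario 2 (hackers attack only X: e = 1, f = 0).
# Illustrative parameters chosen so the switching advantage s exceeds the
# modified switching threshold: X survives as a hacked monopoly.
def simulate_scenario2(a, b, c, d, s, w, x0, y0, z0, t_max=3000.0, dt=0.01):
    x, y, z = x0, y0, z0
    for _ in range(int(t_max / dt)):
        dx = (a * (1 - x - y) - c * y + s * y - z) * x
        dy = (b * (1 - x - y) - d * x - s * x) * y
        dz = (x - w) * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

if __name__ == "__main__":
    # Y starts with twice X's share, yet X wins: switching toward X
    # outweighs the hackers' restraining effect on X.
    x, y, z = simulate_scenario2(a=0.32, b=0.3, c=0.25, d=0.06, s=0.3, w=0.5,
                                 x0=0.15, y0=0.3, z0=0.05)
    print(f"X = {x:.3f}, Y = {y:.3f}, Z = {z:.3f}")
```

The run should approach X = w (here 0.5) with Y driven out and a persistent hacker population, the Windows-style outcome discussed for Case 1 below.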
Scenario 2 has six equilibrium points, four of which are shared with Scenario 1:
Equilibrium 1 is (x_e, y_e, z_e) = (0, 0, 0); Equilibria 2 and 3 are monopolies of X and Y,
respectively; and Equilibrium 4 is a competitive market consisting of both X & Y.
Equilibrium 5 is a monopoly of X with hackers targeting X, and Equilibrium 6 is
a competitive market consisting of both X & Y, with hackers targeting X. From the
stability analysis, we observe that the stability of the first five equilibrium points is
determined by how the switching coefficient s compares with a modified switching
threshold δ (Tables 3 and 4). Note that δ is inversely correlated to the restraining effect
of X on the growth of Y (i.e., d).

Case 1 (Table 3)
Unlike with Scenario 1, which was defined by a complete monopoly of X (X has all the
market), here X retains its monopoly despite being targeted by hackers (Equilibrium 5).
This outcome was exemplified by the dominance of the Windows OS against its rivals:
the hackers' restraining effect on the growth of the Windows OS was less than the rate at
which users switched to it.

Table 3. Analysis of equilibrium values when only Software X is targeted by hackers, when s > δ.

          Condition   Eq. 1      Eq. 2      Eq. 3      Eq. 4      Eq. 5    Eq. 6
Case 1    c < s       Unstable   Unstable   Unstable   Unstable   Stable   Infeasible
Case 2    c > s       Unstable   Unstable   Stable     Unstable   Stable   Infeasible

Note: δ = b(1/w − 1) − d.

Table 4. Analysis of equilibrium values when only Software X is targeted by hackers, when s < δ.

          Condition   Eq. 1      Eq. 2      Eq. 3      Eq. 4      Eq. 5      Eq. 6
Case 3    c < s       Unstable   Unstable   Unstable   Unstable   Unstable   Stable/Unstable
Case 4    c > s       Unstable   Unstable   Stable     Unstable   Unstable   Stable/Unstable

Note: δ = b(1/w − 1) − d.

Case 2 (Table 3)
In this case, software Y (the secure software platform) can have a monopoly (i.e.,
Equilibrium 3 is stable), but Equilibrium 5, a monopoly of the unsecure X, is stable as
well, so the outcome depends on initial conditions. Figure 2a shows Example 1, where
unsecure Software X dominates the market with slow convergence; Figure 2b shows
Example 2, with quick convergence; and Figure 2c shows Example 3, where the market
converges to a monopoly of Y because the secure software (i.e., Y) initially has more
market share than the unsecure software. What is interesting to note about these results
is that the non-secure software (i.e., X) is still able to dominate the market from
a favorable starting position.

Figure 2. Stable Equilibria 3 and 5 for s > δ, c > s (Case 2 in Table 3). Panels: (a) Example 1: slow convergence; (b) Example 2: quick convergence; (c) Example 3: convergence to a monopoly of Y. Market share on the vertical axis, time on the horizontal axis.

Case 3 (Table 4)
In this case, while the rate at which users switch from Y to X is less than the modified
switching threshold δ, a stable interior equilibrium (Equilibrium 6) can emerge in which
X, Y, and the hackers coexist. Two illustrative parameter sets are:

Example 1: a = 0.32, b = 0.5, c = 0.4, d = 0.06, s = 0.437, e = 1, f = 0, w = 0.4
Example 2: a = 0.32, b = 0.57, c = 0.4, d = 0.06, s = 0.437, e = 1, f = 0, w = 0.33

Example 1 in Figure 3 shows a case where market dynamics converge and ultimately
stabilize at Equilibrium 6. The rate of hackers going mainstream, w, has a direct impact
on the final market shares.

Figure 3. Solution converging to stable Equilibrium 6 (market share on the vertical axis).

Case 4 (Table 4)

In Case 3, the unsecure software's (i.e., X) existence was supported through its ability to
attract switching users. Situation A (a = 0.33, b = 0.54, c = 0.4, d = 0.34, s = 0.357, e = 1,
f = 0, w = 0.29586) shows software X surviving and competing with Y. However, higher
hacker activity can result in X being forced out of the market, and Y (the secure software)
remains as a monopoly. Figure 5 demonstrates Situation B with two examples, in which
the initial market shares of X and Y determine which stable equilibrium is reached.

Scenario 3: Both X and Y Are Attacked by Hackers
The linear model for this system (from Equation 8) is given as:
Figure 4. Case 4 (situation A) – one stable equilibrium. Outcome is shaped by initial entry point and 206 R. SEN ET AL. dX a 1� 2xeð Þ � aþ c� sð Þye � eze � aþ c� sð Þxe �exe ez0 fz0 exe þ fye � w
2 The equilibrium values for the system of equations (11) are as follows: � � market consisting of both X & Y) � � ers target both X & Y)
Equilibrium 6: ðxe; ye; zeÞ ¼ be2þaf ½w�f ��ew½sþd� ws2þbeða�cÞþaf ðb�dÞþdwðc�sÞþsðbe�af Þþwðcs�abÞ 0 (i.e. a competitive market consisting of both X & Y, and the hackers target both X & Y)
Equilibrium 7: ðxe; ye; zeÞ ¼ 0; wf ; bðf�wÞf 2 (i.e., monopoly of Software Y, and the hackers This setup represents the most general market scenario. Just as where the vulnerability Figure 5. Case 4 (situation B) – Two stable equilibrium. Outcome is shaped by initial entry point and JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 207 vulnerability of Software Y makes Equilibrium 3 always unstable in this scenario. As the In this most general case represented by Scenario 3, the first four equilibrium points are Equilibrium 6 Feasible and Stable at Parameter Values: Example 1: a = 0.33, b = Table 4 shows that when hackers target both X and Y, a competitive software market Table 5. Analysis of equilibrium values when both X & Y are targeted by hackers. Feasible Stable Feasible Stable Feasible Stable
Equilibrium 1 Yes No Yes No Yes No Notes: Equilibrium in which the market is shared between competing software. The bold text represents scenarios where 208 R. SEN ET AL. operating systems market. According to a 2017 Gartner report, Android and iOS, Interestingly, if we modify the parameter capturing hacker attacks on iOS and Android Attacks by malicious hackers on technological systems are an ongoing managerial chal- We model a software market to examine the impact of malicious hacker attacks as We find that in the absence of malicious hacker activities, a software market is more JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 209 likely to end up a monopoly in theory. In practice, we have the example of MS Windows Figure 6. Subscriber share held by smartphone OS in the United States 2012-2018 (https://www.statista.com).10
Figure 7. Subscriber share held by smartphone OS in the United States 2012-2018 (generated from 210 R. SEN ET AL. https://www.statista.com https://www.statista.com software platforms, their activity introduces a new co-existing equilibrium (i.e., Contributions and Policy Implications
The theoretical, managerial, and policy implications of our results are as follows. First, the The common public policy opinion is that malicious hackers are only bad for the What should government policy do? The study results showing the unintended benefit JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 211 bounty programs,11 and/or government agencies (e.g., NIST’s National Vulnerability What can the software industry do? The software industry, while not a big fan of ● Software developers such as PayPal, Google, and Firefox have Vulnerability Rewards ● Software producers such as Apple and Microsoft sponsor and encourage hacking ● Software producers paying ethical hackers to test their products for software vulner- ● As part of risk assessment, industrial software users now often will hire consulting 212 R. SEN ET AL. services to their clients. Some user firms, such as United Airlines, employ resources In short, by incorporating known hackers into the operational activities and processes of Potential Limitations and Future Research Directions
As with any modeling study, the model documented in this paper exhibits potential Notes black-white-and-grey-hat-hackers.html users-are-still-switching-to-ios time. JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 213 http://www.consumer.ftc.gov/blog/2017/09/equifax-data-breach-what-do http://www.gartner.com/newsroom/id/3859963 12. https://nvd.nist.gov/ 1. Algarni, A.M.; and Malaiya, Y.K. Software vulnerability markets: Discoverers and buyers. 2. Arora, A.; Caulkins, J.P.; and Telang, R. Sell first, fix later: Impact of patching on software 3. August, T.; and Tunca, T.I. Who should be responsible for software security? A comparative 4. Anderson, R. Why information security is hard- An economic perspective. 17th Annual 5. Barnes, R.; and Fulford, G.R. Mathematical Modelling with Case Studies: A Differential 6. Bitzer, J. Commercial versus open source software: The role of product heterogeneity in 7. Brewster, T. US cybercrime laws being used to target security researchers. The Guardian, 8. Brynjolfsson, E. and Kemerer, C.F. Network externalities in microcomputer software: An 9. Cavusoglu, H.; Cavusoglu, H.; and Raghunathan, S. Efficiency of vulnerability disclosure 10. Cavusoglu, H.; Mishra, B.; and Raghunathan, S. The effect of internet security breach 11. CBS. These Cybercrime Statistics Will Make You Think Twice About Your Password: 12. Economides, N.; and Katsamakas, E. Two-sided competition of proprietary vs. open source 13. English Oxford Living Dictionaries. Accessed on August 25 2019: https://en.oxforddiction 14. Fan, M.; Kumara, S.; and Whinston, A.B. Short-term and long-term competition between 15. Farrell, J.; and Saloner, G. Installed base and compatibility: Innovation, product pronounce- 16. Finifter, M.; Akhawe, D.; and Wagner, D. An empirical study of vulnerability rewards 214 R. SEN ET AL. 
17. Finkle, J. 6 more stores attacked by same hack as target: Firm. Reuters, January 17, 2014.
18. Gallaugher, J.M.; and Wang, Y. Understanding network effects in software markets: Evidence
19. Gandal, N. Competing compatibility standards and network externalities in the PC software
20. Gander, K. Microsoft pays out $100,000 to hacker who exposed Windows security flaws.
21. Garcia, A.; Sun, Y.; and Shen, J. Dynamic platform competition with malicious users.
22. Gordon, L.A.; Loeb, M.P.; and Sohail, T. A framework for using insurance for cyber risk
23. Gustin, S. U.S. "hacker" crackdown sparks debate over computer-fraud law. Time, March 19,
24. Jaisingh, J.; See-To, E.; and Tam, K.Y. The impact of open source software on the strategic
25. Kannan, K.; and Telang, R. Market for software vulnerabilities? Think again. Management
26. Katz, M.L.; and Shapiro, C. Systems competition and network effects. The Journal of
27. Kim, B.C.; Chen, P.; and Mukhopadhyay, T. (2010b). The effect of liability and patch release
28. Lanzi, D. Competition and open source with perfect software compatibility. Information
29. Lin, L. Impact of user skills and network effects on the competition between open source and
30. Marks, P. Dot-dash-diss: The gentleman hacker’s 1903 lulz. New Scientist. Issue 2844,
31. Marshall, A. Causes, effects and solutions of piracy in the computer software market. Review
32. Merriam-Webster Dictionary. Accessed on August 25, 2019: http://www.merriam-webster.com/dictionary/hacker
33. Rochet, J.; and Tirole, J. Platform competition in two-sided markets. Journal of the European
34. Samtani, S.; Chinn, R.; Chen, H.; and Nunamaker, J.F. Exploring emerging hacker assets and
35. Sen, R. A strategic analysis of competition between open source and proprietary software.
36. Swire, P.P. A model for when disclosure helps security: What is different about computer and
37. Telang, R.; and Wattal, S. An empirical analysis of the impact of software vulnerability
38. Verizon. 2016 Data Breach Investigations Report. http://www.verizonenterprise.com/verizon-insights-lab/dbir/2016/
39. Wright, C.S. Software, vendors and reputation: An analysis of the dilemma in creating secure
40. Zetter, K. The biggest security threats we’ll face in 2016. Wired.com, January 1, 2016. https://www.wired.com/2016/01/the-biggest-security-threats-well-face-in-2016/
41. Zhao, M.; Grossklags, J.; and Liu, P. An empirical study of web vulnerability discovery

JOURNAL OF MANAGEMENT INFORMATION SYSTEMS 215

About the Authors
Ravi Sen (rsen@mays.tamu.edu; corresponding author) is an Associate Professor at Mays Business
Ajay Verma (ajay.verma@lmco.com) is a Senior Engineer at Lockheed Martin Missiles and Fire
Gregory R. Heim (gheim@mays.tamu.edu) is the Janet and Mark H. Ely ’83 Professor in the

216 R. SEN ET AL.

Copyright of Journal of Management Information Systems is the property of Taylor & Francis

Using Design-Science Based Gamification to Improve

a Institute of Information Management, University of St. Gallen, St. Gallen, Switzerland; b Department of

ABSTRACT
Information technology (IT) security compliance deals with techniques and processes that

CONTACT Paul Benjamin Lowry, Paul.Lowry.PhD@gmail.com, Department of Business Information Technology. Supplemental data for this article can be accessed on the publisher’s website.
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
© 2020 Taylor & Francis Group, LLC
https://doi.org/10.1080/07421222.2019.1705512

behavior can easily undermine it [102]; moreover, it is ultimately the employees’ respon-
Understandably, researchers have questioned whether extant organizational security
In contrast, security education, training, and awareness (SETA) programs can leverage
Employee training is notorious for failing, as even though it often delivers the right
Rather than similarly concluding that SETA programs are ineffective, we instead aim to
With the final goal of improving security training in organizations, our study strength-
with 420 participants shows that fulfilling users’ motivations and coping needs through
Gamification applies knowledge from gaming theory and flow theory [23, 24, 92] to
The common themes that emerge from the various definitions over the past decade are:
Another key gamification concept is that a game-like user experience activates users’
● Gamification is the use of game-like IT design artifacts and system processes to
● Security gamification is applying game-like design artifacts and system processes to
Previous research has suggested that game design can include the use of goals, rewards,
… large gap in research of potential relevance to organizations … more research is needed on
Bui et al.’s review supports three of our study’s core assumptions.
First, a careful DSR
carefully contextualized to the instrumental goal of the gamification task—in our context,
Several researchers have posited that gamification can foster employees’ training and sub-
Given the compelling opportunities in the literature, we argue that an improved approach
how these design elements should be chosen for specific tasks, and how they interact among
We thus propose that gamified security training represents a natural opportunity to apply

Overview of Our DSR Approach

Previous gamification studies have largely lacked a systematic DSR approach [13] to the
Proof-of-concept is the point at which enough evidence exists to show that the described
Similar approaches to proof-of-concept and proof-of-value have recently been introduced
We systematically established proof-of-concept and proof-of-value and moved toward

Establish the Gamified Design as an Artifact

Focus on Design Problem Relevance

Moreover, our literature review revealed that the traditional approach of encouraging

Figure 1. Framework for design and research of gamified systems (adapted from Liu et al. [59]). [Figure elements: gamification design principles and gamification objects; target system; user interaction system; gamified system; experiential outcomes (challenge, learning, response efficacy); instrumental outcomes; HMSAM.]
Create Objectives for Design Evaluation

Apply a DSR Kernel Theory Contextualized to Gamification

To proceed with context-specific theorizing, we used a framework similar to the one

Propose Guiding Design Principles to Bridge DSR Design Objectives and the DSR Kernel

Design principle #1: The gamified training system should incorporate different design
Regarding coping support, it is crucial that the new system has features that sustain and
Design principle #2: The gamified training system should provide new knowledge through
Here, there are three conceptual design issues that need to be addressed [73]: The first
behavioral change. Both intrinsic and extrinsic motivations can positively influence an
The second issue concerns the “conceptual distance between a latent dependent vari-
The third and final conceptual design issue concerns the “potential interdependence of

Establish Proof-of-Concept

Step 1: Before creating a working prototype, we reviewed the gamified security litera-
This review is detailed in Supplemental Appendix A (see Tables A.1–A.2).
implemented in our context and that we followed to implement multiple versions of our
Step 3: We then further bridged design and theory by systematically applying our kernel
Step 4: Finally, in Table A.5 we mapped specific motivations to each of the gamification

Establish Proof-of-Value

The key role of our kernel theory, HMSAM, was twofold: to guide the design and help
HMSAM was chosen primarily because it is a native IS theory that focuses heavily on
HMSAM also leverages the technology acceptance model (TAM) to explain that
Figure 2 depicts the HMSAM that we extend for the new context of BI related to

Core Kernel Theory Assumptions for Achieving Immersion

A core assumption of our operationalized kernel theory is that the experience of flow
(and feedback, and (3) a balance of challenges and skills [29]. The first condition indicates
Enable the accomplishment of dual goals, in which both sides can see benefits (e.g.,

Infusing Learning and Security Coping into Our Context
Like motivations, positive coping skills can foster behavioral change. A key way to deliver

Figure 2. Operationalized and extended kernel theory: HMSAM. PEOU, perceived ease of use; PIU,
[Figure constructs: challenge, PEOU, PIU, curiosity, joy, control, immersion, learning, security response efficacy, security self-efficacy, BI to follow security policies, and actual phishing behavior, linked by hypotheses H1–H6. Key limiting assumption: challenge must be “appropriate,” in balance with skills. Model part 2 (in grey) concerns the intention to encourage security-related behavior; model part 3 (grey hash) shows demographic controls (age, experience, OSC); lines without hypotheses represent control paths to BI and actual behavior.]

more likely to believe they can comply and protect their organizations. Conversely, employees

Hypothesis 1. Increased perceived learning in a gamified security training context is asso-

Such learning fosters general coping abilities, most commonly termed “security
Research [79] has also found that goal-oriented individuals demonstrate higher levels

Hypothesis 2a–b. Increased perceived learning in a gamified security learning context is

Coping and Behavioral Change
Research has demonstrated the importance of improving coping skills as a means of
Efficacy should increase not only as a result of learning but also specifically as a result of
Thus, the increased self-efficacy and response efficacy resulting from gamified systems should

Hypothesis 3a–b. Both (a) increased security response efficacy and (b) security self-efficacy in

Balancing Skills and Challenges

Again, the third condition of achieving immersion, per Davis and Csikszentmihalyi [29], is
Thus, we add to HMSAM the concept of challenges, which when met can fulfill motivations
Meng et al. [69] argued that the optimal challenge leads to optimal immersion. We

Hypothesis 4. Perceived challenge will have a positive and curvilinear (inverted U-shaped)

Fulfilling Motivations for Behavioral Change

CA theory [3] predicts that immersion is positively associated with BI, which has been
that BI is parallel to our context and thus involves the intention to follow security policies,
Gamification and immersion are powerful influences on behavioral change in indivi-
Immersion is an experience of total involvement that causes external demands to be
The underlying causal mechanisms are not just cognitive, as inferred by the TRA, but

Hypothesis 5. Increased immersion in a gamified security training context is associated with

Moreover, both the TRA [5] and the TPB [4] predict a strong link between intention
Measuring behaviors is an excellent way to further determine whether gamification can

Hypothesis 6. Increased BI should be associated with an increased actual phishing response,

Modeling Counter-explanations Through Control Variables
Testing counter-explanations of other possible predictors has pragmatic relevance in IS

Pilot Study for Proof-of-Value

Once we deemed the system to have achieved reasonable proof-of-concept, we prepared to

Main Study Design for Proof-of-Value in Actual Use

The final study for formal proof-of-value in actual use was designed as a controlled field
Notably, the control group received no training or notifications. However, the gamifica-
to take a quiz after completing each training session. This allowed for a cleaner manipulation

Gamified System and Procedures

The gamified system’s objective was to educate users about security topics using various
At the first login, the gamemaster appeared and explained the game mechanics (e.g.,

Figure 3. Main screen of gamified system.

The participants in the e-mail control group did not participate in the controlled field
We chose phishing as a key focus of the training because it is a much more urgent

Measures for Design Evaluation

The measurement items were borrowed or adapted from previous studies (see Table C.1).

Measurement Model

First, a confirmatory factor analysis was conducted. The results indicated that some items’
According to all tests, the measurement model exhibited good reliability, and conver-

Structural Model Results
We used Mplus 7 software, a covariance-based structural equation modeling (CB-SEM) tool,
Finally, H4 was supported, but because Hypothesis 4 was hypothesized as a nonlinear
We then performed the transformation through a squared term and entered this new

Figure 4. Structural model testing results of the operationalized kernel theory at six months (see note 12).
[Hypothesized paths labeled in the figure: H1 0.839***; H2a 0.255***; H2b 0.683***; H3a 0.131**; H3b 0.220***; H4 0.540***; H5 0.516***; H6 0.415***. R² = 0.173 (learning); R² = 0.481 (security). Further standardized estimates shown without hypothesis labels: 0.205***, 0.433**, 0.673***, 0.701***, 0.603***, 0.661***, 0.677***, 0.473***, 0.730***, 0.783***, and 0.377 n/s; controls (age, experience, OSC, OCM) all n/s. Key limiting assumption: challenge must be “appropriate” (in balance with skills).]
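The squared-term procedure mentioned above can be illustrated with a small self-contained sketch: fit immersion on challenge alone, then on challenge plus challenge², and compare the R² values. The data below are synthetic and the least-squares solver is a plain normal-equations implementation; only the logic mirrors the analysis described in the text.

```python
import random

def ols_r2(X, y):
    """R^2 of the least-squares fit of y on X (X already includes an intercept column)."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Solve (X'X) beta = X'y by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c2 in range(col, k):
                xtx[r][c2] -= f * xtx[col][c2]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c2] * beta[c2] for c2 in range(r + 1, k))) / xtx[r][r]
    y_hat = [sum(xi[a] * beta[a] for a in range(k)) for xi in X]
    y_bar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

random.seed(1)
challenge = [random.uniform(0.0, 10.0) for _ in range(200)]
# Synthetic inverted-U: immersion peaks at moderate challenge, plus noise.
immersion = [-(c - 5.0) ** 2 + random.gauss(0.0, 2.0) for c in challenge]

r2_linear = ols_r2([[1.0, c] for c in challenge], immersion)
r2_quadratic = ols_r2([[1.0, c, c * c] for c in challenge], immersion)
```

An inverted-U is indicated when the squared term adds substantial R² and carries a negative coefficient; the R²-change F statistics reported in notes 17–18 formalize this comparison.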
Manipulation Checks of Instrumental Goals

Given that the field experiment was conducted to determine whether the gamified
The treatment effects that occurred between the e-mail and gamified groups in terms of
was run to compare the values for significant differences. Crucially, our group (i.e., cell)
The discussion of our study is guided by the structural example provided by Abbasi et al.

Table 1. Summary of who was and was not phished in the control and
Control group* (n = 38): 17 (44.7 percent) | 21 (55.3 percent)
*Note: The control group was a randomly selected group of employees who had not

Table 2. Comparisons between the treatment
Gamified vs. control: 2.2561*
*Note: A result is significant at p < 0.05 (assuming a two-tailed hypothesis test).
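The comparison reported in Table 2 is of the kind produced by a standard two-proportion z-test, which can be sketched as follows. The control counts (17 of 38 phished) come from Table 1; the gamified-group counts used below are hypothetical placeholders, since the corresponding row is not preserved here, so the resulting z value will not reproduce the reported 2.2561.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 = p2, using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Control group (Table 1): 17 of 38 phished. Gamified counts below are ASSUMED.
z = two_proportion_z(17, 38, 8, 42)
significant = abs(z) > 1.96  # two-tailed test at p < 0.05
```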
Recap of Our General DSR Study Goals
The goals of our DSR study were pragmatically driven from the serendipitous confluence
Applying our goals to practice, we followed rigorous and systematic DSR principles,

Recap of Our DSR Approach

To support our DSR approach, we adhered to a DSR methodology that closely followed

Establishing Proof-of-Concept

Before moving to proof-of-value, we carefully followed the steps for proof-of-concept
Thus, HMSAM provided the basis for our kernel theory and evaluation model, which

Establishing Overall Proof-of-Value

To establish proof-of-value, once the system was deemed ready, we first formally pilot

Establishing Proof-of-Value in Actual Practice

Likewise, the employees’ learning, efficacy, and behaviors were strongly and positively
solely on manipulating positive motivations and improving participants’ coping responses
Gamified system design elements contributed to a more immersive experience and
If our results hold, their implications for security training in practice are noteworthy.
Following Baxter et al.’s [8] conclusions, and in consultation with the French company,
These overall results provide proof-of-value in actual practice, not just because our

Establishing Proof-of-Value in Research

perceptual and objective measurement; (5) being grounded in a native IS kernel theory
Notably, we are the first to use a long-term study in a gamified security context. This
Another research contribution is that actual security compliance was measured and

Establishing Proof-of-Value in Theory

As a further empirical demonstration of our theoretical contribution, the R² for BI in our
Moreover, we added the demographic controls and the counter-explanations of TMSC,
both statistically significant and meaningful in terms of its application in the field of highly
Moreover, the challenge-related findings led to a couple of unexpected key contributions
We thus conducted a follow-up analysis to run two contrasting regression models; one
Given that challenge is essential to our gamification context, we also suspected that
Finally, we learned that the “time dimension” does not favor e-mail treatment. From the

Figure 5. The curvilinear and inverted U-shaped relationship between challenge and immersion. All the statistics used in this figure are standardized. Challenge is on the x-axis and immersion is

Figure 6. Linear and negative relationship between challenge and immersion in a non-gamification context.

a more rewarding and immersive experience that fostered actual behavioral change. If this

Research Agenda to Establish Proof-of-Use
Beyond demonstrating proof-of-concept and proof-of-value, according to [75, p. 16], a third
seeks to create self-sustaining and growing communities of practice around a generalizable
Thus, proof-of-use is perhaps the greatest limitation and future research opportunity for
Likewise, building a working prototype was an iterative process in which we actively
Another consideration that needs to be examined for proof-in-use is that we cannot
We also believe this study offers an ideal opportunity for the kind of future interdisciplinary
compliance contexts in which intrinsic motivations and immersion play strong roles.
Moreover, for further proof-in-use, more research needs to examine each element involved
We conducted a DSR project that theoretically and empirically demonstrates that careful

Notes

foster a useful outcome other than entertainment [32, 92]. The features include design
2. Namely, they statistically rejected the associated hypotheses “H2: Individuals who receive
3. Ultimately, users’ behaviors should be influenced by the gamified tasks in which a flow
4. We aim for both application to practice but also to tackle the challenge of integrating our
5. Meaningful engagement in this context refers to the outcomes of the gamification design. That
6. Other studies have implemented several of the gamification design principles, but typically in
7. The organization we worked with preferred to have a simple system implemented without
8. For example, a study found that playing the Super Mario Bros. game resulted in a significant
9. A couple of the more notable improvements we made included two major adjustments: (1)
10. We have no further survey data on the employees who opted to not participate. However, as
11. The second step of model validation was to test for discriminant validity. Here, we first
12. PEOU = perceived ease of use; PIU = perceived intrinsic usefulness; BI = behavioral
13. As the design is unbalanced, we tested the equality of covariance matrices using Box’s M test.
14. Ecological validity should not be confused with external validity. Ecological validity indicates
often because they are collected or generated in real-life settings (e.g., actual employees trying
15. To demonstrate these points empirically, we followed Chin et al. [18]. The effect of adding our
ƒ² = 0.884, which is a “huge” effect size (anything above 0.35 is considered “large”), is rarely seen in
16. ƒ² (Cohen’s effect size) = R²covariate model – R²
In this case, ƒ² = 0.019, which is a “trivial” effect size (“small” requires a size of 0.20 or
17. The model summary statistics between Model 1 (linear) and Model 2 (curvilinear; quadratic)
18. Using only the data in the e-mail treatment, the model summary statistics between Model 1

ORCID
Paul Benjamin Lowry http://orcid.org/0000-0002-0187-5808
1. Abbasi, A; Zhang, Z; Zimbra, D; Chen, H; and Nunamaker Jr, JF. Detecting fake Websites:
2. Adams, M and Makramalla, M. Cybersecurity skills training: An attacker-centric gamified
3. Agarwal, R and Karahanna, E. Time flies when you’re having fun: Cognitive absorption and
4. Ajzen, I. The theory of planned behavior. Organizational Behavior and Human Decision
5. Ajzen, I and Fishbein, M. Understanding Attitudes and Predicting Social Behavior. Englewood
Change Statistics
Model | R | R² | Adjusted R² | Std. Error of the Estimate | R² Change | F Change | Sig. F Change
1 | .332a | .111 | .109 | .945 | .111 | 55.903 | .000
a. Predictors: (constant), challenge; b. Predictors: (constant), challenge, challenge² (quadratic relationship)
Change Statistics
Model | R | R² | Adjusted R² | Std. Error of the Estimate | R² Change | F Change | Sig. F Change
1 | .383a | .147 | .104 | .9687208 | .147 | 3.444 | .078
a. Predictors: (constant), challenge; b. Predictors: (constant), challenge, challenge² (quadratic relationship)

6. Bandura, A. Perceived self-efficacy in cognitive development and functioning. Educational
7. Banfield, J and Wilkerson, B. Increasing student intrinsic motivation and self-efficacy
8. Baxter, RJ; Holderness, DK; and Wood, DA. Applying basic gamification techniques to it
9. Benware, CA and Deci, EL. Quality of learning with an active versus passive motivational set.
10. Boot, WR; Kramer, AF; Simons, DJ; Fabiani, M; and Gratton, G. The effects of video game
11. Boss, SR; Galletta, DF; Lowry, PB; Moody, GD; and Polak, P. What do users have to fear?
12. Brown, SA; Dennis, AR; and Venkatesh, V. Predicting collaboration technology use:
13. Bui, A; Veit, D; and Webster, J. Gamification–a novel phenomenon or a new wrapping for
14. Bulgurcu, B; Cavusoglu, H; and Benbasat, I. Information security policy compliance: An
15. Burns, AJ; Roberts, TL; Posey, C; and Lowry, PB. Examining the influence of organizational
16. Chen, X; Chen, L; and Wu, D. Factors that influence employees’ security policy compliance:
17. Chen, Y; Ramamurthy, K; and Wen, K-W. Organizations’ information security policy com-
18. Chin, W; Marcolin, B; and Newsted, P. A partial least squares latent variable modeling
19. Coonradt, C. The Game of Work: How to Enjoy Work As Much As Play. Layton, Utah: Gibbs
20. Cowley, B; Charles, D; Black, M; and Hickey, R. Toward an understanding of flow in video
21. Crossler, RE and Bélanger, F. The effects of security education training and awareness
22. Crossler, RE; Johnston, AC; Lowry, PB; Hu, Q; Warkentin, M; and Baskerville, R. Future
23. Csikszentmihalyi, M. Finding Flow: The Psychology of Engagement with Everyday Life.
24. Csikszentmihalyi, M. Beyond Boredom and Anxiety. San Francisco, CA, US: Jossey-Bass, 2000.
a model for cognitive-affective user responses. International Journal of Human Computer
26. D’Arcy, J and Herath, T. A review and analysis of deterrence theory in the IS security
27. D’Arcy, J; Hovav, A; and Galletta, D. User awareness of security countermeasures and its
28. D’Arcy, J and Lowry, PB. Cognitive-affective drivers of employees’ daily compliance with
29. Davis, M and Csikszentmihalyi, M. Beyond Boredom and Anxiety: The Experience of Play in
30. Deci, EL and Ryan, RM. Intrinsic Motivation and Self-determination in Human Behavior.
31. Deterding, S. Gamification: Designing for motivation. Interactions, 19, 4 (2012), 14–17.
ness: Defining gamification. Presented at 15th International Academic MindTrek Conference:
33. Domínguez, A; Saenz-de-Navarrete, J; De-Marcos, L; Fernández-Sanz, L; Pagés, C; and
34. Edwards, DA; Wetzel, K; and Wyner, DR. Intercollegiate soccer: Saliva cortisol and testoster-
35. Fassbender, E; Richards, D; Bilgin, A; Thompson, WF; and Heiden, W. VirSchool: The effect
36. Ferguson, AJ. Fostering e-mail security awareness: The West Point carronade. EDUCASE
37. Floyd, DL; Prentice-Dunn, S; and Rogers, RW. A meta-analysis of research on protection
38. Gregor, S and Hevner, AR. Positioning and presenting design science research for maximum
39. Haans, RF; Pieters, C; and He, ZL. Thinking about U: Theorizing and testing U-and inverted
40. Herath, T and Rao, HR. Encouraging information security behaviors in organizations: Role of
41. Hevner, AR; March, ST; Park, J; and Ram, S. Design science in information systems research.
42. Ho, SM and Warkentin, M. Leader’s dilemma game: An experimental design for cyber insider
43. Hong, W; Chan, FK; Thong, JY; Chasalow, LC; and Dhillon, G. A framework and guidelines
44. Hsu, C-Y; Tsai, C-C; and Wang, H-Y. Facilitating third graders’ acquisition of scientific
45. Hsu, JS; Shih, S; Hung, YW; and Lowry, PB. The role of extra-role behaviors and social
46. Hu, Q; Dinev, T; Hart, P; and Cooke, D. Managing employee compliance with information
47. Hwang, G-J; Wu, P-H; and Chen, C-C. An online game approach for improving students’
48. Jennett, C; Cox, AL; Cairns, P; Dhoparee, S; Epps, A; Tijs, T; and Walton, A. Measuring and
49. Jensen, ML; Dinger, M; Wright, RT; and Thatcher, JB. Training to mitigate phishing attacks using
50. Johns, G. The essential impact of context on organizational behavior. Academy of
51. Johnson, RD and Marakas, GM. Research report: The role of behavioral modeling in
52. Johnston, AC; Warkentin, M; and Siponen, M. An enhanced fear appeal rhetorical frame-
53. Kapp, KM. The Gamification of Learning and Instruction: Game-Based Methods and Strategies
54. Koepp, MJ; Gunn, RN; Lawrence, AD; Cunningham, VJ; Dagher, A; Jones, T; Brooks, DJ;
55. Kohn, A. Why incentive plans cannot work. Harvard Business Review, 71, 5 (1993), 54–60.
structural brain plasticity: Gray matter changes resulting from training with a commercial
57. Kumaraguru, P; Sheng, S; Acquisti, A; Cranor, LF; and Hong, J. Teaching Johnny not to fall
58. Li, M; Jiang, Q; Tan, C-H; and Wei, K-K. Enhancing user-game engagement through software
59. Liu, D; Santhanam, R; and Webster, J. Toward meaningful engagement: A framework for
60. Lowry, PB; Dinev, T; and Willison, R. Why security and privacy research lies at the centre of
61. Lowry, PB; Gaskin, J; and Moody, GD. Proposing the multimotive information systems
62. Lowry, PB; Gaskin, J; Twyman, N; Hammer, B; and Roberts, T. Taking “fun and games”
63. Lowry, PB and Moody, GD. Proposing the control-reactance compliance model (CRCM) to
64. Lowry, PB; Moody, GD; and Chatterjee, S. Using IT design to prevent cyberbullying. Journal
65. Lowry, PB; Posey, C; Bennett, RJ; and Roberts, TL. Leveraging fairness and reactance theories
66. Ma, Q; Pei, G; and Meng, L. Inverted u-shaped curvilinear relationship between challenge and
67. Martocchio, JJ and Judge, TA. Relationship between conscientiousness and learning in
68. Mathieu, JE; Martineau, JW; and Tannenbaum, SI. Individual and situational influences on
69. Meng, L; Pei, G; Zheng, J; and Ma, Q. Close games versus blowouts: Optimal challenge
70. Moody, GD; Lowry, PB; and Galletta, DF. It’s complicated: Explaining the relationship
71. Nelson, MJ. Soviet and American precursors to the gamification of work. Presented at
72. Nicholson, S. A recipe for meaningful gamification. Gamification in Education and Business.
73. Niehaves, B and Ortbach, K. The inner and the outer model in explanatory design theory:
74. Nunamaker, JF; Twyman, NW; Giboney, JS; and Briggs, RO. Creating high-value real-world
75. Nunamaker Jr, JF; Briggs, RO; Derrick, DC; and Schwabe, G. The last research mile:
76. Nunamaker Jr, JF; Chen, M; and Purdin, TD. Systems development in information systems
77. Nunamaker Jr., JF and Briggs, RO. Toward a broader vision for information systems. ACM
78. Osterloh, M and Frey, BS. Motivation, knowledge transfer, and organizational forms.
79. Payne, SC; Youngcourt, SS; and Beaubien, JM. A meta-analytic examination of the goal
80. Peffers, K; Gengler, CE; and Tuunanen, T. Extending critical success factors methodology to
81. Peffers, K; Tuunanen, T; Rothenberger, MA; and Chatterjee, S. A design science research
82. Pentland, SJ; Twyman, NW; Burgoon, JK; Nunamaker Jr, JF; and Diller, CB. A video-based
83. Posey, C; Roberts, TL; and Lowry, PB. The impact of organizational commitment on insiders’
84. Posey, C; Roberts, TL; Lowry, PB; Bennett, RJ; and Courtney, J. Insiders’ protection of
85. Robson, K; Plangger, K; Kietzmann, J; McCarthy, I; and Pitt, L. Understanding gamification
86. Rousseau, DM and Fried, Y. Location, location, location: Contextualizing organizational
87. Ryan, RM and Deci, EL. Self-determination theory and the facilitation of intrinsic motivation,
88. Sen, R; Subramaniam, C; and Nelson, ML. Determinants of the choice of open source
89. Shernoff, DJ; Kelly, S; Tonks, SM; Anderson, B; Cavanagh, RF; Sinha, S; and Abdi, B. Student
90. Silic, M and Back, A. Shadow IT–A view from behind the curtain. Computers & Security, 45,
91. Siponen, M and Vance, A. Neutralization: New insights into the problem of employee
92. Treiblmaier, H; Putz, L-M; and Lowry, PB. Setting a definition, context, and research agenda
93. Twyman, NW; Lowry, PB; Burgoon, JK; and Jay F. Nunamaker, J. Autonomous scientifically
94. Vance, A; Lowry, PB; and Eggett, D. Using accountability to reduce access policy violations in
95. Vance, A; Lowry, PB; and Eggett, D. A new approach to the problem of access policy
96. Venkatesh, V; Morris, MG; Davis, GB; and Davis, FD. User acceptance of information
97. Venkatesh, V and Speier, C. Computer technology training in the workplace: A longitudinal
98. Wakefield, RL and Whitten, D. Mobile computing: A user study on hedonic/utilitarian
99. Wang, J; Li, Y; and Rao, HR. Overconfidence in phishing email detection. Journal of the
100. Wang, J; Li, Y; and Rao, HR. Coping responses in phishing detection: An investigation of
101. Willison, R; Lowry, PB; and Paternoster, R. A tale of two deterrents: Considering the role of
102. Willison, R and Warkentin, M. Beyond deterrence: An expanded view of employee computer

About the Authors
Mario Silic (mario.silic@unisg.ch) is a post-doctoral researcher at the Institute of Information
Paul Benjamin Lowry (Paul.Lowry.PhD@gmail.com; corresponding author) is the Suzanne Parker
and supply chains. Dr. Lowry has published over 130 journal articles in Journal of Management
i.e., e + f = 1. In any time period X(t) + Y(t) ≤ 1. Therefore, in equilibrium X + Y = 1.

$$
\begin{aligned}
\frac{dX}{dt} &= \left[\, a - a(X+Y) + sY - cY - eZ \,\right] X \\
\frac{dY}{dt} &= \left[\, b - b(X+Y) - sX - dX - fZ \,\right] Y \\
\frac{dZ}{dt} &= \left[\, eX + fY - w \,\right] Z
\end{aligned}
\tag{7}
$$

readability.
equations do not yield general solutions. Thus, we next analyze this equation system by
identifying the system’s equilibrium points and then performing a stability analysis of
these equilibrium points.
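Although the system has no general closed-form solution, it is straightforward to integrate numerically. The sketch below uses a fourth-order Runge–Kutta step for Equation (7); all parameter values are illustrative stand-ins, not values from the paper.

```python
# Competition dynamics of Equation (7): X, Y are platform market shares, Z is the
# hacker population; e + f = 1 splits hacker attention between the two platforms.

def derivs(state, p):
    X, Y, Z = state
    dX = (p["a"] - p["a"] * (X + Y) + p["s"] * Y - p["c"] * Y - p["e"] * Z) * X
    dY = (p["b"] - p["b"] * (X + Y) - p["s"] * X - p["d"] * X - p["f"] * Z) * Y
    dZ = (p["e"] * X + p["f"] * Y - p["w"]) * Z
    return dX, dY, dZ

def rk4_step(state, p, h):
    k1 = derivs(state, p)
    k2 = derivs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), p)
    k3 = derivs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), p)
    k4 = derivs(tuple(s + h * k for s, k in zip(state, k3)), p)
    return tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(state, p, h=0.01, steps=20000):
    for _ in range(steps):
        state = rk4_step(state, p, h)
    return state

# Illustrative parameters (not from the paper); hackers target only X (e = 1, f = 0).
params = dict(a=0.5, b=0.36, c=0.64, d=0.19, s=0.45, e=1.0, f=0.0, w=0.3)
X, Y, Z = simulate((0.3, 0.3, 0.05), params)
```

Because each growth rate is multiplicative in its own state variable, the region X, Y, Z ≥ 0 with X + Y ≤ 1 is forward-invariant, which the integration respects.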
(Equation 7) near the equilibrium points, a linear model about the equilibrium point is
used. The linearized model about an equilibrium point $(x_e, y_e, z_e)$ is given as follows:

$$
\frac{d}{dt}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
a(1-2x_e) - (a+c-s)\,y_e - e z_e & -(a+c-s)\,x_e & -e x_e \\
-(b+d+s)\,y_e & b(1-2y_e) - (b+d+s)\,x_e - f z_e & -f y_e \\
e z_e & f z_e & e x_e + f y_e - w
\end{bmatrix}
\begin{bmatrix} X - x_e \\ Y - y_e \\ Z - z_e \end{bmatrix}
\tag{8}
$$
software platforms are fully secure, that is, they are impervious and thus not targeted by
hackers; (b) when only one software is secure, and thus only one software can be targeted
by hackers; and (c) when neither of the software platforms are completely secure, and thus
both are targeted by hackers.
eliminate all terms pertaining to the restraining effect of hacker activity. The system of
equations (see Equation 7) is thus reduced as follows:
$$
\begin{aligned}
\frac{dX}{dt} &= \left[\, a - a(X+Y) + sY - cY \,\right] X \\
\frac{dY}{dt} &= \left[\, b - b(X+Y) - sX - dX \,\right] Y
\end{aligned}
\tag{9}
$$
scenario:

Equilibrium 1 is $(x_e, y_e) = (0, 0)$;
Equilibrium 2 is $(x_e, y_e) = (1, 0)$ (i.e., monopoly of Software X);
Equilibrium 3 is $(x_e, y_e) = (0, 1)$ (i.e., monopoly of Software Y);
Equilibrium 4 is $(x_e, y_e) = \left( \dfrac{b(c-s)}{a(d+s) + \left[\,b+(d+s)\,\right](c-s)},\; \dfrac{a(d+s)}{\left[\,a+(c-s)\,\right](d+s) + b(c-s)} \right)$ (i.e., competitive

The reduced linear model for this system (from Equation 8) is then given as:
$$
\frac{d}{dt}\begin{bmatrix} X \\ Y \end{bmatrix} =
\begin{bmatrix}
a(1-2x_e) - (a+c-s)\,y_e & -(a+c-s)\,x_e \\
-(b+d+s)\,y_e & b(1-2y_e) - (b+d+s)\,x_e
\end{bmatrix}
\begin{bmatrix} X - x_e \\ Y - y_e \end{bmatrix}
$$
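A 2×2 linear system is locally stable exactly when its Jacobian has negative trace and positive determinant, so the stability of each Scenario 1 equilibrium can be checked directly from the matrix above. The values of b, c, d, and s below follow the numerical illustration later in the text; a is not given in this excerpt, so a = 0.5 is an assumed stand-in.

```python
# Local stability of the Scenario 1 equilibria via the linearized model above.

def jacobian(xe, ye, p):
    a, b, c, d, s = p["a"], p["b"], p["c"], p["d"], p["s"]
    return [
        [a * (1 - 2 * xe) - (a + c - s) * ye, -(a + c - s) * xe],
        [-(b + d + s) * ye, b * (1 - 2 * ye) - (b + d + s) * xe],
    ]

def is_stable(xe, ye, p):
    """Stable iff trace < 0 and determinant > 0 (both eigenvalues in the left half-plane)."""
    (j11, j12), (j21, j22) = jacobian(xe, ye, p)
    return (j11 + j22) < 0 and (j11 * j22 - j12 * j21) > 0

p = dict(a=0.5, b=0.36, c=0.64, d=0.19, s=0.45)  # a is assumed

# Equilibrium 4 (competitive co-existence); both denominators reduce to the same value.
den = p["a"] * (p["d"] + p["s"]) + p["b"] * (p["c"] - p["s"]) + (p["c"] - p["s"]) * (p["d"] + p["s"])
x4 = p["b"] * (p["c"] - p["s"]) / den
y4 = p["a"] * (p["d"] + p["s"]) / den
```

With these values, both monopolies (Equilibria 2 and 3) come out locally stable and the interior Equilibrium 4 is a saddle, consistent with the stability discussion that follows.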
ized in Table 2. As one can see from the stability analysis, the long-term stable equilibria
result in a monopoly market (Equilibrium 2 or 3). From Table 2, we also notice that if the
amount of user switching to one software platform overcomes the restraining effect of the
competitor (s > c or s < −d), then the competitor cannot have a monopoly. Otherwise
(−d < s < c), the initial market share will dictate the monopoly outcome. Given a market
share of each software platform at the initial condition (say, Y(0)), there is a particular tipping
market-share point for software X(0) either to win and monopolize the market, or to get
obliterated. The only equilibrium in which both of the software platforms can co-exist in the
market is Equilibrium 4. However, this equilibrium is unstable, and therefore unsustainable,
mainly due to the destabilizing restraining factors from the competitor software.
gated these issues at length. For example, in markets with a network effect and incompatible
software platforms (as is the case in this study), there is a natural tendency towards de facto
standardization, which means that everyone tends to use the same software platform [26]. This
resulting monopoly can be explained by “tipping,” which is the tendency of one system to gain
substantial market share relative to its competition once it has gained an initial edge.
b = 0.36, c = 0.64, d = 0.19, s = 0.45) and initial market share Y(0) = 0.86 results in
a tipping value of 0.25 for X(0). In static competition models, this tipping phenomenon is
reflected in equilibria in which a single system dominates [26]. In dynamic competition
models, tipping may occur once a rival system is introduced or accepted in the marketplace [15, 26]. Consumer
heterogeneity and product differentiation tend to limit this tipping phenomenon and to
sustain multiple networks. If the rival systems have distinct features sought by certain
consumers, two or more systems may be able to survive by catering to consumers who
care more about product attributes rather than about network size. In this situation,
market equilibrium with multiple incompatible products reflects the social value of
variety. However, in our case, we assume the competing software platforms offer similar
functionality, features, and usability to the users. As a result, in the absence of malicious
hackers, monopoly is the most likely long-term equilibrium outcome in the market.
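The tipping point itself can be located numerically by bisecting on X(0) and checking which platform survives a long simulation. This is a sketch under assumed parameter values satisfying −d < s < c (the bistable case); the threshold it reports is illustrative only and is not the 0.25 value cited above, which belongs to this study's own parameterization:

```python
# Locate the tipping point in X(0) by bisection on simulated outcomes.
# Parameters satisfy -d < s < c (bistable case); all values are assumptions.
a, b, c, d, s = 0.5, 0.4, 0.3, 0.2, 0.1

def final_shares(x0, y0, dt=0.01, steps=20000):
    """Integrate the Scenario 1 dynamics and return terminal shares."""
    X, Y = x0, y0
    for _ in range(steps):
        dX = (a - a*(X + Y) + s*Y - c*Y) * X
        dY = (b - b*(X + Y) - s*X - d*X) * Y
        X, Y = X + dt*dX, Y + dt*dY
    return X, Y

def tipping_point(y0, lo=0.01, hi=0.99, iters=16):
    """Smallest initial share X(0) from which X ends up dominating, given Y(0) = y0."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        X, Y = final_shares(mid, y0)
        if X > Y:      # X ends up dominating from this entry point
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(round(tipping_point(0.5), 3))
```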
from (Equation 7) the hacking activity terms for one of the software platforms. Without
loss of generality, we assume X is targeted by the hackers. Hence, we assume that
e ¼ 1; f ¼ 0, which gives the following reduced set of equations (see Equation 7):
$$\frac{dX}{dt} = \left[ a - a(X+Y) + sY - cY - Z \right] X,$$
$$\frac{dY}{dt} = \left[ b - b(X+Y) - sX - dX \right] Y,$$
$$\frac{dZ}{dt} = \left[ X - w \right] Z. \tag{10}$$
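A forward-Euler sketch of these equations illustrates convergence to Equilibrium 5, in which the hackers hold the unsecure platform X at market share w while Y is driven out. All parameter values and initial conditions are illustrative assumptions, not the paper's calibration:

```python
# Forward-Euler sketch of the Scenario 2 dynamics (hackers target X only).
# Parameters and initial conditions are illustrative assumptions.
a, b, c, d, s, w = 0.5, 0.3, 0.2, 0.2, 0.4, 0.6
dt, steps = 0.01, 60000

X, Y, Z = 0.5, 0.2, 0.15
for _ in range(steps):
    dX = (a - a*(X + Y) + s*Y - c*Y - Z) * X
    dY = (b - b*(X + Y) - s*X - d*X) * Y
    dZ = (X - w) * Z
    X, Y, Z = X + dt*dX, Y + dt*dY, Z + dt*dZ

# Equilibrium 5 predicts (w, 0, a*(1 - w)) = (0.6, 0.0, 0.2)
print(round(X, 2), round(Y, 2), round(Z, 2))
```

The X-Z pair behaves like a damped predator-prey system here: hacker activity Z rises whenever X exceeds w, pulling X back toward w.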
$$\frac{d}{dt}\begin{Bmatrix} X - x_e \\ Y - y_e \\ Z - z_e \end{Bmatrix} = \begin{bmatrix} a(1-2x_e) - (a+c-s)y_e - z_e & -(a+c-s)x_e & -x_e \\ -(b+d+s)y_e & b(1-2y_e) - (b+d+s)x_e & 0 \\ z_e & 0 & x_e - w \end{bmatrix} \begin{Bmatrix} X - x_e \\ Y - y_e \\ Z - z_e \end{Bmatrix}$$
The first four equilibrium points are shared with Scenario 1 (with the hacker state at zero). However, the malicious hackers’ activities
influence the stability of some of the common equilibrium points. The equilibrium values
for the Scenario 2 system of equations (Equation 10) now are as follows.
Equilibrium 1 is (x_e, y_e, z_e) = (0, 0, 0);
Equilibrium 2 is (x_e, y_e, z_e) = (1, 0, 0) (i.e., monopoly of Software X);
Equilibrium 3 is (x_e, y_e, z_e) = (0, 1, 0) (i.e., monopoly of Software Y);
Equilibrium 4 is
$$(x_e, y_e, z_e) = \left( \frac{b(c-s)}{a(d+s) + (b+d+s)(c-s)},\ \frac{a(d+s)}{a(d+s) + (b+d+s)(c-s)},\ 0 \right)$$
(i.e., a competitive market without hacker activity);
Equilibrium 5 is (x_e, y_e, z_e) = (w, 0, a(1 − w)) (i.e., monopoly of Software X, with hackers remaining in the market);
Equilibrium 6 is
$$(x_e, y_e, z_e) = \left( w,\ \frac{w(\delta - s)}{b},\ -(1-w)(c-s) + \frac{w(a+c-s)(d+s)}{b} \right)$$
(i.e., a competitive market with hackers present).
We examine the stability of the above equilibrium values in Appendix B. Based on this analysis, the stability of the equilibriums is determined by the value of the switching constant s relative to two factors: the restraining factor c of software Y on the growth of X; and the modified growth δ of software Y in the presence of malicious hackers, where δ is defined as δ = b(1/w − 1) − d. The modified growth δ thus depends on the intrinsic growth rate of software Y, the restraining effect d of software X, and the rate at which malicious hackers leave the market (i.e., w). Since the two factors (i.e., c and δ) are independent, the analysis of the various equilibriums can be divided into two major groups: (1) s > δ; and (2) s < δ. The results for both groups are summarized in Table 3 and Table 4. We first analyze the two cases in Table 3, followed by the analysis of Case 3 and Case 4 provided in Table 4.
Unlike in Scenario 1 (where Equilibrium 2 allowed Software X to take the whole market to itself), in this scenario, Equilibrium 2 has become unsustainable for all conditions. This outcome is the direct result of the hackers’ activity and the vulnerability of the
platform X. Instead, we have Equilibrium 5 (i.e., monopoly of the unsecure software X)
but with market share less than the maximum potential market size that is sustainable (see
Case 1 in Table 3). This equilibrium is the only stable equilibrium when the switching rate
(from Y to X) is greater than the modified growth rate of software Y (s > δ) and the
restraining effect of Y on the growth of X is less than the switching rate from Y to X (c< s).
What this means is that (a) users are leaving Y and moving to X at a higher rate than new
users are joining Y, and (b) Y’s restraining effect on the growth of X is not strong enough
to overcome the number of users of Y switching to X. In short, software X has a higher
intrinsic growth rate and is attracting more users from software Y than the other way
around. As a result, software X tends to end up in a monopoly market state (in the long
run) despite being an unsecure software platform. Note that, in this case, it does not
matter whether software X is an early or late entrant.
Consider the competition between Microsoft and Apple in the desktop operating systems market in the 1990s. MS Windows was perceived
to be the less secure of the two operating systems. As per our results, the other determinant of this outcome is the relationship between Apple’s restraining effect on the growth
of the Windows OS and the rate at which users switched from Apple to the Windows OS.
The equilibrium in which the Windows OS dominates the market is feasible and stable
irrespective of the aforementioned relationship. If we assume that Apple’s restraining
effect on the growth of the Windows OS was less than the rate at which users switched
from Apple to the Windows OS, then Equilibrium 5 in Case 1 (Table 3) represents the
outcome. On the other hand, if we assume that Apple’s restraining effect on the growth of
the Windows OS was more than the rate at which users switched from Apple to the
Windows OS, then Equilibrium 5 in Case 2 (Table 3) again represents the outcome. This
leads us to the discussion of Case 2.
In Case 2, the secure software Y can monopolize the market (Equilibrium 3). Just like in Scenario 1, the stability of this equilibrium, in which there
is a monopoly of Y (Equilibrium 3), depends upon the restraining effect of software
Y on the growth of software X being more dominant than any switching effect from
Y to X (c > s). Simply put, Y is able to curtail the growth of X despite some users
switching from Y to X. Note that in this case, there is another feasible and stable
equilibrium, that is, Equilibrium 5 (Table 3). As per our analysis, both Equilibrium 3
and Equilibrium 5 are stable and exhibit the “tipping” phenomenon based on the
relative initial market share. Figure 2 shows simulation results for various initial market
conditions, with a set of system parameters given as: (a = 0.33, b = 0.9, c = 0.4, d = 0.23,
s = 0.317, e = 1, f = 0, w = 0.828, δ = −0.04).
In these examples, more users switched from Software Y to Software X in spite of X being a late entrant.
However, the dynamics of hackers’ activities ultimately resulted in X losing some market
share and finally stabilizing at Equilibrium 5. The results in Figure 2a show that a slow
buildup of hackers’ activities allowed software X to reach and capture the whole market
for some duration (Equilibrium 2) before stabilizing at Equilibrium 5. In Figure 2b, the
high initial market share of software X encourages early hacker activities to build up,
resulting in reaching the equilibrium quicker. In both simulations, Software Y does not
survive, whereas Software X dominates the market.
the one that is more likely to dominate the market in the long run, whereas the secure
software is driven out of the market in the long run. This long-run result holds even when
Figure 2. Simulated market share trajectories (X, Y, and Z versus time) for various initial conditions: Examples 1 and 2 show convergence to Equilibrium 5, while Example 3 shows convergence to Equilibrium 3.
(i.e., X) at t = 0 (see Figure 2-Example 1). This outcome is the result of a positive
value of the switching parameter (i.e., s). What could justify the high switching rate from
Y to X despite X being less secure? Anderson [4] suggests one reason could be the lack of
knowledge on the part of the software users about quality aspects (e.g., security) of
X and Y. Furthermore, under such conditions of information asymmetry, developers of
software X have a minimal incentive to spend resources on making more secure software.
Despite this, software X can become the dominant software platform by focusing on
marketing and promotional activities that encourage the users of Y to switch to X. The
result suggests that given a choice between investing in developing more secure software,
or investing in marketing campaigns to encourage users of the competing software to
switch, the software vendor would be better off by doing the latter. However, early entry of
software Y and a relatively large initial market share will allow software Y to overcome the
switching factor and monopolize the market (Equilibrium 3, as shown in Figure 2c).
In Case 3, while the switching rate is less than the modified growth rate of Y (i.e., s < δ), the restraining effect of Y on the growth of X is also relatively
weak (i.e., c< s). In contrast, for Equilibrium 6, where two competing software platforms
co-existing in an equilibrium is feasible, the equilibrium’s stability is governed by various
parameter values. If stable, Equilibrium 6 is the only equilibrium in this scenario where
the hacker’s activity results in the software market being shared by both competing
software platforms in a steady state. If unstable, then the hacker activity results in an
alternative dynamic state where the two software platforms coexist together; however, their
market share is always in flux. Figure 3 shows two examples of market share and market
dynamics phase plots for both situations, using the following market parameters:
In Example 1 in Figure 3, the market dynamics stabilize at Equilibrium 6, irrespective of the entry point or initial market share of the
two software platforms.
The hackers’ persistence restricts the market share of software X: the longer the hackers stay, the smaller the market share of software X. This outcome makes sense since the hackers are only targeting X. On the other
hand, the market share of the secure software Y depends on its intrinsic growth (i.e., b). In
Example 2 in Figure 3, where no feasible equilibrium exists, the market enters into a stable
limit cycle irrespective of market entry point for the two software platforms. To break out
of this cycle, X and Y will have to take steps such as investing in marketing and promotion
to influence their intrinsic growth rates, discourage users from switching to the competi-
tion, and encourage hackers to become mainstream stakeholders in the software market
(e.g., security consultants, vulnerability researchers).
In Case 4, software X relies on stealing users through switching from the other software (i.e., Y). If the switching support to
software X is further reduced, for example through aggressive marketing by software Y,
(i.e., δ > s; c > s), the existence of X becomes tenuous. However, hacking activity makes it
possible that the two software platforms still can coexist. In this case, there are two
possible situations: (a) only Equilibrium 3 (i.e., monopoly of Y) is stable; and (b) both
Equilibriums 3 and 6 are stable. As in Case 3, the unsecure software can survive in both
situations, but unlike in Case 3 its survival is not guaranteed as it depends upon
a favorable entry point and limited hacking activity. Figure 4 and Figure 5 show different
examples of market outcome for the two situations defined by the following model
parameters:
Situation B: a = 0.33, b = 0.54, c = 0.4, d = 0.34, s = 0.357, e = 1, f = 0, w = 0.37586
Figure 4 demonstrates three examples for Situation A. Example 1 shows unsecure software X surviving in a market-flux state given a favorable entry point, whereas an unfavorable entry point (Example 2 in Figure 4) or a slightly unfavorable initial market share (Example 3 in Figure 4) results in software Y monopolizing the market. That is, only Equilibrium 3 is feasible and stable in Situation A.
In Situation B (Figure 5), the entry points of software X and Y, and the level of hacker activity, dictate where the market dynamics converge and
which software platform dominates the market. In Example 1, both X and Y coexist in the
market (i.e., Equilibrium 6), while in Example 2, Y ends up monopolizing the market in
the long run (i.e., Equilibrium 3).
In Scenario 3, both of the competing software platforms are targeted by the malicious hackers. The set of equations representing this scenario is shown in Equation 11:
$$\frac{dX}{dt} = \left[ a - aX - aY + sY - cY - eZ \right] X,$$
$$\frac{dY}{dt} = \left[ b - bX - bY - sX - dX - fZ \right] Y,$$
$$\frac{dZ}{dt} = \left[ eX + fY - w \right] Z. \tag{11}$$
where the restraining parameters e and f capture the hackers’ targeting of software X and Y, respectively, and w is the rate at which the malicious hackers leave the market.
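The monopoly-with-hackers equilibria of this scenario have simple closed forms that can be verified by substituting them back into the rate equations of Equation 11. The sketch below checks stationarity of Equilibrium 5, (w/e, 0, a(e − w)/e^2), and of its counterpart for Software Y, (0, w/f, b(f − w)/f^2); the parameter values are illustrative assumptions:

```python
import numpy as np

# Right-hand side of the Scenario 3 system (hackers target both platforms).
# Parameter values are illustrative assumptions.
a, b, c, d, s = 0.5, 0.3, 0.2, 0.2, 0.1
e, f, w = 0.8, 0.6, 0.5

def rhs(X, Y, Z):
    dX = (a - a*X - a*Y + s*Y - c*Y - e*Z) * X
    dY = (b - b*X - b*Y - s*X - d*X - f*Z) * Y
    dZ = (e*X + f*Y - w) * Z
    return np.array([dX, dY, dZ])

# Monopoly-with-hackers equilibria (derived by setting each rate to zero)
eq5 = (w/e, 0.0, a*(e - w)/e**2)   # X monopoly, held below full market share
eq7 = (0.0, w/f, b*(f - w)/f**2)   # Y monopoly, held below full market share
print(rhs(*eq5), rhs(*eq7))  # both ~ [0, 0, 0]
```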
The reduced linear model for this scenario is:
$$\frac{d}{dt}\begin{Bmatrix} X - x_e \\ Y - y_e \\ Z - z_e \end{Bmatrix} = \begin{bmatrix} a(1-2x_e) - (a+c-s)y_e - e z_e & -(a+c-s)x_e & -e x_e \\ -(b+d+s)y_e & b(1-2y_e) - (b+d+s)x_e - f z_e & -f y_e \\ e z_e & f z_e & e x_e + f y_e - w \end{bmatrix} \begin{Bmatrix} X - x_e \\ Y - y_e \\ Z - z_e \end{Bmatrix}$$
The equilibrium values for this system of equations are as follows.
Equilibrium 1: (x_e, y_e, z_e) = (0, 0, 0);
Equilibrium 2: (x_e, y_e, z_e) = (1, 0, 0) (i.e., monopoly of Software X);
Equilibrium 3: (x_e, y_e, z_e) = (0, 1, 0) (i.e., monopoly of Software Y);
Equilibrium 4:
$$(x_e, y_e, z_e) = \left( \frac{b(c-s)}{a(d+s) + (b+d+s)(c-s)},\ \frac{a(d+s)}{a(d+s) + (b+d+s)(c-s)},\ 0 \right)$$
(i.e., a competitive market without hacker activity);
Equilibrium 5: (x_e, y_e, z_e) = (w/e, 0, a(e − w)/e^2) (i.e., monopoly of Software X, with the hackers remaining in the market);
Equilibrium 6: the interior solution of e x_e + f y_e = w, a − a x_e − (a + c − s) y_e − e z_e = 0, and b − (b + d + s) x_e − b y_e − f z_e = 0 (i.e., a competitive market in which the hackers target both X & Y);
Equilibrium 7: (x_e, y_e, z_e) = (0, w/f, b(f − w)/f^2) (i.e., monopoly of Software Y, with the hackers remaining in the market).
of Software X results in making Equilibrium 2 always unstable, the simultaneous targeting of Software Y can similarly destabilize Equilibrium 3 under sufficiently persistent hackers’ activity.
As the model suggests, the basic nature of the various equilibrium points remains similar to those
equilibria obtained in Scenario 1 and Scenario 2. This observation is not surprising since
these scenarios are special cases of Scenario 3. For example, in Scenario 2 we saw that the
unsecure software X introduced an equilibrium point (i.e., Equilibrium 5 in Scenario 2)
corresponding to a case of market monopoly without capturing the full market share.
In this most general scenario, a similar equilibrium point is expected for Y and is defined
by Equilibrium 7. However, the simultaneous presence of the restraining effect of hackers
for both software platforms slightly modifies Equilibrium 5 (and its counterpart
Equilibrium 7) from Scenario 2. The restraining parameters (“e” and “f”) also modify
Equilibrium 6, which remains the most interesting equilibrium, as it can be a feasible and
stable equilibrium with both of the software platforms co-existing. Thus, in this scenario,
we expect similar situations.
unstable. All of the parameters that define the software market affect the dynamics of the
market near the last three equilibrium points (Equilibriums 5 to 7), making the corre-
sponding eigenvalues complex functions of these parameters. Thus, the stability of these
equilibrium points varies based on the values of the system parameters. Furthermore, not
all equilibriums are always feasible, as the equilibrium point may move outside of the
scope of variables. As a result, it is difficult to define the scope of parameters for the
feasibility and stability of these equilibriums using an analytical approach. Therefore, we
use a numerical approach [5] to test the stability, by using three examples corresponding
to different values of the parameters (see Appendix C). The results are summarized in
Table 5.
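A numerical stability test of the kind summarized in Table 5 can be sketched by approximating the Jacobian with central finite differences at an equilibrium and inspecting its eigenvalues. The parameter values below are illustrative assumptions, not the three examples of Appendix C:

```python
import numpy as np

# Numerical stability test: finite-difference Jacobian at an equilibrium.
# Parameter values are illustrative assumptions, not the paper's examples.
a, b, c, d, s = 0.5, 0.3, 0.2, 0.2, 0.1
e, f, w = 0.8, 0.6, 0.5

def rhs(v):
    X, Y, Z = v
    return np.array([
        (a - a*X - a*Y + s*Y - c*Y - e*Z) * X,
        (b - b*X - b*Y - s*X - d*X - f*Z) * Y,
        (e*X + f*Y - w) * Z,
    ])

def num_jacobian(v, h=1e-6):
    """Central-difference Jacobian (exact here: each rate is quadratic)."""
    J = np.zeros((3, 3))
    for j in range(3):
        dv = np.zeros(3)
        dv[j] = h
        J[:, j] = (rhs(v + dv) - rhs(v - dv)) / (2 * h)
    return J

# Test Equilibrium 5: X monopoly held at w/e by hacker activity
eq5 = np.array([w/e, 0.0, a*(e - w)/e**2])
eigs = np.linalg.eigvals(num_jacobian(eq5))
print(np.round(eigs.real, 4))  # all negative => Equilibrium 5 locally stable
```

The same routine can be pointed at any other equilibrium, which is convenient when, as noted above, the eigenvalues are complex functions of the parameters and analytical scoping is impractical.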
0.44828, c = 0.4, d = 0.23, s = -0.37931, e = 0.11, f = 0.89, w = 0.1669 (See Appendix C,
Figure C1).
The most interesting observation is that a truly competitive market (i.e., Equilibrium 6, in which both X and Y have positive market share) is both feasible and
stable, under certain conditions. For this competitive market to evolve as the equilibrium,
the software market should be characterized as follows. The probability of a user choosing
X (Y) is less than that of choosing Y (X), more X (Y) users are switching to Y (X) than vice
versa, and the likelihood of a hacker targeting Y (X) is significantly greater than that of
targeting X (Y). While we don’t have empirical data to test these conditions, we do have
anecdotal evidence for such a market. These conditions are present in the mobile
Table 5. Feasibility and Stability of the Equilibriums in Three Numerical Examples

                 Example 1          Example 2          Example 3
                 Feasible  Stable   Feasible  Stable   Feasible  Stable
Equilibrium 2    Yes       No       Yes       No       Yes       No
Equilibrium 3    Yes       No       Yes       No       Yes       No
Equilibrium 4    No        Yes      Yes       No       No        No
Equilibrium 5    Yes       No       Yes       Yes      Yes       Yes
Equilibrium 6*   No        No       Yes       Yes      Yes       No
Equilibrium 7    Yes       Yes      Yes       No       No        No

*The competitive market equilibrium; in Example 2 the equilibrium is both Feasible & Stable.

These conditions are present in the mobile operating systems market, where Android and iOS,
together, account for more than 99 percent of the market share.5 So this market closely
resembles the one modeled in this study. Furthermore, we will represent iOS with X and
Android with Y. Android is available on more mobile devices than iOS and Android based
products come in a wider price range than iOS based products. Therefore, it is reasonable
to assume that the probability of a user choosing X (iOS) is less than that of choosing
Y (Android). A search for vulnerabilities in iOS and Android in the National Vulnerability
database (NVD)6 shows that between January 2006 and April 2018, 2587 vulnerabilities
were discovered in iOS, and 4676 vulnerabilities were discovered in Android. According
to Kaspersky Lab, vulnerabilities are frequently exploited in successful cyberattacks
(Kaspersky Lab 2017).7 Also, since the Android platform is more open compared to
iOS, it is slightly more vulnerable to hacks and cyber threats. Therefore, it is safe to
assume that hackers are more likely to target Android (Y) than iOS (X). If we use loyalty
of the software’s users as a surrogate measure for their likelihood to switch to the
competing platform, then some trade publications suggest that Android (Y) users are
more loyal than iOS (X) users.8,9 Therefore, it is safe to assume that more iOS (X) users are switching to Android (Y) than vice versa. As a result, both Android and iOS still exist in
the mobile marketplace. Figure 6 shows the US market shares of Android and iOS.
If we instead assume that, rather than Android being more at risk than iOS, both face the same or
approximately the same risk from hackers, then Android is more likely to emerge as the
dominant player in the mobile operating systems market. This trend is already evident in
the global market for mobile operating systems, as demonstrated by the outcome in
Figure 7.
Malicious hackers pose a persistent security challenge. As such, these malicious hackers have become an integral part of the software
ecosystem that also consists of software developers and users. Prior research largely
focuses on low-level drivers of technology bugs, related malicious hacker exploits, and
protection against or remediation of their impacts. However, no research investigates the
higher-level strategic impact of the presence of malicious hackers and their activities on
the long-term structure of markets for software products, systems, and platforms. This
study fills this gap, by studying whether the presence of malicious hackers within software
markets is necessarily a bad thing.
We model the activities of malicious hackers as a restraining effect on the targeted software’s rate of change in market share. We
incorporate factors related to network effects and consumer switching between competing
software platforms. We investigate the competition in this software market both in the
absence of, and in the presence of, malicious hacker activities. Using numerical analysis
and simulation, we find that in many situations, the malicious hacker activities make it
possible for multiple competing platforms to co-exist together.
Without hackers, the software market is likely to become a monopoly in the long term. When malicious hackers are present and they predominantly target only one of the competing platforms, the market is again more likely to tip toward a monopoly; notably, the targeted platform may still end up dominating, as MS Windows did in the market for desktop operating systems in the 1990s, even when it was the platform most frequently targeted by malicious hackers. In this scenario, a market consisting of both competing software platforms is feasible and stable. However, the stability
of this market structure depends on the number of users switching from one software to
another software, and the number of active malicious hackers targeting one of the
competing software platforms. When the malicious hackers target both of the competing software platforms, the model yields a competitive equilibrium (i.e., Equilibrium 6) and modified monopoly equilibriums (i.e., Equilibriums 5 and 7) that
result in various complex market dynamics. We observe that, due to the hackers’ presence,
the two competing software platforms can coexist: (a) in a stable state at Equilibrium 6
irrespective of entry point and initial conditions (e.g., the market for mobile operating
systems); (b) in stable cyclic or market flux conditions irrespective of entry point and
initial conditions; (c) in an entry point and initial conditions dependent stable state at
Equilibrium 6 — where unfavorable initial conditions can tip the market towards mono-
poly equilibriums (Equilibrium 5 and 7); and (d) in an entry point and initial conditions
dependent cyclic or flux state – where the unfavorable initial conditions can tip the market
toward monopoly equilibriums (Equilibriums 5 and 7).
First, the results add to our understanding of competition in software markets. The findings
illustrate the important, even though unintended, consequence of the presence of mal-
icious hackers in the software ecosystem. That is, malicious hackers can foster competition
among software vendors. Second, given these theoretical implications, managers can take
into account the presence of malicious hackers in their markets, while making policies
regarding software development and technology management investment decisions. For
example, in a case where malicious hackers target only one of two competing software
platforms, the software targeted can still end up monopolizing the market as long as its
vendor invests sufficiently in campaigns that encourage more users to switch to using the
non-secure software. Third, from a regulatory policy perspective, the results should
encourage a balanced debate regarding the pros and cons of malicious hacker activities.
Malicious hackers are generally viewed as harmful to the software industry ecosystem and software markets. Yet, there are some hacker advocates,
particularly in the open source community, who believe hackers often do more good than
harm, by drawing our attention to security flaws in popular software. This study illustrates
another (albeit unintended) benefit of hacker activities. The study findings show that by
encouraging competition among software vendors, the hackers provide software platform
users with more choices, and therefore any other benefits associated with more choices in
the marketplace. Therefore, before making ad hoc policy, such as making all hacker
activities completely illegal, a more calibrated approach may be needed.
Our finding of an unintended benefit of malicious hacking should not be taken as support for all such activities. Some hacker
activities can be classified as malicious (e.g., because the target user has not given the
hacker permission to engage in such activity) but harmless. For example, the intentions of
the malicious hacker could be taken into account in any law, and if it can be proved that
the intentions are to cause harm (e.g., identify theft, data breach, and financial fraud), then
legal ramifications should be more severe. However, hacking activities that do not result in
harm to individuals, organizations, and nations should be treated less severely. For
example, activities of hackers, commonly referred to as Grey Hats, that is, those hackers
who discover vulnerabilities in a system without the owner’s permission or knowledge,
and then report their findings to the owner, vendor of the vulnerable software/system, bug
Database12) or other such forums (e.g. CVE13), should not be treated on par with
malicious (or Black Hat) hackers. However, the Computer Fraud and Abuse Act
(CFAA), a 1980s-era law originally designed to punish and deter intrusions into govern-
ment and financial-industry computer systems — the main federal law still used today to
punish hackers — is often applied to all hacking activities, irrespective of the intention of
the hacker. For example, many legal and popular publications have debated the legal
outcomes regarding the conviction of Andrew Auernheimer for exposing 114,000 email addresses of AT&T iPad customers, obtained via a vulnerability in the AT&T website, the prosecution of Aaron Swartz for downloading (without subscription) JSTOR research articles, and the conviction of Matthew Keys for providing a password to a Los Angeles Times website
account [23]. Those parties in favor of modifying the CFAA law (e.g., the Electronic
Frontier Foundation) are however not having much success in changing policy. In fact, the
U.S. federal government’s intention of doubling down on its policy efforts to curb
cybercrime via CFAA was evident in President Obama’s 2015 State of the Union
Address. This study shows that an indiscriminate policy of targeting all hacker activities
under the CFAA law is not necessarily consistent with good public policy. More specifi-
cally, the sentencing guidelines under CFAA are very strict and very broadly defined.
While vandalizing physical property usually carries some fine, a jail sentence of a few
weeks, and/or community service,14 vandalizing a website could result in a jail sentence of
several years! This study provides analytical support to those who argue that CFAA
policies, especially with respect to its sentencing guidelines, should be modified to protect
the interests of software security researchers and ethical hackers. Such a policy update
should include specific guidelines on vulnerability research and disclosure such that
ethical hackers do not break the law. For example, if a hacker discovers a vulnerability, informs the vendor of the software product first, and then discloses the vulnerability to the public, then he or she should not be prosecuted. However, currently under the CFAA, even the act of discovering a software vulnerability could be a criminal activity [7].
The software industry, despite bearing the costs of malicious hacking activities, has come to accept the presence of hackers as a key stakeholder in the
software ecosystem. Many industry vendors and users actually have taken steps already to
reduce their risk from unethical hacking by encouraging ethical hacking. Ethical hacking activities will result in more secure software and thereby reduce the risks from unethical hacking. For example:
- Vulnerability reward programs (VRP) encourage hackers to discover and responsibly disclose vulnerabilities [1, 16, 41].
- Hacking competitions (e.g., Pwn2Own) help vendors identify vulnerabilities in their products.
- Responsible disclosure mechanisms disseminate knowledge of discovered vulnerabilities [e.g., 20].
- Consulting firms staffed with ethical hackers audit the security of client systems; in fact, all leading consulting firms have cybersecurity divisions that provide pen-testing services.
- Bug-finder loyalty program bonuses reward the contributions of individual ethical hackers.15
For both software-producing and software-using firms, the managers indeed can benefit from reduced hacking activities by unethical hackers. These ongoing policy
changes within select leading-edge firms illustrate the potential implications of our
modeling exercise. If one or more firms are successful in eliminating attacks from
unethical hackers, then Scenario 1, where no malicious hacking exists, or Scenario 2,
where not all of the competition is targeted, comes into play. In both these scenarios, the
firms competing in various software market segments would benefit from a lack of
competition. However, this is not necessarily a good outcome for the software users.
These scenarios leave the users less well off because: (a) lack of competition is not good for
consumers; and (b) in case of Scenario 2, they risk cyberattacks from unethical hackers.
Like all analytical modeling studies, this study has limitations that provide opportunities for additional study of related issues. As analytical
studies are built on specific modeling assumptions, researchers might always improve the
rigor of findings by examining different modeling frameworks and different assumptions.
While several categories of hacker attacks are known, in this manuscript we
view malicious hacker attacks as a generic construct, and do not disentangle the effects of
different hacking types. Given we are modeling industry level outcomes, we also are
unable to incorporate into our model variables such as the attack severity. Perhaps future
researchers can extend the work here to examine how different types of hacker attacks,
severity of attacks, and benefits/harms of attacks, among other issues, may affect the
competitive software market structure. Finally, while we focus on long-term equilibriums
of the software market, future research might extend the examination to focus on
(a) short-run impacts of hacker actions on software market competition and (b) the
associated social welfare implications resulting from a competitive software market.
1. https://us.norton.com/internetsecurity-emerging-threats-what-is-the-difference-between-
2. https://cybermap.kaspersky.com/
3. https://investor.yahoo.net/releasedetail.cfm?ReleaseID=990570
4. https://www.consumer.ftc.gov/blog/2017/09/equifax-data-breach-what-do
5. https://www.gartner.com/newsroom/id/3859963
6. https://nvd.nist.gov/
7. https://securelist.com/exploits-how-great-is-the-threat/78125/
8. http://www.applemust.com/are-android-users-really-more-loyal-than-iphone-users/
9. https://appleinsider.com/articles/18/03/08/survey-calls-android-buyers-more-loyal-but-more-
10. There is one data point for Blackberry/Microsoft because of their short partnership at that
11. https://hackerone.com/bug-bounty-programs
13. https://cve.mitre.org/
14. http://www.criminaldefenselawyer.com/crime-penalties/federal/Vandalism.htm
15. https://www.united.com/web/en-US/content/contact/bugbounty.aspx
International Journal of Computer, Electrical, Automation, Control and Information
Engineering, 8, 3 (2014), 480–490.
quality. Management Science, 52, 3 (March 2006), 465–471.
analysis of liability policies in network environments. Management Science, 57, 5 (May 2011),
934–959.
Computer Security Applications Conference (ACSAC’01), IEEE Computer Society, December,
2001. https://www.acsac.org/2001/papers/110 , (accessed March 12, 2017).
Equations Approach using Maple and MATLAB. Boca Raton, FL: CRC Press, 2002, pp.
99–109.
competition. Economic Systems, 28, 4 (December 2004), 369–381.
Thursday 29 May 2014. https://www.theguardian.com/technology/2014/may/29/us-
cybercrime-laws-security-researchers.
econometric analysis of the spreadsheet market. Management Science, 42, 12 (December
1996), 1627–1647.
mechanisms to disseminate vulnerability knowledge. IEEE Transactions on Software
Engineering, 33, 3 (March 2007), 171–185.
announcements on market value: capital market reactions for breached firms and internet
security developers. International Journal of Electronic Commerce, 9, 1 (2004), 69.
Where’s The CSI Cyber Team When You Need Them? CBS.com, March 3, 2015. http://
www.cbs.com/shows/csi-cyber/news/1003888/these-cybercrime-statistics-will-make-you-
think-twice-about-your-password-where-s-the-csi-cyber-team-when-you-need-them
-/(accessed November 2, 2016).
technology platforms and the implications for the software industry. Management Science, 52,
7 (July 2006), 1057–1071.
aries.com/definition/hacker
providers of shrink-wrap software and software as a service. European Journal of Operational
Research, 196, 2, 16 (July 2009), 661–671.
ments, and predation. The American Economic Review, 76, 5 (December 1986), 940–955.
programs. Presented at 22nd USENIX Security Symposium, August 14-16, Washington DC.
sessions/presentation/finifter.
http://www.huffingtonpost.com/2014/01/17/six-other-stores-are-bein_n_4618414.html
(accessed January 22, 2014).
from web server pricing. MIS Quarterly, 26, 4 (December 2002), 303–327.
market. The Review of Economics and Statistics, 77, 4 (November 1995), 599–608.
Independent, Thursday 10 October 2013.
Dynamic Games and Applications 4, 3 (2014), 209–308.
management. Communications ACM, 46, 3 (2003), 81–85.
2013. Accessed on August 25 2019: http://business.time.com/2013/03/19/u-s-hacker-
crackdown-sparks-debate-over-computer-fraud-law/.
choices of firms developing proprietary software. Journal of Management Information
Systems, 25, 3 (2008), pp.243–277.
Science, 51, 5 (May 2005) 726–740.
Economic Perspectives, 8, 2 (Spring 1994), 93–115.
on software security: The monopoly case. Production and Operations Management, 20, 4,
603–617.
Economics and Policy, 21, 3 (August 2009), 192–200.
proprietary software. Electronic Commerce Research and Applications, 7, 1 (Spring 2008),
68–81.
December 27, 2011. (accessed October 14, 2013).
of Economic Research on Copyright Issues, 4, 1 (2007), 63–86.
com/dictionary/hacker.
Economic Association, 1, 4 (June 2003), 990–1029.
key hackers for proactive cyber threat intelligence. Journal of Management Information
Systems, 34, 4 (2017), 1023–1053.
Journal of Management Information Systems, 24, 1 (Summer 2007), 233–257.
network security? In M.F. Grady and F. Parisi (eds.), The Law and Economics of Cybersecurity.
Cambridge: Cambridge University Press, 2005, pp. 29–70.
announcements on firm stock price. IEEE Transactions on Software Engineering, 33, 8
(August 2007), 544–557.
insights-lab/dbir/2016/(accessed November 2, 2016).
software. Lecture Notes in Computer Science, 6802 (2011), 346–360.
wired.com/2016/01/the-biggest-security-threats-well-face-in-2016/. (accessed on November 2,
2016).
ecosystems. In Proceeding CCS ‘15 Proceedings of the 22nd ACM SIGSAC Conference on
Computer and Communications Security, 2015, pp. 1105–1117.
School, Texas A&M. He received his Ph.D. from the University of Illinois at Urbana-Champaign.
His research interests include cybersecurity, open source software, and economics of electronic
commerce. Dr. Sen has published in Journal of Management Information Systems, Decision Sciences,
International Journal of Electronic Commerce, Communications of AIS, and other journals.
Control, involved in innovative conceptual design of new systems. Dr. Verma received his
Ph.D. from Texas A&M University. He was previously Senior Research Scientist for Knowledge
Based Systems, where he was principal investigator for innovative research sponsored by
Department of Defense. His research interests include simulation and modelling, dynamic analysis
and control of large multi-agent complex systems, system optimization, control, and guidance of
aerospace systems.
Department of Information & Operations Management, Mays Business School at Texas A&M
University. He holds a Ph.D. in Business Administration from the Carlson School of Management
at the University of Minnesota. Dr. Heim’s research focuses on service and e-service/e-retail
operations, management of technology, supply chain management, and quality management. He
is a Department Editor of the Technology Management area of Journal of Operations Management
and Senior Editor of Production and Operations Management.
Organizational Security Training and Compliance
Mario Silic and Paul Benjamin Lowry
Business Information Technology, Pamplin College of Business, Virginia Tech, Blacksburg, VA, USA
We conducted a design-science research project to improve an orga-
nization’s compound problems of (1) unsuccessful employee phish-
ing prevention and (2) poorly received internal security training. To
do so, we created a gamified security training system focusing on
two factors: (1) enhancing intrinsic motivation through gamification
and (2) improving security learning and efficacy. Our key theoretical
contribution is proposing a recontextualized kernel theory from the
hedonic-motivation system adoption model that can be used to
assess employee security constructs along with their intrinsic motiva-
tions and coping for learning and compliance. A six-month field
study with 420 participants shows that fulfilling users’ motivations
and coping needs through gamified security training can result in
statistically significant positive behavioral changes. We also provide
a novel empirical demonstration of the conceptual importance of
“appropriate challenge” in this context. We vet our work using the
principles of proof-of-concept and proof-of-value, and we conclude
with a research agenda that leads toward final proof-in-use.
Keywords: computer security; gamification; design science research; hedonic motivation; system adoption model; immersion; flow; security compliance; security education; training; awareness; SETA
motivate employees to behave more securely when engaging with organizational systems
and information [cf. 14]. Such compliance is of increasing concern for management and
executives because of the global explosion of organizational security issues. Generally, IT
security compliance has three objectives: (1) to mitigate or avoid security incidents and
risks often caused by negligent employees [22, 65, 102], (2) to thwart criminal security
behavior and computer abuse [65, 101, 102], and (3) to encourage prosocial and protective
security behaviors in employees [45, 84]. A number of promising studies have applied
various techniques to motivate employees to adopt secure intentions and behavior — from
deterrence techniques [26, 101, 102] and discouraging employee neutralization [e.g., 91] to
increasing the awareness of the risks and potential costs of noncompliance [e.g., 14], to
increasing accountability [94, 95], to leveraging positive psychology or affect [15, 28], and
even using more explicit threats and fear appeals [11, 52, 83]. Despite these efforts,
employees remain the “weakest link” in organizational IT security because employee
Journal of Management Information Systems, 2020, Vol. 37, No. 1, 129–161.
https://doi.org/10.1080/07421222.2020.1705512
sibility to comply, and they often do not [22, 83].
approaches are efficacious. For example, deterrence techniques were designed for criminal
behavior and may be inappropriate for security policy noncompliance [26, 102].
Techniques that employ threats and intensified risks can have unintended consequences,
including negative employee reactance [63, 65].
a more positive approach. SETA programs aim to provide employees with the knowledge
and motivation necessary to comply with security policies when confronted with a security
risk [21]. However, it is evident that many of the current compliance-related training
approaches are relatively ineffective; many employees continue to be noncompliant [102].
This is troubling, as SETA programs have long been considered fundamental to organiza-
tional security governance, and despite repeated calls to address this promising research
area, researchers have not examined how to make SETA programs more effective, with
a few promising exceptions [e.g., 21].
content, employees often lack the motivation to embrace the training and apply it in their
everyday work, thus causing performance and even reputational failures [67]. Employees
also have difficulty focusing on lengthy training sessions, especially when they are con-
cerned about their actual work tasks. This is especially true in the context of security, in
which most employees are not experts and lack efficacy. Most employees do not recognize
the importance of caring about security in the context of everyday work. Thus, changing
users’ security-related behaviors through training is highly complex and prone to failure
[57]. This is a common problem in employee training, during which employees lack
conscientiousness and thus do not develop the efficacy needed to apply what they have
learned [67]. Ferguson [36] essentially declared SETA programs useless after conducting
an experiment involving four hours of training, as the participants were generally unmo-
tivated and 90 percent failed to detect a phishing attack.
improve them. We propose that a solution must begin with the recognition that most
security training is not enjoyable or motivating—it is perfunctory, arcane, and outside
employees’ normal practice and expertise. We posit that security training based on
gamification principles1 (e.g., game-like features applied to nongaming contexts) is an
effective approach for improving intrinsic motivation, learning, coping skills, and subse-
quent security compliance. People are more motivated and conscientious when they have
an enjoyable, immersive experience. However, a recent cross-sectional study complicates
our proposition: although Baxter et al. [8] established that their gamified security training
system was fun, enjoyable, and preferred over other methods, no statistically significant
evidence showed that the gamified system actually increased the users’ knowledge.2
ens the promising foundation of this literature and applies an approach to gamification
grounded in both motivation theory and design-science research (DSR). Our aim is to
improve not only the delivery of organizational security training through gamification, but
also the security-related motivations, efficacy, learning, intentions, and behaviors of
employees receiving such training. Our six-month field study in an actual organization
gamified security training can result in statistically significant changes—including an
improved ability to efficaciously respond to actual phishing attempts.
nongaming contexts. Thus, gamification is “the application of lessons from the gaming
domain in order to change stakeholder behaviors and outcomes in non-game situations”
[85, p. 352]. Gamification was first implemented in an organizational context during the
“Cold War” when workers and factories in the Soviet Union used a points-based system of
competition to increase productivity (which was detached from economic reality and thus
backfired) [71]. In 1984, Coonradt [19] became one of the first researchers to apply
gamification to a business context to motivate employees by including frequent feedback,
clear goals, personal choice, and gaming features. Although gamification emerged from
the flow literature as it applied to gaming, scholars have not reached a consensus regard-
ing gamification’s definition [92]. Similarly, Liu et al. [59] concluded
gamified systems must have specific user engagement and instrumental goals, and the way to
achieve these is by the selection of game design elements. (p. 3)
individual motives [22, 61].3 Summarizing the various definitions of gamification in the
literature, we propose the following working definitions of gamification:
strengthen motivations and encourage specific behavioral changes in users for specific
instrumental goals.
strengthen employees’ motivations to encourage learning, efficacy, and increased
employee compliance with organizational security initiatives.
and storytelling [53] to stimulate experiences of challenge and curiosity [33] and that the
conceptualization of gaming elements is highly important for user–game engagement [58].
However, Bui et al.’s [13] review of gamification design artifacts offered two interesting
conclusions: (1) most studies did not explain the technological elements of the gamified
systems, such as how these artifacts foster gamification, and (2) there is a
employees interacting with group systems resulting in collaboration dynamics and longer-
term behavioral outcomes [13, p. 11].
approach should be used to create gamified systems. Second, gamification must be applied
in a realistic organizational context using longer-term approaches that focus on mean-
ingful engagement to produce meaningful results. Third, the DSR kernel theory must be
improving organizational security through training interventions.4
sequent compliance with organizational security [e.g., 2, 8]. These studies are reviewed in Online
Table A.1. This research stream faces several challenges, which we address fully in our research:
(1) the majority of the studies used one-time cross-sectional data, and none used long-term or
longitudinal data; (2) the participants were mainly students, and thus many of the tasks had no
ecologically valid relationship to actual organizational security in practice [cf. 60]; (3) the
research designs lacked control groups, so there was no way to empirically establish that the
gamification context was an improvement over the status quo; (4) actual behaviors were not
measured; (5) many studies did not use theory, and none developed a cohesive theoretical
foundation; (6) most did not involve a working system; and (7) most did not achieve meaningful
engagement5 or articulate the importance of instrumental (e.g., improved IT security compli-
ance) and interaction outcomes (e.g., measurable increased immersion) [cf. 59].6
is needed. Likewise, Liu et al. [59] concluded that the gamification literature in general
does not explain
themselves and create the desired user interactions that engage the user and lead to the
intended instrumental goals (p. 3) [emphasis added in bold typeface].
a DSR approach to bridge the related opportunities in design, theory, methodology, and
practice from our introduction.
security context. In a non-gamified security context, Vance et al. [95] explained that
although there is no single, authoritative approach to DSR, a common expectation of
DSR is that the solution can be described and evaluated in terms of proof-of-concept and
proof-of-value [e.g., 38, 41, 77, 93]:
conceptual solution of design is feasible and promising, at least in a limited context … . In
contrast, proof-of-value is achieved when researchers show that an IT artifact actually works
in reality. [95, p. A6] [emphasis added in bold typeface]
in contexts such as cyberbullying [64], autonomous scientifically controlled screening
systems [93], and a video-based screening system [82]. However, according to [75, p. 16],
the third concept of proof-of-use can also be applied to DSR. To support our DSR approach,
we adhered to a DSR methodology that closely follows the method advocated by
Nunamaker et al. [76] and elaborated on by Peffers et al. [81].
ongoing proof-of-use by implementing a system actually used in practice. Next, we explain
how we systematically combined relevance with theoretical rigor, leveraging additional
DSR principles to embody the principles of the “last research mile” as advocated by
Nunamaker et al. [75]. This involved an extensive, iterative process based on the security
gamification literature, DSR, system development, and feedback from the target organiza-
tion. Despite its iterative nature, the DSR process we leveraged can be described in the
following seven steps (two final steps are addressed in the discussion section).
We followed Liu et al. [59], who proposed key gamification design principles illustrated
by a running case (HealthyMe). Although we applied the majority of their design
principles, some were inapplicable to our organizational security context or specific design
choices.7 Figure 1 depicts our final design framework in which we were able to focus on
design as an artifact [cf. 41]. Liu et al. [59] suggested focusing on the design and
development of the gamified system before focusing on the outcomes. We did so following
the DSR approach advocated by Nunamaker et al. [76] and shown in Figure A.1
(Supplemental Appendix A): (1) theory building, (2) systems development, (3) experi-
mentation, and (4) observations. These steps encapsulate several subprocesses, such as
those of Peffers et al. [81].
Our research started when the French company invited one of the authors to help create
a system that would encourage better employee IT security compliance through online
training. The company had faced an ongoing problem of employee carelessness regarding
security issues, including falling for phishing attacks. Their existing e-mail-based training
system was not positively viewed within the firm.
IT security compliance through sanctions is inconsistent and can backfire. We also learned
that gamification could potentially positively influence employee training and motivation.
However, no prior research has established clear empirical evidence that employees’
security learning and efficacy perceptions could be positively influenced by gamification.
[Figure 1. Gamification design framework: gamification mechanics and elements; user, task, and technology; user-to-system, system-to-user, and user-to-user interactions; meaningful gamification; self-efficacy.]
Our objectives were to build a gamified training system based on a native information
systems (IS) motivational theory as the kernel theory that was tested in an ecologically
valid manner using a long-term field experiment. We thus undertook an iterative process
of design and development, balancing designs and concepts from the literature
with the client’s training requirements. We unit tested the system and then ran a pilot test
with human subjects to further evaluate the design objectives.
A key step of designing a gamified system is to carefully choose the gamification design
principles that serve as the bridge between the system and meaningful engagement [59].
This step establishes the user-interaction processes that occur between user-system-user
actors. We first analyzed kernel theories [73] that would support and motivate the
employees’ security learning and behavioral change. We surmised that the hedonic-
motivation system adoption model (HMSAM) [62] was particularly suitable as a kernel
theory and evaluation model when extended to the security context and coping support.
This extended model consisted of two main components that further inspired design
principles: (1) motivation fulfillment to inspire gamified systems use and (2) coping support
so the users can deal with security issues and engage in security-related behavioral change.
suggested by Hong et al. [43], which suggests that the technology artifact is an additional
element in theorizing that should be considered. In IS research, contextualization usually
involves the introduction of contextual features into previously established general mod-
els, as in the contextualization of the unified theory of acceptance and use of technology
[96] to the adoption and use of collaboration technologies [12]. Our most important
contextualization consisted of adding context-specific factors—learning, security response
efficacy, and security self-efficacy—to HMSAM.
Theory
Following DSR, the subsequent design principles needed to rely on carefully chosen design
elements. Thus, we proposed the first design principle:
elements that increase employees’ motivation and fulfillment.
leverage employees’ knowledge in such a way that employees will not only be intrinsically
motivated through enjoyment but will also acquire the new knowledge effectively. This led
to the second design principle:
a learning process that is meaningful, entertaining, and fun.
is the “conceptual distance between a latent independent variable (cause) and its corre-
sponding design items” [73, p. 311], which in our case translates into the potential for
both intrinsic and extrinsic motivations to positively influence security learning and
individual’s security behavioral change [e.g., 14, 40]. However, extrinsic motivations may
provide only temporary compliance [55] and intrinsic motivations are more powerful in
driving employee’s behaviors [78]. Likewise, intrinsically motivated learners were found to
demonstrate higher achievements in learning [9]. To satisfy this meta-requirement, we
focused primarily on intrinsic motivations when designing the system, although there may
be some spillover into extrinsic motivations.
able (effect) and its corresponding measurements” [73, p. 312]. Here, the challenge
involves choosing the right measurement items, which is especially important for DSR
so that design evaluation and research rigor can be established [41]. We thus carefully
reviewed the literature and, whenever possible, selected established measures, as further
documented in the method section.
simultaneously implemented design items” [73, p. 312]. This is the problem of confounding
design elements that may have different effects on the artifact evaluation. For example, we had
to decide whether to guide the learning process through a recorded video or through a series
of brief, interactive lessons that used graphical examples of phishing mistakes typically made
by employees. Here, the design decision influenced the evaluation of the artifact. Our target
organizations placed a premium on simplicity; thus, we chose short informative lessons. Such
decisions can influence the “solution space for other design decisions; however, this may lead
to lock-in situations with respect to the final artifact” [73, p. 312].
We followed four primary steps to establish proof-of-concept [cf. 81] before we proceeded
to empirical testing. The company was pleased by the positive results, and the feedback
from employees was highly positive. Thus, the solution worked well in practice, which
provided evidence of proof-of-concept [80].
ture to learn about gamification features that may work well in a security context and
understand why this is the case.
Step 2: We then created Table A.3 to propose how gamification elements should be incorporated into our gamified
security training system. We also mapped these elements to the various ways that flow (in
our context, immersion) can be fostered [24] and mapped the elements to the intrinsic
motivations they could potentially fulfill.
theory, HMSAM, by mapping the derived gamification element relationships to HMSAM
constructs, as shown in Table A.4. This allowed us to conceptually check whether our design
could fulfill intrinsic motivations and provide an “appropriate challenge.”
design elements. We used a taxonomy of major motivations for system use from [61], selecting
the motivations suited to the security training context. By analyzing mappings from Table
A.4 and Table A.5, we observed the same relationships with motivations. For example, play/
enjoyment/fun can be found in 11 gamification design elements (Table A.5).
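This element-to-motivation mapping can be expressed schematically. The element and motivation labels below are illustrative stand-ins drawn loosely from the text, not the actual contents of Tables A.4 and A.5:

```python
# Illustrative sketch of an element-to-motivation mapping (hypothetical
# labels; the paper's real mappings are in its online appendix tables).
ELEMENT_MOTIVATIONS = {
    "points":       {"achievement", "play/enjoyment/fun"},
    "leaderboard":  {"competition", "achievement"},
    "badges":       {"achievement", "play/enjoyment/fun"},
    "levels":       {"challenge", "play/enjoyment/fun"},
    "storytelling": {"curiosity", "play/enjoyment/fun"},
    "gamemaster":   {"social interaction", "challenge"},
}

def elements_fulfilling(motivation):
    """Return, sorted, the design elements mapped to a given motivation."""
    return sorted(e for e, ms in ELEMENT_MOTIVATIONS.items() if motivation in ms)
```

Counting the elements returned for each motivation reproduces the kind of tally the paper reports (e.g., how many elements can fulfill play/enjoyment/fun).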
To establish proof-of-value, once the system was deemed ready, we first formally pilot
tested it and the kernel theory with students. However, the key step in establishing proof-
of-value involved a long-term field experiment with actual employees using the gamified
security training system. These details, and the subsequent rigorous analyses, are
addressed fully in the section after the next section. Before addressing the full proof-of-
value methodologies and analyses, the next section details how we operationalized our
kernel theory, HMSAM, to develop testable hypotheses for empirically establishing proof-
of-value.
establish proof-of-concept, and to be operationalized to test it for further proof-of-value.
Again, this process was iterative such that what we learned in developing hypotheses
informed design, and vice versa. Here, our focus is on the derived operationalized
hypotheses and the logic behind them.
intrinsic motivations in systems use [62], which we found to be a natural fit for our
gamified context. Namely, HMSAM was designed to explain how fulfilling motivations
can lead to increased immersion and behavioral intention (BI) and ultimately to
behavioral change [62]. These explanations are more theoretically powerful and appro-
priate predictors of BI than traditional factors, such as perceived ease of use (PEOU) or
joy [62]. HMSAM builds on flow theory by re-envisioning the original conceptualiza-
tion of cognitive absorption (CA) developed in [3]. The CA construct was inspired by
flow theory, which was not proposed with systems in mind, and is defined as a deep
state of involvement with systems (i.e., immersive systems use). Gamified systems thus
represent an ideal setting in which to investigate CA, which has affective and cognitive
components and is an intrinsic motivator. Whereas the original conceptualization of
CA assumed that its components (curiosity, joy, control, and immersion) occurred
simultaneously as one formative construct [3], HMSAM examines CA’s components
independently and explains how the fulfillment of intrinsic motivations fosters asso-
ciated BI (or, in the original HMSAM, system acceptance intentions). Lowry et al. [62]
argued that this approach is more consistent with flow theory’s understanding of flow
as a process that unfolds over time and involves multiple constructs.
intrinsic TAM elements are lower-order factors in the creation of immersion and BI.
Consequently, HMSAM is a process-variance model in which intrinsic TAM elements,
like PEOU and enjoyment, are lower-order elements that precede immersion and combine
to change BI.
security learning and compliance. Our extensions are shown as hypotheses; all remaining
paths are replications of HMSAM. Our model suggests that the factors of improved
security learning, efficacy perceptions, and the ability to cope with security challenges
encourage positive behavioral change by strengthening employees’ intentions to follow
security policies and improving their phishing-response behaviors.
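The hypothesized structure just described can be sketched as a directed graph. The construct names below are our own shorthand for the factors named in the text, not the paper's exact model specification:

```python
# Schematic sketch of the extended-HMSAM path structure described in the
# text (construct names are our shorthand, not the paper's exact model).
PATHS = {
    "security_learning":     ["response_efficacy", "self_efficacy", "BI"],
    "response_efficacy":     ["BI"],
    "self_efficacy":         ["BI"],
    "appropriate_challenge": ["immersion"],
    "immersion":             ["BI"],
    "BI":                    ["phishing_response_behavior"],
}

def reaches(graph, start, goal):
    """Depth-first check that `goal` is reachable from `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False
```

A reachability check confirms the intended chain: learning and challenge both flow, directly or through efficacy and immersion, into behavioral intention and then phishing-response behavior.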
thus, in our context, immersion) arises from the satisfaction of three conditions: (1) clear
goals, (2) unambiguous feedback, and (3) a balance between perceived challenge and skills.
the importance of instrumental goals, as stressed by Liu et al. [59], which suggests that the
gamification system should
improved security knowledge for the employee and fewer security breaches for the company).
Unambiguous feedback can be delivered by providing gamified feedback in the training itself.
In the gamified system, this could be augmented with leaderboards, points, measurement
against goals, features that convey a sense of general progress, and the presence of
a gamemaster [2, 8, 42]. Balancing challenge and skills, fostered through learning and the
efficacy and coping derived from it, is a core focus of the remainder of this section.
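To make these feedback elements concrete, a minimal points-and-leaderboard loop might look like the following sketch (our own illustration under simplifying assumptions, not the system the paper describes; point values and goal are arbitrary):

```python
# Minimal sketch of unambiguous gamified feedback: points per completed
# lesson, explicit progress against a goal, and a ranked leaderboard.
# (Our own illustration, not the paper's implementation.)
from collections import defaultdict

POINTS_PER_LESSON = 10   # assumed point value
GOAL = 50                # assumed points needed to "level up"

scores = defaultdict(int)

def complete_lesson(user):
    """Award points and return explicit progress feedback for the user."""
    scores[user] += POINTS_PER_LESSON
    return {"user": user, "points": scores[user],
            "progress": min(scores[user] / GOAL, 1.0)}

def leaderboard():
    """Rank users by points, highest first."""
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The point is that every training action yields immediate, unambiguous feedback (points, progress toward a goal, rank), which is exactly the role the cited gamification elements play.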
coping skills is through SETA programs [e.g., 21]. Our gamified environment provides
common SETA-based training related to organizational security systems, particularly to
help employees learn how to identify and avoid phishing attacks and suspicious e-mails.
Research has found a link between learning and behavioral engagement [44]. The more
employees learn, the more they will be prepared to implement protective security behaviors
[cf. 84]. Employees who have a deeper knowledge of security risks and ways to thwart them are
[Figure 2. Research model: security learning, security response efficacy, security self-efficacy, and appropriate challenge are hypothesized to increase immersion and behavioral intention to follow security policies, alongside replications of previously established HMSAM relationships; coping support for security issues drives security-related behavioral change. Note: the challenge must match skills (learning and efficacy) and progress over time to sustain curiosity; otherwise, it can decrease immersion. Controls: gender, experience, education, OSC, TMSC, OCM. Key: … perceived intrinsic usefulness; BI, behavioral intention to follow security policies; OSC, organization security communication; TMSC, top management security commitment; OCM, organization computer monitoring.]
who have little knowledge in this area are more likely to be uncertain and make poor security
decisions. Research shows that the learning process strengthens one’s abilities; as a result, one
pays more attention to the context, content, and environment, all of which must be properly
assessed to make effective security decisions [51]. Thus,
ciated with increased BI.
response efficacy” and “security self-efficacy” [e.g., 11, 52]. Response efficacy is “the belief
that the adaptive response will work, that taking the protective action will be effective in
protecting the self or others” [37, p. 411]. Self-efficacy is the degree to which individuals
believe they are capable of preventing threats [11]. Security researchers have reconceptua-
lized these concepts extensively and from several perspectives [11, 49, 52, 99, 100]. In our
context, security response efficacy means that employees believe that what they were told to
do in their security/phishing training will work to prevent the threat, and security self-
efficacy means that they believe they can deal with the security response themselves. Thus,
if employees learn a new protocol that is purported to mitigate phishing attacks and they
believe the process is efficacious, they will be more likely to follow it.
of task-specific efficacy. Our gamified environment fosters a goal orientation with a clear
task objective and concrete feedback. Performance and achievement lead to higher levels
of self-efficacy, and an informal social learning environment directly influences employee
efficacy levels [68]. This suggests that employees will not only demonstrate higher levels of
efficacy but also be more certain of their ability to apply newly acquired knowledge in
practice. Learning also leads to greater efficacy, which in turn generates more interest and
more learning [51]. Thus, it is likely that there are feedback mechanisms between efficacy
and learning. However, for concision, we predict:
associated with increased (a) security response efficacy and (b) security self-efficacy.
encouraging behavioral changes in employees that result in better adherence to security
policies [e.g., 11, 14, 52, 100]. Recent research [100] has identified a clear link between
coping adaptiveness (e.g., task-focused coping) and perceived phishing detection efficacy.
This is partially supported by recent findings that awareness and motivation are crucial for
security compliance [16].
learning through a gamified system, because such systems make learning more efficacious.
Gamified systems provide “powerful social psychological processes such as self-efficacy … [that]
provide rewards … [and] drive most of the long-term participation” [31, p. 16]. Per Bandura [6],
setting and assigning goals (e.g., badges or levels in gamified systems) enhances self-efficacy.
lead to an increased intention to act securely, as more employees will feel capable of acting
securely and believe that the desired security decision will be effective.
a gamified security training context are associated with increased BI.
balancing skills and challenges. In gamified contexts, flow occurs when perceived skill and
challenge levels are balanced; however, if such levels are initially low, apathy instead of
engagement can occur [20]. Likewise, a key role of gamified components is to stimulate
experiences of both curiosity and challenge [33], with challenges driving immersive
engagement [47]. Thus, “if stimuli from an experience are either too challenging or not
challenging enough, interest and curiosity decline” [61, p. 539].
and ultimately facilitate immersion. However, the key limiting assumption of this addition is
that a challenge is most likely to be useful if it takes the form of an appropriate challenge, which
is “the degree to which the perceived positive challenge of an activity matches the perceived
skills of the user” [61, p. 539]. Thus, as it relates to an employee’s instrumental goals, learning,
and efficacy, a gamified training task should be neither too challenging nor too facile. The
greater the challenge, the greater the behavioral engagement required to overcome it [89].
Likewise, we assume that the challenge should become more difficult (e.g., “levels up”) as the
employee learns and becomes more efficacious [e.g., 7]. Otherwise, curiosity will be under-
mined, and boredom can ensue.
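One simple way to operationalize this "leveling up" idea is a rule that nudges difficulty toward the learner's demonstrated skill. The rule below is our own hedged sketch (the target success rate, tolerance band, and step size are arbitrary assumptions), not the mechanism used in the paper's system:

```python
# Hedged sketch of an "appropriate challenge" rule: raise difficulty when
# the learner succeeds often, lower it when they struggle, so perceived
# challenge tracks perceived skill. (Our illustration, not the paper's.)
TARGET_SUCCESS = 0.7  # assumed: keep learners succeeding ~70% of the time

def next_difficulty(current, recent_outcomes, step=1, lo=1, hi=10):
    """Adjust difficulty toward the band around TARGET_SUCCESS.

    recent_outcomes is a list of 1 (success) / 0 (failure) results.
    """
    if not recent_outcomes:
        return current
    rate = sum(recent_outcomes) / len(recent_outcomes)
    if rate > TARGET_SUCCESS + 0.1:      # too easy -> level up
        current += step
    elif rate < TARGET_SUCCESS - 0.1:    # too hard -> ease off
        current -= step
    return max(lo, min(hi, current))
```

A learner who succeeds on every recent task is moved up a level, one who mostly fails is moved down, and one near the target band stays put, keeping challenge matched to skill as the text prescribes.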
likewise argue that (1) good gamification delivery involves progressive challenges, but (2)
such challenges must be appropriate, and thus, a challenge might become “too much” for
an end user and cause diminishing returns. This state represents an inverted U-shaped
relationship in which a “relationship exists if the dependent variable Y first increases with
the independent variable X at a decreasing rate to reach a maximum, after which
Y decreases at an increasing rate” [39, p. 4]. A recent study [66] employed the two-
player StopWatch game to confirm through electrophysiological evidence that this
inverted U-shaped relationship exists between perceived challenges and one’s intrinsic
motivation. Namely, in situations in which the challenge is optimal, one’s immersion
should increase up to the apex of the curve, whereas further increases of the challenge
beyond the optimal point should lead to decreased immersion. Thus,
relationship with perceived immersion in a gamified security training context.
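In formal terms, such an inverted U can be modeled as a quadratic in perceived challenge with a negative squared term, peaking at -b1/(2*b2). The coefficients below are illustrative choices of ours, not estimates from [66] or [39]:

```python
# Inverted-U sketch: immersion modeled as a quadratic in challenge with a
# negative squared term, peaking at challenge* = -b1 / (2 * b2).
# Coefficients are illustrative, not estimates from the cited studies.
b0, b1, b2 = 1.0, 4.0, -0.5   # b2 < 0 yields the inverted-U shape

def immersion(challenge):
    return b0 + b1 * challenge + b2 * challenge ** 2

apex = -b1 / (2 * b2)  # challenge level at which immersion peaks
```

Below the apex, increasing challenge raises immersion; beyond it, further increases lower immersion, which is the pattern the hypothesis asserts.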
replicated in HMSAM gaming research [62]. However, we have extended HMSAM such that the
focal intention is the intention to follow security policies, not the intention to use a system. This is a theoretically reasonable extension, because
HMSAM’s behavioral predictions are rooted in TAM, which is rooted in the theory of
reasoned action (TRA) [5]. TAM, the TRA, and the related theory of planned behavior
(TPB) [4] consistently exhibit a strong link between attitude formation, intention, and
behavior that extends far beyond mere system usage. This is the case regardless of shifts in
the behavioral target, as long as the target is in the same context.
duals, and they should be especially apt in our gamified security training context. An
earlier study [72] predicted, but did not empirically show, that meaningful gamification
should motivate and lead to long-term behavioral changes. A key reason for this is that
motivations can be fulfilled through immersion. Immersion in gaming contexts is the
experience of being engaged in the game-playing experience while having partial
awareness of reality [62]. In learning contexts, immersion occurs as a result of appealing to
intrinsic motivations, such as learning new things and being engaged [35]. When immersed,
users focus intently on the task at hand, and outside distractions are largely
ignored [3, 62]. This increased focus, combined with the fulfillment of motivations, creates
ideal conditions for learning and behavioral change. Research [98] has found that higher
levels of immersion lead to greater usage intentions than lower levels of immersion. By
influencing the state of flow/immersion, gamification positively and continuously
influences intentions and actual behaviors. Numerous studies have found that intrinsic
motivations are strong predictors of meaningful user behavioral change outcomes, such as
satisfaction, continuance intentions, and perceived performance [25, 61].
The effects of immersion and flow are not only psychological but
also physiological. Thus, they are surprisingly powerful. Prior research has identified
several neurological causal mechanisms involved in flow and gaming, showing that
games lead to numerous neurological changes: (1) the brain releases more dopamine,
which is associated with pleasure and consequently increases motivation [54]; (2)
testosterone is increased, affecting energy, mood, and self-esteem [34]; and (3) memory is
improved by training the amygdala, the brain’s memory and decision center, to better
respond to similar situations in the future [10]. These factors can lead to dramatic
behavioral changes.8 Thus, assuming that the underlying mechanisms of the TRA and
of the gamification of intrinsic motivations and learning hold true in our context,
employees should be motivated to strengthen their context-related intentions when they have
a more immersive learning experience.
Thus, we hypothesize that increased immersion leads to increased BI to comply with the security policies employees are learning.
Intentions, in turn, should lead to behavior. In the information security context, several studies have suggested that it is
more realistic and valid to measure actual behaviors than intentions [11, 22, 60]. It is
particularly important to measure actual behaviors, as it is clear that good intentions do
not always lead to good behaviors in organizational security contexts, as employees often
have conflicting roles and motivations with respect to security requirements [11, 22, 83].
We expect our approach to result in meaningful security training and behavioral changes. Thus, we hypothesize that
increased BI leads to improved actual security behavior when following the same security policies.
Finally, we account for alternative explanations common in organizational security research [83]. We do so by modeling common demographic covariates and
alternative security constructs, as follows: age, gender, experience, education, organization
computer monitoring (OCM) [27], organization security communication (OSC) [17, 88];
and top management security commitment (TMSC) [46].
Before testing the artifact in the field, we needed to rigorously establish its proof-of-value.
Thus, we first conducted a pilot study with university students (N = 45). The study spanned three months and included monthly data
collection. This allowed us to refine the procedures and test the instruments’ validity and
reliability.9
We then conducted a longitudinal field experiment using an unbalanced design with two
treatment groups and one control group. A total of 800 employees from a large
international French company, confirmed from HR records to have not received security
training, were invited to participate.
Only offices in which English was the main language (i.e., the United Kingdom, the
United States, and Australia) participated to prevent potential language issues and
website localization. The 488 employees who responded positively10 were randomly
assigned to one of two groups: the gamified system treatment group (420 employees)
or the e-mail treatment group (68 employees); the two groups were determined to be
demographically equivalent. The control group (38 employees), drawn as a random
sample from the organization’s HR database, was not explicitly invited so its members
would not know they were being used as controls. The participation rate of over
50 percent is high for organizational field studies. Thirty-six participants were removed
because of implausibly short response times (under eight minutes), incomplete answers,
and illogical response patterns. The final sample included 384 responses. The average
age of the participants was 33.4 years (SD = 11.2 years); 52 percent were male and
48 percent were female.
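The unbalanced random assignment described above can be sketched as follows; the employee IDs and the seed are synthetic, and only the group sizes come from the text.

```python
import random

# Sketch of unbalanced random assignment of the 488 positive responders
# into the gamified (420) and e-mail (68) treatment groups.
# IDs and seed are synthetic; sizes are from the study's description.
random.seed(7)
responders = list(range(488))        # stand-ins for the 488 employees
random.shuffle(responders)
gamified_group = responders[:420]    # gamified system treatment
email_group = responders[420:]       # e-mail treatment
assert len(gamified_group) == 420 and len(email_group) == 68
assert not set(gamified_group) & set(email_group)  # groups do not overlap
```

In practice, the groups would additionally be checked for demographic equivalence after assignment, as the authors report doing.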
The gamification and e-mail groups received the same training content and the same frequency of
training, reminders/notifications, and quizzes. These two sets of participants were invited
in the same manner; the conditions differed only in gamified interaction versus non-gamified e-mail interaction. A custom Web-based
gamification application was created by one of the researchers using .NET technology, and all
design elements were developed based on previously identified game mechanics.
The gamified system incorporated several game design elements. In the first step, users registered for and signed in to the website.
Next, users chose an avatar (Supplemental Appendix B, Figure B.1), and after completion,
users were redirected to the main screen (see Figure 3).
A virtual gamemaster guided users through the game (e.g., explaining how to earn points). The gamemaster appeared at different stages/levels of the game. For
example, if the user had not logged in for over one week, the gamemaster sent an e-mail
(the same notification frequency was used for the e-mail group) inviting the user to
continue and providing the user with information about current achievements and top
scorers (via the leaderboard). The objective of the game was to complete quizzes and read
different tips related to security education about malicious software (malware), spam, and
especially how to avoid falling victim to phishing attempts. By playing different rounds,
users accumulated points that allowed them to receive additional incentives in the form of
monsters (monsters represented trophies) and to advance to another level (bronze, silver,
and gold). In addition, a leaderboard of top employees with their corresponding scores
was displayed on the main menu interface. Different rounds with quizzes and other
educational elements were offered to users every two weeks (again, the same frequency
was used for the e-mail group). This gave users time to educate themselves about different
security and phishing topics and to acquire the knowledge necessary to correctly answer
questions.
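The point-and-level mechanic described above can be sketched as a simple threshold mapping. The thresholds below are hypothetical; the paper reports the bronze/silver/gold levels but not the actual point cutoffs.

```python
# Illustrative sketch of the points-to-level mechanic described above.
# Level names come from the text; the point thresholds are hypothetical.
LEVELS = [(0, "bronze"), (500, "silver"), (1000, "gold")]


def level_for(points: int) -> str:
    """Return the highest level whose threshold the score has reached."""
    current = LEVELS[0][1]
    for threshold, name in LEVELS:
        if points >= threshold:
            current = name
    return current


assert level_for(120) == "bronze"
assert level_for(730) == "silver"
assert level_for(1500) == "gold"
```

A leaderboard is then just the participants sorted by accumulated points, with the top scorers surfaced on the main screen.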
The e-mail treatment group participated in the same experiment but followed a more traditional security education approach limited to e-mail
communication. E-mail communication (Figure B.5) offered the same content as the
gamified system, but the format was less visually appealing and contained more textual
explanations. Nonetheless, the content of the e-mail communication was useful, clearly
written, and easy for employees to understand.
We focused on phishing because falling victim to a phishing attack is a greater concern for management than behaviors like reading spam or failing to check for viruses.
Moreover, responding to a phishing attack is an objectively auditable security behavior.
Participants completed a survey at three months and at the end of the game (i.e., six
months). To measure users’ security behaviors, we sent a phishing e-mail to employees’
inboxes without their knowledge. This process was administered by a third-party company
that specializes in phishing testing/training. An e-mail was sent to employees asking them
to change their passwords by clicking on the internal company’s link (the link led to the
third party’s website, which tracked a lack of compliance). Employees’ decisions were
coded as binary variables (“0” for not clicking or “1” for clicking), which measured users’
security behaviors. To preserve anonymity while still linking each employee’s presence on the
security gamification platform to the phishing e-mail, a unique random number was
created for each participant and used for the survey.
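The anonymized linking scheme described above can be sketched as follows. The participant names and outcomes are synthetic; the idea is simply that a random token joins survey responses to the binary phishing outcome without exposing identity.

```python
import secrets

# Sketch of anonymized linking: each participant receives a unique random
# identifier; only the identifier and the binary click outcome are recorded.
# Participant labels and outcomes below are synthetic.
participants = ["emp_a", "emp_b", "emp_c"]
token_of = {p: secrets.token_hex(8) for p in participants}

# "1" = clicked the phishing link (noncompliant), "0" = did not click.
phishing_outcome = {
    token_of["emp_a"]: 0,
    token_of["emp_b"]: 1,
    token_of["emp_c"]: 0,
}
assert len(set(token_of.values())) == len(participants)  # tokens are unique
```

Because the mapping from employee to token is held separately from the outcome log, the third party tracking clicks never sees identities.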
All scales were reflective, using a seven-point Likert-type scale ranging from completely
disagree (1) to completely agree (7). A new measure was created for challenge, which
corresponded to the perception of the game’s level of difficulty. The actual phishing
behavior construct was a binary value (0 or 1).
Some items’ standardized regression weights were lower than 0.60 (e.g., JOY1 and PEOU1) and were
thus removed. After rerunning the model, all other factor loadings were higher than the
recommended 0.60 value. Next, the average variance extracted (AVE) values were checked
to ensure that all values exceeded 0.50.
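The AVE check can be made concrete with a small sketch, assuming the conventional formula in which AVE is the mean of the squared standardized loadings; the loadings below are illustrative, not the study's.

```python
# Sketch of the AVE convergent-validity check, assuming the conventional
# formula: AVE = mean of squared standardized loadings.
# The loadings below are illustrative, not values from the paper.
def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)


ave = average_variance_extracted([0.72, 0.81, 0.77, 0.69])
assert ave > 0.50  # the 0.50 threshold applied in the paper
```

An AVE above 0.50 means the construct explains, on average, more variance in its items than measurement error does.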
Thus, convergent validity and discriminant validity11 were established. Table D.1, in Supplemental
Appendix D, details the loadings. Table D.2 summarizes the discriminant validity and
AVEs for the model. Table D.7 presents the statistics used to assess the quality of the
measurement model’s measures. We confirmed that the Cronbach’s α values for all scales
were higher than 0.70 and found that multicollinearity was not an issue. In addition to
taking several measures to prevent common methods bias, we conducted two tests to
demonstrate that it was likely not a factor in our data (see “CMB and Multicollinearity” in
Supplemental Appendix D).
We used covariance-based structural equation modeling in Mplus 7 to test the model. This allowed for the theory and hypotheses to be assessed for model fit
and provided a logistic regression analysis for the dichotomous outcome variable (i.e., actual
phishing response behavior). Age, gender, experience, and education were included in the
analysis as controls for intentions and behaviors; the organizational security constructs of
TMSC, OSC, and OCM were added as counter-explanations. Figure 4 depicts the structural
model results. Table D.8 summarizes the full structural model testing details, which included
three stages of model testing: Model part 1 (HMSAM replication only), Model part 2
(extension to add coping and challenge), and Model part 3 (full model with controls and
theoretical counter-explanations). All HMSAM replications were supported, except joy to BI
and control to immersion. All hypotheses were supported (Hypothesis 4 is addressed last);
our results at month three were similar, but not as strong (Table D.3). When we modeled the
data for the e-mail treatment alone, the results were much worse (see Tables D.4 and D.5).
Interestingly, the e-mail treatment results worsened or remained the same between the initial
three-month period and the six-month period.
To test the hypothesized curvilinear relationship, we first ran the model with the original
indicators and then estimated the construct that had the proposed nonlinearity.
The squared term was then included as a variable in the SEM model, in which both the main effect and the squared term were
related to the same dependent variable. A similar approach was used in Moody et al. [70],
which tested a curvilinear model with covariance-based SEM. The variance inflation
factors (VIFs) increased and ranged from 1.945 to 9.453 with the model fit RMSEA
0.062, SRMR 0.069, CFI 0.929, and TLI 0.923 for the gamification treatment and
RMSEA 0.072, SRMR 0.078, CFI 0.905, and TLI 0.902 for the e-mail treatment.
Although the model fit and VIFs worsened, the values were still within the acceptable
ranges and are expected to worsen when including a squared term.
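The VIF inflation noted above is inherent to polynomial terms: a predictor and its square are strongly correlated, and for a pair of predictors VIF = 1/(1 − r²). The sketch below uses synthetic Likert-style data (not the study's) and also shows the common mitigation of mean-centering before squaring.

```python
import random

# Why VIFs rise when a squared term enters a model: x and x**2 are highly
# correlated, and for two predictors VIF = 1 / (1 - r**2).
# Synthetic data, not the study's.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5


def vif_pair(a, b):
    r = corr(a, b)
    return 1.0 / (1.0 - r ** 2)


random.seed(1)
x = [random.uniform(1, 7) for _ in range(500)]      # 7-point Likert-style scores
raw_vif = vif_pair(x, [v ** 2 for v in x])          # x vs. x**2: severely inflated

# Mean-centering before squaring is a standard mitigation:
mean_x = sum(x) / len(x)
xc = [v - mean_x for v in x]
centered_vif = vif_pair(xc, [v ** 2 for v in xc])
assert centered_vif < raw_vif                        # centering reduces collinearity
```

This is why elevated VIFs for a squared term, as reported above, are expected rather than alarming, provided they remain within acceptable bounds.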
[Figure 4. Structural model results, showing coping support for security issues (security response efficacy and security self-efficacy) to encourage security-related behavioral change, replications of previously established relationships, Model part 3 (grey hash: security-related and demographic controls), and R² values of 0.177 (PEOU), 0.123, 0.472, 0.505, 0.645, and 0.628. Among the controls on BI and actual behavior, age, gender, and education were nonsignificant; TMSC was significant (0.122* and 0.113*).]
Our results confirm that challenges must be matched to users’ abilities (i.e., their
learning and efficacy) and progress over time to sustain curiosity; otherwise, challenge can decrease immersion.
Beyond testing whether our gamified security training system could increase learning and
immersion as well as decrease employee susceptibility to phishing, a crucial piece of the analysis was the manipulation checks, as
they indicated whether the gamified system delivered on its instrumental goals to
improve security learning and compliance. This was confirmed by the two manipulation
checks. First, we compared the degree to which the small group of randomly selected
employees (those in the control group who did not participate in the gamified group or
e-mail group) and the gamified treatment group were successfully phished. The results
in Tables 1 and 2 indicate that there was a significant difference in the expected
direction. Strikingly, those with e-mail training performed no better than those who
received no training at all.
Second, differences between the gamified and e-mail groups on the model variables were also
examined using a multivariate analysis of variance (MANOVA).13 The group sizes were
different; thus, we carefully checked to ensure that we adhered to the assumptions of
MANOVA, including confirmation of Box M (see “Box M and MANOVA assumptions”
in Supplemental Appendix D). Table D.9 summarizes the means
and SDs comparing these two groups at the end of six months (Tables D.6 and D.7
provide the respective correlations). To compare the actual behaviors between the two
groups, the Z-score (2.2561, p < 0.05) was calculated, confirming that the two groups’
actual behaviors were significantly different and in the expected direction.
Our evaluation follows [1] and the DSR evaluation principles of Hevner et al. [41]. We also lean heavily on
inspiration found in following Liu et al. [59], Nunamaker et al. [76], Peffers et al. [81], and
Gregor and Hevner [38].
Table 1. Phishing outcomes for the treatment groups.

Group                      Phished (n = 149)     Not phished (n = 341)
Gamified group (n = 384)   105 (27.3 percent)    279 (72.7 percent)
E-mail group (n = 68)      27 (39.7 percent)     41 (60.3 percent)

Note: Participants received training through gamification or e-mail. The n’s represented in this table were used for the analysis after all data drops.
Table 2. Z-score comparisons of phishing outcomes across the treatment and control groups.

Comparison             Phished (Z-score)*
Gamified vs. e-mail    2.0664*
Control vs. e-mail     0.5041 n/s
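The gamified-versus-e-mail Z-score in Table 2 can be reproduced from the counts in Table 1, assuming a pooled two-proportion z-test was used (the paper does not spell out the formula, so this is our reconstruction):

```python
# Pooled two-proportion z-test, reconstructing the gamified vs. e-mail
# comparison in Table 2 from the phished counts in Table 1.
def two_prop_z(clicks1, n1, clicks2, n2):
    p1, p2 = clicks1 / n1, clicks2 / n2
    pooled = (clicks1 + clicks2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    return (p2 - p1) / se


z = two_prop_z(105, 384, 27, 68)  # phished counts: gamified vs. e-mail
print(round(z, 4))                 # 2.0664, matching Table 2
```

The positive sign reflects the higher phishing rate in the e-mail group (39.7 percent) than in the gamified group (27.3 percent).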
This research grew out of several opportunities: First, we were approached by a French international company
that wanted help improving their internal SETA program to increase organizational
security compliance. Second, we saw gamified security training as a way to improve
their training, but we observed that previous research efforts in this area were incomplete,
with too little focus on the design artifact, long-term data, objective behavioral assessment,
use with actual employees, and so on. Third, the recent gamification editorial by Liu et al.
[59] pointed to similar issues in the gamification literature, which has thus far largely failed
to bridge theory, design, and methodology. Fourth, previous gamification studies in
a security context have largely lacked a systematic DSR approach. Consequently, we
proposed that gamified security training represents a natural opportunity to apply
a DSR approach to bridge the related opportunities in design, theory, methodology, and
practice.
Accordingly, we designed and created a working gamified SETA system based on an iterative application of theory,
extant literature, prototyping, and feedback from the target organization. In the field, the
goal of our study was to extend and recontextualize kernel theory (i.e., HMSAM) to
explain how organizations can positively bring about security learning and associated
behavioral changes in employees, specifically in a gamified security training context. We
aimed to do so through the novel application of two parallel factors: (1) focusing on
positive interventions through gamified training (as opposed to traditional manipulations
of punishments, fear, and threats) and (2) improving employees’ security learning and
efficacy to strengthen their ability to cope with security challenges (in our context,
phishing). Together, these two factors were predicted to result in positive behavioral
change in employees through their increased intentions to follow security policies and
the alignment of their actual phishing response behaviors with the organizational security
policies in which they were trained.
To build the system, we followed the method advocated by Nunamaker et al. [76] and elaborated on by Peffers et al. [81].
This involved an extensive, iterative process based on the security gamification
literature, DSR, system development, and feedback from the target organization. In doing so,
we followed a rigorous but highly iterative process that can be best described in nine
steps: (1) established the gamified security training system as an artifact; (2) focused on
the design problem relevance; (3) created objectives for design evaluation; (4) applied
a DSR kernel theory that is contextualized to gamification; (5) proposed design
principles that bridge DSR design objectives and the DSR kernel theory; (6) established proof-
of-concept through multiple methods; (7) established proof-of-value through multiple
methods; (8) created a working foundation in which proof-in-use can be established
over time; and (9) evaluated the results rigorously according to multiple DSR evaluation
guidelines.
These steps follow the DSR process suggested by Peffers et al. [81], as detailed extensively earlier in the paper. Of the many
discoveries and design artifacts that were created through this process, perhaps the most
fundamental outcome was driven by the ideas from Liu et al. [59] that a key step of
designing a gamified system is to carefully choose the gamification design principles that
serve as the bridge between the system and meaningful engagement. This step establishes
the user-interaction processes that occur between user-system-user actors. We see this
approach as key to tying the design to a meaningful kernel theory (i.e., HMSAM) that
further explains meaningful engagement and measures that can be used to evaluate it. We
posit that these ideas are core to fostering proof-of-concept.
Our contextualized kernel theory consisted of two main components that further inspire design assumptions and principles:
(1) the importance of designing for motivation fulfillment to inspire meaningful and
engaged gamified systems use and (2) the importance of designing for coping support so
the users can deal with security issues and thus encourage security-related behavioral
change. These ideas also inspired the two design principles we carefully applied in
building our training artifact. These principles were systematically applied with the
literature and iterative design sessions, finally yielding a strong case for proof-of-
concept, as detailed in our earlier DSR section and the supplemental appendices.
To begin, we tested the artifact and the kernel theory with students. However, the key step in establishing proof-
of-value involved a long-term field experiment with actual employees using the gamified
security training system. Our overall proof-of-value is demonstrated in that the DSR
artifact worked as intended and as theorized. The organization’s SETA program was thus
improved. Going forward, we discuss proof-of-value in three respects: (1) in actual practice,
(2) in research, and (3) in theory.
Our proof-of-value in actual practice was demonstrated in multiple respects. First, our
long-term study demonstrated both strong ecological validity14 and meaningful
engagement [59]. Achieving meaningful engagement is an important factor in building gamified
information systems and should be addressed in view of both instrumental and
experiential benefits [59]. Not only did the participants use the gamified system over six months
during their normal course of work, but also a third party phished the unwitting
participants and control group to objectively assess whether they followed the phishing
response outlined by the organization’s security policies.
Second, employees’ security intentions and behaviors were positively influenced (and thus the SETA program improved), thereby demonstrating the utility of our extended
model as well as the value of the gamified design elements included in the system. Aside
from the strong manipulations and statistical results of our design, we received positive
feedback from the organizational leadership and participating employees. Again, we focused
on positive interventions and did not use typical approaches involving deterrence, threats, or fear.
Instead, our approach appealed to powerful motivations while building employees’ coping capabilities. We thus
demonstrated that a gamified security training system approach offers a new and unique
way to improve employee security learning and compliance and can be implemented
without the usual “carrots and sticks.” Although threats, fear, sanctions, and costs/benefits
may have an appropriate place in organizations [e.g., 11, 27, 52], these approaches also run
the risk of backfiring, causing reactance, a sense of injustice, or employee engagement in
“malicious compliance” or other microaggressions [63, 65]. Most employees prefer to
work in an enjoyable and supportive work environment rather than one laden with rules,
regulations, fear, and punishments. This is also an important consideration when choosing
the design characteristics of an organizational e-training system. We demonstrated that
adding the abovementioned design elements could improve a system’s efficacy and lead to
higher levels of motivation.
Our study provides empirical evidence that e-mail training and e-mail notifications
designed to help employees avoid phishing attacks might be largely futile. This was
particularly useful information for the French company with whom we worked, as they
used e-mail training extensively and thought it was more efficacious than our results
indicated. It was no surprise that this contrasting approach yielded far less motivation and
immersion; after all, it was not a gamified system. However, we were surprised that there
was no statistically significant difference in terms of the actual behavior of the e-mail
group and the pure control group. The e-mails were thoughtfully constructed, and they
used the same content and many of the same visuals as the gamification system; however,
employees who received the e-mail treatment had the same outcomes as those who
received no training. This is clear evidence that pushing security content to end users
via e-mail is not effective in this context; in contrast to a gamified training system, it
neither fosters motivations nor strengthens coping.
Through this engagement, we also realized that conducting training in short, spaced-out segments is more helpful
and natural to employees than long training segments. Traditional training in corporate
environments can be highly disruptive, time-consuming, unmotivating, and even
irritating. We suspect this is also likely true with gamification itself: it is more likely to remain
novel and fresh if introduced in short segments that provide welcome relief from normal
work duties.
In sum, proof-of-value in practice was demonstrated not only because the system worked as
intended, but also because meaningful pragmatic change was introduced to improve the client organization through improved systems and practices.
Aside from providing proof-of-value in actual practice, our value extends to challenging
and extending gamified security research. We do so by offering a study that addresses
compelling research gaps and opportunities in this area and uniquely involves all of the
following: (1) long-term data collection; (2) actual working employees in large,
international for-profit organizations; (3) control and treatment conditions; (4) a mix of
perceptual and objective behavioral measures; (5) a kernel theory
(i.e., HMSAM) that was contextualized to gamified organizational security training; (6) an
actual working gamified training system rigorously designed and developed through DSR
principles; and (7) actual empirical demonstration of “meaningful engagement” (e.g.,
improved IT security compliance) and interaction outcomes (e.g., measurable increased
immersion) [cf. 59].
Such a longitudinal approach has long been recommended for researching technology-related training in the
workplace [97]. As noted, related attempts at one-off, cross-sectional SETA [36] and gamified
security training [8] have failed to produce increased learning and behavioral change. We
argue that the likely reasons for this failure are simple: fulfilling motivations, inducing a state
of immersion, fostering learning, and developing coping responses all take time, so it is
exceedingly difficult to produce these outcomes over the course of a brief cross-sectional
study. We conclude that gamification should be studied using a long-term approach because
flow and immersion occur in stages rather than simultaneously [e.g., 48, 62].
Furthermore, a positive relationship between BI and employee actions in response to the phishing
attempts they were trained to recognize was confirmed. This finding is in line with
previous studies in non-security contexts, and researchers have called for additional
studies confirming the link between intentions and actual security behaviors in various
security contexts [22]. Our study is the first to examine this important relationship in
a gamified security context.
We not only were able to demonstrate an effective DSR kernel theory with our HMSAM
application, but we also did so in a manner that can contribute to theory development
beyond DSR. Our first key contribution here is the extension of HMSAM to a gamified
security training and compliance context. To do so, we added new constructs to the
original model (i.e., security self-efficacy, security response efficacy, challenge, learning,
and actual security behavior). The addition of new constructs was crucial, as it enabled us
to build a working prototype that could empirically establish proof-of-value.
The R2 for BI in the baseline replicated HMSAM model was 0.318; furthermore, our modeling extensions
(excluding the trivial contributions of the control variables) effectively doubled the R2 for BI
to 0.638. In terms of a pragmatic effect size, this change is statistically huge (ƒ2 = 0.884) and
pseudo F-test results show that the change is highly significant (F = 328.84, p < 0.001).15
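The reported effect size follows from Cohen's f², assuming the standard formula f² = (R²_full − R²_base)/(1 − R²_full):

```python
# Cohen's f-squared for the R-squared increase reported above,
# assuming the standard formula f2 = (R2_full - R2_base) / (1 - R2_full).
def f_squared(r2_base, r2_full):
    return (r2_full - r2_base) / (1 - r2_full)


print(round(f_squared(0.318, 0.638), 3))  # 0.884, matching the reported effect size
```

Applying the same formula to the later addition of the control constructs (0.638 → 0.645) gives roughly 0.020, consistent with the trivial ƒ2 = 0.019 reported below.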
We then added the security-related covariates TMSC, OSC, and OCM. Only TMSC was significant, and it contributed only an extremely small
increase in R2: with all of these additions, the R2 for BI went only from 0.638 to 0.645.
This change is statistically trivial (ƒ2 = 0.019).16 Because a pseudo F-test may not be strictly
correct and can have limited value, we also compared nested models by
calculating AIC/BIC values. We found fit statistics of 2,945.3 (Akaike’s
information criterion [AIC]) and 2,988.1 (Schwarz’s Bayesian information criterion [BIC])
for the gamified treatment and fit statistics of 2,231.4 (AIC) and 2,362.3 (BIC) for the e-mail
treatment. Overall, these tests provide further evidence that our theoretical contribution is
substantive and informs the design of efficacious gamification interventions.
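The nested-model comparison logic rests on the standard information-criterion formulas, AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL, with the lower value preferred. The log-likelihoods and parameter counts below are hypothetical, not the study's:

```python
import math

# Sketch of AIC/BIC-based nested-model comparison: both criteria penalize
# complexity, and the lower value wins. Values below are hypothetical.
def aic(log_lik, k):
    return 2 * k - 2 * log_lik


def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik


n = 384  # hypothetical sample size
base = {"log_lik": -1500.0, "k": 10}       # simpler nested model
extended = {"log_lik": -1470.0, "k": 14}   # model with added constructs

better = ("extended"
          if aic(extended["log_lik"], extended["k"]) < aic(base["log_lik"], base["k"])
          else "base")
assert better == "extended"  # here, the fit gain outweighs the 4 extra parameters
```

BIC penalizes extra parameters more heavily than AIC for any n ≥ 8, so agreement between the two criteria strengthens the conclusion.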
Our findings also yield insights that have the capacity to improve theory, research, and practice in gamified security
training. We showed that challenge did lead to immersion, but this finding comes with
a crucial theoretical limitation: if the challenge is not appropriate, the results might be
undermined. This has long been an underlying assumption of gamification and flow theory
[24]. Researchers in these fields [29] have explained that to experience flow (or immersion),
three conditions should be satisfied: clear goals, unambiguous feedback, and a balance of
challenges and skills. However, to the best of our knowledge, what constitutes an appropriate
challenge has never been empirically confirmed. Reviewer feedback on our paper led us to
realize that if this limiting assumption indeed holds, there is a point at which a challenge
becomes detrimental to fostering immersion; it becomes overly challenging and thus
inappropriate. If this continues to hold elsewhere, the relationship between challenges and
immersion should not be linear; instead, it should be curvilinear and ideally a quadratic,
inverted U-shaped curve that reaches a diminishing marginal return at a certain apex.
To test this, we compared two models: one presenting challenge and immersion as a linear relationship and another presenting it as
a curvilinear relationship. The curvilinear model was statistically superior, yielding
a statistically higher increase in R2 (a nearly twofold increase).17 This means that the
relationship between challenge and immersion is in fact ideally modeled as curvilinear.
When we visually depicted this relationship with fitted regression lines, the best fit was
shown by an inverted U-shaped curve (see Figure 5). This is the first empirical evidence
for two long-held notions: (1) good gamification delivery involves progressive challenges,
but (2) such challenges must be appropriate and thus there is a certain point at which
a challenge can overwhelm an end user and cause diminishing returns.
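The linear-versus-curvilinear comparison described above can be sketched on synthetic inverted-U data (not the study's): when the true relationship is quadratic, a quadratic specification raises R² substantially over a straight-line fit.

```python
# Sketch of the linear-vs-curvilinear model comparison on synthetic
# inverted-U data (coefficients are illustrative, not the study's).
def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot


xs = [i / 10 for i in range(1, 71)]        # challenge scores 0.1 .. 7.0
ys = [4 * x - 0.5 * x ** 2 for x in xs]    # noiseless inverted U

# Least-squares linear fit y = a + b*x via the closed-form normal equations.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
linear_hat = [a + b * x for x in xs]

quad_hat = ys  # the true quadratic reproduces the data exactly here
assert r_squared(ys, quad_hat) > r_squared(ys, linear_hat)
```

With real, noisy data the quadratic fit would not be perfect, but the same R²-increase logic underlies the nearly twofold improvement the authors report.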
We further suspected that challenge would not have a similarly beneficial relationship in the non-gamified e-mail
treatment. We thus conducted a similar analysis to test whether the relationship between
challenge and immersion in this case was curvilinear. We found two unexpected and
fascinating results. First, there was no significant difference in this context between linear
or curvilinear modeling;18 thus, we can conclude that in our non-gamified e-mail training
context, the relationship between challenge and immersion was linear. We also found that
this was a negative relationship, such that challenge was a detrimental factor (see Figure 6).
This makes sense, as an e-mail training environment does not offer the gamified features
that can turn a challenge into a positive factor, with the result that challenge in an e-mail
training environment simply becomes a source of frustration for many employees.
Moreover, in comparing the e-mail treatment results from the initial three months (Table D.4) to six months (Table D.5), we observed stable or decreasing
statistical power as time passed, meaning that the effects of the e-mail treatment diminished
over time. The time factor appeared to play an important role in the gamified systems (see
Figures 5 and 6) because it added a new dimension that should be carefully positioned and
built into the gamified system. The right balance among time, play, and learning should be
carefully designed and chosen so that users do not lose their motivation to learn and play.
Our conclusion thus is that the key to improving the French company’s security climate was
gamified security training that offered an appropriate challenge and thus led to greater
immersion. As Figure 5 shows, challenges are helpful to immersion, but only to a certain point.
If this finding holds, the theoretical implications are compelling.
Beyond proof-of-value and proof-of-concept, proof-of-use can also be applied to DSR. Proof-of-use is demonstrated when DSR
yields a generalizable solution and practitioners can successfully create and gain value from their
own instances of the generalizable solution.
Several limitations and opportunities arise from this research. The first obvious issue and opportunity here is that of generalizability.
Although we obtained a high degree of ecological validity by using an actual organization
and an actual gamified security training system, using one organization limits the
generalizability of our results. Each organization has slightly different and unique security and
compliance climates, just as the executives, managers, and employees vary widely. For
example, in some organizations, a “shadow IT culture” is widespread [90]. This could
produce different results, as the security expectations in these organizations are higher
than average. Such organizations could exhibit differences in how employees learn and
comply based on individual-level and national-level cultural differences.
Relatedly, our design process heavily involved the French organization, as we sought to receive meaningful feedback on the
prototype to align it as closely as possible to organizational realities and needs.
Consequently, the working prototype may need further modifications if adapted to
another organization. Regardless, our design principles and kernel theory need to be
further modified, applied, and tested, such that the broader practice community is further
positively influenced — not just the French organization.
Another limitation is that we cannot entirely know what outcome would have occurred had traditional manipulations of
extrinsic security motivations been used in this setting, as we intentionally did not use them.
Again, we took this approach because research indicates that extrinsic motivations are
inherently weaker than intrinsic motivations [30, 61] and can backfire in organizational
settings. However, mixed motivations are common and can be dealt with effectively in
systems use [61]; thus, it might be possible to create a security environment where the
outcomes are maximized through a careful combination of extrinsic and intrinsic
motivations. For example, a prize scheme for top performers (e.g., salary increase, bonus payment,
or recognition as “security employee of the quarter”) could facilitate further investigations
of whether and how these types of motivation influence behaviors. It is also important to
determine which kinds of extrinsic motivations are the most problematic for this setting.
Our study is also only one step in the kind of long-term and programmatic research called for by Nunamaker et al. [74]. For example, our work was
conducted over the course of six months with a continual infusion of fresh material. What
would happen if the use was extended and the fresh material ran out, such that novelty and
challenge diminished? At what point would learning and behavioral change deteriorate?
Further research should explore these issues and apply HMSAM to other gamification contexts.
Moreover, the extended gamified HMSAM model could likely be applied to other areas of
compliance training, such as those related to corporate governance, risk assessment, audit, and
other financial controls. Our extensions might also work in a compelling manner for iterative
IS development processes and requirements engineering.
Another promising avenue is to examine the individual design elements used in gamified security training. We studied an entire system, but each part should also receive
further attention. For example, each of the gamification elements in Table A.3 could be
studied as its own dependent variable with a highly contextualized model and series of studies.
Thus, researchers could examine the kinds of avatars that are more likely to enhance a loss of
self-consciousness and that are the most autotelic. Or the gamemaster could be the subject of
many models and studies. The lack of a gamemaster is a drawback of traditional e-training
systems that focus on completion rates or on quantity over quality. The gamemaster, who
plays the role of a “positive virtual mentor,” could motivate increased participation. Most
employees would likely prefer to be supported by a positive person or positive virtual mentor
than nagged by a negative virtual mentor. Based on the analysis of quantitative answers, the
gamemaster could provide an individual improvement activity in which learners could
improve their knowledge by taking additional quizzes/tests. Such individualized follow-up should make the gamification elements more effective, and the overall motivation to participate and learn should increase.
Consequently, Table A.3 alone points to many possibilities for programmatic research.
Another related avenue for future research is to examine in more detail how various levels
of media richness (e.g., the use of video, sound, or animation in the communication media)
may further influence the individual’s security learning process.
In conclusion, we show how a theory-driven design with selected gamified IT artifacts can improve extant organizational security
training systems. Namely, we show through a long-term field experiment that gamification
can be used to foster training systems that are less invasive of employees’ everyday work
routines, that provide intrinsic motivation to learn and comply with security efforts, and
that provide the efficacy necessary so that employees will actually comply. We also
demonstrated improvement in actual anti-phishing behaviors by hiring a third-party
firm that phished the employees as a natural experiment to test their reactions. We also
provide a novel empirical demonstration of the conceptual importance of “appropriate
challenge” in this context. We conclude that a mix of DSR, carefully contextualized kernel
theory, and long-term research in an empirical field setting is a promising way to
effectively implement gamification in organizations.
1. Generally, gamification is the application of game-like features to nongaming systems to help improve user engagement and outcomes, often through game elements, such as points, levels, leaderboards, and badges.
gamified training will exhibit greater knowledge acquisition than individuals who receive
non-gamified training or no training.” See page 20 of their text for statistical details.
experience — or “immersion” in the systems version [3, 22] — is the objective. This objective
can be achieved either through intrinsic or extrinsic motivation, but intrinsic motivation
tends to be stronger for an instrumental goal [61, 87]. Intrinsic motivation can be involved in
the task itself, whereas extrinsic motivation results from external factors (e.g., financial
rewards or career goals).
Another consideration was incorporating our unique gamified security learning context into theory [50]. This is challenging because
contextualization is about “linking observations to a set of relevant facts, events, or points
of view that make possible research and theory that form part of a larger whole” [86, p. 1].
Following Johns [50], we carefully evaluated, designed, and implemented the implications of
contextual appreciation for both theory building and practice to achieve the best possible
match between theoretical relevance and practical implications.
That is, the gamified system should foster (1) enjoyment, (2) interaction/engagement, and (3)
enhanced instrumental task outcomes [59].
Several gamification studies have been conducted in fields like computer science. Such studies are especially important for advancing gamification-related design and algorithms. However, most either used student subjects, did not
advance a “cohesive theoretical foundation,” or did not focus on achieving meaningful
engagement, as suggested by Liu et al. [59].
We deliberately avoided fostering too much interaction between employees to prevent distractions from their normal work. Thus, we did not use pie/bar charts, activity streams, kudos, social networking, team formation, cash incentives, personalized goals, or social support.
For example, video game play has been shown to induce gray matter increases, impacting spatial navigation, strategic planning, and working memory [56].
[56]. Another example is the use of video games by public safety and military organizations to
recruit and train soldiers and to treat their psychological disorders by literally improving their
coping and cognitive processes.
Based on the pilot study, two changes were made: (1) the number of times a participant could take a quiz was limited because some pilot
participants had used automatic clicking tools (such as AutoClicker) as a workaround to
earn additional points, and (2) a gamemaster role was implemented, as this role can be an
important motivational factor for users.
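The first pilot-driven change, capping quiz attempts to close the auto-clicker loophole, can be sketched as follows; the cap value, function names, and in-memory counter are illustrative assumptions, not details reported in the study.

```python
from collections import defaultdict

MAX_ATTEMPTS = 3  # hypothetical cap; the study does not report the exact limit

# Counts attempts per (user, quiz) pair.
attempts = defaultdict(int)

def may_take_quiz(user_id, quiz_id):
    """Deny further attempts once a user hits the per-quiz cap, closing the
    points-farming workaround observed with auto-clicking tools."""
    key = (user_id, quiz_id)
    if attempts[key] >= MAX_ATTEMPTS:
        return False
    attempts[key] += 1
    return True
```

A real deployment would persist the counter server-side so it cannot be reset by the client.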
As an accepted surrogate test to assess nonresponse bias, we tested to ensure that there was no
statistical difference between “early” and “late” respondents. We used time stamps of when
they accepted joining the project. We grouped early and late respondents and compared their
responses to the Likert-type scale questions using a MANOVA test. The results did not reveal
any statistical significance (F = 1.976, p = 0.313).
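As a sketch, the early/late split on acceptance timestamps might look like the following; the data values and the use of a single-item Welch's t statistic (in place of the authors' omnibus MANOVA across all Likert items) are purely illustrative assumptions.

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

# Hypothetical join timestamps (seconds since launch) and one Likert response each.
timestamps = [10, 20, 35, 50, 400, 420, 500, 610]
likert = [4, 5, 4, 5, 4, 5, 3, 4]

# Median split on acceptance time: "early" vs. "late" respondents.
cutoff = statistics.median(timestamps)
early = [r for t, r in zip(timestamps, likert) if t <= cutoff]
late = [r for t, r in zip(timestamps, likert) if t > cutoff]

t_stat = welch_t(early, late)  # small |t| would suggest no nonresponse bias
```

The same grouping would feed all Likert items jointly into a MANOVA in the actual procedure.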
We also considered whether there was any discriminant overlap in the items in the factor analysis, and
we consequently dropped two more items that yielded poor discriminant validity. We then
examined overall discriminant validity by placing the square root of the reflective construct’s
AVE on the diagonal and the correlations between the constructs below it. The square root of each AVE should be higher than that construct's correlations with all other latent constructs, which was the case.
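This Fornell-Larcker check can be sketched as follows; the construct names, loadings, and correlations below are hypothetical placeholders, not the study's values.

```python
import math

# Hypothetical standardized loadings per reflective construct.
loadings = {
    "JOY": [0.88, 0.91, 0.85],
    "IMM": [0.79, 0.83, 0.81],
    "BI":  [0.90, 0.92, 0.89],
}

# Hypothetical latent-construct correlations.
corr = {("JOY", "IMM"): 0.46, ("JOY", "BI"): 0.52, ("IMM", "BI"): 0.39}

def ave(ls):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l * l for l in ls) / len(ls)

sqrt_ave = {c: math.sqrt(ave(ls)) for c, ls in loadings.items()}

# Fornell-Larcker criterion: each construct's sqrt(AVE) must exceed its
# correlations with every other construct.
ok = all(sqrt_ave[a] > r and sqrt_ave[b] > r for (a, b), r in corr.items())
```

When `ok` holds for every pair, the diagonal of the correlation matrix dominates, which is the condition the passage describes.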
Note: BI = behavioral intentions to follow security policies; OSC = organization security communication; TMSC
= top management security commitment; OCM = organization computer monitoring.
The result was not significant.
Ecological validity is the degree to which the findings of a research study can be generalized to real-life settings (i.e., to solve real work tasks). Although this form of validity — unlike internal and external
validity — is not strictly required for a study to be valid, it is a particularly meaningful but
often overlooked consideration for research areas that are highly intertwined with practice,
such as security and privacy research [cf. 60].
The effect size of the contextualized improvements to HMSAM (step 2 of model building) was calculated as follows [18]: ƒ² (Cohen's effect size) = (R² of the extended model − R² of the baseline HMSAM)/(1 − R² of the extended model) = (.362 − .320)/(1 − .362) ≈ .066. In this case, the improvement compares favorably with effect sizes commonly reported in the organizational security literature. To test the statistical significance of this increase, we conducted a pseudo F-test as follows: ƒ² (Cohen's effect size) * (n – k – 1), where n is the sample size and
k is the number of independent variables. In our case, n = 384; and we conservatively set k to 11 for
all of the constructs preceding BI. This resulted in F = 328.84, p < 0.001.
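The effect-size arithmetic in this note can be reproduced directly; the helper below simply encodes Cohen's ƒ² formula with the R² values quoted in these notes.

```python
def cohen_f2(r2_full, r2_reduced):
    """Cohen's f-squared for the R-squared gain of a full model over a reduced one."""
    return (r2_full - r2_reduced) / (1 - r2_full)

# R² values from the notes: extended model (.362) vs. baseline HMSAM (.320).
f2_extension = cohen_f2(0.362, 0.320)  # rounds to .066

# The smaller increment reported separately (delta R² = .007), same denominator.
f2_small = 0.007 / (1 - 0.362)  # rounds to .011
```

By Cohen's conventional cutoffs (.02 small, .15 medium, .35 large), the first value is a small-to-medium effect and the second falls below the small-effect threshold.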
ƒ² (Cohen's effect size) = (ΔR²)/(1 − R² of the extended model) = (.007)/(1 − .362) ≈ .011, which is below Cohen's threshold for even a small effect size (.02 or greater).
The results are listed in the following table:
The results for Model 1 (linear) and Model 2 (curvilinear; quadratic) are listed in the following table:
The contribution of statistical learning theory. MIS Quarterly, 34, 3 (2010), 435–461.
approach. Technology Innovation Management Review, 5, 1 (2015), 5–14.
beliefs about information technology usage. MIS Quarterly, 24, 4 (2000), 665–694.
Processes, 50, 2 (1991), 179–211.
Cliffs, NJ: Prentice-Hall, 1980.
Model  R      R²    Adjusted R²  Std. Error of the Estimate  R² Change  F Change  Sig. F Change
2      .438b  .192  .188         .902                        .081       45.070    .000
2      .391b  .153  .064         .9903298                    .006       .137      .716
Psychologist, 28, 2 (1993), 117–148.
through gamification pedagogy. Contemporary Issues in Education Research, 7, 4 (2014),
291–298.
compliance training: Evidence from the lab and field. Journal of Information Systems, 30, 3
(2016), 119–133.
American Educational Research Journal, 21, 4 (1984), 755–765.
playing on attention, memory, and executive control. Acta Psychologica, 129, 3 (2008),
387–398.
Using fear appeals to engender threats and fear that motivate protective security behaviors.
MIS Quarterly, 39, 4 (2015), 837–864.
Integrating technology adoption and collaboration research. Journal of Management
Information Systems, 27, 2 (2010), 9–54.
existing concepts? In Proceedings of International Conference on Information Systems, Fort
Worth, US, 2015.
empirical study of rationality-based beliefs and information security awareness. MIS
Quarterly, 34, 3 (2010), 523–548.
insiders’ psychological capital on information security threat and coping appraisals.
Computers in Human Behavior, 68, March (2017), 190–209.
An awareness-motivation-capability perspective. Journal of Computer Information Systems,
58, 4 (2018), 312–324.
pliance: Stick or carrot approach? Journal of Management Information Systems, 29, 3 (2012),
157–188.
approach for measuring interaction effects: Results from a Monte Carlo simulation study
and an electronic mail emotion/adoption study. Information Systems Research, 14, 2 (2003),
189–217.
Smith, 2007.
games. Computers in Entertainment (CIE), 6, 2 (2008), 20.
programs and individual characteristics on end user security tool usage. Journal of
Information System Security, 5, 3 (2009),
directions for behavioral information security research. Computers & Security, 32, (2013),
90–101.
New York, NY: Basic Books, 1997.
25. Cyr, D; Head, M; and Ivanov, A. Perceived interactivity leading to e-loyalty: Development of
Studies, 67, 10 (2009), 850–869.
literature: Making sense of the disparate findings. European Journal of Information Systems,
20, 6 (2011), 643–658.
impact on information systems misuse: A deterrence approach. Information Systems Research,
20, 1 (2009), 79–98.
information security policies: A multilevel, longitudinal study. Information Systems Journal,
29, 1 (2019), 43–69.
Work and Games. Washington, DC: American Sociological Association, 1977.
New York, NY: Plenum Press, 1985.
32. Deterding, S; Dixon, D; Khaled, R; and Nacke, LE. From game design elements to gameful-
Envisioning Future Media Environments, Tampere, Finland, 2011, pp. 9–15.
Martínez-Herráiz, J-J. Gamifying learning experiences: Practical implications and outcomes.
Computers & Education, 63, April (2013), 380–392.
one are elevated during competition, and testosterone is related to status and social con-
nectedness with teammates. Physiology & Behavior, 87, 1 (2006), 135–143.
of background music and immersive display systems on memory for facts learned in an
educational virtual environment. Computers & Education, 58, 1 (2012), 490–500.
Quarterly, 28, 1 (2005), 54–57.
motivation theory. Journal of Applied Social Psychology, 30, 2 (2000), 407–429.
impact. MIS Quarterly, 37, 2 (2013), 337–355.
U-shaped relationships in strategy research. Strategic Management Journal, 37, 7 (2016),
1177–1195.
penalties, pressures and perceived effectiveness. Decision Support Systems, 47, 2 (2009),
154–165.
MIS Quarterly, 28, 1 (2004), 75–105.
threat research. Information Systems Frontiers, 19, 2 (2015), 1–20.
for context-specific theorizing in information systems research. Information Systems Research,
25, 1 (2013), 111–136.
concepts through digital game-based learning: The effects of self-explanation principles. The
Asia-Pacific Education Researcher, 21, 1 (2012), 71–82.
controls in information security policy effectiveness. Information Systems Research, 26, 2
(2015), 282–300.
security policies: The critical role of top management and organizational culture. Decision
Sciences, 43, 4 (2012), 615–660.
learning performance in web-based problem-solving activities. Computers & Education, 59, 4
(2012), 1246–1256.
defining the experience of immersion in games. International Journal of Human-computer
Studies, 66, 9 (2008), 641–661.
mindfulness techniques. Journal of Management Information Systems, 34, 2 (2017), 597–626.
Management Review, 31, 2 (2006), 386–408.
computer skills acquisition: Toward refinement of the model. Information Systems Research,
11, 4 (2000), 402–417.
work: Leveraging threats to the human asset through sanctioning rhetoric. MIS Quarterly, 39,
1 (2015), 113–134.
for Training and Education. San Francisco, US: John Wiley & Sons, 2012.
Bench, C; and Grasby, P. Evidence for striatal dopamine release during a video game. Nature,
93, 6682 (1998), 266–268.
56. Kühn, S; Gleich, T; Lorenz, R; Lindenberger, U; and Gallinat, J. Playing Super Mario induces
video game. Molecular Psychiatry, 19, 2 (2014), 265–271.
for phish. ACM Transactions on Internet Technology, 10, 2 (2010), 1–31.
gaming elements. Journal of Management Information Systems, 30, 4 (2014), 115–150.
design and research of gamified information systems. MIS Quarterly, 41, 4 (2017), 1011–1034.
the information systems (IS) artefact: Proposing a bold research agenda. European Journal of
Information Systems, 26, 6 (2017), 546–563.
continuance model (misc) to better explain end-user system evaluations and continuance
intentions. Journal of the Association for Information Systems, 16, 7 (2015), 515–579.
seriously: Proposing the hedonic-motivation system adoption model (HMSAM). Journal of
the Association for Information Systems, 14, 11 (2013), 617–671.
explain opposing motivations to comply with organizational information security policies.
Information Systems Journal, 25, 5 (2015), 433–463.
of Management Information Systems, 34, 3 (2017), 863–901.
to deter reactive computer abuse following enhanced organisational information security
policies: An empirical study of the influence of counterfactual reasoning and organisational
trust. Information Systems Journal, 25, 3 (2015), 193–230.
one’s intrinsic motivation: Evidence from event-related potentials. Frontiers in Neuroscience,
11, (2017), 131.
employee training: Mediating influences of self-deception and self-efficacy. Journal of
Applied Psychology, 82, 5 (1997), 764.
the development of self-efficacy: Implications for training effectiveness. Personnel Psychology,
46, 1 (1993), 125.
reinforces one’s intrinsic motivation to win. International Journal of Psychophysiology, 110,
December (2016), 102–108.
between trust, distrust, and ambivalence in online transaction relationships using polynomial
regression analysis and response surface analysis. European Journal of Information Systems,
26, 4 (2017), 379–413.
Proceedings of the 16th International Academic MindTrek Conference, Tampere, Finland,
2012, pp. 23–26.
Switzerland: Springer International Publishing, 2015, pp. 1–20.
The case of designing electronic feedback systems. European Journal of Information Systems,
25, 4 (2016), 303–316.
impact through systematic programs of research. MIS Quarterly, 41, 2 (2017), 335–351.
Achieving both rigor and relevance in information systems research. Journal of
Management Information Systems, 32, 3 (2015), 10–47.
research. Journal of Management Information Systems, 7, 3 (1990), 89–106.
Transactions on Management Information Systems, 2, 4 (2011), 1–12.
Organization Science, 11, 5 (2000), 538–550.
orientation nomological net. Journal of Applied Psychology, 92, 1 (2007), 128.
facilitate broadly participative information systems planning. Journal of Management
Information Systems, 20, 1 (2003), 51–85.
methodology for information systems research. Journal of Management Information Systems,
24, 3 (2007), 45–77.
screening system for automated risk assessment using nuanced facial features. Journal of
Management Information Systems, 34, 4 (2017), 970–993.
motivation to protect organizational information assets. Journal of Management Information
Systems, 32, 4 (2015), 179–214.
organizational information assets: Development of a systematics-based taxonomy and theory
of diversity for protection-motivated behaviors. MIS Quarterly, 37, 4 (2013), 1189–1210.
of consumer experiences. Advances in Consumer Research, 42, (2014), 352–356.
research. Journal of Organizational Behavior, 22, 1 (2001), 1–13.
social development, and well-being. American Psychologist, 55, 1 (2000), 68.
software license. Journal of Management Information Systems, 25, 3 (2008), 207–240.
engagement as a function of environmental complexity in high school classrooms. Learning
and Instruction, 43, (2016), 52–60.
(2014), 274–283.
information systems security policy violations. MIS Quarterly, 34, 3 (2010), 487–502.
for the gamification of non-gaming systems. Association for Information Systems Transactions
on Human-Computer Interaction, 10, 3 (2018), 129–163.
controlled screening systems for detecting information purposely concealed by individuals.
Journal of Management Information Systems, 31, 3 (2014), 106–137.
information systems. Journal of Management Information Systems, 29, 4 (2013), 263–289.
violations: Increasing perceptions of accountability through the user interface. MIS
Quarterly, 39, 2 (2015), 345–366.
technology: Toward a unified view. MIS Quarterly, 27, 3 (2003), 425–478.
investigation of the effect of mood. Organizational Behavior and Human-Decision Processes,
79, 1 (1999), 1–28.
mobile device usage. European Journal of Information Systems, 15, 3 (2006), 292–300.
Association for Information Systems, 17, 11 (2016), 759.
antecedents and consequences. Information Systems Research, 28, 2 (2017), 378–396.
absolute and restrictive deterrence in inspiring new directions in behavioral and organiza-
tional security. Journal of the Association for Information Systems, 19, 12 (2018), 1187–1216.
abuse. MIS Quarterly, 37, 1 (2013), 1–20.
MARIO SILIC is a researcher at the Institute of Information Management, University of St. Gallen, Switzerland. He holds a Ph.D. from that university.
Dr. Silic’s research focuses on information security, open source software, human-computer inter-
action and mobile commerce. He has published in Journal of Management Information Systems;
Security Journal; Information & Management; Computers & Security; and other journals.
PAUL BENJAMIN LOWRY is the Suzanne Parker Thornhill Chair Professor and Eminent Scholar in Business Information Technology at the Pamplin
College of Business at Virginia Tech. He received his Ph.D. in Management Information Systems
from the University of Arizona. His research interests include organizational and behavioral
security and privacy; online deviance, online harassment, and computer ethics; human-computer
interaction, social media, and gamification; and business analytics, decision sciences, and innovation. His work has been published in the Journal of Management
Information Systems (JMIS), MIS Quarterly, Information Systems Research, Journal of the AIS, and
other journals. He is a member of the Editorial Board of JMIS, department editor at Decision
Sciences Journal, and senior or associate editor of several other journals. He has also served multiple
times as track co-chair at the International Conference on Information Systems, European
Conference on Information Systems, and Pacific Asia Conference on Information Systems.
Gamification Literature Review
DSR Applied To Gamified Security Training
Overview of Our DSR Approach
Establish the Gamified Design as an Artifact
Focus on Design Problem Relevance
Create Objectives for Design Evaluation
Apply a DSR Kernel Theory Contextualized to Gamification
Propose Guiding Design Principles to Bridge DSR Design Objectives and the DSR Kernel Theory
Establish Proof-of-Concept
Establish Proof-of-Value
Core Kernel Theory Assumptions for Achieving Immersion
Infusing Learning and Security Coping into Our Context
Coping and Behavioral Change
Balancing Skills and Challenges
Fulfilling Motivations for Behavioral Change
Modeling Counter-explanations Through Control Variables
Procedures for Design Evaluation for Proof-of-Value
Pilot Study for Proof-of-Value
Main Study Design for Proof-of-Value in Actual Use
Gamified System and Procedures
Measures for Design Evaluation
Analysis for Final Proof-of-Value
Measurement Model
Structural Model Results
Manipulation Checks of Instrumental Goals
Discussion
Recap of Our General DSR Study Goals
Recap of Our DSR Approach
Establishing Proof-of-Concept
Establishing Overall Proof-of-Value
Establishing Proof-of-Value in Actual Practice
Establishing Proof-of-Value in Research
Establishing Proof-of-Value in Theory
Research Agenda to Establish Proof-of-Use
Conclusion
Notes
References