Major: Cybersecurity
Finding the Conversation
What are the major academic journals and professional/scholarly
organizations in your field?
As mentioned in this week’s lecture, an important part of being an academic is
knowing where the conversation about your field is located. Part of the goal of this class
is to help you find out where scholars in your particular field are having conversations.
Knowing this will be useful in your upper division classes because you will likely be
required to find scholarly articles in your field.
A basic Google search can get you started. I searched “What are the major
scholarly organizations for criminal justice” and I found this nice list:
• Criminal Justice | Professional Organizations (http://www.unco.edu/hss/criminology-criminal-justice/current-students/get-involved.aspx)
I searched “What are the primary scholarly journals in criminal justice” and
found this site:
• Criminal Justice and Criminology Journals (http://www.tamut.edu/academics/tjordan/rsrchhlp/cjjournalsv2.htm)
Try some basic searches with your academic field and see what results come up.
There are many places you can look to find out this information. Another place
to look is in the works cited of your textbooks. Note which journals you often see in
citations. If you have a journal article that you have studied in a class before, look at the
citations in that article. Which journals do you see again and again? It is likely that those
journals are important in your field.
Journals are often associated with scholarly or professional organizations.
For example, the primary organization of scholars interested in Composition Pedagogy
(the study of how best to teach college composition courses) is called the Conference on
College Composition and Communication, and it has a related journal called College
Composition and Communication (the CCC).
Assignment:
Find one professional organization in your field. Link to the website. List the
following information:
• What is the organization’s mission?
▪ Look for a mission statement on the first page, a page called “mission” or maybe
“history”
• What kind of scholars are members of the organization?
• In what areas/fields does the organization specialize?
Next, find three scholarly journals in your field. List their names. Briefly report
on one. Provide a link to the journal’s webpage and answer the following questions:
• What kinds of articles does the journal publish?
▪ Does the journal state its focus?
• What citation style is used in the journal?
• Is the journal peer reviewed?
• Does the NU library carry the journal?
Journal of Cybersecurity Research – June Quarter 2016 Volume 1, Number 1
Copyright by author(s); CC-BY. The Clute Institute
Contemporary Issues in Cybersecurity
Harry Katzan, Jr., Director, Institute for Cybersecurity Research, USA
ABSTRACT¹
The effectiveness of modern computer applications is normally regarded as a function of five basic attributes of
secure computer and information systems: availability, accuracy, authenticity, confidentiality, and integrity. The
concepts generally apply to government, business, education, and the ordinary lives of private individuals. The
considerations normally involve extended Internet applications – hence the name Cybersecurity. Achieving and
maintaining a secure cyberspace is a complicated process, and some of the concerns involve personal identity,
privacy, intellectual property, the critical infrastructure, and the sustainability of organizations. The threats to a
secure operating infrastructure are serious and profound: cyber terrorism, cyber war, cyber espionage, and cyber
crime, to which the technical community has responded with safeguards and procedures, usually supplied by the
private sector. This paper provides a comprehensive view of security in the cyber domain with the ultimate
objective of developing a science of cybersecurity.
Keywords: Cybersecurity; Information Assurance; Critical Infrastructure Protection
INTRODUCTION
The Internet is the newest form of communication between organizations and people in modern
society. Everyday commerce depends on it, and individuals use it for social interactions, as well as
for reference and learning. To some, the Internet is a convenience for shopping, information retrieval,
and entertainment. To others, such as large organizations, the Internet makes national and global expansion cost
effective and allows disparate groups to profitably work together through reduced storage and communication costs.
It gives government entities facilities for providing convenient service to constituents. The Internet is also efficient:
it can usually deliver results on a wide variety of subjects within seconds, where obtaining the same results would
once have taken far longer (Katzan, 2012).
From a security perspective, the use of the term “cyber” generally means more than just the Internet, and usually
refers to the use of electronics to communicate between entities. The subject of cyber includes the Internet as the
major data transportation element, but can also include wireless, fixed hard wires, and electromagnetic transference
via satellites and other devices. Cyber elements incorporate networks, electrical and mechanical devices, individual
computers, and a variety of smart devices, such as phones, tablets, pads, and electronic game and entertainment
systems. The near future portends road vehicles that communicate with one another, as well as driverless automobiles. A reasonable view
would be that cyber is the seamless fabric of the modern information technology infrastructure that enables
organizations and private citizens to sustain most aspects of modern everyday life.
Cyber supports the commercial, educational, governmental, and critical national infrastructure. Cyber facilities are
pervasive and extend beyond national borders. As such, individuals, organizations, and nation-states can use cyber
for productive and also destructive purposes. A single individual or a small group can use cyber for commercial
gain or surreptitious invasion of assets. Activities in the latter category are usually classed as penetration and
include attempts designed to compromise systems that contain vital information. In a similar vein, intrusion can also
affect the day-to-day operation of critical resources, such as private utility companies.
Interconnectivity between elements is desirable and usually cost effective, so that a wide variety of dependencies
have evolved in normal circumstances, and cyber intrusions have emerged. Thus, a small group of individuals can
compromise a large organization or facility, which is commonly known as an asymmetric threat against which
methodological protection is necessary. In many cases, a single computer with software obtained over the Internet
¹ This article is a partial reprint from the Journal of Service Science, 5(2), 71-78.
can do untold damage to a business, utility, governmental structure, or personal information. Willful invasion of the
property of other entities is illegal, regardless of the purpose or intent. However, the openness of the Internet often
makes it difficult to identify and apprehend cyber criminals – especially when the subject’s illegal activities span
international borders.
CYBERSECURITY OPERATIONS
It is well established that cybersecurity is a complicated and complex subject encompassing computer security,
information assurance, comprehensive infrastructure protection, commercial integrity, and ubiquitous personal
interactions. Most people look at the subject from a personal perspective. Are my computer and information secure
from outside interference? Is the operation of my online business vulnerable to outside threats? Will I get the item I
ordered? Are my utilities safe from international intrusion? Have I done enough to protect my personal privacy?
Are my bank accounts and credit cards safe? How do we protect our websites and online information systems from
hackers? Can my identity be stolen? The list of everyday concerns that people have over the modern system of
communication could go on and on. Clearly, concerned citizens and organizations look to someone or something
else, such as their Internet service provider or their company or the government, to solve the problem and just tell
them what to do.
So far, it hasn’t been that simple and probably never will be. The digital infrastructure based on the Internet that we
call cyberspace is something that we depend on every day for a prosperous economy, a strong military, and an
enlightened lifestyle. Cyberspace, as a concept, is a virtual world synthesized from computer hardware and
software, desktops and laptops, tablets and cell phones, and broadband and wireless signals that power our schools,
businesses, hospitals, government, utilities, and personal lives through a sophisticated set of communication
systems, available worldwide. However, the power to build also provides the power to disrupt and destroy. Many
persons associate cybersecurity with cyber crime, since it costs persons, commercial organizations, and governments
more than $1 trillion per year (Obama, 2009). However, there is considerably more to cybersecurity than cyber
crime, so it is necessary to start off with a few concepts and definitions.
Cyberspace has been defined as the interdependent network of information technology infrastructure, and includes
the Internet, telecommunication networks, computer systems, and embedded processors and controllers in critical
industries (The White House, 2008). Alternatively, cyberspace is often regarded as any process, program, or protocol
relating to the use of the Internet for data processing, transmission, or use in telecommunication. As such, cyberspace
is instrumental in sustaining the everyday activities of millions of people and thousands of organizations worldwide.
The key terminology is that in a security event, a subject executes the crime against an object and that both entities
incorporate computer and networking facilities.
CYBER ATTACKS
Cyber attacks can be divided into four distinct groups (Shackelford, 2012): cyber terrorism, cyber war, cyber crime,
and cyber espionage. It would seem that cyber crime and cyber espionage are the most pressing issues, but the
others are just offstage. Here are some definitions (Lord & Sharp, 2011):
Cyber crime is the use of computers or related systems to steal or compromise confidential information for criminal
purposes, most often for financial gain.
Cyber espionage is the use of computers or related systems to collect intelligence or enable certain operations,
whether in cyberspace or the real world.
Cyber terrorism is the use of computers or related systems to create fear or panic in a society and may result in
physical destruction by cyber agitation.
Cyber war consists of military operations conducted within cyberspace to deny an adversary, whether a state or non-
state actor, the effective use of information systems and weapons, or systems controlled by information technology,
in order to achieve a political end.
As such, cybersecurity has been identified as one of the most serious economic and national security challenges
facing the nation (The White House, n.d.). There is also a personal component to cybersecurity. The necessity of
having to protect one’s identity and private information from outside intrusion is a nuisance resulting in the use of
costly and inconvenient safeguards.
CYBERSPACE DOMAIN, ITS ELEMENTS AND ACTORS
Cyberspace is a unique domain that is operationally distinct from the other operational domains of land, sea, air, and
space. It provides, through the Internet, the capability to create, transmit, manipulate, and use digital information
(McConnell, 2011). The digital information includes data, voice, video, and graphics transmitted over wired and
wireless facilities between a wide range of devices that include computers, tablets, smart phones, and control
systems. The Internet serves as the transport mechanism for cyberspace. The extensive variety of content is
attractive to hackers, criminal elements, and nation states with the objective of disrupting commercial, military, and
social activities. Below is a list of areas at risk in the cyberspace domain (Stewart, 2009). Many cyber events,
classified as cyber attacks, are not deliberate and result from everyday mistakes and poor training. Others result
from disgruntled employees. Unfortunately, security metrics include non-serious as well as serious intrusions, so
that the cybersecurity threat appears to be overstated in some instances. This phenomenon requires that we
concentrate on deliberate software attacks and how they are in fact related, since the object is to develop a
conceptual model of the relationship between security countermeasures and vulnerabilities.
Areas at Risk in the Cyberspace Domain:
• Commerce
• Industry
• Trade
• Finance
• Security
• Intellectual property
• Technology
• Culture
• Policy
• Diplomacy
Many of the software threats can be perpetrated by individuals or small groups against major organizations and
nation-states – referred to as asymmetric attacks, as mentioned previously. The threats are reasonably well known
and are summarized below. It’s clear that effective countermeasures are both technical and procedural, in some
instances, and must be linked to hardware and software resources on the defensive side. The security risks that
involve computers and auxiliary equipment target low-end firmware or embedded software, such as BIOS, USB
devices, cell phones and tablets, and removable and network storage. Operating system risks encompass service
packs, hotfixes, patches, and various configuration elements. Established countermeasures include intrusion
detection and handling systems, hardware and software firewalls, and antivirus and anti-spam software.
Security Threats:
• Privilege escalation
• Virus
• Worm
• Trojan horse
• Spyware
• Spam
• Hoax
• Adware
• Rootkit
• Botnet
• Logic bomb
The cybersecurity network infrastructure involves unique security threats and countermeasures. Most of the threats
relate to the use of out-of-date network protocols, specific hacker techniques, such as packet sniffing, spoofing,
phishing and spear phishing, man-in-the-middle attacks, denial-of-service procedures, and exploiting vulnerabilities
related to domain name systems. Countermeasures include hardware, software, and protective procedures of various
kinds. Hardware, software, and organizational resources customarily execute the security measures. There is much
more to security threats and countermeasures, and the information presented here gives only a flavor to the subject.
There is an additional category of threats and countermeasures that primarily involves end-users and what they are
permitted to do. In order for a threat agent to infiltrate a system, three elements are required: network presence,
access control, and authorization. This subject is normally covered as the major features of information assurance
and refers to the process of “getting on the system,” such as the Internet or a local-area network. A threat agent
cannot address a system if the computer is not turned on or a network presence is not possible. Once an end user is
connected to the computer system or network, then access control and authorization take over. It has been estimated
that 80% of security violations originate at the end-user level (Stewart, 2009). Access control concerns the
identification of the entity requesting accessibility and whether that entity is permitted to use the system.
Authorization refers to precisely what that entity is permitted to do, once permitted access. There is a high degree of
specificity to access-control and authorization procedures. For example, access control can be based on something
the requestor knows, a physical artifact, or a biometric attribute. Similarly, authorization can be based on role,
group membership, level in the organization, and so forth. Clearly, this category reflects considerations that the
organization has control over and, as such, constitutes security measures that are self-postulated.
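The two-stage gate described above, access control (may this entity use the system at all?) followed by authorization (what exactly may it do?), can be sketched in a few lines of Python. The user store, roles, and permissions below are hypothetical illustrations, not anything from the article:

```python
# Minimal sketch of access control vs. authorization.
# All names, passwords, and roles here are invented for illustration.

USERS = {
    "alice": {"password": "s3cret", "role": "admin"},
    "bob": {"password": "hunter2", "role": "analyst"},
}

PERMISSIONS = {
    "admin": {"read", "write", "configure"},
    "analyst": {"read"},
}

def access_control(user: str, password: str) -> bool:
    """Identify the entity requesting access and decide whether it may use the system."""
    record = USERS.get(user)
    return record is not None and record["password"] == password

def authorized(user: str, action: str) -> bool:
    """Once access is granted, decide what the entity is permitted to do (role-based here)."""
    role = USERS[user]["role"]
    return action in PERMISSIONS.get(role, set())

if access_control("bob", "hunter2"):
    print(authorized("bob", "read"))   # True: analysts may read
    print(authorized("bob", "write"))  # False: analysts may not write
```

The sketch also mirrors the article's point that authorization can hinge on role or group membership; swapping the password check for a physical token or biometric attribute would change only `access_control`, not `authorized`.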
CYBERSECURITY COLLABORATION
A collaboration group exists when a set of service providers P supplies a totality of services for a specific
operational domain to a set of clients C. Not every provider pi performs the same service but the members of P can
collectively supply all of the service needed for that domain. The client set C constitutes the functions in the
operational system that require protection.
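The set-based definition above (no single provider pᵢ supplies every service, but the members of P collectively cover the domain) can be rendered as a small coverage check. The provider and service names are invented for illustration:

```python
# Sketch of the collaboration-group condition: the union of services over the
# provider set P must cover everything the client set C requires.
# Provider and service names below are hypothetical.
providers = {
    "firewall_vendor": {"packet_filtering"},
    "ids_vendor": {"intrusion_detection"},
    "auth_service": {"access_control", "authentication"},
}
clients_need = {"packet_filtering", "intrusion_detection", "access_control"}

# No single provider covers clients_need, but their union does.
supplied = set().union(*providers.values())
is_collaboration_group = clients_need <= supplied
print(is_collaboration_group)  # True
```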
The controls that constitute a cybersecurity domain form a collaboration group. Diverse elements of hardware and
software are used for network and operating system security. Clearly, processes are necessary for gaining network
presence, access control to a given resource, and user authentication. Intrusion detection and prevention systems
(IDPS) are implemented to perform continuous monitoring and cyber protection. Access roles and operational rules
are developed to facilitate use of cybersecurity procedures and elements.
When a client adopts cybersecurity principles for network presence, access control, and authentication, for example,
it applies the inherent methods for and by itself, thereby assuming the dual role of provider and client. Similarly,
when an organization installs a hardware or software firewall for network protection, it is effectively applying a
product for its own security.
In a security system, security controls exchange information and behavior in order to achieve mutually beneficial
results. As security systems become more complex, the security entities adapt to optimize their behavior – a process
often referred to as evolution (Mainzer, 1997). Differing forms of organization emerge such that the system exhibits
intelligent behavior based on information interchange and the following nine properties: emergence, co-evolution,
sub-optimal, requisite variety, connectivity, simple rules, self-organization, edge of chaos, and nestability. Systems
of this type are usually known as complex adaptive systems (Katzan, 2008). Complex adaptive systems are often
known as “smart systems,” and cybersecurity researchers are looking at the operation of such systems as a model for
the design of cybersecurity systems that can prevent attacks through the exchange of information between security
elements.
DISTRIBUTED SECURITY
The major characteristic of a cybersecurity system designed to prevent and mediate a cyber attack is that the totality
of security elements in a particular domain are organized into a smart service system. This characteristic refers to
the facility of cyber elements to communicate on a real-time basis in response to cyber threats. Currently, threat
determination is largely manual and human-oriented. An intrusion detection system recognizes an intrusion and
informs a security manager. That manager then contacts other managers via email, personal contact, or telephone to
warn of the cyber threat. In a smart cybersecurity system, the intrusion detection software would isolate the cyber
threat and automatically contact other elements in the domain to defend their system. Thus, the security service
would handle intruders in a manner similar to the way biological systems handle analogous invasions: recognize the
threat; attempt to neutralize it; and alert other similar elements.
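The recognize/neutralize/alert loop described above can be sketched as message passing among security elements in a domain. The element names and threat record are hypothetical, a minimal illustration of the automated propagation the article contrasts with manual, human-oriented warning:

```python
# Sketch of "smart" distributed response: one element detects a threat,
# attempts to neutralize it, then automatically alerts every peer in its
# domain (no emails or phone calls between security managers).
# Element and threat names are invented for illustration.
class SecurityElement:
    def __init__(self, name: str):
        self.name = name
        self.known_threats = set()

    def detect(self, threat: str, peers: list["SecurityElement"]) -> None:
        self.known_threats.add(threat)
        self.neutralize(threat)
        for peer in peers:  # automatic, machine-to-machine alerting
            peer.receive_alert(threat)

    def neutralize(self, threat: str) -> None:
        print(f"{self.name}: neutralizing {threat}")

    def receive_alert(self, threat: str) -> None:
        self.known_threats.add(threat)  # peers pre-arm before being hit

domain = [SecurityElement(n) for n in ("ids", "firewall", "mail_gateway")]
domain[0].detect("worm-x", domain[1:])
# Every element in the domain now knows about the threat.
```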
In a definitive white paper on distributed security, McConnell (2011) recognizes the need for cyber devices to work
together in near real-time to minimize cyber attacks and defend against them. This is a form of continuous
monitoring, referred to as a cyber ecosystem, in which relevant participants interact to provide security and
maintain a persistent state of security. Clearly, a cyber ecosystem would establish a basis for cybersecurity through
individually designed hierarchies of security elements, referred to as security devices. Ostensibly, security devices
would be programmed to communicate in the event of a cyber attack. The conceptual building blocks of an
ecosystem are automation, interoperability, and authentication. Automation refers to the notion of security devices
being able to detect intrusions and respond to other security devices without human intervention. Thus, the
security ecosystem could behave as a security service and provide speed in the activation of automated
prevention systems. Interoperability refers to the ability of the cyber ecosystem to incorporate differing
assessments, hardware facilities, and organizations with strategically distinct policy structures. Authentication refers
to the capability to extend the ecosystem across differing network technologies, devices, organizations, and
participants.
Thus, the cyber ecosystem responds as a service system to requests for security service from participants that are
members of the ecosystem: private firms, non-profit organizations, governments, individuals, processes, and
cyber devices comprising computers, software, and communications equipment.
MONROE DOCTRINE FOR CYBERSECURITY
Internet governance refers to an attempt at the global level to legislate operations in cyberspace, taking into
consideration the economic, developmental, legal, political, and cultural interests of its stakeholders
(Conway, 2007). A more specific definition would be the development and application by governments and the
private sector of shared principles, norms, rules, decision-making, and programs that determine the evolution and
use of the Internet (Conway, 2007). Internet governance is a difficult process because it encompasses websites,
Internet service providers, hackers, and activists, involving differing forms of content and operational intent ranging
from pornography and terrorist information to intrusion and malicious content. Cybersecurity is a complex form of
service that purports to protect against intrusion, invasion, and other forms of cyber terrorism, crime, espionage, and
war. But, attacks can be carried out by anyone with an Internet connection and a little bit of knowledge of hacking
techniques. NATO has addressed the subject of cyber defense with articles that state the members will consult
together in the event of cyber attacks but are not duty bound to render aid (Cavelty, 2011). It would seem that
deterrence, where one party is able to suggest to an adversary that it is capable and willing to use appropriate
offensive measures, is perhaps a useful adjunct to cybersecurity service. However, successful attribution of cyber
attacks is not a foolproof endeavor, so offensive behavior is not a total solution to the problem of deterrence.
Cybersecurity is a pervasive problem that deserves different approaches. Davidson (2009) has noted an interesting
possibility, based on the volume of recent cyber attacks. The context is that we are in a cyber war and a war is not
won on strictly defensive behavior. A “Monroe Doctrine in Cyberspace” is proposed, similar to the Monroe
Doctrine of 1823 that states “here is our turf; stay out or face the consequences.”
SUMMARY
The Internet is a seamless means of communication between organizations and people in modern society; it supports
an infrastructure that permits cost effective commerce, social interaction, reference, and learning. The use of the
term “cyber” means more than just the Internet and refers to the use of electronics in a wide variety of forms
between disparate entities. Cyber facilities are pervasive and extend beyond national borders and can be used by
individuals, organizations, and nation states for productive and destructive purposes. A single individual or small
group can use cyber technology for surreptitious invasion of assets to obtain vital information or to cause the
disruption of critical resources.
Cybersecurity is conceptualized as a unique kind of service in which providers and clients collaborate to supply
service through shared responsibility, referred to as collaborative security. Cybersecurity is achieved through
distributed security implemented as a smart system with three important attributes: automation, interoperability, and
authentication. A Monroe Doctrine for Cybersecurity is proposed.
AUTHOR INFORMATION
Professor Harry Katzan, Jr. is the author of books and papers on computer science, service science, and security.
He teaches cybersecurity in the graduate program at Webster University and directs the Institute for Cybersecurity
Research. His email address is katzanh@twc.com.
REFERENCES
Cavelty, M., (2011) Cyber-Allies: Strengths and Weaknesses of NATO’s Cyberdefense Posture, IP – Global Edition, ETH
Zurich.
Conway, M. (2007). Terrorism and Internet Governance: Core Issues, Dublin: Disarmament Forum 3.
Davidson, M. (2009, March 10). The Monroe Doctrine in Cyberspace, Testimony given to the Homeland Security Subcommittee
on Emerging Threats, Cybersecurity, and Technology.
Katzan, H., (2008). Foundations of Service Science: A Pragmatic Approach, New York: iUniverse, Inc.
Katzan, H., (2008). Service Science: Concepts, Technology, Management, New York: iUniverse, Inc.
Katzan, H., (2010, January 4-6). Service Analysis and Design, International Applied Business Research Conference, Orlando,
FL.
Katzan, H., (2010, January 4-6). Service Collectivism, Collaboration, and Duality Theory, International Applied Business
Research Conference, Orlando, FL.
Katzan, H., (2012, October 4-5). Essentials of Cybersecurity, Southeastern INFORMS Conference, Myrtle Beach, SC.
Lord, K.M. and Sharp, T. (2011). America’s Cyber Future: Security and Prosperity in the Information Age, 1, Center for New
American Security.
Mainzer, K. (1997). Thinking in Complexity: The Complex Dynamics of Matter, Mind, and Mankind, New York: Springer.
McConnell, B. (2011, March 3). The Department of Homeland Security, Enabling Distributed Security in Cyberspace: Building
a Healthy and Resilient Cyber Ecosystem with Automated Collective Action, Retrieved from
http://www.dhs.gov/xlibrary/assets/nppd-cyber-ecosystem-white-paper-03-23-2011
Norman, D. (2011). Living with Complexity, Cambridge: The MIT Press.
Obama, B. H. (2009, May 29). Remarks by the U.S. President on Securing Our Nation’s Cyber Infrastructure. The White House.
East Room, Retrieved from https://www.whitehouse.gov/the-press-office/remarks-president-securing-our-nations-
cyber-infrastructure
Shackelford, S. L. (2012). In Search of Cyber Peace: A Response to the Cybersecurity Act of 2012, Stanford Law Review.
Retrieved from http://www.stanfordlawreview.org/sites/default/files/online/articles/64-SLRO-106.
Stewart, J., (2009). CompTIA Security+ Review Guide, Indianapolis: Wiley Publishing, Inc.
The Department of Homeland Security (2009), National Infrastructure Protection Plan: Partnering to enhance protection and
resiliency.
The Department of Homeland Security, More About the Office of Infrastructure Protection,
http://www.dhs.gov/xabout/structure/gc_1189775491423.shtm
The White House, (2003, February). The National Strategy to Secure Cyberspace.
The White House, (2008, January 8). National Security Presidential Directive 54/Homeland Security Presidential Directive 23
(NSPD-54/HSPD-23).
The White House. (n.d.) National Security Council, The Comprehensive National Cybersecurity Initiative, Retrieved from
http://www.whitehouse.gov/cybersecurity/comprehensive-national-cybersecurity-initiative
Vargo, S. and Akaka, M., (2009), Service-Dominant Logic as a Foundation for Service Science: Clarification, Service Science
1(1): 32-41.
Working Group on Internet Governance, (2005 August) Report Document WSIS-II/PC-3/DOC/5-E.
Organizational Cybersecurity
Journal editorial introduction
Technical cybersecurity dominates discussion and investment, even though there is growing
recognition of the need to approach cybersecurity problems from organizational cybersecurity or
information security management perspectives. The technical focus is further reflected in the
predominance of technical education programs and even “Capture-the-flag” competitions.
Yet, calls to focus on the management aspects of protecting information assets or considering
the business risks can be traced back over half a century. In an article entitled “Danger
Ahead! Safeguard Your Computer,” the author raises “. . . serious questions about security for
the management to consider: Could the company continue to transact its business if its
computer center and everything in it were suddenly destroyed? Has the company properly
protected its programs, files, and equipment against sabotage?” (Allen, 1968, p. 97). The
author further notes (p. 101):
Although perfect security systems, as always, are beyond reach, a company can implement a very
satisfactory one at reasonable cost. What is needed most right now is management’s awareness of
the problem, an appreciation of the hazards involved and a determination to prevent severe
misfortunes.
Shortly after that warning, Sorensen (1972) opened an article with words that continue to
resonate and are reflected in everyday news stories (p. 379):
Suddenly, everyone’s concerned about computer security . . . New Companies have been formed
specializing in products that provide better security in a computer department; and seminars are
being offered around the country dealing with control and security of computer installations.
Management is concerned. It has realized that its computer department, the heart of its day-to-day
vital information and control system, has unique vulnerability to theft, disruption and destruction.
Our chosen area of cybersecurity management is still considered to be a nascent field and
needs nurturing. Cybersecurity impacts not only the technical side of an organization but also
brand image, ethical and legal obligations, continuing operations, customer relations, internal
processes, risk management, system audits, strategic initiatives and almost every dimension
of sustaining and growing a successful organization. We collectively have a shared
responsibility to get actively involved and mold it towards a mature management discipline.
Of the many journals that publish cyber and computer security research, most focus on
technology, systems, crime and data protection. Those that consider organizational issues
cover a very broad scope and do not zero in on the aspects that impact organizations.
Organizational Cybersecurity Journal: Practice, Process, and People (OCJ) seeks to publish
advances in scientific knowledge directly related to cybersecurity management. We target
research relating to the behaviors and practices that influence the successful management of
cybersecurity. The journal welcomes papers from human, technical and process perspectives
on the topic. We endeavor to establish this journal as a prominent journal in the emerging
discipline of cybersecurity.
Editorial
© Gurvirender Tejay and Gary Klein. Published in Organizational Cybersecurity Journal: Practice,
Process and People. Published by Emerald Publishing Limited. This article is published under the
Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and
create derivative works of this article (for both commercial and non-commercial purposes), subject to full
attribution to the original publication and authors. The full terms of this license may be seen at http://
creativecommons.org/licences/by/4.0/legalcode
Organizational Cybersecurity
Journal: Practice, Process and
People
Vol. 1 No. 1, 2021
pp. 1-4
Emerald Publishing Limited
e-ISSN: 2635-0289
p-ISSN: 2635-0270
DOI 10.1108/OCJ-09-2021-017
http://creativecommons.org/licences/by/4.0/legalcode
http://creativecommons.org/licences/by/4.0/legalcode
https://doi.org/10.1108/OCJ-09-2021-017
Why cybersecurity (not information systems security)?
We echo the sentiments of Curtin (2017): “Call it cybersecurity, information security, data
security, or information assurance; the world has a problem with it” (p. 1). We can add
computer security, information technology security and information system security to the
above statement, all of which have unique nuances. The world (government, industry and
media) seems to be coalescing towards using a single term, cybersecurity, in place of various
terms denoting different facets of securing information and critical infrastructure. The Joint
Task Force on Cybersecurity Education (JTF) – representing collaboration between the major
international computing societies of the Association for Computing Machinery (ACM), IEEE
Computer Society (IEEE CS), Association for Information Systems Special Interest Group on
Security (AIS SIGSEC) and International Federation for Information Processing Technical
Committee on Information Security Education (IFIP WG 11.8) – adopted a similar approach.
We sheepishly decide to follow the guidance of the JTF in the following.
Definition
The JTF defines cybersecurity as “A computing-based discipline involving technology,
people, information, and processes to enable assured operations. It involves the creation,
operation, analysis, and testing of secure computer systems. It is an interdisciplinary course
of study, including aspects of law, policy, human factors, ethics, and risk management in the
context of adversaries” (JTF, 2017, p. 16). The people, information and processes are the
purview of cybersecurity management or organizational cybersecurity.
Challenges
The cybersecurity landscape is complex and uncertain. The creation and use of information
attain value within the context of organizations. Securing such information gets complicated
due to organizational members (insiders), users (customers), regulations and cyber-
miscreants. The users want access to information anytime and anywhere, and
cybersecurity professionals need to design security for such complex information systems.
Additionally, there are increased regulations to follow, often written by non-cybersecurity
experts. Then there is the nuisance of cyber-miscreants, which has led to the emergence of an
underground cybercrime economy. The field has attracted organized crime and hacking groups
(along with hacktivists), driving the industrialization of cybercrime, with costs estimated
to be in the trillions of dollars.
Focus
The goal of organizational cybersecurity should be to efficiently protect critical information
assets (along with infrastructure) while attaining organizational objectives. Since the early
days of addressing this problem, there has been an emphasis on securing only critical data or
information. However, over the past decade, both practitioners and researchers have dropped
this sole emphasis. We should not be surprised, then, to find subsequent solutions providing mixed
results in protecting critical information and in their impact on business (especially in terms of
cost-effectiveness). We wish to draw the focus back to the protection of critical information and
infrastructure while considering the constraints of organizations. Such endeavors will help us
acknowledge the limited resources available to an organization or society (tipping our hat to
economists) and pay attention to the intricate complexities of an organizational environment
with social, political, psychological and economic forces at play.
Approach
OCJ encourages rigorous research focused on, but not limited to, cybersecurity
governance, managing information security, behavioral and cognitive cybersecurity,
compliance and audit, business process assurance, digital privacy and ethics and secure
use of emergent technologies. The journal will publish quarterly issues predominantly
focusing on research and conceptual papers. Conceptual papers will develop hypotheses
and be discursive, covering philosophical discussions and comparative studies of other
works and thinking. The editorial team’s approach is developmental, with constructive
feedback, editorial transparency and reasonable turnaround from submission to
publishing. We are paradigm and method agnostic, believing in the value of diverse
views and pluralism in scientific endeavor. Our editors are committed to supporting
authors in finding the best version of their paper with an explicit contribution in the
context of organizational cybersecurity.
Societal benefit
The Editorial Board firmly believes in disseminating the research results to a wider public for
societal benefit. OCJ is published under a Platinum Open Access arrangement, in that there is no
charge to the author, and all articles are made freely available in their entirety to the public. We
sincerely thank the State of Colorado and the University of Colorado Colorado Springs College
of Business for providing funds and necessary support. OCJ follows the guidelines provided by
the Committee on Publication Ethics (COPE) to ensure the content is ethically sound.
First issue
The papers appearing in the inaugural issue focus on behavioral and cognitive
cybersecurity, with one paper addressing the organizational concerns of small- and medium-
sized enterprises (SMEs). These papers represent a balance between American and European
perspectives. Botong Xue, Feng Xu, Xin Luo and Merrill Warkentin emphasize the role of
ethical leadership in influencing employees’ security behavior by drawing on social learning
and social exchange theories. The results indicate that ethical leadership influences
employees’ information security policy violation intention through information security
climate rather than affective commitment. Karen Renaud and Jacques Ophoff promote
understanding why SMEs do not implement cybersecurity best practice measures. The
authors developed a cyber-situational awareness model based on the theory of situational
awareness. The results highlight the influence of understanding the importance of
cybersecurity, followed by the availability of resources.
Molly Cooper, Yair Levy, Ling Wang and Laurie Dringus introduce the concept of
audiovisual alerts and warnings as a way to reduce phishing susceptibility. The authors test a
prototype developed on the premise that the alerts and warnings can trigger the “System 2 Thinking
Mode” proposed by Daniel Kahneman. The results from a three-phased study indicate audio
alerts and visual warnings potentially lower phishing susceptibility in emails. Kavya Sharma,
Xinhui Zhan, Fiona Nah, Keng Siau and Maggie Cheng also explore how to reduce user
susceptibility to phishing and extend the concept of digital nudging from the human-computer
interaction field. The authors examine the impact of framing and priming on users’ behavior in a
cybersecurity setting. The study draws on prospect theory, instance-based learning theory and
dual-process theory. The results establish the role of digital nudging in the form of priming to
reduce users’ exposure to cybersecurity risks. In doing so, they also demonstrate the primacy of
instance-based learning theory in the context of cybersecurity behavior.
We sincerely hope our readers will find these research papers to be stimulating. We invite
you to participate in the journal as contributing author, reviewer, or special issue editor. Feel
free to contact us with suggestions or proposals.
Gurvirender Tejay and Gary Klein
References
Allen, B. (1968), “Danger ahead-safeguard your computer”, Harvard Business Review, Vol. 46 No. 6,
pp. 97-101.
Curtin, C.M. (2017), “Protection of data and prevention: advice for chief executive officers, managers,
and information technology staff”, Interhack Report (5/5), available at: http://web.interhack.com/publications/protection-prevention (accessed 29 August 2021).
JTF (Joint Task Force on Cybersecurity Education) (2017), Cybersecurity Curricular Guidelines, CSEC,
available at: https://cybered.hosting.acm.org/wp-content/uploads/2018/02/newcover_csec2017.pdf (accessed 30 August 2021).
Sorensen, J.L. (1972), “Common sense in computer security”, The CPA Journal, Vol. 42 No. 5, p. 379.
Research Paper
Attribution and Knowledge Creation
Assemblages in Cybersecurity Politics
Florian J. Egloff 1* and Myriam Dunn Cavelty 2
1 Senior Researcher in Cybersecurity, Center for Security Studies (CSS), ETH Zürich, Haldeneggsteig 4, IFW, 8092 Zürich, Switzerland, and Research Associate, Centre for Technology and Global Affairs, Department of Politics and International Relations, University of Oxford, Oxford, United Kingdom. E-mail: florianegloff@ethz.ch. Twitter: https://twitter.com/egflo (@egflo)
2 Senior Lecturer for Security Studies and Deputy for Research and Teaching, Center for Security Studies (CSS), ETH Zürich, Haldeneggsteig 4, IFW, 8092 Zürich, Switzerland. E-mail: dunn@sipo.gess.ethz.ch. Twitter: https://twitter.com/CyberMyri (@CyberMyri)
* Correspondence address: Center for Security Studies (CSS), ETH Zürich, Haldeneggsteig 4, IFW, 8092 Zürich, Switzerland. E-mail: florianegloff@ethz.ch
Received 2 July 2020; revised 13 January 2021; accepted 21 January 2021
Abstract
Attribution is central to cybersecurity politics. It establishes a link between technical occurrences
and political consequences by reducing the uncertainty about who is behind an intrusion and what
the likely intent was, ultimately creating cybersecurity “truths” with political consequences. In a
critical security studies spirit, we posit that the “truth” about cyber-incidents that is established
through attribution is constructed through a knowledge creation process that is neither value-free
nor purely objective but built on assumptions and choices that make certain outcomes more or
less likely. We conceptualize attribution as a knowledge creation process in three phases – incident
creation, incident response, and public attribution – and set out to identify who creates what
kind of knowledge in this process, when they do it, and on what assumptions and previous
knowledge this is based. Using assemblage theory as a backdrop, we highlight that attribution
happens in complex networks that are never stable but always shifting, assembled, disassembled
and reassembled in different contexts, with multiple functionalities. To illustrate, we use the
intrusions at the US Office of Personnel Management (OPM) discovered in 2014 and 2015, with
a focus on three factors: assumptions about threat actors, entanglement of public and private
knowledge creation, and self-reflection about uncertainties. With regard to attribution as a
knowledge creation process, we critique the strong focus on existing enemy images as potentially
crowding out knowledge on other threat actors, which in turn shapes the knowledge structure
about security in cyberspace. One remedy, we argue, is to bring in additional data collectors
from the academic sector who can provide alternative interpretations based on independent
knowledge creation processes.
Key words: Attribution, assemblage, cybersecurity politics, knowledge creation process, threat intelligence
1 Introduction
Attribution is central to the politics of cybersecurity. The process of
attribution, which involves several phases and spans different
communities, establishes a link between technical occurrences and
their political implications. An attribution judgement can be viewed
as a result of a knowledge creation process through which
© The Author(s) 2021. Published by Oxford University Press.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Journal of Cybersecurity, 2021, 1–12
doi: 10.1093/cybsec/tyab002
uncertainties about who is behind an intrusion and what the likely
intent of it was are reduced, thereby giving some cyber-incidents
political significance.1
Shared and accepted knowledge about the crucial parameters of
a cyber-incident is necessary if technical, legal or political action is
to be taken in retaliation. It is thus hardly surprising that, as part of
the increasing budgets for cybersecurity, investment in attribution
capabilities – manifested in skilled people with the right knowledge,
organisations and processes to enable such people, and technology
and data to support both – has increased in the private and the
public sector (for an example, see [1]). Such investments partly
underlie the increasing number of attribution judgements offered
in the public domain.
Given such developments, attribution practices and processes are
not only considered a practical-operational issue but have also be-
come the focus of scholarly publications (exemplary, but not ex-
haustively, see [2–18]). The literature has covered the scope and
limits of attribution processes, as well as the uncertainty existing at
an international level regarding the judgements reached by individ-
ual stakeholders from the public and the private sector. The majority
of existing publications on attribution assumes that there are “facts”
about cyber-incidents that improved attribution capabilities can un-
cover. As a notable exception, Lupovici describes the “attribution
problem”, i.e. the difficulty or even impossibility of establishing
who is behind a cyber-intrusion, as socially constructed and high-
lights both the human agency in the construction of the underlying
technology and the practices that stabilize the interpretation of the
“attribution problem” [19].
Conceptually, attribution processes can be split into first, mecha-
nisms that lead to a public attribution [20] and second, what hap-
pens after an incident is publicly attributed [21].2 Both parts open
up relevant questions of how attributive knowledge is created, estab-
lished, and disseminated. In the former, one can distinguish the
sense-making process of attribution, which refers to the knowledge
creation process that leads to an attribution judgement.
Subsequently, there is a meaning-making process in which an attri-
bution judgement is communicated to others in order to change the
uncertainty structures associated with the particular intrusion and
to exert political effects [16, 22]. We are deliberately not focusing
extensively on the latter here. However, it is noteworthy for future
research that when attribution judgements have been introduced
publicly, they enter a contested information environment, where
attackers and other motivated parties contribute to destabilizing at-
tribution claims leading to fractured narratives around the responsi-
bility for a specific intrusion [17].
In this article, we aim to look at how attributive knowledge is
created, established, and disseminated in the sense-making phase.
We argue that attribution processes have to be scrutinized more
carefully in order to understand how public knowledge about cyber-
incidents that includes certainty about the perpetrators stabilizes
and affects cybersecurity politics. We ask what “truths” – under-
stood here as temporarily stable and accepted knowledge about the
parameters of cyber-incidents – are created by whom, when, and
based on what kind of assumptions and previous knowledge in attri-
bution processes?
The writing of this article was triggered in part by the observa-
tion that there is an increasing “normalization” of enemy images in
attribution judgments, evident in technical reports and governmen-
tal statements. They follow very familiar and long-standing patterns
of enmity that overlap with existing enemy images, particularly US
strategic rivals (see e.g. [23]). This has a direct impact on scholars in
the field of cybersecurity politics. Attribution judgments from the
private and public sector provide the largest empirical basis for
scholars studying cyber-conflict. For example, based on such attri-
butions, the scholarly community has concluded that many of the
more spectacular cyber-incidents that we are aware of are not just
isolated occurrences but should rather be understood in the context
of great power rivalry [24, 25]. That made us wonder: are these
really the only actors active on our networks, or might we have
become blind to other developments in the process of homing in
on these strategic rivals? Academically speaking, are our inferences
about the strategic use of cyberspace and its effects on international
politics correct, or is it possible that we are missing important
dynamics because we depend on data that appears to be highly
selective?
In a critical security studies spirit, we posit that the “truth”
about cyber-incidents that is established through attribution is
constructed through a knowledge creation process that is neither
value-free nor purely objective. This process is based on a series of
assumptions and choices made by different groups of people along
the way. It is our goal to identify these assumptions and choices in
each step of the attribution process to understand how they focus
the attention of analysts on specific aspects while eclipsing others.
This is in line with work from (critical) intelligence studies that
pushes back against the assumption that “raw data” or neutral facts
can ever be collected in the intelligence cycle, given how previous
decisions on what to collect, and the established practices and
methods used to target data, create very specific cognitive and
heuristic dependencies [26]. However, we do not aim to judge these
processes as suboptimal or skewed and will refrain from using the word
“bias”. Rather, by highlighting assumptions along the way, we want
to open up the discussion about the possibility of alternatives. Our
contribution is thus to be understood as an intervention to inject al-
ternative views and possibilities into the discourse, not to replace
one way of doing things with another.
Analytically, we focus on knowledge creation assemblages. The
concept of “the assemblage” highlights complex networks that are
never stable but always shifting, assembled, disassembled and reas-
sembled in different contexts, with multiple functionalities [27].
This suits attribution processes particularly well, since they always
consist of several phases (we look at three, as outlined in the next
section), involving a different set of actors and knowledge steps. To
illustrate an attribution process with its assumptions and choices,
we use the intrusions at the US Office of Personnel Management
(OPM) discovered in 2014 and 2015. The case consists of two
known intrusions, allowing us to trace the assumptions driving
1 We deliberately do not want to define the concept of “attribution” fur-
ther than this nor are we interested in excluding particular forms of attri-
bution. Important for this article and our argument are the knowledge
creation processes that are geared towards the reduction of uncertainty.
What form this judgment takes – for example, whether it attributes to a
particular network or geographic location without identifying an actor
group or identifies threat groups or individuals – plays no substantial role
here. Furthermore, we do not pay attention to what political actions fol-
low an attribution judgment; i.e. whether states decide to indict someone,
impose sanctions, or other retaliatory measures is secondary for our
argument.
2 A note about the use of the word “public” here. Since we are primarily
interested in the creation of public knowledge – knowledge that is access-
ible and shared in the public space – we do not look at attributions that
are non-public. However, the steps in the knowledge creation process
that we are describing in this article are the same for all attributions,
whether they are publicly communicated or not.
knowledge creation more than once in a single case. The case is very
well documented, allowing us to delve into technical and organiza-
tional details. We use a qualitative document analysis, guided by
several theoretical concepts we outline below.
The article has three parts. In the first, we situate our undertak-
ing in existing critical cybersecurity studies research, with specific
attention on the theoretical concept of “assemblage”. The attribu-
tion assemblage and its knowledge creation process can best be dis-
sected by paying attention to three lenses: assumptions about threat
actors, entanglement of public and private knowledge creation, and
self-reflection about uncertainties in the process. We then introduce
three temporal phases of the attribution knowledge creation process:
incident creation, incident response, and public attribution. In the
second part, we use each of the three lenses to analyse an empirical
case across the three phases of the attribution knowledge creation
process. In the conclusion, we summarize our findings and discuss
their implications.
2 Cybersecurity and Knowledge Creation
Assemblages
Two fundamentally different meta-theoretical views shape the way
we go about scholarly projects: The positivist worldview stands for
the belief that it is possible to represent the objective truth about a
study object if adequate methods are used. In contrast, the post-posi-
tivist worldview stands for the belief that there is no truth outside of
our representation of it. Our ways of pursuing knowledge are never
neutral but subjective and embedded in a historically grown system
of practices that tell us “how to do things the right way”. The first
view dominates research on cybersecurity politics.
Intellectual disagreements on how to study issues of politics are
part and parcel of academia – debates about ontology, epistemology,
and methodology are at the heart of some of the most fruitful
debates in International Relations (IR) and security studies [28].
At the same time, however, they tend to sharply divide the discip-
line. We are not interested in adding to this division – nor do we be-
lieve that it is fruitful to fight old, entrenched battles over different
conceptions of science and their respective value. Rather, we intend
this article to be an invitation for cybersecurity scholars and practi-
tioners to reflect upon normalized practices, without claiming super-
ior intellectual ground.
In this article, we posit that cybersecurity understood from a
post-positivist vantage point takes its known shape through a series
of knowledge creation processes and that it is those we need to study
in order to understand various political forms and implications. In
order to systematize the empirical study of attribution that follows,
this section does two things. First, it introduces relevant post-posi-
tivist literature in order to identify key assumptions that guide this
article, particularly highlighting the concept of the assemblage as
analytically fruitful. Second, it will present a generic, three-phase at-
tribution model as a knowledge creation process and will briefly dis-
cuss methodological issues that arise when studying processes that
are partially hidden from the public eye.
2.1 Cybersecurity as Assemblage
Post-positivist cybersecurity studies are carving out a niche for them-
selves in a variety of (mainly European-based) journals [29–34]. In
contrast to the relatively narrow set of questions traditional
international relations scholars focus on by using strategic studies’
concepts and theories with roots in the Cold War [35–37], there is
no single topical focus in the post-positivist literature. Nonetheless,
it is united by the assumption that cybersecurity comes into being
through an interactive, non-hierarchical, multi-layered assemblage
of people, objects, technologies and ideas, and is thus co-produced
among a wide range of users, institutions, laws, materials, protocols,
etc. [22, 31].
In the words of one of the originators of the concept, an assem-
blage is “a multiplicity which is made up of many heterogeneous
terms and which establishes liaisons, relations between them (. . .)
Thus, the assemblage’s only unity is that of co-functioning” [38].
The most radical philosophical consequence of the theory of
assemblages is that it does not assume we already know the finished
shape or product of what we analyse. An assemblage has no essence
and no fixed defining features but is contingent on “social and his-
torical processes to which it is connected” [39]. There is no finality,
but a continuous, observable effort in the form of multifaceted prac-
tices to produce and stabilize cybersecurity, including the creation of
technical and political facts in shifting networks, an idea influenced
by science and technology studies (STS) [40].
There are three interrelated lenses into the “knowledge creation
assemblages” that we will focus on. Below, we explain their
importance.
2.1.1. Public-private production of attributive knowledge
First, it is noteworthy how the knowledge creation efforts of public
and private actors are intermeshed in intriguing ways in the threat
intelligence space. Cybersecurity companies3 have played a role
from the very beginning of the cybersecurity story. Not only were
technical experts often called upon for testimonies in parliamentary
settings, they were also paramount in forging common images
of “good” and “bad” hackers, whereas the “good” hacker is usually
employed by a company, follows the law and is considered
“a professional”. In this way, hacking was gradually turned into “a
service rather than a risk, and hackers become a valuable resource
rather than a threat” (p. 114 [34]), creating the foundation for
a thriving segment of the IT security market, which in turn affects
the attribution space. Furthermore, private entities have come to the
fore as “norm entrepreneurs” in emerging technology governance
arrangements, giving them a much more active role in the
shaping of political matters than previously acknowledged [41, 42].
Calls for international attribution standards or an international
attribution organization are among the demands made in this context
[7, 43–45].
More recently, there is a growing interest in how certain cyberse-
curity companies – especially threat intelligence companies – are
connected to state policy and practice at the national and inter-
national level. In her recent publication, Stevens calls Symantec’s
reports on Stuxnet a “landmark occurrence in the emergence of
commercial cybersecurity expertise in the context of strategic state
cyber-operations”, which makes it “an important constitutive elem-
ent in wider practices of hardening facts about threats” (pp.130-
131, [33]). The point that intrigues us about this is how the trad-
itional division between public and private, between national secur-
ity interests and commercial interests are blurred in the attribution
process. Technical reports get entangled with politics, while at the
same time, political attributions are entangled with economic and
3 If we say “cybersecurity companies” we mean companies that “supply”
cybersecurity, i.e. whose services one buys to increase the security of
one’s digital networks.
commercial incentives. Knowledge created by the private sector
feeds into larger processes of political attribution, thus shaping
which incidents become visible, are communicated and acted upon
(similar points have been raised by [31, 46]).
2.1.2. Uncertainty in attributive knowledge creation
The second lens we want to focus on is how these knowledge cre-
ation assemblages deal with uncertainties – how are they perceived,
communicated and managed? (see also [47, 48]). We observe that
most of the ideas behind basic collection practices in private threat
intelligence companies and in the more traditional intelligence agen-
cies do not differ in fundamentals though they may differ in some of
the details. Therefore, we turned to intelligence studies to get further
insights. In general, intelligence studies have expended significant ef-
fort examining reasons for and remedies to “intelligence failures”.
An “intelligence failure” is present if the intelligence community
fails to provide the right type of knowledge to policy makers in a
timely manner even though they should have been able to, which
says as much about expectations as it does about actual performance
[49]. The assumption behind the classification of “failures” as such
is that if the system of knowledge production were optimized, the
probability of failures would be reduced.
Uncertainty is a key aspect of such “failures”. The fundamental
question in intelligence circles has often been how uncertainties can
be communicated or better managed in general [58]. The father of
modern intelligence analysis Sherman Kent proposed a standard set
of verbal expressions still in use today (words of estimative probabil-
ity, WEPs) that an analyst can use to express uncertainty in what
they think is a uniform and unambiguous way – a practice we also
see in attribution processes [50]. The “truth” about cyber-incidents
can only stabilize when uncertainties are removed from the narra-
tives – a practice we will focus on in our analysis of the knowledge
creation assemblage.
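Kent's idea can be made concrete with a small sketch. The probability bands below follow commonly cited reproductions of Kent's chart but are our illustrative assumption, not part of this article; the function name is likewise hypothetical.

```python
# Illustrative sketch: Sherman Kent's "words of estimative probability"
# map verbal hedges to approximate numeric ranges. The bands below are
# assumptions for illustration, not an official standard.
KENT_WEPS = {
    "certain":              (1.00, 1.00),
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.13),
    "impossible":           (0.00, 0.00),
}

def express_uncertainty(probability):
    """Return the Kent-style verbal expression covering a probability."""
    for phrase, (lo, hi) in KENT_WEPS.items():
        if lo <= probability <= hi:
            return phrase
    # Probabilities falling in a gap between bands snap to the nearest band.
    return min(KENT_WEPS,
               key=lambda p: min(abs(probability - KENT_WEPS[p][0]),
                                 abs(probability - KENT_WEPS[p][1])))
```

The point of such a scheme, as the article notes, is that a uniform vocabulary lets different analysts express uncertainty in a way readers can compare.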
2.1.3. Actor-centric and actor-agnostic knowledge creation
processes
The third lens is related to assumptions that guide attribution meth-
ods, which are directly related to the active reduction of uncertain-
ties from the very beginning. One of the key reasons why attribution
is considered achievable today in contrast to initial debates that cen-
tred on the anonymity of action in cyberspace, is a shift away from
mainly technical considerations towards the premise of human hab-
its. Habits develop in any persistently conducted human activity,
which means that we can track and trace the known habits of threat
actors across time and space. To further explain this, we distinguish
between actor-centric and actor-agnostic knowledge creation proc-
esses. Actor-centric knowledge creation processes focus on creating
knowledge about specific threat actors and derive security controls
from that knowledge. The paradigm they adopt is: the more know-
ledge you have about threat actors, the better you can defend your-
self against them. This stands in contrast to actor-agnostic
knowledge creation processes, which focus on other forms of
protection. The paradigm they adopt is: the more you know about your
own network, technologies, data, and practices, the better your
judgements about what counts as ‘malicious’ behaviour will be
(exemplary for this view, see U.S. NSA official Rob Joyce’s
presentation [51]). We expect the interplay between the two paradigms and
particularly the choice of one paradigm over another to be highly
relevant for the knowledge creation assemblage.
In the next subsection, we describe the details of this
knowledge creation process step by step in order to then identify the
assumptions and choices behind it, as well as the omissions that are
made, deliberately or not.
2.2 Sense-Making in Three Phases
We classify the attribution knowledge creation processes (sense-
making) into three temporal phases. In the first, a security event is
noticed and, upon initial assessments, established as an incident; the
second is about incident response, and the third looks at the dissem-
ination of attributing information. Whilst public attribution is part
of a meaning-making process (communicating an attribution judge-
ment), in contrast to previous work by one of the authors [17] here
we do not look at the further effects of public, official attributions
and what kind of further political processes they set in motion (the
meaning-making process part of attribution). Rather, we are par-
ticularly interested in public attribution’s function as an updating of
the knowledge assemblage, where knowledge from public attribu-
tions is used again as an input for defensive practices that lead to in-
cident creations.
For the purpose of our conceptualization, it is a public attribu-
tion if any actor that is part of the knowledge creation process
disseminates knowledge about a cyber-incident publicly. This judg-
ment can be, and often is, contested, but since the attribution pro-
cess always aims to reduce uncertainties and attribution judgments
contain authoritative statements about the “truth”, an attribution
judgment narrows interpretative possibilities and leads to a sedimen-
tation of knowledge. Throughout this process, the entanglement of
public and private actors and processes, assumptions and previous
knowledge about threat actors, as well as practices to reduce, man-
age, silence or foreground uncertainties are paramount.
2.2.1 Phase 1: How Security Events Become Incidents
Social order in cyberspace is produced by the heterogeneous rela-
tions within and through relevant assemblages. The ultimate aim of
cybersecurity as a practice is to stabilize these assemblages, which
are there to execute a specific performance, namely the uninterrupt-
ed provision of specific data flows for the efficient functioning of the
economy, society, and the state. The success of such a stabilization
is the degree to which it does not appear to be a network that
demands effort for keeping it together, but rather a coherent, inde-
pendent entity that “just works” [52]. This desired state, however, is
repeatedly challenged by security events, some of which are elevated
to what we call “cyber-incidents”. Latour calls these moments of
disruption depunctualization [53] because they make network
performances “break down”. Luckily for researchers, in such
moments, parts of the assemblage become visible to the observer,
allowing us to study previously hidden aspects of the knowledge
creation process [54].
Generally, in larger organisations, security relevant events
are monitored in security operation centers (SOCs), which have
in recent years increasingly been tasked with integrating incident
response, threat intelligence, and threat-hunting capabilities.4 SOCs
are tasked with sorting through security relevant events (often called
‘alerts’) and do initial assessments of whether an “incident” should
be established. An incident, in turn, requires incident response
and thus leads us to the next phase in the knowledge generation
process.
4 Gartner Identifies the Top Seven Security and Risk Management Trends for 2019. https://www.gartner.com/en/newsroom/press-releases/2019-03-05-gartner-identifies-the-top-seven-security-and-risk-ma (12 August 2019, last accessed).
4 Journal of Cybersecurity, 2021, Vol. 00, No. 0
As the discovery of a security related event is a “disruption” and
offers the potential for radically new knowledge to emerge, it is
useful to reflect on where security related events come from and
what type of knowledge they are based on. We distinguish security
related events that are generated as a result of actor-centric or actor-
agnostic security knowledge creation processes. Actor-centric
approaches have gained prominence in recent years, with the threat
intelligence market advertising itself as an enabler to “spot” known
actors on one’s networks. To further understand actor-centric
security controls, and how they interact with other actor-centric
knowledge creation processes (specifically threat intelligence), it is
insightful to look at an example. We use threat hunting as such an
example (note that there are other techniques, such as using technical
threat intelligence feeds directly to “spot” malicious traffic or, in its
most elementary form, the part of an anti-virus product that relies
on previously seen malware, i.e. signatures).
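In its most elementary form, such actor-centric “spotting” reduces to matching observed artifacts against indicators previously tied to known actors. The following sketch is purely illustrative; all indicator values and actor names are invented:

```python
# Illustrative actor-centric "spotting": match observed artifacts (file
# hashes, domains) against indicators of compromise (IOCs) previously
# attributed to known threat actors. All values below are invented.
KNOWN_IOCS = {
    "d41d8cd98f00b204e9800998ecf8427e": "ActorX",  # hypothetical malware hash
    "c2.malicious-update.example": "ActorX",       # hypothetical C2 domain
    "5eb63bbbe01eeed093cb22bb8f5acdc3": "ActorY",
}

def spot(observed_artifacts):
    """Return (artifact, suspected actor) pairs for every known-IOC match.

    Artifacts without a matching indicator raise no alert at all: only
    previously known activity becomes visible (the 'spotlight' effect).
    """
    return [(a, KNOWN_IOCS[a]) for a in observed_artifacts if a in KNOWN_IOCS]
```

The sketch makes the epistemic limitation explicit: an artifact absent from the indicator list produces no alert, however malicious it may be.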
Threat hunting has been proposed as one measure to reduce the
uncertainty about the presence of threat actors in an organisation’s
network, particularly those threat actors that are expected to be able
to circumvent the baseline security controls [55–57]. Threat hunting
relies on the stance of assuming that your network has been
breached and looking for second and third order consequences of
threat activity (beyond “just” looking for security relevant events).
Threat hunting, as the name implies, relies on previous hypotheses
on what types of threat actors might be active on your network and,
if they were, where evidence of their activity might be found. Thus,
the hunting aspect of the practice refers to the active search for activ-
ity previously hypothesized about.
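The hypothesis-driven logic described above can be sketched as a sweep over existing logs. The log schema and the hypothesis encoding are our own simplifications, not a description of any specific tool:

```python
# Sketch of hypothesis-driven threat hunting: encode a hypothesis about
# the traces a suspected actor would leave ("if present, we would see
# scheduled tasks spawning encoded PowerShell") and sweep existing logs
# for matching events. Log fields and values are invented for illustration.
def hunt(events, hypothesis):
    """Return logged events consistent with the hunting hypothesis."""
    return [e for e in events
            if all(e.get(k) == v for k, v in hypothesis.items())]

process_log = [
    {"parent": "taskeng.exe", "process": "powershell.exe", "encoded_args": True},
    {"parent": "explorer.exe", "process": "notepad.exe", "encoded_args": False},
]

# Hypothesis derived from (hypothetical) intelligence about a threat actor:
hypothesis = {"parent": "taskeng.exe", "process": "powershell.exe",
              "encoded_args": True}
hits = hunt(process_log, hypothesis)
```

Unlike alert-driven monitoring, nothing here waits for a detection to fire; the search is initiated by the prior hypothesis, which is exactly where the knowledge creation assemblage exerts its influence.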
In order to judge the impact an actor-centric practice such as
threat hunting might have, it is pertinent to observe where those
hypotheses come from, i.e. what knowledge creation assemblages
they are embedded within. As noted, the practice aims to reduce the
uncertainty around the presence of threat actors on an
organisation’s network. The information about those threat actors can be
gathered from public and private sources: other security
professionals, past breaches, and public reporting. In recent years, moreover,
an industry tailored to this specific information need has emerged:
threat intelligence [58].
Threat intelligence aims to provide actionable information that
enables an organisation to better defend itself against the threats it
faces. Threat intelligence as marketed includes many services, from
atomic indicators of compromise, to tactics, techniques, and proce-
dures of specific threat actors, to contextual analysis of the goals,
strategies, and long-term motivations of threat actors. Threat intelli-
gence is most adept at tracking persistently operating actors, whilst
having more difficulty at identifying novel, one-off operations.
Threat actor-centric security controls are natural follow-ons, opera-
tionalizing threat intelligence into data collection and monitoring
practices and hypotheses tailored to one’s own networks. If the
threat intelligence is useful, it follows that organisations – ceteris
paribus – that adopt threat actor-centric practices are more likely to
find specific threat actors that they were informed about, than those
that they were not informed about.
Consequently, through the practice of threat intelligence,
organisations that adopt actor-centric practices reinforce the knowledge
creation assemblage that threat intelligence shapes. Threat
intelligence serves as a spotlight that illuminates a very specific part of the
threat space but leaves other parts in the dark. A recent study of
publicly available threat intelligence reports found public reporting
with regard to civil society organisations to be skewed towards trad-
itional adversaries of the West (China, Russia, Iran, North Korea)
[42]. Furthermore, a study of two commercial threat intelligence
providers showed that the two providers cover very different indica-
tors, even as they pertain to the same threat actors, suggesting a
low-level of coverage of their overall operations, even for the most
tracked threat actors [59]. The input of threat intelligence into secur-
ity controls contributes to a self-reinforcing cycle, as threat intelli-
gence is partially based on the output of incident response and
attribution processes. We will return to this aspect in the analysis
below. Here it suffices to point out that actor-centric defensive prac-
tices are more likely to reinforce pre-existing knowledge assemb-
lages than generic defence strategies that are not tailored to a
particular actor.5
To conclude, the discovery of incidents itself can skew the know-
ledge creation process in a certain direction. Timely indicators of
compromise are often provided by cybersecurity vendors, who them-
selves are not actor-agnostic. Thus, via threat intelligence, the indi-
cators nudge the “discovery” towards very particular types of
intrusions (and away from others).
2.2.2 Phase 2: Incident Response
This section covers the second generic phase of our overall analysis
of knowledge creation practices, namely, the incident response pro-
cess. Incident response starts when an incident is established (the
output of Phase 1; for an incident definition, see [60]). Most incidents are
minor and can be dealt with swiftly. Some are major and need in-
depth incident response, often involving cybersecurity companies
offering specialized incident response capabilities and sometimes
involving government resources (law enforcement, intelligence, and/
or technical help). Here, we are interested in these major incidents,
particularly, incidents that are deemed to be caused by threat actors
(i.e. incidents assessed to be based on intentional human activity,
not based on technical sources or inadvertent human mistakes).
When evidence of activity by a threat actor on a network is
found, large uncertainties open up. Questions loom large: what
happened and, more specifically, how did the threat actor pursue their
goals on the network? What were those goals, and were they
achieved? Incident response processes are thus a specific
kind of knowledge creation process where we can see the
knowledge creation assemblage at work. Those incident response proc-
esses geared towards finding the original breach(es) (as opposed to
those that focus on recovery at the detriment of evidence collection)
try to establish the answers to the question of what happened. Rid
and Buchanan refer to this as the “tactical” (what?) and
“operational” (how?) parts of the attribution process [1].
In incidents with significant consequences, a relevant audience
will want to know who engaged in this malicious activity against the
organisation, and why (referred to by Rid and Buchanan as
the “strategic” part of the attribution process). We refer here to the
relevant audience, as this audience may be incident/organisationally
specific, and dependent on who else may know of the incident. The
audience often includes senior decision-makers that have the respon-
sibility to decide what to do in response to an incident. Particularly,
decision-makers may care a lot about the intent of the intruder, an
element of attribution that can often be hard to substantiate with ro-
bust data.
5 It is important to highlight once more that there are many defensive in-
formation controls that lead to indicators of compromise, which are not
tied to threat intelligence. We just note that the trend to using threat
intelligence as a ‘lead’ to find actors on the network directs energy and at-
tention towards a specific type of knowledge creation process.
The whole process of incident response and attribution may hap-
pen away from public scrutiny. Most jurisdictions do not mandate
public breach disclosure, unless a special legally protected type of
data was touched (examples are personally identifiable data or
health data), or the breach is expected to have a significant business
impact (i.e. material event), in which case, publicly listed companies
may carry a financial obligation to disclose [61]. Only a few incidents
are constructed under the scrutiny of the public eye. In those cases,
publicity may shape the knowledge creation and knowledge
dissemination practices. This public element is the focus of the next
section.
2.2.3 Phase 3: Public Attribution Practices
By public attribution practices, we mean any public dissemination
of knowledge about a cyber-incident by any actor involved in the
knowledge creation process. To relate it to our conceptualization
of incident response as a knowledge creation practice above, public
attribution is a knowledge dissemination practice that establishes
public knowledge. In recent years, public attributions of cyber-inci-
dents have been increasing, including attributions to attain political
effects [54]. A public statement assigning blame to a specific party
can thereby address multiple audiences.
For example, public attribution can be used as a tool for uncer-
tainty reduction in a particular audience. For incidents that are
publicly known (e.g. data breaches), there exists uncertainty about
the context and purpose of the intruder. This context and purpose
are important particularly to third parties that are affected by the
incident (e.g. customers, suppliers, investors, citizens). Public know-
ledge of an incident without any context around the possible identity
of the intruder and its purpose fuels the uncertainty about the re-
sponse that other parties would demand of an organisation.
Any data released publicly by the diverse public and private
actors involved in the knowledge creation processes will be linked to
and embedded in existing knowledge. The newness of the data there-
by has the potential to change and update existing knowledge
assemblages. In particular, the current public representation of the
threat actor that the released data is associated with feeds back into
the threat intelligence process. This integration of one’s
publicly disseminated data into current knowledge is not a process
that the breached organisation can fully control. As soon as infor-
mation about the intrusion becomes public, other actors may start
to observe the public elements of the intrusion and publish their
interpretations of it (e.g. on a threat intelligence company’s blog
post). Thus, publicity starts these processes of (re-)assembly of the
knowledge creation assemblage, with other actors acting upon that
new knowledge and reconfiguring themselves to the changed
circumstances.
An additional complication in the case of cyber-incidents is that
the full data, on which one bases one’s attribution claims, are rarely
disclosed, due to sensitivities around proprietary data, as well as
sources and methods of the intelligence provider. This places the
discloser in a credibility dilemma: the less data one discloses, the more
one has to rely on a general sense of trust in one’s own institution by
the audiences addressed, opening up possibilities for adversarial
strategies to discredit one’s attribution claims (see [17]).
Publicly disclosed information about intruders is picked up by
the specialist press and sometimes by the more generalist media.
Truth claims about particular incidents have the potential to re-
inforce or challenge pre-existing strategic narratives. To best illus-
trate this, in our next section, we will look at a specific case and
trace the knowledge creation processes throughout those three
phases using the three lenses introduced above.
3 OPM: The Attribution Knowledge Creation
Assemblage in Action
This section applies the concepts we introduced above to a particu-
larly impactful and well-documented set of intrusions at the United
States Office of Personnel Management (OPM). Why did we choose
this case? First, the case consists of two sets of intrusions that took
place between ca. 2012 and 2015. The first intrusion (here intrusion
A) was discovered in 2014, the second (intrusion B) in 2015. In in-
trusion A the intruders familiarized themselves with the network
layout at OPM. In intrusion B (presumably using information
derived from intrusion A), the intruders exfiltrated personnel
records pertaining to people applying for a US government security
clearance. This creates an interesting dynamic: we are able to wit-
ness the limits of the knowledge creation process, as the organisation
gets hacked a second time during the remediation of the first. It pro-
vides us with a great illustration of the power of actor-agnostic se-
curity controls and how they interact with actor-centric incident
response practices.
Second, we deliberately chose a very well documented case
in order to be able to highlight the details of the knowledge creation
processes. Stuxnet, another case we could have used to show the
working of the assemblage, has already received a lot of attention
[41, 62–64]. In other cases, such as the Sony Pictures Hack, it
would be more interesting to study the contestation process (and
knowledge re-assemblage) that happens in the meaning-making
phase (as was done in [22]). It should be noted here anew that we
do not claim that the attribution in the OPM case could have been
done better or that the attribution to China is wrong. Our aim is
to “simply” demonstrate how the knowledge creation assemblage
works.
Third, the OPM case is also an important case that scholarship
should treat in-depth. It is embedded into international political
processes, as it represents not economic espionage (something the
U.S. at the time tried to get China to stop), but rather classical espi-
onage at scale. Further, it represents somewhat of a turning point,
as classical espionage at scale is something that the U.S. would
later come to recognize as strategically significant, prompting it to
reorient its policy from restraint to persistent engagement, which seeks to
engage adversaries continuously, i.e. before becoming a victim as
was the case in OPM [65]. Finally, scholarship would also come to
debate whether and how espionage at scale ought to be regulated
more [22, 25, 66–71]. All three reasons make OPM worthy of in-
depth study.
Building on our conceptual toolset developed above, here we
will analytically focus on three particular lenses onto the knowledge
creation assemblages in the following order: (1) actor-centric and
actor-agnostic knowledge, (2) public and private actors, and (3)
whether and how uncertainty is represented. These three are present
across the different temporal phases of the knowledge creation pro-
cess (incident creation, incident response, and public attribution).
We start with the actor-centric and actor-agnostic lens, as it is par-
ticularly relevant for the first two temporal phases (in the third
phase, public attribution, one necessarily has an actor-centric per-
spective). Through each lens, we will examine different ways the
practices interact with the larger knowledge creation assemblage.
Thereby, we deliberately stay relatively technical in focus, as our
aim is to follow the knowledge creation process as closely as
possible.
To do so, we will draw on the majority staff report of the
Committee on Oversight and Reform by the U.S. House of
Representatives dated 7 September 2016 [72]. We acknowledge
that this is an imperfect and politically one-sided (i.e. adversarial to
the government) source. However, we also note that the part of the
document that we are interested in, i.e. the timelines of the intrusions
rather than the political appreciation thereof (especially pp.
51-172), is often sourced in documentary evidence from the time of
the intrusions or in congressional testimony by the actors involved.
We thus consider the document reliable for retracing the knowledge
creation processes at the time. Where appropri-
ate, we also draw connections to the broader knowledge creation
assemblages and how they existed at the time.
3.1 Actor-centric and actor-agnostic knowledge creation
processes in action
Both actor-centric and actor-agnostic knowledge creation processes
are present in the OPM intrusions. Intrusion A was originally dis-
covered by a perimeter protection appliance named Einstein,
possibly operated by an Internet Service Provider, who flagged it to
US-CERT, which then reported it to OPM in March 2014.6 The
Einstein appliance (version 1 or 2) at the time used unclassified
threat information to detect and block known threats
[73]. We can thus classify it as an actor-centric perimeter defence,
using known threat information. Verifying this initial warning from
US-CERT by analysing their records, OPM had enough forensic evi-
dence of adversary activity within their networks to elevate the se-
curity related event into an incident (p. 53, [54]). Interestingly, the
incident response knowledge creation process that it triggered would
find that the threat actor was active on OPM’s network since at least
July 2012 (p. 64, [54]).
After five days, OPM had established the “who and what” and
“what [the hackers] are interested in” (p. 54, [54]). From then on,
they worked on how the intruders were able to get into and
operate within the network. For the incident response, OPM used
actor-agnostic technologies, such as full-packet capture of any traf-
fic going to the command-and-control server and traffic traversing
to/from the most sensitive/high-value part of their system, as well as
forensic imaging software. In conjunction with US-CERT, OPM
used that to monitor the intruders until 27 May 2014, when they
removed all compromised systems, exchanged all potentially com-
promised account credentials, and forced all Windows administra-
tors to use hardware based personal identity verification cards (p.
60, [54]).
Intrusion B was originally discovered with an actor-agnostic
technology (Websense). A contractor had been tasked to assist the
adoption of a new Websense functionality and noticed a
“certificate error for the domain called opmsecurity.org” on 14
April 2015 (p. 84, [54] quoting [74]). OPM then found that an
“alert” on this unknown SSL certificate had already been generated on 24
February 2015 and that traffic had been leaving the OPM network since
December 2014 (p. 85, [54] quoting [75]). An initial investigation
showed four malicious binaries, three suspicious IP addresses, and a
number of indicators in the domain and certificate registration that
produced red flags for the OPM security team: thus, an incident re-
sponse process was initiated.
During the incident response, both actor-agnostic and actor-cen-
tric knowledge creation processes were leveraged. In particular,
actor-agnostic technologies were used to extract those four malicious
binaries and, further, to reverse engineer the malware.
The result was then compared to previously known malware (in this
case PlugX variants) (p. 99, [54]). The results of the initial investiga-
tion could also be corroborated with previously existing reporting
on the same campaign (e.g. against an insurance company, Anthem)
(p. 87, [54]). Furthermore, OPM would testify that they were
“uncomfortable with trusting that we knew all the indicators of
compromise. And so we obtained the Cylance endpoint client and
deployed it [. . .]”. Cylance was able to find things other tools could
not “because of the unique way that Cylance operates. It doesn’t
utilize a standard signature of heuristics or indicators, like normal
signatures in the past have done, it utilizes a unique proprietary
method” (pp. 101-102, [54]).7 Thus, they were specifically looking
for an actor-agnostic technology. This new technology provided
them with more visibility and identified 41 pieces of malware on dif-
ferent parts of OPM’s network (p. 102 & p. 108, [54]). It also meant
that OPM, with the help of Cylance engineers, had to sort through
lots of false-positives (alerts that were not security relevant). The in-
cident responders traced the actor’s activity through logs and foren-
sic images back to the first appearance on 7 May 2014, 20 days
before the remediation of Intrusion A went into effect. Had it not
been for the use of actor-agnostic knowledge creation processes,
Intrusion B may have been found much later, or worse, not at all.
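The contrast with actor-centric spotting can be made concrete with a generic prevalence heuristic: flag what is rare in one’s own environment, regardless of whose activity it is. This is a deliberately simplified sketch of actor-agnostic detection in general, not the proprietary classification method described in the report:

```python
from collections import Counter

# Actor-agnostic sketch: flag binaries that are rare across one's own
# fleet. No prior knowledge of any threat actor is required, which is
# why such controls can surface previously unknown intrusions.
def rare_binaries(observations, max_hosts=2):
    """Flag binary hashes seen on at most `max_hosts` distinct hosts.

    `observations` is an iterable of (host, binary_hash) pairs; pairs are
    deduplicated so repeated sightings on one host count once.
    """
    prevalence = Counter(h for _, h in set(observations))
    return {h for h, n in prevalence.items() if n <= max_hosts}
```

The trade-off the report describes follows directly: rarity-based flags surface novel malware, but at the price of many false positives that defenders must then sort through.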
3.2 Public and private actors are producing attributive
knowledge
Public and private actors were both fundamentally intertwined in a
knowledge creation assemblage during the two OPM intrusions in
all three stages: incident creation, response, and attribution. In inci-
dent creation, both in Intrusion A and B, public and private actors
and technologies played a role in discovery. In Intrusion A, it was
likely a private actor using a public technology (Einstein) that noti-
fied US-CERT of OPM’s network beaconing out to a known com-
mand-and-control server. In Intrusion B, a contractor installing a
new version of a private technology noticed the intrusion and
flagged it to OPM.
In the incident response processes, private and public actors and
technologies worked alongside to create knowledge. This included
private contractors managing other private incident response and
protection technology providers (Mandiant, Cylance, Encase,
CyTech), but also teams (and likely technologies) from across gov-
ernment (FBI, NSA, DHS). Interestingly, both CyTech and Cylance
provided services in advance, without any contractual guarantee
of getting paid. In CyTech’s case, they provided services “to OPM
out of a sense of duty and with the expectation that there would be
a contractual arrangement put in place” (p. 133, [54]) but ended up
not getting paid. Together, the public and private actors produced
knowledge on what happened in each
of the intrusions.
The attribution processes are less well documented. The over-
sight report quotes a Washington Post article which posits that the
government has “chosen not to make any official assertions about
attribution,” but also mentions that officials have hinted at China
being the leading suspect (p. 157, [54]). Drawing on both the details
of the investigation at OPM (including testimony), and integrating
this with private sector threat research, the oversight report makes
6 P. 52, FN 206 notes that it was first detected via an Einstein device and that it was “possible” that this device was operated by an Internet Service Provider. We follow that interpretation here.
7 The proprietary method is explained in the document on pp. 93-94 as being a classification method of every action happening on an endpoint.
concrete claims about which intruders were behind the intrusions
(namely Axiom behind Intrusion A and DeepPanda behind Intrusion
B). It makes these claims drawing on knowledge that was available
in the public domain, and creating new knowledge by showing how
the intrusions fit into existing knowledge assemblages. Note: by
pointing out that much of the attributing knowledge to a particular
actor is still classified, the report adds weight to the public claims it
is making, as it implies that the authors have awareness of the classi-
fied assessments and, one assumes, they do not want to deliberately
mislead the public (p. 167, [54]).
The report places particular emphasis on a piece of malware found in
Intrusion A, aliased by industry with the name Hitkit, as being strongly
associated with Axiom. It quotes various industry reports to support
that judgement. Note that the knowledge creation processes that
lead to the public “truth” do not just rely on “official” statements
made by the victim. Various indirect ways of assembling knowledge
in the public domain, including through public and private actors,
were important for the public attribution processes.8 For example,
on 10 July 2014, the Washington Post, citing an anonymous U.S.
official, reported that authorities had traced Intrusion A to China
but had not yet identified whether the intruders worked for the
government [76].
For Intrusion B, the report documents not only the overlapping
infrastructures used against various other targets with US govern-
ment personnel, but also quotes the knowledge-making that industry
engaged in when linking these intrusions. For example, on 27 February
2015, weeks before Intrusion B was discovered by OPM,
ThreatConnect published a blog post outlining part of the attack in-
frastructure used in Intrusion B and attributing it to Deep Panda
[77]. Furthermore, the oversight report cites from testimony docu-
menting OPM personnel connecting the intrusion to DeepPanda,
worth quoting in full:
“So I’ll use the word ‘actor’, the ones that were identified in prior
exhibits. You had Shell Crew, or sometimes known as Deep
Panda, as well as Deputy Dog, and it has many, many other
names. So those were the two that, at least as it relates to indus-
try research being done, that the malware that we found was
closest related to it. By no means are we saying it was them; it’s
just it was a relationship or similarity” (p. 167, [54]).
Similarities and differences are what clusters these knowledge cre-
ation processes. We can also note that there are significant difficul-
ties in naming actor groups, not only between different industry
players, but also for people in government, who would be using a
different set of clusterings to generate their own actors/activity sets.
As with Intrusion A, after Intrusion B became public, the
Washington Post reported that the intrusion was conducted by
Chinese state-sponsored actors based on anonymous officials and
private sector reporting [78].
3.3 Uncertainty differs across the three stages of
attributive knowledge creation
Uncertainty was represented differently across the three stages of
interest. First, in the incident creation phase, certainty and radical
uncertainty interplay with one another. On the one hand, OPM
makes clear that, regarding Intrusion A, there are a number of factors
one cannot know, as OPM did not have logging in place for certain
actions. Thus, the report concludes that the US “will never know
with complete certainty the universe of documents the attacker
exfiltrated” (p. 51 [54]), thereby recognizing fundamental con-
straints on the “knowable”. On the other hand, it is forensic evi-
dence that establishes certainty of “actual adversary activity” that
necessitates the opening of incident response (as opposed to “just”
having the presence of malware). For example, in Intrusion B, the
discovery of a Windows Credentials Editor confirmed the
presence of an adversary with ill intent (p. 97, [54]).
Thus, forensic evidence is used to close-off uncertainties.
Second, in the incident response phase, the defenders are dealing
with the uncertainties of how deep the adversaries are buried in the
network. If possible, the defenders need to establish the breadth and
depth of compromise, both for effective remediation and for damage
assessment. In Intrusion A, this phase lasted from March to May 2014,
in which the defenders watched the intruders move laterally on the
network and prepared the infrastructure to remediate unexpectedly
(what OPM called the “big bang”).
Third and finally, in the attribution phase, judgements represent uncertainties by using estimative language standardized in the intelligence community. Thus, the oversight report uses "likely", both for Intrusion A and Intrusion B, to indicate the uncertainty around the attribution to a specific threat actor (p. 17, [54]). Nevertheless, it quotes Director of National Intelligence James Clapper, who referred to China as "the leading suspect" (p. 157, [54]). Thus, by
representing the findings of the investigation about Intrusion A and
Intrusion B as likely, and putting them into the context of the state-
ment by the DNI, whilst not presenting any alternative explanations,
the oversight report narrows the interpretative possibilities and
encourages the sedimentation of knowledge in the form of “truth”.
The oversight report omits the part of DNI Clapper's quote that many people, including two of the three anonymous peer-reviewers, remember, and which was hence crucial for the knowledge sedimentation. It was not the uncertain, nuanced, and potentially open-ended "leading suspect" part of the quote that was spread widely in the media and scholarship, but rather this more definitive statement: "you have to, kind of, salute the Chinese for what they did. If we had the opportunity to do that, I don't think we'd hesitate for a minute" [79]. Thus, by attaching the operation to the Chinese (government) and integrating and legitimising it using the U.S. operational framework as a basis, the DNI left no doubt about the provenance of the intrusions.
4 Conclusions
Cybersecurity knowledge creation processes in the form of public at-
tribution judgments fundamentally shape what we, the general pub-
lic, know about cyber-threats and their political implications. This
article offered a science and technology studies-inspired lens for understanding such knowledge creation processes. We argued that by using the concept of the knowledge creation assemblage, we can shed light on important dynamic aspects that explain how knowledge about cyber-incidents is created. To do so, we conceptually split the knowledge creation process into three temporal phases (incident creation, incident response, and public attribution). This conceptual slicing was used to describe which sets of knowledges, actors, and technologies inform each phase, and, in so doing, to analytically highlight how each phase interacts with the larger knowledge creation assemblage.
8 In principal-agent theory, these processes would be interpreted as forms of "proxies". In assemblage theory, we note that the knowledge creation assemblage is fully and continuously intertwined between public and private actors.
Journal of Cybersecurity, 2021, Vol. 00, No. 0
Downloaded from https://academic.oup.com/cybersecurity/article/7/1/tyab002/6261798 by guest on 11 January 2022
We used three analytic lenses (actor-centric vs. actor-agnostic, public vs. private, and the representation of uncertainty) to unravel the knowledge creation processes in these three phases in an empirical case with two intrusions. In the first analytic lens, we highlighted how actor-centric technologies and processes lead to a reinforcement and differentiation of current knowledge. In contrast, actor-agnostic technologies and processes have the potential to broaden and disrupt it.
The second analytic lens showed how public and private actors seamlessly work together in the incident creation and response processes observed. With some private actors operating out of a sense of "duty", it also reconfirmed how blurry the public/private distinction gets in the field of cybersecurity [23, 26, 80]. In the attribution knowledge creation processes, the private sector reports were particularly relevant, building up an overall knowledge assemblage into which the new knowledge gained from the incident response knowledge creation processes is integrated. Importantly, the oversight report that we drew on used private sector actor grouping aliases (Axiom and Deep Panda) to assign responsibility.
The third analytic lens identified how uncertainty is present/ab-
sent in the different stages. It is both a driver and a constraint for the
knowledge creation processes observed. Thus, whilst uncertainty
about the status of one’s network can be a driver for practices trying
to discover security relevant events, the creation of an incident is a
moment of radical uncertainty. It is at that point that an organisa-
tion uses the incident response knowledge creation processes to dis-
place that uncertainty. Attribution, in that sense, is an afterthought
from an organisational perspective. The resulting attribution judge-
ments and the way that they are represented were found to make use
of language standardized in the intelligence community. Thereby un-
certainty is represented with words of estimative probability.
What can we conclude? Knowledge creation processes are messy and contingent, and without further in-depth case studies, generalizations beyond the case we studied in these pages will be impossible. However, it does seem apparent that assumptions and choices
in these processes produce certain outcomes – or rather, make cer-
tain conclusions more likely than others. One of the consequences
such practices inevitably have is that they feed into additional knowledge creation processes, for example, into academia. In line with our three analytical lenses, we draw the following conclusions:
First, the public-private nexus demands more scrutiny.
Foremost, there is a need for systematic study of private threat intelligence reports. We know that market logics are not usually based on fairness but favour the financially potent actors in the public and private sectors who can pay for attribution services (p. 61, [17]). A
recent empirical analysis of commercial threat reporting shows con-
vincingly that “high end threats to high-profile victims are priori-
tized in commercial reporting while threats to civil society
organizations, which lack the resources to pay for high-end cyber-
defense, tend to be neglected or entirely bracketed” [81]. If the goal
is more security for everyone, this is a problem. Maschmeyer, Deibert, and Lindsay offer a promising route by studying publicly available threat reporting and comparing it with the targeting of civil society organisations, thereby demonstrating a systematic skewing of public threat reporting due to commercial incentives. This work can be extended by studying private threat reporting and assessing to what degree this skewing is also present in private intelligence products (for a promising start, see [58]).
However, we could expect that, as the threat intelligence market
matures and digitalization deepens across the globe, threat intelli-
gence companies from different parts of the world may broaden the
insight into the diverse actors engaged in offensive cyber behaviour,
leading to a more global insight into cyber conflict overall.
Second, it is necessary to focus even more closely on the representation or non-representation of uncertainties in attribution processes. From a critical scholar's perspective, the current practices
mask the socially constructed nature of intelligence and by extension
the practical handling of uncertainties [82]. Uncertainties have a ten-
dency to disappear from the discourse and from view; they get
masked by the practices themselves. This highlights the need to “pay
attention to how information is defined, created, managed, and used
in particular contexts.” (p. 659, [26]). In fact, a focus on the know-
ledge creation assemblage reveals that “uncertainty” manifests dif-
ferently in different phases of the process and that the practices of
reducing those uncertainties are manifold. Those particular varia-
tions in uncertainties could also be linked to particular “funnels” in
the attribution process (see Figure 1), where particular types of prac-
tices that react to different types of uncertainties narrow (i.e. funnel)
the knowledge construction process.
Figure 1: Self-reinforcing tendencies in cybersecurity assemblages
Each phase is associated with different uncertainties, which,
coupled with specific ways of acting, could further exacerbate the
self-reinforcing tendencies of the intelligence cycle, here denoted
with a “funnel”. Actor-centric technologies are an example of a fun-
neling practice reacting to the uncertainty about the state of security
of digital systems in the incident creation process. During the inci-
dent response phase, analysts’ preconceived notions (not discussed
in this paper) may become relevant in shaping the attribution process. Finally, during the public attribution phase, uncertainty about the acceptance of truth claims may shape what knowledge seeps out into the public domain, leading to concerns about the reproduction and reinforcement of existing threat narratives. We suggest
that further researching the nuances between the interactions of dif-
ferent uncertainties and their practices with funneling tendencies
could lead to a more fine-grained analysis in the future.
Third and finally, the actor-centric and actor-agnostic lens
showed the impacts current trends in knowledge creation practices
can have on the overall knowledge structure. Crucially, we are not
suggesting abandoning actor-centric practices. Rather, we highlight
the importance of reflecting on the trade-offs between practices that
lead to the discovery of known actors vs. practices that have the potential to lead to the discovery of unknown actors. We expect that, in practice, a combination of both will be required.
Closely related to this is the question of who could provide different data to challenge dominant threat images. We note that academia has mostly focused on interpreting the analysis produced by public attribution statements and has used it to study cyber-conflict, thereby partially recreating the knowledge assemblages shaped by the actor-centric and actor-agnostic trade-offs of other actors. Only a few academic institutions (notably the CitizenLab, CrySySLab, and Civilsphere) have independently collected forensic artefacts, to which incident response knowledge creation processes were applied, leading to an independent assessment, one that differed from those represented by the other actors described.
But universities themselves are connected to targeted intrusions, be it as victims, as operational infrastructure, or as hosts of victims (see, e.g., the fears during SARS-CoV-2 vaccine research of cyber espionage affecting the integrity of clinical trials [83]). In that sense, universities could contribute more to an independent systematic data
collection on cyber-incidents, allowing for comparisons between
their datasets and private actors’ telemetry, thereby contributing an
independent viewpoint into cyber-conflict. Universities should invest
in more interdisciplinary knowledge on attribution practices, there-
by empowering themselves to be in a better position to assess other
attribution outcomes of industry and government reports.
The measures addressed above would improve transparency in knowledge creation assemblages. Knowing the type of knowledge
creation processes that make up the knowledge assemblage that is
“the cyber-threat” enables a more transparent discussion of the
shaping of realities of cyber-conflict faced by different actors inter-
nationally. Thereby, we can work towards challenging and reformu-
lating the current narrative of cyber-conflict into a more open
conversation about how different communities worldwide experi-
ence cyber-conflict.
Acknowledgments
A previous version of this article was presented at the 2019 Conference on Cyber Norms at The Hague, at a workshop on "Exploring the socio-cultural fabric of digital (in)security" in August 2020, and at a research colloquium at the Center for Security Studies at ETH Zürich in October 2020. We
received further comments from Timo Steffens and Lilly Pijnenburg Muller.
On all occasions, we received valuable feedback. We gratefully acknowledge
Jasper Frei’s feedback and assistance with the referencing. Special thanks go
to three anonymous reviewers whose comments helped us sharpen our argu-
ment further.
References
1. Raiu C. Attribution 2.0. In: Area41 Conference, Zurich, 2018. https://
perma.cc/5FZ4-6SJR.
2. Rid T, Buchanan B. Attributing cyber attacks. J Strateg Stud 2015;38:
4–37.
3. Lin, H. Attribution of malicious cyber incidents: from soup to nuts. J Int
Aff 2016;70:75–137.
4. Deibert, RJ. Toward a human-centric approach to cybersecurity. Ethics
Int Aff 2018;32:441–24.
5. Eichensehr, KE. Decentralized cyberattack attribution. AJIL Unbound
2019;113:213–17.
6. Eichensehr, KE. The law & politics of cyberattack attribution. UCLA
Law Rev 2020;67:520–98.
7. Finnemore M, Hollis DB. Beyond naming and shaming: accusations and
international law in cybersecurity. Eur J Int Law 2020;31:969–1003.
8. Delerue, F. Cyber Operations and International Law. Cambridge Studies
in International and Comparative Law. Cambridge: Cambridge
University Press, 2020.
9. Mikanagi, T, Mačák, K. Attribution of cyber operations: an international
law perspective on the Park Jin Hyok Case. Camb Int Law J 2020;9:
51–75.
10. Steffens, T. Attribution of Advanced Persistent Threats: How to Identify
the Actors Behind Cyber-Espionage. Berlin: Springer, 2020.
11. Grindal, K, Kuerbis, B, Badii, F et al. Is It Time to Institutionalize Cyber-
Attribution?, Internet Governance Project White Paper. Atlanta: Georgia
Tech, 2018.
12. Grotto, A. Deconstructing cyber attribution: a proposed framework and
lexicon. IEEE Secur Priv 2020; 18:12–20.
13. Guerrero-Saade, JA. Draw me like one of your french apts—expanding
our descriptive palette for cyber threat actors. In: Virus Bulletin
Conference, Montreal, 2018. 1–20.
14. Guerrero-Saade, JA, Raiu, C. Walking in your enemy’s shadow: when
fourth-party collection becomes attribution hell. In: Virus Bulletin
Conference, Madrid, 2017. 1–15.
15. Bartholomew, B, Guerrero-Saade, JA. Wave your false flags! deception
tactics muddying attribution in targeted attacks. In: Virus Bulletin
Conference, Denver, CO, 2016. 1–9.
16. Guerrero-Saade, JA. The ethics and perils of apt research: an unexpected
transition into intelligence brokerage. In: Virus Bulletin Conference,
Prague, 2015. 1–9.
17. Lindsay, JR. Tipping the scales: the attribution problem and the feasibility
of deterrence against cyberattack. J Cybersecur 2015;1:53–67.
18. Romanosky, S, Boudreaux, B. Private-sector attribution of cyber inci-
dents: benefits and risks to the U.S. Government. Int J Intell
CounterIntelligence 2020:1–31.
19. Lupovici, A. The “Attribution Problem” and the social construction of
“Violence”: taking cyber deterrence literature a step forward. Int Stud
Perspect 2016;17:322–42.
20. Egloff, FJ. Public attribution of cyber intrusions. J Cybersecur 2020;6:
1–12.
21. Egloff, FJ. Contested public attributions of cyber incidents and the role of
academia. Contemp Secur Policy 2020;41:55–81.
22. Egloff FJ. Cybersecurity and non-state actors: a historical analogy with
mercantile companies, privateers, and pirates. DPhil Thesis. University of
Oxford, 2018.
23. Roth, F, Stirparo, P, Bizeul, D, et al. APT Groups and Operations. https://web.archive.org/web/20201217131838/https://docs.google.com/spreadsheets/u/1/d/1H9_xaxQHpWaa4O_Son4Gx0YOIzlcBWMsdvePFX68EKU/pubhtml
24. Harknett, RJ, Smeets, M. Cyber campaigns and strategic outcomes. J
Strateg Stud 2020:1–34.
25. Kostyuk, N, Zhukov, YM. Invisible digital front: can cyber attacks shape
battlefield events? J Conflict Resolut 2019;63:317–47.
26. Räsänen, M, Nyce, JM. The raw is cooked: data in intelligence practice.
Sci Technol Human Values 2013;38:655–77.
27. DeLanda, M. Assemblage Theory. Edinburgh: Edinburgh University
Press, 2016.
28. Dunn Cavelty, M, Wenger, A. Cybersecurity meets security politics: com-
plex technology, fragmented politics, and networked science. Contemp
Secur Policy 2019;41:5–32.
29. Stevens, T. Cybersecurity and the Politics of Time. Cambridge:
Cambridge University Press, 2016.
30. Balzacq, T, Dunn Cavelty, M. A theory of actor-network for cyber-secur-
ity. Eur J Int Secur 2016;1:176–98.
31. Collier, J. Cybersecurity assemblages: a framework for understanding the
dynamic and contested nature of security provision. Politics Gov 2018;6:
13–21.
32. Shires, J. Enacting expertise: ritual and risk in cybersecurity. Politics Gov
2018; 6:31–40.
33. Stevens, C. Assembling cybersecurity: the politics and materiality of tech-
nical malware reports and the case of Stuxnet. Contemp Secur Policy
2019;41:129–52.
34. Tanczer, LM. 50 shades of hacking: how IT and cybersecurity industry
actors perceive good, bad, and former hackers. Contemp Secur Policy
2020;41:108–28.
35. Maness, RC, Valeriano, B. The impact of cyber conflict on international
interactions. Armed Forces Soc 2016;42:301–23.
36. Borghard, ED, Lonergan, SW. The logic of coercion in cyberspace. Secur
Stud 2017;26:452–81.
37. Kello, L. The Virtual Weapon and International Order. London: Yale
University Press,
2017.
38. Deleuze, G, Parnet, C. Dialogues. New York: Columbia University Press,
2002, 69.
39. Nail, T. What is an assemblage? SubStance 2017;46:24.
40. Latour, B, Woolgar, S. Laboratory Life: The Construction of Scientific
Facts. Princeton, NJ: Princeton University Press, 2013.
41. Hurel, LM, Lobato, LC. Unpacking cyber norms: private companies as
norm entrepreneurs. J Cyber Policy 2018;3:61–76.
42. Gorwa, R, Peez, A. Big Tech Hits the Diplomatic Circuit: Norm Entrepreneurship, Policy Advocacy, and Microsoft's Cybersecurity Tech Accord. https://berlinpolicyjournal.com/big-tech-hits-the-diplomatic-circuit/ (30 June 2020, last accessed).
43. Egloff, FJ, Wenger, A. Public attribution of cyber incidents. In: Merz, F
(ed.). CSS Analyses in Security Policy No. 244. Center for Security
Studies: Zurich, 2019, 1–4.
44. Solomon, H. RightsCon report: universities should form cyber attribution
network. IT World Canada Web Page, 2018. https://perma.cc/226K-
LL4E
45. Mueller, M, Grindal, K, Kuerbis, B, and Badiei, F. Cyber attribution: can
a new institution achieve transnational credibility?. Cyber Defense Rev
2019;4:107–22.
46. Leander, A. Understanding US national intelligence: analyzing practices to
capture the chimera. In: Best, J, Gheciu, A (eds.), The Return of the Public
in Global Governance. New York: Cambridge University Press, 2014,
197–221.
47. Canton, B. The active management of uncertainty. Int J Intell
CounterIntelligence 2008;21:487–518.
48. Slayton, R. Governing uncertainty or uncertain governance? information
security and the challenge of cutting ties. Sci Technol Human Values
2021;46:81–111.
49. Jensen, MA. Intelligence failures: what are they really and what do we do
about them? Intell Natl Secur 2017;27:261–82.
50. Kent, S. Words of estimative probability. Studies in Intelligence, 1964.
51. Joyce, R. USENIX Enigma 2016—NSA TAO Chief on Disrupting Nation
State Hackers, YouTube, 2016. https://www.youtube.com/watch?v=bDJb8WOJYdA.
52. Callon, M. Techno-economic networks and irreversibility. In: Law, J
(ed.), A Sociology of Monsters: Essays on Power, Technology and
Domination, Sociological Review Monograph, Vol. 38. New York:
Routledge, 1991, 153.
53. Latour, B. Pandora’s Hope: Essays on the Reality of Science Studies.
Cambridge: Harvard University Press, 1999.
54. Best, J, Walters, W. “Actor-Network Theory” and international relation-
ality: lost (and found) in translation: introduction. Int Politic Sociol 2013;
7:346.
55. FOR508. Advanced Incident Response, Threat Hunting, and Digital Forensics. https://www.sans.org/event/threat-hunting-and-incident-response-summit-2019/course/advanced-incident-response-threat-hunting-training (12 August 2019, last accessed).
56. FOR572. Advanced Network Forensics: Threat Hunting, Analysis, and Incident Response. https://www.sans.org/event/threat-hunting-and-incident-response-summit-2019/course/advanced-network-forensics-threat-hunting-incident-response (12 August 2019, last accessed).
57. FOR578. Cyber Threat Intelligence. https://www.sans.org/event/threat-hunting-and-incident-response-summit-2019/course/cyber-threat-intelligence (12 August 2019, last accessed).
58. Work, JD. Evaluating commercial cyber intelligence activity. Int J Intell
CounterIntelligence 2020;33:278–308.
59. Bouwman, X, Griffioen, H, Egbers, J, Doerr, C, Klievink, B, van Eeten,
M. A different cup of TI? The added value of commercial threat intelli-
gence. In: Proceedings of the 29th USENIX Security Symposium, 433–50.
San Diego, CA: USENIX Association, 2020.
60. ISO. ISO/IEC 27000:2018. Geneva, 2018.
61. Exemplary, see Securities and Exchange Commission, 17 CFR Parts 229 and 249 [Release Nos. 33-10459; 34-82746] Commission Statement and Guidance on Public Company Cybersecurity Disclosures. https://www.sec.gov/rules/interp/2018/33-10459 (12 August 2019, last accessed).
62. Langner, R. Stuxnet: dissecting a cyberwarfare weapon. IEEE Secur
Privacy 2011;9:49–51.
63. Lindsay, JR. Stuxnet and the limits of cyber warfare. Secur Stud 2013;22:
365–404.
64. Slayton, R. What is the cyber offense-defense balance? Conceptions,
causes, and assessment. Int Secur 2017;41:72–109.
65. U.S. Cyber Command. Achieve and Maintain Cyberspace Superiority: Command Vision for US Cyber Command, 2018. https://web.archive.org/web/20210108165528/https://www.cybercom.mil/Portals/56/Documents/USCYBERCOM%20Vision%20April%202018?ver=2018-06-14-152556-010
66. Libicki M. The coming of cyber espionage norms. Paper Presented at the
9th International Conference on Cyber Conflict (CyCon), 30 May–2 June
2017.
67. Boeke, S, Broeders, D. The demilitarisation of cyber conflict. Survival
2018;60:73–90.
68. Georgieva I. The unexpected norm-setters: intelligence agencies in cyber-
space. Contemp Secur Policy 2020;41:33–54.
69. Egloff, FJ, Maschmeyer, L. Shaping not signaling: understanding cyber
operations as a means of espionage, attack, and destabilization.
International Studies Review, 2020.
70. Chesney, R, Smeets, M, Rovner, J, Warner M, Lindsay, JR, Fischerkeller,
MP, Harknett, RJ, Kollars, N. Policy roundtable: cyber conflict as an intel-
ligence contest. Texas National Security Review, 2020.
71. Lindsay, JR. Cyber conflict vs. cyber command: hidden dangers in the
american military solution to a large-scale intelligence problem. Intell
Natl Secur 2020;36:1–19.
72. U.S. Congress, House of Representatives, Committee on Oversight and
Government Reform. The OPM Data Breach: How the Government
Jeopardized Our National Security for More Than a Generation.
Washington, DC: Government Printing Office, 2016.
73. Written testimony of Dr. Andy Ozment, Assistant Secretary for
Cybersecurity and Communications, U.S. Department of Homeland
Security, Before the U.S. House of Representatives Committee on
Oversight and Government Reform, regarding the DHS Role in Federal
Cybersecurity and the Recent Compromise at the Office of Personnel
Management, pp. 2–5. https://docs.house.gov/meetings/GO/GO00/20150616/103617/HHRG-114-GO00-Bio-OzmentA-20150616 (30 June 2020, last accessed).
74. H. Comm. On Oversight and Gov’t Reform. Interview of Brendan
Saulsbury, Senior Cybersecurity Engineer, SRA, Ex. 4 (17 February
2016). https://archive.org/stream/ReportFromTheCommitteeOnOversightAndGovernmentReformOnTheOPMBreach/Report%20from%20the%20Committee%20on%20Oversight%20and%20Government%20Reform%20on%20the%20OPM%20Breach_djvu.txt (30 June 2020, last accessed).
75. AAR Timeline—Unknown SSL Certificate (15 April 2015) at
HOGR020316-1922 (OPM Production: 29 April 2016). https://archive.org/stream/ReportFromTheCommitteeOnOversightAndGovernmentReformOnTheOPMBreach/Report%20from%20the%20Committee%20on%20Oversight%20and%20Government%20Reform%20on%20the%20OPM%20Breach_djvu.txt (30 June 2020, last accessed).
76. Chinese hackers go after U.S. workers’ personal data. Washington Post, 10
July 2014. http://www.washingtonpost.com/world/national-security/chinese-hackers-go-after-us-workers-personal-data/2014/07/10/92db92e8-0846-11e4-8a6a-19355c7e870a_story.html (30 June 2020, last accessed).
77. The Anthem Hack: All Roads Lead to China. https://web.archive.org/web/20200520133650/https://threatconnect.com/blog/the-anthem-hack-all-roads-lead-to-china/ (30 June 2020, last accessed).
78. Chinese breach data of 4 million federal workers. Washington Post, 4
June 2015. https://www.washingtonpost.com/world/national-security/chinese-hackers-breach-federal-governments-personnel-office/2015/06/04/889c0e52-0af7-11e5-95fd-d580f1c5d44e_story.html (30 June 2020, last accessed).
79. Clapper JR, Brown T. Facts and Fears: Hard Truths from a Life in
Intelligence. New York: Viking, 2018.
80. Egloff, FJ. Cybersecurity and the age of privateering. In: Perkovich, G, Levite, A (eds.), Understanding Cyberconflict: Fourteen Analogies. Washington, DC: Georgetown University Press, 2017, 231–47.
81. Maschmeyer, L, Deibert, RJ, Lindsay JR. A tale of two cybers—how
threat reporting by cybersecurity firms systematically underrepresents
threats to civil society. J Inf Technol Polit 2020;18:1–20.
82. Kreuter, N. The US intelligence community’s mathematical ideology of
technical communication. Tech Commun Q 2015;24:217–34.
83. Grierson, J, Devlin, H. Hostile States Trying to Steal Coronavirus Research, Says UK Agency. The Guardian, 3 May 2020. https://www.theguardian.com/world/2020/may/03/hostile-states-trying-to-steal-coronavirus-research-says-uk-agency