Refer to the attached documents in the uploads for the references and guidelines.
Assignment Guidelines
When we think of an enterprise we typically think of a traditional business with a specific product or service. However, the term is broader than that, and we can apply EA principles to things as big and abstract as an entire nation. The Government of Canada performed just such an audit on their own IT infrastructure. Through this audit, you will follow along and learn about the process, the steps taken, and how an analysis is performed. First, review the study at
Audit of IT Enterprise Architecture (AU1802).
Consider and address the following:
· What was the purpose of this audit?
· What types of methodologies did the reviewers use in their audit?
· What strengths and weaknesses did they identify?
· What risks were mentioned?
· What factors affect these risks?
· How should these risks be managed?
· What forms of EA governance does Canada employ?
· How should they proceed? What recommendations would you suggest beyond the ones provided?
Your response should be two to three pages, double-spaced (minimum 750 words), and include at least three external citations beyond the course textbooks. Your response should address all of the points outlined above.
Parameters
· The assignment should be double-spaced, 12-point Times New Roman font, with one-inch margins
· Use APA for citing references and quotations
Enterprise Risk Management
Modern Approaches to Balancing Risk and Reward
Stefan Hunziker
Rotkreuz, Switzerland
ISBN 978-3-658-25356-1 ISBN 978-3-658-25357-8 (eBook)
https://doi.org/10.1007/978-3-658-25357-8
Library of Congress Control Number: 2019936302
Springer Gabler
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage
and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or
hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does
not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective
laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors
or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
This Springer Gabler imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH
part of Springer Nature
The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
Preface
Now more than ever, students, junior staff, instructors, managers and decision-makers
have to understand the value-creating aspect of modern Enterprise Risk Management
(ERM).
Welcome to the world of enterprise risk management (ERM), one of the most popular
and misunderstood of today’s important business topics. It is not very complex. It is not
very expensive. It does add value. We just have to get it right. Until recently, we have been
getting it wrong (Hampton 2009, p. vii).
This is a quote from Professor Hampton, director at St. Peter's College and former director of the Risk and Insurance Management Society (RIMS). His statement is representative of what still applies to many companies today: ERM is considered an expensive and unprofitable "business inhibitor". Traditionally, it embraces only a few areas of the company (in many cases the finance department). Usually, there is no consistent company-wide management of all risk categories within a single framework, and risk management is often an independent stand-alone process that is not linked to decision-making processes and business planning. In this way, traditional risk management is unable to generate any benefits and unnecessarily ties up resources in the company. A positive risk culture, which considers information provided by risk management as being supportive to management, is often wishful thinking. Modern risk management aims to be a strategic management tool that creates value for the company. In order for the risk manager to be welcomed at the strategy table, a rethinking from traditional risk management to modern ERM is required.
Didactic Philosophy and Learning Objectives
Amongst other things, ERM is a powerful tool that enhances a manager's and board's ability to make better decisions under uncertainty. Merely learning ERM definitions, theories and techniques by heart is much less important for students than being able to apply relevant
ERM concepts to practical situations. For this reason, Enterprise Risk Management—
Modern Approaches to Balancing Risk and Reward embraces theory, concepts and
practical examples so that students get a sound understanding of how ERM can be
implemented in practice. I encourage students to make use of the learning materials offered at the very end of each chapter.
The content of Enterprise Risk Management—Modern Approaches to Balancing Risk
and Reward is applicable to all business sectors, including non-profit, service, selling,
manufacturing, retail and administrative situations. The focus of the textbook is clearly
on improving decision-making under uncertain situations, not on operational risk man-
agement or internal control at very low organisational levels.
My goal is to encourage students to apply modern approaches to good ERM and to
link ERM to decision-making processes. Students begin by understanding why ERM matters in today's complex business environment and progress to more complex questions: how to assess risks and opportunities by means of consistent and effective assessment techniques, and how to create a risk culture that enables effective ERM.
To support the student’s learning success, my approach is to introduce concepts accessi-
bly and to complement them with practical examples from diverse companies.
The textbook has been primarily developed for training and continuing education at
university level in German-speaking countries. However, it is also of high practical relevance. Based on concrete cases of medium-sized and large companies, the concepts presented in Enterprise Risk Management—Modern Approaches to Balancing Risk and Reward are transferred into practice. It serves students and practitioners alike
as a source of ideas on how ERM can generate value to all stakeholders. The novelty of
this textbook is reflected primarily in the fact that theoretical and psychological findings
relevant to decision-making situations will be explicitly incorporated.
Acknowledgements
I have received many valuable comments and suggestions for this textbook during the
last few years from ERM professionals, consultants, managers and professors. I cordially
thank each of these contributors. In addition, I wish to thank the following people and
institutions:
• Mr Marcel Fallegger, CMA, CSCA, Lucerne School of Business. Besides his subject
matter expertise, he supported me in all administrative matters.
• Lucerne School of Business for its financial support.
• Springer Gabler. All colleagues from the editorial, production and marketing depart-
ments for their great support in making this textbook possible.
• My relatives, for their patience and understanding of the many “write-related absences”.
Finally, students in my graduate and undergraduate classes on Enterprise Risk
Management have inspired me to write this textbook and contributed many thoughtful
ideas.
Stefan Hunziker
Contents

1 Introducing ERM
   1.1 Why ERM Matters
   1.2 Definition of ERM
   1.3 Risk Definition in the ERM Approach
   1.4 ERM Frameworks
   1.5 Challenges to ERM Implementation
   References
2 Countering Biases in Risk Analysis
   2.1 Motivational Biases
      2.1.1 Affect Heuristics
      2.1.2 Attribute Substitution
      2.1.3 Confirmation Bias
      2.1.4 Desirability of Options and Choice
      2.1.5 Optimism
      2.1.6 Transparency Bias
   2.2 Cognitive Biases
      2.2.1 Anchoring
      2.2.2 Availability Bias
      2.2.3 Dissonance Bias
      2.2.4 Zero Risk Bias
      2.2.5 Conjunction Fallacy
      2.2.6 Conservatism Bias
      2.2.7 Endowment and Status Quo Bias
      2.2.8 Framing
      2.2.9 Gambler's Fallacy
      2.2.10 Hindsight Bias
      2.2.11 Overconfidence
      2.2.12 Perceived Risks
   2.3 Group-Specific Biases
      2.3.1 Authority Bias
      2.3.2 Conformity Bias
      2.3.3 Groupthink
      2.3.4 Hidden Profile
      2.3.5 Social Loafing
   References
3 Creating Value Through ERM Process
   3.1 Balance Rationality with Intuition
   3.2 Embrace Uncertainty Governance as Part of ERM
   3.3 Collect Risk Scenarios
      3.3.1 Identify Sources, Events and Impacts of All Risks
      3.3.2 Develop an Effective and Structured Risk Identification Approach
      3.3.3 Identify Risks Enterprise-Wide
      3.3.4 Treat Business and Decision Problems not as True Risks
      3.3.5 Don't Let Reputation Risk Fool You
      3.3.6 Focus on Management Assumptions
      3.3.7 Conduct One-on-One Interviews with Key Stakeholders
      3.3.8 Complement with Traditional Risk Identification
   3.4 Assess Key Risk Scenarios
      3.4.1 Identify Key Risk Scenarios
      3.4.2 Quantify Key Risk Scenarios
      3.4.3 Support Decision-Making
      3.4.4 Differentiate between Decisions and Outcomes
      3.4.5 Overcome the Regulatory Risk Management Approach
      3.4.6 Overcome the Separation of Risk Analysis and Decision-Making
      3.4.7 Assess Impact on Relevant Objectives
      3.4.8 Avoid Pseudo-Risk Aggregation
      3.4.9 Develop Useful Risk Appetite Statements
      3.4.10 Make Uncertainties Transparent and Comprehensible
      3.4.11 Exploit the Full Decision-Making Potential of ERM
      3.4.12 Align ERM with Business Planning
      3.4.13 Replace Standard Risk Reporting
      3.4.14 Disclose Risks Appropriately
   3.5 Assess and Improve ERM Quality
      3.5.1 Test ERM Effectiveness Appropriately
      3.5.2 Increase ERM Maturity Level
   References
4 Setting up Enterprise Risk Governance
   4.1 Comply with Laws and Check Relevant Governance Codes
   4.2 Consider ERM-Frameworks Thoughtfully
      4.2.1 Motivation for Risk Management Standards
      4.2.2 ISO 31000
      4.2.3 COSO ERM
      4.2.4 Similarities and Differences
      4.2.5 Limitations of ERM Frameworks
   4.3 Develop a Sound Risk Policy
      4.3.1 Risk Policy and Corporate Strategy
      4.3.2 Risk Policy as the Basis for Dealing with Risks
      4.3.3 Limitations of Risk Policies
   4.4 Enhance Risk Culture
      4.4.1 Relate Risk Culture to Corporate Culture
      4.4.2 Understand How Risk Culture Evolves
      4.4.3 Increase Risk Culture Maturity Level
   4.5 Organise ERM Properly
      4.5.1 Does a Best-Practice ERM Organisation Exist?
      4.5.2 ERM Organisation Options
      4.5.3 Some Thoughts on Roles and Responsibilities
   References
5 Looking at Trends in ERM
   5.1 Emerging Digital Risks
      5.1.1 Impact of Disruptive Technologies
      5.1.2 Digital Risk Framework
   5.2 Digitization of ERM
   5.3 Using Multiple Sources of Data
   5.4 Increasing Demand for Analytic Skill Sets
   5.5 Increasingly Sophisticated Software Tools
   5.6 Networked Economy and Collective ERM
   5.7 Improving ERM Skills
   References
1 Introducing ERM
Learning Objectives
When you have finished studying this chapter, you should be able to:
• Define the term ERM and its key attributes
• Contrast ERM with traditional risk management
• Explain which characteristics distinguish the term risk in the ERM approach
• Explain why ERM is important to support decision-making processes
• Describe the main challenges of ERM
1.1 Why ERM Matters
Many, if not all corporate activities are linked to uncertainties of future developments
that can result in either new threats or opportunities. The volatile nature of markets
(e.g. for raw materials) and business environments (e.g. regulatory changes, behav-
iour of competitors) poses a great challenge to the existence and success of companies.
The growing complexity and dynamics of the context in which companies nowadays operate have caused a relentless increase in the level of risk in all areas of corporate management and business activities. As a result, the discipline and practice of risk management has gradually established itself in various sectors and industries, as well as across different company sizes (Verbano and Venturini 2013).
Risk management within corporations has gone through various stages, starting in the post-World War II era. Whereas historically risk management activities were mostly uncoordinated, with a strong focus on mitigating financial risk by means of insurance and derivative instruments to protect the company against financial loss, a more holistic approach emerged in the 1990s. This advanced approach is intended to achieve a coordinated management of all significant risk sources a company might be exposed to (McShane et al. 2011; Mishkin and Eakins 2018). Simultaneously, the concept of Enterprise Risk Management (ERM) emerged in the early 1990s as a programme that manages the total risk exposure in one integrated and comprehensive tool (Hampton 2015, p. 18). Clearly, one of the main drivers of ERM adoption has been the release of the COSO (Committee of Sponsoring Organizations of the Treadway Commission) framework "Enterprise Risk Management—Integrated Framework" in 2004 (COSO 2004). In the 2000s risk management became even more important, mainly due to negative events with high public awareness such as September 11th, corporate accounting fraud and the financial crisis.
Although ERM was a much-debated business topic in the 2000s, there has also been severe critique. In particular, with the unfolding of the financial crisis in 2008 and 2009, which resulted in many corporate failures and bankruptcies, the effectiveness of ERM programmes within firms was heavily questioned. Critics brought forward the argument that the effectiveness of ERM had not yet been proven, and consequently, its promotion and implementation within companies slowed down shortly after the financial crisis (Hoyt and Liebenberg 2011, p. 796).
In the meantime, most of the criticism has fortunately faded. Specifically, over the last
couple of years, the perspective on ERM has significantly changed. Many organisations
have recently implemented policies and processes and started to intensively apply mod-
ern ERM practices. The main reason for that is that ERM has substantially evolved as a
management tool and is no longer seen as a pure regulatory requirement to prevent nega-
tive events. In fact, academics and risk professionals appreciate ERM as a value adding
function (Lam 2017, pp. 34–37). Various empirical studies (e.g. Smithson and Simkins
2005; Hoyt and Liebenberg 2011; Eckles et al. 2014) have been undertaken which con-
firm that companies with ERM systems in place have a significantly higher company
value than non-ERM companies. Ultimately, from a modern perspective, value creation is the sole reason for implementing an ERM programme. This is also the only correct answer to the "why Enterprise Risk Management?" question from an economic point of view: if ERM consumes more resources than the value it creates, companies should refrain from implementing it.
To be more concrete, the most important features of modern ERM, which all contribute to value creation, are briefly introduced. First and foremost, value creation is facilitated if ERM is directly linked to or built into the decision-making processes within the company, which in turn affect the prosperity of an organisation. ERM creates value by allowing firms to achieve a more optimised risk-return trade-off in their decisions. A commonly misunderstood characteristic of ERM in this context is that the goal of risk management is to minimise total risk exposure. However, ERM is about determining the ideal level of risk to maximise value: some risks might be deliberately taken in order to exploit opportunities and hence create a higher return (Romeike 2018, p. 14). Thus, a key reason to deal with ERM is improved internal decision-making, achieved by considering and balancing the upside and downside potential of each decision and by providing a more rational basis for decisions.
A second key reason for implementing ERM is to gain a comprehensive view of all risks, opportunities and their respective interdependencies. This strengthens both the senior management's and the board's capability to oversee total risk exposure and its potential effect on certain business objectives. The availability of transparent and fully quantified risk exposures offers new opportunities for effective strategic decision-making and risk taking in line with the corresponding risk appetite statements (Farrell and Gallagher 2014, pp. 628–629). Moreover, the risk aggregation approach enables the management of residual risks rather than dealing with single independent risks. Companies adopting aggregation techniques may benefit from a risk diversification effect and can take advantage of natural risk hedges. Thus, only a few remaining risks need to be managed, which is a more efficient and effective way than dealing with each single risk independently (McShane et al. 2011).
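The diversification effect described above can be illustrated with a small simulation. The sketch below is not from the book; the two loss distributions and all figures are invented purely for illustration. It compares the naive approach of adding each risk's own 95% worst case with the 95% quantile of the aggregated exposure:

```python
import random

def p95(samples):
    """Empirical 95% quantile of a list of simulated losses."""
    s = sorted(samples)
    return s[int(0.95 * len(s))]

def simulate(n_trials=50_000, seed=7):
    """Simulate two hypothetical, independent loss distributions
    (values in CHF million; the figures are illustrative assumptions)."""
    rng = random.Random(seed)
    fx = [max(0.0, rng.gauss(2.0, 1.5)) for _ in range(n_trials)]      # currency risk
    demand = [max(0.0, rng.gauss(3.0, 2.0)) for _ in range(n_trials)]  # demand risk
    naive = p95(fx) + p95(demand)                          # quantiles added risk by risk
    aggregated = p95([a + b for a, b in zip(fx, demand)])  # quantile of the portfolio
    return naive, aggregated

naive, aggregated = simulate()
# The aggregated quantile lies below the naive sum: the two risks rarely
# hit their worst cases at the same time (risk diversification effect).
```

Because the two risks rarely realise their worst cases simultaneously, the aggregated quantile is smaller than the sum of the individual quantiles, which is exactly why aggregation leaves fewer residual risks to manage.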
In addition, ERM has recently been observed to be of great benefit to organisations
because it has led to:
• Stabilised earnings, which improve shareholder value;
• Decreased cost of capital via improved ratings from credit rating agencies;
• Better exploitation of equity (risk) capital;
• Reduced stock price volatility, which also improves shareholder value;
• Boosted investors' confidence (still a much-debated and controversial topic);
• Enhanced competitive advantage through the identification of significant risks which can be actively managed.
So far, we keep in mind that ERM can add value to the company. If you were asked why a firm should deal with ERM, your very first answer should be value creation through improved decision-making. Before we can embark on our journey into the concrete process of ERM implementation, we have to define ERM properly, and in particular the often misunderstood term "risk".
1.2 Definition of ERM
In theory, a vast number of ERM definitions is available, but essentially many of these descriptions comprise similar aspects. Hampton (2015) states that the ERM concept is a comprehensive and complex system that concerns major areas of a company and that, for that reason, many definitions of ERM exist (p. 19). In order not to lose oneself in the
numerous definitions, it makes sense to have a closer look at the two most well-known
risk management frameworks and their definitions, published by the Committee of
Sponsoring Organizations of the Treadway Commission (COSO) and the International
Organization for Standardization (ISO). Both frameworks have been recently updated in
2017 (COSO) and 2018 (ISO), respectively. According to the COSO ERM Framework,
ERM is defined as:
The culture, capabilities, and practices, integrated with strategy-setting and its execu-
tion, that organizations rely on to manage risk in creating, preserving, and realizing value.
(COSO 2017, p. 10)
As we can easily notice, COSO puts emphasis not only on the capabilities, techniques
and tools, but also on the very important cultural aspects. Many risk professionals have argued in the last couple of years that cultural aspects are perhaps even more relevant for effective risk management than the existence and implementation of ERM techniques per se (Levy et al. 2010, p. 2; Vazquez 2014, p. 10). A second aspect of COSO's ERM
definition stands out—it shall be integrated with strategy-setting and its execution. Thus,
COSO stipulates that ERM should be linked to business objectives in order to create
value, which is fully in line with our main reasoning of “why ERM” (see Sect. 1.1). In
contrast, ISO defines risk management as follows (although ISO promotes a modern, integrated risk management approach, the term ERM is not mentioned at all in the guidelines):
…coordinated activities to direct and control an organization with regard to risk. (ISO
31000:2018, p. 1)
Although ISO’s definition does not explicitly comprise the link between risk manage-
ment and value creation, it specifies the purpose of risk management in the principles
section as the creation and protection of value, quite similar to COSO’s approach (ISO
2018, p. 2). In addition, ISO clearly states that culture significantly impacts all aspects of risk management, which is again in line with COSO's view on ERM. Overall, both definitions represent a sound basis for modern ERM as they both promote the link between
ERM and value creation. As such, both definitions perfectly serve the purpose of the
textbook at hand and we could stop discussing approaches. For the sake of not relying
only on definitions created by risk management frameworks and norms, here are a few
others which don’t fundamentally deviate from COSO and ISO.
The Risk Management Society (RIMS) for example defines ERM as
…a strategic business discipline that supports the achievement of an organization’s objec-
tives by addressing the full spectrum of its risks and managing the combined impact of
those risks as an interrelated risk portfolio. (Hopkin 2017, p. 53)
This definition puts emphasis on the aspect of having a unified and integrated approach
where separate management of individual risks is abandoned and risks are treated holisti-
cally throughout the whole organisation (Hopkin 2017, p. 98; Segal 2011, p. 3). Again, in line with the two former definitions, the link to the company's objectives is obvious. This is similarly confirmed by Segal (2011, p. 3) and by Hunziker (2018, p. 2)
who describe that modern ERM is a comprehensive approach to identify, evaluate, man-
age and disclose important risks in order to increase company value.
Based on the previous discussion, the following deliberately brief definition is best
suited to this textbook:
ERM embraces enterprise-wide coordinated activities with which companies identify, assess, actively manage and report all key risks in order to create value for the firm.
At this point, we conclude that many ERM definitions have been created by consultants,
risk professionals, agencies and legislative bodies. Modern definitions of ERM typically
postulate a company-wide (i.e. in all areas and across all risk categories) identification,
assessment and management of risks plus a clear link between ERM and the strategy,
business objectives, decision-making processes and ultimately value creation.
1.3 Risk Definition in the ERM Approach
In practice, firms often expect that ERM as a comprehensive approach inevitably leads to the management of hundreds or even thousands of risks. Particularly in the US, after COSO ERM was released in 2004, there had initially been a great deal of scepticism that ERM might be nothing more than an extended task that ties up many resources. Since the COSO ERM framework is generally based on the COSO framework for Internal Control, firms felt confirmed in that scepticism. However, ERM clearly does not aim to assess, manage and monitor all risks identified by a company. ERM has a different focus and deals only with so-called key risks.
Basically, a risk can evolve into a key risk over time, or it is considered a key risk at the time of its first assessment. We define a key risk as a risk that, should it occur, exceeds a significance threshold set by the company and thus can significantly affect one or several business objectives and subsequently impact company value or another financial benchmark. Let's consider the following example:
Example
The Swiss company FarAway AG operating in the travel industry markets holiday
trips in Switzerland in business unit A and holiday trips to the euro zone in business
unit B, mainly Germany and Austria. The risk database includes the following two
risks, among others:
• petty cash theft
• entry of a new competitor
As a financial benchmark, FarAway AG defined an acceptable lower bound of 8%
EBIT margin for the next business year (the expected EBIT margin is 10%).
After a first risk assessment, the worst-case scenarios for both risks look
as follows:
• petty cash theft, worst case −0.01% on expected EBIT margin (= 9.99% after risk
impact)
• loss of market share (entry of a new competitor), worst case −4% on expected EBIT
margin (= 6% after risk impact)
Based on that simple analysis, FarAway AG concluded that petty cash theft is cur-
rently not a key risk; it is therefore not included in the further ERM process but put
on a watch list instead. In contrast, loss of market share is considered a key risk due
to the severe threat it poses to the financial objectives of FarAway AG.
We conclude that ERM will never have to deal with several hundred or even thousands
of risks, as can certainly be the case when maintaining an Internal Control system in a
large company. A practicable ERM approach thus requires meaningful criteria for which
risks qualify as key risks and which are merely stored in a database as a "watch list"
but are not included in the ERM model. Practical experience shows that, regardless of
a company's size and industry, many traditional risk management approaches fail because
of their complexity and their attempt to incorporate and manage all risks instead of
focusing on key risks.
Another challenge in properly defining risk for the purpose of ERM is the fact that
managers tend to think predominantly about the (financial) impacts of risks. These con-
siderations are clearly important, but not sufficient. To develop effective risk strategies,
we need to know the sources (causes) of each risk. The relevant question to define risks
effectively should be: How can we prevent a risk from occurring so that it does not have
any financial impact? The answer is to create a plausible story, embedded in a cause-
effect chain. The cause at the very beginning of that story is usually the starting point for
discussing effective risk mitigation strategies. Let’s consider again our practical example:
Example
FarAway AG identified and assessed the key risk “loss of market share”. The worst
case is a loss of −4% EBIT margin. The Chief Financial Officer (CFO) of FarAway
AG claimed that this risk must be categorised as a financial risk due to its significant
impact on the financial performance. In a meeting with the risk manager, however,
he learned that every risk is to be categorised by its source rather than its impact to
develop preventive risk mitigation measures.
The Chief Risk Officer (CRO) together with the CFO created a simplified cause-
effect chain for that specific key risk:
Because new trends and customer needs in the travel industry were not tracked
in time, competitors may gain a competitive advantage over FarAway AG with new
and innovative offers. This may lead to lower satisfaction among our existing
customer base and to fewer new customers. In turn, this has a negative impact on
our revenues and consequently leads to a loss of 4% EBIT margin in a worst-case
scenario.
The CFO showed understanding and agreed to move that risk from the financial
category to the strategic risk category. "Now we can think of preventive measures", the
CRO suggested.
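One way to make the source-based categorisation concrete is to represent each risk together with its cause-effect chain, so that preventive measures can target the first link. This is an illustrative sketch only: the field names are invented, and the chain text paraphrases the FarAway AG example.

```python
# Illustrative representation of a risk categorised by its source rather
# than its impact. Field names are hypothetical; the chain paraphrases
# the FarAway AG example above.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    source_category: str            # strategic / operational / financial
    cause_effect_chain: list[str]   # ordered from root cause to impact
    worst_case_ebit_impact: float

loss_of_market_share = Risk(
    name="loss of market share",
    source_category="strategic",    # categorised by source, not by impact
    cause_effect_chain=[
        "new trends and customer needs not tracked in time",
        "competitors gain an advantage with innovative offers",
        "lower customer satisfaction, fewer new customers",
        "revenue decline",
    ],
    worst_case_ebit_impact=-0.04,
)

# Preventive measures start at the first link of the chain:
print(loss_of_market_share.cause_effect_chain[0])
```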
Thirdly, it is obvious that many risks can have both an upside potential (opportunity)
and a downside potential (risk), possibly to varying degrees. However, the term risk is
traditionally negatively interpreted. Questions such as “What can go wrong?” and “What
can we (financially) lose?” are the main focus in many risk management workshops.
The assessment of a potential impact and a corresponding probability of occurrence is
still prevailing in practice (Hampton 2009, pp. 4–5). The following figure illustrates the
modern approach of defining risk as a possible positive and/or negative deviation from an
expected outcome. This understanding of risk is crucial for a realistic assessment of the
total risk exposure at company-wide level.
Looking at Fig. 1.1, it becomes apparent that different risks involve different upside
and downside potentials. For example, the debtor default risk and the IT failure risk do
not have a symmetrical risk/opportunity distribution, but are strongly downside-oriented
(unrewarded risks). On the other hand, the early recognition of changing customer needs
or market entry with new products can become a strategic competitive advantage with
disproportionate potential opportunities (rewarded risks with an expected positive out-
come). To decide which risk strategy is adequate for each risk, an ERM model deals with
various positive and negative scenarios, covering the best case and the worst case at both
ends of the possible ranges. Let's assume a company only takes into account the negative
scenarios of all risks in its ERM model. This would add up to a severe overvaluation of
the overall risk exposure, since the positive scenarios (opportunities) and their diversifi-
cation effects at entity level are not considered in the risk assessment.
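The overvaluation effect can be illustrated with a small simulation. Everything in this sketch is an invented illustration: the two triangular outcome distributions and the 5th-percentile confidence level are assumptions, not figures from the text; the point is only that adding up worst cases overstates the aggregate exposure compared with simulating full outcome ranges.

```python
# Illustrative simulation: summing only worst cases overstates aggregate
# exposure compared with simulating full outcome ranges. The two
# triangular distributions below are invented for illustration.
import random

random.seed(42)
N = 100_000

def rewarded_risk():
    # e.g. market entry: outcomes from −4 to +6, most likely +1
    return random.triangular(-4.0, 6.0, 1.0)

def unrewarded_risk():
    # e.g. IT failure: outcomes from −5 to 0, most likely −1
    return random.triangular(-5.0, 0.0, -1.0)

totals = sorted(rewarded_risk() + unrewarded_risk() for _ in range(N))
p5 = totals[int(0.05 * N)]       # 5th percentile of the aggregate outcome

worst_case_sum = -4.0 + -5.0     # naive "add all worst cases" view
print(f"sum of worst cases: {worst_case_sum:.1f}")
print(f"simulated 5th percentile: {p5:.1f}")
```

The simulated percentile sits well above the naive sum of worst cases, because upside scenarios and diversification between the two risks are taken into account.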
The following example illustrates risk balancing between two business areas, and how
ERM can help create value for the company.
Example
The Swiss travel company FarAway AG identified the risk of an unexpected change in
the CHF/€ currency pair as another key risk. The news from the Swiss National Bank
(SNB) on January 15, 2015 that the minimum exchange rate of CHF 1.20 per euro
would be discontinued hit the company unexpectedly. The minimum rate had been introduced at
a time of strong overvaluation of the Swiss franc and great uncertainty in the finan-
cial markets; the aim of this temporary measure was to protect the Swiss economy
from financial loss. One reason for the SNB's move was that the overvaluation had
generally been reduced since the introduction of the minimum rate and
companies had been able to adjust to this new situation (SNB 2015).
The impact of the appreciation of the CHF against the euro was twofold: business
unit A lost around 20% of sales in 2015, as fewer holidays were booked in “expen-
sive” Switzerland. However, the company recorded a significant 10% increase in sales
in the important euro business. If both effects are offset against each other, the result
is a net positive impact at company-wide level. Traditional risk management would have
significantly overestimated this risk, as only the negative impact from business unit A
would have been included in the overall risk assessment (Hunziker 2018, pp. 12–13).
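A back-of-the-envelope calculation shows how the netting works. The −20% and +10% sales changes come from the example; the 25/75 revenue split between the units and the absolute figures are assumptions introduced only to make the arithmetic concrete (the example itself gives no unit sizes).

```python
# Back-of-the-envelope netting of the two currency effects.
# Only the −20% and +10% changes come from the example; the 25/75
# revenue split between the units is an assumed figure.
rev_unit_a = 25.0   # assumed prior-year revenue of unit A (CHF m)
rev_unit_b = 75.0   # assumed prior-year revenue of unit B (CHF m)

impact_a = rev_unit_a * -0.20   # −20% of sales in unit A
impact_b = rev_unit_b * +0.10   # +10% of sales in unit B

net = impact_a + impact_b
print(f"unit A: {impact_a:+.1f}, unit B: {impact_b:+.1f}, net: {net:+.1f}")
```

With this assumed split the loss in unit A (−5.0) is more than offset by the gain in unit B (+7.5), giving the net positive company-wide effect the example describes; a model that counted only unit A's loss would report −5.0.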
[Fig. 1.1 Risk in the ERM approach (based on Hunziker 2018, p. 11): the key risks of
business units A and B (suppliers, customer needs, debtors, market entry, IT failure,
currencies, fire) are plotted by their opportunity potential and their risk potential.]
We conclude that the term “risk” in the modern ERM approach must be understood as an
enabler to seize opportunities, as it directly and measurably compares the opportunities
and the downside risk associated with a business goal or a strategic option. In addition,
dependencies between risks must be identified and communication about risks must be
promoted. If risk is defined in this way (deviation from expected), ERM leads to better
decisions, as they can be evaluated more rationally and realistically.
1.4 ERM Frameworks
There are many options for the practical implementation of ERM. While companies
have recently increased their ERM activities and developed approaches by themselves,
consulting and auditing firms as well as standards bodies have published many ERM
guidelines, and specialised expert teams and rating agencies included ERM as a specific
assessment criterion into their rating systems (Hoyt and Liebenberg 2011, p. 795). As
COSO ERM (2017) and ISO 31000:2018 are by far the best-known and most widely
used aids to implement ERM, we will focus on these two frameworks. Basically, we
have to answer the following two questions:
• Which of these two frameworks is better suited for a modern ERM implementation?
• What is the relationship between this textbook and the COSO ERM/ISO 31000
frameworks?
The answer to the first question is not quite straightforward and needs some elabora-
tion. The following brief assessment relates only to the recently updated versions,
COSO ERM 2017 and ISO 31000:2018. Generally speaking, the two frameworks lag
behind the extant literature and research on proper risk management. Surprisingly, no
empirical studies are available to date on whether the two standards actually work in
practice, i.e. create value for companies. Although ISO 31000 and COSO ERM have
existed for many years, no publications can be found with concrete case studies of
companies that have successfully implemented COSO ERM or ISO 31000 as a whole.
Although both frameworks postulate a strong link between ERM and business objec-
tives, they approach the "story of risk management" differently: ISO 31000 is much
shorter, containing only 16 pages, and starts with core risk management definitions. ISO
recommends, in note form, examining and understanding the organisation's external and
internal context, such as mission, vision, strategy and the complexity of networks and
dependencies (ISO 2018, p. 6). In contrast, COSO ERM is written in much more detail
and contains about 110 pages without appendices. It aims at a sound understanding of
corporate strategies as a starting point for ERM implementation, followed by a risk
analysis that allows risks to be aligned with the corresponding strategies. Moreover,
in 2018 COSO released a supplement to its framework. The compendium includes many
practical examples for
implementing the 20 principles of the COSO ERM framework. Again, this supplement
puts emphasis on the link between ERM, strategy setting and value creation.
COSO ERM has been criticised by many practitioners as too extensive, too lengthy,
exclusively top-down oriented and too "prescriptive". To understand this, we need to
know who developed COSO ERM: essentially, the main contributors to the framework are
large US accounting and auditing associations that share a common interest in a highly
compliance-oriented ERM emphasising internal control and internal auditing. ISO 31000,
by contrast, is much more generic in nature. As a result, it can be used to support both
a top-down and a bottom-up approach to ERM.
To finally answer the first question: neither COSO ERM nor ISO 31000 covers all
modern ERM topics in a way companies can easily implement. However, both frame-
works basically support a modern, value-creating view of ERM (see also Sect. 1.2). In
principle, they can be used together, as they complement each other in many areas and
are considered mature, holistic and largely consistent. It should be noted, however,
that such frameworks have to reflect the consensus of many different opinions and can
hence, by definition, only be valid for the "average company". Significant innovations
do not find their way into ERM frameworks because they are usually not capable of
winning a majority. Thus, every risk professional should be aware of both frameworks.
They are helpful guidelines and can, to a certain extent, support a sound ERM
implementation.
To answer the second question: neither COSO ERM nor ISO 31000 reflects all rel-
evant topics in this textbook. To put it differently: both frameworks cannot fully
replace the textbook at hand. Where appropriate, the two frameworks are referenced and
examples are discussed. At this point, we note that both frameworks basically support
the paradigm of modern, value-creating risk management. To give the reader an impres-
sion of how this book differs from the recommendations of the frameworks, a few exam-
ples are discussed below (Hunziker 2018, pp. 6–7):
• Although both frameworks emphasise the importance of the connection to strategic
management, it remains unclear how the economic benefit (i.e. the value contribu-
tion) can be justified or measured in practice. In light of the fact that many companies
(still) do not recognise the benefits of ERM enough, this is very crucial.
• ISO 31000 and COSO ERM do not manage to establish a practical link between risk
appetite and decision-making processes. Risk appetite comprises concrete statements of
what types of risks (or what amount of uncertainty) a company consciously accepts,
regarding potential impact and probability of occurrence, in order to achieve its
business objectives. Both ISO and COSO struggle to explain how a company can discuss
and set its risk appetite properly. The statements on risk appetite made by COSO are
rather confusing and unrealistic. COSO ERM suggests that companies can formulate
very simple, qualitative risk appetite statements, such as "we do not accept serious
risks that could endanger our strategy". These kinds of statements are useless for
decision-makers, as they cannot be broken down into concrete recommendations for action
at lower organisational levels. If risk appetite is not reflected in the decisions that
impact business objectives on a daily basis, risk appetite statements are not actionable.
• ISO 31000:2018 does not use the term risk appetite at all. Instead, the phrase “risk
criteria” is used: “The organization should specify the amount and type of risk that it
may or may not take, relative to objectives. It should also define criteria to evaluate
the significance of risk and to support decision-making processes” (ISO 2018, p. 10).
As the term risk appetite is well known to most organisations and annual reports fre-
quently contain risk appetite statements, guidance on how to concretely set risk appetite
would be helpful (IRM 2018, p. 11).
• Risk identification should also include a scanning process of the external environ-
ment, but COSO ERM is strongly internally focused. Many risks are neglected if
no external screening (competitors, trends, legal developments, international market
developments, etc.) is carried out. Moreover, COSO ERM ignores so-called “black
swan” events, i.e. risks with a very low probability of occurrence and a high potential
for negative impact.
• COSO uses the term "risk event" throughout the framework. By definition, a risk
event occurs suddenly. However, there are many risks that manifest themselves slowly,
sometimes over months or even years (e.g. changes in customer needs). These so-called
emerging risks cannot be captured as "risk events". In addition, the downside risk
(what can go wrong?) dominates COSO's view of risk. This can lead to a significant
overestimation of the overall business risk if opportunities are excluded from the
risk assessment.
• Practitioners may find ISO 31000 too generic in the sense that the effort needed to
define and develop their own ERM framework is too time-consuming and too costly, and
is insufficiently supported by the framework.
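To illustrate the criticism that qualitative risk appetite statements cannot be broken down into actionable rules, one possible remedy is to express risk appetite as a quantitative decision rule. Everything in this sketch is a hypothetical illustration (the metric, the limit and the function names are invented, not a COSO or ISO prescription): a proposal is accepted only if its worst case, added to the exposure already taken, stays within a firm-level limit.

```python
# Hypothetical sketch of an actionable risk appetite: a firm-level limit
# broken down into a yes/no rule usable at lower organisational levels.
# All names and figures are invented for illustration.

FIRM_EBIT_AT_RISK_LIMIT = 12.0   # firm accepts at most CHF 12m EBIT at risk
current_exposure = 9.0           # aggregated worst-case exposure already taken

def within_appetite(proposal_worst_case: float) -> bool:
    """Accept a proposal only if its worst case, added to the current
    aggregate exposure, stays within the firm-level limit."""
    return current_exposure + proposal_worst_case <= FIRM_EBIT_AT_RISK_LIMIT

print(within_appetite(2.0))   # fits within the remaining headroom
print(within_appetite(5.0))   # would exceed the limit
```

Unlike a statement such as "we do not accept serious risks", a rule of this kind gives every decision-maker a concrete test to apply to each new proposal.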
To sum up, we appreciate both frameworks as valuable sources for modern ERM imple-
mentation. As both frameworks partially lack the incorporation of well-accepted empiri-
cal evidence on methods, approaches and techniques in risk management, the textbook at
hand aims to contribute to closing these gaps as far as possible.
1.5 Challenges to ERM Implementation
Although we now know the main benefits of modern ERM, the potential is not yet being
fully exploited in practice. Risk management is still perceived mainly as a regulatory
requirement without significant added value. There are various reasons for this (see also
Segal 2011, pp. 28–31).
First, historically grown, so-called risk silos in the company must be eliminated.
Traditionally, risks have been managed by assigning risk responsibilities to specific
business unit leaders. For example, the CFO manages risks related to the organisation’s
financial risks (interest rates, liquidity, currencies). The Chief Operating Officer (COO)
deals with risks in his or her area of responsibility, i.e. production and distribution. The
Chief Information Officer (CIO) is responsible for cyber risks and IT failure risks, and
so on. Each of these functional leaders is charged with managing risks related to their
key areas of responsibility. Each “silo leader” is responsible for identifying, assessing
and managing risks within their silo (Beasley 2016, p. 1). ERM language and techniques
have grown consistently within these silos, but not across them. This often impedes
the assessment of enterprise-wide risk exposures due to inconsistencies among the
diverse assessment techniques applied in the risk silos.
Second, the "E" in the term ERM requires an enterprise-wide risk assessment. However,
in practice, some business areas or support functions may not be considered relevant
enough from an overall perspective because they appear financially unimportant. As is
very common in the audit profession, companies might apply a similar concept of material-
ity in planning and performing ERM activities. Very often, the scope of ERM projects
is defined according to certain significance thresholds. For example, a company could
assess the relative contribution (economic relevance) of each business area to the over-
all firm performance. For reasons of resource constraints, ERM processes are then often
not implemented in the areas defined as economically less important. However, this can
severely undermine the effectiveness of ERM. A risk can originate, for example, in
rather unobtrusive, stable and smaller business areas and may impact the company as a
whole later on.
Thirdly, many companies strongly focus on financial risk management and financial
risks, which can be explained, among other things, by the recent financial crisis (global
phenomenon) and currency crisis (i.e. in Switzerland due to the strong Swiss franc).
From an ERM perspective, the question arises as to whether financial risks must indeed
be of highest priority for all companies. The management of financial risks is undoubt-
edly important, but for most non-financial companies, it often accounts for only an insig-
nificant amount of overall risk exposure. Various studies have shown that strategic risks
have by far the greatest impact potential on company value, followed by operational risks
(e.g. Smit and Trigeorgis 2004). Thus, for non-financial companies, most significant risk
sources can usually be identified in the development and implementation of the corporate
strategy. In most cases, risks and opportunities of technological change, the digitization
of business models, changing customer needs, growing competition or wrong decisions
in strategic project prioritization are far more important than pure financial risks spring-
ing from interest rates or currencies.
Fourthly, many practitioners and consultants obstinately believe that strategic and
operational risks cannot be quantified. However, only an appropriate quantification of all
risk categories allows a meaningful prioritization, assessment and management of risks
and opportunities. Since the well-known techniques of financial risk management can-
not be easily transferred to other risk categories, quantification of other risks does not
happen. In addition, other arguments are brought forward against risk quantification,
e.g. missing historical data, complexity of risks, non-applicability of stochastic models
and spurious accuracy. Other approaches, such as scenario analyses or Failure Mode and
13
Effects Analysis (FMEA), which draw on human intuition and subject matter expertise,
are not or too less used.
Finally, the training and professional experience of many risk managers is another
challenge to ERM. As a rule, the background and experience of the risk manager (or the
person in charge of risk management) significantly influences the specific approach of
ERM implementation. For example, risk managers with predominant experience in the
financial industry, equipped with training in mathematics, statistics and quantitative risk
modelling, are more focused on financial risks than on strategic risks.
With these challenges in mind, we proceed to the next chapter, which outlines the very
relevant topic of how to counter motivational, cognitive and group-specific biases in risk
analysis. Although a great deal of empirical evidence on these biases already exists, it is
still predominantly neglected in the practical application of ERM.
Key Aspects to Remember
Define the term ERM and its key attributes
ERM is an enterprise-wide coordinated process with which companies identify,
assess and actively manage all key risks in order to create value for all stakehold-
ers. An up-to-date ERM approach thus addresses risks in all business areas and
across all risk categories and considers the aggregated impact of those risks as an
interrelated risk portfolio on business objectives.
Contrast ERM with traditional risk management
Unlike ERM, many traditional risk management approaches fail because of their
complexity, their silo approach and their attempt to manage hundreds of risks at
the same time. Moreover, risk is traditionally interpreted only negatively, and there-
fore diversification effects of upside risk potentials are neglected. Modern ERM
assesses risks and opportunities on an enterprise-wide level by means of a con-
sistent "ERM language" which is understood across the company. Moreover, ERM
is directly linked to decision-making processes.
Explain which characteristics distinguish the term risk in the ERM approach
In the ERM approach, the primary causes of risk, which may be strategic, opera-
tional and financial, are relevant for the development of effective risk mitigation
strategies. It is crucial not to confuse cause with impact. By definition, risks can
have both an upside potential (better than expected) and a downside potential (worse
than expected). Risk assessments thus deal with scenario development, covering
the sources and impacts (a plausible story) of specific risks, and result in
realistic "quantified uncertainty ranges" between the worst-case and best-case
scenario of each risk.
Explain why ERM is important to support decision-making processes
An integrated ERM approach enables decision-makers to include risk/return con-
siderations in their judgements. Measured in terms of aggregated risk exposure and
contrasted with risk appetite, it becomes clear whether a company takes too few
risks and thus misses promising strategic opportunities (or vice versa). If compa-
nies understand how to manage their risk exposures, lower borrowing costs from
better ratings, higher firm value through better decisions, and greater capital effi-
ciency can result.
Describe the main challenges for ERM implementation
Although ERM emerged as an important business topic in practice, major chal-
lenges still pose a threat to successful ERM implementation. First, a stronger focus
on strategic risks is required. Many important risk sources spring from strategic
choices and strategy implementation. Second, all risks must be consistently quan-
tified to enable prioritization and evaluation. Third, the background and expe-
rience of the risk manager in charge heavily determine the success of an ERM
programme. Finally, ERM has to cover all relevant business areas of the company,
even allegedly unimportant ones.
Critical Thinking Questions
1. Why is it important to differentiate between risk and uncertainty?
2. What role do cultural aspects play for the success and value creation of ERM?
3. What types of risks typically have an asymmetric risk distribution?
4. What is the main purpose of the 2017 updated COSO ERM Framework? To
what extent does the framework meet these intentions?
5. Why is it considered difficult to assess strategic and operational risks
quantitatively?
References
Beasley, M. S. (2016). What is Enterprise Risk Management? Poole College of Management,
Enterprise Risk Management Initiative, 1–6.
Committee of Sponsoring Organizations of the Treadway Commission (COSO) (2017). Enterprise
Risk Management – Integrating with Strategy and Performance. Jersey City, NJ: AICPA.
Committee of Sponsoring Organizations of the Treadway Commission (COSO) (2004). Enterprise
Risk Management – Integrated Framework. Jersey City, NJ: AICPA.
Eckles, D. L., Hoyt, R. E., & Miller, S. M. (2014). The impact of enterprise risk management on
the marginal cost of reducing risk: Evidence from the insurance industry. Journal of Banking &
Finance, 43 (C), 247–261.
Farrell, M., & Gallagher, R. (2014). The Value Implications of Enterprise Risk Management
Maturity. The Journal of Risk and Insurance 82 (3), 625–657.
Hampton, J. J. (2015). Fundamentals of Enterprise Risk Management. How top companies assess
risk, manage exposure, and seize opportunity (2nd Ed.). New York: American Management
Association.
Hampton, J. J. (2009). Fundamentals of Enterprise Risk Management. How top companies assess
risk, manage exposure, and seize opportunity. New York: American Management Association.
Hopkin, P. (2017). Fundamentals of Risk Management. Understanding, evaluating, and imple-
menting effective risk management (4th Ed.). London: Kogan Page.
Hoyt, R. E., & Liebenberg, A. P. (2011). The value of enterprise risk management. The Journal of
Risk and Insurance, 78 (4), 795–822.
Hunziker, S. (2018). Erfolgskriterien von Enterprise Risk Management in der praktischen
Umsetzung. In S. Hunziker & J. O. Meissner (Eds.), Ganzheitliches Chancen- und
Risikomanagement. Interdisziplinäre und praxisnahe Konzepte (pp. 1–28). Wiesbaden: Springer
Gabler.
Institute of Risk Management (IRM) (2018). A Risk Practitioners Guide to ISO 31000: 2018.
London: IRM.
ISO (2018). ISO 31000:2018 – Risk management – Guidelines. Geneva, Switzerland: ISO.
Lam, J. (2017). Implementing Enterprise Risk Management. From Methods to Applications. New
Jersey: John Wiley & Sons.
Levy, C., Lamarre, E., & Twining, J. (2010). Taking control of organizational risk culture.
McKinsey Working Papers on Risk.
McShane, M. K., Nair, A., & Rustambekov E. (2011). Does Enterprise Risk Management Increase
Firm Value? Journal of Accounting, Auditing and Finance, 26 (4), 641–658.
Mishkin, F. S., & Eakins, S. G. (2018). Financial Markets and Institutions (9th Ed.). Harlow, UK:
Pearson.
Romeike, F. (2018). Risikomanagement. Wiesbaden: Springer Gabler.
Segal, S. (2011). Corporate Value of Enterprise Risk Management: The Next Step in Business
Management. New Jersey: John Wiley & Sons, Inc.
Smit, H. T. J., & Trigeorgis, L. (2004). Strategic Investment – Real Options and Games. Princeton:
Princeton University Press.
Smithson, C., & Simkins, B. J. (2005). Does Risk Management Add Value? A Survey of the
Evidence. Journal of Applied Corporate Finance, 17 (3), 8–17.
Schweizerische Nationalbank (SNB) (2015). Medienmitteilung: Nationalbank hebt Mindestkurs
auf und senkt Zins auf -0,75%. Zürich.
Vazquez, R. (2014). Five steps to a risk-savvy culture. Risk Management, 61 (9), 10–11.
Verbano, C., & Venturini, K. (2013). Managing Risks in SMEs: A Literature Review and Research
Agenda. Journal of Technology Management & Innovation, 8 (3), 186–197.
2 Countering Biases in Risk Analysis
Contents
2.1 Motivational Biases
  2.1.1 Affect Heuristics
  2.1.2 Attribute Substitution
  2.1.3 Confirmation Bias
  2.1.4 Desirability of Options and Choice
  2.1.5 Optimism
  2.1.6 Transparency Bias
2.2 Cognitive Biases
  2.2.1 Anchoring
  2.2.2 Availability Bias
  2.2.3 Dissonance Bias
  2.2.4 Zero Risk Bias
  2.2.5 Conjunction Fallacy
  2.2.6 Conservatism Bias
  2.2.7 Endowment and Status Quo Bias
  2.2.8 Framing
  2.2.9 Gambler's Fallacy
  2.2.10 Hindsight Bias
  2.2.11 Overconfidence
  2.2.12 Perceived Risks
2.3 Group-Specific Biases
  2.3.1 Authority Bias
  2.3.2 Conformity Bias
  2.3.3 Groupthink
  2.3.4 Hidden Profile
  2.3.5 Social Loafing
References
Learning Objectives
When you have finished studying this chapter, you should be able to:
• know the different biases in risk analysis
• understand the importance of biases in risk analysis
• recognise the need to counter biases throughout the risk process
• understand the limitations of debiasing strategies
• provide real examples for your management and employees
There is always an easy solution to every human problem — neat, plausible, and wrong.
(Henry Louis Mencken)
Throughout the whole ERM process, it is crucial to recognise that many risks do not
manifest themselves through exogenous events, but rather through people's behaviour
and choices. Only by applying the intellectual capacity to question our current future
prospects and long-lived assumptions can we obtain the means to manage the real risks
to which companies are exposed (Wolf 2012). As already explained, the primary objective of ERM is
to increase the quality of decisions by systematically analysing opportunities and risks.
Such risk analyses should make decision-making situations in companies more transpar-
ent and help to present uncertainties more realistically. Paradoxically, however, the input
factors for risk analyses are just as subject to biases as the decision situation itself. This
means that risk analyses only contribute to the quality of a decision if the risk manager
is aware of the most important motivational, cognitive and group-specific biases and can
reduce them by taking appropriate countermeasures.
Identifying and quantifying risks are two of the most important ERM activities in
which risk managers and related personnel engage. Behavioural decision research over
the last 50 years has found that these two risk management process steps are prone to
many motivational and cognitive biases. People usually overestimate some risks and
their corresponding probabilities and underestimate others. Biases are an inherent chal-
lenge to all decisions and deeply rooted in human behaviour. Thus, the question in ERM
activities is not whether biases exist, but rather how these distortions within the risk
management decision-making process can be effectively managed.
In the following, a distinction is made between cognitive and motivational biases. The former refer to faulty mental processes that lead to behaviour deviating from well-accepted normative principles (though it is strongly believed that this type of bias exists for evolutionary reasons). The latter include conscious or unconscious distortions of opinion due to incentives such as social pressure, the organisational environment and self-interest (Montibeller and von Winterfeldt 2015, p. 1230).
Unfortunately, the vast literature on the subject has dealt mainly with cognitive biases and has neglected motivational biases, which are harder to account for in an ERM programme. In many cases in the literature, motivational biases are mistakenly classified as cognitive biases. Some of the biases in both groups can be alleviated or amplified in group decision-making processes. To account for the importance of group-specific activities in ERM processes (e.g. risk management workshops), a separate chapter particularly covers group-specific biases.
After the explanation of each bias, specific measures are suggested which the risk
manager can apply or propose to mitigate or eliminate the negative effects. These proce-
dures and attempts to counter biases are known as “debiasing techniques”.
2.1 Motivational Biases
Let us first look at motivational biases. These biases are judgments that are influenced
by the desirability or undesirability of events, consequences, outcomes or decisions in a
company. This includes, for example, the deliberate attempt by experts to provide opti-
mistic forecasts for a preferred action or outcome. Another example is underestimat-
ing the cost of a project to deliver bids that are more competitive. Selected motivational
biases which are believed to severely impact risk analysis are presented below.
2.1.1 Affect Heuristics
Affect heuristics are a sort of mental shortcut in which people make decisions that are strongly influenced by their current emotions. Essentially, everyone's personal affect
(a psychological term for emotional reaction) plays a crucial role. Emotions influence all
kinds of decisions, large and small ones. After all, it seems obvious that someone is more
likely to take risks or try new things when he or she feels happy. Likewise, individuals
are less likely to make difficult decisions when they are depressed. If someone relies on
his “gut feeling” to make an important decision, this is typically an example of affect
heuristics (Montibeller and von Winterfeldt 2015, p. 1235).
Affect-based assessments are more pronounced when people do not have the
resources or time to think. Rather than looking at risks and rewards independently, peo-
ple with a negative attitude, e.g. towards an internationalization strategy of a company,
may assess their benefits (opportunities) as low and their risks as high. This leads to a
more negative risk-benefit correlation than would be observed under conditions without
time pressure (Finucane et al. 2000).
One study, for example, found that tobacco, alcohol and food additives are all perceived as high-risk and low-reward, whereas X-rays, vaccines and antibiotics are considered low-risk and high-reward (Fischhoff et al. 1978). The important aspect of this result is that each item was consistently classified as either low-risk/high-reward or high-risk/low-reward, even though some items are actually high-risk/high-reward or low-risk/low-reward. This occurs because smoking, drinking and food additives trigger negative emotional reactions, while the other items trigger positive emotions.
Therefore, we do not really consider the true risks and opportunities; we automatically
choose the more positive option (low risk and high reward) for concepts with positive
associations and do the opposite for those with negative associations (The Decision Lab
n. d.).
Various approaches can help to reduce the negative consequences of affect heuris-
tics. Risk managers can check whether decision-makers focus too much on a single risk
assessment proposal. They can bring critical decisions to a panel with alternative view-
points to discuss risks and opportunities. In this way, it is possible to avoid underesti-
mating the risks of an idea that somebody is very attached to. Companies can also use
decision-making tools that allow various factors to be weighted and evaluated. Within
the scope of risk identification, risks and potential risk scenarios should be formulated as
neutrally as possible. In risk assessments, it may be necessary to have risk scenarios assessed by different people with different backgrounds, interests and incentives.
For example, this could be supported by an ERM committee. Such a committee usu-
ally consists of specialists and experts from different divisions and business units. This
means that the assessment of losses or financial consequences resulting from a potential risk occurrence should be much better founded and more complete than an assessment by individual, possibly unrelated employees.
2.1.2 Attribute Substitution
Attribute substitution is an attempt to solve a complex problem by means of a heuristic attribute that is a false substitute. Concretely, people involved in risk analysis may substitute an easier problem for a difficult one incorrectly and without being aware of it. Attribute substitution is a generic model that is applicable in many different areas and is easily remembered. Essentially, attribute substitution is the collapse of attention from a broader, complex question to one that is narrower but more easily answered (Smith and Bahill 2009, p. 2). Attribute substitution may take many forms; examples include the substitution of an emotion such as fear. The problem with attribute substitution is that it often causes inaccurate (risk) assessments of emotional themes such as dread risks (terrorism, plane crashes, pandemics).
For example, when individuals are offered insurance against their own death in a ter-
rorist attack while on a foreign trip, they are willing to pay more for it than they would
for insurance that covers death of any kind on that trip, although the latter risk obviously
includes the former risk. Kahneman concludes that the attribute of fear is being substituted for an assessment of the total risk exposure of being abroad. Fear of a terrorist attack is perceived as a more significant risk than fear of dying on a trip (Kahneman 2007).
Kahneman and Frederick propose three conditions for attribute substitution (2002):
• It is not expected that substitution will take place when answering factual questions
that can be retrieved directly from memory or about current experiences.
• An associated attribute is easily accessible, either because it is automatically assessed
in normal perception or because it has been primed.
• Substitution is not recognised and corrected by the reflective system. For example, consider the question: a bat and a ball cost CHF 1.10 together, and the bat costs CHF 1 more than the ball. How much does the ball cost? Many respondents erroneously answer CHF 0.10. One explanation in terms of attribute substitution is that instead of working out the sum, respondents split CHF 1.10 into a large and a small amount, which is easy to do. Whether they regard this as the correct answer depends on whether they check the calculation with their reflective system.
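The arithmetic behind the bat-and-ball question can be checked in a few lines; a minimal sketch:

```python
# Bat-and-ball problem: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting the second equation into the first gives
# ball + (ball + 1.00) = 1.10, i.e. 2 * ball = 0.10.
total = 1.10       # combined price in CHF
difference = 1.00  # the bat costs CHF 1 more than the ball

ball = (total - difference) / 2  # correct answer: CHF 0.05
bat = ball + difference          # CHF 1.05

# The intuitive, substituted answer (CHF 0.10) fails the check:
intuitive_ball = 0.10
intuitive_total = intuitive_ball + (intuitive_ball + difference)  # 1.20, not 1.10

print(f"ball = CHF {ball:.2f}, bat = CHF {bat:.2f}")
```

The reflective check, adding the two prices back together, is exactly the step that attribute substitution skips.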
There is unfortunately no simple remedy for attribute substitution in the ERM process. First of all, it is important to become aware of the fact that people tend to substitute simpler but related risk assessments for more complex ones. Subsequently, examples of this bias can be presented to managers and decision-makers to demonstrate their own behaviour. Some suggestions made by Smith and Bahill (2009) in the context of ameliorating attribute substitution in systems engineering might be adapted to risk analysis (pp. 15–16): one way to counter the risk of mistakenly replacing a complex risk phenomenon with an easier but wrong one is to deliberately create risk analogies of greater complexity in addition to the current (easy) risk scenario. The idea behind this is that developing and discussing more complex risk analogies can be useful because they offer new perspectives on the same risk and reduce the danger of arriving too quickly at an oversimplified, substituted solution.
A second (partial) remedy for attribute substitution is to draw on subject matter experts in risk analysis processes. A subject matter expert is characterised by long-lasting practical experience that positively impacts perceptual abilities and recognition skills and enables faster decision-making. In addition, experts have stronger self-monitoring capabilities, which allow them to recognise when they make, for example, false or overly simple judgements on risks. As Smith and Bahill (2009) point out, "such noncollapsing situational awareness should serve to prevent erroneous attribute substitution" (p. 16).
2.1.3 Confirmation Bias
Confirmation bias is one of the most common biases affecting decision-makers. It is the tendency to interpret information in line with an earlier assumption rather than letting the data speak for themselves (Wolf 2012), and to select and consider only (risk) information that confirms our existing beliefs and assessments. For example, suppose a manager believes that men will respond positively to a new service and sends surveys to men who have tested it. Confirmation bias can lead him to interpret this survey in a way that confirms his preconceived notion. At an organisation-wide level, the data that underlie a decision process can be flawed: without conscious, systematic probing, data selection is prone to confirmation bias (Baer et al. 2017).
The confirmation bias can occur at different stages of the ERM process. During risk identification, there is a danger that only factors confirming an initial pre-selection will be taken into account. For example, a high cyber risk exposure may be "confirmed" simply because of the topic's high media presence, even though the company has little or no online presence and is already very well prepared for dealing with the Internet. The distortion can also occur during risk analysis and quantification: once an assessment has been carried out, facts are sought that support it.
As a manager or risk manager, it is a rare luxury to have all the relevant data before
making an informed decision. More often, we have to deal with incomplete information,
which leaves us open to confirmation bias. To avoid this trap, it is recommended to take
some time before making important decisions and ask ourselves what would have hap-
pened if we had made the opposite choice. One approach to effectively counter that bias
is to collect specific data to defend an opposite view of specific risk scenarios and then
compare it with the data that supported the first risk assessment. Next, risk managers can
reassess the decision against the larger record. Still, the perspectives may be incomplete,
but the risk assessment will be much more balanced (Redman 2017).
To further reduce the confirmation bias, risk managers should review the following
countermeasures. It is highly recommended that different subject matter experts on the
same topic are involved when making decisions on risks. For example, when it comes to
probability assessments, it is worth having the same risk scenario assessed independently
by different experts. It is also advisable to remove time pressure from decisions and to deal intensively with important risk/reward decisions that have considerable consequences for business objectives. Finally, a corporate culture that allows for different views and opinions supports critical engagement with risks.
2.1.4 Desirability of Options and Choice
Desirability bias refers to the tendency to give socially desirable answers instead of
choosing answers that reflect true views. The distortion of responses due to this personal-
ity trait becomes an important issue when, for example, unwanted risks or risks that may
jeopardise a project are being discussed. If a person knows that he or she is being moni-
tored, it is more likely that he or she will primarily indicate the risks that are known or
easy to manage. This obviously distorts the risk relevant data (Grinnell and Unrau 2018,
p. 488). Accordingly, the bias leads to over- or underestimation of probabilities, consequences, values, or weights in a direction that favours a desired alternative (Montibeller and von Winterfeldt 2015, p. 1235).
Precautions should be taken to mitigate the negative effects of the desirability of
options. Basically, it helps (again) to involve different stakeholders in decision-making
situations (Montibeller and von Winterfeldt 2015, p. 1235). With regard to ERM, for
example, opinions of experts from other departments or business units can be consulted
during risk assessments. The collected risk scenarios and associated risk data can also
be validated by experts. It is advisable to implement incentives and responsibilities that
fundamentally reduce this bias. Those people who are responsible for achieving business
objectives are basically more focused on a comprehensive identification and analysis of
the risks.
In addition, it is crucial to ask the right questions with an awareness of this bias. Thus, suggestive questions should be consistently avoided. It is also important to create a corporate culture in which risks can be discussed openly. This includes ensuring that the disclosure of risks has no negative impact on employees, meaning that the level (impact) of the disclosed risks plays only a minimal role, or none at all, in remuneration. Rather, what should be assessed is the far-sighted management of relevant risks intentionally accepted in order to pursue business objectives. Presenting concrete examples of such biases at the beginning of decision-making processes can also increase awareness.
2.1.5 Optimism
This bias occurs when the desirability of an outcome inflates expectations of its occurrence. It is often referred to as "wishful thinking" or the "distortion of optimism". The bias is particularly evident when people assess the impact or consequences of a risk scenario. It is the tendency to judge positive results too optimistically, or the tendency not to identify potentially negative results or not to see them completely (Emmons et al. 2018, p. 58). Unwanted optimism can therefore lead to unnecessary risks being taken.
For example, we usually underestimate the risk of being involved in a car accident
or falling ill. At the same time, we expect to live longer than is indicated by objective
data. We also think that we are more successful in our job than we are (Sharot 2011,
p. R941). The same distortion can also be seen in everyday business and in projects. Many large projects are budgeted far too low because decision-makers are subject to an optimism bias, which often has negative financial consequences. Nevertheless, some of today's landmark buildings would hardly have been realised if the true costs had been known from the start. Accordingly, this distortion can also have positive effects.
The following factors make the optimism bias more likely to occur (Cherry 2018a).
• Infrequent risk scenarios are more likely to be influenced by the distortion of opti-
mism. People tend to think that they are less likely to be affected by events such as
floods just because they are usually not everyday events.
• People experience the distortion of optimism more when they think that the events are under their direct control. It is not that people believe things will work magically; rather, they think that they have the skills and know-how to make them turn out well.
• The distortion of optimism is more likely to occur when negative risk scenarios are perceived as unlikely. For example, if people believe that companies rarely go bankrupt, they will be unrealistically optimistic about this specific risk.
Research has shown that people who are anxious are less likely to fall prey to the optimism bias. It has also been found that experiencing certain risk events can reduce the distortion of optimism: related to ERM, the occurrence of a risk and its consequences can thus, through the value of experience, reduce the optimism bias. Finally, one is less likely to experience the bias if one regularly compares one's behaviour with that of others in decision-making situations. In this context, it can help to establish valuation rules and place hypothetical bets against the desired event (Montibeller and von Winterfeldt 2015, p. 1235).
Researchers have also tried to help people reduce the distortion of optimism, especially to promote healthy behaviours and reduce risky ones. However, they have found that reducing or eliminating the bias is indeed incredibly difficult. Attempts to reduce the optimism bias through measures such as educating participants about risk factors and encouraging them to consider risky examples have led to little change (Cherry 2018a).
In the context of risk analysis, the following approach might reduce the optimism bias: similar to the previous biases, it is crucial to take an outside view of risk scenarios by considering the additional perspectives of subject matter experts. One effective approach that supports this idea is called "prospective hindsight", in which participants in risk assessments imagine that a specific business objective has not been accomplished and then identify all the possible risks that could explain why. This exercise enables people to identify possible risks and opportunities in their assessments that might not otherwise come to mind (see similarly Singh and Ryvola 2018).
2.1.6 Transparency Bias
Gleißner (2017) states that a transparent identification and presentation of risks is not
necessarily in the personal interest of each manager and decision-maker (p. 14). Various
reasons for this can be found that lead to both conscious and unconscious non-identifica-
tion of risks. For example, it can be assumed that people who are prepared to take fraud-
ulent (business-damaging) actions do not support complete transparency. They probably
do not want past fraudulent actions to be uncovered, nor do they want such actions to be
thwarted in the future.
Furthermore, the transparent presentation of risks can weaken a manager’s own posi-
tion. It is possible that some projects would be discontinued if all risks were presented
transparently. Especially if an employee or even a manager is dependent on a project and wants to advance his or her career with it, conscious non-identification is to be expected. However, a lack of communication about the benefits of ERM can also lead to uncertainty on the part of employees, who then consciously or unconsciously conceal risks.
Increasing managers' motivation to be accurate is a key remedy. This can be done by making them aware of potential biases, or by incentivising them for the accuracy of their feedback. Rewarding accurate feedback on risks and rewards may not sound intuitive at first. The key idea is to reward people for being more transparent and precise about risk, independent of the scale (impact) of the risk. Training, bonuses or other incentives could be offered for increasing transparency in risk assessments. If such incentive systems are adequately established, superiors can also recognise who is reporting honestly and correctly, which in turn increases visibility.
Gamification might be a very promising approach to counter transparency bias. In
fact, very little research is available on the relationship between game mechanisms and ERM transparency. However, motivating people to be transparent in risk assessments could be
enhanced by awarding specific “transparency rewards”: Collecting points, unlocking
new levels, receiving fictitious titles and other approaches could play an important role.
Internal and external leaderboards support these transparency efforts. In this context, it is
important that incentives should not only be implemented at the individual level, but also
at the team and department level (Hossain and Li 2013).
2.2 Cognitive Biases
Cognitive biases are systematic errors in thinking that may affect input into decisions
and judgments that people make. Basically, from an evolutionary standpoint, these
instincts provide mechanisms to make rapid decisions in important and complex situa-
tions based on previously observed patterns (Rees 2015, p. 12). One must be careful not
to confuse cognitive biases with logical fallacies. A logical fallacy is based on an error
in a logical argument, while a cognitive bias is related to false thought processing often
arising from challenges with attention, attribution, memory or other mental stumbling
blocks.
2.2.1 Anchoring
To arrive at a decision, an individual usually starts from an anchor number and then adjusts that number or estimate by correcting it up or down (Wolf 2012). A decision-maker must be careful not to use this as a shortcut, which can lead to wrong decisions. People tend to think automatically, and sometimes we avoid making decisions because they are too much of a burden. Anchoring can be an easy way to make decisions based on one particular piece of information. When decision-makers focus on or give too much weight to one piece of information without considering other crucial factors, serious mistakes are made (Friedman 2017).
Information overload and lack of time make people more susceptible to anchoring. If no clear points of orientation are available to the decision-maker, the person prefers to seek an anchor. If an anchor is not readily available, a decision-maker will probably latch onto the first one when numbers, statistics or other information are presented. Any projection of the future is to some extent based on historical data and also
includes some anchoring. As the balanced and conscious decision-making on risks and
rewards is a centrepiece of ERM, it is important that risk-based decisions are not based
on anchors that may significantly bias risk perception and risk assessments.
Example
Anchoring is not a curiosity occurring only in research laboratories; it can be just as powerful in the real world. In an experiment conducted a few years ago, real estate agents were given the opportunity to assess the value of a house that was actually for sale. They visited the house and studied a comprehensive information brochure containing an asking price. Half of the agents saw an asking price that was significantly higher than the house's list price; the other half saw one that was significantly lower. Each agent then gave an opinion on a reasonable purchase price for the house and on the lowest price at which he or she would sell the house if he or she were the owner.
The estate agents were then asked about the factors that had affected their judgment. Remarkably, the asking price was not among them; the agents were proud of their ability to ignore it. They claimed that the asking price had not influenced their answers, but they were wrong: the anchoring effect was 41%. In fact, knowledgeable practitioners were almost as vulnerable to anchoring effects as business administration students without real estate experience, whose anchor index was 48%. The only difference between the two groups was that the students admitted to having been influenced by the anchor, while the professionals denied this influence (Kahneman 2012).
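An anchor index of this kind is conventionally computed as the shift in mean estimates between the high-anchor and low-anchor groups, divided by the gap between the anchors themselves. The figures below are hypothetical, chosen only so that the calculation illustrates the professionals' 41% result:

```python
def anchor_index(mean_high_group: float, mean_low_group: float,
                 high_anchor: float, low_anchor: float) -> float:
    """Shift in mean estimates as a percentage of the gap between anchors."""
    return 100 * (mean_high_group - mean_low_group) / (high_anchor - low_anchor)

# Hypothetical asking prices (anchors) and mean valuations, in CHF:
high_anchor, low_anchor = 149_000, 119_000
mean_high, mean_low = 141_300, 129_000

idx = anchor_index(mean_high, mean_low, high_anchor, low_anchor)
print(f"anchor index = {idx:.0f}%")  # prints "anchor index = 41%"
```

An index of 0% would mean the anchors had no effect at all; 100% would mean the estimates moved one-for-one with the anchors. The professionals' 41% means roughly two-fifths of the anchor gap carried through into their valuations.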
Several measures are available to deal with anchoring. Risk managers can consider a
specific reference point for information when preparing risk-based decisions. It may be
essential to set an anchor based on current knowledge and financial objectives and be
willing to adapt it to changing circumstances. It is important to consider and discuss the
underlying fundamental data and assumptions which led to a specific anchor. In addi-
tion, risk managers must ensure that risk assessments remain flexible and are open to
new sources of information during workshops or interviews. They must be aware of this bias in risk analysis and avoid providing interviewees with specific anchors prior to risk identification and risk assessment.
A skilled risk manager can ask relevant questions that can reveal a company’s anchor-
ing behaviour. Are risk assessments carried out in such a way that a constructive dis-
cussion between different opinion leaders has led to consensus? Are risks assessed on
a neutral basis without specifying anchor numbers or anchor data prior to the assessment? Are risks consistently discussed with an advocate who argues against the first consensus within risk assessments or risk workshops? Taking these aspects into account may help to ameliorate the anchoring bias (see similarly Kent Baker and Puttonen 2017, pp. 118–119).
2.2.2 Availability Bias
As suggested by Tversky and Kahneman (1973), a persistent cognitive bias with special relevance for risk perception is known as availability. Leaning on frequently occurring (risk) events is an often-applied shortcut when trying to predict the future and make decisions in the face of risk and uncertainty (Wolf 2012). Availability is also affected by numerous factors unrelated to the frequency of occurrence. An example of availability is the extent to which individuals are influenced by their memories and perceptions of past events in discussions about (future) risks and opportunities.
Due to the availability bias, many risk assessments are heavily distorted. For example, we tend to systematically overestimate the risk of earthquakes, thunderstorms or fires. At the same time, we underestimate strategic or operational risks such as increasing customer complaints or systematic bottlenecks at management level. Topics intensively covered by the media are often much rarer than we believe; spectacular risks are simply much more present in our minds than unspectacular ones.
The availability bias may, for example, affect the Board of Directors. As a rule, there is an intense discussion about what management presents, e.g. quarterly figures such as revenues and EBIT. More important topics such as a skilful product launch by the competition, increased employee turnover or an unexpected change in customer behaviour are rarely adequately discussed. Yet these neglected topics can pose significant threats to the company, i.e. they can become strategic risks.
The following countermeasures can be suggested. It may be worthwhile to offer basic courses and trainings on how probabilities can be estimated without relying solely on past events and experience. Counter-examples can also be used to show the effect of availability biases. In this context, risk managers can address the challenge of assessing risks prospectively instead of retrospectively. Risk managers can set high standards for "neutral thinking" in risk workshops by asking questions that uncover potential availability distortions, such as: What happened in the past? Has this risk occurred once or several times? What type of risk mitigation was performed afterwards? Is this risk still relevant in the future? In summary, risk managers and the people who assess risks should pay close attention to the past information that flows into scenario development (Montibeller and von Winterfeldt 2015, p. 1233).
Additionally, different perspectives of various persons involved in risk assessments
should be considered regularly. A risk manager may form a team with different experi-
ences and perspectives. This countermeasure itself will limit the distortion of availability
as people usually question each other's natural thinking. It can also be worthwhile to consider external perspectives that simply do not exist within the company.
2.2.3 Dissonance Bias
An incompatible opinion (e.g. risk assessment) with our existing way of thinking cre-
ates discomfort because our mind cannot easily deal with contradictory ideas at the same
time. This discomfort is called cognitive dissonance. The result is the urge to discredit
or ignore information that does not fit the current way of thinking. Thus, it is conceiv-
able that information about downside risk is ignored because it contradicts the poten-
tial opportunities (rewards). Avoiding this dissonance can obviously affect the quality of
decisions under uncertainty.
Cognitive dissonance in the workplace is widespread and a major source of stress for
professionals working for example in organisational support functions such as risk man-
agement. There are many examples and scenarios that can lead to cognitive dissonance,
ranging from observing inappropriate and poor leadership practices to encouraging peo-
ple to take on tasks that are not consistent with procedures, norms, training, organisa-
tional or personal values. When confronted with contradictory beliefs and practices and
the pressure to tolerate them, these professionals often experience deep personal dissatis-
faction (Celati 2004, p. 58).
A first step in overcoming and eliminating dissonances is for risk managers to be aware of them and to address them in risk management workshops or interviews. Skilled risk managers can try to identify existing and potential dissonances. Role-playing exercises can create comfort and confidence, which in turn reduces dissonance. Another approach is to ask trusted people to review one's own actions and beliefs and suggest alternative courses. Successful risk managers seek feedback from others and consider their opinions in risk assessment (Kent Baker and Puttonen 2017, p. 121).
2.2.4 Zero Risk Bias
The zero risk bias describes individuals' preference for options that reduce a small risk to zero over options that achieve a greater reduction in a larger risk. In other words, we tend to prefer the absolute certainty of a smaller benefit (i.e., the complete elimination of a risk) to the lesser certainty of a larger benefit. This bias can be observed specifically in risk-averse people and managers: such decision-makers prefer small benefits that can be realised with certainty to large ones that are less certain. For a risk decision-maker, the importance of knowing about this bias cannot be overstated.
Example
Scientists identified a zero risk bias in responses to a questionnaire about a hypo-
thetical clean-up scenario involving two hazardous sites X and Y, with X causing 8
cases of cancer annually and Y causing 4. Respondents chose among three
remedies: two options each reduced the total number of cancer cases by 6, while the
third reduced the total by only 5 but completely eliminated the cases at site Y.
While the latter option achieved the worst overall reduction, 42% of
respondents rated it better than at least one of the other options. This conclusion was
similar to an earlier economic study, which found that people were willing to bear
high costs to eliminate a risk completely (Baron et al. 1993).
This bias can occur at various stages in ERM, specifically when weighing two options.
In order to reduce the risk of a disaster from 5 to 0% (i.e. to completely exclude it),
people would invest a lot more than they would to reduce it from 10 to 5%. This effect
shows that people attach irrational importance to unlikely events. Particularly concerning
risk mitigation efforts, this bias can have a considerable impact on costs.
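The cost asymmetry can be made concrete with a short expected-loss calculation; the loss figure of 1,000,000 is an assumed value chosen purely for illustration:

```python
loss = 1_000_000  # assumed monetary impact of the disaster (illustrative)

# Expected-loss saving from cutting the probability from 10% to 5% ...
saving_10_to_5 = (0.10 - 0.05) * loss
# ... versus cutting it from 5% to 0% (complete elimination):
saving_5_to_0 = (0.05 - 0.00) * loss

# Both reductions are worth exactly the same in expected-loss terms,
# yet people typically pay considerably more for the second one.
print(saving_10_to_5, saving_5_to_0)  # 50000.0 50000.0
assert saving_10_to_5 == saving_5_to_0
```

In expected-loss terms the two mitigation steps are identical, which is precisely why paying a premium for the "zero risk" step is irrational.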
A general solution for zero risk bias is not known. It is important to be aware that
there is no such thing as complete security, i.e. zero risk. One way to reduce the certainty
effect is to avoid so-called "sure things" in utility elicitation and to separate
value and utility elicitation. It can also be useful to examine the relative risk attitude
and to point out possible misinterpretations. In summary, it is often not the best course of
action to completely eliminate one risk. Instead, a balanced risk portfolio that will yield
a greater aggregated relative risk reduction is more efficient and effective than focusing
solely on risks which can be completely mitigated.
2.2.5 Conjunction Fallacy
The conjunction (joint occurrence) of two risk events is judged more likely than one of
the constituent risk events alone, especially if the probability assessment is based on a
reference case that resembles the conjunction. Conjunction errors occur when we assign a
higher probability to a risk event with higher specificity. This fundamentally violates the
laws of probability. Consider the following example from tennis:
• A: Roger Federer will win the game
• B: Roger Federer loses the first set
• C: Roger Federer will lose the first set, but win the match
• D: Roger Federer wins the first set, but loses the match
Different studies by Kahneman show that people arrange the chances by directly con-
tradicting the laws of logic and probability. He explains this as follows using the above
tennis example: The critical points are B and C. B is the more comprehensive event and
its probability must be higher than that of an event it contains. In contrast to logic, but
not representativeness or plausibility, 72% of the respondents gave B a lower probability
than C. However, the loss of the first set is by definition always a more likely event than
the loss of the first set and victory in the game (Tentori et al. 2013). The following exam-
ple rooted in the insurance industry further illustrates the conjunction fallacy.
Example
If people are given the opportunity to take out air travel insurance shortly before the
flight, they appear willing to pay more for insurance that covers terrorism than insur-
ance that covers any cause of death from air travel—including terrorism. Obviously,
insurance that only covers terrorism should be worth less than insurance that covers
terrorism in addition to some other risks (see Fig. 2.1). Perhaps because we are better
able to imagine a particular risk event, we often consider it more likely to happen
than broader, unspecific risk events (Hubbard 2009, p. 100).
In business we are often prone to conjunction errors, probably because we face so
much supportive context. For example, we might hear separate rumours that company
budgets are about to be cut and that a senior executive in our department is considering
leaving the company. We consider each of these events unlikely—perhaps a 33% chance
of budget cuts and a 25% chance of the executive leaving. But if we hear both rumours at
the same time, our intuition that both events will happen is pretty high—maybe 50% or
more.
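A minimal sketch, using the rumour probabilities from the example above and assuming the two events are independent, shows how far the intuitive estimate of "50% or more" is off:

```python
# Hypothetical rumour probabilities from the example above.
p_budget_cut = 0.33   # chance that budgets are cut
p_exec_leaves = 0.25  # chance that the executive leaves

# Assuming independence, the joint probability is the product of the two:
p_both = p_budget_cut * p_exec_leaves
print(round(p_both, 4))  # 0.0825, i.e. about 8% rather than 50%

# A conjunction can never be more likely than either event on its own.
assert p_both <= min(p_budget_cut, p_exec_leaves)
```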
To reduce conjunction fallacy, risk managers should illustrate the logic of joint prob-
abilities with Venn diagrams and provide concrete examples to participants of risk
workshops or interviews. Employees need to understand the bias and its relevance for
decision-making. One approach to uncover the conjunction fallacy is to assess the proba-
bility of two events separately and then estimate the conditional probability of one event,
given that the other event occurs. Whenever a company faces important decisions which
include several risk scenarios that can occur simultaneously, it is helpful to discuss the
probabilities of these scenarios with several experts within and outside the company.
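The separate-then-conditional procedure can be sketched as follows; the probabilities are hypothetical workshop estimates, and the dependence between the two scenarios is assumed for illustration:

```python
# Step 1: assess the probability of each risk event separately.
p_a = 0.33          # hypothetical P(scenario A)
# Step 2: estimate the conditional probability of B given that A occurs.
p_b_given_a = 0.40  # hypothetical P(B | A); the scenarios may be linked

# Step 3: derive the joint probability instead of guessing it directly.
p_joint = p_a * p_b_given_a
print(round(p_joint, 3))  # 0.132

# Even with strong dependence, the conjunction cannot exceed P(A).
assert p_joint <= p_a
```

Eliciting the joint probability as a product in this way makes it structurally impossible for workshop participants to rate the conjunction above a constituent event.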
Fig. 2.1 Intersection example from the insurance industry: terrorism insurance and
insurance for other causes of death together form insurance for any cause of death
2.2.6 Conservatism Bias
Conservatism bias is a mental process in which people hold on to their previous views or
predictions at the expense of recognizing new information (Edwards 1982). Suppose a
trader receives bad news about a company's earnings that contradicts a profit estimate
from the previous month. Under conservatism bias, the trader may underreact to the new
information and hold on to the earlier estimate. Decision-makers can take a conservative
approach in order to minimise risks; however, this bias can result in lower profits.
Avoiding bizarre and unhealthy risks should be the goal, while at the same time increas-
ing prudent risk taking, which does not necessarily lead to greater risk exposures.
For example, there is a tendency to overestimate the probability of low-probability
risk events whose impact would be significant if they did happen.
At the same time, a conservative mind-set may not fully take into account the reality that
most operational risks are higher-probability risk scenarios. It is important to note that
the conservatism bias seems to contradict the representativeness bias, the latter referring
to an overreaction to new information, while the distortion of conservatism refers to an
underreaction to new information.
Risk managers can reduce conservatism bias by carefully reviewing new information
to determine its value relative to previous beliefs and by seeking unbiased advice. If new
information is difficult to discover, verify, or explain, opinions by subject matter experts
become more important. However, every new piece of information should be analysed
and deserves careful review—it may reduce uncertainty. Another approach is to make the
thinking process more flexible, meaning that people need to learn to let go of previous
beliefs when confronted with credible evidence that contradicts existing opinions and
estimates. If people are about to ignore information because it is difficult to understand
(such as math or statistics), risk managers must either take the time to translate this infor-
mation into “business language” or involve an expert who can support the explanation of
this information.
2.2.7 Endowment and Status Quo Bias
Another type of cognitive bias is the status quo bias. People prefer things to stay as they
are, i.e. the current state to remain the same. Closely related is the endowment effect:
people demand to be paid more for an item they own than they are willing to pay for it
when they do not own it. Accordingly, their
disutility for losing is greater than their utility for gaining the same amount (Montibeller
and von Winterfeldt 2015, p. 1235). This distortion can affect human behaviour and is of
interest in many areas of sociology, politics and economics.
The evidence from a large number of experimental studies demonstrates the endow-
ment effect. In simple versions of such experiments, half of the participants receive a
particular object—for example a lottery ticket, a chocolate bar, or a pen, depending on
the experiment—and the other half receive the equivalent monetary value. Subsequently,
participants are allowed to swap the object and the money, either with the experimenter
or with each other, again depending on the particular experiment.
However, the number of trades is usually considerably lower than expected, and the
vast majority of participants prefer to keep what they receive: for instance the pens were
worth more money to those participants who started with pens than to those who started with
money. This behaviour is usually regarded as a consequence of the effects of “loss aver-
sion” and the “status quo” bias.
In politics, the status quo bias is also often used to explain the conservative way of
thinking. People who describe themselves as conservative tend to focus on preserving
traditions and keeping things as they are. This avoids risks associated with change, but
also misses possible benefits that change could bring. Of course, as with many other cog-
nitive distortions, the status quo bias has a benefit. Since it prevents people from tak-
ing risks, the bias provides some protection. However, this risk avoidance can also have
negative effects if the alternatives actually offer more safety and benefit than the current
state (Cherry 2018b).
Debiasing endowment and status quo is difficult in practice. Risk managers could
explain that the status quo is not relevant for future decisions on risks and rewards. When
discussing project risks, for example, they can show that sunk costs should not play
a role in the risk analysis and subsequent decisions (Montibeller and von Winterfeldt
2015, p. 1235).
2.2.8 Framing
Framing effects mean that people’s response to information is influenced by how infor-
mation is presented (Wolf 2012). People’s preferences can be reversed by appropriate
information design. As in prospect theory, framing often comes in the form of profits
or losses. This theory shows that a loss is perceived as more significant, and thus more
worth avoiding, than an equivalent gain. Accordingly, a certain profit
is preferred to a probable one, and a probable loss to a certain loss. Decisions can also be
formulated in such a way that the positive or negative aspects of the same decision are
highlighted, thus bringing affect heuristics to the fore.
The following example can illustrate the framing effect:
Example
“Participants saw a film of a traffic accident and then answered questions about the
event, including the question ‘About how fast were the cars going when they con-
tacted each other?’ Other participants received the same information, except that
the verb ‘contacted’ was replaced by either hit, bumped, collided, or smashed. Even
though all of the participants saw the same film, the wording of the questions affected
their answers. The speed estimates (in miles per hour) were 31, 34, 38, 39, and 41,
respectively.
One week later, the participants were asked whether they had seen broken glass at
the accident site. Although the correct answer was ‘no,’ 32% of the participants who
were given the ‘smashed’ condition said that they had. Hence the wording of the ques-
tion can influence their memory of the incident.” (Memon et al. 2003, p. 118).
Risk managers can reduce framing effects by trying to “see through the frame”, or rather,
to look at things more objectively. This task is difficult because people may have incen-
tives to "nudge" others in a certain direction or decision by the way they present informa-
tion. For example, division managers try to convince management of their successful
projects or risk mitigation measures by advertising and presenting them positively (Kent
Baker and Puttonen 2017, p. 121).
It seems important in this context that incentives exist not only at the individual level
but also at the team and department level. Another option is to get a second opinion from
a person who is not involved in the decision-making process. In most cases, the latter
can look at the different options from a more neutral perspective. Finally, research for-
tunately shows that if people feel happy, framing effects can be reduced (Cassotti et al.
2012).
2.2.9 Gambler’s Fallacy
Tversky and Kahneman introduced the gambler’s fallacy as a result of heuristic repre-
sentativeness in the 1970s. It arises from belief in the law of small numbers, namely the
notion that irrelevant information about the past is important to predict future events. If
a random event has occurred several times, we tend to predict that it will occur less fre-
quently in the future, so that the results balance out on average. Thus, we do not realise
that small samples are often not representative of the population (Sun and Wang 2010,
pp. 124–125). This error must be taken into account in particular in risk analysis and risk
scenario quantification.
Gambler’s Fallacy and the hot hand fallacy are closely related, but somewhat dif-
ferent. The hot hand fallacy refers to the phenomenon that we believe a number of
successful events (e.g. non-occurrence of risk) must be continued just because a num-
ber of successes have just occurred. For example, because no risk occurred in the last
three years, we are more likely to think that no risk will occur in the fourth year. The
Gambler's Fallacy applies when we expect a reversal of the results, not the continuation
of a certain result.
Today, a large number of risk decisions are strongly influenced by data analysis.
McCann (2014) noted that with the increasing dependence on data analysis results, play-
ers' mistakes are becoming more and more apparent. Typical evidence in the context of
prediction is the tendency to observe and identify patterns in data, even
if these "patterns" arise from nothing but random events.
In order to reduce Gambler’s Fallacy, it is advisable to impart basic statistical knowl-
edge to employees. Managers who make important decisions need to know and under-
stand statistical fundamentals. By explaining the probability logic and the independence
of events, better decisions can be made. Risk managers can identify typical examples
of mistakes and present them to management and employees (Montibeller and von
Winterfeldt 2015, p. 1236).
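The independence of events can be demonstrated with a small simulation; the annual event probability of 50% and the three-year streak below are assumed purely for illustration:

```python
import random

random.seed(42)   # fixed seed for a reproducible illustration
p_event = 0.5     # assumed annual probability of the risk event
trials = 100_000

fourth_year_outcomes = []
for _ in range(trials):
    years = [random.random() < p_event for _ in range(4)]
    if not any(years[:3]):                 # three event-free years in a row
        fourth_year_outcomes.append(years[3])

# Despite the streak, the fourth-year frequency stays close to 0.5:
freq = sum(fourth_year_outcomes) / len(fourth_year_outcomes)
print(round(freq, 2))
```

Conditioning on the streak changes nothing, because each simulated year is drawn independently; this is exactly the probability logic the text recommends teaching to decision-makers.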
2.2.10 Hindsight Bias
The hindsight bias describes how people change their estimates of the probability of
events and outcomes after they are already known. They overestimate their ability to
predict past events, even if the outcome was completely unpredictable (Wolf 2012).
The bias arises because it is difficult for people to separate what they currently know
from past experience. Although hindsight bias is now widely accepted, the under-
lying mechanisms that explain it are still being discussed. The problem with this bias
is that we believe that the causes of past events were simpler than they actually were.
Understanding this distortion is therefore essential so that we can learn from our expe-
riences and mistakes. One area in the decision-making process that is very likely to be
affected by hindsight bias is the control phase and the environmental scanning phase (see
similar Barnes 1984, p. 130).
Typical examples of this are strategic decisions made by companies that are subse-
quently regarded as obvious. For example, only a few companies in the media and cloth-
ing industries have relied on Internet commerce. In the meantime, numerous traditional
companies from these sectors have gone bankrupt. Frequently the question is asked why
these companies were not also relying on the Internet. At the time of the strategic deci-
sion, however, it could not yet be foreseen that this would be the right decision.
One way to deal with this bias is to admit that companies are susceptible to hindsight
bias. Risk managers need to remind all employees that the future is basically unpredict-
able, even if people think that they can predict certain risk scenarios based on their past
experience. Risk managers should use objective data if available to complement opinions
by subject matter experts. It is also worthwhile to review risk scenario assumptions about
future developments using (outside) expert opinions. In summary, this means that risk
managers and decision-makers should weigh different alternatives against each other,
taking into account the fact that situations are constantly changing.
2.2.11 Overconfidence
This bias describes a decision-maker’s overestimation of his or her own abilities. This
can occur in two forms: Overestimation of one’s own abilities or performance and over-
estimation of one’s own knowledge. The overestimation of one’s own performance
often occurs. For example, most drivers consider themselves to be better than average.
However, it is not possible that more than half of the drivers are better than average. The
term is used more frequently for the second form of overestimation. Decision-makers are
overconfident if they consider their own judgements to be more precise than they actu-
ally are.
Overconfidence often manifests itself in the fact that, for example, intervals are given
too narrowly. People are confronted with difficult factual questions and asked to give
their best estimate together with a 90% confidence interval. Because the given
interval is often set too narrowly, the true value is often missed
(Shefrin 2016, pp. 62–63). This phenomenon is also called “miscalibration”.
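Miscalibration can be checked by scoring stated 90% intervals against the true values. In the sketch below the question topics are real facts, but the interval answers are invented examples of overly narrow (overconfident) responses:

```python
# Each tuple: (lower bound, upper bound, true value).
answers = [
    (300_000, 400_000, 299_792),  # speed of light in km/s -> missed (low)
    (1900, 1910, 1903),           # year of the Wright brothers' flight -> hit
    (6000, 7000, 6371),           # Earth's mean radius in km -> hit
    (8000, 8500, 8849),           # height of Mount Everest in m -> missed
    (30, 40, 55),                 # an interval set far too narrowly -> missed
]

hits = sum(lo <= truth <= hi for lo, hi, truth in answers)
hit_rate = hits / len(answers)
# A well-calibrated respondent would capture ~90% of true values.
print(hit_rate)  # 0.4
```

A hit rate far below 0.9 is the operational signature of miscalibration and can be fed back to risk owners as part of probability training.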
Political scientist Philip Tetlock spent 20 years studying forecasts by experts about the
economy, stock markets, wars and other issues. He found the average expert did about as
well as random guessing or, as he put it, as well as "a dart-throwing chimpanzee". Tetlock
believes forecasting can be valid, but only when done with a long list of conditions, including humil-
ity, rigorous use of data and a ruthless vigilance for biases of all types. He said that he
believes it is possible to predict the future, at least in some situations and to some extent,
and that any intelligent, open-minded and hardworking person can cultivate the requisite
skills. Obviously, this is a challenge at the heart of the whole risk industry (Tetlock and
Gardner 2015, p. 6).
In order to overcome overconfidence bias some selected debiasing strategies can help.
Risk managers should declare probability training obligatory for risk owners and deci-
sion-makers. Risk managers can, for example, start the risk assessment with extreme risk
estimates (low and high) and thus avoid central tendency anchors (Montibeller and von
Winterfeldt 2015, p. 1233). To challenge risk scenario assessments, counter-arguments
can be developed that challenge the underlying values and assumptions. Risk managers,
but also every employee should further consider constructive criticism from people they
trust. This can serve as a very important step to reduce overconfidence. It is not necessar-
ily the case that criticism is always right; however, risk managers and risk owners get
some food for thought to challenge their own risk perception.
2.2.12 Perceived Risks
Psychologist Paul Slovic has dealt with the question of why the opinions of risk experts
differ from those of non-experts. Understanding these differences and the ability to articulate
them is a critical skill that risk managers must have (Shefrin 2016, p. 56). Slovic points
out that risk managers, when assessing risks, tend to focus more on specific variables
such as expected death rates. He points out that non-experts, on the other hand, rely more
on intuitive risk assessments (risk perceptions) that can be very different from expert
judgements.
The risk perception of non-experts is heavily influenced by two factors: dread risk
and unknown risk. Dread risk includes dread and a number of other considerations
such as perceived lack of control, fatal consequences, catastrophic potential and une-
qual distribution of costs and benefits. In the context of dread risk, he mentions serious
events such as Chernobyl and Fukushima. Unknown risk is the lack of familiarity, e.g.
whether the activity or technology has new, unobservable, unknown and delayed harm-
ful consequences. For example, the public assesses nuclear power as much riskier than
risk experts. The difference can be attributed to both dread risk and an unknown risk.
Dread risk is very complex to deal with. In this context, perceived control is an important
issue. For example, psychometric research has found that people are willing to tolerate
voluntary risks, e.g. from skiing, 1000 times higher than risks associated with involun-
tary activities, e.g. from food preservatives. Unknown risk is relevant because people are
naturally afraid of the unknown (Shefrin 2016, p. 58).
The perceived risk can be managed by using two different risk reduction strategies.
The first strategy is to reduce uncertainty by seeking information. To achieve this, a
company-wide information system is important. In this system, objective risk informa-
tion can be collected and made available to employees. It is also possible to support risk
assessments by providing useful questions such as “how often in 10 years will a major
problem with a nuclear power plant occur" or "how often will we have a supply bot-
tleneck in the next 10 years". Wrong risk perception can only be changed with the nec-
essary experience and the acquisition of knowledge. The second strategy is to reduce
vulnerability by reducing the risk exposure (Al-Shammari and Masri 2016, p. 248). It
is also helpful that risk managers support risk owners during risk identification and risk
assessment interviews. Specifically for inexperienced people, it is important to have a
mentor (risk manager) who helps to assess risks more objectively.
2.3 Group-Specific Biases
At the collective level, the confirmation bias introduced in Sect. 2.1.3 is referred to as
group-specific distortion. It typically occurs when a group aims to reach consensus
before making decisions. Group-based decisions have fundamental advantages that are
particularly evident in the following points:
• More information available
• Enriched discussion with different opinions and perspectives
• Improved accuracy and more creativity
• Higher acceptance of the decision
The relevant question is whether teams actually make better decisions than individuals
do. The so-called group-specific biases must be viewed critically. The time allowed for
decision-making in groups can be so limited that the group may be in a hurry to make
the wrong decisions. Efforts should therefore be made to ensure that all views are heard
in risk management workshops or ERM committees and taken into account.
Tip In order to integrate different views on the same risk scenario, it is neces-
sary to adopt a critical attitude. Often the best decisions come from chang-
ing the way people think about problems and looking at them from different
angles. “Six thinking hats” can help to look at problems from different per-
spectives, but one by one, to avoid confusion from too many angles that over-
load your thinking. It is also a powerful decision-checking technique in group
situations, as everyone examines the situation from every perspective simulta-
neously (Manktelow 2005, pp. 86–87).
Each “thinking hat” is a different way of thinking. These are explained below
(de Bono 1999):
• White hat: With this thinking hat, the focus is on the available data. We look
at information we have, analyse past trends, and see what we can learn. We
look for gaps in our knowledge and try to close them or take them into account.
• Red hat: "Wearing" the red hat, we look at problems with our intuition, gut
reaction and emotion. Also, we think about how others might react emo-
tionally. We try to understand the answers from people who do not fully
understand our reasoning.
• Black hat: We use black hat thinking and consider the potentially nega-
tive results of a decision. We look at it carefully and defensively. We try to
understand why it might not work. This is important because it shows the
weaknesses in a plan. It allows us to eliminate them, change them, or cre-
ate contingency plans to address them.
Black hat thinking helps make our plans “harder” and more resilient. It can
also help us to identify fatal errors and risks before we begin a course of
action. It is one of the true benefits of this model, as many successful peo-
ple get so used to thinking positively that they often cannot see problems
in advance. As a result, they are not well prepared for difficulties.
• Yellow hat: This hat helps us to think positively. It is the optimistic view that
helps us to see all the benefits of the decision and the value in it. The yellow
hat thinking helps us to go on when everything looks gloomy and difficult.
• Green hat: The green hat stands for creativity. This is where we develop
creative solutions to a problem. It is a freewheeling way of thinking with
little criticism of ideas (we can try out a number of creativity tools that will
help us).
• Blue hat: This hat represents process control. It is the hat worn, for exam-
ple, by people who lead meetings. If they have difficulties because ideas
dry up, they can direct the activity into green hat thinking. When emer-
gency plans are needed, they will prompt black hat thinking.
One variant of this technique is to look at problems from the perspective of
different professionals (e.g., doctors, architects, or sales managers) or different
customers.
Applied in this form, the six thinking hats concept can help to reduce or even prevent
biases in many of the group situations described below.
2.3.1 Authority Bias
This cognitive bias describes the tendency of people to weight the opinion of a person
of authority comparatively strongly. They are also more easily influenced or persuaded
by authority persons. There are numerous examples of how this cognitive bias is used
to influence consumer behaviour. These can be stock market tips from self-proclaimed
financial experts or advertisements for toothbrushes that promote a unique cleaning
result. The effect already occurs when people look like persons of authority, whether
they are actually experts in the field or just pretending to be. Conformity and compliance
are so deeply embedded in a person’s psyche that the acceptance of any kind of com-
mands coming from such a person becomes a standard habit. Unfortunately, we usually
simply stop questioning these authorities.
We often come across numerous articles claiming long-term health benefits associated
with coffee, wine or dark chocolate. It is claimed that these results are based on extensive
research. However, it may be worth digging a little deeper, and we may be in for a
surprise (Kamal 2018):
• The research may have been funded by the companies themselves.
• The research may have been done at an obscure university.
• The sample size may be less than 100.
• All participants may belong to a specific ethnic group.
• Etc.
Various debiasing strategies are available to reduce this distortion. Basically, it is helpful
to build mutual trust. Employees are often more open if they are not constantly moni-
tored. If we strengthen this relationship (corporate culture), employees will be more
likely to honestly report risks and opportunities. Research has also shown that increasing
psychological distance can help reduce bias: researchers have found that, instead of
always discussing important decisions in the same office, telephone conversations or
a change of premises can also contribute to bias reduction (Milgram 1965).
Risk managers can use suitable examples to draw the employees’ attention to that
bias. Before the global financial crisis of 2007/2008, which was preceded by a phase
of high growth, only a few voices were critical. Hardly any financial experts dared to
comment critically on the development, even though economic up and down cycles have
always been part of economic action.
2.3.2 Conformity Bias
Humans are social beings. Ideas about risks that conflict with the group are not always
welcome. Even if some risks are very important, people tend to contribute to stability
and cooperation. When a decision maker encounters both affirmative and conflicting evi-
dence, the tendency is to overweight the affirmative evidence and underweight the con-
flicting evidence. Having received affirmative evidence, we are often confident that we
have enough appropriate evidence to underpin our belief. The more affirmative evidence
we gather, the more confident we become.
Kelman (1958) distinguished between three different types of conformity:
• Compliance: This occurs when an individual accepts influence because he or she hopes
to achieve a favourable reaction from another person or group. He or she adopts the
induced behaviour in the expectation of gaining specific rewards or approval and of
avoiding specific punishment or rejection (Kelman 1958, p. 53).
• Internalization: This occurs when an individual accepts influence because the content
of the induced behaviour—the ideas and actions it consists of—is intrinsically reward-
ing. He or she adopts the induced behaviour because it is congruent with his or her
value system (Kelman 1958, p. 53).
• Identification: This occurs when an individual accepts influence because he or she
wants to establish or maintain a satisfying, self-defining relationship with another per-
son or group (Kelman 1958, p. 53).
Example
A good example of the conformity bias is the experiment conducted by Asch (1956).
He shows how group coercion can influence a person to such an extent that they judge
an obviously false statement to be correct. Asch asked participants to judge the length of
several presented lines. The test persons were given a small card with a line printed on
top and a selection of three more lines underneath. One of the three lower lines was
obviously just as long as the upper one, one was longer, and one was shorter. The test
subjects only had to name the line matching the upper line. Faced with this simple task
alone, each subject gave the right answer.
But then Asch brought the participants together in groups. Each group consisted
of a test person and seven helpers, who Asch had instructed without the knowledge
of the test persons. The helpers now began unanimously to give wrong answers. They
called short lines long and long lines short. And the unsuspecting test subjects? They
followed. The same test persons who had previously been able to correctly identify
the lines in front of their eyes now declared that lines ending after a few finger
widths were longer than those extending almost across the entire page. Not even one
in four subjects managed to resist the nonsense of the helpers.
Asch (1956) explained the denial of reality with the fear of a dissenting opinion.
In interviews, the test subjects said that they had doubted their own perception in
the face of the helpers’ so convincingly delivered judgments. Others claimed to have
noticed the others’ error, but did not want to spoil the mood. Some test persons even
confessed that they were basically convinced that something was wrong with them.
Avoiding risk management workshops in larger groups and conducting one-on-one
interviews instead largely eliminates conformity pressure. To counteract conformity bias
in workshops, risk managers can also collect anonymous feedback on risk scenarios first
and then discuss these inputs within the group. Additionally, they can invite new experts
into the group on a regular basis. Fresh people in risk management workshops do not
yet feel the same pressure to adapt as other members. Also, outsiders will be unlikely to
share the group’s acquired prejudices. Conflicts can nevertheless arise in such a setting.
Because they involve an outsider, however, such conflicts do not endanger cooperation within the team.
No workshop member has to stand against his own team and expect consequences that
could endanger further cooperation with the risk manager (Clayton 2011, pp. 148–149).
Basically, if people contribute anonymously to a risk assessment, they are much more
comfortable and will probably say what they really think about risks. One way to support
this is to use anonymous mailboxes as well as contact persons who are not considered
direct superiors. Management must also set the right tone that this feedback is given high
priority (Clayton 2011, p. 148). Last but not least, eliciting a second risk assessment in
addition to the first consensus on a risk can further reduce conformity bias.
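The anonymous-first procedure described above can be sketched in a few lines. The following Python snippet is an illustrative sketch, not a method from the text: the function name, the 1-to-10 rating scale, and the divergence threshold are all assumptions. It compares anonymously collected ratings with the openly stated group consensus and flags cases where conformity pressure may be at work:

```python
from statistics import mean, stdev

def detect_conformity(anonymous_scores, group_consensus, threshold=1.0):
    """Compare anonymously collected risk ratings (1-10 scale) with the
    rating the group agreed on in open discussion. A large gap suggests
    conformity pressure and warrants a second, independent assessment."""
    gap = abs(mean(anonymous_scores) - group_consensus)
    return {
        "anonymous_mean": mean(anonymous_scores),
        "anonymous_spread": stdev(anonymous_scores),
        "consensus": group_consensus,
        "conformity_suspected": gap > threshold,
    }

# Private ratings are high, yet the open consensus settled at 4.0
result = detect_conformity([7, 8, 6, 7, 8], group_consensus=4.0)
print(result["conformity_suspected"])  # True
```

Such a check does not remove the bias itself; it merely tells the risk manager when the second assessment round mentioned above is most needed.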
2.3.3 Groupthink
Groupthink is a particular way of thinking that people exhibit in groups (teams, meetings,
workshops, conferences, and committees). In groupthink, the group tends to avoid conflicts
or tries to minimise them and aims at reaching consensus. However, this consensus is
usually not based on adequate critical evaluation and analysis. Individual perspectives and
individual creativity are (partially) lost, and lateral thinking is often undesirable. It is not
the case that the group members feel compelled; rather, they feel strongly bound to the
group and avoid getting into a conflict situation. The harmony of the group is felt to be
more important than the development of realistic risk scenarios. This can indeed lead to
people making unfavourable decisions (Kaba et al. 2016, pp. 403–404).
There are several factors that can make groups susceptible to group thinking. First, a
group might have a leader who advises members not to disagree. At the same time, the
leader makes clear what he or she wants to do and to hear. People tend to act in their own
interests, and most will look for opportunities to support the leader in a way that
is consistent with their own goals. The leader might want to hear “yes”, not “yes, but”
and certainly not “no”. It also encourages group thinking when the group is made up of
members with similar backgrounds. As a result, confirmation bias and availability bias
combine to limit discussion of relevant risk issues and risk perspectives (Shefrin 2016,
p. 65).
Groupthink has a special significance when it comes to risk decisions. It leads to
“polarization”, i.e. the group dynamics strengthen the risk attitudes of the group mem-
bers. Group polarization may occur when assessing risk scenarios in risk workshops.
Groups tend to make extreme judgments during such workshops. This is particularly the
case if the persons involved hold similar opinions before the meeting starts (Moscovici
and Zavalloni 1969, pp. 125–135). If, for example, individual group members are already
somewhat risk-averse in their attitude prior to a risk workshop, group thinking can result in
the whole group becoming extremely risk-averse. If many individuals classify a risk as
high before a group discussion, this can lead to an even higher assessment of the risk
through the group discussion. Thus, there is the danger of under- and overestimation of
risks through group discussions (Lermer et al. 2014, pp. 3–4).
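The under- and overestimation risk described by Lermer et al. (2014) can be made operational with a simple comparison. The Python sketch below is illustrative only; the 1-to-10 scale and its midpoint of 5.5 are assumptions. It flags a group rating as polarized when it lies further from the scale midpoint than the average of the members' pre-discussion ratings, in the same direction:

```python
from statistics import mean

def polarization_shift(individual_ratings, group_rating, scale_midpoint=5.5):
    """On an assumed 1-10 scale, polarization means the joint group rating
    is more extreme than the average of the individual pre-discussion
    ratings, in the same direction relative to the scale midpoint."""
    pre = mean(individual_ratings)
    same_direction = (pre - scale_midpoint) * (group_rating - scale_midpoint) > 0
    more_extreme = abs(group_rating - scale_midpoint) > abs(pre - scale_midpoint)
    return same_direction and more_extreme

# Members individually rate a risk fairly high; the joint rating is higher still
print(polarization_shift([7, 7, 8, 6], group_rating=9))  # True
```

If the flag is raised, the group rating should be challenged against the individual pre-workshop ratings rather than accepted as a refined consensus.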
Example
One of the main causes of the Challenger Space Shuttle disaster in January 1986 is
considered the phenomenon of group thinking, particularly the illusion of unanimity.
The latter means that the group decision is taken to correspond to a unanimous view.
When this cognitive distortion occurs, individual members assume that the opinions and
judgements of the majority are unanimous. Groupthink results from the confirmation heuristic
and is explained by the following three characteristics: overestimation of the group,
narrow-mindedness, and pressure to conform. These characteristics can distort the
group’s decision in the wrong direction.
Although the manufacturer of the O-ring (a component of the Space Shuttle) had identi-
fied the risk of the O-ring malfunctioning in extreme cold, the manufacturer agreed
to launch the Challenger Space Shuttle due to groupthink. Factors contributing
to this irrational behaviour include in particular direct pressure on dissidents (group
members are under social pressure not to contradict the group consensus), self-cen-
sorship (doubts and deviations from the perceived group consensus are not accepted)
and the illusion of unanimity.
In the run-up to the Challenger Space Shuttle disaster, the group as a whole did not
consider the manufacturer’s opinion that the O-ring might not function properly in a
very cold environment and did not conduct a full analysis of this opinion. This
eventually led to the catastrophic disaster (Murata 2017, p. 400).
Polarization occurs because group members try to reinforce each other’s judgements and
suggestions. For example, one group member may propose a risky strategy. Other group
members confirm why this would be a good idea. This can lead to increased risk appe-
tite because the arguments are mutually confirmed and the members feel comfortable
with even more risk. In this case, the group accepts more risk than the individual would
(Stangor 2014). Finally, a group member often only discloses information if it supports
the direction in which the group is moving about certain risk scenarios. This then leads
to the confirmation of others in the group. Information that runs counter to this direction
is withheld. The same applies to information that makes the discloser appear in a less
favourable light (Shefrin 2016, p. 65).
To reduce the group thinking bias, risk managers should look for different person-
alities in a risk workshop and establish a climate where group members know why it is
important to question risks and opportunities. It is also important that all group members
follow certain rules to ensure a fair exchange of ideas and assessments. To achieve this,
groups should be kept small (5–8 participants). It is also advisable to let the group mem-
bers speak first, not an authority person. This also includes reducing power imbalances,
i.e. working with flat hierarchies in these teams. In this respect, it is advisable to provide
channels for anonymous feedback. In this way, individual members who recognise the
problem but do not dare to express themselves critically can voice their opinion
anonymously. Otherwise, there is a danger that the group would portray them as
moaners and whingers. Another effective measure is to invite people from other
departments into risk management workshops or risk committees, especially those
affected by the decisions (Shefrin 2016, pp. 64–65).
Within the scope of risk identification, it should be noted that risks should be discussed
first within the group, and opportunities only afterwards. In the reverse order, there is a
danger that the opportunities overshadow the potential risks, which are then discussed
less critically than they should be. In group situations, it can be helpful to designate a person as an advocate whose task it
is to challenge assumptions critically, including individual opportunities identified by the
organisation. With regard to the negative effects mentioned, it must be taken into account
that team decisions reflect the creativity of a large number of people and are generally
highly accepted (Shefrin 2016, p. 65).
2.3.4 Hidden Profile
If risks are identified in groups, group-specific factors can distort the ERM process.
Among other things, groups rarely manage to exchange all available and relevant infor-
mation on risks. This particularly affects information known only to individuals (Lermer
et al. 2014, p. 2). This phenomenon is discussed under the term hidden profile and is
based on the investigations of Stasser and Titus (1985). The two researchers formed
groups consisting of four students and gave the individual students convergent and diver-
gent information. The students were to arrive at a correct result in groups of four with the
help of the information received. However, this was only possible if all students shared
all the information they received with the group. However, most groups could not solve
the hidden profile task. Convergent information was exchanged and discussed, whereas
divergent information often remained unmentioned (pp. 1467–1478). This phenomenon
has been reproduced in various other studies.
Moskaliuk (2013) describes various strategies to reduce this bias. Four of them are
listed below:
• Being aware of this bias as a risk manager: This creates the basic prerequisites for
specifically avoiding the phenomenon of hidden profiles.
• Avoid hierarchies: Especially people with low status tend to withhold their exper-
tise. People with high status should thus first hold back with their own assessments in
order to give all participants opportunities to share their views with the group.
• Search and collect first, then evaluate information: This prevents information that
might be significant from being devalued directly.
• Making the expertise of those involved transparent: This makes it clear that different
opinions can be expected on the basis of their specialist knowledge. In addition, the
individual participants can be asked directly about their expert assessments.
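Moskaliuk's fourth strategy, making the expertise of those involved transparent, can be supported with a simple bookkeeping step before the discussion starts. The hypothetical Python sketch below (the data structure and function name are assumptions for illustration) lists the information items known to exactly one participant, which are precisely the items the hidden-profile effect predicts will remain unmentioned:

```python
from collections import Counter
from itertools import chain

def uniquely_held(info_by_member):
    """Return information items known to exactly one group member.
    These are the items most likely to stay unmentioned in discussion,
    so the facilitator should ask for them explicitly."""
    counts = Counter(chain.from_iterable(info_by_member.values()))
    return {item for item, n in counts.items() if n == 1}

info = {
    "alice": {"supplier delay", "fx exposure"},
    "bob":   {"supplier delay", "it outage"},
    "carol": {"supplier delay", "key-person risk"},
}
print(sorted(uniquely_held(info)))
# ['fx exposure', 'it outage', 'key-person risk']
```

The shared item ("supplier delay") would surface in discussion anyway; the value of the overview lies in the three items only one person knows.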
The first point is basically applicable to all psychological factors mentioned. Just as risks
need to be known in order to be managed, ERM specialists should be aware of psycho-
logical factors in order to reduce them. It is important to note that discussion and group
leaders in particular should become aware of psychological factors. Because of their
role, they have the necessary skills and power to steer the group in a goal-oriented man-
ner. Furthermore, the strategy of avoiding hierarchies can also be transferred to the other
group-specific biases (Scherrer 2018).
The third point is naturally addressed in ERM if the individual process steps are con-
sistently carried out separately. If risk identification and risk assessment are carried out
together, cognitive biases that tend to occur in both process steps reinforce each other.
This hinders adequate identification and consequently reduces the quality of the
entire process. It is thus better to first identify risks with a conscious management of
cognitive biases and only in a next step—which may even take place on another day—
to consciously assess the identified risks again. The last point suggested by Moskaliuk
(2013) can be considered as a specific measure to counter hidden profiles (Scherrer
2018).
2.3.5 Social Loafing
Lermer et al. (2014) describe that groups are less creative than individuals in identify-
ing risks. Thus, risk identification in groups is not necessarily advantageous (p. 1). A
possible explanation for diminishing creativity is the Ringelmann effect or social loaf-
ing. Ringelmann discovered that the average pulling force of a person during tug-of-war
decreases proportionally the more people are involved in the pull. However, this effect
could not only be proven in tug-of-war, but also in mental work activities (Leitl 2007).
This is a kind of motivation deficit, which occurs above all when the performance of
individuals is not apparent.
It is important to remember that social loafing does not always happen. For example,
Karau and Williams (1997) found that social loafing did not occur for a cohesive group.
Moreover, the results of their second study suggest that people can actually make greater
efforts when working with low-performing employees (a social compensation effect).
According to Dobelli (2018), individual contributions should be made visible in order to
reduce social loafing (p. 139). This can be done using various methods. With regard to
risk identification, Lermer et al. (2014) recommend that brainstorming be dispensed with
in the group and that brainwriting be used instead. Possible risks are noted in writing by
the individual experts. In order to avoid the negative group effect as far as possible, they
recommend that the group context be avoided altogether. This means that the experts
involved in brainwriting neither meet the other experts surveyed nor present their results
to a group. They also recommend using a network of individual experts for risk identifi-
cation, whose results are collected centrally and, if necessary, played back individually to
the experts (pp. 2–3).
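The central collection of brainwriting results recommended by Lermer et al. (2014) can be illustrated as follows. This Python sketch is an assumption-laden illustration: normalising the wording by lower-casing and whitespace-stripping stands in for whatever de-duplication a risk manager would actually apply when merging near-identical risk descriptions:

```python
from collections import Counter

def aggregate_brainwriting(submissions):
    """Centrally merge risks written down independently by each expert
    (brainwriting). Normalising the wording lets the risk manager see
    how many experts raised each risk, without convening a group."""
    counts = Counter()
    for expert_risks in submissions:
        # count each risk once per expert, case-insensitively
        counts.update({r.strip().lower() for r in expert_risks})
    return counts.most_common()

submissions = [
    ["Cyber attack", "supplier insolvency"],
    ["cyber attack", "FX volatility"],
    ["Cyber Attack ", "supplier insolvency"],
]
print(aggregate_brainwriting(submissions))
# [('cyber attack', 3), ('supplier insolvency', 2), ('fx volatility', 1)]
```

Because the experts never meet, the tally reflects independent judgements rather than a negotiated group view.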
As you have learned, the landscape associated with ERM processes is burdened with
psychological landmines. Even risk perceptions and expert assessments are suscepti-
ble to a wide range of psychological influences. The concepts mentioned above come
into play in every risk assessment. Some biases overlap in certain aspects because
they address similar problems. Reducing some cognitive biases requires the involvement of
a group, whereas group situations can in turn introduce numerous biases of their own.
Reducing susceptibility to biases is therefore a recurring task. In particular, the reduction
of biases in group work can only succeed in a suitable social environment, meaning that
the risk culture must also be addressed (Shefrin 2016, pp. 68–69).
Key Aspects to Remember
Know the different biases in risk analysis
Throughout the whole ERM process, it is important to note that many risks do
not manifest themselves by exogenous events, but rather by people’s behaviour
and choices. Basically, the following three categories of biases can be identified:
Motivational, cognitive and group-specific biases. Especially in the case of cogni-
tive biases, we are usually not aware of many thinking errors and they can only be
identified by an in-depth analysis and corresponding skills of risk managers and
decision-makers.
Understand the importance of biases for risk analysis
Biases are an important topic for risk analysis because they lead to systematic errors
in the identification and assessment of risks. Knowledge of biases and the
measures taken to reduce them can help companies to carry out a more objective
risk analysis. Most importantly, errors in risk identification due to biases can nega-
tively affect the whole ERM process.
Recognise the need to mitigate biases throughout the risk process
The mitigation of biases is an important issue. This can take place at various points
in the assessment and decision-making process. One of the most important meas-
ures is to reduce cognitive errors by making concrete examples of biases available
to risk owners and management. In addition, the involvement of several perspec-
tives or experts is often recommended. Finally, it can help to impart basic statisti-
cal knowledge to employees.
Be familiar with the limitations of bias mitigation
Not all biases can be eliminated. Every day people are confronted with possible
thinking traps and they cannot always be resolved without contradiction. There are
also scenarios in which biases can be revealed through group discussion, but at the
same time new biases are created by the group itself. Thus, a cost-benefit analysis
should also be carried out with regard to the reduction of biases.
Have some easy-to-understand examples for your employees ready
Theoretical knowledge of biases is merely the basis for recognizing biases in com-
plex practical situations. Companies are well advised to disclose identified or com-
mitted errors of thought to a broad circle of decision-makers. This is the only way
to improve decision quality. Ultimately, it helps if the risk manager can show some
biases using concrete examples. Using past decision processes documented for
example in risk management workshops, the risk manager can plausibly demon-
strate how such biases have influenced decisions about risks.
Critical Thinking Questions
1. To what extent do motivational biases differ from cognitive biases?
2. What general measures can companies take to reduce cognitive biases?
3. Under what conditions are group decisions preferable to individual decisions?
4. How can the concept of “six thinking hats” help to identify and avoid group-
specific biases?
5. What role can a positive risk culture play in reducing cognitive biases?
References
Al-Shammari, M., & Masri, H. (2016). Ethical and Social Perspectives on Global Business
Interaction in Emerging Markets. Hershey, Pennsylvania: IGI Global.
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unani-
mous majority. Psychological Monographs, 70 (9), 1–70.
Baer, T., Heiligtag, S., & Samandari, H. (2017). The business logic in debiasing. https://www.mckinsey.com/business-functions/risk/our-insights/the-business-logic-in-debiasing. Accessed 17 December 2018.
Barnes, J. H. (1984). Cognitive Biases and Their Impact on Strategic Planning. Strategic
Management Journal, 5 (2), 129–137.
Baron, J., Gowda, R., & Kunreuther, H. (1993). Attitudes toward managing hazardous waste:
What should be cleaned up and who should pay for it? Risk Analysis, 13, 183–192. https://doi.
org/10.1111/j.1539-6924.1993.tb01068.x.
Cassotti, M., Habib, M., Poirel, N., Aïte, A., Houdé, O., & Moutier, S. (2012). Positive emotional
context eliminates the framing effect in decision-making. Emotion, 12 (5), 926–931.
Celati, L. (2004). The Dark Side of Risk Management: How People Frame Decisions in Financial
Markets. London: Prentice Hall.
Cherry, K. (2018a). Understanding the Optimism Bias. AKA the Illusion of Invulnerability. https://
www.verywellmind.com/what-is-the-optimism-bias-2795031. Accessed 11 December 2018.
Cherry, K. (2018b). How the Status Quo Bias Affects Your Decisions. https://www.verywellmind.
com/status-quo-bias-psychological-definition-4065385. Accessed 11 December 2018.
Clayton, M. (2011). Risk Happens! Managing risk and avoiding failure in business projects. London: Marshall Cavendish International.
de Bono, E. (1999). Six thinking hats. Boston: Back Bay Book.
Dobelli, R. (2018). Die Kunst des klaren Denkens. 52 Denkfehler, die Sie besser anderen überlassen [The art of thinking clearly: 52 thinking errors you are better off leaving to others]. München: Deutscher Taschenbuch-Verlag.
Edwards, W. (1982). Conservatism in Human Information Processing (excerpted). In D.
Kahneman, P. Slovic & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases.
Cambridge: Cambridge University Press.
Emmons, D. L., Mazzuchi, T. A., Sarkani, S., & Larsen, C. E. (2018). Mitigating cognitive biases
in risk identification: Practitioner checklist for the aerospace sector. Defense Acquisition
Research Journal, 25 (1), 52–93.
Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judg-
ments of risks and benefits. Journal of Behavioral Decision Making, 13 (1), 1–17.
Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated fail-
ure probabilities to problem representation. Journal of Experimental Psychology: Human
Perception and Performance, 4, 330–344.
Friedman, H. H. (2017). Cognitive Biases that Interfere with Critical Thinking and Scientific
Reasoning: A Course Module. SSRN Electronic Journal. http://dx.doi.org/10.2139/
ssrn.2958800.
Gleißner, W. (2017). Grundlagen des Risikomanagements. Mit fundierten Informationen zu besseren Entscheidungen [Fundamentals of risk management: With sound information for better decisions] (3rd Ed.). München: Verlag Franz Vahlen.
Grinnell, R. M., & Unrau, Y. A. (2018). Social Work Research and Evaluation. Foundations of
Evidence-Based Practice (11th Ed.). New York: Oxford University Press.
Hossain, T., & Li, K. K. (2013). Crowding Out in the Labor Market: A Prosocial Setting
Is Necessary. Management Science, 60 (5), 1148–1160. http://dx.doi.org/10.1287/
mnsc.2013.1807.
47
Hubbard, D. W. (2009). The failure of risk management. Why it’s broken and how to fix it.
Hoboken, NJ: John Wiley & Sons Inc.
Kaba, A., Wishart, I., Fraser, K., Coderre, S., & McLaughlin, K. (2016). Are we at risk of group-
think in our approach to teamwork interventions in health care? Medical Education, 50 (4),
400–408.
Kahneman, D. (2007). Short Course in Thinking About Thinking. https://www.edge.org/3rd_culture/kahneman07/kahneman07_index.html.
Kahneman, D. (2012). Schnelles Denken, langsames Denken [Thinking, fast and slow] (3rd Ed.). München: Siedler Verlag.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intui-
tive judgement. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and biases: The
psychology of intuitive judgment (pp. 49–81). Cambridge: Cambridge University Press.
Kamal, P. (2018). How To Spot These Cognitive Biases To Make You Smarter. And Strategies To
Make It Work For You. https://medium.com/@piyush2911/how-to-spot-these-cognitive-biases-
to-make-you-smarter-4649a82b5a6c. Accessed 22 November 2018.
Karau, S. J., & Williams, K. D. (1997). The effects of group cohesiveness on social loafing and
social compensation. Group Dynamics: Theory, Research, and Practice, 1, 156–168.
Kelman, H. C. (1958). Compliance, identification, and internalization: three processes of attitude
change. Journal of Conflict Resolution, 2, 51–60.
Kent Baker, H., & Puttonen, V. (2017). Investment Traps Exposed: Navigating Investor Mistakes
and Behavioral Biases. Bingley, UK: Emerald Publishing.
Leitl, M. (2007). Social Loafing? Harvard Business Manager. http://www.harvardbusinessmanager.
de/heft/artikel/a-622728.html. Accessed 20 November 2018.
Lermer, E., Streicher, B., & Sachs, R. (2014). Psychologische Einflüsse II: Risikoeinschätzung in Gruppen [Psychological influences II: Risk assessment in groups]. https://www.munichre.com/site/corporate/get/documents_E399088179/mr/assetpool.shared/Documents/0_Corporate_Website/1_The_Group/Focus/Emerging-Risks/2013-09-emerging-risk-discussion-paper-de. Accessed 20 November 2018.
Manktelow, J. (2005). Mind Tools. Essential skills for an excellent career (4th Ed.). Swindon, UK:
Mind Tools Ltd.
McCann, D. (2014). 10 cognitive biases that can trip up finance. CFO.com. http://ww2.cfo.com/
forecasting/2014/05/10-cognitive-biases-can-trip-finance. Accessed 20 November 2018.
Memon, A. A., Vrij, A., & Bull, R. (2003). Psychology and Law: Truthfulness, Accuracy and
Credibility (2nd Ed.). Chichester: Wiley.
Milgram, S. (1965). Some Conditions of Obedience and Disobedience to Authority. Human
Relations, 18 (1), 57–76.
Montibeller, G., & von Winterfeldt, D. (2015). Cognitive and motivational biases in decision and
risk analysis. Risk Analysis, 35 (7), 1230–1251.
Moscovici, S., & Zavalloni, M. (1969). The group as a polarizer of attitudes. Journal of
Personality and Social Psychology, 12 (2), 125–135.
Moskaliuk, J. (2013). Warum Gruppen falsch entscheiden [Why groups decide wrongly]. https://www.wissensdialoge.de/hidden_profile. Accessed 20 November 2018.
Murata, A. (2017). Cultural Difference and Cognitive Biases as a Trigger of Critical Crashes or
Disasters – Evidence from Case Studies of Human Factors Analysis. Journal of Behavioral and
Brain Science, 7, 399–415. https://doi.org/10.4236/jbbs.2017.79029.
Redman, T. C. (2017). Root Out Bias from Your Decision-Making Process. Harvard Business
Review. https://hbr.org/2017/03/root-out-bias-from-your-decision-making-process. Accessed 11
December 2018.
Rees, M. (2015). Business Risk and Simulation Modelling in Practice: Using Excel, VBA and @
RISK. Chichester: John Wiley & Sons.
Scherrer, M. (2018). Menschlicher Faktor im Risikomanagement [The human factor in risk management]. Bachelor Thesis, Lucerne University of Applied Sciences and Arts.
Sharot, T. (2011). The optimism bias. Current Biology, 21 (23), R941–R945.
Shefrin, H. (2016). Behavioral Risk Management. Managing the Psychology That Drives
Decisions and Influences Operational Risk. New York: Palgrave Macmillan.
Sing, R., & Ryvola, R. (2018). Cognitive Biases in Climate Risk Management. https://reliefweb.int/
sites/reliefweb.int/files/resources/RCRCCC%2Bcognitive%2Bbiases_5%2Bshortcuts.ppd.
Accessed 18 January 2019.
Smith, E. D., & Bahill, A. T. (2009). Attribute Substitution in Systems Engineering. Systems
Engineering (January 2009), 1–19.
Stangor, C. (2014). Principles of Social Psychology – 1st International Edition. https://opentextbc.
ca/socialpsychology/. Accessed 29 January 2019.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased
information sampling during discussion. Journal of Personality and Social Psychology, 48 (6),
1467–1478.
Sun, Y., & Wang, H. (2010). Gambler’s fallacy, hot hand belief, and the time of patterns. Judgment
and Decision Making, 5 (2), 124–132.
Tentori, K., Crupi, V., & Russo, S. (2013). On the determinants of the conjunction fallacy: prob-
ability versus inductive confirmation. Journal of Experimental Psychology, 142 (1), 235–255.
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. New
York: Crown Publishers.
The Decision Lab (n. d.). Affect Heuristic. https://thedecisionlab.com/bias/affect-heuristic/.
Accessed 11 December 2018.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probabil-
ity. Cognitive Psychology, 5 (2), 207–232.
Wolf, R. F. (2012). How to Minimize Your Biases When Making Decisions. https://hbr.
org/2012/09/how-to-minimize-your-biases-when. Accessed 21 November 2018.
3 Creating Value Through ERM Process
Contents
3.1 Balance Rationality with Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Embrace Uncertainty Governance as Part of ERM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3 Collect Risk Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.1 Identify Sources, Events and Impacts of All Risks . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Develop an Effective and Structured Risk Identification Approach . . . . . . . . . . . 56
3.3.3 Identify Risks Enterprise-Wide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.3.4 Treat Business and Decision Problems not as True Risks . . . . . . . . . . . . . . . . . . . 59
3.3.5 Don’t Let Reputation Risk Fool You . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.3.6 Focus on Management Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.7 Conduct One-on-One Interviews with Key Stakeholders . . . . . . . . . . . . . . . . . . . 76
3.3.8 Complement with Traditional Risk Identification . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.4 Assess Key Risk Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.4.1 Identify Key Risk Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4.2 Quantify Key Risk Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.4.3 Support Decision-Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.4 Differentiate between Decisions and Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.4.5 Overcome the Regulatory Risk Management Approach . . . . . . . . . . . . . . . . . . . . 115
3.4.6 Overcome the Separation of Risk Analysis and Decision-Making . . . . . . . . . . . . 116
3.4.7 Assess Impact on Relevant Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4.8 Avoid Pseudo-Risk Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.4.9 Develop Useful Risk Appetite Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.4.10 Make Uncertainties Transparent and Comprehensible . . . . . . . . . . . . . . . . . . . . . 128
3.4.11 Exploit the Full Decision-Making Potential of ERM . . . . . . . . . . . . . . . . . . . . . . 133
3.4.12 Align ERM with Business Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.4.13 Replace Standard Risk Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
3.4.14 Disclose Risks Appropriately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
3.5 Assess and Improve ERM Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.5.1 Test ERM Effectiveness Appropriately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.5.2 Increase ERM Maturity Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
50 3 Creating Value Through ERM Process
Learning Objectives
When you have finished studying this chapter, you should be able to:
• differentiate between intuition and rationality
• know how the ERM process works
• explain how ERM can add value to the company
• assess risks and develop quantified key risk scenarios on your own
• understand the importance of integrating risk information into decision-making
processes
• assess the maturity level of an ERM programme
3.1 Balance Rationality with Intuition
In practice, many company decisions are based on both intuitive and rational input, often with different weights between them. Effective ERM should be designed to reduce the intuitive and increase the rational input into decision-making processes. It goes without saying that fully intuitive, qualitative procedures in risk management are not capable of improving rational decision-making. However, risk management itself is prone to many well-known motivational and cognitive biases (Chap. 2) and often relies on informal, intuitive assessments. Such unstructured risk assessments comprise large portions of gut feel and professional experience and lack transparent, objective decision criteria. In addition, intuitive assessments often fail to consider the diverse opinions within the company that could increase reliability. Intuitive approaches to risk management, and subsequently to decision-making, are not necessarily wrong and may even be highly efficient and effective under certain circumstances. In situations where decision-makers face frequent and insignificant or urgent decisions for which they have many years of relevant experience, intuitive decisions may indeed be the best choice (see similarly Rees 2015, p. 7).
We have to pay attention to the use of the term “rational”, which may be misleading in the context of ERM. Amongst many other definitions, “rational ERM” focuses on the “accuracy of beliefs” and the full exploitation of the best available information. Intuition is usually understood as a decision-making process that relies on non-conscious and rapid recognition of associations and patterns to make affective judgements (Dane and Pratt 2007). In this respect, a person or group that does not act rationally holds beliefs (e.g. about the impact and probability of a specific risk) that do not fully consider all relevant information at hand, and does not follow a linear, step-by-step and analytical process which can be explained ex post (Simon 1987). Thus, even best-practice rational ERM is prone to subjective and intuitive risk assessments. However, rational ERM aims at reducing subjectivity and intuition as far as possible.
For the purpose of this textbook, we define rational risk management as the approach to
• consciously decrease the impact of cognitive and motivational biases on risk assessments as much as possible
• collect as much relevant information as possible (Dean and Sharfman 1996)
• rely on structured, step-by-step risk analysis methods such as scenario analysis
• quantitatively assess and aggregate key risks and assess the effect on key success metrics to identify interdependencies between risks
• combine intuitive input (management judgement) with objective, data-based input where appropriate
• increase transparency of decision criteria (make decisions reproducible)
• apply rules which are known to work analytically (e.g. cause-effect analysis)
• accept decisions that are mainly based on intuition where appropriate.
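As a minimal sketch of the quantitative aggregation point above, the following Monte Carlo simulation combines a few hypothetical key risk scenarios into an aggregate loss distribution. The scenario triples, the uniform impact distribution and the independence assumption are illustrative choices for this sketch, not the textbook’s prescribed method:

```python
import random

def simulate_aggregate_loss(risk_scenarios, n_trials=10_000, seed=42):
    """Monte Carlo aggregation of independent key risk scenarios.

    Each scenario is a (probability, impact_low, impact_high) triple;
    when a risk occurs in a trial, its impact is drawn uniformly
    between the stated bounds.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for prob, low, high in risk_scenarios:
            if rng.random() < prob:          # does this risk occur in the trial?
                total += rng.uniform(low, high)
        totals.append(total)
    return sorted(totals)

# Illustrative scenarios (probability, min impact, max impact), e.g. in EUR millions
scenarios = [
    (0.10, 1.0, 5.0),   # supply-chain disruption
    (0.30, 0.5, 2.0),   # key project delay
    (0.05, 3.0, 10.0),  # regulatory fine
]
totals = simulate_aggregate_loss(scenarios)
p95 = totals[int(0.95 * len(totals))]  # 95th-percentile aggregate loss
```

Reading a percentile such as the 95th off the sorted totals is one simple way to express the aggregate effect on a key success metric rather than reporting each risk in isolation.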
Clearly, in practice, intuition in decision-making processes overrides rational ERM many times. Even if the results of a “rational risk analysis” unambiguously contradict the gut instinct of management or the board, decisions are made anyway, arguing that the risk analysis may be wrong (e.g. pseudo-accuracy of risk quantification) or has at least omitted relevant factors and uncertainties. Another reason not to use rational input is that creating “rationality” is time-consuming, costly, may be considered too complex, and is not in line with how the human brain is wired (fast and intuitive decisions). In other situations, intuition and rationality can create a paradoxical tension because these two approaches are fundamentally different and inconsistent. Thus, their conjoint application may result in tensions. This tension may be resolved in a less than ideal way: for example, a rational manager may disregard intuition because of its biases and focus solely on rational and analytical procedures (Calabretta et al. 2016, p. 4). Eventually, management judgement cannot be fully replaced by the “best” rational decision-making tools. Complex and rare risk events, for example, cannot be fully captured by any formal risk analysis and still need a considerable amount of intuition and judgement by the decision-maker.

After all, rational risk analysis is designed to reduce well-known biases in risk analysis activities and to support an adequate balance between intuitive and rational approaches in significant decision-making processes. In that sense, formal risk analysis in an ERM approach can support decisions by developing reasonable quantitative risk scenarios which cover the full range of potential future outcomes and, ultimately, increase decision quality by challenging strategically relevant management assumptions. Increased decision quality in turn can enhance performance (e.g. an increase in company value) through the selection of promising projects, investments and efficient risk mitigation measures (Rees 2015, p. 19).
3.2 Embrace Uncertainty Governance as Part of ERM
Too often, risk management is primarily understood as a regulatory approach which aims
at safeguarding corporate value. However, this approach does not go far enough from a
modern corporate governance perspective. Good corporate governance not only focuses
on asset protection, but also on increasing corporate value (Filatotchev et al. 2006). This
requirement is fully in line with the modern ERM approach which is ultimately geared
to increase corporate value. In traditional risk management, the focus is on securing processes and systems; supporting value-creating decision-making processes is left to management. In this traditional sense, risk management is not a very creative management tool and is hardly concerned with the future development of the company. It essentially deals with the efficiency of established processes and projects and with compliance with laws and regulations. In addition, traditional risk management predominantly cares about “well-known” risks for which the company has a sufficient data basis or enough experience to assess them by means of probabilities and impact, e.g. financial risks.
It immediately becomes clear that traditional risk management fails in rare, unique and complex decision-making situations. New projects or major investments in new products, the expansion into new markets, and mergers and acquisitions, for example, are often excluded from traditional risk management because it is not able to deal methodically with this type of complexity and high uncertainty regarding probability of occurrence and impact. If successful, these complex decision-making situations all contribute to an increase in company value. This is precisely the claim of modern ERM: to create value. How can the gap between traditional, value-preserving and modern, value-enhancing ERM be closed? To put it simply, one answer is that companies have to promote good uncertainty governance (see Casas i Klett 2008, pp. 26–30). What does that mean? A basic distinction can be made between the terms uncertainty and risk. In traditional risk management, it is often implicitly assumed that risk, or the underlying probabilities, are reasonably measurable. This means that decision-makers have a priori knowledge of the distribution of probabilities, e.g. based on historical data. Uncertainty, on the other hand, is qualified as not measurable and highly subjective and is therefore not suitable as a rational decision criterion.
Uncertainty governance is based on the theory of behavioural economics, founded by the famous researchers Kahneman and Tversky. It stipulates that subjective assessments in decision-making situations can be a misleading guide. As a result, decisions under uncertainty may become even more uncertain due to the human factor. This contradicts the main requirement that risk management reduce the uncertainty associated with decision-making processes. Does this mean that complex, potentially value-adding decisions should not be made from a risk management perspective? The following arguments would seem to support this view:
• Lack of data to reasonably assess probabilities
• No previous experience with comparable decision-making situations
• Human assessments are subject to different biases
• Outcomes are highly uncertain.
Certainly not. Such decisions must be made in order to create corporate value. It is difficult to imagine companies rejecting all potentially value-creating projects and investments because no reliable risk assessment (i.e. one based on a priori knowledge of the probabilities of success) is possible. Such decisions, when carefully prepared, can lead to high growth and added value in the positive case. They are thus definitely necessary. Can this problem be reconciled with the modern ERM approach? Are decisions that have unmeasurable and often low probabilities of success compatible with risk management? The answer is clearly yes. ERM can methodically support the conscious handling of uncertainty; there is no contradiction. Accordingly, modern ERM implies appropriate uncertainty governance.
In principle, risk management can also be valuable in such complex decisions
involving a high degree of uncertainty. Uncertainty governance also means that larger
losses are accepted if the decision quality was high at the time the decision was taken.
Modern ERM can make the following important contribution to increasing the quality of
decisions:
• Firstly, it is important to recognise and transparently disclose that such decisions are
indeed highly risky and that if successful, the company can make significant progress
(to be defined differently depending on the company context). In the event of a loss,
however (e.g. product launch fails), the entire investment can become worthless.
• With the methods of modern ERM, various plausible (e.g. very pessimistic) scenarios can be developed despite high uncertainty and a lack of data. These scenarios show openly and transparently that the degree of uncertainty is high and that one specific probability of occurrence cannot be assigned meaningfully. A better way to deal with this issue is to introduce probability ranges, which are capable of expressing the degree of uncertainty transparently and quantitatively.
• Modern ERM seeks to increase rationality by using measures to reduce cognitive and motivational biases (see Chap. 2).
• Modern ERM focuses on the human being. Leadership qualities and human judgement are regarded as valuable sources of risk assessment and scenario development.
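The probability-range idea above can be sketched as a small two-stage simulation: instead of committing to a single occurrence probability, a value is drawn from the stated range in every trial. The range, trial count and uniform draw are illustrative assumptions for this sketch:

```python
import random

def simulated_frequency(p_low, p_high, n_trials=10_000, seed=1):
    """Second-order uncertainty: the occurrence probability itself is only
    known as a range [p_low, p_high]. Draw a probability per trial, then
    simulate whether the event occurs with that drawn probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        p = rng.uniform(p_low, p_high)  # the uncertain probability itself
        if rng.random() < p:            # event occurrence given that p
            hits += 1
    return hits / n_trials

# Expert judgement: "somewhere between 5% and 25% likely"
rate = simulated_frequency(0.05, 0.25)
```

The simulated frequency settles near the midpoint of the range, while the spread of the drawn probabilities communicates, rather than hides, how uncertain the assessment is.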
Somewhat different from Casas i Klett (2008), we do not consider risk management and
uncertainty governance as two different main concepts of corporate governance in this
textbook. These concepts only remain fully different if risk management is understood
in its traditional form as a regulatory monitoring instrument to protect the value of the
company and to ensure process and system efficiency. But the boundaries dissolve when
we talk about ERM. This approach combines the best available data and information for
risk assessments. In some cases, these are large amounts of financial data that allow sim-
ple derivation of probability distributions. In other cases, risk management increases the
decision quality of risky, value-enhancing investments and projects by processing peo-
ple’s assessments and judgements in the best possible way (i.e. largely unbiased) into
plausible risk scenarios. Figure 3.1 summarises our understanding of risk management
and uncertainty governance.
It draws on the basic considerations of Casas i Klett (2008), but has been adapted to
the extent that uncertainty governance is not understood as an independent main concept,
but as an integral part of the modern ERM approach.
3.3 Collect Risk Scenarios
Key risk identification is the very first and critical step in the ERM process, which is a
continuous, enterprise-wide and integrated process. Risks are identified by source, for a
certain timeframe, and for each of the different risk categories. The result of this step is an inventory of all key risks. It is important that a risk manager is aware of the critical practical challenges before starting the process.
[Fig. 3.1 Uncertainty governance as a part of ERM. The figure places traditional risk management (risk; data- and regulatory-driven; securing and monitoring processes and systems; protecting firm value) and uncertainty governance (uncertainty; people- and creativity-driven; subjective judgement of executives; increasing firm value) side by side under corporate governance, with both combined in the modern ERM approach.]
3.3.1 Identify Sources, Events and Impacts of All Risks
In risk assessments (personal interviews, risk workshops or the request to fill in a tem-
plate), many people tend to think about the (financial) consequences of risks first: What
happens if a risk occurs? What impact does it have on my area of financial responsibil-
ity? For example, what is the potential impact on liquidity (e.g. excessive inventories),
earnings (e.g. bad debt losses) or costs (e.g. development of new services)? Of course,
every risk (independent of its source) has financial consequences and is often incorrectly categorised as a “financial risk”. People with a strong financial mindset (e.g. a financial analyst or a CFO) are especially prone to this way of thinking about risks. However, from an
ERM perspective, the identification of risk sources is far more relevant for the development of effective, preventive risk mitigation measures. What may cause a risk to occur? Where must preventive measures be implemented to prevent financial impact (e.g. shortening storage periods, introducing debt recovery, carrying out market analyses)? Risks must therefore be developed in the form of a plausible story, i.e. a so-called cause-effect chain. The cause at the very beginning of that risk story is often the starting point for defining effective risk mitigation strategies.
For example, the risk of a ratings downgrade is often found in the risk registers of companies funded with public debt. However, a ratings downgrade may be seen as a risk event embedded in a story of different causes and impacts. In this case, poor relations with the rating agency or a poorly executed strategy may be the sources of that risk. Of course, debt ratings determined by rating agencies may have a positive or negative impact on capital costs, and thus also a financial impact (effect). Another risk story, based on an everyday situation, is displayed using a simple tool for visualising such cause-effect stories called bow-tie analysis (see Fig. 3.2).
The risk events can be found in the middle of the bow-tie diagram. An overtired taxi driver collides with stones on the motorway, skids and overturns. The incident is reported by the media, which puts the taxi company in a bad light. In addition, legal requirements are violated, because the taxi driver did not have a sufficiently long recovery time before his drive. On the left side of the bow-tie, the possible causes that led to these incidents are listed: the rockfall, the poor visibility due to rain and twilight, a broken headlight and an overtired, sickly taxi driver are responsible for this collision. On the right side of the diagram, we can see the consequences of this accident. As we can easily recognise, the risk story always ends with financial losses: fines and insurance deductibles become due, and due to the damage to its reputation, customers switch to a competitor, which leads to lower revenues.
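To make the separation of causes, events and impacts concrete, the taxi bow-tie can be represented as a simple data structure. The dictionary below paraphrases the figure; the field and function names are illustrative choices for this sketch, not from the book:

```python
# Minimal sketch of the taxi bow-tie as a cause-event-impact record.
taxi_bow_tie = {
    "causes": ["rocks on street", "low visibility", "broken headlight",
               "driver fatigue"],
    "events": ["collision", "tipping", "media coverage", "regulatory breach"],
    "impacts": ["passenger injury", "taxi damage", "fines", "compensation",
                "reputation impact", "reduced revenues"],
}

def preventive_targets(bow_tie):
    """Preventive measures attack the left side of the bow-tie (causes)."""
    return bow_tie["causes"]

def mitigation_targets(bow_tie):
    """Mitigating measures soften the right side of the bow-tie (impacts)."""
    return bow_tie["impacts"]
```

Keeping the three lists separate is exactly what prevents the categorisation error described below: a risk whose impacts are financial is not thereby a financial risk if its causes are operational.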
The lessons learned from these two examples are clear: Although both risks ultimately
lead to negative financial impact, they are not financial risks. The causes of both risks
lie in the operational and strategic environment. These risks must be categorised accordingly; otherwise, sources and impacts of risks are confused and the consistency of the risk identification and risk categorisation process is violated.
3.3.2 Develop an Effective and Structured Risk Identification Approach
In practice, many risk management systems lack a well-developed and well-structured approach to risk identification. Failure to apply a structured and well-developed risk identification process can lead to serious problems:
• Risk identification is not linked to the achievement of business objectives and is created only for the sake of a risk inventory
• Relevant key risks with a major impact on business objectives are not identified
• Uncoordinated risk identification leads to higher costs and less credibility of the overall ERM programme
• Risk identification is too operationally focused and not strategically oriented enough, i.e. risks are considered only after plans and strategies have been approved by management and major decisions have been made
• Relevant stakeholders of ERM are not involved, leading to lower acceptance of the overall ERM
• The best available sources of risk information are not considered
• Risk identification is too narrowly focused on internal risks (no environmental scanning)
[Fig. 3.2 Bow-tie analysis: separation of causes, events and effects (adapted from Protecht 2013). Causes (rocks on street, low visibility, broken headlight, driver fatigue, obstacle overlooked) lead to events (collision, tipping, media coverage, regulatory breach), which in turn lead to impacts (car passenger injury, taxi damage €, fines €, compensation €, reputation impact, reduced revenues €).]
ERM is a strategic management tool that has to deal with strategy-relevant risks and opportunities. A systematic and “as complete as possible” risk identification can be achieved by considering and combining various tools and by taking into account external and internal perspectives. A clever filter function within the risk identification process prevents minor, non-relevant risks from being included in the subsequent risk assessment process. All the following information and explanations within the risk identification paragraphs serve to make risk identification more effective and efficient, and thus to create a basis for credible ERM that is accepted by the company and creates value.
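As an illustration of such a filter function, the sketch below keeps only candidate risks that are linked to a business objective and whose worst-case impact exceeds a materiality threshold. The field names, threshold and objective set are assumptions made for this sketch:

```python
def filter_key_risks(candidates, impact_threshold, business_objectives):
    """Keep only candidate risks that (a) are linked to a business objective
    and (b) exceed a materiality threshold on worst-case impact."""
    return [
        r for r in candidates
        if r["objective"] in business_objectives
        and r["max_impact"] >= impact_threshold
    ]

# Illustrative candidates with impact, e.g. in EUR millions
candidates = [
    {"name": "key supplier default", "objective": "growth", "max_impact": 4.0},
    {"name": "minor office incident", "objective": None, "max_impact": 0.01},
]
key_risks = filter_key_risks(candidates, impact_threshold=1.0,
                             business_objectives={"growth", "profitability"})
```

Linking the filter to business objectives, not just to impact size, keeps the subsequent assessment focused on risks that actually threaten the strategy.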
3.3.3 Identify Risks Enterprise-Wide
Many companies have already implemented some kind of enterprise risk management and declare it accordingly as “ERM” in their annual reports. On closer inspection, however, risks are not always identified, assessed and managed enterprise-wide. In some cases, business areas are completely excluded from risk analysis; sometimes the focus is only on financial or operational risks; and sometimes only risks whose sources lie inside the company are identified. There are basically five reasons why companies fail to implement ERM enterprise-wide. These reasons are depicted in Fig. 3.3 and described below (see similarly Segal 2011, pp. 25–27).
[Fig. 3.3 Reasons not to implement ERM enterprise-wide. An organisation chart (board, CEO, divisions for products X, Y and Z, each with marketing, finance and R & D functions) is annotated with the gaps discussed below: a profitable business unit, an excluded business unit, a missing strategic focus, a missing external focus and a financial risk focus.]
1. Profitable Business Unit: Companies can be deliberately reluctant to conduct an in-depth risk analysis in business areas that are very profitable, fast-growing and capable of offsetting less profitable business units. Risk management is often still perceived as a “business barrier” because only the downside risk is addressed. This may give rise to the concern that a thorough risk analysis could slow the growth and profits of the successful business unit. Thus, management may implement ERM first in areas that are less critical to the company’s financial performance.
2. Excluded Business Unit: Very often, risk management implementation is started with a pilot project (e.g. with a first business unit), followed by an enterprise-wide, step-by-step roll-out plan. However, the roll-out can be repeatedly delayed due to other priorities, and the result is an incomplete ERM implementation. In many companies, risk management does not enjoy top priority on the management agenda. Often, scarce resources or other promising, directly profitable projects are considered more important and urgent than ERM.
3. Missing strategic focus: The focus of risk management often lies on the operational
area of the company. Paradoxically, the management of operational risks is equipped
with relatively high resources (e.g. process risk management, internal control sys-
tems), while a full integration of strategic risks into the ERM is often missing or is
methodically implemented at a significantly lower level (e.g. only qualitative, infor-
mal risk assessments). Numerous studies clearly show that strategic risks should be
the most important risk category for the non-financial industry (Segal 2011, p. 29).
For example, significant company value losses are primarily attributable to the occur-
rence of strategic risks, not to operational or financial risks. There are three important
reasons why companies often fail to treat strategic risks holistically and as a priority. Firstly, companies often lack the methodological knowledge of how strategic risks can be quantitatively assessed, which means that the analysis often remains at an unstructured, qualitative level. Secondly, it is argued that strategic risks are too complex to be assessed and that no data are available. Thirdly, risk managers often have no access to the strategy document or are not invited to the strategy table at all. This may be related to the risk manager’s low hierarchical position: he or she is often not a member of management and thus not directly involved in strategic issues.
4. Missing external focus: Experience shows that ERM often has a strong internal focus.
This means that risks are identified by internal subject matter experts and internal
risk owners. This leads to a risk identification that primarily captures risks internally
(risk source is within the company). Many risk owners identify risks for their spe-
cific, internal area of responsibility, which are then aggregated and reported to man-
agement and board. A structured analysis of the environment for the purpose of risk
identification using simple tools such as PEST analysis is missing. Yet many significant risk sources actually emerge outside the company. Of course, ERM is not designed to accurately predict the future concerning political, economic, social and technological developments and the corresponding risks and opportunities; nobody owns a working crystal ball. However, an analysis of the environment can help to identify, as early as possible, some potential risks and opportunities that could arise from it. Risk-related information from the WEF’s global risk report, the analysis of surveys and studies on emerging risks, reading professional journals, attending risk-management-related research conferences, exchanging information in risk management associations, and analysing risk disclosures in annual reports or in SEC filings (Form 10-K), for example, can all help in this.
5. Financial risk focus: Historically, risk management has evolved from insurance and
financial risk management. Many sophisticated quantitative methods for risk assess-
ment have been known for more than half a century. To this day, many education and
training programmes are specialised in financial risk management. Many courses
in the area of financial management also focus on risk management, but primarily
from a narrow financial perspective. Thus, today we face the problem that many pro-
spective risk managers bring a strongly finance-oriented mindset into the company.
Unfortunately, methods and techniques of risk identification and risk assessment used
in financial risk management can not easily be transferred to other risk categories
(especially strategic risks). As a result, many risk management systems focus on the
financial risk category due to the missing knowledge and the educational background
of risk managers.
3.3.4 Treat Business and Decision Problems not as True Risks
It is clear that in many risk management workshops or in one-on-one interviews with the
risk manager, not only true risks (see definition in Sect. 1.3) are identified. Many of the
risks articulated in risk identification endeavours tend to concern existing weaknesses or
concerns about unfavourable conditions in the company (Rees 2015, p. 34). At the operational level, for example, an inadequate and inefficient business process may be mentioned. Because a business line manager perceives a deviation from his or her expected efficiency level, this gap is often classified as a “business risk”. Of course, a vast number of measures can now be discussed to close this gap and make the process more efficient, e.g.:
• Process re-design
• Assign accountability of the process to one single person
• Increase IT support of the process
• Focus on a few of the most important key controls
• Reduce non value-creating process activities (getting rid of activities that waste time
and resources)
• Outsource that specific process to increase overall efficiency.
It is important for risk managers to know that the current low efficiency level of a process is not per se a risk, but a business problem. The true risk in this example, in accordance with our risk definition, lies in the fact that the planned actions to improve the process efficiency may not have the desired effect (remember: deviation from what was expected or planned is risk).
At the more strategic level, for example, the low growth rate of a new business area
can pop up in a risk workshop. Again, many potential actions can be taken to improve
the growth rate to an expected or ideal level:
• Closely monitor the competitors
• Create a new marketing campaign
• Invest in talented people
• Increase social media activities
• Tone at the top: Communicate the importance of sales to all employees
• Develop new products or services
The true risk here is not the weak growth rate per se, but rather that the planned activities do not successfully raise it to the required or expected level. Of course, these business problems may be of great importance for the company, but from a risk management perspective they should not be directly included in the further ERM process. The problems per se are already existing weaknesses and no longer represent risks which may materialise in the future. If, however, corresponding measures are taken to resolve or improve these business problems, new (real) risks may arise in the future. These risks include the aforementioned uncertainty as to whether the planned measures will actually have the expected impact or not.
Another stumbling block of the risk identification process is distinguishing between decision problems and “true risks”. Again, in risk workshops, participants may identify risks in the form of pure decision issues. Let us consider the situation where a manager is concerned about an upcoming decision regarding the implementation of a new Enterprise Resource Planning (ERP) system. She believes that it might be a risk that this IT project is rejected because it is given too low a priority. From her perspective, the new ERP system would significantly improve the efficiency of many business processes and, ultimately, be a competitive advantage. From a risk management perspective, this is not a traditional risk, because the decision is fully controllable by the company itself, i.e. no unexpected or uncontrollable variability is associated with it. An easy test to assess whether something is a decision problem rather than a true risk is to answer the following question: does it make sense to assign a probability of occurrence to the alleged risk? If the answer is “no” because the result is fully controllable by the company’s decision, then it is certainly not a true risk. True risks usually have a variability attached to them even if nothing is decided at all. Decision problems only vary in the sense of the difference between the pre- and post-decision state, but they may be as crucial for the success of a company and its risk profile as traditional risks (Rees 2015, pp. 34, 40).
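The probability-of-occurrence test can be sketched as a tiny helper that separates workshop items into true risks and decision problems. The flag and item names are illustrative assumptions for this sketch, not from the book:

```python
def is_true_risk(item):
    """Apply the test from the text: if the outcome is fully controllable by
    the company's own decision, no occurrence probability is meaningful and
    the item is a decision problem, not a true risk."""
    return not item["fully_controllable"]

# Illustrative workshop items
items = [
    {"name": "ERP project rejected due to low priority",
     "fully_controllable": True},   # a decision problem
    {"name": "process redesign misses efficiency target",
     "fully_controllable": False},  # a true risk: uncontrollable variability
]
true_risks = [i["name"] for i in items if is_true_risk(i)]
```

Items that fail the test are not discarded; they are routed to decision-making rather than to the risk assessment step.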
What can we conclude from this distinction between risks and decision problems? Of course, upcoming business decisions are not meant to be ignored; in fact, they must be identified and classified as such for further assessment of the most effective actions to take, which could be either to implement risk measures or to make a business decision. The lesson learned here is to consider, in decision-making about mitigation strategies, not only the volatility of risks and their probabilities, but also the potential changes of the baseline (plan) values through different business decision options (Rees 2015, pp. 40–41).
3.3.5 Don’t Let Reputation Risk Fool You
An excellent reputation is crucial for most, if not all, companies. It enhances credibility,
loyalty, attractiveness and preference (Bunnenberg 2016). These attributes may have a
positive impact on costs and revenues. For this reason, a company’s reputation is a valuable asset to manage actively. However, while there is a broad consensus on the importance of reputation, no single comprehensive definition has yet been found. According to Fleischer (2015), this is because the question of how reputation is created has not yet been fully answered. As long as there is uncertainty about what actually causes good reputation, it cannot be conclusively defined (pp. 54–55). On the other hand, the lack of a broadly accepted definition is due to the fact that the term has been the subject of scholarly and academic discourse for decades. It has literally been broken down into its individual parts as it has found its way, via American authors, into numerous economic disciplines. So far, it has not been possible to combine these individual parts into a definition that is acceptable to all economic disciplines (Kirstein 2009, p. 25). With this knowledge in mind, we agree for the purpose of this textbook on a more recent, evaluation-oriented definition. The following definition differs from many others in that it focuses on an evaluative rather than a perception-based view. It serves as a good basis for establishing a relationship to reputational risk.
Definition: Corporate reputation may be understood as the observers’ collective judgements of a company based on the assessments of the financial, social, and environmental impacts attributed to the company over time (Barnett et al. 2006, pp. 34–36).
Since products and services of many companies hardly differ from each other, 70 to
80% of company value today is created by intangible assets (Eccles et al. 2007). This, of course, also includes the value of a good reputation. Reputation has gained in importance
and represents a central success driver of most companies. Particularly in today’s world,
companies are primarily regarded as “social organisations”. Companies have long been understood not only as economic and technical systems; they must also create social acceptance and prestige. Today, economic success is a well-balanced mix of products
and social acceptance (Buss 2007, p. 233).
The whole process of creating good reputation is reinforced by globalisation and the
associated internationalisation of markets and by industries at the end of their life cycles.
3.3 Collect Risk Scenarios
62 3 Creating Value Through ERM Process
These developments pose major challenges for companies. Specifically in difficult times
and during economic crises, media interest in stumbling companies is even greater. In
addition, the internet and social media can quickly turn a previously local event into a
national or even international affair. As the boundaries between the inside and outside
world dissolve and the pressure for transparency increases, reputation is becoming
increasingly important. Thus, companies with a high reputation are more likely to survive crises, as stakeholders perceive the company as less interchangeable (Hillmann
2011, p. 5).
So far we have learned that corporate reputation creates value that needs to be pro-
tected or even expanded. Of course, everything that is valuable is also subject to the risk
that this value could be negatively impacted. At this point, we must link corporate reputation to reputational risk. As with the many different definitions of reputation, no market standard has yet been established for a uniform definition of reputation risk (Deloitte 2015, p. 5). For our purposes, we define reputation risk as follows:
u Reputation risk is the risk of unexpected loss due to a change in the observers’ col-
lective judgements of a company based on the assessments of the financial, social, and
environmental impacts attributed to the company over time (based on the definition of
corporate reputation by Barnett et al. 2006, pp. 34–36).
Reputation risk is a very company-specific risk and varies depending on the product or
service the company offers. Some companies are more susceptible and have to expect
faster and larger losses of trust than others. For this reason, every company should assess reputation risks individually. Let us briefly consider what the current literature teaches us about what reputation risk is. We are faced not only with disagreement on the definition, but also with disagreement on the characteristics of reputation risk. As Roth (2015) points out, a reputation risk is a so-called secondary risk, with other, preceding risks occurring first. She identified three triggers which can cause reputation risk:
• Non-compliance: Reputation risk can be triggered by non-compliance with regulatory requirements, for example if unlawful conduct becomes publicly known. Such primary risks can be a breach of tax law, a financial accounting scandal or disregard for environmental regulations (Sieler 2007, p. 6).
• Unethical practices: Violations of ethical and moral rules also increasingly trigger
reputation risk (Bunnenberg 2016). Such risks include fraud, corruption and inhuman
working conditions.
• Event risks: Finally, unforeseeable events can also impact a company’s reputation. For
example, preceding risks can be a hostile takeover bid, restructuring or occupational
accidents (Sieler 2007, p. 6).
This understanding of reputation is predominantly found in companies which already have an ERM in place. In these companies, reputation risk is treated as an additional dimension of impact. Another approach to managing reputation risk is to consider it as
a separate risk category. As such, reputation risk does not have to be related to other
risk categories or it can even trigger subsequent risks (Chapelle 2015, p. 38; Romeike
and Weissensteiner 2015, p. 20). For example, the subsequent risk of not having access
to debt capital or problems in personnel recruitment can occur due to a bad reputation
(Weissensteiner 2014, p. 35). There is consensus in the literature that
reputation risk management is indispensable due to the enormous importance of good
reputation as an asset and competitive advantage. Reputation risk must be integrated into
the general ERM process.
After having touched on the terms of reputation and reputation risk, we now turn to
the main problem of dealing with reputation risk in practice. In most risk inventories,
reputation risk is listed as one of the key risks. The problem with this is that reputation per se is not a correctly defined risk. If we consider the discussion above on the distinction between causes, events and impact, it quickly becomes clear that reputation risks are never properly defined by their sources. Let us have a look at Fig. 3.4.
Reputation risk is an event that can be placed in the middle of a risk scenario using the bow-tie technique. First of all, potential sources have to be identified that can lead to a subsequent reputation risk. These sources can often be found in the operational risk category. Internal embezzlement, poor product quality or the exploitation of
employees can be causes that subsequently lead to, e.g., criminal prosecution and/or intense negative media attention.

Fig. 3.4 Reputation risk (bow-tie diagram: causes such as non-compliance, unethical practices, poor product quality or a hostile takeover bid lead to events such as prosecution, media coverage, the reputation risk itself and strategic risk, which result in the financial impacts of fines, higher cost of capital, reduced revenues and a lower company value)

These risks themselves may cause a negative impact on reputation, which, in the worst case, can evolve into a strategic risk for the company. The
consequences of a reputation risk must also be analysed in detail. Reputation losses can
lead to higher capital costs, lower revenues and ultimately lower company value. The
final impacts of reputation risk are always financial consequences. Thus, it is of little use to consider reputation as an independent risk per se; it must be embedded in one or more risk scenarios that identify the causes and impacts of reputation risk. Reputation risks found in companies' risk registers are wrongly stated risks because they cannot be managed as such if their sources have not been identified. Accordingly, reputation risk does not lead to concrete actions, as it is not defined in the form of a cause-and-effect analysis that would enable management of that risk.
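The cause-event-impact logic above lends itself to a small illustrative sketch in code. The following Python snippet is not from the book; the class and its names are our own, and it merely encodes the point that a reputation risk without identified causes and impacts cannot be acted upon:

```python
from dataclasses import dataclass, field

@dataclass
class BowTieScenario:
    """A risk scenario expressed as a cause -> event -> impact chain."""
    event: str
    causes: list = field(default_factory=list)   # preceding (primary) risks
    impacts: list = field(default_factory=list)  # final financial consequences

    def is_manageable(self) -> bool:
        # Only a scenario with identified sources and consequences
        # can lead to concrete mitigation actions.
        return bool(self.causes) and bool(self.impacts)

reputation = BowTieScenario(
    event="Reputation risk",
    causes=["Non-compliance", "Unethical practices",
            "Poor product quality", "Hostile takeover bid"],
    impacts=["Fines", "Higher cost of capital",
             "Reduced revenues", "Lower company value"],
)

print(reputation.is_manageable())  # True; a bare register entry without causes would be False
```

A "reputation risk" entry created without causes and impacts would fail the check, mirroring the argument that such register entries are wrongly stated risks.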
3.3.6 Focus on Management Assumptions
This textbook on ERM does not focus primarily on strategy development and strategy
implementation. For these topics, many very good standard textbooks are available (e.g.
Barney and Hesterly 2006; Collis and Montgomery 2004). However, we cannot completely do without discussing explicit references to strategic management. A central con-
cern of modern ERM is the integration of risk analysis into strategic activities. In this
respect, risk management cannot be separated from strategic management. However,
the following explanations on strategic management are now clearly geared to the risk
management perspective. It is demonstrated at which interfaces and with which methods
a risk manager can create added value to the classical strategic management processes
which are mainly based on uncertain management assumptions.
One step of utmost importance to implement a successful ERM programme is to
understand the basic strategic risk assessment process and the role of the risk manager
within it. Strategic risk assessment should be clearly owned by management and embedded as an indispensable part of their overall strategic risk management responsibility. Strategic risk assessment is a systematic and ongoing process for assessing relevant
risks that could endanger the longevity of a company. Performing an initial strategic risk
assessment is a useful activity for management and the board. It is a responsibility that
cannot be delegated to lower hierarchical levels. Both the board and management need to
understand the company’s strategy and the associated strategic risks. The following sec-
tions discuss the distinct steps of risk identification and its practical challenges.
3.3.6.1 Start with Understanding the Business Strategy and Strategic Risk
The development and promotion of strategic risk management processes and compe-
tencies within the organisation can create a strong foundation for the improvement
of risk management and general corporate governance (Frigo and Anderson 2009).
Strategic risk management can also add value to the company by constantly analysing the company's strategy and the corresponding assumptions, and by proactively developing appropriate measures for countering the most relevant risks that could endanger the achievement of strategic objectives. As a result, the management, the board and the risk manager must challenge all strategically relevant assumptions (by means of both intuitive and
rational techniques) to increase the effectiveness of strategic risk management. However,
from an ERM perspective, every risk manager needs a good understanding of the compa-
ny’s strategy and business model. Thus, the initial step in the risk identification process is
to gain a deep understanding of key business strategies, their components and all underlying assumptions. Not all companies have well-developed and well-documented strategic plans and objectives; many take a more informal approach to documenting and articulating their strategic goals. Indeed, surprisingly few companies are capable of clearly stating their strategy and competitive advantage in a few sentences.
Collis and Rukstad (2008) point out that “most executives cannot articulate the objective,
scope, and advantage of their business in a simple statement. If they can’t, neither can
anyone else” (p. 1). Thus, very often, the basic precondition to conduct a strategic risk
assessment is (partially) missing. Every company needs to develop an overview of key
strategies and business objectives in order to identify specific strategic risks associated
with them. This crucial step will also serve as the foundation to align risk management
with strategic management. A useful approach which facilitates and provides structure to
strategy formulation is suggested by Collis and Rukstad (2008).
Strategic risks are often not quantitatively assessed due to their high complexity and a
lack of knowledge and data. Of course, companies usually do not have much experience
with the same type of strategic risks over time. Strategic risks usually emerge abruptly
and hit many companies only once in their life cycles. In addition, it is challenging for
companies to identify, interpret, assess and prepare for such risks. These often low-probability, high-impact risks can escalate quickly, leaving companies confused, paralysed and often prone to error (Deloitte 2017). Strategic risks have proven to be those risks that
are most critical to the company’s ability to successfully execute its strategy and achieve
its various strategic objectives (Frigo and Anderson 2011). Strategic risks can manifest
themselves in various forms, such as pursuing an inappropriate strategy by misjudging
the demand for a specific new product. Even with the “correct” strategy, there remains the risk of not being able to implement it successfully. Other strategic risks include missing out on important market trends, fast-changing customer preferences and disruptive innovation risk. An example of the latter strategic risk is described below.
Example
With disruptive innovation, a service or a product displaces established suppliers on
the market. As a rule, the offer first penetrates the lower market segment with simple
applications and then rapidly gains market share.
Companies tend to innovate faster than customer needs evolve (e.g. from CD to
DVD to Blu-ray). As a result, services and products come onto the market that are
too expensive and demanding for many people. But they serve the higher levels of
their markets and the customers who always want the best alternative. As the margins
in these sub-markets are high, the companies achieve a correspondingly high level of
profitability.
However, this mechanism for success opens the door to “disruptive innovations” in
the lower market segments (e.g. streaming services). Disruptive in this context means
addressing new consumers who could not previously afford a service or product.
Disruptive companies often start with low margins, small target markets and simpler
products compared to existing solutions (see the price of a song on Spotify). Such
“disruptive companies” may pose a strategic risk for an established company.
Due to the low margins, they are unattractive for established companies that focus
on the upper market segment. This creates space at the lower end for disruptive com-
petitors. Some examples of disruptive innovation, which can lead to disruptive innovation risk for established companies, include (see Clayton Christensen, n.d.):
Disruptor               Disruptee
Smartphones             Cellular phones
Discount retailers      Full-service department stores
Retail medical clinics  Traditional doctor's offices
Streaming services      Compact discs
3D printing             Lathes and milling machines
Cloud computing         On-premises computing
Mini mills              Integrated steel mills
An interesting approach to classify sources of strategic risks can be found in one of the
very rare papers on strategic risks. Slywotzky and Drzik (2005) developed seven major
strategic risk areas. In each of these risk areas, different types of strategic risks can arise:
• Industry risk (margin squeeze, rising R&D or capital expenditure costs, overcapacity,
commoditization, deregulation, increased power among suppliers, extreme business-
cycle volatility),
• Technology risk (shift in technology, patent expiration, processes that become
obsolete),
• Brand risk (erosion, collapse),
• Competitor risk (emerging global rivals, gradual market-share gainer, one-of-a-kind
competitor),
• Customer risk (customer priority shift, increasing customer power, overreliance on a
few customers)
• Project risk (R&D, IT, business development or M&A failure)
• Stagnation risk (flat or declining volume and weak pipeline).
Of course, the paper published by Slywotzky and Drzik (2005) does not improve strategic risk management in companies per se; rather, it can be used to challenge one's own strategic environment and to support strategic risk identification by helping to trigger the right thoughts, e.g. in risk workshops. Having gained a good grasp of the company's strategy,
its businesses and the term “strategic risk”, the risk manager can now advance to the next
step on his or her journey to identify all key risks.
3.3.6.2 Collect All Management Assumptions
In practice, many companies face the challenge of not knowing how they can effec-
tively and efficiently identify their most relevant risks. Surprisingly few textbooks on
ERM actually present techniques and methods for focused, strategy-relevant risk identification. Checking and questioning all assumptions made at management and board level is the first and most important step of a focused risk identification process (see similarly Sidorenko and Demidenko 2017, p. 86). A risk manager has to elicit and collect the assumptions made by management and board on key strategic risks inherent in the company's strategy and objectives. This step also provides the opportunity to challenge key individuals' assumptions regarding potential emerging strategic risks. Critical
assumptions about developments in the technological, political, social and economic
environment (e.g. currencies, market growth, customer behaviour, regulatory framework)
can quickly become obsolete. In checking these assumptions, a risk manager can make a
valuable contribution through a targeted risk analysis in which he or she can introduce an
additional, usually more rational perspective to these assumptions. Most of these man-
agement assumptions about the company’s future success are clearly of strategic nature.
These assumptions relate to the strategy development and strategy implementation pro-
cess. It is thus of crucial importance that appropriate attention is paid to strategic risk
management.
The analysis of strategic management assumptions should begin by breaking down strategic objectives into operational objectives and key performance indicators
(KPIs). Specifically, in larger companies, strategic objectives are already present in the
form of measurable targets and thus serve as a good basis for the risk manager to under-
take a risk analysis. Of course, it is of crucial importance that a risk manager has access
to the strategy documents (which is not always the case), the financial plan, the business
plan and the budget to assess all key assumptions of the management (Sidorenko and
Demidenko 2017, pp. 8–9). What remains is the question of how companies can translate
strategic goals into measurable, action-oriented criteria. Basically, there are many strate-
gic instruments that cover the interface between strategic and operational focus.
One of the well-known tools is the Balanced Scorecard (BSC). It comprises a num-
ber of structural similarities and interfaces with ERM: The structure of the BSC as a
planning, management control and information tool provides an appropriate basis for
challenging management assumptions on a more tactical level. Both ERM and the BSC are designed to achieve strategic goals. Both management tools consider the strategy from
an enterprise-wide perspective and focus on almost all (risk) areas and their critical value
drivers. One of the main advantages of the BSC lies in the fact that a limited number of key measures (“twenty is plenty”) with specific target values is directly derived from strategic objectives. These measures, defined for example as “our
revenues are expected to grow faster than that of the strongest competitor in order to fos-
ter our market position”, are subject to many uncertainties which require a thorough risk
analysis from an ERM perspective (Hunziker et al. 2018, p. 55). Let's look at a concrete
example of how a measurable target based on the BSC can serve as a basis to identify
assumptions and ultimately identify risks.
Figure 3.5 shows the financial perspective of a balanced scorecard from a ski and hik-
ing company. Within this perspective, several tactical performance indicators have been
defined. One of these relates to the sales target. The company aims to achieve a 10%
increase in sales compared to the previous year. The minimum acceptable limit is 6%.
The sales target must now be subjected to an assumption analysis. This means that the
risk manager has to identify all uncertain assumptions for the three product groups Ski,
Skiwear and Hiking that could have an impact (positive or negative) on the achievement
of this target. Examples of such uncertain assumptions are the expected impact of a marketing campaign, the expected inflation rate, expected competitor behaviour and expected weather conditions.

Financial perspective (product groups: Skiwear, Skis, Hiking gear)

Strategic target                Key figure                                     Unit  Bottom tolerance  Target figure
Increase return on investment   Return on investment                           %     20.00             30.00
Increase revenue                Increase of revenue compared to previous year  %     12.00             20.00
Increase contribution margin    Average contribution margin per customer       $     140.00            180.00
Improve cash flow               Average cash flow                              $     40'000.00         50'000.00

Identification of management assumptions (examples per product group):
• Customer acquisition (marketing campaign) +10%
• Customer acquisition +5%
• Stable exchange rates
• No new competitor
• No inflation
• Good to very good snow conditions
• Good to very good weather conditions

Management assumptions = uncertainties = risks = require risk analysis

Fig. 3.5 Break down of strategic objectives

From an ERM perspective, all these assumptions are risks with vari-
ability attached that need to be collected and analysed as part of the risk identification
process step.
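The step from uncertain assumptions to risk analysis can be illustrated with a simple Monte Carlo sketch. All distributions, probabilities and ranges below are invented for demonstration purposes; in practice they would be elicited from management. Only the 10% sales growth target and the 6% minimum follow the example above:

```python
import random

random.seed(42)

TARGET = 0.10  # planned sales growth vs. previous year
FLOOR = 0.06   # minimum acceptable growth

def simulate_growth() -> float:
    # Each management assumption becomes an uncertain variable:
    marketing_uplift = random.triangular(0.02, 0.10, 0.06)  # campaign effect
    new_competitor = random.random() < 0.2                  # "no new competitor" may not hold
    weather_effect = random.triangular(-0.05, 0.05, 0.01)   # snow/weather conditions
    growth = 0.05 + marketing_uplift + weather_effect       # assumed 5% baseline growth
    if new_competitor:
        growth -= 0.04
    return growth

runs = [simulate_growth() for _ in range(10_000)]
p_target = sum(g >= TARGET for g in runs) / len(runs)
p_floor = sum(g >= FLOOR for g in runs) / len(runs)
print(f"P(growth >= 10% target): {p_target:.0%}; P(growth >= 6% floor): {p_floor:.0%}")
```

The output directly answers the question a management board would ask of the sales target: how likely is it, given the stated assumptions, that the target or at least the bottom tolerance is reached.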
3.3.6.3 Use Strategic Tools to Complement Assumption Analysis
Having analysed all management assumptions of strategic goals, the risk manager needs
to complement the strategic risk identification for the sake of completeness. For this pur-
pose, it is strongly recommended to use well-known strategic tools to analyse the busi-
ness environment more thoroughly. In the following, a number of important and useful
strategic management tools which support strategic risk identification will be briefly
introduced. Although we know that it is very difficult, if not impossible, to predict the
future and to foresee relevant trends, critical risk scenarios can be developed with a care-
ful analysis of the environment. It may thus be worthwhile for companies, despite the
high degree of uncertainty, to think about future trends and weak signals which may
slowly emerge in the environment, in order to develop (even very negative) risk scenarios
based on this environmental scanning and prepare for them. However, such predictions
based on environmental analyses partly fail in practice because abrupt and drastic changes (e.g. the US financial crisis in 2007) are often not included in the risk managers' scenarios
(see also Taleb 2007).
The risk manager can significantly contribute to the successful development of the
company in this process step, too. Companies need to scan the environment to be able to understand external changes and trends in order to develop effective risk mitigation
measures to secure the company’s longevity or to increase company value (Choo 1999,
p. 21). The previously performed assumption analysis of the strategic objectives can now
be supplemented by a general environment analysis (often, this is called “environmental
scanning”). New risks that have not yet been discussed can thus be identified or risks that
have already been identified can be enriched with further information from this process
step. According to Choo (1999), four different approaches to such environmental scanning to identify new trends and developments can be applied (p. 22):
1. Undirected viewing (sensing). The aim of this first approach is to search the environ-
ment as broadly as possible for any unknown developments and trends. There are no
clear guidelines for this kind of environmental analysis. It is not a question of tracking
down and confirming ex-ante presumed developments or trends. Rather, companies
try to gain a sense for possible weak signals or emerging developments. Undirected
viewing is a process of detecting and viewing already existing information in a completely unstructured way.
2. Conditioned viewing (sense-making). Compared to undirected viewing, a company
may look at information about pre-selected topics, concerns or developments. Still, this is a largely unstructured procedure, but with a more pre-defined scope within which to look at information. The goal is to assess the potential impact of the pre-selected top-
ics on the company in a cost-effective manner. If the potential risks attached to the
presumed developments may be of high importance, the approach can be changed
from conditioned viewing to actively searching for further information, the next two
steps.
3. Informal search (learning): A company searches actively for further information to get
a better grasp of the issue or trend at hand. For example, a potentially very negative
risk scenario needs a deeper understanding to be able to assess it more accurately and
to formulate any subsequent queries. Informal at this stage means in an unstructured
manner and with limited resources. Clearly, the goal of this step is to collect sufficient
information to learn if a specific risk scenario under scrutiny may need any specific
course of action by the company or not. If a risk manager perceives that a company
needs to decide about the implementation of any preventive risk measures to counter
that risk, a more formal search (approach 4) may be required.
4. Formal search (deciding). This last approach aims at finding information in a struc-
tured and planned manner. The goal of this fourth approach is to get as much informa-
tion as needed to decide on a specific course of action, e.g. to decide to preventively
mitigate a specific risk. Formal searches are fine-grained, more time-consuming and targeted at using the information for acting and deciding.
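Choo's four approaches can be read as an escalation ladder: a signal is pursued with a more structured approach only as its potential impact becomes clearer. A minimal sketch (the escalation rule itself is our own simplification, not part of Choo's model):

```python
from enum import Enum

class ScanMode(Enum):
    UNDIRECTED_VIEWING = 1   # sensing: broad, unstructured
    CONDITIONED_VIEWING = 2  # sense-making: pre-selected topics
    INFORMAL_SEARCH = 3      # learning: active but limited search
    FORMAL_SEARCH = 4        # deciding: structured, decision-oriented

def next_mode(mode: ScanMode, potential_impact_high: bool) -> ScanMode:
    """Escalate to the next, more structured scanning mode only when
    the potential impact of a signal appears high (illustrative rule)."""
    if potential_impact_high and mode is not ScanMode.FORMAL_SEARCH:
        return ScanMode(mode.value + 1)
    return mode

mode = ScanMode.UNDIRECTED_VIEWING
for signal_looks_relevant in [True, True, False, True]:
    mode = next_mode(mode, signal_looks_relevant)
print(mode.name)  # FORMAL_SEARCH
```

The sketch makes the trade-off discussed next explicit: escalating too eagerly makes scanning expensive, while never escalating means relevant trends are only ever viewed, not analysed.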
The challenge for companies is to find a balance between more limited, well-structured
and less limited, unstructured approaches. If the focus is too strong on undirected view-
ing, it can ultimately become very expensive without finding decision-relevant infor-
mation. Moreover, with this method the amount of data quickly becomes large and
confusing. If the focus is too strong on structured, narrowly limited analyses, there is a
danger that relevant trends and risks will not be identified at all (Andersen and Winther
Schrøder 2010, p. 148). In essence, there is no best practice as to how such an analysis
of the environment should be carried out. The consideration and combination of various
established tools from strategic management can be a promising approach. A distinction
must be made between general environmental risks, industrial risks and company-
specific risks. For all of these three layers, corresponding tools are available. As there are
very valuable basic strategic management textbooks available, only a few very helpful
tools are briefly introduced in this textbook.
Structured Analysis of Competitive Climate
Porter's five forces model (1980) is a well-known framework for conducting industry analysis, covering forces such as changing customer preferences, new product developments, industry regulations, process innovations and many more. Furthermore, the tool is well suited to assessing the strategies and moves of existing and potential competitors, along with their respective consequences. The following example shows
the results of a practical application of the five forces model.
Industry threats and opportunities in ski manufacturing
An analysis of the profit dynamics in the industry can benefit from Porter’s five forces
model. The model makes assessments about the industry’s attractiveness based on the
effect of five key forces, namely: (1) the threat of new entrants; (2) the bargaining
power of buyers; (3) the bargaining power of suppliers; (4) the threat of substitute
products or services; and (5) the intensity of competition in the industry. Each of these
points is examined below.
1. The risk of new competitors is rather low. The production of skis is capital-intensive, which requires a considerable initial investment. In addition, established com-
petitors have a know-how advantage and a close connection to professional sport.
There are smaller ski manufacturers that are pushing their way into the market.
However, these only produce small quantities and satisfy a selected segment of
usually premium customers. Finally, existing patents for innovative suppliers pro-
tect their products from being copied, e.g. a specific ski boot plate.
2. The consumer has comparatively high bargaining power. This is illustrated by the
high discounts granted on newer models in the second part of the ski season. Since
accessories such as ski bindings and ski poles can be combined almost at will,
the consumer is not tied to a single brand (see, for example, the coffee capsule
market). It should not be neglected that skis are usually durable and the purchase
decision can be postponed by one or more years. After all, it is easy to change
suppliers.
3. Suppliers have only limited bargaining power. Many of the input materials are
standard products and are offered by a large number of companies. Since ski
manufacturers usually purchase large quantities, suppliers are often prepared to
make certain concessions. Because these are standard products with little poten-
tial for differentiation, a market price will be established that includes only a small
margin.
4. Ski touring, snowboarding or sledging can be regarded as direct substitutes for ski-
ing. In the wider environment, there are numerous winter sports such as cross-country skiing, snowshoeing or ice skating as possible alternatives. The risk of substitution
is relatively high. However, consumers often commit themselves to one or more
winter sports at a young age and remain loyal to them in the long term.
5. The market is dominated by large suppliers such as Rossignol, Atomic, Salomon,
Völkl and Head. The intensity of competition in the ski industry is relatively
high, as the products are similar in many respects. The intensity of the market is
reflected in the fact that numerous new and revised models are placed on the market every year.
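The five qualitative judgements above can be condensed into a simple scoring sheet. The numeric scores are our own illustrative reading of the example (1 = weak force, 5 = strong force), and the averaging rule is a common simplification rather than part of Porter's model:

```python
# Scores 1 (weak force) to 5 (strong force), mirroring the ski-industry example:
five_forces = {
    "Threat of new entrants": 2,         # capital-intensive, patents, know-how
    "Bargaining power of buyers": 4,     # discounts, easy brand switching
    "Bargaining power of suppliers": 2,  # standard inputs, many suppliers
    "Threat of substitutes": 4,          # many alternative winter sports
    "Competitive rivalry": 4,            # similar products, yearly model churn
}

# Stronger forces mean lower industry attractiveness (inverted average):
attractiveness = 6 - sum(five_forces.values()) / len(five_forces)
print(f"Industry attractiveness (1 = low, 5 = high): {attractiveness:.1f}")  # 2.8
```

Such a sheet is useful in risk workshops mainly as a discussion trigger: the individual scores, not the aggregate, point to where strategic risks may arise.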
Interestingly, Porter's five forces model in particular has not established itself well in
practice, for example in contrast to SWOT analysis. Grundy (2006) recognises several
reasons for this:
• The model is relatively abstract and very analytical.
• The language is relatively technical and microeconomically focused.
• The practical implications are not easy to recognise, the model is relatively difficult to
implement.
• The logic of the model is not easy to understand and cannot be easily transferred to one's own context (p. 214).
However, the contribution of this model to the practical analysis of the business environ-
ment is very high. If the model is somewhat adapted and made more “practical”, it can be very
useful for strategic risk and opportunity identification. In addition to all the criticism and
limitations of this model (see Grundy 2006, p. 215), it is one of the most important tools
for assessing the forces which determine the profitability of an industry.
One aspect in the discussion about the practical relevance of Porter’s five forces
model is its dependence on other strategic management tools. A paper by Grundy (2006),
which is very valuable for practitioners (e.g. risk managers), shows how the five com-
petitive forces can be embedded as a puzzle piece in a superordinate strategic analysis
model. Specifically, it is recommended to combine Porter’s five forces model with a sec-
ond, also very popular strategic management tool named PEST analysis.
The acronym PEST refers to political, economic, socio-cultural and technological
factors. By the means of this tool, companies are able to assess the general environmen-
tal risks which comprise many exogenous factors outside the control of corporate man-
agement. It is clearly a useful tool to conduct strategic risk analysis and provides a broad
overview of the most important macro-environmental factors to analyse. Several variants
have emerged over time, one of the most well-known enhancements is PESTEL which
includes environmental and legal factors. An example of what the results of a PEST analysis could look like is shown below.
Drivers of change in ski manufacturing
Political issues: Numerous safety regulations also apply to ski manufacturers and
sportswear manufacturers. High tariffs on individual product groups may reduce the
attractiveness of individual overseas sales markets. Environmental associations are
more critical of mass tourism in high alpine areas, which may also reduce the attrac-
tiveness of skiing.
Economic issues: As the number of skier days tends to decrease due to global warming, more skis are hired instead of bought. It is also to be expected that only high-altitude ski resorts will be profitable in the long term. Lower-lying ski resorts close to conurbations are thus likely to disappear more and more. From a global perspective, growth markets, especially China, Russia and India, will significantly increase the demand for skis, clothing and accessories. The market is highly seasonal and saturated. Especially in spring, consumers expect high discounts.
Social issues: Urbanization continues to increase and the possibilities for leisure activities are becoming more diverse. Accordingly, skiing competes with leisure activities that are less weather-dependent. The ageing of the population can potentially act as a brake on growth. In general, Western Europe is sceptical about mass tourism in ski resorts, especially the intensive snowmaking for slopes.
Technology issues: The spread of the Internet makes it possible to make a detailed
price comparison between ski and ski equipment manufacturers. In addition, various
factors, such as the Internet, are driving the need for individual products. However,
there are no signs of any disruptive manufacturing processes or materials. The
demand for sustainably manufactured skis is likely to increase.
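For practitioners who keep such results in a structured form, the four PEST categories can be captured as a simple mapping from category to identified drivers. A minimal, hypothetical sketch (the factor texts merely abbreviate the ski-manufacturer example above; nothing here is a prescribed format):

```python
# Minimal PEST capture: category -> list of identified drivers.
# The factors below abbreviate the ski-manufacturer example; in practice
# they would come out of a facilitated analysis session.
pest_analysis = {
    "political": ["safety regulations", "import tariffs",
                  "criticism of alpine mass tourism"],
    "economic": ["fewer skier days", "shift from buying to renting",
                 "growth in China/Russia/India"],
    "social": ["urbanization", "competing leisure activities",
               "ageing population"],
    "technological": ["online price comparison",
                      "demand for individualized products"],
}

def flatten_drivers(pest: dict) -> list:
    """Return all drivers as (category, driver) pairs for later prioritisation."""
    return [(cat, d) for cat, drivers in pest.items() for d in drivers]
```

A flat list of (category, driver) pairs is convenient as input to the scenario-development steps discussed later in this chapter.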
The growth drivers act as a linchpin between the environmental analysis (PEST) and the industry analysis. If, for example, the environment changes unfavourably, this can act as a brake on growth, which in turn makes specific industry forces more relevant (Grundy 2006, p. 217). Figure 3.6 graphically depicts a sort of "onion model" which begins with a PEST analysis and ends with the analysis of one's own company in the competitive environment.
This onion model can significantly improve the identification of potential key risks.
SWOT Analysis (Andrews 1971)
A company can apply a SWOT analysis in order to conduct a strategic analysis by iden-
tifying strengths and weaknesses in the internal company environment on the one hand,
and opportunities and threats in the external market environment on the other hand.
Fig. 3.6 Competitive mapping. (own depiction based on Grundy 2006, p. 217) The onion model nests, from the outside in: political, economic, social and technological change (PEST); growth drivers; the life cycle of the own industry; and the five competitive forces (new entrants, new substitutes, bargaining power of customers, bargaining power of suppliers, and current customers and competitors).
3.3 Collect Risk Scenarios
74 3 Creating Value Through ERM Process
It is probably the most well-known strategic analysis tool in theory and practice. The outcome of this strategic analysis can help to identify strategic risk factors. The use of a SWOT analysis is especially helpful for SMEs. The fact that it is a very straightforward tool that incorporates both internal and external (uncertain) developments is very valuable. In addition, the SWOT analysis links the relevant problem areas within companies with the corresponding business objectives. In the following, a simple SWOT analysis of a ski manufacturer is illustrated.
Results of a SWOT Analysis (ski manufacturer)
Strengths:
• Qualified and long-standing employees who know the processes and products
• Existing customer base that appreciates the quality of the brand
• Own sales channels that reduce dependence on intermediary trade
• Financially less dependent on lenders
Weaknesses:
• Lower economies of scale compared to larger competitors
• Awareness strongly limited to the Western European area
• Strong focus on alpine skiing, little experience in the touring ski and snowboard market
• Strong focus on functionality, less known for high-quality designs
Opportunities:
• Digitization of the ski product and its accessories
• New overseas markets with high growth potential
• Individualization of products (skis, ski boots, bindings, etc.)
• Proximity to the Ski World Cup to benefit from partnerships and feedback
Threats:
• Quality risk due to production in Eastern Europe
• Global warming reduces the number of snow kilometres on skis
• Strategically misjudged attractiveness of skiing
• Entry of a new competitor in the near-premium or premium segment
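Such a SWOT result can be held as four simple lists; note that only the external quadrants describe uncertain future developments, while strengths and weaknesses are present conditions. A hypothetical sketch (class and method names are my own):

```python
from dataclasses import dataclass, field

@dataclass
class Swot:
    """Container for SWOT results. Only opportunities and threats are
    uncertain future developments; strengths and weaknesses are existing
    conditions and therefore not risks in the ERM sense."""
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    threats: list = field(default_factory=list)

    def uncertainties(self) -> list:
        # Candidates for scenario development in the ERM process.
        return self.opportunities + self.threats

# Abbreviated entries from the ski-manufacturer example above.
swot = Swot(
    strengths=["qualified long-standing employees", "own sales channels"],
    weaknesses=["lower economies of scale", "awareness limited to Western Europe"],
    opportunities=["digitization of the ski product", "new overseas markets"],
    threats=["quality risk of Eastern European production", "new premium competitor"],
)
```

The `uncertainties()` helper reflects the point made later in this section: only opportunities and threats feed into risk scenario development.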
Return Driven Strategy Framework (Frigo and Anderson 2011)
This framework is applied to analyse the components of a company's strategy. It also provides an opportunity to see how different elements of the strategy are linked together and drive value creation. Furthermore, it offers a perspective on the identification of risk areas in the strategy. The return driven strategy framework has been applied as an effective technique for integrating strategic and risk management goals. This tool consists of eleven core tenets and three foundations that together establish a hierarchy of interrelated activities which have to be followed to achieve superior financial performance. Executives not only adopt this framework to evaluate strategies but increasingly use it to identify risk areas as part of the company's strategic risk assessment.
Strategic Risk Management Framework (Beasley and Frigo 2007)
This tool provides a structured guideline and areas of focus to identify, link and prioritise a company's strategic risks, which include for instance customer risk, supply chain risk, employee engagement risk, reputation risk (remember: not a risk in the strict sense), innovation risk and financial risk, among many others. The elements of the strategic risk management framework correspond to the tenets of the previously introduced return driven strategy framework. Hence, the discussion and analysis can be based on the risk areas of the strategic risk management (SRM) framework associated with the strategy classification.
VRIO Framework (Barney 2002) and Value-Chain Analysis (Porter 1985)
The application of these tools can help the company deal with risk factors which are endogenous and caused by the company's processes, people and technological systems. Risks such as the inability to observe and react to market changes, operational disruptions and technological breakdowns are included as well (Andersen and Winther Schrøder 2010).
3.3.6.4 Risk Identification: Mission Accomplished?
The strategic management tools, such as the classic SWOT analysis, are undoubtedly
valuable tools for identifying and documenting relevant developments in a structured
manner. They can be considered essential tools for any risk manager. Another advantage
of using such tools is that they can build bridges (linguistic and cultural) between corpo-
rate management and risk management. Since these tools were primarily developed from
strategic management, they are widely accepted and known to many in practice. In addi-
tion, these tools are directly linked to long-term future plans as opposed to many other
tools focusing predominantly on short-term, operational issues. It thus makes sense for
risk managers to make use of these tools as well.
However, the process of risk identification is not yet complete in the sense of ERM.
This is illustrated by the example of the SWOT analysis:
• The results are classified into opportunities, threats, strengths and weaknesses. As we have learned, weaknesses and strengths are not real risks, but existing conditions.
• From an ERM perspective, the opportunities and threats have not yet been classified
or prioritised. At this point, it is still unclear what relative, potential impact they can
have on the company’s objectives.
• It is not yet clear how probable it is that the individual opportunities and threats will materialise in the future.
• Often, the degree of abstractness in a SWOT analysis is too high. Opportunities and
threats exist in keyword form, but it is unclear which concrete scenarios are behind
them (each opportunity can have several scenarios with different probabilities). From
an ERM perspective, concrete, plausible and comprehensible scenarios would have to
be developed on the basis of the SWOT analysis.
• The SWOT analysis focuses primarily on strategic risk factors. Operational and finan-
cial risks are in most cases (partially) excluded and must be identified using other
instruments.
• Even if a SWOT analysis is performed by relevant stakeholders of an ERM pro-
gramme (management and board level coverage), it does not include all available
information (and thus probably not all strategic risks). A SWOT analysis must be
complemented by other important subject matter experts, internal or external to the
company.
• Group-specific biases (Sect. 2.3) may pose a significant threat to transparent, objective and comprehensive risk identification by means of a SWOT analysis.
The next step in the risk identification process is to conduct qualitative interviews with
key stakeholders to enhance the process of challenging management assumptions and
information gathered by strategic management tools.
3.3.7 Conduct One-on-One Interviews with Key Stakeholders
How can we proceed in practice with effective risk identification, who needs to be involved and how does the risk manager need to prepare? In the case of an initial implementation of ERM, it is certainly very advantageous if management, preferably the Chief Executive Officer (CEO), informs the organisation in advance about the relevance of the new ERM. As is well known, the "tone at the top" is very important so that the corresponding commitment on the part of management is noticeable enterprise-wide.
3.3.7.1 Prefer Interviews Over Templates and Surveys
In practice, it is evident that the supposedly simpler and more cost-effective option of querying risks via e-mail and ready-made templates does not work. Unfortunately, this procedure is still practised relatively frequently. The main reasons why personal interviews are preferable to sending templates are the following:
• Involvement and commitment by the recipients are low.
• The exercise is often not taken very seriously because recipients do not know exactly what happens to their information.
• The necessary time is often not spent on it. As a rule, such templates are filled out
quickly and with low priority.
• There is a high risk that last year’s list will be copied and that only few creative
thoughts will flow into risk identification.
• The risk manager cannot be asked any questions. The recipient fills in “something” to
the best of his knowledge and belief.
• The risk manager cannot guide the development of complex scenarios. It may not be
possible to reduce relevant cognitive or motivational biases in this way.
Figure 3.7 shows an example of a simple template used in this or a similar way for risk identification purposes.
Fig. 3.7 Example of a risk management template (columns: risk owner name, business unit, date, ID, risk title, risk impact, probability of occurrence, risk map area, risk description, risk category, historic data, risk sources, risk interdependencies, mitigation in place, effectiveness of mitigation, risk owner; impact and probability rated low/medium/high)
In the subsequent years after ERM implementation, the template will be sent again with the request that the risk owner updates it and adds new risks if
necessary. In this textbook, we completely abandon this approach and present a more effective and beneficial alternative.
The use of one-on-one interviews to complement risk identification is a very impor-
tant step for the following reasons:
• The involvement of employees, department heads, team leaders, etc. creates greater
acceptance for ERM.
• Personal interviews clearly prevent the “not-invented-here” syndrome. Decisions to
introduce new ERM measures are better accepted if employees are involved in the
decision-making process.
• Risks that have not yet been captured (especially more operational risks) can be identified. Not all risks are covered by the assumption analysis and the strategic environment analysis.
• The involvement of specific experts (e.g. internal audit, external audit, and external
specialists) on specific topics creates a further perspective.
• The interviews with various ERM stakeholders allow several perspectives on the same
risk and thus promote discourse in the (common) case of divergent opinions.
After this preparatory information, the risk manager must consider with whom he or she would like to conduct the interviews. The goal must be to obtain the most representative (risk) view possible of the entire company. The hurdles and challenges that arise have already been discussed in Sect. 3.3.2.
3.3.7.2 Select and Inform Interviewees Carefully
Since interviews are resource-intensive, it is important to select the interviewees care-
fully. Who can bring in which risk perspective to represent a specific area of expertise, a
business area or a cross-sectional function? As a rule, only a few interviews are enough
to obtain a company-wide risk profile. Irrespective of the company size, experience has
shown that 10 to 20 interviews may be sufficient in most cases.
Figure 3.8 shows an example of a company that conducts 13 interviews to enable
company-wide risk identification. As can be seen from the organisation chart, differ-
ent hierarchy levels are represented. From the operative business, the risk manager has
selected three experts who have a particularly high level of industry knowledge and can
thus contribute valuable information to possible industry risks. Internal audit can provide
valuable information based on their audit activities. Board members can add to the strate-
gic risk analysis by assessing environmental risks or industry specific risks.
Once the relevant experts have been identified, they should be informed in advance
about the upcoming interviews. It is important that this information contains the follow-
ing elements:
• ERM and its purpose (e.g. enhancing company value, improving decision quality)
• Importance of experts for the success of ERM (valuable experience, significant contri-
bution to risk assessment)
• Information handling (e.g. who receives the interview information? What happens
with this information? What is reported back to the expert? What kind of conse-
quences may the interviewee expect?)
• Importance of interviewees answering honestly and transparently (e.g. creating incen-
tives that promote truthful answers).
• Interview procedure (e.g. duration of interview, recording of interview, identification
of three or five most important risks, assessment of very pessimistic scenarios, devel-
opment of scenarios with the help of the risk manager)
• Acknowledging and reaffirming that the expert is part of the successful business
development.
The next step is now to arrange the individual appointments with the experts. It is impor-
tant to allow enough time for the meeting, especially for the very first one. Experience
clearly shows that, as a rule, too little time is available for more detailed discussions
of individual risk scenarios. The time factor often leads to hasty decisions and poorly
reflected risk assessments.
3.3.7.3 Elicit Feedback on Major Risks
During the interviews, the risk manager must pay attention to the individual biases and
try to minimise them through skilful conversation (Chap. 2). Experience has shown that
interviews should focus on identifying the three or five major risks at most. The principle of "relevance over quantity" applies here. If the expert is asked about the 10 most important risks, there is a danger that he or she will spend time on risks that are highly unlikely to be relevant from an enterprise-wide perspective.
Fig. 3.8 Enterprise-wide risk perspectives (organisation chart of the interviewees: two board members from the Board/AC; CEO, CFO, CRO and CTO from management; the Head of Internal Audit; three division managers for products X, Y and Z with their Finance, Marketing and R&D functions; and three experts with particular industry experience from the operative business)
If possible, interviews should be conducted face-to-face and recorded electronically. This allows the risk manager to concentrate better on the conversation, to ask questions and to better read non-verbal language. After the interview, he or she can transcribe it in detail so that no important information is lost. A sheet of paper showing the basic structure of a bow-tie diagram can be helpful for the conversation and as a thinking aid in risk identification. This makes it easier to think through the scenarios in terms of causes, events and impacts. Figure 3.9 shows a corresponding template, which can be printed out and brought to the interviews. It is important that the risk manager briefly explains the scenario analysis and proactively refers to the causes, events and impacts in the conversation.
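The bow-tie logic of causes, event and impacts can likewise be recorded in a small data structure during the interview. A hedged sketch (the field names and the example scenario are my own, not a format prescribed by the text):

```python
from dataclasses import dataclass, field

@dataclass
class BowTieScenario:
    """One risk scenario structured along the bow-tie diagram:
    causes on the left, the risk event in the middle, impacts on the right."""
    event: str
    causes: list = field(default_factory=list)
    impacts: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A scenario is only useful once at least one cause and one
        # impact have been thought through.
        return bool(self.causes) and bool(self.impacts)

# Purely illustrative example scenario.
scenario = BowTieScenario(
    event="key supplier insolvency",
    causes=["supplier over-indebtedness", "loss of supplier's main customer"],
    impacts=["production stop", "contractual penalties", "lower EBIT"],
)
```

A simple completeness check like `is_complete()` mirrors the interviewer's job of pushing the conversation from a bare risk title towards causes and impacts.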
3.3.7.4 Focus on Plausible Stories, not on Numbers
As part of risk identification, it is important to develop risk scenarios that are as plausible, complete and representative of the possible range of uncertainty as possible. Risk identification interviews should start with developing very pessimistic scenarios.
Fig. 3.9 Bow-tie template for interviews (blank fields for causes, events and impacts)
Does this not contradict the modern approach, according to which ERM can create value for the company? Should not very optimistic, value-creating scenarios be developed first? The answer in both cases is no, and can be justified as follows:
• It goes without saying that management must know all the scenarios that can endan-
ger the existence of a company. These are scenarios that can lead a company into
over-indebtedness or illiquidity.
• Moreover, the effect of such negative scenarios on relevant performance indicators,
e.g. on EBIT or company value, must be assessed later in order to create a basis for
decision-making on how to deal with these risks.
• If opportunity scenarios are discussed first, this can have a "euphoric" overshadowing effect: downside risks are then given too little weight and discussed too little in the subsequent conversation. It is thus always worth starting with the negative scenarios.
• As a general rule, scenario development can be used to adequately represent all pos-
sible future realities in the form of a “distribution”. This requires an equal assessment
of pessimistic and optimistic scenarios.
The risk manager should ensure that risk scenarios are developed as completely as possible. Complete in this context means:
• Are there one or more causes that lead to the risk event? One should not limit oneself
too quickly to the first, plausible cause.
• Are these causes independent of each other or do they only lead to the risk event in
combination? If the causes are independent, two different risks have been identified.
• Are there causes of the causes? The “why” should be asked until the origin of the
cause has been found. Preventive measures are the best way to manage risk.
• What are the consequences of the risk event? Does this event trigger a follow-up risk? If so, should it be incorporated into this scenario? Correlations with other risks can already be integrated via scenario development.
• Are there short- and long-term consequences? It is well known that strategic risks in
particular may arise abruptly, but have an impact over several years. These effects
must be taken into account in scenarios.
• In addition, the financial impact of the scenario must be considered. It can have
impact on different line items in the financial plan.
• Risk scenarios should be as debiased as possible. For example, the risk manager
has to ensure that no hindsight biases are included in the prospectively-oriented risk
scenarios.
In this phase of the ERM process, as already mentioned, the three to five most impor-
tant risks are to be discussed. In addition to the very pessimistic scenario, consideration
should also be given to what a very optimistic scenario (best case) could look like. Two
cases have to be distinguished:
• For many operational (event) risks, there is no actual optimistic scenario according
to our risk definition (deviation from plan). This applies in the case where the plan
anticipates the non-occurrence of a risk. For example, the risk of a flood catastrophe
is not taken into account in the financial plan because the probability of occurrence is
relatively low. The optimistic risk scenario would be: No flood catastrophe occurs. A
better scenario of flood risk, which even generates value, does not exist in this case.
• With strategic and many financial risks, there are realistic scenarios that can turn
out better than expected. These are usually so-called distribution risks, which can
assume several or many realities. For example, a very optimistic scenario could be
that, despite a competitor entering the market, one’s own market position can be sig-
nificantly strengthened because the competitor fails and one’s own company emerges
stronger from this situation.
The reason for capturing not only very negative but also very positive scenarios is to obtain an initial overview of the ratio between rewarded and unrewarded risks. Unrewarded risks are events that do not include any opportunity potential. These include many operational risks such as flooding, fire or machine breakdown. As a rule, it is not worth taking these risks consciously. In contrast, rewarded risks are generally associated with potential opportunities, usually strategic or financial (e.g. interest rate, currency) risks. This procedure provides an initial indication of which risks are generally better avoided or minimised and for which conscious risk-taking makes it possible to exploit potential strategic opportunities (and to create value accordingly).
Up to this stage, we have collected three to five potential risks from each expert. These are available in the form of very pessimistic scenarios. Where appropriate, very optimistic scenarios have also been developed. All scenarios have been thought through by means of the bow-tie technique to the extent that the cause(s) and the final financial impacts on consistent financial performance indicators such as EBIT, cash flow, equity or company value are known. In order for risk identification to become a consistent and high-quality process, the following important aspects must be observed:
The following points in risk identification must be considered:
• Only as much information as necessary should be collected from the experts. This means a fully thought-out scenario per risk, with an initial rough estimate of the financial impact, is sufficient.
• The scenarios should be developed on a net basis. This means that all existing risk mitigation measures should be included in the scenario development. Gross risks are "pseudo risks" and distort (overestimate) a realistic risk assessment.
• It must be clear what the financial impact refers to, e.g. EBIT, free cash flow
or company value. This performance measure should be used consistently
so that risk scenarios can be compared at later stages.
• An assessment of the probability of occurrence is not yet necessary at this
point. All key risks are basically “rare” events. Frequency losses that can
often occur with a high probability (such as process risks) are generally not
key risks. Potential key risks should therefore be selected exclusively on the
basis of loss potential. Companies must know the absolute loss potential of
each risk, regardless of the probability of occurrence. Diluting the real risk
by calculating an expected value is dangerous and misleading.
• Quality over quantity: Few, but relevant risks should be recorded com-
pletely and comprehensibly.
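The point about selecting key-risk candidates by absolute loss potential rather than by a probability-weighted expected value can be illustrated with a small sketch (all figures are invented for illustration):

```python
# Each candidate risk with its very pessimistic loss (absolute, in EUR)
# and an estimated probability. Figures are purely illustrative.
candidates = [
    {"name": "plant fire",        "loss": 40_000_000, "prob": 0.01},
    {"name": "process error",     "loss":    500_000, "prob": 0.90},
    {"name": "key customer loss", "loss": 15_000_000, "prob": 0.10},
]

# Ranking by expected value (loss * probability) puts the frequent but
# small process error ahead of the existential plant-fire scenario -
# exactly the dilution the text warns about.
by_expected_value = sorted(candidates,
                           key=lambda r: r["loss"] * r["prob"], reverse=True)

# Ranking by absolute loss potential surfaces the scenarios that can
# endanger the company's existence, regardless of probability.
by_loss_potential = sorted(candidates,
                           key=lambda r: r["loss"], reverse=True)
```

With these invented figures, the plant fire ranks last by expected value but first by loss potential, which is the ordering the text recommends for key-risk selection.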
3.3.8 Complement with Traditional Risk Identification
By means of the assumption analysis and the qualitative interviews, most of the risks
relevant to the company (i.e. decision-relevant risks with reference to specific business
objectives) can usually be identified. Of course, there are numerous other risk identifica-
tion methods that can be useful as a supplement. However, these methods often refer to
rather operational risk management, which is not ERM. This textbook focuses on strat-
egy-relevant, company-wide risk management. For this reason, it does not present indi-
vidual risk identification methods in a comprehensive way. In the following, however, a
few techniques are introduced that are relatively important in practice and can contribute
to supplementing the ERM process.
3.3.8.1 Conduct Risk Workshops Carefully
Workshops bring risk experts from different functions and hierarchical levels together
to exploit the collective knowledge of the group and develop or complete a list of risks
related to the company’s strategy and the corresponding business objectives (COSO
2017, p. 70). Although risk workshops are a very popular instrument to develop and
collect risk scenarios in practice, many of them fail to produce reliable and relevant
risk information. Apart from the well-known biases to counter in group meetings (see Chap. 2), other key organisational aspects are often neglected.
Of course, the risk manager should be familiar with current risk policies, risk appetite
statements, risk exposures and all other risk related guidelines. Next, a sound preparation
of a risk workshop is crucial. Ideally, the risk manager contacts all participants of the workshop in advance to inform them about the key objectives of the meeting, e.g. to identify relevant risks which might have an impact on the company's strategy. Workshops usually take more time than planned. It is important to allow enough time for the workshop, otherwise decisions could be driven by a lack of time rather than by appropriate reasons. Moreover, the risk manager should facilitate effective discussion by booking
an appropriate meeting room with round tables. To avoid hiding in the group and to be able to lead the discussions efficiently, the number of attendees should not exceed eight to ten.
It might be helpful to provide all attendees with an overview of possible risk areas prior to the risk workshop. This promotes creative thinking and prevents thinking blockades (empty sheet syndrome). An example of such a risk area sheet is provided in Fig. 3.10. In addition to sharing the risk areas, the risk manager can provide the latest version of the risk analysis performed, e.g. on strategic management assumptions. This promotes relevant discussions right from the beginning of the workshop and is preferable to starting with a blank risk identification sheet.
At the very start of the workshop, the risk manager should briefly introduce the state of the ERM process, the objectives of the workshop, the relevance of the experts attending the meeting, the planned time schedule and an outlook on the next steps after the risk workshop. During the discussions, the risk manager acts as a facilitator and should be a neutral moderator. The crucial part is to counter specific group biases by, for example, discussing risks before opportunities, deliberately eliciting a second solution to every risk assessment, assigning somebody to play devil's advocate and introducing the difference between business issues and real risks. The role of a moderator can be very challenging. In the following, a few key aspects are to be taken into account:
• Keep a close eye on time management. Focus on high-level risk scenario development. Detailed risk analysis, including discussing risk mitigation options, is very time consuming and can be done afterwards in subsequent interviews with risk owners.
Fig. 3.10 Example of possible risk areas. (adapted from Diederichs 2013)
• Ecology indicators: environmental sustainability (of the products, of the additives, of the production processes); environmental trends
• Procurement indicators: prices; conditions; supply volume; quality level; punctuality of suppliers; size of order; order routes
• Production indicators: component diversity; occupancy rate; inventories; reject rate; output change; setup times; setup costs
• Sales indicators: new orders; backlog; order/purchase behaviour; price/program policy of the competition; image of own and competitor products; complaints rates
• Macroeconomics indicators: interest rates; exchange rates; economic indices; union wage level; money supply
• Demography indicators: population growth; demographic structure; human resources; unemployment rate
• Politics indicators: law preparation; political parties; political stability; election results
• Technology indicators: innovations; development of materials; trends of change in production and process technology
• Make sure that risks are described specifically enough, i.e. develop plausible stories, starting with risk causes.
• Guide the discussions towards external (environmental) risk identification. Usually, the focus lies too much on internal business issues rather than on external emerging risks.
• Avoid risk management jargon, try to speak business language to increase credibility
and acceptance. Do not ask for probabilities of risks, there is no need to do so at this
stage of the ERM process.
• Do not go into more detail than needed. As a facilitator, the task of the risk manager is to lead participants through a process of capturing group knowledge.
• Make sure attendees understand the concept of uncertainty. This is not a single number, but rather a range which expresses the degree of uncertainty. Usually, participants are reluctant to guess at specific numbers.
• Follow the rules for brainstorming quite closely: Risk managers shall not evaluate any
ideas. The goal is to collect everything first. The discussion of any risks will follow
later.
• For brainstorming to be effective, create a diverse workshop group covering different
areas of business and invite external subject matter experts if useful.
• Appreciate all contributions to risk identification. It is important to create an atmosphere where no answer is wrong. Risk managers should promote disagreement; it can enrich the perspectives on existing risk assessments.
• Prepare some good examples of well-developed risk scenarios, explain the differences
between sources, events and impacts.
• If the risk manager thinks that an appropriate number of risk scenarios has been developed, he or she can switch to the next process step. The risk manager should summarise all the ideas from the participants in a structured form, specifically pointing to risks with much disparity. This can be done in a coffee or lunch break. After the break, the risk manager shares his or her summary with the participants to start the follow-up session. The aim of this follow-up session is to reach some degree of consensus regarding the causes and specifically the (financial) impact of risks.
• At the end of the workshop, explain in detail what happens with all the collected risk
scenarios. Risk managers should share the results of the workshops in a comprehensi-
ble way with all participants.
In summary, risk workshops can be a useful complement to the analysis of management
assumptions if the above described success criteria are followed. In practice, certain
biases dominate so greatly that risk workshops are inadequate as the sole instrument for
identifying risks and in the worst case even do more harm than good. In addition, the risk
manager must be highly skilled at moderating such risk workshops.
3.3.8.2 Consider Process-Based Risk Identification
Basically, ERM should not be the driver for process management in a company; there
are more compelling reasons to introduce it. However, if a company has already described and visualised
3.3 Collect Risk Scenarios
86 3 Creating Value Through ERM Process
its processes (e.g. under ISO 9001), these can be a useful basis for complementing risk identifi-
cation. However, it must be clearly stated that process analyses rarely produce
strategy-relevant risks.
In the context of the introduction of an internal control system, which is primarily
designed for process assurance, process-based risk identification can be a very reason-
able procedure. The first step is to consider which processes should be subjected to a risk
analysis based on relevant criteria (scoping). This can be done on the basis of quantita-
tive (based on balance sheet and income statement items) or qualitative criteria (com-
plexity, importance, criticality). Once the processes have been selected, a risk-based
analysis of the individual process activities is carried out. An example of such an analy-
sis is shown in Fig. 3.11. Together with the risk manager, the process owner can analyse
“what can go wrong” in the individual process activities. If potential process weaknesses
are identified and there is no corresponding effective and efficient process control, this is
an indication of potential risk.
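The scoping step just described can be illustrated with a short Python sketch. The process names, monetary figures and thresholds below are invented for illustration and are not taken from the text:

```python
# Hypothetical scoping before a process-based risk analysis: a process is
# in scope if it meets a quantitative OR a qualitative criterion.
processes = [
    {"name": "order intake",   "balance_sheet_eur": 2_000_000, "criticality": 2},
    {"name": "fabrication",    "balance_sheet_eur": 9_500_000, "criticality": 5},
    {"name": "incoming goods", "balance_sheet_eur":   400_000, "criticality": 4},
    {"name": "archiving",      "balance_sheet_eur":    50_000, "criticality": 1},
]

def in_scope(p, min_amount=1_000_000, min_criticality=4):
    """Quantitative (materiality) or qualitative (criticality) criterion."""
    return p["balance_sheet_eur"] >= min_amount or p["criticality"] >= min_criticality

scoped = [p["name"] for p in processes if in_scope(p)]
print(scoped)  # ['order intake', 'fabrication', 'incoming goods']
```

The selected processes would then be walked through activity by activity, asking "what can go wrong" for each one together with the process owner.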
3.3.8.3 Use Risk Checklists with Caution
Checklists use the knowledge of other institutions such as risk management associations,
universities or consultants. Basically, it is very tempting to use risk checklists that are as
Fig. 3.11 Process-based risk analysis (flowchart of the order-to-delivery process: order
intake, order placement, material availability check, capacity planning, special order/
purchase order, goods delivery, incoming goods control, warehousing, fabrication and
quality control with rework/return of goods, with the prompt "What can go wrong?"
attached to the individual process activities)
comprehensive as possible. This makes risk identification significantly faster and more
cost-effective. In addition, experience from other companies in the same industry can be
used. Such checklists can be supplemented with further, company-specific risks.
It appears that checklists are actually an ideal instrument for risk identification.
However, this also entails significant disadvantages:
• Checklists suppress one's own thinking and creativity; risk identification thus quickly
degenerates into a ticking-off exercise
• Checklists are incomplete; company-specific risks in particular are insufficiently
covered
• Many risks on the checklist are not relevant and may thus distract from actual risks
• Checklists only show negative risks; the opportunity potential is not taken into
account
• Checklists do not establish a direct reference to business objectives
• Strategic risks can hardly be found on a checklist because they are very
company-specific
• Checklists do not always define risks consistently according to causes; often one finds
a mix of causes, events and impacts.
Risk checklists should never be used as the sole means of identifying risks. If a company
decides to use checklists, they should serve as supplements after the assumption analysis
and qualitative interviews have been carried out. Such checklists should not be confused
with predefined risk categories. It may make sense, for example, to predefine risk categories
for all interviewees in qualitative interviews. This is even very advantageous in order to
achieve a certain consistency in the identification process. Risk categories have a sig-
nificantly higher level of aggregation than concrete, individual risks. They are more com-
parable to risk areas, e.g. strategic, operational and financial risks are three broad risk
areas. Currency fluctuations of the CHF/EUR currency pair are a concrete risk within the
category “financial risks”. Figure 3.12 shows the difference between a risk checklist and
a meaningful presetting for e.g. a risk workshop or a risk interview to trigger the identifi-
cation of relevant risks within the broader risk categories.
3.3.8.4 Try Fault Tree Analysis (FTA) for Critical Processes and Systems
Fault tree analysis (FTA) has its roots in the aerospace and reactor technology sectors
and is mainly used in complex, safety-critical processes and systems. The method was
first used in 1961 to investigate a missile launching system. It is used both to search for
potential sources of error and to optimise and assess safety. The aim of FTA is the sys-
tematic identification of all possible failure combinations, understood as causes that lead
to a given result. This includes the creation of a graphical system model in which the
undesirable situation is at the top and the possible sources of error are at the base and are
linked with Boolean operators.
Following this rather general definition of the FTA, we now establish a link to business
risk and quality management. An example of this is product
reliability, with the focus on that part of the integrated product lifecycle where manufac-
turing companies have little impact on products. This corresponds to the period shortly
after the market launch, where it will become apparent to what extent the products actu-
ally contribute to satisfying the needs of the customers. If an error occurs here, this can
have serious consequences for the company. Ideally, product defects and the associated
risks are thus already recognised in the development cycle, either in the planning phase
or at the latest in the test phase, in which the risks and functionalities of the prototypes of
the products to be produced are checked. Within the framework of product reliability, the
FTA is of considerable importance as an analytical instrument for the structured identifi-
cation of product-related risks.
In the first phase of the FTA, the aim is to identify as many causes as possible on the
basis of an identified problem and to depict them graphically in a cause system. A so-
called fault tree is used in the FTA to represent the cause system. The fault tree is a top-
down analysis technique. It is a method in which, starting from an identified problem or
risk, causes are gradually linked to the causes of causes, until the cause system has been
mapped as completely as possible.
Basically, two main groups of symbols of the FTA can be distinguished: Events
(labelled symbols) and logical links (unlabelled symbols). In the top-down procedure,
the risk event “engine of a machine cannot be stopped” (risk to be analysed—also
called top event) is assumed and all possible causes (“emergency stop switch system”
and “alternative power supply for engine”) and causes of the causes (“switch 1 fails”
and “switch 2 fails”) for this risk are graphically displayed. Ideally, the FTA searches for
groups of events (so-called cut sets) that cause the top event to occur. The more events
Fig. 3.12 Risk categories vs risk checklist (the figure contrasts a risk checklist as a
"ticking-off exercise" listing single risks with YES/NO boxes, e.g. Financial/Market/
currency risk, Strategic/Supply Chain/delivery interruption, Strategic/Rivalry/market
entry of competitor, Operational/HR/untrained staff, with the same risk categories and
subcategories used as a meaningful presetting for workshops/interviews)
in such a cut set, the less likely it is that the top event will occur. This means that risk
managers search specifically for so-called minimal cut sets, that is, for groups of events
that have as few individual events as possible. To put it simply, minimal cut sets are the
most likely constellations for a top event to occur. Of course, the fault trees are much
more complex in practice than in the example above. Therefore, there are special soft-
ware packages that make it possible to analyse the fault trees especially with regard to
the cut sets (Rautenstrauch and Hunziker 2011).
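To make the cut-set logic tangible, the following Python sketch derives minimal cut sets from a small fault tree. The tree structure is an assumption loosely modelled on the "engine cannot be stopped" example above; all node names are invented:

```python
from itertools import product

# Assumed fault tree: the top event occurs if the emergency stop system
# fails (both redundant switches fail) AND the alternative power supply
# keeps the engine running.
tree = {
    "top":        ("AND", ["stop_fails", "alt_power"]),
    "stop_fails": ("AND", ["switch_1_fails", "switch_2_fails"]),
}

def cut_sets(node):
    """Return the cut sets of a node as a list of frozensets of basic events."""
    if node not in tree:                     # basic event (leaf)
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                         # any child's cut set suffices
        return [s for sets in child_sets for s in sets]
    # AND gate: merge one cut set from each child (cross product)
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Keep only cut sets that contain no strictly smaller cut set."""
    return [s for s in sets if not any(other < s for other in sets)]

mcs = minimal(cut_sets("top"))
print(sorted(mcs[0]))  # ['alt_power', 'switch_1_fails', 'switch_2_fails']
```

With OR gates in the tree, several minimal cut sets would result, and those with the fewest events would typically be the most likely paths to the top event.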
3.3.8.5 Prevent Costly Errors with Failure Mode and Effects Analysis
(FMEA)
The FMEA was developed by NASA in parallel to the FTA in the 1960s and was first used
in the Apollo programme. The method later found widespread use, via power plant
construction, in the automotive industry. Today, the FMEA is used
for the development of new products, the use of new production processes, products with
safety requirements, changes to the product, material or process, changes in the condi-
tions of use of known products, complaints and requirements by the customer.
In contrast to FTA—which is a representative of top-down instruments—Failure
Mode and Effects Analysis (FMEA) is one of the bottom-up analysis forms. FMEA and
FTA are related instruments which complement each other and, in combination, have
their greatest effect in terms of risk identification. Instead of examining which product
components could cause a given error or risk situation (top event), the FMEA tries to
find out what type of error or risk is triggered by the given product components. Within
the framework of quality management, the FMEA is thus used to minimise the risk aris-
ing from the occurrence of errors. Potential errors in systems, designs and processes are
analysed and measures defined to detect them as early as possible.
The FMEA is motivated by the knowledge of the connection between the costs of
eliminating faults and the time of their discovery. As a rule of thumb, the so-called rule
of ten is often mentioned, which states that the costs increase tenfold from one process
step to the next. For this reason, FMEA follows the idea of preventive error avoidance
instead of subsequent detection or correction.
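As a toy illustration of the rule of ten (the phase names and the base cost are invented for illustration):

```python
# Rule of ten: the cost of eliminating a fault grows roughly tenfold with
# each process step in which it remains undiscovered.
phases = ["design", "procurement", "production", "final test", "field"]
base_cost = 10  # EUR, hypothetical cost if the fault is caught in design

costs = {phase: base_cost * 10**i for i, phase in enumerate(phases)}
print(costs["field"])  # 100000
```

A 10 EUR design-phase fix thus becomes a 100,000 EUR problem once the product has reached the field, which is the economic motivation for preventive error avoidance.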
Depending on the different hierarchy levels of the application of an FMEA, the
FMEA is classified into three subgroups. The classic distinction is based on a system
FMEA (product concept), a design FMEA (examination of products for weak points in
design or layout) and a process FMEA (manufacturing process). The findings from the
investigation at system level serve as the basis for the design FMEA, the results of which
flow into the considerations at process level. As a result of cause and effect, a hierarchi-
cal shift results for the different FMEA types, in which the error cause becomes the error
type and the error type becomes the error effect in the subsequent investigation.
In order to create an FMEA, an FMEA team is formed within the company, consist-
ing of employees from all departments concerned, in order to ensure a common view
from different perspectives. An important role in this process is played by the team
leader, who must bring all results together and then document them. The team will use
an FMEA form to answer the following questions:
• Where can an error occur?
• How does the error manifest itself or how does it occur?
• What kind of error sequence can occur?
• Why can the error occur?
The following is a brief explanation of the individual steps involved in answering the
above questions. In the first step, the system (product) is delimited and described. This
results in a division into individual system elements (end products, assemblies and com-
ponents) and the determination of the individual interfaces between the elements. In the
subsequent error analysis, potential errors are assigned to the individual system elements
that are defined as restrictions or non-performance of system functions. The central
result of the analysis of the error sequence is the effect of the error on the end user of the
product. In the final step of the analysis, all causes that could lead to the described error
are described. Then measures to avoid or detect the individual errors and their causes are
listed.
In the subsequent risk assessment, the probability of occurrence, the significance of
the consequences and the probability of detecting the individual faults are discussed. The
evaluation of errors is calculated using the risk priority number: probability of occur-
rence multiplied by significance of consequences multiplied by probability of discovery
(problems with this approach are discussed in Sect. 3.4.1.4). If the risk priority number
exceeds a threshold value defined within the company, countermeasures are to be taken.
Ideally, such measures should aim at error prevention instead of error detection. Finally,
the effectiveness of the individual measures to reduce errors is to be assessed. The risk
priority number prior to improvement is compared with the risk priority number of the
improved system (Rautenstrauch and Hunziker 2011).
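The risk-priority-number logic can be sketched as follows; the scales, failure modes and the threshold are illustrative assumptions, not values from the text:

```python
# Hypothetical FMEA entries: (failure mode, occurrence O, severity S,
# detection D), each rated on an assumed 1-10 scale where higher D means
# the fault is harder to discover.
failure_modes = [
    ("seal leaks",    4, 8, 3),
    ("wrong torque",  2, 9, 7),
    ("label missing", 6, 2, 2),
]

THRESHOLD = 100  # company-defined action threshold (assumed)

for mode, o, s, d in failure_modes:
    rpn = o * s * d  # probability of occurrence x significance x probability of discovery
    action = "define countermeasure" if rpn > THRESHOLD else "accept/monitor"
    print(f"{mode}: RPN={rpn} -> {action}")
# seal leaks: RPN=96 -> accept/monitor
# wrong torque: RPN=126 -> define countermeasure
# label missing: RPN=24 -> accept/monitor
```

Note that "wrong torque" triggers action despite a lower occurrence than "seal leaks", because its severity and poor detectability dominate the product; this multiplicative compression is one of the problems taken up in Sect. 3.4.1.4.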
3.4 Assess Key Risk Scenarios
Probably one of the most challenging steps in the ERM process is to develop appropriate
criteria to differentiate between key risks and all other risks (Rees 2015, p. 36). To carry
out this important step, we need to reconsider what is fundamentally a key risk—and
what happens to all other risks. It is obvious that applying the wrong selection criteria
can lead to a more or less false understanding of the current risk exposure.
First, it is important to understand that ERM is primarily concerned with risks and
opportunities that may have a relevant impact on the achievement of objectives. In most
companies, financial performance is the most important indicator of short- and long-term
target achievement. Finally, the company’s financial situation is of crucial importance for
its long-term existence. Thus, the assessment of a risk in terms of its impact on financial
targets must be an important criterion for most companies.
Should risks be excluded from further analysis that do not exceed a certain minimum
loss potential? The answer depends on the perspective. From an ERM point of view, it
is necessary to define clever filters so that only relevant risks are subjected to a detailed,
more complex assessment. Risk quantification and risk simulation based on key risk
selection is much more cost-efficient and less complex to set up and maintain if only a
few important risks are taken into account.
The selection of a few, relevant risks is decisive as to whether ERM systems can
be used meaningfully in practice in the long term or whether they will not sur-
vive due to their complexity and high costs. The flexibility and strategic orien-
tation of ERM systems for ad hoc decision support is a key success factor.
However, risks that are filtered out from an ERM perspective should not simply be
“deleted”. These risks could become key risks over time, so they need to be monitored
and regularly reassessed. It is thus important to store all risks in a database and to cre-
ate a kind of a “watch list”. However, these “watch-list” risks may be relevant from an
operational risk management perspective. Depending on whether a company runs opera-
tional risk management in addition to ERM, these risks can be managed decentrally and
coordinated with other assurance functions (e.g. internal control).
Of course, focusing on key risks has one major caveat: It may lead to an underestima-
tion of the current risk exposure if many “minor” risks are excluded from further risk
analysis. In addition, the relative importance of a risk does not directly include the rela-
tive relevance of possible risk mitigation measures. For less important risks, there may
exist simple and cost-effective measures to reduce or eliminate them completely. There is
no reason not to think about risk mitigation even for small or unimportant risks. This in
turn can significantly reduce the company’s overall risk exposure.
On the other hand, it may also be the case that risks being considered unimportant
can trigger other risks and accumulate to relevant risks due to risk interdependencies.
Figure 3.13 shows the basic challenge of this ERM process step. After having collected
risks (uncertainties) from various sources, they have to be consistently assessed for fur-
ther prioritization. Companies may apply different filters to select key risks from the
“risk universe”. Eventually, the risk manager has to create a key risk list for further risk
analysis (quantitative scenario development).
3.4.1 Identify Key Risk Scenarios
In the following some filters are discussed critically. The first two filters aim to exclude
“fake” risks. On the one hand, this concerns unrealistic scenarios against which no mean-
ingful measures can be taken. On the other hand, as already mentioned in Sect. 3.3.4,
pure decision-making problems that are entirely within the control of the company must
be recorded in a separate list. The two subsequent selection criteria describe filters that
are very common in practice. However, it should be kept in mind that some filters for
risk prioritisation can do more harm than good. Subsequently, we explain a simple but
very useful filter for creating a key risk list at this stage of the ERM process.
3.4.1.1 Exclude Unrealistic, Devastating Risks
To ensure that ERM remains credible and is taken seriously by its stakeholders, no unre-
alistic, irrelevant risks should be included in the key risk list. However, the question of
how to distinguish realistic and unrealistic risk is not so easy to answer. Let us assume
a very bad risk scenario that can be devastating for all projects and all business areas of
a company and in addition, for all companies in a specific industry, in a country or even
worldwide. Let us label it “Aliens take over world domination”. Such a scenario is prob-
ably untrustworthy in the sense of being purely speculative and not reaching consensus
among experts. In addition, alien invasion has of course a very low probability of occur-
rence. No company can meaningfully prepare for this event nor can it implement meas-
ures to minimise the impact to a reasonable level.
Fig. 3.13 Application of smart filters to create a key risk list (the risk universe, fed by
one-on-one interviews, management assumption analysis and traditional risk
identification, is narrowed through filters I–III down to the key risks)
Other, similar implausible scenarios can be, for example, risks that make life on earth
impossible, e.g. a devastating meteorite impact, deadly global diseases, global cyber war,
robotic takeover of mankind, world war III, fundamental shift of the political system
from democracy to dictatorship. To enable risk-based comparability of the risk exposure
between projects, business areas and strategic options in a company, such unrealistic sce-
narios must be consequently excluded in all risk analyses.
Unrealistic, devastating risks, which usually affect an entire economy or even the
global economy, should not be confused with very rare, company-specific risks for which
individual companies can prepare by implementing appropriate risk mitigation measures
(to some extent). These very rare, but plausible risks may “only” affect individual busi-
ness areas in certain regions or “only” affect some, but not all strategic initiatives. An
example of a plausible, very rare and very pessimistic risk scenario is a flood disaster in a
certain region where the company has a production site for a specific product that is only
produced at this facility. Even if this risk is very rare (e.g. 0.005% annual probability) but
has a destructive impact (production site is completely destroyed), it must be included in
the risk analysis for the following reasons (see similar Rees 2015, p. 38):
• The risk is partially manageable, it can be insured, for example, and preventive meas-
ures (protective walls, early warning systems, redundant production site) can be
implemented.
• The risk is a realistic, if rare, scenario. There is broad consensus that it will happen at
some point in the future.
• The risk only has a company-specific impact and a company may be disadvantaged
relative to its competitors when it occurs (e.g. loss of market share).
• The risk only affects one product line (and as such may be riskier than other prod-
uct lines, everything else held constant) and can be managed with some effort in the
case it occurs (the existence of the whole company is not at stake).
3.4.1.2 Separate Pure Management Action Items
In Sect. 3.3.4, we briefly discussed the differences between decision problems and real
risks. Now we are so far advanced in the ERM-process that we have to consider how
to deal with pure decision-making problems, which can also have an impact on the risk
exposure (pre- and post-decision risk exposure). Should risk managers deliberately exclude
decision issues from their risk identification process? One could argue that such decisions
should be left to the responsibility of management; if so, no prioritisation choice would
have to be made at this point. However, the answer is clearly no. One of the crucial
steps to improving overall ERM effectiveness is to be aware of the existence of decision
problems and their relation to traditional risks (see similar Rees 2015, pp. 34–35, 40–41).
Next, risk managers should develop a process or a scheme to enable the compari-
son between decision problems and risks with uncertainty attached (“real” risks).
Thirdly, from a risk assessment perspective, this distinction between fully controllable
decision and non-controllable (or only partially controllable) risks is crucial to make.
Subsequent risk models based on key risks should be capable to capture both effects
of pure (management) decisions and truly risky items. Straight ahead: A best-practice
ERM approach is to display pre- and post-decision values for all types of decisions, be
it the decision about a measure to reduce the probability of occurrence of a specific risk
or a management decision which only impacts the baseline expectation (plan).
ERM is of course not responsible for recording, evaluating and reporting pure deci-
sion-making problems in a holistic manner. However, risk management workshops and
interviews may exclusively address such aspects. It thus makes sense for the risk man-
ager to record these in a structured manner and make them available to decision-makers.
Pure decision-making problems do not have to be subjected to a more in-depth, quanti-
tative scenario analysis. It also does not make sense to assign different probabilities for
these decisions, since the decision lies in the full control of management. This makes it
obvious that they do not correspond to the definition of “uncertainty” and thus cannot be
included in a classical risk model. However, they also have an impact on financial perfor-
mance, which can be estimated similarly to a real risk. In contrast to the quantitative sce-
nario analysis of real risks, however, not the potential deviations from the planned value
are assessed, but the potential change of the planned value itself. We will learn more
about this difference in the chapter on risk quantification.
3.4.1.3 Avoid Risk Maps as Selection Criterion
A widely used approach for risk assessment and subsequent risk prioritisation is the
risk map (or heat map). It serves as a visual communication aid for corporate risks
and forms the basis for decision support and for prioritising which risks need to be
addressed with which urgency. Based on the prioritisation process, corresponding risk
mitigation measures are derived (Hunziker and Rautenstrauch 2015). Many consulting
firms and training centres with risk management certificates train this approach as a cen-
tral risk assessment instrument. Various international organisations that publish standards
and frameworks for risk management, such as COSO II, the National Institute of Standards
and Technology (NIST) or COBIT, also recommend such an evaluation approach. In prac-
tice, it is probably the most widely used approach to risk assessment and prioritization
(Hubbard 2009, pp. 120–121).
In principle, a risk in the risk map is assessed as a product of the probability of occur-
rence and impact-on-occurrence (probability-impact matrices). Risk maps usually use a
kind of scoring system based on ordinal scales. This means that relative gradations are
made on the basis of a value range of e.g. 1–5, where 1 is classified as “very low impact”
and 5 as “catastrophic impact”. Other gradations with value ranges from 1–3 to 1–10
can also often be found in practice. It is usually assumed that the distances between the
individual values are equal, i.e. a risk with score 3 is assessed as three times more serious
than a risk with a value of 1. Figure 3.14 shows an example of a risk map as it is often
used in practice.
Caution is needed when using such risk prioritization instruments. Risk manage-
ment experts such as Cox (2008) or Hubbard (2009) even describe them as useless or
counterproductive, as they can lead to wrong decisions. The following problems with
risk maps must be taken into account when using them. Some can be reduced or elimi-
nated by certain measures, others are inherent in the instrument.
The use of risk maps is very simple. In the risk map illustrated in Fig. 3.14, the risks
must be assigned to one of the nine fields, which require a rough relative assessment of
the probability of occurrence and the impact. Colour gradations are often used, whereby
the risks in the red fields at the top right are assessed as “unacceptable”. Red risks
require priority treatment, i.e. risk reduction measures must be defined. The orange fields
contain “critical risks”, where it is often unclear whether action is needed (though less
urgent than for the “red risks”) or whether the risks should merely be tolerated and
observed more closely. However, the colouring fails to provide a realistic
assessment of the risk. The red fields at the top right can be described as pseudo risks
(or phantom risks, see Samad-Khan 2005, p. 3). It is simply not possible that there are
business risks that threaten companies as a whole with a very high frequency. Thus, in
practice, real “red risks” at the top right exist very seldom.
The focus of risk maps is in many cases risk prioritisation with respect to an aver-
age value, i.e. expected value. This equals a probability-weighted impact. Averaging such
risks may lead to serious false risk assessments which in turn may lower decision quality
significantly. For example, an expected value of the impact of raw material price volatil-
ity may be close to zero. However, the upside and downside potential (e.g. on a 95%
confidence interval) of price volatility may be important for decision-makers. Related to
Fig. 3.14 Risk map (3 × 3 matrix plotting probability of occurrence in % against impact
in €, each scaled low/medium/high; cells are coloured green, yellow or red)
the expected value problem, a risk with a very small probability of occurrence and a dev-
astating impact-on-occurrence does not necessarily fall into the “red area” of the risk
map.
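The raw-material price example can be mimicked with a small Monte Carlo sketch; the distribution and its parameters are invented purely for illustration:

```python
import random

# Simulated planned-value deviations of a raw-material price risk:
# the expected impact is close to zero, yet the 95% interval is wide.
random.seed(42)
impacts = [random.gauss(0, 500_000) for _ in range(100_000)]  # EUR deviation

impacts.sort()
mean = sum(impacts) / len(impacts)
p2_5, p97_5 = impacts[2_500], impacts[97_500 - 1]

print(round(mean))                 # close to 0
print(round(p2_5), round(p97_5))  # roughly -1,000,000 and +1,000,000
```

Ranking this risk by its expected value would make it look negligible, even though decision-makers face potential deviations of about a million euros in either direction.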
In the best case, the verbally anchored scales of the risk map are stored with quantita-
tive values (e.g. “low” with an annual probability of occurrence of 1–20% and an extent
of damage of 0–50,000 €). In the worst case, the verbal risk assessment is not linked to
any quantitative values. Studies have shown that verbal, subjective scales such as “low” to
“high” or “unlikely” to “very likely” are “translated” by people into highly divergent per-
centages, which can make the classification in one of the fields almost unusable (Budescu
et al. 2009). Subjective scales are further subject to many cognitive biases: Hubbard and
Evans (2010) state that individual experiences, overconfidence, confirmation bias and
optimism bias may significantly impact the assessment of probability and impact.
As risk matrices display discrete categories of impact and probability, the resolution
is defined by the number of categories. Cox (2008) concludes that the limited resolu-
tion is an inherent disadvantage of risk matrices. In this sense, the selected scales in risk
maps are too “compressed”. For example, two different risks have annual probabilities
of 0.5% and 19%, respectively. In the above example, both risks are consequently “com-
pressed” to the value 1 (“low”), although the probabilities differ considerably (the risk
occurs once every 200 years versus once every 5 years). The same applies to the assessment
of the impact. The multiplication of both variables into one expected value leads to a
further compression of the information and thus to very inaccurate (or dangerous) risk
assessments.
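The compression effect can be demonstrated in a few lines; the bucket boundaries follow the 1–20% = "low" example above, while the upper buckets are assumptions:

```python
# Ordinal bucketing of annual probabilities as used in typical risk maps;
# bucket boundaries above "low" are assumed for illustration.
def probability_score(annual_p):
    """Map an annual probability to an ordinal 1-3 score."""
    if annual_p <= 0.20:
        return 1   # "low"
    if annual_p <= 0.50:
        return 2   # "medium"
    return 3       # "high"

# 0.5% (once every 200 years) and 19% (once every ~5 years) collapse to
# the same score, although they differ by a factor of 38:
print(probability_score(0.005), probability_score(0.19))  # 1 1
```

Any subsequent multiplication of such scores with an equally compressed impact score only compounds the loss of information.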
Furthermore, the correct risk definition is repeatedly violated in the application of risk
maps. The application of a risk map assumes that a risk can be meaningfully described
by one probability of occurrence and one single impact: The risk either occurs or it
does not occur. And when it occurs, it always does so with the same probability. For the
majority of risks, this probability description is not appropriate or simply wrong. The
example of interest rate risks is intended to illustrate this: Interest rate or currency pair
changes can actually occur with any number of possible values (see the concept of vola-
tility), but not every change is equally probable. Such a risk can no longer be described
as a “risk event” and thus cannot be meaningfully depicted in the risk map. Here, for example,
a volatility (fluctuation) would have to be depicted using various estimated scenarios.
Many operational risks, such as a machine breakdown, can also be poorly described as
a risk event, as several consequences are conceivable. Furthermore, the risk map usually
only shows the “negative risk”, positive potentials (opportunities) are completely ignored
in most cases.
Further, risk interdependencies are also ignored by the risk map. If, for example, two
risks assessed as “medium” (e.g. “fire causes loss in warehouse” and “interruption of
production process due to loss of personnel”) occur simultaneously due to a hurricane,
they can no longer be assessed as independent events. Such dependencies cannot be
meaningfully modelled in a risk map. Finally, the risk map also reflects challenges that
are only indirectly related to the instrument itself. For example, different practices for
developing the final impact of a risk event are observed. Three possibilities are applied in
practice (see Duijm 2015):
• The impact is represented by a risk event causing the worst case scenario and the cor-
responding probability of that event.
• The impact is represented by the most likely consequences (e.g. based on average of
past losses, similar to an expectation value) and the corresponding probability is the
probability that the most likely event occurs.
• The impact is represented by different impact scenarios, each may be in another
impact category of the risk map and the corresponding probabilities are the probabili-
ties that each of those scenarios occur.
Of course, each of these possibilities may lead to different risk assessments. Having said
that, we can draw the following conclusions: Possibility 1 may lead to overly conservative
outcomes; furthermore, less pessimistic scenarios are neglected. Possibility 2 violates our
definition of risk (risk is deviation from the expected; the "representative" impact is quite
similar to the expected value) and thus may underestimate true risk: companies may face
overly optimistic impact assessments. Possibility 3 is basically preferable to the other
possibilities in that it also enables addressing different realistic scenarios of the same risk
event. However, it may lead to many entries in the risk map when several events with
several scenarios are considered (see Duijm 2015).
3.4.1.4 Avoid Expected Values as Selection Criterion
As just discussed, in risk maps the individual risks are generally assessed according to
probability of occurrence and impact and graphically represented as expected loss in the
matrix. As simple and understandable as this procedure may seem, the expected values
of the individual risks are subject to considerable limitations. However, expected values
also have meaningful applications if they are used correctly. In the following, this will be
discussed first.
On the one hand, the tangibility and calculation of expected values is relatively sim-
ple. The two variables “probability of occurrence” and “impact” can be derived either
from historical data or expert judgements. Quantifying the individual risks with proba-
bilities and financial impact is in practice very often essential for subsequent aggregation
of the individual risks across individual business areas or hierarchical levels to gain an
enterprise-wide risk exposure. It is thus not adequate to group risks only in risk classes
such as “small, medium and large risks” in order to be able to assess or aggregate them
reasonably later.
A further advantage of applying expected values lies in the option of pooling indi-
vidual risks in order to calculate overall risk exposures at different corporate levels.
Because of the additivity of the expected values (i.e. it is mathematically correct to add
the expected values), the sum of the expected values of individual risks is precisely the
expected value of the overall risk exposure. For example, it may make sense to assess
the effectiveness of risk mitigation measures over time on the basis of the overall risk
exposure of, for example, individual business units. The expected value is thus a
particularly useful measure if the primary objective of risk management is to assess
the effectiveness of risk mitigation measures. Effectiveness in this case means that the
average expected losses (the sum of all expected values of the individual risks)
are smaller than, for example, in the previous business year.
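Because expected values are additive, enterprise-wide aggregation reduces to a simple sum. A minimal sketch (unit names and all figures are invented for illustration):

```python
# Each business unit holds individual risks as (probability, impact in EUR).
# All names and figures below are hypothetical.
unit_risks = {
    "Unit A": [(0.10, 500_000), (0.02, 2_000_000)],
    "Unit B": [(0.25, 100_000), (0.05, 1_000_000)],
}

def expected_loss(risks):
    """Expected value of a set of risks: sum of probability * impact."""
    return sum(p * impact for p, impact in risks)

# Unit-level expected losses ...
per_unit = {unit: expected_loss(risks) for unit, risks in unit_risks.items()}

# ... sum exactly to the enterprise-wide expected loss (additivity).
overall = expected_loss([r for risks in unit_risks.values() for r in risks])
```

Tracking `overall` across business years is one way to check whether mitigation measures reduce the average expected loss.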
However, strictly speaking, the expected value is not a risk measure. The reason for this
claim is fairly simple. We need to recall the definition of "risk": risks are unexpected,
random deviations from planned values. This is in complete contradiction to the measure
"expected value": the expected value of a risk is neither unpredictable nor random; it
is a known factor in advance and is thus, by definition, not a measure for defining a risk.
From a risk management perspective, the expected (i.e. known) loss must thus certainly
not be the top selection criterion. On the contrary, the potential unexpected deviations
from the expected value, i.e. the distribution of possible losses in a range around the
expected value, are much more relevant. In particular, the worst case scenario may be
completely underestimated (or neglected) by expected values. The expected value merely
provides an indication of the average losses over an infinite period of time. From a com-
pany’s perspective, however, it is of no interest whether it could bear the losses on aver-
age. Rather, the worst deviations from the expected loss that could cause a company to
become insolvent are essential. A simple numerical example illustrates this.
The two risks X (probability of occurrence of 1% and impact of EUR 10,000,000)
and Y (probability of occurrence of 50% and impact of EUR 200,000) have the same
expected value of EUR 100,000. However, if risk X actually occurs, the impact to be
borne is significantly higher than with risk Y. It is thus of no use to a company to survive
on average in the long run. The expected value is not a real risk measure and underestimates
the relevance of rare but serious risks. For risks with the same expected value, the risk map
tends to suggest risk neutrality. In practice, however, this neutrality is hardly present,
because decision-makers care whether they face a profit opportunity (or loss possibility)
of, for example, EUR 10,000,000 with 1% probability or EUR 200,000 with
50% probability. Companies thus usually behave risk-averse in decision-making
processes, not risk-neutral as expected values imply (see e.g. Jonkman et al. 2003).
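The example can be verified in a few lines:

```python
# Risk X: rare but severe; risk Y: frequent but moderate (figures from the text).
p_x, impact_x = 0.01, 10_000_000  # EUR
p_y, impact_y = 0.50, 200_000     # EUR

ev_x = p_x * impact_x  # expected loss of X: EUR 100,000
ev_y = p_y * impact_y  # expected loss of Y: EUR 100,000

# Identical expected values, yet the worst case of X is 50 times that of Y.
worst_case_ratio = impact_x / impact_y
```

Ranking by expected value alone would treat X and Y as interchangeable, although only X can threaten solvency.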
What do we learn from this insight? In fact, it is astounding how persistently
expected values remain in practice a major decision criterion for risk selection and
risk prioritization. As this is such a crucial aspect to understand, the lessons are
summarised in the following box.
u Expected value is not a suitable measure for the selection of key risks. With it,
it is not possible to identify risks that could threaten the survival of the
company. The multiplication of probability of occurrence and impact seems
simple at first, and the resulting single number (e.g. called a risk priority
number) can be put into an easily understandable order. Unfortunately, this
method does not increase decision quality; often the opposite is the case.
Expected value fully contradicts our definition of risk in the ERM approach.
3.4.1.5 Prefer Impact Over Probability
In practice, the probability of occurrence of a risk is an indicator often used to distin-
guish between important and unimportant risks. As we have learned, it is often used to
calculate expected values. The simultaneous consideration of probability of occurrence
and impact is probably one of the most widespread approaches for prioritizing risks in
the non-financial industry. The disadvantages of expected values have already been dis-
cussed in detail in the previous paragraph. At this point, we would like to ask whether it
makes sense to consider the probability of occurrence as a criterion to select individual
key risks. It is often seen in practice that very rare risks with a very high impact are not
defined as key risks. In risk maps, the “relevance line” is often set so that very rare risks
are never positioned in the red area. Is this a legitimate procedure? In the following, a
few thoughts are presented that shed a critical light on probabilities as a filter criterion.
Firstly, it is important that decision-makers are aware of all the risks that can have
a significant impact on the company’s objectives. This provides the basis for manage-
ment to fulfil its responsibility to discuss as many risks as possible that could threaten
the existence of the company. In this context, it is irrelevant how high the probability of
occurrence is. It is important to consider whether the company is prepared in the event of
a risk occurrence or whether measures need to be taken if necessary. Of course, manage-
ment can also decide to accept a significant risk, which it considers to be very rare. In
this case, it is a well-informed decision to accept a key risk if the associated potential for
success justifies it. If, however, probabilities are actually used as filters, it can happen that
management is not even aware of such risks, and blind spots arise, which can be very
serious. Very rare risks with a high impact are consequently not discussed at management
level because they are not included in the risk reporting. If such a risk occurs,
it is of little use to the management to refer to the rarity of the event. In this respect, this
procedure can be considered a breach of the duty to deal with all risks that
threaten the existence of the company (irrespective of their probability of occurrence).
Secondly, it is very difficult to reliably assess probabilities and, depending on the
assessment, this can lead to completely different key risks. People find it difficult to
assess probabilities. In principle, probabilities for risks with which a company has no
experience cannot be easily assessed. In the area of strategic risks, it is thus challenging
to estimate the probability of occurrence as accurately as possible. An example illustrates
the problem attached to that: depending on the probability with which an interviewee
expects a new competitor to appear on the market, this risk becomes a key risk or not.
For example, it may be that a company sets the filter in a risk map at 5% probability
of occurrence for the next year. If a board member now assesses this risk at 3%, it falls
below the threshold and is not reported and discussed as a key risk. However, these 3%
are difficult to verify. It could also be 7% or 10%, which can also be considered plausi-
ble. A mitigation of this problem could be that impact and probabilities are recorded and
reported separately, but the key risk list is only generated on the basis of impacts. The
probabilities would then serve as additional information and a basis for discussion, but
are not an equally weighted selection criterion.
A third reason why the probability of occurrence is not a good selection criterion can
be illustrated by the following example. Let us assume our key risk list contains 25
risks. The risk manager analyses the selected risk scenarios and concludes that each key
risk scenario has a very low probability. For the sake of simplicity, we assume that all
risks have an equal estimated probability of occurrence of 1% (p). In other words, each
risk is expected only once in a hundred years. Are we confident that none of the top risks
will occur next year? Can we inform our board that there will be no unpleasant surprises
next year due to the very low probabilities? Let us assume that the 25 (N) top risks are
uncorrelated. This assumption may be quite realistic, since the risk interdependencies
are already incorporated during the individual scenario developments. What is the prob-
ability that at least one of the rare risks will occur next year? The math is as follows:
1 − (1 − p)^N. If we use our figures (p = 1%; N = 25), we calculate a probability of 22.2%.
This value is relatively high and is usually underestimated in traditional risk management
systems based on individual risk assessments (e.g. by means of risk maps). If we extend
the time horizon to e.g. 5 years (according to the achievement of the strategic objectives),
this probability already increases to 71.5%. In the long term, rare risks are thus very
much to be expected. The lesson here is that very low-probability risks should not be
excluded from the key risk selection process.
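The calculation above can be reproduced directly; a minimal sketch:

```python
def prob_at_least_one(p, n):
    """Probability that at least one of n independent risks occurs,
    each with per-period probability p: 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# 25 uncorrelated key risks, each with a 1% annual probability of occurrence:
one_year = prob_at_least_one(0.01, 25)        # ~22.2%
# Over a 5-year strategic horizon there are 25 * 5 independent "risk-years":
five_years = prob_at_least_one(0.01, 25 * 5)  # ~71.5%
```

Even though every single risk is rare, the chance that at least one materialises is far from negligible.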
u At this point, it is important to understand that probabilities in the ERM
approach are still highly relevant. Probabilities are particularly relevant when
assessing the impact of multiple risks on a particular business objective. For
the selection of key risks, however, we need filters that prevent threatening
individual risks from being excluded or not taken into account in the more
detailed risk quantification. We thus strongly recommend that the key risk list
is primarily based on the impact of risks and that probabilities of risks may be
included in the risk list as additional information, if available.
3.4.1.6 Distinguish Between Key and Non-Key Risks
We have reached the culmination of the first important process step: risk identification.
We remember that the aim was to create an overview of key risks. This list is the
first important outcome, which is then subjected to a quantitative scenario assessment
in a subsequent step. The assessments of the individual impacts are to be deemed provi-
sional. They have only helped us to distinguish between key risks and non-key risks (see
similar Segal 2011, pp. 151–152).
The following figure shows a corresponding procedure. It shows an excerpt of pessimistic
risk scenarios of a company in relation to the defined EBIT target. The expected
EBIT amounts to EUR 5 million. All significant deviations from the plan are thus of
interest, which is in line with our risk definition. If a risk scenario has a loss potential
higher than EUR 2 million, it is taken into account in the further risk analysis. It is thus
included in the key risk list. As you can see from the chart, probabilities of occurrence
are missing. If these were already collected during risk identification, they could be
added as a supplement to the individual risk scenarios. In our approach to risk
identification presented so far, however, we have deliberately refrained from collecting
probabilities. These only become relevant in the subsequent quantitative risk scenario
development (Fig. 3.15).
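The filter step amounts to a simple threshold rule; a sketch assuming the scenario losses shown in Fig. 3.15 and the EUR 2 million filter from the text:

```python
# Pessimistic scenario losses relative to the EBIT plan, in EUR (per Fig. 3.15).
scenarios = {
    "RScen1": -6_000_000, "RScen2": -3_500_000, "RScen3": -3_000_000,
    "RScen4": -2_000_000, "RScen5": -1_500_000, "RScen6": -1_000_000,
}
FILTER = 2_000_000  # loss potential must exceed this to qualify as a key risk

# Key risks proceed to detailed quantification; the rest form the watch-list.
key_risks = {name: loss for name, loss in scenarios.items() if -loss > FILTER}
watch_list = {name: loss for name, loss in scenarios.items() if -loss <= FILTER}
```

Note that only the (provisional) impact drives the selection; probabilities are deliberately left out at this stage.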
Remember that a risk database must also be populated with all non-key risks to create
a so-called "watch-list". This list can be provided as a supporting tool for operational
risk management or internal control systems. In addition, all non-key risks shall be mon-
itored on a regular basis in order to recognise emerging key risks as early as possible. It
is assumed that only a few watch-list risks will qualify as key risks at later points in the
future. Nevertheless, as business models can change quite quickly due to e.g. changes in
customer needs, some risks deemed minor can become strategy-relevant later on.
At this point, it is important to note that the key risk list per se is not yet an instrument
relevant to decision-making. One could say that in traditional risk management such a
list is often the key result of the risk management process. From a modern ERM perspec-
tive, this list should be understood as a kind of database in which risks are collected and
adjusted over time. Only the subsequent quantification of the individual risk scenarios
and the integration into decision-making processes provide the desired added value of
ERM.
Fig. 3.15 Key risk scenarios: six pessimistic risk scenarios (RScen1–RScen6) with loss
potentials of EUR −6, −3.5, −3, −2, −1.5 and −1 million relative to the EBIT plan of
EUR 5,000,000; the three scenarios above the filter line (RScen1–RScen3) are marked
as key risks.
u The mere creation of a key risk list as the basis for risk reporting to manage-
ment and the Board of Directors does not provide any added value. The risks
on this list are merely isolated individual risk assessments that are not (yet)
included in decision-making processes.
3.4.2 Quantify Key Risk Scenarios
The next step in the ERM process is a quantitative risk assessment of all key risk sce-
narios. Its aim is to reflect the uncertainty associated with key risks as holistically and
realistically as possible. Only quantification makes a meaningful comparison of differ-
ent risks and opportunities possible. However, a misunderstanding must be cleared up
at this point: It is not a question of “calculating” a precise truth with risk quantifica-
tion. We all know that this is not possible because nobody can predict the future exactly.
With the help of reasonable evaluation methods, however, we can express the degree of
uncertainty more objectively and transparently than will ever be possible with qualitative
methods. It is thus not a question of producing illusory precision, but of developing
"ranges of uncertainty" on the basis of plausible quantitative risk scenarios.
As discussed previously, an ERM programme must assess all risks (independent
of their source) with the same care. In particular, strategic risks are often not assessed
quantitatively in practice. Practitioners often claim that the complexity of risks or their
sources and a lack of data impede quantitative risk assessments. However, this translates
to the following important statement:
u ERM programmes that quantify only financial risks and (partially) operational
risks, but assess “non-quantifiable risks” (strategic risks) only qualitatively, fail
in making reasonable statements about how risk exposures may impact busi-
ness objectives. This in turn impedes the supporting role of ERM in risk-ori-
ented decision-making. It is thus strongly recommended to adopt an ERM that
is methodologically capable of assessing all risk categories quantitatively.
The problems of pure qualitative risk assessments are manifold and have already been
addressed in previous paragraphs. However, it is also important to note that quantitative
assessment methods are not per se superior to qualitative techniques because they look
more complex, mathematical and "accurate". In practice, quantitative models are often
incomplete and neglect relevant risks, particularly strategic risks, for which data
are scarce. Interestingly, operational risks at lower hierarchical levels and specifically
financial risks are usually quantified using state-of-the-art stochastic methods. Hubbard
(2009) calls this observation in practice a “risk paradox”: relevant, strategic risks are
often assessed by qualitative, simple scoring methods, whereas operational low-level
risks are often included in quantitative risk models (p. 174).
Furthermore, data quality is crucial for the quality of quantitative analysis: the finan-
cial crisis has clearly shown that model assumptions based on classical financial market
theory cannot withstand reality. Extremely rare but devastating scenarios have been
regularly underestimated (so-called tail risks). Stochastic models require a sound data basis,
which is often not the case, specifically in the area of strategic and operational risks. As
a consequence, either unrealistic scenarios are estimated or some risks are completely
ignored. Finally, it is questionable whether complex stochastic models are actually
applied correctly in practice and understood by management. These “black box” models
are often difficult to communicate to decision-makers and cannot be understood without
appropriate know-how (Hunziker 2018, pp. 18–19).
The critical question now is, which approach shall we present in this textbook on risk
quantification? There are many good textbooks on stochastic risk modelling available.
However, the procedures and approaches recommended in these books do not (at least
not yet) seem to prevail in the non-financial industry. From a practical point of view, this
may have several (partly mistaken) reasons:
• Stochastic risk modelling is reserved for the financial industry; the methods are not
transferable to non-financial risks.
• The procedure is considered too complex; one is content with simpler methods that
are easier to understand (e.g. qualitative risk management).
• The data needed to build appropriate models are missing.
• The maintenance of such models is often considered too complex.
• The benefits of quantitative approaches are called into question because it is assumed
that models are fundamentally wrong (the image of quantitative risk models has
suffered at the latest since the financial crisis).
• The basic assumption of normally distributed returns is increasingly criticised; the
corresponding statistical distributions no longer match reality.
Two questions arise at this point: What information should risk quantification be based
on? Should stochastic or deterministic risk scenarios be quantified? Risk quantification
is based on the principle of using the best available information, depending on the risk
category. These can be historical data as input for the assessment of financial risks or pri-
marily expert assessments in the area of strategic risks. Thus, the quantification approach
discussed in this textbook combines different data sources within the scenario quantifica-
tion approach. Pure stochastic modelling as input for risk simulation is not used for the
aforementioned reasons.
Subject matter experts who are “closest to the risk” in the company are explicitly
included in the risk assessment (as they already have been in the risk identification pro-
cess). A properly performed risk quantification with the risk manager as enabler and
discussion facilitator together with board members, business, divisional and department
heads usually leads to more reliable (tail) scenarios than a pure stochastic evaluation
based on (often insufficient) historical data. Moreover, a deterministic risk assessment
approach which is based mainly on expert judgements rather than relying solely on pure
stochastic (black box) models supports the acceptance of ERM and enhances an appro-
priate risk culture.
In the following, we learn why quantified risk models still matter, how to effectively
develop quantified risk scenarios and how to (not) aggregate single risks which may have
a simultaneous impact on a specific business objective.
3.4.2.1 Why Risk Quantification Matters
As already touched on, criticism of risk modelling has increased considerably in recent
years. There is now a long list of counterarguments why companies should not use
quantitative risk models. However, it still remains to be clarified what the better
alternatives might be. Unfortunately, there are no such alternatives, as we learn in this
textbook. An excerpt of the opponents' arguments why risk models allegedly fail is
briefly listed here:
• The past has shown that risk models are wrong, so they will be wrong in the future too.
• There is no or too little data available for such models; the quality of the models is
thus poor.
• Nobody understands risk models, except at best the risk manager him- or herself.
• Risk quantification and subsequent risk aggregation produce false precision, hence a
qualitative evaluation must be better.
• Risk models fail due to effort and complexity.
• Basically, human experience and intuition are stronger than risk modelling.
• Garbage in, garbage out as a killer argument.
Taking into account the above arguments, we believe that opponents of risk models
sometimes have false ideas about what they can or cannot do. At this point, we would
like to clarify this and argue that there are currently no approaches superior to risk mod-
elling (see similar Rees 2015, pp. 91–92). First of all, we need to consider why a com-
pany should be concerned with risk models at all. Principally, quantitative risk models
deal with situations (expectations about the future) that cannot be perfectly understood
or anticipated because they are subject to uncertainty (risk). If this uncertainty did not
exist (e.g. regarding the net present value of a strategic project), risk models could be
entirely ignored. Of course, if a company is not willing or able to develop meaningful
assumptions regarding risk causes and risk interdependencies in the form of scenarios,
risk models do not make sense either. They do not replace the skills of developing realis-
tic assumptions of how the future might unfold.
We all are aware today that risk models are a simplification (in some cases, an over-
simplification) of the reality and that quantified risk assessments are never accurate or
only coincidentally correct (because they deal with the future). They ultimately reflect
opinions and assessments of subject matter experts, partially combined with historical
data where available. In this sense, the killer argument that all quantitative risk models
are wrong by definition is perfectly correct.
However, companies should accept that skilfully led discussions during risk assess-
ment interviews or workshops are often very fruitful. The process of discussing and
creating a quantitative risk model is often more useful than the (false) outcome per se.
During this process, assumptions are questioned, new views and ideas generated, new
discussions initiated and possible future risk potentials identified and assessed more
systematically. Quantification sometimes requires uncomfortable transparency, which
is, however, a much better basis for discussion than qualitative (verbal or scale-based)
assessments. Figures are not open to interpretation: no matter whether they are wrong
or correct, they are the better basis for a fruitful discussion.
Hiding or concealing vaguely formulated risk assessments is no longer easily possi-
ble. Consensus amongst management ultimately represented in the quantified risk model
serves as an important decision-making basis and promotes further discussions regard-
ing model assumptions and risk appetite confrontation. An aggregated model which is
totally implausible to management can also show that there is something wrong with the
assumptions about the future. For example, a risk model that displays a new strategic
option (e.g. new market entry) as a risk simulation result only with positive, profitable
scenarios would probably have to be critically questioned (maybe the true downside risk
has not been fully reflected in the model).
ERM can only be linked to value-based management if quantified risk scenarios
are available. An integration of ERM into strategic planning, budget processes or other
decisions is only possible if there is a common ground, usually this is the connection
with financial performance management. Qualitative risk management clearly fails in
this case. In the context of multi-scenario planning, which may credibly reveal risk and
opportunity impacts on objectives, qualitative risk management is not relevant.
The quantification of risk scenarios primarily enables transparency, a sound discus-
sion basis, prioritisation and comparison with other risks. It also supports the identifica-
tion of risk interdependencies and objective-based risk aggregation. It forces companies
to think through a risk scenario holistically and to check its plausibility by means of
quantification. If risks are classified purely verbally or only in rough risk classes, the
underlying scenario development is often carried out relatively imprecisely and too
broadly.
u Peter Drucker is credited with one of the most important quotes in busi-
ness management. “If you can’t measure it, you can’t improve it.” This quote
is specifically true for ERM as well. If companies are reluctant to express the
uncertainty attached to their business objectives quantitatively, then they
cannot possibly improve risk-based decision-making.
In summary, we are convinced that modern ERM is only possible on the basis of quan-
titative risk assessment. It is important to understand that risk quantification is only
a small but crucial part of the ERM puzzle. Properly understood and applied, risk
quantification creates the best possible discussion about uncertainty in the future.
Incorrectly applied, it leads to little credibility and a high potential for frustration. In
practice, it is now a question of reducing these hurdles through the success stories of
companies that benefit from quantitative risk management. Risk quantification outside
the financial industry is still very critically assessed or partially demonised in prac-
tice. It is a well-researched subject area that has been waiting for years to diffuse into
practice. This textbook encourages students to perhaps introduce this approach later in
their professional lives, or at least to take a positive stand for it.
3.4.2.2 Develop Quantitative Key Risk Scenarios
At this point, it makes sense to clarify precisely what we mean by quantitative scenario
development, and in particular how this approach differs from other risk assessment
methods and from common corporate planning and budgeting. First, we want to
differentiate quantitative risk analysis from the simple sensitivity analyses applied in
budgeting processes. In practice, it is usually put forward that financial plans and budgets
are supplemented with a pessimistic (lower bound, e.g. 90% of planned values are
achieved) and an optimistic (upper bound, e.g. 110% of planned values are achieved)
"risk" scenario, and that a risk analysis has thus already been applied.
Although such simple sensitivity analyses have their legitimacy, they are subject to
some significant disadvantages from an ERM point of view (see similar Rees 2015, p. 89):
• Very pessimistic or very optimistic scenarios (extreme values) are often not incorpo-
rated, thus such plans usually cover only a part of the entire risk distribution.
• Usually, no probability assumptions are included in such sensitivity analyses; thus no
comparisons can be made with the risk appetite statements (if appropriately defined)
and no probabilistic risk aggregation can be performed. Moreover, it remains unclear
how much uncertainty is attached to the different scenarios.
• It is not clear if the lower and upper bounds (sensitivities) comprise only true risks or
whether the plan could be optimised by simple management decisions.
• The expected value of the plan is unknown. Expected values usually differ from the
most probable outcome (which is the plan).
• The different risk sources which may impact the plan are neither fully known nor
separately identified and recorded.
Now that we have briefly clarified that sensitivity analyses are no substitute for genuine
risk quantification, we would like to briefly address the traditional risk quantification by
means of probability of occurrence and impact. As previously discussed, several prob-
lems are attached to that simple procedure. The majority of the risks cannot be com-
prehensively described as “single risk events”. For example, it is obvious that interest
rate changes, oil price fluctuations, fluctuations in sales, market entry of competitors and
many more risks can have different consequences. Even risks that are supposedly
considered binary in practice (either the risk event occurs or it does not) are in fact more
complex. The risk of a machine breakdown can manifest in different states: e.g. only one
machine breaks down for a very limited time with minor consequences, or several
machines have a more significant defect at the same time, which leads to production
downtimes. These different states are called "risk scenarios".
u The basic idea with scenario development is to produce a robust and reliable
range of the most relevant possible future states of the same risk. In many
cases, it is not possible to define only one state of a risk, assuming a risk has
exactly one probability of occurrence and exactly one impact. Thus, we need
to develop so-called "risk distributions" which cover very pessimistic but also
very optimistic scenarios, and some scenarios in between, with different
probabilities of occurrence attached to each scenario.
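Such a risk distribution can be represented as a small set of scenario–probability pairs; a minimal sketch with hypothetical figures:

```python
# Discrete "risk distribution" for a single risk: named scenarios, each with a
# probability of occurrence and an EBIT impact in EUR (hypothetical figures).
risk_distribution = [
    ("very pessimistic", 0.03, -4_000_000),
    ("pessimistic",      0.10, -1_500_000),
    ("most likely",      0.77,          0),
    ("optimistic",       0.08,    500_000),
    ("very optimistic",  0.02,  1_500_000),
]

# The probabilities must cover all considered states of the risk.
total_prob = sum(p for _, p, _ in risk_distribution)

# The expected impact is known in advance; the risk lies in the deviations
# around it (here anything between EUR -4.0 million and EUR +1.5 million).
expected_impact = sum(p * impact for _, p, impact in risk_distribution)
```

The full range of states, not the single expected figure, is what the subsequent quantification works with.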
Another reason why there is a need to fully quantify all future risk states, independent of
their source, is integration. In order for risk management and corporate
planning to be integrated, a common ground must be found, i.e. risks must be quantified.
A true integration of risk management and corporate planning can only be achieved
by linking the financial impacts of risks with financial plans. This makes plan deviations
caused by potential risks transparent and visible. These potential
deviations should be discussed by management and can either be accepted (if within
risk appetite or if the corresponding upside potential is high) or actively managed toward
an acceptable level (if risk appetite is exceeded). In other words, quantitative risk
scenarios ultimately support decision-making processes.
As previously mentioned, risk scenario analysis is a practical, highly effective tool to
conduct risk assessments. It supports the identification of cause-and-effect chains when
thinking through individual scenarios and thus incorporates interdependencies (correlations)
with other risks (e.g. a volcanic eruption scenario leads to an economic downturn which in
turn leads to a loss of sales which ultimately reduces free cash flow in year 201X).
The question at this point is: How many risk scenarios per risk have to be developed
to produce a “robust risk distribution”? The answer is not straightforward and is related
to our deterministic risk assessment approach. Let us assume that we assess the risk of a
new competitor entering the market. We have already captured the very pessimistic sce-
nario as part of the risk identification process and assessed it with a rough loss potential.
It qualified as a key risk and thus is considered for detailed quantitative risk scenario
development. The following example is the result of an interview with a strategic man-
agement representative. It describes a quantified, very pessimistic risk scenario with a
probability of occurrence attached and an EBIT amount in EUR.
Example
Mr Grob (risk manager) and Ms Frozen (strategic management representative) developed the following very pessimistic risk scenario during the risk quantification interview: Next year, a new competitor will enter the market that can take market shares
of up to 40% from us next year and 20% the year after next. After three years, our
innovative products, which are currently in the development phase, will enable us to
push this competitor out of the market again. Based on my industry experience, this
can happen with a probability of 3%. If we lose 40% and 20% of market share in the
next two years, this would have a cumulated negative impact on revenues (EUR -5
million), but also a positive impact on costs (less personnel needed, EUR +1 million).
Ultimately, EBIT of this product line is reduced by EUR 4 million.
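The interview result reduces to a simple quantified pair of probability and financial impact. A minimal sketch in Python (the figures are from the example above; the variable names are ours):

```python
# Quantifying the very pessimistic scenario from the interview:
# probability of occurrence plus financial impact on EBIT.
probability = 0.03              # expert's 3% estimate
revenue_impact = -5_000_000     # EUR, cumulated revenue loss over two years
cost_impact = 1_000_000         # EUR, personnel cost savings
ebit_impact = revenue_impact + cost_impact  # EUR -4 million

# Contribution of this scenario to the expected plan deviation,
# which is what links the risk to corporate planning.
expected_deviation = probability * ebit_impact
print(ebit_impact, expected_deviation)
```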
The next step is to quantify the very optimistic scenario in the same way. Three different
quantified scenarios are then available:
• Very pessimistic scenario (probability of occurrence
[Table residue: a categorical risk matrix rating web vandalism (WV), cyber espionage (CE), and denial of service (DS) as Green or Amber by frequency band (>10² to 10³ per year; ≤10² per year).]
The same outcome is obtained, with cyber espionage and identity theft both being very high, closely followed by denial of service. Web vandalism is lower on this scale.
Generally, moving to a more quantitative metric is preferable; the tradeoff is that it requires more data, with accuracy an important factor.
To demonstrate, assume the context of a construction firm with a portfolio of ten
jobs, involving some risk to worker safety. The firm has a safety program that can be
applied to reduce some of these risks to varying degrees on each job. Cox addressed
four different levels of risk evaluation, depending upon the level of data available. The
risk matrices that we have been looking at require little quantitative data, although as
we have demonstrated in Table 2.6, they are more convincing if they are based on
quantitative input. Table 2.11 provides full raw data for the ten construction jobs.
In Table 2.11, column 2 is the potential liability due to injury in thousands of
dollars. Column 3 is the probability of an injury if no special safety improvement is
undertaken. Column 4 is the product of column 2 and column 3, the expected loss
without action. Column 5 is the proportion of the injury probability that can be
reduced by proposed action, which leads to savings in column 6 (the product of
column 4 and column 5). Column 7 is the amount of budget that would be needed to
reduce risk. Column 8 (RRPUC) is the risk reduction per unit cost.
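The derived columns can be recomputed directly from the raw inputs. A sketch, with the raw data transcribed from columns 2, 3, 5, and 7 of Table 2.11:

```python
# Recompute Table 2.11's derived columns from the raw inputs:
# (job, liability in k$, P(injury), reducible proportion, cost of reducing in k$)
raw = [
    (1, 250, 0.30, 0.7, 25), (2, 300, 0.20, 0.5, 20),
    (3, 320, 0.15, 0.6, 25), (4, 340, 0.20, 0.3, 15),
    (5, 370, 0.11, 0.5, 20), (6, 410, 0.18, 0.6, 25),
    (7, 440, 0.33, 0.4, 20), (8, 460, 0.25, 0.7, 30),
    (9, 480, 0.20, 0.5, 20), (10, 530, 0.08, 0.4, 18),
]
table = {}
for job, liability, p, reducible, cost in raw:
    expected_loss = liability * p        # column 4 = column 2 x column 3
    savings = expected_loss * reducible  # column 6 = column 4 x column 5
    rrpuc = savings / cost               # column 8: risk reduction per unit cost
    table[job] = (expected_loss, savings, rrpuc)

for job, vals in table.items():
    print(job, tuple(round(v, 3) for v in vals))
```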
Table 2.12 gives the risk matrix in categorical terms, using the dimensions of probability of injury {below 0.19; 0.20–0.25; 0.26 and above} and liability risk {below 399; 400–599; 600 and above}.
Each combination of injury probability and liability risk has a mitigation strategy assigned. Insurance is obtained in all cases (even for subcontracting).
Assigning extra safety personnel costs additional expense. Subcontracting sacrifices
Table 2.11 Hypothetical construction data

Job   Liability risk (k$)   Prob{injury} (frequency)   Expected loss (risk)   Reducible   Savings (k$)   Cost of reducing   RRPUC
1 250 0.30 75.0 0.7 52.50 25 2.100
2 300 0.20 60.0 0.5 30.00 20 1.500
3 320 0.15 48.0 0.6 28.80 25 1.152
4 340 0.20 68.0 0.3 20.40 15 1.360
5 370 0.11 40.7 0.5 20.35 20 1.018
6 410 0.18 73.8 0.6 44.28 25 1.771
7 440 0.33 145.2 0.4 58.08 20 2.904
8 460 0.25 115.0 0.7 80.50 30 2.683
9 480 0.20 96.0 0.5 48.00 20 2.400
10 530 0.08 42.4 0.4 16.96 18 0.942
Table 2.12 Hypothetical risk matrix

                      Liability risk low   Liability risk medium   Liability risk high
Prob{injury} high     Assign safety        Assign safety           Subcontract
Prob{injury} medium   Insurance only       Assign safety           Assign safety
Prob{injury} low      Insurance only       Insurance only          Assign safety
quite a bit of expected profit, and thus is to be avoided except in extreme cases.
Table 2.12 demonstrates what Cox expressed as a limitation in that while the risk
matrix is quick and easy, it is a simplification that can be improved upon. Cox
suggested three indices, each requiring additional accurate inputs.
The first index is to use risk (the expected loss column in Table 2.11), the second
risk reduction (savings column in Table 2.11), the third the risk reduction per unit
cost (RRPUC column in Table 2.11). These would yield different rankings of which
jobs should receive the greatest attention. In all three cases, the contention is that
there is a risk reduction budget available to be applied, starting with the top-ranked
job and adding jobs until the budget is exhausted. Table 2.13 shows rankings and
budget required by job.
If there were a budget of $100k, using the risk ranking jobs 7, 8, 9, and 1 would be
given extra safety effort, as well as a 20% effort on job 6. With the risk reduction
index as well as the RRPUC index, a different order of selection would be applied,
here yielding the same set of jobs. For a budget of $150k, the risk index would
provide full treatment to job 6, add job 4, and 75% of job 2. The risk reduction index
would also provide full treatment to job 6, add job 2, and provide 40% coverage to
job 3. The RRPUC index also would again provide full treatment to job 6, add job
2, and 2/3rds coverage to job 4. The idea of all three indices is much the same, but
with more information provided. Table 2.14 shows the expected gains from these
two budget levels for each index.
Given a budget of $100k, the risk index would reduce expected losses by $58.08k
on job 7, $80.50k on job 8, $48k on job 9, $52.50k on job 1, and $8.856k on job 6, for
total risk reduction of $247.936k. As we saw, this was the same for all three indices.
But there is a difference given a budget of $150k. Here the risk index actually comes
out a bit higher than the risk reduction index, but Cox has run simulations showing that
Table 2.13 Ranking by index

Risk index ranking   Budget (k$)     Risk reduction index ranking   Budget (k$)     RRPUC ranking   Budget (k$)
Job 7 20 Job 8 30 Job 7 20
Job 8 30 Job 7 20 Job 8 30
Job 9 20 Job 1 25 Job 9 20
Job 1 25 Job 9 20 Job 1 25
Job 6 25 Job 6 25 Job 6 25
Job 4 15 Job 2 20 Job 2 20
Job 2 20 Job 3 25 Job 4 15
Job 3 25 Job 4 15 Job 3 25
Job 10 18 Job 5 20 Job 5 20
Job 5 20 Job 10 18 Job 10 18
Table 2.14 Risk reductions achieved by index

Budget   Risk index   Risk reduction index   RRPUC
$100k    247.936      247.936                247.936
$150k    326.260      324.880                326.961
risk reduction should provide a bit better performance. The RRPUC has to be at least
as good as the other two, as its basis is the sorting key. The primary point is that there
are ways to incorporate more complete information into risk management. The
tradeoff is between the availability of information and accuracy of output.
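The allocation logic behind Tables 2.13 and 2.14 is a greedy rule: fund jobs in descending order of the chosen index, and fund the last job fractionally when the budget runs out. A sketch (data transcribed from Table 2.11; results match Table 2.14 up to rounding):

```python
# Jobs with expected loss ("risk"), savings from mitigation, and mitigation
# cost, all in k$ -- transcribed from Table 2.11.
jobs = {
    1: {"risk": 75.0, "savings": 52.50, "cost": 25},
    2: {"risk": 60.0, "savings": 30.00, "cost": 20},
    3: {"risk": 48.0, "savings": 28.80, "cost": 25},
    4: {"risk": 68.0, "savings": 20.40, "cost": 15},
    5: {"risk": 40.7, "savings": 20.35, "cost": 20},
    6: {"risk": 73.8, "savings": 44.28, "cost": 25},
    7: {"risk": 145.2, "savings": 58.08, "cost": 20},
    8: {"risk": 115.0, "savings": 80.50, "cost": 30},
    9: {"risk": 96.0, "savings": 48.00, "cost": 20},
    10: {"risk": 42.4, "savings": 16.96, "cost": 18},
}
for j in jobs:  # RRPUC index = savings per unit mitigation cost
    jobs[j]["rrpuc"] = jobs[j]["savings"] / jobs[j]["cost"]

def allocate(budget, index):
    """Greedily fund jobs in descending order of `index`; last job fractional."""
    achieved, remaining = 0.0, budget
    for j in sorted(jobs, key=lambda j: jobs[j][index], reverse=True):
        if remaining <= 0:
            break
        fraction = min(1.0, remaining / jobs[j]["cost"])
        achieved += fraction * jobs[j]["savings"]
        remaining -= fraction * jobs[j]["cost"]
    return achieved

for index in ("risk", "savings", "rrpuc"):
    print(index, round(allocate(100, index), 3), round(allocate(150, index), 3))
```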
Strategy/Risk Matrix
Risk matrices can be applied to capture the essence of tradeoffs in risk and other
measures of value. In this case, we apply a risk matrix to a construction industry
study where the original authors applied an analytic hierarchy model.8 The model is
relatively straightforward. The construction context included a number of types of
work, each with a relative rating of supply risk along with a similar weighting of
strategic impact. Data is given in Table 2.15.
Figure 2.1 displays a scatter diagram of this data.
Table 2.15 Construction work risk and impact

Type            Supply risk   Strategic impact
Cement 0.05 0.34
Workforce 0.09 0.40
Aggregate 0.11 0.58
Transport 0.12 0.18
Demolition 0.12 0.38
Painting 0.15 0.25
Misc. 0.15 0.28
Steel 0.15 0.65
Insulation 0.16 0.18
Travel 0.17 0.29
Cast iron 0.18 0.23
Excavation 0.20 0.26
Locksmith 0.21 0.36
Floor cover 0.22 0.23
Infrastructure 0.23 0.58
Sanitary 0.23 0.70
Ceilings 0.25 0.24
Geotechnical 0.25 0.29
Electrical 0.25 0.57
Climate 0.26 0.34
Aluminum 0.31 0.24
Formwork 0.31 0.31
Concrete 0.46 0.92
Mosaic 0.51 0.26
Carpentry 0.54 0.24
Special forming 0.56 0.31
Stone 0.59 0.24
Scaffolding 0.62 0.29
Construction contexts could differ widely, but we will assume an operation where
the greatest profit is expected from conducting operations normally. Risk can be
reduced by spending extra money in the form of added inspection and safety
supervisors, but this would eat into profit. The least profit would be expected from
an option to outsource construction, placing the risk on subcontractors. The criteria
can be sorted in a risk matrix considering both dimensions, as in Table 2.16.
In this case, this policy would result in outsourcing (subcontracting) concrete
work, which has a supply risk rating of 0.46 and a very high strategic impact of 0.92.
Added risk control would be adopted for ten other types of work: aggregate, steel, infrastructure, sanitary, electrical, mosaic, carpentry, special forming, scaffolding, and stone.
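The policy of Table 2.16 amounts to a banded lookup on the two dimensions. A sketch (thresholds and actions transcribed from the table):

```python
def band(x):
    """Map a 0-1 rating into the four bands used by Table 2.16."""
    if x <= 0.2:
        return 0
    if x <= 0.5:
        return 1
    if x <= 0.8:
        return 2
    return 3

# ACTIONS[impact band][supply-risk band], transcribed from Table 2.16.
ACTIONS = [
    ["Normal operation", "Normal operation", "Normal operation", "Add risk control"],
    ["Normal operation", "Normal operation", "Add risk control", "Outsource"],
    ["Add risk control", "Add risk control", "Outsource", "Outsource"],
    ["Add risk control", "Outsource", "Outsource", "Outsource"],
]

def policy(supply_risk, strategic_impact):
    return ACTIONS[band(strategic_impact)][band(supply_risk)]

print(policy(0.46, 0.92))  # Concrete -> Outsource
print(policy(0.15, 0.65))  # Steel -> Add risk control
print(policy(0.12, 0.18))  # Transport -> Normal operation
```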
Fig. 2.1 Strategic impact plotted against supply risk
Table 2.16 Risk matrix of risk/strategic impact trade-off

                                Supply risk ≤0.2   Supply risk >0.2 to ≤0.5   Supply risk >0.5 to ≤0.8   Supply risk >0.8
Strategic impact >0.8           Add risk control   Outsource                  Outsource                  Outsource
Strategic impact >0.5 to ≤0.8   Add risk control   Add risk control           Outsource                  Outsource
Strategic impact >0.2 to ≤0.5   Normal operation   Normal operation           Add risk control           Outsource
Strategic impact ≤0.2           Normal operation   Normal operation           Normal operation           Add risk control
Risk Adjusted Loss
It is better to be quantitative than qualitative, but the problem is that data is not always available. Monat and Doremus9 have presented a risk-adjusted index
approach with the following steps:
• Identify risks
• Assign quantitative values for probability and dollar impact to each risk
(subjective)
• Estimate the organization’s or individual’s risk tolerance using rule-of-thumb
• Calculate Risk-Adjusted Loss (RAL formula below)
• Prioritize risks from highest RAL to lowest
RAL = Probability × Impact × [1 + ((1 − Probability) × Impact) / (2 × Risk Tolerance)]
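The computation can be sketched directly; the spot checks below reproduce entries from Tables 2.18 and 2.19, assuming the risk tolerance of 1,240,000 used in the text:

```python
def risk_adjusted_loss(p, impact, risk_tolerance):
    """Monat & Doremus: RAL = P*I * (1 + (1 - P)*I / (2*RT))."""
    return p * impact * (1 + (1 - p) * impact / (2 * risk_tolerance))

RT = 1.24 * 1_000_000  # rule-of-thumb risk tolerance for $1M net income
print(round(risk_adjusted_loss(0.70, 10_000_000, RT)))  # 15467742 (Table 2.18)
print(round(risk_adjusted_loss(0.25, 8_000_000, RT)))   # 6838710 (Table 2.19)
```

Note how the (1 − P) × I / (2 × RT) term inflates high-impact cells well beyond expected loss, which is exactly the inversion discussed below for impacts that dwarf the risk tolerance.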
Monat and Doremus include variance in the above formulation. RAL essentially adds a risk factor to expected value, based on their formulation of variance and risk aversion. Risk tolerance reflects the organization's ability to absorb risk: the larger the organization, the greater its ability to absorb risk. A rule of thumb for risk-averse companies is to multiply net income by 1.24 (other rules of thumb exist). To demonstrate, consider Table 2.7 redone in terms of assessments of impact and probability in Table 2.17, showing expected losses as P × I.
This approach could make it easier to set the color limits. For instance, expected loss above $450,000 might call for red, below $60,000 green, and in between amber. This would vary a bit from the verbal limits given in Table 2.7, where the P = 0.95, Impact = Insignificant cell was assigned an amber classification, but in Table 2.17 you can see that very little expected loss was expected. The same holds for P = 0.01, Impact = Major. The red categories were similar, except that P = 0.95, Impact = Minor was here classified as amber, as was P = 0.7, Impact = Moderate, while they were red in
Table 2.17 Table of expected losses

Probability   Insignificant   Minor       Moderate    Major         Catastrophic
              (10,000)        (100,000)   (500,000)   (1,000,000)   (10,000,000)
0.95          9500            95,000      475,000     950,000       9,500,000
0.7           7000            70,000      350,000     700,000       7,000,000
0.4           4000            40,000      200,000     400,000       4,000,000
0.2           2000            20,000      100,000     200,000       2,000,000
0.01          100             1000        5000        10,000        100,000
Table 2.7. With expected losses, it is less likely to get inversions of categories (although just because you quantify an estimate does not mean that you have removed all subjectivity). To apply the formula, assume an organization with net income of 1,000,000 per year, making RT = 1,240,000. Table 2.18 gives the risk-adjusted losses for the expected losses in Table 2.17.
The formula seems to have an anomaly for high P and high I, with an inversion.
This occurs because the high impact value of 10,000,000 overwhelms the RT of
1,240,000, making the latter component of the formula negative. Thus there is an
interesting phenomenon in the formula for high P and high I, but in reality, such
cases would easily be considered high risk, and firms should be wary of taking on
risks greater than twice their annual income. Further, the formula yields drastic
increases over expected loss when Impact is 10,000,000. Not only is the inversion
there for high probability, the extreme low probability outcome turns red (which
might be appropriate for catastrophic loss).
Monat and Doremus also suggest using their formula to rank order new risks. For a new case with an estimated probability of 0.65 and an estimated impact of $4,000,000, the RAI would be 4,067,742, definitely in the red zone. If a portfolio of five new projects were being considered with the estimates given in Table 2.19 (along with the RT of 1,240,000 used above), the RAI could provide a basis for ranking relative risks.
Table 2.18 Table of RAI

Probability   Insignificant   Minor       Moderate    Major         Catastrophic
              (10,000)        (100,000)   (500,000)   (1,000,000)   (10,000,000)
0.95          9502            95,192      479,788     969,153       11,415,323
0.7           7008            70,847      371,169     784,677       15,467,742
0.4           4010            40,968      224,194     496,774       13,677,419
0.2           2006            20,645      116,129     264,516       8,451,613
0.01          100             1040        5998        13,992        499,194
Table 2.19 New cases
Probability Impact Expected loss RAI Rank
0.65 4,000,000 2,600,000 4,067,742 3
0.15 6,000,000 900,000 2,750,806 4
0.25 8,000,000 2,000,000 6,838,710 1
0.40 5,000,000 2,000,000 4,419,355 2
0.90 1,000,000 900,000 936,290 5
Note that the expected losses for cases 3 and 4 were the same, but the RAI is much greater for case 3, which ranked highest in risk of the five cases. Based on expected loss, case 1 was the riskiest. Based on RAI, cases 3 and 4 are both rated riskier than case 1. Consideration of risk aversion is a valid approach, but it does require some assumptions, just as any quantitative model does.
Conclusions
The study of risk management has grown in the last decade in response to serious incidents threatening trust in business operations. The field is evolving, but the first step is generally considered to be the application of a systematic process, beginning with consideration of the organization's risk appetite. Risks facing the organization then need to be identified, controls generated, and the risk management process reviewed, along with historical documentation and records, to improve the process.
Risk matrices are a means to consider the risk components of threat severity and
probability. They have been used in a number of contexts, basic applications of
which were reviewed. Cox and Levine provide useful critiques of the use of risk matrices. Cox also suggested more accurate quantitative analytic tools.
An ideal approach would be to expend such measurement funds only if they enable
reducing overall cost. The interesting aspect is that we do not really know. Thus we
would argue that if you have accurate data (and it is usually worth measuring
whatever you can), you should get as close to this ideal as you can. Risk matrices
provide valuable initial tools when high levels of uncertainty are present. Quantita-
tive risk assessment in the form of indices as demonstrated would be preferred if data
to support it is available.
Notes
1. Prasad, S.B. (2011). A matrixed assessment. Internal Auditor 68(6), 63–64.
2. Day, G.S. (2007). Is it real? Can we win? Is it worth doing? Managing risk and reward in an innovation portfolio. Harvard Business Review 85(12), 110–120.
3. McIlwain, J.C. (2006). A review: A decade of clinical risk management and risk tools. Clinician in Management 14(4), 189–199.
4. Cox, L.A. Jr. (2008). What's wrong with risk matrices? Risk Analysis 28(2), 497–512.
5. Levine, E.S. (2012). Improving risk matrices: The advantages of logarithmically scaled axes. Journal of Risk Research 15(2), 209–222.
6. Ball, D.J. and Watt, J. (2013). Further thoughts on the utility of risk matrices. Risk Analysis 33(11), 2068–2078.
7. Cox, L.A. Jr. (2012). Evaluating and improving risk formulas for allocating limited budgets to expensive risk-reduction opportunities. Risk Analysis 32(7), 1244–1252.
8. Ferreira, L.M.D.F., Arantes, A. and Kharlamov, A.A. (2015). Development of a purchasing portfolio model for the construction industry: An empirical study. Production Planning & Control 26(5), 377–392.
9. Monat, J.P. and Doremus, S. (2018). An alternative to heat map risk matrices for project risk prioritization. Journal of Modern Project Management May–Aug, 104–113.
Value-Focused Supply Chain Risk Analysis 3
A fundamental premise of Keeney’s book1 is that decision makers should not settle
for those alternatives that are thrust upon them. The conventional solution process
is to generate alternative solutions to a problem, and then focus on objectives. This
framework tends to suppose an environment where decision makers are powerless
to do anything but choose among given alternatives. It is suggested that a more
fruitful approach would be for decision makers to take more control over this
process, and use objectives to create alternatives, based on what the decision
makers would like to achieve, and why objectives are important.
Hierarchy Structuring
Structuring translates an initially ill-defined problem into a set of well-defined
elements, relations, and operations. This chapter is based on concepts presented in
Keeney, and in Olson.2
Before we discuss hierarchies and their structure, we should give some basic
definitions. Keeney and Raiffa3 gave the following definitions:
Objective—the preferred direction of movement on some measure of value
Attribute—a dimension of measurement
Keeney and Raiffa distinguish between utility models, based upon tradeoffs of
return and risk found in von Neumann-Morgenstern utility theory and the more
general value models allowing tradeoffs among any set of objectives and
sub-objectives. Preferential independence concerns whether the decision maker’s
preference among attainment levels on two criteria do not depend on changes in
other attribute levels. Attribute independence is a statistical concept measured by
correlation. Preferential independence is a property of the desires of the decision
maker, not the alternatives available.
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_3
The simplest hierarchy would involve VALUE as an objective, with available alternatives branching from this VALUE node. Hierarchies generally involve additional layers of objectives when the number of branches from any one node exceeds some certain value. Cognitive psychology has found that people are poor at assimilating large quantities of information about problems. Saaty used this concept as a principle in analytic hierarchy development, calling for a maximum of seven branches from any one node in the analytic hierarchy process (AHP).4
Desirable characteristics of hierarchies given in chapter 2 of Keeney and Raiffa (1976) include:
Completeness—objectives should span all issues of concern to the decision maker, and attributes should indicate the degree to which each objective is met.
Operability—available alternatives should be characterized in an effective way.
Decomposability—preferential and certainty independence assumptions should be met.
Lack of redundancy—there should not be overlapping measures.
Size—the hierarchy should include the minimum number of elements necessary.
Keeney and Saaty both suggest starting with identification of the overall funda-
mental objective. In the past, business leaders would focus on profit. Keeney stated
that the overall objective can be the combination of more specific fundamental
objectives, such as minimizing costs, minimizing detrimental health impacts, and
minimizing negative environmental impacts. For each fundamental objective,
Keeney suggested the question, “Is it important?”
Subordinate to fundamental objectives are means objectives—ways to accom-
plish the fundamental objectives. Means objectives should be mutually exclusive
and collectively exhaustive with respect to fundamental objectives. When asked
“Why is it important?”, means objectives would be those objectives for which a clear reason relative to fundamental objectives appears. If no clear reason other than “It just is” appears, the objective probably should be a fundamental objective. Available
alternatives are the bottom level of the hierarchy, measured on all objectives
immediately superior. If alternative performance on an objective is not measurable,
Keeney suggests dropping that objective. Value judgments are required for funda-
mental objectives, and judgments about facts required for means-ends objectives
(Fig. 3.1):
Hierarchy Development Process
Hierarchies can be developed in two basic manners: top-down or bottom-up. The
most natural approach is to start at the top, identifying the decision maker’s
fundamental objective, and developing subelements of value, proceeding downward
until all measures of value are included (weeding out redundancies and
measures that do not discriminate among available alternatives). At the bottom of
the hierarchy, available alternatives can be added. It is at this stage that new and
better alternatives are appropriate to consider. The top-down approach includes the
following phases:5
1. Ask for overall values
2. Explain the meanings of initial value categories and interrelationships
WHAT IS MEANT by this value?
WHY IS THIS VALUE IMPORTANT?
HOW DO AVAILABLE OPTIONS AFFECT attaining this value?
3. Get a list of concerns—as yet unstructured
The aim of this approach is to gain as wide a spectrum of values as possible. Once
they are attained, then the process of weeding and combining can begin.
The value-focused approach has been applied to supply chain risk identification.6
Here we will present our view of value-focused analysis to a representative supply
chain risk situation. We hypothesize a supply chain participant considering location
of a plant to produce products for a multinational retailer. We can start looking for
overall values, using the input from published sources given in Table 3.1. The first
focus is on the purpose of the business—the product. Product characteristics of
importance include its quality, meeting specifications, cost, and delivery. In today’s
business environment, we argue that service is part of the product. We represent that
in our hierarchy with the concept of manufacturability and deliverability to
consumer (which reflects life cycle value to the customer). The operation of the
supply chain is considered next, under the phrase “management,” which reflects the
ability of the supply chain to communicate, and to be agile in response to changes.
Fig. 3.1 Value hierarchy framework
There are also external risks, which we cluster into the three areas of political
(regulation, as well as war and terrorism), economic (overall economic climate as
well as the behavior of the specific market being served), and natural disaster. Each
of these hierarchical elements can then be used to identify specific risks for a given
supply chain situation. We use those identified in Table 3.1 to develop a value
hierarchy.
Table 3.1 Value hierarchy for supply chain risk

Product
  Quality
  Cost: Price; Investment required; Holding cost/service level tradeoff; On-time delivery
Service
  Manufacturability: Outsourcing opportunity cost/risk tradeoff; Ability to expand production; New technology breakthroughs; Product obsolescence
  Deliverability: Transportation system; Insurance cost
Management
  Communication: IS breakdown; Distorted information leading to bullwhip effect; Forecast accuracy; Integration; Viruses/bugs/hackers
  Flexibility: Agility of sources; Ability to replace sources as needed
  Safety: Plant disaster
  Labor: Risk of strikes, disputes
Political
  Government: Customs and regulations; War and terrorism
Economic
  Overall economy: Economic downturn; Exchange rate risk
  Specific regional economy: Labor cost influence; Changes in competitive advantage
  Specific market: Price fluctuation; Customer demand volatility; Customer payment
Natural disaster
  Uncontrollable disaster; Diseases, epidemics
The next step in multiple attribute analysis is to generate the alternatives. There
are a number of decisions that might be made, to include vendor selection, plant
siting, information system selection, or the decision to enter specific markets by
region or country. For some of these, there may be binary decisions (enter a
country’s market or not), or there might be a number of variants (including different
degrees of entering a specific market). In vendor selection and in plant siting, there
may be very many alternatives. Usually, multiple attribute analysis focuses on two to
seven alternatives that are selected as most appropriate through some screening
process. Part of the benefit of value analysis is that better alternatives may be
designed as part of the hierarchical development, seeking better solutions
performing well on all features.
Suggestions for Cases Where Preferential Independence Is Absent
If an independence assumption is found to be inappropriate, either a fundamental objective has been overlooked or means objectives are being used as fundamental objectives. Therefore, identification of the absence of independence should lead to
greater understanding of the decision maker’s fundamental objectives.
Multiattribute Analysis
The next step of the process is to conduct multiattribute analysis. There are a
number of techniques that can be applied.7 Multiattribute utility theory (MAUT)
can be supported by software products such as Logical Decision, which are usually
applied in more thorough and precise analyses. The simple multiattribute rating
theory (SMART)8 can be used with spreadsheet support, and is usually the easiest
method to use. Analytic hierarchy process can also be applied, as was the case in all
of the cases applying multiple objective analysis. Expert Choice software is avail-
able, but allows only seven branches, so is a bit more restrictive than MAUT, and
much more restrictive than SMART. Furthermore, the number of pairwise
comparisons required in AHP grows enormously with the number of branches.
Still, users often are willing to apply AHP and feel confident in its results.9 Here
we will demonstrate using SMART for a decision involving site selection of a plant
within a supply chain.
The SMART Technique
Edwards proposed a ten-step technique. Some of these steps include the process of identifying objectives and organizing these objectives into a hierarchy. Guidelines concerning the pruning of these objectives to a reasonable number were provided.
Step 1: Identify the person or organization whose utilities are to be
maximized Edwards argued that MAUT could be applied to public decisions in
the same manner as was proposed for individual decision making.
Step 2: Identify the issue or issues Utility depends on the context and purpose of
the decision.
Step 3: Identify the alternatives to be evaluated This step would identify the
outcomes of possible actions, a data gathering process.
Step 4: Identify the relevant dimensions of value for evaluation
of the alternatives It is important to limit the dimensions of value to those that
are important for this particular decision. This can be accomplished by restating and
combining goals, or by omitting less important goals. Edwards argued that it was not
necessary to have a complete list of goals. If the weight for a particular goal is quite
low, that goal need not be included. There is no precise range of goals for all
decisions. However, eight goals was considered sufficiently large for most cases,
and fifteen too many.
Step 5: Rank the dimensions in order of importance For decisions made by one
person, this step is fairly straightforward. Ranking is a decision task that is easier
than developing weights, for instance. This task is usually more difficult in group
environments. However, groups including diverse opinions can lead to a more
thorough analysis of relative importance, as all sides of the issue are more likely to
be voiced. An initial discussion could provide all group members with a common
information base. This could be followed by identification of individual judgments
of relative ranking.
Step 6: Rate dimensions in importance, preserving ratios The least important
dimension would be assigned an importance of 10. The next-least-important dimen-
sion is assigned a number reflecting the ratio of relative importance to the
least important dimension. This process is continued, checking implied ratios as
each new judgment is made. Since this requires a growing number of comparisons,
there is a very practical need to limit the number of dimensions (objectives).
Edwards expected that different individuals in the group would have different
relative ratings.
Step 7: Sum the importance weights, and divide each by the sum This step
allows normalization of the relative importances into weights summing to 1.0.
Step 8: Measure the location of each alternative being evaluated on each
dimension Dimensions were classified into the groups: subjective, partly subjec-
tive, and purely objective. For subjective dimensions, an expert in this field would
estimate the value of an alternative on a 0–100 scale, with 0 as the minimum
plausible value and 100 the maximum plausible value. For partly subjective
dimensions, objective measures exist, but attainment values for specific alternatives
must be estimated. Purely objective dimensions can be measured. Raiffa advocated
identification of utility curves by dimension.10 Edwards proposed the simpler expe-
dient of connecting the maximum plausible and minimum plausible values with a
straight line.11 It was argued that the straight line approach would provide an
acceptably accurate approximation.
Step 9: Calculate utilities for alternatives Uj = Σk wk ujk, where Uj is the utility
value for alternative j, wk is the normalized weight for objective k, and ujk is the
scaled value for alternative j on dimension k, with Σk wk = 1. The wk values were
obtained from Step 7 and the ujk values were generated in Step 8.
Step 10: Decide If a single alternative is to be selected, select the alternative with
maximum Uj. If a budget constraint existed, rank order alternatives in the order of
Uj/Cj where Cj is the cost of alternative j. Then alternatives are selected in order of
highest ratio first until the budget is exhausted.
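Steps 7–10 reduce to a few lines of arithmetic. The sketch below is illustrative only: the three alternatives, their ratings, scores, costs, and the budget are all invented for this example, not taken from the text.

```python
# Step 6 output: ratio-scale importance ratings (least important dimension = 10).
# All data here are hypothetical, for illustration only.
ratings = {"quality": 60, "cost": 40, "delivery": 10}

# Step 7: normalize the ratings into weights summing to 1.0.
total = sum(ratings.values())
weights = {k: v / total for k, v in ratings.items()}

# Step 8: scaled 0-100 locations of each alternative on each dimension.
scores = {
    "A": {"quality": 90, "cost": 40, "delivery": 70},
    "B": {"quality": 60, "cost": 80, "delivery": 50},
    "C": {"quality": 50, "cost": 90, "delivery": 90},
}

# Step 9: Uj = sum over k of wk * ujk.
utility = {j: sum(weights[k] * u for k, u in dims.items())
           for j, dims in scores.items()}

# Step 10 with a budget constraint: rank by Uj / Cj, select until exhausted.
costs = {"A": 50, "B": 30, "C": 40}
budget = 70
selected, remaining = [], budget
for j in sorted(utility, key=lambda a: utility[a] / costs[a], reverse=True):
    if costs[j] <= remaining:
        selected.append(j)
        remaining -= costs[j]
```

With these numbers, alternative A has the highest utility, but B and C have better utility-per-cost ratios, so under the budget they are selected first.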
Plant Siting Decision
Assume that a supply chain vendor is considering sites for a new production facility.
Management has considered the factors that they feel are important in this decision
(the criteria):
• Acquisition and building cost
• Expected cost per unit
• Work force ability to produce quality product
• Work force propensity for labor dispute
• Transportation system reliability
• Expandability
• Agility to changes in demand
• Information system linkage
• Insurance structure
• Tax structure
• Governmental stability
• Risk of disaster
Each of these factors needs to be measured in some way. If possible, objective
data would be preferred, but often subjective expert estimates are all that is
available. The alternatives need to be identified as well. There are an infinite
number of possible sites, but the number considered is always filtered down to a
smaller set. Here we will start with ten options. Each of them has estimated
performances on each of the twelve criteria listed in Table 3.2.
The SMART Technique 39
Each of the choices involves some tradeoff. With twelve criteria, it will be rare
that one alternative (of the final set of filtered choices) will dominate another,
meaning that it is at least as good or better on all criteria measures, and strictly
better on at least one criterion.
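That dominance relation is easy to check mechanically. A minimal sketch, with two hypothetical score vectors in which every criterion is scaled so that higher is better:

```python
def dominates(x, y):
    """True if x is at least as good as y on every criterion
    and strictly better on at least one (higher is better)."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Hypothetical 0-1 scores on three criteria.
site_1 = [0.9, 0.8, 0.7]
site_2 = [0.6, 0.8, 0.5]
```

In a filtered set of realistic alternatives, dominated options have usually already been screened out, which is why tradeoffs remain among those that survive.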
Each measure can now be assigned a value score on a 0–1 scale, with 0 being the
worst performance imaginable and 1 being the best performance imaginable. This
reflects the decision maker’s perception, a subjective value. For our data, a
possible set of values is shown in Table 3.3.
The SMART method now needs to identify relative weights for the importance of
each criterion in the opinion of the decision maker or decision making group. This
process begins by sorting the criteria by importance. One possible ranking:
• Work force ability to produce quality product
• Expected cost per unit
• Risk of disaster
• Agility to changes in demand
• Transportation system reliability
• Expandability
• Governmental stability
• Tax structure
• Insurance structure
• Acquisition and building cost
• Information system linkage
• Work force propensity for labor dispute
Table 3.2 Plant siting data
Location A&B UnitC Quality Labor Trans Expand
Alabama $20 m $5.50 High Moderate 0.30 Good
Utah $23 m $5.60 High Good 0.28 Poor
Oregon $24 m $5.40 High Low 0.31 Moderate
Mexico $18 m $3.40 Moderate Moderate 0.25 Good
Crete $21 m $6.20 High Low 0.85 Poor
Indonesia $15 m $2.80 Moderate Moderate 0.70 Fair
Vietnam $12 m $2.50 Good Good 0.75 Good
India $13 m $3.00 Good Good 0.80 Good
China #1 $17 m $3.10 Good Good 0.60 Fair
China #2 $15 m $3.20 Good Good 0.55 Good
Location Agility IS link Insurance Tax Govt Disaster
Alabama 2 mos Very good $400 $1000 Very good Hurricane
Utah 3 mos Very good $350 $1200 Very good Drought
Oregon 1 mo Very good $450 $1500 Good Flood
Mexico 4 mos Good $300 $1800 Fair Quake
Crete 5 mos Good $600 $3500 Good Quake
Indonesia 3 mos Poor $700 $800 Fair Monsoon
Vietnam 2 mos Good $600 $700 Good Monsoon
India 3 mos Very good $700 $900 Very good Monsoon
China #1 2 mos Very good $800 $1200 Very good Quake
China #2 3 mos Very good $500 $1300 Very good Quake
The SMART method proceeds by assigning the most important criterion a value
of 1.0, and then assessing relative importance by considering the proportional worth
of moving from the worst to the best on the most important criterion (quality) and
moving from the worst to the best on the criterion compared to it. For instance, the
decision maker might judge moving from the worst possible unit cost to the best
possible unit cost to be 0.8 as important as moving from the worst possible quality
to the best possible quality. We assume the following ratings based on this
procedure:
Criterion Rating Proportion
Work force ability to produce quality product Quality 1.00 0.167
Expected cost per unit UnitC 0.80 0.133
Risk of disaster Disaster 0.70 0.117
Agility to changes in demand Agility 0.65 0.108
Transportation system reliability Trans 0.60 0.100
Expandability Expand 0.58 0.097
Governmental stability Govt 0.40 0.067
Tax structure Tax 0.35 0.058
Insurance structure Insurance 0.32 0.053
Acquisition and building cost A&B 0.30 0.050
Information system linkage IS link 0.20 0.033
Work force propensity for labor dispute Labor 0.10 0.017
Proportion is obtained by dividing each rating by the sum of the ratings (6.00).
Table 3.3 Standardized scores for plant siting data
Location A&B UnitC Quality Labor Trans Expand
Alabama 0.60 0.40 0.90 0.30 0.90 1.0
Utah 0.30 0.35 0.90 0.80 0.95 0
Oregon 0.10 0.45 0.90 0.10 0.86 0.5
Mexico 0.70 0.80 0.40 0.30 1.00 1.0
Crete 0.50 0.20 0.90 0.10 0.30 0
Indonesia 0.80 0.90 0.40 0.30 0.55 0.3
Vietnam 0.90 0.95 0.60 0.80 0.50 1.0
India 0.85 0.87 0.60 0.80 0.40 1.0
China #1 0.75 0.85 0.60 0.80 0.60 0.3
China #2 0.80 0.83 0.60 0.80 0.70 1.0
Location Agility IS link Insurance Tax Govt Disaster
Alabama 0.8 1.0 0.70 0.80 1.0 0.5
Utah 0.6 1.0 0.80 0.70 1.0 0.9
Oregon 1.0 1.0 0.60 0.60 0.8 0.8
Mexico 0.4 0.7 1.00 0.40 0.4 0.4
Crete 0.2 0.7 0.50 0.00 0.8 0.3
Indonesia 0.6 0 0.30 0.90 0.4 0.7
Vietnam 0.8 0.7 0.50 1.00 0.8 0.7
India 0.6 1.0 0.30 0.85 1.0 0.7
China #1 0.8 1.0 0.10 0.70 1.0 0.8
China #2 0.6 1.0 0.55 0.65 1.0 0.4
Note that for the Disaster criterion, specifics for each locale can lead to different ratings for the same major risk category.
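The normalization is exactly that division. Recomputing the proportions from the twelve ratings just given:

```python
# Ratings from the plant siting example above; their sum is 6.00.
ratings = {
    "Quality": 1.00, "UnitC": 0.80, "Disaster": 0.70, "Agility": 0.65,
    "Trans": 0.60, "Expand": 0.58, "Govt": 0.40, "Tax": 0.35,
    "Insurance": 0.32, "A&B": 0.30, "IS link": 0.20, "Labor": 0.10,
}
total = sum(ratings.values())                         # 6.00
weights = {k: v / total for k, v in ratings.items()}  # proportions summing to 1.0
```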
Overall value for each alternative site can then be computed as the sum-product of
the criterion relative importances and the matrix of scores on the criteria.
Location A&B UnitC Quality Labor Trans Expand Agility IS link Insurance Tax Govt Disaster
weight 0.05 0.133 0.167 0.017 0.1 0.097 0.108 0.033 0.053 0.058 0.067 0.117
Alabama 0.6 0.4 0.9 0.3 0.9 1 0.8 1 0.7 0.8 1 0.5
Utah 0.3 0.35 0.9 0.8 0.95 0 0.6 1 0.8 0.7 1 0.9
Oregon 0.1 0.45 0.9 0.1 0.86 0.5 1 1 0.6 0.6 0.8 0.8
Mexico 0.7 0.8 0.4 0.3 1 1 0.4 0.7 1 0.4 0.4 0.4
Crete 0.5 0.2 0.9 0.1 0.3 0 0.2 0.7 0.5 0 0.8 0.3
Indonesia 0.8 0.9 0.4 0.3 0.55 0.3 0.6 0 0.3 0.9 0.4 0.7
Vietnam 0.9 0.95 0.6 0.8 0.5 1 0.8 0.7 0.5 1 0.8 0.7
India 0.85 0.87 0.6 0.8 0.4 1 0.6 1 0.3 0.85 1 0.7
China #1 0.75 0.85 0.6 0.8 0.6 0.3 0.8 1 0.1 0.7 1 0.8
China #2 0.8 0.83 0.6 0.8 0.7 1 0.6 1 0.55 0.65 1 0.4
This analysis ranks the alternatives as follows:
Rank Site Score
1 Vietnam 0.762
2 Alabama 0.754
3 India 0.721
4 China #2 0.710
5 Oregon 0.706
6 China #1 0.679
7 Utah 0.674
8 Mexico 0.626
9 Indonesia 0.557
10 Crete 0.394
This indicates a close result for Vietnam and Alabama, with the first seven sites all
reasonably close as well. There are a couple of approaches. More detailed
comparisons might be made between Vietnam and Alabama. Another approach is
to look at characteristics that these alternatives were rated low on, with the idea that
maybe the site’s characteristics could be improved.
Conclusions
Structuring of a value hierarchy is a relatively subjective activity, with a great deal of
possible latitude. In principle it is good to have a complete hierarchy, including
everything that could be of importance to the decision maker; in practice, however,
completeness yields unworkable analyses. Hierarchies should instead focus on those
criteria that are important in discriminating among available alternatives.
those criteria that are most important to the decision maker, and that will help the
decision maker make the required choice.
This chapter presented the value-focused approach, and the SMART method.
These were demonstrated in the context of the supply chain risk management
decision of selecting a plant location for production of a component. The methods
apply for any decision involving multiple criteria.
Notes
1. Keeney, R.L. (1992). Value-Focused Thinking: A Path to Creative
Decisionmaking. Cambridge, MA: Harvard University Press.
2. Olson, D.L. (1996). Decision Aids for Selection Problems. New York: Springer.
3. Keeney, R.L. & Raiffa, H. (1976). Decisions with Multiple Objectives:
Preferences and Value Tradeoffs. New York: John Wiley & Sons.
4. Saaty, T.L. (1988). Decision Making for Leaders: The Analytic Hierarchy
Process for Decisions in a Complex World. Pittsburgh: RWS Publications.
5. Keeney, R.L., Renn, O. and von Winterfeldt, D. (1987). Structuring Germany’s
energy objectives, Energy Policy 15, 352–362.
6. Neiger, D., Rotaru, K and Churilov, L. (2009). Supply chain risk identification
with value-focused process engineering, Journal of Operations Management
27, 154–168.
7. Olson (1996), op cit.
8. Edwards, W. (1977). How to use multiattribute utility measurement for social
decisionmaking, IEEE Transactions on Systems, Man, and Cybernetics,
SMC-7:5, 326–340.
9. Olson, D.L., Moshkovich, H.M., Schellenberger, R. and Mechitov, A.I. (1995).
Consistency and Accuracy in Decision Aids: Experiments with Four
Multiattribute Systems, Decision Sciences 26:6, 723–749; Olson, D.L.,
Mechitov, A.I. and Moshkovich, H. (1998). Cognitive Effort and Learning
Features of Decision Aids: Review of Experiments, Journal of Decision Systems
7:2, 129–146.
10. Raiffa, H. (1968). Decision Analysis. Reading, MA: Addison-Wesley.
11. Edwards, W. (1977), op cit.
4 Examples of Supply Chain Decisions Trading Off Criteria
In prior editions, we reviewed five cases of models trading off criteria, seeking to
demonstrate how multiple criteria models can be applied, along with value analysis
to seek improvement. Sometimes risk is dealt with directly. Other times it is implicit,
especially in cases involving environmental issues. In this third edition, we present
five more cases.
In the five cases to follow, we will try to demonstrate the kinds of trade-off
decisions often applied in practice. A number of different multiple criteria
methodologies were applied in the original papers. We demonstrate with the less
complex SMART methodology, which seldom appears in recent journal publications
because journals require novel approaches, even though it is well-known (and quite
useful). You can refer to the original articles if you are interested in the specific
methodologies they used. We try to use their data as closely as possible.
Case 1: Zhu, Shah and Sarkis (2018)1
This paper dealt with identifying product lines in the beverage industry to delete
when faced with a downsizing situation, seeking lean and sustainable supply chain
organization. Companies often have extensive product portfolios, making it difficult
to be lean. The authors consider strategic impact, resource management, financial
performance, and stakeholder interest. They apply the analytic hierarchy process and
its variant, the analytic network process, as well as benefit-cost and risk analysis. The
alternatives were three product families. Product family A was a signature brand
with many loyal customers, making it difficult to make changes. Product family B
was a secondary line, a healthier version of product family A, and a substitute.
Product family C was an innovative product line facing less direct competition than
the other two product families.
The company had a mature supply chain and a reputation for high quality. They
considered nine product candidates to delete with the intent of focusing on a leaner
supply chain system.
© Springer-Verlag GmbH Germany, part of Springer Nature 2020. D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_4
These alternatives consisted of plastic (product family A), glass
(product family B), or metal (product family C) containers for each of the three
product families.
The analysis applied analytic hierarchy process (AHP) to obtain relative weights
for the higher level criteria of product characteristics as well as impact on internal
and external operational factors. They then went on to holistically evaluate plastic,
glass, and metal variants of each of the three products (nine alternatives) using
analytic network processing, followed by benefit-cost ratio adjusted for opportunity
(adjusting benefits) and risk (adjusting costs). The three analyses looked at different
aspects of the decision. We will use their AHP study to compare with a value
analysis.
Product-specific decision characteristics included the three criteria of impact on
resources (IOR), impact on strategy (IOS), and impact on financial performance
(IOFP). Criteria relating to internal operations were competencies (CO), supply
chain operations activities (SCOA), and lean dimensions (LD). External environ-
mental characteristics involved environmental sustainability (ES) and external
shareholders (SH). Thus eight criteria were involved. These criteria can be rank
ordered as follows:
CO > ES > SCOA > SH > LD > IOS = IOFP > IOR
Swing weighting for these criteria could be accomplished as in Table 4.1.
To make the sum equal 1.0, the last weight (IOR) was raised to 0.02 from the
calculated 0.01. The three product families A, B, and C then need to be scored on
all eight criteria. These scores are given in Table 4.2.
Here product family C won out, with product family A second. Thus the analysis
would recommend going with metal containers in place of plastic. These rankings
match what the source authors obtained from AHP.
Table 4.1 Product deletion case swing weighting
Criteria Code From max Weight From min Weight Compromise
Internal operations competencies CO 100 0.303 300 0.303 0.30
Environmental sustainability ES 70 0.212 220 0.222 0.22
Supply chain operations activities SCOA 55 0.167 170 0.172 0.17
External stakeholders SH 35 0.106 110 0.111 0.11
Lean dimensions LD 35 0.106 100 0.101 0.10
Impact on strategy IOS 15 0.045 40 0.040 0.04
Impact on financial performance IOFP 15 0.045 40 0.040 0.04
Impact on resources IOR 5 0.015 10 0.010 0.01
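The arithmetic behind Table 4.1 can be sketched in a few lines. The forward ("from max") and backward ("from min") ratings are normalized separately; the published Compromise column is a judgmental rounding between the two rather than an exact average, so averaging below is only an approximation of that step.

```python
# Swing-weighting ratings from Table 4.1 (forward and backward passes).
from_max = {"CO": 100, "ES": 70, "SCOA": 55, "SH": 35, "LD": 35,
            "IOS": 15, "IOFP": 15, "IOR": 5}
from_min = {"CO": 300, "ES": 220, "SCOA": 170, "SH": 110, "LD": 100,
            "IOS": 40, "IOFP": 40, "IOR": 10}

def normalize(ratings):
    """Divide each rating by the total so the weights sum to 1.0."""
    total = sum(ratings.values())
    return {k: v / total for k, v in ratings.items()}

w_max, w_min = normalize(from_max), normalize(from_min)
# Approximate the compromise column as the midpoint of the two passes.
compromise = {k: (w_max[k] + w_min[k]) / 2 for k in from_max}
```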
Value Analysis
Value analysis looks at relative strengths and weaknesses of each option. The score
matrix given in Table 4.2 provides a means to assess these. Product family C’s
relative strengths are in external stakeholders, internal operational competencies,
supply chain operations activities, and impact on strategy, while it is relatively weak
on environmental sustainability and impact on financial performance. Product family A is
strong on environmental sustainability, lean dimensions, financial performance, and
impact on resources, while weak on internal operations, supply chain operations,
and strategy impact. Product family B was not best on anything, while weakest with
respect to external stakeholder and resource impact.
Case 2: Liu, Eckert, Yannou-Le Bris, and Petit (2019)2
This case involves a larger dataset. Supplier selection is a widely popular supply
chain decision supported by multiple criteria models. Liu et al. (2019) modeled
sustainability balanced against economic value and social responsibility, in line with
the triple bottom line approach emphasized in Europe. They combined fuzzy input
into the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
and the analytic network process (ANP) to the task of ranking 12 types of farmers
and intermediate suppliers in a pork value chain in France. In that study, two
decision makers were involved, applying pairwise comparisons to the three triple
bottom line factors, as well as the three groups of subcriteria.
The 12 sources varied in feeding practices, dominant feed composition, size, and
horizontal or vertical storage. Twenty measures of environmental, economic, and
social (the triple bottom line) aspects were considered as displayed in Table 4.3.
Table 4.2 Product family scores on criteria
Criteria Code Weight Prod. family A Prod. family B Prod. family C
Internal operations competencies CO 0.30 0.2 0.6 0.8
Environmental sustainability ES 0.22 1 0.2 0.1
Supply chain operations activities SCOA 0.17 0.2 0.6 0.8
External stakeholders SH 0.11 0.6 0.2 1
Lean dimensions LD 0.10 1 0.5 0.5
Impact on strategy IOS 0.04 0.2 0.6 0.8
Impact on financial performance IOFP 0.04 1 0.7 0.3
Impact on resources IOR 0.02 1 0.2 0.4
Value score 0.556 0.450 0.634
The 12 source alternatives were categories of suppliers in the value chain, as
shown in Table 4.4.
Measures were given for each criterion on each type of farmer.
The SMART methodology would begin by identifying swing weights. The first
step in that process would be to rank order the 20 criteria. The rank order complying
with the analytic network process values obtained in the original article were:
C2.4 > C3.1 > C2.1 = C2.7 > C2.5 = C2.6 > C1.7 > C3.2 = C3.3 = C3.4
> C2.2 = C2.3 > C1.6 > C1.1 = C1.8 = C1.9 > C1.3 > C1.4 = C1.5
> C1.2
The greatest weight was given to feed manufacturing cost (C2.4), more than
double that of the second-ranked measure of work hours (C3.1). The lowest weights
were given to the environmental factors, with the exception of land occupation
(C1.7).
Swing weighting could be applied as shown in Table 4.5.
The next step is to obtain relative scores for each alternative on each criterion.
Table 4.6 gives normalized scores where 1.0 is the best score, and 0 the worst.
Table 4.3 Pork supply chain criteria
TBL component Criteria Code Measure
Environmental Freshwater eutrophication C1.1 Kg SO2 eq
Terrestrial acidification C1.2 Kg SO2 eq
Human toxicity C1.3 Kg 1,4-DB eq
Fossil depletion C1.4 Kg oil eq
Water depletion C1.5 M3
Climate change C1.6 Kg CO2 eq
Land occupation C1.7 M2a
Freshwater ecotoxicity C1.8 Kg 1,4-DB eq
Marine ecotoxicity C1.9 Kg 1,4-DB eq
Economic Investment <5 years C2.1 Euro/ton
Investment 5–9 years C2.2 Euro/ton
Investment 10–14 years C2.3 Euro/ton
Feed manufacturing cost C2.4 Euro/ton
Total feed system cost C2.5 Euro/ton
Waste C2.6 Percentage
Labor cost C2.7 Euro/ton
Social Work hours C3.1 Hours/day
Biodiversity varieties C3.2 Number by formula
Biodiversity species C3.3 Number by formula
Localness C3.4 Percent by formula
Liu et al. found ranks by preference as follows:
Excellent: S1 > S2
Acceptable: S10 > S3 > S4
Poor: S12 > S7 > S8 > S11 > S5 > S6
Table 4.6 has the same ranking for the Excellent category, and S10 also came
third. There was some difference for the intermediate-ranked categories, but quite a
bit of similarity for the lower ranks.
Value Analysis
Value analysis is possible by identifying where each alternative has relative
strengths and weaknesses. S1, the colza farmer, was strongest on six measures,
including low land occupation, short-term investment, low waste, low labor cost,
and work hours. It was weakest on long-term investment. The twelfth-ranked
alternative, S6, was strongest on localness, but weak on human toxicity, long-term
investment, waster, and labor cost. The context of this problem was to rank given
alternatives. The value analysis can show why ranking was as it ended up.
Table 4.4 Types of farmers
Code Type name Orientation Dominant feed Size of feed storage Type of storage
S1 Bought colza Purchasing Colza
S2 Bought soy Purchasing Soy
S3 Made < 2500 T Producing Dry cereals Silo < 2500 T
S4 Made > 2500 T Producing Dry cereals Silo < 2500 T
S5 Made maize Hori < 2500 T Producing Corn Silo < 2500 T Horizontal
S6 Made maize Hori > 2500 T Producing Corn Silo < 2500 T Horizontal
S7 Made maize Vert < 2500 T Producing Corn Silo < 2500 T Vertical
S8 Made maize Vert > 2500 T Producing Corn Silo < 2500 T Vertical
S9 Mix Horizontal Mix Dry cereals Horizontal
S10 Mix Vertical Mix Dry cereals Vertical
S11 Mix maize Horizontal Mix Corn Horizontal
S12 Mix maize Vertical Mix Corn Vertical
Case 3: Khatri and Srivastava (2016)3
This case involves technology selection considering environmental considerations.
The context is an Indian aluminum recycling company that operated five plants.
They wanted to align business practices with sustainable development, and identified
three furnace and burner technologies seeking the most promising technology to
reach their goals.
The AHP model these authors used involved three alternatives: RER, a reverberatory
furnace with a regenerative burner; OROO, a rotary furnace with oxy fuel burner
technology; and REO, a reverberatory furnace with oxy fuel burner technology.
They considered the following six criteria:
1. Environmental sustainability (EnvS) considered landfill area saved, hazardous
chemical reduction, reutilization of wastes, environmental emission reduction,
and recycled material usage.
Table 4.5 Swing weighting
Criteria Code From max. Weight From min. Weight Compromise
Feed manufacturing cost C2.4 100 0.181 120 0.152 0.167
Work hours C3.1 50 0.091 60 0.076 0.080
Investment <5 years C2.1 45 0.082 55 0.070 0.075
Labor cost C2.7 45 0.082 55 0.070 0.075
Total feed system cost C2.5 40 0.073 50 0.063 0.068
Waste C2.6 40 0.073 50 0.063 0.068
Land occupation C1.7 35 0.064 45 0.057 0.060
Biodiversity varieties C3.2 30 0.054 40 0.051 0.053
Biodiversity species C3.3 30 0.054 40 0.051 0.053
Localness C3.4 30 0.054 40 0.051 0.053
Investment 5–9 years C2.2 20 0.036 35 0.044 0.040
Investment 10–14 years C2.3 20 0.036 35 0.044 0.040
Climate change C1.6 15 0.027 30 0.038 0.033
Freshwater eutrophication C1.1 10 0.018 25 0.032 0.025
Freshwater ecotoxicity C1.8 10 0.018 25 0.032 0.025
Marine ecotoxicity C1.9 10 0.018 25 0.032 0.025
Human toxicity C1.3 8 0.015 20 0.025 0.020
Fossil depletion C1.4 5 0.009 15 0.019 0.015
Water depletion C1.5 5 0.009 15 0.019 0.015
Terrestrial acidification C1.2 3 0.005 10 0.013 0.010
Table 4.6 Scores for alternatives
Criterion Wgt S1 S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12
C2.4 0.167 0 0 0.6 0.6 0.3 0.3 0.3 0.3 0.5 0.5 0.4 0.4
C3.1 0.080 1 1 0.8 0.8 0.5 0.5 0.8 0.8 0.3 0.8 0.3 0.8
C2.1 0.075 1 1 0.3 0.2 0.3 0.2 0.3 0.2 0.5 0.3 0.5 0.3
C2.7 0.075 1 1 0.5 0.5 0.2 0.2 0.5 0.5 0.1 0.5 0.1 0.5
C2.5 0.068 0.8 0.7 0.4 0.4 0.6 0.6 0.6 0.6 0.55 0.55 0.65 0.65
C2.6 0.068 1 1 0.8 0.8 0.2 0.2 0.2 0.2 0.8 0.8 0.2 0.2
C1.7 0.060 1 0.5 0.2 0.2 0.4 0.4 0.4 0.4 0.6 0.6 0.7 0.7
C3.2 0.053 0.8 0.9 0.3 0.3 0.6 0.6 0.6 0.6 0.2 0.2 0.5 0.5
C3.3 0.053 0.8 0.9 0.3 0.3 0.7 0.7 0.7 0.7 0.1 0.1 0.6 0.6
C3.4 0.053 0.1 0.4 0.9 0.9 0.8 0.8 0.8 0.8 0.7 0.7 0.3 0.3
C2.2 0.040 1 1 0.7 0.4 0.7 0.4 0.7 0.4 0.9 0.7 0.9 0.7
C2.3 0.040 0 0 0.5 0.2 0.5 0.2 0.5 0.2 0.8 0.5 0.8 0.5
C1.6 0.033 0.6 0.2 0.3 0.3 0.4 0.4 0.4 0.4 0.35 0.35 0.33 0.33
C1.1 0.025 0.9 0.6 0.4 0.4 0.5 0.5 0.5 0.5 0.6 0.6 0.2 0.2
C1.8 0.025 0.6 0.8 0.4 0.4 0.3 0.3 0.3 0.3 0.5 0.5 0.2 0.2
C1.9 0.025 0.7 0.8 0.2 0.2 0.3 0.3 0.3 0.3 1 1 0.9 0.9
C1.3 0.020 0.8 0.6 0.1 0.1 0.2 0.2 0.2 0.2 1 1 0.9 0.9
C1.4 0.015 0.5 0.2 0.6 0.6 0.7 0.7 0.7 0.7 0.9 0.9 0.1 0.1
C1.5 0.015 0.6 0.9 0.3 0.3 0.35 0.35 0.35 0.35 0.7 0.7 0.4 0.4
C1.2 0.010 0.5 0.5 0.7 0.7 0.8 0.8 0.8 0.8 0.4 0.4 0.2 0.2
Score 0.655 0.627 0.503 0.471 0.434 0.402 0.480 0.449 0.513 0.548 0.449 0.484
Rank 1 2 5 8 11 12 7 9 4 3 10 6
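The value scores in Table 4.6 can be reproduced from the Table 4.5 compromise weights. The sketch below recomputes the best-ranked (S1) and worst-ranked (S6) columns; the remaining columns follow the same pattern.

```python
# Compromise weights from Table 4.5, in the order:
# C2.4, C3.1, C2.1, C2.7, C2.5, C2.6, C1.7, C3.2, C3.3, C3.4,
# C2.2, C2.3, C1.6, C1.1, C1.8, C1.9, C1.3, C1.4, C1.5, C1.2
weights = [0.167, 0.080, 0.075, 0.075, 0.068, 0.068, 0.060, 0.053, 0.053,
           0.053, 0.040, 0.040, 0.033, 0.025, 0.025, 0.025, 0.020, 0.015,
           0.015, 0.010]
# Criterion scores for the top- and bottom-ranked suppliers (Table 4.6).
s1 = [0, 1, 1, 1, 0.8, 1, 1, 0.8, 0.8, 0.1, 1, 0, 0.6, 0.9, 0.6, 0.7,
      0.8, 0.5, 0.6, 0.5]
s6 = [0.3, 0.5, 0.2, 0.2, 0.6, 0.2, 0.4, 0.6, 0.7, 0.8, 0.4, 0.2, 0.4,
      0.5, 0.3, 0.3, 0.2, 0.7, 0.35, 0.8]

def value_score(weights, scores):
    """Weighted sum of criterion scores."""
    return sum(w * s for w, s in zip(weights, scores))
```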
2. Social sustainability (SS) considered employee health and safety, employee skill
development, manual operation reduction, and employment in the local
community.
3. Customer orientation (CO) considered customer satisfaction, supply chain risk
mitigation, product quality improvement, and lead time reduction.
4. Technical criteria (TC) considered use of proven technology, output/input
improvement, technology licensing required, and the technology life cycle.
5. Manufacturing flexibility (MF) considered capacity growth, improved efficiency,
reduced changeover time, and inventory reduction.
6. Economic sustainability (ES) considered return on investment, profitability ratio,
operational cost, additional physical facilities required, and project cost.
Swing weighting calculations started with ranking as shown here:
ES > CO > EnvS > SS = MF > TC
Swing weighting is shown in Table 4.7.
The next step is to score alternative performance on each of the six criteria, as
given in Table 4.8.
Table 4.7 Aluminum swing weighting
Criteria From max. Weight From min. Weight Compromise
ES 100 0.290 40 0.276 0.28
CO 90 0.261 35 0.241 0.25
EnvS 70 0.203 30 0.207 0.2
SS 30 0.087 15 0.103 0.1
MF 30 0.087 15 0.103 0.1
TC 25 0.072 10 0.069 0.07
345 145 1
Table 4.8 Aluminum value scores
RER OROO REO Weight
ES Economic sustainability 1 0.75 0.3 0.25
CO Customer orientation 1 0.35 0.2 0.25
EnvS Environmental sustainability 1 0.25 0.1 0.2
SS Social sustainability 1 0.9 0.2 0.1
MF Manufacturing flexibility 0.6 1 0.4 0.1
TC Technical criteria 1 0.4 0.1 0.07
SCORE 0.930 0.543 0.212
In this case, the model overwhelmingly pointed to selecting the regenerative
burner technology (RER). Value analysis is obvious here: there is a slight
disadvantage relative to the oxy fuel burner technology with respect to
manufacturing flexibility, but RER had very strong relative advantages on
environmental sustainability, customer orientation, and technical criteria. Some
decisions are not too difficult.
Value Analysis
In this case, there were clear distinguishing performance scores. The first two
alternatives have some compensating advantage (the third does not, among the
criteria included). There were few criteria. While it is often best to focus on fewer
criteria, if there are a number of measurable items falling into clear categories, it can
work. In this case, RER is inferior to OROO only on manufacturing flexibility. Thus
value analysis might seek ways to improve manufacturing flexibility for RER.
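The SCORE row of Table 4.8 is again a weight-times-score sum (note the printed weights sum to 0.97 rather than 1.0, presumably from rounding):

```python
# Weights and technology scores as printed in Table 4.8.
weights = {"ES": 0.25, "CO": 0.25, "EnvS": 0.2, "SS": 0.1, "MF": 0.1, "TC": 0.07}
scores = {
    "RER":  {"ES": 1,    "CO": 1,    "EnvS": 1,    "SS": 1,   "MF": 0.6, "TC": 1},
    "OROO": {"ES": 0.75, "CO": 0.35, "EnvS": 0.25, "SS": 0.9, "MF": 1,   "TC": 0.4},
    "REO":  {"ES": 0.3,  "CO": 0.2,  "EnvS": 0.1,  "SS": 0.2, "MF": 0.4, "TC": 0.1},
}
# Weighted sum per technology.
value = {alt: sum(weights[c] * s for c, s in row.items())
         for alt, row in scores.items()}
```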
Case 4: Envinda, Briggs, Obuah, and Mbah (2011)4
The fourth case involves the interesting decision domain of strategy selection. A
petroleum supply chain in Nigeria faced significant risks, which were modeled in six
areas (the criteria). The purpose of the model was to support selection of one of four
risk mitigation strategies:
• Reduce risk
• Share risk
• Avoid risk
• Retain risk
The six criteria are:
1. Geological and production risk (GPR)
2. Environmental and regulatory risk (ERR)
3. Transportation risk (TR)
4. Oil availability risk (OAR)
5. Geopolitical risk (GR)
6. Reputation risk (RR)
Weight generation began with rank ordering these risks:
GPR > TR > GR > RR > OAR > ERR
Swing weighting is shown in Table 4.9.
Note here that on the backwards pass TR and GR were rated as similar; the point
of swing weighting is to get different perspectives. Ties are possible; rank reversal
would be more concerning. The next step is to obtain scores for the four alternative
risk treatments (Table 4.10).
In application, the model could use the weight set for multiple cases, each with
new scores to reflect the new situation.
Value Analysis
Here the choice for this situation would indicate a strong recommendation to reduce
risk through proactive action. The reasons are much higher scores on all criteria
except reputation risk, where simply getting out of that business opportunity was
rated as higher.
Case 5: Akyuz, Karahalios, and Celik (2015)5
The last case involves application of multiple criteria analysis to balanced scorecard
assessment. Balanced scorecards involve measuring performance on four
perspectives (financial, operational, business process, and organizational learning
and growth). These can be applied in many different contexts. The case in point
involved maritime labor compliance in a British environment. Each of the four
perspectives considered four or five factors. The authors applied AHP to rank
order the relative importance of these 19 factors with the intent of identifying
where relative emphasis might be placed in operations. In general, their model
could be used to compare performance at multiple sites. Here we simply want to
demonstrate multiple criteria modeling in a balanced scorecard setting.
Table 4.9 Oil risk swing weighting
Criteria From max. Weight From min. Weight Compromise
GPR 100 0.299 50 0.303 0.30
TR 90 0.269 30 0.182 0.22
GR 70 0.209 30 0.182 0.19
RR 30 0.090 25 0.152 0.13
OAR 25 0.075 20 0.121 0.10
ERR 20 0.060 10 0.061 0.06
335 165 1
Table 4.10 Oil risk scoring
Reduce risk Share risk Avoid risk Retain risk Weights
GPR 0.9 0.6 0.7 0.33 0.30
TR 1 0.25 0.5 0.15 0.22
GR 1 0.3 0.8 0.3 0.19
RR 0.8 0.1 1 0.25 0.13
OAR 0.9 0.45 0.25 0.35 0.10
ERR 0.9 0.4 0.8 0.7 0.06
Scores 0.928 0.374 0.675 0.299
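The strategy scores in Table 4.10 follow the same sum-product pattern:

```python
# Compromise weights from Table 4.9 and strategy ratings from Table 4.10.
weights = {"GPR": 0.30, "TR": 0.22, "GR": 0.19, "RR": 0.13, "OAR": 0.10, "ERR": 0.06}
ratings = {
    "Reduce risk": {"GPR": 0.9, "TR": 1, "GR": 1, "RR": 0.8, "OAR": 0.9, "ERR": 0.9},
    "Share risk":  {"GPR": 0.6, "TR": 0.25, "GR": 0.3, "RR": 0.1, "OAR": 0.45, "ERR": 0.4},
    "Avoid risk":  {"GPR": 0.7, "TR": 0.5, "GR": 0.8, "RR": 1, "OAR": 0.25, "ERR": 0.8},
    "Retain risk": {"GPR": 0.33, "TR": 0.15, "GR": 0.3, "RR": 0.25, "OAR": 0.35, "ERR": 0.7},
}
# Weighted sum per strategy; the largest identifies the recommendation.
score = {s: sum(weights[c] * v for c, v in row.items())
         for s, row in ratings.items()}
best = max(score, key=score.get)
```

In a repeated application, only the ratings dictionary would change from case to case while the weight set is reused.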
Each of the four balanced scorecard perspectives consisted of critical success
factors in the context of maritime labor environment assessment (Table 4.11).
Table 4.12 gives the subcriteria and swing weighting implied in the source article.
This involves rank ordering the 19 subfactors, and giving assessments of relative
importance.
Here the source authors’ intent was to rank-order the subcriteria, identifying where
emphasis would be placed. Wages clearly was the most preferred factor, reflecting a
strong emphasis on the financial perspective. Summing weights by balanced scorecard
perspective, Financial received 0.512 of the relative weight, Internal business
processes 0.238, Labor 0.164, and Learning and growth 0.086. Inherently, value
analysis is implied: the compromise weights identify relative importance using the
ratings given.
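The perspective totals quoted above can be checked by summing the Table 4.12 compromise weights grouped by code prefix:

```python
# Compromise weights from Table 4.12, keyed by subcriterion code
# (FP = financial, LP = labor, IBP = internal business, LGP = learning and growth).
compromise = {
    "FP2": 0.270, "FP3": 0.130, "IBP5": 0.110, "LP5": 0.075, "FP4": 0.068,
    "IBP4": 0.060, "LP4": 0.045, "FP1": 0.044, "LGP4": 0.035, "IBP3": 0.031,
    "LGP2": 0.025, "IBP2": 0.023, "LP2": 0.021, "LGP3": 0.016, "IBP1": 0.014,
    "LP1": 0.013, "LP3": 0.010, "LGP1": 0.006, "LGP5": 0.004,
}
by_perspective = {}
for code, w in compromise.items():
    prefix = code.rstrip("0123456789")   # strip the trailing index digit
    by_perspective[prefix] = by_perspective.get(prefix, 0) + w
```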
Value Analysis
This application differs, because its intent is to provide a balanced scorecard type of
model. This can be very useful and interesting. But value analysis here applies only
to hierarchical development, because Akyuz et al. applied AHP to performance
measurement rather than to choice among alternatives.
Table 4.11 Balanced scorecard components in Maritime Labor context
Perspective Critical success factor Code
Financial Seafarer’s employment agreements FP1
Wages FP2
Seafarer compensation for ship loss or foundering FP3
Food and catering FP4
Labor Recruitment and placement LP1
Entitlement to leave LP2
Repatriation LP3
Medical care on-board and ashore LP4
Social security LP5
Internal business Medical certificate IBP1
Manning levels IBP2
Accommodation and recreational facilities IBP3
Shipowner’s liability IBP4
Health and safety and accident prevention IBP5
Learning and growth Minimum age LGP1
Training and qualifications LGP2
Hours of work and rest LGP3
Career and skill development LGP4
Access to shore-based welfare facilities LGP5
Conclusions
The cases presented involved multiple criteria selection decisions (with the excep-
tion of the fifth, demonstrating how balanced scorecard modeling could be
supported). Multiple criteria analysis is a very good framework to describe specific
aspects of risk and to assess where they impact a given decision context. The value
scores might be useful as a means to select a preferred alternative, or as a perfor-
mance metric that directs attention to features calling for improvement.
Value analysis can provide useful support to decision-making by first focusing on
hierarchical development. In all five cases presented here, this was done in the
original articles. Nonetheless, it is important to consider accomplishment of the
overarching objective.
Table 4.12 Implied swing weighting

Code  Criterion                                               From max.  Weight  From min.  Weight  Compromise
FP2   Wages                                                    100       0.249    980       0.293   0.270
FP3   Seafarer compensation for ship loss or foundering         50       0.125    470       0.140   0.130
IBP5  Health and safety and accident prevention                 40       0.100    340       0.102   0.110
LP5   Social security                                           30       0.075    250       0.075   0.075
FP4   Food and catering                                         28       0.070    225       0.067   0.068
IBP4  Shipowner’s liability                                     26       0.065    190       0.057   0.060
LP4   Medical care on-board and ashore                          20       0.050    142       0.042   0.045
FP1   Seafarer’s employment agreements                          19       0.047    140       0.042   0.044
LGP4  Career and skill development                              16       0.040    101       0.030   0.035
IBP3  Accommodation and recreational facilities                 13       0.032     99       0.030   0.031
LGP2  Training and qualifications                               11       0.027     80       0.024   0.025
IBP2  Manning levels                                            10       0.025     75       0.022   0.023
LP2   Entitlement to leave                                       9       0.022     70       0.021   0.021
LGP3  Hours of work and rest                                     7       0.017     50       0.015   0.016
IBP1  Medical certificate                                        6       0.015     40       0.012   0.014
LP1   Recruitment and placement                                  6       0.015     38       0.011   0.013
LP3   Repatriation                                               5       0.012     30       0.009   0.010
LGP1  Minimum age                                                3       0.007     18       0.005   0.006
LGP5  Access to shore-based welfare facilities                   2       0.005     10       0.003   0.004
Total                                                          401       1       3348       1       1

Two aspects of value analysis should be considered. First, if scores on available
alternatives are equivalent on a specific criterion, this criterion will not matter for this
set of alternatives. However, it may matter if new alternatives are added, or existing
alternatives improved. Second, a benefit of value analysis is improvement of existing
alternatives. The score matrix provides useful comparisons of relative alternative
performance. If decision makers are not satisfied with existing alternatives, they
might seek additional choices by expanding their search or designing better
alternatives. The criteria with the greatest weights might provide an area of focus
in this search, and the ideal scores might give a standard for design.
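The swing-weighting arithmetic behind Table 4.12 can be sketched briefly. The following is a minimal illustration using three of the table's criteria; the dictionaries and the averaging rule are assumptions consistent with the table, under which the compromise column is roughly the average of the two normalized weightings.

```python
# Illustrative swing weighting: each criterion gets a raw swing score on
# two scales ("from max" and "from min"); each scale is normalized to
# sum to 1, and a compromise weight averages the two. Values are taken
# from three rows of Table 4.12; the full table uses all 19 criteria.

from_max = {"FP2 Wages": 100, "FP3 Compensation": 50, "IBP5 Health/safety": 40}
from_min = {"FP2 Wages": 980, "FP3 Compensation": 470, "IBP5 Health/safety": 340}

def normalize(scores):
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

w_max = normalize(from_max)
w_min = normalize(from_min)
compromise = {k: (w_max[k] + w_min[k]) / 2 for k in w_max}
```

With only three criteria the weights differ in scale from the table, but the rank order (Wages first) is preserved.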
Notes
1. Zhu, Q., Shah, P. and Sarkis, J. (2018). Addition by subtraction: Integrating
product deletion with lean and sustainable supply chain management. Interna-
tional Journal of Production Economics 205, 201–214.
2. Liu, Y., Eckert, C., Yannou-Le Bris, G. and Petit, G. (2019). A fuzzy decision
tool to evaluate the sustainable performance of suppliers in an agrifood value
chain. Computers & Industrial Engineering 127, 196–212.
3. Khatri, J. and Srivastava, M. (2016). Technology selection for sustainable supply
chains. International Journal of Technology Management & Sustainable Devel-
opment 15(3), 275–289.
4. Enyinda, C.I., Briggs, C., Obuah, E. and Mbah, C. (2011). Petroleum supply
chain risk analysis in a multinational oil firm in Nigeria. Journal of Marketing
Development and Competitiveness 5(7), 37–44.
5. Akyuz, E., Karahalios, H. and Celik, M. (2015). Assessment of the maritime
labour convention compliance using balanced scorecard and analytic hierarchy
process approach. Maritime Policy & Management 42(2), 145–162.
5 Simulation of Supply Chain Risk
Supply chains involve many risks, as we have seen. Modeling that risk focuses on
probability, a well-developed analytic technique. This chapter addresses basic simu-
lation models involving supply chains, to include inventory modeling (often accom-
plished through system dynamics) and Monte Carlo simulation of vendor
outsourcing decisions.
Inventory Systems
Inventory is any resource that is set aside for future use. Inventory is necessary
because the demand and supply of goods usually are not perfectly matched at any
given time or place. Many different types of inventories exist. Examples include raw
materials (such as coal, crude oil, and cotton), semifinished products (aluminum
ingots, plastic sheets, lumber), and finished products (cans of food, computer
terminals, shirts). Inventories can also be human resources (standby crews and
trainees), financial resources (cash on hand, accounts receivable), and other
resources such as airplane seats.
The basic risks associated with inventories are the risks of stocking out (and thus
losing sales), and the counter risk of going broke because excessive cash is tied
up in inventory. The problem is made interesting because demand is almost always
uncertain, driven by the behavior of the market, usually many people making
spontaneous purchasing decisions.
Inventories represent a considerable investment for many organizations; thus, it is
important that they be managed well. Although many analytic models for managing
inventories exist, the complexity of many practical situations often requires
simulation.
The two basic inventory decisions that managers face are how much to order or
produce additional inventory, and when to order or produce it. Although it is
possible to consider these two decisions separately, they are so closely related
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_5
that a simultaneous solution is usually necessary. Typically, the objective is to
minimize total inventory costs.
Total inventory cost can include four components: holding costs, ordering costs,
shortage costs, and purchasing costs. Holding costs, or carrying costs, represent
costs associated with maintaining inventory. These costs include interest incurred or
the opportunity cost of having capital tied up in inventories; storage costs such as
insurance, taxes, rental fees, utilities, and other maintenance costs of storage space;
warehousing or storage operation costs, including handling, record keeping, infor-
mation processing, and actual physical inventory expenses; and costs associated with
deterioration, shrinkage, obsolescence, and damage. Total holding costs are depen-
dent on how many items are stored and for how long they are stored. Therefore,
holding costs are expressed in terms of dollars associated with carrying one unit of
inventory per unit of time.
Ordering costs represent costs associated with replenishing inventories. These
costs are not dependent on how many items are ordered at a time, but on the number
of orders that are prepared. Ordering costs include overhead, clerical work, data
processing, and other expenses that are incurred in searching for supply sources, as
well as costs associated with purchasing, expediting, transporting, receiving, and
inspecting. In manufacturing operations, setup cost is the equivalent of ordering
cost. Setup costs are incurred when a factory production line has to be shut down in
order to reorganize machinery and tools for a new production run. Setup costs
include the cost of labor and other time-related costs required to prepare for the
new product run. We usually assume that the ordering or setup cost is constant and is
expressed in terms of dollars per order.
Shortage costs, or stock-out costs, are those costs that occur when demand
exceeds available inventory in stock. A shortage may be handled as a backorder,
in which a customer waits until the item is available, or as a lost sale. In either case, a
shortage represents lost profit and possible loss of future sales. Shortage costs
depend on how much shortage has occurred and sometimes for how long. Shortage
costs are expressed in terms of dollar cost per unit short.
Purchasing costs are what firms pay for the material or goods. In most inventory
models, the price of materials is the same regardless of the quantity purchased; in this
case, purchasing costs can be ignored. However, when price varies by quantity
purchased, called the quantity discount case, inventory analysis must be adjusted
to account for this difference.
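The four components can be combined into a simple total-cost function. The sketch below uses the holding rate (0.8), ordering cost (300), and purchase price (90) that appear in the model later in the chapter; the shortage cost of 40 and the quantities are illustrative assumptions.

```python
# Hedged sketch: total inventory cost as the sum of the four components
# described above. Holding is charged per unit per day, ordering per
# order placed, shortage per unit short, purchasing per unit bought.

def total_inventory_cost(holding_rate, avg_inventory, days,
                         order_cost, n_orders,
                         shortage_cost, units_short,
                         unit_price, units_purchased):
    holding = holding_rate * avg_inventory * days
    ordering = order_cost * n_orders
    shortage = shortage_cost * units_short
    purchasing = unit_price * units_purchased  # constant price; can be
                                               # dropped when comparing policies
    return holding + ordering + shortage + purchasing

cost = total_inventory_cost(0.8, 80, 30, 300, 20, 40, 10, 90, 3000)
```

Because the purchasing term is constant when price does not vary with quantity, it drops out of comparisons between ordering policies, as the text notes.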
Basic Inventory Simulation Model
Many models contain variables that change continuously over time. One example
would be a model of a retail store’s inventory. The number of items changes gradually
(though discretely) over an extended time period; however, for all intents and
purposes, they may be treated as continuous. As customer demand is fulfilled,
inventory is depleted, leading to factory orders to replenish the stock. As orders
are received from suppliers, the inventory increases. Over time, particularly if orders
are relatively small and frequent as we see in just-in-time environments, the inventory
level can be represented by a smooth, continuous function.
We can build a simple inventory simulation model beginning with a spreadsheet
model as shown in Table 5.1. Model parameters include a holding rate of 0.8 per
item per day, an order rate of 300 for each order placed, a purchase price of 90, and a
sales price of 130. The decision variables are when to order (when the end of day
quantity drops below the reorder point (ROP)), and the quantity ordered (Q). The
model itself has a row for each day (here 30 days are modeled). Each day has a
starting inventory (column B) and a probabilistic demand (column C) generated
from a normal distribution with a mean of 100 and a standard deviation of 10.
Demand is made integer. Sales (column D) are equal to the minimum of the starting
quantity and demand. End of day inventory (column E) is the maximum of 0 or
starting inventory minus demand. The quantity ordered at the end of each day in
column F (here assumed to be on hand at the beginning of the next day) is 0 if ending
inventory exceeds ROP, or Q if ending inventory drops at or below ROP.
Profit and shortage are calculated to the right of the basic inventory model.
Column G calculates holding cost by multiplying the parameter in cell B2 times
the ending inventory quantity for each day, and summing over the 30 days in cell G5.
Order costs are calculated by day as $300 if an order is placed that day, and
0 otherwise, with the monthly total ordering cost accumulated in cell H5. Cell I5
calculates total purchasing cost, cell J5 total revenue, and cell H3 calculates net profit
considering the value of starting inventory and ending inventory. Column K
identifies sales lost (SHORT), with cell K5 accumulating these for the month.
Note that cell H3 adjusts for beginning and ending inventory.
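The daily logic of this spreadsheet can be sketched in a few lines of Python. This is a simplified stand-in, not a reproduction of the workbook: the starting inventory of 100 is an assumption, and the beginning/ending inventory value adjustment (cell H3) is omitted.

```python
# Minimal sketch of the inventory model described above: 30 days,
# integer demand drawn from normal(100, 10), an order for Q units placed
# whenever end-of-day inventory is at or below ROP, arriving the next
# morning. Hold 0.8/unit/day, order cost 300, purchase 90, sell 130.
import random

def simulate_month(rop=140, q=140, hold=0.8, order_cost=300,
                   price=90, sell=130, start=100, days=30, seed=1):
    random.seed(seed)
    inv, profit, short, pending = start, 0.0, 0, 0
    for _ in range(days):
        inv += pending                      # yesterday's order arrives
        demand = int(random.gauss(100, 10))
        sales = min(inv, demand)
        short += demand - sales             # lost sales
        inv = max(0, inv - demand)
        pending = q if inv <= rop else 0    # reorder decision
        profit += sell * sales - hold * inv
        if pending:
            profit -= order_cost + price * q
    return profit, short
```

Running this function over many seeds approximates the repetition mechanism Crystal Ball provides.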
Crystal Ball simulation software allows introduction of three types of special
variables. Probabilistic variables (assumption cells in Crystal Ball terminology) are
modeled in column C using a normal distribution [CB.Normal (mean, std)]. Decision
variables are modeled for ROP (cell E1) and Q (cell E2). Crystal Ball allows setting
minimum and maximum levels for decision variables, as well as step size. Here we
used ROP values of 80, 100, 120, and 140, and Q values of 100, 110, 120, 130, and
140. The third type of variable is a forecast cell. We have forecast cells for net profit
(H3) and for sales lost (cell K3).
The Crystal Ball simulation can be set to run for up to 10,000 repetitions for each
combination of decision variables. We selected 1000 repetitions. Output is given for
forecast cells. Figure 5.1 shows net profit for the combination of an ROP of 140 and
a Q of 140.
Tabular output is also provided as in Table 5.2.
Similar output is given for the other forecast variable, SHORT (Fig. 5.2;
Table 5.3).
Crystal Ball also provides a comparison over all decision variable values, as given
in Table 5.4.
The implication here is that the best decision for the basic model parameters
would be an ROP of 120 and a Q of 130, yielding an expected net profit of $101,446
for the month. The shortage for this combination had a mean of 3.43 items per day,
with a distribution shown in Fig. 5.3. The probability of shortage was 0.4385.
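The grid search over ROP and Q that produces Table 5.4 can be sketched as follows. The profit function here is a toy stand-in (a noisy surface peaking at ROP 120, Q 130, echoing the table's best cell), not the actual inventory model; the search mechanics are the point.

```python
# Hedged sketch of the decision-table search: average simulated monthly
# profit over many repetitions for each (ROP, Q) combination, keep the
# best. run_month is a hypothetical single-repetition simulator.
import random

def run_month(rop, q, rng):
    # toy profit surface with noise, peaking at ROP 120, Q 130
    return 100000 - (rop - 120) ** 2 - (q - 130) ** 2 + rng.gauss(0, 50)

def best_policy(rops, qs, reps=200, seed=7):
    rng = random.Random(seed)
    means = {(r, q): sum(run_month(r, q, rng) for _ in range(reps)) / reps
             for r in rops for q in qs}
    best = max(means, key=means.get)
    return best, means

best, means = best_policy([80, 100, 120, 140], [100, 110, 120, 130, 140])
```

Averaging over repetitions is what makes the comparison meaningful: a single run of each policy would be dominated by sampling noise.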
Table 5.1 Basic inventory model

Parameters: Hold rate 0.8; Order rate 300; Purchase 90; Sell 130; ROP 140; Q 140
Totals for the run shown: Hold cost 2440.8; Order cost 6600; Purchase 277,200; Revenue 388,050; Net 101,809.2; Short 0

Day  Start  Demand  Sales  End  Order  Hold cost  Order cost  Purchase  Revenue  SHORT
1    100    85      85     15   140    12         300         12,600    11,050   0
2    155    84      84     71   140    56.8       300         12,600    10,920   0
3    211    104     104    107  140    85.6       300         12,600    13,520   0
(rows for days 4–30 follow the same layout)
System Dynamics Modeling of Supply Chains
Many models contain variables that change continuously over time. One example
would be a model of an oil refinery. The amount of oil moving between various
stages of production is clearly a continuous variable. In other models, changes in
variables occur gradually (though discretely) over an extended time period;
Fig. 5.1 Crystal Ball output for net profit ROP 140, Q 140. © Oracle. Used with permission
Table 5.2 Statistical output for net profit ROP 140, Q 140
Forecast: net
Statistic Forecast values
Trials 1000
Mean 100,805.56
Median 97,732.8
Mode 97,042.4
Standard deviation 6264.80
Variance 39,247,672.03
Skewness 0.8978
Kurtosis 2.21
Coeff. of variability 0.0621
Minimum 89,596.80
Maximum 112,657.60
Mean Std. error 198.11
© Oracle. Used with permission
64 5 Simulation of Supply Chain Risk
however, for all intents and purposes, they may be treated as continuous. An
example would be the amount of inventory at a warehouse in a production–distribu-
tion system over several years. As customer demand is fulfilled, inventory is
depleted, leading to factory orders to replenish the stock. As orders are received
from suppliers, the inventory increases. Over time, particularly if orders are rela-
tively small and frequent as we see in just-in-time environments, the inventory level
can be represented by a smooth, continuous function.
Fig. 5.2 SHORT for ROP 140, Q 140. © Oracle. Used with permission
Table 5.3 Statistical output: ROP 140, Q 140
Forecast: net
Statistic Forecast values
Trials 1000
Mean 3.72
Median 0.00
Mode 0.00
Standard deviation 5.61
Variance 31.47
Skewness 1.75
Kurtosis 5.94
Coeff. of variability 1.51
Minimum 0.00
Maximum 31.00
Mean Std. error 0.18
Table 5.4 Comparative net profit for all values of ROP, Q

          Q(100)   Q(110)   Q(120)   Q(130)   Q(140)
ROP(80)   99,530   99,948   99,918   100,159  101,331
ROP(100)  99,627   100,701  101,051  101,972  101,512
ROP(120)  99,519   100,429  100,919  101,446  101,252
ROP(140)  99,525   99,894   100,586  100,712  100,805

© Oracle. Used with permission
Continuous variables are often called state variables. A continuous simulation
model defines equations for relationships among state variables so that the dynamic
behavior of the system over time can be studied. To simulate continuous systems, we
use an activity-scanning approach whereby time is decomposed into small
increments. The defining equations are used to determine how the state variables
change during an increment of time. A specific type of continuous simulation is
called system dynamics, which dates back to the early 1960s and a classic work by
Jay Forrester of M.I.T.1 System dynamics focuses on the structure and behavior of
systems that are composed of interactions among variables and feedback loops. A
system dynamics model usually takes the form of an influence diagram that shows
the relationships and interactions among a set of variables.
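The activity-scanning approach can be sketched concretely: decompose time into small increments and update each state variable from its inflow and outflow rates. All rates, the step size, and the periodic demand bump below are illustrative assumptions.

```python
# Minimal continuous-simulation sketch: an inventory level (the state
# variable) evolves under a constant replenishment rate and a demand
# rate with a periodic bump, updated by an Euler step of size dt.

dt = 0.1                      # time increment, in days
inventory = 50.0              # state variable
history = []
for step in range(round(30 / dt)):
    t = step * dt
    inflow = 20.0                          # replenishment rate (units/day)
    outflow = 18.0 + 4.0 * (t % 7 > 5)    # demand rate with a weekly bump
    inventory += (inflow - outflow) * dt  # Euler update of the state
    history.append(inventory)
```

Smaller values of dt trade computation for a closer approximation of the continuous dynamics.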
System dynamics models have been widely used to model supply chains, espe-
cially with respect to the bullwhip phenomenon,2 which has to do with the dramatic
increase in inventories across supply chains when uncertainty in demand appears.
Many papers have dealt with the bullwhip effect through system dynamics models.3
These models have been used to evaluate lean systems,4 Kanban systems,5 and JIT
systems.6 They also have been used to model vendor-managed inventory in
supply chains.7
We present a four-echelon supply chain model, consisting of a vendor providing
raw materials, an assembly operation to create the product, a warehouse, and a set of
five retailers. We will model two systems—one a push system, the other pull in the
sense that upstream activity depends on downstream demand. We will present the
pull system first.
Fig. 5.3 SHORT for R = 120, Q = 130. © Oracle. Used with permission
Pull System
The basic model uses a forecasting system based on exponential smoothing to drive
decisions to send material down the supply chain. We use EXCEL modeling, along
with Crystal Ball software to do simulation repetitions, following Evans and Olson
(2004).8 The formulas for the factory portion of the model are given in Fig. 5.4.
Figure 5.4 models a month of daily activity. Sales of products at retail generate
$100 in revenue for the core organization, at a cost of $70 per item. Holding costs are
$1 at the retail level ($0.50 at wholesale, $0.40 at assembly, and $0.25 at vendors).
Daily orders are shipped from each element, at a daily cost of $1000 from factory to
assembler, $700 from assembler to warehouse, and $300 from warehouse to
retailers. Vendors produce 50 items of material per day if inventory drops to
20 items or less. If not, they do not produce. They send material to the assembly
operation if called by that element, which is modeled in Fig. 5.5 (only the first 5 days
are shown). Vendor ending inventory is shown in column E, with cell E37 adding
total monthly inventory.
The assembly operation calls for replenishment of 30 units from the vendor
whenever their inventory of finished goods drops to 20 or less. Each daily delivery
A B C D E
1 RevP 100 ROPven 20
2 Cost 70 Qven 50
3 Hold 1
4 Vendor Vendor
5 Start Prod Send End
6 Time
7 1 40 =IF(E7<=$D$1,$D$2,0) =IF(J7<=$I$1,$D$2,0) =MAX(0,B7-D7)
8 =A7+1 =E7 =IF(E8<=$D$1,$D$2,0) =IF(J8<=$I$1,$D$2,0) =MAX(0,B8-D8)
9 =A8+1 =E8+C7 =IF(E9<=$D$1,$D$2,0) =IF(J9<=$I$1,$D$2,0) =MAX(0,B9-D9)
10 =A9+1 =E9+C8 =IF(E10<=$D$1,$D$2,0) =IF(J10<=$I$1,$D$2,0) =MAX(0,B10-D10)
11 =A10+1 =E10+C9 =IF(E11<=$D$1,$D$2,0) =IF(J11<=$I$1,$D$2,0) =MAX(0,B11-D11)
12 =A11+1 =E11+C10 =IF(E12<=$D$1,$D$2,0) =IF(J12<=$I$1,$D$2,0) =MAX(0,B12-D12)
13 =A12+1 =E12+C11 =IF(E13<=$D$1,$D$2,0) =IF(J13<=$I$1,$D$2,0) =MAX(0,B13-D13)
14 =A13+1 =E13+C12 =IF(E14<=$D$1,$D$2,0) =IF(J14<=$I$1,$D$2,0) =MAX(0,B14-D14)
15 =A14+1 =E14+C13 =IF(E15<=$D$1,$D$2,0) =IF(J15<=$I$1,$D$2,0) =MAX(0,B15-D15)
16 =A15+1 =E15+C14 =IF(E16<=$D$1,$D$2,0) =IF(J16<=$I$1,$D$2,0) =MAX(0,B16-D16)
17 =A16+1 =E16+C15 =IF(E17<=$D$1,$D$2,0) =IF(J17<=$I$1,$D$2,0) =MAX(0,B17-D17)
18 =A17+1 =E17+C16 =IF(E18<=$D$1,$D$2,0) =IF(J18<=$I$1,$D$2,0) =MAX(0,B18-D18)
19 =A18+1 =E18+C17 =IF(E19<=$D$1,$D$2,0) =IF(J19<=$I$1,$D$2,0) =MAX(0,B19-D19)
20 =A19+1 =E19+C18 =IF(E20<=$D$1,$D$2,0) =IF(J20<=$I$1,$D$2,0) =MAX(0,B20-D20)
21 =A20+1 =E20+C19 =IF(E21<=$D$1,$D$2,0) =IF(J21<=$I$1,$D$2,0) =MAX(0,B21-D21)
22 =A21+1 =E21+C20 =IF(E22<=$D$1,$D$2,0) =IF(J22<=$I$1,$D$2,0) =MAX(0,B22-D22)
23 =A22+1 =E22+C21 =IF(E23<=$D$1,$D$2,0) =IF(J23<=$I$1,$D$2,0) =MAX(0,B23-D23)
24 =A23+1 =E23+C22 =IF(E24<=$D$1,$D$2,0) =IF(J24<=$I$1,$D$2,0) =MAX(0,B24-D24)
25 =A24+1 =E24+C23 =IF(E25<=$D$1,$D$2,0) =IF(J25<=$I$1,$D$2,0) =MAX(0,B25-D25)
26 =A25+1 =E25+C24 =IF(E26<=$D$1,$D$2,0) =IF(J26<=$I$1,$D$2,0) =MAX(0,B26-D26)
27 =A26+1 =E26+C25 =IF(E27<=$D$1,$D$2,0) =IF(J27<=$I$1,$D$2,0) =MAX(0,B27-D27)
28 =A27+1 =E27+C26 =IF(E28<=$D$1,$D$2,0) =IF(J28<=$I$1,$D$2,0) =MAX(0,B28-D28)
29 =A28+1 =E28+C27 =IF(E29<=$D$1,$D$2,0) =IF(J29<=$I$1,$D$2,0) =MAX(0,B29-D29)
30 =A29+1 =E29+C28 =IF(E30<=$D$1,$D$2,0) =IF(J30<=$I$1,$D$2,0) =MAX(0,B30-D30)
31 =A30+1 =E30+C29 =IF(E31<=$D$1,$D$2,0) =IF(J31<=$I$1,$D$2,0) =MAX(0,B31-D31)
32 =A31+1 =E31+C30 =IF(E32<=$D$1,$D$2,0) =IF(J32<=$I$1,$D$2,0) =MAX(0,B32-D32)
33 =A32+1 =E32+C31 =IF(E33<=$D$1,$D$2,0) =IF(J33<=$I$1,$D$2,0) =MAX(0,B33-D33)
34 =A33+1 =E33+C32 =IF(E34<=$D$1,$D$2,0) =IF(J34<=$I$1,$D$2,0) =MAX(0,B34-D34)
35 =A34+1 =E34+C33 =IF(E35<=$D$1,$D$2,0) =IF(J35<=$I$1,$D$2,0) =MAX(0,B35-D35)
36 =A35+1 =E35+C34 =IF(E36<=$D$1,$D$2,0) =IF(J36<=$I$1,$D$2,0) =MAX(0,B36-D36)
37 =SUM(E7:E36)
Fig. 5.4 Factory model
is 30 units, and is received at the beginning of the next day’s operations. The
assembly operation takes 1 day, and goods are available to send that evening.
Column J shows ending inventory to equal what starting inventory plus what was
processed that day minus what was sent to wholesale. Figure 5.6 shows the model of
the wholesale operation.
The wholesale operation feeds retail demand, which is shown in column L. They
feed retailers up to the amount they have in stock. They order from the assembler if
they have less than 25 items. If they stock out, they order 20 items plus 70% of what
they were unable to fill (this is essentially an exponential smoothing forecast). If they
still have stock on hand, they order enough to fill up to 25 items. Figure 5.7 shows one of the
five retailer operations (the other four are identical).
Retailers face a highly variable demand with a mean of 4. They fill what orders
they have stock for. Shortfall is measured in column U. They order if their end-of-
day inventory falls to 4 or less. The amount ordered is 4 plus 70% of shortfall, up to a
maximum of 8 units.
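The retailer's ordering rule can be sketched directly from this description and the nested IF in Fig. 5.7. The cap at Tmax on the shortfall-driven order follows the text's "up to a maximum of 8 units."

```python
# Retailer ordering rule: at or below the reorder point (4), order 4
# plus 70% of any shortfall, capped at Tmax = 8; above Tmax, order
# nothing; otherwise top up to Tmax.

def retailer_order(end_inv, shortfall, rop=4, tmax=8):
    if end_inv <= rop:
        return min(tmax, 4 + int(0.7 * shortfall))
    if end_inv > tmax:
        return 0
    return tmax - end_inv
```

The 70%-of-shortfall term is the exponential-smoothing flavor the text mentions: recent unmet demand raises the next order.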
This model is run in Crystal Ball to generate a measure of overall system profit.
Here the profit formula is $175 times sales minus holding costs minus transportation
costs. Holding costs at the factory were $0.25 times sum of ending inventory, at the
assembler $0.40 times sum of ending inventory, at the warehouse 0.50 times ending
inventory, and at the retailers $1 times sum of ending inventories. Shipping costs
were $1000 per day from factory to assembler, $700 per day from assembler to
warehouse, and $300 per day from warehouse to retailer. The results of 1000
repetitions are shown in Fig. 5.8.
Here the average profit for a month is $5942, with a minimum of a loss of $8699 and a
maximum gain of $18,922. There was a 0.0861 probability of a negative profit. The
amount of shortage across the system is shown in Fig. 5.9. The average was 138.76,
with a range of 33–279 over the 1000 simulation repetitions.
Fig. 5.5 Core assembly model
A K L M N O P
1 WholMin 20
2 WholMax 25
3
4 Whol
5 Day Start Demand Order End Short Sent
6 0
7 1
=20 =20
=IF(O7>0,$N$1+INT(0.7*O7),IF(N
7>$N$2,0,$N$2-N7))
=K7-
P7 =IF(L7>K7,L7-K7,0)
MIN(K7,
L7)
8 2
=N7+I7
=T7+Y7+AD7+AI
7+AM7
=IF(O8>0,$N$1+INT(0.7*O8),IF(N
8>$N$2,0,$N$2-N8))
=K8-
P8 =IF(L8>K8,L8-K8,0)
MIN(K8,
L8)
9 3
=N8+I8
=T8+Y8+AD8+AI
8+AM8
=IF(O9>0,$N$1+INT(0.7*O9),IF(N
9>$N$2,0,$N$2-N9))
=K9-
P9 =IF(L9>K9,L9-K9,0)
MIN(K9,
L9)
10 4
=N9+I9
=T9+Y9+AD9+AI
9+AM9
=IF(O10>0,$N$1+INT(0.7*O10),IF
(N10>$N$2,0,$N$2-N10))
=K10-
P10 =IF(L10>K10,L10-K10,0)
MIN(K1
0,L10)
11 5
=N10+I10
=T10+Y10+AD1
0+AI10+AM10
=IF(O11>0,$N$1+INT(0.7*O11),IF
(N11>$N$2,0,$N$2-N11))
=K11-
P11 =IF(L11>K11,L11-K11,0)
MIN(K1
1,L11)
Fig. 5.6 Wholesale model
The central limit theorem can be seen at work: the sum of the five retailer
shortfalls has a normally shaped distribution. Figure 5.10 shows shortfall at
the wholesale level, which had only one entity.
The average wholesale shortage was 15.73, with a minimum of 0 and a maxi-
mum of 82. Crystal Ball output indicates a probability of shortfall of 0.9720,
meaning a 0.0280 probability of going the entire month without having shortage at
the wholesale level.
Fig. 5.8 Overall system profit for basic model. © Oracle. Used with permission
A Q R S T U
1 start 4 order ROP+.7short
2 rop 4 to Tmax
3 Tmax 8
4
5 R1
6 start demand end order short
7
=$R$1 =INT(CB.Exponential(0.25))
=MAX(0,Q7-
R7)
=IF(S7<=$R$2,4+INT(0.7*U7),IF
(S7>$R$3,0,$R$3-S7))
=IF(R7>Q7,R7-
Q7,0)
8 =S7+MIN(P
7,T7) =INT(CB.Exponential(0.25))
=MAX(0,Q8-
R8)
=IF(S8<=$R$2,4+INT(0.7*U8),IF
(S8>$R$3,0,$R$3-S8))
=IF(R8>Q8,R8-
Q8,0)
9 =S8+MIN(P
8,T8) =INT(CB.Exponential(0.25))
=MAX(0,Q9-
R9)
=IF(S9<=$R$2,4+INT(0.7*U9),IF
(S9>$R$3,0,$R$3-S9))
=IF(R9>Q9,R9-
Q9,0)
10 =S9+MIN(P
9,T9) =INT(CB.Exponential(0.25))
=MAX(0,Q10-
R10)
=IF(S10<=$R$2,4+INT(0.7*U10)
,IF(S10>$R$3,0,$R$3-S10))
=IF(R10>Q10,R10-
Q10,0)
11 =S10+MIN(
P10,T10) =INT(CB.Exponential(0.25))
=MAX(0,Q11-
R11)
=IF(S11<=$R$2,4+INT(0.7*U11)
,IF(S11>$R$3,0,$R$3-S11))
=IF(R11>Q11,R11-
Q11,0)
Fig. 5.7 Retailing model
70 5 Simulation of Supply Chain Risk
Fig. 5.9 Retail shortages for basic model. © Oracle. Used with permission
Fig. 5.10 Wholesale shortages for basic model. © Oracle. Used with permission
Push System
The difference in this model is that production at the factory (column C in Fig. 5.4) is
a constant 20 per day, the amount sent from the factory to the assembler (column D
in Fig. 5.4) is also 20 per day, the amount ordered by the wholesaler (column M in
Fig. 5.6) is 20, the amount sent by the wholesaler to retailers (column P in Fig. 5.6) is
a constant 20, and the amount ordered by the wholesaler (column T in Fig. 5.7) is a
constant 20.
This system proved to be more profitable and safer for the given conditions. Profit
is shown in Fig. 5.11.
The average profit was $13,561, almost double that of the more variable pull
system. Minimum profit was a loss of $2221, with the probability of loss 0.0052.
Maximum profit was $29,772. Figure 5.12 shows shortfall at the retail level.
The average shortfall was only 100.32, much less than the 137.16 for the pull
model. Shortfall at the wholesale level (Fig. 5.13) was an average of 21.54, ranging
from 0 to 67.
For this set of assumed values, the push system performed better. But that
establishes nothing, as for other conditions, and other means of coordination, a
pull system could do better.
Fig. 5.11 Push system profit. © Oracle. Used with permission
Fig. 5.12 Retail shortages for the push model. © Oracle. Used with permission
Fig. 5.13 Wholesale shortages for the push model. © Oracle. Used with permission
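The loss probabilities quoted for the two systems are simply the fraction of repetitions with negative profit. The sketch below illustrates that calculation; the profit samples are synthetic stand-ins built from the reported means with assumed standard deviations, not the actual model output.

```python
# Estimating probability of loss from simulation output: count the
# repetitions whose profit is negative. Means follow the text ($5942
# pull, $13,561 push); the standard deviations are assumptions.
import random

rng = random.Random(42)
pull_profits = [rng.gauss(5942, 4500) for _ in range(1000)]
push_profits = [rng.gauss(13561, 5200) for _ in range(1000)]

def p_loss(samples):
    return sum(p < 0 for p in samples) / len(samples)
```

Estimated this way, a probability such as the push system's 0.0052 rests on only a handful of the 1000 repetitions, so such tail estimates carry substantial sampling error.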
Monte Carlo Simulation for Analysis
Simulation models are sets of assumptions concerning the relationship among model
components. Simulations can be time oriented (for instance, involving the number of
events such as demands in a day) or process oriented (for instance, involving
queuing systems of arrivals and services). Uncertainty can be included by using
probabilistic inputs for elements such as demands, inter-arrival times, or service
times. These probabilistic inputs need to be described by probability distributions
with specified parameters. Probability distributions can include normal distributions
(with parameters for mean and variance), exponential distributions (with parameter
for a mean), lognormal (parameters mean and variance), or any of a number of other
distributions. A simulation run is a sample from an infinite population of possible
results for a given model. After a simulation model is built, the number of trials is
established. Statistical methods are used to validate simulation models and design
simulation experiments.
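The probabilistic inputs described above can be sketched in a few lines: each trial samples demand, inter-arrival time, and service time from named distributions with specified parameters. The parameter values below are illustrative.

```python
# Monte Carlo sampling of probabilistic inputs: each trial draws from a
# normal, an exponential, and a lognormal distribution; a run of many
# trials samples the space of possible results.
import random

rng = random.Random(0)
trials = []
for _ in range(1000):
    demand = rng.gauss(100, 10)               # normal(mean, sd)
    interarrival = rng.expovariate(1 / 5)     # exponential, mean 5
    service = rng.lognormvariate(1.0, 0.25)   # lognormal(mu, sigma)
    trials.append((demand, interarrival, service))
mean_demand = sum(t[0] for t in trials) / len(trials)
```

Add-ons such as @Risk and Crystal Ball wrap exactly this mechanism, adding distribution pickers, correlation between inputs, and output aggregation.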
Many financial simulation models can be accomplished on spreadsheets, such as
Excel. There are a number of commercial add-on products that can be added to
Excel, such as @Risk or Crystal Ball, that vastly extend the simulation power of
spreadsheet models. These add-ons make it very easy to replicate simulation runs,
and include the ability to correlate variables, expeditiously select from standard
distributions, aggregate and display output, and other useful functions.
In supply chain outsourcing decisions, a number of factors can involve uncer-
tainty, and simulation can be useful in gaining better understanding of systems.9 We
begin by looking at expected distributions of prices for the component to be
outsourced from each location. China C in this case has the lowest estimated price,
but it has a wide expected distribution of exchange rate fluctuation. These
distributions will affect the actual realized price for the outsourced component.
The Chinese C vendor is also rated as having relatively high probabilities of failure
in product compliance with contractual standards, in vendor financial survival, and
in political stability of host country. The simulation is modeled to generate 1000
samples of actual realized price after exchange rate variance, to include having to
rely upon an expensive ($5 per unit) price in case of outsourcing vendor failure.
Monte Carlo simulation output is exemplified in Fig. 5.14, which shows the
distribution of prices for the hypothetical Chinese outsourcing vendor C, which was
the low price vendor very nearly half of the time. Figure 5.15 shows the same for the
Taiwanese vendor, and Fig. 5.16 for the safer but expensive German vendor.
The Chinese vendor C has a higher probability of failure (over 0.31 from all
sources combined, compared to 0.30 for Indonesia). This raises its mean cost,
because in case of failure, the $5 per unit default price is used. There is a cluster
around the contracted cost of $0.60, with a minimum dropping slightly below 0 due
to exchange rate variance, a mean of $0.78, and a maximum of $1.58 given survival
in all three aspects of risk modeled. There is a spike showing a default price of $5.00
per unit in 0.3134 of the cases. Thus while the contractual price is lowest for this
alternative, the average price after consideration of failure is $2.10.
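The cost logic for this vendor can be sketched as follows, using the figures from the text: a contracted price of $0.60, a combined failure probability of 0.3134, and a $5 default price on failure. The normal exchange-rate factor is an assumption; the text's realized non-failure mean of $0.78 implies a somewhat different distribution, so the averages differ.

```python
# Hedged sketch of outsourcing cost under vendor failure risk: with
# probability p_fail the supply chain falls back to a $5 default price;
# otherwise the contracted price varies with an assumed exchange-rate
# factor drawn from normal(1.0, 0.3).
import random

rng = random.Random(3)

def realized_cost(contract=0.60, p_fail=0.3134, default=5.0):
    if rng.random() < p_fail:
        return default                        # vendor fails; pay default price
    return contract * rng.gauss(1.0, 0.3)     # assumed exchange-rate factor

costs = [realized_cost() for _ in range(10_000)]
avg = sum(costs) / len(costs)
# Under these assumptions the average is near
# 0.6866 * 0.60 + 0.3134 * 5.00 ≈ 1.98, versus the text's $2.10.
```

The spike at $5.00 in Fig. 5.14 is exactly this default-price branch firing in roughly 31% of trials.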
Fig. 5.14 Distribution of results for Chinese vendor C costs. © Oracle. Used with permission
Fig. 5.15 Distribution of results for Taiwanese vendor costs. © Oracle. Used with permission
Table 5.5 shows comparative output. Simulation provides a more complete
picture of the uncertainties involved.
Probabilities of being the low-cost alternative are also shown. The greatest
probability was for China C at 0.4939, with Indonesia next at 0.1781. The expensive
(but safer) alternatives of Germany and Alabama were never the low-cost option (and thus were
dominated in the DEA model). But Germany had a very high probability of survival,
and in the simulation could appear as the best choice, though rarely.
Fig. 5.16 Distribution of results for Germany vendor costs. © Oracle. Used with permission
Table 5.5 Simulation output

Vendor      Mean cost   Min. cost   Max. cost   Probability of failure   Probability low   AvgCost if did not fail   Average overall
China B     0.70        −0.01       1.84        0.2220                   0.1370            0.91                      1.82
Taiwan      1.36        1.22        1.60        0.1180                   0.0033            1.41                      1.83
China C     0.60        0.05        1.58        0.3134                   0.4939            0.78                      2.10
China A     0.82        −0.01       2.16        0.2731                   0.0188            1.07                      2.14
Indonesia   0.80        0.22        1.61        0.2971                   0.1781            0.96                      2.16
Arizona     1.80        1.80        1.80        0.2083                   0.0001            2.71                      2.47
Vietnam     0.85        0.40        1.49        0.3943                   0.1687            0.94                      2.54
Alabama     2.05        2.05        2.05        0.2472                   0                 –                         2.78
Ohio        2.50        2.50        2.50        0.2867                   0                 –                         3.22
Germany     3.20        2.90        3.81        0.0389                   0                 –                         3.42

Note: Average overall assumes cost of $5 to supply chain should vendor fail
Conclusion
Simulation is the most flexible management science modeling technique. It allows
literally any assumption you want, although the trade-off is that you have to
work hard to interpret the results meaningfully relative to your decision.
Because of the variability inherent in risk analysis, simulation is an obviously
valuable tool for risk analysis. There are two basic simulation applications in
business. Waiting line models involve queuing systems, and software such as
Arena (or many others) is very appropriate for that type of modeling. The other
type is supportable by spreadsheet tools such as Crystal Ball, demonstrated in this
chapter. Spreadsheet simulation is highly appropriate for inventory modeling as in
push/pull models. Spreadsheet models also are very useful for system dynamic
simulations. We will see more Crystal Ball simulation models in chapters covering
value at risk and chance constrained models.
Notes
1. Forrester, J.W. (1961). Industrial Dynamics. Cambridge, MA: MIT Press.
2. Sterman, J. (1989). Modelling managerial behavior: Misperceptions of feedback
in a dynamic decision making experiment. Management Science 35:3, 321–339.
3. Huang, H.-Y., Chou, Y.-C. and Chang, S. (2009). A dynamic system model for
proactive control of dynamic events in full-load states of manufacturing chains.
International Journal of Production Research 47(9), 2485–2506; Demarzo, P.
M., Fishman, M.J., He, Z. and Wang, N. (2012). Dynamic agency and the q
theory of investment. The Journal of Finance LXVII(6), 2295–2340.
4. Agyapong-Kodua, K., Ajaefobi, J.O. and Weston, R.H. (2009). Modelling
dynamic value streams in support of process design and evaluation. International
Journal of Computer Integrated Manufacturing 22(5), 411–427.
5. Claudio, D. and Krishnamurthy, A. (2009). Kanban-based pull systems with
advance demand information. International Journal of Production Research 47
(12), 3139–3160.
6. Chakravarty, F. (2013). Managing a supply chain’s web of risk. Strategy &
Leadership 41(2), 39–45.
7. Mishra, M. and Chan, F.T.S. (2012). Impact evaluation of supply chain
initiatives: A system simulation methodology. International Journal of Produc-
tion Research 50(6), 1554–1567.
8. Evans, J.R. and Olson, D.L. (2002). Introduction to Simulation and Risk Analysis
2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
9. Wu, D. and Olson, D.L. (2008), Supply chain risk, simulation and vendor
selection, International Journal of Production Economics 114:2, 646–655.
Value at Risk Models 6
Value at risk (VaR) is one of the most widely used models in risk management. It is
based on probability and statistics.1 VaR can be characterized as a maximum
expected loss, given some time horizon and within a given confidence interval. Its
utility is in providing a measure of risk that illustrates the risk inherent in a portfolio
with multiple risk factors, such as portfolios held by large banks, which are
diversified across many risk factors and product types. VaR is used to estimate the
boundaries of risk for a portfolio over a given time period, for an assumed probabil-
ity distribution of market performance. The purpose is to diagnose risk exposure.
Definition
Value at risk describes the probability distribution for the value (earnings or losses)
of an investment (firm, portfolio, etc.). The mean is a point estimate of a statistic,
showing historical central tendency. Value at risk is also a point estimate, but offset
from the mean. It requires specification of a given probability level, and then
provides the point estimate of the return or better expected to occur at the prescribed
probability. For instance, Fig. 6.1 gives the normal distribution for a statistic with a
mean of 10 and a standard deviation of 4 (Crystal Ball was used, with 10,000
replications).
This indicates a 0.95 probability (for all practical purposes) of a return of at least
3.42. The precise calculation can be made in Excel, using the NormInv function for
a probability of 0.05, a mean of 10, and a standard deviation of 4, yielding a return of
3.420585, which is practically the same as the simulation result shown in
Fig. 6.1. Thus the value of the investment at the specified risk level of 0.05 is
3.42. The interpretation is that there is a 0.05 probability that things would be worse
than the value at this risk level. Thus the greater the degree of assurance, the lower
the value at risk return. The value at the risk level of 0.01 would only be 0.694609.
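These figures can also be computed outside Excel; Python's statistics.NormalDist provides the equivalent of the NormInv function:

```python
from statistics import NormalDist

# Normal distribution with mean 10, standard deviation 4
dist = NormalDist(mu=10, sigma=4)
var_05 = dist.inv_cdf(0.05)  # value at the 0.05 risk level
var_01 = dist.inv_cdf(0.01)  # value at the 0.01 risk level
print(round(var_05, 6), round(var_01, 6))  # 3.420585 0.694609
```

The 0.01-level value matches the 0.694609 quoted above.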
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_6
The Basel Accords
VaR is globally accepted by regulatory bodies responsible for supervision of bank-
ing activities. These regulatory bodies, in broad terms, enforce regulatory practices
as outlined by the Basel Committee on Banking Supervision of the Bank for
International Settlements (BIS). The regulator that has responsibility for financial
institutions in Canada is the Office of the Superintendent of Financial Institutions
(OSFI), and OSFI typically follows practices and criteria as proposed by the Basel
Committee.
Basel I
Basel I was promulgated in 1988, focusing on credit risk. A key agreement of the
Basel Committee is the Basel Capital Accord (generally referred to as “Basel” or the
“Basel Accord”), which has been updated several times since 1988. In the 1996
(updated, 1998) Amendment to the Basel Accord, banks were encouraged to use
internal models to measure Value at Risk, and the numbers produced by these
internal models support capital charges to ensure the capital adequacy, or liquidity,
of the bank. Some elements of the minimum standard established by Basel are:
Fig. 6.1 Normal distribution (10,4). © Oracle. Used with permission
• VaR should be computed daily, using a 99th percentile, one-tailed confidence
interval.
• A minimum price shock equivalent to ten trading days must be used. This is called the
"holding period" and simulates a 10-day period of liquidating assets in a period of
market crisis.
• The model should incorporate a historical observation period of at least 1 year.
• The capital charge is set at a minimum of three times the average of the daily
value-at-risk of the preceding 60 business days.
In 2001 the Basel Committee on Banking Supervision published principles for
management and supervision of operational risks for banks and domestic authorities
supervising them.
Basel II
Basel II was published in 2004 to deal with operational risk management of banking.
Banks and financial institutions were bound to use internal and external data,
scenario analysis, and qualitative criteria. Banks were required to compute capital
charges on a yearly basis and to calculate 99.9 % confidence levels (one in one
thousand events as opposed to the earlier one in one hundred events). Basel II
included standards in the form of three pillars:
1. Minimum capital requirements.
2. Supervisory review, to include categorization of risks as systemic, pension
related, concentration, strategic, reputation, liquidity, and legal.
3. Market discipline, to include enhancements to strengthen disclosure
requirements for securitizations, off-balance sheet exposures, and trading
activities.
Basel III
Basel III was a comprehensive set of reform measures published in 2011 with phased
implementation dates. The aim was to strengthen regulation, supervision, and risk
management of the banking sectors.
Pillar 1 dealt with capital, risk coverage, and containing leverage:
• Capital requirements to improve bank ability to absorb shocks from financial
and economic stress: Common equity ≥ 0.045 of risk-weighted assets
• Leverage requirements to improve risk management and governance:
Tier 1 capital ≥ 0.03 of total exposure
• Liquidity requirements to strengthen bank transparency and disclosure:
High-quality liquid assets ≥ total net liquidity outflows over 30 days
Pillar 2 dealt with risk management and supervision.
Pillar 3 dealt with market discipline through disclosure requirements.
The Use of Value at Risk
In practice, these minimum standards mean that the VaR that is produced by the
Market Risk Operations area is multiplied first by the square root of 10 (to simulate
10 days holding) and then multiplied by a minimum capital multiplier of 3 to
establish capital held against regulatory requirements.
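A sketch of that regulatory arithmetic, with a hypothetical 1-day VaR figure:

```python
import math

one_day_var = 1_000_000  # hypothetical 1-day 99% VaR from Market Risk Operations
holding = math.sqrt(10)  # scale to the 10-day holding period
capital = one_day_var * holding * 3  # minimum capital multiplier of 3
# capital held is roughly 9.49 times the 1-day VaR
```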
In summary, VaR provides the worst expected loss at the 99 % confidence level.
That is, a 99 % confidence interval produces a measure of loss that will be exceeded
only 1 % of the time. But this does mean there will likely be a larger loss than the
VaR calculation two or three times in a year. This is compensated for by the
inclusion of the multiplicative factors, above, and the implementation of Stress
Testing, which falls outside the scope of the activities of Market Risk Operations.
Various approaches can be used to compute VaR, of which three are widely used:
Historical Simulation, Variance-covariance approach, and Monte Carlo simulation.
The variance-covariance approach is used for investment portfolios, but it usually does
not work well for portfolios involving options that are close to delta neutral.
Monte Carlo simulation solves the problem of non-linearity approximation if model
error is not significant, but it suffers some technical difficulties such as how to deal
with time-varying parameters and how to generate maturation values for instruments
that mature before the VaR horizon. We present Historical Simulation and Variance-
covariance approach in the following two sections. We will demonstrate Monte
Carlo Simulation in a later section of this chapter.
Historical Simulation
Historical simulation is a good tool to estimate VaR in most banks. Observations of
day-over-day changes in market conditions are captured. These market conditions
are represented using upwards of 100,000 points daily of observed and implied
Market Data. This historical market data is captured and used to generate historical
‘shocks’ to current spot market data. This shocked market data is used to price the
Bank’s trading positions against changing market conditions, and these revalued
positions are then compared against the base case (using spot data). This simulates a
theoretical profit or loss. Each day of historically observed data produces a theoreti-
cal profit/loss number in this way, and all of these theoretical P&L numbers produce
a distribution of theoretical profits/losses. The (1-day) VaR can then be read as the
99th percentile of this distribution.
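A toy version of this procedure, with randomly generated stand-in returns in place of the Bank's observed market data points:

```python
import random

random.seed(7)
position_value = 1_000_000
# stand-in for historically observed day-over-day market returns
hist_returns = [random.gauss(0.0, 0.01) for _ in range(500)]

# revalue the position under each historical shock to build the
# distribution of theoretical profits/losses
pnl = sorted(position_value * r for r in hist_returns)
var_99 = -pnl[int(0.01 * len(pnl))]  # 99th-percentile loss, as a positive number
```

Reading the VaR is then just a matter of sorting the theoretical P&L numbers and taking the chosen percentile.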
The primary advantage of historical simulation is ease of use and implementation.
In Market Risk Operations, historical data is collected and reviewed on a
regular basis, before it is added to the historical data set. Since this data corresponds
to historical events, it can be reviewed in a straightforward manner. Also, the
historical nature of the data allows for some clarity of explanation of VaR numbers.
For instance, the Bank’s VaR may be driven by widening credit spreads, or by
decreasing equity volatilities, or both, and this will be visible in actual historical data.
Additionally, historical data implicitly contains correlations and non-linear effects
(e.g. gamma, vega and cross-effects).
The most obvious disadvantage of historical simulation is the assumption that
the past presents a reasonable simulation of future events. Additionally, a large
bank usually holds a large portfolio, and there can be considerable operational
overhead involved in producing a VaR against a large portfolio with dependencies
on a large and varied number of model inputs. All the same, other VaR methods,
such as variance-covariance (VCV) and Monte Carlo simulation, are subject to essentially
the same objections. The main alternative to historical simulation is to make
assumptions about the probability distributions of the returns on the market
variables and calculate the probability distribution of the change in the value of
the portfolio analytically. This is known as the variance-covariance approach. VCV
is a parametric approach that contains the assumptions of normality and of the
stability of correlations. Monte Carlo
simulation provides an alternative to these two methods. Monte Carlo methods are
dependent on decisions regarding model calibration, which have effectively the
same problems. No VaR methodology is without simplifying assumptions, and
several different methods are in use at institutions worldwide. The literature on
volatility estimation is large and seemingly subject to unending growth, especially in
acronyms.2
Variance-Covariance Approach
VCV models portfolio returns as a multivariate normal distribution. A
position vector h containing cash flow present values represents all components of
the portfolio. The VCV approach centers on the returns and the
covariance matrix (Q) representing the risk attributes of the portfolio over the chosen
horizon. The standard deviation of portfolio value (σ), also called volatility, is
computed:
σ = √(hᵀQh)   (1)
The volatility (σ) is then scaled to find the desired centile of portfolio value that is
the predicted maximum loss for the portfolio, or VaR:

VaR = σ f(Y), where f(Y) is the scale factor for centile Y   (2)

For example, for a multivariate normal return distribution, f(Y) = 2.33 for Y = 1%.
It is then easy to calculate VaR from the standard deviation (1-day VaR = 2.33σ).
The simplest assumption is that daily gains/losses are normally distributed and
independent. The N-day VaR equals √N times the one-day VaR. When there is
autocorrelation equal to ρ, the multiplier is increased from N to

N + 2(N − 1)ρ + 2(N − 2)ρ² + 2(N − 3)ρ³ + … + 2ρ^(N−1)
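The multiplier reduces to N when ρ = 0, recovering the √N rule; a small helper (a sketch, not from the text) makes the adjustment concrete:

```python
def nday_var_multiplier(n: int, rho: float) -> float:
    """Variance multiplier for N-day VaR with lag-1 autocorrelation rho."""
    return n + sum(2 * (n - k) * rho ** k for k in range(1, n))

# N-day VaR = sqrt(multiplier) * 1-day VaR;
# rho = 0 gives multiplier = n, i.e. the independent sqrt(N) scaling
```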
Besides being easy to compute, VCV also lends itself readily to the calculation
of the marginal risk (Marginal VaR), Incremental VaR, and
Component VaR of candidate trades. For a portfolio where an amount xi is invested
in the ith component of the portfolio, these three VaR measures are computed as:
• Marginal VaR: ∂VaR/∂xi
• Incremental VaR: the incremental effect of the ith component on VaR
• Component VaR: xi ∂VaR/∂xi
VCV uses delta-approximation, which means the representative cash flow vector
is a linear approximation of positions. In some cases, a second-order term in the cash
flow representation is included to improve this approximation.3 However, this does
not always improve the risk estimate and can only be done with the sacrifice of some
of the computational efficiency. In general, VCV works well for linear
instruments such as forwards and interest rate swaps, but quite badly for non-linear
instruments such as options.
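The σ = √(hᵀQh) volatility calculation and the 2.33σ scaling amount to a few lines of code; the two-asset position and covariance numbers below are illustrative only, not from the text:

```python
import math

h = [1_000_000, 500_000]   # position vector of cash-flow present values (illustrative)
Q = [[0.0001, 0.00002],    # daily return covariance matrix (illustrative)
     [0.00002, 0.0004]]

Qh = [sum(Q[i][j] * h[j] for j in range(2)) for i in range(2)]
sigma = math.sqrt(sum(h[i] * Qh[i] for i in range(2)))  # sigma = sqrt(h'Qh)
var_99 = 2.33 * sigma                                   # f(Y) = 2.33 for Y = 1%
```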
Monte Carlo Simulation of VaR
Simulation models are sets of assumptions concerning the relationship among model
components. Simulations can be time-oriented (for instance, involving the number
of events such as demands in a day) or process-oriented (for instance, involving
queuing systems of arrivals and services). Uncertainty can be included by using
probabilistic inputs for elements such as demands, inter-arrival times, or service
times. These probabilistic inputs need to be described by probability distributions
with specified parameters. Probability distributions can include normal distributions
(with parameters for mean and variance), exponential distributions (with parameter
for a mean), lognormal (parameters mean and variance), or any of a number of other
distributions. A simulation run is a sample from an infinite population of possible
results for a given model. After a simulation model is built, a selected number of
trials is established. Statistical methods are used to validate simulation models and
design simulation experiments.
Many financial simulation models can be accomplished on spreadsheets, such as
Excel. There are a number of commercial add-on products that can be added to
Excel, such as @Risk or Crystal Ball, that vastly extend the simulation power of
spreadsheet models.4 These add-ons make it very easy to replicate simulation runs,
and include the ability to correlate variables, expeditiously select from standard
distributions, aggregate and display output, and other useful functions.
The Simulation Process
Using simulation effectively requires careful attention to the modeling and implementation
process. The simulation process consists of five essential steps:

1. Develop a conceptual model of the system or problem under study. This step
begins with understanding and defining the problem, identifying the goals and
objectives of the study, determining the important input variables, and defining
output measures. It might also include a detailed logical description of the system
that is being studied. Simulation models should be made as simple as possible to
focus on critical factors that make a difference in the decision. The cardinal rule of
modeling is to build simple models first, then embellish and enrich them as
necessary.
2. Build the simulation model. This includes developing appropriate formulas or
equations, collecting any necessary data, determining the probability distributions
of uncertain variables, and constructing a format for recording the results. This
might entail designing a spreadsheet, developing a computer program, or
formulating the model according to the syntax of a special computer simulation
language (which we discuss further in Chap. 7).
3. Verify and validate the model. Verification refers to the process of ensuring that
the model is free from logical errors; that is, that it does what it is intended to
do. Validation ensures that it is a reasonable representation of the actual system or
problem. These are important steps to lend credibility to simulation models and
gain acceptance from managers and other users. These approaches are described
further in the next section.
4. Design experiments using the model. This step entails determining the values of
the controllable variables to be studied or the questions to be answered in order to
address the decision maker’s objectives.
5. Perform the experiments and analyze the results. Run the appropriate
simulations to obtain the information required to make an informed decision.

As with any modeling effort, this approach is not necessarily serial. Often, you
must return to previous steps as new information arises or as results suggest
modifications to the model. Therefore, simulation is an evolutionary process that
must involve not only analysts and model developers, but also the users of the
results.
Demonstration of VaR Simulation
We use an example Monte Carlo simulation model published by Beneda5 to
demonstrate simulation of VaR and other forms of risk. Beneda considered four
risk categories, each with different characteristics of data availability:
• Financial risk—controllable (interest rates, commodity prices, currency
exchange)
• Pure risk—controllable (property loss and liability)
• Operational—uncontrollable (costs, input shortages)
• Strategic—uncontrollable (product obsolescence, competition)
Beneda’s model involved forward sale (45 days forward) of an investment
(CD) with a price that was expected to follow the uniform distribution ranging
from 90 to 110. Half of these sales (20,000 units) were in Canada, which involved an
exchange rate variation that was probabilistic (uniformly distributed from −0.008 to
−0.004). The expected price of the CD was normally distributed with mean 0.8139,
standard deviation 0.13139. Operating expenses associated with the Canadian operation
were normally distributed with mean $1,925,000 and standard deviation
$192,500. The other half of sales were in the US, where there was risk of
customer liability lawsuits (Poisson distributed with mean 2), with expected severity per
lawsuit that was lognormally distributed with mean $320,000, standard deviation
$700,000. Operational risks associated with US operations were normally
distributed with mean $1,275,000, standard deviation $127,500. The Excel spread-
sheet model for this is given in Table 6.1.
In Crystal Ball, entries in cells B2, B3, B7, B10, B21, B22 and B23 were entered
as assumptions with the parameters given in column C. Prediction cells were defined
for cells B17 (Canadian net income) and B29 (Total net income after tax). Results for
cell B17 are given in Fig. 6.2, with a probability of 0.9 prescribed in Crystal Ball so
that we can identify the VaR at the 0.05 level.
Statistics are given in Table 6.2.
The value at risk at the 0.95 level for this investment was −$540,245.40, meaning
that there was a 0.05 probability of doing worse than losing $540,245.40 in US
dollars. The overall investment outcome is shown in Fig. 6.3.
Statistics are given in Table 6.3.
On average, the investment paid off, with a positive value of $96,022.98.
However, the worst case of the 500 trials was a loss of over $14 million. (The best was a
gain of over $1.265 million.) The value at risk shows a loss of $1.14 million, and
Fig. 6.3 shows that the distribution of this result is highly skewed (note the skewness
measures for Figs. 6.2 and 6.3).
Beneda proposed a model reflecting hedging with futures contracts, and insurance
for customer liability lawsuits. Using the hedged price in cell B4, and insurance
against customer suits of $640,000, the after-tax profit is shown in Fig. 6.4.
Mean profit dropped to $84,656 (standard deviation $170,720), with minimum
−$393,977 (maximum gain $582,837). The value at risk at the 0.05 level was a loss
of $205,301. Thus there was an expected cost of hedging (mean profit dropped from
$96,022 to $84,656), but the worst case was much improved (loss of over $14
million to loss of $393,977) and value at risk improved from a loss of over $1.14
million to a loss of $205 thousand.
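The US side of Beneda's unhedged model can be re-created compactly. Python's standard library lacks a Poisson sampler, so one is included; parameterizing the lognormal severity by its arithmetic mean and standard deviation is an assumption about how the Crystal Ball inputs were defined:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Sample a Poisson count via Knuth's method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def lognormal(mean, sd):
    """Lognormal draw parameterized by its arithmetic mean and sd (an assumption)."""
    s2 = math.log(1 + (sd / mean) ** 2)
    return random.lognormvariate(math.log(mean) - s2 / 2, math.sqrt(s2))

def us_income():
    revenue = random.uniform(90, 110) * 20_000          # local sales at uncertain price
    losses = poisson(2) * lognormal(320_000, 700_000)   # lawsuit frequency x severity
    losses += random.gauss(1_275_000, 127_500)          # operational risk
    return revenue - losses

incomes = sorted(us_income() for _ in range(10_000))
var_05 = incomes[int(0.05 * len(incomes))]  # 0.05-level value at risk
```

The long left tail contributed by the lawsuit severity is what drives the pronounced skewness visible in Fig. 6.3.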
Conclusions
Value at risk is a useful concept in terms of assessing probabilities of investment
alternatives. It is a point estimator, like the mean (which could be viewed as the
value at risk for a probability of 0.5). It is only as valid as the assumptions made,
which include the distributions used in the model and the parameter estimates. This
Table 6.1 Excel model of investment

      A                        B                C
 1    Financial risk           Formulas         Distribution
 2    Expected basis           −0.006           Uniform(−0.008, −0.004)
 3    Expected price per CD    0.8139           Normal(0.8139, 0.13139)
 4    March futures price      0.8149
 5    Expected basis 45 days   =B2
 6    Expected CD futures      0.8125
 7    Operating expenses       1.925            Normal(1,925,000, 192,500)
 8    Sales                    20,000
 9
10    Price $US                100              Uniform(90, 110)
11    Sales                    20,000
12    Current                  0.8121
13    Receipts                 =B10*B11/B12
14    Expected exchange rate   =B3
15    Revenues                 =B13*B14
16    COGS                     =B7*1,000,000
17    Operating income         =B15−B16
18
19    Local sales              20,000
20    Local revenues           =B10*B19
21    Lawsuit frequency        2                Poisson(2)
22    Lawsuit severity         320,000          Lognormal(320,000, 700,000)
23    Operational risk         1,275,000        Normal(1,275,000, 127,500)
24    Losses                   =B21*B22+B23
25    Local income             =B20−B24
26
27    Total income             =B17+B25
28    Taxes                    =0.35*B27
29    After Tax Income         =B27−B28
is true of any simulation. However, value at risk provides a useful tool for financial
investment. Monte Carlo simulation provides a flexible mechanism to measure it, for
any given assumption.
However, value at risk has undesirable properties, especially for gain and loss
data with non-elliptical distributions. It satisfies the well-accepted principle of
diversification under the assumption of normally distributed data. However, it violates
the widely accepted subadditivity rule; i.e., the portfolio VaR may be larger than the
sum of the component VaRs. The reason is that VaR only considers the extreme
Fig. 6.2 Output for Canadian investment. © Oracle. Used with permission
Table 6.2 Output statistics for operating income

Forecast: Operating income
Statistic                    Forecast values
Trials                       500
Mean                         78,413.99
Median                       67,861.89
Mode                         –
Standard deviation           385,962.44
Variance                     148,967,005,823.21
Skewness                     −0.0627
Kurtosis                     2.99
Coefficient of variability   4.92
Minimum                      −1,183,572.09
Maximum                      1,286,217.07
Mean standard error          17,260.77
percentile of a gain/loss distribution without considering the magnitude of the loss.
As a consequence, a variant of VaR, usually labeled Conditional-Value-at-Risk
(or CVaR), has been used. With respect to computational issues, optimization
with CVaR can be very simple, which is another reason for its adoption. This
pioneering work was initiated by Rockafellar and Uryasev,6 who showed that CVaR constraints in
optimization problems can be formulated as linear constraints. CVaR represents a
weighted average between the value at risk and losses exceeding the value at risk.
CVaR is a risk assessment approach used to reduce the probability a portfolio will
Fig. 6.3 Output for after tax income. © Oracle. Used with permission
Table 6.3 Output statistics for after tax income

Forecast: After tax income
Statistic                    Forecast values
Trials                       500
Mean                         96,022.98
Median                       304,091.58
Mode                         –
Standard deviation           1,124,864.11
Variance                     1,265,319,275,756.19
Skewness                     −7.92
Kurtosis                     90.69
Coefficient of variability   11.71
Minimum                      −14,706,919.79
Maximum                      1,265,421.71
Mean standard error          50,305.45
incur large losses assuming a specified confidence level. CVaR has been applied to
financial trading portfolios,7 implemented through scenario analysis,8 and applied
via system dynamics.9 A popular refinement is to use copulas, multivariate
distributions permitting the linkage of a huge number of distributions.10 Copulas
have been implemented through simulation modeling11 as well as through analytic
modeling.12
We will show how specified confidence levels can be modeled through chance
constraints in the next chapter. It is possible to maximize portfolio return subject to
constraints including Conditional Value-at-Risk (CVaR) and other downside risk
measures, both absolute and relative to a benchmark (market and liability-based).
Simulation-based CVaR optimization models can also be developed.
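The CVaR described above reduces to averaging the tail at and beyond the VaR cutoff; a minimal sketch over sampled gain/loss outcomes:

```python
def var_cvar(pnl, level=0.05):
    """VaR and CVaR (both as positive losses) from sampled P&L outcomes."""
    ordered = sorted(pnl)                  # worst outcomes first
    cut = max(1, int(level * len(ordered)))
    var = -ordered[cut - 1]                # loss at the chosen percentile
    cvar = -sum(ordered[:cut]) / cut       # mean loss at and beyond the cutoff
    return var, cvar
```

CVaR is always at least as large as VaR, which is what makes it sensitive to the magnitude, not just the frequency, of tail losses.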
Notes
1. Jorion, P. (1997). Value at Risk: The New Benchmark for Controlling Market
Risk. New York: McGraw-Hill.
2. Danielson, J. and de Vries, C.G. (1997). Extreme returns, tail estimation, and
value-at-risk. Working Paper, University of Iceland (http://www.hag.hi.is/
~jond/research); Fallon, W. (1996). Calculating value-at-risk. Working Paper,
Columbia University (bfallon@groucho.gsb.columbia.edu); Garman,
M.B. (1996). Improving on VaR. Risk 9(5).
3. JP Morgan (1996). RiskMetrics™-technical document, 4th ed.
4. Evans, J.R. and Olson, D.L. (2002). Introduction to Simulation and Risk
Analysis 2nd ed. Upper Saddle River, NJ: Prentice Hall.
Fig. 6.4 After-tax profit with hedging and insurance. © Oracle. Used with permission
5. Beneda, N. (2004). Managing an asset management firm’s risk portfolio, Jour-
nal of Asset Management 5:5, 327–337.
6. Rockafellar, R.T. and Uryasev, S. (2002). Conditional value-at-risk for general
loss distributions. Journal of Banking & Finance 26:7, 1443–1471.
7. Al Janabi, M.A.M. (2009). Corporate treasury market price risk management: A
practical approach for strategic decision-making. Journal of Corporate Trea-
sury Management 3(1), 55–63.
8. Sawik, T. (2011). Selection of a dynamic supply portfolio in make-to-order
environment with risks. Computers & Operations Research 38(4), 782–796.
9. Mehrjoo, M. and Pasek, Z.J. (2016). Risk assessment for the supply chain of fast
fashion apparel industry: A system dynamics framework. International Journal
of Production Research 54(1), 28–48.
10. Guégan, D. and Hassani, B.K. (2012). Operational risk: A Basel II++ step
before Basel III. Journal of Risk Management in Financial Institutions 6(1),
37–53.
11. Hsu, C.-P., Huang, C.-W. and Chiou, W.-J. (2012). Effectiveness of copula-
extreme value theory in estimating value-at-risk: Empirical evidence from Asian
emerging markets. Review of Quantitative Finance & Accounting 39(4),
447–468.
12. Kaki, A., Salo, A. and Talluri, S. (2014). Scenario-based modeling of interde-
pendent demand and supply uncertainties. IEEE Transactions on Engineering
Management 61(1), 101–113.
Chance-Constrained Models 7
Chance-constrained programming was developed as a means of describing
constraints in mathematical programming models in the form of probability levels
of attainment.1 Consideration of chance constraints allows decision makers to
consider mathematical programming objectives in terms of the probability of their
attainment. If α is a predetermined confidence level desired by a decision maker, the
implication is that a constraint will be violated in at most (1 − α) of all possible cases.
Chance constraints are thus special types of constraints in mathematical program-
ming models, where there is some objective to be optimized subject to constraints. A
typical mathematical programming formulation might be:
Maximize f Xð Þ
Subject to : Ax � b
The objective function f(X) can be profit, with the function consisting of
n variables X as the quantities of products produced and f(X) including profit
contribution rate constants. There can be any number m of constraints in Ax, each
limited by some constant b. Chance constraints can be included in Ax, leading to a
number of possible chance constraint model forms. Charnes and Cooper presented
three formulations2:
(1) Maximize the expected value of a probabilistic function:

Maximize E[Y], where Y = f(X)
Subject to: Pr{Ax ≤ b} ≥ α
Any coefficient of this model (Y, A, b) may be probabilistic. The intent of this
formulation would be to maximize (or minimize) a function while assuring α
probability that a constraint is met. While the expected value of a function usually
involves a linear functional form, chance constraints will usually be nonlinear. This
formulation would be appropriate for many problems seeking maximum profit
subject to staying within resource constraints at some specified probability.
(2) Minimize variance:

Minimize Var[Y]
Subject to: Pr{Ax ≤ b} ≥ α
The intent is to accomplish some functional performance level while satisfying
the chance constraint set. This formulation might be used in identifying portfolio
investments with minimum variance, which often is used as a measure of risk.
(3) Maximize probability of satisfying a chance constraint set

Max Pr{Y ≥ target}
Subject to: Pr{Ax ≤ b} ≥ α
This formulation is generally much more difficult to accomplish, especially in the
presence of joint chance constraints (where simultaneous satisfaction of chance
constraints is required). The only practical means to do this is running a series of
models seeking the highest α level yielding a feasible solution.
All three models include a common general chance constraint set, allowing
probabilistic attainment of functional levels:
Pr{Ax ≤ b} ≥ α
This set is nonlinear, requiring nonlinear programming solution. This inhibits the
size of the model to be analyzed, as large values of model parameters m (number of
constraints) and especially n (number of variables) make it much harder to obtain a
solution.
Most chance-constrained applications assume normal distributions for model
coefficients. Goicoechea and Duckstein presented deterministic equivalents for
non-normal distributions.3 However, in general, chance-constrained models become
much more difficult to solve if the variance of parameter estimates increases (the
feasible region shrinks drastically when more dispersed distributions are used). The
same is true if α is set at too high a value (for the same reason—the feasible region
shrinks).
Chance-constrained applications also usually assume coefficient independence.
This is often appropriate. However, it is not appropriate in many investment
analyses. Covariance elements of coefficient estimates can be incorporated within
chance constraints, eliminating the need to assume coefficient independence. How-
ever, this requires significantly more data, and vastly complicates model data entry.
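Under the normality assumption just described, a single chance constraint has a closed-form deterministic equivalent: Pr{a'x ≤ b} ≥ α becomes E[a]'x + z_α·sqrt(x'Σx) ≤ b, where z_α is the one-tailed normal value. A minimal Python sketch (the coefficient data here are hypothetical illustrations, not from the text):

```python
import numpy as np
from scipy.stats import norm

def satisfies_chance_constraint(mean_a, cov_a, x, b, alpha):
    """Deterministic equivalent of Pr{a'x <= b} >= alpha for
    a ~ N(mean_a, cov_a): mean_a'x + z_alpha*sqrt(x' cov_a x) <= b."""
    z = norm.ppf(alpha)                      # one-tailed normal value
    lhs = mean_a @ x + z * np.sqrt(x @ cov_a @ x)
    return lhs <= b

# Hypothetical data: two resource coefficients, mildly uncertain
mean_a = np.array([1.0, 1.0])
cov_a = np.array([[0.04, 0.00],
                  [0.00, 0.01]])
x = np.array([3.0, 4.0])

print(satisfies_chance_constraint(mean_a, cov_a, x, b=10.0, alpha=0.95))  # prints True
```

Raising α (a larger z) or making the coefficient distributions more dispersed (a larger Σ) inflates the z·sqrt term, which is exactly why the feasible region shrinks as noted above.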
Chance-Constrained Applications
Chance-constrained models are not nearly as widespread as linear programming
models. A number of applications involve financial planning, to include retirement
fund planning models.4 Chance constraints have also been applied to stress testing
value-at-risk (and CVaR).5 Beyond financial planning, chance-constrained models
have been applied to supplier selection6 in operations, as well as in project selection
in construction.7 A multi-attribute model for selection of infrastructure projects in an
aerospace firm seeking to maximize company performance subject to probabilistic
budget constraints has been presented.8 There are green chance-constrained models
seeking efficient climate policies considering available investment streams and
renewable energy technologies.9
Chance constraints have been incorporated into data envelopment analysis
models.10 Chance-constrained programming has been compared with data envelop-
ment analysis and multi-objective programming in a supply chain vendor selection
model.11
Portfolio Selection
Assume a given sum of money is to be invested in n possible securities. We denote by
x = (x1, ..., xn) an investment proportion vector (also called a portfolio). As for the
number of securities n, many large institutions have “approved lists” where n is
anywhere from several hundred to a thousand. When attempting to form a portfolio
to mimic a large broad-based index (like the S&P 500, EAFE, or Wilshire 5000), n can be
up to several thousand. We denote by ri the percent return of the i-th security; other
objectives to characterize the i-th security could be:
• si is social responsibility of i-th security
• gi is growth in sales of i-th security
• ai is amount invested in R&D of i-th security
• di is dividends of i-th security
• qi is liquidity of i-th security
Consideration of such investment objectives will lead to utilization of multi-
objective programming models. The investor tries to select several possible
securities from the n securities to maximize his/her profit, which leads to the
investor’s decision problem as:
Max rp = Σ(i=1..n) ri xi
s.t. Ax ≤ b          (1)
where
• rp is percent return on a portfolio over the holding period
• Ax ≤ b defines the feasible region in decision space
In the investor’s decision problem (1), the quantity rp to be maximized is a
random variable because rp is a function of the individual security ri random
variables. Therefore, Eq. (1) is a stochastic programming problem. Stochastic
programming models are similar to deterministic optimization problems where the
parameters are known only within certain bounds but take advantage of the fact that
probability distributions governing the data are known or can be estimated. To solve
a stochastic programming problem, we need to convert the stochastic programming
to an equivalent deterministic programming problem. A popular way of doing this is
to use a utility function U(·), which maps stochastic terms into their deterministic
equivalents. For example, by use of the means μi, variances σii, and covariances σij of
the ri, a portfolio selection problem is to maximize expected utility.
E[U(rp)] = E[rp] − λ Var[rp],

where λ ≥ 0 is a risk-aversion coefficient that may differ from investor to
investor. In other words, a portfolio selection problem can be modeled by a
trade-off between the mean and variance of the random variable rp:

Max E[U(rp)] = E[rp] − λ Var[rp]
λ ≥ 0
Ax ≤ b
Assuming U(rp) is Taylor-series expandable, the validity of E[U(rp)], and thus of
the above problem, can be guaranteed if r = (r1, ..., rn) follows the multinormal
distribution. As an alternative to Markowitz's mean-variance framework,
chance-constrained programming has also been employed to model the portfolio
selection problem. We will demonstrate this use of chance-constrained programming
in the next section.
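The mean-variance objective above can be evaluated directly for any candidate portfolio; a minimal Python sketch with made-up means and covariances (λ is the risk-aversion coefficient):

```python
import numpy as np

def mv_utility(x, mu, sigma, lam):
    """E[U(rp)] = E[rp] - lam * Var[rp] for proportions x,
    mean returns mu, and covariance matrix sigma."""
    return mu @ x - lam * (x @ sigma @ x)

mu = np.array([0.10, 0.05])                       # hypothetical means
sigma = np.array([[0.04, -0.01],
                  [-0.01, 0.01]])                 # hypothetical covariances
x = np.array([0.5, 0.5])                          # proportions summing to 1

print(mv_utility(x, mu, sigma, lam=2.0))
```

Note that Var[rp] = x'Σx picks up the covariance terms automatically; with the negative covariance above, diversifying lowers variance and raises utility for any λ > 0.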
Demonstration of Chance-Constrained Programming
The following example was taken from Lee and Olson (2006).12 The Hal Chase
Investment Planning Agency is in business to help investors optimize their return
from investment, to include consideration of risk. By using nonlinear programming
models, Hal Chase can control risk.
Hal deals with three investment mediums: a stock fund, a bond fund, and his own
Sports and Casino Investment Plan (SCIP). The stock fund is a mutual fund
investing in openly traded stocks. The bond fund focuses on the bond market,
which has a much more stable return, although a significantly lower expected return. SCIP
is a high-risk scheme, often resulting in heavy losses, but occasionally coming
through with spectacular gains. In fact, Hal takes a strong interest in SCIP,
personally studying investment opportunities and placing investments daily. The
return on these mediums, as well as their variance and correlation, are given in
Table 7.1.
Note that there is a predictable relationship between the relative performance of
the investment opportunities, so the covariance terms report the tendency of
investments to do better or worse given that another investment did better or
worse. This indicates that variables S and B tend to go up and down together
(although with a fairly weak relationship), while variable G tends to move opposite
to the other two investment opportunities.
Hal can develop a mathematical programming model to reflect an investor’s
desire to avoid risk. Hal assumes that returns on investments are normally distributed
around the average returns reported above. He bases this on painstaking research he
has done with these three investment opportunities.
Maximize Expected Value of Probabilistic Function
Using this form, the objective is to maximize return:
Expected return = 0.148S + 0.060B + 0.152G
subject to staying within budget:
Budget = 1S + 1B + 1G ≤ 1000
having a probability of positive return greater than a specified probability:
Pr{Expected return ≥ 0} ≥ α
with all variables greater than or equal to 0:
S, B, G ≥ 0
The solution will depend on the confidence limit α. Using EXCEL, and varying α
over 0.5, 0.8, 0.9, 0.95, and 0.99, we obtain the solutions given in Table 7.2.
Table 7.1 Hal Chase investment data

                    Stock S     Bond B      SCIP G
Average return      0.148       0.060       0.152
Variance            0.014697    0.000155    0.160791
Covariance with S               0.000468    −0.002222
Covariance with B                           −0.000227
The probability determines the penalty α. At a probability of 0.80, the
one-tailed normal z-value is 0.842, and thus the chance constraint is:

0.148S + 0.060B + 0.152G ≥ 0.842 × SQRT(0.014697S² + 0.000936SB
− 0.004444SG + 0.000155B² − 0.000454BG + 0.160791G²)
The only difference in the constraint set for the different rows of Table 7.2 is that
α is varied. The effect seen is that investment is shifted from the high-risk gamble to
a bit safer stock. The stock return has low enough variance to assure the specified
probabilities given. Had it been higher, the even safer bond would have entered into
the solution at higher specified probability levels.
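The formulation (1) model above can also be reproduced outside of EXCEL; the sketch below uses scipy's SLSQP solver on the Table 7.1 data (one of several possible solver choices; nonlinear solver results depend on the starting point, so treat the numbers as approximate):

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.148, 0.060, 0.152])               # S, B, G returns (Table 7.1)
cov = np.array([[0.014697, 0.000468, -0.002222],
                [0.000468, 0.000155, -0.000227],
                [-0.002222, -0.000227, 0.160791]])

def solve_form1(z):
    """Max r'x s.t. budget <= 1000 and the no-loss chance
    constraint r'x >= z * sqrt(x' cov x)."""
    cons = [{'type': 'ineq', 'fun': lambda x: 1000.0 - x.sum()},
            {'type': 'ineq',
             'fun': lambda x: r @ x - z * np.sqrt(x @ cov @ x)}]
    return minimize(lambda x: -(r @ x), x0=np.array([300.0, 300.0, 300.0]),
                    bounds=[(0.0, None)] * 3, constraints=cons,
                    method='SLSQP')

sol = solve_form1(0.842)          # alpha = 0.80
print(sol.x.round(2), round(r @ sol.x, 2))
```

At z = 0.842 (α = 0.80) the solver should land near the 585/415 stock-gamble split of Table 7.2, since the chance constraint defines a convex (second-order cone) feasible region.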
Minimize Variance
With this chance-constrained form, Hal is risk averse. He wants to minimize risk
subject to attaining a prescribed level of gain. The variance–covariance matrix
measures risk in one form, and Hal wants to minimize this function.
Min 0.014697S² + 0.000936SB − 0.004444SG + 0.000155B² − 0.000454BG
+ 0.160791G²
This function can be constrained to reflect other restrictions on the decision. For
instance, there typically is some budget of available capital to invest.
S + B + G ≤ 1000 for a $1000 budget
Finally, Hal only wants to minimize variance given that he attains a prescribed
expected return. Hal wants to explore four expected return levels: $50/$1000
invested, $100/$1000 invested, $150/$1000 invested, and $200/$1000 invested.
Note that these four levels reflect expected returns of 5%, 10%, 15%, and 20%.
Table 7.2 Results for chance-constrained formulation (1)

Probability {return ≥ 0}   α       Stock     Bond      Gamble     Expected return
0.50                       0       –         –         1000.00    152.00
0.80                       0.842   585.19    –         414.81     149.66
0.90                       1.282   863.18    –         136.82     148.55
0.95                       1.645   515.28    427.39    57.33      110.62
0.99                       2.326   260.87    707.91    31.21      85.83
0.148S + 0.06B + 0.152G ≥ r where r = 50, 100, 150, and 200
Solution Procedure
The EXCEL input file starts with the objective, MIN, followed by the list of
variables. Then we include the constraint set. The constraints can be stated in any
form, but the partial derivatives of the variables need to consider each constraint
stated in less-than-or-equal-to form. Therefore, the original model is transformed to:
Min (0.014697S² + 0.000936SB − 0.004444SG + 0.000155B² − 0.000454BG + 0.160791G²)
s.t. S + B + G ≤ 1000                  budget constraint
     0.148S + 0.06B + 0.152G ≥ 50      gain constraint
     S, B, G ≥ 0
The solution for each of the four gain levels is given in Table 7.3.
The first solution indicates that the lowest variance with an expected return of $50
per $1000 invested would be to invest $20.25 in S (stocks), $778.56 in B (the bond
fund), and $1.90 in G (the risky alternative), keeping the remaining $199.29 as slack.
The variance is 100.564. This will yield an average return of 5% on the money invested.
Increasing specified gain to $100 yields the designed expected return of $100 with
a variance of $2807. Raising expected gain to 150 yields the prescribed $150 with a
variance of $43,872. Clearly this is a high-risk solution. But it also is near the
maximum expected return (if all $1000 was placed on the riskiest alternative, G, the
expected return would be maximized at $152 per $1000 invested). A model
specifying a gain of $200 yields an infeasible solution, and thus by running multiple
models, we can identify the maximum gain available (matching the linear program-
ming model without chance constraints). It can easily be seen that lower variance is
obtained by investing in bonds, then shifting to stocks, and finally to the high-risk
gamble option.
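The minimize-variance form can be sketched the same way; a hedged scipy version of the model above, again assuming the Table 7.1 data and treating the solver output as approximate:

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.148, 0.060, 0.152])               # S, B, G returns (Table 7.1)
cov = np.array([[0.014697, 0.000468, -0.002222],
                [0.000468, 0.000155, -0.000227],
                [-0.002222, -0.000227, 0.160791]])

def min_variance(gain):
    """Min x' cov x s.t. budget <= 1000 and r'x >= gain."""
    cons = [{'type': 'ineq', 'fun': lambda x: 1000.0 - x.sum()},
            {'type': 'ineq', 'fun': lambda x: r @ x - gain}]
    return minimize(lambda x: x @ cov @ x, x0=np.array([250.0, 500.0, 250.0]),
                    bounds=[(0.0, None)] * 3, constraints=cons,
                    method='SLSQP')

for gain in (50, 100, 150):
    sol = min_variance(gain)
    print(gain, sol.x.round(2), round(sol.fun, 1))
```

Because the objective is a convex quadratic and the constraints are linear, this is an ordinary QP, and a local solver like SLSQP finds the global minimum; requesting a gain of 200 makes the model infeasible, matching the text.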
Maximize Probability of Satisfying Chance Constraint
The third chance-constrained form is implicitly attained by using the first form
example above, stepping up α until the model becomes infeasible. When the
probability of satisfying the chance constraint was set too high, a null solution
Table 7.3 Results for chance-constrained formulation (2)

Specified gain   Variance    Stock     Bond      Gamble
≥50              100.564     20.25     778.56    1.90
≥100             2807.182    413.28    547.25    39.47
≥150             43,872      500.00    –         500.00
≥152             160,791     –         –         1000.00
was generated (do not invest anything; keep all the $1000). Table 7.4 shows the
solutions obtained, with the highest α (expressed as a z-value) yielding a solution
being 4.8, associated with a probability very close to 1.0 (0.999999 according to EXCEL).
Real Stock Data
To check the validity of the ideas presented, we took real stock data from the
Internet, taking daily stock prices for six dispersed, large firms, as well as the
S&P500 index. Data was manipulated to obtain daily rates of return over the period
1999 through 2008 (2639 observations—dividing closing price by closing price of
prior day).
r = Vt / Vt−1

where Vt is the value (closing price) for day t and Vt−1 the value for the prior day.
(The arithmetic return yields identical results, only subtracting 1 from each data point.)

rarith = (Vt − Vt−1) / Vt−1
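Both return definitions are one-liners on a price series; a sketch with a made-up series of closing prices:

```python
import numpy as np

prices = np.array([100.0, 102.0, 101.0, 103.5])   # hypothetical closes

r_ratio = prices[1:] / prices[:-1]                # Vt / Vt-1
r_arith = r_ratio - 1.0                           # (Vt - Vt-1) / Vt-1
r_log = np.log(r_ratio)                           # ln(Vt / Vt-1)

print(np.allclose(r_log, np.log(1.0 + r_arith)))  # prints True
```

The identity r_log = ln(1 + r_arith) is what makes the two formulations interchangeable for the distribution-fitting exercise that follows.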
We first looked at possible distributions. Figure 7.1 shows the Crystal Ball best fit
for all data (using the Chi-square criterion—same result for Kolmogorov–Smirnov
or Anderson criteria), while Fig. 7.2 shows fit with the logistic distribution, and
Fig. 7.3 with the normal distribution.
The parameters for the Student's-t distribution fit were a scale of 0.01 and 2.841
degrees of freedom. For the logistic distribution, the scale parameter was 0.01.
The data had a slight negative skew, with a skewness score of −1.87. It had a high
degree of kurtosis (73.65), and thus was much more peaked than a normal distribution.
This demonstrates “fat tail” distributions that are often associated with financial
returns. Figures 7.1–7.3 clearly show how the normal assumption is too spread out
for probabilities close to 0.5, and too narrow for the extremes (tails). The logistic
distribution gives a better fit, but Student’s-t distribution does better yet.
Table 7.5 shows the means, standard deviations, and covariances of these
investments.
Table 7.4 Results for chance-constrained formulation (3)

α            Stock     Bond      Gamble    Expected return
3            157.84    821.59    20.57     75.78
4            73.21     914.93    11.86     67.53
4.5          38.66     953.02    8.32      64.17
4.8          11.13     983.38    5.48      61.48
4.9 and up   –         –         –         0
An alternative statistic for returns is the logarithmic return, or continuously
compounded return, using the formula:
rlog = ln(Vf / Vi)
The Student’s-t distribution again had the best fit, followed by logistic and normal
(see Fig. 7.4).
This transformation yields slightly different statistics, as shown in Table 7.6.
Like the arithmetic return, the logarithmic return is centered on 0. There is a
difference (slight) between logarithmic return covariances and arithmetic return
covariances.

Fig. 7.1 Data distribution fit, Student's-t. © Oracle. Used with permission

Fig. 7.2 Logistic fit. © Oracle. Used with permission

The best distribution fit was obtained with the original data (identical
to arithmetic return), so we used that data for our chance-constrained calculations. If
logarithmic return data was preferred, the data in Table 7.6 could be used in the
chance-constrained formulations.
Chance-Constrained Model Results
We ran the data through chance-constrained models assuming a normal distribution,
using the means, variances, and covariances from Table 7.5. The model included a
budget limit of $1000 and all variables ≥ 0, with a chance constraint requiring no
loss, obtaining the results shown in Table 7.7.
Fig. 7.3 Normal model fit to data. © Oracle. Used with permission
Table 7.5 Daily data

              Ford      IBM       Pfizer    SAP       WalMart   XOM       S&P
Mean          1.00084   1.00033   0.99935   0.99993   1.00021   1.00012   0.99952
Std. dev      0.03246   0.02257   0.02326   0.03137   0.02102   0.02034   0.01391
Min           0.62822   0.49101   0.34294   0.81797   0.53203   0.51134   0.90965
Max           1.29518   1.13160   1.10172   1.33720   1.11073   1.17191   1.11580
Cov(Ford)     0.00105   0.00019   0.00014   0.00020   0.00016   0.00015   0.00022
Cov(IBM)                0.00051   0.00009   0.00016   0.00013   0.00012   0.00018
Cov(Pfizer)                       0.00054   0.00011   0.00014   0.00014   0.00014
Cov(SAP)                                    0.00098   0.00010   0.00016   0.00016
Cov(WM)                                               0.00044   0.00011   0.00014
Cov(XOM)                                                        0.00041   0.00015
Cov(S&P)                                                                  0.00019
Maximizing return is a linear programming model, with an obvious solution of
investing all available funds in the option with the greatest return (Ford). This has the
greatest expected return, but also the highest variance.
Minimizing variance is equivalent to chance-constrained form (2). The solution
avoided Ford (which had a high variance), and spread the investment out among the
other options, but had a small loss.
A series of models using chance-constrained form (1) were run. Maximizing
expected return subject to investment ≤ 1000, with the added chance constraint
Pr{return ≥ 970} ≥ 0.95, was run for both the normal and t-distributions:

Max expected return
s.t. Sum investment ≤ 1000
     Pr{return ≥ 970} ≥ 0.95
     All investments ≥ 0
It can be seen in Table 7.7 that the t-distribution was less restrictive, resulting in
more investment in the riskier Ford option, but having a slightly higher variance
(standard deviation). The chance constraint was binding under both assumptions
(normal and Student's-t). Under the t-distribution, there was a 0.9 probability of a
return of 979.50, and a 0.8 probability of a return of 988.09. Further chance
constraint models were run assuming the t-distribution. For the model:
Max expected return
s.t. Sum investment ≤ 1000
     Pr{return ≥ 970} ≥ 0.95
     Pr{return ≥ 980} ≥ 0.9
     All investments ≥ 0
Fig. 7.4 Distribution comparison from Crystal Ball. © Oracle. Used with permission
Table 7.6 Daily data for logarithmic return

              Ford       IBM        Pfizer     SAP        WalMart    XOM        S&P
Mean          −0.00029   0.00015    −0.00084   −0.00038   0.00006    −0.00017   −0.00068
Std. dev      0.03278    0.02455    0.02852    0.03087    0.02254    0.02219    0.01392
Min           −0.46486   −0.71130   −1.07021   −0.20093   −0.63105   −0.67073   −0.09470
Max           0.25865    0.12364    0.09687    0.29058    0.10502    0.15863    0.10957
Cov(Ford)     0.00107    0.00019    0.00013    0.00020    0.00016    0.00015    0.00022
Cov(IBM)                 0.00060    0.00009    0.00015    0.00013    0.00012    0.00018
Cov(Pfizer)                         0.00081    0.00011    0.00014    0.00013    0.00014
Cov(SAP)                                       0.00095    0.00010    0.00016    0.00016
Cov(WM)                                                   0.00051    0.00011    0.00014
Cov(XOM)                                                             0.00049    0.00015
Cov(S&P)                                                                        0.00019
Table 7.7 Model results

Model                                       Ford      IBM       Pfizer    SAP      WM        XOM       S&P       Return    StdDev
Max return                                  1000.00   –         –         –        –         –         –         1000.84   32.404
Min variance                                –         45.987    90.869    30.811   127.508   116.004   588.821   999.76    13.156
Normal Pr{>970}>0.95                        398.381   283.785   –         –        222.557   95.277    –         1000.49   18.534
t Pr{>970}>0.95                             607.162   296.818   –         –        96.020    –         –         1000.63   23.035
t Pr{>970}>0.95, Pr{>980}>0.9               581.627   301.528   –         –        116.845   –         –         1000.61   22.475
t Pr{>970}>0.95, Pr{>980}>0.9, Pr{>990}>0.8 438.405   279.287   –         –        220.254   62.054    –         1000.51   19.320
Max Pr{>1000}                               16.275    109.867   105.586   38.748   174.570   172.244   382.711   999.91    13.310

The bold emphasis signifies the instance with high variance
The expected return was only slightly less, with the constraint Pr{return ≥ 980}
≥ 0.9 binding. There was a 0.95 probability of a return of 970.73, and a 0.8
probability of a return of 988.38. A model using three chance constraints was also run:
Max expected return
s.t. Sum investment ≤ 1000
     Pr{return ≥ 970} ≥ 0.95
     Pr{return ≥ 980} ≥ 0.9
     Pr{return ≥ 990} ≥ 0.8
     All investments ≥ 0
This yielded a solution where the 0.95 probability of return was 974.83, the 0.9
probability of return was 982.80, and the 0.8 probability of return was 990 (binding).
Finally, a model was run to maximize the probability of a return ≥ 1000 (chance-
constrained model type 3).
Minimize D
s.t. Sum investment ≤ 1000
     Pr{return ≥ 970} ≥ 0.95
     Pr{return ≥ 980} ≥ 0.9
     D = 1000 − {return level attained with probability 0.8}
     All investments ≥ 0
This was done by setting the deviation from an infeasible target. The solution
yielded a negative expected return at a low variance, with the 0.95 probability of
return 982.22, the 0.9 probability of return 987.71, and the 0.8 probability of return
992.67.
Conclusions
A number of different types of models can be built using chance constraints. The first
form is to maximize the linear expected return subject to attaining specified
probabilities of reaching specified targets. The second is to minimize variance. This
second form is not that useful on its own, in that the lowest variance is attained by not
investing at all; here we forced investment of the $1000 capital assumed. The third
form is to maximize the probability of attaining some target, which, to be useful, has
to be an otherwise infeasible target.
Chance-constrained models have been used in many applications. Here we have
focused on financial planning, but there have been applications whenever statistical
data is available in an optimization problem.
The models presented all were solved with EXCEL SOLVER. In full disclosure,
we need to point out that chance constraints create nonlinear optimization models,
which are somewhat unstable relative to linear programming models. Solutions are
very sensitive to the accuracy of input data. There also are practical limits to model
size. The variance–covariance matrix involves a number of parameters to enter into
EXCEL functions, which grow rapidly with the number of variables. In the simple
example there were three solution variables, with six elements to the variance–
covariance matrix. In the real example, there were seven solution variables (invest-
ment options). The variance–covariance matrix thus involved 28 nonlinear
expressions.
Notes
1. Charnes, A. and Cooper, W.W. (1959). Chance-constrained programming,
Management Science 6:1, 73–79; Charnes, A. and Cooper, W.W. (1962).
Chance-constraints and normal deviates, Journal of the American Statistical
Association 57, 134–148.
2. Charnes, A. and Cooper, W.W. (1963). Deterministic equivalents for optimizing
and satisficing under chance-constraints, Operations Research 11:1, 18–39.
3. Goicoechea, A. and Duckstein, L. (1987). Nonnormal deterministic equivalents
and a transformation in stochastic mathematical programming, Applied Mathe-
matics and Computation 21:1, 51–72.
4. Booth, L. (2004). Formulating retirement targets and the impact of time horizon
on asset allocation, Financial Services Review 13:1, 1–17.
5. Dupačová, J. and Polivka, J. (2007). Stress testing for VaR and CVaR. Quantitative
Finance 7(4), 411–421.
6. Bilsel, R.U. and Ravindran, A. (2011). A multiobjective chance constrained
programming model for supplier selection under uncertainty. Transportation
Research: Part B 45(8), 1284–1300.
7. Wibowo, A. and Kochendoerfer, B. (2011). Selecting BOT/PPP infrastructure
projects for government guarantee portfolio under conditions of budget and risk
in the Indonesian context. Journal of Construction Engineering & Management
137(7), 512–522.
8. Gurgur, C.Z. and Morley, C.T. (2008). Lockheed Martin Space Systems Com-
pany optimizes infrastructure project-portfolio, Interfaces 38:4, 251–262.
9. Held, H., Kriegler, E., Lessmann, K. and Edenhofer, O. (2009). Efficient climate
policies under technology and climate uncertainty, Energy Economics 31, S50–
S61.
10. Cooper, W.W., Deng, H., Huang, Z. and Li, S.X. (2002). Chance constrained
programming approaches to technical efficiencies and inefficiencies in stochas-
tic data envelopment analysis, Journal of the Operational Research Society
53:12, 1347–1356; Cooper, W.W., Deng, H., Huang, Z. and Li, S.X. (2004).
Chance constrained programming approaches to congestion in stochastic data
envelopment analysis, European Journal of Operational Research 155:2,
487–501.
11. Wu, D. and Olson, D.L. (2008). Supply chain risk, simulation, and vendor
selection, International Journal of Production Economics 114:2, 646–655.
12. Lee, S.M. and Olson, D.L. (2006). Introduction to Management Science 3rd
ed. Cincinnati: Thompson.
8 Data Envelopment Analysis in Enterprise Risk Management
Charnes, Cooper and Rhodes1 first introduced DEA (the CCR model) for efficiency
analysis of Decision-Making Units (DMUs). DEA can be used for modeling operational
processes, and its empirical orientation and absence of a priori assumptions have
resulted in its use in a number of studies involving efficient frontier estimation in
both the nonprofit and private sectors. DEA is widely applied in banking2 and
insurance.3 DEA has become a leading approach for efficiency analysis in many
fields, such as supply chain management,4 petroleum distribution system design,5
and government services.6 DEA and multicriteria decision making models have been
compared and extended.7
Moskowitz et al.8 presented a vendor selection scenario involving nine vendors
with stochastic measures given over 12 criteria. This model was used by Wu and
Olson9 in comparing DEA with multiple criteria analysis. We start with a discussion
of an advanced ERM technique, value-at-risk (VaR), and view it as a tool to
conduct risk management in enterprises.
While risk needs to be managed, taking risks is fundamental to doing business.
Profit by necessity requires accepting some risk.10 ERM provides tools to rationally
manage these risks. We will demonstrate multiple criteria and DEA models in the
enterprise risk management context with a hypothetical nuclear waste repository site
location problem.
Basic Data
For a set of data including a supply chain needing to select a repository for waste
dump siting, we have 12 alternatives with four criteria. Criteria considered include
cost, expected lives lost, risk of catastrophe, and civic improvement. Expected lives
lost reflects workers as well as expected local (civilian bystander) lives lost. The
hierarchy of objectives is:
Overall
  – Cost
  – Lives Lost
  – Risk
  – Civic Improvement
The alternatives available, with measures on each criterion (including two cate-
gorical measures) are given in Table 8.1:
Models require numerical data, and it is easier to keep things straight if we make
higher scores be better. So we adjust the Cost and Expected Lives Lost scores by
subtracting them from the maximum, and we assign consistent scores on a 0–100
scale for the qualitative ratings given Risk and Civic Improvement, yielding
Table 8.2:
Nondominated solutions can be identified by inspection. For instance, Nome AK
has the lowest estimated cost, so is by definition nondominated. Similarly, Wells NE
has the best expected lives lost. There is a tie for risk of catastrophe (Newark NJ and
Epcot Center FL have the best ratings, with tradeoff in that Epcot Center FL has
better cost and lives lost estimates while Newark NJ has better civic improvement
rating, and both are nondominated). There is also a tie for best civic improvement
(Newark NJ and Gary IN, with a tradeoff in that Gary IN has better cost and lives lost
estimates while Newark NJ has a better risk of catastrophe rating), and again both are
nondominated. There is one other nondominated solution (Rock Springs WY),
which can be compared to each of the other 11 alternatives and shown to be better
on at least one criterion.
Table 8.1 Dump site data
Alternatives Cost (billions) Expected lives lost Risk Civic improvement
Nome AK 40 60 Very high Low
Newark NJ 100 140 Very low Very high
Rock Springs WY 60 40 Low High
Duquesne PA 60 40 Medium Medium
Gary IN 70 80 Low Very high
Yakima Flats WA 70 80 High Medium
Turkey TX 60 50 High High
Wells NE 50 30 Medium Medium
Anaheim CA 90 130 Very high Very low
Epcot Center FL 80 120 Very low Very low
Duckwater NV 80 70 Medium Low
Santa Cruz CA 90 100 Very high Very low
Multiple Criteria Models
Nondominance can also be established by a linear programming model. We create a
variable for each criterion, with the decision variables being weights (held strictly
greater than 0 and summing to 1). The objective function maximizes the sum-product
of measure values and weights for the alternative site under examination, subject to
this function being strictly greater than the corresponding sum-product of measure
values and weights for each of the other sites. For the first alternative, the
formulation of the linear programming model is:
Max Σ(i=1..4) wi y1i

s.t. Σ(i=1..4) wi = 1

For each j from 2 to 12:
Σ(i=1..4) wi y1i ≥ Σ(i=1..4) wi yji + 0.0001

wi ≥ 0.0001
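This dominance test is an ordinary LP and can be run with any LP solver; a sketch using scipy's linprog on the Table 8.2 scores (res.status == 0 signals a feasible, optimal solution):

```python
import numpy as np
from scipy.optimize import linprog

# Scores from Table 8.2 (Cost, Lives, Risk, Improvement)
scores = {
    'Nome AK':         [60, 80, 0, 25],
    'Newark NJ':       [0, 0, 100, 100],
    'Rock Springs WY': [40, 100, 80, 80],
    'Duquesne PA':     [40, 100, 50, 50],
    'Gary IN':         [30, 60, 80, 100],
    'Yakima Flats WA': [30, 60, 30, 50],
    'Turkey TX':       [40, 90, 30, 80],
    'Wells NE':        [50, 110, 50, 50],
    'Anaheim CA':      [10, 10, 0, 0],
    'Epcot Center FL': [20, 20, 100, 0],
    'Duckwater NV':    [20, 70, 50, 25],
    'Santa Cruz CA':   [10, 40, 0, 0],
}

def nondominated(name):
    """True if weights exist making `name` strictly best:
    (yj - y0)'w <= -0.0001 for every other site j,
    sum(w) = 1, each wi >= 0.0001."""
    y0 = np.array(scores[name], dtype=float)
    others = [np.array(v, dtype=float)
              for k, v in scores.items() if k != name]
    A_ub = np.array([yj - y0 for yj in others])
    b_ub = np.full(len(others), -0.0001)
    res = linprog(c=-y0, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, 4)), b_eq=[1.0],
                  bounds=[(0.0001, None)] * 4)
    return res.status == 0        # 0 = optimal (i.e., feasible)

print([name for name in scores if nondominated(name)])
```

Running it over all 12 sites should reproduce the six nondominated alternatives reported below.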
This model was run for each of the 12 available sites. Non-dominated alternatives
(defined as at least as good on all criteria, and strictly better on at least one criterion
relative to all other alternatives) are identified if this model is feasible. The reason to
add the 0.0001 to some of the constraints is that strict dominance might not be
identified otherwise (the model would have ties). The solution for the Newark NJ
alternative was as shown in Table 8.3:
The weights were at their minimum for the criteria of Cost and Expected Lives Lost,
with roughly equal weights on Risk of Catastrophe and Civic Improvement. That
makes sense, because Newark NJ had the best scores for Risk of Catastrophe and
Civic Improvement and low scores on the other two criteria.
Running all 12 linear programming models, six solutions were feasible,
indicating that they were not dominated {Nome AK, Newark NJ, Rock Springs
Table 8.2 Scores used
Alternatives Cost Expected lives lost Risk Civic improvement
Nome AK 60 80 0 25
Newark NJ 0 0 100 100
Rock Springs WY 40 100 80 80
Duquesne PA 40 100 50 50
Gary IN 30 60 80 100
Yakima Flats WA 30 60 30 50
Turkey TX 40 90 30 80
Wells NE 50 110 50 50
Anaheim CA 10 10 0 0
Epcot Center FL 20 20 100 0
Duckwater NV 20 70 50 25
Santa Cruz CA 10 40 0 0
WY, Gary IN, Wells NE and Epcot Center FL}. The corresponding weights identified
are not unique (many different weight combinations might have yielded these
alternatives as feasible). These weights also reflect scale: here the range for Cost
was 60 and for Lives Lost was 110, while the range for the other two criteria was
100. In this case the difference is slight, but the scales do not need to be similar;
the more dissimilar they are, the more warped the weights. For the other six
dominated solutions, no set of weights would yield them as feasible. For instance,
Table 8.4 shows the infeasible solution for Duquesne PA:
Here Rock Springs WY and Wells NE had higher functional values than
Duquesne PA. This is clear by looking at criteria attainments. Rock Springs WY
is equal to Duquesne PA on Cost and Lives Lost, and better on Risk and Civic
Improvement.
Table 8.3 MCDM LP solution for Newark NJ
Criteria Cost Lives Risk Improve
Object Newark NJ 0 0 100 100 99.9801
Weights 0.0001 0.0001 0.4975 0.5023 1.0000
Nome AK 60 80 0 25 12.5708
Rock Springs WY 40 100 80 80 79.9980
Duquesne PA 40 100 50 50 50.0040
Gary IN 30 60 80 100 90.0385
Yakima Flats WA 30 60 30 50 40.0485
Turkey TX 40 90 30 80 55.1207
Wells NE 50 110 50 50 50.0060
Anaheim CA 10 10 0 0 0.0020
Epcot Center FL 20 20 100 0 49.7567
Duckwater NV 20 70 50 25 37.4422
Santa Cruz CA 10 40 0 0 0.0050
Table 8.4 LP solution for Duquesne PA
Criteria Cost Lives Risk Improve Value
Object Duquesne PA 40 100 50 50 99.9840
Weights 0.0001 0.9997 0.0001 0.0001 1.0000
Nome AK 60 80 0 25 79.9845
Newark NJ 0 0 100 100 0.0200
Rock Springs WY 40 100 80 80 99.9900
Gary IN 30 60 80 100 60.0030
Yakima Flats WA 30 60 30 50 59.9930
Turkey TX 40 90 30 80 89.9880
Wells NE 50 110 50 50 109.9820
Anaheim CA 10 10 0 0 9.9980
Epcot Center FL 20 20 100 0 20.0060
Duckwater NV 20 70 50 25 69.9885
Santa Cruz CA 10 40 0 0 39.9890
Scales
The above analysis used input data with different scales. Cost ranged from 0 to
60, Lives Lost from 0 to 110, and the two subjective criteria (Risk, Civic Improve-
ment) from 0 to 100. While they were similar, there were slightly different ranges.
The resulting weights are one possible set of weights that would yield the analyzed
alternative as non-dominated. If we proportioned the ranges to all be equal (divide
Cost scores in Table 8.2 by 0.6, Expected Lives Lost scores by 1.1), the resulting
weights would represent the implied relative importance of each criterion that would
yield a non-dominated solution. The non-dominated set is the same; only the weights
vary. Results are given in Table 8.5.
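The range-proportioning step can be verified in a few lines (values from Table 8.2; illustrative only):

```python
# Proportioning criteria to a common 0-100 range, as described above:
# divide the Cost scores by 0.6 and the Expected Lives Lost scores by 1.1.
costs = [60, 0, 40, 40, 30, 30, 40, 50, 10, 20, 20, 10]
lives = [80, 0, 100, 100, 60, 60, 90, 110, 10, 20, 70, 40]

scaled_costs = [c / 0.6 for c in costs]
scaled_lives = [l / 1.1 for l in lives]

# Both criteria now span 0-100, matching the Risk and Civic Improvement scales.
print(round(max(scaled_costs), 6), round(max(scaled_lives), 6))
```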
Stochastic Mathematical Formulation
Value-at-risk (VaR) methods are popular in financial risk management.11 VaR
models were motivated in part by several major financial disasters of the late
1980s and 1990s, including the fall of Barings Bank and the bankruptcy of Orange
County. In both instances, large amounts of capital were invested in volatile
markets while traders concealed their risk exposure. VaR models allow managers
to quantify their risk exposure at the portfolio level, and can be used as a benchmark
to compare risk positions across different markets. Value-at-risk can be defined as
the worst expected loss for an investment or portfolio at a given confidence level over a
stated time horizon. If we define the risk exposure of the investment as L, we can
express VaR as:
Prob{L ≥ VaR} = 1 − α
Table 8.5 Results using scaled weights
Alternative Cost Lives Risk Improve Dominated by
Nome AK 0.9997 0.0001 0.0001 0.0001
Newark NJ 0.0001 0.0001 0.4979 0.5019
Rock Springs WY 0.0001 0.7673 0.0001 0.2325
Gary IN 0.00001 0.0001 0.0001 0.9997
Wells NE 0.0001 0.9997 0.0001 0.0001
Epcot Center FL 0.0002 0.0001 0.9996 0.0001
Duquesne PA: dominated by Rock Springs WY, Wells NE
Yakima Flats WA: dominated by six alternatives
Turkey TX: dominated by Rock Springs WY
Anaheim CA: dominated by all but Newark NJ
Duckwater NV: dominated by five alternatives
Santa Cruz CA: dominated by eight alternatives
A rational investor will minimize expected losses, or the loss level at the stated
probability (1 − α). This statement of risk exposure can also be used as a constraint
in a chance-constrained programming model, imposing a restriction that the proba-
bility of loss greater than some stated value should be less than (1 − α).
The standard deviation or volatility of asset returns, σ, is a widely used measure in
financial models such as VaR. Volatility σ represents the variation of asset returns
during some time horizon in the VaR framework. This measure will be employed in
our approach. Monte Carlo Simulation techniques are often applied to measure the
variability of asset risk factors.12 We will employ Monte Carlo Simulation for
benchmarking our proposed method.
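As a sketch of the Monte Carlo idea (illustrative only, not the book's simulation; the N(40, 6) cost distribution of site S1 from Table 8.6 is borrowed as the loss variable):

```python
# Monte Carlo estimate of 95% VaR for a normally distributed loss,
# compared with the closed-form value mu + 1.645 * sigma.
import math
import random

random.seed(42)
mu, var = 40.0, 6.0            # e.g. the N(40, 6) cost of site S1 in Table 8.6
sigma = math.sqrt(var)

draws = sorted(random.gauss(mu, sigma) for _ in range(200_000))
var_95_mc = draws[int(0.95 * len(draws))]   # empirical 95th percentile
var_95_analytic = mu + 1.645 * sigma

print(round(var_95_mc, 2), round(var_95_analytic, 2))
```

With enough draws the simulated percentile converges on the analytic value; for non-normal loss distributions only the simulated figure is available, which is why simulation is the common benchmark.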
Stochastic models construct production frontiers that incorporate both ineffi-
ciency and stochastic error. The stochastic frontier associates extreme outliers with
the stochastic error term and this has the effect of moving the frontier closer to the
bulk of the producing units. As a result, the measured technical efficiency of every
DMU is raised relative to the deterministic model. In some realizations, some DMUs
will have a super-efficiency larger than unity.13
Now we consider the stochastic vendor selection model. Consider N suppliers to
be evaluated, each has s random variables. Note that all input variables are
transformed to output variables, as was done in Moskowitz et al.14 The variables
of supplier j (j = 1, 2, ..., N) exhibit random behavior represented by
ỹj = (ỹ1j, ..., ỹsj), where each ỹrj (r = 1, 2, ..., s) has a known probability
distribution. By maximizing the expected efficiency of a vendor under evaluation
subject to VaR being restricted to be no worse than some limit, the following
model (1) is developed:
Max Σ(i=1 to 4) wi yi1
s.t.
Σ(i=1 to 4) wi = 1
For each j from 2 to 12: Prob{Σ(i=1 to 4) wi yi1 ≥ Σ(i=1 to 4) wi yij + 0.0001} ≥ (1 − α)
wi ≥ 0.0001
Because each ỹj is potentially a random variable, it has a distribution rather than
being a constant. The objective function is now an expectation, but the expectation is
the mean, so this function is still linear, using the mean rather than the constant
parameter. The constraints on each location’s performance being greater than or
equal to all other location performances is now a nonlinear function. The weights wi
are still variables to be solved for, as in the deterministic version used above.
The scalar α is referred to as the modeler’s risk level, indicating the probability
measure of the extent to which Pareto efficiency violation is admitted, at most α
proportion of the time. The αj (0 ≤ αj ≤ 1) in the constraints are predetermined
scalars which stand for an allowable risk of violating the associated constraints,
where 1 − αj indicates the probability of attaining the requirement. The higher the
value of α, the higher the modeler’s risk and the lower the modeler’s confidence
about the 0th vendor’s Pareto efficiency, and vice versa. At the (1 − α)% confidence
level, the 0th supplier is stochastic efficient only if the optimal objective value is
equal to one.
To transform the stochastic model (1) into a deterministic DEA, Charnes and
Cooper15 employed chance constrained programming.16 The transformation steps
presented in this study follow this technique and can be considered as a special case
of their stochastic DEA,17 where both stochastic inputs and outputs are used. This
yields a non-linear programming problem in the variables wi, which has computa-
tional difficulties due to the objective function and the constraints, including the
variance-covariance yielding quadratic expressions in constraints. We assume that ỹj
follows a normal distribution N(yj, Bjk), where yj is its vector of expected value and
Bjk indicates the variance-covariance matrix of the jth alternative with the kth
alternative. The development of stochastic DEA is given in Wu and Olson (2008).18
We adjust the data set used in the nuclear waste siting problem by making cost a
stochastic variable (following an assumed normal distribution, thus requiring a
variance). The mathematical programming model decision variables are the weights
on each criterion, which are not stochastic. What is stochastic is the parameter on
costs. Thus the adjustment is in the constraints. For each evaluated alternative yj
compared to alternative yk:
wcost(yj,cost − z·SQRT(Var[yj,cost])) + wlives·yj,lives + wrisk·yj,risk + wimp·yj,imp ≥
wcost(yk,cost + z·SQRT(Var[yj,cost] + 2·Cov[yj,cost, yk,cost] + Var[yk,cost]))
+ wlives·yk,lives + wrisk·yk,risk + wimp·yk,imp
These functions need to include the covariance term for costs between alternative yj
compared to alternative yk.
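The adjustment can be reproduced numerically. A hedged sketch (z = 1.645 for one-sided 0.95 confidence; the means, variances and covariance below are taken from Tables 8.6 and 8.7):

```python
# Deterministic-equivalent cost adjustment for the chance constraints:
# the evaluated alternative's cost score is penalized downward by
# z*sqrt(own variance); each competing alternative's is inflated upward by
# z*sqrt(Var_j + 2*Cov_jk + Var_k).
import math

z = 1.645  # one-sided 0.95 confidence

def adjusted_pair(mean_j, var_j, mean_k, var_k, cov_jk):
    eval_cost = mean_j - z * math.sqrt(var_j)
    other_cost = mean_k + z * math.sqrt(var_j + 2 * cov_jk + var_k)
    return eval_cost, other_cost

# Evaluating Rock Springs WY (mean score 40, variance 5) against
# Nome AK (mean score 60, variance 6), with covariance 4 from Table 8.7:
rock, nome = adjusted_pair(40, 5, 60, 6, 4)
print(round(rock, 3), round(nome, 3))  # matches 36.322 and 67.170 in Table 8.8
```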
Table 8.6 shows the stochastic cost data in billions of dollars, and the converted
cost scores (also billions of dollars transformed as $100 billion minus the cost
measure for that site) as in Table 8.2. The cost variances will remain as they were,
as the relative scale did not change.
The variance-covariance matrix of costs is required (Table 8.7):
The confidence level used (1 − α) is 0.95, corresponding to a z-value of 1.645 for a one-sided
distribution. The adjustment affected the model by lowering the cost parameter propor-
tional to its variance for the evaluated alternative, and inflating it for the other
alternatives. Thus the stochastic model required a 0.95 assurance that the cost for the
evaluated alternative be superior to each of the other 11 alternatives, a more difficult
standard. The DEA models were run for each of the 12 alternatives. Only two of the six
alternatives found to be nondominated with deterministic data above were still
nondominated {Rock Springs WY and Wells NE}. The model results in Table 8.8
show the results for Rock Springs WY, with one set of weights {0, 0.75, 0.25, 0} yielding
Rock Springs with a greater functional value than any of the other 11 alternatives. The
weights yielding Wells NE as nondominated had all the weight on Lives Lost.
One of the alternatives that was nondominated with deterministic data {Nome
AK} was found to be dominated with stochastic data. Table 8.9 shows the results for
the original deterministic model for Nome AK.
The stochastic results are shown in Table 8.10:
Wells NE is shown to be superior to Nome AK at the last set of weights the
SOLVER algorithm in EXCEL attempted. Looking at the stochastically adjusted
scores for cost, Wells NE now has a superior cost value to Nome AK (the objective
functional cost value is penalized downward, the constraint cost value for Wells NE
and other alternatives are penalized upward to make a harder standard to meet).
Table 8.6 Stochastic data
Alternative Cost measure Mean cost Cost variance Expected lives lost Risk Civic improvement
S1 Nome AK N(40,6) 60 6 80 0 25
S2 Newark NJ N(100,20) 0 20 0 100 100
S3 Rock Springs WY N(60,5) 40 5 100 80 80
S4 Duquesne PA N(60,30) 40 30 100 50 50
S5 Gary IN N(70,35) 30 35 60 80 100
S6 Yakima Flats WA N(70,20) 30 20 60 30 50
S7 Turkey TX N(60,10) 40 10 90 30 80
S8 Wells NE N(50,8) 50 8 110 50 50
S9 Anaheim CA N(90,40) 10 40 10 0 0
S10 Epcot Center FL N(80,50) 20 50 20 100 0
S11 Duckwater NV N(80,20) 20 20 70 50 25
S12 Santa Cruz CA N(90,40) 10 40 40 0 0
Table 8.7 Site covariances
S1 S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12
S1 6 2 4 2 2 3 3 3 2 1 3 2
S2 20 3 10 9 5 2 1 4 5 1 4
S3 5 2 1 2 3 3 2 1 3 2
S4 30 10 8 2 2 6 5 1 4
S5 35 9 3 2 5 6 1 4
S6 20 3 2 10 8 2 12
S7 10 3 2 1 3 2
S8 8 2 1 3 2
S9 40 5 1 12
S10 50 2 8
S11 20 2
S12 40
DEA Models
DEA evaluates alternatives by seeking to maximize the ratio of efficiency of output
attainments to inputs, considering the relative performance of each alternative. The
mathematical programming model creates a variable for each output (outputs
designated by ui) and input (inputs designated by vj). Each alternative k has
performance coefficients for each output (yik) and input (xjk).
The classic Charnes, Cooper and Rhodes (CCR)19 DEA model is:
Max efficiencyk = Σ(i=1 to 2) ui yik / Σ(j=1 to 2) vj xjk
Table 8.8 Output for Stochastic Model for Rock Springs WY
Criteria Cost Lives Risk Improve Value
Object Rock Springs WY 36.322 100 80 80 94.99304
Weights 0.0001 0.7499 0.24993 0.0001 1
Nome AK 67.170 80 0 25 59.999
Newark NJ 9.158 0 100 100 25.004
Duquesne PA 50.272 100 50 50 87.494
Gary IN 40.660 60 80 80 64.999
Yakima Flats WA 38.858 60 30 30 52.497
Turkey TX 47.538 90 30 30 74.994
Wells NE 57.170 110 50 50 94.993
Anaheim CA 21.514 10 0 0 7.501
Epcot Center FL 32.418 20 100 100 40.004
Duckwater NV 29.158 70 50 50 64.995
Santa Cruz CA 21.514 40 0 0 29.997
Table 8.9 Nome AK alternative results with original model
Criteria Cost Lives Risk Improve Value
Object Nome AK 60 80 0 25 64.9857
Weights 0.7500 0.2498 0.0001 0.0001 1
Newark NJ 0 0 100 100 0.020
Rock Springs WY 40 100 80 80 54.994
Duquesne PA 40 100 50 50 54.988
Gary IN 30 60 80 100 37.505
Yakima Flats WA 30 60 30 50 37.495
Turkey TX 40 90 30 80 52.491
Wells NE 50 110 50 50 64.986
Anaheim CA 10 10 0 0 9.998
Epcot Center FL 20 20 100 0 20.006
Duckwater NV 20 70 50 25 32.492
Santa Cruz CA 10 40 0 0 17.491
s.t. For each k from 1 to 12:
Σ(i=1 to 2) ui yik / Σ(j=1 to 2) vj xjk ≤ 1
ui, vj ≥ 0
The Banker, Charnes and Cooper (BCC) DEA model includes a scale parameter γ
to allow for economies of scale; γ is unrestricted in sign, while ui and vj remain
nonnegative.
Max efficiencyk = (Σ(i=1 to 2) ui yik + γ) / Σ(j=1 to 2) vj xjk
s.t. For each k from 1 to 12:
(Σ(i=1 to 2) ui yik + γ) / Σ(j=1 to 2) vj xjk ≤ 1
ui, vj ≥ 0, γ unrestricted in sign
A third DEA model allows for super-efficiency. It is the CCR model without the
efficiency-ratio restriction on the unit being evaluated:
Max efficiencyk = Σ(i=1 to 2) ui yik / Σ(j=1 to 2) vj xjk
s.t. For each l from 1 to 12, l ≠ k:
Σ(i=1 to 2) ui yil / Σ(j=1 to 2) vj xjl ≤ 1
ui, vj ≥ 0
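The CCR ratio model is typically solved after the Charnes-Cooper linearization, which fixes the weighted input of the evaluated unit at 1. A sketch with a tiny hypothetical two-input, two-output data set (not the siting data; scipy.optimize.linprog assumed available):

```python
# CCR multiplier model after Charnes-Cooper linearization:
# maximize u . y_k subject to v . x_k = 1 and u . y_l - v . x_l <= 0 for all l.
from scipy.optimize import linprog

inputs = [[1, 1], [1, 1], [2, 2]]    # hypothetical DMUs A, B, C (two inputs)
outputs = [[2, 2], [1, 1], [2, 2]]   # two outputs each

def ccr_score(k):
    # Decision variables: [u1, u2, v1, v2], all nonnegative.
    c = [-outputs[k][0], -outputs[k][1], 0, 0]       # maximize u . y_k
    a_eq = [[0, 0, inputs[k][0], inputs[k][1]]]      # normalize v . x_k = 1
    a_ub = [[y[0], y[1], -x[0], -x[1]]               # u . y_l <= v . x_l
            for x, y in zip(inputs, outputs)]
    res = linprog(c, A_ub=a_ub, b_ub=[0] * len(a_ub), A_eq=a_eq, b_eq=[1],
                  bounds=[(0, None)] * 4)
    return -res.fun

print([round(ccr_score(k), 4) for k in range(3)])  # [1.0, 0.5, 0.5]
```

Excluding the l = k row from the a_ub constraints gives the super-efficiency variant described above, letting efficient units score above 1.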
Table 8.10 Nome AK alternative results with stochastic model
Criteria Cost Lives Risk Improve Value
Object Nome AK 55.97 80 0 25 55.965
Weights 0.9997 0.0001 0.0001 0.0001 1
Newark NJ 9.009 0 100 100 9.027
Rock Springs WY 47.170 100 80 80 47.182
Duquesne PA 50.403 100 50 50 50.408
Gary IN 41.034 60 80 100 41.046
Yakima Flats WA 39.305 60 30 50 39.307
Turkey TX 47.715 90 30 80 47.721
Wells NE 57.356 110 50 50 57.360
Anaheim CA 21.631 10 0 0 21.625
Epcot Center FL 32.527 20 100 0 32.529
Duckwater NV 29.305 70 50 25 29.310
Santa Cruz CA 21.631 40 0 0 21.628
The traditional DEA models were run on the dump site selection model, yielding
results shown in Table 8.11:
These approaches provide rankings. In the case of CCR DEA, the ranking
includes some ties (for first place and for 11th place). The nondominated Nome AK
alternative was ranked tenth, behind dominated solutions Turkey TX, Duquesne PA,
Yakima Flats WA, and Duckwater NV. Nome dominates Anaheim CA and Santa
Cruz CA, but does not dominate any other alternative. Its ranking in tenth place is
probably due to the smaller scale for the Cost criterion, where Nome AK has the best
score. BCC DEA has all nondominated solutions tied for first, and its rankings for
7th through 12th reflect more of an average performance on all criteria (again
affected by criteria scales). Super-CCR provides a nearly unique ranking (a tie for
11th place).
Conclusion
The importance of risk management has vastly increased in the past decade. Value-at-
risk techniques have become a frontier technology for conducting enterprise risk
management. One of the ERM areas of global business involving high levels of risk
is global supply chain management.
Selection in supply chains by its nature involves the need to trade off multiple
criteria, as well as the presence of uncertain data. When these conditions exist,
stochastic dominance can be applied if the uncertain data is normally distributed. If
not normally distributed, simulation modeling applies (and can also be applied if
data is normally distributed).
When the data is presented with uncertainty, stochastic DEA provides a good
tool to perform efficiency analysis by handling both inefficiency and stochastic
Table 8.11 Traditional DEA model results
CCR DEA BCC DEA Super-CCR Super-CCR
Alternative Score Rank Score Rank Score Rank
Nome AK 0.43750 10 1 1 0.43750 10
Newark NJ 0.75000 6 1 1 0.75000 6
Rock Springs WY 1 1 1 1 1.31000 1
Duquesne PA 0.62500 7 0.83333 8 0.62500 7
Gary IN 1 1 1 1 1.07143 2
Yakima Flats WA 0.5 8 0.70129 9 0.5 8
Turkey TX 0.97561 3 1 1 0.97561 3
Wells NE 0.83333 5 1 1 0.83333 5
Anaheim CA 0 11 0.45000 12 0 11
Epcot Center FL 0.93750 4 1 1 0.93750 4
Duckwater NV 0.46875 9 0.62500 10 0.46875 9
Santa Cruz CA 0 11 0.48648 11 0 11
error. We must point out that the main difference between implementing investment
VaR in financial markets such as the banking industry and our DEA VaR used for
supplier selection is that the underlying asset volatility or standard deviation is
typically a managerial assumption, due to a lack of sufficient historical data to
calibrate the risk measure.
Notes
1. Charnes, A., Cooper, W.W. and Rhodes, E. (1978). Measuring the efficiency of
decision-making units, European Journal of Operational Research 2, 429–444.
2. Banker, R.D., Chang, H. and Lee, S.-Y. (2010). Differential impact of Korean
banking system reforms on bank productivity. Journal of Banking & Finance 34
(7), 1450–1460; Gunay, E.N.O. (2012). Risk incorporation and efficiency in
emerging market banks during the global crisis: Evidence from Turkey,
2002–2009. Emerging Markets Finance & Trade 48(supp5), 91–102; Yang,
C.-C. (2014). An enhanced DEA model for decomposition of technical effi-
ciency in banking. Annals of Operations Research 214(1), 167–185.
3. Segovia-Gonzalez, M.M., Contreras, I. and Mar-Molinero, C. (2009). A DEA
analysis of risk, cost, and revenues in insurance. Journal of the Operational
Research Society 60(11), 1483–1494.
4. Ross, A. and Droge, C. (2002). An integrated benchmarking approach to
distribution center performance using DEA modeling, Journal of Operations
Management 20, 19–32; Wu, D.D. and Olson, D. (2010). Enterprise risk
management: A DEA VaR approach in vendor selection. International Journal
of Production Research 48(16), 4919–4932.
5. Ross, A. and Droge, C. (2004). An analysis of operations efficiency in large-
scale distribution systems, Journal of Operations Management 21, 673–688.
6. Narasimhan, R., Talluri, S., Sarkis, J. and Ross, A. (2005). Efficient service
location design in government services: A decision support system framework,
Journal of Operations Management 23:2, 163–176.
7. Lahdelma, R. and Salminen, P. (2006). Stochastic multicriteria acceptability
analysis using the data envelopment model, European Journal of Operational
Research 170, 241–252; Olson, D.L. and Wu, D.D. (2011). Multiple criteria
analysis for evaluation of information system risk. Asia-Pacific Journal of
Operational Research 28(1), 25–39.
8. Moskowitz, H., Tang, J. and Lam, P. (2000). Distribution of aggregate utility
using stochastic elements of additive multiattribute utility models, Decision
Sciences 31, 327–360.
9. Wu, D. and Olson, D.L. (2008). A comparison of stochastic dominance and
stochastic DEA for vendor evaluation, International Journal of Production
Research 46:8, 2313–2327.
10. Alquier, A.M.B. and Tignol, M.H.L. (2006). Risk management in small- and
medium-sized enterprises, Production Planning & Control, 17, 273–282.
11. Duffie, D. and Pan, J. (2001). Analytical value-at-risk with jumps and credit
risk, Finance & Stochastics 5:2, 155–180; Jorion, P. (2007). Value-at-risk: The
New Benchmark for Controlling Market Risk. New York: Irwin.
12. Crouhy, M., Galai, D., and Mark, R. M. (2001). Risk Management. New York,
NY: McGraw Hill.
13. Olesen, O.B. and Petersen, N.C. (1995). Comment on assessing marginal impact
of investment on the performance of organizational units, International Journal
of Production Economics 39, 162–163; Cooper, W.W., Hemphill, H., Huang,
Z., Li, S., Lelas, V., and Sullivan, D.W. (1996). Survey of mathematical
programming models in air pollution management, European Journal of Oper-
ational Research 96, 1–35; Cooper, W.W., Deng, H., Huang, Z.M. and Li,
S.X. (2002). A one-model approach to congestion in data envelopment analysis,
Socio-Economic Planning Sciences 36, 231–238.
14. Moskowitz et al. (2000), op. cit.
15. Charnes, A. and Cooper, W.W. (1959). Chance-constrained programming,
Management Science 6:1, 73–79; see also Huang, Z. and Li, S.X. (2001).
Co-op advertising models in manufacturer-retailer supply chains: A game theory
approach, European Journal of Operational Research 135:3, 527–544.
16. Charnes, A., Cooper, W.W. and Symonds, G.H. (1958). Cost horizons and
certainty equivalents: An approach to stochastic programming of heating oil,
Management Science 4:3, 235–263.
17. Cooper, W.W., Park, K.S. and Yu, G. (1999). IDEA and AR-IDEA: Models for
dealing with imprecise data in DEA, Management Science 45, 597–607.
18. Wu and Olson (2008), op. cit.
19. Charnes, A., Cooper, W. and Rhodes, E. (1978), op. cit.
Data Mining Models and Enterprise Risk
Management 9
Data mining applications in business cover a variety of fields.1 Risk-related
applications are especially strong in insurance, specifically fraud detection.2 Fraud
detection modeling includes text mining.3 There are many financial risk manage-
ment applications, with heavy interest in developing tools to support investment.
Automated trading has been widely applied in practice for decades. More recent
efforts have gone into sentiment analysis, mining text of investment comments to
detect patterns, especially related to investment risk.4
There are a number of data mining tools, including a variety of software, some
commercial (powerful and expensive) and some open-source. Open-source
classification software tools have been published.5 There are other modeling forms
as well, including application of clustering analysis in fraud detection.6 We will use
an example dataset involving data mining of bankruptcy, a severe form of
financial risk.
Bankruptcy Data Demonstration
This data concerns 100 US firms that underwent bankruptcy.7 All of the sample data
are from US companies. About 400 bankrupt company names were obtained
from the Compustat database, focusing on companies that went bankrupt over
the period January 2006 through December 2009. This yielded 99 firms. Using the
company Ticker code list, financial data and ratios over the period January 2005
through December 2009 were obtained and used in prediction models of company
bankruptcy. The factors collected include total assets, book value per share,
inventories, liabilities, receivables, cost of goods sold, total dividends, earnings
before interest and taxes, gross profit (loss), net income (loss), operating income
after depreciation, total revenue, sales, dividends per share, and total market value.
To obtain non-bankrupt cases for comparison, the same financial ratios for
200 non-failed companies were gathered for the same time period. The LexisNexis
database provided SEC filings after June 2010, to identify firm survival with CIK code.
# Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_9
The CIK code list was input to the Compustat database to obtain financial data and
ratios for the period January 2005–December 2009 to match that of failed
companies.
The data set consists of 1321 records with full data over 19 attributes as shown in
Table 9.1. The outcome attribute is bankruptcy, which has a value of 1 if the firm
went bankrupt by 2011 (697 cases), and a value of 0 if it did not (624 cases).
This is real data concerning firm bankruptcy, which could be updated by going to
web sources.
Software
R is widely used open-source statistical software. Rattle is a GUI system for R (also
open source) that makes it easy to apply R to data mining.
To install R, visit https://cran.rstudio.com/
Open a folder for R.
Select Download R for Windows.
Table 9.1 Attributes in bankruptcy data
No Short name Long name
1 fyear Data year—Fiscal
2 cik CIK number
3 at Assets—Total
4 bkvlps Book value per share
5 invt Inventories—Total
6 Lt Liabilities—Total
7 rectr Receivables—Trade
8 cogs Cost of goods sold
9 dvt Dividends—Total
10 ebit Earnings before interest and taxes
11 gp Gross profit (Loss)
12 ni Net income (Loss)
13 oiadp Operating income after depreciation
14 revt Revenue—Total
15 sale Sales-turnover (Net)
16 dvpsx_f Dividends per share—Ex-date—Fiscal
17 mkvalt Market value—Total—Fiscal
18 prch_f Price high—Annual—Fiscal
19 bankruptcy Bankruptcy (output variable)
To install Rattle:
Open the R Desktop icon (32 bit or 64 bit) and enter the following command at
the R prompt. R will ask for a CRAN mirror. Choose a nearby location.
• install.packages(“rattle”)
Enter the following two commands at the R prompt. This loads the Rattle package
into the library and then starts up Rattle.
• library(rattle)
• rattle()
If the RGtk2 package has yet to be installed, there will be an error popup indicating
that libatk-1.0-0.dll is missing from your computer. Click OK, and you will be
asked if you would like to install GTK+. Click OK to do so. This downloads and
installs the appropriate GTK+ libraries for your computer. After this has finished,
exit from R and restart it so that it can find the newly installed libraries.
When running Rattle, a number of other packages will be downloaded and
installed as needed, with Rattle asking for the user's permission before doing
so. They only need to be downloaded once. The installation has been tested to
work on Microsoft Windows, 32bit and 64bit, XP, Vista and 7, with R 3.1.1, Rattle
3.1.0 and RGtk2 2.20.31. If you are missing something, you will get a message from
R asking you to install a package. I read nominal data (string), and was prompted that
I needed "stringr". On the R console (see Fig. 9.1), click on "Packages" on the
top line and give the command "Install packages", which will direct you to an HTTPS
CRAN mirror. Select one of the sites (like "USA(TX) [https]"), find "stringr",
click on it, and load that package. You may have to restart R.
Data mining practice usually utilizes a training set to build a model, which can then
be applied to a test set. In this case, 1178 observations (those through 2008) were used
for the training set and 143 observations (2009 and 2010) were held out for testing. To run
a model, on the Filename line, click on the icon and browse for the file
"bankruptcyTrain.csv". Click on the Execute icon on the upper left of the Rattle
window. This yields Fig. 9.2. Bankrupt is a categoric variable, and R assumes that it is
the Target (as we want). We could delete other variables if we choose to, and redo
the Execute step for the Data tab. We can Explore; the default is Summary.
Execute yields metadata, identifying data types as well as descriptive statistics
(minima, maxima, medians, means, and quartiles). R by default holds out 30% of
the training data as an intermediate test set, and thus builds models on the remaining
70% (here 824 observations). The summary identifies the outcomes in the training set
(369 not bankrupt, 455 bankrupt).
We can further explore the data through correlation analysis. Figure 9.3 shows the
R screen with the correlation radio button selected. Execute on this screen yields
output over the numerical variables, as shown in Fig. 9.4.
Figure 9.4 indicates high degrees of correlation across potential independent
variables, and further analysis might select some for elimination. Numerical correla-
tion values are also provided by R. The dependent variable was alphabetical, so R
didn’t include it, but outside analysis indicates low correlation between bankruptcy
and all independent variables—the highest in magnitude being 0.180 with cost of
goods sold (cogs) and with total revenue (revt).
Decision Tree Model
We can click on the Model tab and run models. Data mining for classification
has three basic tools: decision trees, logistic regression, and neural
network models. To run a decision tree, select the radio button as indicated in
Fig. 9.5. Note that the defaults require a minimum of 20 cases per rule, with
a maximum of 30 branches. These can be changed by entering desired
values in the appropriate window. Execute yields Fig. 9.6. Rattle also provides a
graphical display of this decision tree, as shown in Fig. 9.7. This model begins with
the variable revt, stating that if revt is less than 78, the conclusion is that bankruptcy
Fig. 9.1 R console
would not occur. This rule was based on 44 % of the training data (360 out of 824),
over which 84 % of these cases were not bankrupt (count of 304 no and 56 yes).
On the other branch, the next variable to consider is dvpsx_f. If dvpsx_f was less
than 0.215 (364 cases of 464, or 44 % of the total), the conclusion is bankruptcy
(340 yes and 24 no, for 93 %).
If revt ≥ 78 and dvpsx_f ≥ 0.215 (100 cases), the tree branches on variable at. If
at ≥ 4169.341, the conclusion is bankruptcy (based on 31 of 31 cases). If
at < 4169.341, the model branches on variable invt.
For these 69 cases, if invt < 16.179 (23 cases), there is a further branch on
variable at. For these 23 cases, if at < 818.4345, the conclusion is bankruptcy (based
on 13 of 13 cases). If at ≥ 818.4345, the conclusion is no bankruptcy (based on 7 of
10 cases).
Fig. 9.2 LoanRaw.csv data read
If invt ≥ 16.179 (46 cases), the model splits further on invt. If invt < 74.9215, the
conclusion is no bankruptcy (based on 18 of 18 cases). If invt ≥ 74.9215, there is
further branching on variable mkvalt. For mkvalt < 586.9472, the conclusion is
bankruptcy (based on 11 of 14 cases). If mkvalt ≥ 586.9472, the conclusion is no
bankruptcy (based on 13 of 14 cases).
This demonstrates well how a decision tree works. It simply splits the data into
bins, and uses outcome counts to determine rules. Variables are selected by various
algorithms, often using entropy as a basis to select the next variable to split on
(Table 9.2). This model shows overall accuracy of 164/176, or 0.932. This validation
data was over the same period the model was built upon, up to 2008. We now test
on a more independent testing set (2009–2010), as shown in Table 9.3. Here the
overall correct classification rate is 126/143, or 0.881. The model was correct in
80 of 90 cases where firms actually went bankrupt (0.889 correct). For test cases
where firms survived, the model was correct 46 of 53 times (0.868 correct).
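The entropy-based split selection mentioned above can be illustrated in plain Python (toy data, not the bankruptcy attributes):

```python
# Choosing a split threshold by information gain (entropy reduction),
# the criterion many decision-tree algorithms use.
import math

def entropy(labels):
    n = len(labels)
    result = 0.0
    for value in set(labels):
        p = labels.count(value) / n
        result -= p * math.log2(p)
    return result

def best_split(xs, ys):
    """Return (threshold, information gain) of the best binary split on xs."""
    best = (None, 0.0)
    for t in sorted(set(xs))[1:]:                    # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        gain = entropy(ys) - (len(left) / len(ys) * entropy(left)
                              + len(right) / len(ys) * entropy(right))
        if gain > best[1]:
            best = (t, gain)
    return best

# Toy revenue-like attribute: low values survive (0), high values fail (1).
xs = [10, 20, 50, 70, 80, 90, 120, 150]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_split(xs, ys))  # (80, 1.0): splitting at 80 separates the classes
```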
Fig. 9.3 Selecting correlation
Logistic Regression Model
We can obtain a logistic regression model from Rattle by clicking the Linear button
shown in Fig. 9.8, followed by the Logistic button. Execute yields the output in Fig. 9.9.
Note that R threw out two variables (oiadp and revt) due to detected singularity.
This output indicates that variables rectr and gp are highly significant. Further
refinement of the logistic regression might consider deleting some variables in light of
the correlation output. Here we are simply demonstrating how to run models, so we
evaluate the above model on both the validation set (Table 9.4) and the test set. This
model shows overall accuracy of 158/176, or 0.898, slightly inferior to the
decision tree model. We now test on the more independent testing set (2009–2010), as
shown in Table 9.5. Here the overall correct classification rate is 111/143, or 0.776.
The model was correct in 78 of 90 cases where firms actually went bankrupt (0.867
correct). For test cases where firms survived, the model was correct 33 of 53 times
(0.623 correct).
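Rattle fits the logistic model in R; the underlying idea can be sketched in plain Python with gradient descent on the log-loss (toy one-feature data, not the bankruptcy attributes):

```python
# Minimal logistic regression: fit P(y=1 | x) = sigmoid(w*x + b) by
# gradient descent on the log-loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [0.0, 1.0, 2.0, 3.0]   # toy feature
ys = [0, 0, 1, 1]           # toy outcome (e.g. bankrupt = 1)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    # Gradients of the mean log-loss with respect to w and b.
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

preds = [int(sigmoid(w * x + b) > 0.5) for x in xs]
print(preds)  # [0, 0, 1, 1]: all training cases classified correctly
```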
Fig. 9.4 Correlation plot
Neural Network Model
To run a neural network, on the Model tab select the neural net button (see
Fig. 9.10). Execute yields a lot of values, which usually are not delved into. The
model can be validated and tested as with the decision tree and logistic regression
models. Table 9.6 shows validation results. This model shows overall accuracy of
156/176, or 0.886, slightly inferior to the decision tree model. We now test on
the more independent testing set (2009–2010), as shown in Table 9.7. Here the overall
correct classification rate is 121/143, or 0.846. The model was correct in 75 of
90 cases where firms actually went bankrupt (0.833 correct). For test cases where
firms survived, the model was correct 46 of 53 times (0.868 correct).
Here the decision tree model fit best, as shown in Table 9.8, which compares all three
model test results. All three models had similar accuracies on all dimensions,
although the decision tree was the strongest at predicting bankruptcy, and the
neural network was the least accurate at identifying bankrupt cases in the test set.
These results are typical and to be expected: different models will yield different
results, and these relative advantages are liable to change with new data. That is
why automated systems
Fig. 9.5 Selecting decision tree
130 9 Data Mining Models and Enterprise Risk Management
applied to big data should probably utilize all three types of model. Data scientists
need to focus attention on refining parameters in each model type, seeking better fits
for specific applications.
Of course, each model could be improved with work. Further, with time, new data
may diverge from the patterns in the current training set. Data mining practice is
usually to run all three models (once the data is entered, software tools such as Rattle
make it easy to run additional models, and to change parameters) and compare
results. Note that another consideration not demonstrated here is to apply these
models to new cases. For decision trees, this is easy—just follow the tree with the
values for the new case. For logistic regression, the formula in Fig. 9.9 could be used,
but it requires a bit more work and interpretation. Neural networks require entering
new case data into the software. This is easy to do in Rattle for all three models,
using the Evaluate tab and linking your new case data file.
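For logistic regression, applying the fitted formula to a new case amounts to a weighted sum passed through the logistic function. A minimal Python sketch of that step (the coefficient values below are placeholders for illustration, not the fitted values from the book's Fig. 9.9 output):

```python
import math

# Scoring a new case by hand with a fitted logistic regression.
# The coefficients here are placeholders, NOT the book's fitted values.
coefficients = {"intercept": -1.2, "rectr": 0.8, "gp": -0.5}

def score(case):
    """Return the modeled probability for one new case (dict of predictor values)."""
    z = coefficients["intercept"] + sum(
        coefficients[name] * value for name, value in case.items())
    return 1.0 / (1.0 + math.exp(-z))

p = score({"rectr": 2.0, "gp": 1.0})                # p ≈ 0.475 with these placeholders
label = "bankrupt" if p >= 0.5 else "not bankrupt"  # classify with a 0.5 cutoff
```

Decision trees and neural networks need no such hand calculation; as the text notes, new case data can simply be linked in on the Evaluate tab.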
Fig. 9.6 Default decision tree model
Fig. 9.7 Rattle graphical decision tree
Table 9.2 Coincidence matrix for validation set of decision tree model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   70                   6                76
Actual bankrupt       6                    94               100
Total                 76                   100              176

Table 9.3 Coincidence matrix for test set of decision tree model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   80                   10               90
Actual bankrupt       7                    46               53
Total                 87                   56               143
Summary
We have demonstrated data mining on a financial risk set of data using R (Rattle)
computations for the basic classification algorithms in data mining. The advent of
big data has led to an environment where billions of records are possible. We have
not demonstrated that scope by any means, but we have demonstrated small-scale
versions of the basic algorithms. The intent is to make data mining less of a black-box
exercise, thus hopefully enabling users to be more intelligent in their application of
data mining.
We have demonstrated an open source software product. R is very useful
software, widely used in industry, and it has all of the benefits of open source software:
many eyes monitor it, leading to fewer bugs; it is free; and it is scalable. Further,
the R system enables widespread data manipulation and management.
Fig. 9.8 Selecting logistic regression
Fig. 9.9 Logistic regression output
Table 9.4 Coincidence matrix for validation set of logistic regression model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   72                   4                76
Actual bankrupt       14                   86               100
Total                 86                   90               176

Table 9.5 Coincidence matrix for test set of logistic regression model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   78                   12               90
Actual bankrupt       20                   33               53
Total                 98                   45               143
Fig. 9.10 Selecting neural network model
Table 9.6 Coincidence matrix for validation set of neural network model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   67                   9                76
Actual bankrupt       11                   89               100
Total                 78                   98               176

Table 9.7 Coincidence matrix for test set of neural network model

                      Model not bankrupt   Model bankrupt   Total
Actual not bankrupt   75                   15               90
Actual bankrupt       7                    46               53
Total                 82                   61               143
Table 9.8 Comparative test results

Model                 Correct not bankrupt   Correct bankrupt   Overall
Decision tree         0.889                  0.868              0.881
Logistic regression   0.867                  0.623              0.776
Neural network        0.833                  0.868              0.846
Notes
1. Olson, D.L. and Shi, Y. (2006). Introduction to Business Data Mining. Irwin/
McGraw-Hill.
2. Debreceny, R.S. and Gray, G.L. (2010). Data mining journal entries for fraud
detection: An exploratory study. International Journal of Accounting Informa-
tion Systems 11(3), 157–181; Jans, M., van der Werf, J.M., Lybaert, N. and
Vanhoof, K. (2011). A business process mining application for internal transac-
tion fraud mitigation. Expert Systems with Applications 38(10), 13351–13359.
3. Holton, C. (2009). Identifying disgruntled employee systems fraud risk through
text mining: A simple solution for a multi-billion dollar problem. Decision
Support Systems 46(4), 853–864.
4. Groth, S.S. and Muntermann, J. (2011). An intraday market risk management
approach based on textual analysis. Decision Support Systems 50(4), 680–691;
Chan, S.W.K. and Franklin, J. (2011). A text-based decision support system for
financial sequence prediction. Decision Support Systems 52(1), 189–198;
Schumaker, R.P., Zhang, Y., Huang, C.-N. and Chen, H. (2012). Evaluating
sentiment in financial news articles. Decision Support Systems 53(3), 458–464;
Hagenau, M., Liebmann, M. and Neumann, D. (2013). Automated news reading:
Stock price prediction based on financial news using context-capturing features.
Decision Support Systems 55(3), 685–697; Wu, D.D., Zheng, L. and Olson,
D.L. (2014). A decision support approach for online stock forum sentiment
analysis. IEEE Transactions on Systems Man and Cybernetics: Systems 44(8),
1077–1087.
5. Olson, D.L. (2016). Data Mining Models. Business Expert Press.
6. Jans, M., Lybaert, N. and Vanhoof, K. (2010). Internal fraud risk reduction:
Results of a data mining case study. International Journal of Accounting Infor-
mation Systems 11, 17–41.
7. Olson, D.L., Delen, D., and Meng, Y. (2012). Comparative analysis of data
mining methods for bankruptcy prediction, Decision Support Systems, volume
52 (2), 464–473.
10 Balanced Scorecards to Measure Enterprise Risk Performance
Balanced scorecards are one of a number of quantitative tools available to support risk
planning.1 Olhager and Wikner2 reviewed a number of production planning and control
tools, where scorecards were deemed the most successful approach to production
planning and control performance measurement. Various forms of scorecards, e.g.,
company-configured scorecards and strategic scorecards, have been suggested for
building into business decision support systems or expert systems in order to monitor
the performance of the enterprise in strategic decision analysis.3 This chapter
demonstrates the value of balanced scorecards with a case from a bank operation.
While risk needs to be managed, taking risks is fundamental to doing business.
Profit by necessity requires accepting some risk.4 ERM provides tools to rationally
manage these risks. Scorecards have been successfully associated with risk manage-
ment at Mobil, Chrysler, the U.S. Army, and numerous other organizations.5 They
have also been applied to the financial analysis of banks.6
Enterprise risk management (ERM) provides the methods and processes used by
business institutions to manage all risks and seize opportunities to achieve their
objectives. ERM began with a focus on financial risk, but in the past decade it has
expanded its focus to accounting as well as all aspects of organizational operations.
Enterprise risk can include a variety of factors with potential impact on an
organization's activities, processes, and resources. External factors can result from
economic change, financial market developments, and dangers arising in political,
legal, technological, and demographic environments. Most of these are beyond the
control of a given organization, although organizations can prepare and protect
themselves in time-honored ways. Internal risks include human error, fraud, systems
failure, disrupted production, and other risks. Often systems are assumed to be in place
to detect and control risk, but inaccurate numbers are generated for various reasons.7
ERM brings a systemic approach to risk management. This systemic approach
provides more systematic and complete coverage of risks (far beyond financial risk,
for instance). ERM provides a framework to define risk responsibilities, and a need
to monitor and measure these risks. That’s where balanced scorecards provide a
natural fit—measurement of risks that are key to the organization.
# Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_10
ERM and Balanced Scorecards
Beasley et al.8 argued that balanced scorecards broaden the perspective of enterprise
risk management. While many firms focus on Sarbanes-Oxley compliance, there is a
need to consider strategic, market, and reputation risks as well. Balanced scorecards
explicitly link risk management to strategic performance. To demonstrate this,
Beasley et al. provided an example balanced scorecard for supply chain manage-
ment, outlined in Table 10.1.
Other examples of balanced scorecard use have been presented as well, as tools
providing measurement on a broader, strategic perspective. For instance, balanced
scorecards have been applied to internal auditing in accounting9 and to mental health
governance.10 Janssen et al.11 applied a system dynamics model to the marketing of
natural gas vehicles, considering the perspectives of sixteen stakeholders, ranging
from automobile manufacturers and customers to the natural gas industry and
government. Policy options were compared, using balanced scorecards with the
following strategic categories of analysis:
• Natural gas vehicle subsidies
• Fueling station subsidies
• Compressed natural gas tax reductions
• Natural gas vehicle advertising effectiveness.
Balanced scorecards provided a systematic focus on strategic issues, allowing the
analysts to examine the nonlinear responses of policy options as modeled with
system dynamics. Five indicators were proposed to measure progress of market
penetration:
1. Ratio of natural gas vehicles per compressed natural gas fueling station
2. Type coverage (how many different natural gas vehicle types were available)
3. Natural gas vehicle investment pay-back time
4. Sales per type
5. Subsidies per automobile
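The four-perspective scorecard structure used throughout this chapter can be represented as a simple data structure. A minimal Python sketch, with the perspective names from the standard balanced scorecard and sample goal/measure pairs drawn from the Beasley et al. supply chain example in Table 10.1:

```python
# Minimal sketch of a balanced scorecard as a data structure: four strategic
# perspectives, each pairing goals with concrete measures.
scorecard = {
    "learning_and_growth": [
        ("Increase employee ownership over process", "Employee survey scores"),
    ],
    "internal_business_processes": [
        ("Reduce waste generated across the supply chain", "Pounds of scrap"),
    ],
    "customer_satisfaction": [
        ("Improve timeliness of product/service delivery",
         "Time from customer order to delivery"),
    ],
    "financial_performance": [
        ("Higher profit margins", "Profit margin by supply chain partner"),
    ],
}

def measures(perspective):
    """List the concrete measures a manager would track for one perspective."""
    return [measure for _, measure in scorecard[perspective]]
```

Risk-related goals, as in Table 10.1, would simply be additional goal/measure pairs appended under each perspective.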
Small Business Scorecard Analysis
This section discusses computational results on various scorecard performances
currently being used in a large bank to evaluate loans to small businesses. This
bank uses various ERM performance measures to validate a small business scorecard
(SBS). Because scorecards have a tendency to deteriorate over time, it is appropriate
to examine how well the scorecard is performing and to examine any possible changes in the
scoring population. A number of statistics and analyses will be employed to deter-
mine if the scorecard is still effective.
Table 10.1 Supply chain management balanced scorecard

Learning & growth for employees (To achieve our vision, how will we sustain our ability to change & improve?)
  Goals → Measures:
  • Increase employee ownership over process → Employee survey scores
  • Improve information flows across supply chain stages → Changes in information reports, frequencies across supply chain partners
  • Increase employee identification of potential supply chain disruptions → Comparison of actual disruptions with reports about drivers of potential disruptions
  Risk-related goals:
  • Increase employee awareness of supply chain risks → Number of employees attending risk management training
  • Increase supplier accountabilities for disruptions → Supplier contract provisions addressing risk management accountability & penalties
  • Increase employee awareness of integration of supply chain and other enterprise risks → Number of departments participating in supply chain risk identification & assessment workshops

Internal business processes (To satisfy our stakeholders and customers, where must we excel in our business processes?)
  • Reduce waste generated across the supply chain → Pounds of scrap
  • Shorten time from start to finish → Time from raw material purchase to product/service delivery to customer
  • Achieve unit cost reductions → Unit costs per product/service delivered, % of target costs achieved
  Risk-related goals:
  • Reduce probability and impact of threats to supply chain processes → Number of employees attending risk management training
  • Identify specific tolerances for key supply chain processes → Number of process variances exceeding specified acceptable risk tolerances
  • Reduce number of exchanges of supply chain risks to other enterprise processes → Extent of risks realized in other functions from supply chain process risk drivers

Customer satisfaction (To achieve our vision, how should we appear to our customers?)
  • Improve product/service quality → Number of customer contact points
  • Improve timeliness of product/service delivery → Time from customer order to delivery
  • Improve customer perception of value → Customer scores of value
  Risk-related goals:
  • Reduce customer defections → Number of customers retained
  • Monitor threats to product/service reputation → Extent of negative coverage in business press of quality
  • Increase customer feedback → Number of completed customer surveys about delivery comparisons to other providers

Financial performance (To succeed financially, how should we appear to our stakeholders?)
  • Higher profit margins → Profit margin by supply chain partner
  • Improved cash flows → Net cash generated over supply chain
  • Revenue growth → Increase in number of customers & sales per customer; % annual return on supply chain assets
  Risk-related goals:
  • Reduce threats from price competition → Number of customer defections due to price
  • Reduce cost overruns → Surcharges paid, holding costs incurred, overtime charges applied
  • Reduce costs outside the supply chain from supply chain processes → Warranty claims incurred, legal costs paid, sales returns processed

Developed from Beasley et al. (2006)

ERM Performance Measurement
Some performance measures for enterprise risk modeling are reviewed in this
section. They are used to determine the relative effectiveness of the scorecards.
More details are given in our work published elsewhere.12 Four measures are
reviewed: divergence, the Kolmogorov-Smirnov (K-S) statistic, the Lorenz curve,
and the population stability index. Divergence is calculated as the squared difference
between the mean scores of good and bad accounts divided by their average
variance. The dispersion of the data about the means is captured by the variances
in the denominator, so divergence will be lower if the variance is high. A high
divergence value indicates that the score is able to differentiate between good and bad
accounts. Divergence is a relative measure and should be compared across models.
The K-S statistic is the maximum difference between the cumulative
percentage of goods and the cumulative percentage of bads for the population rank-
ordered according to its score. A high K-S value indicates that good
applicants tend to receive high scores and bad applicants low scores. The
maximum possible K-S statistic is unity. The Lorenz curve is a graph that depicts
the power of a model in capturing bad accounts relative to the entire population.
Usually three curves are depicted: a piecewise curve representing the perfect
model, which captures all the bads in the lowest score range of the model; the
random line, a point of reference indicating no predictive ability; and, lying between
these two, the curve capturing the discriminant power of the model under
evaluation. The population stability index measures a change in score distributions
by comparing the frequencies of corresponding score bands, i.e., it measures the
difference between two populations. In practice, one can judge that there is no real
change between the populations if the index value is no larger than 0.10, and a definite
population change if the index value is greater than 0.25. An index value between 0.10
and 0.25 indicates some shift.
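The measures described above can be sketched in a few lines of code. The following Python functions are illustrative only (stdlib, with invented sample scores rather than the bank's data):

```python
import math

# Illustrative implementations of three of the measures described above.

def divergence(good, bad):
    """Squared difference of mean good/bad scores over their average variance."""
    mg, mb = sum(good) / len(good), sum(bad) / len(bad)
    vg = sum((x - mg) ** 2 for x in good) / len(good)
    vb = sum((x - mb) ** 2 for x in bad) / len(bad)
    return (mg - mb) ** 2 / ((vg + vb) / 2)

def ks_statistic(good, bad):
    """Maximum gap between the cumulative distributions of good and bad scores."""
    cuts = sorted(set(good + bad))
    return max(abs(sum(g <= c for g in good) / len(good) -
                   sum(b <= c for b in bad) / len(bad)) for c in cuts)

def psi(expected, actual):
    """Population stability index over matching score-band proportions."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

good = [700, 720, 750, 780, 800]   # invented, well-separated toy scores
bad = [450, 500, 520, 600, 640]
assert divergence(good, bad) > 1          # clearly separated populations
assert ks_statistic(good, bad) == 1.0     # no overlap at all in this toy data
assert psi([0.3, 0.3, 0.4], [0.2, 0.35, 0.45]) < 0.10  # below 0.10: no real shift
```

The 0.10/0.25 thresholds quoted in the text apply to the psi value, and the Lorenz curve can be built from the same rank-ordering used in ks_statistic.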
Data
Data are collected from the bank's internal database. 'Bad' accounts are divided into
two types: 'Bad 1', indicating overlimit at month-end, and 'Bad 2', referring to those
35+ days since last deposit (and overlimit) at month-end. All non-bad accounts are classified
as 'Good'. We split the population according to credit limit: one segment for credit limits
less than or equal to $50,000 and the other for credit limits between $50,000 and
$100,000. Data are gathered from two time slots: an observed time slot and a validated
time slot. Two sets (denoted Set1 and Set2) are used in the validation. Observed
time slots are from August 2002 to January 2003 for Set1 and from September 2001
to February 2002 for Set2, respectively. While this data is relatively dated, the system
demonstrated using this data is still in use, as the bank has found it stable and
feels that there is a high cost in switching. Validated time slots are from February 2003
to June 2003 for Set1 and from March 2002 to July 2002 for Set2, respectively. All
accounts are scored on the last business day of each month. All non-scored accounts
are excluded from the analyses.
Table 10.2 gives the bad-rate summary by line size for both sets, while
Table 10.3 reports the score distribution for both sets, including the Beacon score
accounts. From Table 10.2, we can see that in both sets, although the number of
Bad 2 accounts is quite a bit less than that of Bad 1 accounts, it is still reasonably balanced
Table 10.2 Bad loan rates by loan size

            Bad loans 1, Jan. 2003 (set1)      Bad loans 2, Jan. 2003 (set1)
Limit       N        # of bad   Bad rate (%)   N        # of bad   Bad rate (%)
≤$50 M      59,332   5022       8.46           61,067   1127       1.85
$50–100 M   6777     545        8.04           7000     69         0.99
Total       66,109   5567       8.42           68,067   1196       1.76

            Bad loans 1, Feb. 2002 (set2)      Bad loans 2, Feb. 2002 (set2)
Limit       N        # of bad   Bad rate (%)   N        # of bad   Bad rate (%)
≤$50 M      61,183   5790       9.46           63,981   1791       2.80
$50–100 M   6915     637        9.21           7210     88         1.22
Total       68,098   6427       9.44           71,191   1879       2.64

Note: Bad 1: overlimit; Bad 2: 35+ days since last deposit and overlimit
data. The bad rates by product line size are less than 10%. The bad rates decreased
over time for both product lines and score bands, as can be seen from both
tables. For example, we can see from Table 10.2 that the bad rates for accounts less
than or equal to $50 M decreased from 9.46% and 2.80% in
Feb. 2002 to 8.46% and 1.85% in Jan. 2003 (for Bad 1 and Bad 2, respectively).
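Each bad rate in Table 10.2 is simply the number of bad loans divided by the number of loans; a quick Python sanity check using the table's own counts:

```python
# Verifying bad rates quoted from Table 10.2: rate = 100 * (# bad) / (# loans).

def bad_rate(n_loans, n_bad):
    return round(100 * n_bad / n_loans, 2)

assert bad_rate(59332, 5022) == 8.46   # <= $50 M, Bad 1, Jan. 2003
assert bad_rate(61067, 1127) == 1.85   # <= $50 M, Bad 2, Jan. 2003
assert bad_rate(61183, 5790) == 9.46   # <= $50 M, Bad 1, Feb. 2002
```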
Results and Discussion
Computation is done in two steps: (1) Score Distribution and (2) Performance
Validation. The first step examines the evidence of a score shift. This population
consists of the four types of business line of credit (BLOC) products. The second
step measures how well models can predict the bad accounts within a 5-month
period. This population only contains one type of BLOC account.
Table 10.3 Score statistical summary

             Bad loans 1, Jan. 2003 (set1)     Bad loans 2, Jan. 2003 (set1)
Score band   N        Bad    Bad rate (%)      N        Bad    Bad rate (%)
0            1210     125    10.33             1263     27     2.14
1–500        152      58     38.16             197      27     13.70
501–550      418      117    27.99             508      49     9.65
551–600      1438     350    24.34             1593     109    6.84
601–650      4514     858    19.01             4841     194    4.01
651–700      11,080   1494   13.48             11,599   321    2.77
701–750      18,328   1540   8.40              18,799   312    1.66
751–800      21,083   888    4.20              21,356   149    0.70
≥800         9096     262    2.88              9174     35     0.38
Beacon       12,813   769    6.00              13,054   328    2.51
Total        80,132   6461   8.06              82,384   1551   1.88

             Bad loans 1, Feb. 2002 (set2)
Score band   N        Bad
0            1840     215
1–500        231      92
501–550      646      189
551–600      2106     533
601–650      5348     1078
651–700      11,624   1641
701–750      18,392   1647
751–800      20,951   969
≥800         8800     278
Beacon       17,339   1349
Total        87,277   7991
Score Distribution
Figure 10.1 depicts the population stability index values from January 2001 to June
2003. The values of the indices for the $50,000 and $100,000 segments show a steady
increase with respect to time. The score distribution of the data set is becoming less
like the most current population as time passes. Yet the indices still remain below
the benchmark of 0.25 that would indicate a significant shift in the score population.
The upward trend is due to two factors: time on books of the accounts and credit
balance. ('Books of account' refers to the records in which commercial accounts are
recorded.) First, as the portfolio ages, more accounts will be assigned lower values
(i.e., less risky) by the time-on-books variable, thus contributing to a
shift in the overall score. Second, more and more accounts do not have a credit
balance as time goes on. As a result, more accounts will receive higher scores, indicating
riskier behavior.
The shifted score distribution indicates that the population used to develop the
model differs from the most recent population. As a result, the weights that had
been assigned to each characteristic value might not be the ones most suitable for the
current population. Therefore, we conduct the following performance
validation computation.
Performance
To compare the discriminant power of the SBS scorecard with the credit bureau
scorecard model, we depict the Lorenz curves for both 'Bad 1' and 'Bad 2' accounts
in Figs. 10.2 and 10.3. From both figures, we can see that the SBS model
still provides an effective means of discriminating the 'good' from 'bad' accounts,
and that the SBS scorecard captures bad accounts much more quickly than the
Beacon score. Based on the 'Bad 1' accounts in January 2003, the SBS captured 58% of
bad accounts, outperforming the Beacon value of 42%. One reason the
Beacon model is weaker at capturing bad accounts is that the credit risk of one of the
owners is not necessarily indicative of the credit risk of the business. Instead, a
credit bureau scorecard based on the business itself may be more suitable.
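The Lorenz-curve construction behind Figs. 10.2 and 10.3 amounts to ranking accounts from riskiest to safest score and accumulating the share of bad accounts captured. A small Python sketch with invented data (here, as with the SBS, a higher score indicates riskier behavior):

```python
# Lorenz-curve points: rank accounts by score, riskiest first, then track the
# cumulative fraction of bad accounts captured. Data below is invented.

def lorenz_points(scores, is_bad):
    """Return (fraction of population, fraction of bads captured) pairs."""
    ranked = sorted(zip(scores, is_bad), key=lambda pair: -pair[0])
    total_bad, captured, points = sum(is_bad), 0, []
    for i, (_, bad) in enumerate(ranked, start=1):
        captured += bad
        points.append((i / len(ranked), captured / total_bad))
    return points

pts = lorenz_points([900, 850, 700, 650, 600, 400], [1, 1, 0, 0, 1, 0])
# a perfect model would capture all 3 bads within the riskiest 3/6 of accounts;
# this toy model captures 2 of the 3 by that point
```

Plotting these points against the diagonal (random) line and the perfect-model curve reproduces the three-curve picture described in the ERM performance measurement discussion.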
Fig. 10.1 Population stability indices (Jan. 02–June 03); y-axis: stability index (0.00–0.25); x-axis: score date; series: limit ≤ $50,000 and limit ≤ $100,000
Table 10.4 reports various performance statistic values for both 'Bad 1' and 'Bad
2' accounts. Two main patterns are found. First, the divergence and K-S
values produce results consistent with the Lorenz curves: for both 'Bad 1' and 'Bad 2',
the SBS scorecard performs better than the bureau score in predicting a bad account.
Second, the SBS may be experiencing performance deterioration on both bad-account
definitions. Table 10.4 shows that all performance statistics based on the January 2003
data are worse than those of the February 2002 period. For example, the 'Bad 1'
scorecard generates K-S statistic scores of 78 and 136 for January 2003 and
February 2002, respectively. The 'Bad 2' scorecard generates K-S statistic scores
of 233 and 394 for the two periods.
Table 10.5 gives performance statistic values for both credit lines, i.e., accounts
with credit limit less than or equal to $50 M and between $50 M and $100 M.
Two main patterns are found. First, the Small
Business Scorecards perform well on both segments, and outperform the Beacon score on
both. Second, both scorecards, especially the Small Business Scorecard,
Fig. 10.2 Lorenz curve for 'Bad 1' accounts; x-axis: percent into distribution; y-axis: cumulative % of bads captured; curves: SBS (Jan 2003), SBS (Feb 2002), Beacon (Jan 2003), Beacon (Feb 2002), random, and exact
Fig. 10.3 Lorenz curve for 'Bad 2' accounts; x-axis: percent into distribution; y-axis: cumulative % of bads captured; curves: SBS (Jan 2003), SBS (Feb 2002), Beacon (Jan 2003), Beacon (Feb 2002), random, and exact
Table 10.4 Performance statistics for both 'Bad 1' and 'Bad 2' accounts

'Bad 1' accounts:
Statistic            SBS (Jan. 2003)   Beacon (Jan. 2003)   SBS (Feb. 2002)   Beacon (Feb. 2002)
# Good               60,542            60,542               61,671            61,671
Mean good            108.89            738.71               127.37            734.67
Standard good        172.74            60.18                203.26            63.53
# Accounts (bad)     5567              5567                 6427              6427
Mean score (bad)     344.96            693.13               439.63            685.79
Standard deviation   321.53            69.45                387.24            73.27
Bad rate             8.42%             8.42%                9.44%             9.44%
Divergence           0.836             0.492                1.02              0.508
K-S                  78                726                  136               716

'Bad 2' accounts:
Statistic            SBS (Jan. 2003)   Beacon (Jan. 2003)   SBS (Feb. 2002)   Beacon (Feb. 2002)
# Good               66,871            66,871               69,312            69,312
Mean good            137.47            734.28               171.81            729.23
Standard good        221.22            62.78                284.21            66.66
# Accounts (bad)     1196              1196                 1879              1879
Mean score (bad)     699.82            678.03               995.65            663.2
Standard deviation   570.77            75.42                756.34            76.08
Bad rate             1.76%             1.76%                2.64%             2.64%
Divergence           1.688             0.657                2.079             0.852
K-S                  233               726                  394               707
Table 10.5 Performance statistics for both credit lines

Limit ≤ $50 M:
Statistic                 SBS (Jan. 2003)   Beacon (Jan. 2003)   SBS (Feb. 2002)   Beacon (Feb. 2002)
Good: # Accounts          47,682            47,682               48,539            48,539
Good: Mean                116.12            737.77               138.80            733.12
Good: Standard            177.34            59.12                213.62            62.52
Bad: # Accounts           4393              4393                 5226              5226
Bad: Mean score           347.40            695.10               461.06            686.03
Bad: Standard deviation   314.69            65.68                391.94            71.87
Bad rate                  8.44%             8.44%                9.72%             9.72%
Divergence                0.820             0.466                1.042             0.489
K-S                       78                726                  136               717

Limit $50–100 M:
Statistic                 SBS (Jan. 2003)   Beacon (Jan. 2003)   SBS (Feb. 2002)   Beacon (Feb. 2002)
Good: # Accounts          6232              6232                 6278              6278
Good: Mean                115.13            752.18               125.52            752.64
Good: Standard            161.93            54.61                174.07            55.86
Bad: # Accounts           545               545                  637               637
Bad: Mean score           345.82            715.80               398.05            711.95
Bad: Standard deviation   285.01            68.35                310.59            62.28
Bad rate                  8.04%             8.04%                9.21%             9.21%
Divergence                0.991             0.346                1.172             0.473
K-S                       125               735                  162               742
perform better on 'Bad 2' accounts. The main reason is that the 'Bad 2' definition
specifies a more severe degree of delinquency, so the difference between the good
and bad accounts is more distinct.
Conclusions
Balanced scorecard analysis provides a means to measure multiple strategic
perspectives. The basic principle is to select four diverse areas of strategic impor-
tance, and within each, to identify concrete measures that managers can use to gauge
organizational performance on multiple scales. This allows consideration of multiple
perspectives or stakeholders. Examples given included supply chain risk analysis,
and policy analysis of natural gas vehicle adoption. This chapter focused on the
example of a small bank credit situation. Computation results indicate there is
evidence of a shifting score distribution utilized by the scorecard. However, the
scorecard still provides an effective means to predict ‘bad’ accounts.
Balanced scorecards have been widely applied in general, but not specifically to
enterprise risk management. This chapter demonstrates how the balanced scorecard
can be applied to evaluate the risk management posture of a particular organization.
The demonstration specifically is for a bank, but other organizations could measure
appropriate risk elements for their circumstances. Balanced scorecards offer the
flexibility to include any type of measure key to production planning and operations
of any type of organization.
Notes
1. Kaplan, R.S. and Norton, D.P. (2006). Alignment: Using the Balanced Score-
card to Create Corporate Synergies. Cambridge, MA: Harvard Business School
Press Books.
2. Olhager, J. and Wikner, J. (2000), Production Planning and Control Tools.
Production Planning and Control 11:3, 210–222.
3. Al-Mashari, M., Al-Mudimigh, A. and Zairi, M. (2003). Enterprise resource
planning: A taxonomy of critical factors. European Journal of Operational
Research, 146:2, 352–364.
4. Alquier, A.M.B. and Tignol, M.H.L. (2006). Risk management in small- and
medium-sized enterprises. Production Planning & Control, 17, 273–282.
5. Kaplan and Norton (2006), op cit.
6. Elbannan, M.A. and Elbannan, M.A. (2015). Economic consequences of bank
disclosure in the financial statements before and during the financial crisis:
Evidence from Egypt. Journal of Accounting, Auditing & Finance 30(2),
181–217.
7. Schaefer, A., Cassidy, M., Marshall, K. and Rossi, J. (2006). Internal audits and
executive education: A holy alliance to reduce theft and misreporting. Employee
Relations Law Journal, 32(1), 61–84.
8. Beasley, M., Chen, A., Nunez, K. and Wright, L. (2006). Working hand in hand:
Balanced scorecards and enterprise risk management, Strategic Finance 87:9,
49–55.
9. Campbell, M., Adams, G.W., Campbell, D.R. and Rose, M.R. (2006). Internal
audit can deliver more value, Financial Executive 22:1, 44–47.
10. Sugarman, P. and Kakabadse, N. (2008). A model of mental health governance,
The International Journal of Clinical Leadership 16, 17–26.
11. Janssen, A., Lienin, S.F., Gassmann, F. and Wokaun, A. (2006). Model aided
policy development for the market penetration of natural gas vehicles in
Switzerland, Transportation Research Part A 40, 316–333.
12. Wu, D.D. and Olson, D.L. (2009). Enterprise risk management: Small business
scorecard analysis. Production Planning & Control 20(4), 362–369.
11 Information Systems Security Risk
There are a number of threats to contemporary information systems. These include
the leakage and modification of sensitive intellectual property and trade secrets,
compromise of customer, employee, and associate personal data, denial-of-service
attacks, Web vandalism, and cyber spying. Our culture has seen an explosion in
social networking and use of cloud computing, including work environments where
employees can bring their own devices (BYOD), such as iPhones or computers, to do
their work. In principle, this allows them to work 24 hours a day, 7 days a week. In
practice, it at least allows them to work when they please, anywhere they please.
Information security is the preservation of information confidentiality, integrity, and
availability. The aims of information security are to ensure business continuity,
comply with legal requirements, and provide the organization with a competitive
edge (leading to profit in the private sector and more efficient administration in the
public sector).
The objectives of information security risk management can be described as1:
1. Risk identification
2. Risk assessment (prioritization of risks)
3. Identification of the most cost-effective means of controlling
4. Monitoring (risk review).
Step 3 includes risk mitigation options of avoidance, transfer, or active treatment of
one type or another. Three endemic deficiencies were identified:
1. Information security risk identification is often perfunctory, with failure to
identify risks related to tacit knowledge, failure to identify vulnerability from
interactions across multiple information assets, failure to identify indications of
fraud, espionage, or sabotage, failure to systematically learn from past events,
and failure to identify attack patterns in order to develop effective
countermeasures.
# Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_11
2. Information security risks are commonly considered without reference to
reality.
3. Information security risk assessment is usually intermittent without reference
to historical data.
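The four-step cycle above can be sketched as a minimal risk register. All threat names, probabilities, impacts, and control costs below are illustrative assumptions for the sketch, not figures from the chapter.

```python
# Minimal sketch of the four-step ISRM cycle: identify, assess
# (prioritize), select the most cost-effective control, monitor.
# All threats, probabilities, and costs are illustrative assumptions.

# Step 1: risk identification
risks = [
    {"threat": "phishing",       "probability": 0.30, "impact": 200_000},
    {"threat": "insider misuse", "probability": 0.10, "impact": 500_000},
    {"threat": "web vandalism",  "probability": 0.20, "impact": 50_000},
]

# Step 2: risk assessment -- prioritize by annual expected loss
for r in risks:
    r["expected_loss"] = r["probability"] * r["impact"]
risks.sort(key=lambda r: r["expected_loss"], reverse=True)

# Step 3: pick the most cost-effective control per risk (avoid,
# transfer, or treat), here judged by net risk reduction:
# expected loss avoided minus the control's cost.
controls = {  # (name, annual cost, fraction of risk removed)
    "phishing":       [("user training", 10_000, 0.60),
                       ("mail filter", 25_000, 0.80)],
    "insider misuse": [("access review", 15_000, 0.50)],
    "web vandalism":  [("WAF", 8_000, 0.70)],
}
for r in risks:
    best = max(controls[r["threat"]],
               key=lambda c: r["expected_loss"] * c[2] - c[1])
    r["control"] = best[0]

# Step 4: monitoring would re-run steps 1-3 on a schedule.
for r in risks:
    print(r["threat"], round(r["expected_loss"]), r["control"])
```

The prioritized register puts phishing first (expected loss 60,000 against 50,000 for insider misuse), and training beats the mail filter on net risk reduction.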
Internal threats are also present. Some problems arise due to turbulence in personnel,
through new hires, transfers, and terminations. Most insider computer security
incidents have been found to involve former employees.2 External threats include
attacks by organized criminals as well as potential threats from terrorists.3
Frameworks
There are a number of best practice frameworks that have been presented to help
organizations assess risks and implement controls. These include the international
information security management standard series ISO 2700x, which facilitates
planning, implementation, and documentation of security controls.4 In 2005 this
series replaced the older ISO 17799 standards of the 1990s. The objective of the
standard was to provide a model for establishing, implementing, operating, monitor-
ing, reviewing, maintaining, and improving an information security management
system. It continues reliance on the Plan-Do-Check-Act (PDCA) model of the older
standard. Within the new series are:
• ISO 27001—specification for an ISMS including controls with their objectives;
• ISO 27002—code of practice with hundreds of potential control mechanisms;
• ISO 27003—guidance for implementation of an ISMS, focusing on PDCA;
• ISO 27004—standard covering ISMS measurement and metrics;
• ISO 27005—guidelines for information security risk management (ISRM);
• ISO 27006—accreditation standards for certification and registration.
Gikas5 compared these ISO standards with three other standards, two governmental
and a third private. The Health Insurance Portability and Accountability Act
(HIPAA) was enacted in 1996, requiring publication of standards for electronic
exchange, privacy, and security for health information. HIPAA was intended to
protect the security of individual patient health information. The Federal Information
Security Management Act (FISMA) was enacted in 2002, calling upon all federal
agencies to develop, document and implement programs for information systems
security. The industry standard is the Payment Card Industry-Digital Security
Standard (PCI-DSS), providing a general set of security requirements meant to
give private organizations flexibility in implementing and customizing
organization-specific security measures related to payment account data security.
Table 11.1 gives PCI-DSS content:
Other frameworks address how information security can be attained. Security
governance can be divided into three divisions: strategic, managerial and opera-
tional, and technical.6 Strategic factors involve leadership and governance. These
involve sponsorship, strategy selection, IT governance, risk assessment, and
measures to be used. Functions such as defining roles and responsibilities fall into
this category.7 The managerial and operational division includes organization and
security policies and programs. This division includes risk management in the form
of a security program, to include security culture awareness and training. Security
policies manifest themselves in the form of policies, procedures, standards,
guidelines, certification, and identification of best practices. The technical division
includes programs for asset management, system development, and incident man-
agement, as well as plans for business continuity.
Levels of such a capability maturity model for information systems security can be
described as8:
• Level 1—Security Leadership: strategy and metrics
• Level 2—Security Program: structure, resources, and skill sets needed
• Level 3—Security Policies: standards and procedures
• Level 4—Security Management: monitoring procedures, to include privacy
protection
• Level 5—User Management: developing aware users and a security culture
• Level 6—Information Asset Security: meta security, protection of the network
and host
• Level 7—Technology Protection & Continuity: protection of physical environ-
ment, to include continuity planning.
Information security faces many challenges, to include evolving business
requirements, constant upgrades of technology, and threats from a variety of
sources. Vendors and computer security firms send a steady stream of alerts about
new threats arising from the Internet. Internally, new hires, transfers, and
Table 11.1 PCI-DSS
Build and maintain a secure network:
  1. Install and maintain a firewall to protect cardholder data
  2. Don't use vendor-supplied default passwords and security parameters
Protect cardholder data:
  3. Protect stored cardholder data
  4. Encrypt cardholder data transmission over open public networks
Maintain a vulnerability management program:
  5. Regularly update and use anti-virus software
  6. Develop and maintain secure systems
Implement strong access control:
  7. Restrict access to cardholder data by need-to-know
  8. Assign unique ID to each person with computer access
  9. Restrict physical access to cardholder data
Regularly monitor and test:
  10. Track and monitor all access
  11. Regularly test systems and processes
Maintain an information security policy:
  12. A policy to address information security
terminations may be the germination of threats from current or former employees.
There also are many changes in legal requirements, especially for those
organizations doing work involving the government.
Security Process
As a means to attain information technology security, consider the following9:
Establish a Mentality To be effective, the organization members have to buy in to
operating securely. This includes sensible use of passwords. Those dealing with
critical information probably need to change their passwords at least every 60 days,
which may be burdensome, but provides protection for highly vulnerable informa-
tion. Passwords themselves should be difficult to decipher, running counter to what
most of us are inclined to use. Training is essential in inculcating a security climate
within the organization.
Include Security in Business Decision Making When software systems are devel-
oped, especially in-house, an information security manager should certify that
organizational policies and procedures have been followed to protect organizational
systems and data. When pricing products, required funding for security measures
needs to be included in business cases.
Establish and Continuously Assess the Network Security audits need to be
conducted using testable metrics. These audits should identify lost productivity
due to security failures, to include subsequent user awareness training.
Automation can be applied in many cases to accomplish essential risk compliance
and assessment tasks. This can include vulnerability testing, as well as incident
management and response. The benefits can include better use of information, lower
cost of compliance, and more complete compliance with regulations such as
Sarbanes-Oxley and HIPAA.
Table 11.2 provides a security process cycle within this framework:
This cycle emphasizes the ability to automate within an enterprise information
system context. A means to aid in assessing vulnerabilities is provided by the risk
matrices we discussed in Chap. 2. Cyber-crime includes ransomware (where consumer
computers are frozen until a ransom is paid), cyber blackmail (holding banks
to ransom with threats to publish client data), online banking fraud, Trojan horses,
phishing, denial of service, spying (governmental or commercial), as well as
mass hacking for political or ideological reasons. Table 11.3 provides a risk matrix
for this case11:
This matrix could be implemented by assigning responsibility for risk to the
executive board for Red categories, to heads of division for Yellow, and to line
managers for Green. Each of these responsibility levels could determine the extra
mitigation measures suggested by their information technology experts to lower
residual risk.
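The assignment of responsibility described above maps directly from the matrix. The function below is a minimal sketch of that mapping, not part of the cited study.

```python
# Sketch of the Table 11.3 risk tolerance matrix: map a
# (likelihood, impact) pair to a colour, and the colour to the
# responsible level suggested in the text.

IMPACTS = ["negligible", "low", "significant", "major", "very severe"]
MATRIX = {  # colour rows from Table 11.3, most to least likely
    "almost certain": ["G", "Y", "R", "R", "R"],
    "likely":         ["G", "Y", "R", "R", "R"],
    "possible":       ["G", "G", "Y", "R", "R"],
    "unlikely":       ["G", "G", "Y", "R", "R"],
    "rare":           ["G", "G", "G", "Y", "R"],
}
OWNER = {"R": "executive board", "Y": "head of division", "G": "line manager"}

def risk_owner(likelihood: str, impact: str) -> str:
    """Return the responsible level for one cell of the matrix."""
    colour = MATRIX[likelihood][IMPACTS.index(impact)]
    return OWNER[colour]

print(risk_owner("possible", "major"))  # a Red cell -> executive board
```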
Best Practices for Information System Security
Nine best practices to protect against information system security threats can
include12:
1. Firewalls—hardware or software that blocks unauthorized traffic. Firewalls do
not protect against malicious traffic moving through legitimate communication
channels, and about 70% of security incidents have been reported to occur inside
firewalls.
2. Software updates—application vulnerabilities are corrected by patches issued
by the software source when detected. Not adopting patches has led to
vulnerabilities that are commonly exploited by hackers.
Table 11.2 Tracy's security process cycle10
Process   | IT impact                    | Function
Inventory | Assets available             | Access assets in hardware and software
Assess    | Vulnerabilities              | Automatically check systems for violations of risk
          |                              | policies based on regulatory and commercially
          |                              | accepted standards
Notify    | Who needs to know?           | Automatically alert those responsible for patch
          |                              | management, compliance
Remediate | Action needed                | Automate security remediation by leveraging help
          |                              | desks, patch databases, configuration management
          |                              | tools
Validate  | Did corrective actions work? | Automatically confirm that remediation is complete,
          |                              | record compliance and confirm compliance with risk
          |                              | posture policies
Report    | Can you get information      | Give management views of enterprise IT risk and
          | needed?                      | compliance, generate reports
Table 11.3 Risk tolerance matrix for cyber crime
                Negligible  Low     Significant  Major   Very severe
                impact      impact  impact       impact  impact
Almost certain  Green       Yellow  Red          Red     Red
Likely          Green       Yellow  Red          Red     Red
Possible        Green       Green   Yellow       Red     Red
Unlikely        Green       Green   Yellow       Red     Red
Rare            Green       Green   Green        Yellow  Red
3. Anti-virus, worm and Trojan software—should be installed on all machines.
Management policies to reduce virus vulnerability include limiting shareware and
Internet use; user training and heightened awareness through education can
supplement software protection.
4. Password policy—users face a constant tradeoff between sound password struc-
ture and workability (the ability to remember). But sound password use is needed
to control access to authorized users. Human engineering in the form of naïve
acquisition of passwords by intruders continues to be a problem.
5. Physical security—including disaster recovery planning and physical protection
in the form of locks to control access to critical system equipment. Trash
management is also important, as are identification procedures.
6. Policy and training—because many information system security risks arise due
to unawareness, a program of enlightenment can be very beneficial in controlling
these risks. The other side of the coin is policy, the adoption of sound procedures
governing the use of hardware, e-mail, and the Internet. Policy and training thus
work together to accomplish a more secure system operating environment.
7. Secure remote connections—ubiquitous computing creates the opportunity to
vastly expand mobile computing connections, and thus make workers much more
productive. In order to gain these advantages, good encryption techniques are
required as well as sound authentication procedures.
8. Server lock down—limiting server exposure is a basic principle. Those servers
linking to the Internet need to be protected against intrusion.
9. Intrusion detection—systems are available to monitor network traffic to seek
malicious bit patterns.
Supply Chain IT Risks
Information technology makes supply chains work through the communication
needed to coordinate activities across organizations, often around the world.13
These benefits require openness of systems across organizations. Techniques have
been devised to provide the level of security that enables us to do our banking
on-line and global supply chains to exchange information expeditiously with
confidence; this happens only because information systems staff are able to make
data and information exchange secure.
IT support to supply chains involves a number of operational forms, to include
vendor-managed inventory (VMI), collaborative planning forecasting and
replenishment (CPFR), and others. These forms include varying levels of informa-
tion system linkage across supply chain members, which have been heavily
studied.14
Within supply chains, IT security incidents can arise from within the organiza-
tion, within the supply chain network, or in the overall environment.15 Within each
threat origin, points of vulnerability can be identified and risk mitigation strategies
customized. The greatest threat is loss of confidentiality. An example would be a
case where a supplier lost their account when a Wal-Mart invoice was
unintentionally sent to Costco with a lower price for items carried by both retailers.
Supply chains require data integrity, as systems like MRP and ERP don’t function
without accurate data. Inventory information is notoriously difficult to maintain
accurately.
Value Analysis in Information Systems Security
The value analysis procedure has been used to sort objectives related to information
systems security.16 That process involved three steps, which they described as:
1. Interviews to elicit individual values.
2. Converting individual values and statements into a common format, generally in
the form of object and preference. This step included clustering objectives into
groups of two levels.
3. Classifying objectives as either fundamental to the decision context or as a means
to achieve fundamental objectives.
Once the initial hierarchy was developed, it was validated by review with each of the
seven experts involved. Sub-objectives were then classified as essential, useful but
not essential, or not necessary for the given decision context. Hierarchy clustering
was also reviewed.
We will apply that hierarchy with the SMART procedure (also outlined earlier) to
a hypothetical decision involving selection of an enterprise information (EIS, or
ERP) system. Tradeoffs among alternative forms of ERP have been reviewed in
depth.17 The SMART method has been suggested for selecting among alternative
forms of ERP.18
Tradeoffs in ERP Outsourcing
Bryson and Sullivan cited specific reasons that a particular ASP might be attractive
as a source for ERP.9 These included the opportunity to use a well-known company
as a reference, opening new lines of business, and opportunities to gain market-share
in particular industries. Some organizations may also view ASPs as a way to aid cash
flow in periods when they are financially weak and desperate for business. In many
cases, costs rise precipitously after the outsourcing firm has become committed to the
relationship. One explanation given was the lack of analytical models and tools to
evaluate alternatives.
ASPs become risky both from success and, conversely, from bankruptcy. ASP sites
might be attacked and vandalized, or destroyed by natural disaster. Each organiza-
tion must balance these factors and make their own decision.19
ERP System Risk Assessment
The ideal theoretical approach is a rigorous cost/benefit study, in net present terms.
Methods supporting this positivist view include cost/benefit analysis, applying net
present value, calculating internal rate of return or payback. Many academics as well
as consulting practitioners take the position that this is crucial. However, nobody
really has a strong grasp on predicting the future in a dynamic environment such as
ERP, and practically, complete analysis in economic terms is often not applied.
The Gartner Group consistently reports that IS/IT projects significantly exceed
their time (and cost) estimates. Thus, while almost half of the surveyed firms
reported expected implementation expense to be less than $5 million, we consider
that figure to still be representative of the minimum scope required. However, recent
trends on the part of vendors to reduce implementation time probably have reduced
ERP installation cost. In the U.S., vendors seem to take the biggest chunk of the
average implementation cost, and consultants also take a big portion; these
proportions are reversed in Sweden. The internal implementation team accounts for
an additional 14% (12% in Sweden), and the proportions spent on training are
likewise roughly reversed between the two countries.
Total life cycle costs are needed for evaluation of ERP systems, which have long-
range impacts on organizations. Unfortunately, this makes it necessary to estimate
costs that are difficult to pin down. Total costs can include:
• Software upgrades over time, to include memory and disk space requirements
• Integration, implementation, testing, and maintenance
• Providing users with individual levels of functionality, technical support and
service
• Servers
• Disaster recovery and business continuance program
• Staffing.
Qualitative Factors
While cost is clearly an important matter, there are other factors important in
selection of ERP that are difficult to fit into a total cost framework. A survey of
European firms in mid-1998 was conducted with the intent of measuring ERP
penetration by market, including questions about criteria for supplier selection.20
The criteria reportedly used are given in the first column of Table 11.4, in order of
ranking. Product functionality and quality were the criteria most often reported to be
important. Column 2 gives related factors from another framework for evaluating
ASPs, while column 3 gives more specifics in that framework.21
While these two frameworks don’t match entirely, there is a lot of overlap.
Multiple Criteria Analysis
An example is extracted here from the literature22 to show the application of multiple
criteria analysis technique in managing IT risks. The data in the example are altered
to fit our analysis scope. The multiple criteria analysis was found useful when used
together with cost-benefit analysis, which seeks to identify accurate measures of
benefits and costs in monetary terms, and uses the ratio benefits/costs (the term
benefit-cost ratio seems more appropriate, and is sometimes used, but most people
refer to cost-benefit analysis). Because ERP projects involve long time frames (for
benefits if not for costs as well), considering the net present value of benefits and
costs is important.
Recognition that real life decisions involve high levels of uncertainty is reflected
in the development of fuzzy multiattribute models. The basic multiattribute model is
to maximize value as a function of importance and performance:
Table 11.4 Selection evaluation factors
ERP supplier selection (Van
Everdingen et al.)
ASP evaluation
(Ekanayaka et al.) Ekanayaka et al. subelements
1. Product functionality Customer service 1. Help desk & training
2. Support for account
administration
2. Product quality Reliability, scalability
3. Implementation speed Availability
4. Interface with other
systems
Integration 1. Ability to share data between
applications
5. Price Pricing 1. Effect on total cost structure
2. Hidden costs & charges
3. ROI
6. Market leadership
7. Corporate image
8. International orientation
Security Physical security of facilities
Security of data and applications
Back-up and restore procedures
Disaster recovery plan
Service level monitoring
& management
1. Clearly defined performance
metrics and measurement
2. Defined procedures for opening
and closing accounts
3. Flexibility in service offerings,
pricing, contract length
value_j = \sum_{i=1}^{K} w_i \, u(x_{ij})    (1)
where wi is the weight of attribute i, K is the number of attributes, and u(xij) is the
score of alternative xj on attribute i.
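Equation (1) translates directly into code. The weights and scores in the example call below are placeholders, not the chapter's data.

```python
# Equation (1) as code: value_j = sum over i of w_i * u(x_ij).
# Example weights and scores are placeholder values.
def smart_value(weights, scores):
    """Additive multiattribute value of one alternative."""
    assert len(weights) == len(scores)
    return sum(w * u for w, u in zip(weights, scores))

print(smart_value([0.5, 0.25, 0.25], [1.0, 0.5, 0.0]))  # 0.625
```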
Multiple criteria analysis considers benefits on a variety of scales without
directly converting them to some common scale such as dollars. The method
(there are many variants of multiple criteria analysis) is not at all perfect. But it
does provide a way to demonstrate to decision makers the relative positive and
negative features of alternatives, and gives a way to quantify the preferences of
decision makers.
We will consider an analysis of six alternative forms of ERP: from an Australian
vendor, the Australian vendor system customized to provide functionality unique to
the organization, an SAP system, a Chinese vendor system, a best-of-breed system,
and a South Korean ASP. We will make a leap to assume that complete total life
cycle costs have been estimated for each option as given in Table 11.5.
The greatest software cost is expected to be for the best-of-breed option, while the
ASP would have a major advantage. The best-of-breed option is expected to have the
highest consulting cost, with ASP again having a relative advantage. Hardware is the
same for the four mainline vendor options, with the ASP option saving a great deal.
Implementation is expected to be highest for the customized system, with ASP
having an advantage. Training cost is lowest for the customized system and highest
for the best-of-breed system.
But there are other important factors as well. This total cost estimate assumes that
everything will go as planned, and may not consider other qualitative aspects.
Multiple criteria analysis provides the ability to incorporate other factors.
Perhaps the easiest application of multiple criteria analysis is the simple
multiattribute rating theory (SMART). SMART provides decision makers with a
means to identify the relative importance of criteria in terms of weights, and
measures the relative performance of each alternative on each criterion in terms of
scores. In this application, we will include criteria of seven factors: Customer
service; Reliability and scalability, Availability, Integration; Financial factors;
Table 11.5 Total life cycle costs for each option ($ million)
            Australian  Australian          Chinese  Best-   South
            vendor      customized   SAP    vendor   of-B    Korean ASP
Software        15          13        12       2       16        3
Consultants      6           8         9       2       12        1
Hardware         6           6         6       4        6        0
Implement        5          10         6       4        9        2
Train            8           2         9       3       11        8
Total cost      40          39        42      15       54       14
Security; and Service level monitoring & management.14 The relative importance is
given by the order, following the second column of Table 11.4:
Scores
Scores in SMART can be used to convert performances (subjective or objective) to a
zero-one scale, where zero represents the worst acceptable performance level in the
mind of the decision maker, and one represents the ideal, or possibly the best
performance desired. Note that these ratings are subjective, a function of individual
preference. Scores for the criteria given in the value analysis example could be as in
Table 11.6:
The best imaginable customer service level would be provided by the customized
Australian vendor option. The South Korean ASP option is considered suspect on
this factor, but not the worst imaginable. The Australian vendor system without
customization is expected to be the most reliable, while the South Korean ASP
option is the worst. The SAP option is rated the easiest to integrate. The South
Korean ASP and best-of-breed systems are rated low on this factor, but not the
worst imaginable. Costs reflect Table 11.5, converting dollar estimates into value
scores on the 0–1 scale. The South Korean ASP option has the best imaginable cost.
The Australian vendor system without customization is rated as the best possible
with respect to security issues, while the South Korean ASP is rated the worst
possible. Service level ratings are high for the SAP system and the ASP, while the
best-of-breed system is rated low on this factor. The highest image score is for the
best-of-breed system, and the lowest for the South Korean ASP option.
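As one illustration of converting dollar estimates to the 0–1 scale, the totals of Table 11.5 could be scaled linearly (1 = cheapest, 0 = most expensive). Note this is an assumption for the sketch: the chapter's Table 11.6 cost scores are subjective ratings, so the linear version reproduces the endpoints (ASP = 1, best-of-breed = 0) but not the intermediate values exactly.

```python
# Linear min-max scaling of total life cycle cost onto SMART's
# 0-1 scale. This is one common convention, not necessarily the
# procedure behind the chapter's subjective Table 11.6 scores.
costs = {  # totals from Table 11.5, $ million
    "Australian vendor": 40, "Australian customized": 39, "SAP": 42,
    "Chinese vendor": 15, "Best-of-breed": 54, "South Korean ASP": 14,
}
best, worst = min(costs.values()), max(costs.values())
scores = {k: (worst - c) / (worst - best) for k, c in costs.items()}
for k, s in scores.items():
    print(f"{k}: {s:.2f}")
```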
Table 11.6 Relative scores by criteria for each option in example
                            Australian  Australian         Chinese  Best-  South
                            vendor      customized  SAP    vendor   of-B   Korean ASP
Customer service               0.6         1         0.9     0.5     0.7     0.3
Reliability, Availability,
Scalability                    1           0.8       0.9     0.5     0.4     0
Integration                    0.8         0.9       1       0.6     0.3     0.3
Cost                           0.6         0.7       0.5     0.9     0.2     1
Security                       1           0.9       0.7     0.8     0.6     0
Service level                  0.8         0.7       1       0.6     0.2     1
Image                          0.9         0.7       0.8     0.5     1       0.2
Weights
The next phase of the analysis ties these ratings together into an overall value
function by obtaining the relative weight of each criterion. In order to give the
decision maker a reference about what exactly is being compared, the relative range
between best and worst on each scale for each criterion should be explained. There
are many methods to determine these weights. In SMART, the process begins with
rank-ordering the seven criteria. A possible ranking for a specific decision maker
might be as given in Table 11.7.
Swing weighting could be used to identify weights. Here, the scoring was used to
reflect 1 as the best possible and 0 as the worst imaginable. Thus the relative rank
ordering reflects a common scale, and can be used directly in the order given. To
obtain relative criterion weights, the first step is to rank-order criteria by importance.
Two estimates of weights can be obtained. The first assigns the least important
criterion ten points, and assesses the relative importance of each of the other criteria
on that basis. This process (including rank-ordering and assigning relative values
based upon moving from worst measure to best measure based on most important
criterion) is demonstrated in Table 11.8.
The total of the assigned values is 268. One estimate of relative weights is
obtained by dividing each assigned value by 268. Before we do that, we obtain a
second estimate from the perspective of the least important criterion, which is
assigned a value of 10 as in Table 11.9.
Table 11.7 Worst and best measures by criteria
Criteria                                 Worst measure                            Best measure
Customer service                         0.3 (South Korean ASP)                   1 (Australian vendor)
Reliability, Availability, Scalability   0 (South Korean ASP)                     1 (Australian vendor customized)
Integration                              0.3 (Best-of-Breed & South Korean ASP)   1 (SAP)
Cost                                     0.2 (Best-of-breed)                      1 (ASP)
Security                                 0 (South Korean ASP)                     1 (Australian vendor)
Service level                            0.2 (Best-of-Breed)                      1 (SAP & ASP)
Image                                    0.2 (South Korean ASP)                   1 (Best-of-Breed)
Table 11.8 Weight estimation from perspective of most important criterion
Criteria Worst measure Best measure Assigned value
1-Customer service 0 1 100
2-Reliability, Availability, Scalability 0 1 80
3-Integration 0 1 50
4-Cost 0 1 20
5-Security 0 1 10
6-Service level 0 1 5
7-Image 0 1 3
These add up to 820. The two weight estimates are now as shown in Table 11.10.
The last criterion can be used to make sure that the sum of compromise weights
adds up to 1.00.
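The two normalizations and the compromise column of Table 11.10 can be recomputed as follows. The compromise weights are copied from the table itself, since the chapter rounds them judgmentally so that they sum to 1.00 rather than by a stated formula.

```python
# Recomputing Tables 11.8-11.10: each anchor's assigned values are
# normalized into a weight estimate; the compromise column is the
# chapter's own rounding from Table 11.10.
criteria = ["Customer service", "Reliability/Availability/Scalability",
            "Integration", "Cost", "Security", "Service level", "Image"]
from_best  = [100, 80, 50, 20, 10, 5, 3]      # Table 11.8 assigned values
from_worst = [300, 250, 150, 60, 30, 20, 10]  # Table 11.9 assigned values

est1 = [v / sum(from_best) for v in from_best]    # each value / 268
est2 = [v / sum(from_worst) for v in from_worst]  # each value / 820
compromise = [0.37, 0.30, 0.19, 0.07, 0.04, 0.02, 0.01]  # Table 11.10

for name, a, b, c in zip(criteria, est1, est2, compromise):
    print(f"{name}: {a:.3f} {b:.3f} -> {c}")
print("sum of compromise weights:", round(sum(compromise), 2))
```

Customer service comes out at 0.373 and 0.366 under the two anchors, bracketing the 0.37 compromise, and the compromise column sums to 1.00.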
Value Score
The next step of the SMART method is to obtain value scores for each alternative by
multiplying each score on each criterion for an alternative by that criterion’s weight,
and adding these products by alternative. Table 11.11 shows this calculation.
In this example, the ASP turned out to be quite unattractive, even though it had
the best cost and the best service level. The cost advantage was outweighed by this
option's poor ratings on expected customer service, reliability/availability/
scalability, and security, the first two of which were the highest-weighted criteria. The value
score indicates that the Australian vendor customized system would be best,
followed by the SAP system and the non-customized Australian vendor system.
The final ranking results reveal that adopting new technology such as an ASP
sometimes involves great potential risk. Multiple criteria analysis helps focus on
the tradeoffs among these potential risks.
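The whole value score calculation can be reproduced in a few lines, using the compromise weights of Table 11.10 and the scores as they appear in Table 11.11 (note that table uses 0.1 for SAP's service level, where Table 11.6 lists 1).

```python
# SMART value scores for the six ERP options, reproducing the
# totals of Table 11.11 from the Table 11.10 weights and the
# scores as printed in Table 11.11.
weights = [0.37, 0.30, 0.19, 0.07, 0.04, 0.02, 0.01]
options = ["Australian vendor", "Australian customized", "SAP",
           "Chinese vendor", "Best-of-breed", "South Korean ASP"]
scores = [  # one row per criterion, one column per option
    [0.6, 1.0, 0.9, 0.5, 0.7, 0.3],  # customer service
    [1.0, 0.8, 0.9, 0.5, 0.4, 0.0],  # reliability/availability/scalability
    [0.8, 0.9, 1.0, 0.6, 0.3, 0.3],  # integration
    [0.6, 0.7, 0.5, 0.9, 0.2, 1.0],  # cost
    [1.0, 0.9, 0.7, 0.8, 0.6, 0.0],  # security
    [0.8, 0.7, 0.1, 0.6, 0.2, 1.0],  # service level
    [0.9, 0.7, 0.8, 0.5, 1.0, 0.2],  # image
]
totals = {opt: round(sum(w * row[j] for w, row in zip(weights, scores)), 3)
          for j, opt in enumerate(options)}
for opt, v in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: {v}")
```

The customized Australian vendor system ranks first (0.887), followed by SAP (0.866), with the South Korean ASP last (0.260), matching the totals row of Table 11.11.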
Table 11.9 Weight estimation from perspective of least important criterion
Criteria Worst measure Best measure Assigned value
7-Image 0 1 10
6-Service level 0 1 20
5-Security 0 1 30
4-Cost 0 1 60
3-Integration 0 1 150
2-Reliability, Availability, Scalability 0 1 250
1-Customer service 0 1 300
Table 11.10 Criterion weight development
Criteria Based on best Based on worst Compromise
1-Customer service 100/268 0.373 300/820 0.366 0.37
2-RAS 80/268 0.299 250/820 0.305 0.30
3-Integration 50/268 0.187 150/820 0.183 0.19
4-Cost 20/268 0.075 60/820 0.073 0.07
5-Security 10/268 0.037 30/820 0.037 0.04
6-Service level 5/268 0.019 20/820 0.024 0.02
7-Image 3/268 0.011 10/820 0.012 0.01
Table 11.11 Value score calculation

Criteria          Wgt   Australian   Australian    SAP          Chinese      Best-of-B    South Korean
                        vendor       customized                 vendor                    ASP
Customer service  0.37  ×0.6=0.222   ×1.0=0.370    ×0.9=0.333   ×0.5=0.185   ×0.7=0.259   ×0.3=0.111
Reliability,
Availability,
Scalability       0.30  ×1.0=0.300   ×0.8=0.240    ×0.9=0.270   ×0.5=0.150   ×0.4=0.120   ×0=0.000
Integration       0.19  ×0.8=0.152   ×0.9=0.171    ×1.0=0.190   ×0.6=0.114   ×0.3=0.057   ×0.3=0.057
Cost              0.07  ×0.6=0.042   ×0.7=0.049    ×0.5=0.035   ×0.9=0.063   ×0.2=0.014   ×1.0=0.070
Security          0.04  ×1.0=0.040   ×0.9=0.036    ×0.7=0.028   ×0.8=0.032   ×0.6=0.024   ×0=0.000
Service level     0.02  ×0.8=0.016   ×0.7=0.014    ×0.1=0.002   ×0.6=0.012   ×0.2=0.004   ×1.0=0.020
Image             0.01  ×0.9=0.009   ×0.7=0.007    ×0.8=0.008   ×0.5=0.005   ×1.0=0.010   ×0.2=0.002
Totals                  0.781        0.887         0.866        0.561        0.488        0.260
Conclusion
Information systems security is critically important to organizations, private and
public. We need the Internet to contact the world, and have benefited personally and
economically from using the Web. But there have been many risks that have been
identified in the open Internet environment.
A number of frameworks have been proposed. Some appear in the form of
standards, such as those from the International Organization for Standardization. That set of
standards provides guidance in the macro-management of information systems
security. Frameworks can provide guidance in developing processes to attain IS
security, to include a Security Process Cycle and a list of best practices.
Supply chains are an especially important economic use of the Internet, and
involve a special set of risks. While there are many inherent risks in electronic
data interchange (needed to efficiently manage supply chains), methods have been
developed to make this a secure activity in well-managed supply chains.
One way that many organizations deal with information systems is to outsource,
hiring experts with strong software to do their information processing. This can be a
very cost-effective means, especially for those organizations who feel that their core
competencies do not include information technology (or at least all aspects of IT).
To more thoroughly evaluate information systems security, we suggest value
analysis, implemented through SMART. Value analysis provides a valuable means
of identifying factors of general importance. Each particular decision would be able
to filter this rather long list down to those issues of importance in a particular context.
Here we suggest value analysis as a means to focus on the impact of information
systems security factors on alternative forms of enterprise information systems. We
then demonstrated how the process, combined with SMART analysis, can be used to
identify the relative importance of factors, and provide a framework to more
thoroughly analyze tradeoffs among alternatives.
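The value analysis/SMART combination can be sketched as a weighted additive score: each alternative receives a single-attribute value score on each criterion, and the weighted sum ranks alternatives. The criteria, weights, alternatives, and scores below are hypothetical placeholders for illustration, not values from this chapter:

```python
# SMART (Simple Multi-Attribute Rating Technique) sketch:
# weighted sums of single-attribute value scores rank alternatives.
# All weights and scores here are hypothetical illustrations.

weights = {"security": 0.4, "cost": 0.35, "flexibility": 0.25}  # sum to 1

# Value scores on a 0-1 scale for each alternative form of
# enterprise information system (hypothetical).
scores = {
    "in-house":   {"security": 0.9, "cost": 0.4, "flexibility": 0.8},
    "outsourced": {"security": 0.6, "cost": 0.9, "flexibility": 0.5},
    "ASP/cloud":  {"security": 0.5, "cost": 0.8, "flexibility": 0.7},
}

def smart_value(alt_scores, weights):
    """Weighted additive value of one alternative."""
    return sum(weights[c] * alt_scores[c] for c in weights)

ranked = sorted(scores, key=lambda a: smart_value(scores[a], weights),
                reverse=True)
for alt in ranked:
    print(f"{alt}: {smart_value(scores[alt], weights):.3f}")
```

In practice the weights would come from stakeholder elicitation (e.g., swing weighting), and the scores from the filtered list of security factors described above.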
Notes
1. Webb, J., Ahmad, A., Maynard, S.B. and Shanks, G. (2014). A situation
awareness model for information security risk management. Computers &
Security 44, 1–15.
2. Tracy, R.P. (2007). IT security management and business process automation:
Challenges, approaches, and rewards, Information Systems Security
16, 114–122.
3. Porter, D. (2008). Business resilience, RMA Journal 90:6, 60–64.
4. Mijnhardt, F., Baars, T. and Spruit, M. (2016). Organizational characteristics
influencing SME information security maturity. Journal of Computer Informa-
tion Systems 56(2), 106–115.
5. Gikas, C. (2010). A general comparison of FISMA, HIPAA, ISO 27000 and
PCI-DSS standards. Information Security Journal: A Global Perspective 19(3),
132–141.
6. Da Veiga, A. and Eloff, J.H.P. (2007). An information security governance
framework, Information Systems Management 24, 361–372.
7. Tudor, J.K. (2000). Information Security Architecture: An Integrated Approach
to Security in an Organization. Boca Raton, FL: Auerbach.
8. McCarthy, M.P. and Campbell, S. (2001). Security Transformation. New York:
McGraw-Hill.
9. Tracy (2007), op cit.
10. Ibid.
11. VandePutte, D. and Verhelst, M. (2013). Cyber crime: Can a standard risk
analysis help in the challenges facing business continuity managers? Journal
of Business Continuity & Emergency Planning 7(2), 126–137.
12. Keller, S., Powell, A., Horstmann, B., Predmore, C. and Crawford, M. (2005).
Information security threats and practices in small businesses, Information
Systems Management 22, 7–19.
13. Faisal, M.N., Banwet, D.K. and Shankar, R. (2007). Information risks manage-
ment in supply chains: An assessment and mitigation framework, Journal of
Enterprise Information Management 20:6, 677–699.
14. Cigolini, R. and Rossi, T. (2006). A note on supply risk and inventory
outsourcing, Production Planning and Control 17:4, 424–437.
15. Smith, G.E., Watson, K.J., Baker, W.H. and Pokorski, J.A. II (2007). A critical
balance: Collaboration and security in the IT-enabled supply chain, Interna-
tional Journal of Production Research 45:11, 2595–2613.
16. Dhillon, G. and Torkzadeh, G. (2006). Value-focused assessment of information
system security in organizations, Information Systems Journal 16, 293–314.
17. Olson, D.L. (2004). Managerial Issues in Enterprise Resource Planning
Systems. New York: McGraw-Hill/Irwin.
18. Olson, D.L. (2007). Evaluation of ERP outsourcing. Computers & Operations
Research 34, 3715–3724.
19. Olson, D.L. (1996). Decision Aids for Selection Problems. New York: Springer.
20. Van Everdingen, Y., van Hellegersberg, J. and Waarts, E. (2000). ERP adoption
by European midsize companies. Communications of the ACM 43(4), 27–31.
21. Ekanayaka, Y., Currie, W.L. and Seltsikas, P. (2003). Evaluating application
service providers. Benchmarking: An International Journal 10(4), 343–354.
22. Olson, D.L. (2007). Evaluation of ERP outsourcing. Computers & Operations
Research 34, 3715–3724.
Enterprise Risk Management in Projects 12
Project management inherently involves high levels of risk, because projects by
definition are being done for the first time. There are a number of classical project
domain types, each with their own characteristics. For instance, construction projects
focus on inanimate objects, such as materials that are transformed into some purpose-
ful object. There are people involved, although as time passes, more and more work is
done by machinery, with diminishing human control. Thus construction projects are
among the more predictable project domains. Government projects often involve
construction, but extend beyond that to processes, such as the generation of nuclear
material, or more recently, the processing of nuclear wastes. Government projects
involve high levels of bureaucracy, and the one predictable aspect is that overlapping bureaucratic involvement of many agencies almost ensures long time frames with high levels of change. There is a very wide spectrum of governmental projects, which also includes civil works, the driver of most construction projects.
A third project domain is information system project management, focusing on the
development of software tools to do whatever humans want. This field, like construction and governmental projects, has been widely studied. It is found to involve higher levels of uncertainty than construction projects, because software development depends heavily on human effort, and getting computer code to work without bugs is an exacting activity.
Seyedhoseini et al.1 reviewed risk management processes within projects, using the
contexts of general project management, civil engineering, software engineering, and
public application. Those authors looked at sixteen risk management processes
published over the period 1990–2005, spread fairly evenly over their four context
areas, identifying the methodologies used. These contexts all involve basic project management,
but we argue that each context is quite different. Projects in civil engineering are usually easier to manage, as the uncertain elements involve natural science
(geology, weather). However, there are many different types of risk involved in any
project, to include political aspects2 and financial aspects.3 While these sources provide
more than enough uncertainty for project managers, there is a much more difficult task
facing software engineering project managers.4 We argue that this is because people are
more fundamental to the software engineering production process, in the form of
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_12
developing systems, programming them, and testing them, each activity involving high
degrees of uncertainty.5 Public application projects are also unique unto themselves,
with high levels of bureaucratic process that take very long periods of time as the wheels
of bureaucracy grind slowly and thoroughly: slowly enough that political support often shifts before a project is completed, and thoroughly enough that “not-in-my-backyard” opposition is almost inevitably uncovered prior to project completion.
Project Management Risk
The Project Management Institute views risk as general to projects, and addresses it through the Project Management Body of Knowledge (PMBOK),6 which develops standards, policies, and guidelines for project management. It focuses on tools and techniques related to project management skills and capabilities. Project management responsibilities include achieving cost, schedule, and performance objectives. Risk management is a major element of PMBOK, with major categories of:
• planning,
• risk identification,
• qualitative risk analysis,
• quantitative risk analysis,
• risk response planning, and
• risk monitoring and control.
The Project Risk Analysis and Management (PRAM) Guide in the United King-
dom is very similar in approach,7 and fits the description of a typical risk management
program from other sources. Each of these categories applies to all projects to some degree, although the level of uncertainty involved can make different variants of these tools appropriate. A number of recent papers have proposed risk assessment methodologies in
construction, based on an iterative process of risk identification, risk analysis and
evaluation, risk response development, and administration.8 The key is to keep
systematic records over time to record risk experiences, with systematic updating.9
Risk Management Planning
As with any process, inputs need to be gathered to organize development of a
cohesive plan. Things such as the project purpose and stakeholders need to be
identified, followed by identification of tasks to be accomplished. This applies to
every kind of project. These tasks are cohesive activities, usually accomplished by a
specific individual or group, and for each task estimation of duration and resources
required, as well as immediate predecessor activities is needed. This is the input
needed for critical path analysis, to be demonstrated in this chapter. That quantitative
approach deals with risk in the form of probability distributions for durations
(demonstrated in this chapter through simulation).
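The critical path inputs just described (tasks, durations, immediate predecessors) feed a forward pass: each task's earliest start is the largest earliest finish among its predecessors, and the project duration is the largest earliest finish overall. A minimal Python sketch, using the deterministic durations of the software installation example presented later in this chapter (Table 12.2):

```python
# Forward-pass critical path computation. Task data is the software
# installation example of Table 12.2, with deterministic durations:
# (duration in weeks, list of immediate predecessors).
tasks = {
    "A": (3, []),          # requirements analysis
    "B": (7, ["A"]),       # programming
    "C": (3, ["A"]),       # hardware acquisition
    "D": (12, ["A"]),      # user training
    "E": (5, ["B", "C"]),  # implementation
    "F": (1, ["E"]),       # testing
}

def earliest_finish(tasks):
    """Earliest finish time of each task; assumes the dict lists
    predecessors before successors (a topological order)."""
    finish = {}
    for name, (duration, preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0)
        finish[name] = start + duration
    return finish

finish = earliest_finish(tasks)
print("Project duration:", max(finish.values()), "weeks")  # 16 weeks
```

The 16-week result matches the deterministic critical path answer reported for this example; replacing the fixed durations with random draws turns the same forward pass into a simulation.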
But there are other risk aspects that need to be considered. It is important to
consider the organization’s attitude toward risk, and qualitatively identify things that
can go wrong. Risk attitude depends upon stakeholders. Identification of what might
go wrong and stakeholder preference for dealing with them can affect project
management team roles and responsibilities.
Risk management planning concludes with a risk management plan. This plan
should define methodologies for dealing with project risks. Such methodologies can
include training internal staff, outsourcing activities that other organizations are
better equipped to deal with, or insurance in various forms. Ultimately, every
organization has to decide which risks they are competent to manage internally
(core competencies), and which risks they should offload (at some expected cost).
Risk Identification
Once the risk management plan is developed, it can naturally lead to the next step,
risk identification. The process of risk identification identifies major potential
sources of risk for the specific project. The risk management plan identifies tasks
with their risks, as well as project team roles and responsibilities. Historical experi-
ence should provide guides (usually implemented in the form of checklists) to things
that can go wrong, as well as the organization’s ability to cope with them.
Specific types of risk can be viewed as arising in various ways. A classical view is
the triumvirate of quality, time, and budget. Software projects are often said to allow
any two of the three—you can get code functioning as intended on time, but it
usually involves more cost than expected; you can get functional code within budget
as long as you are patient; you can get code on time and within budget as long as you
don’t expect it to work as designed. This software engineering project view often
generalizes to other projects, but with some different tendencies. In construction,
there is less duration variance, although unexpected delays from geology or the
weather commonly create challenges for project managers. If weather delays are
encountered, the tradeoff is usually whether to wait for better weather, or to pay more
overtime or extra resources. If geological elements are creating difficulties, more
time and money is usually required. The functionality of the project is usually not
degraded. Governmental projects may involve emergency response, where time is
not something that can be sacrificed. The tradeoff is between quality of response and
cost. Usually emergency response teams do the best they can within available
resources, and public outcry almost always criticizes the insufficiency of the effort.
There are a number of techniques that can be used to identify risks. Some qualita-
tive approaches include interviews of experts or stakeholders, supplemented by
techniques such as brainstorming, the nominal group technique, the Delphi method,
or SWOT analysis (strengths, weaknesses, opportunities, and threats). Each of these methods is relatively easy to implement, and the quality of output depends on the
participation of a diverse group of stakeholders. Historical data can also be used if the
organization has experience with past projects similar to the current activity. This
works well if past experiences are well-documented and retrieved efficiently.
The output from risk identification is a more complete list of risks expected in the
project, as well as possible responses along with their expected costs. This results in
a set of responses that can be reviewed as events develop, allowing project managers
to more intelligently select appropriate responses. While success can never be
guaranteed, it is expected that organizational project performance will improve.
Qualitative Risk Analysis
Once project element risks are identified, their relative probabilities and consequences can be addressed. Initial estimations usually require reliance on subjective expert opinion. Historical records enable more precision, but an important caveat is that projects by definition almost always involve new situations and activities. Experts have to judge the applicability
of historical records to current challenges.
A qualitative risk analysis can be used to rank overall risks to the organization. A
priority system can be used to identify those risks that are most critical, and thus
require the greatest degree of managerial attention. In critical path analysis terms,
critical path activities would seem to call for the greatest managerial attention.
Behaviorally, humans tend to work hardest when the boss is watching. However,
the fallacy of this approach is that other activities that are not watched may also become critical if they are delayed too far beyond their expected durations.
Qualitative risk analysis can provide a valuable screening to cancel projects that
are just too risky for an organization. It also can affect project organization, with
more skilled personnel assigned to tasks that call for more careful management. It
also can be a guide to look for means to offload risk, either through subcontracting,
outsourcing, or insurance.
Quantitative Risk Analysis
We will present more formal quantitative tools in the following sections. Quantita-
tive analysis requires data. The critical path method calls for a specific duration
estimate, which we will demonstrate. Simulation is less restrictive, calling for
probability distributions. But this is often more difficult for humans to estimate,
and usually only works when there is some sort of historical data available with
which to estimate probability distributions.
Quantitative risk analysis, as will be demonstrated, can be used to estimate
probabilities of project completion times, as well as other items of interest that can
be included in what is essentially a spreadsheet model. These examples focus on
time. It is also possible to include cost probabilities.
Risk Response Planning
Once risk analysis (qualitative, quantitative, or both) is conducted, project
managers are hopefully in a more educated position to make plans and decisions
to respond to events. Risk response planning is this process of developing options
and reducing threats if possible. The severity of risks as well as cost, time, and
impact on project output (quality) should be considered.
A broad categorization of risk treatment strategies includes:
• Risk avoidance (adopting alternatives that do not include the risk at issue)
• Risk probability reduction (acting to reduce the probability of adverse event occurrence)
• Risk impact reduction (acting to reduce the severity of the risk)
• Risk transfer (outsourcing)
• Risk transfer (insurance)
• Add buffers to the project schedule
The process of project risk management is for project decision makers to trade off the costs of each risk treatment strategy in light of organizational goals. The key to
success is for organizations to adopt those risks internally where they have compe-
tency in dealing with the risk at issue, and to pay some price to offload those risks
outside of their core competencies.
The output of risk response planning can be a prioritized list of risks with
potential responses. It also can include assignment of specific individual
responsibilities for monitoring events and triggering planned responses.
Risk Monitoring and Control
This category of activity is implementation of all prior categories. Accounting is the
first line of measurement of cost activity. Operational project management personnel
also need to keep on top of time and quality performance as the project proceeds.
When adverse events are identified, corrective action (either adoption of contingency plans, or development of alternative actions) needs to be applied. In the long run, it is important to document projects, both in terms of specific time and cost experiences as well as qualitative case data, to enable the organization to do better on future projects.
Project Management Tools
A variety of risk management implementation tools have been applied. We referred
to PMBOK earlier, which is intended to provide a process model to generic risk
management projects. There are other process models, to include the Software
Engineering Institute’s capability maturity model (CMM). The five levels of the CMM are shown in Table 12.1.
The CMM level 1 covers software engineering organizations that do nothing. The
other four levels involve distinctly different process areas, leading to better control
over software development. It should be noted that attaining each level involves an
organizational cost in added bureaucracy, which requires a business decision on the
part of each organization. However, there is a great deal of research that indicates that
in the long run, software quality is improved dramatically by moving from any level to
the next higher level, and that overall development cost and development time are
improved. This is a clear example of risk management—paying the price of more
formality to yield reduced risk in terms of product output. Other process risk manage-
ment models in software engineering include Boehm’s spiral model,10 which provides
iterative risk analysis throughout the phases of the software development.
Bannerman11 categorized software project risk management into the three areas
of process models (reviewed above), analytical frameworks (based on some dimen-
sion such as risk source, the project life cycle, or model elements), and checklists.
Checklists are often found as the means to implement risk management, with
evidence of positive value.12 Checklists can be (and have been) applied in any
type of project. To work well, the project must repeat a domain, as each type of
project faces its own list of specific risks. The value of a checklist of course improves
with the depth of experience upon which it is based.
Simulation Models of Project Management Risk
We will focus on demonstrating quantitative tools to project risk management. We
will demonstrate how simulation can be used to evaluate the time aspect of project
management risk. The models are based on critical path, which can be modeled in
Excel, enabling the use of distributions through Crystal Ball simulation. We begin
with a basic software engineering project using a traditional waterfall model. Fig-
ure 12.1 gives a schematic of the activities and their precedence relationships.
Table 12.1 Capability maturity model for software engineering processes

Level 1, Initial (chaos): Survival
Level 2, Repeatable (individual control): Software configuration management; software quality assurance; software subcontract management; software project tracking & oversight; software project planning; requirements management
Level 3, Defined (institutionalized process): Peer reviews; intergroup coordination; software product engineering; integrated software management; training program; organization process definition; organization process focus
Level 4, Managed (process measured): Quality management; process measurement and analysis
Level 5, Optimizing (feedback for improvement): Process change management; technology innovation; defect prevention
Source: Olson (2004)
Table 12.2 gives the input information, along with distributions assumed for each
activity. These distributions should be based on historical data if possible, or on subjective expert judgment if historical data is not available.
Figure 12.2 gives the Microsoft Project output for this model.
The Excel model based on critical path analysis is given in Table 12.3.
Some modeling adjustments were needed. For all distributions, durations in weeks were rounded up in the Duration column of Table 12.3. For normal
distributions, a minimum of 0 was imposed. Note that the lognormal distribution
in Crystal Ball requires a shape parameter (constrained to be less than the mean).
Here the shape parameter is 5, the mean 7, and the standard deviation 1. Also note that the exponential distribution’s parameter is the inverse of the mean, so for E Implementation a mean of 5 weeks becomes a rate of 0.2. Figure 12.3 gives the simulation results (based on 1000 replications).
The average for this data was 18.62 weeks, compared to the critical path analysis
16 weeks (which was based on assumed duration certainty). There was a minimum
of 15 weeks (0.236 probability) and a maximum of 58 weeks. There was a 0.490
probability of exceeding 16 weeks.
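Readers without Crystal Ball can approximate this simulation with Python's standard library. The sketch below is not the book's exact model: the three-parameter Crystal Ball lognormal is replaced by a two-parameter lognormal moment-matched to mean 7 and standard deviation 1, so the summary statistics will differ somewhat from those reported above:

```python
import math
import random

random.seed(1)  # reproducible replications

def lognormal_from_moments(mean, sd):
    """Two-parameter lognormal matched to the given mean and standard
    deviation (approximates Crystal Ball's three-parameter form)."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return random.lognormvariate(mu, math.sqrt(sigma2))

def up(x):
    """Round a duration up to whole weeks, truncating negatives at 0."""
    return math.ceil(max(0.0, x))

def replicate():
    """One replication of the software installation project (Table 12.2)."""
    a = up(random.normalvariate(3, 0.3))   # requirements analysis
    b = up(lognormal_from_moments(7, 1))   # programming
    c = up(random.normalvariate(3, 0.5))   # hardware acquisition
    d = 12                                 # user training (constant)
    e = up(random.expovariate(1 / 5))      # implementation, mean 5 -> rate 0.2
    f = up(random.expovariate(1.0))        # testing, mean 1 -> rate 1
    # Completion: A first, then the longer of user training (D)
    # or programming/hardware followed by implementation and testing.
    return a + max(d, max(b, c) + e + f)

times = [replicate() for _ in range(1000)]
print("mean completion:", sum(times) / len(times), "weeks")
print("P(> 16 weeks):", sum(t > 16 for t in times) / len(times))
```

As in the Crystal Ball run, the simulated mean exceeds the deterministic 16-week critical path answer because the rounded-up, right-skewed durations only add delay.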
There are other simulation systems used for project management. Process simu-
lation allows contingent sequences of activities, as used in the Project Assessment by
Simulation Technique (PAST).13
Fig. 12.1 Network for software installation example
Table 12.2 Software installation input data
Activity Duration Distribution Predecessors
A Requirements analysis 3 weeks Normal (3,0.3) None
B Programming 7 weeks Lognormal (7,1) A
C Hardware acquisition 3 weeks Normal (3,0.5) A
D User training 12 weeks Constant A
E Implementation 5 weeks Exponential (5) B,C
F Testing 1 week Exponential (1) E
Governmental Project
We assume a very long term project to dispose of nuclear waste, with activities,
durations and predecessor relationships given in Table 12.4.
Table 12.5 gives the Excel (Crystal Ball) model for this scheduling project.
Normal distributions were used for project manager controllable activities, and lognormal distributions were used for activities beyond project manager control (see Figs. 12.4, 12.5, and 12.6).
Fig. 12.2 Microsoft Project model output
Table 12.3 Crystal Ball model of software installation project. © Oracle. Used with permission

Activity | Distribution | Duration | Start | Finish
A Requirements analysis | =CB.Normal(3,0.3) | =INT(MAX(0,B2)+0.99) | =0 | =D2+C2
B Programming | =CB.Lognormal(5,7,1) | =INT(B3+0.99) | =E2 | =D3+C3
C Hardware acquisition | =CB.Normal(3,0.5) | =INT(MAX(0,B4)+0.99) | =E2 | =D4+C4
D User training | 12 | =B5 | =E2 | =D5+C5
E Implementation | =CB.Exponential(0.2) | =INT(B6+0.99) | =MAX(E3,E4) | =D6+C6
F Testing | =CB.Exponential(1) | =INT(B7+0.99) | =E6 | =D7+C7
Completion: =MAX(E2:E7)
Minimum completion time based on 1000 replications was 280 months, and
maximum 391 months. The mean was 332 months, with a standard deviation of
16 months. The distribution of completion times appears close to normal. Table 12.6 gives the probability that completion time exceeds given values, in 10-month intervals:
Fig. 12.3 Simulated software installation completion time. © Oracle. Used with permission
Table 12.4 Nuclear waste disposal project (distributions as implemented in Table 12.5)

Activity | Duration | Distribution | Predecessors
A Decision staffed | 60 weeks | Normal (60,5) | None
B EIS | 70 weeks | Lognormal (70,10) | A
C Licensing study | 60 weeks | Lognormal (60,10) | A
D NRC | 30 weeks | Lognormal (30,5) | A
E Conceptual design | 36 weeks | Normal (36,6) | A
F Regulation compliance | 70 weeks | Normal (70,10) | E
G Site selection | 40 weeks | Normal (40,5) | A
H Construction permit | 0 | Constant | D,F,G
I Construction | 100 weeks | Lognormal (100,10) | H
J Procurement | 70 weeks | Normal (70,5) | F SS, I SS + 5 weeks
K Install equipment | 72 weeks | Normal (72,5) | I
L Operating permit | 0 | Constant | K
M Cold start test | 16 weeks | Lognormal (16,6) | K
N Readiness test | 36 weeks | Lognormal (36,6) | M
O Hot test | 16 weeks | Lognormal (16,6) | N
P Begin conversion | 0 | Constant | L,O
Conclusions
We have argued that there are a number of distinct project types, to include more
predictable projects such as those encountered in civil engineering, highly unpre-
dictable projects such as encountered in software engineering, and projects involv-
ing massive undertakings or emergency response typically faced by government
bureaucracies. There are many other types of projects, of course. For instance, we
did not discuss military procurement projects, which are extremely important unto
themselves. This type of project is a specific kind of governmental project, but here
Table 12.5 Model for governmental project (spreadsheet columns A–E: Activity, Duration, Predecessors, Start, End)

Row | Activity | Duration | Predecessors | Start | End
2 | A Decision staffed | =INT(CB.Normal(60,5)) | None | 0 | =D2+B2
3 | B EIS | =INT(CB.Lognormal(70,10)) | A | =E2 | =D3+B3
4 | C Licensing study | =INT(CB.Lognormal(60,10)) | A | =E2 | =D4+B4
5 | D NRC | =INT(CB.Lognormal(30,5)) | A | =E2 | =D5+B5
6 | E Conceptual design | =INT(CB.Normal(36,6)) | A | =E2 | =D6+B6
7 | F Regulation compliance | =INT(CB.Normal(70,10)) | E | =E6 | =D7+B7
8 | G Site selection | =INT(CB.Normal(40,5)) | A | =E2 | =D8+B8
9 | H Construction permit | =0 | D,F,G | =MAX(D5,D7,D8) | =D9+B9
10 | I Construction | =INT(CB.Lognormal(100,10)) | H | =D9 | =D10+B10
11 | J Procurement | =INT(CB.Normal(70,5)) | F SS, I SS + 5 weeks | =MAX(D7,D10+5) | =D11+B11
12 | K Install equipment | =INT(CB.Normal(72,5)) | I | =E10 | =D12+B12
13 | L Operating permit | =0 | K | =E12 | =D13+B13
14 | M Cold start test | =INT(CB.Lognormal(16,6)) | K | =E12 | =D14+B14
15 | N Readiness test | =INT(CB.Lognormal(36,6)) | M | =E14 | =D15+B15
16 | O Hot test | =INT(CB.Lognormal(16,6)) | N | =E15 | =D16+B16
we focused more on emergency management (which military operations are closer to).
We also presented a framework for project risk analysis, based on PMBOK. This
included a number of qualitative elements which can be extremely valuable in
Fig. 12.4 Network for governmental project
Fig. 12.5 Gantt chart for governmental project
project management. But they are less concrete, and therefore we found it easier to
focus on quantitative tools. We want to point out that qualitative tools are also very
important.
The quantitative tools presented start with the deterministic critical path method, which assumes no risk in duration or in resource availability. We present simulation
as a very useful means to quantify project duration risk. Simulation allows any kind
of assumption, and could also incorporate some aspects of resource availability risk
through spreadsheet models.
While the ability to assess the relative probability of risk is valuable, the element
of subjectivity should always be kept in mind. A simulation model can assign a
probability of any degree of precision imaginable, but such probabilities are only as
Fig. 12.6 Histogram of governmental project completion time in months. © Oracle. Used with permission
Table 12.6 Probability that completion time exceeds given months

Months | Probability
310 | 0.912
320 | 0.759
330 | 0.550
340 | 0.329
350 | 0.153
360 | 0.057
370 | 0.011
380 | 0.005
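Because the simulated completion-time distribution appears close to normal (mean 332 months, standard deviation 16 months), these exceedance probabilities can be approximated with the normal tail function. This is a rough consistency check under the normality assumption, not a re-run of the simulation:

```python
import math

MEAN, SD = 332.0, 16.0  # simulated completion-time moments (months)

def prob_exceed(t, mean=MEAN, sd=SD):
    """P(completion time > t) under a normal approximation."""
    return 0.5 * math.erfc((t - mean) / (sd * math.sqrt(2.0)))

for months in range(310, 381, 10):
    print(months, round(prob_exceed(months), 3))
```

The approximation tracks the simulated table closely near the mean (e.g., about 0.55 at 330 months) and drifts slightly in the tails, where the simulated distribution's mild skew matters most.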
accurate as the model inputs. These probabilities should be viewed as subject to a
great deal of error. However, they provide project managers with initial tools for
identification of the degree of risk associated with various project tasks.
Notes
1. Seyedhoseini, S.M., Noori S. and AliHatefi, M. 2008, Chapter 6: Two Polar
Concept of Project Risk Management, in D. L. Olson and D. Wu, eds., New
Frontiers in Enterprise Risk Management. Berlin: Springer, 77–106.
2. Skorupka, D. (2008). Identification and initial risk assessment of construction
projects in Poland, Journal of Management in Engineering 24:3, 120–127.
3. Kong, D., Tiong, R.L.K., Cheah, C.Y.J., Permana, A. and Ehrlich, M. (2008).
Assessment of credit risk in project finance, Journal of Construction Engineer-
ing and Management 134:11, 876–884.
4. Chua, A.Y.K. (2009). Exhuming IT projects from their graves: An analysis of
eight failure cases and their risk factors, Journal of Computer Information
Systems 49:3, 31–39.
5. Olson, D.L. (2004), Introduction to Information Systems Project Management,
2nd ed. NY: McGraw-Hill/Irwin.
6. Project Management Institute (2013), A Guide to the Project Management Body
of Knowledge, 5th ed.Newtown Square, PA: Project Management Institute.
7. Chapman, C. (2006). International Journal of Project Management 24(4),
303–313.
8. Schatteman, D., Herroelen, W., Van de Vonder, S. and Boone, A. (2008).
Methodology for integrated risk management and proactive scheduling of
construction projects, Journal of Construction Engineering and Management
134:11, 885–893.
9. Choi, H.-H. and Mahadevan, S. (2008). Construction project risk assessment
using existing database and project-specific information, Journal of Construc-
tion Engineering and Management 134:11, 894–903.
10. Boehm, B. (1988). Software Risk Management. Washington, DC: IEEE Com-
puter Society Press.
11. Bannerman, P.L. (2008). Risk and risk management in software projects: A
reassessment, The Journal of Systems and Software 81:12, 2118–2133.
12. Keil, M., Li, L., Mathiassen, L. and Zheng, G. (2008). The influence of
checklists and roles on software practitioner risk perception and decision-
making, The Journal of Systems and Software 81:6, 908–919.
13. Cates, G.R. and Mollaghasemi, M. (2007). The Project Assessment by Simula-
tion Technique, Engineering Management Journal 19:4, 3–10.
Natural Disaster Risk Management 13
We have considered business operational risks in the contexts of supply chains,
information systems, and project management. By definition, natural disasters are
surprises, and cause inconvenience and damage. Some things we do to ourselves,
such as revolutions, terrorist attacks, and wars. Some things nature does to us, to
include hurricanes, tornados, volcanic eruptions, and tsunamis. Some disasters are
caused by combinations of human and natural causes. We dam rivers to control
floods, to irrigate, to generate power, and for recreation, but dams have burst
causing immense flooding. We have developed low-pollution, low-cost (at the
time) electricity through nuclear power. Yet with plant failure, new protective
systems have made the price very high, and we have not figured out how to
acceptably dispose of the waste. While natural disasters come as surprises, we can
be prepared. This chapter addresses natural domain risks in the form of disaster
management.
Emergency Management
Natural disaster management is the domain of government, fulfilling its responsibil-
ity to protect the general welfare. Local, State and Federal agencies in the United
States are responsible for responding to natural and man-made disasters. This is
coordinated at the Federal level through the Federal Emergency Management
Agency (FEMA). While FEMA has done much good, it is almost inevitable that
more is expected of them than they deliver in some cases, such as hurricane
recovery. In 2005 Hurricane Katrina provided one of the greatest tests of the emergency management system in the U.S.:
1. Communications outages disrupted the ability to locate people
2. Reliable transportation was disrupted or at least restricted
3. Electrical power was disrupted, cutting off computers
4. Multiple facilities were destroyed or damaged
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_13
5. Some bank branches and ATMs were flooded for weeks
6. Mail was disrupted for months.
Disasters are abrupt and calamitous events causing great damage, loss of lives,
and destruction. Emergency management is accomplished in every country to some
degree. Disasters occur throughout the world, in every form of natural, man-made,
and combination of disaster. Disasters by definition are unexpected, and tax the
ability of governments and other agencies to cope. A number of intelligence cycles
have been promulgated, but all are based on the idea of:
1. Identification of what is not known;
2. Collection—gathering information related to what is not known;
3. Production—answering management questions;
4. Dissemination—getting the answers to the right people.1
Information technology has been developing at a very rapid pace, creating a
dynamic of its own. Many technical systems have been designed to gather, process,
distribute, and analyze information in emergencies. These systems include
communications and data. Tools to aid emergency planners communicate include
telephones, whiteboards, and the Internet. Tools to aid in dealing with data include
database systems (for efficient data organization, storage, and retrieval), data mining
tools (to explore large databases), models to deal with specific problems, and
combinations of these resources into decision support systems to assist humans in
reaching decisions quickly or expert systems to make decisions rapidly based on
human expertise. The role of information technology in disaster management
includes the following functions:2
• Information Extraction—gathering data from a variety of sources and storing
them in efficient databases.
• Information Retrieval—efficiently searching and locating key information dur-
ing crises.
• Information Filtering—focusing on pertinent data in a responsive manner.
• Data Mining—extracting patterns and trends.
• Decision Support—analyzing data through models to support better decisions.
Emergency Management Support Systems
A number of software products have been marketed to support emergency manage-
ment. These are often various forms of a decision support system. The Department
of Homeland Security in the U.S. developed a National Incident Management
System. A similar system used in Europe is the Global Emergency Management
Information Network Initiative.3 While many systems are available, there are many
challenges due to unreliable inputs at one end of the spectrum, and overwhelmingly
massive data content at the other extreme.
180 13 Natural Disaster Risk Management
Systems in place for emergency management include the U.S. National Disaster
Medical System (NDMS), providing virtual centers designed as a focal point for
information processing, response planning, and inter-agency coordination. NDMS is
a federally coordinated system augmenting disaster medical care. Its purpose is to
supplement an integrated National medical response capacity to assist State and local
authorities in dealing with medical impacts of major peacetime disasters, as well as
supporting military and Veterans Affairs medical systems in casualty care. EMSS
has also been implemented in Europe.4 Intelligent emergency management systems
are appearing as well.5
An example decision support system directed at aiding emergency response is the
Critical Infrastructure Protection Decision Support System (CIPDSS).6 CIPDSS was
developed by Los Alamos, Sandia, and Argonne National Laboratories sponsored by
the Department of Homeland Security in the U.S. The system includes a range of
applications to organize and present information, as well as system dynamics
simulation modeling of critical infrastructure sectors, such as water, public health,
emergency services, telecom, energy, and transportation. Primary goals are:
1. To develop, implement, and evolve a rational approach to prioritize CIP strategies
and resource allocations through modeling, simulation, and analyses to assess
vulnerabilities, consequences, and risks;
2. To propose and evaluate protection, mitigation, response, and recovery strategies
and options;
3. To provide real-time support to decision makers during crises and emergencies.
A key focus is to aid decision makers by enabling them to understand the
consequences of policy and investment options prior to action. Decision support
systems provide tools to examine trade-offs between the benefits of risk reduction
and the costs of protection action. Factors considered include threat information,
vulnerability assessments, and disruptive consequences. Modeling includes system
dynamics, simulation, and other forms of risk analysis. The system also includes
multi-attribute utility functions based upon interviews with infrastructure decision
makers. CIPDSS thus serves as an example of what can be done in the way of an
emergency management support system.
Systems have been developed for forecasting earthquake impact7 or the time and size
of bioterrorism attacks. This demonstrates the need for DSS support not only during
emergencies, but also in the planning stage.
Example Disaster Management System
Sahana is a foundation offering a suite of free, open-source, web-based disaster
management system software for disaster response.8 The primary aim of the system
is to alleviate human suffering and help save lives through efficient use of informa-
tion technology. Sahana Eden is a humanitarian platform customizable to integrate
with local systems for planning or coping with crises. Vesuvius is a disaster
preparedness and response software providing support to family reunification as
well as hospital triage. Mayon provides emergency planning agencies with tools to
plan preparedness, response, recovery, and mitigation. Sahana can bring together
government, emergency management agencies, non-government organizations,
volunteers, and victims in disaster response. It is intended to empower victims, responders,
and volunteers to more efficiently utilize their efforts, while protecting victim data
privacy.
Sahana is a free open-source software system initially built by Sri Lankan
volunteers after the 2004 Asian tsunami.9 It has the following main applications:
1. Missing persons registry—bulletin board of missing and found persons, and
information of who is seeking individuals.
2. Organization registry—a tool to coordinate and balance distribution of relief
organizations to affected areas.
3. Request/Pledge management system—log of incoming requests for support,
tracking relief provided and linking donors to relief requirements.
4. Shelter registry—tool to track location and numbers of victims by temporary
location.
5. Volunteer coordination—tool to coordinate contact information, skills, and
assignments of volunteers and responders.
6. Inventory management—tool to track location, quantities, and expiration dates of
supplies.
7. Situation awareness—a geographic information system showing current status.
Sahana has been successfully deployed in many disasters including after the
tsunami as shown in Table 13.1:
The Sahana system uses plug-in architecture, which allows third party groups
easy access to system components, while simplifying overall integration. The system
does not need to be installed, but can be run as a portable application from a USB
flash drive. The system can be translated into any language.
Granular security is provided through an access control system. The user interface
can be viewed through a number of devices, including PDAs.
Disaster Management Criteria
We review criteria sets used by two disaster management applications involving
multiple criteria. The first involved the engineering decision of protecting buildings
from earthquake damage.11 This of course is a more technical decision than what we
described in the banking industry, but the point is that risks appear in almost every
walk of life. Here the decision was to design buildings to be as secure as possible.
Earthquakes are common. Building codes in the past have been insufficient. Build-
ing design retrofit alternatives have been developed to modify performance in terms
of stiffness, strength, and ductility. Criteria that could be applied to seismic risk
management are given in Table 13.2:
Their model would enable building designers to score alternatives on each of
these eight risks and to express decision maker preferences.
The US Water Resource Council13 has a comprehensive set of 20 performance
criteria for infrastructure policies and investments given in Table 13.3:
A generic multiple criteria model was developed within this list14 with the criteria
of:
• Protection from coastal inundation
• Protection of public infrastructure systems
• Protection against storm surges and flooding
• Protection of wetlands and environment
• Protection of recreational activities
This model was to be used for specific coastal protection evaluations, with
normal options of building different types of revetments, seawalls, or nourishing
beaches or dunes. The evaluation they provided included assessment under different
scenarios.
Table 13.1 Sahana deployments10
Location Year Event Details
Sri Lanka 2005 Tsunami Deployed for the Government of Sri Lanka
Pakistan 2005 Earthquake Deployed for the Government of Pakistan
The Philippines 2006 Mudslide Southern Leyte
Indonesia 2006 Earthquake Yogyakarta
New York City 2007–2008 Hurricanes Coastal storm planning
Peru 2007 Earthquake Ica
China 2008 Earthquake Chengdu, Sichuan province
Myanmar 2008 Cyclone Monsoon disaster planning
Haiti 2010 Earthquake Disaster planning
Multiple Criteria Analysis
Once criteria pertinent to the specific decision are identified, analysis can take the
form of selecting a preferred choice from a finite set of alternatives, making it a
selection decision. (Finite alternatives could also be rank-ordered by preference.)
Multiple objective programming is the application of optimization over an infinite
set of alternatives considering multiple objectives, a mathematical programming
application (see the chapter on DEA for one type). Chapter 3 presented the SMART
multiple criteria method, which fits this case as well.
We can use a petroleum supply chain case to demonstrate the SMART proce-
dure.15 We begin with three alternatives relative to risk management in the petro-
leum supply chain:
1. Accept and control risk
2. Terminate operations
3. Transfer or share risk
The hierarchy of criteria could be as follows, to minimize risks:
Table 13.3 US Water Resource Council criteria
• Provide protection for and reduce displacement of residents
• Ensure long-term economic productivity
• Provide urban and agricultural flood damage protection
• Provide protection for and reduce displacement of businesses and farms
• Ensure employment/income distribution and equality
• Protect wetlands, fish, and wildlife habitats
• Protect commercial fishing and water transportation
• Provide agricultural drainage, irrigation, and erosion control
• Ensure power production, transmission, and efficiency
• Provide floodplain protection
• Protect recreational activities
• Provide drought protection
• Protect against natural disasters
• Protect endangered and threatened species and habitats
• Protect air quality
• Protect prime and unique farmland
• Protect historic and cultural values
• Protect wild and scenic rivers
Table 13.2 Seismic risk management criteria12
Economic/Social criteria Technical criteria
Installation cost Skilled labor required
Maintenance cost Need for foundation intervention
Disruption of use Significance of risk damage
Functional capability Significance of limitations
• Exploration/production risk
• Environmental and regulatory compliance risk
• Transportation risk
• Availability of oil resource risk
• Geopolitical risk
• Reputational risk
We can create a decision matrix that expresses the relative performance of each
alternative on each criterion through scores.
Scores
Scores in SMART can be used to convert performances (subjective or objective) to a
zero-one scale, where zero represents the worst acceptable performance level in the
mind of the decision maker, and one represents the ideal, or possibly the best
performance desired. Thus a higher score indicates lower risk. Note that these ratings
are subjective, a function of individual preference. Scores for the criteria could be as
in Table 13.4.
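The worst-to-best conversion behind such scores can be sketched as a simple linear rescaling. The raw performance figures below are hypothetical, chosen only to reproduce one score from Table 13.4; in practice decision makers may also assign scores directly or use a nonlinear value function.

```python
def rescale(value, worst, best):
    """Map a raw performance value onto the 0-1 SMART score scale,
    where the decision maker's worst acceptable level maps to 0 and
    the ideal level maps to 1."""
    return (value - worst) / (best - worst)

# Hypothetical raw measure: expected recoverable oil in millions of
# barrels, with 0 the worst acceptable level and 50 the ideal.
accept_score = rescale(45, worst=0, best=50)  # 0.9, as for Accept on
                                              # oil availability
```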
Table 13.4 indicates that the benefits of accepting the risk involved in this project
would have very good potential to obtain sufficient oil. If the project was to be
abandoned (the “Terminate” alternative), oil availability would be quite low. Hedg-
ing in some manner (the “Transfer” alternative) such as subcontracting, would
reduce oil availability significantly, although this is expected to be better than
abandoning the project. With respect to environment/regulatory factors, the greatest
risk reduction would be to not adopt the project. Transferring risk through
subcontracting would also be much more effective than taking on the project
alone. Transportation risk could be avoided entirely by abandoning the project.
Much of this risk could be transferred. The firm has the ability to cope with some
transportation issues, but the score is lowest for the option of Accept and Control
Transportation Risk. Accessing oil would be highest for adopting the project, with
slight advantage to the Accept option as it provides more control than the Transfer
option. Terminating the project would require obtaining oil on the market at higher
cost. Geopolitical risk would be eliminated by terminating the project. The other two
options are rated equal on this dimension. Risk to reputation could also be
eliminated by terminating the project. The firm would have more control over
Table 13.4 Relative scores by criteria for each option in example
Criteria Accept Terminate Transfer
Exploration/production 0.8 0.2 0.5
Environment/regulatory 0.1 1.0 0.6
Transportation 0.2 1.0 0.9
Oil availability 0.9 0.2 0.6
Geopolitical 0.3 1.0 0.4
Reputation 0.2 1.0 0.5
risk response if they retained complete control over the project than if they trans-
ferred through insurance or subcontract.
The score matrix given in Table 13.4 provides a tabular expression of relative
value of each of the alternatives over each of the selected criteria. It can be used to
identify tradeoffs among these alternatives.
Weights
The next phase of the analysis ties these ratings together into an overall value
function by obtaining the relative weight of each criterion. In order to give the
decision maker a reference about what exactly is being compared, the relative range
between best and worst on each scale for each criterion should be explained. There
are many methods to determine these weights. In SMART, the process begins with
rank-ordering the six criteria. A possible ranking for a specific decision maker
might be as given in Table 13.5.
Swing weighting could be used to identify weights.16 Here, the scoring was used
to reflect 1 as the best possible and 0 as the worst imaginable. Thus the relative rank
ordering reflects a common scale, and can be used directly in the order given. To
obtain relative criterion weights, the first step is to rank-order criteria by importance,
indicated by the order of Criteria in Table 13.6. Estimates of weights can be
obtained by assigning 100 points to moving from the worst measure to the best
measure on the most important criterion (here oil availability). Then each of the
other criteria are assessed in a similar comparative manner in order, assuring that
more important criteria get at least as much weight as other criteria down the
Table 13.5 Worst and best measures by criteria
Criteria Worst measure Best measure
Oil availability Oil embargo Successful project—in-house
Exploration/production No project Successful project—in-house
Environment/regulatory Oil spills No project
Reputation Oil spills No project
Transportation Oil spills No project
Geopolitical War in drilling area No project
Table 13.6 Weight estimation from perspective of most important criterion
Criteria Assigned value Weight
1 Oil availability 100 0.282
2 Exploration/production 90 0.254
3 Environment/regulatory 70 0.197
4 Reputation 60 0.169
5 Transportation 20 0.056
6 Geopolitical 15 0.042
Total 355 1.000
ordinal list. Here we might assign moving from the worst measure to the best on
Exploration/production 90 points, compared to Oil availability's 100. For purposes
of demonstration, assume the assigned values given in Table 13.6:
The total of the assigned values is 355. An estimate of relative weights is obtained
by dividing each assigned value by 355.
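The swing-weight normalization just described can be sketched as follows, using the assigned values from Table 13.6:

```python
# Swing-weight normalization: assigned values from Table 13.6 are
# divided by their total (355) to give relative criterion weights.
assigned = {
    "Oil availability": 100,
    "Exploration/production": 90,
    "Environment/regulatory": 70,
    "Reputation": 60,
    "Transportation": 20,
    "Geopolitical": 15,
}
total = sum(assigned.values())                        # 355
weights = {c: v / total for c, v in assigned.items()}
# e.g. round(weights["Oil availability"], 3) gives 0.282
```

The weights sum to one by construction, which keeps the later weighted sums on the same 0-1 scale as the scores.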
Value score
The next step of the SMART method is to obtain value scores for each alternative by
multiplying each score on each criterion for an alternative by that criterion’s weight,
and adding these products by alternative. Table 13.7 shows this calculation:
In this example, the Terminate option was ranked first, followed by the option of
transferring (outsourcing), followed by accepting risk. However, these are all quite
close, implying that the decision maker could think more in terms of other
objectives, possibly seek more input, or even consider other options.
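The full SMART aggregation—each criterion score multiplied by its weight and summed per alternative—can be sketched with the values from Tables 13.4 and 13.6. Rounding in the printed tables can shift totals by a few thousandths:

```python
# SMART value scores: weight times score, summed per alternative.
# Weights follow Table 13.6, scores follow Table 13.4 (short keys
# stand in for the full criterion names).
weights = {"oil": 0.282, "expl": 0.254, "env": 0.197,
           "rep": 0.169, "trans": 0.056, "geo": 0.042}
scores = {
    "Accept":    {"oil": 0.9, "expl": 0.8, "env": 0.1,
                  "rep": 0.2, "trans": 0.2, "geo": 0.3},
    "Terminate": {"oil": 0.2, "expl": 0.2, "env": 1.0,
                  "rep": 1.0, "trans": 1.0, "geo": 1.0},
    "Transfer":  {"oil": 0.6, "expl": 0.5, "env": 0.6,
                  "rep": 0.5, "trans": 0.9, "geo": 0.4},
}
value = {alt: sum(weights[c] * s[c] for c in weights)
         for alt, s in scores.items()}
best = max(value, key=value.get)  # Terminate, by a narrow margin
```

The narrow spread between the three totals is what motivates the closing advice to seek further input before committing.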
Natural Disaster and Financial Risk Management
Risk is the probability of an adverse event occurring with the potential to result in
loss to exposed elements. Natural hazards are meteorological or geological
phenomena that, due to their location, frequency, and severity, have the potential to
affect economic activities. A natural event that results in human and economic losses
is an environmental problem to which development in the region contributes. Natural
catastrophe risk is generally characterized by low frequency and high severity,
though the level of severity varies quite significantly. The extent of the development
contributes to the financial vulnerability to the catastrophic effects of the natural
disaster. By the same token, the vulnerability of a firm to hazard events depends
on the size of its investment and revenue exposures in the region. Natural
hazards can be characterized by location, timing, magnitude and duration. The
principal causes of vulnerability include imprudent investments and ineffective
public policies.
Table 13.7 Value score calculations
Criteria Weight Accept Terminate Transfer
1 Oil availability 0.282 ×0.9 = 0.254 ×0.2 = 0.056 ×0.6 = 0.169
2 Exploration/production 0.254 ×0.8 = 0.203 ×0.2 = 0.051 ×0.5 = 0.127
3 Environment/regulatory 0.197 ×0.1 = 0.020 ×1.0 = 0.197 ×0.6 = 0.118
4 Reputation 0.169 ×0.2 = 0.034 ×1.0 = 0.169 ×0.5 = 0.084
5 Transportation 0.056 ×0.2 = 0.011 ×1.0 = 0.056 ×0.9 = 0.051
6 Geopolitical 0.042 ×0.3 = 0.013 ×1.0 = 0.042 ×0.4 = 0.017
Totals 0.534 0.571 0.566
Natural disaster losses are the result of mismanaged and unmanaged disaster risks
that reflect current conditions and historical factors.17 Disaster risk exposure comes
from the interaction between a natural hazard (the external risk factor) and
vulnerability (the internal risk factor).18 Proactive disaster risk management requires
a comprehensive process built on a pre-disaster evaluation involving three broad
steps:
• identification of the potential natural hazards and evaluation of investment at risk;
• risk reduction measures to address the vulnerability, and
• risk transfer to minimise financial losses.
Integrating disaster risk management into investment strategy is necessary to
protect corporate value and reduce risk in the future. These should
be supported by effective governance (e.g. policies, planning, etc.), supplemented by
effective information and knowledge sharing mechanisms among different
stakeholders.
First, risk identification involves creating an awareness and quantification of risk
through understanding vulnerabilities and exposure patterns. The process also
includes analysis of the risk elements and the underlying causes of the exposure.
This knowledge is essential for development of strategies and measures for risk
reduction. For example, firms operating in an earthquake-prone zone would need to
keep abreast of information on real-time seismic patterns complemented with
forecasts on expected hazards. This is complemented with the necessary exposure
analysis using mapping, modelling and hazard analysis to assess industry and
corporate risk. The evaluations should include calculating a probability profile of
occurrence and impacts of hazard events in terms of their characteristics and
factoring these elements into the firm’s decision-making process. Thus, risk identifi-
cation and analysis provide for informed decision-making on business investment
that will effectively reduce the impacts of potential disaster events and prioritization
of risk management efforts.
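One way to factor a probability profile of hazard occurrence and impact into the decision-making process is a simple expected annual loss calculation. The hazard events and figures below are purely illustrative, not from the text:

```python
# Expected annual loss from a hypothetical hazard probability
# profile: each entry is (annual occurrence probability, estimated
# loss if the event occurs).
hazard_profile = [
    (0.10, 2_000_000),    # moderate earthquake
    (0.01, 50_000_000),   # severe earthquake
    (0.20, 500_000),      # flood
]
expected_annual_loss = sum(p * loss for p, loss in hazard_profile)
# expected_annual_loss is roughly 800,000 per year
```

Such a figure lets mitigation spending and insurance premiums be compared against the exposure they remove, which is the prioritization step the text describes.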
Second, risk reduction involves measures to avoid, mitigate or prepare against
the destructive and disruptive consequences of hazards to minimize the potential
financial impact. The mitigation measures are actions aimed at reducing the overall
risk exposure associated with disasters. This requires an ex-ante business strategy
that combines mitigation investments and pre-established financial protection. In
this respect, firms can prevent natural disaster losses by avoiding investment in
disaster prone regions (i.e. prevention investments) or they may take actions to
locate and structure its business operations to avoid heavy investments in disaster
prone regions. Such actions require short- and long-term strategic business planning
and disaster recovery mechanisms, such as those pertaining to supply chain man-
agement. Risk mitigation planning is aimed at taking into account the economic
impacts of disasters such as earthquakes. The access to relevant information is
important to better-informed decision making and planning. For example, access
to hazard information such as frequency, magnitude and trends are required for
disaster risk mitigation for corporate investment decisions.
Finally, risk transfer mechanisms enable the distribution of the risks associated
with natural hazard events such as floods and earthquakes to reduce financial and
economic impacts. This might not fully eliminate the firm’s financial risk exposure
but it allows risk to be shared with other parties. The common risk transfer tool is
catastrophic insurance, which allows firms to recover some of their disaster losses
and thus managing the financial impacts of disasters. Other financial instruments
include catastrophic bonds (cat-bonds) and weather risk management products. The
issuance of catastrophe risk linked bonds by insurance or reinsurance companies
enables them to obtain coverage for particular risk exposures in case of predefined
catastrophic events (e.g. earthquakes). These catastrophe bonds allow the insurance
companies transfer risk and obtain complementary coverage in the capital market
and increase their capacity to take on more catastrophe risk coverage.
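The investor side of a catastrophe bond can be sketched as a simplified one-period payoff: the investor earns a coupon unless a predefined catastrophic event triggers forfeiture of principal to the sponsoring insurer. The function and figures below are illustrative assumptions, not terms from any actual bond:

```python
# Simplified one-period payoff to a catastrophe bond investor. If a
# predefined catastrophic event (e.g. an earthquake exceeding a
# parametric trigger) occurs, principal is forfeited to cover the
# sponsor's claims; otherwise principal plus coupon is returned.
def cat_bond_investor_payoff(principal, coupon_rate, triggered):
    if triggered:
        return 0.0  # principal absorbed to fund catastrophe claims
    return principal * (1 + coupon_rate)

no_event = cat_bond_investor_payoff(1_000_000, 0.08, triggered=False)
event = cat_bond_investor_payoff(1_000_000, 0.08, triggered=True)
```

The coupon is the risk premium investors demand for bearing the trigger risk; real structures add partial-loss tranches and collateral accounts omitted here.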
The use of insurance for mitigating financial losses from natural catastrophes is
generally lacking in the private sector in developing countries.19 Catastrophe risk is a
public shared risk (“covariate” risk) and collective in nature, therefore, making it
difficult to find individual and community solutions.20 An effective insurance market
is essential for financing post disaster recuperation and rehabilitation of firms. In the
absence of a sophisticated insurance market, the government normally acts as
financier for disaster recovery efforts. Governments can also influence the risk
financing arrangements by encouraging the establishment of insurance pools by
the local insurance industry and covering higher exposures in the global reinsurance
and capital markets.
Property insurance policies for firms in earthquake prone provinces may not be
readily available due to inadequate local regulation of property titles, building
codes and developmental planning. In this respect, the local governments play an
important role in ensuring proper public policies are implemented and regulations
enforced to lower premiums and achieve higher insurance coverage in these
provinces.
There is a bigger range of instruments for risk financing in the markets today.
Other than insurance coverage for disaster risk, new instruments such as catastrophe
risk swaps and risk-linked securities are also available in the global capital market. In
1994, the first capital market instrument linked to catastrophe risk, the catastrophe
bond, was introduced. Since then, more risk-linked securities have become available,
including those providing outright funding commitments to recover economic
losses from disasters. These contingent capital instruments are based on estimating
the amount of risk involved through risk and loss impact estimates to build a disaster
risk profile for the client. The implied risk profile is used to identify and define the
risk-linked financial instruments.
Natural Disaster Risk and Firm Value21
The current dynamic business environment embraces the international flow of
investment to facilitate success and growth. Firms with sustainable competitiveness
and growth are likely to enhance their market value. Business globalisation invari-
ably means that firms become more proactive in scouting for opportunities in foreign
markets in order to sustain and build corporate value. Other than the social, eco-
nomic and political risk factors normally considered in foreign investment
evaluations and enterprise risk management processes, firms also need to take into
account natural disaster risk. The premiums for catastrophe risk insurance are
expensive and there must be a compelling case or economic incentives for firms to
establish adequate insurance coverage on their assets. We are interested in the
economic impacts of natural catastrophes from a financial management perspective.
The primary objective of the firm is to maximise shareholder wealth and an
effective corporate risk management program enhances corporate value. The exis-
tent literature contains a respectable body of theories and general acceptance in the
market that corporate value can be created with the proper understanding and
management of risk. There is a perception of risk associated with investments and
traditional finance suggests such perceptions imply that there must be a reward in the
form of a risk premium for investors to take on this risk. The firm as a corporate
investor is no different in that it also requires a risk premium for assuming risk. The
magnitude of the firm value depends on how efficient and effective it can manage its
risk exposure. From a firm value versus risk management perspective, it is possible
to construe the firm’s value as a function of all relevant risk factors.
While the frequency and severity of natural hazards are dictated by the natural
phenomenon itself, the losses caused can be controlled by understanding and
managing the business development and population density according to the vulner-
ability of the geographical location. Business development and population density
tend to have a positive correlation and therefore natural catastrophe risk has pro-
found social and economic impacts on the local inhabitants and economy.
Contemporary enterprise risk exposure modelling tends to ignore natural hazards
and focus on estimating the severity and frequency of financial or operational
exposures. The global warming phenomenon has brought about a heightened aware-
ness of many environmental risks that may affect business. Hence, there is a need for
firms and policy makers to model, monitor and measure the risk exposure from
natural hazards and prepare to manage the potential impacts.
The impacts from a natural catastrophe include the loss of property, life, injury,
business interruption and loss of profit. From a firm’s perspective, the financial
impact on its market value can be mathematically specified as:
Firm's value at risk = f(hazard, vulnerability)    (1)
From Eq. (1), the firm’s value at risk from natural phenomena is a function of
hazard and vulnerability. Equation (1) integrates the impact on the firm’s value
from natural phenomena and their consequence or exposure. The natural disaster
risk management process has to be managed properly from the beginning; therefore,
it is important that firms improve the evaluation, coordination, efficiency and control
of business development and management process to minimize such risks. The
issues in this context are the considerations and measures that are available to
firms in the natural disaster risk management process. Vulnerability in turn is a
function of three factors:
Firm's vulnerability = f(fragility, resilience, exposure)    (2)
Effective risk management requires attention to three factors—hazards, exposure,
and vulnerability. Primary disaster impacts include potential physical damage to
production facilities and infrastructure. But there are also often secondary impacts,
including business interruption from lack of materials and information, especially in
interacting supply chain networks. Risk is a function of hazard and vulnerability,
while vulnerability is a function of fragility, resilience, and exposure.22
Coase’s theory of the firm stresses that the impetus for the emergence of business
corporations is the specialised institutional structure that comes into being to reduce
the transaction costs.23 Since the threat of natural disasters, like the volatility of
financial prices, implies potential transaction costs to the firm, it is imperative to
manage catastrophe risk as it can affect the cost of capital, the cost of production, and
revenues. Financial theory suggests that rational firms would hedge their risk
exposure to remove the variability in their cash flows. The significance of this
view is that by removing variability, firms enhance the predictability of cash flows,
allowing them to invest in future projects with less uncertainty about the negative
impact of price fluctuations. The manifestations of variability as a result of a natural
catastrophe are disruptions to the firm’s supply chain, production, logistics, man-
power and clientele. The management issues to be addressed in relation to catastro-
phe risk management using risk transfer instruments are moral hazard and adverse
selection. Moral hazard occurs when the firm fails to implement preventive measures
after the risk transfer has taken place and reports excessive losses. Adverse selection
happens if the firm uses inside knowledge about the exposure to obtain more
favorable terms in the risk transfer policy from the issuing company.
The firm's overall exposure to natural catastrophes like earthquakes needs to be
analyzed based on the region's vulnerability to assess the collective need for risk
mitigation arrangements. Therefore, it is necessary to identify and map the major
catastrophe risks that affect the region and assess how the business can be organised
by adopting a risk neutral structure and/or how to obtain aggregate risk-financing
arrangements.
The financial impact of natural disasters is determined by the frequency of an
event occurring and by the severity of the resulting loss. The vulnerability to natural
catastrophes can be reduced significantly through risk mitigation to lessen the
impact of disasters. The catastrophe risk exposures in individual investment
projects can be mitigated using a project-based approach to manage catastrophe
risk through risk transfer such as insurance to reduce specific project exposures.
Risk can also be reduced through corporate planning by building earthquake
resistant structures, implementing risk neutral logistics or supply chain, market
diversification and other such actions that minimise the overall asset at risk of
the firm.
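The frequency-and-severity view of financial impact described above can be sketched as a small Monte Carlo simulation. The sketch assumes a Poisson event frequency and a lognormal loss severity; all parameter values are illustrative and not taken from the text:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's method for sampling a Poisson-distributed event count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq_mean=0.2, sev_mu=13.0, sev_sigma=1.5,
                         years=100_000, seed=42):
    """Estimate expected annual catastrophe loss.

    freq_mean: average number of events per year (Poisson frequency)
    sev_mu, sev_sigma: lognormal parameters for loss per event (severity)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        events = poisson_draw(rng, freq_mean)
        total += sum(rng.lognormvariate(sev_mu, sev_sigma)
                     for _ in range(events))
    return total / years

expected_loss = simulate_annual_loss()
```

Risk mitigation enters such a model by shrinking the frequency parameter (prevention) or the severity parameters (e.g. earthquake-resistant structures), which is exactly the frequency/severity decomposition discussed above.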
Financial Issues
Natural disasters can cause serious financial issues for firms as they affect the
efficient management and performance of their assets and liabilities. The structural
risks associated with natural disasters constitute one of the major sources of risk for
most enterprises.24 Disaster hazards can cause damages and losses to firms through
partial or total destruction of assets and disruptions in service delivery. Natural disasters
also cause macroeconomic effects in the economy as a whole and can bring signifi-
cant changes in the macroeconomic environment. The effects of a natural disaster
can interact with some of the normal risks faced by firms, including strategic
management, operational, financial and market risks. These effects will reveal
corporate vulnerabilities related to poor financial decisions.
The following financial issues in relation to risk management are analysed in this
section:
• systematic and unsystematic risk exposure
• investment evaluation and planning
• investment to meet strategic demands
• financial risk management and compliance
Firms are constantly trying to develop more efficient models to evaluate the size
and scope of risk exposure consequences using risk modelling approaches such as
shareholder value at risk (SVA), value at risk (VAR) and stress testing.
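As a concrete illustration of one of these modelling approaches, the following is a minimal historical-simulation VaR calculation; the return series is invented for the example:

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the loss not exceeded with the
    given confidence, read off the sorted historical losses."""
    losses = sorted(-r for r in returns)      # losses are negated returns
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# Hypothetical daily portfolio returns (as fractions)
returns = [0.01, -0.02, 0.005, -0.015, 0.02, -0.03,
           0.0, 0.012, -0.007, 0.003]
var_95 = historical_var(returns)   # 0.03, i.e. a 3% one-day loss
```

With only ten observations the 95% quantile falls on the single worst loss; real applications use far longer histories, parametric models, or the stress-testing variants mentioned above.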
Systematic and Unsystematic Risk
The overall corporate risk can be divided into alpha (the competency of the
company’s management or unsystematic risk) and beta (the market or systematic
risk). The alpha risk is of an idiosyncratic nature and can be eliminated by diversifying
the investment portfolio, leaving beta as the main variable. The risk exposure of a
firm can come from the political, economic or operating environments. The
operating environment refers more specifically to the idiosyncratic internal and
external environments in which the firm conducts its business and the inherent risks
to the firm. In this context, the natural disaster risk posed by earthquakes and floods
would fall within the definition of external environment. The implication of disaster
risk in the internal environment would be related to the internal processes and
resources available to manage this risk.
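The alpha/beta decomposition can be made concrete with a toy simulation in which each asset's return is one shared market factor plus independent noise; the idiosyncratic part averages out as the portfolio grows, while the systematic part does not. All figures are illustrative:

```python
import random

def portfolio_volatility(n_assets, n_periods=20_000,
                         market_sd=0.04, idio_sd=0.08, seed=7):
    """Volatility of an equal-weighted portfolio of assets sharing one
    systematic (market) factor plus independent idiosyncratic noise."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_periods):
        market = rng.gauss(0.0, market_sd)              # systematic shock
        port = sum(market + rng.gauss(0.0, idio_sd)     # + idiosyncratic
                   for _ in range(n_assets)) / n_assets
        returns.append(port)
    mean = sum(returns) / n_periods
    variance = sum((r - mean) ** 2 for r in returns) / (n_periods - 1)
    return variance ** 0.5

v1 = portfolio_volatility(1)     # near sqrt(0.04^2 + 0.08^2), about 0.089
v50 = portfolio_volatility(50)   # approaches the systematic floor, about 0.04
```

Diversification drives volatility down toward the systematic floor, which is why beta remains "the main variable" once alpha risk has been diversified away.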
In terms of unsystematic effects of natural disasters such as an earthquake, losses
related to disruptions in service delivery result from a combination of the direct
damage to the firm’s assets and its human resources. The better prepared a
firm is in managing the risks to its resources, the smaller the impact of damage and
loss to its assets and the easier the post-disaster business recovery. Systematic risk
effects on firms can be illustrated by damage to the overall infrastructure in the region
causing major disruptions to operations even if the firm is reasonably unscathed at
the micro level.
Governments normally intervene in disaster risk management to mitigate
systemic risk, as damage from disasters tends to be large and locally covariate, and the
remedial actions are targeted at the provision of public goods, such as infrastructure.
The World Bank (2000) suggests that governments are more effective in covering
covariant risks, while most idiosyncratic or unsystematic risks may be handled better
by private providers.25
Investment Evaluation
An investment evaluation is conducted when a firm is considering a major expendi-
ture. The variables taken into consideration are the cash flows, growth potential and
risk associated with the project. The common tools used in investment evaluation are
the net present value and internal rate of return methods. Both methods
incorporate a parameter to measure the risk exposure inherent in the project, as
the basic tenet of financial management is one of risk-return optimization. A central
feature in modern risk management is the issue of risk and return relationship in
investment decisions. The basic link between risk and return says that greater
rewards come with greater risk and firms investing in a high natural disaster prone
area would need to acknowledge this in their investment. This acknowledgement of
catastrophe risk in investment evaluation is similar to accounting for political or
economic risks of a country.
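The paragraph's point, that catastrophe exposure should be acknowledged in investment evaluation much like country risk, can be sketched by loading a disaster risk premium onto the discount rate in a net present value calculation; all cash flows and rates below are hypothetical:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the time-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

cashflows = [-1000.0, 300.0, 300.0, 300.0, 300.0, 300.0]  # 5-year project

base_rate = 0.08          # required return in a low-risk location
disaster_premium = 0.05   # extra compensation for a disaster-prone region

npv_safe = npv(base_rate, cashflows)                      # ~ 197.8
npv_risky = npv(base_rate + disaster_premium, cashflows)  # ~ 55.2
```

The same cash flows are worth far less once the hurdle rate carries the catastrophe premium, so a project acceptable elsewhere may be rejected in a disaster-prone region.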
The price of risk is commonly referred to as the risk premium. A firm as the
investor would demand a risk premium commensurate with the risk characteristics of
their investment for the higher risk exposure of operating in a region with greater
natural disaster risk. The risk premium to compensate for potential disaster risk can
be built into the risk equation by factoring in liquidity risk from destabilizing cash
fluctuations, and default or credit risk. Moreover, liquidity risk and credit risk
interact under disaster conditions, escalating the risk premium and thus the cost of
capital. This will affect firms after a disaster when they return to the
capital markets to raise credit to rebuild their business.
Natural disasters typically trigger operational risks resulting in disruptions to cash
flows and possible default of loan obligations to creditors. However, firms with
efficient liquidity management will minimize the disaster effects on cash flows. The
nature and magnitude of the disaster and clients’ profile are factors that will influence
the severity of cash flow disruptions and the ensuing credit risk. The firm can
manage a credibility problem and a spiralling cost of capital from a disaster if it has
made prior financial arrangements with creditors. These effects may lead to
short-term liquidity crises and a heightened cost of capital in the medium term for
firms. Credit risk is particularly heightened by a disaster due to disruptions to cash
flows and serious loss of the assets used as collateral for loans. Unless prior
arrangements are in place with creditors to mitigate repayment risks and redress the
deterioration in the quality of securities, firms may face delinquency actions and loss
of financial facilities.
Strategic Investment
Firms can reduce cash flow variability through business portfolio diversification by
engaging in different investments, different locations and activities whose returns are
not perfectly correlated. In the context of natural disaster risk management, strategic
investment refers to making a financial commitment in a location after considering
the risk implications and the available investment alternatives. That is, investment in
risky environments must be consistent and sensitive to the risk and return profile of
the firm. For instance, making a decision to invest in a new supply chain process in a
disaster prone area may require looking at risk neutral alternatives. The risk neutral
option may be more costly but would be appropriate if the new supply chain is to
service the entire firm’s operations. A Cost Effectiveness Analysis (CEA) technique
can be used to compare the monetary costs of different options that provide the same
physical outputs.
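A CEA comparison of this kind reduces to cost per unit of identical physical output; a minimal sketch with invented figures for two supply chain options:

```python
def cost_effectiveness(options):
    """Rank options that deliver the same physical output by cost per unit.

    options: dict mapping option name -> (total_cost, units_of_output)
    Returns (name, cost_per_unit) pairs, cheapest first.
    """
    ranked = [(name, cost / units) for name, (cost, units) in options.items()]
    return sorted(ranked, key=lambda pair: pair[1])

# Hypothetical supply chain designs with identical throughput
options = {
    "standard_route": (2_000_000, 100_000),      # cheaper, disaster-exposed
    "risk_neutral_route": (2_600_000, 100_000),  # costlier, hardened
}
ranking = cost_effectiveness(options)  # standard_route wins on cost alone
```

CEA alone favours the cheaper, exposed design; the text's argument is that the risk-neutral option can still be justified when the supply chain must serve the entire firm's operations.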
The commercial challenges after a natural disaster are the resumption and
maintenance of client services and the financial viability of the business. Firms caught
unprepared will struggle during a disaster to provide emergency and recovery
services to their clients without adversely affecting their own financial position. The
strategic perspective of disaster effects is on the adequacy of organisational and
financial planning on the part of management in relation to the firm’s business
growth and the resultant structural design. Firms that have experienced rapid growth
but do not comprehensively plan and design their business model around a disaster
contingency plan are likely to be more affected by a disaster. Firms pursuing rapid
business expansion without an appropriately designed business model, planned
investments and logistics addressing disaster risk will likely experience exacerbated
problems during a disaster.
Risk Management and Compliance
To fully address corporate risk exposure with respect to natural disasters,
companies need a comprehensive risk management process that identifies and
mitigates the major sources of risk. Formulating a detailed risk program with
capabilities for risk identification, assessment, measurement, mitigation, and transfer
is necessary in a complete risk management strategy. A comprehensive corporate
risk management process requires effective techniques that provide a
systematic evaluation of risks, which then enables risk managers to make
judgments on acceptable risks. Such a process should allow insight into primary
areas of uncertainty by identification of the risk factors, highlighting likely outcomes
of events and measuring the possible financial impact on the company. The process
must also have built-in techniques that can provide a cost-benefit analysis of hedging
options as a basis for prioritizing risk strategies. Through the risk management
process, a company is able to set its risk tolerance level and any unwanted exposure
may be avoided or hedged and the company is left bearing the risk it is willing to
assume.
A firm-wide risk management system, using tools like the value at risk (VaR)
model, which is capable of capturing the aggregate effect of the firm’s financial risk
exposure, is important to enhance the company’s overall market value. The VaR
model summarizes the worst-case loss that will not be exceeded, at a given
confidence level, under normal market conditions.
Conclusions
The severe climatic changes brought about by global warming were evidenced by the
freezing temperatures that caused damage amounting to billions of dollars in
China in February 2008. The rapidly changing built environment in China also
means that new risk assessment models need to be developed to accurately reflect
and assess the real impact of such risks. Financial risk modelling and management using
computer simulations incorporating probabilistic and statistical models would be
valuable for evaluating potential losses from future natural catastrophes for better
managing potential losses. Firms operating in high natural disaster risk areas should
use risk modelling for investment evaluation, risk mitigation, disaster management
and recovery planning as part of the overall enterprise wide risk management
strategy. They also need to identify new business strategies for operating in disaster
prone regions and financial instruments to manage risk.
Governments play an important role in financial markets in encouraging financial
institutions to support borrowers in risk reduction and to mitigate the impacts of
natural disasters.
Notes
1. Mueller, R.S. III (2004). The FBI, Vital Speeches of the Day 71:4, 106–109.
2. Hristidis, V., Chen, S.-C., Li, T., Luis, S. Deng, Y. (2010). Survey of data
management and analysis in disaster situations. The Journal of Systems and
Software 83:10, 1701–1714.
3. Thompson, S., Altay, N., Green, W.G. III, Lapetina, J. (2006). Improving
disaster response efforts with decision support systems, International Journal
of Emergency Management 3:4, 250–263.
4. Lee, J.-K., Bharosa, N., Yang, J. Janssen, M., Rao, H.R. (2011). Group value
and intention to use – A study of multi-agency disaster management information
systems for public safety, Decision Support Systems 50:2, 404–414.
5. Amailef, K., Lu, J. (2011). A mobile-based emergency response system for
intelligent m-government services, Journal of Enterprise Information Manage-
ment 24:4, 338–359.
6. Santella, N., Steinberg, L.J., Parks, K. (2009). Decision making for extreme
events: Modeling critical infrastructure interdependencies to aid mitigation and
response planning, Review of Policy Research 26:4, 409–422.
7. Aleskerov, F., Say, A.L., Toker, A., Akin, H.L., Altay, G. (2005). A cluster-
based decision support system for estimating earthquake damage and casualties,
Disasters 3, 255–276.
8. www.sahana.lk/overview accessed 2/22/2010.
9. Morelli, R., Tucker, A., Danner, N., de Lanerolle, T.R., Ellis, H.J.C., Izmirli, O.,
Krizanc, D. and Parker, G. (2009) Revitalizing computing education through
free and open source software for humanity, Communications of the ACM 52:8,
67–75.
10. www.sahana.lk/overview accessed 8/2/2016; Wikipedia, Sahana FOSS Disaster
Management System, accessed 8/2/2016.
11. Tesfamariam, S., Sadiq, R., Najjaran, H. (2010). Decision making under uncer-
tainty – An example for seismic risk management, Risk Analysis 30:1, 78–94.
12. Ibid.
13. Karvetski, C.W., Lambert, J.H., Keisler, J.M., Linkov, I. (2011). Integration of
decision analysis and scenario planning for coastal engineering and climate
change, IEEE Transactions on Systems, Man, and Cybernetics – Part A:
Systems and Humans 41:1, 63–73.
14. Ibid.
15. Briggs, C.A., Tolliver, D. and Szmerekovsky, J. (2012). Managing and
mitigating the upstream petroleum industry supply chain risks: Leveraging
analytic hierarchy process. International Journal of Business and Economics
Perspectives 7(1), 1–12.
16. Edwards, W. (1977). How to use multiattribute utility measurement for social
decisionmaking, IEEE Transactions on Systems, Man, and Cybernetics,
SMC-7:5, 326–340.
17. Alexander, D. 2000, Confronting Catastrophe: New Perspectives on Natural
Disasters, Oxford University Press, New York.
18. Cardona, O. 2001, Estimación Holistica del Riesgo Sísmico Utilizando Sistemas
Dinámicos Complejos. Barcelona, Spain: Centro Internacional de Métodos
Numéricos en Ingeniería (CIMNE), Universidad Politécnica de Cataluña.
19. Guy Carpenter & Company 2000, The World Catastrophe Reinsurance Market,
New York.
20. Comfort, L. 1999, Shared Risk: Complex Systems in Seismic Response,
Pergamon, New York.
21. Extracted from Oh, K.B., Ho C. and Wu. D. Natural disaster and financial risk
management. Int. J. Emergency Management, Vol. 6, No. 2, 2009.
22. Merz, M., Hiete, M., Comes, T. and Schultmann, F. (2013). A composite
indicator model to assess natural disaster risks in industry on a spatial level.
Journal of Risk Research 16(9), 1077–1099.
23. Coase, R. H. 1937, ‘The Nature of the Firm’, Econometrica, no. 4, pp. 386–405,
repr. in G.J. Stigler, and K.E. Boulding, (eds.), 1952, Readings in Price Theory,
Homewood, Ill.
24. Sebstad, J. and M. Cohen. 2000. “Microfinance, Risk Management, and Pov-
erty: Synthesis of Field Studies Conducted by Ronald T. Chua, Paul Mosley,
Graham A.N. Wright, Hassam Zaman”, Study submitted to Office of Microen-
terprise Development, USAID, Washington, D.C.
25. World Bank 2000, World Development Report 2000/2001: Attacking Poverty,
Oxford University Press, New York.
14 Sustainability and Enterprise Risk Management
The challenge of environmental sustainability is important not only as a moral
imperative, but also as a managerial responsibility to operate profitably. Environmental
sustainability has become a critical factor in business, as the threats of environmental
degradation from carbon emissions, chemical pollution, and other sources have
repeatedly created liability for firms that do not consider the environment, as well
as attracting regulatory attention. Legislators and journalists provide intensive oversight of
the operations of any organization. There are many cases of multi-billion dollar
corporations brought to or near to bankruptcy by responsibilities for things like
asbestos, chemical spills, and oil spills. As the case of the fire and collapse of the
Dhaka garment factory in April 2013 attests, global supply chains create complex
relationships that place apparently unaware supply chain members such as Nike at
great risk, not only legally, but also in terms of market reputation.
Global warming is here, with a notable temperature rise since 1980 exceeding what
appears to be sustainable.1 This puts pressure on ecosystems, creating additional risks
to property through the greater storm magnitudes seen since the 1960s. Natural disasters are
increasing in financial magnitude, due to increased population and development.
There are many predictions of more intensive rainfall, stronger storms, and increased
sea levels along with simultaneous drought.
Other risks arise from:
• Medical risks from disease to include Zika virus, West Nile virus, malaria, and
others.
• Boycott risk from supply chain linkages to upstream vendors who utilize child
labor (affecting Nike) or unsafe practices (Dhaka, Bangladesh).
• Evolving understanding of scientific risks such as asbestos, once thought a
safeguard against building fires, now a major health risk issue.
• Hazardous waste, such as nuclear disposal
• Oil and chemical spills
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_14
Risk arises in everything humans attempt.2 Life is worthwhile because of its
challenges. Doing business offers no profit without risk, rewarding those who best
understand the systems involved and find the best ways to manage these risks.
We will discuss risk management as applied to production in the food we eat, the
energy we use to live, and the manifestation of global economy, supply chains.
What We Eat
One of the major issues facing human culture is the need for quality food. Two
factors that need to be considered are first, population growth, and second, threats to
the environment. We have understood since Malthus that population cannot continue
to grow exponentially without severe changes to our ways of life. Some countries,
such as China, have been proactive in controlling population growth. Other areas,
such as Europe, seem to find a decrease in population growth, probably due to
societal consensus. But other areas, to include India and Africa, continue to see rapid
increases in population. Some think that this will change as these areas become more
affluent (see China and Europe). But there is no universally acceptable way to
control population growth. Thus we expect to see continued increase in demand
for food.
Agricultural science has been highly proactive in developing better strains of
crops, through a number of methods, including bioengineering and genetic science.
This led to what was expected to be a green revolution a generation ago. As with all
of mankind’s schemes, the best-laid plans involve many complexities and
unexpected consequences. North America has developed means to vastly increase
production of food free from many of the problems that existed a century ago.
However, Europe, and even Africa, are concerned about new threats arising from
genetic agriculture.
A third factor complicating the food issue is distribution. North America and the
Ukraine have long been fertile producing centers, generating surpluses of food. This
connects to supply chains, to be discussed below. But the issue is the interconnected
global human system with surpluses in some locations and dearth in others. Techni-
cally, this is a supply chain issue. But more important really is the economic issue of
sharing spoils, which ultimately leads to political issues. Contemporary business with
heavy reliance on international collaborative supply chains leads to many risks
arising from shipping (as well as other factors). Sustainable supply chain manage-
ment has become an area with heavy interest.3
Water is one of the most widespread assets Earth has (probably next to oxygen,
which chemists know is a related entity). Rainwater used to be considered pure. The
industrial revolution produced the unintended consequence of acid rain. Water used
to be free in many places. In Europe, population density and things like the black
plague made beer a necessary health food. In North America, it led to the bottled
water industry. Only 30 years ago paying for water would have been considered the
height of idiocy. Managing water is recognized as a major issue.4 Water manage-
ment also ultimately becomes an economic issue, leading to the political arena.
The Energy We Use
Generation of energy in its various forms is a major issue leading to political debate
concerning tradeoffs among those seeking to expand existing fuel needs, often
opposed by those seeking to stress alternative sources of energy. Oil of course is a
major source of current energy, but involves not only environmental risks5 but also
related catastrophe risks6 and market risks.7 The impact of oil exploration on the
Mexican rain forest8 has been reported and cost risks in alternative energy resources
studied.9
Mining is a field traditionally facing high production risks. Power generation is a
major user of mine output. Cyanide management has occurred in gold and silver
mining in Turkey,10 and benzene imposes risks.11 Life cycle mine management has
been addressed through risk management techniques.12 The chemical industry also
is loaded with inherent risks. Risk management in the chemical industry has been
discussed as well.13
The Supply Chains that Link Us to the World
Supply chain risk management involves a number of frameworks, categorization of
risks, processes, and mitigation strategies. Frameworks have been provided by
many, some focusing on a context, such as supply chains14 or small-to-medium
sized enterprises.15 Others have focused on contexts such as food16 or
pharmaceutical recalls, or terrorism.17 Five major components of a framework for managing
supply chain risk have been suggested:18
• Risk context and drivers.
Risk drivers arising from the external environment will affect all organizations,
and can include elements such as the potential collapse of the global financial
system, or wars. Industry specific supply chains may have different degrees of
exposure to risks. A regional grocery will be less impacted by recalls of Chinese
products involving lead paint than will those supply chains carrying such items.
Supply chain configuration can be the source of risks. Specific organizations can
reduce industry risk by the way they make decisions with respect to vendor
selection. Partner specific risks include consideration of financial solvency,
product quality capabilities, and compatibility and capabilities of vendor infor-
mation systems. The last level of risk drivers relate to internal organizational
processes in risk assessment and response, and can be improved by better
equipping and training of staff and improved managerial control through better
information systems.
• Risk management influencers
This level involves actions taken by the organization to improve their risk
position. The organization’s attitude toward risk will affect its reward system,
and mold how individuals within the organization will react to events. This
attitude can be dynamic over time, responding to organizational success or
decline.
• Decision makers
Individuals within the organization have risk profiles. Some humans are more
risk averse, others more risk seeking. Different organizations have different
degrees of group decision making. More hierarchical organizations may isolate
specific decisions to particular individuals or offices, while flatter organizations
may stress greater levels of participation. Individual or group attitudes toward risk
can be shaped by their recent experiences, as well as by the reward and penalty
structure used by the organization.
• Risk management responses
Each organization must respond to risks, but there are many alternative ways in
which the process used can be applied. Risk must first be identified. Monitoring
and review requires measurement of organizational performance. Once risks are
identified, responses must be selected. Risks can be mitigated by an implicit
tradeoff between insurance and cost reduction. Most actions available to
organizations involve knowing what risks the organization can cope with because
of their expertise and capabilities, and which risks they should outsource to others
at some cost. Some risks can be dealt with, others avoided. One view of the
strategic options available include the following six broad generalizations:19
– Break the law
– Take the low road
– Wait and see
– Show and tell
– Pay for principle
– Think ahead
The first option, breaking the law, apart from ethical considerations, poses serious
risks in terms of ability to operate and can lead to jail.
The second implies doing the absolute minimum required to comply with laws
and regulations. This approach satisfies legal requirements, but environmental laws
and regulations change, so modified behavior will probably be required in the future
and will probably be much more expensive than earlier consideration of
sustainability factors.
The wait and see option would see firms preparing for expected regulatory
changes as well as consumer behavior and competitor strategies. Thus option 3 is
more proactive than the prior two options.
Show and tell presumes that the organization is addressing environmental issues
but not fully publicizing these activities. Show and tell implies an honest portrayal of
environmental performance, as opposed to “greenwashing” where public relations is
used to present a misleading report. Show and tell has the deficiency that if problems
do arise, or if false accusations are made, firm reputation can suffer.
Pay for principle involves sacrificing some financial performance in order to meet
ethical and environmental standards. It implies financial sacrifice.
Think ahead involves proceeding based on principle as well as business logic.
Benefits include gaining competitive advantage and protecting against future legis-
lation, seeking to be at the leading edge of sustainability.
Which of these broad general options is appropriate of course depends on firm
circumstances, although there is little justifiable support for options 1 and 2.
The Triple Bottom Line
Organizational performance measures can vary widely. Private for-profit
organizations are generally measured in terms of profitability, short-run and long-
run. Public organizations are held accountable in terms of effectiveness in delivering
services as well as the cost of providing these services. One effort to consider
sustainability and other aspects of risk management is the triple bottom line
(TBL),20 considering financial performance, environmental performance, and social
responsibility.
TBL = f(F, E, SR, cost)    (1)
All three areas need to be considered to maximize firm value. In normal times,
there is more of a focus on high returns for private organizations, and lower taxes for
public institutions. Risk events can make preparation for dealing with risk
exposure much more important, shifting the focus to survival.
Sustainability Risks in Supply Chains
As we covered in Chap. 1, supply chains involve many risks imposing disruptions
and delays due to problems of capacity, quality, financial liquidity, changing
demand and competitive pressure, and transportation problems. By their nature,
supply chains require networks of suppliers leading to the need for reliable sources
of materials and products with backup plans for contingencies. Demands are at the
whim of customers in most cases. There are endogenous risks somewhat within a
firm’s control, as well as exogenous risks. These can also be viewed by the triple
bottom line. Sustainability aspects arise in both endogenous and exogenous risks, as
shown in Table 14.1:
Table 14.2 in turn describes exogenous risks and possible responses with
practices to implement them.
Tables 14.1 and 14.2 both highlight the variety of things that can go wrong in a
supply chain, as well as some basic responses available. Each particular circum-
stance would of course have more specific appropriate practices available to ade-
quately respond.
Table 14.1 Endogenous risks related to the triple bottom line21

Environmental
• Accident: Prevent (locate away from heavy population); Mitigate (emergency response plans); Reduce (quick admission of responsibility); Cooperate (work with suppliers to identify sources); Insure (work with insurers to prevent & mitigate)
• Pollution: Avoid (use clean energy, avoid polluting); Mitigate (monitor and reduce emissions); Reduce (sustainable waste management)
• Legal compliance: Assure (legal policies, disseminate); Control (monitor compliance); Share (sustainability audits with suppliers)
• Product/package waste: Prevent (apply lean management practices); Mitigate (recycle); Cooperate (design products with sustainable packaging)

Social
• Labor: Avoid (shun sources using child labor); Prevent (fair wages/reasonable hours); Mitigate (quick admission of responsibility)
• Safety: Prevent (training); Mitigate (adequate medical access); Insure (work with insurers to prevent & mitigate)
• Discrimination: Prevent (equal opportunity practices); Mitigate (complaint handling system); Transfer (legal services and public relations)

Economic
• Antitrust: Avoid (avoid investing in unstable regions); Reduce (build local relationships); Mitigate (create extra capacity)
• Bribery/corruption: Prevent (train management); Cooperate (work with legal authorities)
• Price fixing/patents: Prevent (follow licensing laws); Mitigate (use whistleblowing); Insure (work with supply chain partners)
• Tax evasion: Prevent (follow tax laws)
Models in Sustainability Risk Management
The uncertainty inherent in risk analysis has typically been dealt with in two primary
ways. One is to either measure distributions or to assume them, and to apply
simulation models. Rijgersberg et al.23 gave a discrete-event simulation model for
risks involved in fresh-cut vegetables. The management of risks in the interaction of
food production, water resource use, and pollution generation has been studied
through Monte Carlo simulation.24
The other way to treat risk is to utilize other models (optimization; selection)
with fuzzy representations of parameters. Multiple criteria models have been
widely applied that consider risk in various forms. Analytic hierarchy process is
commonly used in a fuzzy context.25 The related analytic network process (ANP)
has been presented in design of flexible manufacturing systems.26 Another multiple
criteria approach popular in Europe is based on outranking principles. Fuzzy
models of this type have been applied to risk contexts in selecting manufacturing
systems27 and in allocating capacity in semiconductor fabrication.28 These are only
representative of many other multiple criteria models considering risk.
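A common way to carry such fuzzy parameters through a selection model is the triangular fuzzy number. The sketch below shows only the basic arithmetic and centroid defuzzification, with invented expert scores; it is not a full fuzzy AHP or outranking implementation:

```python
class TriangularFuzzy:
    """Triangular fuzzy number (low, mode, high)."""

    def __init__(self, low, mode, high):
        assert low <= mode <= high
        self.low, self.mode, self.high = low, mode, high

    def __add__(self, other):
        return TriangularFuzzy(self.low + other.low,
                               self.mode + other.mode,
                               self.high + other.high)

    def scale(self, k):
        """Multiply by a non-negative crisp weight."""
        return TriangularFuzzy(k * self.low, k * self.mode, k * self.high)

    def defuzzify(self):
        """Centroid of the triangular membership function."""
        return (self.low + self.mode + self.high) / 3.0

# Hypothetical expert judgments for one option on two criteria
risk_score = TriangularFuzzy(2, 3, 5)   # "around 3"
cost_score = TriangularFuzzy(4, 6, 7)   # "around 6"
overall = risk_score.scale(0.6) + cost_score.scale(0.4)
crisp_value = overall.defuzzify()       # roughly 4.27
```

Defuzzifying at the end yields a crisp score per option, so options can be ranked even though the underlying judgments were imprecise.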
Table 14.2 Exogenous risks related to sustainability22

Environmental
• Natural disaster: Reduce (have alternative sources available); Mitigate (resilient contingency plan); Insure (insure when risk unavoidable)
• Weather: Prevent (build flexible supply chain, forecast); Mitigate (resilient contingency plan); Reduce (water recycling); Insure (insure when risk unavoidable)

Social
• Demographic: Mitigate (agile product design); Reduce (proactively advertise)
• Pandemic: Reduce (strong health procedures in place); Mitigate (monitor in real-time)
• Social unrest: Mitigate (maintain good local relations); Insure (have alternative sources, evacuation plans)

Economic
• Boycotts: Prevent (provide quality product); Reduce (public relations); Retain (accept risk if cost is low)
• Litigation: Avoid (quality control); Prevent (responsive public relations); Insure (follow laws and regulations)
• Financial crisis: Avoid (keep informed); Insure (have contingency sources)
• Energy: Mitigate (improve environmental audits); Transfer (hedge)
Sustainability Selection Model
We can consider the triple bottom line factors of environmental, social, and eco-
nomic as a framework of criteria. Calabrese et al.29 gave an extensive set of criteria
for an analytic hierarchy process framework meant to assess a company’s
sustainability performance. We simplify their framework and demonstrate with
hypothetical assessments. We follow the SMART methodology presented in
Chap. 3.
Criteria
Each of the triple bottom line categories has a number of potential sub-criteria. In the
environmental category, these might include factors related to inputs (materials,
energy, water), pollution generation (impact on biodiversity, emissions, wastes),
compliance with regulations, transportation burden, assessment of upstream supplier
environmental performance, and presence of a grievance mechanism. This yields six
broad categories, each of which might have another level of specific metrics.
In the social category, there could be four broad sub-criteria to include labor
practices (employment, training, diversity, supplier performance, and grievance
mechanism), human rights impact (child labor issues, union relations, security),
responsibility to society (anti-corruption, anti-competitive behavior, legal compli-
ance), and product responsibility (customer health and safety, service, marketing,
customer privacy protection). This yields four social criteria. Some of the specific
metrics at a lower level are in parentheses.
The economic category could include economic performance indicators (profit-
ability), market presence (market share, product diversity), and procurement reli-
ability (three economic criteria).
Weight Development
Weights need to be developed. AHP develops weights within each category and then
weights the categories relative to one another, but a somewhat more accurate
assessment can be obtained by treating all criteria together. We thus have
13 criteria to weight. This
is a bit large, but this application was intended by Calabrese et al. as a general
sustainability assessment tool (and they had 91 overall specific metrics). We dem-
onstrate with the following weight development using swing weighting in
Table 14.3:
The total of the swing weighting assessments in column 3 is 730. Dividing each
entry in column 3 by this 730 yields weights in column 4.
206 14 Sustainability and Enterprise Risk Management
Scores
We can now hypothesize some supply chain firms, and assume relative
performances as given in Table 14.4 in verbal form. Firm 1 might emphasize
environmental concerns. Firm 2 might emphasize social responsibility. Firm
3 might be one that stresses economic efficiency with relatively less emphasis on
environmental or social responsibility.
We can convert these to numbers to obtain overall ratings of the three firms. We
do this with the following scale:
Table 14.4 Firm assessment of performance by criteria
Criterion Firm 1 Firm 2 Firm 3
Env1—Input sustainability Very good Average Low
Soc2—Human rights impact Good Excellent Low
Econ2—Market presence Average Average Very good
Env2—Pollution control Excellent Good Low
Soc3—Responsibility to society Good Excellent Low
Soc1—Labor practices Good Excellent Good
Env3—Compliance with regulations Good Good Good
Econ1—Profit Average Low Very good
Econ3—Procurement reliability Good Average Excellent
Soc4—Product responsibility Very good Excellent Good
Env4—Transportation sustainability Excellent Very good Good
Env5—Upstream supplier performance Very good Good Good
Env6—Grievance mechanism Excellent Very good Low
Table 14.3 Swing weighting for sustainability selection model
Criterion Rank Compared to 1st Weight
Env1—Input sustainability 1 100 0.137
Soc2—Human rights impact 2 90 0.123
Econ2—Market presence 3 85 0.116
Env2—Pollution control 4 80 0.110
Soc3—Responsibility to society 5 70 0.096
Soc1—Labor practices 6 60 0.082
Env3—Compliance with regulations 7 50 0.068
Econ1—Profit 8 45 0.062
Econ3—Procurement reliability 9 40 0.055
Soc4—Product responsibility 10–11 30 0.041
Env4—Transportation sustainability 10–11 30 0.041
Env5—Upstream supplier performance 12–13 25 0.034
Env6—Grievance mechanism 12–13 25 0.034
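The normalization in Table 14.3 is simple enough to script. The sketch below (criterion labels abbreviated from the table) divides each "Compared to 1st" assessment by the column total of 730:

```python
# Swing weighting (Table 14.3): each criterion's "Compared to 1st"
# assessment is divided by the column total (730) to produce weights
# that sum to 1.
swings = {
    "Env1-Input sustainability": 100,
    "Soc2-Human rights impact": 90,
    "Econ2-Market presence": 85,
    "Env2-Pollution control": 80,
    "Soc3-Responsibility to society": 70,
    "Soc1-Labor practices": 60,
    "Env3-Compliance with regulations": 50,
    "Econ1-Profit": 45,
    "Econ3-Procurement reliability": 40,
    "Soc4-Product responsibility": 30,
    "Env4-Transportation sustainability": 30,
    "Env5-Upstream supplier performance": 25,
    "Env6-Grievance mechanism": 25,
}

total = sum(swings.values())  # 730
weights = {criterion: swing / total for criterion, swing in swings.items()}

for criterion, w in weights.items():
    print(f"{criterion:36} {w:.3f}")
```

Rounded to three places, the output reproduces the weight column of Table 14.3.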
Excellent 1.0
Very good 0.9
Good 0.7
Average 0.5
Low 0.2
These numbers yield scores for each firm that can be multiplied by weights as in
Table 14.5:
Value Analysis
In this case, Firms 1 and 2 perform much better than Firm 3, but of course
that reflects the assumed values assigned. Note one limitation of the method:
the more criteria a category contains, the more emphasis that category tends to
receive. There were only three economic factors, as opposed to six environmental
factors. Even though the weights could reflect higher rankings for a particular
category (here the last four ranked factors were environmental), a bias is
introduced. The six environmental factors here may account for Firm 1 slightly
outperforming Firm 2. The overall bottom line is that one should pay attention
to all three triple bottom line categories. The performance index demonstrated
here might be used by each firm to draw attention to criteria where it should
expend effort to improve performance.
Table 14.5 Performance Index Calculation
Criteria Weight Firm 1 Firm 2 Firm 3
Env1—Input sustainability 0.137 0.9 0.5 0.2
Soc2—Human rights impact 0.123 0.7 1.0 0.2
Econ2—Market presence 0.116 0.5 0.5 0.9
Env2—Pollution control 0.110 1.0 0.7 0.2
Soc3—Responsibility to society 0.096 0.7 1.0 0.2
Soc1—Labor practices 0.082 0.7 1.0 0.7
Env3—Compliance with regulations 0.068 0.7 0.7 0.7
Econ1—Profit 0.062 0.5 0.2 0.9
Econ3—Procurement reliability 0.055 0.7 0.5 1.0
Soc4—Product responsibility 0.041 0.9 1.0 0.7
Env4—Transportation sustainability 0.041 1.0 0.9 0.7
Env5—Upstream supplier performance 0.034 0.9 0.7 0.7
Env6—Grievance mechanism 0.034 1.0 0.9 0.2
Firm score 0.762 0.724 0.501
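The weighted-sum scoring can be reproduced in a few lines. The weights and converted verbal ratings below are taken directly from Tables 14.3 and 14.5:

```python
# Weighted-sum performance index: multiply each firm's criterion score
# (verbal ratings converted via the 1.0/0.9/0.7/0.5/0.2 scale) by the
# swing weights and sum, reproducing the bottom row of Table 14.5.
weights = [0.137, 0.123, 0.116, 0.110, 0.096, 0.082, 0.068,
           0.062, 0.055, 0.041, 0.041, 0.034, 0.034]
scores = {
    "Firm 1": [0.9, 0.7, 0.5, 1.0, 0.7, 0.7, 0.7, 0.5, 0.7, 0.9, 1.0, 0.9, 1.0],
    "Firm 2": [0.5, 1.0, 0.5, 0.7, 1.0, 1.0, 0.7, 0.2, 0.5, 1.0, 0.9, 0.7, 0.9],
    "Firm 3": [0.2, 0.2, 0.9, 0.2, 0.2, 0.7, 0.7, 0.9, 1.0, 0.7, 0.7, 0.7, 0.2],
}

for firm, s in scores.items():
    index = sum(w * x for w, x in zip(weights, s))
    print(f"{firm}: {index:.3f}")  # 0.762, 0.724, 0.501
```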
Conclusions
There is an obvious growing move toward recognition of the importance of
sustainability. This is true in all aspects of business. We reviewed some of the
risks involved in the supply chain context, and considered risk management in a
framework including context and drivers, influences, decision maker profiles, and
general categories of response.
The triple bottom line is a useful way to focus on the role of sustainability in
business management. This chapter included a review of enterprise risk categories
along with common responses. We also demonstrated a SMART model, and
suggested value analysis considerations. Earlier in the book we provided modeling
examples where we emphasized the tradeoffs among choices available to
contemporary decision makers. But it must be realized that sustainability is not necessarily
counter to profitability. Wise contemporary decision making should seek to empha-
size attainment of sustainability, social welfare, and profitability. Admittedly it is a
challenge, but it is important for success of society that this be accomplished.
Notes
1. Anderson, D.R. and Anderson, K.E. (2009). Sustainability risk management.
Risk Management and Insurance Review 12(1), 25–38.
2. Olson, D.L., Birge, J.R., and Linton, J. (2014). Introduction to risk and uncer-
tainty management in technological innovation. Technovation 34(8), 395–398.
3. Seuring, S. and Müller, M. (2008). From a literature review to a conceptual
framework for sustainable supply chain management. Journal of Cleaner Pro-
duction 16(15), 1699–1710.
4. Lambooy, T. (2011). Corporate social responsibility: Sustainable water use.
Journal of Cleaner Production 19(8), 852–866.
5. Ng, D. and Goldsmith, P.D. (2010). Bio energy entry timing from a resource
based view and organizational ecology perspective. International Food &
Agribusiness Management Review 13(2), 69–100.
6. Meyler, D., Stimpson, J.P. and Cutchin, M.P. (2007). Landscapes of risk.
Organization & Environment 20(2), 204–212.
7. Pulver, S. (2007). Making sense of corporate environmentalism. Organization
& Environment 20(1), 44–83.
8. Santiago, M. (2011). The Huasteca rain forest. Latin American Research Review
46, 32–54.
9. Zhelev, T.K. (2005). On the integrated management of industrial resources
incorporating finances. Journal of Cleaner Production 13(5), 469–474.
10. Akcil, A. (2006). Managing cyanide: Health, safety and risk management
practices at Turkey’s Ovacik gold-silver mine. Journal of Cleaner Production
14(8), 727–735.
11. Nakayama, A., Isono, T., Kikuchi, T., Ohnishi, I., Igarashi, J., Yoneda, M. and
Morisawa, S. (2009). Benzene risk estimation using radiation equivalent
coefficients. Risk Analysis: An International Journal 29(3), 380–392.
12. Kowalska, I.J. (2014). Risk management in the hard coal mining industry:
Social and environmental aspects of collieries liquidation. Resources Policy
41, 124–134.
13. Müller, G. (2015). Managing risk during turnarounds and large capital projects:
Experience from the chemical industry. Journal of Business Chemistry 12(3),
117–124.
14. Khan, O. and Burnes, B. (2007). Risk and supply chain management: Creating a
research agenda. International Journal of Logistics Management 18(2),
197–216; Tang, C. and Tomlin, B. (2008). The power of flexibility for
mitigating supply chain risks. International Journal of Production Economics
116, 12–27.
15. Nishat Faisal, M., Banwet, D.K. and Shankar, R. (2007). Supply chain risk
management in SMEs: Analysing the barriers. International Journal of Man-
agement & Enterprise Development 4(5), 588–607.
16. Roth, A.V., Tsay, A.A., Pullman, M.E. and Gray, J.V. (2008). Unraveling the
food supply chain: Strategic insights from China and the 2007 recalls. Journal of
Supply Chain Management 44(1), 22–39.
17. Williams, Z., Lueg, J.E. and LeMay, S.A. (2008). Supply chain security: An
overview and research agenda. International Journal of Logistics Management
19(2), 254–281.
18. Ritchie, B. and Brindley, C. (2007). An emergent framework for supply chain
risk management and performance measurement. Journal of the Operational
Research Society 58, 1398–1411.
19. Rosenberg, M. (2016). Environmental sensibility: A strategic approach to
sustainability. IESE Insight 29, 54–61.
20. Elkington, J. (1997). Cannibals with Forks. Capstone Publishing Ltd.
21. Giannakis, M. and Papadopoulos, T. (2016). Supply chain sustainability: A risk
management approach. International Journal of Production Economics 171(Part
4), 455–470.
22. Ibid.
23. Rijgersberg, H., Tromp, S., Jacxsens, L. and Uyttendaele, M. (2009). Modeling
logistic performance in quantitative microbial risk assessment. Risk Analysis 30
(1), 10–31.
24. Sun, J., Chen, J., Xi, Y. and Hou, J. (2011). Mapping the cost risk of agricultural
residue supply for energy application in rural China. Journal of Cleaner Pro-
duction 19(2/3), 121–128.
25. Chan, F.T.S., Kumar, N., Tiwari, M.K., Lau, H.C.W. and Choy, K.L. (2008).
Global supplier selection: A fuzzy-AHP approach. International Journal of
Production Research 46(14), 3825–3857.
26. Kodali, R. and Anand, G. (2010). Application of analytic network process for
the design of flexible manufacturing systems. Global Journal of Flexible
Systems Management 11(1/2), 39–54.
27. Saidi Mehrabad, M. and Anvari, M. (2010). Provident decision making by
considering dynamic and fuzzy environment for FMS evaluation. International
Journal of Production Research 48(15), 4555–4584.
28. Kang, H.-Y. (2011). A multi-criteria decision-making approach for capacity
allocation problem in semiconductor fabrication. International Journal of Pro-
duction Research 49(19), 5893–5916.
29. Calabrese, A., Costa, R., Levialdi, N. and Menichini, T. (2016). A fuzzy analytic
hierarchy process method to support materiality assessment in sustainability
reporting. Journal of Cleaner Production 121, 248–264.
15 Environmental Damage and Risk Assessment
Among the many catastrophic damages inflicted on our environment, recent events
include the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, and the 2011
earthquake and tsunami that destroyed the Fukushima Daiichi nuclear power plant.
The Macondo well operated by British Petroleum, aided by driller Transocean Ltd.
and receiving cement support from Halliburton Co., blew out on 20 April 2010,
leading to eleven deaths. The subsequent 87-day flow of oil into the Gulf of Mexico
dominated news in the U.S. for an extensive period of time and polluted fisheries in
the Gulf as well as coastal areas of Louisiana, Mississippi, Alabama, Florida, and Texas.
The cause was attributed to defective cement in the well. The Fukushima nuclear
plant disaster led to massive radioactive decontamination, impacting 30,000 km2 of
Japan. All land within 20 km of the plant plus an additional 2090 km2 northwest
were declared too radioactive for habitation, and all humans were evacuated. The
Deepwater Horizon spill was estimated to have costs of $11.2 billion actual contain-
ment expense, another $20 billion in trust funds pledged to cover damages, $1 billion
to British Petroleum for other expenses, and risk of $4.7 billion in fines, for a total
estimated $36.9 billion.1 Estimates of total economic loss at Fukushima range
widely, from $250 billion to $500 billion. About 160,000 people have been
evacuated from their homes, losing almost all of their possessions.2
The world is getting warmer, changing the environment substantially. Oil spills
have inflicted damage on the environment in a number of instances. While oil spills
have occurred for a long time, we are becoming more interested in stopping and
remediating them. In the United States, efforts are under way to reduce coal
emissions. US policies have tended to focus on economic impact. Europe has had
a long-standing interest in additional considerations, although these two entities
seem to be converging relative to policy views. In China and Russia, there are
newer efforts to control environmental damage, further demonstrating convergence
of world interest in environmental damage and control.
We have developed the ability to create waste of lethal toxicity. Some of this
waste is on a small but potentially terrifying scale, such as plutonium. Other forms
of waste (or accident) involve massive quantities that can convert entire regions
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
D. L. Olson, D. Wu, Enterprise Risk Management Models, Springer Texts in
Business and Economics, https://doi.org/10.1007/978-3-662-60608-7_15
into wasteland, and turn entire seas into man-made bodies of dead water. Siting
facilities and controlling the transmission of commodities to deal with
environmental damage lead to some of the most difficult decisions we face as a
society.
Recent U.S. issues have arisen from energy waste disposal. Nuclear waste is a
major issue from both nuclear power plants as well as from weapons dismantling.3
Waste from coal plants, in the form of coal ash slurry, has proven to be a problem as
well. The first noted wildlife damage from such waste disposal occurred in 1967
when a containment dam broke and spilled ash into the Clinch River in Virginia.4
Subsequent noted spills include Belews Lake, North Carolina in 1976, and the
Kingston Fossil Plant in Tennessee in 2008. Lemly noted 21 surface impoundment
damage cases from coal waste disposal, five due to disposal pond structural failure,
two from unpermitted ash pond discharge, two from unregulated impoundments, and
twelve from legally permitted releases.
Some waste is generated as part of someone’s plan. Other forms arise due to
accident, such as oil-spills or chemical plant catastrophes. Location decisions for
waste-related facilities are very important. Dangerous facilities have been
constructed in isolated places for the most part in the past. However, with time,
fewer places in the world are all that isolated. Furthermore, moving toxic material
safely to or from wherever these sites are compounds the problem.
Many more qualitative criteria need to be considered, such as the impact on the
environment, the possibility of accidents and spills, the consequences of such
accidents, and so forth. Finding an accurate means of transforming accident
consequences into concrete cost results is challenging. The construction of facilities and/or the
processes of producing end products involve high levels of uncertainty. Enterprise
activities involve exposure to possible disasters. Each new accident is the coinci-
dence of several causes each having a low probability taken separately. There is
insufficient reliable statistical data to accurately predict possible accidents and their
consequences.
Specific Features of Managing Natural Disasters
Problems can have the following features:
1. Multicriteria nature
Usually there is a need for decision-makers to consider more than mere cost
impact. Some criteria are easily measured. Many, however, are qualitative,
defying accurate measurement. For those criteria that are measurable, measures
are in different units that are difficult to balance. The general value of each
alternative must integrate each of these different estimates. This requires some
means of integrating different measures based on sound data.
2. Strategic nature
The time between the making of a decision and its implementation can be
great. This leads to detailed studies of possible alternative plans in order to
implement a rational decision process.
3. Uncertain and unknown factors
Typically, some of the information required for a natural disaster is missing
due to incomplete understanding of technical and scientific aspects of a problem.
4. Public participation in decision making
At one time, individual leaders of countries and industries could make individ-
ual decisions. That is not the case in the twenty-first century.
While we realize that wastes need to be disposed of, none of us want to expose
our families or ourselves to a toxic environment.
Framework
Assessing the value of recovery efforts in response to environmental accidents
involves highly variable dynamics of populations, species, and interest groups,
making it impossible to settle on one universal method of analysis. There are a
number of environmental valuation methods that have been developed. Navrud and
Pruckner5 and Damigos6 provided frameworks of methods. Table 15.1 outlines
market evaluation approaches.
There are many techniques that have been used. Table 15.1 has three categories of
methods. Household production function methods are based on relative demand
between complements and substitutes, widely used for economic evaluation of
projects including benefits such as recreational activities.
The Travel Cost Method assumes that the time and travel cost expenses incurred
by visitors represent the recreational value of the site. This is an example of a method
based on revealed preference.
Hedonic price analysis decomposes prices for market goods based on analysis
of willingness-to-pay, often applied to price health and aesthetic values. Hedonic
price analysis assumes that environmental attributes influence decisions to consume.
Thus market realty values are compared across areas with different environmental
factors to estimate the impact of environmental characteristics. Differences are
assumed to appear as willingness to pay as measured by the market. An example
of hedonic price analysis was given of work-related risk of death and worker
characteristics.7 That study used US Federal statistics on worker fatalities and
Table 15.1 Methods of environmental evaluation

Household production function methods | Revealed preference | Travel cost method
Hedonic price analysis | Revealed preference of willingness to pay | Benefit transfer method
Elicitation of preferences | Stated preference | Contingent valuation
worker characteristics obtained from sampling 43,261 workers to obtain worker and
job characteristics, and then ran logistic regression models to identify job character-
istic relations to the risk of work fatality.
Both household production function methods and hedonic price analysis utilize
revealed preferences, induced without direct questioning. Elicitation of preferences
conversely is based on stated preference, using hypothetical settings in contingent
valuation, or auctions or other simulated market scenarios. The benefit transfer
method takes results from one case to a similar case. Because household production
function and hedonic price analysis might not be able to capture the holistic value of
natural resource damage risk, contingent valuation seeks the total economic value of
environmental goods and services based on elicited preferences. Elicitation of
preferences seeks to directly assess utility, including economic utility, through
lottery tradeoff analysis or other means of direct preference elicitation.
Cost-benefit analysis is an economic approach pricing every scale to express
value in terms of currency units (such as dollars). The term usually refers to social
appraisal of projects involving investment, taking the perspective of society as a
whole as opposed to particular commercial interests. It relies on opportunity costs to
society, and indirect measures. There have been many applications of cost-benefit
analysis around the globe. It is widely used for five environmentally related
applications,8 given in Table 15.2:
The basic method of analysis is cost-benefit analysis outlined above. Regulatory
review reflects the need to expand beyond financial-only considerations to reflect
other societal values. Natural Resource Damage Assessment applies cost-benefit
analysis along with consideration of the impact on various stakeholders (in terms of
compensation). Environmental costing applies cost benefit analysis, with
requirements to include expected cost of complying with stipulated regulations.
Distinguishing features are that the focus of environmental costing is expected to
reflect a marginal value, and that marginal values of environmental services are
viewed in terms of shadow prices. Thus when factors influencing decisions change,
the value given to environmental services may also change. Environmental account-
ing focuses on shadow pricing models to seek some metric of value.
Cost-benefit analysis seeks to identify accurate measures of benefits and costs in
monetary terms, and uses the ratio benefits/costs (the term benefit-cost ratio seems
more appropriate, and is sometimes used, but most people refer to cost-benefit
analysis). Because projects often involve long time frames (for benefits if not for
costs as well), considering the net present value of benefits and costs is important.
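As a minimal sketch of discounting within a benefit-cost ratio, the following uses entirely hypothetical cash flows and an assumed 5% discount rate:

```python
# Benefit-cost ratio with discounting: each year-t flow is divided by
# (1 + r)^t before summing. All cash flows and the 5% rate are hypothetical.
def npv(flows, rate):
    """Net present value of a list of annual cash flows, year 0 first."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

benefits = [0, 30, 40, 40, 40]  # $million per year; benefits arrive late
costs = [80, 10, 5, 5, 5]       # $million per year; costs are front-loaded
rate = 0.05

ratio = npv(benefits, rate) / npv(costs, rate)
print(f"benefit-cost ratio = {ratio:.2f}")
```

Because the heavy costs fall in year 0 while benefits arrive later, the discounted ratio is lower than the undiscounted one, illustrating why net present value matters over long time frames.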
Table 15.2 Environmental evaluation methods
Project evaluation Extended cost-benefit analysis—normative
Regulatory review Metric other than currency—normative
Natural Resource Damage Assessment Stakeholder consideration—compensatory
Environmental costing Licensing analysis
Environmental accounting Ecology-oriented
We offer the following example to seek to demonstrate these concepts. Yang9
provided an analysis of 17 oil spills related to marine ecological environments. That
study applied clustering analysis with the intent of sorting out events by magnitude
of damage, which is a worthwhile exercise. We will modify that set of data as a basis
for demonstrating methods. The data is displayed in Table 15.3:
This provides five criteria. Two of these are measured in dollars. While there
might be other reasons why a dollar in direct loss might be more or less important
than a dollar lost by fisheries, we will treat these at the same scale. Hectares of
general ocean, however, might be less important than hectares of fishery area, as the
ocean might have greater natural recovery ability. We have thus at least four criteria,
measured on different scales that need to be combined in some way.
Cost-Benefit Analysis
Cost-benefit analysis requires converting hectares of ocean and hectares of fishery as
well as affected population into dollar terms. Means to do that rely on various
economic philosophies, to include the three market evaluation methods listed in
Table 15.1. These pricing systems are problematic, in that different citizens might
well have different views of relative importance, and scales may in reality involve
significant nonlinearities reflecting different utilities. But to demonstrate in simple
form, we somehow need to come up with a way to convert hectares of both types and
affected population into dollar terms.
Table 15.3 Raw numbers for marine environmental damage

Event | Direct loss ($million) | Fishery loss ($million) | Polluted ocean area (hectares) | Polluted fishery area (hectares) | Population affected (millions)
1 | 60 | 12 | 216 | 77 | 20.47
2 | 11 | 14 | 53 | 10 | 2.20
3 | 31 | 14 | 217 | 48 | 14.65
4 | 36 | 11 | 105 | 40 | 11.48
5 | 14 | 17 | 69 | 12 | 4.65
6 | 16 | 16 | 17 | 3 | 1.96
7 | 15 | 15 | 164 | 25 | 13.77
8 | 38 | 13 | 286 | 90 | 23.94
9 | 8 | 15 | 24 | 0 | 3.88
10 | 26 | 13 | 154 | 41 | 16.40
11 | 9 | 16 | 59 | 15 | 6.40
12 | 19 | 12 | 162 | 55 | 18.82
13 | 27 | 11 | 68 | 11 | 8.15
14 | 18 | 16 | 38 | 4 | 6.44
15 | 14 | 15 | 108 | 13 | 12.89
16 | 11 | 17 | 6 | 3 | 5.39
17 | 5 | 20 | 32 | 0 | 3.99
We could apply tradeoff analysis to compare relative willingness of some subject
pool to avoid polluting a hectare of ocean, a hectare of fishery, and avoid affecting
one million people. One approach is to use marginal values, or shadow prices to
optimization models. Another approach is to use lottery tradeoffs, where subjects
might agree upon the following ratios:
Avoiding 1 ha of ocean pollution equivalent to $0.3 million
Avoiding 1 ha of fishery pollution equivalent to $0.5 million
Avoiding impact on 1 million people equivalent to $6 million
Admittedly, obtaining agreement on such numbers is highly problematic. But if it
could be done, the cost of each incident is obtained by adding the second and
third columns of Table 15.3 to the fourth column multiplied by 0.3, the fifth
column multiplied by 0.5, and the sixth column multiplied by 6. This yields Table 15.4:
This provides a simple (probably misleadingly simple) means to assess relative
damage of these 17 events. By these scales, event 8 and event 1 were the most
damaging.
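The monetization rule is a simple weighted sum. The sketch below reproduces it for a few events of Table 15.3 (the remaining rows follow the same pattern), and the totals match Table 15.4:

```python
# Monetizing Table 15.3 rows: total ($million) = direct loss + fishery loss
# + 0.3 * polluted ocean hectares + 0.5 * polluted fishery hectares
# + 6 * population affected (millions). Three events shown.
rows = [
    # (event, direct, fishery, ocean_ha, fishery_ha, pop_millions)
    (1, 60, 12, 216, 77, 20.47),
    (2, 11, 14, 53, 10, 2.20),
    (8, 38, 13, 286, 90, 23.94),
]

for event, direct, fishery, ocean, fish_area, pop in rows:
    total = direct + fishery + 0.3 * ocean + 0.5 * fish_area + 6 * pop
    print(f"event {event}: {total:.2f}")
```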
Wen and Chen10 reported a cost-benefit analysis balancing economic,
ecological, and social aspects of pollution with the intent of aiding sustainable
development, national welfare, and living quality in China. They used GDP as the
measure of benefit, allowing them to use the conventional approach of obtaining a
ratio of benefits over costs. Cost-benefit analysis can be refined to include added
features, such as net present value if data is appropriate over different time periods.
Table 15.4 Cost-benefit calculations of marine environmental damage demonstration

Event | Direct loss ($million) | Fishery loss ($million) | Polluted ocean ($million) | Polluted fishery ($million) | Population affected ($million) | Total ($million)
1 | 60 | 12 | 64.8 | 38.5 | 122.82 | 298.12
2 | 11 | 14 | 15.9 | 5 | 13.2 | 59.1
3 | 31 | 14 | 65.1 | 24 | 87.9 | 222
4 | 36 | 11 | 31.5 | 20 | 68.88 | 167.38
5 | 14 | 17 | 20.7 | 6 | 27.9 | 85.6
6 | 16 | 16 | 5.1 | 1.5 | 11.76 | 50.36
7 | 15 | 15 | 49.2 | 12.5 | 82.62 | 174.32
8 | 38 | 13 | 85.8 | 45 | 143.64 | 325.44
9 | 8 | 15 | 7.2 | 0 | 23.28 | 53.48
10 | 26 | 13 | 46.2 | 20.5 | 98.4 | 204.1
11 | 9 | 16 | 17.7 | 7.5 | 38.4 | 88.6
12 | 19 | 12 | 48.6 | 27.5 | 112.92 | 220.02
13 | 27 | 11 | 20.4 | 5.5 | 48.9 | 112.8
14 | 18 | 16 | 11.4 | 2 | 38.64 | 86.04
15 | 14 | 15 | 32.4 | 6.5 | 77.34 | 145.24
16 | 11 | 17 | 1.8 | 1.5 | 32.34 | 63.64
17 | 5 | 20 | 9.6 | 0 | 23.94 | 58.54
Contingent Valuation
Contingent valuation uses direct questioning of a sample of individuals to state the
maximum they would be willing to pay to preserve an environmental asset, or the
minimum they would accept to lose that asset. It has been widely used in air and
water quality studies as well as assessment of value of outdoor recreation, wetland
and wilderness area protection, protection of endangered species and cultural heri-
tage sites.
Petrolia and Kim11 gave an example of application of contingent valuation to
estimate public willingness to pay for barrier-island restoration in Mississippi. Five
islands in the Mississippi Sound were involved, each undergoing land loss and
translocation from storms, sea level rise, and sediment. A survey instrument was
used to present subjects with three hypothetical restoration options, each restoring a
given number of acres of land and maintaining them for 30 years. The restoration
scale had three points: status quo (small-scale restoration), pre-Hurricane Camille
(medium restoration), and pre-1900 (large-scale restoration). Dichotomous-choice
questions presented subjects with bids set at no action, 50% of baseline cost,
100%, 150%, 200%, and 250%. These were all expressed as one-time payments to
compare with the level of restoration, asking for the preferred bid and thus
indicating willingness to pay.
Carson12 reported on the use of contingent valuation in the Exxon Valdez spill of
March 1989. The State of Alaska funded such a study based on results of a 39-page
survey, yielding an estimate of the American public's willingness to pay about $3
billion to avoid a similar oil spill. This compared to a different estimate based on
direct economic losses from lost recreation days (hedonic pricing) of only $4
million. Exxon spent about $2 billion on response and restoration, and paid $1 billion
in natural resource damages.
Conjoint Analysis
Conjoint analysis has been used extensively in marketing research to establish the
factors that influence the demand for different commodities and the combinations of
attributes that would maximize sales.13
There are three broad forms of conjoint analysis. Full-profile analysis presents
subjects with product descriptions with all attributes represented. This is the most
complete form, but involves many responses from subjects. The subject provides a
score for each of the samples provided, which are usually selected to be efficient
representatives of the sample space, to reduce the cognitive burden on subjects.
When a large number of attributes are to be investigated, the total number of
concepts can be in the thousands, and impose an impossible burden for the subject
to rate, unless the number is reduced by adoption of a fractional factorial. The use of
a fractional design, however, involves loss of information about higher-order
interactions among the attributes. Full-profile ratings-based conjoint analysis,
while setting a standard for accuracy, therefore remains difficult to implement if
there are many attributes or levels and if interactions among them are suspected.
Regression models with attribute levels treated as dummy variables are used to
identify the preference function, which can then be applied to products with any
combination of attributes.
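As an illustration of the dummy-variable regression step, the sketch below fits a main-effects preference function by least squares; the two attributes, their levels, and the ratings are hypothetical:

```python
import numpy as np

# Main-effects conjoint estimation: dummy-code attribute levels (one level
# per attribute held out as the baseline) and fit part-worths by least
# squares. The attributes, levels, and ratings here are hypothetical.
profiles = [
    ("small", "low"), ("small", "high"),
    ("medium", "low"), ("medium", "high"),
    ("large", "low"), ("large", "high"),
]
ratings = np.array([3.0, 2.0, 6.0, 4.5, 8.0, 6.0])

def dummies(profile):
    scale, pay = profile
    # Baseline profile: scale="small", pay="low"
    return [1.0, scale == "medium", scale == "large", pay == "high"]

X = np.array([dummies(p) for p in profiles], dtype=float)
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)

for name, b in zip(["intercept", "scale=medium", "scale=large", "pay=high"], coef):
    print(f"{name:13} {b:+.2f}")
```

The fitted part-worths can then score any combination of attribute levels, including profiles never shown to subjects.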
Hybrid conjoint models have been developed to reduce the cognitive burden.
An example is Adaptive Conjoint Analysis (ACA), which reduces the number of
attributes presented to subjects and interactively selects combinations to present
until sufficient data is obtained to classify full product profiles.
A third approach is to decompose preference by attribute importance and value
of each attribute level. This approach is often referred to as trade-off analysis, or self-
explicated preference identification, accomplished in five steps:
1. Identify unacceptable levels on each attribute.
2. Among acceptable levels, determine most preferred and least preferred levels.
3. Identify the critical attribute, setting its importance rating at 100.
4. Rate each attribute for each remaining acceptable level.
5. Obtain part-worths for acceptable rating levels by multiplying importance from
step 3 by desirability rating from step 4.
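The five-step self-explicated procedure can be sketched as follows; the attributes, importance ratings, and desirability ratings are all hypothetical:

```python
# Self-explicated preference identification following the five steps in the
# text: unacceptable levels are dropped (step 1), the critical attribute's
# importance is fixed at 100 (step 3), acceptable levels receive desirability
# ratings (step 4), and part-worths are importance * desirability (step 5).
importance = {"price": 100, "durability": 70, "color": 30}
desirability = {
    "price": {"low": 1.0, "medium": 0.6},  # "high" judged unacceptable in step 1
    "durability": {"5yr": 1.0, "2yr": 0.4},
    "color": {"blue": 1.0, "red": 0.8},
}

part_worths = {
    attr: {level: importance[attr] * d for level, d in levels.items()}
    for attr, levels in desirability.items()
}

def utility(product):
    """Total utility of a product: sum of part-worths of its chosen levels."""
    return sum(part_worths[attr][level] for attr, level in product.items())

print(utility({"price": "low", "durability": "2yr", "color": "red"}))
```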
This approach is essentially that of the simple multiattribute rating theory.14 The
limitations of conjoint analysis include profile incompleteness: the difference
between the artificial experimental environment and reality. Model specification
incompleteness recognizes the nonlinearity in real choice introduced by interactions
among attributes. Situation incompleteness considers the impact of the assumption
of competitive parity. Artificiality refers to the experimental subject weighing more
attributes than real customers consider in their purchases. Instability of tastes and
beliefs reflects changes in consumer preference.
For studies involving six or fewer attributes, full-profile conjoint methods would
be best. Hybrid methods such as Adaptive Conjoint Analysis (ACA) would be better
for more than six but fewer than 20 or 30 attributes, with up to 100 attribute levels in
total; self-explicated methods (trade-off analysis of decomposed utility models) would
be better for larger problems. The trade-off method is most attractive when there are a
large number of attributes, and implementation in that case makes it imperative to
use a small subset of trade-off tables.
Conjoint analysis usually provides a linear function fitting the data. This has been
established as problematic when consumer preference involves complex
interactions. In such contingent preference, what might be valuable to a consumer
in one context may be much less attractive in another context. Interactions may be
modeled directly in conjoint analysis, but doing so requires (a) knowing which
interactions need to be modeled, (b) building in terms to model the interaction
(thereby using up degrees of freedom), and (c) correctly specifying the alias terms
if one is using a fractional factorial design. With a full-profile conjoint analysis with
even a moderate number of attributes and levels, the task of dealing with
interactions expands the number of judgments required by subjects to impossible
levels, and it is not surprising that conjoint studies default to main-effects models in
general. Aggregate-level models can model interactions more easily, but again, the
number of terms in a moderate-sized design with a fair number of suspected
contingencies can become unmanageable. Nonlinear consumer preference functions
could arise due to interactions among attributes, as well as from pooling data to
estimate overall market response, or contextual preference.
Shin et al.15 applied conjoint analysis to estimate consumer willingness to pay for
the Korean Renewable Portfolio Standard. This standard aims at reducing carbon
emissions in various systems, to include electrical power generation, transportation,
waste management, and agriculture. Korean consumer subjects were asked to
tradeoff five attributes, as shown in Table 15.5:
There are 3⁵ = 243 combinations, clearly too many to meaningfully present to
subjects in a reasonable time. Conjoint analysis provides means to intelligently
reduce the number of combinations to present to subjects in order to obtain well-
considered choices that can identify relative preference. One sample choice set is
shown in Table 15.6:
Attributes were presented in specific measures as well as the stated percentages
given in Table 15.6. The fractional factorial design used 18 alternatives out of the
243 possible, divided into six choice sets, each including a no-change option. None
of these had a dominating alternative, thus forcing subjects to trade off among
attributes. There were 500 subjects. Selections were fed into a Bayesian mixed
logit model to provide estimated consumer preference.
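The requirement that no alternative dominate another within a choice set can be checked mechanically. A sketch, assuming the natural preference direction for each attribute (lower electricity price increase, outage, and forest damage are better; higher CO2 reduction and job creation are better):

```python
# Each policy: (price increase %, CO2 decrease %, new jobs, outage min/yr, forest km2/yr).
# Preference direction per attribute: -1 = lower is better, +1 = higher is better.
DIRECTIONS = (-1, +1, +1, -1, -1)

def dominates(p, q):
    """True if p is at least as good as q on every attribute and strictly better on one."""
    at_least = all(d * (a - b) >= 0 for d, a, b in zip(DIRECTIONS, p, q))
    strictly = any(d * (a - b) > 0 for d, a, b in zip(DIRECTIONS, p, q))
    return at_least and strictly

def has_dominating_alternative(choice_set):
    return any(dominates(p, q) for p in choice_set for q in choice_set if p != q)

# Policies 1-3 from the sample choice set of Table 15.6: no policy dominates
# another, so a subject choosing among them must trade off attributes.
table_15_6 = [(2, 7, 30000, 50, 660), (6, 5, 20000, 10, 660), (6, 7, 30000, 30, 530)]
no_dominance = not has_dominating_alternative(table_15_6)
```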
When preference independence is not present, Clemen and Reilly16 discuss
options for utility functions over attributes. The first approach is to perform direct
assessment. However, too many combinations lead to too many subject responses, as
with conjoint analysis. The second approach is to transform attributes, using
Table 15.5 Conjoint structure for Korean carbon emission willingness to pay

Attribute                  Low level             Intermediate level    High level
Electricity price          2 % increase          6 % increase          10 % increase
CO2 reduction              3 % decrease/year     5 % decrease/year     7 % decrease/year
Reduction in unemployment  10,000 new jobs/year  20,000 new jobs/year  30,000 new jobs/year
Power outage               10 min/year           30 min/year           50 min/year
Forest damage              530 km2/year          660 km2/year          790 km2/year
Table 15.6 Sample questionnaire policy choice set

Attribute                  Policy 1         Policy 2         Policy 3         Do nothing
Electricity price          2 % increase     6 % increase     6 % increase     0 increase
CO2 reduction              7 % decrease     5 % decrease     7 % decrease     0 increase
Reduction in unemployment  30,000 new jobs  20,000 new jobs  30,000 new jobs  No new jobs
Power outage               50 min/year      10 min/year      30 min/year      No decrease
Forest damage              660 km2/year     660 km2/year     530 km2/year     No reduction
measurable attributes capturing critical problem aspects. Another potential problem
is variance in consumer statement of preference. The tedium and abstractness of
preference questions can lead to inaccuracy on the part of subject inputs.17 In
addition, human subjects have been noted to respond differently depending on
how questions are framed.18
Habitat Equivalency Analysis
Habitat equivalency analysis (HEA) quantifies natural resource service losses. The
effect is to focus on restoration rather than restitution in terms of currency. It has
been developed to aid governmental agencies in the US to assess natural resource
damage to public habitats from accidental events. It calculates natural resource
service loss in discounted terms and determines the scale of restoration projects
needed to provide equal natural resource service gains in discounted terms in order to
fully compensate the public for natural resource injuries.
Computation of HEA takes inputs such as the acres of habitat injured, the baseline
level of services those acres provided, and the losses inferred, all discounted over
time. It has been applied to studies of oil spill damage to miles of stream, acres of
woody vegetation, and acres of crop vegetation.19 The underlying idea is to estimate
what it would cost to restore the level of service that is jeopardized by a damaging
event.
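A minimal sketch of the HEA arithmetic, with hypothetical injury and recovery figures: service losses and restoration gains are both expressed in discounted service units, and the restoration project is scaled so that discounted gains equal discounted losses:

```python
# Hypothetical HEA sketch; all figures are illustrative, not from an actual case.
RATE = 0.03  # annual discount rate commonly used in US natural resource damage practice

def discounted_service_units(units_per_year, rate=RATE):
    """Sum of discounted service units (e.g. acre-years) for years 0, 1, 2, ..."""
    return sum(u / (1 + rate) ** t for t, u in enumerate(units_per_year))

# Injury: 100 acres lose 40% of baseline services for 5 years, then recover fully.
lost = discounted_service_units([100 * 0.40] * 5)

# Restoration: each restored acre yields a 20% service gain per year for 20 years.
gain_per_acre = discounted_service_units([0.20] * 20)

# Scale of restoration needed so discounted gains fully compensate discounted losses.
acres_to_restore = lost / gain_per_acre
```

The same structure carries over to REA; only the units measured differ.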
Resource equivalency analysis (REA) is a refinement of habitat equivalency
analysis in that the units measured differ. It compares resources lost due to a
pollution incident to benefits obtainable from a restoration project. Compensation
is assessed in terms of resource services as opposed to currency.20 Components of
damage are expressed in Table 15.7:
Defensive costs are those needed for response measures to prevent or minimize
damage. Along with monitoring and assessment costs, these occur in all scenarios. If
resources are remediable, there are costs for remedying the injured environment as
well as temporary welfare loss. For cases where resources are not remediable,
damage may be reversible (possibly through spontaneous recovery), in which case
welfare costs are temporary. For irreversible situations, welfare loss is permanent.
Table 15.7 Resource equivalency analysis damage components21

Condition     Remedial                               Irremediable
Reversible    Defensive costs;                       Defensive costs;
              costs of monitoring & assessment;      costs of monitoring & assessment;
              remediation costs;                     interim welfare costs
              interim welfare costs
Irreversible  Defensive costs;                       Defensive costs;
              costs of monitoring & assessment;      costs of monitoring & assessment;
              remediation costs;                     permanent welfare losses
              interim welfare costs
HEA and REA both imply adoption of compensatory or complementary remedial
action, and generation of substitution costs.
Yet a third variant is the value-based equivalency method, which uses the frame of
monetary value. Natural resource damage assessment cases often call for compensation
in non-monetary, or restoration-equivalent, terms. This was the basic idea behind HEA
and REA above. Such scaling can be service-to-service, seeking resources of equivalent
value through restoration; this approach does not include individual preference.
Value-to-value scaling converts restoration projects into equivalent discounted present
value. It requires individual preference to enable pricing. This can be done with a
number of techniques, including the travel cost method of economic valuation.22
Essentially, pricing restoration applies conventional economic evaluation through
utility assessment.
Summary
The problem of environmental damage and risk assessment has come to be
recognized as critically important, reflecting the emphasis of governments and
political bodies on the urgency of controlling environmental degradation. This
chapter has reviewed a number of approaches that have been applied to support
decision making relative to project impact on the environment. The traditional
approach has been to apply cost-benefit analysis, which has long been recognized
to have issues. Most of the variant techniques discussed in this chapter are
modifications of CBA in various ways. Contingent valuation focuses on integrating
citizen input, accomplished through surveys. Other techniques focus on more
accurate inputs of value tradeoffs, given in Table 15.1. Conjoint analysis is a means
to more accurately obtain such tradeoffs, but at a high cost of subject input. Habitat
equivalency analysis modifies the analysis by viewing environmental damage in
terms of natural resource service loss.
Burlington23 reviewed natural resource damage assessment in 2002, reflecting
the requirements of the US Oil Pollution Act of 1990. The prior approach to
determining environmental liability following oil spills was found to be too time
consuming. Thus, instead of collecting damages and then determining how to spend
these funds for restoration, the focus shifted to timely, cost-effective restoration of
damaged natural resources. An initial injury assessment is conducted to determine
the nature and extent of damage. Upon completion of this injury assessment, a plan
for restoration is generated, seeking restoration to a baseline reflecting the natural
resources and services that would have existed but for the incident in question.
Compensatory restoration is assessed to reflect actions that compensate for interim
losses. A range of possible restoration actions is generated, and costs are estimated
for each. The focus is thus on the cost of actual restoration: rather than abstract
estimates of the monetary value of injured resources, the focus is on the actual cost
of restoration to baseline.
Notes
1. Smith, L.C., Jr., Smith, L.M. and Ashcroft, P.A. (2011). Analysis of environ-
mental and economic damages from British Petroleum’s Deepwater Horizon oil
spill, Albany Law Review 74:1, 563–585.
2. http://www.fairewinds.org/nuclear-energy-education/arnie-gundersen-and-helen-caldicott-discuss-the-fukushima-daiichi-meltdowns
3. Butler, J. and Olson, D.L. (1999). Comparison of Centroid and Simulation
Approaches for Selection Sensitivity Analysis, Journal of Multicriteria Deci-
sion Analysis 8:3, 146–161.
4. Lemly, A.D. and Skorupa, J.P. (2012). Wildlife and the coal waste policy
debate: Proposed rules for coal waste disposal ignore lessons from 45 years of
wildlife poisoning. Environmental Science and Technology 46, 8595–8600.
5. Navrud, S. and Pruckner, G.J. (1997). Environmental valuation – To use or not
to use? A comparative study of the United States and Europe. Environmental
and Resource Economics 10, 1–26.
6. Damigos, D. (2006). An overview of environmental valuation methods for the
mining industry, Journal of Cleaner Production 14, 234–247.
7. Scotton, C.R and Taylor, L.O. (2011). Valuing risk reductions: Incorporating
risk heterogeneity into a revealed preference framework, Resource and Energy
Economics 33, 381–397.
8. Navrud and Pruckner (1997), op cit.
9. Yang, T. (2015). Dynamic assessment of environmental damage based on the
optimal clustering criterion – Taking oil spill damage to marine ecological
environment as an example. Ecological Indicators 51, 53–58.
10. Wen, Z. and Chen, J. (2008). A cost-benefit analysis for the economic growth in
China. Ecological Economics 65, 356–366.
11. Petrolia, D.R. and Kim, T.-G. (2011). Contingent valuation with heterogeneous
reasons for uncertainty, Resource and Energy Economics 33, 515–526.
12. Carson, R.T. (2012). Contingent valuation: A practical alternative when prices
aren’t available, Journal of Economic Perspectives 26:4, 27–42.
13. Green, P.E. and Srinivasan, V. (1990). Conjoint analysis in marketing: New
developments with implications for research and practice, Journal of Marketing
54:4, 3–19.
14. Olson, D.L. (1996). Decision Aids for Selection Problems. New York: Springer-
Verlag.
15. Shin, J., Woo, J.R., Huh, S.-Y., Lee, J. and Jeong, G. (2014). Analyzing public
preferences and increasing acceptability for the renewable portfolio standard in
Korea, Energy Economics 42, 17–26.
16. Clemen, R.T. and Reilly, T. (2001). Making Hard Decisions. Pacific Grove, CA:
Duxbury.
17. Larichev, O.I. (1992). Cognitive validity in design of decision-aiding
techniques, Journal of MultiCriteria Decision Analysis 1:3, 127–138.
18. Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision
under risk, Econometrica 47, 263–291.
19. Dunford, R.W., Ginn, T.C. and Desvousges, W.H. (2004). The use of habitat
equivalency analysis in natural resource damage assessments, Ecological Eco-
nomics 48, 49–70.
20. Zafonte, M. and Hampton, S. (2007). Exploring welfare implications of
resource equivalency analysis in natural resource damage assessments, Ecolog-
ical Economics 61, 134–145.
21. Defrancesco, E., Gatto, P. and Rosato, P. (2014). A ‘component-based’ approach
to discounting for natural resource damage assessment, Ecological Economics
99, 1–9.
22. Parsons, G.R. and Kang, A.K. (2010). Compensatory restoration in a random
utility model of recreation demand, Contemporary Economic Policy 28:4,
453–463.
23. Burlington, L.B. (2002). An update on implementation of natural resource
damage assessment and restoration under OPA. Spill Science and Technology
Bulletin 7:1–2, 23–29.
Preface
Notes
Acknowledgment
Contents
1: Enterprise Risk Management in Supply Chains
Unexpected Consequences
Supply Chain Risk Frameworks
Risk Context and Drivers
Risk Management Influencers
Decision Makers
Risk Management Responses
Performance Outcomes
Cases
Models Applied
Risk Categories Within Supply Chains
Process
Mitigation Strategies
Conclusions
Notes
2: Risk Matrices
Risk Management Process
Risk Matrices
Color Matrices
Quantitative Risk Assessment
Strategy/Risk Matrix
Risk Adjusted Loss
Conclusions
Notes
3: Value-Focused Supply Chain Risk Analysis
Hierarchy Structuring
Hierarchy Development Process
Suggestions for Cases Where Preferential Independence Is Absent
Multiattribute Analysis
The SMART Technique
Plant Siting Decision
Conclusions
Notes
4: Examples of Supply Chain Decisions Trading Off Criteria
Case 1: Zhu, Shah and Sarkis (2018)1
Value Analysis
Case 2: Liu, Eckert, Yannou-Le Bris, and Petit (2019)2
Value Analysis
Case 3: Khatri and Srivastava (2016)3
Value Analysis
Case 4: Envinda, Briggs, Obuah, and Mbah (2011)4
Value Analysis
Case 5: Akyuz, Karahalios, and Celik (2015)5
Value Analysis
Conclusions
Notes
5: Simulation of Supply Chain Risk
Inventory Systems
Basic Inventory Simulation Model
System Dynamics Modeling of Supply Chains
Pull System
Push System
Monte Carlo Simulation for Analysis
Conclusion
Notes
6: Value at Risk Models
Definition
The Basel Accords
Basel I
Basel II
Basel III
The Use of Value at Risk
Historical Simulation
Variance-Covariance Approach
Monte Carlo Simulation of VaR
The Simulation Process
Demonstration of VaR Simulation
Conclusions
Notes
7: Chance-Constrained Models
Chance-Constrained Applications
Portfolio Selection
Demonstration of Chance-Constrained Programming
Maximize Expected Value of Probabilistic Function
Minimize Variance
Solution Procedure
Maximize Probability of Satisfying Chance Constraint
Real Stock Data
Chance-Constrained Model Results
Conclusions
Notes
8: Data Envelopment Analysis in Enterprise Risk Management
Basic Data
Multiple Criteria Models
Scales
Stochastic Mathematical Formulation
DEA Models
Conclusion
Notes
9: Data Mining Models and Enterprise Risk Management
Bankruptcy Data Demonstration
Software
Decision Tree Model
Logistic Regression Model
Neural Network Model
Summary
Notes
10: Balanced Scorecards to Measure Enterprise Risk Performance
ERM and Balanced Scorecards
Small Business Scorecard Analysis
ERM Performance Measurement
Data
Results and Discussion
Score Distribution
Performance
Conclusions
Notes
11: Information Systems Security Risk
Frameworks
Security Process
Best Practices for Information System Security
Supply Chain IT Risks
Value Analysis in Information Systems Security
Tradeoffs in ERP Outsourcing
ERP System Risk Assessment
Qualitative Factors
Multiple Criteria Analysis
Scores
Weights
Value Score
Conclusion
Notes
12: Enterprise Risk Management in Projects
Project Management Risk
Risk Management Planning
Risk Identification
Qualitative Risk Analysis
Quantitative Risk Analysis
Risk Response Planning
Risk Monitoring and Control
Project Management Tools
Simulation Models of Project Management Risk
Governmental Project
Conclusions
Notes
13: Natural Disaster Risk Management
Emergency Management
Emergency Management Support Systems
Example Disaster Management System
Disaster Management Criteria
Multiple Criteria Analysis
Scores
Weights
Value score
Natural Disaster and Financial Risk Management
Natural Disaster Risk and Firm Value21
Financial Issues
Systematic and Unsystematic Risk
Investment Evaluation
Strategic Investment
Risk Management and Compliance
Conclusions
Notes
14: Sustainability and Enterprise Risk Management
What We Eat
The Energy We Use
The Supply Chains that Link Us to the World
The Triple Bottom Line
Sustainability Risks in Supply Chains
Models in Sustainability Risk Management
Sustainability Selection Model
Criteria
Weight Development
Scores
Value Analysis
Conclusions
Notes
15: Environmental Damage and Risk Assessment
Specific Features of Managing Natural Disasters
Framework
Cost-Benefit Analysis
Contingent Valuation
Conjoint Analysis
Habitat Equivalency Analysis
Summary
Notes