Advantages of Quantitative Risk Management
Questions:
1. For this question you are required to make at least two (2) forum postings, arguing either for or against the quantitative method of risk assessment. You will be assessed on what you contribute to the debate in terms of quality, not quantity (though your posting should at a minimum be a few sentences long). You may either create a new thread or reply to a previous posting. All new threads should contain the subject line “Quantitative Debate”
2. Study Exhibits 61.1 and 61.2 from Reading 3, and answer the following questions:
(a) Explain in your own words what is meant by the terms Sweet Spot and Discretionary Area (see Exhibit 61.1)
(b) Explain the significance of a security decision that is located to the right of the Sweet Spot but outside the Discretionary Area (see Exhibit 61.1).
(c) Explain the significance of a security decision that is located to the left of the Sweet Spot but still inside the Discretionary Area (see Exhibit 61.1).
(d) Explain why you think the Defined Highest Acceptable Risk is located on the Sweet Spot, but the Defined Lowest Acceptable Risk is located to the right of the Sweet Spot (see Exhibit 61.2).
3. In Reading 7 for this subject, Ozier states that ‘The [ALE] algorithm cannot distinguish effectively between low frequency/high-impact threats (such as ‘fire’) and high-frequency/low impact threats (such as ‘misuse of resources’).’ Explain why this is the case. Give an appropriate example to illustrate your explanation.
4. (Note: Make sure you show ALL your working for this question)
The following threat statistics have been gathered by a risk manager. Based on these, calculate the ALE for each threat.
5. (Note: Make sure you show ALL your working for this question)
Using the figures you calculated above, determine the relative ROSI (return on security investment) for each of the same threats with the following controls in place. Remember that a single control may affect more than one threat, and you need to take this into account when calculating the ROSI. Based on your calculations, which controls should be purchased?
6. Consider the data in the two tables that appear in questions 4 and 5 above. Sometimes a control may affect the cost per incident, sometimes the occurrence frequency, and sometimes both. Why is this the case? Illustrate your answer with an example drawn from the data provided.
7. The year is 1999 and you are the risk manager for a large financial institution. You apply Jacobson’s Window model (Reading 11) to determine your company’s preferred response to the impending Y2K bug. According to the model, should you accept, mitigate, or transfer the Y2K risk? Why? Do you agree with the model’s recommendations? Why or why not?
8. (Note: Make sure you show ALL your working for this question)
You want to persuade management to invest in an automated patching system. You estimate the costs and benefits over the next five years as follows:

            Year 1    Year 2    Year 3    Year 4    Year 5
Benefits:   $2,000    $2,500    $4,000    $4,000    $4,000
Costs:      $3,000    $2,000    $750      $250      $250

Calculate the Net Present Value (NPV) for this investment. Assuming that management has set the Required Rate of Return at 10%, should the investment be made? Why or why not?
9. There are a number of qualitative risk assessment models that are available for use, such as FRAAP, OCTAVE, OWASP and CRAMM. Choose one of these models and briefly describe how risk assessment is conducted under this model. Describe an example situation where you could use this selected model. Give your assessment of the validity, or otherwise, of this risk assessment model.
Answer 1
Post 1: For the motion
“Quantitative risk assessment is one of the most widely used risk assessment methods across the globe. The main reason for this is that it provides concrete results by considering the four major sources from which risk arises: individual risk, public risk, environmental risk and employee risk. Every country has a regulatory framework that guides risk management in detail, and quantitative risk assessment gives a clearer snapshot of risk within that regulatory framework. In addition, quantifying risk enables the organization and its departments to handle each risk according to its calculated magnitude, so prompt and proportionate action can be taken against the categorised risks (Dnv.com, 2015). This also helps companies develop business solution models based on the identified and assessed risks.

The main advantage of quantitative risk management is that it allows risks to be ranked by a severity value. This severity value helps regulators and decision-makers take the actions needed to control the risk for the broader benefit of society. Most companies emphasise quantitative risk assessment because it helps them identify a potential risk and its link with the four sources identified above. A further reason is that most global risk identification and measurement manuals work with quantified values rather than qualitative instructions (Charitoudi, 2013).”
Key Areas Covered by Quantitative Risk Management
Post 2: For the motion
“Quantitative risk assessment helps manage risk by addressing five key areas: identification of high-risk areas, building client confidence, measuring the probability of success, mitigation planning, and delivering accuracy in day-to-day risk management. According to Sims (2012), identifying high-risk areas is much easier with a quantitative process than with a descriptive one; the main disadvantage of the descriptive approach is that it produces a large amount of data that takes a long time to manage. Another advantage of quantitative risk management is that it builds client confidence by showing the likelihood of occurrence and its impact on related factors, and this quantified demarcation helps clients identify and assess risks for themselves. The method also supports day-to-day risk management more efficiently: although descriptive risk assessment provides detailed information about a risk and its probable impact, it is far less practical for routine, daily risk management. Risk mitigation is a major concern for any organization, and quantitative assessment of identified risks allows mitigation action to be taken more quickly than the qualitative approach, because it supports proactive action prioritised by each risk’s probability. Considering these aspects, it is strongly recommended that risks be assessed quantitatively rather than qualitatively (Abouzakhar, 2014).”
Answer 2: Answers to the sub-questions
Sweet Spot and Discretionary Area
Every organization tries to minimise risk and the likelihood of risk events occurring in the future. To achieve a low-risk working environment, a company needs to develop a security system within the organization; however, implementing a security system is costly and takes a long time, and it incurs both one-time and recurring investments. It has also been observed in the academic literature that the effectiveness of a planned security system is broadly proportional to its development and implementation cost. Therefore, if we draw a graph with two curves, one representing risk and the other representing cost, the Sweet Spot is the point where the risk and cost curves intersect, i.e. the point at which additional security spending is just balanced by the risk it removes.

On the other hand, some costs must always be incurred to keep risk under control, and some residual risk can never be eliminated: there is a minimum achievable risk and a minimum unavoidable cost, and neither can be reduced further. The region between the minimum-cost position and the minimum-risk position defines the Discretionary Area, within which management may legitimately choose to spend more (and accept less risk) or spend less (and accept more risk). A simple numerical sketch of the intersection idea is given below.
Figure 1: Determining the Sweet Spot and Discretionary Area for cost-effective security measures
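As a minimal sketch of this idea (the curve shapes and numbers below are purely assumed for illustration and are not taken from Exhibit 61.1), the intersection of the risk and cost curves can be located numerically in Python:

```python
import math

# Purely illustrative curves (assumed shapes, not taken from the reading):
# residual risk falls as security spending rises, while cost rises with spending.
def residual_risk(spend):
    return 100 * math.exp(-0.05 * spend)   # assumed decay of risk with investment

def cost(spend):
    return spend                           # assumed: cost grows linearly with investment

# The Sweet Spot, as described above, is where the risk and cost curves intersect.
spend_levels = [s / 10 for s in range(0, 1001)]   # 0.0 .. 100.0 units of spend
sweet_spot = min(spend_levels, key=lambda s: abs(residual_risk(s) - cost(s)))
print(f"Sweet Spot at roughly {sweet_spot:.1f} units of security spend")
```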
Explanation of the significance of a security decision located to the right of the Sweet Spot but outside the Discretionary Area
A security decision that lies to the right of the Sweet Spot but outside the Discretionary Area means that the security enhancement requires a large fixed expense while having only a small further impact on risk. According to Sliwinski (2014), risk tends towards a floor that cannot be pushed any lower; in this region, higher investment therefore has a lower impact on the remaining risk. Consequently, an investment located to the right of the Sweet Spot and outside the Discretionary Area will not provide a good return to the organization (Palasinski & Bowman-Grieve, 2014).
Explanation of the significance of a security decision located to the left of the Sweet Spot but still inside the Discretionary Area
A security decision that is located to the left of the Sweet Spot but still inside the Discretionary Area means that the security enhancement requires an amount of investment that has a roughly proportional impact on risk reduction (Woo & Kim, 2014). In this region, each increment in the security measures still lowers the risk noticeably, so growing investment in managing the risk directly reduces its magnitude. Therefore, investment in any risk that lies to the left of the Sweet Spot and within the Discretionary Area will pay off for the company by reducing risk in line with the growing investment (Kizza, 2014).
Discussion on the Defined Highest Acceptable Risk and Defined Lowest Acceptable Risk
According to Palasinski & Bowman-Grieve (2014), a risk is acceptable only when the cost of addressing it is feasible for the organization. In Figure 1, the Sweet Spot is the point where the risk and investment curves intersect, which means that at this point the investment is in balance with the potential risk. For this reason, the Defined Highest Acceptable Risk is located at the Sweet Spot.
On the other hand, the Defined Lowest Acceptable Risk marks the point beyond which further risk reduction is not worth pursuing because of the investment it would require. According to Vacca (2014), when managing a risk demands investment that is out of proportion to the magnitude of the risk, that residual risk has to be tolerated as the lowest acceptable risk. To the right of the Sweet Spot, the required investment grows much faster than the corresponding reduction in risk, so it becomes very difficult to keep the investment and the magnitude of the risk in balance. Therefore, the Defined Lowest Acceptable Risk sits to the right of the Sweet Spot.
Answer 3
Ozier’s point is that the ALE algorithm cannot effectively distinguish between low-frequency/high-impact threats and high-frequency/low-impact threats. For instance, fire is a low-frequency/high-impact threat, while misuse of available resources is a high-frequency/low-impact threat, yet the algorithm can assign both the same annual loss figure (Woo & Kim, 2014). When a firm focuses on the losses a risk could cause, the Annualized Loss Expectancy (ALE) is computed as follows:
SLE (Single Loss Expectancy) = Asset Value × Exposure Factor
ALE (Annualized Loss Expectancy) = SLE × ARO (Annualized Rate of Occurrence)
The formula above estimates the exposure from a single loss, and multiplying it by the annualized rate of occurrence collapses frequency and impact into a single figure. This one-dimensional result is exactly where the problem lies: a low-frequency/high-impact threat and a high-frequency/low-impact threat can produce identical ALE values, so the final number gives no indication of which kind of threat it represents or which kind of control it calls for. Thus the ALE approach, taken on its own, fails to provide a clear picture that distinguishes the two classes of threat (Rice & AlMajali, 2014). A numerical illustration follows below.
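As a minimal numerical sketch in Python (the fire SLE matches the figure used in Answer 4; the frequencies and the misuse-of-resources figures are assumed purely for illustration):

```python
def ale(sle, aro):
    """Annualized Loss Expectancy = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

# Low-frequency / high-impact threat: fire
# (SLE of $500,000, expected roughly once every 100 years, so ARO = 0.01 -- assumed frequency)
fire_ale = ale(sle=500_000, aro=0.01)

# High-frequency / low-impact threat: misuse of resources
# (SLE of $100, expected roughly 50 times a year -- assumed figures)
misuse_ale = ale(sle=100, aro=50)

print(fire_ale, misuse_ale)  # both come to $5,000 a year, so the ALE alone cannot tell them apart
```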
Answer 4
The largest single threat in the table is fire, which has the highest cost per incident: the single loss expectancy (SLE) from a fire is valued at $500,000, so a fire causes far more damage per incident than any other threat. Flood is the second most damaging threat, with a loss of $300,000 per incident, but its estimated frequency is much lower than the others. The threat that affects the system most often is software piracy, with an estimated occurrence frequency of 52, which means it can lead to a cumulative loss that hampers the whole system and raises the overall level of malicious activity.

The computer virus threat causes a loss of $2,000 per incident, more per incident than software piracy, and occurs approximately twelve times a year, giving an annualized loss expectancy of $24,000, the second highest after software piracy. A computer virus can bring the system down and erase necessary documents. Hacker theft, which occurs about four times in every three months, involves attackers stealing important personal information and either using it themselves or distributing it unethically, causing a loss of approximately $3,500 over each three-month period. Overall, software piracy occurs more often than any other threat, while fire remains the most costly per incident. A sketch of the ALE working appears below.
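Because the original threat table is not reproduced here, the following Python sketch only shows the form of the working; the fire and flood frequencies and the piracy cost per incident are assumptions chosen to be consistent with the figures quoted above:

```python
# ALE = cost per incident (SLE) x annualized rate of occurrence (ARO)
threats = {
    # name: (cost_per_incident_usd, occurrences_per_year)
    "Fire":            (500_000, 0.01),  # ARO assumed: roughly once per 100 years
    "Flood":           (300_000, 0.02),  # ARO assumed: roughly once per 50 years
    "Computer virus":  (2_000,   12),    # about once a month, per the figures above
    "Software piracy": (500,     52),    # SLE assumed; 52 occurrences per year
}

for name, (sle, aro) in threats.items():
    print(f"{name}: ALE = ${sle * aro:,.0f} per year")
```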
Answer 5

From the above table, it can be seen that in order to control software piracy, anti-piracy protection hardware has been put in place so that software cannot be supplied to third parties without proper authorisation; with this control the incident occurs only about once in four months, and the cost to be borne is $15,000 per annum. To mitigate the effect of computer viruses and worms, antivirus software has been deployed so that the computer systems can be kept clean of viruses and other malicious items; the return on security investment (ROSI) for the computer virus control is 62%, which is above average. To tackle the information-system threat, an IDS has been installed so that hackers can be blocked from entering the systems, and the return on this investment is quite good. To protect against information theft by employees, access control has been improved so that only authorised persons can access the information; because the estimated frequency of employee information theft is low, the return on this control is not as good. To remain protected from denial-of-service attacks, a firewall has been put in place to make it hard for attackers to break in; as this threat occurs only once in ten years, only $15,000 has been invested in it, but it still delivers a better return.

Overall, the table shows that the greatest returns come from the insurance and the IDS; therefore, for battling flood and information hacking, the controls that should be purchased are the insurance and the IDS. The form of the ROSI working is sketched below.
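A minimal sketch of the ROSI working, assuming the commonly used formulation ROSI = (ALE before control − ALE after control − annual control cost) / annual control cost; the antivirus control cost here is an assumed placeholder, so the printed percentage will differ from the 62% derived from the actual table:

```python
def rosi(ale_before, ale_after, control_cost):
    """Return on Security Investment, expressed as a fraction of the control's annual cost.

    ROSI = (risk reduction delivered by the control - annual cost of the control)
           / annual cost of the control
    If one control reduces several threats, the ALE reductions are summed first.
    """
    risk_reduction = ale_before - ale_after
    return (risk_reduction - control_cost) / control_cost

# Illustrative inputs only: the computer-virus ALE before and after the antivirus control,
# using the per-incident figures quoted in Answers 4 and 6, and an ASSUMED annual
# control cost of $12,500 (the real figure comes from the question's table).
ale_before = 2_000 * 12        # $24,000 per year without the control
ale_after  = 1_300 * (12 / 5)  # $3,120 per year with the control
print(f"ROSI = {rosi(ale_before, ale_after, 12_500):.0%}")
```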
Answer 6:
If the tables in questions 4 and 5 are compared, it can be seen that the cost per incident and the occurrence frequency change depending on which control is applied; a given control may affect one, the other, or both. The reason is that controls work in different ways: a preventive control stops incidents from happening at all, which reduces the occurrence frequency, while a detective or corrective control limits the damage of incidents that still occur, which reduces the cost per incident, and some controls do both. The antivirus example illustrates this: before the control, a computer worm incident cost $2,000 and occurred about once a month; with the antivirus in place, the cost per incident fell to $1,300 and the incident occurred only about once in five months, so both factors changed at once. The effect of each factor on the ALE is sketched below.
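Putting the figures just quoted together (a minimal sketch; the monthly and five-monthly frequencies are converted to annual rates):

```python
# Computer worm/virus threat, before and after the antivirus control (figures from above)
cost_before, freq_before = 2_000, 12       # $ per incident, incidents per year
cost_after,  freq_after  = 1_300, 12 / 5   # once every five months = 2.4 incidents per year

print("Baseline ALE:               ", cost_before * freq_before)  # 24,000
print("Only frequency reduced:     ", cost_before * freq_after)   #  4,800
print("Only cost per incident cut: ", cost_after  * freq_before)  # 15,600
print("Both reduced (this control):", cost_after  * freq_after)   #  3,120
```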
Answer 7:
The Y2K problem is commonly known as the Year 2000 problem or the Millennium bug. Many computer programs written before the year 2000 were designed to represent the year with only two digits. The main concern in 1999 was that date calculations could be misinterpreted once the year rolled over to 2000 (Haimes, 2002). Most computer-services firms suspected that such a mismatch could occur at the start of the year 2000, and at the time most financial organizations faced the question of how to respond to this risk of miscalculation. As the risk manager of a large financial company in 1999, it was my duty to decide how the company should respond, so to assess the potential of the risk and choose a way of handling it I applied Jacobson’s Window model.
According to Jacobson’s Window model, the appropriate response to a risk is determined by its rate of occurrence and its consequences. According to Öğüt et al. (2010), risks that have both low occurrence and low consequences should be accepted and handled by the company itself, because experience shows that such risks have little potential to cause major damage. On the other hand, risks with higher occurrence and higher consequences should be handled by transferring them to another organization that has the capacity and capability to deal with them. A sketch of this quadrant reading is given below.
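A minimal sketch of this quadrant lookup, following the reading described above; the responses for the two mixed quadrants are assumptions added only to make the example complete:

```python
def jacobsons_window(occurrence: str, consequence: str) -> str:
    """Suggest a response for 'low'/'high' occurrence and consequence,
    following the quadrant reading used in this answer."""
    quadrants = {
        ("low", "low"):   "accept (handle internally)",
        ("high", "high"): "transfer (pass the risk to a capable third party)",
        # assumed readings of the two mixed quadrants, added only for completeness:
        ("high", "low"):  "mitigate with routine controls",
        ("low", "high"):  "transfer / insure",
    }
    return quadrants[(occurrence, consequence)]

print(jacobsons_window("high", "high"))  # the reading argued for the Y2K case below
```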
Considering the possible severity and consequences of the Y2K problem, it sits in the high-occurrence, high-consequence quadrant. Some sources argue that risks which are both high-occurrence and high-consequence rarely materialise in practice; nevertheless, given the severity of its impact, I have placed the Y2K issue in that quadrant. Y2K is a very serious problem for financial services companies: if the issue materialises at the start of the year 2000, the company and all of its customers will be badly affected, and large, uncontrolled discrepancies will appear in the company’s financial calculations.
Therefore, the company should respond immediately and transfer the risk, for example to specialist IT firms. The main reason is that transferring the risk gives the company a means of recovering its losses from the agreed counterparties if the Y2K problem actually materialises and the organization fails to respond to it on its own. As to whether I agree with the model, I do not accept the view that high-occurrence, high-consequence risks simply do not happen: issues such as the Y2K problem show that, given their severity, such risks should be kept in the high-occurrence, high-consequence quadrant until they are completely resolved (Voeller, 2014).
Answer 8: Recommendation
The calculated Net Present Value for this investment is positive, which indicates that the project clears the Required Rate of Return of 10% and should be profitable; therefore management should make the investment. The net cash flow is negative in the first year because of the initial costs, but from the second year onwards the net benefits grow and continue to increase over the remaining years, so over the five-year period the investment delivers a better return, a better patching system, and better engagement from stakeholders. Management may still need to keep the system continuously up to date so that the expected benefit is actually realised after the set period and the risk of loss is minimised. The working is sketched below.
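A sketch of the working in Python, using the benefit and cost figures from the question and discounting end-of-year net cash flows at the 10% required rate; treating Year 1 as the present instead would change the exact figure but not its sign:

```python
benefits = [2_000, 2_500, 4_000, 4_000, 4_000]
costs    = [3_000, 2_000,   750,   250,   250]
rate = 0.10  # Required Rate of Return set by management

# Discount each year's net cash flow back to the present (Year 1 discounted one period).
npv = sum((b - c) / (1 + rate) ** year
          for year, (b, c) in enumerate(zip(benefits, costs), start=1))
print(f"NPV = ${npv:,.2f}")  # positive, so the investment clears the 10% hurdle rate
```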
Answer 9: OCTAVE Model
How risk assessment is conducted under this model
According to Violino (2014), OCTAVE is one of the strongest and most formalised qualitative risk assessment models; it was originally developed by CERT. According to Rice & AlMajali (2014), the model’s strength is that it treats hardware, systems, information and people as assets of the company. The method was originally designed to conduct risk analysis for moderate-sized organisations of roughly 300 staff. The OCTAVE family also includes later variants, each developed for a different purpose.
OCTAVE Allegro, the most recent development of the OCTAVE model, streamlines the assessment into four broad steps:
- Development of risk measurement criteria consistent with the company’s mission, objectives, goals and critical success factors
- Creation of a profile for each information asset, with clearly defined boundaries, to support identification of potential risks
- Identification of threats to each asset in the context of where the information is stored and used
- Identification and initiation of risk mitigation approaches (Cert.org, 2014).
Example
The best example that can be cited is the cyber attack on J.P. Morgan. The attack compromised data relating to roughly 76 million customer households held in the company’s databases, critically affecting the company’s reputation as well as causing financial loss (Snyder, 2014). This kind of problem is well suited to analysis with OCTAVE Allegro, because the method provides a holistic analysis of all the elements that could give rise to risk, harm the company’s reputation, or cause a major financial loss. Therefore, applying this model to analyse the threat would be very valid in this case.
References
Abouzakhar, N. (2014). Cyber security for industrial control and automation systems. Ind Eng Manage, 03(05). doi:10.4172/2169-0316.s1.003
Boyson, S. (2014). Cyber supply chain risk management: Revolutionizing the strategic control of critical IT systems. Technovation, 34(7), 342-353. doi:10.1016/j.technovation.2014.02.001
Cert.org. (2014). OCTAVE | Cyber Risk and Resilience Management | The CERT Division. Retrieved 8 January 2015, from https://www.cert.org/resilience/products-services/octave/
Charitoudi, K. (2013). A Socio-Technical Approach to Cyber Risk Management and Impact Assessment. Journal Of Information Security, 04(01), 33-41. doi:10.4236/jis.2013.41005
Dnv.com. (2015). Quantitative risk assessment. Retrieved 8 January 2015, from https://www.dnv.com/industry/oil_gas/services_and_solutions/risk_management_advisory/safety_risk_management/quantitative_risk_assessment_qra/
Haimes, Y. (2002). Risk of Terrorism to Cyber-Physical and Organizational-Societal Infrastructures. Public Works Management & Policy, 6(4), 231-240. doi:10.1177/1087724X02006004001
Iasiello, E. (2014). Is Cyber Deterrence an Illusory Course of Action?. Journal Of Strategic Security, 7(1), 54-67. doi:10.5038/1944-0472.7.1.5
Kizza, J. (2014). Computer Network Security and Cyber Ethics. Jefferson N.C.: McFarland & Company, Inc., Publishers.
McDonough, W. (2007). Cyber risk and privacy liability: A click in the right direction? Journal Of Healthcare Risk Management, 27(4), 9-12. doi:10.1002/jhrm.5600270403
Öğüt, H., Raghunathan, S., & Menon, N. (2010). Cyber Security Risk Management: Public Policy Implications of Correlated Risk, Imperfect Ability to Prove Loss, and Observability of Self-Protection. Risk Analysis, 31(3), 497-512. doi:10.1111/j.1539-6924.2010.01478.x
Olcott, J., & Sills, E. (2014). Cybersecurity: Energy Industry Mobilizing for Cyber Risk Control. Nat. Gas Elec., 30(10), 20-24. doi:10.1002/gas.21761
Palasinski, M., & Bowman-Grieve, L. (2014). Tackling cyber-terrorism: Balancing surveillance with counter-communication. Security Journal. doi:10.1057/sj.2014.19
Rice, E., & AlMajali, A. (2014). Mitigating the Risk of Cyber Attack on Smart Grid Systems. Procedia Computer Science, 28, 575-582. doi:10.1016/j.procs.2014.03.070
Sims, S. (2012). Qualitative vs. Quantitative Risk Assessment. Sans.edu. Retrieved 8 January 2015, from https://www.sans.edu/research/leadership-laboratory/article/risk-assessment
Sliwinski, K. (2014). Moving beyond the European Union’s Weakness as a Cyber-Security Agent. Contemporary Security Policy, 35(3), 468-486. doi:10.1080/13523260.2014.959261
Snyder, B. (2014). 5 huge cybersecurity breaches at companies you know. Fortune. Retrieved 8 January 2015, from https://fortune.com/2014/10/03/5-huge-cybersecurity-breaches-at-big-companies/
Vacca, J. (2014). Cyber Security and IT Infrastructure Protection. Rockland, Massachusetts: Syngress.
Violino, B. (2014). IT risk assessment frameworks: real-world experience. CSO Online. Retrieved 8 January 2015, from https://www.csoonline.com/article/2125140/metrics-budgets/it-risk-assessment-frameworks–real-world-experience.html
Voeller, J. (2014). Cyber Security. Wiley.
Woo, P., & Kim, B. (2014). A Study on Quantitative Methodology to Assess Cyber Security Risk of SCADA Systems. AMR, 960-961, 1602-1611. doi:10.4028/www.scientific.net/amr.960-961.1602