STATEMENT OF THE PROBLEM

 

Definition of the Problem

 

Research by Nobel laureate Daniel Kahneman, Amos Tversky, and other judgment and decision-making (JDM) psychologists found that humans are poor estimators of uncertainty, regardless of their field of work or level of experience (Kahneman et al., 1972). Later studies confirmed these findings (Onkal et al., 2003; Soll & Klayman, 2004; Speirs-Bridge et al., 2010), and researchers found that experience and level of training were only weakly related to performance (Camerer & Johnson, 1991; Burgman et al., 2011). Yet reliance on experts for decision making in the presence of uncertainty is common in many fields, including ecology (McBride et al., 2012), weather forecasting (Murphy & Winkler, 1984), accounting (Ashton, 1974), finance (Onkal et al., 2003), clinical medicine (Christensen-Szalanski et al., 1982), psychiatry (Oskamp, 1965), engineering (Jorgensen et al., 2004), and information security (Kouns & Minoli, 2010). Surveys found that managers do not know whether their successes and failures result from their experts' guidance (Hilborn & Ludwig, 1993; Sutherland, 2006; Roura-Pascual et al., 2009). The observations made in these studies suggest that relying on experts in risk assessment may yield a false measurement of risk.

Methods for eliciting knowledge from experts while minimizing bias have been developed and tested in multiple disciplines. These methods reduced bias and enabled management to measure the accuracy and precision of their experts (McBride et al., 2012; Bolger & Onkal-Atay, 2004; Lichtenstein et al., 1982; Clemen & Winkler, 1999). The objective of this capstone is to identify and evaluate expert knowledge elicitation methods that address the fallibility of human estimation. What methods are available to treat this observed human error? What are the criticisms of these methods? Can methods used in other disciplines be applied to the field of information security?

Evidence Justifying the Problem

 

Periodic assessments of risk to data-handling assets are standard practice for regulated organizations such as insurance companies, financial institutions, and medical offices (Kouns & Minoli, 2010). In the United States, banks may be examined by the FDIC, credit unions by the NCUA, medical-record-handling organizations by regulators that enforce HIPAA, and organizations that handle credit card information by bodies that enforce PCI-DSS. Organizations fund entire departments to keep up with these requirements: compliance, risk management, and governance personnel all support some form of regular risk assessment (Kouns & Minoli, 2010). Risk assessments aim to identify threats and then systematically consider the probability and the impact of each perceived threat being exploited (Kouns & Minoli, 2010). Where scientific data are difficult to obtain for these determinations, organizations request estimates from experts instead (Kuhnert et al., 2010). These experts are sometimes called advisors, consultants, senior analysts, or subject matter experts.

Select studies showed that biases rendered expert estimates inaccurate in the majority of trials. In those studies, experts were overconfident most of the time: when they expressed 80% confidence that their estimates contained the correct answer, they were correct only 49-65% of the time (Kahneman, Slovic, & Tversky, 1982; McBride, Fidler, & Burgman, 2012; McBride, 2012; Onkal et al., 2003; Soll & Klayman, 2004; Speirs-Bridge et al., 2010). McBride et al., who believed that expert elicitation methods deserved more empirical scrutiny than they had historically received, compared experts' quantitative predictions with the known outcomes of the predicted events, and also compared the experts' judgments with those of students training in the same fields. The students displayed near-perfect awareness of their own uncertainty, while the experts did not (McBride, Fidler, & Burgman, 2012). The research showed no consistent relationship between performance and years of experience, publication record, or self-assessed expertise, and McBride et al.'s experiments with bias-reducing methods showed that experts needed special training in order to communicate their knowledge accurately (McBride, Fidler, & Burgman, 2012). This may suggest that critical decisions in the military, government, medical, financial, and critical infrastructure sectors rest on estimates expressed as 80% certain or higher when they are more like 40-60% certain.
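To make the overconfidence finding concrete, the short sketch below scores a set of elicited 80% credible intervals against known outcomes, which is the kind of comparison the calibration studies above describe. It is an illustration only, not a procedure from the cited experiments; the function name and the data values are hypothetical.

def hit_rate(intervals, truths):
    """Return the fraction of [low, high] intervals that contain the true value."""
    hits = sum(low <= truth <= high
               for (low, high), truth in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical elicitation: each interval was stated with 80% confidence
# that it would contain the true value of a later-observable quantity.
intervals = [(10, 20), (5, 9), (30, 55), (2, 4), (100, 140)]
truths = [12, 11, 60, 3, 120]  # outcomes observed after the fact

observed = hit_rate(intervals, truths)
print(f"Nominal confidence: 80%, observed hit rate: {observed:.0%}")
# An observed hit rate well below the nominal 80% (60% here) is the
# overconfidence pattern the cited studies report.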

Deficiencies in Evidence

 

Minimal research is available that evaluates the different methods of collecting and processing estimates provided by experts. Even fewer studies specifically evaluate these methods as used by information security professionals and the estimates they produce. Standards organizations provide guidance on information security risk management, which involves assessment by experts, but none of that guidance instructs practitioners on how to address human bias or explains why its methods should work.

Defining the Audience

 

Executive decision makers, risk management personnel, and information security leaders and analysts may benefit from this study, as it reviews methods, and criticisms of those methods, that they or their risk departments may be using with a confidence the research suggests is unwarranted. Experts interested in accurately communicating their knowledge may also benefit, since methods for increasing accuracy are discussed. Finally, individuals interested in the psychology of human judgment and decision making may find this a valuable collection of peer-reviewed articles on the systematic reduction of human bias.

Next section: Literature Review

References