I am going to post the beginning of my research into which methods used in risk management and decision making for cyber security have actually been proven to work. Every industry standards organization and consulting firm I have encountered thus far provides a framework and methodology but plans on you being the guinea pig for their ideas. I’ve had the opportunity to ask the owners of these consulting firms, including those that call themselves quants, how they know their methods work.

In the field of “risk” this is a daunting question because you’re not permitted to take a pragmatic approach. That is, your organization has a tiny sample size. Saying “We’ll never be breached because we haven’t been breached in our 100 years of operation” is like saying “My grandfather is 100 years old; the odds of him dying now must therefore be slim to none!”

I get it. It takes some effort to “prove” your method works. But that really isn’t acceptable when people have already done the research. This dilemma is no mystery in the scientific academic community. In academia you take what you do know and you don’t pretend to know anything else. We can take what we DO know from the past 50+ years of research in Judgment and Decision Making Psychology, Social Psychology, Cognitive Science, Operations Research, Actuarial Science, and Probability and Statistics, and create a framework for decision making the way scientists do: one that we fully expect to update as more research findings become available.
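To make the small-sample-size problem concrete, here is a minimal sketch (my illustration, not something from any of these frameworks) using the statisticians' "rule of three": when zero events have been observed in n independent trials, an approximate 95% upper confidence bound on the per-trial event probability is 3/n. A century without a breach does not make the breach probability negligible; it merely bounds it.

```python
def rule_of_three_upper_bound(n_trials: int) -> float:
    """Approximate 95% upper confidence bound on the per-trial event
    probability after observing ZERO events in n_trials independent
    trials (the classical "rule of three")."""
    if n_trials <= 0:
        raise ValueError("n_trials must be positive")
    return 3.0 / n_trials

# 100 breach-free years are statistically consistent with an annual
# breach probability as high as roughly 3% -- hardly "slim to none".
print(rule_of_three_upper_bound(100))  # 0.03
```

The independence assumption is itself generous here; correlated years (same staff, same architecture, same attackers) would make the honest bound even wider.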
You may be thinking “Well, isn’t that what businesses, standards organizations, Institutes of X, Y and Z, do to create their frameworks?” (Hint: no)
For anyone who works in business, risk management, or decision research, or holds an advisory role as a quant, this is not news. The realization that most business, government, military, and policy decisions do not take a scientific approach is especially painful to “academics” who have made the transition into “industry”. There are fascinating studies into the why and how, provided by psychology and organizational sociology researchers. That is an area I hope to explore further in this blog: not as the primary researcher, but as one of the few responsible decision makers who take into account the very important work of academics.
Some academic research is not credible. Some is terribly biased. Some academic literature depends on assumptions that were proven false many years prior, but the researcher performed an incomplete literature review before publishing, and the journal’s editor likewise failed to do their job. Some say we can rely on the Impact Factor when vetting academic journals. These are all things we need to explore here. We need to be critical from the root of our problem and leave no stone unturned. We need to question all basic assumptions, or we too will become the snake oil salesman who is unaware that what they sell is snake oil.