Journal of Operational Risk

Marcelo Cruz

Editor-in-chief

Welcome to the second issue of Volume 19 of The Journal of Operational Risk.

Many readers of this journal will be familiar with the CrowdStrike outage of July 2024, which occurred a few weeks before this letter was written. Since the outage, risk managers have taken considerable heat because most banks did not previously consider CrowdStrike a “critical vendor” (as noted in the Risk.net article “CrowdStrike outage spurs rethink on ‘critical’ vendors”). It seems that the challenges in cyber security and third-party (and fourth-party) risk management are outpacing risk managers’ abilities to meet them, as banks have on occasion been caught failing to classify their third-party providers properly (another example, discussed in the same Risk.net article, is the ransomware attack suffered by ION Group in 2023). However, in industry practitioners’ defense, tackling such challenges is an almost impossible task: it is very difficult these days to identify every vendor whose failure could cause “level 1” damage, ie, disruption to regular day-to-day functioning. Given the importance of this issue to the industry, we particularly welcome the submission of papers discussing it.

Other areas in which we would be interested to see more papers submitted include the application of machine learning (ML) and artificial intelligence (AI) techniques; cyber and IT risks (both their quantification and better ways to manage them); and enterprise risk management (ERM) and everything this broad subject encompasses (eg, establishing risk policies and procedures, implementing firmwide controls, risk aggregation, revamping risk organizations, internal audit). Analytical papers on operational risk measurement are also welcome, particularly those that focus on stress testing and managing operational risk.

These are certainly exciting, perhaps even worrying, times! The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of OpRisk discussions and we welcome papers that shed light on all of the above topics.

RESEARCH PAPERS

In the first paper in this issue, “How is risk culture conceptualized in organizations? The pan-industry risk culture model”, Roger Noon notes that risk culture is a well-established concept, regarded by both boards and executive management as key to effective risk management and critical to long-term corporate health. However, in the author’s opinion, the proactive management of a financial institution’s risk culture is still immature as a discipline, as reflected by the lack of consensus in the literature around what the concept involves, a lack of sophistication in assessing its health or strength, and ongoing challenges in effectively addressing weaknesses. Against this background, Noon’s study uses the conceptual encounter methodology to develop a pan-industry risk culture (PIRC) model based on the experiences of risk professionals across a range of industry sectors. The model is informed by complex-systems theory and an appreciation of organizational subcultures, and it describes what “good” looks like by defining three stages of organizational maturity. While the model incorporates well-understood attributes, such as leadership, accountability and incentives, Noon aims to broaden understanding through the inclusion of more novel aspects: the alignment of risk culture with strategy, the role of middle managers, the consideration of unintended behavioral consequences, and the influence of an internal risk culture capability.

In “Integrating internal and external loss data via an equivalence principle”, the issue’s second paper, Ruben D. Cohen, Jia Lu and Jonathan Humphries develop an approach to address the common and troublesome issue of data scarcity in operational risk analysis and modeling. Their method supplements internal loss data with external loss data, combining the two data sets via a principle of equivalence that links the loss count with the time horizon through the loss frequency. The application of this principle, as the authors describe, merges the internal and external loss data, enabling longer-term loss projections such as those needed for scenario analysis, capital planning and stress testing. The beauty of their method is that this is accomplished in a logical and transparent way, without the conventional modeling apparatus of writing complicated code to fit distributions to the data and then performing numerical computations and simulations.
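To give a flavor of the idea, here is a minimal sketch in our own notation (an illustration only, not the authors’ exact construction, which is set out in the paper): a data set with loss count N observed over horizon T implies a loss frequency λ = N/T, and the external data can be expressed in internal “time units” by equating frequencies.

```latex
% A minimal sketch of an equivalence-style merge, in our own notation
% (an illustration, not the authors' exact formulation).
\[
  \lambda_{\mathrm{int}} = \frac{N_{\mathrm{int}}}{T_{\mathrm{int}}},
  \qquad
  T_{\mathrm{ext}}^{\mathrm{eq}} = \frac{N_{\mathrm{ext}}}{\lambda_{\mathrm{int}}},
\]
\[
  \text{merged sample: } N_{\mathrm{int}} + N_{\mathrm{ext}}
  \text{ losses observed over }
  T_{\mathrm{int}} + T_{\mathrm{ext}}^{\mathrm{eq}}.
\]
```

Under this reading, the external losses are treated as if they had accrued at the internal frequency, which is what extends the effective observation horizon and supports the longer-term projections the authors have in mind.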

Our third paper, “Do government audits raise the risk awareness of management? An investigation from the perspective of cost variability” by Zhoutianyang Sun and Jia Li, uses a double-difference (ie, difference-in-differences) research design to examine the impact of government audits on the cost variability of state-owned enterprises in China. The empirical results show that, after the implementation of government audits, the cost variability of listed companies controlled by audited state-owned enterprises increases significantly, indicating that government audits raise management’s risk awareness. A heterogeneity analysis shows that the effect of government audits on cost variability occurs mainly in enterprises with higher operational risks and a cost-leadership strategy, consistent with government audits working through the channel of strengthened management risk awareness. Further, the authors show that the effect of government audits on enterprise operational risks occurs mainly in samples with greater cost variability, and a three-step mechanism test confirms that cost variability is part of the mediating mechanism through which government audits reduce the operational risks of state-owned enterprises. Sun and Li’s study enriches the literature on the impact of government audits on the operational risks of state-owned enterprises and provides empirical support for the use of government audits in promoting their high-quality development.
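For readers less familiar with the double-difference design, the sketch below shows its basic shape on simulated data; the column names, treatment dates and simulated effect are entirely hypothetical and are not taken from Sun and Li’s study.

```python
# A minimal difference-in-differences sketch on simulated firm-year data.
# All names and numbers are hypothetical; this is not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 100, 8
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2012, 2012 + n_years), n_firms),
})
df["audited"] = (df["firm"] < 50).astype(int)  # treated group: audited SOEs
df["post"] = (df["year"] >= 2016).astype(int)  # periods after the audit
# Simulate the outcome so that cost variability rises for audited firms
# after the audit (a 0.3 treatment effect on top of noise).
df["cost_variability"] = (
    0.5 + 0.3 * df["audited"] * df["post"] + rng.normal(0, 0.2, len(df))
)

# The double-difference estimate is the coefficient on the interaction term,
# with standard errors clustered at the firm level.
model = smf.ols("cost_variability ~ audited * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(model.summary().tables[1])
```

The coefficient on the audited:post interaction is the double-difference estimate: the change in cost variability for audited firms over and above the change experienced by unaudited firms in the same period.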

Finally, in “Natural language processing-based detection of systematic anomalies among the narratives of consumer complaints”, Peiheng Gao, Ning Sun, Xuefeng Wang, Chen Yang and Ričardas Zitikis develop a natural language processing (NLP)-based procedure for detecting systematic nonmeritorious consumer complaints (which they simply call systematic anomalies) among complaint narratives. While classification algorithms are used to identify meritorious complaints, these algorithms may falter on smaller but frequent systematic patterns of nonmeritorious complaints for a variety of reasons, such as technical issues or the natural limitations of human analysts. Therefore, at the stage after classification, the authors’ procedure converts complaint narratives into quantitative data, which are then analyzed using indexes designed to detect systematic anomalies. The authors illustrate the whole procedure using complaint narratives from the Consumer Complaint Database of the US Consumer Financial Protection Bureau, applying logistic regression, support vector machine, gradient boosting, multilayer perceptron, random forest and naive Bayes algorithms. Their results suggest that the support vector machine outperforms the other selected classifiers. Although the classification results obtained with the Valence Aware Dictionary and sEntiment Reasoner (VADER) intensity (pertinent to the featurization step) have lower accuracy, they contain fewer nonmeritorious complaints than results obtained without the VADER intensity. Gao et al’s procedure could be applied to consumer complaints to identify those that should receive a higher priority for relief.
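As a rough illustration of the classification stage only (the anomaly-detection indexes are the paper’s contribution and are not reproduced here), the sketch below trains a support vector machine on toy narratives and computes a VADER intensity score per narrative; the data, labels and variable names are all hypothetical.

```python
# A minimal sketch of complaint-narrative classification with a VADER score.
# Toy data and labels; not the authors' pipeline.
# Requires: pip install scikit-learn vaderSentiment
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

narratives = [
    "The bank charged an unauthorized fee and refused to refund it.",
    "I am unhappy in general but have no specific issue with the account.",
    "My mortgage payment was misapplied three months in a row.",
    "Just venting about long hold times; nothing was actually wrong.",
]
meritorious = [1, 0, 1, 0]  # hypothetical labels

# Featurization: TF-IDF terms; VADER supplies a compound sentiment
# intensity per narrative (in [-1, 1]) that a fuller pipeline could
# append to the feature matrix before fitting.
analyzer = SentimentIntensityAnalyzer()
vader_scores = [analyzer.polarity_scores(t)["compound"] for t in narratives]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(narratives, meritorious)
print(clf.predict(narratives), vader_scores)
```

In a fuller pipeline, the classifier’s output would then be converted into the quantitative series on which the authors’ systematic-anomaly indexes operate.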
