Journal of Risk
ISSN:
1465-1211 (print)
1755-2842 (online)
Editor-in-chief: Farid AitSahlia
Volume 12, Number 4 (June 2010)
Editor's Letter
Farid AitSahlia
Warrington College of Business Administration, University of Florida
The financial crisis that began in 2007 shows signs of abating, but its lessons are still being learned. How should risk be measured? Although a consensus may have been reached through the Basel accords, it is mostly because of the immediate need for a practical measure. Work continues on developing and estimating alternatives to the widely used value-at-risk (VaR) measure. This issue contains two articles on this topic: one deals with capital adequacy and the other with the estimation of an alternative measure to VaR, namely expected shortfall (ES).
Once a risk measure is chosen, the main problem is that of risk management. Its twin facets, the accurate estimation of input parameters and the design of effective hedging strategies, are both addressed in the other two papers in this issue. In the first article, Jokivuolle and Peura remind us that the global financial crisis has demonstrated that a bank’s financial distress can begin well before an actual default, at a rating level that, although deteriorated, is still fairly high. To avoid the adverse consequences of such distress, a bank should hold enough capital not only against default (that is, economic capital) but also to support at least a minimum non-distressed rating at all times. This requires much higher capital buffers than standard economic capital models typically imply. Jokivuolle and Peura argue that such minimum-rating targeting may explain many banks’ actual capital levels, at least in relation to the risks that banks, regulators and rating agencies were able to anticipate and measure in the years preceding the crisis. The authors provide a two-stage simulation-based framework, set in the context of a corporate credit portfolio, in which the amount of capital needed to support a desired minimum target rating can be measured.
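The idea of scaling capital to a solvency standard can be sketched with a stylized one-factor (Vasicek-type) credit portfolio simulation. This is a minimal single-stage illustration of the general idea, not the authors' two-stage framework; the portfolio parameters (`pd`, `rho`, `lgd`) and the confidence levels are assumptions chosen for the example.

```python
import numpy as np
from statistics import NormalDist

def simulate_losses(n_sims, n_loans, pd, rho, lgd=0.45, seed=0):
    """Simulate fractional portfolio losses in a one-factor Gaussian model."""
    rng = np.random.default_rng(seed)
    thresh = NormalDist().inv_cdf(pd)             # default threshold for asset value
    z = rng.standard_normal((n_sims, 1))          # common systematic factor
    eps = rng.standard_normal((n_sims, n_loans))  # idiosyncratic shocks
    assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    return lgd * (assets < thresh).mean(axis=1)   # loss = LGD x default fraction

def capital_at(losses, conf):
    """Capital as the simulated loss quantile at solvency standard `conf`."""
    return float(np.quantile(losses, conf))

losses = simulate_losses(10_000, 500, pd=0.02, rho=0.15)
ec_default = capital_at(losses, 0.999)       # survive default at a 99.9% standard
# Stylized "minimum rating" capital: economic capital supporting the target
# rating plus a buffer absorbing a bad-year loss, hence a higher overall level.
ec_rating = capital_at(losses, 0.95) + ec_default
```

Here the rating-targeting requirement adds a loss-absorbing buffer on top of default-based economic capital, reproducing in miniature the paper's point that maintaining a minimum rating requires noticeably more capital than surviving default alone.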
The second article, by Yu et al, deals with risk measurement. The VaR concept has been widely adopted by financial regulators all over the world for designing capital adequacy standards for banks and financial institutions. In addition, financial firms have adopted VaR for internal risk management and the allocation of resources. However, its potential failure to correctly account for diversification has led to growing interest in an alternative, namely ES, the expected loss beyond VaR. In their article, Yu et al propose a non-parametric, kernel-based approach that mitigates bias in the estimation of the tail distribution. They exploit the representation of ES as an integral of the quantile function to develop a one-step kernel estimator. In a Monte Carlo study comparing their technique with existing kernel estimators, they find substantial improvements in accuracy and efficiency.
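The representation in question is ES_a = (1/(1-a)) times the integral of the quantile function Q(u) over u from a to 1. The sketch below contrasts the plain empirical estimator with a generic smoothed L-statistic version of this integral; it illustrates the kernel idea only, not the authors' specific one-step estimator, and the bandwidth choice is an assumption.

```python
import numpy as np
from math import erf, sqrt

# Standard normal CDF via the error function (stdlib only)
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

def empirical_es(losses, alpha=0.95):
    """Plain empirical ES: mean of losses at or beyond the alpha-quantile."""
    x = np.asarray(losses, dtype=float)
    var = np.quantile(x, alpha)
    return float(x[x >= var].mean())

def kernel_es(losses, alpha=0.95, h=None):
    """Smoothed L-statistic ES: weight order statistics by a kernel-smoothed
    version of the indicator 1{u > alpha} in the quantile-integral formula."""
    x = np.sort(np.asarray(losses, dtype=float))
    n = x.size
    if h is None:
        h = n ** -0.5                       # illustrative bandwidth (assumption)
    u = (np.arange(1, n + 1) - 0.5) / n     # plotting positions for Q(u)
    w = _norm_cdf((u - alpha) / h)          # smooth step replacing 1{u > alpha}
    return float(np.dot(w / w.sum(), x))
```

For standard normal losses both estimators approach the exact value ES at 95%, approximately 2.063; the smoothing spreads weight across neighbouring order statistics instead of cutting the tail off sharply at a single data point.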
The calculation of accurate futures hedge ratios is important for the practice of risk management. While naïve one-to-one and simple regression approaches are often preferred for their simplicity, the more involved generalized autoregressive conditional heteroskedasticity (GARCH) approach is often reported to be more accurate. The third paper, by McMillan and Garcia, examines the performance of hedge ratios constructed using the realized volatility approach, which is simple in construction but grounded in a rigorous theoretical base. By constructing hedged portfolios, the paper compares the simple regression, rolling simple regression, GARCH and realized hedge ratio measures. The results support the view that, in terms of minimizing portfolio variance, the static regression, rolling regression and GARCH-based methods perform well; however, this often comes at the expense of negative mean returns. When portfolio performance is measured by the Sharpe ratio, which takes both mean and variance into account, the portfolios constructed using the realized hedge ratios are almost unanimously preferred. A remaining issue, however, is that the realized hedge ratio is itself volatile, so any benefits in portfolio construction may be negated by the transaction costs incurred. Future research could therefore examine ways, such as smoothing, to allow these gains to be realized.
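For concreteness, both the regression and the realized-volatility hedge ratios reduce to covariance-over-variance calculations; only the data used differ. A minimal sketch (the function names and the simulated inputs are illustrative assumptions):

```python
import numpy as np

def ols_hedge_ratio(spot_ret, fut_ret):
    """Static regression hedge ratio: slope of spot returns on futures returns."""
    c = np.cov(spot_ret, fut_ret)
    return c[0, 1] / c[1, 1]

def realized_hedge_ratio(intraday_spot, intraday_fut):
    """Realized hedge ratio for one period: realized covariance over
    realized variance, built from intraday returns."""
    return np.sum(intraday_spot * intraday_fut) / np.sum(intraday_fut ** 2)

def hedged_variance(spot_ret, fut_ret, h):
    """Variance of the hedged position: spot minus h times futures."""
    return float(np.var(spot_ret - h * fut_ret))
```

Minimizing `hedged_variance` over h recovers the OLS ratio; the realized version recomputes the same ratio period by period from intraday data, which is why it tracks changing conditions but is itself volatile.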
The fourth and final article, by Maller et al, deals with parameter estimation in the context of portfolio diversification. The Sharpe ratio is one of the most common and most important measures of the return–risk tradeoff of a portfolio. In practice the ratio must be estimated from returns data, and it is well known that the corresponding sampling error transmits to the Sharpe ratio itself. Generalizing earlier analyses, Maller et al obtain for the first time, under very general conditions and in a definitive and highly usable form, the large-sample distribution of the estimated maximal Sharpe ratio. This distribution represents the spectrum of possible optimal return–risk tradeoffs that can be constructed from the data ex ante. Among its many possible uses, the authors discuss a particular example: whether it is better to select, ex ante, a suboptimal portfolio from a large class of assets or to perform a Markowitz optimization on a subset of the assets. They illustrate applications of the theory by analyzing a large sample of US companies, comparing constant correlation and momentum strategies with the optimal strategy. Simulations based on these data are also provided for illustration.
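The estimated maximal Sharpe ratio has a well-known closed form in the Markowitz setup: for excess-return mean vector mu and covariance matrix Sigma it equals the square root of mu' Sigma^-1 mu. A sketch of the sample estimator (the data step in the test is purely illustrative, not the paper's dataset):

```python
import numpy as np

def max_sharpe_ratio(returns):
    """Sample maximal Sharpe ratio sqrt(mu' Sigma^-1 mu) from a
    T x N matrix of (excess) returns."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    return float(np.sqrt(mu @ np.linalg.solve(sigma, mu)))

def tangency_weights(returns):
    """Weights attaining the maximum, normalized to sum to one
    (assumes the unnormalized weights do not sum to zero)."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    w = np.linalg.solve(sigma, mu)
    return w / w.sum()
```

In-sample, this quantity dominates the Sharpe ratio of any fixed portfolio of the same assets, which is exactly why its sampling distribution matters: optimizing on noisy estimates tends to overstate the ratio achievable ex post.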