Fast stochastic forward sensitivities in Monte Carlo simulations using stochastic automatic differentiation (with applications to initial margin valuation adjustments)
Need to know
- In this paper, we apply stochastic (backward) algorithmic differentiation to calculate stochastic forward sensitivities, ie, the random variables representing sensitivities at future points in time.
- A typical application of stochastic forward sensitivities is the exact calculation of an initial margin valuation adjustment (MVA), assuming that the initial margin is determined from a sensitivity-based risk model.
- We demonstrate that these forward sensitivities can be obtained in a single stochastic automatic differentiation sweep. Our test case generates 5 million sensitivities in seconds.
Abstract
In this paper, we apply stochastic (backward) automatic differentiation to calculate stochastic forward sensitivities. A forward sensitivity is a sensitivity at a future point in time, conditional on future states (ie, it is a random variable). A typical application of stochastic forward sensitivities is the exact calculation of an initial margin valuation adjustment, assuming the initial margin is determined from a sensitivity-based risk model. The ISDA Standard Initial Margin Model is an example of such a model. We demonstrate that these forward sensitivities can be obtained in a single stochastic (backward) automatic differentiation sweep with an additional conditional expectation step. Although the additional conditional expectation step represents a burden, it enables us to utilize the expected stochastic (backward) automatic differentiation: a modified version of the stochastic (backward) automatic differentiation. As a test case, we consider a hedge simulation requiring the numerical calculation of 5 million sensitivities. This calculation, showing the accuracy of the sensitivities, requires approximately 10 seconds on a 2014 laptop. However, in real applications the performance may be even more impressive, since 90% of the computation time is consumed by the conditional expectation regression, which does not scale with the number of products.
1 Introduction
We consider a Monte Carlo simulation of state variables $X(t) = (X_1(t), \dots, X_n(t))$ (possibly given by an Euler discretization of a stochastic differential equation (SDE), eg, a London Interbank Offered Rate (Libor) market model), modeled over a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, \mathbb{Q})$. Here, the $X_i$ are our model primitives (and adapted processes). The model is usually specified by model parameters and the initial values $X(t_0)$.
Let $V(t)$ denote the time-$t$ value of a financial product (eg, a derivative) under the given model. Then, $\partial V(t_0)/\partial X_i(t_0)$ is called the sensitivity of $V$ with respect to $X_i(t_0)$.
Likewise, $\partial V(t_j)/\partial X_i(t_j)$ is called the time-$t_j$ forward sensitivity of $V$ with respect to $X_i$. That is, the forward sensitivities are random variables representing the on-path sensitivities of $V(t_j)$ with respect to the initial values $X_i(t_j)$. (Although the exposition only considers sensitivities with respect to initial values (ie, deltas), sensitivities with respect to model parameters (eg, vegas) are also covered, because they can formally be seen as additional components of the vector-stochastic process.)
The numerical valuation of forward sensitivities in a Monte Carlo simulation is demanding due to two aspects.
- In a Monte Carlo simulation, the value $V(t_j)$ is often hard to obtain. Instead, we often deal with random variables $V$ such that $V(t_j)$ is the time-$t_j$ conditional expectation of $V$, ie, $V(t_j) = \mathbb{E}(V \mid \mathcal{F}_{t_j})$ (up to a numéraire scaling; see (2.2) below). (An example for $V$ is the sum of the discounted future cashflows of a swap.)
- The numerical valuation of forward sensitivities via a standard finite-difference approximation of the partial derivative (bump-and-revalue) would require a huge number of revaluations, namely one for each time $t_j$ and each path $\omega$.
To solve the first issue, one may utilize analytic formulas or analytic approximations for $V(t_j)$ in terms of $X(t_j)$. If this is not possible, one usually relies on estimation methods (regression, local regression, etc). These methods, sometimes referred to as American Monte Carlo, are well elaborated and fairly standard (see Fries 2007).
To solve the second issue, one may also utilize analytic formulas. If this is not possible, one may rely on numerical methods, eg, automatic (algorithmic) differentiation (AAD; see Capriotti and Giles 2011; Giles and Glasserman 2006; Homescu 2011).
However, a subtle problem is that, for products that involve stochastic operators (like conditional expectations) in their valuation (eg, Bermudan options), the direct application of backward AAD appears to be nontrivial (see Antonov 2017; Capriotti et al 2016). In Fries (2017b), the automatic differentiation of such products was greatly simplified by using expected stochastic AAD.
In the following, we first reformulate the calculation of the forward sensitivities such that they are represented by a single backward automatic differentiation; we then show that expected stochastic AAD can be used for forward sensitivities as well.
2 Stochastic automatic differentiation for forward sensitivities
Stochastic automatic differentiation (Fries 2017b) is the reformulation of the automatic differentiation algorithms for random variables, with a special treatment of some stochastic operators.
The algorithm allows us to numerically calculate the derivative
$$\frac{\partial y}{\partial x},$$
where $x$ and $y$ are random variables.
In the notation of Fries (2017b), the backward automatic differentiation is derived for an algorithm calculating $y$ depending on some inputs $x_0, \dots, x_m$, ie,
$$y \;=\; f(x_0, \dots, x_m),$$
where the $x_k$ and $y$ are (discretized) random variables, by considering intermediate results, ie, calculation steps. Let $x_0, \dots, x_N$ with $y = x_N$, where, for $k > m$,
$$x_k \;=\; f_k\bigl(x_{k_1}, \dots, x_{k_{n_k}}\bigr), \tag{2.1}$$
denote intermediate results, where $f_k$ is an operator of $n_k$ arguments specified by the argument index list $(k_1, \dots, k_{n_k})$.
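To make the backward propagation over the operator list concrete, the following is a minimal, self-contained sketch in Java of such a calculation graph over (discretized) random variables. All class and method names are illustrative assumptions, not the finmath-lib API: each node stores its per-path values and the pathwise partials with respect to its arguments, and a single backward sweep accumulates the per-path adjoints $\partial x_N/\partial x_k$ for every node.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of (2.1): intermediate results x_k = f_k(x_{k_1}, ..., x_{k_{n_k}}),
// with random variables represented as vectors of per-path samples.
class StochasticAAD {
	static class Node {
		final double[] values;      // per-path samples of x_k
		final int[] arguments;      // argument index list (k_1, ..., k_{n_k}); null for inputs
		final double[][] partials;  // partials[j][p] = (d f_k / d x_{k_j}) on path p
		Node(double[] values, int[] arguments, double[][] partials) {
			this.values = values; this.arguments = arguments; this.partials = partials;
		}
	}

	final List<Node> nodes = new ArrayList<>();

	int input(double[] values) { nodes.add(new Node(values, null, null)); return nodes.size() - 1; }

	int add(int a, int b) {   // x_k = x_a + x_b
		int paths = nodes.get(a).values.length;
		double[] v = new double[paths], da = new double[paths], db = new double[paths];
		for (int p = 0; p < paths; p++) {
			v[p] = nodes.get(a).values[p] + nodes.get(b).values[p]; da[p] = 1.0; db[p] = 1.0;
		}
		nodes.add(new Node(v, new int[] { a, b }, new double[][] { da, db })); return nodes.size() - 1;
	}

	int mult(int a, int b) {  // x_k = x_a * x_b
		int paths = nodes.get(a).values.length;
		double[] v = new double[paths], da = new double[paths], db = new double[paths];
		for (int p = 0; p < paths; p++) {
			v[p] = nodes.get(a).values[p] * nodes.get(b).values[p];
			da[p] = nodes.get(b).values[p]; db[p] = nodes.get(a).values[p];
		}
		nodes.add(new Node(v, new int[] { a, b }, new double[][] { da, db })); return nodes.size() - 1;
	}

	// Single backward sweep: per-path adjoints (d x_N / d x_k) for all nodes k at once.
	double[][] backward() {
		int paths = nodes.get(0).values.length;
		double[][] adjoint = new double[nodes.size()][paths];
		java.util.Arrays.fill(adjoint[nodes.size() - 1], 1.0);
		for (int k = nodes.size() - 1; k >= 0; k--) {
			Node node = nodes.get(k);
			if (node.arguments == null) continue;
			for (int j = 0; j < node.arguments.length; j++)
				for (int p = 0; p < paths; p++)
					adjoint[node.arguments[j]][p] += adjoint[k][p] * node.partials[j][p];
		}
		return adjoint;
	}
}
```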
Returning to our application of the calculation of forward sensitivities, we have that our model primitives $X_i(t_j)$ are intermediate results of the Monte Carlo simulation, ie, $X_i(t_j) = x_k$ for some $k$. However, we are not in the situation of considering a single dependent variable $y$. Instead, we are interested in the differentiation of the value $V(t_j)$ for every time $t_j$.
We now assume that the valuation of the derivative is given by random variables $C(t_j)$ with
$$V(t_0) \;=\; N(t_0)\,\mathbb{E}\Biggl(\sum_{j=0}^{n} \frac{C(t_j)}{N(t_j)}\Biggm|\mathcal{F}_{t_0}\Biggr), \tag{2.2}$$
where the $C(t_j)$ are $\mathcal{F}_{t_j}$-measurable random variables.
This representation is very common in Monte Carlo valuations, where the $C(t_j)/N(t_j)$ are numéraire-relative future cashflows and $N$ is the numéraire. (For example, the implementation design in finmath.net (n.d.-b) is such that, in a Monte Carlo simulation, all financial derivatives are represented in this form.)
To achieve the valuation of all forward sensitivities in a single automatic differentiation step, we assume (or observe) that
$$\frac{\partial C(t_j)}{\partial X_i(t_k)} \;=\; 0 \quad \text{for } t_j < t_k. \tag{2.3}$$
The property (2.3) is natural, since a time-$t_j$ cashflow cannot depend on a later, ie, time-$t_k$ with $t_k > t_j$, model variable; otherwise, $C(t_j)$ would not be $\mathcal{F}_{t_j}$-measurable. Hence, we can define the random variable
$$V \;:=\; \sum_{j=0}^{n} \frac{C(t_j)}{N(t_j)} \tag{2.4}$$
(note that then $V(t_0) = N(t_0)\,\mathbb{E}(V \mid \mathcal{F}_{t_0})$) and get
$$\mathbb{E}\Bigl(\frac{\partial V}{\partial X_i(t_k)}\Bigm|\mathcal{F}_{t_k}\Bigr) \;=\; \frac{\partial}{\partial X_i(t_k)}\,\frac{V(t_k)}{N(t_k)}$$
for all state variables $X_i(t_k)$. (The equality may require additional regularity assumptions in general, but it is trivial for a Monte Carlo simulation on a discrete sample space.)
The additional term arising from the differentiation of the factor $1/N(t_k)$ is just the theta of the derivative with respect to the numéraire accrual account. In cases where the numéraire is a model primitive state variable, eg, with $\partial N(t_k)/\partial X_i(t_k) = 0$, and/or we are not interested in the theta, we get the even simpler expression
$$\frac{\partial V(t_k)}{\partial X_i(t_k)} \;=\; N(t_k)\,\mathbb{E}\Bigl(\frac{\partial V}{\partial X_i(t_k)}\Bigm|\mathcal{F}_{t_k}\Bigr).$$
As we will show in Section 2.2.1, we may replace the conditional expectation of $V$ by an unconditional expectation of $V$ and have
$$\frac{\partial V(t_k)}{\partial X_i(t_k)} \;=\; N(t_k)\,\mathbb{E}\Bigl(\frac{\partial\,\mathbb{E}(V)}{\partial X_i(t_k)}\Bigm|\mathcal{F}_{t_k}\Bigr).$$
This is possible due to the application of the differential operator with respect to our $\mathcal{F}_{t_k}$-measurable random variables. Hence, it is possible to determine all forward sensitivities from a single stochastic automatic differentiation of $\mathbb{E}(V)$ with respect to the calculation nodes $x_k$. In summary, we require two steps.
- Calculate $\partial \mathbb{E}(V)/\partial x_k$ using the stochastic automatic differentiation, where $V = x_N$ and $x_k = X_i(t_j)$ (for some $k$, $i$, $j$).
- For each time step $t_j$, apply the conditional expectation operator $\mathbb{E}(\,\cdot \mid \mathcal{F}_{t_j})$, or an estimator for the conditional expectation, to $\partial \mathbb{E}(V)/\partial x_k$.
The numerical costs of this calculation are stunningly low. First, note that all partial derivatives $\partial \mathbb{E}(V)/\partial x_k$ for $x_k = X_i(t_j)$ are calculated in a single stochastic backward automatic differentiation sweep. Second, note that at each time step we can reuse the same conditional expectation estimator, which is a simple projection operator. Hence, the effort to calculate all forward sensitivities is comparable to a single stochastic backward automatic differentiation sweep plus a set of conditional expectation estimators, which, in turn, is comparable to a single valuation of a Bermudan option (which requires the estimation of conditional expectations at each time step).
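To make the second step concrete, the following is a minimal sketch of a least-squares estimator for the conditional expectation, applied to the per-path adjoints produced by the backward sweep. The class name and the plain normal-equations solver are illustrative assumptions (finmath-lib uses a regression based on a singular value decomposition); one such estimator is built per time step $t_j$.

```java
// Sketch of a least-squares conditional expectation estimator E( . | F_{t_j}):
// project per-path values onto the span of basis functions sampled on the paths.
class ConditionalExpectationEstimator {
	private final double[][] basis;  // basis[k][p]: k-th basis function evaluated on path p

	ConditionalExpectationEstimator(double[][] basis) { this.basis = basis; }

	double[] apply(double[] values) {
		int n = basis.length, paths = values.length;
		double[][] normal = new double[n][n + 1];          // augmented system [B B^T | B v]
		for (int i = 0; i < n; i++) {
			for (int j = 0; j < n; j++)
				for (int p = 0; p < paths; p++) normal[i][j] += basis[i][p] * basis[j][p];
			for (int p = 0; p < paths; p++) normal[i][n] += basis[i][p] * values[p];
		}
		double[] coefficients = solve(normal, n);
		double[] result = new double[paths];               // fitted values = estimated E(values | F_{t_j})
		for (int p = 0; p < paths; p++)
			for (int k = 0; k < n; k++) result[p] += coefficients[k] * basis[k][p];
		return result;
	}

	// Gaussian elimination without pivoting (illustration only; not numerically robust).
	private static double[] solve(double[][] a, int n) {
		for (int i = 0; i < n; i++)
			for (int r = i + 1; r < n; r++) {
				double factor = a[r][i] / a[i][i];
				for (int c = i; c <= n; c++) a[r][c] -= factor * a[i][c];
			}
		double[] x = new double[n];
		for (int i = n - 1; i >= 0; i--) {
			x[i] = a[i][n];
			for (int c = i + 1; c < n; c++) x[i] -= a[i][c] * x[c];
			x[i] /= a[i][i];
		}
		return x;
	}
}
```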
2.1 Expected stochastic automatic differentiation
In Fries (2017b), it was shown that a modification of the backward automatic differentiation algorithm can be used to calculate an expectation of the stochastic differentiation. This modification is important for more complex derivatives, eg, Bermudan options. The theorem in Fries (2017b) also holds if a conditional expectation operator is applied to the derivative. Hence, we can apply this theorem here.
Theorem 1 from Fries (2017b) reads as follows.
Theorem 2.1.
(Expected stochastic backward automatic differentiation) Let $A$ denote a family of self-adjoint linear operators, ie, for any two random variables $X$, $Y$ and any operator $A$ of the family, we have
$$\mathbb{E}\bigl(A(X)\cdot Y\bigr) \;=\; \mathbb{E}\bigl(X \cdot A(Y)\bigr).$$
Further, let the operators be sufficiently regular such that differentiation and the application of the operators commute. (Although this assumption may be nontrivial in general, it is trivial if we consider $\Omega$ to be a discrete (Monte Carlo) sampling space.) Then, the modified backward automatic differentiation algorithm, defined as follows,
- initialize $\bar{x}_N = 1$ and $\bar{x}_k = 0$ for $k < N$;
- for all $k = N, N-1, \dots$ (iterating backward through the operator list),
- for all $j \in (k_1, \dots, k_{n_k})$ (iterating through the argument list), update
$$\bar{x}_j \;\leftarrow\; \bar{x}_j + \begin{cases} A_k\bigl(\bar{x}_k\bigr) & \text{if } f_k \text{ is an operator } A_k \text{ of the family (eg, a conditional expectation),}\\[4pt] \bar{x}_k \cdot \dfrac{\partial f_k}{\partial x_j} & \text{otherwise,} \end{cases}$$
gives
$$\mathbb{E}\Bigl(\frac{\partial y}{\partial x_k}\Bigr) \;=\; \mathbb{E}(\bar{x}_k).$$
The operator $\mathbb{E}(\,\cdot\,)$ in this theorem can be replaced by a conditional expectation operator to get
$$\mathbb{E}\Bigl(\frac{\partial y}{\partial x_k}\Bigm|\mathcal{F}_{t_j}\Bigr) \;=\; \mathbb{E}\bigl(\bar{x}_k \bigm| \mathcal{F}_{t_j}\bigr),$$
given that the index $k$ corresponds to the calculation of $X_i(t_j)$, ie, $x_k = X_i(t_j)$ (given that $x_k$ is an $\mathcal{F}_{t_j}$-measurable random variable).
This result is important, because it allows us to use the modified backward automatic differentiation presented in Fries (2017b) for forward sensitivities too.
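To illustrate the special treatment, the following sketch (hypothetical names) shows the adjoint propagation through a conditional expectation node: by the self-adjointness used in Theorem 2.1, the operator is applied to the incoming adjoint itself. On a finite sample space an $\mathcal{F}_t$-conditional expectation is just an average over the cells of a partition of the paths, which is how it is represented here.

```java
// Sketch: adjoint propagation through a conditional-expectation node.
// On a discrete sample space, E( . | F_t) averages over the cells A_i of a partition;
// bucketOfPath[p] assigns path p to its cell.
class ExpectedStochasticAAD {
	static double[] propagateAdjointThroughConditionalExpectation(
			double[] adjoint, int[] bucketOfPath, int numberOfBuckets) {
		double[] sum = new double[numberOfBuckets];
		int[] count = new int[numberOfBuckets];
		for (int p = 0; p < adjoint.length; p++) {
			sum[bucketOfPath[p]] += adjoint[p]; count[bucketOfPath[p]]++;
		}
		double[] propagated = new double[adjoint.length];  // E(adjoint | F_t), by self-adjointness
		for (int p = 0; p < adjoint.length; p++)
			propagated[p] = sum[bucketOfPath[p]] / count[bucketOfPath[p]];
		return propagated;
	}
}
```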
The result also holds in more general cases where one does not have a decomposition of the product into a sum of discounted cashflows. In fact, the method also works for Bermudan options, where the random variable $V$ is constructed by a backward algorithm. We will investigate this case later, after we have summarized the result as a theorem.
2.2 Main theorem
Theorem 2.2.
(Forward sensitivities via expected stochastic backward automatic differentiation) Let the time-$t_k$ value of a financial derivative be given by
$$V(t_k) \;=\; N(t_k)\,\mathbb{E}\Biggl(\sum_{j \,:\, t_j > t_k} \frac{C(t_j)}{N(t_j)}\Biggm|\mathcal{F}_{t_k}\Biggr), \tag{2.5}$$
where the $C(t_j)$ do not depend on $X_i(t_k)$ for $t_j < t_k$. Let the random variable $V$ be given by
$$V \;=\; \sum_{j=0}^{n} \frac{C(t_j)}{N(t_j)}.$$
Assume that $V$ is constructed from the model quantities $X_i(t_j)$ by an algorithm given by operators $f_k$ and intermediate results $x_k$, defined in (2.1), with $x_k = X_i(t_j)$ for some $k$ and $V = x_N$. Let the $f_k$ fulfill the assumption of Theorem 2.1 with $y = x_N$; then, we have
$$\mathbb{E}\Bigl(\frac{\partial V}{\partial X_i(t_j)}\Bigm|\mathcal{F}_{t_j}\Bigr) \;=\; \mathbb{E}\bigl(\bar{x}_k \bigm| \mathcal{F}_{t_j}\bigr),$$
where the $\bar{x}_k$ are constructed using the modified backward automatic differentiation algorithm from Theorem 2.1.
Remark 2.3.
Alternatively, we may consider the process
$$\tilde V(t) \;:=\; N(t)\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t}\bigr).$$
The difference between $\tilde V(t)$ and $V(t)$ is that $\tilde V(t)$ contains the past cashflows accrued by the numéraire (such that $\tilde V(t)/N(t)$ is a martingale).
2.2.1 Proof of the main theorem
Proof.
Let $V$ denote the random variable of aggregated (discounted) future cashflows such that the time-$t_k$ value is
$$V(t_k) \;=\; N(t_k)\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr). \tag{2.6}$$
Let $Z := X_i(t_k)$ denote an $\mathcal{F}_{t_k}$-measurable state variable. This implies that we can interpret $Z(\omega^*)$ as an initial value of the (nested Monte Carlo) simulation on the (future) paths $\omega \in A(\omega^*)$, where $A(\omega^*) \in \mathcal{F}_{t_k}$ is the smallest set such that $\omega^* \in A(\omega^*)$. Recall that $\mathcal{F}_{t_k}$-measurability is equivalent to requiring that $Z$ is constant on the set $A(\omega^*)$ for any fixed $\omega^*$.
The definition of $V$ as aggregated future cashflows implies that $V(\omega)$ does not depend on $Z(\omega^*)$ for $\omega \notin A(\omega^*)$. (This property is best visualized in a nested Monte Carlo simulation, although we will later approximate conditional expectations in non-nested simulations. Note that we first derive the exact result and then apply approximation methods to it.) That is, we have
$$\frac{\partial V(\omega)}{\partial Z(\omega^*)} \;=\; 0 \quad \text{for } \omega \notin A(\omega^*). \tag{2.7}$$
We are interested in the derivative of the conditional expectation of the future cashflows of our derivatives, that is,
$$\frac{\partial}{\partial Z}\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr). \tag{2.8}$$
For a given $\omega^*$, let $A := A(\omega^*)$. So, on path $\omega^*$, we are interested in calculating the derivative of
$$\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr)(\omega^*) \;=\; \mathbb{E}(V \mid A) \;=\; \frac{\mathbb{E}(V \cdot \mathbf{1}_A)}{\mathbb{Q}(A)}. \tag{2.9}$$
At this point, we might just apply the backward differentiation to each conditioning node $A(\omega^*)$. However, this would require multiple backward differentiation sweeps, namely initializing the differentials with the indicator functions $\mathbf{1}_{A(\omega^*)}/\mathbb{Q}(A(\omega^*))$.
However, due to (2.7) we can consider an unconditional expectation of $V$, given that we differentiate with respect to the direction $\mathbf{1}_{A(\omega^*)}$.
Since $Z$ is measurable, it has to stay constant on the sets $A(\omega^*)$. Hence,
$$\frac{\partial}{\partial Z(\omega^*)}\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr)(\omega^*) \;=\; \frac{1}{\mathbb{Q}(A)}\,\frac{\partial\,\mathbb{E}(V \cdot \mathbf{1}_A)}{\partial Z(\omega^*)} \;=\; \mathbb{E}\Bigl(\frac{\partial V}{\partial Z(\omega^*)}\Bigm| A\Bigr).$$
Due to (2.7), we may take the expectation over all paths instead of just those in $A$ (because the other paths do not depend on the value of $Z$ on $A$), that is, we can replace the inner conditional expectation with an unconditional one:
$$\frac{\partial}{\partial Z}\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr) \;=\; \mathbb{E}\Bigl(\frac{\partial\,\mathbb{E}(V)}{\partial Z}\Bigm|\mathcal{F}_{t_k}\Bigr),$$
where $\partial\,\mathbb{E}(V)/\partial Z$ denotes the random variable of pathwise derivatives of the unconditional expectation.
This puts us in a situation to calculate the pathwise derivative of the unconditional expectation of $V$, followed by a conditional expectation. The result has a simple interpretation: the conditional expectation can be replaced by an unconditional one, since the paths on the complement of $A$ have no dependency on the initial value of the simulation on $A$. This is intuitive if we depict the simulation as a nested Monte Carlo simulation.
This proves the result, since the $\bar{x}_k$ are the backward differentiations of the unconditional expectation of $V$. ∎
2.2.2 Interpretation
The result can be easily illustrated using matrix notation. Given the pathwise derivative matrix $D = \partial V/\partial X(t_k)$, with entries $D_{p,q} = \partial V(\omega_p)/\partial X_i(t_k)(\omega_q)$, we apply the projection operator $P$ (corresponding to the conditional expectation) and obtain the forward sensitivities as
$$P\,D\,P.$$
Here, the projection on the left-hand side is due to $V(t_k)$ being a conditional expectation, and the projection on the right-hand side is due to considering directional derivatives with respect to an $\mathcal{F}_{t_k}$-measurable random variable. We have $P = Q\,Q^{\mathsf{T}}$, where $Q$ consists of columns representing the renormalized indicator functions $\mathbf{1}_{A_i}/\sqrt{|A_i|}$ of the sets $A_i$ generating the filtration $\mathcal{F}_{t_k}$. The matrix $Q$ is block diagonal with blocks of renormalized unit vectors. The matrix $Q^{\mathsf{T}} Q$ is an $m \times m$ identity matrix (where $m$ is the number of sets $A_i$).
Now (2.7) implies that $D$ is a block-diagonal matrix, where each block corresponds to a column of $Q$; since, in a Monte Carlo simulation, the value on a path depends only on the state on that same path, $D$ is in fact a diagonal matrix. Using the vector $e = (1/M, \dots, 1/M)^{\mathsf{T}}$ (the projection vector corresponding to the unconditional expectation over $M$ paths), this implies that the forward sensitivities can be obtained as
$$e^{\mathsf{T}}\,(D\,P), \tag{2.10}$$
that is,
$$(e^{\mathsf{T}}\,D)\,P \tag{2.11}$$
(up to the renormalization implied by the conditional expectation estimator).
This last equation is the backward differentiation (vector multiplication from left to right) of an unconditional expectation ($e^{\mathsf{T}}$), carried out as a pathwise backward differentiation ($e^{\mathsf{T}} D$), followed by a conditional expectation applied to the row vector of results ($(e^{\mathsf{T}} D)\,P$).
These results are exact for a fully nested Monte Carlo simulation. If we replace the conditional expectation operators by some approximation (eg, regression methods, where the regression functions do not generate the filtration $\mathcal{F}_{t_k}$), the different approaches may give different approximations of the derivative. The reason for this is simple: the differentiation of an approximation is not the same as the approximation of a differentiation.
Such problems are not specific to our application. A similar example would be comparing the calculation of a sensitivity via pathwise differentiation to that calculation via likelihood ratio (the dual method) and applying a Monte Carlo approximation to both: the approximation errors are different.
2.3 Forward sensitivity via forward differentiation
We may also derive the corresponding result for forward differentiation. Using (2.7), we have
$$\mathbb{E}\Bigl(\frac{\partial V}{\partial Z(\omega^*)}\Bigm| A(\omega^*)\Bigr) \;=\; \mathbb{E}\Bigl(\frac{\partial V}{\partial Z}\Bigm| \mathcal{F}_{t_k}\Bigr)(\omega^*),$$
that is, in correspondence with (2.10), we have
$$P\,(D\,\mathbf{1}), \tag{2.12}$$
where $\mathbf{1} = (1, \dots, 1)^{\mathsf{T}}$. This means that we can calculate the forward sensitivity by "bumping" the input $X_i(t_k)$ simultaneously on all paths, followed by a single conditional expectation applied to the output values $\partial V/\partial X_i(t_k)$. For Markovian processes $X$, this result can be improved further (see Fries 2018).
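A minimal sketch of (2.12), with hypothetical functional interfaces for the valuation and the conditional expectation estimator: the $\mathcal{F}_{t_k}$-measurable input is bumped on all paths simultaneously, the product is revalued once, and a single conditional expectation is applied to the pathwise difference quotients.

```java
import java.util.function.Function;

// Sketch of (2.12): forward sensitivity by a simultaneous bump on all paths,
// followed by one conditional expectation on the pathwise difference quotients.
class ForwardBump {
	static double[] forwardSensitivityByBumping(
			Function<double[], double[]> valuation,                 // per-path state -> per-path value V
			double[] state,                                         // per-path samples of X_i(t_k)
			double shift,                                           // bump size (numerical choice)
			Function<double[], double[]> conditionalExpectation) {  // estimator for E( . | F_{t_k})
		double[] shifted = state.clone();
		for (int p = 0; p < shifted.length; p++) shifted[p] += shift;
		double[] base = valuation.apply(state);
		double[] bumped = valuation.apply(shifted);
		double[] quotient = new double[state.length];
		for (int p = 0; p < state.length; p++) quotient[p] = (bumped[p] - base[p]) / shift;
		return conditionalExpectation.apply(quotient);              // single conditional expectation: P (D 1)
	}
}
```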
Note, however, that in many applications forward mode differentiation may be less efficient, since we may be interested in the dependency of a single $V$ on multiple $\mathcal{F}_{t_j}$-measurable random variables $X_i(t_j)$.
2.4 Bermudan options
The theorem also applies to Bermudan options or other products incorporating additional conditional expectation operators. For a Bermudan option, the valuation can be represented by a random variable $V$ such that $V(t_0) = N(t_0)\,\mathbb{E}(V \mid \mathcal{F}_{t_0})$, where $V$ is constructed using the backward algorithm (see Fries 2017a, 2007), ie, the Snell envelope.
In this case, the process
$$\tilde V(t_k) \;:=\; N(t_k)\,\mathbb{E}\bigl(V \bigm| \mathcal{F}_{t_k}\bigr)$$
has a natural interpretation: it represents the value of the Bermudan option including the accrual of past cashflows and/or the accrual of the value at a past exercise. In other words, if $\tau$ is the exercise time of a Bermudan option, then for $t_k > \tau$ we have $\tilde V(t_k) = V(\tau)\,N(t_k)/N(\tau)$.
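As an illustration of this process, the following sketch (hypothetical per-path arrays) constructs the value including the accrual of a past exercise: on paths that exercised at $\tau \le t_k$, the exercise value is accrued with the numéraire; on the remaining paths, the Bermudan (continuation) value applies.

```java
// Sketch: per-path construction of N(t_k) E(V | F_{t_k}) for a Bermudan option.
// valueAtExercise[p]: value fixed at exercise on path p; exerciseTimeIndex[p]: index of tau;
// numeraire[j][p]: N(t_j) on path p; bermudanValue[p]: value on not-yet-exercised paths.
class BermudanValueWithAccrual {
	static double[] valueIncludingPastExercise(int timeIndex, double[] valueAtExercise,
			int[] exerciseTimeIndex, double[][] numeraire, double[] bermudanValue) {
		double[] result = new double[valueAtExercise.length];
		for (int p = 0; p < result.length; p++) {
			if (exerciseTimeIndex[p] <= timeIndex)  // exercised: accrue V(tau) * N(t_k) / N(tau)
				result[p] = valueAtExercise[p] * numeraire[timeIndex][p] / numeraire[exerciseTimeIndex[p]][p];
			else                                    // not exercised yet
				result[p] = bermudanValue[p];
		}
		return result;
	}
}
```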
Note that the sensitivity of $\tilde V(t)$ is exactly the quantity required in an xVA (margin valuation adjustment (MVA)) simulation, because the value includes the information of a past exercise.
2.4.1 Conditional expectation estimator: choice of basis functions
Assuming we use a least-squares regression for the estimation of the conditional expectation in Theorem 2.1, there is a pitfall in the naive application of the theorem to a Bermudan option: a poor choice of regression basis functions.
To illustrate this choice-of-basis-function issue, consider the example of a Bermudan option where we have the choice to receive a fixed amount in $t_1$ or an underlying-dependent payoff in $t_2$. If $\tau$ denotes the optimal exercise time, we have that, conditional on $\{\tau = t_1\}$, the product has a delta of zero. Indeed, we would find
$$\mathbb{E}\Bigl(\frac{\partial V}{\partial S(t_1)}\Bigm| \tau = t_1\Bigr) \;=\; 0.$$
Now, consider that we use a regression with basis functions that are functions of the underlying $S(t_1)$ (this is a common approach). Assume, for simplicity, that our basis function is just the constant $1$: in this case, the conditional expectation estimator is the unconditional expectation, and we get $\mathbb{E}(\partial V/\partial S(t_1))$ instead of $0$. Further, a function of $S(t_1)$ cannot precisely capture the exercise boundary, because the Monte Carlo estimate of the exercise boundary is random.
From this example, it is obvious that the conditional expectation estimator can be improved in a simple way by including the exercise information in the basis functions; that is, our basis functions are multiplied by the indicator $\mathbf{1}_{\{\tau > t\}}$. We do not include the paths with $\tau \le t$ in the regression, because the delta is known for these paths. Indeed, with this modification, even the trivial basis function $1$ results in the correct estimate.
We illustrate the effect for the Bermudan option in Figure 7.
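A sketch of the modified basis construction (hypothetical signature): each basis function is multiplied by the indicator of the not-yet-exercised paths, so the regression is effectively restricted to the paths on which the delta is not already known.

```java
// Sketch: monomial basis functions in the underlying, multiplied by the indicator 1_{tau > t}.
// On exercised paths the basis is zero, so those paths do not influence the regression
// coefficients; their (known) delta is handled separately.
class ExerciseAwareBasis {
	static double[][] basisWithExerciseIndicator(double[] underlying, boolean[] exercised, int order) {
		double[][] basis = new double[order + 1][underlying.length];
		for (int p = 0; p < underlying.length; p++) {
			double indicator = exercised[p] ? 0.0 : 1.0;  // 1_{tau > t}
			double power = 1.0;
			for (int k = 0; k <= order; k++) {
				basis[k][p] = indicator * power;
				power *= underlying[p];
			}
		}
		return basis;
	}
}
```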
2.5 Discontinuous payoffs, differentiation of indicator functions
It is known that in a Monte Carlo simulation the presence of discontinuous payoffs will lead to high Monte Carlo errors for finite-difference approximations of derivatives (Glasserman 2003). The expected stochastic automatic differentiation allows us to improve the differentiation of discontinuities (see Fries 2017a).
Let us quickly mention that this representation can be used for the forward sensitivities without change. If we are only interested in the conditional expectation of the final result, it is sufficient to consider the term
$$\mathbb{E}\Bigl(\bar{v} \cdot \frac{\partial \mathbf{1}_{\{x > 0\}}}{\partial x}\Bigm| \mathcal{F}_t\Bigr),$$
where $\bar{v}$ denotes the adjoint derivative arriving at the indicator node, which evaluates to
$$\mathbb{E}\bigl(\bar{v} \bigm| x = 0\bigr)\,\phi_x(0),$$
where $\phi_x$ denotes the density of $x$, and which can be approximated by
$$\mathbb{E}\Bigl(\bar{v} \cdot \frac{\mathbf{1}_{\{|x| < \epsilon/2\}}}{\epsilon}\Bigm| \mathcal{F}_t\Bigr). \tag{2.13}$$
That is, the differentiation of the indicator can be represented as a conditional expectation of the adjoint derivative $\bar{v}$. Since our algorithm allows us to handle conditional expectation operators, this result opens up new ways to approximate the differentiation of the indicator function. For the above approximation, we find that, if we are only interested in the conditional expectation of the final result, we can approximate
$$\frac{\partial \mathbf{1}_{\{x > 0\}}}{\partial x} \;\approx\; \frac{\mathbf{1}_{\{|x| < \epsilon/2\}}}{\epsilon}.$$
This is the same approximation as for the time-zero sensitivities, so no special treatment is required for forward sensitivities.
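For illustration, the following sketch shows one such regularization consistent with (2.13): in the backward sweep, the derivative of the indicator $\mathbf{1}_{\{x>0\}}$ is replaced by the finite-width approximation $\mathbf{1}_{\{|x|<\epsilon/2\}}/\epsilon$, applied pathwise to the incoming adjoint. The width parameter is a numerical choice of this sketch, not a prescription of the paper.

```java
// Sketch: pathwise differentiation of an indicator 1_{x > 0} in the backward sweep,
// using the finite-width approximation 1_{|x| < eps/2} / eps of the Dirac delta.
class IndicatorDifferentiation {
	static double[] differentiateIndicator(double[] trigger, double[] adjoint, double eps) {
		double[] result = new double[trigger.length];
		for (int p = 0; p < trigger.length; p++)
			result[p] = (Math.abs(trigger[p]) < 0.5 * eps) ? adjoint[p] / eps : 0.0;
		return result;
	}
}
```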
It is important to note that our implementation allows us to adapt the handling of the differentiation individually for each indicator function (on a per-operator basis); see Fries (2017a) for an example.
2.5.1 Indicator functions at exercise boundary
Indicator functions also occur to express the exercise boundary of a Bermudan option, for example. In the case of an optimal exercise, the contribution of the differentiation of the indicator is known to be zero (Piterbarg 2004). Hence, it can, theoretically, be neglected. That said, numerical errors, eg, an improper estimation of the exercise boundary, may lead to a nonzero contribution from the differentiation of the exercise boundary.
Since our method allows us to assess this effect, it may be used to check the optimality of the exercise boundary (see Fries 2017a).
The implementation allows us to enable or disable differentiation of the indicator function on an individual (per-operator) basis, eg, allowing us to avoid differentiating an optimal exercise.
3 Numerical results
A typical application of stochastic forward sensitivities is the exact calculation of an initial MVA, assuming that the initial margin is determined from a sensitivity-based risk model. The ISDA Standard Initial Margin Model (ISDA SIMM) is an example of such a model.
However, presenting results for an MVA calculation (which is straightforward now) is not a good test case, since we lack a benchmark.
Instead, we analyze the hedge error of a delta hedge in a hedge simulation under a model for which we know the analytic solution.
3.1 Hedge performance of a delta hedge (using stochastic AAD forward sensitivities)
We consider the valuation of a derivative with terminal payoff $V(T)$, eg, a European option with $V(T) = \max(S(T) - K, 0)$, given a (Black-Scholes) model SDE
$$\mathrm{d}S(t) \;=\; r\,S(t)\,\mathrm{d}t + \sigma\,S(t)\,\mathrm{d}W(t), \qquad \mathrm{d}B(t) \;=\; r\,B(t)\,\mathrm{d}t,$$
for the asset $S$ and the bank account $B$.
Under this model, using a time discretization $0 = t_0 < t_1 < \cdots < t_n = T$, we consider the delta hedge portfolio $\Pi$ given by
$$\Pi(t_i) \;=\; \phi_S(t_i)\,S(t_i) + \phi_B(t_i)\,B(t_i),$$
where
$$\phi_S(t_i) \;=\; \frac{\partial V(t_i)}{\partial S(t_i)}$$
and $\phi_B(t_i)$ is determined by the self-financing condition. Note that the initial value of the portfolio implies $\Pi(t_0) = V(t_0)$.
The time-discrete delta hedge results in a replication portfolio with $\Pi(T) \approx V(T)$.
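The hedge loop itself is straightforward. The following sketch (hypothetical per-path arrays) rebalances the asset position to the forward delta at each time step and finances the rebalancing through the bank account, so the portfolio is self-financing by construction.

```java
// Sketch: time-discrete delta hedge along one path.
// asset[i], bank[i]: samples of S(t_i) and B(t_i); delta[i]: forward sensitivity on this path.
class DeltaHedgeSketch {
	static double hedgePortfolioTerminalValue(double[] asset, double[] bank, double[] delta, double initialValue) {
		double amountAsset = delta[0];
		double amountBank = (initialValue - amountAsset * asset[0]) / bank[0];      // Pi(t_0) = V(t_0)
		for (int i = 1; i < asset.length; i++) {
			double portfolioValue = amountAsset * asset[i] + amountBank * bank[i];  // value before rebalancing
			amountAsset = delta[i];                                                 // rebalance to new delta
			amountBank = (portfolioValue - amountAsset * asset[i]) / bank[i];       // self-financing condition
		}
		return amountAsset * asset[asset.length - 1] + amountBank * bank[bank.length - 1];  // Pi(T)
	}
}
```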
The hedge error depends on two aspects:
- the frequency of the hedge, ie, the time-step size $\Delta t_i = t_{i+1} - t_i$; and
- the accuracy of the calculation of the sensitivities $\phi_S(t_i)$.
We perform a time-discrete delta hedge on 50 000 paths with 100 time steps. Note that each sensitivity $\phi_S(t_i)$ is a random variable (with one realization per path). Hence, the model requires the calculation of 50 000 × 100 = 5 million forward sensitivities. In this case, the numerical calculation of the forward sensitivities using stochastic automatic differentiation required 10 seconds on a standard MacBook Pro (Mid 2014, 2.8 GHz Core i7 (I7-4980HQ)). The implementation was performed in Java. The source code is available at finmath.net (n.d.-b).
3.1.1 Delta hedge of a European option
Under the given model, we have analytic expressions for the forward sensitivities of European options. Hence, we can benchmark the hedge simulation using sensitivities obtained by our numerical method against the hedge simulation using analytic sensitivities. Note that both methods will show a residual error due to the time discretization.
We test the method using a European option with maturity $T$ and strike $K$.
In Figure 1, we show the final time-$T$ value of the option payoff $V(T)$ and the replication portfolio $\Pi(T)$. The delta hedge reproduces the final payoff with only small errors.
In Figure 2, we show the error distribution using the analytic formula for delta (blue) and the numerical (AAD) calculation for delta (green). From this, we see that the residual errors correspond to the error expected from the time discretization. We also depict the result if we omit the conditional expectation step in the calculation of forward sensitivity (red). We obtain wrong sensitivities and the hedge has a huge error.
In Figure 3, we show the error distribution using the analytic formula for delta and the numerical (AAD) calculation for delta with different numbers of paths. For the analytic method the number of paths is irrelevant. For the numerical method the number of paths enters into the accuracy of the estimation of the conditional expectation operator.
3.1.2 Delta hedge of a Bermudan option
We test the delta hedge of a Bermudan option. As we do not have analytic expressions for the forward sensitivities of the Bermudan option, we are limited to performing the replication using forward sensitivities obtained from the expected stochastic automatic differentiation. We can then analyze the terminal hedge error between the replication portfolio and the (accrued) derivative payoffs, eg, comparing it to the hedge error distribution of the European option analyzed in the previous section.
The exact specification of the test product is given in Table 1.
[Table 1: exercise time, payoff upon exercise, strike and exercise probability of the test product.]
In Figure 4, we show the final time-$T$ value of the option payoff and the replication portfolio as a function of the underlying at the exercise time $\tau$. (This is a nice illustration of the payoff of the Bermudan option, because it somewhat overlays the payoffs at the different exercise times.) Note that the difference between the value at exercise and the final time-$T$ value is just a deterministic accrual factor. Apparently, the delta hedge reproduces the final payoff with only small errors.
In Figure 5, we show the error distribution of the hedge error using the numerical (AAD) calculation for delta. We compare the hedge error of the European option (blue) with that of the Bermudan option (green). From this, we see that the residual errors correspond to the error expected from the European option. In this figure, we also depict the result if we omit the conditional expectation step in the calculation of the forward sensitivity (red): we obtain wrong sensitivities, and the hedge has a huge error. We see that the hedge error for the Bermudan option is slightly smaller than that of the longest European option. This is due to the shorter-maturity options embedded in the Bermudan option having smaller hedge errors.
In Figure 6, we show the error distribution of the delta hedge of the Bermudan option using the numerical (AAD) calculation for delta with different numbers of paths. We also show the error distribution for the corresponding European option.
3.2 Choice of basis functions
In Figure 7, we compare the final derivative value (including the accrual of past cashflows) and the corresponding replication portfolio, varying the basis functions used in the estimation of the conditional expectation of the stochastic derivative.
3.3 Performance results
We now summarize some results from the performance of the algorithm. The algorithm was implemented in Java (Java 8 update 121), using finmath.net (n.d.-a,b) running on a MacBook Pro (Mid 2014, 2.8 GHz Core i7 (I7-4980HQ)). The results are summarized in Table 2.
Table 2 Performance results. (EU: European option; BER: Bermudan option.)

| Sensitivities | Analytic | Analytic | Stochastic AAD | Stochastic AAD | Stochastic AAD | Stochastic AAD | Stochastic AAD |
|---|---|---|---|---|---|---|---|
| Product | EU | EU | EU | EU | EU | BER | BER |
| No. of paths | 50 000 | 50 000 | 50 000 | 100 000 | 50 000 | 50 000 | 100 000 |
| No. of time steps | 100 | 200 | 100 | 100 | 200 | 100 | 100 |
| Valuation (model simulation) | — | — | 0.30s | 0.59s | 0.59s | 0.46s | 0.93s |
| Stoch. derivatives (5 or 10 million) | — | — | 0.08s | 0.15s | 0.17s | 0.16s | 0.33s |
| Derivatives (cond. expectation) | — | — | 11s | 21s | 23s | 14s | 31s |
| Total (calculation time) | 3s | 5s | 12s | 22s | 24s | 15s | 32s |
| Accuracy (hedge RMSE) | 0.029 | 0.020 | 0.034 | 0.033 | 0.028 | 0.031 | 0.025 |
The performance results are interesting with respect to their scaling properties. Apparently, the calculation of the 5 million or 10 million stochastic sensitivities is comparable to a single valuation. Here, the valuation also includes the model simulation and the building of the operator tree. The major part of the calculation time is used up by the conditional expectation step. Note, however, that the conditional expectation step – at least for noncallable products – does not scale with the number of derivative products in a portfolio, because the conditional expectation estimator is constructed from a model-dependent singular value decomposition, which may be shared among products or applied to an aggregate (as long as the basis functions are the same). This implies that, at least for some products, one may reuse the conditional expectation estimator.
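Reuse is then a matter of sharing one estimator per time step: in terms of the ConditionalExpectationEstimator sketch from Section 2 (whose internal regression would, in an optimized implementation, be factorized once and cached), the same object serves all products whose sensitivities are regressed on the same basis functions. The product names below are purely illustrative.

```java
// One estimator per time step t_j, shared across products (same basis functions):
ConditionalExpectationEstimator estimator = new ConditionalExpectationEstimator(basisAtTime);
double[] forwardDeltaSwap     = estimator.apply(adjointsSwap);      // product 1
double[] forwardDeltaSwaption = estimator.apply(adjointsSwaption);  // product 2: same estimator
```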
3.4 Benchmark implementation
The results presented in this section were produced with version 0.7.0 of finmath.net (n.d.-a). The delta hedge is implemented in the class DeltaHedgedPortfolioWithAAD in the package net.finmath.montecarlo.assetderivativevaluation.products. To reproduce the results of this section, run the unit test DeltaHedgedPortfolioWithAADTest (in the same package). More results can be found at finmath.net (n.d.-b).
3.5 Calculation of an exact MVA based on ISDA SIMM
The ISDA SIMM requires forward sensitivities to calculate an initial margin. Using the forward sensitivities derived from our AAD algorithm, we get the ISDA SIMM initial margin by transforming from model sensitivities to SIMM sensitivities. This is just an additional step in the chain rule. For details on this additional step and additional performance improvements, see Fries et al (2018). We summarize some results taken from this paper.
The first step toward an MVA is to simulate the stochastic process $t \mapsto \mathrm{IM}(t)$ of the forward initial margin, ie, the initial margin $\mathrm{IM}(t, \omega)$ at the future time $t$ on path $\omega$. The ISDA SIMM model gives this value in terms of on-path sensitivities, ie, forward sensitivities. Hence, it is an application of the algorithm presented in the previous sections. Given $\mathrm{IM}(t)$, the MVA is defined as the funding costs of the initial margin, which is given by aggregating and taking the expectation
$$\mathrm{MVA} \;=\; \mathbb{E}\Biggl(\sum_{j} \mathrm{IM}(t_j)\,\biggl(\frac{1}{N^{\mathrm{f}}(t_j)} - \frac{1}{N^{\mathrm{f}}(t_{j+1})}\biggr)\Biggr),$$
where $N^{\mathrm{f}}$ is the funding numéraire. (The intuition behind this formula is simple: consider a constant initial margin $\mathrm{IM}$ for $t < T$, dropping to zero after maturity $T$, ie, $\mathrm{IM}(t) = 0$ for $t \ge T$. Then $\mathrm{MVA} = \mathrm{IM}\,(1 - P^{\mathrm{f}}(T))$, where $P^{\mathrm{f}}(T)$ is the (funding) zero-coupon bond with maturity $T$. That is, we borrow the $\mathrm{IM}$ at the funding rate.)
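A sketch of this aggregation (hypothetical array layout): per path, the funding cost of carrying $\mathrm{IM}(t_j)$ over $[t_j, t_{j+1}]$ is accumulated via the funding numéraire, and the Monte Carlo average over paths gives the MVA.

```java
// Sketch: MVA as the expected funding cost of the initial margin.
// initialMargin[j][p] = IM(t_j) on path p; fundingNumeraire[j][p] = N^f(t_j) on path p.
class MvaSketch {
	static double mva(double[][] initialMargin, double[][] fundingNumeraire) {
		int numberOfTimes = initialMargin.length, numberOfPaths = initialMargin[0].length;
		double sum = 0.0;
		for (int p = 0; p < numberOfPaths; p++)
			for (int j = 0; j < numberOfTimes - 1; j++)
				sum += initialMargin[j][p]
						* (1.0 / fundingNumeraire[j][p] - 1.0 / fundingNumeraire[j + 1][p]);
		return sum / numberOfPaths;  // Monte Carlo expectation
	}
}
```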
In Figure 8, we depict the paths of the forward initial margin $t \mapsto \mathrm{IM}(t, \omega)$ for selected paths $\omega$ (blue). For information purposes, we also depict the expected forward initial margin $t \mapsto \mathbb{E}(\mathrm{IM}(t))$ (red) as well as the 5% to 95% quantiles of $\mathrm{IM}(t)$ (gray). Note, however, that for the calculation of the MVA, the expectation and the integration do not commute.
4 Conclusion
In this paper, we presented the calculation of stochastic forward sensitivities using stochastic (backward) automatic differentiation.
Utilizing the common representation of the derivative value as a sum of numéraire-relative future payoffs, we represented all forward sensitivities by a single backward automatic differentiation, applying only a time-dependent conditional expectation operator.
Due to the presence of the conditional expectation operator, we were able to utilize the expected stochastic (backward) automatic differentiation from Fries (2017b), which allowed us to derive forward sensitivities for complex derivatives whose valuation algorithm includes conditional expectation operators (eg, callable products; see Fries (2017a)). Thus, the method is completely general and can be applied to options with early exercise features and path dependency without any modification.
An important application of this result is the fast and efficient calculation of an MVA, eg, when initial margins are based on sensitivities (like for the ISDA SIMM).
Declaration of interest
The views expressed in this work are the personal views of the authors and do not necessarily reflect the views or policies of current or previous employers. Feedback is welcomed at email@christian-fries.de.
References
- Antonov, A. (2017). Algorithmic differentiation for callable exotics. Working Paper, April 4, Social Science Research Network (https://doi.org/10.2139/ssrn.2839362).
- Capriotti, L., and Giles, M. (2011). Algorithmic differentiation: adjoint Greeks made easy. Working Paper, April 2, Social Science Research Network (https://doi.org/10.2139/ssrn.1801522).
- Capriotti, L., Jiang, Y., and Macrina, A. (2016). AAD and least squares Monte Carlo: fast Bermudan-style options and XVA Greeks. Working Paper, September 23, Social Science Research Network (https://doi.org/10.2139/ssrn.2842631).
- finmath.net (n.d.-a). finmath-lib automatic differentiation extensions: enabling finmath lib to utilise automatic differentiation algorithms (eg, AAD). URL: http://finmath.net/finmath-lib-automaticdifferentiationextensions.
- finmath.net (n.d.-b). finmath-lib: mathematical finance library – algorithms and methodologies related to mathematical finance. URLs: http://finmath.net/finmath-lib, https://github.com/finmath/finmath-lib.
- Fries, C. P. (2007). Mathematical Finance: Theory, Modeling, Implementation. Wiley (https://doi.org/10.1002/9780470179789).
- Fries, C. P. (2017a). Automatic backward differentiation for American Monte Carlo algorithms (conditional expectation). Working Paper, June 27, Social Science Research Network (https://doi.org/10.2139/ssrn.3000822).
- Fries, C. P. (2017b). Stochastic automatic differentiation: automatic differentiation for Monte Carlo simulations. Working Paper, June 27, Social Science Research Network (https://doi.org/10.2139/ssrn.2995695).
- Fries, C. P. (2018). Back to the future: comparing forward and backward differentiation for forward sensitivities in Monte Carlo simulations. Working Paper, January 16, Social Science Research Network (https://doi.org/10.2139/ssrn.3106068).
- Fries, C. P., Kohl-Landgraf, P., and Viehmann, M. (2018). Melting sensitivities: exact and approximate margin valuation adjustments. Working Paper, January 15, Social Science Research Network (https://doi.org/10.2139/ssrn.3095619).
- Giles, M., and Glasserman, P. (2006). Smoking adjoints: fast Monte Carlo Greeks. Risk 19(1), 88–92.
- Glasserman, P. (2003). Monte Carlo Methods in Financial Engineering. Stochastic Modelling and Applied Probability. Springer (https://doi.org/10.1007/978-0-387-21617-1).
- Homescu, C. (2011). Adjoints and automatic (algorithmic) differentiation in computational finance. Preprint (arXiv:1107.1831v1).
- Piterbarg, V. (2004). Computing deltas of callable Libor exotics in forward Libor models. The Journal of Computational Finance 7, 107–144 (https://doi.org/10.21314/JCF.2004.109).