
Best Practices in Risk Modelling

Abigail Jacobsen
Published: Apr. 1, 2009

The earlier blog post on best practices in Excel modelling can be thought of as providing a reasonable and robust set of principles for building static Excel models. When building simulation models for risk analysis in Excel (for instance, with @RISK Monte Carlo software), some further points are worth considering:

  • A risk model may need to be built at an appropriate level of detail. A model that is too detailed is harder to add risk distributions to and requires more effort to capture the dependencies between the larger number of variables. In many practical cases, key dependencies will simply not be captured, and the result will have an excessively narrow range (for additive-type models, e.g. cost budgeting) or an excessively wide range (for subtractive models, e.g. profit as the difference between uncertain revenues and costs).
  • Whether or not to include event risks. Generally, a static model of a situation involving event risks (e.g. something adverse happening in 20% of cases in a reserve estimation model) would not include such a risk as a line item (since the most likely outcome is its non-occurrence), whereas a risk model would.
  • The prioritisation of which event risks to include may be non-trivial, and depends on the decision maker’s risk profile (i.e. tolerance and decision-making criteria), as well as on the total number of event risks under consideration. For example, in a retirement planning model where a decision is to be based on the P90 outcome (i.e. the worst or best 10% of cases), it would be more important to include an event with an impact of 1 occurring with 100% probability than an event with an impact of 100 occurring with 1% probability, as the latter event in isolation would have very little effect on the P90 of the output distribution. (Were decisions to be made on the P99.5 value, we would of course have a different situation.)
  • The use of Data Tables will generally slow down simulation models, as the tables need to be recalculated at every iteration. Data Tables may be used when building the model and as an error-checking tool (TopRank may also be used to check that errors are zero across a range of scenarios), but may need to be removed before running the final simulation model.
  • The real challenges in risk modelling boil down to those of model formulation and decision-making: the selection of variables, capturing the true dynamics of the situation in the model, the choice of distributions, capturing dependencies correctly, and so on (so come to a training course!).
  • Other principles of model implementation (as discussed in the earlier blog post) are essentially identical.
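The effect of missed dependencies on an additive model's range (the first point above) can be illustrated with a short simulation. The sketch below uses Python rather than Excel/@RISK, with purely illustrative figures: a total cost built from five uncertain line items, compared when the items are treated as independent versus when a common factor correlates them:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical cost-budget model: total cost is the sum of 5 line items,
# each Normal(100, 10) -- illustrative figures only.
means = np.full(5, 100.0)
sds = np.full(5, 10.0)

# Independent case: dependencies between line items are ignored.
indep = rng.normal(means, sds, size=(n, 5)).sum(axis=1)

# Correlated case: a common factor drives all items,
# giving pairwise correlation of about 0.5 between them.
rho = 0.5
common = rng.standard_normal(n)
specific = rng.standard_normal((n, 5))
z = np.sqrt(rho) * common[:, None] + np.sqrt(1 - rho) * specific
corr = (means + sds * z).sum(axis=1)

print(f"s.d. of total, ignoring dependence: {indep.std():.1f}")
print(f"s.d. of total, with correlation:    {corr.std():.1f}")
```

With these figures the independent total has a standard deviation of about 22 (i.e. √5 × 10), while the correlated total's is about 39, so omitting the dependence would understate the range of the total considerably, as described above for additive models.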
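The point about prioritising event risks by percentile can also be checked numerically. The following sketch uses hypothetical figures: a Normal(1000, 10) base outcome, a certain item of impact 1, and an event risk of impact 100 occurring with 1% probability (modelled as a Bernoulli draw, so both additions have the same expected value of 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical base outcome distribution (illustrative figures only).
base = rng.normal(1000, 10, n)

# Certain item: impact 1 in every iteration.
certain = np.full(n, 1.0)

# Event risk: impact 100 occurring in 1% of iterations (a Bernoulli draw),
# so its expected value is also 1.
event = np.where(rng.random(n) < 0.01, 100.0, 0.0)

for p in (90, 99.5):
    print(f"P{p}: base={np.percentile(base, p):.1f}  "
          f"+certain={np.percentile(base + certain, p):.1f}  "
          f"+event={np.percentile(base + event, p):.1f}")
```

At the P90, the certain item shifts the percentile by its full impact of 1, while the rare event shifts it by only about half that; at the P99.5, by contrast, the rare event dominates, pushing the percentile up to roughly 1100. This is exactly the distinction drawn in the bullet above between P90- and P99.5-based decisions.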
