A risk measure is defined as a mapping from a set of random variables to the real numbers. This set of random variables represents portfolio returns. The common notation for a risk measure associated with a random variable $X$ is $\rho(X)$. A risk measure $\rho: \mathcal{L} \to \mathbb{R} \cup \{+\infty\}$ should have certain properties:[1]
Normalized: $\rho(0) = 0$, i.e. the risk of holding no assets is zero.
Translative: if $a \in \mathbb{R}$ and $Z \in \mathcal{L}$, then $\rho(Z + a) = \rho(Z) - a$, i.e. adding a sure amount of cash $a$ to a portfolio reduces its risk by $a$.
Monotone: if $Z_1, Z_2 \in \mathcal{L}$ and $Z_1 \leq Z_2$, then $\rho(Z_2) \leq \rho(Z_1)$, i.e. a position that pays at least as much in every state is at most as risky.
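These axioms can be checked numerically on sampled data. The sketch below is illustrative only: it uses the simple risk measure $\rho(X) = -\mathbb{E}[X]$ (which satisfies all three properties) together with hypothetical normally distributed returns; all names and parameters are assumptions made for the example, not part of the definition above.

```python
import numpy as np

def rho(x):
    """A simple risk measure on finite samples: the negative expected value, rho(X) = -E[X]."""
    return -np.mean(x)

rng = np.random.default_rng(0)
z = rng.normal(loc=0.05, scale=0.2, size=100_000)   # sampled portfolio return Z
a = 0.10                                            # a sure cash amount

# Normalized: rho(0) = 0
print(rho(np.zeros(1)) == 0)                        # True

# Translative: rho(Z + a) = rho(Z) - a  (adding sure cash a lowers the risk by a)
print(np.isclose(rho(z + a), rho(z) - a))           # True

# Monotone: if Z1 <= Z2 in every scenario, then rho(Z2) <= rho(Z1)
z1, z2 = z, z + np.abs(rng.normal(size=z.size))     # Z2 dominates Z1 pointwise
print(rho(z2) <= rho(z1))                           # True
```

For this particular choice of $\rho$ the translation and monotonicity checks hold exactly (up to floating point), mirroring the algebraic identities listed above.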
Set-valued
In a situation with $\mathbb{R}^d$-valued portfolios such that risk can be measured in $m \leq d$ of the assets, a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs.[2]
Mathematically
A set-valued risk measure is a function $R: L_d^p \to \mathbb{F}_M$, where $L_d^p$ is a $d$-dimensional Lp space, $\mathbb{F}_M = \{D \subseteq M: D = \mathrm{cl}(D + K_M)\}$, and $K_M = K \cap M$, where $K$ is a constant solvency cone and $M = \mathbb{R}^m \times \{0\}^{d-m}$ is the set of portfolios of the $m$ reference assets. $R$ must have the following properties:[3]
Normalized: $K_M \subseteq R(0)$ and $R(0) \cap -\mathrm{int}\,K_M = \emptyset$
Translative in M: for all $X \in L_d^p$ and $u \in M$, $R(X + u1) = R(X) - u$
Monotone: if $X_2 - X_1 \in L_d^p(K)$ then $R(X_2) \supseteq R(X_1)$
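As a rough illustration of the set-valued idea (a sketch only, not a general implementation), the code below evaluates the worst-case construction $R(X) = \{u \in M : X + u \in K \text{ almost surely}\}$ for a hypothetical two-asset market whose solvency cone $K$ comes from proportional transaction costs, approximating the set $R(X)$ by the grid points it contains. The bid/ask prices, the choice $M = \mathbb{R}^2$, and the discretisation are all assumptions made for the example.

```python
import numpy as np

# Hypothetical two-asset market with proportional transaction costs:
# asset 2 can be sold for PI_BID or bought for PI_ASK units of asset 1.
PI_BID, PI_ASK = 0.9, 1.1

def is_solvent(x1, x2):
    """Membership test for the solvency cone K: can (x1, x2) be liquidated
    into a componentwise nonnegative position using the bid/ask prices?"""
    return np.where(x2 >= 0, x1 + PI_BID * x2 >= 0, x1 + PI_ASK * x2 >= 0)

def worst_case_R(scenarios, grid):
    """Discretised worst-case set-valued risk measure:
    R(X) ~ {u on the grid : X(w) + u lies in K in every scenario w}."""
    accepted = []
    for u in grid:
        if np.all(is_solvent(scenarios[:, 0] + u[0], scenarios[:, 1] + u[1])):
            accepted.append(u)
    return np.array(accepted)

rng = np.random.default_rng(1)
X = rng.normal(loc=0.0, scale=1.0, size=(500, 2))     # scenarios of the R^2-valued payoff
grid = np.array([(u1, u2) for u1 in np.linspace(-5, 5, 41)
                          for u2 in np.linspace(-5, 5, 41)])
R_X = worst_case_R(X, grid)
print(len(R_X), "grid portfolios are acceptable deposits for X")
```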
Variance (or standard deviation) is not a risk measure in the above sense. This can be seen since it has neither the translation property nor monotonicity. That is, $\mathrm{Var}(X + a) = \mathrm{Var}(X) \neq \mathrm{Var}(X) - a$ for all $a \neq 0$, and a simple counterexample for monotonicity can be found. The standard deviation is a deviation risk measure. To avoid any confusion, note that deviation risk measures, such as variance and standard deviation, are sometimes called risk measures in different fields.
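Both failures can be confirmed with a short numerical check; the two-point random variables below are hypothetical and chosen only for illustration.

```python
import numpy as np

x = np.array([-1.0, 1.0])        # a random variable taking values -1 and 1 with equal weight
a = 0.5                          # a sure cash amount

# Translation fails: Var(X + a) equals Var(X), not Var(X) - a.
print(np.var(x + a), np.var(x), np.var(x) - a)      # 1.0 1.0 0.5

# Monotonicity fails: X1 <= X2 in every state, yet Var(X2) > Var(X1),
# so the dominating position is assigned more "risk", not less.
x1 = np.array([0.0, 0.0])
x2 = np.array([0.0, 2.0])
print(np.var(x1), np.var(x2))                       # 0.0 1.0
```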
Relation to acceptance set
There is a one-to-one correspondence between an acceptance set and a corresponding risk measure. As defined below it can be shown that $R_{A_R}(X) = R(X)$ and $A_{R_A} = A$.[5]
Risk measure to acceptance set
If $\rho$ is a (scalar) risk measure then $A_\rho = \{X \in L^p: \rho(X) \leq 0\}$ is an acceptance set.
If $R$ is a set-valued risk measure then $A_R = \{X \in L_d^p: 0 \in R(X)\}$ is an acceptance set.
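As an illustrative sketch of the scalar case, assuming a finite sample space and the worst-case risk measure $\rho(X) = -\min X$ (both choices are made here only for the example), the acceptance set $A_\rho$ becomes a simple membership test:

```python
import numpy as np

def rho(x):
    """Worst-case risk measure on a finite sample space: rho(X) = -min(X)."""
    return -np.min(x)

def in_acceptance_set(x):
    """X belongs to A_rho exactly when rho(X) <= 0, i.e. X is acceptable as it stands."""
    return rho(x) <= 0

print(in_acceptance_set(np.array([0.2, 0.5, 1.0])))    # True:  no scenario loses money
print(in_acceptance_set(np.array([-0.3, 0.5, 1.0])))   # False: the first scenario is a loss
```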
Acceptance set to risk measure
If $A$ is an acceptance set (in 1-d) then $\rho_A(X) = \inf\{u \in \mathbb{R}: X + u1 \in A\}$ defines a (scalar) risk measure.
If $A$ is an acceptance set then $R_A(X) = \{u \in M: X + u1 \in A\}$ is a set-valued risk measure.
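Continuing the same illustrative setup, the reverse construction $\rho_A(X) = \inf\{u \in \mathbb{R}: X + u1 \in A\}$ can be approximated by a grid search over cash amounts $u$, and, as the one-to-one correspondence above states, it recovers the original risk measure. The grid and sample values are assumptions for the example.

```python
import numpy as np

def rho(x):
    """Worst-case risk measure, as in the sketch above: rho(X) = -min(X)."""
    return -np.min(x)

def rho_from_acceptance_set(x, u_grid):
    """rho_A(X) = inf{u : X + u*1 in A}, where A = {Y : rho(Y) <= 0},
    approximated by the smallest grid value u that makes X + u acceptable."""
    acceptable = [u for u in u_grid if rho(x + u) <= 0]
    return min(acceptable) if acceptable else np.inf

x = np.array([-0.4, 0.1, 0.7])
u_grid = np.linspace(-2.0, 2.0, 4001)           # grid of candidate cash amounts
print(rho(x))                                   # 0.4
print(rho_from_acceptance_set(x, u_grid))       # ~0.4, up to the grid resolution
```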