abstract
| - Extreme value theory is a branch of statistics dealing with the extreme deviations from the median of probability distributions. The general theory sets out to assess the type of limiting probability distributions generated by such processes. Extreme value theory is important for assessing the risk of highly unusual events, such as 100-year floods. Two approaches exist today:
* The tail-fitting approach based on the second theorem of extreme value theory (Theorem II: Pickands (1975), Balkema and de Haan (1974)); this is currently the most common.
* The basic theory approach as described in the book by Burry (reference 2). In general, this conforms to the first theorem of extreme value theory (Theorem I: Fisher and Tippett (1928), and Gnedenko (1943)).
The difference between the two theorems lies in the nature of the data generation. For Theorem I the data are generated over their full range, while for Theorem II data are only generated when they surpass a certain threshold (POT models, or Peaks Over Threshold). The POT approach was developed largely in the insurance business, where only losses (payouts) above a certain threshold are accessible to the insurance company. Strangely, this approach is often applied to Theorem I cases, which poses problems with the basic model assumptions. Extreme value distributions are the limiting distributions for the minimum or the maximum of a very large collection of random observations from the same arbitrary distribution. Emil Julius Gumbel (1958) showed that for any well-behaved initial distribution (i.e., F(x) is continuous and has an inverse), only a few models are needed, depending on whether you are interested in the maximum or the minimum, and also on whether the observations are bounded above or below.
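A minimal numerical sketch of the two settings described above, using the unit exponential as the initial distribution (the block size, threshold, and sample counts are illustrative choices, not prescribed by the theory):

```python
import math
import random

rng = random.Random(42)

# --- Theorem I setting (data generated over the full range):
# for Exp(1) samples, the normalized block maximum M_n - ln(n)
# converges to the standard Gumbel law with CDF exp(-exp(-x)).
block_size, n_blocks = 1000, 5000
maxima = [max(rng.expovariate(1.0) for _ in range(block_size))
          for _ in range(n_blocks)]
shifted = [m - math.log(block_size) for m in maxima]
x = 1.0
empirical = sum(s <= x for s in shifted) / n_blocks
gumbel = math.exp(-math.exp(-x))
print(f"block maxima: empirical CDF {empirical:.3f} vs Gumbel {gumbel:.3f}")

# --- Theorem II setting (peaks over threshold): exceedances of
# Exp(1) over a threshold u are again Exp(1) distributed (a
# generalized Pareto with shape 0), by the memoryless property.
u = 2.0
exceedances = []
while len(exceedances) < 5000:
    v = rng.expovariate(1.0)
    if v > u:
        exceedances.append(v - u)
mean_exc = sum(exceedances) / len(exceedances)
print(f"POT: mean exceedance {mean_exc:.3f} (Exp(1) predicts 1.0)")
```

The contrast mirrors the text: the first part uses every observation and looks at block maxima, while the second discards everything below the threshold, as an insurer only seeing payouts above a deductible would.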
|