A Note on the Aggregation of Event Probabilities
In: Risk analysis: an international journal, Volume 33, Issue 5, pp. 909-914
ISSN: 1539-6924
In this article, multiple forecasts given as probabilities of events are aggregated using two assumptions: calibration and conditional independence. The forecasts are treated as data and the aggregation is based on Bayes's theorem. A measure of discrimination is given and the behavior of the aggregated posterior probability is examined as the number of forecasters grows without bound. The work is motivated by recent research efforts employing large numbers of individual forecasts.
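As a minimal sketch of this style of aggregation, the snippet below pools forecasts under the two stated assumptions, calibration and conditional independence, in which case the posterior odds reduce to a product of the individual forecast odds corrected for the shared prior. The prior of 0.5 and the example forecasts are illustrative, not taken from the article.

```python
import numpy as np

def bayes_pool(forecasts, prior):
    """Aggregate event probabilities from calibrated, conditionally
    independent forecasters via Bayes's theorem.

    Calibration means P(event | forecast p_i) = p_i, so each forecast
    contributes the likelihood ratio (p_i / (1 - p_i)) * ((1 - prior) / prior),
    and the posterior odds collapse to a product of individual odds.
    """
    p = np.asarray(forecasts, dtype=float)
    n = len(p)
    prior_odds = prior / (1.0 - prior)
    # posterior odds = prior_odds**(1 - n) * prod(p_i / (1 - p_i))
    log_odds = (1 - n) * np.log(prior_odds) + np.sum(np.log(p / (1 - p)))
    odds = np.exp(log_odds)
    return odds / (1.0 + odds)

# Three forecasters lean toward the event; the pooled posterior is more
# extreme than any single forecast, and it keeps sharpening as the number
# of (conditionally independent) forecasters grows.
print(bayes_pool([0.7, 0.8, 0.75], prior=0.5))  # ≈ 0.966
```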
In: Decision sciences, Volume 9, Issue 3, pp. 414-420
ISSN: 1540-5915
This paper presents a statistical test of the hypothesis that a time series has been generated by a Poisson process with a stable, but undetermined, intensity. The test has the virtues of easy application and economy with respect to data requirements.
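The abstract does not specify the test statistic, so the sketch below implements one classical test in this spirit: the chi-square dispersion test applied to counts in equal-length intervals, which is easy to apply and economical with data. Treat it as a stand-in, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def dispersion_test(counts):
    """Chi-square dispersion test for a homogeneous Poisson process.

    counts: event counts in equal-length, non-overlapping intervals.
    Under a Poisson process with stable intensity, the index of
    dispersion sum((n_i - nbar)**2) / nbar is approximately chi-square
    distributed with k - 1 degrees of freedom.
    """
    n = np.asarray(counts, dtype=float)
    k = len(n)
    nbar = n.mean()
    statistic = np.sum((n - nbar) ** 2) / nbar
    p_value = stats.chi2.sf(statistic, df=k - 1)
    return statistic, p_value

# Hypothetical yearly incident counts; a small p-value would speak
# against a stable intensity.
statistic, p = dispersion_test([4, 6, 5, 3, 7, 5, 4, 6])
print(f"statistic = {statistic:.2f}, p = {p:.3f}")
```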
In: Risk analysis: an international journal, Volume 10, Issue 3, pp. 401-406
ISSN: 1539-6924
The analysis of probabilistic fault trees often involves the investigation of events that contribute both to the frequency of the top event and to the uncertainty in this frequency. This paper provides a discussion of three measures of the contribution of an event to the total uncertainty in the top event. These measures are known as uncertainty importance measures. Two of these measures are new developments. Each of the measures is shown to have unique advantages and disadvantages. The three measures are based on, respectively, the expected reduction in the variance of the top‐event frequency should the uncertainty in an event be resolved, the same measure based on the log frequency, and a measure based on shifts in the quantiles of the distribution of top‐event frequency.
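A rough sketch of the first of these measures, the expected variance reduction from resolving one event's uncertainty, estimated by double-loop Monte Carlo; the two-event OR-gate model and the lognormal uncertainty distributions are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_event(lam1, lam2):
    # Hypothetical top-event frequency: two basic events under an OR gate.
    return lam1 + lam2

# Lognormal epistemic distributions on the basic-event frequencies
# (illustrative parameters).
n_outer, n_inner = 1000, 1000
lam1 = rng.lognormal(-6.0, 1.0, (n_outer, n_inner))
lam2 = rng.lognormal(-5.0, 0.5, (n_outer, n_inner))
total_var = top_event(lam1, lam2).var()

# Resolve the uncertainty in event 1: fix lam1 along the outer axis and
# average over the remaining uncertainty. The variance of those
# conditional means estimates Var(E[Y | lam1]), which equals the
# expected reduction in Var(Y) from learning lam1.
lam1_fixed = rng.lognormal(-6.0, 1.0, (n_outer, 1))
importance_1 = top_event(lam1_fixed, lam2).mean(axis=1).var()

print(f"Var(Y) = {total_var:.3e}")
print(f"variance-based importance of event 1 = {importance_1:.3e}")
```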
In: Risk analysis: an international journal, Volume 10, Issue 1, pp. 103-109
ISSN: 1539-6924
Probabilistic risk assessments (PRAs) of nuclear power plants proceed by modeling potential accident sequences at the plant of interest. These hypothesized accident sequences begin with initiating events. A very important initiating event phenomenon is the loss of off‐site power (LOSP). This is the interruption of the preferred power supply to the essential and nonessential switchgear buses, resulting in the use of emergency power supplies. If off‐site power is not restored in a reasonable period of time, emergency backup sources of AC and DC power may subsequently fail, resulting in a possible core meltdown. Industry data show 63 LOSP incidents in the operating history of nuclear plants in the United States. Two Bayesian models for the frequency of incidents of LOSP are presented. One model assumes that all nuclear power generating stations share a common incidence rate for LOSP, and a posterior distribution is found for the mean incidence rate for all plants. The second model assumes that the incidence rates for individual plants belong to a superpopulation of incidence rates. Under this model the incidence rate for each plant is unique, but related to the incidence rates of all other plants through the superpopulation. These models provide a vehicle for incorporating initiating event uncertainty into PRAs.
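A minimal sketch of the first (common incidence rate) model under a conjugate gamma prior. The 63 incidents come from the abstract; the exposure time and prior parameters are illustrative assumptions.

```python
from scipy import stats

# Common-incidence-rate model: total incidents ~ Poisson(lambda * T)
# with a conjugate Gamma(a0, b0) prior on lambda (rate parameterization).
incidents = 63
T = 500.0          # hypothetical industry-wide reactor-years of exposure
a0, b0 = 0.5, 1.0  # weakly informative gamma prior (shape, rate)

a_post = a0 + incidents  # conjugate update: shape
b_post = b0 + T          # conjugate update: rate

posterior = stats.gamma(a=a_post, scale=1.0 / b_post)
print(f"posterior mean rate: {posterior.mean():.4f} per reactor-year")
print(f"90% credible interval: {posterior.ppf([0.05, 0.95]).round(4)}")
```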
In: Risk analysis: an international journal, Volume 9, Issue 1, pp. 25-36
ISSN: 1539-6924
Bayesian methods can be very useful in modeling applications used in risk assessments. For example, a Bayesian analysis can be used to provide a probabilistic comparison of different probability models relative to a set of data, as well as to provide uncertainty bounds on the predictions from the various models. For more complex models or composite models, the Bayesian methods easily adapt to include the uncertainty on the weights associated with each of the models that comprise the composite model. Industry data representing the time to recovery of loss of off‐site power at nuclear power plants are used within this paper to demonstrate these aspects of Bayesian analysis.
Summary and conclusions: The Bayesian-based method presented in Section 3 for the calculation of posterior odds provides the analyst with a way of quantifying the adequacy of different probability models for a set of data, and thus replaces subjectivity with an objective criterion. The methods presented in Sections 4 and 5 provide a basis for constructing uncertainty bounds for recovery probability curves. These uncertainty bounds are useful in risk assessments. The bounds capture parametric uncertainties and uncertainties about the relative frequencies of various initiating events. The methods presented in Section 6 demonstrate how to modify a model to incorporate specific information about the site under study.
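A small sketch of a posterior-odds calculation of this kind, estimating each model's marginal likelihood by Monte Carlo averaging of the likelihood over prior draws. The recovery times, candidate models, and priors here are invented for illustration, not the industry data analyzed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical recovery times in hours.
t = np.array([0.3, 0.8, 1.1, 1.9, 2.5, 3.4, 4.2, 6.0])

def marginal_likelihood(logliks):
    """Monte Carlo estimate of p(data | model) = E_prior[likelihood]."""
    return np.exp(logliks).mean()

# Model 1: exponential recovery times, rate ~ Gamma(1, 1) prior.
rates = rng.gamma(1.0, 1.0, 100_000)
ll1 = stats.expon.logpdf(t[:, None], scale=1.0 / rates).sum(axis=0)

# Model 2: lognormal recovery times, mu ~ N(0, 1) prior, sigma fixed at 1.
mus = rng.normal(0.0, 1.0, 100_000)
ll2 = stats.lognorm.logpdf(t[:, None], s=1.0, scale=np.exp(mus)).sum(axis=0)

# Posterior odds under equal prior model probabilities: the ratio of
# marginal likelihoods (the Bayes factor).
odds = marginal_likelihood(ll1) / marginal_likelihood(ll2)
print(f"posterior odds, exponential vs lognormal: {odds:.2f}")
```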
In: Risk analysis: an international journal, Volume 35, Issue 3, pp. 459-475
ISSN: 1539-6924
Layered defenses are necessary for protecting the public from terrorist attacks. Designing a system of such defensive measures requires consideration of how the countermeasures interact. In this article, we present an analysis of a layered security system for the lower Manhattan area, showing how portfolios of security measures can be evaluated through portfolio decision analysis. Consideration is given to the total benefits and costs of the system. Portfolio diagrams are created that help communicate alternatives among stakeholders who hold differing views on the tradeoffs between security and economic activity.
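A toy sketch of the portfolio evaluation step: enumerate the portfolios feasible under a budget and report the non-dominated cost-benefit combinations. The countermeasures, numbers, and additive benefit model are assumptions for this sketch; the article's analysis is richer, accounting for interactions among countermeasures and impacts on economic activity.

```python
from itertools import combinations

# Illustrative countermeasures as (name, cost, benefit) triples.
measures = [
    ("vehicle checkpoints", 4.0, 7.0),
    ("radiation detectors", 2.0, 4.0),
    ("camera network",      1.0, 2.0),
    ("patrol boats",        3.0, 3.0),
]
budget = 6.0

portfolios = []
for k in range(len(measures) + 1):
    for combo in combinations(measures, k):
        cost = sum(c for _, c, _ in combo)
        benefit = sum(b for _, _, b in combo)
        if cost <= budget:
            portfolios.append((cost, benefit, [name for name, _, _ in combo]))

# Keep the non-dominated portfolios: nothing at least as cheap achieves
# strictly more benefit. These form the frontier a portfolio diagram
# would display to stakeholders.
frontier = [p for p in portfolios
            if not any(q[0] <= p[0] and q[1] > p[1] for q in portfolios)]
for cost, benefit, names in sorted(frontier):
    print(f"cost {cost:.1f}, benefit {benefit:.1f}: {names}")
```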
In: Decision analysis: a journal of the Institute for Operations Research and the Management Sciences, INFORMS, Volume 14, Issue 2, pp. 87-107
ISSN: 1545-8504
Though terrorists' potential attack targets are vast, their resources are limited. In this paper, a game-theoretical model is proposed to study both the defender's (government's) and the attacker's (terrorist's) expenditures among multiple targets under budget constraints, in order to guide investment in defense. We study how the defender's and the attacker's equilibrium allocations depend on the budget constraints, target valuations, cost effectiveness of their investments, and inherent defense levels of targets in both sequential-move and simultaneous-move games. The equilibrium solutions are obtained using the Karush–Kuhn–Tucker conditions. At the subgame-perfect Nash equilibrium, the defender's total marginal effects are the same across targets; moreover, these total marginal effects can be decomposed into direct and indirect effects. We also use the multiple-infrastructure and multiple-urban-area data sets to demonstrate the model results. The regression analysis shows that both the attacker's and the defender's equilibrium investments increase with their own target valuations, because a player finds a target more attractive the more highly he or she values it. Interestingly, the attacker's equilibrium investment is negatively correlated with the defender's target valuations, since a higher defender valuation makes it more difficult for the attacker to attack the target successfully. By contrast, the defender's equilibrium investment is positively correlated with the attacker's target valuations, as a higher attacker valuation increases the urgency for the defender to protect the target. To show the utility of the new model, we compare its results with those of a model in which the defender assumes that only a single target will be attacked when multiple targets could actually be attacked. Our results show that the defender suffers higher expected losses if she assumes that the attacker will attack only one target. The analysis of the attacker's and the defender's budget constraints shows that (a) the higher the defender's budget, the less likely it is that her most valuable target will be attacked; (b) a higher proportion of defense resources should be allocated to the most valuable target when the defender's budget is low; and (c) as his budget increases, the attacker concentrates less on the most valuable target and spreads his resources to attack more targets.
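A toy sketch of the sequential-move game solved by backward induction on a coarse grid, rather than the paper's Karush–Kuhn–Tucker analysis; the contest success function, valuations, inherent defenses, and budgets are all invented for illustration.

```python
import numpy as np

# Two-target sequential game: the defender (she) allocates her budget
# first, the attacker (he) observes it and best-responds.
v_def = np.array([10.0, 6.0])    # defender's target valuations
v_att = np.array([8.0, 7.0])     # attacker's target valuations
inherent = np.array([1.0, 0.5])  # inherent defense levels
B_def, B_att = 4.0, 3.0          # budget constraints

def success_prob(a, d):
    # Ratio-form contest success function: probability each attack
    # succeeds, given attack effort a and defense effort d on top of
    # the inherent defenses.
    return a / (a + d + inherent + 1e-12)

def attacker_best_response(d, grid=201):
    # The attacker spreads his budget between the targets to maximize
    # his expected damage.
    best_damage, best_a = -np.inf, None
    for a1 in np.linspace(0.0, B_att, grid):
        a = np.array([a1, B_att - a1])
        damage = np.sum(v_att * success_prob(a, d))
        if damage > best_damage:
            best_damage, best_a = damage, a
    return best_a

# Backward induction: the defender minimizes her expected loss while
# anticipating the attacker's best response (subgame-perfect play).
best_loss, best_d = np.inf, None
for d1 in np.linspace(0.0, B_def, 201):
    d = np.array([d1, B_def - d1])
    loss = np.sum(v_def * success_prob(attacker_best_response(d), d))
    if loss < best_loss:
        best_loss, best_d = loss, d

print(f"defender allocation: {best_d.round(2)}, expected loss: {best_loss:.2f}")
```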
In: Decision analysis: a journal of the Institute for Operations Research and the Management Sciences, INFORMS, Volume 10, Issue 4, pp. 279-291
ISSN: 1545-8504
When multiple redundant probabilistic judgments are obtained from subject matter experts, it is common practice to aggregate their differing views into a single probability or distribution. Although many methods have been proposed for mathematical aggregation, no single procedure has gained universal acceptance. The most widely used procedure is simple arithmetic averaging, which has both desirable and undesirable properties. Here we propose an alternative for aggregating distribution functions that is based on the median cumulative probabilities at fixed values of the variable. It is shown that aggregating cumulative probabilities by medians is equivalent, under certain conditions, to aggregating quantiles. Moreover, when the experts are independent and well calibrated, the median aggregate is better calibrated than the mean aggregate of probabilities, and it produces sharper aggregate distributions when such experts report a common location-scale distribution. We also compare median aggregation to mean aggregation of quantiles.
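A small numerical sketch of median aggregation of cumulative probabilities and of its equivalence to quantile aggregation; the experts' normal distributions (a common location-scale family, as in the sharpness result) are illustrative.

```python
import numpy as np
from scipy import stats

# Each expert reports a normal distribution for the same uncertain quantity.
experts = [stats.norm(10.0, 2.0), stats.norm(12.0, 3.0), stats.norm(9.0, 1.5)]
x = np.linspace(0.0, 20.0, 401)

cdfs = np.array([e.cdf(x) for e in experts])
median_cdf = np.median(cdfs, axis=0)  # median aggregation
mean_cdf = cdfs.mean(axis=0)          # linear opinion pool, for comparison

# For strictly increasing CDFs and an odd number of experts, taking the
# median of cumulative probabilities at each x is the same as taking the
# median of the experts' quantiles at each probability level.
p = 0.5
median_of_quantiles = np.median([e.ppf(p) for e in experts])
crossing = x[np.searchsorted(median_cdf, p)]
print(f"median of the experts' medians: {median_of_quantiles:.2f}")
print(f"x where the median CDF reaches 0.5: {crossing:.2f}")
```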
In: Risk analysis: an international journal, Volume 17, Issue 2, pp. 249-252
ISSN: 1539-6924
There is a need for plant‐specific distributions of incidence and failure rates rather than distributions from pooled data, which are based on the "common incidence rate" assumption. The so‐called superpopulation model satisfies this need through a practically appealing approach that accounts for the variability over the population of plants. Unfortunately, the chosen order in which the integrals with respect to the individual plant rates λi (i = 0, 1, …, m) and the parameters α, β of the Γ‐population distribution are solved seems to drive the solution close to the common incidence rate distribution. It is shown that the solution obtained from interchanging the order and solving the integrals with respect to the individual plant rates by Monte Carlo simulation very quickly provides the plant‐specific distribution. This differing solution behavior may be due to the lack of uniform convergence over the (α, β, λ1, …, λm)‐space. Examples illustrate the difference that may be observed.
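A sketch of the interchanged order of integration: sample the hyperparameters, weight them by the marginal likelihood of the data, and integrate the individual plant rates out by Monte Carlo to obtain a genuinely plant-specific distribution. The plant data and the flat hyperprior are invented for illustration, not the note's examples.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(2)

# Superpopulation model: each plant's rate lambda_i is drawn from a
# Gamma(alpha, beta) population.
x = np.array([2, 0, 5, 1, 3])             # incidents at each plant
t = np.array([8.0, 6.0, 10.0, 7.0, 9.0])  # operating years at each plant

# Sample (alpha, beta) from a flat hyperprior and weight each draw by the
# marginal likelihood of all plants' data (the gamma-Poisson mixture,
# with factors constant in alpha and beta dropped).
n_draws = 50_000
alpha = rng.uniform(0.1, 5.0, n_draws)
beta = rng.uniform(0.1, 5.0, n_draws)
loglik = np.zeros(n_draws)
for xi, ti in zip(x, t):
    loglik += (gammaln(alpha + xi) - gammaln(alpha)
               + alpha * np.log(beta) - (alpha + xi) * np.log(beta + ti))
w = np.exp(loglik - loglik.max())
w /= w.sum()

# Plant-specific posterior for plant 0: given (alpha, beta), the rate is
# Gamma(alpha + x_0, beta + t_0); mixing these conditionals over the
# hyperparameter weights carries out the Monte Carlo integration.
idx = rng.choice(n_draws, size=20_000, p=w)
lam0 = rng.gamma(alpha[idx] + x[0], 1.0 / (beta[idx] + t[0]))
print(f"plant 0: mean rate {lam0.mean():.3f} per year, "
      f"90% interval ({np.quantile(lam0, 0.05):.3f}, "
      f"{np.quantile(lam0, 0.95):.3f})")
```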