The following links lead from the respective local libraries to the full text:
Alternatively, you can try to access the desired document yourself via your local library catalogue.
If you encounter access problems, please feel free to contact us.
141 results
Sort order:
SSRN
Working paper
In: Logistics information management, Volume 11, Issue 4, pp. 213-223
ISSN: 1758-7948
Savage concentrated on building a "small world", which is not a probabilistic but a definite world in which the sure-thing principle works. He reached the Kullback-Leibler information through Bayes' theorem, by which he intended to improve personal probability as the a posteriori probability. However, he stopped his thinking there. Akaike obtained the Akaike Information Criterion (AIC) by starting from the K-L information. AIC enables us to evaluate which model is closest to the true value, which we cannot recognise directly. If we call the combination of the sure-thing principle, personal probability, Bayes' theorem and AIC the logical structure of information, the author argues that the same structure exists in the Japanese production and distribution system.
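The AIC mentioned in this abstract has a simple closed form, AIC = 2k - 2 ln L for a model with k parameters and maximized likelihood L. A minimal Python sketch, with invented log-likelihoods and parameter counts (not from the paper):

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * k - 2 * log_likelihood

# Illustrative (invented) fits of two candidate models to the same data:
aic_simple = aic(log_likelihood=-120.5, k=3)    # fewer parameters
aic_complex = aic(log_likelihood=-119.8, k=6)   # better fit, more parameters
preferred = "simple" if aic_simple < aic_complex else "complex"
```

Here the complex model fits slightly better, but the penalty of 2 per parameter makes the simpler model preferable.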
In: Springer Texts in Statistics
Bayes Factors for Forensic Decision Analyses with R provides a self-contained introduction to computational Bayesian statistics using R. With its primary focus on Bayes factors supported by data sets, this book features an operational perspective, practical relevance, and applicability, keeping theoretical and philosophical justifications limited. It offers a balanced approach to three naturally interrelated topics:
Probabilistic Inference: relies on the core concept of Bayesian inferential statistics to help practicing forensic scientists in the logical and balanced evaluation of the weight of evidence.
Decision Making: shows how Bayes factors are interpreted in practical applications to help address questions of decision analysis involving the use of forensic science in the law.
Operational Relevance: combines inference and decision, backed up with practical examples and complete sample code in R, including sensitivity analyses and discussion of how to interpret results in context.
Over the past decades, probabilistic methods have established a firm position as a reference approach for the management of uncertainty in virtually all areas of science, including forensic science, with Bayes' theorem providing the fundamental logical tenet for assessing how new information (scientific evidence) ought to be weighed. Central to this approach is the Bayes factor, which clarifies the evidential meaning of new information by providing a measure of the change in the odds in favor of a proposition of interest when going from the prior to the posterior distribution. Bayes factors should guide the scientist's thinking about the value of scientific evidence and form the basis of logical and balanced reporting practices, thus representing essential foundations for rational decision making under uncertainty.
This book would be relevant to students, practitioners, and applied statisticians interested in inference and decision analyses in the critical field of forensic science. It could be used to support practical courses on Bayesian statistics and decision theory at both undergraduate and graduate levels, and will be of equal interest to forensic scientists and practitioners of Bayesian statistics for driving their evaluations and the use of R for their purposes. This book is Open Access.
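The odds-updating role of the Bayes factor described in this blurb is simple arithmetic: posterior odds = Bayes factor × prior odds. The book works in R; the following is a hypothetical Python sketch of the same calculation, with invented likelihood values and prior odds:

```python
def bayes_factor(p_e_h1, p_e_h2):
    """Likelihood ratio of the evidence under two competing propositions."""
    return p_e_h1 / p_e_h2

# Invented numbers: the evidence is 100x more probable under H1 than under
# H2, and the prior odds in favour of H1 are 1:50.
bf = bayes_factor(0.8, 0.008)
posterior_odds = bf * (1 / 50)   # posterior odds = BF x prior odds
```

Even with prior odds strongly against H1, a Bayes factor of 100 moves the posterior odds to about 2:1 in its favour.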
Nutrition deficiency is a problem experienced by many children under five years old in Indonesia. It arises because parents lack knowledge about the nutrition their children need. To address this, the researcher built a diagnostic system that provides information about children's nutrition deficiency, in order to help prevent such problems. The system is an expert system for nutrition deficiency using the Naïve Bayes Classifier method. This method is derived from Bayes' theorem, where the probability value of an object is classified according to specific requirements. Several studies have shown the Naïve Bayes Classifier to perform best among classification methods, which is why it was employed in this research. In the system, the indication of nutrition deficiency is determined from an anthropometric table covering height, weight, and age, together with additional indications to make the result more specific. Data from 20 patients were then examined using the confusion matrix method. The results showed that the system was 90% accurate, demonstrating that it can be very helpful for classifying nutrition deficiency and could be applied in local government clinics and hospitals. Keywords: Anthropometry, Deficiency, Naïve Bayes
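The Naïve Bayes classification described in this abstract scores each class by its prior times the product of the per-feature likelihoods, assuming the features are independent. A minimal Python sketch; the feature names and all probabilities are invented for illustration, not taken from the paper's data:

```python
def naive_bayes_posterior(priors, likelihoods, observed):
    """P(class | features) is proportional to P(class) * prod P(feature | class)."""
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for feature in observed:
            score *= likelihoods[cls][feature]
        scores[cls] = score
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

# Invented conditional probabilities for two anthropometric indications:
priors = {"deficient": 0.3, "normal": 0.7}
likelihoods = {
    "deficient": {"low_weight_for_age": 0.8, "low_height_for_age": 0.6},
    "normal":    {"low_weight_for_age": 0.1, "low_height_for_age": 0.2},
}
posterior = naive_bayes_posterior(
    priors, likelihoods, ["low_weight_for_age", "low_height_for_age"]
)
```

With both indications present, the posterior for "deficient" exceeds 0.9 despite its lower prior.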
BASE
In: Wiadomości statystyczne / Główny Urząd Statystyczny, Polskie Towarzystwo Statystyczne: czasopismo Głównego Urzędu Statystycznego i Polskiego Towarzystwa = The Polish statistician, Volume 2013, Issue 12, pp. 23-36
ISSN: 2543-8476
The author recalls that 250 years ago, in British scientific circles, there appeared a theorem on the probability of causes, now called Bayes' theorem (formulated in 1763 by Thomas Bayes). It refers to the situation where some event has occurred and one needs to assess the probability of another event that was its reason, i.e. to determine which event is the cause and which the effect. Despite the passing years, Bayes' approach to conditional probability is constantly confirmed and remains important in statistical surveys.
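The "probability of causes" setting sketched in this abstract is the classic inversion: given an observed effect, Bayes' theorem weighs which cause most probably produced it. A small Python illustration with invented priors and likelihoods:

```python
def bayes(p_cause, p_effect_given_cause, p_effect):
    """P(cause | effect) = P(effect | cause) * P(cause) / P(effect)."""
    return p_effect_given_cause * p_cause / p_effect

# Invented setup: an observed effect can stem from one of two causes.
p_c1, p_c2 = 0.4, 0.6                  # prior probabilities of the causes
p_e_c1, p_e_c2 = 0.9, 0.2              # probability of the effect under each
p_e = p_c1 * p_e_c1 + p_c2 * p_e_c2    # total probability of the effect
p_c1_given_e = bayes(p_c1, p_e_c1, p_e)   # how probable cause 1 is, given the effect
```

Although cause 1 is the less probable a priori, observing the effect raises its probability to 0.75, because the effect is much more likely under it.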
In accordance with the vision of Indogrosir, which aims to become a National Asset, since September 2001 Indogrosir has also introduced a modern minimarket franchise system under the name OMI (Outlet Partner of Indogrosir). Indogrosir cooperates with government and private cooperatives, MSMEs, gas stations, and general entrepreneurs who want to develop their business in the form of modern minimarkets. A decision support system is a computer-based system that helps in making decisions to solve unstructured problems. The Naive Bayes method is a probabilistic classification technique based on Bayes' theorem that assumes attribute independence (no link between attributes) in the classification process. Put simply, a Naive Bayes classification assumes that the presence or absence of a particular feature of a class is unrelated to the features of other classes. This decision support system is expected to facilitate decision making in selecting recipients of new partnerships, where policies and decisions are made based on predetermined criteria.
BASE
SSRN
Working paper
SSRN
In: Political studies, Volume 6, Issue 1, pp. 124-127
ISSN: 0032-3217
Critically evaluates the methodology and assumptions of an article by Farlie and Budge on the high predictive value of social background characteristics as indicators of political activism. Points out the failure to make full use of the framework of Bayes' theorem. Includes a rejoinder by Farlie and Budge. Focuses on the calculation of probabilities of being a councillor in Colchester, England.
SSRN
Working paper
In: Studi e saggi
Cognition implies perception and judgement. Perception consists in interpreting a sensory stimulus: it is a process common to every animal with a brain, and can be described as a Bayesian inference in which the interpretive algorithm is stored in long-term memory. Judgement, on the other hand, relates to the comparison between two perceptions coded in a language, in that short-term memory presents the first perception once again and compares it with the second. This operation is called the inverted Bayes' theorem; it does not presuppose an algorithm, but builds a new one through comparison. The book shows how the algorithmic leaps related to linguistic operations capture aspects of reality which cannot be reached through Bayesian chains of inference following the same algorithm. Nowadays, we experience the successes of Artificial Intelligence (AI), which, however, works through the direct Bayes' theorem and speeds up recursive chains but does not resort to algorithmic leaps; therefore, it does not contribute to human language.
In: Risk analysis: an international journal, Volume 1, Issue 1, pp. 11-27
ISSN: 1539-6924
A quantitative definition of risk is suggested in terms of the idea of a "set of triplets". The definition is extended to include uncertainty and completeness, and the use of Bayes' theorem is described in this connection. The definition is used to discuss the notions of "relative risk", "relativity of risk", and "acceptability of risk".
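The "set of triplets" definition above represents each risk as a scenario, its probability, and its consequence. A tiny Python sketch of what such a set looks like in practice; the scenarios and numbers are invented for illustration:

```python
# Risk as a set of triplets (scenario, probability, consequence):
risks = [
    ("minor leak",   0.05,  10_000),     # likely, cheap
    ("tank rupture", 0.001, 2_000_000),  # rare, costly
]

# Two common summaries of such a set:
expected_loss = sum(p * cost for _scenario, p, cost in risks)
worst_case = max(cost for _scenario, _p, cost in risks)
```

Note how the summaries disagree: the frequent cheap scenario drives little of the expected loss, while the rare rupture dominates both it and the worst case.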
In: Revista Derecho y Proceso Penal nº 30/2013
SSRN
Working paper
Public service is an activity or series of activities carried out to fulfil service needs, in accordance with statutory regulations, for every citizen and resident with respect to goods, services, or administrative services provided by public service providers. The services provided to the community by the village government apparatus have, in general, not satisfied many people: they are too complicated on procedural grounds, carry high fees, and take a very long time, so they tend to be ineffective and inefficient. This study predicts the community's satisfaction or dissatisfaction with the performance of village officials in improving services, using the Naïve Bayes algorithm. The Naive Bayes algorithm is a classification method based on probability and statistics, proposed by the British scientist Thomas Bayes, which predicts future opportunities from past experience and is therefore known as Bayes' theorem. The research used a questionnaire filled out by 50 respondents, 10% of the total population of 5,539 permanent residents of the village of Lenek Lauk. The results were analysed with the Naïve Bayes algorithm and evaluated for accuracy: the measurement produced 92.26% accuracy, and the AUC value on the ROC curve was 0.924. DOI: 10.29408/jit.v3i2.2314
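The accuracy figure reported in this abstract comes from a confusion matrix, counting correct predictions over all predictions. A minimal Python sketch; the counts below are invented for a 50-respondent survey and are not the study's actual figures:

```python
def accuracy(tp, tn, fp, fn):
    """Share of correct predictions in a 2x2 confusion matrix."""
    return (tp + tn) / (tp + tn + fp + fn)

# Invented confusion-matrix counts for 50 respondents:
# tp = satisfied correctly predicted, tn = dissatisfied correctly predicted,
# fp/fn = the two kinds of misclassification.
acc = accuracy(tp=30, tn=16, fp=2, fn=2)
```

AUC, by contrast, is computed from the ROC curve over the classifier's ranked probability scores, so it is not a single ratio of counts like this.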
BASE
In: Springer eBook Collection
1 Probability and its laws -- 1.1 Uncertainty and probability -- 1.2 Direct measurement -- Exercises 1(a) -- 1.3 Betting behaviour -- 1.4 Fair bets -- 1.5 The Addition Law -- Exercises 1(b) -- 1.6 The Multiplication Law -- 1.7 Independence -- Exercises 1(c) -- 2 Probability measurements -- 2.1 True probabilities -- Exercises 2(a) -- 2.2 Elaboration -- Exercises 2(b) -- 2.3 The disjunction theorem -- Exercises 2(c) -- 2.4 The sum theorem -- Exercises 2(d) -- 2.5 Partitions -- 2.6 Symmetry probability -- Exercises 2(e) -- 3 Bayes' theorem -- 3.1 Extending the argument -- Exercises 3(a) -- 3.2 Bayes' theorem -- 3.3 Learning from experience -- Exercises 3(b) -- 3.4 Zero probabilities in Bayes' theorem -- 3.5 Example: disputed authorship -- 4 Trials and deals -- 4.1 The product theorem -- 4.2 Mutual independence -- Exercises 4(a) -- 4.3 Trials -- 4.4 Factorials and combinations -- Exercises 4(b) -- 4.5 Binomial probabilities -- Exercises 4(c) -- 4.6 Multinomial probabilities -- Exercises 4(d) -- 4.7 Deals -- 4.8 Probabilities from information -- Exercises 4(e) -- 4.9 Properties of deals -- 4.10 Hypergeometric probabilities -- Exercises 4(f) -- 4.11 Deals from large collections -- Exercises 4(g) -- 5 Random variables -- 5.1 Definitions -- 5.2 Two or more random variables -- Exercises 5(a) -- 5.3 Elaborations with random variables -- 5.4 Example: capture-recapture -- 5.5 Example: job applications -- Exercises 5(b) -- 5.6 Mean and standard deviation -- Exercises 5(c) -- 5.7 Measuring distributions -- 5.8 Some standard distributions -- Exercises 5(d) -- 6 Distribution theory -- 6.1 Deriving standard distributions -- 6.2 Combining distributions -- Exercises 6(a) -- 6.3 Basic theory of expectations -- 6.4 Further expectation theory -- Exercises 6(b) -- 6.5 Covariance and correlation -- Exercises 6(c) -- 6.6 Conditional expectations -- 6.7 Linear regression functions -- Exercises 6(d) -- 7 Continuous distributions -- 7.1 Continuous random variables -- 7.2 Distribution functions -- Exercises 7(a) -- 7.3 Density functions -- 7.4 Transformations and expectations -- Exercises 7(b) -- 7.5 Standard continuous distributions -- Exercises 7(c) -- 7.6 Two continuous random variables -- 7.7 Example: heat transfer -- Exercises 7(d) -- 7.8 Random variables of mixed type -- Exercises 7(e) -- 7.9 Continuous distribution theory -- Exercises 7(f) -- 8 Frequencies -- 8.1 Exchangeable propositions -- 8.2 The finite characterization -- Exercises 8(a) -- 8.3 De Finetti's theorem -- 8.4 Updating -- Exercises 8(b) -- 8.5 Beta prior distributions -- Exercises 8(c) -- 8.6 Probability and frequency -- 8.7 Calibration -- 9 Statistical models -- 9.1 Parameters and models -- 9.2 Exchangeable random variables -- Exercises 9(a) -- 9.3 Samples -- 9.4 Measuring mean and variance -- Exercises 9(b) -- 9.5 Exchangeable parametric models -- 9.6 The normal location model -- Exercises 9(c) -- 9.7 The Poisson model -- 9.8 Linear estimation -- Exercises 9(d) -- 9.9 Postscript -- Appendix: Solutions to exercises.