Qualitative Methodology, Hypothesis Testing and the Needs Assessment
In: Journal of sociology & social welfare, Volume 10, Issue 1
ISSN: 1949-7652
In: YGAME-D-23-00100
SSRN
Working paper
In: Crime Science, Volume 7, Issue 1
ISSN: 2193-7680
In: Environmental management: an international journal for decision makers, scientists, and environmental auditors, Volume 62, Issue 2, p. 183-189
ISSN: 1432-1009
In: Frontiers in Quantitative Psychology and Measurement, Volume 2
In: HELIYON-D-23-13184
SSRN
The way forward in system safety engineering will be quantitative, and this paper proposes an innovative method for generating a uniform way to understand the composite of testing and experience. In recent years, new approaches to exact hypothesis testing have been developed that do not assume a Gaussian probability distribution for success or failure rates. These techniques eliminate the errors introduced by the Gaussian assumption, which matters for the small failure rates common in modern systems development, and they offer considerable promise as a basis for the new direction. This paper presents a theory for exact hypothesis testing, combining two 18th-century theorems to derive an equation for the probability distribution of the failure rate using only the number of tests and the observed count of failures. The concept is then extended to show how operational experience and expert opinion can be combined to update test results. The objective of this work is to derive the general likelihood distribution of the failure rate given any set of test results, and then to examine the implications for testing requirements, design, and interpretation. The particular application considered here is safety assessment for a military weapons system. While the theory is developed for deriving the exact failure-rate distribution in system safety applications, it is equally valid for investigating success rates and for interpreting performance evaluation tests.
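A minimal sketch of the kind of exact calculation this abstract describes, assuming a uniform prior and a binomial test model (the paper's actual prior and update rule may differ): with k observed failures in n tests, Bayes' rule applied to a binomial likelihood gives a Beta(k+1, n−k+1) posterior for the failure rate, with no Gaussian approximation. The function name and the 95% bound are illustrative choices, not from the paper.

```python
from math import comb

def failure_rate_posterior(n_tests, n_failures, grid=100_000):
    """Exact posterior of the failure rate p given k failures in n tests,
    under a uniform prior: Beta(k+1, n-k+1). No Gaussian assumption.
    Returns the posterior mean and a 95% upper credible bound."""
    n, k = n_tests, n_failures
    norm = (n + 1) * comb(n, k)          # Beta normalising constant
    mean = (k + 1) / (n + 2)             # posterior mean (Laplace's rule)
    dp = 1.0 / grid
    cdf, quantile95 = 0.0, 1.0
    for i in range(1, grid + 1):
        p = (i - 0.5) * dp               # midpoint-rule numerical integration
        cdf += norm * p**k * (1 - p)**(n - k) * dp
        if cdf >= 0.95:
            quantile95 = p
            break
    return mean, quantile95

# e.g. 200 tests with zero failures: the exact posterior still places
# non-trivial probability on failure rates above 1%.
mean, upper = failure_rate_posterior(n_tests=200, n_failures=0)
```

This illustrates the abstract's point about small failure rates: with zero observed failures a Gaussian interval would collapse to zero width, while the exact Beta posterior still yields a meaningful upper bound (here about 1.5%).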
BASE
In: Statistical papers
ISSN: 1613-9798
Abstract: This study examines the varying coefficient model in tail index regression. The varying coefficient model is an efficient semiparametric model that avoids the curse of dimensionality when many covariates are included in the model. Indeed, the varying coefficient model is useful in mean, quantile, and other regressions, and tail index regression is no exception. However, although the varying coefficient model is flexible, leaner and simpler models are preferred in applications. It is therefore important to evaluate whether the estimated coefficient function varies significantly with the covariates. If the effect of the non-linearity of the model is weak, the varying coefficient structure can be reduced to a simpler model, such as a constant or zero. Accordingly, hypothesis tests for model assessment in the varying coefficient model have been discussed for mean and quantile regression, but no such results exist for tail index regression. In this study, we investigate the asymptotic properties of an estimator and provide a hypothesis testing method for varying coefficient models in tail index regression.
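The abstract does not specify its estimator. As background only, a standard Hill estimator sketch shows what tail index estimation looks like in the simplest constant-coefficient case (the special case the paper's test would reduce to); the function name, the Pareto sample, and the choice k = 500 are illustrative, not from the paper.

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index gamma, computed from the
    k largest order statistics of the sample."""
    xs = sorted(sample, reverse=True)    # descending order statistics
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

random.seed(0)
# Pareto data with shape alpha = 2, so the true tail index is gamma = 1/2.
data = [random.paretovariate(2.0) for _ in range(20_000)]
gamma_hat = hill_estimator(data, k=500)
```

In the paper's setting the constant gamma would be replaced by a coefficient function of the covariates, and the proposed test asks whether that function in fact varies or can be reduced to a constant like the one estimated here.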
In: Achieving Democracy : Democratization in Theory and Practice
We consider the quantum multiple hypothesis testing problem, focusing on the case of hypotheses represented by pure states. A sequential adaptive algorithm is derived and analyzed first. This strategy exhibits a decay rate of the error probability, with respect to the expected number of measurements, greater than the optimal decay rate of fixed-length methods. A more elaborate scheme is then developed by serially concatenating multiple implementations of the first scheme; each stage takes as its a priori hypothesis probabilities the a posteriori probabilities of the previous stage. We show that, with a fixed number of concatenations, the expected number of measurements decreases considerably. We also analyze a strategy based on an asymptotically large concatenation of the initial scheme, demonstrating that the expected number of measurements is then upper bounded by a constant, even in the case of zero average error probability. A lower bound on the expected number of measurements in the zero-error-probability setting is also derived.
This work was supported in part by the Agencia Estatal de Investigación, Ministerio de Ciencia e Innovación, of the Spanish Government, under Grant RED2018-102668-T and Grant PID2019-104958RB-C41; in part by the Catalan Government under Grant 2017 SGR 578 AGAUR; and in part by QuantumCAT within the European Regional Development Fund (ERDF) Program of Catalunya under Grant 001-P-001644.
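A classical analogue (not the quantum measurement scheme itself) of the concatenation idea — feeding each stage's posterior probabilities in as the next stage's prior, and stopping as soon as one hypothesis is decisive — can be sketched as a sequential Bayesian test between two hypotheses. All names, parameters, and the coin-flip model below are illustrative assumptions.

```python
import random

def sequential_test(prior, p0, p1, true_bias, threshold=0.99, seed=0):
    """Sequential binary hypothesis test between H0 (success prob p0)
    and H1 (success prob p1). The posterior after each observation
    serves as the prior for the next, and sampling stops once the
    posterior is decisive, so the number of observations is random.
    Returns (accepted hypothesis, final posterior, observations used)."""
    rng = random.Random(seed)
    post = prior                         # P(H1); becomes the next prior
    n = 0
    while threshold > post > 1 - threshold:
        x = 1 if rng.random() < true_bias else 0   # one observation
        l1 = p1 if x else 1 - p1         # likelihood under H1
        l0 = p0 if x else 1 - p0         # likelihood under H0
        post = post * l1 / (post * l1 + (1 - post) * l0)  # Bayes update
        n += 1
    return (1 if post >= threshold else 0), post, n

hyp, post, n_obs = sequential_test(prior=0.5, p0=0.2, p1=0.8, true_bias=0.8)
```

The qualitative behaviour mirrors the abstract's claim: because the test stops adaptively, the expected number of observations is far below what a fixed-length test needs for the same error probability, and a more informative starting prior (a later stage in the concatenation) stops even sooner.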
BASE