The Decision Process and Entrepreneurial Returns*
In: The Manchester School, Band 34, Heft 3, S. 247-269
ISSN: 1467-9957
FrontMatter -- Reviewers -- Preface -- Contents -- Tables, Figures, and Boxes -- Abbreviations and Acronyms -- Summary -- 1 Introduction -- 2 Evolving Concepts of Disability -- 3 Individuals with Disabilities -- 4 SSA's Disability Programs -- 5 The Listing of Impairments-Overview -- 6 The Listing of Impairments-Issues -- 7 Findings and Recommendations -- APPENDIXES -- A Biographical Sketches of Committee Members -- B Committee Charge -- C Interim Report, December 2005 -- D Appendix Tables.
In: The European journal of the history of economic thought, Band 19, Heft 3, S. 355-384
ISSN: 1469-5936
In: American economic review, Band 94, Heft 5, S. 1558-1590
ISSN: 1944-7981
We propose a model of addiction based on three premises: (i) use among addicts is frequently a mistake; (ii) experience sensitizes an individual to environmental cues that trigger mistaken usage; (iii) addicts understand and manage their susceptibilities. We argue that these premises find support in evidence from psychology, neuroscience, and clinical practice. The model is tractable and generates a plausible mapping between behavior and the characteristics of the user, substance, and environment. It accounts for a number of important patterns associated with addiction, gives rise to a clear welfare standard, and has novel implications for policy.
In: Administrative Science Quarterly, Band 21, Heft 2, S. 246
In: Journal of consumer research: JCR ; an interdisciplinary journal, Band 1, Heft 1, S. 51
ISSN: 1537-5277
SSRN
Working paper
We consider the problem of approximating the values and the optimal policies in risk-averse discounted Markov Decision Processes with infinite horizon. We study the properties of the rolling horizon and the approximate rolling horizon procedures, proving bounds which imply the convergence of the procedures when the horizon length tends to infinity. We also analyze the effects of uncertainties on the transition probabilities, the cost functions and the discount factors.
BASE
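The rolling horizon procedure in the abstract above solves a finite-horizon truncation of the infinite-horizon problem and applies only its first action. The following is a minimal risk-neutral sketch for a small tabular MDP with known transition matrices `P` and cost vectors `c` (the paper's risk-averse criterion would replace the plain expectation in the backup):

```python
import numpy as np

def rolling_horizon_action(P, c, gamma, state, N):
    """First action of an N-step finite-horizon approximation, N >= 1.

    P[a][s, s'] : transition probabilities, c[a][s] : per-stage costs,
    gamma : discount factor. Risk-neutral sketch; a risk-averse variant
    would replace the expectation P[a] @ V with a risk measure.
    """
    n_states = P[0].shape[0]
    V = np.zeros(n_states)                      # terminal value V_N = 0
    for _ in range(N):                          # backward induction
        Q = np.array([c[a] + gamma * P[a] @ V for a in range(len(P))])
        V = Q.min(axis=0)
    return int(Q[:, state].argmin())            # greedy first action
```

Re-solving this truncated problem at every stage, from the current state, yields the rolling horizon policy whose convergence (as N grows) the bounds above address.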
In: Communications and control engineering
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, opening the door to the curse of dimensionality and making practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. It includes innovative material on MDPs, both in constrained settings and with uncertain transition properties; a game-theoretic method for solving MDPs; theories for developing rollout-based algorithms; and details of approximate stochastic annealing, a population-based on-line simulation-based algorithm. The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research. The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.
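The setting the blurb describes, where transition probabilities cannot be written down but a simulator yields samples of costs and next states, is the domain of model-free methods. The sketch below uses standard Q-learning as an illustration of that sample-based idea (it is not one of the book's specific algorithms); `sample_step` is an assumed simulator interface:

```python
import random

def q_learning(sample_step, n_states, n_actions, gamma,
               episodes=2000, alpha=0.1, eps=0.1):
    """Minimal model-free Q-learning sketch: learns from simulation
    samples (s, a) -> (cost, next_state) without explicit transition
    matrices, minimizing expected discounted cost.
    `sample_step(s, a)` is an assumed simulator returning (cost, s')."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action choice
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = min(range(n_actions), key=lambda x: Q[s][x])
        cost, s2 = sample_step(s, a)
        # temporal-difference update toward cost + gamma * min_a' Q(s', a')
        Q[s][a] += alpha * (cost + gamma * min(Q[s2]) - Q[s][a])
        s = s2
    return Q
```

The population-based algorithms the book covers (evolutionary policy iteration, model reference adaptive search) pursue the same goal with very different search machinery over the policy space.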
In: Decision sciences, Band 6, Heft 1, S. 14-24
ISSN: 1540-5915
When working with large-scale Markov decision processes, one normally uses the policy iteration approach developed by Howard [1] and modified by White [3]. White's modification makes use of the method of successive approximations. Computational experience has shown that for many processes the rate of convergence of the successive approximations is very slow. In this paper, techniques for speeding convergence are discussed, and numerical examples and computational experience showing the relative merits of the various approaches are presented.
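One classic way to speed up successive approximations of the kind the abstract discusses is to stop on the span (rather than the sup-norm) of the update and extrapolate, in the spirit of MacQueen's error bounds; whether it matches the paper's specific techniques is an assumption. A minimal sketch for a tabular reward-maximizing MDP:

```python
import numpy as np

def value_iteration_span(P, r, gamma, tol=1e-6, max_iter=10_000):
    """Successive approximations with a span-based stopping rule and
    bound-midpoint extrapolation (MacQueen-style), one standard way to
    accelerate convergence over a plain sup-norm test.
    P[a][s, s'] : transitions, r[a][s] : rewards."""
    n = P[0].shape[0]
    V = np.zeros(n)
    for _ in range(max_iter):
        TV = np.max([r[a] + gamma * P[a] @ V for a in range(len(P))], axis=0)
        d = TV - V
        # when span(d) is small, V* lies between
        # TV + gamma/(1-gamma)*d.min() and TV + gamma/(1-gamma)*d.max()
        if d.max() - d.min() < tol * (1 - gamma) / gamma:
            return TV + gamma / (1 - gamma) * (d.max() + d.min()) / 2
        V = TV
    return V
```

Because the span ignores the constant component of the error, this test can trigger many iterations before the sup-norm criterion would, which is precisely the kind of speed-up at issue.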
In: American politics research series
In: IJIO-D-22-00480
SSRN
A corporate action is any action by an issuer that confers equal rights on all shareholders, such as dividends, rights issues, and stock splits. A dividend is the distribution of company profits to shareholders in proportion to their ownership of the capital. Management decisions to seek new capital or funds through the stock exchange typically serve debt repayment, company goals, or expansion through product innovation, improving and maintaining company stability for better prospects in the future and thereby encouraging the government to build a better economy in the next period. Corporate actions apply to all companies, not only public companies. Forms of corporate action commonly carried out by issuers include the distribution of dividends, both in cash and in shares.
BASE
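The proportional distribution described in the abstract above is simple arithmetic; a minimal sketch, with hypothetical shareholder names:

```python
def dividend_allocation(total_dividend, holdings):
    """Split a total cash dividend across shareholders in proportion to
    shares held, i.e. by percentage of ownership.
    `holdings` maps a (hypothetical) shareholder name to share count."""
    total_shares = sum(holdings.values())
    return {name: total_dividend * n / total_shares
            for name, n in holdings.items()}
```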