Apriority and applied mathematics
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Vol. 92, Issue 3, pp. 349-370
ISSN: 1573-0964
In: Applied Economic Perspectives and Policy, Vol. 21, Issue 2, pp. 563-578
ISSN: 2040-5804
The power of the symbolic mathematics program Maple is demonstrated for teaching topics from microeconomics (the benefits of portfolio diversification) and econometrics (a geometric perspective on multicollinearity). In particular, symbolic mathematics programs appear to have excellent potential to strengthen the development and integration of graphical and algebraic intuition for a wide variety of economic problems.
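The Maple worksheets themselves are not reproduced in the abstract. As a rough analogue, the two-asset diversification example can be sketched symbolically in Python with SymPy; the symbols and the minimum-variance calculation below are illustrative, not taken from the article.

import sympy as sp

w, s1, s2, rho = sp.symbols('w sigma1 sigma2 rho', positive=True)

# Variance of a two-asset portfolio with weight w on asset 1.
port_var = w**2 * s1**2 + (1 - w)**2 * s2**2 + 2*w*(1 - w)*rho*s1*s2

# Minimum-variance weight, found symbolically.
w_star = sp.solve(sp.diff(port_var, w), w)[0]
print(sp.simplify(w_star))

# For rho < 1 the minimized variance lies below min(sigma1, sigma2)**2,
# which is the algebraic statement of the diversification benefit.
print(sp.simplify(port_var.subs(w, w_star)))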
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Vol. 133, Issue 1/2, pp. 43-57
ISSN: 1573-0964
In: Defence science journal: a journal devoted to science & technology in defence, Vol. 54, Issue 3, pp. 303-308
ISSN: 0011-748X
In: Mathematics Preprint Archive, Vol. 2003, Issue 6, pp. 587-639
SSRN
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 4, pp. 219-233
ISSN: 1532-7612
Neural networks are excellent mapping tools for complex financial data. Their mapping capabilities, however, do not always translate into good generalizability for financial prediction models. Increasing the number of nodes and hidden layers in a neural network produces a better mapping of the data, since the number of parameters available to the model increases, but this is detrimental to the generalizability of the model, since the model memorizes idiosyncratic patterns in the data. A neural network model can be expected to be more generalizable if the architecture is made less complex by using fewer input nodes. In this study we simplify the neural network by eliminating the input nodes that contribute least to the prediction of the desired outcome. We also provide a theoretical relationship for the sensitivity of the output variables to the input variables under certain conditions. This research initiates an effort to identify methods that improve the generalizability of neural networks in financial prediction tasks, using merger and bankruptcy models. The results indicate that incorporating more variables that appear relevant in a model does not necessarily improve prediction performance.
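As a hedged illustration of the pruning idea (the paper's exact sensitivity measure, network, and data are not given in the abstract), each input's influence can be estimated by perturbation and the network refit on the most influential inputs; sklearn's MLP stands in for the original model.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)   # only two inputs matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

# Sensitivity of the output to input i: mean |f(x + h*e_i) - f(x)| / h.
h = 1e-2
base = net.predict(X_tr)
sens = np.array([np.mean(np.abs(net.predict(X_tr + h * np.eye(8)[i]) - base)) / h
                 for i in range(8)])

# Keep the most influential inputs and refit the simpler network.
keep = np.argsort(sens)[-2:]
small = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X_tr[:, keep], y_tr)
print("kept inputs:", keep, " test R^2:", small.score(X_te[:, keep], y_te))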
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 4, pp. 201-218
ISSN: 1532-7612
With the emergence of new capital markets and the liberalization of stock markets in recent years, investors' interest in international diversification has increased. International diversification gives investors a larger basket of foreign securities to choose from as part of their portfolio assets, enhancing the reward-to-volatility ratio. This benefit would be limited, however, if national equity markets tend to move together in the long run. This paper therefore studies the co-movement between stock markets in major developed countries and those in Asian emerging markets using the concept of cointegration. We find that there is co-movement between some of the developed and emerging markets, but some emerging markets do differ from the developed markets with which they share a long-run equilibrium relationship. Furthermore, interdependence between most of the developed and emerging markets has increased since the 1987 Stock Market Crash, and it intensified after the 1997 Asian Financial Crisis. With this increasing co-movement between developed and emerging stock markets, the benefits of international diversification become limited.
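A minimal sketch of the kind of cointegration check the paper performs, using the Engle-Granger test from statsmodels; the synthetic series with a shared stochastic trend below stand in for the actual market indices, which are not reproduced in the abstract.

import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(size=1000))      # shared stochastic trend
developed = common + rng.normal(size=1000)
emerging = 0.8 * common + rng.normal(size=1000)

t_stat, p_value, _ = coint(developed, emerging)
print(f"p-value = {p_value:.3f}")   # small value -> reject 'no cointegration'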
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 4, pp. 247-260
ISSN: 1532-7612
In this paper we apply a statistical model combining a random coefficient regression model and a latent class regression model. The EM algorithm is used for maximum likelihood estimation of the unknown parameters in the model, and it is pointed out how this leads to straightforward handling of a number of different variance/covariance restrictions. Finally, the model is used to analyze how consumers' preferences for eight coffee samples relate to sensory characteristics of the coffees. Within this application, the analysis corresponds to a model-based version of the so-called external preference mapping.
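The estimation steps are not spelled out in the abstract. The sketch below runs EM for a plain two-class mixture of linear regressions, which captures the latent class component of the model; the random coefficients and the covariance restrictions discussed in the paper are omitted, and the data are simulated.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(-1, 1, n)
z = rng.random(n) < 0.5                          # hidden class membership
y = np.where(z, 2 + 3 * x, -1 - 2 * x) + 0.3 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

beta = np.array([[1.0, 1.0], [-1.0, -1.0]])      # two regression lines
sigma, pi = 1.0, 0.5
for _ in range(100):
    # E-step: posterior responsibility of class 0 for each observation.
    d0 = pi * norm.pdf(y, X @ beta[0], sigma)
    d1 = (1 - pi) * norm.pdf(y, X @ beta[1], sigma)
    r = d0 / (d0 + d1)
    # M-step: weighted least squares per class, then noise level and weight.
    for k, w in enumerate([r, 1 - r]):
        W = X * w[:, None]
        beta[k] = np.linalg.solve(W.T @ X, W.T @ y)
    resid2 = r * (y - X @ beta[0])**2 + (1 - r) * (y - X @ beta[1])**2
    sigma = np.sqrt(resid2.mean())
    pi = r.mean()
print(beta, sigma, pi)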
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 3, pp. 175-190
ISSN: 1532-7612
This article studies graph-theoretical conditions under which communication leads to consensus among players about their decisions in circumstances where there are more than two players and they interact in pairs without public announcement. It is shown that consensus on their decisions can be guaranteed if the communication graph contains no cycle, where none of the requirements on players' knowledge is imposed as in the standard model of knowledge with a partitional information structure.
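The acyclicity condition is straightforward to test mechanically. The sketch below checks an undirected communication graph for cycles with union-find; the example graphs are illustrative, and the consensus dynamics themselves are not reproduced.

def contains_cycle(n_players, edges):
    """Return True if the undirected communication graph has a cycle."""
    parent = list(range(n_players))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:        # endpoints already connected: this edge closes a cycle
            return True
        parent[ru] = rv
    return False

print(contains_cycle(4, [(0, 1), (1, 2), (2, 3)]))   # False: a path, consensus guaranteed
print(contains_cycle(3, [(0, 1), (1, 2), (2, 0)]))   # True: a triangle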
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 3, pp. 191-200
ISSN: 1532-7612
Inventory problems generally have a structure that can be exploited for computational purposes. Here, we look at the duals of two seemingly unrelated inventory models that suggest an interesting duality between discrete-time optimal control and optimization over an ordered sequence of variables. Concepts from conjugate duality and generalized geometric programming are used to establish the duality.
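The abstract invokes conjugate duality without stating the dual pair. For reference (this is the standard machinery, not the paper's specific formulation), the construction rests on the Fenchel conjugate and the Fenchel-Rockafellar dual pair:

f^{*}(y) \;=\; \sup_{x}\,\bigl(\langle x, y\rangle - f(x)\bigr),
\qquad
\min_{x}\; f(x) + g(Ax)
\;\longleftrightarrow\;
\max_{y}\; -f^{*}(A^{\mathsf T} y) - g^{*}(-y).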
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 3, pp. 141-154
ISSN: 1532-7612
Ensemble classification techniques such as bagging (Breiman, 1996a), boosting (Freund & Schapire, 1997) and arcing algorithms (Breiman, 1997) have received much attention in the recent literature. Such techniques have been shown to lead to reduced classification error on unseen cases, and even when the ensemble is trained well beyond zero training-set error, it continues to exhibit improved classification error on unseen cases. Despite many studies and conjectures, the reasons behind this improved performance, and an understanding of the underlying probabilistic structures, remain open and challenging problems. More recently, diagnostics such as the edge and the margin (Breiman, 1997; Freund & Schapire, 1997; Schapire et al., 1998) have been used to explain the improvements made when ensemble classifiers are built. This paper presents some interesting results from an empirical study performed on a set of representative datasets using the decision tree learner C4.5 (Quinlan, 1993). An exponential-like decay in the variance of the edge is observed as the number of boosting trials is increased; that is, boosting appears to 'homogenise' the edge. Some initial theory is presented which indicates that a lack of correlation between the errors of individual classifiers is a key factor in this variance reduction.
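The homogenisation effect can be reproduced in spirit with off-the-shelf tools. The sketch below tracks the variance of the training margins (a close relative of the edge) across boosting rounds using sklearn's AdaBoost, standing in for the paper's C4.5-based setup and datasets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)

sign = 2 * y - 1                                  # labels recoded as +/-1
for t, f in enumerate(clf.staged_decision_function(X), start=1):
    if t % 40 == 0:
        margin = sign * f                         # margin of each training case
        print(f"round {t:3d}  var(margin) = {margin.var():.4f}")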
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 2, pp. 131-140
ISSN: 1532-7612
The interactions between linear and quadratic programming and regression analysis are explored by both statistical and operations research methods. Estimation and optimization problems are formulated in two different ways: on the one hand, linear and quadratic programming problems are formulated and solved by statistical methods; on the other hand, the solution of the linear regression model with constraints makes use of the simplex methods of linear or quadratic programming. Examples are given to illustrate the ideas.
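One direction of this interplay, regression under constraints solved as an optimization problem, can be sketched with scipy's bounded least-squares solver, used here as a convenient stand-in for an explicit quadratic-programming formulation; the data and the non-negativity constraint are illustrative.

import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 3))
b = A @ np.array([1.5, 0.0, 2.0]) + 0.1 * rng.normal(size=100)

res = lsq_linear(A, b, bounds=(0, np.inf))   # least squares subject to beta >= 0
print(res.x)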
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 2, pp. 67-86
ISSN: 1532-7612
Log-linear modeling is a popular statistical tool for analysing a contingency table. This presentation focuses on an alternative approach to modeling ordinal categorical data. The technique, based on orthogonal polynomials, provides a much simpler method of model fitting than the conventional approach of maximum likelihood estimation, as it requires neither iterative calculations nor the fitting and re-fitting needed to search for the best model. Another advantage is that quadratic and higher-order effects can readily be included, in contrast to conventional log-linear models, which incorporate linear terms only. The focus of the discussion is the application of the new parameter estimation technique to multi-way contingency tables with at least one ordered variable. This will also be done by considering singly and doubly ordered two-way contingency tables. It will be shown by example that the resulting parameter estimates are numerically similar to the corresponding maximum likelihood estimates for ordinal log-linear models.
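The core device, polynomials orthonormal with respect to the marginal proportions of the ordered variable, can be built by a weighted Gram-Schmidt pass; the scores and weights below are illustrative, not from the presentation.

import numpy as np

def ortho_polys(scores, p, degree):
    """Columns: polynomials in `scores`, orthonormal under the weights p."""
    V = np.vander(scores, degree + 1, increasing=True).astype(float)
    B = np.empty_like(V)
    for u in range(degree + 1):
        b = V[:, u].copy()
        for v in range(u):
            b -= ((p * b) @ B[:, v]) * B[:, v]   # strip lower-order components
        B[:, u] = b / np.sqrt(p @ b**2)          # normalize under the weights
    return B

scores = np.array([1., 2., 3., 4.])              # ordered category scores
p = np.array([0.1, 0.4, 0.3, 0.2])               # marginal proportions
B = ortho_polys(scores, p, degree=2)
print(B.T @ np.diag(p) @ B)                      # identity, up to rounding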
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 2, pp. 107-129
ISSN: 1532-7612
A variety of pivot column selection rules based upon gradient criteria (including the steepest edge) have been explored to improve the efficiency of the primal simplex method. Simplex-like algorithms have been proposed embedding the gradient direction (GD), which includes all variables whose increase or decrease leads to an improvement in the objective function. Recently a framework has been developed in the simplex method to incorporate the reduced-gradient direction (RGD), consisting of only those variables whose increase leads to an improvement in the objective function. In this paper, the results are extended to embed GD in the simplex method based on the concept of combining directions. Mathematical properties related to combining directions, as well as to deleting a variable from all basic directions, are also presented.
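As a toy illustration of the gradient machinery behind these rules (not the paper's algorithm): for a standard-form LP with a given basis, the reduced costs of the nonbasic variables form the reduced gradient, and RGD bundles every improving nonbasic variable into one direction rather than choosing a single entering column. The problem data below are made up.

import numpy as np

c = np.array([-3., -2., 0., 0.])          # min c'x  s.t.  Ax = b, x >= 0
A = np.array([[1., 1., 1., 0.],
              [2., 1., 0., 1.]])
basic, nonbasic = [2, 3], [0, 1]          # start from the slack basis

B, N = A[:, basic], A[:, nonbasic]
y = np.linalg.solve(B.T, c[basic])        # simplex multipliers
reduced = c[nonbasic] - N.T @ y           # reduced costs (reduced gradient)
improving = [j for j, r in zip(nonbasic, reduced) if r < 0]
print("reduced costs:", reduced, " RGD index set:", improving)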
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 8, Issue 1, pp. 33-42
ISSN: 1532-7612
This paper establishes a new methodology for calculating the optimal sample size when a hypothesis test between two binomial populations is performed. The problem is addressed from the Bayesian point of view, with prior information expressed through a Dirichlet distribution. The approach of this paper sets an upper bound for the posterior risk and then chooses as 'optimum' the combined sample size for which the likelihood of the data does not satisfy this bound. The combined sample size is divided equally between the two binomials. Numerical examples are discussed in which the two proportions are equal either to a fixed or to a random value.
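The paper's exact bound and loss function are not reproduced in the abstract. The sketch below searches for the smallest equally split sample size whose prior-predictive expected posterior risk, under 0-1 loss and with Beta priors standing in for the Dirichlet formulation, meets an illustrative bound.

import numpy as np

rng = np.random.default_rng(4)
a1, b1, a2, b2 = 2., 2., 2., 2.        # prior parameters (illustrative)
bound, sims, draws = 0.1, 400, 400

def expected_risk(m):
    """Prior-predictive mean of min(P(H0|data), P(H1|data)), H0: p1 >= p2."""
    risks = []
    for _ in range(sims):
        p1, p2 = rng.beta(a1, b1), rng.beta(a2, b2)
        x1, x2 = rng.binomial(m, p1), rng.binomial(m, p2)
        t1 = rng.beta(a1 + x1, b1 + m - x1, draws)
        t2 = rng.beta(a2 + x2, b2 + m - x2, draws)
        post_h1 = np.mean(t1 < t2)                 # P(p1 < p2 | data)
        risks.append(min(post_h1, 1 - post_h1))
    return np.mean(risks)

m = 5
while expected_risk(m) > bound:        # smallest per-group size meeting the bound
    m += 5
print("per-group size:", m, " combined sample size:", 2 * m)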