Computational models pervade all branches of the exact sciences and have in recent times also proved to be of immense utility in some of the traditionally 'soft' sciences such as ecology, sociology and politics. This volume is a collection of cutting-edge research papers on the application of a variety of computational models and tools to the analysis, interpretation and solution of vexing real-world problems and issues in economics, management, ecology and global politics, contributed by prolific researchers in the field.
The authors of this book analyze the socio-economic and psychological problems faced by People with Disabilities (PWDs) and their families. The study collected data from 2,15,811 (i.e., 215,811) people using a fuzzy linguistic questionnaire, or interviews in the case of respondents who were not literate. The data was gathered with the help of five Non-Governmental Organizations (NGOs) in northern Tamil Nadu. A reader may well wonder whether the Tamils (natives of Tamil Nadu) had ever spoken about people with disability. Over 2,000 years ago, Tamil heroic poetry already did: the 28th poem of the Purananuru, which describes methods of warfare, mentions eight types of disability (present from birth, not caused by disease, accidents or war). In order, they are: (1) blindness; (2) aborted embryo; (3) hunchback (lying with the body disposed in a crooked position); (4) dwarfism; (5) dumbness (speech impairment); (6) deafness (hearing impairment); (7) improper development of the bones; (8) bewilderment of mind (such persons cannot stand up, and their minds never develop) (reference: Purananuru, poem 28).
Game theory is an excellent topic for a non-majors quantitative course, as it develops mathematical models to understand human behavior in social, political, and economic settings. The variety of applications can appeal to a broad range of students. Additionally, students can learn mathematics through playing games, something many choose to do in their spare time! This text also explores the ideas of game theory through the rich context of popular culture: it contains sections applying the concepts to popular culture and suggests films, television shows, and novels with themes from game theory. The questions in each of these sections are intended to serve as essay prompts for writing assignments. Many colleges offer courses in quantitative reasoning for all students. One model for such a course is to provide students with a single cohesive topic, ideally one that can pique the curiosity of students with wide-ranging academic interests and limited mathematical background. This text is intended for use in such a course, and may also be appropriate for a high school enrichment course. Students can work through the text independently or as a class; the questions throughout help students discover the key ideas themselves. Although the materials have been classroom-tested, this PDF version is still in draft form. A full instructor's guide, which includes classroom activities, discussion questions, key ideas, solutions, and other implementation suggestions, is available to verified course instructors by emailing the author.
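As a concrete taste of the models such a course develops, the sketch below finds the pure-strategy Nash equilibria of a two-player game by brute force. The Prisoner's Dilemma payoffs used here are a standard textbook example, not a game taken from this text.

```python
from itertools import product

# Payoffs for a two-player game (standard Prisoner's Dilemma numbers).
# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}
strategies = [C, D]

def pure_nash_equilibria(payoffs, strategies):
    """Strategy pairs where neither player gains by deviating unilaterally."""
    equilibria = []
    for r, c in product(strategies, repeat=2):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0]
                       for alt in strategies)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1]
                       for alt in strategies)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs, strategies))  # mutual defection only
```

The search simply checks every strategy profile against all unilateral deviations, which is exactly the definition of a pure-strategy Nash equilibrium; for the Prisoner's Dilemma the only such profile is mutual defection, even though mutual cooperation pays both players more.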
This study analyses the leverage dynamics of Turkish non-financial firms over the last 20 years using a confidential and unique firm-level dataset. Results of dynamic panel estimations reveal that financial development fosters corporate leverage while government indebtedness inhibits it. Both impacts are more pronounced for private firms than for public firms. Moreover, although improvements in financial development foster long-term debt usage for both SMEs and large firms, this impact appears stronger for SMEs. Conspicuously, the results reveal that SMEs suffer much more than large firms during crowding-out periods of government leverage, while both SMEs and large firms benefit during crowding-in periods. In addition, higher business risk hinders the corporate leverage of private firms and SMEs, which is not the case for large firms or public firms. The results are robust to alternative firm-size classification schemes and alternative model specifications.
The need to distribute massive quantities of multimedia content to multiple users has increased tremendously in the last decade. The current solution to this ever-growing demand is the Content Delivery Network (CDN), an application-layer architecture that nowadays handles the majority of multimedia traffic. This distribution problem has also motivated the study of new solutions such as the Information-Centric Networking paradigm, whose aim is to add content delivery capabilities to the network layer by decoupling data from its location. In both architectures, cache servers play a key role, allowing efficient use of network resources for content delivery. As a consequence, the study of cache performance evaluation techniques has found new momentum in recent years. In this dissertation, we propose a framework for the performance modeling of a cache ruled by the Least Recently Used (LRU) discipline. Our framework is data-driven since, in addition to the usual mathematical analysis, we address two data-related problems: the first is to propose a model that is a priori both simple and representative of the essential features of the measured traffic; the second is the estimation of the model parameters from traffic traces. The contributions of this thesis concern each of the above tasks. For our first contribution, we propose a parsimonious traffic model featuring a document catalog that evolves in time. We achieve this by allowing each document to be available for a limited (random) period of time. To make a sensible proposal, we apply the "semi-experimental" method to real data. These "semi-experiments" consist of two phases: first, we randomize the traffic trace to break specific dependence structures in the request sequence; second, we simulate an LRU cache with the randomized request sequence as input.
For a candidate model, we refute an independence hypothesis if the resulting hit-probability curve differs significantly from the one obtained from the original trace. With the insights obtained, we propose a traffic model based on so-called Poisson cluster point processes. Our second contribution is a theoretical estimation of the cache hit probability for a generalization of the latter model. For this objective, we use the Palm distribution of the model to set up a probability space in which a single document can be singled out for analysis. In this setting, we obtain an integral formula for the average number of misses. Finally, by means of a scaling of the system parameters, we obtain an asymptotic expansion of this expression for large cache sizes. This expansion quantifies the error of a heuristic widely used in the literature, known as the "Che approximation", thereby justifying and extending it. Our last contribution concerns the estimation of the model parameters. We tackle this problem for the simpler and widely used Independent Reference Model. By considering its parameter (a popularity distribution) to be a random sample, we implement a Maximum Likelihood method to estimate it. This method allows us to seamlessly handle the censoring phenomena occurring in traces. By measuring the cache performance obtained with the resulting model, we show that this method provides a more representative model of the data than typical ad-hoc methodologies. ; Today's Internet carries an ever-heavier traffic load owing to the proliferation of video sites, notably YouTube. Cache servers play a key role in coping with this dizzying growth in demand. These servers are deployed close to the user, and they dynamically retain the most popular content via online algorithms known as "cache policies".
With this infrastructure, content providers can satisfy demand efficiently while reducing the use of network resources. Cache servers are the basic building blocks of Content Delivery Networks (CDNs), which, according to Cisco, would deliver more than 70% of video traffic by 2019. From an operational point of view, it is therefore very important to be able to estimate the efficiency of a cache server as a function of the policy employed and the capacity. More specifically, this thesis addresses the following question: what is the minimum one must invest in a cache server to reach a given level of performance? Built on models that do not take into account how the content catalog evolves over time, the state of the art provided inexact answers to this question. In this work, we propose new stochastic models, based on point processes, that make it possible to incorporate the catalog dynamics into the performance analysis. In this framework, we develop a rigorous asymptotic analysis for estimating the performance of a cache server under the "Least Recently Used" (LRU) policy. We validate the theoretical estimates against long Internet traffic traces, proposing a maximum-likelihood method for estimating the model parameters.
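The "Che approximation" discussed in this abstract can be sketched concretely in its simplest setting: an LRU cache of size C under the Independent Reference Model with a Zipf popularity law. One solves for the characteristic time t_C in C = Σ_i (1 − exp(−p_i·t_C)) and estimates each document's hit probability as 1 − exp(−p_i·t_C). The catalog size, Zipf exponent, and cache size below are illustrative assumptions, not values from the thesis.

```python
import math
import random
from collections import OrderedDict
from itertools import accumulate

# Illustrative parameters (assumptions, not taken from the thesis).
N, ALPHA, C = 1000, 0.8, 100             # catalog size, Zipf exponent, cache size
weights = [1.0 / (i + 1) ** ALPHA for i in range(N)]
total = sum(weights)
p = [w / total for w in weights]          # IRM request probabilities

def characteristic_time(p, C, lo=0.0, hi=1e7, iters=100):
    """Solve sum_i (1 - exp(-p_i * t)) = C for t by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        filled = sum(1 - math.exp(-pi * mid) for pi in p)
        lo, hi = (mid, hi) if filled < C else (lo, mid)
    return (lo + hi) / 2

tC = characteristic_time(p, C)
che_hit_ratio = sum(pi * (1 - math.exp(-pi * tC)) for pi in p)

def lru_hit_ratio(p, C, n_requests=200_000, seed=0):
    """Simulate an LRU cache fed by i.i.d. (IRM) requests, for comparison."""
    rng = random.Random(seed)
    cum = list(accumulate(p))
    cache, hits = OrderedDict(), 0
    for d in rng.choices(range(len(p)), cum_weights=cum, k=n_requests):
        if d in cache:
            hits += 1
            cache.move_to_end(d)           # refresh recency on a hit
        else:
            cache[d] = True
            if len(cache) > C:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_requests

print(f"Che estimate: {che_hit_ratio:.3f}, simulation: {lru_hit_ratio(p, C):.3f}")
```

Under the IRM the two numbers agree closely, which is the accuracy that the asymptotic expansion in the thesis quantifies rigorously; the thesis's contribution is extending this beyond the IRM to catalogs that evolve over time.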
This paper discusses the Central Limit Theorem (CLT) and its applications. It introduces what the CLT is and how it can be applied to real life, and builds a conceptual understanding of the theorem through various examples and visuals. It then discusses applications of the CLT in fields such as computer science, psychology, and political science. Finally, the author proposes a new mathematical theorem as an application of the CLT and provides a proof. The new theorem relates the expected value and the probabilities of random variables, using the CLT to provide a link between the two.
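The CLT's central claim can be illustrated with a short simulation: averages of many independent draws from a non-normal distribution, here Uniform(0,1), cluster around the true mean with spread shrinking like 1/√n, and their distribution approaches a normal curve. The sample sizes and seed below are illustrative choices, not values from the paper.

```python
import random
import statistics

# Illustrative CLT simulation: repeatedly average n i.i.d. Uniform(0,1)
# draws and look at how the resulting sample means are distributed.
def sample_means(n_samples=20_000, n=50, seed=1):
    """Draw n_samples means, each of n i.i.d. Uniform(0,1) variables."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.random() for _ in range(n))
            for _ in range(n_samples)]

means = sample_means()
mu, sigma = statistics.fmean(means), statistics.stdev(means)
# Uniform(0,1) has mean 1/2 and variance 1/12, so the CLT predicts the means
# to be approximately Normal(0.5, sqrt((1/12)/50)) ~ Normal(0.5, 0.0408),
# with about 68% of them falling within one standard deviation of the mean.
within_one_sigma = sum(abs(m - mu) < sigma for m in means) / len(means)
print(f"mean={mu:.4f}  sd={sigma:.4f}  P(|X-mu|<sd)={within_one_sigma:.3f}")
```

The observed standard deviation matches the 1/√n prediction, and the one-sigma coverage lands near the normal distribution's 68%, even though the underlying uniform draws are far from normal.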
International audience ; We are interested in modeling a degradation phenomenon of the crack-propagation type, and in the nature of the preventive maintenance policies that can be implemented to prevent failure of the equipment.
National audience ; Degradation models have been the subject of many papers over the last decade. Several approaches exist for modeling the degradation of a system. Two broad families can be considered: continuous-degradation models, and finite-state-space models (also called multi-state models).
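As an illustration of the first family, continuous-degradation models are often based on gamma processes with stationary independent increments. The sketch below (all parameter values are illustrative assumptions, not taken from these papers) simulates periodic inspections with a preventive replacement threshold set below the failure level, in the spirit of the preventive maintenance policies discussed.

```python
import random

# Illustrative continuous-degradation model: a gamma process observed at
# periodic inspections, with a preventive replacement threshold below the
# failure level. All parameter values are assumptions for demonstration.
SHAPE_PER_UNIT_TIME, SCALE = 0.5, 1.0   # gamma-increment parameters
FAILURE_LEVEL = 10.0                    # degradation level causing breakdown
PREVENTIVE_LEVEL = 8.0                  # replace at inspection if exceeded
DT = 1.0                                # inspection period

def run_until_replacement(rng):
    """Simulate one component; return ('preventive'|'corrective', time)."""
    level, t = 0.0, 0.0
    while True:
        # Stationary independent gamma increment over one inspection period.
        level += rng.gammavariate(SHAPE_PER_UNIT_TIME * DT, SCALE)
        t += DT
        if level >= FAILURE_LEVEL:
            return "corrective", t       # failed before we could intervene
        if level >= PREVENTIVE_LEVEL:
            return "preventive", t       # planned replacement averts failure

rng = random.Random(42)
outcomes = [run_until_replacement(rng)[0] for _ in range(2_000)]
frac_preventive = outcomes.count("preventive") / len(outcomes)
print(f"fraction of preventive replacements: {frac_preventive:.2f}")
```

Tuning the preventive threshold and the inspection period trades planned-replacement cost against the risk of an unplanned failure, which is precisely the kind of question such maintenance policies are designed to answer.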
Engineering Principles of Combat Modeling and Distributed Simulation is the first book of its kind to address the three perspectives that simulation engineers must master for successful military- and defense-related modeling: the operational view (what needs to be modeled); the conceptual view (how to do combat modeling); and the technical view (how to conduct distributed simulation). Through methods from the fields of operations research, computer science, and engineering, readers are guided through the history, current training practices, and modern methodology related to combat modeling and distributed simulation systems. … [Amazon.com]
Abstract: During the Covid-19 pandemic, the government adopted a policy of online learning as an alternative mode of instruction. Online learning was expected to build students' character, and it was predicted that students' character and online learning together contributed to students' mathematics learning outcomes. This study aimed to examine the positive correlation between students' character and online learning on the one hand and mathematics learning outcomes on the other. The research was quantitative, using a correlational method; the sampling technique was non-probability sampling, and the data were analyzed with SPSS version 25. The results showed that the correlation between students' character and online learning and mathematics learning outcomes was 0.440. It is concluded that there is a correlation between students' character and online learning and the mathematics learning outcomes of students at SDN 5 Panjer. Keywords: character, online learning, mathematics
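The reported value of 0.440 is a correlation coefficient; as a reference point, the sketch below computes the basic Pearson product-moment correlation that such SPSS analyses build on. The score lists are hypothetical illustrations, not the study's data.

```python
import math

# Minimal Pearson correlation between two equal-length score samples.
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

character_scores = [70, 82, 65, 90, 75, 60, 88, 72]   # hypothetical
math_outcomes = [68, 80, 70, 85, 72, 66, 90, 75]      # hypothetical
r = pearson_r(character_scores, math_outcomes)
print(f"r = {r:.3f}")
```

A coefficient of 0.440 indicates a moderate positive association: higher character scores tend to accompany higher mathematics outcomes, though far from deterministically.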