Multi-object tracking is one of the fundamental problems in computer vision. Almost all multi-object tracking systems consist of two main components: detection and data association. In the detection step, object hypotheses are generated in each frame of a sequence. Detections that belong to the same target are then linked together to form final trajectories; this latter step is called data association. Several challenges render this problem difficult, such as occlusion, background clutter, and pose changes. This dissertation addresses these challenges by tackling the data association component of tracking and contributes three novel methods for solving data association. First, this dissertation presents a new framework for multi-target tracking built on a novel data association technique that uses the Generalized Maximum Clique Problem (GMCP) formulation. The majority of current methods, such as bipartite matching, incorporate only a limited temporal locality of the sequence into the data association problem, which makes them inherently prone to ID switches and to difficulties caused by long-term occlusions, cluttered backgrounds, and crowded scenes. Our approach, in contrast, incorporates both motion and appearance in a global manner: unlike methods limited to a few frames, it incorporates the whole temporal span and solves the data association problem for one object at a time. The Generalized Maximum Clique Problem (GMCP) formulation is used to solve the optimization problem of our data association method. The proposed method is supported by superior results on several benchmark sequences. GMCP leads us to a more accurate approach to multi-object tracking by considering all the pairwise relationships in a batch of frames; however, it has some limitations. First, it finds target trajectories one by one, forgoing joint optimization.
Second, the optimization uses a greedy solver based on local neighborhood search, making it prone to local minima. Finally, the GMCP tracker is slow, which is a burden for time-sensitive applications. To address these problems, we propose a new graph-theoretic problem called the Generalized Maximum Multi Clique Problem (GMMCP). The GMMCP tracker retains all the advantages of the GMCP tracker while addressing its limitations. We present a solution to GMMCP in which no simplification is assumed in either the problem formulation or the optimization. GMMCP is NP-hard, but it can be formulated as a Binary Integer Program whose solution can be found efficiently for small- and medium-sized tracking problems. To improve speed, Aggregated Dummy Nodes are used to model occlusions and missed detections; this also reduces the size of the input graph without resorting to heuristics. We show that, with this speed-up, our tracker lends itself to a real-time implementation, increasing its potential usefulness in many applications. In tests on several tracking datasets, the proposed method outperforms competing methods. Thus far we have assumed that the number of people does not exceed a few dozen. However, this is not always the case: in scenarios such as marathons, political rallies, or religious rites, the number of people in a frame may reach a few hundred or even a few thousand. Tracking in high-density crowd sequences is a challenging problem for several reasons. Human detection methods often fail to localize objects correctly in extremely crowded scenes, which limits the use of data-association-based tracking methods. Additionally, existing multi-target trackers are hard to extend to highly crowded scenes, because the large number of targets increases the computational complexity.
Furthermore, the small apparent target size makes it challenging to extract features that discriminate targets from their surroundings. Finally, we present a tracker that addresses the above-mentioned problems. We formulate online crowd tracking as a Binary Quadratic Program in which the detection and data association problems are solved together. Our formulation employs each target's individual information, in the form of appearance and motion, as well as contextual cues in the form of neighborhood motion, spatial proximity, and grouping constraints. Due to the large number of targets, state-of-the-art commercial quadratic programming solvers fail to find the solution to the proposed optimization efficiently. To overcome the computational complexity of available solvers, we propose to use a recent Modified Frank-Wolfe algorithm with SWAP steps. The proposed tracker can track hundreds of targets efficiently and improves on state-of-the-art results by a significant margin on high-density crowd sequences. ; 2016-05-01 ; Ph.D. ; Engineering and Computer Science, Computer Science ; Doctoral ; This record was generated from author submitted information.
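The Frank-Wolfe idea used for the crowd tracker's binary quadratic program can be illustrated with a minimal sketch: relax the binary assignment to a product of simplices (one unit of mass per target's candidate-detection group) and repeatedly move toward the vertex favored by the current gradient. This is a toy illustration under our own assumptions (a concave relaxation, invented function and variable names), not the dissertation's actual solver with SWAP steps:

```python
import numpy as np

def frank_wolfe_assignment(Q, c, groups, iters=200):
    """Maximize x^T Q x + c^T x over a product of simplices,
    one simplex per target's group of candidate detections.
    A hedged sketch assuming a concave relaxation."""
    n = len(c)
    x = np.zeros(n)
    for g in groups:                        # start at a uniform feasible point
        x[g] = 1.0 / len(g)
    for k in range(iters):
        grad = 2 * Q @ x + c                # gradient of the quadratic objective
        s = np.zeros(n)
        for g in groups:                    # linear oracle: best vertex per group
            s[g[int(np.argmax(grad[g]))]] = 1.0
        x = x + (2.0 / (k + 2)) * (s - x)   # standard diminishing step size
    return x
```

With a purely linear objective (Q = 0), the iterate jumps to the best detection per target on the first step; the quadratic term is where pairwise appearance/motion affinities between targets would enter.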
In: Decision analysis: a journal of the Institute for Operations Research and the Management Sciences, INFORMS, Volume 8, Issue 1, pp. 78-80
ISSN: 1545-8504
David J. Caswell (" Analysis of National Strategies to Counter a Country's Nuclear Weapons Program ") is an officer in the U.S. Air Force and a research affiliate with the Center for International Security and Cooperation at Stanford University. David has served in various positions ranging from operational simulation development to operations analysis for national intelligence. He currently serves as an operations analyst in support of regional air and space employment in the Pacific. David received his Ph.D. in management science and engineering at Stanford University. His current research continues to apply computer science and operations research methods to gain insights into nuclear policy and other international security issues. Address: http://www.stanford.edu/group/ERRG/davidc1.htm ; e-mail: david.caswell33@gmail.com . Kjell Hausken (" Governments' and Terrorists' Defense and Attack in a T-Period Game ") has since 1999 been a professor of economics and societal safety at the University of Stavanger, Norway. His research fields are strategic interaction, risk analysis, reliability, conflict, and terrorism. He holds a Ph.D. (thesis: "Dynamic Multilevel Game Theory") from the University of Chicago (1990–1994), and was a postdoc at the Max Planck Institute for the Study of Societies (Cologne) from 1995 to 1998 and a visiting scholar at Yale School of Management from 1989 to 1990. He holds a doctorate program degree in administration from the Norwegian School of Economics and Business Administration, and an M.Sc. degree in electrical engineering from the Norwegian Institute of Technology. He completed military service at the Norwegian Defence Research Establishment, has published 110 articles, and is on the editorial board for Theory and Decision and Defence and Peace Economics. Address: Faculty of Social Sciences, University of Stavanger, N-4036 Stavanger, Norway; e-mail: kjell.hausken@uis.no . Ronald A. 
Howard (" Analysis of National Strategies to Counter a Country's Nuclear Weapons Program ") is a professor of management science and engineering in the School of Engineering at Stanford University. Professor Howard directs teaching and research in the Decision Analysis Program of the department, and is the director of the Decisions and Ethics Center, which examines the efficacy and ethics of social arrangements. He defined the profession of decision analysis in 1964 and has supervised more than 80 doctoral theses in decision analysis and related areas. His experience includes dozens of decision analysis projects that range over virtually all fields of application, from investment planning to research strategy, and from hurricane seeding to nuclear waste isolation. He has been a consultant to several companies and was a founding director and chairman of Strategic Decisions Group. He is president of the Decision Education Foundation, which he and colleagues founded to teach decision skills to young people. He has written four books, dozens of technical papers, and provided editorial service to seven technical journals. His society affiliations have included the Institute of Electrical and Electronics Engineers (Fellow); The Institute of Management Sciences, which he served as president, and the Institute for Operations Research and the Management Sciences (INFORMS) (Fellow). Continuing research interests are improving the quality of decisions, life-and-death decision making, and the creation of a coercion-free society. In 1986 he received the Frank P. Ramsey Medal "for Distinguished Contributions in Decision Analysis" from the Decision Analysis Special Interest Group of the Operations Research Society of America (the predecessor to the Decision Analysis Society of INFORMS). In 1998 he received from INFORMS the first award for the Teaching of Operations Research/Management Science Practice. In 1999 he was elected to the National Academy of Engineering. 
Address: Management Science and Engineering, Huang Engineering Center, 475 Via Ortega, Stanford University, Stanford, CA 94305-4121; e-mail: rhoward@stanford.edu . Joseph B. ("Jay") Kadane (" Partial-Kelly Strategies and Expected Utility: Small Edge Asymptotics ") is Leonard J. Savage University Professor of Statistics and Social Sciences, Emeritus, at Carnegie Mellon University. He received a B.S. in mathematics from Harvard and a Ph.D. in statistics from Stanford. He was recently elected to the American Academy of Arts and Sciences. His theoretical interests center on subjective Bayesian theory. His current applied interests include Internet security, medicine, law, physics, marketing, and air pollution. He serves as an expert witness in legal cases. His most recent book is Principles of Uncertainty, which is scheduled to be released in May 2011 by Chapman and Hall and will be available free on the Web for any noncommercial purpose. Address: Department of Statistics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213; e-mail: kadane@stat.cmu.edu . Konstantinos V. Katsikopoulos (" Psychological Heuristics for Making Inferences: Definition, Performance, and the Emerging Theory and Practice ") holds a Ph.D. in industrial engineering and operations research from the University of Massachusetts Amherst and is currently a senior research scientist at the Center for Adaptive Behavior and Cognition of the Max Planck Institute for Human Development. He has been a visiting assistant professor of operations research at the Naval Postgraduate School and of systems engineering at the Massachusetts Institute of Technology. He has made contributions to the theory of bounded rationality and its applications to decisions "in the wild" in fields such as engineering design and medicine. Address: Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany; e-mail: katsikop@mpib-berlin.mpg.de . L. 
Robin Keller (" Investment and Defense Strategies, Heuristics, and Games: From the Editor … ") is a professor of operations and decision technologies in the Merage School of Business at the University of California, Irvine. She received her Ph.D. and M.B.A. in management science and her B.A. in mathematics from the University of California, Los Angeles. She has served as a program director for the Decision, Risk, and Management Science Program of the U.S. National Science Foundation (NSF). Her research is on decision analysis and risk analysis for business and policy decisions and has been funded by NSF and the U.S. Environmental Protection Agency. Her research interests cover multiple attribute decision making, riskiness, fairness, probability judgments, ambiguity of probabilities or outcomes, risk analysis (for terrorism, environmental, health, and safety risks), time preferences, problem structuring, cross-cultural decisions, and medical decision making. She is currently Editor-in-Chief of Decision Analysis, published by the Institute for Operations Research and the Management Sciences (INFORMS). She is a Fellow of INFORMS and has held numerous roles in INFORMS, including board member and chair of the INFORMS Decision Analysis Society. She is a recipient of the George F. Kimball Medal from INFORMS. She has served as the decision analyst on three National Academy of Sciences committees. Address: The Paul Merage School of Business, University of California, Irvine, Irvine, CA 92697-3125; e-mail: lrkeller@uci.edu . Jeryl L. Mumpower (" Playing Squash Against Ralph Keeney: Should Weaker Players Always Prefer Shorter Games? ") is Director of the Master of Public Service and Administration Program at the Bush School of Government and Public Service at Texas A&M University, where he holds the Joe R. and Teresa Lozano Long Chair in Business and Public Policy. Previously he was at the Nelson A. 
Rockefeller College of Public Affairs and Policy, State University of New York at Albany, where he was a professor of public administration, public policy, public health, and information science and served in a variety of University-level administrative positions. His previous experience includes six years as a program director and policy analyst at the National Science Foundation. Mumpower received his B.A. from the College of William and Mary and his Ph.D. in social and quantitative psychology from the University of Colorado, Boulder. He is author or editor of nine books and more than 50 book chapters and articles. His research has addressed basic and applied topics in negotiation and bargaining, environmental policy, individual and group decision-making processes, the use of scientific expertise in public policy making, and risk analysis and management. Address: Bush School of Government and Public Service, Texas A&M University, 1092 Allen Building, 4220 TAMU, College Station, TX 77843-4220; e-mail: jmumpower@bushschool.tamu.edu . M. Elisabeth Paté-Cornell (" Analysis of National Strategies to Counter a Country's Nuclear Weapons Program ") is the Burt and Deedee McMurtry Professor and Chair, Department of Management Science and Engineering at Stanford University. Her specialty is engineering risk analysis with application to complex systems (including space systems and medical systems). Her research has focused on explicit consideration of human and organizational factors in the analysis of failure risks and, recently, on the use of game theory in risk analysis. Applications in the last few years have included counterterrorism and nuclear counterproliferation problems. She is a member of the National Academy of Engineering and of several boards (Aerospace, Draper, InQtel, etc.). She was a member of the President's Intelligence Advisory Board until December 2008. 
She holds an engineer degree (Applied Math/CS) from the Institut Polytechnique de Grenoble (France), and an M.S. in Operations Research and a Ph.D. in Engineering-Economic Systems, both from Stanford University. Address: Management Science and Engineering, Huang Engineering Center, 475 Via Ortega, Stanford University, Stanford, CA 94305-4121; e-mail: mep@stanford.edu . Jun Zhuang (" Governments' and Terrorists' Defense and Attack in a T-Period Game ") is an assistant professor of industrial and systems engineering at the University at Buffalo, the State University of New York. He has been a faculty member at SUNY Buffalo since he obtained his Ph.D. in summer 2008 from the University of Wisconsin–Madison. Dr. Zhuang's long-term research goal is to integrate operations research and game theory to better prepare for, mitigate, and manage both natural and man-made hazards. Other areas of interests include health care, transportation, logistics and supply chain management, and sustainability. Dr. Zhuang's research has been supported by the U.S. National Science Foundation, and by the U.S. Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events. Address: Department of Industrial and Systems Engineering, 403 Bell Hall, University at Buffalo, The State University of New York, Buffalo, NY 14260; e-mail: jzhuang@buffalo.edu .
Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Program in Technology Management, Economics, and Policy, August 2020. Advisor: Jongsu Lee. ; The present dissertation aims to provide insights into the application of different artificial neural network models in the analysis of consumer choice regarding next-generation transportation (NGT) services. It categorizes consumers' decisions regarding the adoption of new services according to Dewey's buyer decision process and then analyzes these decisions using a variety of methods. In particular, various artificial neural network (ANN) models are applied to predict consumers' intentions. The dissertation also proposes an attention-based ANN model that identifies the key features affecting consumers' choices. Consumers' preferences for different types of NGT services are analyzed using a hierarchical Bayesian model. The analyzed consumer preferences are utilized to forecast demand for NGT services, evaluate government policies within the transportation market, and provide evidence regarding the social conflicts between traditional and new transportation services. The dissertation uses the Multiple Discrete-Continuous Extreme Value (MDCEV) model to analyze consumers' decisions regarding the use of different transportation modes, and it utilizes this MDCEV analysis to estimate the effect of NGT services on consumers' travel mode selection behavior and on the environmental effects of the transportation sector. Finally, the findings of the dissertation's analyses are combined to generate marketing and policy insights that will promote NGT services in Korea. ; This study analyzes consumers' product adoption behavior, as defined in product and service adoption theory, by jointly employing machine-learning-based artificial neural networks and conventional statistical marketing choice models. Existing adoption theories define the influences on consumer choice stage by stage, but most focus on the relationships between consumers' intentions, opinions about a product, and perception levels and their choices, rather than on how product attributes affect choice. This study therefore analyzes consumer adoption behavior from a more comprehensive perspective, covering adoption intention, evaluation of alternatives, and choice of product and usage amount. Consumers' adoption-related decisions are classified into three stages.
The first stage is deciding the intention to use a product, the second is evaluating product alternatives, and the third is choosing the usage amount; to analyze each stage, this study employs artificial neural networks and statistical marketing choice models. Artificial neural networks, which excel at prediction and classification tasks, were used to predict consumers' adoption intentions and to identify the key variables influencing those intentions. The proposed network for key-variable identification outperformed conventional variable selection techniques in terms of model fit. The model is likely to prove useful for processing large volumes of consumer data, such as big data, and offers a convenient way to improve conventional survey design. To analyze alternative evaluation and usage based on consumer preferences, a hierarchical Bayesian model and a mixed MDCEV model were employed among statistical choice models. The hierarchical Bayesian model has the advantage of estimating individual consumer preferences, while the mixed MDCEV model can compose diverse portfolios from the alternatives chosen according to consumer preferences and analyze the usage of each alternative. For an empirical study of the proposed models, consumers' intention to use next-generation vehicle transportation services, their preferences among service alternatives, and their usage of each transportation service were analyzed. The empirical study reflects the stage-by-stage choice situations consumers face before adopting next-generation transportation services, and the results from each stage were used to forecast the growth potential of these services and changes in consumers' travel behavior. This study shows that artificial neural networks can be usefully applied in consumer research and confirms that, when combined with statistical marketing choice models, they can comprehensively analyze consumer preferences across the entire decision-making process, not only product choice behavior itself. ; Chapter 1. Introduction 1 1.1 Research Background 1 1.2 Research Objective 7 1.3 Research Outline 12 Chapter 2. Literature Review 14 2.1 Product and Technology Diffusion Theory 14 2.1.1. Extension of Adoption Models 19 2.2 Artificial Neural Network 22 2.2.1 General Component of the Artificial Neural Network 22 2.2.2 Activation Functions of Artificial Neural Network 26 2.3 Modeling Consumer Choice: Discrete Choice Model 32 2.3.1 Multinomial Logit Model 32 2.3.2 Mixed Logit Model 34 2.3.3 Latent Class Model 37 2.4 Modeling Consumer Heuristics in Discrete Choice Model 39 2.4.1 Consumer Decision Rule in Discrete Choice Model: Compensatory and Non-Compensatory Models 39 2.4.2 Choice Set Formation Behaviors: Semi-Compensatory Models 42 2.4.3 Modeling Consumer Usage: MDCEV Model 50 2.5 Difference between Artificial Neural Network and Choice Modeling 53 2.6 Limitations of Previous Studies and Research Motivation 58 Chapter 3. 
Methodology 63 3.1 Artificial Neural Network Models for Prediction 63 3.1.1 Multiple Perceptron Model 63 3.1.2 Convolutional Neural Network 69 3.1.3 Bayesian Neural Network 72 3.2 Feature Identification Model through Attention 77 3.3 Hierarchical Bayesian Model 83 3.4 Multiple Discrete-Continuous Extreme Value Model 86 Chapter 4. Empirical Analysis: Consumer Preference and Selection of Transportation Mode 98 4.1 Empirical Analysis Framework 98 4.2 Data 101 4.2.1 Overview of the Survey 101 4.3 Empirical Study I: Consumer Intention to New Type of Transportation 110 4.3.1 Research Motivation and Goal 110 4.3.2 Data and Model Setup 114 4.3.3 Result and Discussion 123 4.4 Empirical Study II: Consumer Choice and Preference for New Types of Transportation 142 4.4.1 Research Motivation and Goal 142 4.4.2 Data and Model Setup 144 4.4.3 Result and Discussion 149 4.5 Empirical Study III: Impact of New Transportation Mode on Consumer's Travel Behavior 163 4.5.1 Research Motivation and Goal 163 4.5.2 Data and Model Setup 164 4.5.3 Result and Discussion 166 Chapter 5. Discussion 182 Bibliography 187 Appendix: Survey used in the analysis 209 Abstract (Korean) 241 ; Doctor
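The attention-based feature-identification idea described above can be illustrated with a toy sketch: each input feature receives a relevance score that is softmax-normalized per respondent, so the largest weights flag the features driving the prediction. This is a minimal illustration with invented names and a fixed scoring vector; the dissertation's actual attention architecture is not reproduced here:

```python
import numpy as np

def attention_weights(X, w):
    """Toy attention over input features.
    X: (n_respondents, n_features) survey responses
    w: per-feature scoring weights (learned in a real model)
    Returns per-respondent weights that sum to 1 across features."""
    scores = X * w                               # elementwise relevance scores
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)      # softmax over features

def attended_input(X, w):
    """Reweighted input that a downstream predictor would consume."""
    return attention_weights(X, w) * X
```

Because the weights are normalized per respondent, averaging them over the sample gives a simple ranking of which features most influence the predicted adoption intention.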
Nowadays, transportation plays a key role in the life of modern countries, in particular for flows of goods. The logistics of flows between regions, countries, and continents has benefited from technological and organizational innovations ensuring efficiency and effectiveness. The same has not been true at the urban scale, especially in city centers: the management of flows in a high-population-density environment has not yet found its organizational model. Today, urban logistics, or "last mile" management, is therefore a major issue, socio-political and environmental as well as economic. Urban logistics is characterized by several actors (shippers or owners of goods, customers, carriers, public authorities, etc.), each with different priorities (reduction of pollution, improvement of service quality, minimization of total distance traveled, etc.). To overcome these challenges, one possible lever is to optimize the distribution and/or collection of goods in the context and under the constraints of the city. The goal of this PhD work is then to plan the distribution of goods in a logistics network, approached from the angle of collaboration between shippers. This collaboration consists in grouping the demands of several shippers to optimize the loading rate of the trucks and to obtain better transport prices. Here, managing the "last mile" is similar to what is known in the literature as the Pickup and Delivery Problem (PDP). In this thesis, we are interested in variants of this problem better adapted to the urban context. After presenting a state of the art on combinatorial optimization problems around transport and the methods used for their resolution, we study two new variants of the pickup and delivery problem: the Selective PDP with Time Windows and Paired Demands and the Multi-period PDP with Time Windows and Paired Demands.
The first allows carriers to serve, for example, the maximum number of customers in a day; with the second, when delivery within the current period is impossible, we determine the best delivery date while minimizing the distance traveled. Each variant is given a formal description, a mathematical model in the form of a linear program, and then a resolution by exact methods, heuristics, and metaheuristics, in both single-objective and multi-objective cases. The performance of each approach was evaluated through a substantial number of tests on instances of different sizes, drawn from the literature and/or generated by us. The advantages and drawbacks of each approach are analyzed, in particular in the context of collaboration between shippers.
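The two defining constraints of these PDP variants, time windows and paired pickup/delivery demands, can be sketched as a simple route-feasibility check. This is an illustrative sketch with invented names, not the thesis's linear-programming model:

```python
def route_feasible(route, pair_of, tw, service, travel):
    """Check one vehicle route of a PDP with time windows and paired demands.
    route: node sequence starting at the depot
    pair_of: maps each delivery node to its pickup node (paired demands)
    tw: node -> (earliest, latest) service start times
    service: node -> service duration; travel: (a, b) -> travel time
    """
    t = tw[route[0]][0]          # leave the depot at its earliest time
    seen = {route[0]}
    for prev, node in zip(route, route[1:]):
        # drive to the next node, waiting if we arrive before its window opens
        t = max(t + service[prev] + travel[(prev, node)], tw[node][0])
        if t > tw[node][1]:
            return False         # time window violated
        if node in pair_of and pair_of[node] not in seen:
            return False         # delivery visited before its pickup
        seen.add(node)
    return True
```

An exact or (meta)heuristic solver would call a check like this on every candidate route; the selective variant additionally chooses which paired demands to serve, and the multi-period variant chooses the delivery date.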
This special issue of the Journal of Energy is dedicated to the establishment of today's Department of Energy and Power Systems (ZVNE) at the University of Zagreb Faculty of Electrical Engineering and Computing, founded in 1934 as the High Voltage Department of the then Technical Faculty. For this reason, the history of the Department of Energy and Power Systems is presented in the introductory article, while the other articles reflect the broad scientific and professional work of the Department's employees; some of the articles were created in close cooperation with experts from companies who graduated from the Department. This special issue presents 17 papers selected for publication in the Journal of Energy after having undergone the peer review process. We would like to thank the authors for their contributions and the reviewers who dedicated their valuable time to selecting and reviewing these papers. We hope this special issue will provide you with valuable information on some achievements of the Department of Energy and Power Systems, Faculty of Electrical Engineering and Computing. A short introduction to the scientific and expert work of the Department of Energy and Power Systems (ZVNE): besides energy-related educational programmes for undergraduate, graduate, and postgraduate students, the Department has been actively involved for many years in scientific and expert studies. Work on scientific projects includes collaboration with industry, national institutions, electric utilities, and many foreign universities. The Department has developed valuable international cooperation with many research institutions around the world, either directly or through inter-university cooperation.
The Department is the leading institution in the field of electrical power engineering in the region; it has long-standing cooperation with industry and is recognized for its scientific activity, a large number of papers published in internationally relevant journals, and numerous national and international scientific projects. The Department's main areas of activity are: a) power engineering and power technologies, b) energy, environment and energy management, and c) nuclear power engineering. In power systems engineering, research is focused on developing both fundamental knowledge and applications of electrical power engineering. It is generally directed at increasing the availability and reliability of the power system, with an emphasis on adaptation to the open-market environment. Specific goals include: improving models and methodologies for power system analysis, operation and control; developing and applying models and methodologies for power system planning, maintenance and development; applying soft computing (artificial intelligence, metaheuristics, etc.), information technologies (web-oriented technologies, geographic information systems, enterprise IT solutions, etc.) and operations research to improve the planning, development, operation and control of power systems; investigating applications for the coordinated control of power system devices and exploring power system stability, security and economic operation; integrating intelligent devices and agents into energy management systems and distribution management systems equipment and software; advanced modelling of dynamics, disturbances and transient phenomena in transmission and distribution networks (in particular regarding distributed generation); and advances in fault detection, restoration and outage management. The research also covers high-voltage engineering.
At a time of global change in the energy sector, with an emphasis on sustainable development, significant efforts are devoted to liberalization, the revitalization of facilities, improved legislation and the adoption of new standards. In the area of power technologies, energy and environment, and energy management, the main research topics are: sustainable electricity generation in a liberalized market; modelling the ETS and the electricity market; energy security and climate change; power system optimization with emission trading; the rational use of energy and energy savings; energy management in industry and buildings; and energy conservation and energy auditing in industry and buildings. The general objective of the research is to develop methodologies for the quantitative assessment of the environmental impact of applicable energy technologies (electric power plants and their technology chains), as a basis for estimating an optimal long-term development strategy for the Croatian power system. The work includes new strategies for energy sector and power system development; preparing medium- and long-term electricity generation expansion plans; comparing the energy, economic and environmental characteristics of different options for electric power generation; and studies of the rational use of energy and energy savings, assuming a centralized structure of the electricity market. It also covers renewable energy sources and their role in the power sector, electricity production under a cap on CO2 emissions, and the development of new models for power system generation optimization and planning under the uncertainties of the open electricity market. The goal of that research is to create analytical and software tools that will enable a successful transition to a liberalized electricity market and ensure healthy and efficient power system operation in compliance with environmental requirements.
In the nuclear energy field, research covers nuclear physics, reactor theory, nuclear power plants, fuel cycles and reactor materials; its general objective is to develop methodologies for the reliable assessment of the operational safety of nuclear power plants. Specific analyses cover the calculation of transients and the consequences of potential accidents at NPP Krško. In the field of nuclear power plant safety analyses, the research activities are oriented toward the mathematical modelling of nuclear power plant systems and components.
To date, only two scholars (historians) have attempted a thorough study of the Horace N. Allen Manuscripts (MSS) concerning the first American resident missionary in Korea. This paper makes an important contribution because, to my knowledge, no study has perused the entire Allen MSS and woven a single theme connecting Allen's actions in both Korea and Hawaii. Research on the development of Protestantism in Korea can generally be divided between religious and non-religious explanations. In this paper, I emphasize how socio-historic contexts, expansionism, and various missionary activities allowed Allen to fill structural holes and employ social capital for personal and national advancement. I argue that Allen's social connections facilitated America's missionary and expansionist endeavors in Korea and Hawaii at the turn of the 20th century. There is no shortage of scholarship regarding Horace N. Allen (1858-1932) and the burgeoning of Protestantism in Korea at the turn of the twentieth century. Some missionaries who were in Korea during the same time frame as Allen (e.g., Appenzeller, 1905; Hulbert, 1969 [1909]; Zwemer & Brown, 1908; Underwood, 1908; Brown, 1919; Clark, 1921 and 1930; Hall, 1978) over-emphasized religious factors in explaining the growth of Protestantism. These works focused on the evangelistic nature of the missionaries' work; Protestant growth was a spiritual enterprise. In contrast, other scholars (Namkung, 1928, p. 8; Deuchler, 1977; Hunt, Jr., 1980, p. 3; Carter et al., 1990, p. 249; Lee, 2001) have employed non-religious heuristics whereby Protestantism served as a boundary marker against China and Japan and became associated with progress and hope (Westernization). Though some socio-historic (ethno-religious) studies have addressed the development of Protestantism at the turn of the twentieth century, that research was done without investigating the Allen Papers (MSS). For example, Young-Shin Park (2000, p. 507) associated Protestant developments with modernization and reactive ethnicity, whereby Protestantism served as an anti-Japanese marker. Danielle Kane and Jung Mee Park (2009, pp. 366 and 368) employed a comparative analysis of "the puzzle of Christian success in Korea" and found a solution via geopolitical theory, which they used as a heuristic, intersecting with the concept of networks, to explain why Protestantism grew in Korea but not in Japan or China. Andrew Kim (2000, p. 129) claimed that "the dramatic growth of Protestantism in South Korea during the 1960s, 70s, and 80s was due in part to the way certain doctrines and practices of the imported faith agreed with those of the folk tradition." Whether one agrees with his premise that American Protestants at the turn of the 20th century held doctrines readily compatible with Korea's indigenous religious beliefs may be a theological matter. Further, the contexts of reception and growth for Protestantism were not the same; a hundred years separate 1880 from 1980. I have delimited this paper to a socio-historic analysis based primarily on the Allen MSS. There is a huge gap in the literature regarding Horace N. Allen, who claimed that as a medical doctor he opened "the mission work in Korea" (Allen, H. N., 1883-1923, Allen to Rev. Josiah Strong D. D., August 30, 1888). No scholar questions that he was the first American Protestant resident missionary in Korea. Yet, depending on the source, he has been depicted as a medical missionary, a diplomat (proponent of American business), or both. According to the Yonsei University website (http://www.yonsei.ac.kr/eng/about/history/-chronicle/), Allen was crucial to "not only the birth of Yonsei University, but also the starting point of modern medical education in Korea and among the first in Asia." Yonsei University has become one of the elite medical universities in South Korea. Allen's tenure in Korea entailed going from China to Korea in 1884; leaving the mission field to become a court doctor and "unofficial" advisor to the Korean government, and going to the U.S. with a Korean delegation in 1887; returning to Korea in 1890 as a missionary and "almost immediately" becoming the Secretary of the American Legation; becoming the U.S. Minister in 1897; and being recalled in 1905 (Allen, H. N., 1883-1923, n.d.). It appears that only two scholars (historians) have mined the Allen MSS in depth. Fred Harrington (1980) has done the best work on Allen and concessions in Korea. Wayne Patterson (1988; 2000; 2003) is the most significant scholar on Allen and Korean laborers in Hawaii. Although Harrington and Patterson provided the only extensive treatments of the Allen MSS, they seem to depict two different Horace Allens: one involved in Korea and one involved in Hawaii. What I show in this paper is that America's twin interests in expansionism and missions provided Allen the opportunities to be involved in both Korea and Hawaii; under conditions of expansionism or missions alone, Allen would not have had the same efficacy regarding concessions, the development of Christianity, and the illegal transfer of Korean laborers to Hawaii. I employ a socio-historic analysis engaging primarily with the Allen MSS, arguing that Allen stood in a particular context of U.S. missions and expansionism, that he filled a structural hole (becoming a nexus between various interests in the U.S., Korea, and Hawaii), and that he employed social capital for personal and national advancement.
Due to economies of scale, individual users can save costs by joining a cooperation rather than acting on their own. However, a challenge for the members of a cooperation is that all of them have to agree on how the common costs are allocated among them; otherwise the cooperation cannot be realised. Taking this issue into account, this thesis investigates fair allocations of common costs among the users of a cooperation. It combines cooperative game theory with state-of-the-art algorithms from linear and integer programming in order to define fair cost allocations and to compute them numerically for large real-world applications. Our approaches outperform traditional cost allocation methods in terms of fairness and user satisfaction. Cooperative game theory analyzes the possible groupings of individuals into coalitions. It provides mathematical tools for determining prices that are fair in the sense that they prevent the collapse of the grand coalition and increase the stability of the cooperation. The current definition of the cost allocation game allows neither restricting the set of possible coalitions of players nor imposing conditions on the output prices, both of which are often required in real-world applications. Our generalization brings the cost allocation game model a step closer to practice. Based on our definition, we present and discuss several mathematical concepts that model fairness. The thesis also considers the question of whether there exists a "best" cost allocation that people would naturally prefer. It is well known that multicriteria optimization problems often have no solution that simultaneously optimizes every objective. Likewise, there is no "perfect" voting system satisfying the five essential social choice procedures presented in the book "Mathematics and Politics: Strategy, Voting, Power and Proof" by Taylor et al. The cost allocation problem exhibits the same phenomenon: there is no cost allocation satisfying all of our desired properties, even though each of them is coherent and seems reasonable or even indispensable. Our game-theoretic concepts therefore minimize the degree of axiomatic violation while preserving the validity of the most important properties. From the complexity point of view, computing the allocations based on these game-theoretic concepts is NP-hard; the hardest challenge is the exponential number of possible coalitions. This difficulty can, however, be overcome with constraint generation approaches, and several primal and dual heuristics are constructed to decrease the solving time of the separation problem. Based on these techniques, we are able to solve applications whose sizes range from small (4 players) through medium (18 players) to large (85 players, with 2^{85}-1 possible coalitions). Via computational results, we demonstrate the unfairness of traditional cost allocations. For example, in the ticket pricing problem of the Dutch IC railway network, the current distance tariff leads to a situation in which passengers in the central region of the country pay over 25% more than the costs they incur, and these excess payments subsidize operations elsewhere, which is clearly unfair. In contrast, our game-theoretic prices decrease this unfairness and increase the players' incentive to stay in the grand coalition.
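The stability notion underlying these fair prices (no coalition should pay more than its standalone cost, so nobody wants to leave the grand coalition) is the core of a cost game, and for tiny games it can be checked by brute-force enumeration. The sketch below is illustrative only; the cost function is made up, and the exponential loop over all coalitions is exactly the bottleneck that the constraint generation approaches in the thesis are designed to avoid.

```python
from itertools import chain, combinations

def coalitions(players):
    """All non-empty subsets of the player set."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))

def in_core(alloc, cost, tol=1e-9):
    """Check whether a cost allocation lies in the core of a cost game.

    alloc : dict player -> payment
    cost  : dict frozenset(coalition) -> standalone cost of that coalition
    Core conditions: payments are efficient (they sum to the grand-coalition
    cost) and no coalition pays more in total than its standalone cost.
    """
    players = list(alloc)
    if abs(sum(alloc.values()) - cost[frozenset(players)]) > tol:
        return False  # not efficient
    for S in coalitions(players):  # exponentially many checks
        if sum(alloc[p] for p in S) > cost[frozenset(S)] + tol:
            return False  # coalition S would rather stand alone
    return True
```

With 85 players this loop would need 2^85 - 1 iterations, which is why the thesis separates violated coalition constraints on demand instead of enumerating them.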
The paper presents a new method for solving 0–1 linear programming problems (LPs). General 0–1 LPs are NP-hard, and no consistently efficient general-purpose algorithm for these models has been found so far. Cutting planes and branch and bound were the earliest exact methods for the 0–1 LP; unfortunately, on their own they failed to solve 0–1 LP models consistently and efficiently. Hybrids combining heuristics, cuts, branch and bound and pricing have been used successfully for some 0–1 models, but they cannot completely eliminate the threat of combinatorial explosion for very large practical 0–1 LPs. In this paper, a technique to reduce the complexity of 0–1 LPs is proposed. The given problem is used to generate a simpler version of the problem, which is then solved in stages: the solution obtained is tested for feasibility and improved at every stage until an optimal solution is found. The newly generated problem has a coefficient matrix of 0s and 1s only. From this study, it can be concluded that for every 0–1 LP with a feasible optimal solution, there exists another 0–1 LP (called a double in this paper) with exactly the same optimal solution but different constraints; the constraints of the double consist only of 0s and 1s. The double is not easy to determine by mere inspection, but it can be obtained in stages as shown in the numerical illustration presented in this paper. 0–1 integer programming models have applications in many areas: large economic and financial models, marketing strategy models, production scheduling and labor-force planning, computer design and networking, military operations, agriculture, wildfire fighting, vehicle routing, and health care and medical models.
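For intuition about why exact 0–1 LP methods matter, a tiny instance can be solved by exhaustive enumeration of all binary vectors. This brute-force baseline is purely illustrative (it is not the paper's staged "double" construction) and its 2**n search space is the combinatorial explosion that cutting planes, branch and bound and their hybrids try to tame.

```python
from itertools import product

def solve_01_lp(c, A, b):
    """Maximize c.x subject to A x <= b with x binary, by enumeration.

    c : objective coefficients, length n
    A : list of constraint rows, each length n
    b : right-hand sides, one per row
    Only practical for small n: the loop visits all 2**n binary vectors.
    """
    n = len(c)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        # keep x only if every constraint row is satisfied
        if all(sum(a[j] * x[j] for j in range(n)) <= bi
               for a, bi in zip(A, b)):
            val = sum(c[j] * x[j] for j in range(n))
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x
```

For example, the knapsack-style instance max 3x1 + 4x2 + 5x3 subject to 2x1 + 3x2 + 4x3 <= 6 has optimum 8 at x = (1, 0, 1).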
ABSTRACT: The problems addressed in this study are: (1) What forms did the family's role take in instilling character values in girls in Kasaka Village, Kabawo District, Muna Regency? (2) How did the role of family education for girls in Kasaka Village change over the period 1970-2000? and (3) Which character values were instilled through family education in girls in Kasaka Village? The method used in this research is the historical method, with the following stages: 1) topic selection and title determination, 2) heuristics (source gathering), 3) source criticism, 4) interpretation, and 5) historiography. The results of this study are: 1) The family's role in instilling character values in girls in Kasaka Village follows the physical and psychological stages of human development: (a) during the mother's first pregnancy (from the seventh month until birth), through the kasambu educational institution, with lessons such as preparing for the birth and for infant care, and as a symbolic signal to the fetus that it is about to enter a new realm, the world; (b) after birth (age 0-44 days), through the institution of reciting the adhan and iqamah; (c) at the stage of the placenta's detachment (age 44 days to 1 year), through the kampua or aqiqah institution; (d) at first teething (ages 1-7), through the sariga institution; (e) at the loss (extraction) of the first tooth (ages 7-10), through the kangkilo (circumcision) and katoba (conversion to Islam) institutions; and (f) at the onset of menstruation for girls (age 15 and over), through the karia (seclusion) institution. 2) Changes in the role of family education for girls in Kasaka Village (1970-2000) fall into three periods: (a) in 1970-1980 tradition in Kasaka Village was still very strong; each tradition was carried out in full from beginning to end, fulfilling its function both as custom and as a vehicle for instilling character values; (b) in 1980-1990 tradition and culture began to be influenced by the outside world, as seen in changed religious observance and declining courtesy and honesty; (c) in 1990-2000 tradition differed greatly from earlier years: some traditions and cultural practices were no longer performed, and others were regarded as mere formalities of social life. 3) The character values instilled through family education in girls in Kasaka Village are religiosity, honesty, discipline, hard work, creativity, independence, democratic spirit, curiosity, national spirit, love of country, respect for achievement, friendliness and communicativeness, love of peace, fondness for reading, environmental care, social care, and responsibility. Keywords: Education, Forms, Roles, Values
The paper presents a new method for solving 0–1 linear programming problems (LPs). General 0–1 LPs are NP-hard, and a consistently efficient general-purpose algorithm for these models has not been found so far. Cutting-plane and branch-and-bound approaches were the earliest exact methods for 0–1 LPs. Unfortunately, on their own these methods failed to solve the 0–1 LP model consistently and efficiently. Hybrids combining heuristics, cuts, branch and bound, and pricing have been used successfully for some 0–1 models. The main challenge with these hybrids is that they cannot completely eliminate the threat of combinatorial explosion for very large practical 0–1 LPs. In this paper, a technique to reduce the complexity of 0–1 LPs is proposed. The given problem is used to generate a simpler version of the problem, which is then solved in stages in such a way that the solution obtained is tested for feasibility and improved at every stage until an optimal solution is found. The newly generated problem has a coefficient matrix of 0s and 1s only. From this study, it can be concluded that for every 0–1 LP with a feasible optimal solution, there exists another 0–1 LP (called a double in this paper) with exactly the same optimal solution but different constraints. The constraints of the double are made up of only 0s and 1s. This double 0–1 LP is not easy to determine by mere inspection, but it can be obtained in stages as shown in the numerical illustration presented in this paper. The 0–1 integer programming model has applications in many areas of business, including large economic/financial models, marketing strategy models, production scheduling and labor force planning models, computer design and networking models, military operations, agriculture, wildfire fighting, vehicle routing, and health care and medical models.
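The model class under discussion is easy to state concretely. The sketch below solves a tiny, illustrative 0–1 LP (not an instance from the paper) by exhaustive search; the 2^n enumeration it performs is precisely the combinatorial explosion that staged reduction methods try to avoid:

```python
from itertools import product

# A 0-1 LP: maximize c'x subject to A x <= b with every x_j in {0, 1}.
# Exhaustive search over all 2^n assignments works only for tiny instances --
# this is the combinatorial explosion the staged method aims to sidestep.
# The instance below is illustrative, not taken from the paper.

def solve_01lp(c, A, b):
    """Return (best assignment, best objective value) by brute force."""
    n = len(c)
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=n):
        feasible = all(
            sum(A[i][j] * x[j] for j in range(n)) <= b[i]
            for i in range(len(b))
        )
        if feasible:
            val = sum(c[j] * x[j] for j in range(n))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # max 6x1 + 5x2 + 4x3  s.t.  5x1 + 4x2 + 3x3 <= 9
    print(solve_01lp([6, 5, 4], [[5, 4, 3]], [9]))  # ((1, 1, 0), 11)
```

For n binary variables the loop visits 2^n points, which is exactly why exact solvers rely on cuts, branching, and, as proposed in the paper, problem reductions rather than enumeration.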
In this thesis, we address the problem of modeling and verification of complex systems exhibiting both probabilistic and timed behaviors. Designing such systems has become increasingly complex due to the heterogeneity of the involved components, the uncertainty resulting from an open environment, and the real-time constraints inherent to their application domains. Handling both software and (abstractions of) hardware in a unified view, while also including performance information (e.g., computation and communication times, energy consumption, etc.), becomes a must. Building and analyzing performance models is of paramount importance in order to give guarantees on the functional and extra-functional system requirements and to make well-founded design decisions based on quantitative measures at early design stages. This thesis brings several new contributions. First, we introduce a new modeling formalism called Stochastic Real-Time BIP (SRT-BIP) for the modeling, simulation, and code generation of component-based systems. This formalism inherits from the BIP framework its component-based and real-time modeling capabilities and extends it by providing comprehensive primitives to express complex stochastic behaviors. Second, we investigate machine learning techniques to ease the construction of performance models. We propose to enhance and adapt a state-of-the-art learning procedure to infer stochastic real-time models from concrete system executions and to represent them in the SRT-BIP formalism. Third, given performance models in SRT-BIP, we explore the use of Statistical Model Checking (SMC) for the analysis of a system's functional and performance requirements. To do so, we provide a full framework, called SBIP, as a support tool for the modeling, simulation, and analysis of SRT-BIP systems. SBIP is an Integrated Development Environment (IDE) that implements SMC algorithms for quantitative, qualitative, and rare-event analyses, together with an automated exploration procedure for parameterized requirements. We validate our proposals on real-life case studies ranging from communication protocols and concurrent systems to embedded systems. Finally, we further investigate the interest of SMC when included in elaborate system-analysis workflows. We illustrate this by proposing two risk-assessment approaches. In the first approach, we introduce a spiral methodology to build resilient systems with FDIR components, which we validate on the safety assessment of a planetary rover locomotion system. The second approach is concerned with the security assessment of an organization's defenses following an offensive-security approach. The goal is to synthesize impactful defense configurations against optimized attack strategies (those that minimize attack cost and maximize success probability). These attack strategies are obtained by combining model learning with metaheuristics, with SMC used to score and prioritize potential candidate strategies.
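The core loop of statistical model checking can be sketched in a few lines: simulate independent traces of a stochastic model and estimate the probability that a property holds, with the number of traces chosen from a Chernoff–Hoeffding bound. The lossy-channel model and all parameters below are illustrative assumptions, not an SBIP/SRT-BIP example:

```python
import math
import random

# Statistical model checking in a nutshell: estimate the probability that a
# random execution satisfies a property by simulating independent traces.
# The lossy-channel model below is a made-up toy, not an SBIP/SRT-BIP model.

def simulate_send(p_loss: float, retries: int, rng: random.Random) -> bool:
    """One trace: is the message delivered within `retries` attempts?"""
    for _ in range(retries):
        if rng.random() >= p_loss:
            return True
    return False

def smc_estimate(epsilon: float, delta: float, trace, rng) -> float:
    """Monte Carlo estimate with Chernoff-Hoeffding sample size:
    n samples guarantee P(|estimate - p| > epsilon) < delta."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    return sum(trace(rng) for _ in range(n)) / n

if __name__ == "__main__":
    rng = random.Random(42)
    est = smc_estimate(0.01, 0.01, lambda r: simulate_send(0.3, 3, r), rng)
    # True probability of delivery within 3 attempts is 1 - 0.3**3 = 0.973.
    print(f"estimated delivery probability ~ {est:.3f}")
```

The appeal of SMC, as exploited in SBIP, is that this loop only requires the ability to simulate the model, so it scales to systems whose state spaces are far beyond exhaustive model checking.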
The selection of a suitable working fluid is a decisive factor in the design of thermodynamic cycles, since both the operating behaviour and the efficiency of the processes are significantly influenced by the selected fluids. The extensive field of thermodynamic cycles basically includes established energy conversion processes with a long history, such as vapour compression refrigeration cycles and heat pumps in the low to medium temperature ranges. Furthermore, new processes and applications such as high-temperature heat pumps or pumped heat electricity storages (PHES) are constantly being addressed.
The question of suitable and efficient fluids arises not only for these new processes and applications, but also for the established processes for which, due to the current problem of the high global warming potential of many refrigerants, alternative working fluids must be identified quickly. To date, however, methods of fluid selection have often been unstructured and based not on fundamental thermodynamic understanding but mostly on trial and error or on very simple heuristics. Within the scope of this study, methods of fluid selection are discussed for two sub-areas of thermodynamic energy conversion processes that have so far not been adequately addressed: existing vapour compression refrigeration cycles and heat pumps, and Rankine-cycle-based PHES systems, whereby the starting points differ considerably due to the current states of knowledge. Pumped heat electricity storage is a recently discussed approach to the large-scale storage of electrical energy. One possible configuration of this concept is the combination of two Rankine cycles (a heat pump and an Organic Rankine Cycle) with a thermal energy storage system. The current state of knowledge on this technology can, in principle, be described as being in the initial phase. The influence of the fluid, the generally applicable working fluids, promising operating conditions, and the thermodynamic limits of efficiency are as yet largely unknown. In order to investigate these points, pumped heat electricity storage systems consisting of two Rankine cycles and an ideal isothermal storage system are modelled, whereby the irreversibilities of the processes are gradually increased. The working fluids taken into account are initially hypothetical fluids whose parameters are optimised with regard to the process and specific boundary conditions. Subsequently, potentially existing working fluids are also identified.
The investigation essentially shows decreasing efficiencies and increasing power output for rising storage temperatures. Furthermore, it is shown that power output and efficiency at constant storage temperature lead to a Pareto front: a power output close to the maximum value entails a significant reduction in efficiency. In this context, a compromise cycle is proposed which, for example at a storage temperature of 350 K and taking into account typical isentropic efficiencies of the individual components, has an efficiency of 30 % when using the optimised hypothetical fluid. The fluid selection for this process finally resulted in ethylamine as the best fluid, with a predicted efficiency of 25.6 %. In the field of existing refrigeration systems and heat pumps, this investigation focuses on methods for identifying suitable drop-in fluids. Due to current legislation, drop-in fluids must quickly be found for numerous existing systems with different boundary conditions. A theoretical model for individual fluid selection can be a decisive aid here. However, it is currently not clear what level of detail is needed to map a specific system in a model in order to obtain a reliable fluid recommendation. To address this question, process cycle models of varying complexity were examined on the basis of measured values obtained for different fluids with a heat pump test rig. The models of higher complexity include a specially developed model for the fluid-dependent calculation of the isentropic and volumetric efficiencies of the compressor. The investigation shows that the calculation of the different process variables gains in accuracy, relative to the measurements, when the level of detail of the modelling is increased. However, it has also been shown that the model for calculating the isentropic and volumetric efficiencies of the compressor in particular is indispensable for a valid selection of substitute fluids.
Finally, a step-by-step procedure for the selection of suitable substitute fluids has been developed and proposed on the basis of the findings.
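For intuition, the round-trip efficiency of such a two-cycle storage system (electricity recovered per electricity stored) is the product of the charging heat pump's COP and the discharging cycle's thermal efficiency. The sketch below scales the Carnot limits by assumed second-law efficiencies; all numbers are illustrative and not taken from this study:

```python
# Round-trip efficiency sketch for a pumped heat electricity storage built
# from a heat pump (charging) and an ORC (discharging). Temperatures and
# second-law efficiencies are illustrative assumptions, not thesis values.

T_AMBIENT = 288.15  # K, assumed heat sink/source temperature

def heat_pump_cop(t_store: float, eta_ii: float) -> float:
    """COP modelled as a fraction eta_ii of the Carnot COP."""
    return eta_ii * t_store / (t_store - T_AMBIENT)

def orc_efficiency(t_store: float, eta_ii: float) -> float:
    """Thermal efficiency as a fraction eta_ii of the Carnot efficiency."""
    return eta_ii * (1.0 - T_AMBIENT / t_store)

def round_trip_efficiency(t_store: float, eta_hp: float, eta_orc: float) -> float:
    """Electricity out per electricity in: COP times ORC efficiency."""
    return heat_pump_cop(t_store, eta_hp) * orc_efficiency(t_store, eta_orc)

if __name__ == "__main__":
    # Reversible limit: the product of Carnot COP and Carnot efficiency is 1.
    print(round(round_trip_efficiency(350.0, 1.0, 1.0), 6))
    # With assumed second-law efficiencies of 0.55 each, a few tens of percent.
    print(f"{round_trip_efficiency(350.0, 0.55, 0.55):.2f}")
```

In the reversible limit the product of the Carnot COP and the Carnot efficiency is exactly one; assumed component losses bring the round-trip value down to a few tens of percent, the order of magnitude reported above.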
Human reasoning research using probability problem tasks offers a novel and exciting approach to understanding executive function impairments, such as those associated with attention-deficit/hyperactivity disorder (ADHD), from a new perspective. ADHD is widely associated with lowered academic performance, thought to arise from executive function deficits relating to inhibition, impulsivity, and attention.1 A wealth of research on executive function tasks and neurobiological fMRI studies cites impairments of executive inhibitory control as the origin of the cognitive and behavioural deficits associated with ADHD.2,3,4 By integrating research from cognitive psychology on the science of reasoning with knowledge of ADHD and educational psychology, this research may offer new insight into the cognitive mechanisms of executive function impairments, such as those in ADHD, to better prepare educators to instruct the 5% of children diagnosed with ADHD.5 Individuals typically implement a narrow range of procedural heuristics to simplify the process of problem-solving and decision-making, rather than systematically analyzing a problem through tedious rule-based approaches.6 Heuristic shortcuts are often based on prior experiences or beliefs that, while crucial to human survival, may sometimes be erroneous in nature.7,8 According to one account of dual-process theories (DPT), our early judgments are drawn from fast, experiential, intuitive processes that conform to a belief bias (the tendency to judge conclusions according to belief, regardless of validity).9 A central feature of these faster reasoning processes is autonomy; thus, beliefs generate a rapid default response, regardless of need, which must be suppressed for deeper analysis.10 Slower secondary processes enable analytic judgments that are effortful and taxing on working memory.11 As analytic reasoning must be deliberately engaged to override the quicker belief-based appraisals, it is assumed that slower logical
evaluations do not interfere with intuitive responses. However, the autonomously produced, faster belief-based judgments are able to interfere with slower, more rational thinking operations.12 This master's research examined and compared the reasoning abilities of university-educated adults with and without ADHD. Both groups solved, in free time, 24 base-rate problems that asked for the likelihood of group membership for an individual (e.g., what is the likelihood that Paul is a doctor?) when offered two pieces of conflicting information: salient base-rates (3 doctors vs. 997 nurses) and a luring, but opposing, stereotypical description (Paul lives in a beautiful home in a posh suburb, is well spoken, interested in politics, and invests a lot of time in his career). The task was coupled with an instructional manipulation that asked reasoners to respond either with beliefs (cued by the stereotypical description) or statistically, by way of the presented base-rates. For 12 randomly generated problems, base-rates and descriptions conflicted, cueing opposing judgments. Another 12 randomly generated problems had matching base-rates and descriptions that cued identical judgments. ADHD is broadly associated with difficulties in inattention and impulsivity, thought to originate from primordial deficits of executive inhibition.13 Thus, it was expected that ADHD participants would perform poorly on conflict problems when deciding with statistics, as inhibitory control is required to suppress the autonomously produced intuitive response derived from beliefs. It was therefore very surprising to observe that ADHD reasoners were superior to controls at judging with statistics and on par with controls when deciding with beliefs.
This is both startling and counterintuitive, considering ADHD is predominantly linked to difficulties with inhibition, attention to necessary tasks, and using working memory effectively.14,15,16 The literature supporting disinhibition theories of ADHD is solid, and yet ADHD participants did not demonstrate poorer reasoning abilities on a base-rate task widely assumed to require inhibitory control. This leads to the conclusion that perhaps base-rate problems do not measure inhibitory control, as is assumed.17 The fact that both groups were better at reasoning with statistics relative to beliefs challenges the assumption that base-rate processing even requires deeper analytic processing. It may be that humans have a probabilistic intuition for processing base-rates spontaneously;18 thus, suppression of an early yet unfavourable response is not required. A second finding is that ADHD reasoners were on par with controls in solving problems with beliefs. This reveals that ADHD reasoners are just as capable of encoding lengthy and ambiguous information as their non-ADHD peers, despite recognized cognitive deficits. The third and most important finding is that ADHD reasoners were significantly better than controls when solving base-rate problems with statistics. This unexpected result raises the question of why this pattern emerged for ADHD thinkers. One hypothesis is that ADHD thinkers implemented a cost-effective cognitive strategy when clear, salient numerical information (base-rates) was offered that could be easily extracted for problem-solving. This strategy, however, was ineffective when resolving problems under the belief instruction, which entailed the decoding of lengthy descriptive information. The hypothesis of strategy use by ADHD reasoners was confirmed by comparing response latencies for both groups. For conflict problems, both groups required similarly longer response latencies to resolve problems with beliefs as compared to statistics.
This corroborates that solving with statistics was easier than solving with beliefs, which contradicts serial models of DPT. However, when solving non-conflict problems with beliefs – a task that should have been relatively easy, as base-rates and descriptions cue the same response – the ADHD group required significantly more time than controls. This suggests that ADHD participants have developed a cost-efficient problem-solving strategy for cases where clear, salient information is offered that can be easily extracted to solve the problem. However, they misapply this strategy to non-conflict problems, where simple shortcuts could be used much more effectively. The over-application of this strategy on non-conflict problems resulted in longer latencies for the more difficult belief instruction. The results show that university students with ADHD are able to implement problem-solving strategies to overcome cognitive deficits when salient and easily extractable information is presented. This finding is vital for developing effective pedagogies for classroom instruction of students with ADHD. However, students must also be educated on the appropriate application of these strategies. Further comparative investigations of ADHD and reasoning are vital to better understand the capacity for strategic learning in individuals with ADHD, leading to better, more informed instructional approaches.
Combinatorial optimization deals with efficiently determining an optimal (or at least a good) decision among a finite set of alternatives. In business administration, such combinatorial optimization problems arise in, e.g., portfolio selection, project management, data analysis, and logistics. These optimization problems have in common that the set of alternatives becomes very large as the problem size increases, so an exhaustive search of all alternatives may require a prohibitively long computation time. Moreover, due to their combinatorial nature, no closed-form solutions to these problems exist. In practice, a common approach to tackling combinatorial optimization problems is to formulate them as mathematical models and to solve them using a mathematical programming solver (cf., e.g., Bixby et al. 1999, Achterberg et al. 2020). For small-scale problem instances, the mathematical models comprise a manageable number of variables and constraints, such that mathematical programming solvers are able to devise optimal solutions within a reasonable computation time. For large-scale problem instances, the number of variables and constraints becomes very large, which considerably extends the computation time required to find an optimal solution. Therefore, despite the continuously improving performance of mathematical programming solvers and computing hardware, the availability of mathematical models that are efficient in terms of the number of variables and constraints used is of crucial importance. Another frequently used approach to combinatorial optimization problems is matheuristics. Matheuristics decompose the considered optimization problem into subproblems, which are then formulated as mathematical models and solved with the help of a mathematical programming solver.
Matheuristics are particularly suitable for situations in which a good, but not necessarily optimal, solution must be found within a short computation time, since the speed of the solution process can be controlled by choosing an appropriate size for the subproblems. This thesis consists of three papers on large-scale combinatorial optimization problems. We consider a portfolio optimization problem in finance, a scheduling problem in project management, and a clustering problem in data analysis. For these problems, we present novel mathematical models that require a relatively small number of variables and constraints, and we develop matheuristics that are based on novel problem-decomposition strategies. In extensive computational experiments, the proposed models and matheuristics performed favorably compared to state-of-the-art models and solution approaches from the literature. In the first paper, we consider the problem of determining a portfolio for an enhanced index-tracking fund. Enhanced index-tracking funds aim to replicate the returns of a particular financial stock-market index as closely as possible while outperforming that index by a small positive excess return. Additionally, we consider various real-life constraints that may be imposed by investors, stock exchanges, or investment guidelines. Since enhanced index-tracking funds are particularly attractive to investors if the index comprises a large number of stocks and thus is well diversified, it is of particular interest to tackle large-scale problem instances. For this problem, we present two matheuristics, each combining a novel construction matheuristic with one of two improvement matheuristics based on the concepts of local branching (cf. Fischetti and Lodi 2003) and iterated greedy heuristics (cf., e.g., Ruiz and Stützle 2007), respectively.
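Local branching, the basis of one of the improvement matheuristics, constrains the search to a Hamming ball of radius k around an incumbent solution. The sketch below enumerates that ball explicitly instead of adding the linear cut to a solver model; the knapsack-style objective and the incumbent are illustrative only:

```python
from itertools import combinations

# Local branching (Fischetti & Lodi 2003): around an incumbent 0-1 vector
# x_bar, add the constraint
#     sum_{j: x_bar[j] == 0} x_j + sum_{j: x_bar[j] == 1} (1 - x_j) <= k,
# i.e. restrict the search to a Hamming ball of radius k. A solver would add
# this as a linear cut; here the ball is enumerated explicitly to show the
# idea. Objective and incumbent below are illustrative only.

def hamming_ball(x_bar, k):
    """All 0-1 vectors within Hamming distance k of x_bar."""
    n = len(x_bar)
    for d in range(k + 1):
        for flips in combinations(range(n), d):
            x = list(x_bar)
            for j in flips:
                x[j] = 1 - x[j]
            yield tuple(x)

def local_branching_step(x_bar, k, objective, feasible):
    """Best feasible solution in the radius-k ball (at worst x_bar itself)."""
    return max((x for x in hamming_ball(x_bar, k) if feasible(x)),
               key=objective, default=x_bar)

if __name__ == "__main__":
    # Knapsack-style toy: max 6a + 5b + 4c + 3d  s.t.  5a + 4b + 3c + 2d <= 9.
    obj = lambda x: 6 * x[0] + 5 * x[1] + 4 * x[2] + 3 * x[3]
    feas = lambda x: 5 * x[0] + 4 * x[1] + 3 * x[2] + 2 * x[3] <= 9
    print(local_branching_step((1, 0, 0, 0), 2, obj, feas))  # (1, 1, 0, 0)
```

From the incumbent (1, 0, 0, 0), the k = 2 ball already contains the improved solution (1, 1, 0, 0) with value 11, while the global optimum (0, 1, 1, 1) with value 12 lies at distance 4; the full local-branching scheme handles this by re-centering the ball on new incumbents and adaptively varying k.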
Moreover, both matheuristics are based on a novel mathematical model for which we provide insights that allow us to remove numerous redundant variables and constraints. We tested both matheuristics in a computational experiment on problem instances that are based on large stock-market indices with up to 9,427 constituents. It turns out that our matheuristics yield better portfolios than benchmark approaches in terms of out-of-sample risk-return characteristics. In the second paper, we consider the problem of scheduling a set of precedence-related project activities, each of which requires time and scarce resources during its execution. For each activity, alternative execution modes are given, which differ in the duration and the resource requirements of the activity. Sought are a start time and an execution mode for each activity such that all precedence relationships are respected, the required amount of each resource does not exceed its prescribed capacity, and the project makespan is minimized. For this problem, we present two novel mathematical models in which the number of variables remains constant when the range of the activities' durations, and thus also the planning horizon, is increased. Moreover, we enhance the performance of the proposed mathematical models by eliminating some symmetric solutions from the search space and by adding some redundant sequencing constraints for activities that cannot be processed in parallel. In a computational experiment based on instances consisting of activities with durations ranging from one up to 260 time units, the proposed models consistently outperformed all reference models from the literature. In the third paper, we consider the problem of grouping similar objects into clusters, where the similarity between a pair of objects is determined by a distance measure based on some features of the objects.
In addition, we consider constraints that impose a maximum capacity on the clusters, since the size of the clusters is often restricted in practical clustering applications. Furthermore, practical clustering applications are often characterized by a very large number of objects to be clustered. For this reason, we present a matheuristic based on novel problem-decomposition strategies that are specifically designed for large-scale problem instances. The proposed matheuristic comprises two phases. In the first phase, we decompose the considered problem into a series of generalized assignment problems, and in the second phase, we decompose the problem into subproblems that comprise groups of clusters only. In a computational experiment, we tested the proposed matheuristic on problem instances with up to 498,378 objects. The proposed matheuristic consistently outperformed the state-of-the-art approach on medium- and large-scale instances, while matching its performance on small-scale instances. Although we considered three specific optimization problems in this thesis, the proposed models and matheuristics can be adapted to related optimization problems with only minor modifications. Examples of such related optimization problems are the UCITS-constrained index-tracking problem (cf., e.g., Strub and Trautmann 2019), which consists of determining the portfolio of an investment fund that must comply with regulatory restrictions imposed by the European Union; the multi-site resource-constrained project scheduling problem (cf., e.g., Laurent et al. 2017), which comprises the scheduling of a set of project activities that can be executed at alternative sites; and constrained clustering problems with must-link and cannot-link constraints (cf., e.g., González-Almagro et al. 2020).
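The structure of the first-phase subproblems can be illustrated with a toy capacitated assignment: each object goes to a nearby cluster without exceeding cluster capacities. The regret-based greedy below is only a sketch of that structure (the matheuristic itself solves the generalized assignment problems with a mathematical programming solver), and the data are made up:

```python
# Capacitated assignment of objects to clusters -- the structure of the
# first-phase subproblems. The actual matheuristic solves these generalized
# assignment problems exactly with a solver; this regret-based greedy is
# only a sketch, and the data below are made up.

def greedy_capacitated_assignment(dist, capacity):
    """dist[i][k]: distance of object i to cluster k; capacity[k]: max size.
    Assumes sum(capacity) >= number of objects."""
    n, m = len(dist), len(capacity)
    load = [0] * m
    assignment = [None] * n

    def regret(i):
        # Gap between the two cheapest clusters: place tight objects first.
        d = sorted(dist[i])
        return d[1] - d[0] if m > 1 else d[0]

    for i in sorted(range(n), key=regret, reverse=True):
        k = min((c for c in range(m) if load[c] < capacity[c]),
                key=lambda c: dist[i][c])
        assignment[i] = k
        load[k] += 1
    return assignment

if __name__ == "__main__":
    dist = [[1, 5], [2, 6], [4, 1], [9, 2]]  # 4 objects, 2 clusters
    print(greedy_capacitated_assignment(dist, [2, 2]))  # [0, 0, 1, 1]
```

Ordering objects by regret (the cost gap between their two cheapest clusters) is a common GAP heuristic: objects that would suffer most from missing their preferred cluster are placed while capacity is still available.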