The increasing throughput of DNA sequencing technologies creates a need for faster algorithms. Most reads are mapped to a reference sequence, typically a genome. Modern mappers rely on heuristics to gain speed at a reasonable cost in accuracy. In the seeding heuristic, short matches between the reads and the genome are used to narrow the search to a set of candidate locations. Several seeding variants used in modern mappers show good empirical performance, but they are difficult to calibrate or to optimize because theoretical results are lacking. Here we develop a theory to estimate the probability that the correct location of a read is filtered out during seeding, resulting in mapping errors. We describe the properties of simple exact seeds, skip seeds and MEM seeds (Maximal Exact Match seeds). The main innovation of this work is to use concepts from analytic combinatorics to represent reads as abstract sequences and to specify their generating function, from which the probabilities of interest are estimated. We provide several algorithms, which together give a workable solution to the problem of calibrating seeding heuristics for short reads. We also provide a C implementation of these algorithms in a library called Sesame. These results can improve current mapping algorithms and lay the foundation of a general strategy to tackle sequence alignment problems. The Sesame library is open source and available for download at https://github.com/gui11aume/sesame.
We acknowledge the financial support of the Spanish Ministry of Economy, Industry and Competitiveness (Centro de Excelencia Severo Ochoa 2013–2017, Plan Estatal PGC2018-099807-B-I00), the CERCA Programme/Generalitat de Catalunya, and the European Research Council (Synergy Grant 609989). RC was supported by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement 608959.
We also acknowledge support of the Spanish Ministry of Economy and Competitiveness (MEIC) to the EMBL partnership.
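To make the seeding heuristic concrete, here is a minimal sketch of exact-seed candidate collection, assuming a simple hash index of genome k-mers. This is an illustration of the general idea only, not Sesame's implementation; the function names and the toy sequences are invented.

```python
from collections import defaultdict

def build_index(genome, k):
    """Index every k-mer of the genome by its start positions."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)
    return index

def candidate_locations(read, index, k):
    """Collect candidate mapping locations from exact k-mer seeds.

    Each k-mer of the read that occurs in the genome votes for the
    genome position where the whole read would start."""
    candidates = set()
    for i in range(len(read) - k + 1):
        for pos in index.get(read[i:i + k], []):
            candidates.add(pos - i)
    return candidates

genome = "ACGTACGTGACCTTGA"
index = build_index(genome, 4)
print(sorted(candidate_locations("ACGTGACC", index, 4)))  # [0, 4]
```

The correct location (4) is among the candidates here, but the seeding theory in the abstract is precisely about quantifying how often the correct location fails to be, once sequencing errors are taken into account.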
In flood risk management, the choice of vulnerability functions has a remarkable impact on the overall uncertainty of modelling flood damage. The spatial transferability of empirical vulnerability functions is limited, leading to the need for computation and validation of region-specific vulnerability functions. In data-scarce regions, however, this option is not feasible. In contrast, the physical processes of flood impact model chains can be developed in these regions because of the availability of global datasets. Here we evaluated the implementation of a synthetic vulnerability function into a flood impact model. The function is based on expert heuristics applied to a targeted sample of representative buildings (targeted heuristics). We applied the vulnerability function in a meso-scale river basin and evaluated the new function by comparing the resulting flood damage with the damage computed by other approaches: (1) an ensemble of vulnerability functions available from the literature, (2) an individual vulnerability function calibrated with region-specific data, and (3) the vulnerability function used in flood risk management by the Swiss government. The results show that targeted heuristics can be a valuable alternative for developing flood impact models in regions with little or no data on flood damage.
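For readers unfamiliar with the term, a vulnerability (depth-damage) function maps a flood hazard intensity at a building to the damaged fraction of its value. The sketch below is purely hypothetical: the breakpoints, damage fractions, and building values are invented for illustration and are not the functions evaluated in the study.

```python
def damage_fraction(depth_m):
    """Hypothetical step-wise depth-damage curve: water depth at a
    building (metres) -> damaged fraction of the building's value."""
    if depth_m <= 0.0:
        return 0.0
    if depth_m < 0.5:
        return 0.2
    if depth_m < 1.5:
        return 0.5
    return 0.8

def flood_damage(buildings):
    """Total monetary damage over (building_value, water_depth) pairs."""
    return sum(value * damage_fraction(depth) for value, depth in buildings)

# Two example buildings with invented values and simulated water depths.
print(flood_damage([(200_000, 0.3), (350_000, 1.0)]))  # 215000.0
```

The study's comparison amounts to swapping different `damage_fraction` curves (literature ensemble, region-calibrated, Swiss-government, targeted-heuristics) into the same impact model chain and comparing the resulting totals.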
This paper focuses on iterated local search heuristics for the maximum cut‐clique (MCC, or clique neighborhood) problem. Given an undirected graph G = (V,E) and a clique C of G, the cut‐clique is the set of edges running between C and V\C, establishing the cut (C,V\C). The MCC problem in G is to find a clique with the largest number of edges in its neighborhood, also known as the maximum edge‐neighborhood clique. This problem was recently introduced in the literature together with a number of applications, namely in cell biology instances. However, it has so far been addressed only by exact methods. In this paper, we introduce the first approximate algorithms for tackling the MCC problem, compare the results with the exact methodologies, and explore a new application within marketing analysis, which provides a new alternative perspective for mining market basket problems. ; This work has been partially supported by the Portuguese National Funding by FCT (project PEst-OE/MAT/UI0152), Pedro Martins; and by the Ministerio de Economía y Competitividad (Spanish Government), A. Ladrón-de-Guevara (ECO2011-27942) and H. Ramalhinho (TRA2013-48180-C3-2-P).
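The objective function of the MCC problem follows directly from the definition above and is cheap to evaluate, which is what makes local search attractive. The following sketch (a toy illustration, not the paper's algorithm) computes the cut-clique size for a given clique:

```python
def cut_clique_size(adj, clique):
    """Number of edges running between the clique C and V \\ C,
    i.e. the size of the cut (C, V \\ C)."""
    C = set(clique)
    return sum(1 for u in C for v in adj[u] if v not in C)

# Toy undirected graph as adjacency sets; {0, 1, 2} is a clique.
adj = {
    0: {1, 2, 3},
    1: {0, 2, 4},
    2: {0, 1},
    3: {0},
    4: {1},
}
print(cut_clique_size(adj, [0, 1, 2]))  # 2 (edges 0-3 and 1-4)
```

An iterated local search would repeatedly perturb the current clique (add, drop, or swap vertices while preserving the clique property) and keep moves that increase this count.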
In recent years, a large number of heuristics have been proposed for the minimization of the total or mean flowtime/completion time of the well-known permutation flowshop scheduling problem. Although some literature reviews and comparisons have been made, they do not include the latest available heuristics, and results are hard to compare as no common benchmarks and computing platforms have been employed. Furthermore, existing partial comparisons lack the application of powerful statistical tools. The result is that it is not clear which heuristics, especially among the recent ones, are the best. This paper presents a comprehensive review and computational evaluation as well as a statistical assessment of 22 existing heuristics. From the knowledge obtained after such a detailed comparison, five new heuristics are presented. Careful designs of experiments and analyses of variance (ANOVA) techniques are applied to guarantee sound conclusions. The comparison results identify the best existing methods and show that the five newly presented heuristics are competitive with or better than the best performing ones in the literature for the permutation flowshop problem with the total completion time criterion.
This research is partially supported by the National Science Foundation of China (60874075, 61174187), the Science Foundation of Shandong Province, China (BS2010DX005), and the Postdoctoral Science Foundation of China (20100480897). Ruben Ruiz is partially funded by the Spanish Ministry of Science and Innovation, under the project "SMPA-Advanced Parallel Multiobjective Sequencing: Practical and Theoretical Advances" with reference DPI2008-03511/DPI, and by the Small and Medium Industry of the Generalitat Valenciana (IMPIVA) and the European Union through the European Regional Development Fund (FEDER) inside the R+D program "Ayudas dirigidas a Institutos Tecnologicos de la Red IMPIVA" during the year 2011, with project number IMDEEA/2011/142.
Pan, Q.; Ruiz García, R. (2013). A comprehensive review and evaluation of ...
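The criterion all of these heuristics minimize can be stated concretely. In a permutation flowshop, every job visits machines 1..m in the same order, and the completion time of job j on machine i is C(j,i) = max(C(j-1,i), C(j,i-1)) + p(j,i). A minimal sketch of the total completion time (flowtime) of a given job permutation, with invented processing times:

```python
def total_flowtime(perm, proc):
    """Total completion time of a job permutation on an m-machine
    permutation flowshop. proc[j][i] is the processing time of job j
    on machine i."""
    m = len(proc[0])
    finish = [0] * m  # finish[i]: completion of the last scheduled job on machine i
    total = 0
    for j in perm:
        for i in range(m):
            # Job j starts on machine i when both the machine is free
            # and the job has left machine i-1.
            start = max(finish[i], finish[i - 1] if i > 0 else 0)
            finish[i] = start + proc[j][i]
        total += finish[-1]  # completion time of job j on the last machine
    return total

proc = [[3, 2], [1, 4]]  # 2 jobs x 2 machines, invented times
print(total_flowtime([0, 1], proc))  # 14
print(total_flowtime([1, 0], proc))  # 12
```

Even in this two-job example the sequence matters (14 vs 12), and the heuristics surveyed in the paper are different strategies for searching the n! permutations for a low value of this objective.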
This book results from a long-term research effort aimed at tackling complex non-standard packing issues which arise in space engineering. The main research objective is to optimize cargo loading and arrangement, in compliance with a set of stringent rules. Complicated geometrical aspects are also taken into account, in addition to balancing conditions based on attitude control specifications. Chapter 1 introduces the class of non-standard packing problems studied. Chapter 2 gives a detailed explanation of a general model for the orthogonal packing of tetris-like items in a convex domain. A number of additional conditions are looked at in depth, including the prefixed orientation of subsets of items, the presence of unusable holes, separation planes and structural elements, relative distance bounds, as well as static and dynamic balancing requirements. The relative feasibility sub-problem, a special case with no optimization criterion, is discussed in Chapter 3. This setting can be exploited by introducing an ad hoc objective function aimed at facilitating the finding of integer-feasible solutions. The third chapter also discusses the issue of tightening the general MIP model by introducing valid inequalities. A MIP-based heuristic approach is developed in Chapter 4, where the basic concept of abstract configuration is presented. Chapter 5 is devoted to experimental results relevant to a real-world application framework. Chapter 6 adopts both extensions of the general MIP model and non-linear formulations to tackle two further non-standard packing issues. The final Chapter 7 presents conclusions and provides insights regarding prospective developments (including non-standard scheduling aspects). Practitioners and researchers interested in advanced optimization model development and solution in the context of logistics, transportation systems, complex structures, manufacturing and electronics will find this book useful.
The book can also be used in graduate courses on nonlinear (including global and mixed integer) optimization, as a valuable collection of practically meaningful object packing applications.
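As a toy stand-in for the feasibility sub-problem mentioned above, the sketch below checks a candidate orthogonal packing of axis-aligned rectangles in a rectangular domain: containment plus pairwise non-overlap. This is only an illustration of the constraint structure; the book's model handles tetris-like items, convex domains, balancing conditions, and many further constraints, all of which are omitted here.

```python
def packing_feasible(boxes, domain):
    """Check a candidate orthogonal packing in 2D.

    boxes: list of (x, y, w, h) axis-aligned rectangles.
    domain: (W, H) rectangular container.
    Feasible iff every box lies inside the domain and no two overlap."""
    W, H = domain
    for x, y, w, h in boxes:
        if x < 0 or y < 0 or x + w > W or y + h > H:
            return False  # containment violated
    for i, (x1, y1, w1, h1) in enumerate(boxes):
        for x2, y2, w2, h2 in boxes[i + 1:]:
            # Two rectangles overlap iff they overlap on both axes.
            if x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1:
                return False  # overlap violated
    return True

print(packing_feasible([(0, 0, 2, 1), (2, 0, 1, 3)], (4, 3)))  # True
```

In the MIP formulations the book develops, the same non-overlap condition is expressed through binary variables selecting, for each pair of items, which side of a separating plane each lies on.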