Search results
19 results
El «fluido eléctrico» del teatro en la Guerra de la Independencia y las Cortes: la teatralización de la historia y la política ; The "electrical fluid" of the theatre in the War of Independence and the Courts of Cádiz: history and politics dramatized
In this article the author sketches a brief panorama of how the theatre viewed the Courts of Cádiz, within the general context of the Spanish Revolution. It focuses on the dramatic text during the first third of the 19th century (1808-1833): from the cultured creations of a Martínez de la Rosa, with his Viuda de Padilla, to authors such as Gaspar Zavala y Zamora, Juan Poveda, Diego González Robles and Francisco de Paula Martí, asking which image of the Courts persisted during the years of the constitutional process and what its trajectory was afterwards, from 1814 to 1833.
BASE
Fuentes literarias para el estudio de las Cortes de Cádiz: historia y política desde la República de las letras. Bibliografía
The aim of this work is to classify and carry out a first analysis of the various literary materials generated by the Courts of Cádiz. Besides forming a very important corpus of sources for the knowledge of this historical episode, this material constitutes a genuinely relevant part of that same history, as a protagonist of the events and of literary history itself, to which it contributes a new public and political conception of the text. The large volume of press, leaflets, pamphlets and political dramas generated by or alluding to the episode in Cádiz, together with the copious autobiographical and memoir literature that recreates it in the voice of some of its main protagonists and most direct witnesses, calls for an inventory and classification that orders all this material for its later study and edition. This approach is thus conceived as the first step of a much broader research programme that the Eighteenth-Century Studies Group of the University of Cádiz intends to undertake in view of the approaching bicentenary of the Courts of 1812.
BASE
PLS model building with missing data: New algorithms and a comparative study
[EN] New algorithms to deal with missing values in predictive modelling are presented in this article. Specifically, two trimmed scores regression adaptations are proposed, one from principal component analysis model building with missing data (MD) and the other from partial least squares regression model exploitation with missing values. Using these methods, practitioners can impute MD both in the explanatory/predictor and the dependent/response variables. Partial least squares is used here to build the multivariate calibration models; however, any regression method can be used after MD imputation. Four case studies, with different latent structures, are analysed here to compare the trimmed scores regression-based methods against state-of-the-art approaches. The MATLAB code for these methods is also provided for direct implementation at http://mseg.webs.upv.es, under a GNU license. ; Spanish Ministry of Science and Innovation; FEDER; European Union, Grant/Award Number: DPI2011-28112-C04-02 and DPI2014-55276-C5-1R; Spanish Ministry of Economy and Competitiveness, Grant/Award Number: ECO2013-43353-R ; Folch-Fortuny, A.; Arteaga, F.; Ferrer, A. (2017). PLS model building with missing data: New algorithms and a comparative study. Journal of Chemometrics. 31(7):1-12. https://doi.org/10.1002/cem.2897
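The authors' reference code is the MATLAB implementation linked above; as a reading aid, the trimmed-scores idea can be sketched in a few lines of Python. This is a simplified illustration under assumed inputs (an already-fitted PCA mean and loading matrix); the function and variable names are ours, and the full TSR method adds a regression step between trimmed and true scores that this sketch omits:

```python
import numpy as np

def tsr_style_impute_row(x_row, mean, P, known):
    """Fill the missing entries of one observation from a fitted PCA model
    (mean vector and loading matrix P): estimate the latent scores from the
    observed variables only, then reconstruct the rest. Simplified sketch of
    the score-based reconstruction behind TSR-type methods, not the authors'
    algorithm."""
    xc = x_row[known] - mean[known]
    Pk = P[known, :]                      # loadings restricted to observed variables
    # Least-squares scores from the observed part (the "trimmed" information)
    t, *_ = np.linalg.lstsq(Pk, xc, rcond=None)
    x_hat = mean + P @ t                  # full reconstruction from the scores
    out = x_row.copy()
    out[~known] = x_hat[~known]           # overwrite only the missing entries
    return out
```

On low-rank data well described by the PCA model this reproduces a missing cell almost exactly; on noisy data it returns the model-consistent estimate.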
BASE
Comparison of multivariate statistical methods for dynamic systems modeling
This is the accepted version of the following article: Barceló, S., Vidal-Puig, S. and Ferrer, A. (2011), Comparison of multivariate statistical methods for dynamic systems modeling. Qual. Reliab. Engng. Int., 27: 107-124, which has been published in final form at http://dx.doi.org/10.1002/qre.1102. ; In this paper two multivariate statistical methodologies are compared in order to estimate a multi-input multi-output transfer function model in an industrial polymerization process. In such contexts, process variables are usually autocorrelated (i.e. there is time dependence between observations), which poses problems for classical linear regression models. The two methodologies compared are both related to the analysis of multivariate time series: the Box-Jenkins methodology and partial least squares time series. They are compared with regard to several issues, such as the simplicity of the process modeling (i.e. the identification, estimation and validation steps), the usefulness of the graphical tools, the goodness of fit, and the parsimony of the estimated models. Real data from a polymerization process are used to illustrate the performance of the methodologies under study. Copyright © 2010 John Wiley & Sons, Ltd. ; This research was partially supported by the Spanish Government (MICINN) and the European Union (ERDF funds) under grant DPI2008-06880-C03-03/DPI. ; Barceló Cerdá, S.; Vidal Puig, S.; Ferrer, A. (2011). Comparison of multivariate statistical methods for dynamic systems modeling. Quality and Reliability Engineering International. 27(1):107-124. https://doi.org/10.1002/qre.1102
BASE
Assessment of maximum likelihood PCA missing data imputation
Maximum likelihood principal component analysis (MLPCA) was originally proposed to incorporate measurement error variance information in principal component analysis (PCA) models. MLPCA can be used to fit PCA models in the presence of missing data, simply by assigning very large variances to the non-measured values. An assessment of maximum likelihood missing data imputation is performed in this paper, analysing the algorithm of MLPCA and adapting several methods for PCA model building with missing data to its maximum likelihood version. In this way, known data regression (KDR), KDR with principal component regression (PCR), KDR with partial least squares regression (PLS) and trimmed scores regression (TSR) methods are implemented within the MLPCA method to work as different imputation steps. Six data sets are analysed using several percentages of missing data, comparing the performance of the original algorithm, and its adapted regression-based methods, with other state-of-the-art methods. ; Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R. ; Folch Fortuny, A.; Arteaga Moreno, FJ.; Ferrer, A. (2016). Assessment of maximum likelihood PCA missing data imputation. Journal of Chemometrics. 30(7):386-393. https://doi.org/10.1002/cem.2804
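The "very large variances for the non-measured values" trick described in the abstract can be illustrated with a tiny weighted-least-squares PCA fit: giving a cell zero weight (i.e. an effectively infinite error variance) removes it from the fit, so the model reconstructs it from the remaining cells. A hypothetical rank-one sketch via alternating least squares, not the MLPCA algorithm itself:

```python
import numpy as np

def weighted_rank1_als(X, W, n_iter=500):
    """Rank-1 weighted least-squares PCA fit (X ~ outer(t, p)) by
    alternating least squares. Cells with weight 0 -- i.e. "infinite"
    error variance, as in MLPCA's missing-data trick -- do not influence
    the fit and are effectively imputed by the reconstruction.
    Illustrative sketch only."""
    n, m = X.shape
    t = np.ones(n)
    p = np.ones(m)
    Xz = np.where(W > 0, X, 0.0)   # mask unmeasured cells (may be NaN in X)
    for _ in range(n_iter):
        # Weighted LS update of the loadings p, column by column
        p = (W * Xz * t[:, None]).sum(axis=0) / (W * t[:, None] ** 2).sum(axis=0)
        # Weighted LS update of the scores t, row by row
        t = (W * Xz * p).sum(axis=1) / (W * p ** 2).sum(axis=1)
    return t, p
```

The reconstruction `np.outer(t, p)` then supplies estimates for the zero-weight cells, which is the sense in which MLPCA "fits PCA models in the presence of missing data".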
BASE
Missing Data Imputation Toolbox for MATLAB
[EN] Here we introduce a user-friendly graphical interface to deal with missing values, called the Missing Data Imputation (MDI) Toolbox. This MATLAB toolbox allows imputing missing values, following missing-completely-at-random patterns, by exploiting the relationships among variables. In this way, principal component analysis (PCA) models are fitted iteratively to impute the missing data until convergence. Different methods, using PCA internally, are included in the toolbox: trimmed scores regression (TSR), known data regression (KDR), KDR with principal component regression (KDR-PCR), KDR with partial least squares regression (KDR-PLS), projection to the model plane (PMP), the iterative algorithm (IA), the modified nonlinear iterative partial least squares regression algorithm (NIPALS) and data augmentation (DA). The MDI Toolbox implements a general procedure to impute missing data, and thus can be used to infer PCA models with missing data, to estimate the covariance structure of incomplete data matrices, or to impute the missing values as a preprocessing step for other methodologies. ; Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R. ; Folch Fortuny, A.; Arteaga Moreno, FJ.; Ferrer, A. (2016). Missing Data Imputation Toolbox for MATLAB. Chemometrics and Intelligent Laboratory Systems. 154:93-100. https://doi.org/10.1016/j.chemolab.2016.03.019
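The core loop the abstract describes (fit a PCA model to the completed matrix, re-estimate the missing cells from the model, repeat until convergence) can be sketched in Python. This is an illustrative analogue under assumptions, not the MATLAB toolbox code:

```python
import numpy as np

def iterative_pca_impute(X, n_components=2, max_iter=500, tol=1e-8):
    """Impute NaN cells by iteratively fitting a rank-k PCA model (via
    truncated SVD) and replacing the missing cells with the model
    reconstruction until the imputed values stop changing. Illustrative
    sketch of the iterative-imputation idea, not the MDI Toolbox code."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    # Start from column means, then refine using the latent structure
    X_imp = np.where(missing, np.nanmean(X, axis=0), X)
    for _ in range(max_iter):
        mean = X_imp.mean(axis=0)
        U, s, Vt = np.linalg.svd(X_imp - mean, full_matrices=False)
        # Rank-k PCA reconstruction of the current completed matrix
        X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        delta = np.abs(X_hat[missing] - X_imp[missing]).max() if missing.any() else 0.0
        X_imp[missing] = X_hat[missing]
        if delta < tol:
            break
    return X_imp
```

On data with a genuine low-rank structure the imputed cells converge to values consistent with that structure, which is why the same loop also serves to estimate the covariance of incomplete matrices.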
BASE
A BRILS metaheuristic for non-smooth flow-shop problems with failure-risk costs
This paper analyzes a realistic variant of the Permutation Flow-Shop Problem (PFSP) by considering a non-smooth objective function that takes into account not only the traditional makespan cost but also failure-risk costs due to uninterrupted operation of machines. After completing a literature review on the issue, the paper formulates an original mathematical model to describe this new PFSP variant. Then, a Biased-Randomized Iterated Local Search (BRILS) algorithm is proposed as an efficient solving approach. An oriented (biased) random behavior is introduced in the well-known NEH heuristic to generate an initial solution. From this initial solution, the algorithm is able to generate a large number of alternative good solutions without requiring a complex setting of parameters. The relative simplicity of our approach is particularly useful in the presence of non-smooth objective functions, for which exact optimization methods may fail to reach their full potential. The gains of considering failure-risk costs during the exploration of the solution space are analyzed through a series of computational experiments. To promote reproducibility, these experiments are based on a set of traditional benchmark instances. Moreover, the performance of the proposed algorithm is compared against other state-of-the-art metaheuristic approaches, which have been conveniently adapted to consider failure-risk costs during the solving process. The proposed BRILS approach can be easily extended to other combinatorial optimization problems with similar non-smooth objective functions. ; This research has been partially supported by the Spanish Ministry of Economy and Competitiveness, projects MTM2011-29064-C03-02, MTM2014-59179-C2-01 & TRA2013-48180-C3-P, and FEDER. NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program.
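The biased-randomized NEH idea described above can be sketched as follows: jobs are ordered by decreasing total processing time, drawn with a geometric (front-biased) distribution so that better-ranked jobs are favoured but any job can be chosen, and inserted at the best position found so far. This is an illustrative sketch under assumptions (`beta` is our name for the geometric bias parameter, and the plain makespan objective stands in for the paper's failure-risk cost), not the BRILS implementation:

```python
import math
import random

def makespan(seq, p):
    """Permutation flow-shop makespan; p[j][m] is the processing time of
    job j on machine m."""
    c = [0.0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, len(c)):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def biased_neh(p, beta=0.3, rng=None):
    """NEH-style constructive heuristic with geometric biased randomization.
    Illustrative sketch of the biased-randomization idea, not the paper's
    BRILS code."""
    rng = rng or random.Random(0)
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))  # NEH priority order
    seq = []
    while jobs:
        # Geometric(beta) position, truncated to the remaining list length
        k = min(int(math.log(rng.random()) / math.log(1.0 - beta)), len(jobs) - 1)
        job = jobs.pop(k)
        # NEH insertion: try every position, keep the best sequence so far
        seq = min(
            (seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq, makespan(seq, p)
```

Within BRILS this constructive step would be followed by an iterated local search; even on its own, re-running it with different random seeds yields a pool of good alternative sequences, which is the point of the biased randomization.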
BASE
Principal elementary mode analysis (PEMA)
Principal component analysis (PCA) has been widely applied in fluxomics to compress data into a few latent structures in order to simplify the identification of metabolic patterns. These latent structures lack a direct biological interpretation due to the intrinsic constraints associated with a PCA model. Here we introduce a new method that significantly improves the interpretability of the principal components with a direct link to metabolic pathways. This method, called principal elementary mode analysis (PEMA), establishes a bridge between a PCA-like model, aimed at explaining the maximum variance in flux data, and the set of elementary modes (EMs) of a metabolic network. It provides an easy way to identify metabolic patterns in large fluxomics datasets in terms of the simplest pathways of the organism's metabolism. The results using a real metabolic model of Escherichia coli show the ability of PEMA to identify the EMs that generated the different simulated flux distributions. Actual flux data from E. coli and Pichia pastoris cultures confirm the results observed in the simulated study, providing a biologically meaningful model to explain the flux data of both organisms in terms of EM activation. The PEMA toolbox is freely available for non-commercial purposes at http://mseg.webs.upv.es. ; Research in this study was partially supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R. We also acknowledge the Fundação para a Ciência e a Tecnologia for PhD fellowships with references SFRH/BD/67033/2009, SFRH/BD/70768/2010 and PTDC/BBB-BSS/2800/2012. ; Folch Fortuny, A.; Marques, R.; Isidro, IA.; Oliveira, R.; Ferrer, A. (2016). Principal elementary mode analysis (PEMA). Molecular BioSystems. 12(3):737-746. doi:10.1039/c5mb00828j
BASE
Enabling network inference methods to handle missing data and outliers
The inference of complex networks from data is a challenging problem in biological sciences, as well as in a wide range of disciplines such as chemistry, technology, economics, or sociology. The quantity and quality of the data greatly affect the results. While many methodologies have been developed for this task, they seldom take into account issues such as missing data or outlier detection and correction, which need to be properly addressed before network inference. Results: Here we present an approach to (i) handle missing data and (ii) detect and correct outliers based on multivariate projection to latent structures. The method, called trimmed scores regression (TSR), enables network inference methods to analyse incomplete datasets by imputing the missing values coherently with the latent data structure. Furthermore, it substitutes the faulty values in a dataset by proper estimations. We provide an implementation of this approach, and show how it can be integrated with any network inference method as a preliminary data curation step. This functionality is demonstrated with a state-of-the-art network inference method based on mutual information distance and entropy reduction, MIDER. Conclusion: The methodology presented here enables network inference methods to analyse a large number of incomplete and faulty datasets that could not be reliably analysed so far. Our comparative studies show the superiority of TSR over other missing data approaches used by practitioners. Furthermore, the method allows for outlier detection and correction. ; Research in this study was partially supported by the European Union through project BioPreDyn (FP7-KBBE 289434), and the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants MultiScales (DPI2011-28112-C04-02, DPI2011-28112-C04-03), and SynBioFactory (DPI2014-55276-C5-1-R, DPI2014-55276-C5-2-R).
AF Villaverde also acknowledges funding from the Xunta de Galicia through an I2C postdoctoral fellowship (I2C ED481B 2014/133-0). We also ...
BASE
Multivariate Six Sigma: A Case Study in Industry 4.0
[EN] The complex data characteristics collected in Industry 4.0 cannot be efficiently handled by the classical Six Sigma statistical toolkit, based mainly on least-squares techniques. This may deter people from using Six Sigma in these contexts. The incorporation of latent-variable-based multivariate statistical techniques such as principal component analysis and partial least squares into the Six Sigma statistical toolkit can help to overcome this problem, yielding Multivariate Six Sigma: a powerful process improvement methodology for Industry 4.0. A multivariate Six Sigma case study based on the batch production of one of the star products at a chemical plant is presented. ; Palací-López, D.; Borràs-Ferrís, J.; Da Silva De Oliveria, LT.; Ferrer, A. (2020). Multivariate Six Sigma: A Case Study in Industry 4.0. Processes. 8(9):1-20. https://doi.org/10.3390/pr8091119
BASE