Llengua als països catalans: la nació catalana ; grau de cohesió i de culturació lingüística
In: Maregassa 3
In: Scripta Nova: revista electrónica de geografía y ciencias sociales, Volume 27, Issue 2
ISSN: 1138-9788
In the context of the financialisation of built environments, this paper aims to contribute to the analysis and characterisation of current gentrification processes and their resulting displacements and inequalities. The objective of the work is to approach the analysis of the relationship between class and evictions in the fifth-wave gentrification of Palma, one of the most touristified cities and one of the most relevant real-estate and financial markets in Southern Europe. A quantitative and cartographic analysis of the evolution of housing dispossession in Palma (2003-2020) is presented, and the distribution of evictions is correlated with the social classes of the inhabitants of the neighbourhoods where they occur. It is concluded that housing dispossession processes vary according to social class, typology and temporal evolution. Evictions are especially concentrated in very low-class areas and, in the case of mortgage foreclosures, in areas with a notable presence of immigration from the Global South.
The main computing tasks of a finite element (FE) code for solving partial differential equations (PDEs) are the algebraic system assembly and the iterative solver. This work focuses on the first task, in the context of a hybrid MPI+X paradigm. Although we will describe algorithms in the FE context, a similar strategy can be straightforwardly applied to other discretization methods, such as the finite volume method. The matrix assembly consists of a loop over the elements of the MPI partition to compute element matrices and right-hand sides, and of their assembly into the system local to each MPI partition. In an MPI+X hybrid parallelism context, X has traditionally consisted of loop parallelism using OpenMP. Several strategies have been proposed in the literature to implement this loop parallelism, such as coloring or substructuring techniques, to circumvent the race condition that appears when assembling the element system into the local system. The main drawback of the first technique is the decrease in IPC due to poor spatial locality. The second technique avoids this issue but requires extensive changes in the implementation, which can be cumbersome when several element loops must be treated. We propose an alternative, based on task parallelism of the element loop using some extensions to the OpenMP programming model. The taskification of the assembly solves both aforementioned problems. In addition, dynamic load balance will be applied using the DLB library, which is especially efficient in the presence of hybrid meshes, where the relative costs of the different elements are impossible to estimate a priori. This paper presents the proposed methodology, its implementation and its validation through the solution of large computational mechanics problems on up to 16k cores. ; The use of a large part of a supercomputer, even more so under normal conditions of use, is never an innocuous exercise.
The research leading to these results has received funding from: the European Union's Horizon 2020 Programme (2014–2020) and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP), HPC4E Project, grant agreement 689772; the Energy oriented Centre of Excellence (EoCoE), grant agreement number 676629, funded within the Horizon 2020 framework of the European Union; the Spanish Government (grant SEV2015-0493 of the Severo Ochoa Program); the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); the Generalitat de Catalunya (contract 2014-SGR-1051); the Intel-BSC Exascale Lab collaboration project. Comissió Interdepartamental de Recerca i Innovació Tecnològica (Interdepartmental Commission for Research and Technological Innovation) ; Yes ; Post-print (author's final draft)
BASE
The main computing phases of numerical methods for solving partial differential equations are the algebraic system assembly and the iterative solver. This work focuses on the first task, in the context of a hybrid MPI+X paradigm. The matrix assembly consists of a loop over the elements, faces, edges or nodes of the MPI partitions to compute element matrices and vectors and then of their assemblies. In a MPI+X hybrid parallelism context, X has consisted traditionally of loop parallelism using OpenMP, with different techniques to avoid the race condition, but presenting efficiency or implementation drawbacks. We propose an alternative, based on task parallelism using some extensions to the OpenMP programming model. In addition, dynamic load balance will be applied, especially efficient in the presence of hybrid meshes. This paper presents the proposed methodology, its implementation and its validation through the solution of large computational mechanics problems up to 16k cores. ; The research leading to these results has received funding from: the European Union's Horizon 2020 Programme (2014–2020) and from Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP), HPC4E Project, grant agreement 689772; the Energy oriented Centre of Excellence (EoCoE), grant agreement number 676629, funded within the Horizon 2020 framework of the European Union; the Spanish Government (grant SEV2015-0493 of the Severo Ochoa Program); the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); the Generalitat de Catalunya (contract 2014-SGR-1051); the Intel-BSC Exascale Lab collaboration project. Comissió Interdepartamental de Recerca i Innovació Tecnològica (Interdepartmental Commission for Research and Technological Innovation) ; Peer Reviewed ; Postprint (author's final draft)
BASE
In: Urban affairs review, Volume 50, Issue 2, p. 206-243
ISSN: 1552-8332
The urban and territorial changes caused by tourism are well-studied topics in contemporary scientific literature. This article uses an integrative approach that lies between the scientific traditions in urban geography and the geography of tourism to present a case study of a socialist city. Tourism is a strategic economic activity in Cuba, and the country's most popular sun and sand tourist destination is Varadero. At first consideration, its tourism model is not very different from those of other areas in the region (Dominican Republic, Riviera Maya, etc.), but the uniqueness of the Cuban government and emphasis on planning introduce several distinguishing features. The combined analysis of the development of tourism in the city and the recent history of territorial planning leads to conclusions regarding the role of tourism in urban development, which has resulted in the creation of a dual-city model, and the role land planning is playing.
Large scale time-dependent particle simulations can generate massive amounts of data, such that storing the results is often the slowest phase and the primary time bottleneck of the simulation. Furthermore, analysing this amount of data with traditional tools has become increasingly challenging, and it is often virtually impossible to have a visual representation of the full set. We propose a novel architecture that integrates an HPC-based multi-physics simulation code, a NoSQL database, and a data analysis and visualisation application. The goals are twofold: on the one hand, we aim to speed up the simulations by taking advantage of the scalability of key-value data stores, while at the same time enabling real-time approximated data visualisation and interactive exploration. On the other hand, we want to make it efficient to explore and analyse the large database of results produced. Therefore, this work represents a clear example of integrating High Performance Computing with High Performance Data Analytics. Our prototype proves the validity of our approach and shows great performance improvements. Indeed, we reduced the time to store the simulation by 67.5% while making real-time queries run 52 times faster than alternative solutions. ; This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 720270 (HBP SGA1). It is also partially supported by the grant SEV-2011-00067 of the Severo Ochoa Program, the TIN2015-65316-P project, with funding from the Spanish Ministry of Economy and Competitiveness, the European Union FEDER funds, and the SGR 2014-SGR-1051. ; Peer Reviewed ; Postprint (published version)
BASE
Alya is a multi-physics simulation code developed at Barcelona Supercomputing Center (BSC). From its inception, Alya has been designed using advanced High Performance Computing programming techniques to solve coupled problems efficiently on supercomputers. The target domain is engineering, with all its particular features: complex geometries and unstructured meshes, coupled multi-physics with exotic coupling schemes and physical models, ill-posed problems, flexibility needs for rapidly including new models, etc. Since its beginnings in 2004, Alya has scaled well on an increasing number of processors when solving single-physics problems such as fluid mechanics, solid mechanics, acoustics, etc. Over time, we have made a concerted effort to maintain and even improve scalability for multi-physics problems. This poses challenges on multiple fronts, including: numerical models, parallel implementation, physical coupling models, algorithms and solution schemes, meshing process, etc. In this paper, we introduce Alya's main features and focus particularly on its solvers. We present Alya's performance on up to 100,000 processors on Blue Waters, the NCSA supercomputer, with selected multi-physics tests that are representative of the engineering world. The tests are incompressible flow in a human respiratory system, a low-Mach combustion problem in a kiln furnace, and coupled electro-mechanical contraction of the heart. We show scalability plots for all cases and discuss all aspects of such simulations, including solver convergence. ; The authors would like to thank the following fellow researchers and institutions: • The Private Sector Program at NCSA and the Blue Waters sustained-petascale computing project, supported by the National Science Foundation (award number OCI 07-25070) and the state of Illinois. • Denis Doorly and Alister Bates (Imperial College London, UK), collaborators of the airways study. Part of this work was financed by European PRACE Type B/C projects.
• The heart geometry was provided by Dr. A. Berruezo (Hospital Clinic de Barcelona) in collaboration with R. Sebastian (UVEG) and O. Camara (UPF), partially financed through project TIN2011-28067 from MINECO, Spain. • Part of the cardiac model development was financed by the grant SEV-2011-00067 of Severo Ochoa Program, awarded by the Spanish Government. • Part of the kiln model development was financed by the European Commission in the framework of the FP7 Collaborative project "Advanced Technologies for the Production of Cement and Clean Aggregates from Construction and Demolition Waste (C2CA)", Grant Agreement No 265189. ; Peer Reviewed ; Postprint (author's final draft)
BASE