In: Wiadomości statystyczne / Główny Urząd Statystyczny, Polskie Towarzystwo Statystyczne: czasopismo Głównego Urzędu Statystycznego i Polskiego Towarzystwa Statystycznego = The Polish statistician, Vol. 60, No. 9, pp. 23-29
The article highlights the need to broaden the analysis of the quality of survey results by taking into account the negative impact that certain so-called editing operations on input data, such as checking their accuracy and correcting errors, can have on those results. The conclusions underline the need to extend academic statistics curricula to cover the impact of processing operations on the quality of results.
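The point that editing operations themselves influence results can be illustrated with a minimal, hypothetical sketch: the same out-of-range value handled by two common edit rules (truncation to a plausible maximum versus treating it as missing) yields different estimates of the mean. The data and the cutoff of 4000 are invented for illustration only.

```python
import statistics

# Hypothetical survey responses for monthly income; one keying error (9999)
raw = [2100, 2500, 1900, 3200, 9999, 2800, 2300]

# Editing rule A: hard truncation to a plausible maximum.
# This keeps the erroneous record in the sample and pulls the mean upward.
truncated = [min(v, 4000) for v in raw]

# Editing rule B: treat the out-of-range value as missing and drop it.
dropped = [v for v in raw if v <= 4000]

mean_trunc = statistics.mean(truncated)  # includes the capped error
mean_drop = statistics.mean(dropped)     # excludes the suspect record
```

Because the two rules produce different means from identical input, any quality analysis of the published estimates has to account for which editing rule was applied.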
"Lists reports and translations added September 1961 to February 1963." ; "A bibliography of reports listed in U.S. Government research reports and Technical translations." ; Mode of access: Internet.
Front Cover
Dedication
Health Data Processing: Systemic Approaches
Copyright
Contents
Preface
Introduction
  Memorization of information intended for a single category of actors or a single specialty is of little added value
  The complexity of human activity cannot find satisfactory answers in siloed systems
  Confronting the heterogeneity of data and systems
  Reusing data is necessary and provides high added value
  Design and implementation of flexible information systems
  Modeling as a way of responding to issues of flexibility
  Modeling to develop
  Principles for analyzing and implementing flexible information systems
  Current implementation of these principles
  Strategic alignment of information systems is confirmed but all too often not achieved
1. Understanding the Fundamental Nature of Information and its Processing
  1.1. Introduction
  1.2. Data, knowledge and information
  1.3. Data structures
  1.4. Data models
  1.5. Qualities that make information valuable
  1.6. Improving the quality of data
  1.7. Uses of patient data
  1.8. Processing information, applications, components and processes
2. A Few Questions on Information Sharing
  2.1. Introduction
  2.2. Twelve questions for better defining sharing and its objectives
  2.3. Organization of information sharing is a prerequisite of technological choice
  2.4. Summary and conclusion
3. The Place of Healthcare Delivery Processes in Information Systems
  3.1. Introduction
  3.2. The concept of the process
  3.3. Modeling and the presentation of processes
  3.4. Processes and procedures
  3.5. Interests and limitations of the process-based approach
  3.6. Conclusion
4. The Quality of the Urbanization of the Information System is Central to its Performance
  4.1. Introduction
Access options:
The following links lead from the respective local libraries to the full text:
A magnetotelluric (MT) data processing code has been developed that demonstrates high robustness to the intense electromagnetic noise present in measured MT data. Key features of the code are a specific approach to estimating the different transfer functions and the capability to utilize all four channels acquired at the remote reference station. The code employs various techniques to reduce estimation errors, including a robust Huber estimator, a jackknife approach, an improved remote reference technique, and compensation for the overestimation of power spectra. The proposed code has shown high efficiency in processing low signal-to-noise data.
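The abstract does not disclose the code's internals, but the core idea of a robust Huber estimator for a transfer function can be sketched as iteratively reweighted least squares: samples with large residuals are down-weighted so that noise bursts do not dominate the estimate. The scalar model `y ≈ T·x`, the tuning constant `k = 1.345`, and the synthetic data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weights: 1 inside the threshold, k/|r| beyond it.
    The scale is estimated robustly from the median absolute deviation."""
    scale = np.median(np.abs(residuals)) / 0.6745 + 1e-12
    r = np.abs(residuals) / scale
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def robust_transfer_function(x, y, n_iter=10, k=1.345):
    """Estimate T in y ~ T*x by iteratively reweighted least squares,
    down-weighting outlier-contaminated samples with Huber weights."""
    T = np.sum(y * x) / np.sum(x * x)  # ordinary least-squares start
    for _ in range(n_iter):
        resid = y - T * x
        w = huber_weights(resid, k)
        T = np.sum(w * y * x) / np.sum(w * x * x)
    return T

# Synthetic check: true transfer function T = 2, plus a burst of
# large-amplitude noise contaminating the first few samples.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + 0.1 * rng.normal(size=500)
y[:10] += 50.0  # simulated noise burst
T_hat = robust_transfer_function(x, y)
```

An ordinary least-squares fit on the same data would be pulled away from the true value by the contaminated samples; the Huber reweighting drives their influence toward zero after a few iterations.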
Data analyses using methods of exploratory and inductive statistics now form an integral part of many areas of human activity. The paper focuses on the statistical processing of data using a new application, STAT1, which runs under Microsoft Office Excel. An explanation of the application is given, its source is stated, and a comparison with alternative application software tools is provided. Moreover, examples of the practical use of STAT1 in the military domain are presented.
This open access book is part of the LAMBDA Project (Learning, Applying, Multiplying Big Data Analytics), funded by the European Union, GA No. 809965. Data Analytics involves applying algorithmic processes to derive insights. Nowadays it is used in many industries to allow organizations and companies to make better decisions as well as to verify or disprove existing theories or models. The term data analytics is often used interchangeably with intelligence, statistics, reasoning, data mining, knowledge discovery, and others. The goal of this book is to introduce some of the definitions, methods, tools, frameworks, and solutions for big data processing, starting from the process of information extraction and knowledge representation, via knowledge processing and analytics to visualization, sense-making, and practical applications. Each chapter in this book addresses some pertinent aspect of the data processing chain, with a specific focus on understanding Enterprise Knowledge Graphs, Semantic Big Data Architectures, and Smart Data Analytics solutions. This book is addressed to graduate students from technical disciplines, to professional audiences following continuous education short courses, and to researchers from diverse areas following self-study courses. Basic skills in computer science, mathematics, and statistics are required.