This research identifies best practices that support alignment between an organization's Information Technology (IT) and its business processes. A principal finding is that alignment is achieved through the effective application of Enterprise Architecture (EA), a practice common throughout the Federal Government, the Department of Defense, and the Air Force. EA is the tool used to achieve alignment; conversely, the purpose of developing IT architecture is to align IT investments with mission objectives. This research groups the best practices into four areas: vision, identification, framework, and governance. These practices correspond to an Enterprise Architecture's depiction of the "to be" target state, the "as is" baseline, the tools and models used for communication, and the motivation and management of the transition plan. The insights from this research should strengthen the use of Enterprise Architecture within the Air Force by enabling senior leaders and decision-makers to align strategy and IT investment toward improved mission accomplishment.
Also includes a copy of the testimony of Louis Gutierrez, Director of the Information Technology Division and Chief Information Officer, before the House and Senate Committees on Science and Technology.
Part I – Introduction -- Chapter 1: "An Introduction to AI in the GCC" -- Chapter 2: "Framework of Study and Book Organization" -- Part II – Data, Governance and Regulations -- Chapter 3: "Public Sector Data for Academic Research: the case of the UAE" -- Chapter 4: "Strategy for Artificial Intelligence in Bahrain: Challenges and Opportunities" -- Chapter 5: "Thoughts and Reflections on the Case of Qatar: Should Artificial Intelligence be Regulated?" -- Part III – Existing Opportunities and Sectoral Applications -- Chapter 6: "Knowledge, Attitude, and Perceptions of Financial Industry Employees toward AI in the GCC Region" -- Chapter 7: "The GCC and Global Health Diplomacy: The New Drive towards Artificial Intelligence" -- Chapter 8: "Free Zones in Dubai: Accelerators for Artificial Intelligence in the Gulf" -- Part IV – Society, Utopia and Dystopia -- Chapter 9: "AI & Well-Being: Can AI make you happy in the city?" -- Chapter 10: "Women and the Fourth Industrial Revolution: An examination of the UAE's national AI strategy" -- Chapter 11: "The Art and Science of Data and Mind Control: AI in the UAE and Beyond" -- Chapter 12: "Fatwas from Islamweb.net on Robotics and Artificial Intelligence" -- Part V – Conclusion -- Chapter 13: "Outlook of the future of AI in the GCC".
Imaging technology is highly important in today's medical environments. It provides information upon which the accuracy of the diagnosis, and consequently the wellbeing of the patient, relies. Increasing the quality and significance of medical image data is therefore one of the aims of scientific research and development. We introduce an integrated hardware and software framework for real-time image processing in medical environments, which we call RealTimeFrame. Our project is designed to offer flexibility, easy expandability, and high performance. We use standard personal computer hardware to run our multithreaded software. A frame grabber card is used to capture video signals from medical imaging systems. A modular, user-defined process chain performs arbitrary manipulations on the image data. The graphical user interface offers configuration options and displays the processed image in either windowed or full-screen mode. Image source and processing routines are encapsulated in dynamic library modules, so functionality can be extended without recompiling the entire software framework. Documented template modules for sources and processing steps are part of the software's source code.
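The central design point here is the modular, user-defined process chain whose steps are pluggable modules. The sketch below illustrates that pattern in Python; the names (FrameProcessor, ProcessChain) and the example steps are illustrative assumptions, not RealTimeFrame's actual API, and the real framework loads compiled plug-in libraries and pulls frames from a frame grabber rather than synthesizing them.

```python
# Minimal sketch of a modular image-processing chain in the spirit of
# RealTimeFrame. Class and method names are illustrative assumptions.
from abc import ABC, abstractmethod
import numpy as np

class FrameProcessor(ABC):
    """One user-defined step in the process chain."""
    @abstractmethod
    def process(self, frame: np.ndarray) -> np.ndarray: ...

class Invert(FrameProcessor):
    def process(self, frame: np.ndarray) -> np.ndarray:
        return 255 - frame  # invert an 8-bit grayscale frame

class Threshold(FrameProcessor):
    def __init__(self, level: int = 128):
        self.level = level
    def process(self, frame: np.ndarray) -> np.ndarray:
        return np.where(frame >= self.level, 255, 0).astype(frame.dtype)

class ProcessChain:
    """Applies an ordered, arbitrary list of processors to each frame."""
    def __init__(self, steps: list[FrameProcessor]):
        self.steps = steps
    def run(self, frame: np.ndarray) -> np.ndarray:
        for step in self.steps:
            frame = step.process(frame)
        return frame

# Usage: feed a frame through the chain (synthetic here; in the real
# system frames would arrive continuously from the frame grabber card).
chain = ProcessChain([Invert(), Threshold(100)])
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
out = chain.run(frame)
```

Because each step implements the same one-method interface, new processing routines can be appended or reordered without touching the framework itself, which is the property the dynamic-library design aims for.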
ABSTRACT
Objectives: The Prescribing Information System (PIS) datamart, hosted by NHS National Services Scotland, receives around 90 million electronic prescription messages per year from GP practices across Scotland. Prescription messages contain information including drug name, quantity, and strength stored as coded, machine-readable data, while prescription dose instructions are unstructured free text that is difficult to interpret and analyse in volume.
The aim was to use Natural Language Processing (NLP) to extract drug dose amount, unit, and frequency metadata from the freely typed text of dose instructions, to support calculation of the intended number of days of treatment. This in turn allows comparison with actual prescription frequency, assessment of treatment adherence, and analysis of the impact on prescribing safety and effectiveness.
Approach: An NLP algorithm was developed using the Ciao implementation of Prolog to extract dose amount, unit, and frequency metadata from dose instructions held in the PIS datamart for drugs used in the treatment of gastrointestinal, cardiovascular, and respiratory disease. Accuracy estimates were obtained by randomly sampling 0.1% of the distinct dose instructions from source records and comparing them with the metadata extracted by the algorithm; an iterative approach was used to modify the algorithm to increase accuracy and coverage.
Results: The NLP algorithm was applied to 39,943,465 prescription instructions issued in 2014, consisting of 575,340 distinct dose instructions. For drugs used in the gastrointestinal, cardiovascular, and respiratory systems (i.e. chapters 1, 2, and 3 of the British National Formulary (BNF)), the NLP algorithm successfully extracted drug dose amount, unit, and frequency metadata from 95.1%, 98.5%, and 97.4% of prescriptions respectively. However, instructions containing terms such as 'as directed' or 'as required' reduce the usability of the metadata by making it difficult to calculate the total dose intended for a specific time period: 7.9%, 0.9%, and 27.9% of dose instructions contained terms meaning 'as required', while 3.2%, 3.7%, and 4.0% contained terms meaning 'as directed', for drugs in BNF chapters 1, 2, and 3 respectively.
Conclusion: The NLP algorithm developed can extract dose amount, unit, and frequency metadata from the text of prescriptions issued to treat a wide range of conditions, and this information may be used to support calculation of treatment durations, medicines adherence, and cumulative drug exposure. The presence of terms such as 'as required' and 'as directed' has a negative impact on the usability of the metadata, and further work is required to determine how strongly this affects calculation of treatment durations and cumulative drug exposure.
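A rough illustration of the kind of extraction the Approach describes is sketched below in Python. The study's algorithm was implemented in the Ciao implementation of Prolog, so the patterns, vocabularies, and function names here are simplified assumptions rather than the published rule set.

```python
# Illustrative sketch of dose-instruction metadata extraction; patterns
# and vocabularies are simplified assumptions, not the study's rules.
import re

FREQ_WORDS = {
    "once daily": 1, "twice daily": 2, "three times daily": 3,
    "every morning": 1, "every night": 1,
}

def parse_dose_instruction(text: str) -> dict:
    """Extract amount, unit, daily frequency, and 'as required/directed' flags."""
    t = text.lower()
    result = {"amount": None, "unit": None, "per_day": None,
              "as_required": bool(re.search(r"\bas required\b|\bprn\b", t)),
              "as_directed": bool(re.search(r"\bas directed\b", t))}
    m = re.search(r"(\d+(?:\.\d+)?)\s*(tablet|capsule|puff|ml|mg)s?\b", t)
    if m:
        result["amount"] = float(m.group(1))
        result["unit"] = m.group(2)
    for phrase, n in FREQ_WORDS.items():
        if phrase in t:
            result["per_day"] = n
            break
    return result

def intended_days(quantity: float, parsed: dict) -> float | None:
    """Intended days of treatment = quantity / (amount per dose * doses per day);
    undefined when the instruction is 'as required' or 'as directed'."""
    if parsed["as_required"] or parsed["as_directed"]:
        return None
    if parsed["amount"] and parsed["per_day"]:
        return quantity / (parsed["amount"] * parsed["per_day"])
    return None

# Example: "Take 2 tablets twice daily" with 56 tablets dispensed -> 14 days.
p = parse_dose_instruction("Take 2 tablets twice daily")
print(p, intended_days(56, p))
```

The `intended_days` step also shows why 'as required' and 'as directed' instructions degrade usability: without a fixed daily frequency, the intended treatment duration cannot be computed from the dispensed quantity.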
Part I: Introduction to Visual Analytics in Data Science -- 1. Introduction to Visual Analytics by an Example -- 2. General Concepts -- 3. Principles of Interactive Visualisation -- 4. Computational Techniques in Visual Analytics -- Part II: Visual Analytics along the Data Science Workflow -- 5. Visual Analytics for Investigating and Processing Data -- 6. Visual Analytics for Understanding Multiple Attributes -- 7. Visual Analytics for Understanding Relationships between Entities -- 8. Visual Analytics for Understanding Temporal Distributions and Variations -- 9. Visual Analytics for Understanding Spatial Distributions and Spatial Variation -- 10. Visual Analytics for Understanding Phenomena in Space and Time -- 11. Visual Analytics for Understanding Texts -- 12. Visual Analytics for Understanding Images and Video -- 13. Computational Modelling with Visual Analytics -- 14. Conclusion.
This study examines how credibility affects the way people process information and how they subsequently perceive risk. Three conceptual areas are brought together in this analysis: the psychometric model of risk perception, Eagly and Chaiken's heuristic-systematic information processing model, and Meyer's credibility index. Data come from a study of risk communication in the context of state health department investigations of suspected cancer clusters (five cases, N = 696). Credibility is assessed for three information sources: state health departments, citizen groups, and the industries involved in each case. Higher credibility for industry and the state directly predicts lower risk perception, whereas higher credibility for citizen groups predicts greater risk perception. A path model shows that perceiving high credibility for industry and the state, and low credibility for citizen groups, promotes heuristic processing, which in turn is a strong predictor of lower risk perception. Conversely, perceiving industry and the state to have low credibility promotes greater systematic processing, which consistently leads to perception of greater risk. Between one-fifth and one-third of the effect of credibility on risk perception is shown to be transmitted indirectly through information processing.
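The closing claim that one-fifth to one-third of credibility's effect travels indirectly follows standard path-model arithmetic: an indirect effect is the product of the coefficients along the path, and its share of the total effect is the ratio of indirect to total. A minimal sketch with hypothetical coefficients (not the study's estimates):

```python
# Hypothetical path coefficients, illustrative only (not the study's values):
# credibility -> heuristic processing (a), heuristic processing -> risk (b),
# credibility -> risk directly (c_direct).
a, b, c_direct = 0.40, -0.35, -0.45

indirect = a * b                    # effect routed through processing style
total = c_direct + indirect         # total effect of credibility on risk
share_indirect = indirect / total   # fraction transmitted indirectly

print(f"indirect = {indirect:.3f}, total = {total:.3f}, "
      f"indirect share = {share_indirect:.1%}")  # ~24%, within 1/5 to 1/3
```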
'Machine-readable sources' refers to all representations of a source on a computer that are used in source-oriented data processing. This covers data containing significant portions of text together with coded information, as available in the databases of demographic research. It also includes material consisting almost entirely of text, albeit of very small, roughly structured units, of the kind found in the databases of microanalytic research. Finally, it covers more or less uncoded representations of continuous machine-readable text, such as typically result from computer-assisted editing techniques. Three aims are pursued: (1) to present a proposal for a general standard for describing and defining the input formats used in processing machine-readable historical materials; (2) to propose a general definition of a machine-readable format in which social data are formally described for an exchange programme; (3) to provide a working paper for work on the software system known as CLIO/C. (Translated by KW.)
The ubiquity of personal data online triggers the need to regulate the processing, storage, and deletion of such data. Within the European Union, the processing of personal data and its transfer (even beyond the Union's borders) are subject to the General Data Protection Regulation (GDPR) and its jurisprudence. Article 45(1) of the GDPR empowers the European Commission to issue an Adequacy Decision, which compares the regime of a third country against the GDPR and its jurisprudence; its issuance allows seamless transfer of personal data to that third country. This provision raises the question of whether the Indonesian legal landscape on data protection meets the yardsticks contained in the GDPR. This editorial introduces the regimes in the European Union and Indonesia and analyses the pertinent Indonesian legislation on the subject in light of the GDPR and the jurisprudence of the Court of Justice of the European Union on the matter. It concludes that Indonesia's present data protection regime falls far short, notably due to its narrow scope of application, its weak watchdog authority, and its feeble punishments for violations.