Data recording, storage and display
In: Military technology: Miltech, Band 34, Heft 11, S. 69-75
ISSN: 0722-3226
World Affairs Online
In: Iraqi journal of science, S. 2492-2511
ISSN: 0067-2904
Cloud computing is a compelling technology that gives customers convenient, on-demand network access to resources with minimal management effort and minimal interaction with cloud providers. Security has emerged as a serious concern, particularly in cloud computing, where data is stored on a third-party storage system and accessed via the Internet. It is critical to ensure that data is accessible only to the appropriate individuals and that it does not persist in third-party locations after deletion. Because third-party services frequently make backup copies of uploaded data for security reasons, removing the data the owner submits does not guarantee its removal from the cloud. As cloud data storage has grown in popularity, the problem of assured deletion has become pressing. Several schemes to address the assured-deletion problem have been proposed over the last few years. The proposed solutions deal with scaling overhead, trusted third parties, delays, single points of failure, and other inefficiencies; some give customers verifiable proof of deletion from cloud service providers. This article focuses on how cloud storage clients can be confident that data deleted from the cloud cannot be recovered. It discusses the practice of secure deletion and surveys the methods currently used to assure the deletion of data for cloud entities such as cloud service providers, data owners, and cloud users. The paper then analyzes these techniques, weighing the pros and cons of each and the problems they solve. Finally, the paper identifies some future directions for the development of assured deletion in cloud storage.
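The abstract surveys assured-deletion techniques without committing to one; a common building block in this literature is crypto-erasure, where data is stored only in encrypted form and deletion is "assured" by destroying the key. A minimal sketch of that idea in Python, assuming the `cryptography` package is available; the class name and storage layout are illustrative, not any scheme surveyed in the paper:

```python
from cryptography.fernet import Fernet

class CryptoErasureStore:
    """Toy assured-deletion store: ciphertext may live on (and be backed
    up by) an untrusted cloud; the key stays with the owner/key manager."""
    def __init__(self):
        self._cloud = {}   # what the third party sees: ciphertext only
        self._keys = {}    # owner-side key store

    def put(self, name: str, data: bytes) -> None:
        key = Fernet.generate_key()
        self._keys[name] = key
        self._cloud[name] = Fernet(key).encrypt(data)

    def get(self, name: str) -> bytes:
        return Fernet(self._keys[name]).decrypt(self._cloud[name])

    def assured_delete(self, name: str) -> None:
        # Destroying the key renders every copy of the ciphertext,
        # including cloud backups the owner never sees, unrecoverable.
        del self._keys[name]
```

The design choice this illustrates is that the owner never has to trust the provider to actually erase anything: only key destruction, which happens owner-side, matters.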
In: Historical social research: HSR-Retrospective (HSR-Retro) = Historische Sozialforschung, Band 29, Heft 4, S. 196-219
ISSN: 2366-6846
The data store "Gesellschaftliches Arbeitsvermögen" (DS GAV), compiled by the GDR's administrative apparatus, is a unique data source for the 1980s, comprising a wealth of entries on sociodemographic and socioeconomic characteristics, qualifications, and employment for more than 7 million former GDR citizens. The DS GAV was compiled to make the management of human capital in the GDR's centrally planned economy more efficient, that is, to guarantee controlled labour allocation and personnel turnover between economic sectors and enterprises; the potential of this project, however, was most likely never adequately exploited. Fifteen years after the collapse of East German state socialism, the DS GAV offers quantitative historical social research a source on the way towards a sociology of GDR society. A Jena research project on GDR elites and processes of social differentiation, part of Collaborative Research Centre (Sonderforschungsbereich) 850, draws on this data source among others. The authors discuss the historical background, data preparation, the exploration phase, and first sociological analyses based on the DS GAV.
In: The journal of electronic defense: JED, Band 19, Heft 8, S. 44-51
ISSN: 0192-429X
Increases in processing power have enabled higher resolution and larger ensembles for both weather and climate simulations over the last decades. As a result, the amount of model input and output data that needs to be stored and processed has grown significantly. Supercomputing storage capacities, however, have not grown at the same rate as processing power, and data storage and processing today generate significant costs for weather and climate modelling centres. This deliverable suggests three new compression methods to reduce the data volume of model output from ensemble simulations. With the new methods, the number of bits used to store individual variables can be reduced, shrinking overall data volume while keeping the loss of information minimal. Two of the new methods rely on similarities between ensemble members in ensemble forecasts: numerical precision is high while ensemble spread is small and decreases as ensemble spread grows throughout the forecast. The methods are tested on model output of operational ensemble forecasts at ECMWF. ; ESiWACE has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 675191
BASE
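The abstract describes storing fewer bits per variable, with precision tied to ensemble spread, but the concrete methods are not given here. A minimal sketch of the underlying idea, truncating float32 mantissa bits with NumPy so the zeroed tail compresses away losslessly; the function names and the spread heuristic are illustrative assumptions, not the deliverable's actual algorithms:

```python
import numpy as np

def truncate_mantissa(field: np.ndarray, keepbits: int) -> np.ndarray:
    """Keep only the leading `keepbits` mantissa bits of a float32 field;
    the zeroed trailing bits then compress to almost nothing."""
    assert field.dtype == np.float32 and 0 <= keepbits <= 23
    mask = np.uint32((0xFFFFFFFF << (23 - keepbits)) & 0xFFFFFFFF)
    return (field.view(np.uint32) & mask).view(np.float32)

def keepbits_from_spread(ensemble: np.ndarray, low: int = 5, high: int = 16) -> int:
    """Illustrative heuristic: keep fewer bits where members already
    disagree, since spread bounds the information worth preserving."""
    rel = np.std(ensemble, axis=0).mean() / (np.abs(ensemble).mean() + 1e-30)
    return int(np.clip(high + np.log2(rel + 1e-30), low, high))
```

Here `ensemble` is assumed to carry the member dimension first; truncated fields would then be handed to a standard lossless compressor.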
In: The journal of the Royal Anthropological Institute, Band 27, Heft S1, S. 76-94
ISSN: 1467-9655
Abandoned after the Cold War, nuclear bunkers around the world have found afterlives as ultra‐secure data storage sites for cloud computing providers. The operators of these bunkered data centres capitalize on the spatial, temporal, and material security affordances of their subterranean fortresses, promoting them as 'future‐proof' cloud storage solutions. Taking the concept of 'future‐proofing' as its entry‐point, this essay explores how data centre professionals work with the imaginative properties of the bunker to configure data as an object to be securitized. The essay takes the form of an ethnographic tour through a UK‐based data bunker. During this tour, threatening data futures and fragile data materialities are conjured in order to secure the conditions of possibility for the bunkered data centre's commercial continuity. Future‐proofing, it is argued, provides a conceptual opening onto the entangled imperatives of security and marketing that drive the commercial data storage industry.
The use of WhatsApp in health care has increased, especially since the COVID-19 pandemic, but there is a need to safeguard electronic patient information when incorporating it into a medical record, be it electronic or paper-based. The aim of this study was to review the literature on how clinicians who use WhatsApp in clinical practice keep medical records of the content of WhatsApp messages and how they store WhatsApp messages and/or attachments. A scoping review of nine databases sought evidence of record keeping or data storage related to the use of WhatsApp in clinical practice up to 31 December 2020. Sixteen of 346 papers met the study criteria. Most clinicians were aware that they must comply with statutory reporting requirements in keeping medical records of all electronic communications. However, this study showed a general lack of awareness of, or concern about, flouting existing privacy and security legislation. No clear mechanisms for record keeping or data storage of WhatsApp content were provided. In the absence of clear guidelines, problematic practices and workarounds have been created, increasing legal, regulatory and ethical concerns. There is a need to raise awareness of the problems clinicians face in meeting these obligations and to urgently provide viable guidance.
BASE
In: The journal of electronic defense: JED, Band 17, Heft 11, S. 56-59
ISSN: 0192-429X
This work was presented at the 13th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM '10), which took place October 17-21, 2010, in Bodrum (Turkey). Web site: http://www.cmpe.boun.edu.tr/mswim2010/ ; This paper presents a novel framework for Data-Centric Storage in a wireless sensor and actor network that enables the use of a randomly selected set of data replication nodes which also changes over time. This reduces average network traffic and energy consumption by adapting the number of replicas to applications' traffic, while balancing energy burdens by varying the replicas' location. To that end we propose and validate a simple model to determine the optimal number of replicas, in terms of minimizing average traffic/energy consumption, from the measured applications' production/consumption traffic. Simple protocols/mechanisms are proposed to decide when the current set of replication nodes should be changed, to enable new applications and sensor nodes to efficiently bootstrap into a working sensor network, to recover from failing nodes, and to adapt to changing conditions. Extensive simulations demonstrate that our approach can extend a sensor network's lifetime by at least 60%, and by up to a factor of 10 depending on the lifetime criterion being considered. ; This work was partially supported by NSF Award CNS-0509355, AFOSR Award FA9550-07-1-0428, IMDEA Networks Madrid, the Spanish government through the T2C2 Project TIN2008-06739-C04-01 and the regional government of Madrid through the MEDIANET project. ; Published
BASE
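The abstract refers to a model for choosing the optimal number of replicas from measured production/consumption traffic, without stating it. Purely as a back-of-envelope sketch (not the paper's actual model): writing to k replicas costs roughly in proportion to k, while the distance to the nearest of k randomly placed replicas in a planar network shrinks roughly as 1/sqrt(k), so a simple trade-off can be minimized numerically:

```python
import math

def optimal_replicas(update_rate: float, query_rate: float,
                     c_update: float = 1.0, c_query: float = 1.0,
                     k_max: int = 64) -> int:
    """Minimise T(k) = c_update*U*k + c_query*Q/sqrt(k) over k = 1..k_max."""
    cost = lambda k: (c_update * update_rate * k
                      + c_query * query_rate / math.sqrt(k))
    return min(range(1, k_max + 1), key=cost)

# E.g. a read-heavy application with 10 queries per update:
# optimal_replicas(update_rate=1.0, query_rate=10.0) -> 3
```

The rate constants `c_update` and `c_query` stand in for the per-hop traffic/energy costs the paper measures; their calibration is exactly what such a model would take from observed traffic.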
DICOM is a global information technology standard for electronic medical images. Metadata and image data are stored in a single file with the .dcm extension. A single DICOM file ranges in size from megabytes to gigabytes, depending on the study. PACS uses an RDBMS to store and retrieve DICOM data. Replacing the RDBMS with big data technologies can help handle DICOM data efficiently. All Indian government hospitals together store around 1400 PB of data, so DICOM qualifies as a big data problem. Storing and retrieving such big data from large repositories is highly complex and challenging. Applying big data techniques to DICOM data can help save patients' lives and improve areas such as research, treatment methods, patient similarity search, disease progression monitoring, clinical follow-up, case studies, training and learning, and expertise sharing, and it helps uncover patterns in medical image archives in a secure way. This paper presents an extensive survey of selected big data technologies for DICOM data storage and retrieval, and also analyses the performance of Apache Pig, Hive and Spark when storing and retrieving DICOM data. Keywords: big data, RDBMS, PACS, DICOM, storage
BASE
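As a concrete illustration of the kind of pipeline such a survey evaluates, here is a minimal sketch that indexes DICOM headers into a Spark-queryable Parquet store. It assumes the `pydicom` and `pyspark` packages; the paths and column selection are hypothetical, not taken from the paper:

```python
from pathlib import Path
import pydicom                              # pip install pydicom
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dicom-index").getOrCreate()

def header_row(path: Path):
    # stop_before_pixels skips the bulky image data: metadata only
    ds = pydicom.dcmread(str(path), stop_before_pixels=True)
    return (str(path), str(ds.get("PatientID", "")),
            str(ds.get("Modality", "")), str(ds.get("StudyDate", "")))

rows = [header_row(p) for p in Path("/data/dicom").rglob("*.dcm")]
df = spark.createDataFrame(rows, ["path", "patient_id", "modality", "study_date"])
df.write.mode("overwrite").parquet("/data/dicom_index")  # queryable index
```

Queries like patient similarity search would then run against the small Parquet index, pulling pixel data from the original .dcm files only for the matches.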
Data-Centric Storage (DCS) has emerged as a novel information storage and delivery mechanism for Wireless Sensor and Actor Networks in which a rendezvous node (home node) is selected to store and serve all the information of a particular application. However, DCS was not designed to provide long-term data availability. In this paper we present a Dynamic DCS solution to enable a long-term storage system. Dynamic DCS periodically changes home nodes over time based on fixed-duration periods called epochs. This makes it possible to issue temporal queries to previous home nodes in order to retrieve information from the past. We evaluate our proposal using extensive simulations and show that Dynamic DCS keeps sensor events available for at least 85% of the maximum lifetime provided by an optimal (but impractical) solution. Finally, we show that Dynamic DCS can easily adapt its storage performance to the requirements of an application simply by tuning the epoch duration. ; The research leading to these results has been partially funded by the Spanish MEC under the CRAMNET project (TEC2012-38362-C03-01) and the eeCONTENT project (TEC2011-29688-C02-02), by the General Directorate of Universities and Research of the Regional Government of Madrid under the MEDIANET project (S2009/TIC-1468), and by the INDECT project (Ref. 218086) of the 7th EU Framework Programme. In addition, the work of G. de Veciana was supported by the National Science Foundation under Award CNS-0915928. ; Published
BASE
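The abstract does not spell out how home nodes rotate per epoch. One natural realisation, sketched below under that assumption, hashes the application key together with the epoch number, so any node can compute the current home node, and also past epochs' home nodes for temporal queries:

```python
import hashlib
import time

EPOCH_SECONDS = 3600                     # tunable epoch duration

def current_epoch(now: float | None = None) -> int:
    return int((now if now is not None else time.time()) // EPOCH_SECONDS)

def home_node(app_key: str, epoch: int, nodes: list[str]) -> str:
    """Deterministically map (application, epoch) to a home node, so every
    node can locate the current home node and any past epoch's home node."""
    digest = hashlib.sha256(f"{app_key}:{epoch}".encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# A temporal query for data stored two epochs ago:
# home_node("temperature", current_epoch() - 2, nodes)
```

Tuning `EPOCH_SECONDS` corresponds directly to the abstract's point that storage performance adapts to application requirements via the epoch duration.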
The complexity and scale of today's cloud storage systems are growing fast. In response to these challenges, Software-Defined Storage (SDS) has recently become a prime candidate to simplify storage management in the cloud. This article presents IOStack, the first SDS architecture for object stores (OpenStack Swift). At the control plane, SDS services are provisioned to tenants according to a set of policies managed via a high-level DSL. Policies may target storage automation and/or specific SLA objectives. At the data plane, policies define the enforcement of SDS services, namely filters, on a tenant's requests. Moreover, IOStack is a framework for building a variety of filters, ranging from general-purpose computations close to the data to specialized data management mechanisms. Our experiments illustrate that IOStack enables easy and effective policy-based provisioning, which can significantly improve the operation of a multi-tenant object store. ; This work has been funded by the European Union through project H2020 "IOStack: Software-Defined Storage for Big Data" (644182) and by the Spanish Ministry of Science and Innovation through project "Servicios Cloud y Redes Comunitarias" (TIN-2013-47245-C2-2-R). ; Peer Reviewed ; Postprint (author's final draft)
BASE
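The abstract mentions a high-level DSL for tenant policies and data-plane filters without showing its syntax. Purely as an illustration of the control-plane/data-plane split it describes (not IOStack's real DSL or API), a policy table and its enforcement lookup might look like:

```python
# Hypothetical policies in the spirit of the abstract: each names a
# tenant, a condition on the request, and an SDS filter to enforce.
policies = [
    {"tenant": "T1", "when": lambda req: req["container"] == "docs",
     "filter": "compression"},
    {"tenant": "T2", "when": lambda req: req["op"] == "GET",
     "filter": "caching"},
]

def filters_for(tenant: str, request: dict) -> list[str]:
    """Data-plane side: pick which filters to run on this request."""
    return [p["filter"] for p in policies
            if p["tenant"] == tenant and p["when"](request)]

# filters_for("T1", {"op": "PUT", "container": "docs"}) -> ["compression"]
```

In the architecture the abstract describes, the control plane would compile DSL statements into entries like these, while the data plane only evaluates them per request.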
This paper deals with aspects of data sharing in a transnational environment. Comprehensive border and coastal surveillance and control are key components of the EU's overall protection. At present, border surveillance is often accomplished through separate, stovepiped systems. Data that might be of interest not only to one but to many surveillance units from different Member States, especially in regions where two or more Member States share a border with a third, non-EU country, cannot be shared, and the capability to secure the border is therefore limited. To overcome these limitations, interoperable transnational surveillance systems are needed, capable of including all relevant data sources and sharing them among border-related law enforcement bodies. As the output of sensor and exploitation systems must be accessible to different users, standardized data formats are needed. Commercial standards were often not defined for the surveillance domain and would therefore need to be adapted. Standards defined within the military domain are of interest because they already take into account the sharing of products in a heterogeneous security environment. Additionally, the cooperation of military and civil surveillance and reconnaissance systems will be of greater importance in the future.
BASE
Part 2: Storing Data Smartly (Data storage) ; International audience ; With the increased use of the Internet, governments and large companies store and share massive amounts of personal data in a way that leaves no room for transparency. When users need to accomplish a simple task such as applying for college or a driving licence, they must visit many institutions and organizations, leaving private data in many places. The same happens when using the Internet. The privacy issues raised by these centralized architectures, along with recent developments in the area of serverless applications, call for a decentralized private data layer under user control. We introduce the Private Data System (PDS), a distributed approach that enables self-sovereign storage and sharing of private data. The system is composed of nodes spread across the entire Internet managing local key-value databases. Communication between nodes is achieved through executable choreographies, which can prevent information leakage when executing across different organizations with different regulations in place. The user has full control over their private data and is able to share and revoke access to organizations at any time. Moreover, thanks to the system design, updates are propagated instantly to all parties that have access to the data. Specifically, processing organizations may retrieve and process the shared information but are not allowed under any circumstances to store it long term. PDS offers an alternative to systems that aim to ensure self-sovereignty of specific types of data through blockchain-inspired techniques but face various problems, such as low performance. Both approaches propose a distributed database, but with different characteristics. While blockchain-based systems are built to solve consensus problems, PDS's purpose is to address the self-sovereignty requirements raised by privacy laws, rules and principles.
BASE
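To make the grant/revoke semantics described above concrete, here is a toy single-node sketch of that access-control behaviour (instant revocation, updates visible to all current grantees). Every name is hypothetical; the real PDS is distributed and enforces the no-long-term-storage rule through executable choreographies, which this deliberately omits:

```python
from collections import defaultdict

class PrivateDataNode:
    """Toy local key-value store with per-organisation access grants."""
    def __init__(self):
        self._data = {}
        self._grants = defaultdict(set)      # key -> granted organisations

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value              # update is immediately visible
                                             # to every current grantee
    def grant(self, key: str, org: str) -> None:
        self._grants[key].add(org)

    def revoke(self, key: str, org: str) -> None:
        self._grants[key].discard(org)       # access ends immediately

    def read(self, key: str, org: str) -> bytes:
        # Processors may read and process, but must not persist the value;
        # in PDS that obligation is enforced by the choreographies.
        if org not in self._grants[key]:
            raise PermissionError(f"{org} has no grant for '{key}'")
        return self._data[key]
```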