Search results
6691 results
SSRN
Multi-Factor Timing with Deep Learning
SSRN
Deep Learning, Text, and Patent Valuation
SSRN
Working paper
Deep Learning in Characteristics-Sorted Factor Models
In: Journal of Financial and Quantitative Analysis, Forthcoming
SSRN
Working paper
Statistical guarantees for sparse deep learning
In: Advances in statistical analysis: AStA, Vol. 108, Issue 2, pp. 231-258
ISSN: 1863-818X
Abstract: Neural networks are becoming increasingly popular in applications, but our mathematical understanding of their potential and limitations is still limited. In this paper, we further this understanding by developing statistical guarantees for sparse deep learning. In contrast to previous work, we consider different types of sparsity, such as few active connections, few active nodes, and other norm-based types of sparsity. Moreover, our theories cover important aspects that previous theories have neglected, such as multiple outputs, regularization, and $\ell_2$-loss. The guarantees have a mild dependence on network widths and depths, which means that they support the application of sparse but wide and deep networks from a statistical perspective. Some of the concepts and tools that we use in our derivations are uncommon in deep learning and, hence, might be of additional interest.
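The "few active connections" notion of sparsity mentioned in the abstract is commonly induced by an l1 penalty on the weights, whose proximal update is soft-thresholding. A minimal, purely illustrative sketch (the weights and threshold below are invented for illustration, not taken from the paper):

```python
# Soft-thresholding: the proximal operator of lam * |w|. Applying it to every
# connection weight drives small weights exactly to zero, yielding the
# "few active connections" form of sparsity.

def soft_threshold(w, lam):
    """Shrink w toward zero by lam; set it to exactly zero if |w| <= lam."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def sparsify(weights, lam):
    """Apply the l1 proximal step to every connection weight."""
    return [soft_threshold(w, lam) for w in weights]

weights = [2.0, -0.25, 1.0, -3.0, 0.125]   # made-up connection weights
sparse = sparsify(weights, lam=0.5)        # -> [1.5, 0.0, 0.5, -2.5, 0.0]
active = sum(1 for w in sparse if w != 0.0)  # 3 active connections remain
```

In training, this step would be interleaved with gradient updates; the point here is only that the penalty zeroes out weak connections rather than merely shrinking them.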
Machine understanding and deep learning representation
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Vol. 201, Issue 2
ISSN: 1573-0964
Abstract: Practical ability manifested through robust and reliable task performance, as well as information relevance and well-structured representation, are key factors indicative of understanding in the philosophical literature. We explore these factors in the context of deep learning, identifying prominent patterns in how the results of these algorithms represent information. While the estimation applications of modern neural networks do not qualify as the mental activity of persons, we argue that coupling analyses from philosophical accounts with the empirical and theoretical basis for identifying these factors in deep learning representations provides a framework for discussing and critically evaluating potential machine understanding given the continually improving task performance enabled by such algorithms.
Deep Learning in Cybersecurity: Challenges and Approaches
In: International journal of cyber warfare and terrorism: IJCWT; an official publication of the Information Resources Management Association, Vol. 10, Issue 2, pp. 82-105
ISSN: 1947-3443
This article reviews and summarizes emerging deep learning (DL) approaches in cybersecurity. It provides a structured and comprehensive overview of cyberattack detection methods, categorizes existing DL-based detection methods, and investigates attacks on deep learning models based on generative adversarial networks (GANs). The datasets that researchers use to evaluate the efficiency of proposed cyberattack detection methods are discussed, a statistical analysis of papers published on DL applications in cybersecurity over the years is conducted, and existing commercial cybersecurity solutions built on deep learning are described.
Deep Learning Techniques in Radar Emitter Identification
In: Defence science journal: DSJ, Vol. 73, Issue 5, pp. 551-563
ISSN: 0011-748X
In the field of electronic warfare (EW), one of the crucial roles of electronic intelligence is the identification of radar signals. In an operational environment, it is essential to identify radar emitters as friend or foe so that appropriate countermeasures can be taken against them. With the electromagnetic environment becoming increasingly complex and signal features increasingly diverse, radar emitter identification with high recognition accuracy has become a significantly challenging task, and traditional identification methods have shown limitations in this complex electromagnetic scenario. With the emergence of artificial neural networks, notably deep learning, several neural-network-based radar classification and identification methods have appeared, and machine learning and deep learning algorithms are now frequently utilized to extract various types of information from radar signals more accurately and robustly. This paper illustrates the use of Deep Neural Networks (DNN) for emitter classification and identification in radar applications. Since deep learning approaches can accurately classify complicated patterns in radar signals, they have demonstrated significant promise for identifying radar emitters. By offering a thorough literature analysis of deep-learning-based methodologies, the study intends to help researchers and practitioners better understand the application of deep learning techniques to the classification and identification of radar emitters, and it demonstrates that DNNs can be used successfully in such applications.
Initial Margin Simulation with Deep Learning
SSRN
Working paper
Trustworthy AI: the Deep Learning Perspective
The impact of AI, and in particular of deep learning, on industry has been so disruptive that it has given rise to a new wave of research and applications that goes under the name of Industry 4.0. This expression refers to the application of AI and cognitive computing to enable effective data exchange and processing in manufacturing technologies, services and transport, laying the foundation of what is commonly known as the fourth industrial revolution. As a consequence, today's development trend is increasingly focused on AI-based, data-driven approaches, mainly because leveraging users' data (such as location, action patterns, social information, etc.) allows applications to adapt to them, enhancing the user experience. To this aim, tools like automatic image tagging (e.g. those based on face recognition), voice control, personalised advertising, etc. process enormous amounts of data (often remotely, due to the huge computational effort required), too often rich in sensitive information. Artificial intelligence has thus proven so effective that today it is increasingly being used in critical domains such as facial recognition, biometric verification (e.g. fingerprints), autonomous driving, etc. Although this opens unprecedented scenarios, it is important to note that its misuse (malicious or not) can lead to unintended consequences, such as unethical or unfair use (e.g. discrimination on the basis of ethnicity or gender) or harm to people's privacy. Indeed, if on the one hand the industry is pushing toward a massive use of AI-enhanced solutions, on the other it is not adequately supporting research into an end-to-end understanding of the capabilities and vulnerabilities of such systems.
The consequences can attract strongly negative media attention, especially in borderline domains such as those related to subjects' privacy or to ethics and fairness, like user profiling, fake news generation, the reliability of autonomous driving systems, etc. We strongly believe that, being just a (very powerful) tool, AI is not to blame for its misuse. Nonetheless, we claim that in order to develop a more ethical, fair and secure use of artificial intelligence, all the involved actors (first and foremost users, developers and legislators) must have a very clear idea about some critical questions, such as "what is AI?", "what are the ethical implications of its improper usage?", "what are its capabilities and limits?", "is it safe to use AI in critical domains?", and so on. Moreover, since AI is very likely to be an important part of our everyday life in the very near future, it is crucial to build trustworthy AI systems. Therefore, the aim of this thesis is to make a first step towards addressing the crucial need to raise awareness about the reproducibility, security and fairness threats associated with AI systems, from a technical perspective as well as from a governance and ethical point of view. Among the several issues that should be faced, in this work we try to address three central points: understanding what "intelligence" means and implies within the context of artificial intelligence; analysing the limitations and weaknesses that might affect an AI-based system, independently of the particular adopted technology or technical solutions; and assessing system behaviour in the case of successful attacks and/or in the presence of degraded environmental conditions.
To this aim, the thesis is divided into three main parts: in the first part we introduce the concept of AI, focusing on deep learning and on some of its most crucial issues, before moving to the ethical implications associated with the notion of "intelligence"; in the second part we focus on the perils associated with the reproducibility of results in deep learning, also showing how proper network design can be used to limit their effects; finally, in the third part we address the implications that AI misuse can cause in a critical domain such as biometrics, proposing some attacks designed for this purpose. The cornerstone of the whole thesis is adversarial perturbations, a term referring to the set of techniques intended to deceive AI systems by injecting a small perturbation (noise, often totally imperceptible to a human being) into the data. The key idea is that, although adversarial perturbations are a considerable concern to domain experts, they also open new possibilities both to foster a fair use of artificial intelligence systems and to better understand the "reasoning" these systems follow in order to reach the solution of a given problem. Results are presented for applications related to critical domains such as medical imaging, facial recognition and biometric verification. However, the concepts and methodologies introduced in this thesis are intended to be general enough to be applied to different real-life applications.
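The adversarial perturbations described above can be made concrete with a toy sketch of the classic fast-gradient-sign idea applied to a linear scorer. Everything below (weights, inputs, epsilon) is invented for illustration and is not taken from the thesis:

```python
# Toy FGSM-style perturbation: nudge each input feature by eps in the
# direction that increases the classifier's loss. For a linear scorer with
# loss -label * score, the gradient w.r.t. input i has sign(-label * w_i).

def predict(weights, x):
    """Linear score: positive -> class +1, negative -> class -1."""
    return sum(w * xi for w, xi in zip(weights, x))

def fgsm_perturb(weights, x, label, eps):
    """Step each input coordinate by eps along the sign of the loss gradient."""
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign(-label * w) for w, xi in zip(weights, x)]

weights = [1.0, -2.0, 0.5]     # made-up model parameters
x = [0.25, -0.125, 0.5]        # scores 0.75, i.e. confidently class +1
x_adv = fgsm_perturb(weights, x, label=1, eps=0.25)
# predict(weights, x_adv) == -0.125: a small, bounded change flips the decision
```

A real attack computes the same sign-of-gradient step through a deep network via backpropagation; the mechanics of deceiving the model with a small, bounded perturbation are identical.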
BASE
Automatic Spoofing Detection Using Deep Learning
In: Global social sciences review: an open access, triple-blind peer review, multidisciplinary journal, Vol. IX, Issue I, pp. 111-333
ISSN: 2616-793X
Deep fakes stand out as one of the most dangerous side effects of artificial intelligence: AI can produce a voice clone of any person that is very difficult to categorize as fake or real. The aim of this research is to equip an automatic speaker verification (ASV) system with a spoofing detection component that can perceive false voices efficiently. The goal is to detect inconspicuous audio elements with maximum precision and to develop a model that automatically extracts audio features, utilizing the ASVspoof 2019 dataset. Hence, the proposed ML-DL SafetyNet model is designed to classify voice speeches from the ASVspoof 2019 dataset as fake or bona fide. The ASVspoof 2019 dataset is divided into two segments, LA and PA. The ML-DL SafetyNet model is centred on two distinct processes: deep learning and machine learning classifiers. Both techniques delivered strong performance, achieving an accuracy of 90%.
A deep-learning-based emergency alert system
Emergency alert systems serve as a critical link in the chain of crisis communication, and they are essential to minimize loss during emergencies. Acts of terrorism and violence, chemical spills, amber alerts, nuclear facility problems, weather-related emergencies, flu pandemics, and other emergencies all require those responsible, such as government officials, building managers, and university administrators, to be able to quickly and reliably distribute emergency information to the public. This paper presents our design of a deep-learning-based emergency warning system. The proposed system is considered suitable for application in existing infrastructure such as closed-circuit television and other monitoring devices. The experimental results show that in most cases, our system immediately detects emergencies such as car accidents and natural disasters.
BASE
Algorithmic Collusion: Insights from Deep Learning
SSRN
Deep Learning for Determinants of Default
SSRN
Working paper
Measuring Name Concentrations through Deep Learning
SSRN