Exploring emergent soundscape profiles from crowdsourced audio data
In: Computers, Environment and Urban Systems, Volume 110, p. 102112
In: European Journal of Futures Research, Volume 10, Issue 1
ISSN: 2195-2248
Abstract: Industry-Academia Collaborations (IAC) produce both academic scientific research results and industrial practitioners' findings and experiences. Both types of knowledge should be gathered, codified, and disseminated efficiently and effectively. This paper investigates a recent (2014–2017) large-scale IAC R&D&I program case (Need for Speed, N4S) from a learning perspective. It was one of the programs in the Finnish SHOK (Strategic Centres of Science, Technology, and Innovation) system. The theoretical bases lie in innovation management, knowledge management, and higher education (university) pedagogy. In the future, IAC projects should become increasingly commonplace, since major innovations are hardly ever achieved in isolation, not even by the largest companies. Both intra-organizational and inter-organizational learning networks are increasingly critical success factors. Collaborative learning capabilities will thus be required more often from all participating parties. Efficient and effective knowledge creation and sharing underpin future core competencies. In this paper, we present and evaluate a collaboratively created and publicly shared digital knowledge repository called "Treasure Chest", produced during our case program. The starting point was a jointly created Strategic Research and Innovation Agenda (SRIA), which defined the main research themes and listed motivating research questions to begin with, i.e., intended learning outcomes (ILO). During the 4-year program, our collaborative industry-academia (I-A) learning process produced a range of theoretical and empirical results, which were iteratively collected and packaged into the Treasure Chest repository. Notably, in addition to traditional research documents, it contained narratives of the industrial learning experiences and more than 100 actionable knowledge items.
In conclusion, our vision of the future is that such transparently shared, ambitious, and versatile outcome goals, combined with a continuous, integrative collection of the results, are key to effective networked I-A collaboration and learning. In that way, N4S largely avoided the common problem of conflicting motives between industrial firms, which seek answers and applied solutions to their immediate practical problems, and academic researchers, who aim at more generalizable knowledge creation and high-quality scientific publications.
There appears to be common agreement that ethical concerns are of high importance for systems equipped with some form of Artificial Intelligence (AI). Demands for ethical AI are voiced from all directions. As a response, in recent years, public bodies, governments, and universities have rushed to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies too are publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance from selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics or, more broadly, on the quality of an AI system, i.e., a maturity model for the development of AI systems.
BASE
Ethical concerns related to systems equipped with Artificial Intelligence (AI) are prompting demands for ethical AI from all directions. As a response, in recent years public bodies, governments, and companies have rushed to provide guidelines and principles for how AI-based systems should be designed and used ethically. We have learned, however, that high-level principles and ethical guidelines cannot easily be converted into actionable advice for industrial organizations that develop AI-based information systems. Maturity models are commonly used in software and systems development companies as a roadmap for improving performance. We argue that they could also be applied in the context of developing ethically aligned AI systems. In this paper, we propose a maturity model for AI ethics and explain how it can be devised using a Design Science Research approach.
BASE