The article highlights misconceptions around the concept of innovativeness and argues for a broader understanding that encompasses both inward- and outward-looking perspectives. It emphasizes the importance of perceived innovativeness as influenced by marketing, innovation culture, leadership, and emotional connection with customers. This customer-centric viewpoint not only offers a fresh lens through which to evaluate firm innovativeness but also underscores its significant implications for innovation management and future research directions.
In: Kunz, Werner; Liu, Raymond (2014): Towards a Differentiated Understanding of Social Media Usage and Participation Benefits - A Cross-Cultural Comparison between Eastern and Western Users, in Bernd Schmitt and Leonard Lee (Eds.), Exploring the Asian Consumer, Society for Consumer Psychology Book Series
In: De Keyser, A. and Kunz, W. H. (2022), "Living and Working with Service Robots: A TCCM Analysis and Considerations for Future Research", Journal of Service Management, Forthcoming.
In: Werner Kunz, Kristina Heinonen, Jos Lemmink (2019): "Future Service Technologies – Is Service Research on Track with Business Reality?", Journal of Services Marketing, Vol. 33, No. 4, 479-487
Digitization, artificial intelligence, and service robots carry serious ethical, privacy, and fairness risks. Using the lens of corporate digital responsibility (CDR), we examine these risks and their mitigation in service firms and make five contributions. First, we show that CDR is critical in service contexts because of the vast streams of customer data involved and digital service technology's omnipresence, opacity, and complexity. Second, we synthesize the ethics, privacy, and fairness literature using the CDR data and technology life-cycle perspective to better understand the nature of these risks in a service context. Third, to provide insights on the origins of these risks, we examine the digital service ecosystem and the related flows of money, service, data, insights, and technologies. Fourth, we deduce that the underlying causes of CDR issues are trade-offs between good CDR practices and organizational objectives (e.g., profit opportunities versus CDR risks) and introduce the CDR calculus to capture this. We also conclude that regulation will need to step in when a firm's CDR calculus becomes so negative that good CDR is unlikely. Finally, we advance a set of strategies, tools, and practices service firms can use to manage these trade-offs and build a strong CDR culture.
Advances in artificial intelligence (AI) are increasingly enabling firms to develop services that utilize autonomous vehicles (AVs). Yet, there are significant psychological barriers to adoption, and insights from extant literature are insufficient to understand customer emotions regarding AV services. To allow for a holistic exploration of customer perspectives, we synthesize multidisciplinary literature to develop the Customer Responses to Unmanned Intelligent-transport Services based on Emotions (CRUISE) framework, which lays the foundation for improved strategizing, targeting, and positioning of AV services. We subsequently provide empirical support for several propositions underpinning the CRUISE framework using representative multinational panel data (N = 27,565) and an implicit association test (N = 300). We discover four distinct customer segments based on their preferred degree of service autonomy and service risk. The segments also differ in terms of the valence and intensity of emotional responses to fully autonomous vehicle services. Additionally, exposure to positive information about AV services negatively correlates with the likelihood of membership in the two most resistant segments. Our contribution to service research is chiefly twofold; we provide: 1) a formal treatise of AV services, emphasizing their uniqueness and breadth of application, and 2) empirically validated managerial directions for effective strategizing based on the CRUISE framework.
In: Osburg, V.; Yoganathan, V.; Kunz, W. H.; Tarba, S. (2022): Can (A)I Give You a Ride? Development and Validation of the CRUISE Framework for Autonomous Vehicle Services, Vol. 25, Forthcoming
The use of digital employees (DEs)—chatbots powered by artificial intelligence (AI)—is becoming increasingly common in the service industry. However, it is unclear whether collaborations between the human employee (HE) and DE can influence customer outcomes, and what the mechanisms behind such outcomes are. This research proposes and tests a theoretical model that explains how the communication of HE-DE collaboration in the form of interdependent behavioral cues can influence customer evaluations of the service they received from such a team. Five experimental studies involving a total of 1,403 participants demonstrate that making HE-DE collaboration visible to customers during the service encounter can reinforce their perception of HE-DE team cohesiveness and service process fluency, driving satisfaction. Coordination cues and team goal cues are two strong stimulants that strengthen such impressions. Further, this research also reveals that HE-DE collaboration (vs. augmentation or substitution) appeals to customers thanks to their perception of a transparent process, which is induced through collaborative cues. This research provides theoretical implications for a transparent collaborative process between HE and DE and practical advice for firms seeking to integrate DEs into their organizations' workflows.
Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.