We investigate optimal group member configurations for producing a maximally accurate group forecast. Our approach accounts for group members who may be biased in their forecasts and/or whose errors correlate with the criterion values being forecast. We show that for large forecasting groups, the diversity of individual forecasts linearly trades off with forecaster accuracy when determining optimal group composition.
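The linear trade-off between diversity and accuracy echoes the diversity prediction theorem for mean forecasts: the squared error of the group mean equals the average individual squared error minus the variance (diversity) of the forecasts. A minimal numerical check of this identity, with made-up forecast values:

```python
# Diversity prediction theorem: for the mean forecast c of individual
# forecasts x_i and true value theta,
#   (c - theta)^2 = mean_i (x_i - theta)^2 - mean_i (x_i - c)^2
# i.e. collective error = average individual error - forecast diversity.

def decompose(forecasts, truth):
    n = len(forecasts)
    c = sum(forecasts) / n  # group (mean) forecast
    collective_error = (c - truth) ** 2
    avg_individual_error = sum((x - truth) ** 2 for x in forecasts) / n
    diversity = sum((x - c) ** 2 for x in forecasts) / n
    return collective_error, avg_individual_error, diversity

# Illustrative forecasts of a quantity whose true value is 50.
ce, ae, div = decompose([48.0, 52.0, 60.0], truth=50.0)
assert abs(ce - (ae - div)) < 1e-9  # the identity holds exactly
```

Because the identity is algebraic, it holds for any forecast set; it makes precise why, holding average individual accuracy fixed, more diverse forecasts yield a smaller collective error.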
Abstract This article comments on the article by Thorn and Schurz in this volume and focuses on what we call the problem of parasitic experts. We argue that both meta-induction and crowd wisdom can be understood as pertaining to absolute reliability rather than comparative optimality, and we suggest that attending to reliability will provide a handle on this problem.
Abstract Meta-induction, in its various forms, is an imitative prediction method, where the prediction methods and the predictions of other agents are imitated to the extent that those methods or agents have proven successful in the past. In past work, Schurz demonstrated the optimality of meta-induction as a method for predicting unknown events and quantities. However, much recent discussion, along with formal and empirical work, on the Wisdom of Crowds has extolled the virtue of diverse and independent judgment as essential to the maintenance of 'wise crowds'. This suggests that meta-inductive prediction methods could undermine the wisdom of the crowd inasmuch as these methods recommend that agents imitate the predictions of other agents. In this article, we evaluate meta-inductive methods with a focus on the impact on a group's performance that may result from including meta-inductivists among its members. In addition to considering cases of global accessibility (i.e., cases where the judgments of all members of the group are available to all of the group's members), we consider cases where agents only have access to the judgments of other agents within their own local neighborhoods.
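The imitative scheme described above can be sketched as a success-weighted aggregator: each agent's current prediction is weighted by its accuracy over past rounds. This is a simplified illustration, not Schurz's exact attractivity-based weighting, and the function and score definitions are my own:

```python
# Toy meta-inductive predictor: forecast a weighted average of other
# agents' predictions, weighting each agent by its past accuracy.
# Simplified sketch only; not Schurz's exact attractivity weighting.

def meta_inductive_prediction(history, current_predictions):
    """history: list of (predictions_dict, outcome) pairs from past rounds;
    current_predictions: dict mapping agent name -> current prediction."""
    scores = {agent: 0.0 for agent in current_predictions}
    for preds, outcome in history:
        for agent, p in preds.items():
            # Accumulate a simple success score: 1 minus absolute error,
            # floored at zero so bad rounds never subtract.
            scores[agent] += max(0.0, 1.0 - abs(p - outcome))
    total = sum(scores.values())
    if total == 0:  # no track record yet: fall back to the plain mean
        return sum(current_predictions.values()) / len(current_predictions)
    return sum(scores[a] * p for a, p in current_predictions.items()) / total

# Agent A was more accurate in the past, so its prediction dominates.
history = [({"A": 0.9, "B": 0.2}, 1.0), ({"A": 0.8, "B": 0.1}, 1.0)]
pred = meta_inductive_prediction(history, {"A": 0.7, "B": 0.4})
```

The sketch also makes the article's worry concrete: if many agents output `meta_inductive_prediction` instead of independent judgments, the diversity that aggregation relies on shrinks.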
Agent-based models have played a prominent role in recent debates about the merits of democracy. In particular, the formal model of Lu Hong and Scott Page and the associated "diversity trumps ability" result has typically been seen to support the epistemic virtues of democracy over epistocracy (i.e., governance by experts). In this paper, we first identify the modeling choices embodied in the original formal model and then critique the application of the Hong-Page results to philosophical debates on the relative merits of democracy. In particular, we argue that the "best-performing agents" in the Hong-Page model should not be interpreted as experts. We next explore a closely related model in which best-performing agents are more plausibly seen as experts and show that the "diversity trumps ability" result fails to hold. However, with changes in other parameters (such as the deliberation dynamic), the "diversity trumps ability" result is restored. The sensitivity of this result to parameter choices illustrates the complexity of the link between formal modeling and more general philosophical claims; we use this debate as a platform for a more general discussion of when and how agent-based models can contribute to philosophical discussions.
Crowd condition monitoring concerns crowd safety and business performance metrics. The research problem to be solved is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, but that also supports social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insight into the crowd condition is desired for quick reactions, and historic crowd condition measurements are desired for profound post-event analysis. This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation that rely on 2.4 GHz signals and their variation in crowds of people; it proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches. Firstly, stationary sensors sensing crowd-centric signal sources. Secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region. Fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals. I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions.
With over 30 meshed stationary sensors in an optimized setup across 6,400 m², I achieved a mean absolute error in crowd density of just 0.0115 people per square meter, which corresponds to a mean relative error below 6% with respect to the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach further, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate that these methods are viable even for detecting low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, achieving a crowd density classification accuracy of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach that makes the system more robust against variations stemming from the background of the people rather than the crowd condition. It uses differential features that take into account the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations.
I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
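The two error metrics that recur in this thesis, mean absolute error and mean relative error of the density estimate, can be made concrete with a minimal sketch. The sample densities below are invented for illustration; they are not the thesis data:

```python
# Mean absolute error (people per square meter) and mean relative error
# for crowd density estimates. Sample values are illustrative only.

def mae(estimates, ground_truth):
    """Mean absolute error between estimated and measured densities."""
    return sum(abs(e - g) for e, g in zip(estimates, ground_truth)) / len(estimates)

def mean_relative_error(estimates, ground_truth):
    """Mean of per-sample absolute errors normalized by the true density."""
    return sum(abs(e - g) / g for e, g in zip(estimates, ground_truth)) / len(estimates)

truth = [0.20, 0.25, 0.30]  # measured densities (people / m^2)
est = [0.21, 0.24, 0.32]    # model estimates
print(round(mae(est, truth), 4))
print(round(mean_relative_error(est, truth), 4))
```

Note that the two metrics can rank estimators differently: MAE weights all samples equally in absolute terms, while the relative error penalizes the same absolute miss more heavily at low densities.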
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. As early as 1907, Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than the estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information on the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others' responses was provided. Although groups are initially "wise," knowledge about the estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The "social influence effect" diminishes the diversity of the crowd without improving its collective error. The "range reduction effect" moves the position of the truth to peripheral regions of the range of estimates, so that the crowd becomes less reliable in providing expertise for external observers. The "confidence effect" boosts individuals' confidence after convergence of their estimates despite a lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
It's an evening network news show. The camera zooms in on Ted Kennedy speaking to an enthusiastic crowd in San Diego, while the voiceover sonorously ends a brief report with, "Can the Senator get the voters to forget about Chappaquiddick?" It is a 30-second news item. Question: have we just seen a piece of nonfiction film or a mini-documentary? The Kennedy staffers will be outraged; others will insist that the coverage merely told it like it is. Yet those same staffers will be pleased several months later when the networks cover the Senator's speech to the Democratic convention by showing wildly cheering partisans, while the cameras pass over the many delegates who are either sitting on their hands or just chatting with each other. Others will claim that the coverage of the speech was visually distorted. They may all be right, but they may not be