Commit c1738a231203b3223c8564af797bee4cebacf939

Authored by Olivier
Exists in master

Added all the new stuff from the Scientific Reports version. Fixed conflicts in intro and abstract.


CHIPLAYpaper/MarketPaper.tex
... ... @@ -135,9 +135,9 @@
135 135  
136 136 \begin{abstract}
137 137 Crowdsourcing with human-computing games is now a well-established approach to help solve difficult computational problems (e.g. Foldit, Phylo). The strategies used to distribute problems among participants are currently limited to (i) delivering the full problem to every single user and asking them to explore the complete search space (e.g. Foldit), or (ii) decomposing the initial problem into smaller sub-problems and aggregating the solutions returned by gamers (e.g. Phylo). The second approach could be used to explore a larger solution space and harness collective intelligence, but interactions between players might lead to groupthink phenomena. Therefore, popular crowdsourcing platforms such as Amazon Mechanical Turk deliberately forbid communication between participants.\\
138   - In this paper, we design a novel multi-player game-with-a-purpose, and analyze the impact of multiple game mechanisms on the performance of the system. We present a highly collaborative human-computing game that uses a market, skills and a challenge system to help the players solve a graph problem.
139   -The results obtained during 12 game sessions of 10 players show that the market helps players to build larger solutions. We also show that a skill system and, to a lesser extent, a challenge system can be used to influence and guide the players towards producing better solutions.\\
140   -Our collaborative game-with-a-purpose is open-source, and aims to serve as an universal platform for further independent studies.
  138 + In this paper, we design a novel multi-player game-with-a-purpose and analyze the impact of multiple game mechanisms on the performance of the system. We present a highly collaborative human-computing game that uses a market, a skill system, and a challenge system to help the players collectively solve a graph problem.
  139 + The results obtained during 12 game sessions of 10 players show that the market helps players build larger solutions. We also show that a skill system and, to a lesser extent, a challenge system can be used to influence and guide the players towards producing better solutions.\\
  140 +Our collaborative game-with-a-purpose is open-source and aims to serve as a universal platform for further independent studies.
141 141 \end{abstract}
142 142  
143 143 \keywords{Game-with-a-purpose; Human computing; Collaboration; Crowdsourcing; Graph problem; Market; Trading game; Skills; Challenges.}
... ... @@ -147,7 +147,7 @@
147 147  
148 148 %Currently, popular crowd-computing platforms such as Amazon Mechanical Turk (AMT) \cite{Buhrmester01012011, Paolacci} or Crowdcrafting \cite{Crowdcrafting} are based on similar divide-and-conquer architectures, where the initial problem is decomposed into smaller sub-tasks that are distributed to individual workers and then aggregated to build a solution. In particular, these systems prevent any interaction between workers in order to prevent groupthink phenomena and bias in the solution \cite{Lorenz:2011aa}.
149 149  
150   -Currently, crowd-computing approaches make use of popular platforms such as Amazon Mechanical Turk (AMT) \cite{Buhrmester01012011, Paolacci} or Crowdcrafting \cite{Crowdcrafting}. The initial problem is decomposed into smaller sub-tasks that are distributed to individual workers and then aggregated to build a solution. In particular, these systems prevent any interaction between workers in order to prevent groupthink phenomena and bias in the solution \cite{Lorenz:2011aa}.
  150 +Currently, crowd-computing approaches make use of popular platforms such as Amazon Mechanical Turk (AMT) \cite{Buhrmester01012011, Paolacci} or Crowdcrafting \cite{Crowdcrafting}. The initial problem is decomposed into smaller sub-tasks that are distributed to individual workers and then aggregated to build a solution. This is also the case in popular scientific human-computing games such as Phylo \cite{Kawrykow:2012aa,Kwak:2013aa}. Importantly, these systems forbid any interaction between workers in order to prevent groupthink phenomena and bias in the solution \cite{Lorenz:2011aa}.
151 151  
152 152 However, such constraints necessarily limit the capacity of the system to harness the cognitive power of crowds and take full advantage of collective intelligence. For instance, iterative combinations of crowdsourced contributions can help enhance creativity \cite{DBLP:conf/chi/YuN11}. Similarly, the presence of a broad spectrum of expertise in a crowdsourcing community has been shown to increase innovation and the advance of knowledge in the group~\cite{Dankulov2015}. The usefulness of parallelizing workflows has also been suggested for tasks accepting broad varieties of answers \cite{DBLP:conf/chi/Little10}.
153 153