
Social Influence Bias: A Randomized Experiment

http://www.sciencemag.org/content/341/6146/647.full

Fascinating 2013 article about an experiment on a live website in which the experimenters manipulated rating behaviour by giving content an early upvote or downvote. An early upvote had a very large influence on future voting, increasing by nearly a third the chances that a randomly chosen piece of content would gain more upvotes, and increasing final ratings by 25% on average. Interestingly, downvotes did not have the same effect, making very little overall difference. Topics and prior relationships also made some difference.

This accords closely with many similar studies and experiments, including a social navigation study I performed about a decade ago, involving clicking on a treasure map, the twist being that participants had to try to guess where, on average, most other people would click. About half the subjects could see where others had already clicked; the other half could not. The participants were aware that the average was taken from those that could not see where others had clicked. The click patterns of each set were radically different...

Mob effects in social navigation

On closer analysis, of those that could see where others had clicked, around a third of the subjects followed what others had done (as this recent experiment suggests), around a third followed a similar pattern to the 'blind' participants, and around a third actively chose an option because others had not done so. On the face of it this latter behaviour was a bit bizarre, given the conditions of the contest, though it is quite likely that they were assuming just such a bias would occur and acting accordingly.

One thing that might be useful, though very difficult, would be to try to weed out the herd followers and downgrade their ratings. StackExchange tries to do something like this by giving more weight to those that have shown expertise in the past, but it has not fully sorted out the problem of the super-influential who have accumulated a lot of good karma by gaming the system, nor the networks that form within it and lead to bias (a problem shared by the less sophisticated but also quite effective Reddit). At the very least, it might be helpful to delay showing aggregate feedback until a certain amount of time has passed or a threshold number of ratings has been reached.
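
As a rough sketch of that last idea (purely illustrative; the thresholds, names, and logic below are my own assumptions, not anything the site or the study describes), a rating display might simply withhold the aggregate until both a minimum age and a minimum number of votes have been reached:

```python
from datetime import datetime, timedelta

# Illustrative sketch only: hide aggregate ratings from prospective raters
# until the item is old enough AND has collected enough independent votes.
MIN_AGE = timedelta(hours=24)   # assumed minimum age before scores are shown
MIN_VOTES = 20                  # assumed minimum number of votes

def visible_score(posted_at: datetime, votes: list[int],
                  now: datetime | None = None):
    """Return the aggregate score, or None while it should stay hidden."""
    now = now or datetime.now()
    if now - posted_at < MIN_AGE or len(votes) < MIN_VOTES:
        return None  # raters vote "blind", limiting early herding
    return sum(votes)

# Example: an item with only a handful of early upvotes shows no score yet.
print(visible_score(datetime.now(), [1, 1, 1]))  # -> None
```

The point of the sketch is simply that early raters never see the manipulable early signal, which is where the herding in the experiment took hold.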

One thing is certain, though: simple aggregated ratings that are fed back to prospective raters (including those voting in elections) are almost purpose-built to make stupid mobs. As several people have shown, including Surowiecki and Page, crowds are normally only wise when they do not know what the rest of the crowd is thinking. 

ABSTRACT

Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.

Comments

  • Richard Huntrods January 1, 2016 - 8:24pm

    Very interesting Jon. As someone who tried and gave up on Stack Overflow (and the ba-jillions of spawned stack-hyphenated sites), I find the problem fascinating. The problem with Stack Overflow is more complex than simple reputation and gaming the system. The ranking system becomes a self-perpetuating nightmare. Those who gamed their way to very high initial reputation became self-perpetuating 'gurus' who then answered every single question, and the answer was immediately upvoted (whether the answer was good or bad) because of their reputation.

    There's also an insidious underbelly to stack-X, and that is that many who gamed are somehow also "insiders". To put it very simply, never, EVER down-rank an answer by one of these gurus. You will be punished in very mysterious ways - certainly ways no ordinary user can accomplish.

    The end problem is that only the "pro" answerers ever answer anything anymore. Amateurs can never gain enough reputation to have a "good" answer because the system rewards only the mighty. So for the most part very intelligent people with real knowledge stop answering as they get tired of being downvoted simply because they are not "the gods".

    It reminds me of an episode of South Park where the boys took down an MMORPG player who had so many points he could, and did, kill all the others. The sequences where they built up points for the final battle by gaming the system were quite funny (if you are a South Park viewer).

  • Gerald Ardito January 2, 2016 - 6:48am

    Jon,

    I also find this very interesting.

    It actually reminds me of the "shared" highlighting within the Kindle apps of various flavors. A colleague who is an English educator reported that what students reading on a Kindle found interesting in a text was significantly influenced by what others had already highlighted.

    Gerald

  • Jon Dron January 2, 2016 - 10:57am

    @Richard - indeed. Terry Anderson and I describe the social form of Stack Exchange as predominantly that of the set: it is about people clustering around shared interests, topics, etc, rather than becoming connected and, as long as it stays that way and the algorithms for collective intelligence are sound, it works pretty well. Unfortunately, networks form - people become known to others and, especially combined with the power you mention, the crowd is no longer so free of bias. In effect, they add more parts to the algorithm that work counter to the main one that drives it because, like all social media, it is a soft system composed of people and process.  I wonder whether it would help to anonymize (randomly for each post, so you cannot track individuals) every reply? Individuals would still see their own name and there would still be accountability, badges, relative power, and all the rest - there would just be no external signs of a person's identity.  Some people might self-identify, which could potentially mess things up again, but that would backfire if others impersonated those self-identified individuals in an attempt to boost their own karma, so I doubt that many would care to do so.
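
    For instance, a minimal sketch of such per-post anonymization (the names, functions, and details here are illustrative assumptions only, not a Landing or Stack Exchange feature) might give each author a stable alias within a post but unlinkable aliases across posts, while they still see their own name:

    ```python
    import hashlib
    import hmac
    import secrets

    SERVER_SECRET = secrets.token_bytes(32)  # kept private by the site

    def pseudonym(post_id: str, user_id: str) -> str:
        """Per-post alias: stable within a post, unlinkable across posts."""
        digest = hmac.new(SERVER_SECRET, f"{post_id}:{user_id}".encode(),
                          hashlib.sha256).hexdigest()
        return "user-" + digest[:8]

    def display_name(post_id: str, author_id: str, viewer_id: str,
                     real_names: dict) -> str:
        # Authors still see their own name; everyone else sees the alias.
        if viewer_id == author_id:
            return real_names[author_id]
        return pseudonym(post_id, author_id)

    # Example: the same author appears under different aliases on different posts.
    names = {"u42": "Jon"}
    print(display_name("post-1", "u42", "u7", names))
    print(display_name("post-2", "u42", "u7", names))
    print(display_name("post-1", "u42", "u42", names))  # -> "Jon"
    ```

    Reputation, badges, and accountability could all carry on behind the scenes; only the external signs of identity, and hence the network effects, would be suppressed.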

    @Gerald - very true. Also true of the glosses, annotations, etc of traditional books but the effects are very limited, for the most part, to individual volumes in libraries. I am certainly influenced by those highlights, not just when reading but to the extent that it feels weird adding my own highlight to the same place and even weirder to add one that is nearby or that overlaps. A similar problem affects citation indexes - the best way to get cited is to get cited. Andrew Chiarella has done some fascinating work on using this effect with his CoRead system, which exploits collective highlighting in a big way.  Like me in my own CoFIND system, he found it useful within a small, focused group with shared goals and surrounding pedagogical processes to drive it. The problem becomes bigger in larger crowds formed of networks and sets, where such group processes and norms are sparser or non-existent. Generically, it is an instance of the Matthew Effect - them that's got shall get, them that's not shall lose - which is one of a larger family of systems of preferential attachment. But there are lots of ways that complex adaptive systems in nature avoid that positive feedback trap to stay on the edge of chaos, including delay, parcellation, negative feedback loops, finite energy, etc. My first book (and a couple of papers derived from it) was in a large part a theoretically grounded attempt to come up with ways of designing social media for self-organized learning that utilize rather than suffer from such effects. I came up with a set of design principles that I really should get round to refining and revisiting some day.  One or two of these ideas have found their way into the Landing, though not as many as I'd like.
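
    To make the positive feedback trap concrete, here is a toy simulation (illustrative only, not from the article or from my book) of raters who weight their choices by the scores they can already see, compared with raters who vote blind:

    ```python
    import random

    def simulate(n_items: int = 10, n_votes: int = 1000,
                 feedback_visible: bool = True) -> list[int]:
        """Preferential attachment: votes go where votes already are."""
        scores = [0] * n_items
        for _ in range(n_votes):
            if feedback_visible:
                # weight by current score (+1 so unrated items can be chosen)
                weights = [s + 1 for s in scores]
            else:
                weights = [1] * n_items     # "blind" raters: uniform choice
            item = random.choices(range(n_items), weights=weights)[0]
            scores[item] += 1
        return sorted(scores, reverse=True)

    random.seed(1)
    print("visible feedback:", simulate(feedback_visible=True))   # heavily skewed
    print("hidden feedback: ", simulate(feedback_visible=False))  # roughly even
    ```

    Delay, parcellation, negative feedback, and finite energy all amount to ways of damping that runaway loop before it locks in.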

    Jon