A few years ago, a couple of neuroscientists asked a bunch of strangers to answer general-knowledge questions with knowable answers. For instance, “how long is the border between Italy and Switzerland?” and “how much did the population of Zurich increase in 2006?”
Most of these people had to estimate their answers because they didn’t know the exact ones (who carries around this knowledge?). The neuroscientists then split the strangers into three groups. The first group received a table showing how everyone else in the study had answered each question. The second group received only the average answer across the group. The third group received no information at all about the other participants’ answers.
The strangers were then asked to answer the questions a second time, with some of them now knowing how the group had responded the first time around. And what did the researchers find? The folks who were confident in their first estimates, and stuck with them, were more accurate.
Now, this was a one-time thing. The neuroscientists weren’t asking the strangers to develop a collective wisdom around the estimates. They were asking many of them to take a shot in the dark and then guess again after seeing new information. Confronted with the group’s answers, those people thought, “yeah, I’m probably wrong and someone else probably has a better idea.” Except their peers usually didn’t have a better idea. Instead, the estimates got worse.
To improve on the group estimates, the researchers went back to the first set of results and put greater weight on the answers of people who were less likely to change their minds. This improved the wisdom of the group even beyond the first set of estimates, where every answer had been given equal weight.
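The reweighting trick above can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not the researchers’ actual data or formula: it simply weights each first-round estimate by how little its author budged after seeing the group, using stubbornness as a proxy for confidence.

```python
# A minimal sketch of confidence-weighted averaging, with made-up numbers.
# (The study's actual data and weighting scheme may differ.)

def weighted_estimate(estimates, weights):
    """Weighted average of individual estimates."""
    return sum(w * e for e, w in zip(estimates, weights)) / sum(weights)

# Hypothetical first- and second-round guesses; assume the true answer is 100.
first_round = [98, 140, 102, 60, 150]
second_round = [98, 110, 102, 90, 120]

# Weight each person by how little they moved after seeing the group:
# holding firm earns a high weight, revising heavily earns a low one.
weights = [1 / (1 + abs(a - b)) for a, b in zip(first_round, second_round)]

equal = weighted_estimate(first_round, [1] * len(first_round))  # plain average
weighted = weighted_estimate(first_round, weights)              # confidence-weighted
```

With these invented numbers the weighted average lands closer to the true value than the plain average, because the two people who held firm happened to be the most accurate, which is the pattern the researchers exploited.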
There are two important points here. First, it’s possible that the folks who didn’t change their estimates knew they were more likely to be right than the group as a whole; as it turns out, their estimates did tend to be closer to the correct answer. Second, the estimates made independently of the group were more accurate.
In real life, it’s rare for folks to develop estimates independently of any group. In other words, we usually aren’t all strangers analyzing the same problem without knowledge of other people trying to do the same thing. This goes for experts and non-experts alike.
Therefore, in someone’s version of a utopia (perhaps Plato’s), you can envision a society of experts who spend all their time independently analyzing and developing solutions to complex problems. But as soon as you make the views of the group available to everyone else in the group… well… things can get worse.
One way to improve on group estimates is to look for the outliers. Who is willing to stick with their estimate even when it isn’t the popular view? This group is probably wrong too but, to the extent that independence matters, their views may be less wrong than everyone else’s.