By David Nield
9 September 2018
(Science Alert) – Why is it sometimes so hard to convince someone that the world is indeed a globe, or that climate change is actually caused by human activity, despite the overwhelming evidence? Scientists think they might have the answer, and it's less to do with a lack of understanding and more to do with the feedback they're getting. [Paper: Certainty Is Primarily Determined by Past Performance During Concept Learning –Des]

Getting positive or negative reactions to something you do or say is a greater influence on your thinking than logic and reasoning, the new research suggests – so if you're in a group of like-minded people, that's going to reinforce your thinking. Receiving good feedback also encourages us to think we know more than we actually do. In other words, the more sure we become that our current position is right, the less likely we are to take into account other opinions or even cold, hard scientific data.

"If you think you know a lot about something, even though you don't, you're less likely to be curious enough to explore the topic further, and will fail to learn how little you know," says one of the team members behind the new study, Louis Marti from the University of California, Berkeley.

For the research, more than 500 participants were recruited and shown a series of colored shapes. As each shape appeared, the participants were asked whether it was a "Daxxy" – a word made up for these experiments. The test takers had no clues as to what a Daxxy was or wasn't, but they did get feedback after guessing one way or the other – the system would tell them whether the shape they were looking at qualified as a Daxxy. At the same time, they were also asked how sure they were about what a Daxxy actually was. In this way the researchers were able to measure certainty in relation to feedback.
Results showed that the confidence of the participants was largely based on the outcomes of their last four or five guesses, not their performance overall.

The team behind the tests says this plays into something we already know about learning: for it to happen, learners need to recognise that there is a gap between what they currently know and what they could know. If they don't think that gap exists, they won't take on board new information.

"What we found interesting is that they could get the first 19 guesses in a row wrong, but if they got the last five right, they felt very confident," says Marti. "It's not that they weren't paying attention; they were learning what a Daxxy was, but they weren't using most of what they learned to inform their certainty."

This recent feedback has more of an effect than hard evidence, the experiments showed, and that might apply in a broader sense too – whether you're learning something new or trying to differentiate between right and wrong. [more]
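The pattern Marti describes – high confidence after 19 wrong guesses followed by 5 right ones – can be sketched as a toy model. The code below is purely illustrative and is not the study's actual analysis: it contrasts a hypothetical "recency-window" confidence measure (driven only by the last few trial outcomes, as the findings suggest) with overall accuracy across all 24 trials.

```python
def overall_accuracy(outcomes):
    """Fraction of correct guesses across all trials (1 = correct, 0 = wrong)."""
    return sum(outcomes) / len(outcomes)


def recent_confidence(outcomes, window=5):
    """Hypothetical confidence driven only by the last `window` outcomes,
    mimicking the recency effect reported in the study."""
    recent = outcomes[-window:]
    return sum(recent) / len(recent)


# 24 trials, matching Marti's example: the first 19 guesses wrong,
# the last 5 guesses right.
outcomes = [0] * 19 + [1] * 5

print(overall_accuracy(outcomes))   # about 0.21 – poor performance overall
print(recent_confidence(outcomes))  # 1.0 – yet all recent feedback was positive
```

Under this sketch, a participant whose recent streak is perfect reports maximal confidence even though most of their evidence points the other way – which is exactly the dissociation between ideal statistical inference and self-observed performance the paper reports.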

Scientists Say They've Found The Driver of False Beliefs, And It's Not a Lack of Intelligence

Graphic from an experiment to measure subjective certainty of participants during concept learning and an attempt to predict it using plausible model-based and behavioral predictors. Participants saw 24 trials, randomized between conditions. Feedback was displayed after responding. Graphic: Martí, et al., 2018 / Open Mind

ABSTRACT: Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Certainty Is Primarily Determined by Past Performance During Concept Learning