The predictive abilities of humans under uncertainty are a hot area of research today. A great deal of effort goes into determining how we make decisions, at least in part so that one day we will know how to improve those abilities.
Brown and Steyvers (2008) designed an experiment in which subjects were asked to infer the hidden state of a process governing the shade of a particle, when the only visible outputs were corrupted by noise. You can view the type of experiment here. In their first experiment, subjects averaged an accuracy of about 70%, and accuracy decreased as the noise increased, as one would expect.
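As a rough illustration of the kind of process involved (the change probability, state range, and noise level below are made-up values for the sketch, not the parameters Brown and Steyvers actually used), the stimuli can be thought of as noisy readings of a hidden value that occasionally jumps:

```python
import random

# Hypothetical parameters -- illustrative only, not the paper's values.
CHANGE_PROB = 0.1   # chance the hidden state jumps on any given trial
NOISE_SD = 10.0     # standard deviation of the observation noise

def simulate(n_trials, seed=0):
    """Generate one run of a change-point process: a hidden value that
    occasionally jumps, plus the noisy observations the subject sees."""
    rng = random.Random(seed)
    state = rng.uniform(0, 100)
    states, observations = [], []
    for _ in range(n_trials):
        if rng.random() < CHANGE_PROB:
            state = rng.uniform(0, 100)   # the hidden state changes
        obs = rng.gauss(state, NOISE_SD)  # the subject sees only this
        states.append(state)
        observations.append(obs)
    return states, observations
```

The subject's inference task, in these terms, is to recover `states` from `observations` alone.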
Aside from misjudging how many times the system changed its underlying state, subjects gave conditionally optimal answers. Their success was striking compared with other tests of judgment under uncertainty, and the authors hypothesized that this was because most of those studies asked for predictions, while this study merely asked subjects to infer what had already happened.
In their second experiment, they put this hypothesis to the test by comparing subjects' inferential abilities with their predictive ones. In addition to being asked for the hidden state, subjects were also asked to predict where the next stimulus would fall. Mathematically, the variability in the answers to these two questions should be the same: given the same sequence of stimuli, the prediction should be based on the same distribution over the hidden state as the inference about the last stimulus.
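To make the point concrete: because the observation noise averages to zero and the hidden state carries over to the next trial except when it changes, the best prediction for the next stimulus is essentially the best estimate of the current hidden state. A minimal sketch, assuming zero-mean noise, a known change probability, and new states centered on a prior mean (all assumptions for illustration, not the paper's actual model):

```python
def optimal_prediction(current_state_estimate, change_prob, prior_mean):
    """Expected value of the next stimulus: the state persists with
    probability (1 - change_prob), otherwise it resets to the prior
    mean; the observation noise contributes nothing on average."""
    return (1 - change_prob) * current_state_estimate + change_prob * prior_mean

# With a small change probability, the prediction barely differs from
# the inference answer itself:
est = 60.0
pred = optimal_prediction(est, change_prob=0.1, prior_mean=50.0)
# pred = 0.9 * 60 + 0.1 * 50 = 59.0, close to the inferred state of 60.0
```

In other words, a subject who has already produced an inference answer gets an essentially optimal prediction for free by repeating it, which is why the two tasks should show the same variability.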
However, this is not what the researchers found. Not only were the subjects much less accurate on the prediction task than on the inference task (which was to be expected), but they averaged 15 “response changes” in the predictive task compared with only 11 in the inference task. That is, the subjects were changing their responses more often than they ought to have. In fact, if the subjects had simply carried the answers they gave for the inference task over to the predictive task, their answers would have been 73% better.
If humans truly were “prediction machines,” as so many philosophers have claimed, it would make sense for their predictions to be at least as sound as their retrodictions. But the empirical results, once again, do not support this hypothesis.
Brown SD, Steyvers M. (2008). Detecting and predicting changes. Cognitive Psychology 58:49–67. doi:10.1016/j.cogpsych.2008.09.002.