That is the focus of an interesting article called “Top-Down Predictions in the Cognitive Brain” from the journal Brain and Cognition. The authors summarize various models pertaining to how inputs from the retinas are transferred to high level areas in the brain.
They first draw attention to how complex and fast-moving our visual processing must be, using the example of driving. Drivers must map two-dimensional retinal images onto three-dimensional models in fractions of a second, all while drawing on past experience, watching their speed, and shooting off a quick text to a friend.
To account for this speed, there must be some sort of top-down filtering of information in addition to the obvious bottom-up stream from retina to thalamus to primary visual cortex. The authors argue that top-down feedback must exist for a few reasons:
1) Given the multitude of shadows, angles, and changing light patterns in the everyday world, it would be improbable and computationally taxing for a purely bottom-up system to work out which “edges” belong to which objects.
2) Resolving ambiguous objects (something we know the human brain can do) is impossible without some sort of top-down feedback; the system must interpret its inputs in light of past experience with similar images.
3) Computer vision research has shown how advantageous these top-down systems can be.
Most of these models rely on a recursive, interactive loop between the top-down and bottom-up systems. Once the higher-level regions have some amount of information, they send predictions back “down,” where they are checked against the input data. A separate process then measures the error between each prediction and the actual stimulus-generated activity. Depending on the size of that error, the higher neural region either generates a new prediction (repeating the cycle until one yields little error) or stops the cycle and accepts that the prediction corresponds to reality.
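To make the loop concrete, here is a minimal toy sketch of that predict-compare-revise cycle in Python. This is my own illustration, not the paper’s actual model: the “stimulus,” the error threshold, and the simple error-proportional update rule are all assumptions made for the sake of the example.

```python
import numpy as np

def predictive_loop(stimulus, initial_guess, lr=0.5, threshold=0.01, max_iters=100):
    """Toy predict-compare-revise cycle (illustrative, not the paper's model).

    A 'higher region' holds a prediction, compares it to the bottom-up
    stimulus, and revises it in proportion to the error until the error
    is small enough to accept the prediction as reality.
    """
    prediction = np.asarray(initial_guess, dtype=float)
    stimulus = np.asarray(stimulus, dtype=float)
    for step in range(max_iters):
        error = stimulus - prediction           # mismatch between input and prediction
        if np.linalg.norm(error) < threshold:   # small error: accept the prediction
            return prediction, step
        prediction = prediction + lr * error    # revise the top-down prediction
    return prediction, max_iters

refined, steps = predictive_loop(stimulus=[1.0, 2.0, 3.0],
                                 initial_guess=[0.0, 0.0, 0.0])
```

With these toy numbers, each pass halves the remaining error, so the loop converges to the stimulus in a handful of iterations; the point is only the structure of the cycle, not the particular update rule.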
The researchers discuss various specific models (with this general structure) of visual processing, and then they apply their models to explain priming, emotion, schizophrenia, and dyslexia. In their discussion they are a little too verbose about how advantageous cognitive predictions are (because, duh), but they also suggest some cool ways that this general visual model could be applied to other brain functions.
K. Kveraga, A.S. Ghuman, M. Bar, Top–down predictions in the cognitive brain, Brain and Cognition 65 (2007), pp. 145–168.