Are synaptomes impossible?

In a recent commentary, Javier DeFelipe writes eloquently about the progress from our first basic understanding of the connection between two neurons to current efforts to map all neural connections. Check it out. At one point, he writes that “circuit diagrams of the nervous system can be considered at different levels, although they are surely impossible to complete at the synaptic level.” However, he doesn’t offer a time scale for this prediction. I agree with him that it would be impossible given our current technology, but at some point in the future, who can say? One thing we know is this: past bets against advances in science have not fared well.

Validating neuronal classifications with structural data

One of the assumptions of biology is that structure should predict function. Classifying neural cells is no exception: we classify cells that look alike (Purkinje cells, spiny neurons, etc.) on the assumption that they will function alike as well. Otherwise the classification would serve no purpose.

But reversing the inference, and attempting to classify neurons on the basis of their structure alone (as opposed to, e.g., simpler staining methods), has proven to be a difficult endeavor. There is little consensus on the taxonomy. One problem is that there is no established set of geometrical measurements that should be used.

Zawadzki et al mined data from NeuroMorpho, an online database with quantitative structural data from 5000+ neurons. First, here is a diagram of the structural data that NeuroMorpho provides for each neuron:

arXiv:1003.3036v1; height, width, and depth determined post alignment via PCA; bifurcation = when a branch splits into two (new) branches

When authors upload their neurons to NeuroMorpho’s database, they often give the cell class that they have assigned to each neuron. The most common of these neuron classes are: 1) pyramidal cells from the hippocampus (Pyr-Hip), 2) medium spiny cells from the basal forebrain (Spi-Bas), 3) ganglion cells from the retina (Gan-Ret), and 4) uniglomerular projection neurons from the olfactory bulb (Uni-Olf). Zawadzki et al then compared these classifications with the results of a naive statistical algorithm that categorizes cells solely on the basis of the similarities and differences of their structures. The “temperature” result that their algorithm spits out seems to be basically the result of a principal component analysis.

As you can see, on the basis of this temperature, there is plenty of overlap between the classes:

arXiv:1003.3036v1; think of it as temperature = relative place in parameter space, and susceptibility = relative likelihood of neuron class being in that portion of parameter space

The pyramidal neurons are scattered across the parameter space, suggesting that their morphological features overlap with those of the other categories. In contrast, the medium spiny neurons have the least overlap and thus the most morphological homogeneity, indicating that medium spiny neurons are the easiest class to segregate based on morphology. Overall, it seems fairly difficult to disambiguate neural classes based solely on structure, at least given some of the most popular current classifications.
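To make the classification difficulty concrete, here is a minimal sketch of the general approach, emphatically not the authors’ actual superparamagnetic algorithm: cluster neurons on structure alone from a hypothetical matrix of NeuroMorpho-style morphometric features, then compare the clusters to the author-assigned classes.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Hypothetical morphometric matrix: one row per neuron, with columns
# standing in for NeuroMorpho-style measurements (soma surface, number
# of bifurcations, total length, height/width/depth, etc.).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))  # placeholder data, not real neurons

# Standardize so no single measurement dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Collapse the parameter space onto a few informative axes, loosely
# analogous to the paper's one-dimensional "temperature" ordering.
pcs = PCA(n_components=2).fit_transform(X)

# Naive structure-only clustering into four groups; with real data these
# would be compared against Pyr-Hip, Spi-Bas, Gan-Ret, and Uni-Olf.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(pcs)
print(np.bincount(labels))
```

With real data, overlapping classes show up exactly as in the figure above: clusters that cut across the author-assigned labels.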

One solution to this problem is presented by Bota and Swanson, who suggest an ontological approach to classifying neurons. Here is their hierarchical classification schema:

doi: 10.1016/j.brainresrev.2007.05.005

They also argue that lower-level neuron classifications should be regarded as hypotheses that certain cell taxa fulfill distinct functional roles. These hypotheses can be tested and potentially discarded. Given the infancy of the field of neuron classification, such low barriers to change appear expedient.

References

Zawadzki K, et al. 2010 Investigating the morphological categories in the NeuroMorpho database by using superparamagnetic clustering. arXiv:1003.3036v1.

Bota M, Swanson LW. 2007 The neuron classification problem. Brain Research Reviews. doi:10.1016/j.brainresrev.2007.05.005. Available in PMC here.

Isolating pure postsynaptic densities in vitro

An old study by Cohen et al looked at the structure and protein composition of postsynaptic densities (PSDs) in neurons of the cerebral cortex. They isolated the PSDs by breaking apart the cells (homogenizing them) and then centrifuging to separate other organelles from the nerve terminals. Here is an electron micrograph of part of the synaptosome they isolated, with arrows marking the postsynaptic densities:

doi: 10.1083/jcb.74.1.181

The authors estimate that ~ 2% of the proteins in the postsynaptic density fraction are due to membrane contamination. They speculate that the majority of this contamination occurred during homogenization. So, the native morphological appearance of PSDs is largely maintained. This bodes well for other extraction and preservation techniques.

Reference

Cohen RS, Blomberg F, Berzins K, Siekevitz P. 1977 The structure of postsynaptic densities isolated from dog cerebral cortex: I. Overall morphology and protein composition. J Cell Biol 74:181-203. doi:10.1083/jcb.74.1.181.

Problems with serial section EM image reconstruction

According to Kubota et al (here), three of the main problems are:

1) Thickness: accurately determining the thickness of serial sections. Getting this wrong introduces error into 3D reconstructions. To get around this, people use the minimal folds method, which looks for protrusions in the plane where small folds of tissue self-adhere; section thickness is assumed to be one-half the width of these protrusions. In practice, however, this method can give highly variable results (see the sketch after this list). Kubota et al suggest an “optical method” that uses a 3D laser confocal microscope instead, which gives much more reliable results, at least under resin-based conditions.

2) Synapse identification: synaptic junctions parallel to the section plane (or at a low angle to it) are often not counted as synapses. This occurs ~25% of the time! So the authors use a sequence of 3D section-analysis criteria that accurately predicts the presence of a synapse, which you can see in the picture below: “(1) many synapse vesicles (A / B below), (2) presynaptic grids (C), (3) synaptic cleft structures, (4) postsynaptic densities (D), and (5) cytoplasm of postsynaptic dendrite or spine (E)”:

This method can be used to ID a lot of the synapses that older methods would have missed.

3) Shrinkage: they note that in their previous studies, after fixation, dehydration, and embedding, their GABAergic nonpyramidal cells often shrank to ~90% of their original size. They offer no solutions for this problem.
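Returning to the thickness problem in point 1: the minimal-folds arithmetic itself is trivial, as this toy sketch with hypothetical fold measurements shows; the trouble is that variable fold widths translate directly into variable thickness estimates.

```python
import statistics

# Minimal-folds rule: a fold is two thicknesses of section stuck together,
# so estimated section thickness = fold width / 2.
fold_widths_nm = [98, 112, 87, 130, 104]  # hypothetical measurements

estimates = [w / 2 for w in fold_widths_nm]
print(f"mean thickness: {statistics.mean(estimates):.1f} nm")
print(f"stdev: {statistics.stdev(estimates):.1f} nm")  # the variability at issue
```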

Elsewhere, Cardona et al note that “reconstruction of microcircuits requires serial section electron microscopy, due to the small size of terminal neuronal processes and their synaptic contacts. Because of the amount of labor that traditionally comes with this approach, very little is known about microcircuitry in brains across the animal kingdom. Many of the problems of serial electron microscopy reconstruction are now solvable with digital image recording and specialized software for both image acquisition and postprocessing.”

Reference

Kubota Y, Hatada S and Kawaguchi Y (2009) Important factors for the three-dimensional reconstruction of neuronal structures from serial ultrathin sections. Front. Neural Circuits 3:4. doi:10.3389/neuro.04.004.2009

Sebastian Seung on connectomics

You can find a video of Seung’s Davos ’10 talk here. He talks about neuroscience from ~2:00 to ~5:00. The “powerful microscope” he discusses is the transmission electron microscope. In my mind he is on track in discussing the wide potential of connectomics. But, one quibble, which Seung surely knows quite well:

Although we do have the entire 302-neuron connectome of the nematode C. elegans, from what I can tell scientists still aren’t quite able to model all of its properties on a computer. In fact, it’s difficult for neuroscientists to even model the 11-neuron gastric mill network of the lobster, dependent as it is on intrinsic oscillator neurons and other non-linear activity.

So, we no doubt need better ML algorithms to “read” and make sense of the TEM data. But we also need tons more physiological data (e.g., here) to construct and validate working input-output models of all important neural and glial cell classes before we will be able to understand what Seung calls the “essence that makes us uniquely human.”

How many cell types are there in the brain? Masland noted in 2004 that there are ~60 cell types in the retina, and reports one guess that there are ~1000 cell types in the cortex. These are currently classified by electrophysiological profile, morphology (dendritic arborization / cell size / position), and type of neurotransmitter. But eventually all putative cell types should be validated by their distinct genetics, perhaps determined via the Gal4/UAS transgene system driving cell-specific expression of a fluorescent protein.

The connectivity between our brain cells matters, but that’s not all that matters. Not all brain cells are made alike.

References

Meinertzhagen IA. 2010 The organisation of invertebrate brains: cells, synapses and circuits. Acta Zoologica 91:64-71. doi:10.1111/j.1463-6395.2009.00425.x

Masland RH. 2004 Neuronal cell types. Current Biology 14:497-500. doi:10.1016/j.cub.2004.06.035

Three techniques to map neural circuitry

Wired has photos and brief explanations of them here. The labs they discuss are Lichtman’s Harvard lab, which uses its automatic tape-collecting lathe ultramicrotome (ATLUM); Winfried Denk’s Heidelberg lab, which slices brain tissue into 25 nm sections and images rabbit neurons with electron microscopy; and Van Wedeen’s Harvard lab, which uses diffusion tensor imaging. Many of these researchers look to the semiconductor industry for inspiration. The goal is to figure out a few of the best techniques and then automate them.

Irregular blood flow and iron deposition in multiple sclerosis

As individuals age, there is increased iron deposition in the brain, due in part to dysregulation of the proteins that regulate iron influx and sense intracellular iron stores. As a redox-active element, iron can catalytically generate reactive oxygen species, increasing susceptibility to oxidative stress and contributing to neurodegeneration.

Today an article by Loz Blain reports that iron may be implicated in possible treatments for multiple sclerosis as well. In one study:

Dr. Paolo Zamboni took 65 patients with relapsing-remitting MS, performed a simple operation to unblock restricted bloodflow out of the brain – and two years after the surgery, 73% of the patients had no symptoms. Dr. Zamboni’s thinking could turn the current understanding of MS on its head, and offer many sufferers a complete cure.

Singh and Zamboni’s article (pdf) in the Journal of Cerebral Blood Flow and Metabolism indicates why this might be. Blood flow out of the brain is often blocked in MS patients, causing a high rate of cerebral venous reflux: in the main extracranial cervical vein, the rate of venous reflux flow is 70%, compared to 0% in control populations. It is possible that this leads to extra iron deposition in the brain and is responsible for the autoimmune activation that leads to demyelination and scarring.

The academic article is much more nuanced in its claims than the popular article, as is to be expected.

Reference

Singh AV, Zamboni P. 2009 Anomalous venous blood flow and iron deposition in multiple sclerosis. Journal of Cerebral Blood Flow and Metabolism 00:1–12.

Sleep and Consciousness: Anatomical Help Wanted

A mathematician wakes up and smells smoke. He goes to the hall and sees both a fire and a fire hose. He thinks for a moment and exclaims, “Ah, a solution exists!” and then goes back to bed.

Many researchers studying consciousness are similarly content to theorize over the mere feasibility of explaining phenomenal experiences, instead of examining the anatomical substrates that will allow us to answer the more intermediary questions. As an exercise in demonstrating the utility of these intermediary steps for our theories, this paper will examine how human neural circuit diagrams could immediately improve our understanding of the relationship between various components of sleep and consciousness.

One of the most prominent theories of the function of sleep is metabolic restoration. Benington and Heller (1995) propose that glycogen stores in astrocytes are exhausted during waking and replenished during sleep, and that the need for sleep is dictated by high levels of adenosine, which promotes neural synchronization. The authors argue that astrocytes are necessary for preserving the proper cellular environment for neurons during and between action potentials. So, in their model, the levels of glycogen in astrocytes indirectly affect neuronal responsiveness. But more recent research suggests that astrocytes may be directly involved in synaptic networks via the calcium-dependent release of glutamate onto neurons (Fiacco et al, 2009). Additionally, astrocytes have been found to release ATP, which is quickly hydrolyzed to adenosine and then causes a persistent synaptic suppression at neurons with adenosine-1 receptors (Pascual et al, 2005). Moreover, when mice without adenosine-1 receptors are sleep deprived, they do not display any change in synchronized slow-wave activity, even though synchronized slow-wave activity does vary directly with previous waking duration in wildtype mice (Bjorness et al, 2009). This implies a mechanism through which adenosine could regulate sleep homeostasis. Overall, Benington and Heller’s original proposal has been falsified in some respects and confirmed in others since its publication, as a result of anatomical data. Given a working circuit diagram of the cerebral cortex and/or thalamus of either mice or humans, the metabolic theory could be further refined and potentially falsified.

Another major theory for the function of sleep is that sleep allows for memory consolidation. Sejnowski and Destexhe (2000) postulate that bursts of synchronized high-frequency action potentials in thalamic neurons depolarize dendrites but not the soma, allowing calcium ions to enter dendrites and affect gene expression in the nucleus. This allows for plasticity but comes at a cost, as synchronized networks can feed forward into epilepsy if not properly regulated. This inhibitory regulation is difficult during the day because we are constantly in the presence of sensory stimuli. So, since sleep allows synchronized high-frequency action potentials without the risk of uncontrolled positive feedback, sleep may be the toll we pay for plasticity. The feasibility of their argument is based in large part on a reconstruction of synaptic connections of individual neurons in the thalamus. But in order to simulate the proper levels of firing in their model, they had to assume that specific levels of GABAA-mediated and excitatory inputs would be afferent to their model neurons. This trial-and-error technique ensures that their computational results make sense, but it risks biological irrelevance. Given more anatomical data, the researchers could have ensured the use of biologically plausible synaptic inputs.
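To illustrate the kind of assumption involved, here is a toy sketch, emphatically not Sejnowski and Destexhe’s actual thalamic model: a leaky integrate-and-fire neuron driven by Poisson excitatory and GABAA-like inhibitory inputs, where the input rates are free parameters that must be tuned by hand until the firing rate looks plausible.

```python
import numpy as np

def lif_firing_rate(exc_rate_hz, inh_rate_hz, t_sim=5.0, dt=1e-4, seed=0):
    """Toy leaky integrate-and-fire neuron with Poisson excitatory and
    inhibitory (GABAA-like) inputs. The input rates are the assumed,
    hand-tuned parameters discussed above."""
    rng = np.random.default_rng(seed)
    tau, v_rest, v_thresh, v_reset = 0.02, -70e-3, -54e-3, -70e-3
    w_exc, w_inh = 0.5e-3, -0.8e-3  # synaptic jump sizes (V), assumed
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (v_rest - v) / tau                 # leak toward rest
        v += w_exc * rng.poisson(exc_rate_hz * dt)   # excitatory drive
        v += w_inh * rng.poisson(inh_rate_hz * dt)   # inhibitory drive
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes / t_sim

# Trial-and-error tuning: scan input rates until the model neuron fires
# in a target range; anatomical data would pin these numbers down instead.
for exc in (500, 1000, 2000):
    print(exc, "Hz in ->", lif_firing_rate(exc, inh_rate_hz=exc / 4), "Hz out")
```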

The study of other sleep exotica could also benefit from additional anatomical data. Blackmore’s chapter on sleep (2003) touches on the evolution of dreams, a question that would be easier to answer if we knew the differences between our physiological substrates for dreams and those of other animals. LaBerge’s (1990) article on lucid dreaming includes the speculation that serotonergic neurons form a system that normally inhibits hallucinations but is itself inhibited in REM sleep, a hypothesis that requires anatomical data to test. Iranzo et al (2009) are able to note that a dopaminergic deficiency is likely not the cause of REM sleep behavior disorder, but are unable to conclude which brainstem deficiencies do cause the disorder. Although lesion and knock-out studies are good at falsifying hypotheses, they are unable to suggest new ones in the way that a large corpus of neural circuit data would be able to. Neural anatomical data might one day explain Cheyne and Girard’s (2007) observation that vestibular motor sleep paralysis experiences are associated with bliss while intruder and incubus experiences are not, beyond the presumption that the amygdala is somehow involved. And although Johns (1991) is correct in asserting that self-reports are currently our only gateway into subjective experience, this may not always be the case. Miyawaki et al (2008) were able to use fMRI to determine what subjects were currently viewing, based on training data where subjects saw random images. As anatomical data interfaces with new imaging techniques, the possibilities are tremendous.

If we want to answer the hard question (i.e., explaining phenomenal experiences), we will be best served by iterating towards technical solutions of the easy problems. The ultimate solution to this technical problem is the creation of a “connectome,” perhaps initially via diffusion imaging but eventually via serial section transmission electron microscopy. Now, it is quite possible that the individual neuron level will not contain enough information to recover the computational detail necessary for consciousness. For example, it is conceivable that the density of NMDA receptors on a given pyramidal neuron in the hippocampus could correlate with the information content of a given memory. In that case, we would need to image and reconstruct the circuit at a resolution that could capture membrane protein receptors. Although less likely, it is also conceivable that small cellular molecules such as microRNAs, which are known to be asymmetrically distributed throughout the brain (Olsen et al, 2009), could play a particular role in producing consciousness. In that case an effort to computationally simulate a conscious mind would be futile in the near future. But we cannot know the answers to these questions until we try. Although it may not be a sexy answer, the truth is that we need more anatomical data before we can intelligently discuss the hard question.

References

Blackmore SJ. 2003 Consciousness: An Introduction. Oxford University Press, USA, pp. 338-352.

Pascual O, Casper KB, Kubera C, Zhang J, Revilla-Sanchez R, Sul JY, Takano H, Moss SJ, McCarthy KD, Haydon PG. 2005 Astrocytic purinergic signaling coordinates synaptic networks. Science 310:113-116.

Olsen L, Klausen M, Helboe L, Nielsen FC, Werge T. 2009 MicroRNAs show mutually exclusive expression patterns in the brain of adult male rats. PLoS ONE 4:e7225.

Miyawaki Y, Uchida H, Yamashita O, Sato MA, Morito Y, Tanabe HC, Sadato N, Kamitani Y. 2008 Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60:915-929.

Fiacco TA, Agulhon C, McCarthy KD. 2009 Sorting out astrocyte physiology from pharmacology. Annual Review of Pharmacology and Toxicology 49:157-171.

Bjorness TE, Kelly CL, Gao T, Poffenberger V, Greene RW. 2009 Control and function of the homeostatic sleep response by adenosine A1 receptors. Journal of Neuroscience 29:1267-1276.

Johns MW. 1991 A new method for measuring daytime sleepiness: the epworth sleepiness scale. Sleep 14:540-545.

Cheyne JA, Girard TA. 2007 Paranoid delusions and threatening hallucinations: a prospective study of sleep paralysis experiences. Consciousness and Cognition 16:959-974.

Iranzo A, Santamaria J, Tolosa E. 2009 The clinical and pathophysiological relevance of REM sleep behavior disorder in neurodegenerative diseases. Sleep Medicine Reviews xxx:1-17.

LaBerge S. 1990 Lucid dreaming: psychophysiological studies of consciousness during REM sleep. In Sleep and Cognition, pp. 109-126.

Benington JH, Heller HC. 1995 Restoration of brain energy metabolism as the function of sleep. Progress in Neurobiology 45:347-360.

Sejnowski TJ, Destexhe A. 2000 Why do we sleep? Brain Research 886:208-223.

Book notes: Creativity in Science

Dean Keith Simonton’s recent book is a tour de force, explaining both the impact and the number of publications produced by a given scientist as a function of that scientist’s attributes. If you are looking to improve your creative output, why bother with non-empirical, qualitative advice when you don’t have to? Read this book instead. Here are some of my notes. As always, assume that each bullet is a direct quote from the book; my own interpolations are in brackets:

  • The ratio of high-impact work to total output in any given period neither increases nor decreases across the career course.
  • The ratio of citations to publications correlates -0.2 with the scientist’s age at each career period, signifying that the amount of variance is virtually zero [an r of -0.2 corresponds to just 4% shared variance].
  • Great scientists seldom make a name for themselves by focusing on one extremely narrow topic throughout the course of their career. Instead, they tend to display considerable scientific versatility by dealing with a variety of critical questions.
  • Highly creative scientists virtually never work on just one project at a time but rather they tend to pursue several independent inquiries simultaneously.
  • [Nevertheless], highly creative scientists are not mere dilettantes who flit from topic to topic without rhyme or reason. On the contrary, usually permeating most of their work is a core set of themes, issues, perspectives, or metaphors.
  • [Crosstalk between multiple projects] can sometimes result in unexpected solutions, progress on one project impinging on another project, even when the two are not viewed as being closely related.
  • So poor is the consensus among referees that their recommendations to accept a paper agree only about one-fifth of the time. As a consequence, most published articles should suffer rejection if resubmitted for publication. This bizarre outcome has been empirically demonstrated.
  • The two upward trends in the number of ideas and the number of scientists taken together imply that the production of new discoveries should become highly accelerated.
  • The larger the scientific community in which an individual is embedded, and the richer the cumulative body of scientific knowledge, the more impressive the creativity that can be displayed by the most prolific scientist in a given field… According to this law, as the number of scientists increases, a smaller proportion of scientists will account for half the work. In a sense, the discipline becomes more and more elitist as the field recruits more active participants.
  • [Of the personality traits], most significantly and consistently, creativity is positively associated with openness to experience… This inclination is also related to the creative person’s capacity for defocused attention.
  • One inquiry found that eminent natural scientists had a 28% rate of exhibiting some mental disorder during their lifetime, which is significantly lower than the 73% rate for eminent artists and the 87% rate for eminent poets. Yet 28% exceeds the rate in the general population.
  • “One of the favorite maxims of my father was the distinction between the two sorts of truths, profound truths recognized by the fact that the opposite is also a profound truth, in contrast to trivialities where opposites are obviously absurd” – Niels Bohr’s son
  • [Education can be restricted], so somehow scientific talent must pull off a balancing act between mastering a domain and being mastered by a domain.
  • The discovery of the most appropriate heuristic may require a meta-heuristic: namely, trial and error in the application of heuristics.
  • According to the chance perspective, the creative process is contingent on so many complex and interacting factors that it necessarily behaves as if it operated via a random combinatorial mechanism.

SQAB Notes — Day 2

Gerald Edelman, From brain dynamics to consciousness: how matter becomes imagination:

  • Questions in neuroscience: How is the brain put together?, What are the fundamental operations?, How can we connect psychology to biology?
  • Consciousness is what we lose when we sleep and what we gain again when we wake up.
  • In neural development, neurons migrate as a crowd in various layers. No two individuals have the same dispersion. There are some functional implications of these facets of neural development:
  1. Precise, pre-specified, point-to-point wiring between neurons is excluded.
  2. Uniquely specific connections cannot exist across individuals (generally speaking).
  3. Divergent overlapping arbors imply the existence of vast numbers of inputs to cells, whose summed activity forms the code.
  4. Even if individuals are genetically identical, no two of their neurons will be alike.
  • Brain constructs: the majority of anatomical connections are not functionally expressed (i.e., silent synapses), alternative systems can account for the same behavior, hidden cues can have effects (i.e., you can have behavior within language), and algorithms are essential.
  • Even in vision, the brain constructs a context-dependent image. Motion is an essential element of perception.
  • The world is unlabeled, as physics does not contain any theory that deals with neuroscience.
  • Darwinian population thinking can be seen in somatic selection systems (i.e., natural selection), immunity, and neuronal group selection.
  • Neuronal group selection comes about because there is a creation of repertoires of circuits during embryogenesis by epigenetic variation and selection.
  • Re-entrant mapping is a massively parallel, dynamic process to get synchronous firing between distant regions.
  • The field potentials of neurons can be correlated with various measures, which indicates a relationship between them.
  • How do you sample the vast brain to get from molecule to behavior? You can’t sample a billion neurons.
  • Degeneracy is the ability of structurally different elements to perform the same activity. The best example of a degenerate code is the genetic system.
  • If you have a stroke in the thalamus you lose consciousness forever. This indicates that at the least this structure is necessary for consciousness, but it is probably not sufficient.
  • With a SQUID device you can measure minute magnetic fields, and one useful technique is to do so while conducting binocular rivalry experiments. If the brain has orthogonality, then the brain will suppress one of the images presented at a time. If you use 9.5 Hz for one of the images and 7.4 Hz for the other, you can tag the images by frequency. After a Fourier analysis, you can tell that the brain responds to only one of the frequency-tagged images at a time (see the sketch after this list).
  • The dynamic core hypothesis (i.e., see Figure 1 here) stipulates that in order to contribute to consciousness, a brain region must perform high integration in a matter of milliseconds; any longer and it can’t accomplish anything. Re-entry among neuronal groups in the dynamic core entails consciousness.
  • Qualia are just those discriminations entailed by selection among different core states.
  • Variance in the system is so great that it can’t be noise. So maybe it comes as a byproduct of selection. We want to study the *mechanism* of this selection process.
  • Memory will probably involve degeneracy of a profound sort. Every perception is an act of creation, every memory is an act of imagination.
  • It will be feasible one day to create a conscious artifact, at the interplay between physics and biology. Such would be a great human achievement, but also raises some concerns.
  • There are two types of consciousness in his dichotomy: primary (sensory), and higher-order (semantic), which only humans have.
  • Modularity vs. general purpose: He thinks that modularity as a concept of the brain is a perverse notion. You must not conflate the necessary with the sufficient. The most important facet of the brain, in his conception, is re-entry.
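As promised, a minimal sketch of the frequency-tagging idea from the SQUID/binocular-rivalry bullet above, using synthetic data rather than real MEG recordings: the power at the two tagged frequencies tells you which percept currently dominates.

```python
import numpy as np

# Two rivalrous stimuli flicker at 9.5 Hz and 7.4 Hz; whichever image the
# brain is "seeing" dominates the corresponding spectral peak. The signal
# here is synthetic: one strong tag, one suppressed tag, plus noise.
fs = 600.0
t = np.arange(0, 10, 1 / fs)  # 10 s at 600 Hz
signal = (1.0 * np.sin(2 * np.pi * 9.5 * t)      # dominant percept
          + 0.2 * np.sin(2 * np.pi * 7.4 * t)    # suppressed percept
          + np.random.default_rng(0).normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f_tag in (9.5, 7.4):
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"power at {f_tag} Hz: {spectrum[idx]:.1f}")
```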

Tim Shahan, Conditioned reinforcement:

  • Reinforcement provides the connective tissues of learning and the living flesh of behavior of humans in natural environments.
  • An initially neutral event acquires value because of its relation to primary reinforcement. One can see how long the stimulus resists extinction in order to test its strength.
  • Baum’s generalized matching law accounts for the common deviations from matching (i.e., undermatching or bias).
  • In the contextual choice model (Grace, 1994), conditioned reinforcement is independent of context, but sensitivity to the relative value changes with temporal context. This is similar to hyperbolic discounting.
  • Rather than response rate itself, a better measure of response strength may be resistance to changes in rates of reinforcement. So, you could disrupt reinforcement with extinction or prefeeding to see if the animal’s responding changes.
  • Behavioral momentum sometimes messes with conditioned reinforcement.
  • Also, if you vary the value of the stimulus and measure the resistance to change, you once again see no effect of the stimulus’s value on resistance to change. So we ask, are they really reinforcers? This has been a debate for over 50 years.
  • Conditioned reinforcement may be a misnomer. Instead, you should use a word like signal, feedback, and/or signpost. These stimuli *guide* behavior, instead of strengthening it. They also could be seen as reducing uncertainty, which is reinforcing, but only when the information has some utility (i.e., by allowing the animal to actually leave the situation after observing a cue for S minus).
  • In this account, conditioned reinforcement is just signaling.
  • Another example of this could be alcohol self-administration, where the actual dippers of the stimuli are merely means to an end. This could be seen as underlying the U-shape dose-response curve (Shahan, 2006).
  • Perhaps animals are learning facts instead of learning to act in certain ways. Thus the notion of strengthening behavior may be superfluous. Instead, animals integrate learned facts in order to make decisions.

Cynthia Pietras, Effects of added cost on choice:

  • Risky choice is choice in situations with environmental variance.
  • Behavioral ecology takes an evolutionary perspective and assumes that choices are adaptive.
  • The energy budget model stipulates that choices should depend on the organism’s energy budget. You maximize the chance of meeting the survival threshold by choosing the risky option when the energy budget is negative.
  • In humans, you substitute money for food for ethical reasons. Only if the subjects earn enough money do they get to keep it.
  • In this model, individuals should choose the more risky of two schedules only if they would be unlikely to reach the survival threshold by choosing the safer option. This is the behavior subjects usually display (a sketch of this decision rule appears after this list).
  • In their study, subjects generally followed the predictions of the energy budget model. Their data are better modeled by a dynamic function because, e.g., if people get lucky on the first trial in the risky schedule, it may then make sense to switch to the low-risk option.
  • When it is more costly to deviate from optimality (i.e., later trials), subjects show a higher proportion of optimal responding.
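Here is the decision rule sketched with hypothetical numbers (this is the model’s logic, not Pietras’s actual procedure): choose the risky option only when the safe option cannot reach the survival threshold.

```python
# Energy-budget rule: pick the risky schedule only when the best possible
# outcome under the safe schedule still falls short of the threshold.
def choose(safe_payoff, trials_left, current_total, threshold):
    best_safe_outcome = current_total + safe_payoff * trials_left
    return "risky" if best_safe_outcome < threshold else "safe"

# Hypothetical session: safe option pays 10 points per trial.
print(choose(safe_payoff=10, trials_left=5, current_total=40, threshold=120))  # risky
print(choose(safe_payoff=10, trials_left=5, current_total=80, threshold=120))  # safe
```

This also shows why a dynamic model fits better: a lucky early win moves current_total up and can flip the optimal choice to the safe option mid-session.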

William Baum, Dynamics of choice:

  • All behavior is choice, because every situation permits more than one activity. When the conditions change, the allocation changes. Because time is finite, if one activity increases another must decrease.
  • He studies concurrent VI VI schedules with pigeons! In this case, choice = log(B1/B2).
  • The transient phase occurs when conditions change, and then the behavior reaches a steady state, in idealized form. When the variation is no longer systematic (i.e., just unpredictable noise), then you are at the steady state.
  • To estimate the equilibrium, you need a long time scale. But to estimate the dynamics, you need a shorter time scale. So: single sessions for dynamics vs. overall averages for equilibrium. Or you can go smaller, relatively speaking: within-session components vs. single sessions.
  • What is it that changes? Is it always the behavior ratio? In that case, one could look for reward matching at each environmental condition. However, it’s also possible that local processes lead only to overall patterns, and not the other way around.
  • Transition data is pretty sweet; you get abrupt changes in visits following schedule changes. These changes rely on matching on quite a small time scale (preference pulses), which is evident when you evaluate the log of the food ratio versus the log of the peck ratio (see the matching sketch after this list).
  • When you look at switches only, you see a pattern of a long visit following reinforcement, then rapid changes back and forth. You can derive the preference pulse data from the switch visit data, but not vice versa, proving in this case the efficacy of reductionism.
  • How general is the approach of thinking about different time scale? He expects that there will be order among time scales in other avenues of research as well.
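Here is the matching sketch referenced above. Baum’s generalized matching law is log(B1/B2) = a·log(R1/R2) + log b, where a < 1 indicates undermatching and log b ≠ 0 indicates bias; fitting it is just a linear regression in log-log space, shown here with hypothetical concurrent VI VI data.

```python
import numpy as np

# Hypothetical concurrent VI VI data: obtained reinforcer ratios (R1/R2)
# and the corresponding behavior ratios (B1/B2, e.g. peck ratios).
R_ratio = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
B_ratio = np.array([0.21, 0.33, 0.62, 1.10, 1.90, 3.10, 5.40])

# log(B1/B2) = a * log(R1/R2) + log(b)
a, log_b = np.polyfit(np.log10(R_ratio), np.log10(B_ratio), 1)
print(f"sensitivity a = {a:.2f}  (a < 1 -> undermatching)")
print(f"bias log b   = {log_b:.2f}")
```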

Joel Myerson, Cognitive aging, a behavior-theoretic approach:

  • There are age-related declines in processing speed, working memory, learning (complex), and reasoning. This order underlies a hypothesis that changes in speed feed forward to changes in working memory and then learning, etc.
  • The most pronounced change is processing speed, in a manner that is dose-dependent, ubiquitous across tasks, and universal across individuals. But is it uniform?
  • There are two proposed mechanisms for these changes: exogenous causes, like listening comprehension, and endogenous causes, like multi-tasking. The exogenous ones are sometimes worse because they are time-limited, and you may never be able to recover. So if you can’t listen fast enough to hear what somebody says, it will be difficult to recover that moment; whereas if you become a slower reader you can just read more to compensate.
  • One experimental task that has been studied in this context is mental rotation. Differences in response times in old versus young people increase significantly from 0 to 130 degrees orientation.
  • In a visual conjunction search task, the more elements there are in the array, the more of a divergence there is between reaction times in young and old subjects.
  • In the abstract matching task, once again increasing the level of complexity leads to an age by complexity interaction.
  • Qualitatively, it seems that almost anything you do that increases complexity boosts the size of the age difference, although I am curious to see if some of these effects would disappear if the data was passed through a log transform.
  • If you use a Brinley plot, you find that older individuals in most data sets are about 2.5 times slower (see the sketch at the end of these notes). This suggests that the slowing is pretty much uniform, with the exception of verbal vs. visuospatial tasks. That is, if words are involved, the slowing of RTs in older individuals is not as pronounced as it is for spatial tasks.
  • There are individual differences in general RT speed across tasks, such that some individuals are generally slower and some are faster. In group means, university students perform better than vocational students, suggesting that RTs may indicate (or be responsible) in some way for general intelligence.
  • One possible confound is that university students (in the young age group) may be more likely to be using their brain more. This can’t be the only explanation, however, as there is a dose-dependent effect across all age cohorts.
  • Intelligence is a useful construct because people who are good at one thing tend to be good at lots of other things. He says that this “may be the most replicated result in all of psychology.”
  • Working memory is another individual characteristic that may predict intelligence. You can assess this with spatial span tasks, and force subjects to perform a secondary task of some sort to induce multitasking. You get selective interference by multitasking, which means that secondary tasks only reduce performance when they require activity in the same domain (i.e., spatial task would interfere on other spatial tasks, but spatial and verbal would not).
  • In older groups there is no evidence of increased susceptibility to interference, although there is a drop in memory span in older groups in general, especially in visuospatial tasks.
  • Crystallized intelligence is background knowledge; fluid intelligence is success in novel situations, with either insight or rapid learning as the mechanism.
  • The correlation between fluid intelligence as measured by Raven’s Advanced Progressive Matrices and a 3-term learning task (i.e., pattern learning/recognition) is about 0.50. This is much stronger than working memory, and in his words amazingly high for this kind of research.
  • His research team also got a really high correlation, 0.75, between learning and fluid intelligence in adults.
  • Intelligence predicts educational achievement, education relies on learning (to an extent…), and therefore learning predicts intelligence.
  • The cascade hypothesis is that age leads to slower processing which leads to a reduction in working memory which leads to a reduction in learning abilities which leads to a reduction in fluid intelligence.
  • In their model, once you account for fluid intelligence, age adds no predictive power for the divergence in reaction times. This suggests that if you could boost fluid intelligence, you could mitigate some of the cognitive effects of aging.
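Finally, the Brinley-plot sketch mentioned above, with hypothetical task means: regress older adults’ mean RTs on younger adults’ mean RTs across tasks, and the slope estimates the general slowing factor.

```python
import numpy as np

# Hypothetical mean reaction times (seconds) on five tasks.
young = np.array([0.42, 0.55, 0.70, 0.95, 1.30])
old = np.array([1.00, 1.40, 1.80, 2.35, 3.20])

slope, intercept = np.polyfit(young, old, 1)
print(f"slowing factor ~ {slope:.2f}")  # near the ~2.5x Myerson describes
```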