Posted in Brain Imaging on May 29, 2009 |
The left inferior frontal gyrus is activated during the processing of hierarchical structures in language, and Friedrich et al wondered whether a similar pattern would emerge when subjects processed mathematical expressions containing first-order logic. Indeed, using fMRI they found increased activation in the left inferior frontal gyrus (i.e., Broca’s area), but only when the subjects completed a hierarchical string as opposed to a simple list structure. Also of note, they found significantly more activation foci when the subjects gave incorrect answers rather than correct ones, suggesting that a more uncertain answer is run through additional redundancy tests in the pursuit of error detection.
Friedrich R, Friederici AD. 2009 Mathematical Logic in the Human Brain: Syntax. PLoS ONE 4(5): e5599. doi:10.1371/journal.pone.0005599
Read Full Post »
The ability of animals to sense weak magnetic fields is fascinating, and its mechanism has not yet been fully elucidated. One model has been proposed by Kolomytkin et al based on findings from the glass catfish. In their model, glycoprotein molecules (which may carry negatively charged oligosaccharide side chains) are tethered to an ion channel gate in electroreceptive cells. Each glycoprotein molecule contains many negative charges, so an applied electric and/or magnetic field exerts a force on it. If that force is substantial enough (i.e., if the work it does exceeds the thermal energy associated with one degree of freedom), it mechanically pulls the ion channel open, because the gate is covalently bonded to the glycoproteins. If enough of these channels open, their currents can sum to an action potential in the sensory neuron.
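The energy threshold in this model can be sanity-checked with a back-of-the-envelope calculation. The thermal energy follows from the Boltzmann constant and body temperature; the number of charges, field strength, and displacement below are purely illustrative assumptions, not values from the paper:

```python
# Thermal energy associated with one degree of freedom at body temperature:
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate body temperature, K
E_thermal = 0.5 * k_B * T
print(f"thermal energy: {E_thermal:.2e} J")  # ~2.1e-21 J

# Work done by a field on the glycoprotein's charges; in the model the
# channel should open only if this exceeds E_thermal.  n, E_field, and d
# are illustrative values, not measurements from the paper.
e = 1.602176634e-19  # elementary charge, C
n, E_field, d = 100, 1e5, 1e-8  # charge count, field (V/m), displacement (m)
W = n * e * E_field * d
print(f"work done by field: {W:.2e} J, opens channel: {W > E_thermal}")
```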
Using EEG, Marino et al evaluated the likelihood of a magnetic stimulus evoking a post-transduction action potential in ten human subjects. Even with only a 10 ms rise time and a 0.2 ms fall time, field potentials occurred at the onset of the magnetic field 100% of the time and at its offset 60% of the time. Based on other preliminary tests, they were able to rule out the possibility that the field potentials resulted from interactions between the field and the scalp electrodes. The fact that the receptor was able to detect such a rapid change (0.2 ms) suggests that the signal transduction process may indeed be initiated by a mechanical force. So although their location has not been pinpointed, it appears that humans also have electromagnetic receptors and that they act through a phylogenetically conserved mechanism.
Kolomytkin OM, et al. 2007 Glycoproteins bound to ion channels mediate detection of electric fields: A proposed mechanism and supporting evidence. Bioelectromagnetics 28:379-385.
Marino AA, et al. 2009 Evidence that transduction of electromagnetic field is mediated by a force receptor. Neuroscience Letters. doi:10.1016/j.neulet.2009.01.051.
Read Full Post »
Posted in Molecular Neuroscience on May 26, 2009 |
The human brain accounts for 20% of the body’s resting oxygen consumption. Half of that is used by sodium-potassium pumps, which hydrolyze ATP to maintain the ionic concentration gradients across cell membranes from which the resting potential derives. It is almost certain that this metabolic cost has a huge impact on the way that neural codes have been implemented.
In order to deduce the amount of energy that it takes for neurons to transmit information, Laughlin et al recorded from five blowfly photoreceptors that were randomly stimulated with 10^6 effective photons per second (i.e., daylight). They determined that each photoreceptor consumes 7 x 10^9 ATP molecules per second under these conditions, and 6 x 10^-5 milliliters of oxygen per minute. Given an information transmission rate of 1000 bits per second, this corresponds to 7 million ATP molecules consumed per bit of information.
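The ATP-per-bit figure follows directly from the two measured rates:

```python
# Measured rates for a blowfly photoreceptor in daylight (Laughlin et al. 1998):
atp_per_second = 7e9      # ATP molecules hydrolyzed per second
bits_per_second = 1000    # information transmission rate

atp_per_bit = atp_per_second / bits_per_second
print(f"{atp_per_bit:.1e} ATP molecules per bit")  # 7.0e+06, i.e. 7 million
```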
In the second-order large monopolar cell (i.e., an interneuron), they calculated a lower bound of 9 x 10^5 ATP molecules consumed per second, while the cell transmits more information (1600 bits per second) at a higher reliability. The authors attribute this discrepancy to the interneuron’s lower membrane conductance, in part because it need not have a large membrane area in order to accept and transduce photons like the photoreceptor does.
The fact that neurons use so much energy may be due to the stochastic nature of receptor activation. Numerous components must converge, including molecular collision, diffusion, and vesicle release, all of which reduce reliable information transfer by introducing noise. Perhaps by using more ATP than the minimum required, neurons insert useful redundancy that helps stabilize their operations.
Laughlin SB, et al. 1998 The metabolic cost of neural information. Nature Neuroscience doi:10.1038/23.
Read Full Post »
Posted in Trends in Neuroscience on May 25, 2009 |
Gerald Edelman, From brain dynamics to consciousness, how matter becomes imagination:
- Questions in neuroscience: How is the brain put together? What are the fundamental operations? How can we connect psychology to biology?
- Consciousness is what we lose when we sleep and what we gain again when we wake up.
- In neural development, neurons migrate as a crowd in various layers. No two individuals have the same dispersion. There are some functional implications of these facets of neural developmental:
- Precise, pre-specified, point-to-point wiring between neurons is excluded.
- Uniquely specific connections cannot exist across individuals (generally speaking).
- Divergent overlapping arbors imply the existence of vast numbers of inputs to cells, which sum to code.
- Even if individuals are genetically identical, no two of their neurons will be alike.
- Brain constructs: The majority of anatomical connections are not functionally expressed (i.e., silent synapses), alternative systems can account for the same behavior, hidden cues can have an effect (i.e., you can have behavior within language), and algorithms are essential.
- Even in vision, the brain constructs a context-dependent image. Motion is an essential element of perception.
- The world is unlabeled, as physics does not contain any theory that deals with neuroscience.
- Darwinian population thinking can be seen in several somatic selection systems: natural selection itself, immunity, and neuronal group selection.
- Neuronal group selection comes about because there is a creation of repertoires of circuits during embryogenesis by epigenetic variation and selection.
- Re-entrant mapping is a massively parallel, dynamic process to get synchronous firing between distant regions.
- The field potentials of distant neuronal groups can be correlated with one another, and such correlations indicate a relationship between them.
- How do you sample the vast brain to get from molecule to behavior? You can’t sample a billion neurons.
- Degeneracy is the ability of structurally different elements to perform the same activity. The best example of a degenerate code is the genetic system.
- If you have a stroke in the thalamus you lose consciousness forever. This indicates that at the least this structure is necessary for consciousness, but it is probably not sufficient.
- With a SQUID device you can measure minute magnetic fields, and one useful technique is to do so while conducting binocular rivalry experiments. If the brain has orthogonality, then the brain will suppress one of the images presented at a time. If you use 9.5 Hz for one of the images and 7.4 Hz for the other, you can tag the images with frequency. After a Fourier analysis, you can tell that the brain only responds to one of the frequency images at a time.
- The dynamic core hypothesis (i.e., see Figure 1 here) stipulates that in order to contribute to consciousness a brain region must perform high integration in a matter of milliseconds; any larger and it can’t accomplish anything. Re-entry among neuronal groups in the dynamic core entails consciousness.
- Qualia are just those discriminations entailed by selection among different core states.
- Variance in the system is so great that it can’t be noise. So maybe it comes as a byproduct of selection. We want to study the *mechanism* of this selection process.
- Memory will probably involve degeneracy of a profound sort. Every perception is an act of creation, every memory is an act of imagination.
- It will be feasible one day to create a conscious artifact, at the interplay between physics and biology. Such would be a great human achievement, but also raises some concerns.
- There are two types of consciousness in his dichotomy: primary (sensory), and higher-order (semantic), which only humans have.
- Modularity vs. general purpose: He thinks that modularity as a concept of the brain is a perverse notion. You must not conflate the necessary with the sufficient. The most important facet of the brain, in his conception, is re-entry.
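The frequency-tagging analysis Edelman describes (9.5 Hz vs. 7.4 Hz during binocular rivalry) boils down to comparing spectral power at the two tag frequencies. A minimal sketch with a simulated trace; the sampling rate, duration, and amplitudes are illustrative assumptions:

```python
import math

def power_at(signal, freq, fs):
    """Normalized power of `signal` (sampled at fs Hz) at one frequency,
    computed as a single-bin discrete Fourier transform."""
    re = sum(s * math.cos(2 * math.pi * freq * t / fs)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / fs)
             for t, s in enumerate(signal))
    return (re ** 2 + im ** 2) / len(signal) ** 2

fs, dur = 500, 2.0  # sampling rate (Hz) and duration (s), illustrative
t = [i / fs for i in range(int(fs * dur))]
# Simulated trace while the 9.5 Hz image dominates rivalry: strong 9.5 Hz
# component, weak (suppressed) 7.4 Hz component.
signal = [math.sin(2 * math.pi * 9.5 * x) + 0.1 * math.sin(2 * math.pi * 7.4 * x)
          for x in t]
print(power_at(signal, 9.5, fs) > power_at(signal, 7.4, fs))  # True
```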
Tim Shahan, Conditioned reinforcement:
- Reinforcement provides the connective tissues of learning and the living flesh of behavior of humans in natural environments.
- An initially neutral event acquires value because of its relation to primary reinforcement. One can see how long the stimulus resists extinction in order to test its strength.
- Baum’s generalized matching law accounts for the common deviations from matching (i.e., undermatching or bias).
- In the contextual choice model (Grace, 1994), conditioned reinforcement is independent of context, but sensitivity to the relative value changes with temporal context. This is similar to hyperbolic discounting.
- In order to measure response rates, a better measure may be resistance to changes in rates of reinforcement. So, you could disrupt reinforcement with extinction or prefeeding to see if the animal’s responding changes.
- Behavioral momentum sometimes messes with conditioned reinforcement.
- Also, if you vary the value of the stimulus and measure the resistance to change, you once again get no effect on the value of the stimulus in resistance to change. So we ask, are they really reinforcers? This has been a debate for over 50 years.
- Conditioned reinforcement may be a misnomer. Instead, you should use a word like signals, feedback, and/or sign posts. These stimuli *guide* behavior, instead of strengthening it. They also could be seen as reducing uncertainty, which is reinforcing, but only when the information has some utility (i.e., by allowing them to actually leave the situation after observing a cue for S minus).
- In this account, conditioned reinforcement is just signaling.
- Another example of this could be alcohol self-administration, where the actual dippers of the stimuli are merely means to an end. This could be seen as underlying the U-shape dose-response curve (Shahan, 2006).
- Perhaps animals are learning facts instead of learning to act in certain ways. Thus the notion of strengthening behavior may be superfluous. Instead, animals integrate learned facts in order to make decisions.
Cynthia Pietras, Effects of added cost on choice:
- Risky choice is choice in situations with environmental variance.
- Behavioral ecology takes an evolutionary perspective and assumes that choices are adaptive.
- The energy budget model stipulates that choices should depend on the organism’s energy budget: to maximize the probability of survival, the organism should choose the risky option when the energy budget is negative.
- In humans, you substitute money for food for ethical reasons. Only if the subjects earn enough money do they get to keep it.
- In this model, individuals should choose the more risky of two schedules only if they would be unlikely to reach the survival threshold by choosing the safer option. This is the behavior subjects usually display.
- In their study, they generally followed the predictions of the energy budget. Their study is better modeled by a dynamic function because, i.e., if people get lucky on the first trial in the risky schedule, it may make sense to then switch to the low risk option.
- When it is more costly to deviate from optimality (i.e., later trials), subjects show a higher proportion of optimal responding.
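The energy-budget rule above can be sketched as a simple decision function. All numbers in the usage example are hypothetical, and the all-or-nothing "reachability" test is a simplification of the model:

```python
def energy_budget_choice(reserve, need, safe_gain, risky_gains, trials_left):
    """Choose the risky option only when the safe one cannot meet the threshold.

    reserve: points earned so far; need: survival threshold; safe_gain: certain
    payoff per trial; risky_gains: possible (equiprobable) payoffs of the
    variable option; trials_left: trials remaining.
    """
    if reserve + safe_gain * trials_left >= need:
        return "safe"   # positive energy budget: the sure thing suffices
    if reserve + max(risky_gains) * trials_left >= need:
        return "risky"  # negative budget: only the variable option can succeed
    return "doomed"     # neither option can reach the threshold

# With 3 trials left, a safe 2 points/trial cannot close a 10-point gap:
print(energy_budget_choice(reserve=0, need=10, safe_gain=2,
                           risky_gains=[0, 8], trials_left=3))  # risky
# After a lucky early win, the safe option suffices, so switch to it:
print(energy_budget_choice(reserve=6, need=10, safe_gain=2,
                           risky_gains=[0, 8], trials_left=3))  # safe
```

The second call illustrates the dynamic point above: getting lucky early flips the optimal choice to the low-risk option.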
William Baum, Dynamics of choice:
- All behavior is choice, because every situation permits more than one activity. When the conditions change, the allocation changes. Because time is finite, if one activity increases another must decrease.
- He studies concurrent VI VI schedules with pigeons! In this case, choice = log (B1/B2)
- The transient phase occurs when conditions change, and then the behavior reaches a steady state, in idealized form. When the variation is no longer systematic (i.e., just unpredictable noise), then you are at the steady state.
- To estimate the equilibrium, you need a long time scale. But to estimate the dynamics, you need a shorter time scale. So single sessions for dynamics vs. overall averages for equilibrium. Or you can get smaller relatively speaking, so within-session components vs. single sessions.
- What is it that changes, always the behavior ratio? In that case, one could look for reward matching at each environmental condition. However, it’s also possible that local processes could lead to overall patterns only, and not the other way around.
- Transition data is pretty sweet; you get abrupt changes in visits following schedule changes. These changes rely on matching on quite a small time scale (preference pulses), which is evident when you evaluate the log of the food ratio versus the log of the peck ratio.
- When you look at switches only, you see a pattern of a long visit following reinforcement, then rapid changes back and forth. You can derive the preference pulse data from the switch visit data, but not vice versa, proving in this case the efficacy of reductionism.
- How general is the approach of thinking about different time scale? He expects that there will be order among time scales in other avenues of research as well.
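Baum's log behavior ratio, together with the generalized matching law mentioned in Shahan's talk, can be sketched as follows; the sensitivity and bias values are illustrative, not from any dataset:

```python
import math

def generalized_matching(r1, r2, sensitivity=0.8, bias=1.0):
    """Predicted log behavior ratio log(B1/B2) from reinforcer rates r1, r2.

    log(B1/B2) = sensitivity * log(r1/r2) + log(bias)
    sensitivity < 1 corresponds to undermatching; bias != 1 to side bias.
    """
    return sensitivity * math.log10(r1 / r2) + math.log10(bias)

# Equal reinforcer rates -> no preference (log ratio 0) when bias = 1:
print(generalized_matching(30, 30))  # 0.0
# A 3:1 reinforcer ratio with sensitivity 0.8 undermatches strict matching:
print(generalized_matching(90, 30))  # ~0.38, vs log10(3) ~ 0.48
```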
Joel Myerson, Cognitive aging, a behavior-theoretic approach:
- There are age-related declines in processing speed, working memory, learning (complex), and reasoning. This order underlies a hypothesis that changes in speed feed forward to changes in working memory and then learning, etc.
- The most pronounced change is processing speed, in a manner that is dose-dependent, ubiquitous across tasks, and universal across individuals. But is it uniform?
- There are two proposed mechanisms for these changes: exogenous causes, like listening comprehension, and endogenous causes, like multi-tasking. The exogenous ones are sometimes worse because they are time-limited, and you may never be able to recover. If you can’t listen fast enough to follow what somebody says, it will be difficult to recover that moment, whereas if you become a slower reader you can just read more to compensate.
- One experimental task that has been studied in this context is mental rotation. Differences in response times in old versus young people increase significantly from 0 to 130 degrees orientation.
- In a visual conjunction search task, the more elements there are in the array, the more of a divergence there is between reaction times in young and old subjects.
- In the abstract matching task, once again increasing the level of complexity leads to an age by complexity interaction.
- Qualitatively, it seems that almost anything you do that increases complexity boosts the size of the age difference, although I am curious to see whether some of these effects would disappear if the data were passed through a log transform.
- If you use a Brinley plot, you find that older individuals in most data sets are about 2.5 times slower. This suggests that the slowing is pretty much uniform, with the exception of verbal vs. visuospatial. That is, if words are involved, the slowing of RTs in older individuals is not as pronounced as it is for spatial tasks.
- There are individual differences in general RT speed across tasks, such that some individuals are generally slower and some are faster. In group means, university students perform better than vocational students, suggesting that RTs may indicate (or be responsible for) general intelligence in some way.
- One possible confound is that university students (in the young age group) may be more likely to be using their brain more. This can’t be the only explanation, however, as there is a dose-dependent effect across all age cohorts.
- Intelligence is a useful construct because people who are good at one thing tend to be good at lots of other things. He says that this “may be the most replicated result in all of psychology.”
- Working memory is another individual characteristic that may predict intelligence. You can assess this with spatial span tasks, and force subjects to perform a secondary task of some sort to induce multitasking. You get selective interference by multitasking, which means that secondary tasks only reduce performance when they require activity in the same domain (i.e., spatial task would interfere on other spatial tasks, but spatial and verbal would not).
- In older groups there is no evidence of increased susceptibility to interference, although there is a drop in memory span in older groups in general, especially in visuospatial tasks.
- Crystallized intelligence is background knowledge; fluid intelligence is success in novel situations, with either insight or rapid learning as the mechanism.
- The correlation between fluid intelligence as measured by Raven’s advanced progressive matrix and a 3-term learning task (i.e., pattern learning/recognition) is about 0.50. This is much stronger than working memory, and in his words amazingly high for this kind of research.
- His research team also got a really high correlation, 0.75, between learning and fluid intelligence in adults.
- Intelligence predicts educational achievement, education relies on learning (to an extent…), and therefore learning predicts intelligence.
- The cascade hypothesis is that age leads to slower processing which leads to a reduction in working memory which leads to a reduction in learning abilities which leads to a reduction in fluid intelligence.
- In their model, once you account for fluid intelligence, age adds no predictive power for the divergence in reaction times. This suggests that if you could boost fluid intelligence, you could mitigate some of the cognitive effects of aging.
Read Full Post »
Posted in Trends in Neuroscience on May 24, 2009 |
Jay Moore, Some effects of procedural variables on the dynamics of operant choice:
- It is not only the overall rate of reinforcement that influences responding in choice paradigms, but also the amount of delay discounting. Procedural variations (inter-response times, change over delays, etc.) have a big impact, and can change the independent variable.
- Increasing the amount of time of first delay until reinforcement in a schedule decreases the preference for that schedule. This indicates that the pigeon emphasizes the time elapsed from the beginning of the schedule to the end.
Federico Sanabria, The dynamics of conditioning and extinction:
- How can you measure learning? Probe sessions have some problems, and may not be completely ecologically valid.
- Instead, you can use probabilistic automaintenance and measure how well the animal can extract a US-predicting signal from noise.
- They have a Momentum/Pavlovian model, which tracks both what the animal did on the previous trial and conditioning, i.e., factoring in the reinforcement status on the last trial.
- With long inter-trial intervals (increased delays between USs), their model implies that positive momentum will have *more* impact. Also, conditioning is faster when the US presentation is *more* sporadic, they have apparently found.
- In terms of probability of producing an operant response, jumping from p=0.1 to p=0.2 produces a proportionally larger increase in responding than increasing from either p=0.025 to 0.05 or from p=0.05 to p=0.1.
Chris Podlesnik and Tim Shahan. Extinction, relapse, and behavioral momentum:
- Adding variable-time reinforcement to a schedule (i.e., variable interval 60 s versus variable interval 60 s plus variable time 15 s) strengthens the stimulus-reinforcer relation but weakens the response-reinforcer relation. Relative resistance to extinction, however, was greater in the schedule that included the variable-time reinforcement.
- Relapse of extinguished operant behavior also depends on the Pavlovian relationship, using a similar procedure as above.
- Nevin and Grace (2001)’s augmented model of extinction takes into account this power law function describing the resistance to extinction, and in 2005 they added a parameter to amplify the disruptive effect. This model gets pretty good correlations, with r squared values typically around 0.75 for individuals and 0.95 for group data.
- If you reduce the disruptive effects preventing behavior, you’ll get an increase in behavior, or relapse. Drug abuse may be relevant here, because you get similar results in alcohol and cocaine self-administration.
Bruce Curry, The tautology of the matching law in consumer behavior analysis.
- Cost matching is more of a standard economic approach, but there are all sorts of interesting topics to be studied in the matching law.
- If you specify the model correctly the risk of tautology should be low. Things that are testable (i.e., falsifiable), cannot be tautological.
- You can escape from tautology by reference to aggregation, adding coefficients across brands or across consumers. If you do not do so, you run into a danger of circularity.
- Matching could be seen as optimization, which would theoretically justify a certain form of matching.
- If you test your specific functional form against a free form regression model (i.e., neural networks or kernel analysis), and find that the free form is better you have reason to doubt your functional form.
- McDowell’s generalized matching law equation contains an error term, and he is worried about it. It could be made into an extraneous or latent variable in consumer analysis, for example. You need to take into account error terms in the regression, so maybe that is why you cannot also insert them into the original equation.
- Many questioners doubted that consumer relations can adequately be explained by the matching law, because there are too many random variables at play in real life behavior.
- People started arguing (pretty virulently) about whether you can use the R sub e approximation in probabilistic data. They agreed that the solution is to put in a feedback function for the error term.
Steven Hirsch, Exponential demand and cross-price demand interactions, extensions for multiple reinforcers:
- There are two concepts of value: scalar value (the dose/potency/size of the reinforcer or commodity) and essential value (the importance of the commodity to the subjects, i.e., their ordinal preferences).
- The form of the demand function and the slope across qualitative demand curves can tell us about the relative essential value of a stimulus.
- The rate constant of the demand function (alpha) is what distinguishes commodities (i.e., luxuries vs. normal goods). Different doses of the same reinforcer have the same rate constants, and the only difference is the starting quantity.
- His equation is log Q = log Q0 + k * (exp(-alpha * Q0 * c) – 1), where Q0 = consumption at zero price (scalar value), alpha = essential value (the rate of change in elasticity), and k = a scaling constant.
- Most research has been into scalar values instead of essential values, but often drug companies use behavioral pharmacology research on rodents to determine how similar the effects of two different drugs will be upon reinstatement. By comparing them to opiates, they can quantify the potential for dependency.
- People assume that there must be negative feedback effects at higher doses that cause the inverted U-shaped dose-response curve you see for most drugs, due either to diminished motor responses or to simple toxicity. However, you get the same inverted U-shaped dose-response curve for food, and nobody would say that toxicity or diminished motor responses are at work there; instead it is explained as a preference. So maybe there isn’t a negative feedback but merely an attenuation in pleasure that produces the downward-sloping portion of the graph. This might come about because the animal has already reached its optimal level of responding based on dopamine receptors, for instance.
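The demand equation can be explored numerically. Here it is implemented with log Q0 as the intercept (the form in the published exponential-demand model); the parameter values are illustrative, not fitted to any dataset:

```python
import math

def log_demand(c, Q0=100.0, alpha=0.003, k=2.0):
    """Exponential demand: log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * c) - 1).

    c: unit price; Q0: consumption at zero price (scalar value);
    alpha: rate of change in elasticity (essential value); k: scaling constant.
    """
    return math.log10(Q0) + k * (math.exp(-alpha * Q0 * c) - 1)

# Consumption at zero price equals Q0:
print(10 ** log_demand(0))                          # 100.0
# Consumption falls as unit price rises:
print(10 ** log_demand(5) > 10 ** log_demand(10))   # True
```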
Ido Erev, Learning and decisions from experience:
- It is useful to go back to learning and behavior research in order to gain insight into behavioral economics. He uses a simple “clicking paradigm” in most of his experiments, where participants simply click on one of two buttons to gain a reward.
- Any behavior can be rationalized by different priors, so the whole non-rational approach has limitations. Nevertheless, in terms of seeking the highest expected value within his clicking paradigm, here are the ways in which subjects commonly deviate from optimality:
- Underweighting of rare events. Say that the two buttons have reward schedules of either 1 with p=0.9 and -10 with p=0.1, or simply a constant 0. The best response is to choose the constant 0, but people only do so 40% of the time. When a schedule with rewards of 10 with p=0.1 and -1 with p=0.9 is pitted against a constant reward of 0, subjects tend to prefer the 0, even though in this case it has the lower expected value. When asked in questionnaires following the task, subjects overestimate the probability of the rare outcome in both tasks, yet they act as if they underweight this rare event.
- Payoff variability effect. High payoff variability tends to move behavior towards more random choices, even if there is still one schedule that has a higher overall expected value.
- Big eyes effect. If you show the mean foregone payoff of the possibilities that were not chosen, individuals tend to prefer risky alternatives with low expected value. By showing the mean subjects may begin to assume that there are better alternatives out there. This is a counter-intuitive effect because it does not display the typical loss aversion that you see in other experiments.
- Regressive exploration. Individuals display too much exploration in binary choice tasks, but not enough exploration in tasks with multiple schedules.
- Allais paradox. This is an effect that shows how individuals can deviate from expected utility theory, in that if subjects prefer lottery A over B they will not necessarily prefer some A + C over B + C where C is some other lottery. This has also been demonstrated in rats and bees. In humans, there is an experience-description gap. Although individuals report that they would act consistent with the expected utility theory, in behavioral tests they usually fail to do so.
- He talked about applying his research to fixing his own patterns of irrationality. In his introspection, he noted that we plan our behavior to be careful, but it doesn’t usually work out that way.
- He also applied this research to the broken-window effect on cheating in exams. The optimal threshold for punishment depends on how many students are cheating, because an individual might as well cheat if lots of others are cheating, but should not do so if he or she would be the only one; these are the two equilibria at the extremes. He suggests that in this scenario continuous punishment would be the optimal solution from the teacher’s perspective.
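The expected values behind the underweighting-of-rare-events example work out as follows:

```python
def expected_value(outcomes):
    """Expected value of a gamble given (payoff, probability) pairs."""
    return sum(payoff * p for payoff, p in outcomes)

# Gamble 1: +1 with p=0.9, -10 with p=0.1, versus a sure 0.
ev1 = expected_value([(1, 0.9), (-10, 0.1)])   # ~ -0.1: the sure 0 is better
# Gamble 2: +10 with p=0.1, -1 with p=0.9, versus a sure 0.
ev2 = expected_value([(10, 0.1), (-1, 0.9)])   # ~ +0.1: the gamble is better
print(ev1, ev2)
```

Subjects nevertheless take gamble 1 more often than the sure 0, and reject gamble 2 in favor of the sure 0, consistent with underweighting the rare outcome in both cases.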
Read Full Post »
Posted in Theoretical Neuroscience on May 24, 2009 |
In many sensory systems, neurons in later stages are less active than those in earlier stages. At each level a higher degree of specificity is necessary to trigger responding. Sensory data may lie along a continuous curved surface of the n-dimensional state space of possible stimulus forms. If true, that would mean that the state space would be overrepresented by individual neurons, which would free up higher levels of cognition to identify additional patterns in the data. However, neurons are metabolically expensive, so this benefit would have to be weighed by natural selection against the cost.
Kurtosis is one way to measure sparseness in terms of deviations from the Gaussian distribution, but it can be problematic because it overweights outliers. Empirically, sparse coding has been found in V1 of primates, the auditory and barrel cortices of rats, layer 6 of the motor cortex of rabbits, and the nucleus HVC (higher vocal center) of the zebra finch, which activates in response to specific song sequences. Further research could potentially find more examples, and develop additional measures beyond kurtosis.
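Kurtosis as a sparseness index can be sketched as follows. Excess kurtosis is near zero for a Gaussian response distribution, while a crude "sparse" code (mostly near-silent, occasionally strongly active) scores well above it; the mixture used to simulate sparseness here is an illustrative assumption:

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3.

    Near zero for Gaussian data; large positive values indicate a sparse,
    heavy-tailed distribution of responses.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

random.seed(0)
gaussian = [random.gauss(0, 1) for _ in range(100_000)]
# Mostly tiny responses, with large ones on ~5% of samples:
sparse = [random.gauss(0, 1) * (1 if random.random() < 0.05 else 0.05)
          for _ in range(100_000)]
print(excess_kurtosis(gaussian))  # near 0
print(excess_kurtosis(sparse))    # strongly positive
```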
Olshausen BA, Field DJ. 2004 Sparse coding of sensory inputs. Current Opinion in Neurobiology 14:481-487.
Read Full Post »
Posted in Aging on May 19, 2009 |
There have been few genome-wide tests of associations between gene expression and longevity in humans, but more are coming up the pipeline. Kerber et al (2009) used a dataset of Utah grandmothers to test 2151 always-expressed genes in a proportional hazards model. Their six-gene model accounted for 23% of the variance in mortality. Here are the genes in their multivariate model along with their coefficients (which are rough indicators of how important each gene is to the model):
- CORO1A (-0.27),
- FXR2 (0.21),
- CBX5 (-0.074),
- PIK3CA (-0.0094),
- AKAP2 (-0.0086),
- CUL3 (-0.0081).
A very cool study and hopefully there will be more like it soon.
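In a proportional-hazards model, coefficients like these combine as a linear predictor that is exponentiated into a hazard relative to baseline. A minimal sketch using the six coefficients above; the expression values for the example individual are hypothetical, and this is the generic Cox form, not Kerber et al.'s exact pipeline:

```python
import math

# Six-gene coefficients from the list above (log hazard ratios per unit of
# expression; negative = higher expression associated with lower mortality).
coefficients = {
    "CORO1A": -0.27, "FXR2": 0.21, "CBX5": -0.074,
    "PIK3CA": -0.0094, "AKAP2": -0.0086, "CUL3": -0.0081,
}

def relative_hazard(expression):
    """Hazard relative to baseline: exp(sum of coefficient * expression)."""
    lp = sum(coefficients[g] * x for g, x in expression.items())
    return math.exp(lp)

# Hypothetical standardized expression values for one individual:
person = {"CORO1A": 1.2, "FXR2": -0.5, "CBX5": 0.3,
          "PIK3CA": 0.0, "AKAP2": 0.8, "CUL3": -1.1}
print(relative_hazard(person))  # < 1 here: lower-than-baseline predicted risk
```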
Kerber RA, et al. 2009 Gene Expression Profiles Associated with Aging and Mortality in Humans. Aging Cell. DOI: 10.1111/j.1474-9726.2009.0046.
Read Full Post »