Diffusion is a type of passive transport that involves the net movement of molecules or ions from an area of higher concentration to an area of lower concentration down a concentration gradient. The concentration gradient is the difference in concentration between two points.
In biology, diffusion plays an important role in many biological events such as molecular transport, cell signaling, and neurotransmitter movement across a synaptic cleft.
How far would a typical molecule diffuse in a millisecond? A second? An hour?
Diffusion describes how molecules move randomly through a liquid. Their movement will be limited if they hit a barrier, or if they randomly collide with another molecule and react; neither of those events is captured by diffusion itself.
The distance a molecule will diffuse in a certain amount of time depends on the size of the molecule, the viscosity of the fluid, and the temperature.
Assuming that we are talking about diffusion in water at 25 °C, there is a nice calculator on physiologyweb.com that lists diffusion coefficients for different ions and molecules:
If we are talking about the diffusion of a small molecule neurotransmitter such as glutamate, it has a MW of 147, which is close to glucose’s MW of 180. So we can use glucose’s diffusion coefficient as a rough guide for the diffusion of some types of small molecule neurotransmitters.
This calculator suggests that glucose will diffuse 1000 nm in a millisecond, 31,000 nm (31 μm) in a second, or 1,900,000 nm (1.9 mm) in an hour.
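For back-of-the-envelope checks, the characteristic diffusion distance scales with the square root of time, x ≈ √(2Dt) in one dimension. Here is a minimal sketch; the diffusion coefficient is an assumed round value for glucose in water at 25 °C, and the calculator's exact figure may differ:

```python
import math

# Assumed value: D for glucose in water at 25 C is roughly 6e-6 cm^2/s.
# The physiologyweb calculator may use a slightly different figure.
D = 6.0e-6  # cm^2/s

def diffusion_distance_um(D_cm2_s, t_s):
    """Characteristic 1D diffusion distance x = sqrt(2 * D * t), in micrometers."""
    return math.sqrt(2 * D_cm2_s * t_s) * 1e4  # cm -> um

for label, t in [("1 ms", 1e-3), ("1 s", 1.0), ("1 hour", 3600.0)]:
    print(f"{label}: {diffusion_distance_um(D, t):.0f} um")
```

Whatever the exact coefficient, the √t scaling means a second of diffusion covers about √1000 ≈ 32 times the distance of a millisecond, which matches the rough numbers above.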
Molecular diffusion rates are helpful when building intuition about what structural information it is necessary to be able to infer in brain preservation. The way I think about it is that molecular events occurring more slowly than the time in which rapid long-term memory recall can be instantiated (conservatively, ~500-1000 ms) cannot be uniquely necessary for the structural information describing that memory.
Inspired by CalTech’s Question #21 for cognitive scientists: “What is diffusion? How far would a typical molecule diffuse in a millisecond? A second? An hour? How does the diffusion equation differ from the cable equation?”
– Cable theory can be derived in part from Ohm’s law, the fundamental theory of electricity that models the current flowing between two points as equal to the voltage difference between the two points divided by the material’s resistance, or in other words, the classic equation V = IR.
– The greater the cross-sectional area of the neurite’s cytosol (the interior part of it, containing biomolecules, electrolytes, and other ions), the easier an ion can flow through it, so the neurite’s longitudinal resistance, r_l, will be lower.
– If the cell membrane is more resistant to ion flow into or out of the cell (due to high membrane resistance, r_m), then charge will tend to accumulate inside the cell, and ionic current will flow farther down the neurite. This is often represented by a parameter called the length constant, λ, equal to the square root of r_m divided by r_l.
– If a cell membrane has a lower membrane capacitance (c_m, which is usually a fairly constant value), then the relative ion flow down the neurite will be greater, due to a lower displacement current. How quickly the membrane voltage changes in response to a current injected at a given point can be predicted by the time constant, τ, equal to the product of c_m times r_m.
– An electrotonic potential results from a local change in ion conductance, e.g. after a synaptic event, that does not propagate. It becomes exponentially smaller as it spreads. This is opposed to an action potential, which reaches the voltage threshold by which it does propagate down the neurite (due to the opening of voltage-gated ion channels), and then spreads like a wave.
– Dendritic trees can perform non-linear integration of signals that can be predicted on the basis of cable theory. The existence of subthreshold membrane potential fluctuations in dendrites, which based on my understanding should dominate neuronal signaling, can allow variations in synaptic weight distributions and input timing to encode a substantial amount of information within a single neuron.
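The cable constants above can be made concrete with a quick calculation. This sketch uses illustrative per-unit-length values for a thin neurite; the specific numbers are assumptions chosen only to give realistic orders of magnitude, not measurements:

```python
import math

# Illustrative per-unit-length values for a neurite of ~1 um radius
# (assumed numbers, chosen only to give realistic orders of magnitude):
r_m = 3.2e7    # membrane resistance x length, ohm*cm
r_l = 3.2e9    # longitudinal (axial) resistance per length, ohm/cm
c_m = 6.3e-10  # membrane capacitance per length, F/cm

lam = math.sqrt(r_m / r_l)  # length constant, cm
tau = r_m * c_m             # membrane time constant, s

def electrotonic_decay(V0, x_cm):
    """Steady-state amplitude of a non-propagating (electrotonic) potential
    at distance x: it falls off exponentially with the length constant."""
    return V0 * math.exp(-x_cm / lam)

print(f"lambda = {lam * 10:.1f} mm, tau = {tau * 1e3:.1f} ms")
```

With these values λ comes out around a millimeter and τ around 20 ms, and an electrotonic potential falls to 1/e of its amplitude after one length constant.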
What are the biophysics of voltage-gated sodium channels?
Sodium channels are a major component of excitable membranes; they are what allows excitable tissue to generate and propagate action potentials. These electrical signals are essential for proper neuronal communication.
The channel looks like a barrel, with 4-fold symmetry and a diameter of about 10 nm. It has an activation gate through which sodium ions flow: if the gate is closed, no ions can pass, but if it is open, ions pass through the pore. The channel is closed at rest, when the membrane potential is polarized. When a sufficient voltage depolarization across the membrane occurs, it draws the gates open, allowing sodium ions to flow through and leading to further depolarization. When enough sodium has passed, the further voltage change causes the inactivation gate to close, stopping the flow of sodium ions and leading to repolarization.
Sodium channels are selective for sodium ions because the inner filter of the pore is highly negatively charged; the positively charged Na+ ion binds well to the inner filter. K+ ions, while also positively charged, cannot pass because of a size restriction: the filter is not large enough for them to fit through, and because the pore has a fixed size, the filter cannot enlarge to accommodate larger ions. These are the properties of the sodium channel that allow it to selectively conduct sodium.
Sodium channels are good targets for many drugs and toxins. For example, tetrodotoxin specifically binds to voltage-gated sodium channels and can stop sodium channels from opening, thereby blocking all neural signaling.
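The activation/inactivation story above can be sketched with the classic Hodgkin-Huxley rate functions for the sodium channel's activation (m) and inactivation (h) gates. This is the standard squid-axon parameterization, not a model of any particular mammalian channel:

```python
import math

# Classic Hodgkin-Huxley rate functions (1/ms) for the squid-axon sodium
# channel's activation (m) and inactivation (h) gates; V is in mV.
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))

def step_response(V_rest=-65.0, V_step=0.0, dt=0.01, t_end=5.0):
    """Euler-integrate m and h during a voltage step; returns final (m, h)."""
    m = alpha_m(V_rest) / (alpha_m(V_rest) + beta_m(V_rest))  # resting steady state
    h = alpha_h(V_rest) / (alpha_h(V_rest) + beta_h(V_rest))
    for _ in range(int(t_end / dt)):
        m += dt * (alpha_m(V_step) * (1 - m) - beta_m(V_step) * m)
        h += dt * (alpha_h(V_step) * (1 - h) - beta_h(V_step) * h)
    return m, h

m, h = step_response()
print(f"after 5 ms at 0 mV: m = {m:.2f}, h = {h:.3f}")
```

During a step from rest to 0 mV, m opens quickly while h closes more slowly, which is why the sodium current is transient: activation first, then inactivation.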
What are the biophysics of transmitter-gated channels?
Transmitter-gated channels are opened by neurotransmitters and are generally ion-selective. To open the channel, the transmitter must bind to the receptor. Binding causes an allosteric change that opens another part of the channel, known as the ion channel gate. When open, the ion channel gate allows specific ions to pass through.
A special example is the NMDA receptor. Under normal circumstances, the NMDA receptor is blocked by Mg2+ and Zn2+ ions. When the post-synaptic neuron is depolarized, however, Mg2+ and Zn2+ ions are repelled. In this case, the receptor can be activated by glutamate. When activated, the NMDA receptor allows positive ions to pass through (K+, Na+, and Ca2+ ions), which can help sustain depolarization and lead to intracellular signaling events such as long-term potentiation.
NMDA receptors are often called “coincidence detectors” because these two events must occur together for the channel to open: first, the post-synaptic neuron must be depolarized (relieving the Mg2+ block), and second, glutamate must be released.
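The coincidence-detection logic is simple enough to state as a toy truth function (purely illustrative; the real Mg2+ unblock is graded with voltage rather than all-or-none):

```python
def nmda_conducts(glutamate_bound: bool, postsynaptic_depolarized: bool) -> bool:
    """Toy NMDA gate: the Mg2+ block is relieved only when the postsynaptic
    membrane is depolarized, so BOTH conditions are required for current flow."""
    mg_block_relieved = postsynaptic_depolarized
    return glutamate_bound and mg_block_relieved

# Only the coincidence of both events opens the channel:
assert nmda_conducts(True, True)
assert not nmda_conducts(True, False)   # glutamate alone: still blocked by Mg2+
assert not nmda_conducts(False, True)   # depolarization alone: no ligand
```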
Another example is the nicotinic acetylcholine receptor. When acetylcholine binds to the receptor, the channel opens. This allows sodium and potassium ions to pass through, which leads to depolarization and therefore a neural signal.
Most types of ion channel activity in the brain need regulation. Regulation can occur post-translationally through the addition of a phosphate group to one or more amino acids. The addition of a phosphate group to a particular location of the AMPA receptor, for example, has been shown to increase the probability of AMPA channel opening. The Ca2+/calmodulin kinase II pathway is able to phosphorylate the GluA1 AMPA receptor subunit at Ser831, causing an increase in AMPA channel conductance.
In addition to the post-translational regulation of channel activity, many channels are regulated by endogenous compounds in the brain. Serotonin is a monoamine neurotransmitter that regulates various types of sodium channels and potassium channels. Dopamine is also a monoamine neurotransmitter, and it can be found in extrasynaptic regions. Dopamine has been shown to increase potassium channel activity by activating dopamine D1 receptors in axons.
Together, the biophysics of ion channels allow for neural signaling by allowing for the passage of ions into and out of the cell. This allows for changes in membrane potential and intracellular signaling.
Inspired by CalTech’s Question #19 for cognitive scientists: “Describe the main biophysical characteristics of ionic channels. How does its biophysical properties contribute to its physiological function? What is thought to be the basis for the channels ion selectivity?”
Modeling is at the crux of what makes science iterative. Andrews and Arkin’s 2006 review (here) explains many of the major equations in modeling the reactions of chemical species in cells.
The most common model is the kinetic ordinary differential equation (ODE). It assumes that 1) concentrations of molecules are continuous (even though individual molecules are of course discrete), 2) reactions occur in a homogeneous region, and 3) reactions occur deterministically. The molecules in the reaction are described as a set of differential equations with one equation per chemical. Typically, these equations are solved numerically via computers, although this process is non-trivial.
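As a concrete example, a single irreversible reaction A + B → C under mass-action kinetics gives one differential equation per species. A minimal forward-Euler sketch (the rate constant and starting concentrations are arbitrary illustrative values):

```python
def simulate(k=1.0, A0=1.0, B0=0.8, dt=1e-4, t_end=10.0):
    """Forward-Euler integration of the mass-action ODEs for A + B -> C:
    d[A]/dt = d[B]/dt = -k[A][B], d[C]/dt = +k[A][B]."""
    A, B, C = A0, B0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k * A * B   # deterministic, continuous, well-mixed assumptions
        A -= dt * rate
        B -= dt * rate
        C += dt * rate
    return A, B, C

A, B, C = simulate()
print(f"[A] = {A:.3f}, [B] = {B:.3f}, [C] = {C:.3f}")
```

The three assumptions show up directly in the code: concentrations are continuous floats, there is no spatial coordinate at all, and the update rule is fully deterministic.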
One of the other common models accounts for the subcellular spatial localization of proteins and reaction components. The concentration at each analyzed region is modeled as a point in a vector, and the reaction is analyzed by taking the partial derivative of these vectors with respect to time. Thus it is called a partial differential equation (PDE). PDEs have the same assumptions as ODEs except that the homogeneous region (assumption #2) is of course not necessary. In order to achieve high spatial resolution, the equations require small subvolumes and short time steps. So PDEs are especially intensive computationally.
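To see why PDEs are costly, here is the simplest possible case: 1D diffusion discretized into subvolumes with an explicit finite-difference update. This is only a sketch in arbitrary units; note that the explicit scheme is stable only while D·dt/dx² ≤ 0.5, which is exactly the "small subvolumes force short time steps" problem:

```python
def diffuse_1d(n=50, D=1.0, dx=1.0, dt=0.2, steps=500):
    """Explicit finite-difference scheme for du/dt = D * d2u/dx2 with
    no-flux boundaries; stable only while D * dt / dx**2 <= 0.5."""
    u = [0.0] * n
    u[n // 2] = 1.0  # point source in the middle subvolume
    for _ in range(steps):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else u[i]       # reflecting boundary
            right = u[i + 1] if i < n - 1 else u[i]  # reflecting boundary
            new[i] = u[i] + D * dt / dx ** 2 * (left - 2 * u[i] + right)
        u = new
    return u

u = diffuse_1d()
print(f"total mass = {sum(u):.6f}, peak = {max(u):.3f}")
```

Halving dx to double the spatial resolution forces dt down by a factor of four, and a 3D version multiplies the subvolume count again, which is why these simulations get expensive quickly.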
One of the main applications of modeling PDEs in neuroscience is in calcium wave models. Calcium imaging relies on indicator molecules that change spectral properties upon binding to calcium ions. So by measuring the fluorescence of the system it is possible to determine the quantity of calcium in a given region.
In particular, this is useful for measuring the rapid propagation of calcium waves through networks of astrocytes connected by gap junctions. Hung et al (here) show an example of an astrocytic calcium wave that can turn growth cones to guide growing axons:
With differential modeling, this type of calcium wave could be fit to a particular model that would help uncover relevant parameters, such as the initial Ca2+ concentration and the Ca2+ flux density amplitude. These parameters can then tell us things about how the astrocytic calcium wave’s propagation correlates with other relevant variables, like neuronal activity and axon mobilization.
Inspired by CalTech’s Question #18 for cognitive scientists: “Explain how a system of chemical reactions can be represented as ODEs and PDEs. What approximations are involved?”
Andrews SA, Arkin AP. 2006 Simulating cell biology. Current Biology. doi:10.1016/j.cub.2006.06.048.
Hung J, Colicos MA (2008) Astrocytic Ca2+ Waves Guide CNS Growth Cones to Remote Regions of Neuronal Activity. PLoS ONE 3(11): e3692. doi:10.1371/journal.pone.0003692
Functional magnetic resonance imaging is pretty technical stuff. As the Neuroskeptic aptly puts it, “when you do an fMRI scan you’re using a superconducting magnet to image human neural activity by measuring the quantum spin properties of protons. It doesn’t get much more technical.”
The technique for measuring neural activity has become a household name just 20 years after the discovery of the blood oxygen level-dependent response. The number of articles discussing “fMRI” in the abstract or title has been following this trend:
The “renaissance years” for fMRI seem to be between ’96 and ’98, while its growth really accelerated between ’02 and ’04. Now onto the condensed two-pronged basis of the tech…
1) Biological: When neurons are more active (i.e., have a higher rate of action potentials) they quickly require more glucose for energy. Thus, task-laden neurons do more of the rapid (~2 times faster) but non-oxygen-requiring process of breaking down glucose via glycolysis. One of the reasons that glycolysis is upregulated is to provide energy for the membrane pump Na+/K+-ATPase, which allows for synaptic glutamate reuptake by astrocytes and AMPA receptor turnover at postsynaptic densities, among other tasks.
Due to the increased need for energy following task performance, blood flow to a brain region also increases when that region is involved in a task or stimulated in some way. Using PET imaging, Fox et al (here) showed that somatosensory stimulation led to a 29% increase in cerebral blood flow in the contralateral somatosensory cortex of human participants:
Concomitantly, blood flow almost always increases more than local O2 demand does, as shown by measurements of the partial pressure of O2; the blood flow increase is an overcompensation. This means that the concentration of deoxyhemoglobin in local veins should decrease.
Deoxyhemoglobin can be measured by magnetic resonance because it has unpaired electrons, which enables researchers to track the task-induced change. The changes in aerobic glycolysis and task-induced cerebral blood flow increases probably have a similar origin.
2) Physical. Deoxyhemoglobin has 4 unpaired electrons (i.e. it is paramagnetic), whereas oxyhemoglobin does not have unpaired electrons. This means that deoxyhemoglobin will have a magnetic moment that will affect the local magnetic field and alter the ability of the MR machine to flip the spin state of protons at a given magnetic field value.
So, a change in the ratio of oxy- / deoxy- hemoglobin leads to a change in the T2* relaxation times of MR images, changing the image intensity by a few percent. This image intensity difference extends beyond just the cerebral blood volume because a local magnetic field will form across arteries / veins if one of the regions has a higher ratio of oxy- / deoxy- hemoglobin.
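The T2* dependence can be written down directly. For a gradient-echo sequence the signal decays as S = S0·exp(-TE/T2*), so a small activation-related lengthening of T2* yields a percent-level intensity change. The T2* values below are assumed, purely illustrative numbers, not measurements:

```python
import math

def signal(TE_ms, T2star_ms, S0=1.0):
    """Gradient-echo MR signal: S = S0 * exp(-TE / T2*)."""
    return S0 * math.exp(-TE_ms / T2star_ms)

# Assumed, illustrative values: activation lowers the local deoxyhemoglobin
# fraction, lengthening T2* slightly.
TE = 30.0                        # echo time, ms
T2_rest, T2_active = 45.0, 46.0  # assumed T2* at rest vs. during the task, ms
pct = 100 * (signal(TE, T2_active) - signal(TE, T2_rest)) / signal(TE, T2_rest)
print(f"BOLD signal change: {pct:.2f}%")
```

A ~1 ms lengthening of T2* at a 30 ms echo time works out to roughly a 1-2% signal change, consistent with the "few percent" figure above.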
Bammer et al (here) explain this concept with a beautiful diagram and provide a chart describing the relationship between T2* intensity and time given different oxygenation concentrations of hemoglobin:
I refuse to go into more detail here because I didn’t do very well on the MR section of my organic chemistry class and reading more about it now is bringing up painful memories.
Statistically, these T2* differences can be extracted to give indications of the cerebral blood flow, which should correlate with the neural activity upregulation above baseline in the given region. Unfortunately, the temporal resolution isn’t great and changes in blood flow tend to occur ~ 5 seconds after changes in activity. But it is so noninvasive that the tech will likely continue to receive widespread use.
Inspired by CalTech’s Question #17 for cognitive scientists: “What is the physical and biological basis of structural and functional MRI for brain imaging?”
Bammer R, et al. 2005 Foundations of Advanced Magnetic Resonance Imaging. NeuroRx PMCID: PMC1064985.
The local field potential (LFP) is extracted by placing an extracellular microelectrode in the middle of a group of neurons, without being too close to any particular cell. When ion channels open and close, charged molecules diffuse around. This changes the electrical potential of the medium surrounding the cells, which the electrode detects and transduces.
Researchers can then reduce the amplitude of any signals higher than ~300 Hz (cycles per second) via a low-pass filter, which should remove rapid fluctuations in the electrical potential like action potentials. Instead of APs, the local field potential (LFP) is meant to measure relatively slower-moving electrical currents in the surrounding area. Mainly, LFPs should reflect changes in the membrane potential of postsynaptic neurons following the binding of a neurotransmitter to a receptor at the postsynaptic density.
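A toy version of that filtering step (a single-pole low-pass written out by hand; real electrophysiology pipelines use much sharper filters, so treat this purely as the idea):

```python
import math

def lowpass(x, fs, fc=300.0):
    """Single-pole low-pass filter with cutoff fc (Hz). Real electrophysiology
    pipelines use much sharper filters; this only illustrates the idea."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2 * math.pi * fc))
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)
        out.append(y)
    return out

# A slow 10 Hz "synaptic" component plus a fast 1 kHz "spike-band" component:
fs = 20000
t = [i / fs for i in range(fs)]  # 1 second of samples
sig = [math.sin(2 * math.pi * 10 * ti) + math.sin(2 * math.pi * 1000 * ti) for ti in t]
filt = lowpass(sig, fs)
```

The slow 10 Hz component passes nearly untouched while the 1 kHz spike-band component is strongly attenuated, which is the point of separating the LFP band from spiking activity.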
This is an empirical question though, and one that has been investigated more of late. With this in mind, David et al (here) correlated activity between spiking events in the high frequency filtered band (greater than ~ 600 Hz) with activity in the LFP band. They recorded from the primary auditory cortex of passively listening ferrets with tungsten microelectrodes with resistances of 1–5 MΩ.
The following figure indicates that there is a correlation between single unit (i.e., neuron) activity (SUA; denoted via green circles and blue x’s) and raw LFP activity (black curve in middle). The researchers “cleaned” the LFP signal by removing the parts predicted by the spiking activity; this is shown via the blue and green curves, and the differences between the raw and cleaned curves are shown at bottom:
So, there is sometimes a correlation between single unit activity and LFP recordings, which introduces a bias into the LFP activity and should make researchers a bit wary. Nevertheless, LFP studies have yielded some interesting insight. For example:
1) Groups of neurons in spatially distinct areas of the cortex coordinate their activity via oscillatory synchronization. In their highly cited paper, Gray and Singer (here) recorded from the primary visual cortex (BA17) of kittens. When they passed a light bar stimulus of a particular orientation and direction through the receptive field of their recorded regions, the amplitude of multi-neuron activity and LFPs tracked the optimal light bar orientation in 14 of 25 trials*. The following chart shows recordings when the light bar is passed through the receptive field (open squares / above) and baseline recordings at the start of each trial (filled squares / below):
Because the multi unit activity and LFP responses are so similar, the authors conclude that neurons involved in the generation of oscillations are restricted to a small volume of cortical tissue and are probably organized in a single cortical column.
* These were the 14 trials in which the MUA response matched the LFP response, the other ones were passed off as anomalies.
2) Propagation through cortical networks can occur via neuronal avalanches. Beggs and Plenz (here) recorded from cell cultures prepared from rat sensory cortex and grown on multi-electrode arrays with 60 channels. They used the LFP definition as negative population spikes. The LFPs were at least 24 ms apart at any given electrode. They then organized these LFPs into bin widths of 4 ms and showed how the neural activity propagated in the network:
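The binning procedure is easy to sketch. Given a list of event times, group them into 4 ms bins and call each run of consecutive non-empty bins an avalanche. This is a simplification of Beggs and Plenz's per-electrode definition, just to show the bookkeeping:

```python
def avalanches(event_times_ms, bin_ms=4.0):
    """Bin event times and return avalanche sizes, where an avalanche is a
    run of consecutive non-empty bins bounded by empty bins (a simplified
    version of the bookkeeping, not the paper's exact per-electrode rule)."""
    if not event_times_ms:
        return []
    counts = [0] * (int(max(event_times_ms) // bin_ms) + 1)
    for t in event_times_ms:
        counts[int(t // bin_ms)] += 1
    sizes, current = [], 0
    for c in counts + [0]:   # trailing empty bin flushes the final run
        if c > 0:
            current += c
        elif current:
            sizes.append(current)
            current = 0
    return sizes

print(avalanches([0, 1, 5, 20, 21, 22]))  # two avalanches of 3 events each
```

The distribution of these sizes is what gives avalanches their signature: in the original data it follows a power law.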
Especially interesting is that the avalanche is a form of correlated population activity distinct from oscillations, synchrony, and waves.
3) Inhibition of motor cortices via beta region activity in congruent action observation. Tkach et al (here) trained macaque monkeys to move a cursor to a target while recording LFPs in cortical motor regions (BA4 and BA6). They also recorded LFPs while the monkeys were passively watching themselves repeat the cursor movement. The lines on left show the mean LFP power in the 10 – 25 Hz (beta) range averaged over several electrodes. The bars on right show the integrated mean power, indicating that beta activity is consistently higher when the monkeys are passively watching:
The authors suggest that there is a continuum between inhibition and activation in the motor system. These results fit that contention: motor activity in this beta range is higher when the monkeys are passively watching the cursor, which they control during active trials, so they presumably have to more strongly inhibit the desire to move it.
Inspired by CalTech’s Question #16 for cognitive scientists: “What is a local field potential? What are its underlying causes? What can we learn from such data?”
Gray CM, Singer W. 1989 Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. PNAS link here.
Beggs JM, et al. 2003 Neuronal Avalanches in Neocortical Circuits. J Neuro PMID: 14657176
Neural tissue is nearly an order of magnitude more energetically taxing than other tissue types, so, evolutionarily, larger brains would only evolve if they conferred a selective advantage large enough to compensate for the high energy cost. On the molecular level, increases in brain size could be accounted for by various gene duplications and divergences. Here are two examples of specific genes that impact brain size:
1) ASPM codes for a protein that is necessary for the function of the mitotic spindle in neural progenitor cells. Ali and Meier sequenced ASPM exons of 28 primate species and correlated codon changes in each primate lineage with regional brain volumes. They found positive selection at 16 amino acid sites associated with cerebral cortex volume but not cerebellar nor whole brain volume. It makes sense that increases in brain size due to ASPM were localized to the cortex, because that’s where ASPM is primarily expressed. So, polymorphisms here could account for some regional differences in brain size.
2) DAB1 codes for a protein that is phosphorylated at a tyrosine residue following the binding of reelin and regulates cell positioning in the developing brain. As evidence for this, knock-out studies have shown that without DAB1, mice have misplaced neurons (ectopias) in various brain regions. Pramatarov et al used a mouse model that expressed 15% of wildtype DAB1 levels and found that the mice had reduced cerebellar volumes. So, polymorphisms at this locus could also account for regional differences in brain size.
Across species, animals with larger bodies tend to have larger brains on an absolute scale. However, as size increases brains become relatively smaller.
For example, Roth and Dicke point out that among larger mammals, humans have relatively the largest brains, at 2% of body mass, whereas shrews, the smallest mammals, have brains of ~10% of their body mass. They then chart the log brain mass vs. log body weight for 20 mammals:
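A chart like that is just a log-log regression, and its slope is the allometric exponent. A sketch with synthetic data follows; the 0.75 exponent and the prefactor are assumed round values in the general range reported for mammals, not Roth and Dicke's actual numbers:

```python
import math
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Synthetic data: brain mass proportional to body_mass**0.75. Both the
# exponent and the prefactor are assumed round numbers, not fitted values.
random.seed(0)
body = [10 ** random.uniform(0, 6) for _ in range(200)]    # grams
brain = [0.01 * b ** 0.75 for b in body]
slope = fit_slope([math.log10(b) for b in body],
                  [math.log10(br) for br in brain])
print(f"allometric exponent ~ {slope:.3f}")
```

Because the exponent is below 1, absolute brain mass grows with body mass while relative brain mass shrinks, which is exactly the human-versus-shrew pattern above.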
Most of the increase in brain size in larger animals is probably due to more need for motor / sensory integration, not intelligence. Instead, interspecies intelligence differences are mainly due to increases in the size of the neocortex, as indicated by the correlation between the neocortex to overall brain size ratio and degree of socialization among primates:
Given the huge metabolic cost of neural tissue, another covariate of brain size is metabolic capacity. In mammals, Isler et al show that 2.6% of the variance in brain mass across mammals can be explained by increases in metabolic turnover (indexed by BMR):
Mammals can meet the metabolic requirements of more brain tissue by taking in more energy or by reducing allocation to other functions like reproduction, locomotion, etc. One hypothesis for the solution of this “energy crisis” is a reduction in gut size along with an increase in food nutritional value and digestibility. Exactly how this trade-off works remains an open question as far as I know.
Inspired by CalTech’s Question #15 for cognitive scientists: “Why do some animals have larger brains than others? Why do animals with larger bodies have larger brains? How does brain size relate to metabolism or to longevity?”
The frontal lobe is predictably divided from the parietal lobe by the central sulcus. However, the morphology of the central sulcus varies between subjects, due in part to handedness and aging. The frontal lobes have traditionally been associated with executive functions.
The prefrontal cortex is a part of the frontal lobe. Specifically, it contains the Brodmann areas #8, #9, #10 (possibly important to human evolution), #11, #44, #45, #46 (the DLPFC!), and #47 (involved in speech syntax).
The orbitofrontal cortex (OFC) consists of Brodmann area #’s 10, 11, and 47. It has roles in coding reward value and predicting the expected reward value of an event. Specifically, lesions to the OFC tend to have the following effects:
1) Makes drug-related responses faster, more erratic, and less associated with learned (i.e., conditioned) cues. For example, Grakalic et al found that mice with OFC lesions began self-administering cocaine after fewer sessions and had higher steady states of drug responding.
2) Diminishes the ability to form associations between probabilistic stimuli and reward. For example, Rudebeck et al showed that in a stimulus–reinforcement experiment, the number of trials necessary for OFC-lesioned macaque monkeys to achieve the optimal response pattern was much greater than for control or ACC-lesioned monkeys:
3) Decreases the ability to regulate emotional states. For example, Reekie et al studied marmoset monkeys with orbitofrontal cortex (OFC) lesions and found that the lesions lead to prolonged autonomic arousal, persisting long after the conditioned stimulus and reinforcement have been removed, as measured by blood pressure:
The authors conclude that “…our immediate reactions to emotive stimuli are not always beneficial, and, therefore, an important element of emotion is the ability to appropriately adapt and rapidly modify emotional responses on a moment-by-moment basis. The contribution of the OFC to the regulation of positive emotional states is clearly demonstrated…”
These lesion studies paint the picture of an orbitofrontal cortex intricately involved with expected reward and its relation to emotion.
Inspired by CalTech’s Question #14 for cognitive scientists: “Anatomically, what are the frontal and pre-frontal cortical areas? What do you know about patients with lesions in the orbital-frontal cortex?”
Cykowski et al, 2008. The Central Sulcus: an Observer-Independent Characterization of Sulcal Landmarks and Depth Asymmetry. Cerebral Cortex doi:10.1093/cercor/bhm224
Reekie YL et al, 2008. Uncoupling of behavioral and autonomic responses after lesions of the primate orbitofrontal cortex. PNAS doi: 10.1073/pnas.0800417105
Here’s a quick rundown of transgenic animals: By deleting a particular gene, a particular promoter, or some other element of the genetic code, researchers can determine the role of that particular gene. Transgenic knock-outs can be for one or both of the gene alleles at a given locus.
These days the name of the game is time- or tissue-specific gene knock-outs. This allows for the deletion of your favorite gene only when you the experimenter manipulate it. In this manner, you can logically determine the function of genes whose universal knock-out might prove lethal or in some other way mess with normal development. The two most publicized uses of conditional knockouts are in optogenetics and tissue-specific imaging, and I’ll give recent examples of each:
1) Identifying neuron types in vivo: One way of using channelrhodopsin-2 is to express it only in certain neurons so that neuron type can be identified via extracellular recording when blue light is flashed, based on the presence or absence of a short latency action potential. Lima et al used this method with transgenic mice that expressed the gene Cre recombinase driven by the parvalbumin promoter, which is expressed in many interneurons.
These researchers then inserted into their mice a viral vector (AAV) containing a transcriptional insulator flanked by two loxP sites (loxP-STOP-loxP), positioned downstream of the cytomegalovirus promoter and upstream of the genes for channelrhodopsin-2 (ChR2) / yellow fluorescent protein. In cells that express Cre recombinase (i.e., the same cells that express parvalbumin) and that the virus successfully enters, Cre excises the STOP sequence and allows for the expression of ChR2.
The system allows one to determine whether a given cell is of a certain type (i.e., a parvalbumin-expressing interneuron) via extracellular recording alone, because light will cause a ChR2-dependent action potential in those cells. The reliability with which cells respond to light activation (LED) with short-latency APs as a result of optogenetics never ceases to amaze me:
2) Color timer mice: Livet et al’s brainbow study (here) has been cited 100+ times in the 2.5 years since it was published, so you’ve all heard of that form of tissue-specific imaging and now it’s boring. Building on that technique is Kanki et al’s strategy to image the differentiation of neural stem cells into adult neurons.
First, they inserted the gene for an orange fluorescent protein after the promoter for the intermediate filament protein Nestin, which is expressed specifically in neural stem cells. The gene will be present in all cells of the mice but will only be expressed in neural stem cells, and since the protein it makes is fluorescent the researchers can tell whether the cell is a neural stem cell. To test the effectiveness of the genetic implant they measured the correlation between an antibody for Nestin (alexa488) and the orange fluorescent protein (KOr) in dissociated neural cells, and found it to be linear and positive:
They then developed transgenic mice with green fluorescent protein driven by the promoter for doublecortin, a microtubule-associated protein expressed in immature neurons. Next, they crossed the two types of transgenic mice to get double tissue-specific transgenic mice. This allowed them to visualize the transition from neural stem cells to neurons between the subventricular zone and olfactory bulb (the rostral migratory stream) of adult mice:
As you can see, orange fluorescent protein (KOr) becomes less distinct along the path as green fluorescent protein (EGFP) becomes more distinct, as expected during neuronal differentiation.
The disadvantage of this technique is that you either have to breed the animals to have the transgene or insert it in vivo, which is problematic in targeting the right cells, getting the DNA vectors inside the cells, and getting the genes to incorporate into the genome properly. And if you know how to do the latter effectively, don’t bother with this stuff… go collect your Nobel already.
Inspired by CalTech’s Question #13 for cognitive scientists: “What is a transgenic animal? A transgenic knockout mouse? What are the advantages and disadvantages of such animals for neuroscientific studies? Please give one or two specific examples.”
Kanki H, et al. 2010 “Color Timer” mice: visualization of neuronal differentiation with fluorescent proteins. doi: 10.1186/1756-6606-3-5.
Lima SQ, Hromádka T, Znamenskiy P, Zador AM (2009) PINP: A New Method of Tagging Neuronal Populations for Identification during In Vivo Electrophysiological Recording. PLoS ONE 4(7): e6099. doi:10.1371/journal.pone.0006099
Central pattern generators (CPGs) are networks of neurons that endogenously produce rhythmic output, typically used in motor control. Scott Hooper’s accepted definition is that CPGs must produce rhythms involving “(1) two or more processes that interact such that each process sequentially increases and decreases, and (2) that, as a result of this interaction, the system repeatedly returns to its starting condition.” Developmentally, the indications are that CPG properties are innately established, and that new motor patterns are acquired by the animal while old capabilities are retained, such that the CPG becomes increasingly multifunctional. There are two major types of CPGs:
1) CPGs driven by an endogenous oscillator neuron, whose oscillations are induced via interactions of its own membrane currents. An example of this is the pyloric network of crustaceans, driven by an endogenous oscillator called the anterior burster neuron. If you isolate the endogenous oscillator neuron in vitro, it will still produce its typical oscillatory output, driven by its own membrane currents.
2) CPGs driven by the activity of a large network. For example, consider the leech heartbeat rhythm generator. In this CPG, motor neurons have two possible states on each side of the heart: one beats the heart from back to front, and one causes the heart to beat in unison. Around every 20 beats, the two sides of the heart switch states. Two of the neurons of the network possess both a hyperpolarization-activated inward current and a low-threshold persistent Na current. Neurons 3 and 4 are reciprocally inhibited by neurons 1 and 2; as a result, neurons 3 and 4 fire in antiphase, allowing only one of each state to be active at a given time. This second type of CPG is more common.
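The network-driven type can be sketched with a Matsuoka-style half-center model: two units with mutual inhibition plus slow self-adaptation, which produce antiphase bursting even though neither unit oscillates on its own. The parameters are generic textbook values, not fits to the leech circuit:

```python
def half_center(steps=4000, dt=0.005):
    """Matsuoka-style half-center oscillator: two units with mutual inhibition
    and slow self-adaptation. Parameters are generic illustrative values."""
    tau_u, tau_v, beta, w, drive = 0.25, 0.5, 2.5, 2.5, 1.0
    u1, u2, v1, v2 = 0.1, 0.0, 0.0, 0.0   # slight asymmetry breaks the tie
    y1_hist, y2_hist = [], []
    for _ in range(steps):
        y1, y2 = max(u1, 0.0), max(u2, 0.0)   # rectified firing rates
        u1 += dt / tau_u * (-u1 - beta * v1 - w * y2 + drive)
        u2 += dt / tau_u * (-u2 - beta * v2 - w * y1 + drive)
        v1 += dt / tau_v * (-v1 + y1)          # slow adaptation current
        v2 += dt / tau_v * (-v2 + y2)
        y1_hist.append(y1)
        y2_hist.append(y2)
    return y1_hist, y2_hist

y1, y2 = half_center()
print(f"unit 1 peak = {max(y1):.2f}, unit 2 peak = {max(y2):.2f}")
```

With mutual inhibition alone the symmetric state is unstable, and the adaptation variable forces the active unit to release its partner, so the two outputs alternate; this is the same logic as the reciprocal inhibition between neurons in the leech network.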
CPGs are believed to underlie many human functions. These include locomotion (with CPGs found in the thoracolumbar segments of the spinal cord), swallowing (with independent CPGs in the brainstem controlling the oral, pharyngeal, and esophageal phases, and requiring sensory feedback), respiration, ejaculation (neurons called the spinal generator for ejaculation in the spinal cord are capable of self-sustained rhythmic output to relevant motoneurons), and scratching.
Inspired by CalTech’s Question #12 for cognitive scientists: “What is a central pattern generator (CPG)?”
Lang IM. 2009 Brain stem control of the phases of swallowing. Dysphagia DOI: 10.1007/s00455-009-9211-6.
Hooper SL. 1999 Central Pattern Generators. Embryonic ELS. Pdf here.
Guertin PA, et al. 2009 Key central pattern generators of the spinal cord. Journal of Neuroscience Research DOI: 10.1002/jnr.22067
Buono PL, et al. 2002 A mathematical model of motorneuron dynamics in the heartbeat of the leech. Physica D: Nonlinear Phenomena doi:10.1016/j.physd.2003.08.003