Kechen Zhang (2014): How to compress sequential memory patterns into periodic oscillations: general reduction rules. Neural Computation, 26:1542-1599.
A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrieval in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented.
Download reprint PDF file (Zhang_2014_NC.pdf).
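As a rough illustration of the kind of network this paper analyzes (not of its reduction rules), the sketch below simulates a textbook delayed-synapse sequence network in the spirit of Kleinfeld (1986) and Sompolinsky and Kanter (1986), and tracks the overlaps between the network state and the stored patterns as retrieval hops along the sequence. All parameter values are illustrative.

```python
# Minimal sketch: sequential memory retrieval in an asymmetric network.
# A textbook delayed-synapse variant, NOT the paper's reduction method;
# the parameters (N, P, lam, alpha) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 4                                   # neurons, patterns in the sequence
xi = rng.choice([-1.0, 1.0], size=(P, N))       # random binary memory patterns

W_auto = xi.T @ xi / N                          # symmetric auto-associative term
W_next = xi[(np.arange(P) + 1) % P].T @ xi / N  # asymmetric term: pattern k -> k+1
lam, alpha = 2.0, 0.25                          # sequence drive, slow-trace rate

s = xi[0].copy()                                # start in the first pattern
s_slow = s.copy()                               # slow synaptic trace of the state
for t in range(40):
    s = np.sign(W_auto @ s + lam * (W_next @ s_slow))
    s_slow = (1 - alpha) * s_slow + alpha * s
    m = xi @ s / N                              # overlaps m_k with each pattern
    print(t, "retrieved pattern:", int(np.argmax(m)), "overlap:", round(m.max(), 2))
```

The slow trace lags the state, so the asymmetric term keeps pulling the network toward the next pattern; the state dwells on each memory for a few steps and then cycles through the sequence.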
Christopher DiMattina and Kechen Zhang (2013): Adaptive stimulus optimization for sensory systems neuroscience. Frontiers in Neural Circuits.
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
Download reprint PDF file (DiMattina_2013_Front.pdf).
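The closed-loop firing-rate optimization reviewed above can be caricatured in a few lines: probe an unknown neuron, estimate the local gradient of its rate, and climb while respecting a stimulus-energy constraint. The softplus LN "neuron," the finite-difference probes, and the unit-energy sphere below are all illustrative assumptions, not the specific algorithms in the review.

```python
# Sketch of adaptive on-line stimulus optimization: hill-climb the firing
# rate of a hidden "neuron" while keeping the stimulus on a fixed-energy
# sphere. The softplus LN model stands in for the real system.
import numpy as np

rng = np.random.default_rng(1)
D = 20
w_true = rng.normal(size=D)
w_true /= np.linalg.norm(w_true)

def response(s):                        # unknown noisy neuron (hypothetical)
    return np.log1p(np.exp(5 * w_true @ s)) + 0.02 * rng.normal()

s = rng.normal(size=D); s /= np.linalg.norm(s)
eps, eta = 0.1, 0.2
for trial in range(100):
    g = np.zeros(D)                     # finite-difference gradient estimate,
    for i in range(D):                  # one probe pair per stimulus dimension
        e = np.zeros(D); e[i] = eps
        g[i] = (response(s + e) - response(s - e)) / (2 * eps)
    s = s + eta * g
    s /= np.linalg.norm(s)              # project back onto the energy sphere

print("alignment with optimal stimulus:", round(w_true @ s, 3))  # -> near 1
```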
James J. Knierim and Kechen Zhang (2012): Attractor dynamics of spatially correlated neural activity in the limbic system. Annual Review of Neuroscience, 35:267-285.
Attractor networks are a popular computational construct used to model different brain systems. These networks allow elegant computations that are thought to represent a number of aspects of brain function. Although there is good reason to believe that the brain displays attractor dynamics, it has proven difficult to test experimentally whether any particular attractor architecture resides in any particular brain circuit. We review models and experimental evidence for three systems in the rat brain that are presumed to be components of the rat's navigational and memory system. Head-direction cells have been modeled as a ring attractor, grid cells as a plane attractor, and place cells both as a plane attractor and as a point attractor. Whereas the models have proven to be extremely useful conceptual tools, the experimental evidence in their favor, although intriguing, is still mostly circumstantial.
Download reprint PDF file (Knierim_2012_AnnRev.pdf).
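A toy version of the ring-attractor idea discussed for head-direction cells: cosine-tuned excitation plus uniform inhibition lets a localized "bump" of activity persist after the cue is gone. This is purely illustrative and not a model from the review; the connectivity constants and binary-threshold units are arbitrary simplifications.

```python
# Toy ring attractor: local excitation with global inhibition sustains a
# bump of activity at the cued direction after the cue is removed.
import numpy as np

N = 120
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dtheta = theta[:, None] - theta[None, :]
W = (2.0 / N) * (np.cos(dtheta) - 0.4)       # cosine excitation, uniform inhibition

r = (np.cos(theta - np.pi / 3) > 0.95).astype(float)  # brief cue at 60 degrees
for _ in range(30):                           # cue removed; recurrence takes over
    r = (W @ r > 0).astype(float)             # binary-threshold rate update

bump = theta[r > 0]
center = np.arctan2(np.sin(bump).sum(), np.cos(bump).sum())
print("bump persists around (deg):", round(np.degrees(center), 1))  # ~60
```

Because the connectivity depends only on angular differences, every direction supports an equivalent bump: a ring of attractor states, as required for a head-direction representation.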
John B. Issa and Kechen Zhang (2012): Universal conditions for exact path integration in neural systems. Proceedings of the National Academy of Sciences USA, 109:6716-6721.
Animals are capable of navigation even in the absence of prominent landmark cues. This behavioral demonstration of path integration is supported by the discovery of place cells and other neurons that show path-invariant response properties even in the dark. That is, under suitable conditions, the activity of these neurons depends primarily on the spatial location of the animal regardless of which trajectory it followed to reach that position. Although many models of path integration have been proposed, no known single theoretical framework can formally accommodate their diverse computational mechanisms. Here we derive a set of necessary and sufficient conditions for a general class of systems that performs exact path integration. These conditions include multiplicative modulation by velocity inputs and a path-invariance condition that limits the structure of connections in the underlying neural network. In particular, for a linear system to satisfy the path-invariance condition, the effective synaptic weight matrices under different velocities must commute. Our theory subsumes several existing exact path integration models as special cases. We use entorhinal grid cells as an example to demonstrate that our framework can provide useful guidance for finding unexpected solutions to the path integration problem. This framework may help constrain future experimental and modeling studies pertaining to a broad class of neural integration systems.
Download reprint PDF file (Issa_2012_PNAS.pdf).
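The commuting-weight-matrix condition is easy to check numerically in a toy linear path integrator dx/dt = (v_x A_x + v_y A_y) x: when the velocity-gated generators commute, the final state depends only on the net displacement, not on the path taken. The matrices below are arbitrary examples chosen only to commute (or not); they are not a model from the paper.

```python
# Path-invariance check: with commuting generators, "east then north"
# and "north then east" give the same final state; otherwise they differ.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
K = rng.normal(size=(6, 6))
Ax, Ay = K, K @ K            # two functions of one matrix always commute
Bx, By = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))  # generically do not

x0 = rng.normal(size=6)
def integrate(Gx, Gy, legs):
    x = x0.copy()
    for vx, vy, dt in legs:  # piecewise-constant velocity segments
        x = expm(dt * (vx * Gx + vy * Gy)) @ x
    return x

east_then_north = [(0.2, 0.0, 1.0), (0.0, 0.2, 1.0)]
north_then_east = [(0.0, 0.2, 1.0), (0.2, 0.0, 1.0)]  # same net displacement

print("commuting generators, path difference:",
      np.linalg.norm(integrate(Ax, Ay, east_then_north)
                     - integrate(Ax, Ay, north_then_east)))  # ~ machine epsilon
print("non-commuting generators, path difference:",
      np.linalg.norm(integrate(Bx, By, east_then_north)
                     - integrate(Bx, By, north_then_east)))  # clearly nonzero
```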
Adam C. Welday, I. Gary Shlifer, Matthew L. Bloom, Kechen Zhang, and Hugh T. Blair (2011): Cosine directional tuning of theta cell burst frequencies: Evidence for spatial coding by oscillatory interference. Journal of Neuroscience 31: 16157-16176.
The rodent septohippocampal system contains "theta cells," which burst rhythmically at 4-12 Hz, but the functional significance of this rhythm remains poorly understood (Buzsaki, 2006). Theta rhythm commonly modulates the spike trains of spatially tuned neurons such as place (O'Keefe and Dostrovsky, 1971), head direction (Tsanov et al., 2011a), grid (Hafting et al., 2005), and border cells (Savelli et al., 2008; Solstad et al., 2008). An "oscillatory interference" theory has hypothesized that some of these spatially tuned neurons may derive their positional firing from phase interference among theta oscillations with frequencies that are modulated by the speed and direction of translational movements (Burgess et al., 2005, 2007). This theory is supported by studies reporting modulation of theta frequency by movement speed (Rivas et al., 1996; Geisler et al., 2007; Jeewajee et al., 2008a), but modulation of theta frequency by movement direction has never been observed. Here we recorded theta cells from hippocampus, medial septum, and anterior thalamus of freely behaving rats. Theta cell burst frequencies varied as the cosine of the rat's movement direction, and this directional tuning was influenced by landmark cues, in agreement with predictions of the oscillatory interference theory. Computer simulations and mathematical analysis demonstrated how a postsynaptic neuron can detect location-dependent synchrony among inputs from such theta cells, and thereby mimic the spatial tuning properties of place, grid, or border cells. These results suggest that theta cells may serve a high-level computational function by encoding a basis set of oscillatory signals that interfere with one another to synthesize spatial memory representations.
Download reprint PDF file (Welday_2011_JNs.pdf).
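The core oscillatory-interference relationship can be verified in a few lines: if a theta cell's burst frequency is f0 + beta * speed * cos(heading - preferred), then its phase relative to a constant-f0 reference oscillator tracks the distance traveled along the preferred direction. The random-walk trajectory and all constants below are made up for illustration.

```python
# Cosine directional tuning of burst frequency turns relative theta phase
# into a position code along the cell's preferred axis (here pref = 0).
import numpy as np

rng = np.random.default_rng(3)
dt, f0, beta, pref = 0.01, 8.0, 0.3, 0.0            # s, Hz, cycles/cm, radians

heading = np.cumsum(0.1 * rng.normal(size=5000))    # smoothly wandering heading
speed = 10.0 + 2.0 * rng.normal(size=5000).clip(-2, 2)  # cm/s, stays positive

f_cell = f0 + beta * speed * np.cos(heading - pref) # cosine directional tuning
phase_rel = 2 * np.pi * np.cumsum(f_cell - f0) * dt # phase re: f0 reference
x = np.cumsum(speed * np.cos(heading)) * dt         # position along pref axis

print("max |phase/(2*pi*beta) - x|:",
      np.abs(phase_rel / (2 * np.pi * beta) - x).max())
# ~0 (up to floating-point error): relative phase encodes position
```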
Joseph D. Monaco, James J. Knierim, and Kechen Zhang (2011): Sensory feedback, error correction, and remapping in a multiple oscillator model of place cell activity. Frontiers in Computational Neuroscience, 5:39. doi: 10.3389/fncom.2011.00039.
Mammals navigate by integrating self-motion signals ("path integration") and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6-10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of "partial remapping" responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.
Download reprint PDF file (Monaco_2011_Frontiers.pdf).
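The stabilizing role of cue-driven feedback can be caricatured very simply: phase noise accumulates during path integration, and a partial reset toward stored phases at each cue encounter keeps the error bounded. The numbers below are arbitrary, and the caricature omits the model's actual phase-code dynamics.

```python
# Toy cue-driven phase correction: without feedback the phase error
# random-walks; with periodic partial resets it stays bounded.
import numpy as np

rng = np.random.default_rng(4)
n_osc, steps, sigma, k = 12, 5000, 0.02, 0.5  # oscillators, noise, reset gain

for feedback in (False, True):
    err = np.zeros(n_osc)                     # phase error of each oscillator
    peak = 0.0
    for t in range(steps):
        err += sigma * rng.normal(size=n_osc) # idiothetic noise accumulates
        if feedback and t % 200 == 0:         # periodic cue encounters
            err *= (1 - k)                    # partial reset toward stored phase
        peak = max(peak, np.sqrt((err ** 2).mean()))
    print("feedback" if feedback else "no feedback",
          "- peak RMS phase error:", round(peak, 2))
```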
Christopher DiMattina and Kechen Zhang (2011): Active data collection for efficient estimation and comparison of nonlinear neural models. Neural Computation 23: 2242-2288.
The stimulus-response relationship of many sensory neurons is nonlinear, but fully quantifying this relationship by a complex nonlinear model may require too much data to be experimentally tractable. Here we present a theoretical study of a general two-stage computational method that may help significantly reduce the number of stimuli needed to obtain an accurate mathematical description of nonlinear neural responses. Our method of active data collection first adaptively generates stimuli that are optimal for estimating the parameters of competing nonlinear models, and then uses these estimates to generate stimuli on-line which are optimal for discriminating these models. We applied our method to simple hierarchical circuit models, including nonlinear networks built upon the spatio-temporal or spectral-temporal receptive fields, and confirmed that collecting data using our two-stage adaptive algorithm was far more effective for estimating and comparing competing nonlinear sensory processing models than standard non-adaptive methods using random stimuli.
Download reprint PDF file (DiMattina_2011_NC.pdf).
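The model-comparison stage of this approach can be caricatured as follows: at each trial, present the stimulus on which two fitted candidate models disagree most, then update the evidence with the observed response. The two LN-style candidates, the Gaussian noise model, and the max-disagreement criterion below are simplifications of the paper's information-theoretic machinery.

```python
# Caricature of adaptive model discrimination: pick the most diagnostic
# stimulus from a pool, observe the true (noisy) response, update log odds.
import numpy as np

rng = np.random.default_rng(5)
D, sigma = 10, 0.5
w1, w2 = rng.normal(size=D), rng.normal(size=D)
m1 = lambda s: np.maximum(0.0, s @ w1)        # candidate A: rectified linear
m2 = lambda s: 0.1 * (s @ w2) ** 2            # candidate B: squared linear
truth = m1                                    # model A generates the data

pool = rng.normal(size=(500, D))              # candidate stimulus pool
log_odds = 0.0                                # log P(A)/P(B), Gaussian noise
for trial in range(20):
    i = int(np.argmax(np.abs(m1(pool) - m2(pool))))
    s = pool[i]
    pool = np.delete(pool, i, axis=0)         # don't reuse the same stimulus
    r = truth(s) + sigma * rng.normal()       # run the "experiment"
    log_odds += ((r - m2(s)) ** 2 - (r - m1(s)) ** 2) / (2 * sigma ** 2)

print("log odds favoring the true model:", round(log_odds, 1))  # grows rapidly
```

Stimuli where the candidates agree contribute almost nothing to the log odds, which is why choosing maximally diagnostic stimuli accelerates model comparison relative to random stimuli.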
Mauktik Kulkarni, Kechen Zhang and Alfredo Kirkwood (2011): Single-cell persistent activity in anterodorsal thalamus. Neuroscience Letters 498: 179-184.
The anterodorsal nucleus of the thalamus contains a high percentage of head-direction cells whose activities are correlated with an animal's directional heading in the horizontal plane. The firing of head-direction cells could involve self-sustaining reverberating activity in a recurrent network, but the thalamus by itself lacks strong excitatory recurrent synaptic connections to sustain tonic reverberating activity. Here we examined whether a single thalamic neuron could sustain its own activity without synaptic input by recording from individual neurons from anterodorsal thalamus in brain slices with synaptic blockers. We found that the rebound firing induced by hyperpolarizing pulses often decayed slowly so that a thalamic neuron could keep on firing for many minutes after stimulation. The hyperpolarization-induced persistent firing rate was graded under repeated current injections, and could be enhanced by serotonin. The effect of depolarizing pulses was much weaker and only slightly accelerated the decay of the hyperpolarization-induced persistent firing. Our finding provides the first direct evidence for single-cell persistent activity in the thalamus, supporting the notion that cellular mechanisms at the slow time scale of minutes might potentially contribute to the operations of the head-direction system.
Download reprint PDF file (Kulkarni_2011_Neurosci_Lett.pdf).
Eric T. Carlson, Russell J. Rasquinha, Kechen Zhang and Charles E. Connor (2011): A sparse object coding scheme in area V4. Current Biology 21: 288-293.
Sparse coding has long been recognized as a primary goal of image transformation in the visual system. Sparse coding in early visual cortex is achieved by abstracting local oriented spatial frequencies and by excitatory/inhibitory surround modulation. Object responses are thought to be sparse at subsequent processing stages, but neural mechanisms for higher-level sparsification are not known. Here, convergent results from macaque area V4 neural recording and simulated V4 populations trained on natural object contours suggest that sparse coding is achieved in midlevel visual cortex by emphasizing representation of acute convex and concave curvature. We studied 165 V4 neurons with a random, adaptive stimulus strategy to minimize bias and explore an unlimited range of contour shapes. V4 responses were strongly weighted toward contours containing acute convex or concave curvature. In contrast, the tuning distribution in nonsparse simulated V4 populations was strongly weighted toward low curvature. But as sparseness constraints increased, the simulated tuning distribution shifted progressively toward more acute convex and concave curvature, matching the neural recording results. These findings indicate a sparse object coding scheme in midlevel visual cortex based on uncommon but diagnostic regions of acute contour curvature.
Download reprint PDF file (Carlson2011CurrBiol.pdf).
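One standard way to quantify sparseness of the kind discussed above is the Treves-Rolls index; it is one common measure, not necessarily the one used in the paper. Below it is applied to two hypothetical response distributions to show that heavy-tailed tuning (a few stimuli dominating, as with rare acute-curvature features) scores as much sparser.

```python
# Treves-Rolls sparseness a = (mean r)^2 / mean(r^2): a -> 1 for dense
# (uniform) responses, a -> 0 for sparse ones. Response data are synthetic.
import numpy as np

rng = np.random.default_rng(6)

def treves_rolls(r):
    return r.mean() ** 2 / (r ** 2).mean()

dense = rng.uniform(0.5, 1.0, size=5000)       # most stimuli drive the cell
sparse = rng.exponential(1.0, size=5000) ** 3  # a few stimuli dominate

print("dense tuning:  a =", round(treves_rolls(dense), 2))   # near 1
print("sparse tuning: a =", round(treves_rolls(sparse), 2))  # near 0
```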
Christopher DiMattina and Kechen Zhang (2010): How to modify a neural network gradually without changing its input-output functionality. Neural Computation 22: 1-47.
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
Download reprint PDF file (nc-confound.pdf).
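For the exponential-gain case, the functional equivalence is easy to verify directly: shifting the hidden thresholds and rescaling the corresponding output weights compensate exactly, giving a continuous family of structurally different but functionally identical networks. The tiny network below is an arbitrary example, not one from the paper.

```python
# With exponential hidden-unit gains, a threshold shift delta is exactly
# canceled by scaling the output weights by exp(delta), so the perturbed
# network computes the same input-output function.
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(3, 5))            # input -> hidden weights
theta = rng.normal(size=3)             # hidden thresholds
a = rng.normal(size=3)                 # hidden -> output weights

def net(x, W, theta, a):               # y = sum_i a_i * exp(w_i . x - theta_i)
    return a @ np.exp(W @ x - theta[:, None])

delta = 0.7                            # move gradually along this direction
theta2 = theta + delta                 # shift every hidden threshold...
a2 = a * np.exp(delta)                 # ...and rescale output weights to cancel

X = rng.normal(size=(5, 1000))         # random test inputs
print(np.allclose(net(X, W, theta, a), net(X, W, theta2, a2)))  # True
```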
Christopher DiMattina and Kechen Zhang (2008): How optimal stimuli for sensory neurons are constrained by network architecture. Neural Computation 20: 668-708.
Identifying the optimal stimuli for a sensory neuron is often a difficult process involving trial and error. By analyzing the relationship between stimuli and responses in feedforward and stable recurrent neural network models, we find that the stimulus yielding the maximum firing rate response always lies on the topological boundary of the collection of all allowable stimuli, provided that individual neurons have increasing input-output relations or gain functions and that the synaptic connections are convergent between layers with nondegenerate weight matrices. This result suggests that in neurophysiological experiments under these conditions, only stimuli on the boundary need to be tested in order to maximize the response, thereby potentially reducing the number of trials needed for finding the most effective stimuli. Even when the gain functions allow firing rate cutoff or saturation, a peak still cannot exist in the stimulus-response relation in the sense that moving away from the optimum stimulus always reduces the response. We further demonstrate that the condition for nondegenerate synaptic connections also implies that proper stimuli can independently perturb the activities of all neurons in the same layer. One example of this type of manipulation is changing the activity of a single neuron in a given processing layer while keeping that of all others constant. Such stimulus perturbations might help experimentally isolate the interactions of selected neurons within a network.
Download reprint PDF file (nc-boundary.pdf).
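The boundary result is easy to probe numerically: for a small monotone network with an invertible weight matrix, a dense search over a box of allowable stimuli finds the peak response on the box's surface, never in the interior. The network below is an arbitrary instance satisfying the stated conditions (increasing gains, nondegenerate weights), not a model from the paper.

```python
# Grid-search illustration of the boundary theorem: a feedforward network
# with strictly increasing gains (tanh) and invertible weights has no
# interior firing-rate peak over the stimulus box [-1, 1]^2.
import numpy as np

W = np.array([[1.0, 0.4], [-0.3, 1.2]])      # invertible hidden weights
v = np.array([1.0, 0.6])                     # output weights
rate = lambda s: v @ np.tanh(W @ s)          # monotone-gain network response

g = np.linspace(-1.0, 1.0, 201)              # stimulus box [-1, 1]^2
S = np.array(np.meshgrid(g, g)).reshape(2, -1)
best = S[:, np.argmax(rate(S))]
print("optimal stimulus:", best)
print("on the boundary:", np.abs(best).max() > 0.999)  # True
```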
Hugh T. Blair, Kishan Gupta, and Kechen Zhang (2008): Conversion of a phase-coded to a rate-coded position signal by a three-stage model of theta cells, grid cells, and place cells. Hippocampus 18:1239-1255.
As a rat navigates through a familiar environment, its position in space is encoded by firing rates of place cells and grid cells. Oscillatory interference models propose that this positional firing rate code is derived from a phase code, which stores the rat's position as a pattern of phase angles between velocity-modulated theta oscillations. Here we describe a three-stage network model, which formalizes the computational steps that are necessary for converting phase-coded position signals (represented by theta oscillations) into rate-coded position signals (represented by grid cells and place cells). The first stage of the model proposes that the phase-coded position signal is stored and updated by a bank of ring attractors, like those that have previously been hypothesized to perform angular path integration in the head-direction cell system. We show analytically how ring attractors can serve as central pattern generators for producing velocity-modulated theta oscillations, and we propose that such ring attractors may reside in subcortical areas where hippocampal theta rhythm is known to originate. In the second stage of the model, grid fields are formed by oscillatory interference between theta cells residing in different (but not the same) ring attractors. The model's third stage assumes that hippocampal neurons generate Gaussian place fields by computing weighted sums of inputs from a basis set of many grid fields. Here we show that under this assumption, the spatial frequency spectrum of the Gaussian place field defines the vertex spacings of grid cells that must provide input to the place cell. This analysis generates a testable prediction that grid cells with large vertex spacings should send projections to the entire hippocampus, whereas grid cells with smaller vertex spacings may project more selectively to the dorsal hippocampus, where place fields are smallest.
Download reprint PDF file (hipp-3stage.pdf).
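The third-stage claim, that a Gaussian place field's spatial frequency spectrum dictates which grid spacings must feed it, can be illustrated in a one-dimensional simplification: regress a Gaussian bump onto periodic "grid" basis functions and look at which spacings receive weight. The 1D reduction and all constants below are simplifications, not the paper's 2D analysis.

```python
# 1D toy of the grid-to-place stage: the Fourier spectrum of a Gaussian
# place field decays with spatial frequency, so coarse grid spacings carry
# most of the weight while spacings much finer than the field contribute
# almost nothing.
import numpy as np

period = 100.0                                       # domain length, cm
x = np.linspace(-period / 2, period / 2, 2000, endpoint=False)
place = np.exp(-x ** 2 / (2 * 10.0 ** 2))            # 10 cm Gaussian place field

ns = np.arange(1, 7)
spacings = period / ns                               # grid vertex spacings, cm
G = np.vstack([np.ones_like(x)]                      # constant + cosine "grids"
              + [np.cos(2 * np.pi * x / L) for L in spacings])

w, *_ = np.linalg.lstsq(G.T, place, rcond=None)      # grid-to-place weights
print("max reconstruction error:", np.abs(w @ G - place).max())
for L, wi in zip(spacings, w[1:]):
    print(f"spacing {L:5.1f} cm -> weight {wi:+.4f}")  # decays at fine spacings
```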
Hugh T. Blair, Adam C. Welday, and Kechen Zhang (2007): Scale-invariant memory representations emerge from moiré interference between grid fields that produce theta oscillations: A computational model. Journal of Neuroscience 27: 3211-3229.
The dorsomedial entorhinal cortex (dMEC) of the rat brain contains a remarkable population of spatially tuned neurons called grid cells (Hafting et al., 2005). Each grid cell fires selectively at multiple spatial locations, which are geometrically arranged to form a hexagonal lattice that tiles the surface of the rat's environment. Here, we show that grid fields can combine with one another to form moiré interference patterns, referred to as "moiré grids," that replicate the hexagonal lattice over an infinite range of spatial scales. We propose that dMEC grids are actually moiré grids formed by interference between much smaller "theta grids," which are hypothesized to be the primary source of movement-related theta rhythm in the rat brain. The formation of moiré grids from theta grids obeys two scaling laws, referred to as the length and rotational scaling rules. The length scaling rule appears to account for firing properties of grid cells in layer II of dMEC, whereas the rotational scaling rule can better explain properties of layer III grid cells. Moiré grids built from theta grids can be combined to form yet larger grids and can also be used as basis functions to construct memory representations of spatial locations (place cells) or visual images. Memory representations built from moiré grids are automatically endowed with size invariance by the scaling properties of the moiré grids. We therefore propose that moiré interference between grid fields may constitute an important principle of neural computation underlying the construction of scale-invariant memory representations.
Download reprint PDF file (jns-moire.pdf).
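The length-scaling idea has a familiar one-dimensional analog: summing two gratings with slightly different spacings produces a beat whose period is magnified by the ratio of the spacings to their difference. The check below verifies this via the product-to-sum identity; the spacings are arbitrary, and the paper treats the full 2D hexagonal case.

```python
# 1D analog of the moire length-scaling rule: gratings with nearby
# spacings lam1, lam2 beat at the much longer scale
# lam_m = lam1 * lam2 / (lam2 - lam1).
import numpy as np

lam1, lam2 = 1.0, 1.1
x = np.linspace(0, 50, 20000)
sum_of_gratings = np.cos(2 * np.pi * x / lam1) + np.cos(2 * np.pi * x / lam2)
carrier  = np.cos(np.pi * x * (1 / lam1 + 1 / lam2))
envelope = 2 * np.cos(np.pi * x * (1 / lam1 - 1 / lam2))
print(np.allclose(sum_of_gratings, carrier * envelope))  # True (trig identity)

lam_m = lam1 * lam2 / (lam2 - lam1)
print("moire wavelength:", lam_m, "=", lam_m / lam1, "x the grating scale")
# a 10% spacing difference magnifies the pattern by a factor of 11
```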
Kechen Zhang and Terrence J. Sejnowski (2000): A universal scaling law between gray matter and white matter of cerebral cortex. Proceedings of the National Academy of Sciences USA 97: 5621-5626.
Neocortex, a new and rapidly evolving brain structure in mammals, has a similar layered architecture in species over a wide range of brain sizes. Larger brains require longer fibers to communicate between distant cortical areas; the volume of the white matter that contains long axons increases disproportionally faster than the volume of the gray matter that contains cell bodies, dendrites, and axons for local information processing, according to a power law. The theoretical analysis presented here shows how this remarkable anatomical regularity might arise naturally as a consequence of the local uniformity of the cortex and the requirement for compact arrangement of long axonal fibers. The predicted power law with an exponent of 4/3 minus a small correction for the thickness of the cortex accurately accounts for empirical data spanning several orders of magnitude in brain sizes for various mammalian species including human and non-human primates.
Download reprint PDF file (pnas-brains.pdf).
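How such a power law would be tested is straightforward: regress log white-matter volume on log gray-matter volume and read off the slope. The data below are synthetic (generated from the 4/3 law plus noise) purely to show the procedure; see the paper for the actual comparative data.

```python
# Log-log regression recovers the exponent of a power law W ~ G^(4/3).
# SYNTHETIC data for illustration only, in arbitrary units.
import numpy as np

rng = np.random.default_rng(8)
G = np.logspace(2, 6, 40)                      # gray matter volumes
W = 0.01 * G ** (4.0 / 3.0) * np.exp(0.1 * rng.normal(size=G.size))

slope, intercept = np.polyfit(np.log(G), np.log(W), 1)
print("fitted exponent:", round(slope, 3))     # ~1.333, the predicted 4/3
```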
Kechen Zhang and Terrence J. Sejnowski (1999): A theory of geometric constraints on neural activity for natural three-dimensional movement. Journal of Neuroscience 19: 3122-3145.
Although the orientation of an arm in space or the static view of an object may be represented by a population of neurons in complex ways, how these variables change with movement often follows simple linear rules, reflecting the underlying geometric constraints in the physical world. A theoretical analysis is presented for how such constraints affect the average firing rates of sensory and motor neurons during natural movements with low degrees of freedom, such as a limb movement and rigid object motion. When applied to non-rigid reaching arm movements, the linear theory accounts for cosine directional tuning with linear speed modulation, predicts a curl-free spatial distribution of preferred directions, and also explains why the instantaneous motion of the hand can be recovered from the neural population activity. For three-dimensional motion of a rigid object, the theory predicts that, to a first approximation, the response of a sensory neuron should have a preferred translational direction and a preferred rotation axis in space, both with cosine tuning functions modulated multiplicatively by speed and angular speed, respectively. Some known tuning properties of motion-sensitive neurons follow as special cases. Acceleration tuning and nonlinear speed modulation are considered in an extension of the linear theory. This general approach provides a principled method to derive mechanism-insensitive neuronal properties by exploiting the inherently low dimensionality of natural movements.
Download reprint PDF file (jns-object.pdf).
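The simplest instance of the linear theory is worth writing out: if the firing rate depends linearly on movement velocity, cosine direction tuning with multiplicative speed modulation is immediate. The notation below is generic, not the paper's.

```latex
% Linear dependence of rate on velocity already yields cosine tuning with
% linear speed modulation (b: baseline, k: the cell's preferred-direction
% vector, v: velocity with speed s and angle theta measured from k):
\[
  r(\mathbf{v}) \;=\; b + \mathbf{k}\cdot\mathbf{v}
            \;=\; b + \lVert\mathbf{k}\rVert\, s \cos\theta .
\]
```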
Kechen Zhang and Terrence J. Sejnowski (1999): Neuronal tuning: To sharpen or broaden? Neural Computation 11: 75-84.
Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function, the probability distribution of spikes, and allowing some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
Download reprint PDF file (nc-tuning.pdf).
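The dimensionality rule can be checked numerically in the simplest setting: for radially symmetric Gaussian tuning with Poisson spiking and fixed neuron density, total Fisher information scales as sigma^(d-2), so doubling the tuning width multiplies the information by 2^(d-2): better for d = 1, neutral for d = 2, worse for d = 3. The lattice layout and constants below are illustrative choices, not the paper's general derivation.

```python
# Numeric check of the sigma^(d-2) scaling of Fisher information with
# Gaussian tuning curves and Poisson spike counts on a fixed lattice.
import numpy as np
from itertools import product

def fisher(sigma, d, spacing=0.25, half=4.0):
    grid = np.arange(-half, half + 1e-9, spacing)
    c = np.array(list(product(grid, repeat=d)))   # tuning-curve centers
    x0 = np.full(d, 0.13)                         # probe point (off-lattice)
    dist2 = ((x0 - c) ** 2).sum(axis=1)
    f = np.exp(-dist2 / (2 * sigma ** 2))         # mean spike counts (peak 1)
    df = f * (x0[0] - c[:, 0]) / sigma ** 2       # derivative d f / d x_1
    return (df ** 2 / np.maximum(f, 1e-300)).sum()  # Poisson Fisher info on x_1

for d in (1, 2, 3):
    ratio = fisher(1.0, d) / fisher(0.5, d)
    print(f"d={d}: J(2*sigma)/J(sigma) = {ratio:.2f}"
          f"  (predicted {2.0 ** (d - 2):.2f})")
```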
Alexandre Pouget, Kechen Zhang, Sophie Deneve and Peter E. Latham (1998): Statistically efficient estimation using population coding. Neural Computation 10: 373-401.
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
Download reprint PDF file (nc-ml.pdf).
Link to Alex Pouget's homepage.
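A toy version of the clean-up idea: start from a noisy hill of population activity, and let a recurrent smooth-square-renormalize iteration settle onto a clean hill whose peak is the estimate, still in coarse-code format. The paper's network and its efficiency analysis are more refined; the kernel width, gain, and the explicit renormalization below (standing in for divisive normalization) are illustrative choices.

```python
# Toy recurrent "clean-up" network: a noisy Poisson population response is
# iteratively smoothed by lateral weights, passed through an expansive
# nonlinearity, and renormalized, converging to a smooth hill of activity.
import numpy as np

rng = np.random.default_rng(9)
N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
target = 1.2                                        # true encoded direction (rad)

f = 5 * np.exp((np.cos(theta - target) - 1) / 0.3)  # tuning-curve mean counts
r = rng.poisson(f).astype(float)                    # one noisy population response

W = np.exp((np.cos(theta[:, None] - theta[None, :]) - 1) / 0.3)  # lateral weights
o = r.copy()
for _ in range(20):
    u = W @ o                                       # lateral smoothing
    o = u ** 2                                      # expansive nonlinearity
    o *= r.sum() / o.sum()                          # keep total activity fixed

pv = lambda a: np.arctan2(a @ np.sin(theta), a @ np.cos(theta))
print("true direction:       ", target)
print("raw population vector:", round(pv(r), 3))
print("cleaned-up estimate:  ", round(pv(o), 3))    # smooth hill, stable peak
```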