Action potentials, or spikes, are the main unit of information transfer in our brains. It can be helpful to analyze spikes without considering the underlying membrane mechanics.

Single Neurons

A neuron’s receptive field is a description of the stimuli that generate a response from the neuron, a response meaning a change in firing rate. A tuning curve is a graph of firing rate as a function of some stimulus. For time-dependent fields, spatio-temporal receptive fields can be graphed.
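As a concrete illustration, here is a minimal sketch of a tuning curve for a hypothetical neuron whose firing rate falls off as a Gaussian around a preferred stimulus value (the Gaussian shape and all numbers are assumptions for illustration, not data from these notes):

% Sketch of a tuning curve: firing rate as a function of one stimulus parameter.
orientation = -90:90;                        % stimulus parameter (degrees)
preferred   = 20;  width = 25;  peak = 40;   % hypothetical neuron
rate = peak * exp(-(orientation - preferred).^2 / (2*width^2));
plot(orientation, rate);
xlabel('stimulus orientation (deg)');  ylabel('firing rate (Hz)');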

Sensory neurons respond to particular physical changes impacting an organism.

Center-surround is a common feature in which a neuron responds best when its preferred stimulus is surrounded by a very different stimulus (think of a white ball surrounded by black). This may be due to lateral inhibition, in which a neuron is inhibited by neighboring neurons with similar but not identical receptive fields.

Receptive fields are most useful for neurons that are close to the sensory neurons (e.g. in a primary sensory cortex) and can be tested with simpler features. It’s important to note that cortical neurons typically receive more input from their neighbors than from sensory neurons. Anesthesia reduces this recurrent input, at the cost of possibly oversimplifying the picture.

A neuron’s responses need to be measured across many trials to achieve any useful accuracy. The pooled responses are then used to produce a PSTH (peristimulus time histogram).

  1. One simple way would be to average the response of each trial. While this gives an accurate mean rate, information about the relative timing of spikes is essentially lost.
  2. Another method bins time and counts spikes per bin across trials.
  3. One could also apply a Gaussian filter to each spike, producing a smooth rate estimate (see the sketch after this list).
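A minimal sketch of the binning and Gaussian-filter approaches, assuming spike_times is a cell array with one vector of spike times (in seconds) per trial; the bin width, kernel width, and trial duration are arbitrary choices, and the Gaussian is applied to the binned rate, which approximates filtering each spike individually:

% Binned PSTH (method 2).
dt      = 0.010;                             % 10 ms bins (assumed)
t_edges = 0:dt:2;                            % 2 s of data (assumed)
counts  = zeros(1, numel(t_edges) - 1);
for k = 1:numel(spike_times)
    counts = counts + histcounts(spike_times{k}, t_edges);
end
psth = counts / (numel(spike_times) * dt);   % trial-averaged rate in Hz

% Gaussian smoothing (method 3): convolve the binned rate with a kernel.
sigma  = 0.020;                              % 20 ms kernel width (assumed)
tk     = -4*sigma:dt:4*sigma;
kernel = exp(-tk.^2 / (2*sigma^2));
kernel = kernel / sum(kernel);               % normalize so rates are preserved
psth_smooth = conv(psth, kernel, 'same');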

A system whose response is equal to a weighted sum of its inputs is a linear filter (a bit like a kernel in convolution). A linear-nonlinear model passes the output of a linear filter through a non-linear function.

Of course, no neuron has a linear f-I curve. That’s why linear-nonlinear models can be quite useful.
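A minimal linear-nonlinear sketch, assuming a white-noise stimulus, a made-up exponential-style temporal filter, and a simple rectifying nonlinearity (none of these particular choices come from the text):

% Linear stage: the drive is a weighted sum (convolution) of the stimulus
% with a temporal filter.
dt       = 0.001;
t        = 0:dt:10;
stimulus = randn(size(t));                   % white-noise stimulus (assumed)
tau      = 0.02;
tf       = 0:dt:0.2;
filt     = (tf/tau) .* exp(-tf/tau);         % example filter shape (assumed)
filt     = filt / sum(filt);

linear_drive = conv(stimulus, filt, 'full');
linear_drive = linear_drive(1:numel(t));     % keep the causal part

% Nonlinear stage: map the linear drive onto a non-negative firing rate.
rate = 50 * max(linear_drive, 0);            % half-wave rectifier, in Hz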

A spike-triggered average (STA) measures the average stimulus in a window around the times of spikes, to find what stimulus features the spikes are correlated with.

It’s important to note that any correlations in the input (stimulus) data will produce correlations in the output. The theoretical fix is to use a purely random (uncorrelated) stimulus, but this makes the chance of a significant response almost nonexistent. There is always a trade-off.
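A minimal spike-triggered-average sketch, assuming stim is a white-noise stimulus row vector sampled at dt and spikes is a logical vector of the same length marking spike bins; the 200 ms window is an arbitrary choice:

% Average the stimulus in a window preceding each spike.
dt      = 0.001;
n_lags  = round(0.2 / dt);               % look 200 ms back from each spike
spk_idx = find(spikes);
spk_idx = spk_idx(spk_idx > n_lags);     % drop spikes too close to the start

sta = zeros(1, n_lags);
for k = 1:numel(spk_idx)
    sta = sta + stim(spk_idx(k)-n_lags+1 : spk_idx(k));
end
sta = sta / numel(spk_idx);              % average stimulus preceding a spike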

More Statistics

Note: in vivo means in a living animal; in vitro means “in glassware,” i.e. with laboratory equipment.

A difficulty in spike-train statistics is that real neurons always receive random fluctuations in their input due to the irregularity of surrounding neurons. Due to this, understanding the irregularity of spike trains is very important.

Second-order statistics are measures of data, like variance and standard deviation, which depend on the square of values. The coefficient of variation (CV) of any set of values is the standard deviation divided by the mean.

The CV is a little deceptive, because variation does not imply irregularity. The CV2, which compares successive interspike intervals, is more commonly used to measure irregularity. Of course, depending on the neuron’s behavior even this may not be an accurate measure: bursting neurons produce sets of closely spaced spikes with longer intervals in between those sets.

The Fano factor is a related measure computed over a specific time window: the variance of the spike count in that window divided by its mean.
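A minimal sketch of these measures, assuming spike_times is a vector of spike times in seconds; the CV2 expression used here is one common way of comparing successive intervals, and the 100 ms counting window is arbitrary:

isi = diff(spike_times);                          % interspike intervals

cv  = std(isi) / mean(isi);                       % coefficient of variation

% CV2: compare each ISI with the next one, then average.
cv2 = mean(2 * abs(diff(isi)) ./ (isi(1:end-1) + isi(2:end)));

% Fano factor: variance over mean of spike counts in fixed windows.
T      = 0.1;                                     % counting window (assumed)
edges  = 0:T:spike_times(end);
counts = histcounts(spike_times, edges);
fano   = var(counts) / mean(counts);              % equals 1 for a Poisson process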

The Poisson process is a random point process that can be used to generate artificial spike trains, which are useful as a baseline for comparison. The process is defined by the probability of an emission in any small time interval $\delta t$: $P[\text{spike in } \delta t] = r\,\delta t$, where $r$ is the firing rate. The probability of $n$ spikes in a window of length $T$ is $P[n] = \frac{(rT)^n}{n!}e^{-rT}$, where $rT$ is the expectation value for $n$. This defines the Poisson distribution. Note that because the process defines the probability of a single spike, $r\,\delta t$ should be less than one, i.e. $\delta t \ll 1/r$. This distribution can be used to calculate an ISI distribution: the probability of the next spike falling between $t$ and $t + \delta t$ is the probability of 0 spikes over $t$ times the probability of 1 spike in $\delta t$, $P = e^{-rt}\, r\,\delta t$.

A Poisson process of rate $r$ can be simulated easily by choosing a small time-bin $\delta t$, such that the probability of more than one spike per bin is negligible, and then using a line like,

spikes = rand(size(tvec)) < rate*dt;   % logical spike train: each bin is a spike with probability rate*dt

A more general method would randomly draw each ISI from the exponential ISI distribution derived above and step along the time vector by that amount.
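A minimal sketch of that method, drawing exponentially distributed ISIs by inverse-CDF sampling (the rate and duration are made-up values):

rate  = 20;                                  % spikes per second (assumed)
t_max = 10;                                  % seconds to simulate (assumed)
n_max = ceil(2 * rate * t_max);              % generous upper bound on spike count

isis        = -log(rand(1, n_max)) / rate;   % exponential ISIs via inverse CDF
spike_times = cumsum(isis);                  % step along the time axis
spike_times = spike_times(spike_times <= t_max);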

Dummy data is a set of simulated data which can be analyzed in the same manner as the data from the real system being studied. Using dummy data is a good way to judge whether observed variation is meaningful or not.

Receiver-Operating Characteristic

An ROC curve is a plot of the probability of true positives against false positives, used to judge the reliability of a signal. The key point is that even two very different conditions can produce overlapping response distributions, which lowers an observer’s ability to discriminate between them.

One measure is the discriminability index $d' = \dfrac{\Delta\mu}{\sqrt{\tfrac{1}{2}(\sigma_1^2 + \sigma_2^2)}}$, where $\sigma_1$ and $\sigma_2$ are the standard deviations of each distribution and $\Delta\mu$ is the difference between their means.
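A minimal ROC and $d'$ sketch for two Gaussian response distributions (all numbers are made up; the Gaussian tail probability is computed with erfc):

mu_noise  = 5;  sigma_noise  = 2;            % responses without the stimulus
mu_signal = 8;  sigma_signal = 2;            % responses with the stimulus

thresholds = linspace(-5, 20, 500);
% Hit rate: P(response > threshold | signal); false-alarm rate: same for noise.
p_hit = 0.5 * erfc((thresholds - mu_signal) / (sigma_signal * sqrt(2)));
p_fa  = 0.5 * erfc((thresholds - mu_noise)  / (sigma_noise  * sqrt(2)));

plot(p_fa, p_hit);
xlabel('false positives');  ylabel('true positives');

d_prime = (mu_signal - mu_noise) / sqrt(0.5 * (sigma_signal^2 + sigma_noise^2));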

Picking a Threshold

The optimal position for a threshold for stimulus detection is at the intersection of the two probability distributions; since the slope of the ROC curve at a given threshold equals the ratio of the two probability densities there, this is the point where the gradient is 1 on the ROC curve.

If the prior probabilities of the two conditions are not equal, the optimal threshold shifts accordingly; the same happens when the values (the costs and payoffs of the different outcomes) change.

Note that ROC analysis can be used to gain information about whether a probability distribution is bimodal or not. Bimodal distributions are particularly informative.

Recollection is the ability to recall a stimulus, usually by connecting it to the context in which it was encountered.

Recognition is simply knowing that a stimulus has been encountered before.

A Z-score is another way of measuring a value in terms of standard deviations from the mean, $z = (x - \mu)/\sigma$. Plotting ROC data in Z-score coordinates makes it easier to differentiate between Gaussian and non-Gaussian underlying distributions.