While a single neuron is quite powerful, the real magic begins once multiple neurons begin to influence each other.

Synapses are connections between two cells, which can be chemical (excitatory and inhibitory) or electrical

Electrical synapses contain a direct connection called a gap junction, which has small pores through which ions can flow. These gap junctions can be modeled as a resistor or as a rectifier (which allows current to flow in only one direction)
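A minimal sketch of those two ways of modeling a gap junction; the function name, conductance, and voltage values are illustrative assumptions, not from the notes:

```python
def gap_junction_current(V_a, V_b, g_gap=1e-9, rectifying=False):
    """Current (in amperes) flowing into cell b through the gap junction."""
    I = g_gap * (V_a - V_b)     # resistor: current proportional to the voltage difference
    if rectifying:
        I = max(I, 0.0)         # rectifier: current permitted in only one direction
    return I

print(gap_junction_current(-0.050, -0.065))                    # depolarized neighbor drives current in
print(gap_junction_current(-0.070, -0.065, rectifying=True))   # blocked direction, returns 0
```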

Chemical synapses are the most common type in mammalian brains and consist of a small gap called the synaptic cleft. Communication between the two neurons occurs through neurotransmitters and is generally unidirectional, from the axon terminal of one neuron to the dendritic spine, dendritic shaft, or soma of another

  • A neurotransmitter is any chemical that diffuses across the synaptic cleft and enables communication
  • The presynaptic neuron is the neuron that releases neurotransmitters when it produces an action potential
  • The postsynaptic neuron is the neuron that responds to neurotransmitters following the presynaptic action potential

When an action potential occurs, the spike is transmitted along the axon to all of its axon terminals. This causes N-type (neural) and P/Q-type (Purkinje) calcium channels to open, which activates proteins that cause vesicles to release neurotransmitters. There are many other intermediate steps involved too. There are a few important consequences of this complexity: first, there is a short-term history dependence of the synaptic efficacy, and second, there is a degree of randomness/noise due to the nature of chemical processes.

Synapses that depolarize the postsynaptic cell are excitatory, while synapses that hyperpolarize it or otherwise reduce depolarization are inhibitory

Dale’s Principle states that the neurotransmitters released by a neuron at one of its synapses are the same as those it releases at its other synapses. This has few exceptions, but it is sometimes incorrectly rephrased as saying that a cell provides only excitatory or only inhibitory input (and not both). That rephrasing is incorrect, because the effect of a neurotransmitter can depend on the particular receptor and on the concentration of other neurotransmitters.

Glutamate is the most dominant excitatory neurotransmitter in mammals
AMPA receptors react to glutamate with a rapid excitatory response
NMDA receptors generally react to glutamate with a slower excitatory response
γ-Aminobutyric acid (GABA) is the dominant inhibitory neurotransmitter in mammals. Its receptors cause chloride channels to open

The effect of synaptic transmission can be simulated via a synaptic conductance in the postsynaptic cell. Each type of synaptic conductance must be treated separately. The general form of the resulting synaptic current is I_syn(t) = G_syn(t) (E_syn − V_m(t)), where G_syn(t) is the synaptic conductance, E_syn is the reversal potential for that synapse type, and V_m is the postsynaptic membrane potential

The simplest case is a step increase at the time of a presynaptic spike followed by an exponential decay: G_syn → G_syn + ΔG at each presynaptic spike, and between spikes τ_syn dG_syn/dt = −G_syn. If several spikes arrive, the conductance changes from each spike simply add

This simplification is nice because the effects of each conductance are linear, so they can be added together and treated the same. However, the instantaneous rise is unrealistic.

A more complex equation includes a decay time τ_decay and a rise time τ_rise: G_syn(t) = ΔG_max B (exp(−t/τ_decay) − exp(−t/τ_rise)) for time t after a spike. The factor B is included to ensure the peak height of the conductance change is ΔG_max (some math can derive its value)
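A minimal sketch of both conductance waveforms driven by a short presynaptic spike train; the time constants, peak conductance, and spike times are illustrative assumptions:

```python
import numpy as np

dt = 0.1e-3                                  # time step, 0.1 ms
t = np.arange(0.0, 0.2, dt)                  # 200 ms of simulation
spike_times = np.array([0.02, 0.05, 0.08])   # presynaptic spike times (s), assumed

tau_decay = 5e-3                             # decay time constant (s)
tau_rise = 1e-3                              # rise time constant (s)
dg_max = 1e-9                                # peak conductance change (S)

# 1) Single exponential: step increase dg_max at each spike, then decay.
g_exp = np.zeros_like(t)
for i in range(1, len(t)):
    g_exp[i] = g_exp[i - 1] * (1.0 - dt / tau_decay)     # Euler step of dg/dt = -g/tau
    if np.any(np.abs(spike_times - t[i]) < dt / 2):
        g_exp[i] += dg_max                               # instantaneous rise at a spike

# 2) Difference of exponentials: finite rise time, peak normalized to dg_max.
t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)) * np.log(tau_decay / tau_rise)
B = 1.0 / (np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise))
g_diff = np.zeros_like(t)
for ts in spike_times:
    s = t - ts
    kernel = np.where(s >= 0, np.exp(-s / tau_decay) - np.exp(-s / tau_rise), 0.0)
    g_diff += dg_max * B * kernel                        # linear summation across spikes

# Either conductance can then be multiplied by (E_syn - V_m) to get the synaptic current.
```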

Graded release is neurotransmitter release at a rate that can increase or decrease according to the membrane potential

Many neurons (like in the retina) can release neurotransmitters in a graded, voltage-dependent manner in the absence of a spike

Some equations were provided for this but they are pretty difficult to understand
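Since those equations are not reproduced here, below is only a minimal sketch of the general idea, assuming a sigmoidal dependence of release on presynaptic voltage (the half-activation voltage and slope are made-up parameters, not the notes' equations):

```python
import numpy as np

def graded_activation(V_pre_mV, V_half_mV=-40.0, k_mV=5.0):
    """Fraction of maximal transmitter release as a function of presynaptic voltage."""
    return 1.0 / (1.0 + np.exp(-(V_pre_mV - V_half_mV) / k_mV))

# The synaptic conductance could then relax toward this voltage-dependent target:
#   tau_syn * dg/dt = g_max * graded_activation(V_pre) - g
for V in (-70.0, -50.0, -40.0, -20.0):
    print(V, round(graded_activation(V), 3))   # release increases smoothly with depolarization
```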

The term dynamical synapse refers to the rapid changes in the effective strength of a synaptic connection following spikes which recover on short timescales (also known as short-term plasticity)

Short-term synaptic depression is a temporary reduction in synaptic strength
Short-term synaptic facilitation is a temporary increase in synaptic strength

When a certain number of vesicles release their neurotransmitters, there are fewer vesicles available for an action potential that follows shortly after. Mathematically this produces negative feedback, since it limits the rate of synaptic transmission to the rate at which vesicles can be produced

On the other hand, synaptic facilitation creates positive feedback. This happens because vesicles may be bound by some but not all of the proteins necessary for release, and a spike increases the probability of this happening for remaining vesicles. When initial release probability is low, synaptic facilitation can dominate. If initial release probability is very high, there isn’t as much “room for improvement.”

After a spike, the change in synaptic conductance can be modeled as ΔG_syn = ΔG_max p_0 F D, where F is synaptic facilitation, D is synaptic depression, and p_0 is the baseline release probability

Between spikes, both variables relax back toward 1: τ_D dD/dt = 1 − D and τ_F dF/dt = 1 − F
After a spike, D → D(1 − p_0 F) and F → F + f_F (F_max − F)

The first update reduces D by the fraction of emptied vesicles (a fraction p_0 F of the available pool is released). Note that this equation assumes averaging over many synapses; if synapses are simulated independently, then the binomial distribution must be sampled for each synapse to produce an actual integer number of released vesicles. The second update increases the facilitation variable towards its maximum F_max according to the facilitation factor f_F. In practice, F_max ≤ 1/p_0 so that the release probability p_0 F never exceeds 1.
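A minimal sketch of these depression/facilitation updates applied to a regular spike train; the time constants, p_0, f_F, and the 20 Hz train are illustrative assumptions:

```python
import numpy as np

tau_D, tau_F = 0.3, 0.1        # recovery time constants (s), assumed
p0 = 0.2                       # baseline release probability, assumed
f_F, F_max = 0.5, 1.0 / p0     # facilitation step and ceiling (F_max <= 1/p0)

spike_times = np.arange(0.0, 1.0, 0.05)   # 20 Hz presynaptic train

D, F = 1.0, 1.0
last_t = 0.0
for t in spike_times:
    dt = t - last_t
    # Exponential relaxation of D and F back toward 1 between spikes
    D = 1.0 + (D - 1.0) * np.exp(-dt / tau_D)
    F = 1.0 + (F - 1.0) * np.exp(-dt / tau_F)
    release = p0 * F * D           # relative conductance change for this spike
    D *= (1.0 - p0 * F)            # depression: fraction of vesicles released
    F += f_F * (F_max - F)         # facilitation toward its maximum
    last_t = t
    print(f"t = {t:.2f} s, relative strength = {release:.3f}")
```

With this low p_0 the facilitation term dominates at first, consistent with the "room for improvement" argument above.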

The Connectivity Matrix

A connectivity matrix is a square matrix indicating the presence, direction, and sometimes strength of all connections within a group of neurons

These models generally do not take physical space into account but might include delays between spike generation and synaptic transmission, or from the EPSC to the resulting current flow. Of course, multi-compartmental models do include more realistic representations of cell dimensions

For a circuit containing a small number of neurons, each entry of the matrix could represent multiple connections between pre- and postsynaptic cells. In larger circuits, each entry might represent a functional unit, which is a group of cells with similar responses to stimuli. On the largest scale, fMRI can be used to correlate activity between entire regions of the brain

Because of differences in reversal potentials, two matrices would be needed to deal with inhibitory connections, at least when simulating synaptic input as a conductance. In simpler models, a single matrix could be used for current (negative would be inhibitory). Dale’s Principle is followed as long as any given row is either non-negative or non-positive.
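A minimal sketch of this bookkeeping, assuming the convention that W[i, j] is the strength of the connection from presynaptic neuron i to postsynaptic neuron j (the convention and example weights are assumptions, not from the notes):

```python
import numpy as np

def obeys_dale(W):
    """Each row must be entirely non-negative or entirely non-positive."""
    return all((row >= 0).all() or (row <= 0).all() for row in W)

def split_excitatory_inhibitory(W):
    """Return (W_exc, W_inh) as non-negative conductance strengths for separate handling."""
    return np.clip(W, 0, None), np.clip(-W, 0, None)

W = np.array([[0.0,  0.5,  0.2],     # excitatory neuron: all outputs >= 0
              [0.0,  0.0,  0.8],     # excitatory neuron
              [-0.4, -0.6, 0.0]])    # inhibitory neuron: all outputs <= 0
print(obeys_dale(W))                 # True
W_exc, W_inh = split_excitatory_inhibitory(W)
```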

In matrices of single neurons, diagonal terms (autapses) are generally 0. However, for matrices of groups of similar cells, self-connections can be the strongest terms.

Making sense of connectivity matrices can be quite difficult. Reordering the neurons (simultaneously swapping the corresponding rows and columns) is a valid operation and can help.

  • In sparse matrices, most connections are absent
  • In matrices for globally feedforward networks, neurons can be ordered such that there is no connection from a later neuron to an earlier one. These can be transformed to be upper-triangular (see the sketch after this list)
  • In matrices for locally feedforward networks, neurons are mostly feed forward but some loop back
  • In recurrent networks, neurons receiving input from a cell can influence that cell in return, either directly or via other cells
  • Disconnected sets of neurons neither receive input nor provide input to other sets
  • Clusters of neurons are sets with significantly stronger or greater proportions of connections
  • Local connectivity is when connections are spatially structured, such that neurons are strongly connected to “closer” neurons according to some space
  • In a small world network, any neuron can reach any other neuron via a small number of connections
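A minimal sketch of the globally feedforward case: if a topological ordering of the neurons exists, permuting rows and columns by that ordering makes the matrix strictly upper-triangular (the small example matrix and the W[i, j] = pre-to-post convention are assumptions):

```python
import numpy as np
from graphlib import TopologicalSorter

def feedforward_order(W):
    n = W.shape[0]
    # graphlib wants, for each node, the set of its predecessors (presynaptic neurons)
    preds = {j: {i for i in range(n) if W[i, j] != 0} for j in range(n)}
    return list(TopologicalSorter(preds).static_order())  # raises CycleError if recurrent

W = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 0, 0]])
order = feedforward_order(W)                 # e.g. [1, 0, 2]
W_reordered = W[np.ix_(order, order)]        # permute rows and columns together
print(np.allclose(np.tril(W_reordered), 0))  # True: strictly upper-triangular
```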

These connections have some interesting patterns. Even cells whose dendrites and axons intertwine connect with only 10-20% likelihood. This probability increases with the number of other cells that they connect with in common. Additionally, two cells with similar stimulus responses are more likely to be connected than is expected by chance.

“connections preferentially strengthen and remain intact between pairs of neurons that are active at the same time”

Motifs are patterns of connection within a small group of cells. There are three possible motifs for two cells and sixteen patterns for three cells. Motifs are not found in nature with equal frequency; one can use computational methods to simulate the expected distribution of motifs and compare it to experimental results.
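A minimal sketch of such a comparison for the two-cell motifs, using a random binary matrix and an independent-connection null model (the connection probability and matrix size are arbitrary assumptions):

```python
import numpy as np
from itertools import combinations

def two_cell_motif_counts(W):
    """Count unconnected, one-way, and reciprocal pairs in a binary connectivity matrix."""
    counts = {"none": 0, "one_way": 0, "reciprocal": 0}
    for i, j in combinations(range(W.shape[0]), 2):
        k = int(W[i, j] != 0) + int(W[j, i] != 0)
        counts[("none", "one_way", "reciprocal")[k]] += 1
    return counts

rng = np.random.default_rng(0)
W = (rng.random((50, 50)) < 0.15).astype(int)   # random directed connections
np.fill_diagonal(W, 0)

counts = two_cell_motif_counts(W)
n_pairs = sum(counts.values())
p = W.sum() / (W.size - W.shape[0])             # empirical connection probability
expected = {"none": (1 - p) ** 2, "one_way": 2 * p * (1 - p), "reciprocal": p ** 2}
for motif in counts:                             # observed vs. expected under independence
    print(motif, counts[motif], round(expected[motif] * n_pairs, 1))
```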

Bistability

Once neurons are connected to each other, they can exhibit more complex patterns of activity. Even neurons that cannot oscillate on their own can oscillate when connected to others. Additionally, a system can exhibit bistability. One example is a kind of “flip-flop.”

Multistability is the existence of more than one stable pattern of activity or stable level of activity in a system in response to a set of inputs
A quasistable system is almost stable but slowly changes from that state
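A minimal sketch of the flip-flop idea: an illustrative firing-rate model (not a specific model from the notes) of two units with background drive and mutual inhibition, where a brief pulse switches which stable state the circuit occupies; all parameter values are assumptions:

```python
import numpy as np

def f(x):
    return np.clip(x, 0.0, 100.0)               # threshold-linear rate function (Hz)

dt, tau = 1e-3, 0.02                            # time step and rate time constant (s)
w_inh, bias = 2.0, 50.0                         # mutual inhibition strength, background drive
r = np.array([0.0, 60.0])                       # start with unit 1 active

T = int(1.5 / dt)
pulse = np.zeros((T, 2))
pulse[int(0.5 / dt):int(0.6 / dt), 0] = 80.0    # brief excitatory pulse to unit 0

for step in range(T):
    inp = bias + pulse[step] - w_inh * r[::-1]  # each unit is inhibited by the other
    r = r + dt / tau * (-r + f(inp))
    if step % int(0.25 / dt) == 0:
        print(f"t = {step * dt:.2f} s, rates = {np.round(r, 1)}")
# Before the pulse unit 1 stays active; afterwards unit 0 stays active instead,
# so the transient input has switched the circuit between its two stable states.
```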

A widely appreciated example of bistability is visual bistability, in which the same image can be perceived in two different ways (the Necker cube, for example).

A central pattern generator is a neural circuit that can autonomously produce a pattern of activity, usually in the context of muscle contraction. The most important goal of a central pattern generator is to ensure muscle contraction in the correct order.

External input can switch the generator on or off, change modes, or adjust features of the pattern, but the generator generally does not rely on external input. Two important characteristics are frequency of oscillation and the duty cycle of each neuron (relative time spent active versus quiescent)

In a half-center oscillator, two neurons oscillate in antiphase with each other, alternating in producing contractions

There are two mechanisms for switching,

  1. Switching by release occurs when the activity of the active neuron decreases, so the inhibitory input to the inactive neuron decays away and the inactive neuron can take over (see the sketch after this list)
  2. Switching by escape depends on a regenerative process within the inhibited neuron, like deinactivation of an excitatory current
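A minimal sketch of release-type switching in a half-center oscillator, using an illustrative firing-rate model in which a slow adaptation variable stands in for the biophysical mechanisms described above (all parameter values are assumptions):

```python
import numpy as np

def f(x):
    return np.clip(x, 0.0, 100.0)        # threshold-linear rate function (Hz)

dt = 1e-3
tau_r, tau_a = 0.02, 0.5                 # fast rate and slow adaptation time constants (s)
drive, w_inh, b_adapt = 60.0, 2.0, 2.0   # background drive, cross-inhibition, adaptation gain

r = np.array([60.0, 0.0])                # unit 0 starts active
a = np.zeros(2)                          # adaptation variables
active = []                              # index of the more active unit at each step

for step in range(int(6.0 / dt)):        # 6 s of simulation
    inp = drive - w_inh * r[::-1] - a    # cross-inhibition plus slow adaptation
    r = r + dt / tau_r * (-r + f(inp))
    a = a + dt / tau_a * (-a + b_adapt * r)
    active.append(int(r[1] > r[0]))

# As the active unit adapts, its inhibition of the silent unit decays away
# (switching by release) and the two units alternate in antiphase.
switches = np.count_nonzero(np.diff(active))
print("number of switches in 6 s:", switches)
```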

In a tri-phasic rhythm, three neurons activate in a reliable cyclic order. This can be produced with inhibitory connections alone or via post-inhibitory rebound. The best-studied example is the pyloric (stomach) rhythm in crabs, where similar circuits trigger important stomach contractions despite large variation in conductances between individuals.

A phase response curve indicates how an oscillator responds to a perturbation, like a brief synaptic input. The response is quantified by the amount of advancement or delay of ensuing oscillations.

This curve can help determine whether an oscillator can lock to a periodic input (this is called entrainment). Only sections of the phase response curve with negative gradient (sections where the phase delay increases as the input arrives later in the cycle) lead to stable entrainment. This makes sense: if the oscillator has accumulated too much phase delay, the next input arrives at an earlier phase, where it produces less delay, pulling the oscillation back toward the entrained phase.
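A minimal sketch of this stability argument, iterating the phase at which periodic pulses land on an oscillator with an assumed sinusoidal phase response curve (the PRC shape, amplitude, and both periods are illustrative):

```python
import numpy as np

def prc(phi):
    """Phase advance (in cycles) produced by an input arriving at phase phi in [0, 1)."""
    return 0.1 * np.sin(2 * np.pi * phi)      # assumed sinusoidal PRC

T0 = 1.0            # intrinsic period of the oscillator (arbitrary units)
T_in = 1.05         # period of the forcing input, slightly longer than T0

def next_phase(phi):
    # Phase at which the next pulse lands: free run for T_in plus the shift from this pulse.
    return (phi + prc(phi) + T_in / T0) % 1.0

phi = 0.3
for n in range(60):
    phi = next_phase(phi)
print("entrained phase:", round(phi, 3))

# The iteration settles only on the fixed point where the PRC slope is negative;
# the other fixed point (positive slope) repels nearby phases.
```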

When multiple oscillators are coupled, one can analyze the phase relationship between them and find the frequency at which the coupled system settles. This analysis generally only works when the phase difference is zero or half a period (in-phase or anti-phase)