Dynamical Systems
Dynamical systems, systems in which variables change in time, have been the subject of much of this book. In practice this usually means systems of nonlinear differential equations, which are extremely powerful for modeling diverse behaviors and are essential for describing biophysical processes. The interesting thing about these systems is how small changes can produce large qualitative effects.
To review, a fixed point is a state of a system at which none of the variables change in time
Stable fixed points resist small perturbations and correspond to stable states or equilibria. Unstable fixed points are tipping points; if the state changes slightly, the system moves away from that point.
A bifurcation happens when a continuous change in a parameter leads, at some point, to a qualitative change in the system's behavior (generally because fixed points appear, disappear, or change stability)
A bifurcation diagram can be produced by finding the stable and unstable fixed points for each value of a parameter and plotting them against that parameter
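As a concrete illustration (not from the text), here is a minimal Python sketch of a bifurcation diagram for the toy one-variable system dx/dt = r·x − x³: for r < 0 the only fixed point is x = 0 and it is stable, while for r > 0 it becomes unstable and two new stable fixed points appear at ±√r.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy one-variable system dx/dt = r*x - x**3 (a pitchfork bifurcation).
r_values = np.linspace(-1.0, 1.0, 400)
stable_r, stable_x = [], []
unstable_r, unstable_x = [], []

for r in r_values:
    # x = 0 is always a fixed point; it is stable for r < 0 and unstable for r > 0
    (stable_r if r < 0 else unstable_r).append(r)
    (stable_x if r < 0 else unstable_x).append(0.0)
    if r > 0:
        for x_star in (np.sqrt(r), -np.sqrt(r)):   # two new stable branches
            stable_r.append(r)
            stable_x.append(x_star)

plt.plot(stable_r, stable_x, 'k.', markersize=2, label='stable')
plt.plot(unstable_r, unstable_x, 'r.', markersize=2, label='unstable')
plt.xlabel('parameter r'); plt.ylabel('fixed points x*'); plt.legend()
plt.show()
```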
Oscillations are impossible in a single-variable system, because they would require the rate of change dx/dt to be both positive and negative at the same value of x. However, oscillations are technically possible with a mapping, such as the reset rule in the leaky integrate-and-fire (LIF) model.
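For example, the LIF neuron is a single differential equation whose spike-and-reset rule acts as the mapping, allowing periodic firing. Here is a minimal sketch with generic textbook-style parameter values (my choices, not values from the text).

```python
import numpy as np
import matplotlib.pyplot as plt

# Leaky integrate-and-fire neuron: dV/dt = (E_L - V + R*I)/tau, with a reset
# whenever V crosses threshold. The reset is the 'mapping' that permits oscillation.
E_L, V_th, V_reset = -70e-3, -50e-3, -80e-3    # leak, threshold, reset potentials (V)
tau, R, I = 10e-3, 100e6, 0.3e-9               # time constant (s), resistance (ohm), current (A)
dt, T = 1e-4, 0.2
t = np.arange(0, T, dt)

V = np.full_like(t, E_L)
for i in range(1, len(t)):
    V[i] = V[i-1] + dt / tau * (E_L - V[i-1] + R * I)
    if V[i] >= V_th:        # threshold crossing: apply the reset mapping
        V[i] = V_reset

plt.plot(t, V)
plt.xlabel('time (s)'); plt.ylabel('membrane potential (V)')
plt.show()
```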
Two-variable models are very valuable, since they can exhibit complex behavior but are still relatively easy to analyze. The FitzHugh-Nagumo model, for example, simplifies Hodgkin-Huxley by combining the slower gating variables (potassium activation and sodium inactivation) into a single recovery variable and letting the fast sodium activation follow the membrane potential instantaneously.
The general form for a two-variable system is
$$\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y)$$
where f and g can be arbitrary (generally nonlinear) functions of both variables.
Viewing the fixed points is a bit more complicated here. A nullcline is a curve along which one variable's rate of change is zero, showing how that variable's equilibrium value depends on the other variables. A phase plane is a plot that can show the position of fixed points, nullclines, and trajectories.
A nullcline shows the values of the variables at which one variable's rate of change is 0. A plot with a nullcline for each variable shows the fixed points at the nullclines' intersections.
A phase plane also has arrows to show the vector field of the system. These do not rigorously establish whether a fixed point is stable or unstable, but they provide a good indication. A closed orbit in the vector field implies oscillations.
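Here is a minimal sketch of such a phase-plane plot for an arbitrary toy system (dx/dt = x − x³ − y, dy/dt = 0.1(x − 0.5y), chosen only for illustration): the arrows show the vector field, the zero contours of each rate of change are the nullclines, and their intersections are the fixed points.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy two-variable system, used only to illustrate nullclines and the vector field.
def dxdt(x, y):
    return x - x**3 - y

def dydt(x, y):
    return 0.1 * (x - 0.5 * y)

xs = np.linspace(-2, 2, 30)
ys = np.linspace(-2, 2, 30)
X, Y = np.meshgrid(xs, ys)

plt.quiver(X, Y, dxdt(X, Y), dydt(X, Y), alpha=0.5)        # vector field
plt.contour(X, Y, dxdt(X, Y), levels=[0], colors='b')      # x-nullcline (dx/dt = 0)
plt.contour(X, Y, dydt(X, Y), levels=[0], colors='r')      # y-nullcline (dy/dt = 0)
plt.xlabel('x'); plt.ylabel('y')
plt.title('Phase plane: vector field and nullclines')
plt.show()
```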
An inhibition-stabilized regime is a set of parameters in which a circuit has enough excitatory feedback to destabilize low firing rates but enough compensatory inhibitory feedback to stabilize the excitatory rate at an intermediate level. There are a few requirements for this:
- The excitatory feedback must be sufficiently strong such that the excitatory cells can’t fire stably at a low rate
- The inhibitory unit’s nullcline must cross the excitatory nullcline at a point that is otherwise unstable
- The inhibitory feedback must be sufficient to stabilize the fixed point, otherwise the system will likely oscillate
At face value, this regime produces some counterintuitive effects: because the inhibitory unit's rate changes in the direction opposite to its direct input, an increased inhibitory input to the inhibitory unit can correspond to a higher inhibitory firing rate (the "paradoxical effect").
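A minimal two-unit rate-model sketch of this paradoxical effect, with illustrative weights and time constants of my own choosing (W_EE > 1, so the excitatory unit would be unstable on its own, but feedback inhibition stabilizes the circuit): after extra inhibitory input is delivered to the inhibitory unit, its steady-state rate goes up rather than down.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative inhibition-stabilized parameters (not from the text).
W_EE, W_EI, W_IE, W_II = 2.0, 2.5, 2.5, 1.0
tau_E, tau_I = 0.01, 0.01                      # seconds
dt, T = 1e-4, 2.0
t = np.arange(0, T, dt)

I_E = 10.0 * np.ones_like(t)                   # constant external drive to the E unit
I_I = 2.0 * np.ones_like(t)
I_I[t >= 1.0] -= 2.0                           # extra inhibitory input to the I unit after t = 1 s

relu = lambda x: np.maximum(x, 0.0)            # threshold-linear rate function
r_E, r_I = np.zeros_like(t), np.zeros_like(t)
for i in range(1, len(t)):
    in_E = W_EE * r_E[i-1] - W_EI * r_I[i-1] + I_E[i-1]
    in_I = W_IE * r_E[i-1] - W_II * r_I[i-1] + I_I[i-1]
    r_E[i] = r_E[i-1] + dt / tau_E * (-r_E[i-1] + relu(in_E))
    r_I[i] = r_I[i-1] + dt / tau_I * (-r_I[i-1] + relu(in_I))

# Paradoxical effect: after t = 1 s the I unit's steady-state rate increases
# (and the E rate increases with it), despite the extra inhibition it receives.
plt.plot(t, r_E, label='excitatory rate')
plt.plot(t, r_I, label='inhibitory rate')
plt.xlabel('time (s)'); plt.ylabel('rate (a.u.)'); plt.legend()
plt.show()
```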
Excitatory feedback generally increases the effective time constant of activity in the circuit, while inhibitory feedback decreases the effective time constant. Slow inhibitory feedback is more likely to create oscillations, whereas fast inhibitory feedback is more likely to stabilize firing rates. The net result is that stronger inhibitory-to-inhibitory connections make stabilized states more common.
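One way to see the time-constant effect is with a single linear rate unit with recurrent feedback weight w (an illustrative derivation, not taken from the text):

$$\tau \frac{dr}{dt} = -r + w\,r + I \;\;\Longrightarrow\;\; \frac{dr}{dt} = -\frac{1 - w}{\tau}\left(r - \frac{I}{1 - w}\right), \qquad \tau_{\rm eff} = \frac{\tau}{1 - w}$$

So net positive (excitatory) feedback with 0 < w < 1 lengthens the effective time constant, while net negative (inhibitory) feedback, w < 0, shortens it.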
Monostable systems have a single stable fixed point. Bistable/multistable systems have multiple stable fixed points.
Attractor state itinerancy is the process of a system moving from the neighborhood of one fixed point to another, typically spending most of its time near the fixed points. This phenomenon is harder to analyze, because the transitions occur at different times on different trials, so averaging many trials would not reveal them
Perceptual rivalry is a good example of a phenomenon that could stem from attractor state itinerancy. The Necker cube can be perceived in either of two orientations; if one stares at it, perception generally switches between them at a variable rate.
Noise is a pretty good way to drive transitions in a bistable system
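A minimal sketch of noise-driven switching in a toy double-well system, dx/dt = x − x³ plus Gaussian noise, where x = ±1 are the two stable fixed points and x = 0 is the unstable tipping point (parameter values are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

# Bistable toy system with additive noise (Euler-Maruyama integration).
rng = np.random.default_rng(0)
dt, T = 0.01, 500.0
t = np.arange(0, T, dt)
sigma = 0.5                                   # noise amplitude; larger -> more frequent switches

x = np.zeros_like(t)
x[0] = 1.0                                    # start in the x = +1 well
for i in range(1, len(t)):
    drift = x[i-1] - x[i-1]**3                # deterministic part: dx/dt = x - x**3
    x[i] = x[i-1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

plt.plot(t, x)
plt.xlabel('time'); plt.ylabel('x')
plt.title('Noise-driven transitions between two attractor states')
plt.show()
```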
FitzHugh-Nagumo
If one variable in a system changes very slowly compared to the other variables, the system can be analyzed as if that variable were simply a parameter. Analyzed this way, the fast subsystem can have multiple possible attractor states, with the slow variable selecting among them.
A quasi-steady state is a state of a system that is almost stable for a duration longer than the system's fastest timescales. If the system has a "slow variable," then the rate at which it changes determines how long the system spends in each quasi-steady state
A system that cycles between two such quasi-steady states, with slow drift punctuated by rapid jumps, is called a relaxation oscillator
One example is the FitzHugh-Nagumo model, a two-variable simplification of Hodgkin-Huxley. In one common parameterization (simulated in the sketch after this list),
$$\tau_v \frac{dv}{dt} = v\,(v - \alpha)(1 - v) - w + I_{\rm app}, \qquad \tau_w \frac{dw}{dt} = v - \beta - \gamma\, w$$
- The outer roots of the cubic term (0 and 1 in these dimensionless units) determine the range of membrane potential variation, with the threshold α lying between them
- β is the membrane potential above which the adaptation variable w rises above zero
- γ limits how much adaptation can build up, and hence determines whether w grows enough to cause a jump back down in v
- τ_v is a short time constant that determines the rate of the rapid upswing, and τ_w is a much longer time constant that determines the rate of recovery between spikes
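A minimal simulation sketch of the parameterization written above (specific parameter values are illustrative choices, not from the text): a step of applied current moves the fixed point onto the unstable middle branch of the v-nullcline, and the model then fires repetitively as a relaxation oscillator.

```python
import numpy as np
import matplotlib.pyplot as plt

# FitzHugh-Nagumo-style model (dimensionless units):
#   tau_v dv/dt = v (v - alpha)(1 - v) - w + I_app
#   tau_w dw/dt = v - beta - gamma * w
alpha, beta, gamma = 0.2, 0.05, 2.0
tau_v, tau_w = 0.01, 0.5                        # fast upswing vs. slow recovery
dt, T = 1e-4, 5.0
t = np.arange(0, T, dt)
I_app = np.where(t > 1.0, 0.1, 0.0)             # step of applied current at t = 1

v = np.zeros_like(t)
w = np.zeros_like(t)
for i in range(1, len(t)):
    dv = (v[i-1] * (v[i-1] - alpha) * (1 - v[i-1]) - w[i-1] + I_app[i-1]) / tau_v
    dw = (v[i-1] - beta - gamma * w[i-1]) / tau_w
    v[i] = v[i-1] + dv * dt
    w[i] = w[i-1] + dw * dt

plt.plot(t, v, label='v (membrane potential)')
plt.plot(t, w, label='w (adaptation)')
plt.xlabel('time (a.u.)'); plt.legend()
plt.show()
```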
Unstable fixed points can also be saddle points, which attract trajectories along some directions while repelling them along others
A trajectory that passes from one saddle point to another, through a chain of saddles, is called a heteroclinic sequence
A heteroclinic sequence can appear stationary while near a fixed point, but it is only quasi-steady there
Chaos
A chaotic system is one whose behavior is extremely sensitive to initial conditions: nearby trajectories diverge from each other, yet remain bounded and repeatedly fold back through the same region of state space
A system's parameters can place it on the edge of chaos, such that adjusting a parameter in one direction creates chaos while adjusting it in the other direction keeps the system in stable states
A deterministic system can still be chaotic; the key point is whether future states can be predicted without knowing the current state to an infinite level of precision
In the context of neural activity, chaotic dynamics could be valuable for exploration and adaptability, in the same way random variation is important in evolution. However, chaos need not be the source of unpredictability in neural activity; environmental fluctuations and microscopic noise certainly play a large part.
In general, one expects effects like microscopic noise not to affect macro-level behavior. However, neurons contain many points of no return (such as the spike threshold), so it is plausible that noise gets amplified up to the macroscopic level.
Chaos could arise in a system of as few as three units, but chaotic neural activity is most easily seen in larger systems
The Lyapunov exponent is a measure of the tendency for nearby trajectories of a system to diverge; a positive value suggests chaos, while a negative value suggests convergence to attractor states
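A minimal sketch of estimating a Lyapunov exponent, using the logistic map x → r·x·(1 − x) as a stand-in (a standard toy example, not a neural model): for a one-dimensional map the exponent is the long-run average of log|df/dx| along the trajectory.

```python
import numpy as np

# Lyapunov exponent of the logistic map x -> r*x*(1-x).
def lyapunov_logistic(r, n_steps=100_000, x0=0.3):
    x = x0
    total = 0.0
    for _ in range(n_steps):
        total += np.log(abs(r * (1 - 2 * x)))   # log|df/dx| at the current point
        x = r * x * (1 - x)                     # iterate the map
    return total / n_steps

print(lyapunov_logistic(3.2))   # negative: trajectories settle onto a periodic attractor
print(lyapunov_logistic(4.0))   # positive (about ln 2): nearby trajectories diverge -> chaos
```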
Criticality
Criticality is a phenomenon related to how changes in a system propagate. In the brain, homeostatic processes are necessary to keep neural activity at a proper intermediate level, such that activity propagates but does not avalanche out of control
An avalanche is typically defined as a contiguous sequence of time bins in which activity exceeds a set threshold; its duration is the length of that stretch of time, and its size is the number of activations (e.g., spikes) that occur within it
Plots of the number of avalanches as a function of size are best fit by power laws, say $N(S) \propto S^{-\alpha}$, where N is the number of avalanches and S is the size in spikes. On logarithmic axes this looks like a straight line, since $\log N = -\alpha \log S + \text{const}$
Non-critical systems can also produce power laws, and measuring criticality can be difficult without a lot of data
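A minimal sketch of the avalanche analysis on a binned spike-count series; the data here are synthetic Poisson counts (a stand-in, so the fitted distribution will not be a genuine power law), and the threshold and fitting choices are arbitrary.

```python
import numpy as np

# Extract avalanches from binned activity: an avalanche is a run of consecutive
# bins whose count exceeds the threshold; its size is the total count in the run.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=0.9, size=100_000)    # synthetic spikes per time bin
threshold = 0

sizes, durations = [], []
size = dur = 0
for c in counts:
    if c > threshold:
        size += c
        dur += 1
    elif dur > 0:                              # avalanche just ended
        sizes.append(size)
        durations.append(dur)
        size = dur = 0
sizes = np.array(sizes)

# Crude power-law check: straight-line fit to the size histogram on log-log axes.
values, edges = np.histogram(sizes, bins=np.arange(1, sizes.max() + 2))
s = edges[:-1][values > 0]
n = values[values > 0]
slope, _ = np.polyfit(np.log(s), np.log(n), 1)
print(f"fitted log-log slope: {slope:.2f}  (i.e. N(S) ~ S**{slope:.2f})")
```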
There are three power laws that are required for a critical system:
- The number of avalanches of a given duration as a function of duration
- The number of avalanches of a given size as a function of size
- The mean size of an avalanche of a given duration as a function of duration
Avalanches in a critical system should be scale-free, meaning that avalanches of different durations can be rescaled to look identical (not the same notion as a scale-free network)
This means that a single universal curve exists, onto which the rescaled average avalanche profiles collapse