Spiking neural networks (SNNs) transform inputs into outputs much like artificial neural networks (ANNs), which are already widely used today. Both achieve the same goal in different ways. Each unit of an ANN is represented by a single floating-point number: its activity level for a given input. Neuroscientists loosely interpret this number as the average spike rate of the unit. In ANNs this number is usually the result of multiplying the input with a weight matrix. SNNs work differently: they simulate units as spiking point processes. How often and when a unit spikes depends on its input and on the connections between neurons. A spike causes a discontinuity in the connected units. SNNs are not yet widely used outside of laboratories, but they are important for helping neuroscientists model brain circuitry. Here we will create spiking point models and connect them. We will be using Brian2, a Python package for simulating SNNs. You can install it with either
conda install -c conda-forge brian2

or

pip install brian2
We will start by defining our spiking unit model. There are many different models of spiking. Here we will define a conductance-based version of the leaky integrate-and-fire model.
from brian2 import *
import numpy as np
import matplotlib.pyplot as plt

start_scope()

# Neuronal Parameters
c = 100*pF
vl = -70*mV
gl = 5*nS

# Synaptic Parameters
ge_tau = 20*ms
ve = 0*mV
gi_tau = 100*ms
vi = -80*mV
w_ge = 1.0*nS
w_gi = 0.0*nS

lif = '''
dv/dt = -(gl * (v - vl) + ge * (v - ve) + gi * (v - vi) - I)/c : volt
dge/dt = -ge / ge_tau : siemens
dgi/dt = -gi / gi_tau : siemens
I : amp
'''
The intended way to import Brian2 is
from brian2 import *. This feels a bit dangerous, but it keeps the code readable, especially when dealing with physical units. Speaking of units, each of our parameters has a physical unit. We have picofarad (
pF), millivolt (
mV) and nanosiemens (
nS). These are all imported from Brian2 and we assign units with the multiplication operator
*. To find out what each of those parameters does, we can look at our actual model. The model is defined by the string we assign to
lif. The first line of the string is the model of our spiking unit:
dv/dt = -(gl * (v - vl) + ge * (v - ve) + gi *(v - vi) - I)/c : volt
This is a differential equation that describes the change of voltage with respect to time. In electrical terms, voltage is changed by currents. These physical terms are not strictly necessary for the computational function of SNNs, but they help us understand the biological background. Four currents flow in our model.
The first one is
gl * (v - vl). This current is given by the conductance (
gl), the current voltage (
v) and the equilibrium voltage (
vl). It flows whenever the voltage differs from vl. The conductance
gl is just a scaling constant that determines how strong the drive towards
vl is. This is why
vl is called the resting potential, because
v does not change when it is equal to
vl. This term makes our model a leaky integrate-and-fire model as opposed to an integrate-and-fire model. Of course the voltage only remains at rest if it is not otherwise changed. There are three other currents that can do that. Two of them correspond to excitatory and inhibitory synaptic inputs.
ge * (v - ve) drives the voltage towards
ve. Under most circumstances, this will increase the voltage, as
ve equals 0mV. On the other hand
gi * (v - vi) drives the voltage towards
vi, which is even slightly smaller than
vl, at -80mV. This current keeps the voltage low. Finally, there is
I, which is the input current that the model receives. We will define it later for each neuron. Currents, however, don’t change the voltage immediately. They are all slowed down by the capacitance
c. Therefore, we divide the sum of all currents by c.
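To make the signs concrete, here is a plain-Python sketch (no Brian2, everything in SI units) that evaluates the membrane equation once at the resting potential. The values of ge, gi and I are illustrative assumptions, not taken from the tutorial:

```python
# Hedged sketch: evaluate dv/dt of the membrane equation one time.
c = 100e-12            # 100 pF
gl, vl = 5e-9, -70e-3  # leak conductance and resting potential
ve, vi = 0.0, -80e-3   # excitatory / inhibitory reversal potentials

v = -70e-3             # start at rest
ge, gi = 1e-9, 0.0     # assume one excitatory input is active (1 nS)
I = 0.5e-9             # assume 0.5 nA of injected current

dvdt = -(gl*(v - vl) + ge*(v - ve) + gi*(v - vi) - I) / c
print(dvdt)  # volts per second; positive here, so the voltage will rise
```

At rest the leak term vanishes, so the excitatory conductance and the input current together push the voltage upward.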
There are two more differential equations that describe our model:
dge/dt = -ge / ge_tau : siemens
dgi/dt = -gi / gi_tau : siemens
These describe the change of the excitatory and inhibitory synaptic conductances. We have not yet implemented the way spiking changes ge and gi. However, these equations tell us that both ge and gi will decay towards zero with the time constants ge_tau and gi_tau. So far so good, but what about spiking? That comes up next, when we turn the string that represents our model into something that Brian2 can actually simulate.
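Before that, we can sanity-check the decay dynamics with plain Python. Assuming a spike just bumped ge to 1 nS (an illustrative value), roughly 37% of the conductance remains one time constant later:

```python
import math

ge_tau = 20e-3   # 20 ms, as in the tutorial
ge0 = 1e-9       # assume ge was just bumped to 1 nS by a spike

# the solution of dge/dt = -ge/ge_tau is ge(t) = ge0 * exp(-t/ge_tau)
ge_after_tau = ge0 * math.exp(-20e-3 / ge_tau)

print(ge_after_tau / ge0)  # ~0.368: down to 1/e after one time constant
```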
G = NeuronGroup(3, lif, threshold='v > -40*mV', reset='v = vl', method='euler')
G.I = [0.7, 0.5, 0]*nA
G.v = [-70, -70, -70]*mV
NeuronGroup creates for us three units that are defined by the equations in lif. The threshold parameter gives the condition for registering a spike. In this case, we register a spike when the voltage is larger than -40mV. When a spike is registered, it triggers an event defined by reset. Here, we reset the voltage to the resting voltage vl. The method parameter gives the integration method used to solve the differential equations.
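Conceptually, threshold and reset amount to a check that runs every time step. A hedged plain-Python sketch of that logic (the real check happens inside Brian2's generated code):

```python
threshold = -40e-3   # corresponds to 'v > -40*mV'
vl = -70e-3          # reset target, corresponds to 'v = vl'

def apply_threshold_reset(v):
    """Return (new_v, spiked) for one membrane voltage, in volts."""
    if v > threshold:        # the 'threshold' condition
        return vl, True      # the 'reset' statement, plus a spike event
    return v, False

print(apply_threshold_reset(-35e-3))  # spiked: voltage snaps back to rest
print(apply_threshold_reset(-60e-3))  # below threshold: unchanged
```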
Once our units are defined, we can interface with some parameters of the neurons. For example,
G.I = [0.7, 0.5, 0]*nA sets the input current of the zeroth neuron to 0.7nA, the first neuron to 0.5nA and the last neuron to 0nA. Not all parameters are accessible like this.
I is accessible because the line I : amp in our lif string declares it as a parameter we are allowed to set. Next, we define the initial state of our neurons.
G.v = [-70, -70, -70]*mV sets all of them to the resting voltage. A good place to start.
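The unit multiplication works elementwise: multiplying the list by mV scales every entry. A rough stand-in with plain NumPy (Brian2's real unit objects also track physical dimensions, which this sketch does not):

```python
import numpy as np

mV = 1e-3   # illustrative stand-in for Brian2's millivolt unit
nA = 1e-9   # stand-in for nanoampere

v_init = np.array([-70, -70, -70]) * mV   # analogue of G.v
I_input = np.array([0.7, 0.5, 0]) * nA    # analogue of G.I

print(v_init)   # every neuron starts at -0.07 V
```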
You might be disappointed by this implementation of spiking. Where is the sodium? Where is the amplification of depolarization? The leaky integrate-and-fire model doesn’t feature a spiking mechanism, except for the discontinuity at the threshold. If you are interested in incorporating spike-like mechanisms you should look for the exponential leaky integrate-and-fire or a Hodgkin-Huxley like model.
We are only missing one more ingredient for an actual network model: the synapses. And we are in a great position, because our model already defines the excitatory and the inhibitory conductances. Now we just need to make use of them.
Se = Synapses(G, G, on_pre='ge_post += w_ge')
Se.connect(i=0, j=2)
Si = Synapses(G, G, on_pre='gi_post += w_gi')
Si.connect(i=1, j=2)
First we create an excitatory connection from our neurons onto themselves. The connection is excitatory because it increases the conductance
ge of the postsynaptic neuron by
w_ge. We then call
Se.connect(i=0, j=2) to define which neurons actually connect. In this case, the zeroth neuron connects to the second neuron. This means, spikes in the zeroth neuron cause
ge to increase in the second neuron. We then create the inhibitory connection and make the first neuron inhibit the second neuron. Now we are ready to run the actual simulation and plot the result. Remember that for now
w_gi = 0.0*nS, meaning that we will only see the excitatory connection.
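As an aside, the on_pre statement amounts to an instantaneous jump in the target neuron's conductance whenever the source neuron spikes. A minimal plain-Python sketch of that bookkeeping (not Brian2's actual machinery; decay between spikes is omitted):

```python
w_ge = 1e-9      # 1 nS, the tutorial's excitatory weight
ge_post = 0.0    # postsynaptic excitatory conductance, in siemens

# Brian2 runs 'ge_post += w_ge' once per presynaptic spike;
# successive spikes stack on top of whatever has not yet decayed.
for spike in range(3):
    ge_post += w_ge

print(ge_post)   # three back-to-back spikes sum to 3 nS
```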
M = StateMonitor(G, 'v', record=True)

run(100*ms)

fig, ax = plt.subplots(1)
ax.plot(M.t/ms, M.v[0]/mV)
ax.plot(M.t/ms, M.v[1]/mV)
ax.plot(M.t/ms, M.v[2]/mV)
ax.set_xlabel('time (ms)')
ax.set_ylabel('voltage (mV)')
ax.legend(('N0', 'N1', 'N2'))
The first two neurons are regularly spiking. N0 is slightly faster because it receives a larger input current than N1. N2 on the other hand rests at -70mV because it does not receive an input current. When N0 spikes, it causes the voltage of N2 to increase as we expected, because N0 increases the excitatory conductance. N1 is not doing anything here, because its weight is set to 0. Continued activity of N0 eventually causes N2 to reach the threshold of -40mV, making it register its own spike and reset the voltage. What happens if we introduce the inhibitory weight?
w_gi = 0.5*nS

run(100*ms)

fig, ax = plt.subplots(1)
ax.plot(M.t/ms, M.v[0]/mV)
ax.plot(M.t/ms, M.v[1]/mV)
ax.plot(M.t/ms, M.v[2]/mV)
ax.set_xlabel('time (ms)')
ax.set_ylabel('voltage (mV)')
ax.legend(('N0', 'N1', 'N2'))
With a weight of 0.5nS, N1 inhibits N2 strongly enough to prevent the spike. We have now built a network of leaky integrate-and-fire neurons that features both excitatory and inhibitory synapses. This is just the start. To get a functional network that does something interesting with its input, we will need to increase the number of neurons, decide on a connectivity rule between them, initialize or even learn weights, decide on a coding scheme for the output, and much more. Many of these topics I will cover in later blog posts, so stay tuned.