Animations with Julia

Creating great-looking animations in Julia is shockingly easy thanks to the Plots package and some macro magic. Here we will learn how to turn data into high-quality animations. We will learn about the @animate macro, frames and the gif function.

Two Steps to Animations

To create animations we simply generate frames with the @animate macro and then produce a file with the gif function. Both are part of the Plots package for Julia, so we start with using Plots to make that package available. Next we create the data we will plot, a simple sine wave. Then we generate the frames with the @animate macro and animate them with the gif function. This is the code and the resulting animation.

using Plots

x = collect(1:0.1:30)
y = sin.(x)
df = 2

anim = @animate for i = 1:df:length(x)
    plot(x[1:i], y[1:i], legend=false)
end

gif(anim, "tutorial_anim_fps30.gif", fps = 30)

Macros and Meta-Programming

The @animate macro deserves some extra attention because it looks like magic. Macros are related to a concept called meta-programming. In Julia, all code is itself a data structure that can be manipulated in the same way as any other data structure. This effectively means that we can write code that manipulates our code. That’s what a macro is: a function that modifies code. In our case, the code being modified is the for loop following the @animate macro. It is modified so that it captures the frame at the end of each loop iteration and saves it into anim. We can write code that does the same job ourselves.

anim = Plots.Animation()
for i = 1:df:length(x)
    plot(x[1:i], y[1:i], legend=false)
    Plots.frame(anim)
end

gif(anim, "tutorial_anim_fps30.gif", fps = 30)

We use the Plots.Animation() function to create the animation object in which we will store our frames. Inside the for loop we then call Plots.frame(anim) to store the current frame in our anim object after each iteration. These are the essential steps that the @animate macro takes care of. If you want to see in detail what the macro does, you can call @macroexpand on it.

@macroexpand @animate for i = 1:df:length(x)
    plot(x[1:i], y[1:i], legend=false)
end

There is another macro that is even more convenient: the @gif macro. It saves us from having to call gif() on our anim object.

@gif for i = 1:df:length(x)
    plot(x[1:i], y[1:i], legend=false)
end

This directly displays the animation if an interactive Julia environment is available. The downside is that we do not save the animation to disk and we do not have an anim object available for further animations later. It is most useful for quickly troubleshooting animations interactively.

Beyond plot()

The @animate macro supports animations of anything that can be plotted with Plots. For example, we can animate a heatmap.

anim = @animate for i = 1:100
    mat = rand(0:100, 32, 32)
    heatmap(mat, clim=(0,255))
end

gif(anim, "tutorial_heatmap_anim.gif", fps = 10)

The frame that is captured is the state of the active figure at the end of each loop iteration. The for loop itself gives us a great deal of control over how many frames we create. For example, in the previous examples I skipped frames with the df variable.
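As far as I can tell from the Plots documentation, the @gif macro also accepts an every clause at the end of the loop, which keeps only every n-th frame without touching the loop range. A quick sketch using the sine data from above:

@gif for i = 1:length(x)
    plot(x[1:i], y[1:i], legend=false)
end every 2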

That’s it for animations. To learn more you can take a look at the official Plots documentation.

The Type System of Julia

Every value in Julia has a type. Like other popular languages such as Python, Julia is dynamically typed. That means we don’t need to explicitly define the type of a value when we create it. However, types are particularly important in Julia because we can use explicit type definitions to speed up calculations. Here we will get an introduction to the type system of Julia.

Dynamic Typing

When we assign a value in Julia we don’t need to specify its type. The type is inferred from the given value. Let’s create an integer, a float and a string.

myint = 3
myfloat = 3.0
mystr = "3.0"
println(typeof(myint), ": ", myint)
println(typeof(myfloat), ": ", myfloat)
println(typeof(mystr), ": ", mystr)
# Int64: 3
# Float64: 3.0
# String: 3.0

We use typeof() to find out what type a value has. To create an integer, we assign a number without a decimal point. To create a float, we simply attach a decimal place. For a string, we need quotation marks around our value. There is also a way to define the type more explicitly. Our float value is a Float64 (it uses 64 bits). What if we wanted a Float32?

myfloat = convert(Float32, 3.0)
typeof(myfloat)
# Float32

We use the convert(Type, value) function to explicitly convert 3.0 to Float32. If we want to ensure that a value is of a certain type, we can use the double colon syntax (::). It evaluates normally if the value has the specified type but throws an error if it has a different type.

3::Int
# 3
3::Float64
# TypeError: in typeassert, expected Float64, got Int64
# 
# Stacktrace:
#  [1] top-level scope at In[8]:1

This type assertion syntax is frequently used when defining functions. A function is a piece of code that takes input arguments and processes them to produce output values; we will learn more about functions later. Because Julia is dynamically typed, we could write our functions so that they work the same regardless of input types. However, defining functions specifically for certain input types can be good for performance, and Julia makes heavy use of that fact. For a given function, multiple methods might exist, each responsible for a particular set of input types. We can sketch this with a toy function of our own and then inspect all the different methods of the built-in mathematical cos function by calling the methods function on it.
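A minimal sketch first. The function name describe is made up for this illustration; each method handles one input type.

describe(x::Int) = "an integer: $x"
describe(x::Float64) = "a float: $x"

describe(3)
# "an integer: 3"
describe(3.0)
# "a float: 3.0"

And here are the methods of cos: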

methods(cos)
"""
# 12 methods for generic function cos:
cos(x::BigFloat) in Base.MPFR at mpfr.jl:744
cos(::Missing) in Base.Math at math.jl:1167
cos(a::Complex{Float16}) in Base.Math at math.jl:1115
cos(a::Float16) in Base.Math at math.jl:1114
cos(z::Complex{T}) where T in Base at complex.jl:823
cos(x::T) where T<:Union{Float32, Float64} in Base.Math at special/trig.jl:100
cos(x::Real) in Base.Math at special/trig.jl:124
cos(A::LinearAlgebra.Hermitian{#s661,S} where S<:(AbstractArray{#s662,2} where #s662<:#s661) where #s661<:Complex) in LinearAlgebra at C:\Users\Daniel\AppData\Local\Programs\Julia\Julia-1.4.2\share\julia\stdlib\v1.4\LinearAlgebra\src\symmetric.jl:907
cos(A::Union{LinearAlgebra.Hermitian{#s662,S}, LinearAlgebra.Symmetric{#s662,S}} where S where #s662<:Real) in LinearAlgebra at C:\Users\Daniel\AppData\Local\Programs\Julia\Julia-1.4.2\share\julia\stdlib\v1.4\LinearAlgebra\src\symmetric.jl:903
cos(D::LinearAlgebra.Diagonal) in LinearAlgebra at C:\Users\Daniel\AppData\Local\Programs\Julia\Julia-1.4.2\share\julia\stdlib\v1.4\LinearAlgebra\src\diagonal.jl:561
cos(A::AbstractArray{#s662,2} where #s662<:Real) in LinearAlgebra at C:\Users\Daniel\AppData\Local\Programs\Julia\Julia-1.4.2\share\julia\stdlib\v1.4\LinearAlgebra\src\dense.jl:793
cos(A::AbstractArray{#s662,2} where #s662<:Complex) in LinearAlgebra at C:\Users\Daniel\AppData\Local\Programs\Julia\Julia-1.4.2\share\julia\stdlib\v1.4\LinearAlgebra\src\dense.jl:800
"""

The cos function has 12 different methods. The double colon type assertion syntax checks the input type, and which method is used can depend on the types of all inputs. This is called multiple dispatch: multiple inputs together determine which method is dispatched. We will learn more about multiple dispatch in another post, but a small sketch already shows the idea. Since arrays are central to scientific computing, we then turn to how the type system applies to arrays.
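The function name combine is made up for this illustration; which method runs depends on the types of both arguments.

combine(a::Int, b::Int) = a + b
combine(a::String, b::String) = a * b   # * concatenates strings in Julia
combine(a, b) = (a, b)                  # fallback for any other combination

combine(1, 2)
# 3
combine("1", "2")
# "12"
combine(1, "2")
# (1, "2")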

Array Types

There are several different ways to define arrays. One way is to enclose literal values in square brackets. In that case, the element type is inferred in the same way as above, where we used literal numbers.

myintarr = [1, 2, 3]
typeof(myintarr)
# Array{Int64,1}
myfloatarr = [1.0, 2.0, 3.0]
typeof(myfloatarr)
# Array{Float64,1}

For an array, typeof() tells us not only the type of the array elements but also how many dimensions the array has. One important feature of arrays is that all elements must have the same type. So what happens when we create an array from literals with different types? The element type is determined by the most complex type.

myintarr = [1, 2, 3]
typeof(myintarr)
# Array{Int64,1}
myfloatarr = [1, 2.0, 3]
typeof(myfloatarr)
# Array{Float64,1}
mystrarr = [1, "2.0", 3]
typeof(mystrarr)
# Array{Any,1}

We see that a single float value makes the entire array Float64. However, when one of the elements is a string, the type becomes Any rather than String. That is because Julia does not convert numbers to strings; we get an error if we call convert(String, 3). If the most complex type is Float64, Julia tries to convert all other values to that type. This works when the other values are integers, because convert(Float64, 3) works. If the other values cannot be converted to the most complex type, all values take on the Any type. This means that values in the array can be of different types; they could literally be anything. That defeats the purpose of an array, so we generally try to avoid it. Besides literals, there are a variety of functions that create arrays, and they usually allow us to specify the element type.

zerosarr = zeros(Int8, (2, 3))
# 2×3 Array{Int8,2}:
#  0  0  0
#  0  0  0
zerosarr = zeros((2, 3))
# 2×3 Array{Float64,2}:
#  0.0  0.0  0.0
#  0.0  0.0  0.0

If we don’t specify the type we want, the zeros() function defaults to Float64. There are other functions such as ones(), giving an array of ones, or rand(), giving an array of random numbers. The element type of the output can be specified for each.
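These functions follow the same pattern, and array literals can also be given an explicit element type. A quick sketch (the variable names are arbitrary):

onesarr = ones(Float32, 3)      # 3-element Array{Float32,1} of ones
randarr = rand(Float64, 2, 2)   # 2×2 Array{Float64,2} of random numbers
typedarr = Float64[1, 2, 3]     # typed array literal, forces Float64 elements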

That is it for our basic introduction to types. You can find a more complete introduction to Julia types in the official documentation. In summary, we don’t need to explicitly specify the type of values, but doing so can sometimes make the math more efficient. We will learn more about performance and efficiency in Julia in later posts.

Getting Started Programming Julia

To get us started with Julia we cover two basics: arithmetic operators and name assignment.

Arithmetic operators

The standard arithmetic operators are addition (+), subtraction (-), multiplication (*), division (/), power (^) and remainder (%). They work as expected; the only one that differs for the Python crowd is power, which is consistent with MATLAB. The usual precedence of operations applies: first power, then multiplication, division and remainder, and finally addition and subtraction. Parentheses can be used to change the order of operations.

1 + 3 * 2
# 7
(1 + 3) * 2
# 8
2 * 3 ^ 2
# 18

In those examples, both sides of the operator are scalars. It gets a little more interesting when at least one of them is a vector or a matrix. Not all of the above operations are defined between vectors and scalars; only multiplication and division are. We create a vector using square brackets ([]) with the elements separated by commas.

[3, 1, 4] * 2
# 3-element Array{Int64,1}:
#  6
#  2
#  8

[3, 1, 4] / 2
# 3-element Array{Float64,1}:
# 1.5
# 0.5
# 2.0

The other operations are not defined and throw an error.

[3, 1, 4] ^ 2
"""
MethodError: no method matching ^(::Array{Int64,1}, ::Int64)
Closest candidates are:
  ^(!Matched::Float16, ::Integer) at math.jl:885
  ^(!Matched::Regex, ::Integer) at regex.jl:712
  ^(!Matched::Missing, ::Integer) at missing.jl:155
  ...

Stacktrace:
 [1] macro expansion at .\none:0 [inlined]
 [2] literal_pow(::typeof(^), ::Array{Int64,1}, ::Val{2}) at .\none:0
 [3] top-level scope at In[52]:1
"""

The reasons for this design choice have to do with Julia’s focus on linear algebra and are not important here. If we want this operation to work in an element-wise manner, we have to force it explicitly. We can do so using the dot (.). This way we can explicitly force every operator to be applied element-wise.

[3, 1, 4] .^ 2
# 3-element Array{Int64,1}:
#  9
#  1
# 16
[3, 1, 4] .+ 2
# 3-element Array{Int64,1}:
#  5
#  3
#  6
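
The dot is not limited to operators: any function can be applied element-wise by placing a dot before its argument list (broadcasting), just like sin.(x) in the animation example earlier. A quick sketch:

sqrt.([9, 16, 25])
# 3-element Array{Float64,1}:
#  3.0
#  4.0
#  5.0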

Now that we have our arithmetic operators, let’s move on to name assignment so we can store the results of our operations.

Name Assignment

We assign names to values with the = operator using the syntax name = value. Once a name is assigned, we can use the name instead of the value in operations.

result_one = 2 + 2
result_two = result_one + 3
# 7

Once a name is assigned to a value, we can reassign that same name to a different value without any problem.

result = 2 + 2
result = 10
# 10

If we want to assign a name that is not supposed to be reassigned, we can use the const keyword. If we try to reassign a constant name we get an error.

const a = 8.3144621
a = 3
"""
invalid redefinition of constant a

Stacktrace:
 [1] top-level scope at In[73]:2
"""

There are a few rules about the names we can assign. Generally, Unicode characters (UTF-8) are allowed. This means we can do something like this.

δt = 0.0001

Here we are using the special character delta. If you want to quickly generate such a character, many Julia environments let you do so by typing \delta and hitting tab. I recommend using these sparingly, as they might confuse people transitioning from other languages that don’t allow Unicode names. On the other hand, they can make your code style more mathy. Built-in keywords that have special meaning are not allowed as names, and trying to assign to them results in an error.

if = 3
# syntax: unexpected "="

If you are interested in more details about variables and name assignments you can take a look at the official documentation. In the next blog post we will take a look at the type system of Julia.

The Hodgkin-Huxley Neuron in Julia

The Hodgkin-Huxley model is one of the earliest mathematical descriptions of neural spiking. It was originally developed on data from the squid giant axon. Today, Hodgkin-Huxley-like dynamics are also used to simulate the spiking of a variety of neuron types. I’ve recently written a script to simulate Hodgkin-Huxley dynamics in Julia. Here I am sharing that code and going through the most important elements. As I just started to learn Julia, I will also mention some of the things I learned about the language in the process.

using Plots
gr()

# Hyperparameters
tmin = 0.0
tmax = 1000.0
dt = 0.01
T = tmin:dt:tmax

# Parameters
gK = 35.0
gNa = 40.0
gL = 0.3
Cm = 1.0
EK = -77.0
ENa = 55.0
El = -65.0

# Potassium ion-channel rate functions
alpha_n(Vm) = (0.02 * (Vm - 25.0)) / (1.0 - exp((-1.0 * (Vm - 25.0)) / 9.0))
beta_n(Vm) = (-0.002 * (Vm - 25.0)) / (1.0 - exp((Vm - 25.0) / 9.0))

# Sodium ion-channel rate functions
alpha_m(Vm) = (0.182*(Vm + 35.0)) / (1.0 - exp((-1.0 * (Vm + 35.0)) / 9.0))
beta_m(Vm) = (-0.124 * (Vm + 35.0)) / (1.0 - exp((Vm + 35.0) / 9.0))

alpha_h(Vm) = 0.25 * exp((-1.0 * (Vm + 90.0)) / 12.0)
beta_h(Vm) = (0.25 * exp((Vm + 62.0) / 6.0)) / exp((Vm + 90.0) / 12.0)

# n, m & h steady-states
n_inf(Vm=0.0) = alpha_n(Vm) / (alpha_n(Vm) + beta_n(Vm))
m_inf(Vm=0.0) = alpha_m(Vm) / (alpha_m(Vm) + beta_m(Vm))
h_inf(Vm=0.0) = alpha_h(Vm) / (alpha_h(Vm) + beta_h(Vm))

# Conductances
GK(gK, n) = gK * (n ^ 4.0)
GNa(gNa, m, h) = gNa * (m ^ 3.0) * h
GL(gL) = gL

# Differential equations
function dv(Vm, GK, GNa, GL, Cm, EK, ENa, El, I, dt)
    ((I - (GK * (Vm - EK)) - (GNa * (Vm - ENa)) - (GL * (Vm - El))) / Cm) * dt
end
dn(n, Vm, dt) = ((alpha_n(Vm) * (1.0 - n)) - (beta_n(Vm) * n)) * dt
dm(m, Vm, dt) = ((alpha_m(Vm) * (1.0 - m)) - (beta_m(Vm) * m)) * dt
dh(h, Vm, dt) = ((alpha_h(Vm) * (1.0 - h)) - (beta_h(Vm) * h)) * dt
# Input current, ramping up linearly with time
I = T * 0.002

# Initial conditions and setup
v = -65
m = m_inf(v)
n = n_inf(v)
h = h_inf(v)

v_result = Array{Float64}(undef, length(T))
m_result = Array{Float64}(undef, length(T))
n_result = Array{Float64}(undef, length(T))
h_result = Array{Float64}(undef, length(T))

v_result[1] = v
m_result[1] = m
n_result[1] = n
h_result[1] = h


for t = 1:length(T)-1
    GKt = GK(gK, n)
    GNat = GNa(gNa, m, h)
    GLt = GL(gL)

    dvt = dv(v, GKt, GNat, GLt, Cm, EK, ENa, El, I[t], dt)
    dmt = dm(m, v, dt)
    dnt = dn(n, v, dt)
    dht = dh(h, v, dt)

    global v = v + dvt
    global m = m + dmt
    global n = n + dnt
    global h = h + dht

    v_result[t+1] = v
    m_result[t+1] = m
    n_result[t+1] = n
    h_result[t+1] = h
end

p1 = plot(T, v_result, xlabel="time (ms)", ylabel="voltage (mV)", legend=false, dpi=300)

The Parameters

There are seven parameters that define the Hodgkin-Huxley model: the maximum potassium, sodium and leak conductances gK, gNa and gL; the equilibrium potentials for potassium, sodium and leak, EK, ENa and El; and finally Cm, the membrane capacitance. In a nutshell, the model comes down to calculating the fraction of sodium and potassium channels that are open at each time point. Together with the maximum conductances, the membrane potential and the equilibrium potentials, the fraction of open channels tells us how much current is flowing at that moment. The amount of current, filtered by the membrane capacitance, in turn tells us by how much the voltage changes. The fractions of open channels are given by n, m and h.
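Putting the parameters together, the membrane equation that the dv function integrates is the standard Hodgkin-Huxley form, written here with the variable names from the code:

Cm * dVm/dt = I - gK * n^4 * (Vm - EK) - gNa * m^3 * h * (Vm - ENa) - gL * (Vm - El)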

Differential Equations

We need to track the change of the voltage (v), potassium channel activation (n), sodium channel activation (m) and sodium channel inactivation (h). Let’s consider the potassium channels first. Active potassium channels can stay active or transition to the inactive state, and inactive channels can stay inactive or become active. Since these potassium channels are voltage gated, the chance that they transition depends on the voltage. The function alpha_n gives the transition rate from inactive to active and beta_n gives the transition rate from active to inactive. For the sodium channels, the situation is almost identical. However, they can also be in a third state that corresponds to depolarization-induced inactivation, which is what h tracks.
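In equation form, this is what the dn, dm and dh functions implement (shown here for n; m and h follow the same pattern):

dn/dt = alpha_n(Vm) * (1 - n) - beta_n(Vm) * n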

Now that we can keep track of the states of our channels, we can calculate the conductances. GK calculates the potassium conductance, GNa the sodium conductance and GL the leak conductance. Those conductances are then used in the dv function to calculate the currents based on the voltage. And that’s basically it. The integration method is simple forward Euler inside the for loop.
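Forward Euler simply advances each state variable by its derivative times the step size. For the voltage, for example:

Vm(t + dt) = Vm(t) + (dVm/dt) * dt

In the code this is the line v = v + dvt, because dvt already contains the factor dt.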

Julia Notes From a Beginner

This is my very first Julia script, so I have some thoughts. This is a completely non-optimized script and it is extremely fast, despite the for loop. From what I have learned so far, for loops in Julia are known to be fast and they can allegedly outperform vectorized solutions. This is very different from Python, where we strictly avoid for loops whenever performance matters.

For now I am confused by scope and the use of the global keyword. Scope seems to operate similarly to Python, where functions and for loops have seamless read access to variables in the outer scope. Assigning to those variables inside the for loop, on the other hand, seems to be a problem unless the global keyword is used. From what I have read, the usual way around this is to put the loop inside a function.
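Here is a rough sketch of how the simulation loop above could be wrapped in a function. Inside a function, all assignments are local, so no global keyword is needed, and avoiding globals should also help performance. The name simulate is just one I chose; the sketch reuses the parameters and helper functions from the script above and only returns the voltage trace for brevity.

function simulate(T, dt, I, v0 = -65.0)
    # Local state variables: no global keyword needed inside a function
    v, m, n, h = v0, m_inf(v0), n_inf(v0), h_inf(v0)
    v_result = Array{Float64}(undef, length(T))
    v_result[1] = v
    for t = 1:length(T)-1
        # Compute all derivatives from the current state, then update together
        dvt = dv(v, GK(gK, n), GNa(gNa, m, h), GL(gL), Cm, EK, ENa, El, I[t], dt)
        dmt = dm(m, v, dt)
        dnt = dn(n, v, dt)
        dht = dh(h, v, dt)
        v, m, n, h = v + dvt, m + dmt, n + dnt, h + dht
        v_result[t+1] = v
    end
    return v_result
end

v_result = simulate(T, dt, I)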

Generally I am very happy with the Julia syntax. I think I could even code Python and Julia back to back. One major problem is of course indices starting at 1 but I get the difference in convention. I’m looking forward to my next script.