Minor tweaks to fix things (#7)
* Fix all typos Scott pointed out

* Fix DCM figure display

* Fix citations to bullets

* Update headers to be more visible in PING tutorial

* Fix semicolons

* Move random call to fix issues

* Another semicolon

* Absolute path to fix link

* Fix DCM plots for website

* Update _literate/custom.jl

---------

Co-authored-by: haris organtzidis <[email protected]>
agchesebro and harisorgn authored Jan 7, 2025
1 parent 3a63d14 commit c581ab1
Showing 6 changed files with 30 additions and 22 deletions.
12 changes: 10 additions & 2 deletions _literate/DCM.jl
Original file line number Diff line number Diff line change
Expand Up @@ -95,6 +95,8 @@ ax = Axis(f[1, 1],
)
lines!(ax, sol, idxs=idx_m)
f
save(joinpath(@OUTPUT, "fmriseries.svg"), f); # hide
# \fig{fmriseries}

# We note that the initial spike is not meaningful; it is a result of the equilibration of the stochastic process, thus we remove it.
dfsol = DataFrame(sol[ceil(Int, 101/dt):end]);
Expand All @@ -117,6 +119,8 @@ for i = 1:nr
end
end
fig
save(joinpath(@OUTPUT, "csd.svg"), fig); # hide
# \fig{csd}

# ## Model Inference

Expand Down Expand Up @@ -218,11 +222,15 @@ end
# ## Results
# Free energy is the objective function of the optimization scheme of spectral DCM. Note that in the machine learning literature it is called the Evidence Lower Bound (ELBO).
# Plot the free energy evolution over optimization iterations to see how the algorithm converges towards a (potentially local) optimum:
freeenergy(state)
f1 = freeenergy(state)
save(joinpath(@OUTPUT, "freeenergy.svg"), f1); # hide
# \fig{freeenergy}
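For orientation, the quantity being maximized here is the standard variational free energy / ELBO identity (a textbook formula, not taken from this repository):

```latex
% Variational free energy (ELBO): expected accuracy minus complexity
F = \mathbb{E}_{q(\theta)}\!\left[\ln p(y \mid \theta)\right]
    - D_{\mathrm{KL}}\!\left[\, q(\theta) \,\middle\|\, p(\theta) \,\right]
```

Maximizing $F$ therefore trades off fit to the data against divergence of the posterior from the prior.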

# Plot the estimated posterior of the effective connectivity and compare that to the true parameter values.
# Bar heights are the posterior means and error bars are the standard deviations of the posterior.
ecbarplot(state, setup, A_true)
f2 = ecbarplot(state, setup, A_true)
save(joinpath(@OUTPUT, "ecbar.svg"), f2); # hide
# \fig{ecbar}

# ## Challenge Problems
# - **Explore susceptibility with respect to noise.** Run the script again with a different random seed and observe how the results change. Given that we didn’t change any parameters of the ground truth, what is your take on parameter inference with this setup? How reliable is model selection based on free energy (compare the different free energies of the models and their respective parameter values with ground truth)?
Expand Down
8 changes: 4 additions & 4 deletions _literate/PING_circuit.jl
Expand Up @@ -64,7 +64,7 @@ I_bath = -0.7; ## External inhibitory bath for inhibitory neurons - value from p
# # Creating a network in Neuroblox
# Creating and running a network of neurons in Neuroblox consists of three steps: defining the neurons, defining the graph of connections between the neurons, and simulating the system represented by the graph.

# ### Define the neurons
# ## Define the neurons
# The neurons from Börgers et al. [1] are implemented in Neuroblox as `PINGNeuronExci` and `PINGNeuronInhib`. We can specify their initial current drives and create the neurons as follows:

exci_driven = [PINGNeuronExci(name=Symbol("ED$i"), I_ext=rand(I_driveE) + rand(I_base)) for i in 1:NE_driven] ## In-line loop to create the driven excitatory neurons, named ED1, ED2, etc.
Expand All @@ -76,7 +76,7 @@ inhib = [PINGNeuronInhib(name=Symbol("ID$i"), I_ext=rand(I_driveI) + rand(
# > to see the full details of the blocks. If you really want to dig into the details,
# > type ``@edit PINGNeuronExci()`` to open the source code and see how the equations are written.

# ### Define the graph of network connections
# ## Define the graph of network connections
# This portion illustrates how we go about creating a network of neuronal connections.

g = MetaDiGraph() ## Initialize the graph
Expand All @@ -96,7 +96,7 @@ for ni1 ∈ inhib
end
end

# ### Alternative graph creation
# ## Alternative graph creation
# If you are creating a very large network of neurons, it may be more efficient to add all of the nodes first and then all of the edges via an adjacency matrix.
# To illustrate this, here is **an alternative to the graph construction we have just performed above** that will initialize the same graph.
g = MetaDiGraph() ## Initialize the graph
Expand All @@ -115,7 +115,7 @@ for i ∈ 1:NI_driven
end
create_adjacency_edges!(g, adj);

# ### Simulate the network
# ## Simulate the network
# Now that we have the neurons and the graph, we can simulate the network. We use the `system_from_graph` function to create a system of ODEs from the graph and then solve it using the DifferentialEquations.jl package, but for performance scaling reasons we will use the experimental option `graphdynamics=true` which uses a separate compilation backend called [GraphDynamics.jl](https://github.com/Neuroblox/GraphDynamics.jl). The GraphDynamics.jl backend is still experimental, and may not yet support all of the standard Neuroblox features, such as those seen in the Spectral DCM tutorial.
# We choose to solve this system using the ``Tsit5()`` solver. If you're coming from Matlab, this is a more efficient solver analogous to ``ode45``. It's a good first try for systems that aren't really stiff. If you want to try other solvers, we'd recommend trying with ``Vern7()`` (higher precision but still efficient). If you're **really** interested in solver choices, one of the great things about Julia is the [wide variety of solvers available.](https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/)
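As a minimal sketch of swapping solvers (using a toy scalar ODE rather than the PING network itself), only the solver argument changes:

```julia
using OrdinaryDiffEq

## Toy non-stiff problem: exponential decay du/dt = -u
prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))

sol_default = solve(prob, Tsit5()) ## good first choice for non-stiff systems
sol_precise = solve(prob, Vern7()) ## higher-order, higher-precision alternative
```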

Expand Down
4 changes: 2 additions & 2 deletions _literate/custom.jl
Expand Up @@ -113,7 +113,7 @@ connection_equations(lif, izh) ## connection from lif to izh
# We even get a warning saying that the connection rule is not specified so Neuroblox defaults to this basic weighted connection.

# ## Custom Connections
# Often times genric connection rules are not sufficient and we need ones specialized to our custom Bloxs. There are two elements that allow for great customization variery when it comes to connection rules, connection equations and callbacks.
# Often generic connection rules are not sufficient and we need ones specialized to our custom Bloxs. There are two elements that allow for great customization variety when it comes to connection rules: connection equations and callbacks.

# ### Connection equations
# Let's define a custom equation that connects a `LIFNeuron` to our `IzhNeuron`. The first thing we need to do is to import the `connection_equations` function from Neuroblox so that we can add a new dispatch to it.
Expand Down Expand Up @@ -171,7 +171,7 @@ sol = solve(prob, Tsit5());
# These callbacks will be applied at every timepoint during simulation where the callback condition is fulfilled.
# This mechanism is particularly useful for neuron models like the Izhikevich and the LIF neurons we saw above that use callbacks to implement spiking.

# > **_NOTE_:** The affect equations of a single event can change either only variables or parameters. Currenlty we can not mix variable and parameter changes within the same event.
# > **_NOTE_:** The affect equations of a single event can change either only variables or parameters. Currently we can not mix variable and parameter changes within the same event.
# > See the [ModelingToolkit documentation](https://docs.sciml.ai/ModelingToolkit/stable/basics/Events/#Discrete-events-support) for more details.
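A minimal sketch of a variable-only spiking reset in plain ModelingToolkit (a hypothetical leaky integrator for illustration, not Neuroblox's actual implementation) could look like this:

```julia
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D

@variables V(t) = 0.0
@parameters I = 1.5

## Event: when V crosses the threshold 1.0, reset it to 0.
## The affect changes only the variable V, never a parameter,
## so it satisfies the no-mixing restriction noted above.
@named lif = ODESystem([D(V) ~ -V + I], t;
    continuous_events = [V ~ 1.0] => [V ~ 0.0])
```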

import Neuroblox: connection_callbacks
Expand Down
8 changes: 4 additions & 4 deletions _literate/intro_diffeq.jl
Expand Up @@ -37,7 +37,7 @@ u0 = [x => 5, y => 2]
## Problem to be solved
prob = ODEProblem(simpsys, u0, tspan)
## Solution of the problem using the Tsit5 solver
sol = solve(prob, Tsit5())
sol = solve(prob, Tsit5());

# The solution object contains the values of every variable of the system (`x` and `y`) for every simulated timestep. One can easily access the values of a specific variable using its symbolic name
sol[x] ## or similarly sol[y]
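Assuming the standard SciML solution interface, the solution can also be evaluated at arbitrary time points by calling it like a function (dense interpolation):

```julia
sol(0.5)          ## full state vector interpolated at t = 0.5
sol(0.5, idxs=x)  ## a single symbolic variable at t = 0.5
```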
Expand Down Expand Up @@ -160,6 +160,6 @@ save(joinpath(@OUTPUT, "mtk5.svg"), fig); # hide
# - Pick a neuron model of your choice and perform a sensitivity analysis on the system. How does its spiking behavior change as its parameters change? Summarize the results in one (or several) plots.

# ## References
# [1] E. M. Izhikevich, "Simple model of spiking neurons," in IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572, Nov. 2003, doi: 10.1109/TNN.2003.820440
# [2] Ma, Yingbo, Shashi Gowda, Ranjan Anantharaman, Chris Laughman, Viral Shah, and Chris Rackauckas. "Modelingtoolkit: A composable graph transformation system for equation-based modeling." arXiv preprint arXiv:2103.05244 (2021).
# [3] https://docs.sciml.ai/ModelingToolkit/stable/
# - [1] E. M. Izhikevich, "Simple model of spiking neurons," in IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572, Nov. 2003, doi: 10.1109/TNN.2003.820440
# - [2] Ma, Yingbo, Shashi Gowda, Ranjan Anantharaman, Chris Laughman, Viral Shah, and Chris Rackauckas. "ModelingToolkit: A composable graph transformation system for equation-based modeling." arXiv preprint arXiv:2103.05244 (2021).
# - [3] https://docs.sciml.ai/ModelingToolkit/stable/
2 changes: 1 addition & 1 deletion _literate/intro_plot.jl
Expand Up @@ -109,7 +109,7 @@ save(joinpath(@OUTPUT, "spikes.svg"), fig); # hide
# \fig{spikes}

# # Challenge Problems
# See the [Differential Equations in Julia](./intro_diffeq/)
# See the [Differential Equations in Julia](/pages/intro_diffeq/)

# # References
# - Danisch et al., (2021). Makie.jl: Flexible high-performance data visualization for Julia. Journal of Open Source Software, 6(65), 3349, https://doi.org/10.21105/joss.03349
Expand Down
18 changes: 9 additions & 9 deletions _literate/neuron_mass.jl
Expand Up @@ -10,11 +10,15 @@
# - drive single neuron and neural mass activity using external sources

# ## Neurons and neural masses
# As a first example we will consider a neural mass `WilsonCowan` Blox of Exhitatation-Inhibition (E-I) balance. This is a two-dimensional reduction over a population of excitatory and inhibitory neurons with continuous dynamics.
# As a first example we will consider a neural mass `WilsonCowan` Blox of Excitation-Inhibition (E-I) balance. This is a two-dimensional reduction over a population of excitatory and inhibitory neurons with continuous dynamics.
using Neuroblox
using OrdinaryDiffEq
using CairoMakie

## Set the random seed for reproducible results
using Random
Random.seed!(1)

@named nm = WilsonCowan()
## Retrieve the simplified ODESystem of the Blox
sys = system(nm)
Expand Down Expand Up @@ -42,7 +46,7 @@ save(joinpath(@OUTPUT, "wc_timeseries.svg"), fig); # hide
sys = system(qif; discrete_events = [60] => [qif.I_in ~ 10])
tspan = (0, 100) # ms
prob = ODEProblem(sys, [], tspan)
sol = solve(prob, Tsit5())
sol = solve(prob, Tsit5());

# Besides the generic `plot` function, Neuroblox includes some plotting recipes specifically for neuron models.
# A raster plot with chosen spike threshold
Expand All @@ -64,7 +68,7 @@ fig
save(joinpath(@OUTPUT, "qif_timeseries.svg"), fig); # hide
# \fig{qif_timeseries}

# Finally we simulate an HH neuron with stochastic dynamics which was introduced in [this article on deep brain stimulation int he subthalamic nucleus](https://doi.org/10.1073/pnas.2120808119).
# Finally we simulate an HH neuron with stochastic dynamics which was introduced in [this article on deep brain stimulation in the subthalamic nucleus](https://doi.org/10.1073/pnas.2120808119).
# The model includes a brownian noise term affecting `D(V)` which you can inspect using the `equations` function.

using StochasticDiffEq ## to access stochastic DE solvers
Expand All @@ -86,7 +90,7 @@ save(joinpath(@OUTPUT, "hh_power.svg"), fig); # hide

# ## Sources

# External sources in Neuroblox as a particular Blox subtype (`<: AbstractBlox`) which contains a system with output and no input variables.
# External sources in Neuroblox are a particular Blox subtype (`<: AbstractBlox`) which contains a system with output and no input variables.
# Naturally, source Bloxs can only connect **to** other (non-source) Bloxs and cannot receive connections from any Blox.
# There are two main categories of sources, ones with continuous dynamics for their variables and ones that operate through events (callbacks).

Expand All @@ -113,7 +117,7 @@ fig
save(joinpath(@OUTPUT, "wc_input.svg"), fig); # hide
# \fig{wc_input}

# Notice how the E-I balance has shiften after adding our input. We will work with a more complex circuit for E-I balance on the next session and learn more about its intricacies.
# Notice how the E-I balance has shifted after adding our input. We will work with a more complex circuit for E-I balance in the next session and learn more about its intricacies.

# We can create custom sources with continuous input the same way we create custom Bloxs and write custom connection rules for them as we have seen in the previous session.

Expand Down Expand Up @@ -199,10 +203,6 @@ save(joinpath(@OUTPUT, "ifn_input.svg"), fig); # hide
# Even though these sources are often used in DBS protocols, they are implemented as any other source so they can be connected to any other Bloxs given a connection rule.
# We will first visualize the sources on their own and then connect them to an HH excitatory neuron.

## Set the random seed for reproducible results
using Random
Random.seed!(1)

## Square pulse stimulus
@named stim = DBS(
frequency=100.0, ## Hz
Expand Down
