[slack] <chrisrackauckas> Graham Smith [1:19 PM]
ginkobab: your message got clipped going from gitter to slack, so I can't see your implementation starting around `update!`. But I've thought some about how I would implement a spiking network, so I'll try to answer your questions:
`t`, `V`, your membrane potentials? It's just the vector as long as `np.N`. You don't define the `np.N` x `timesteps` matrix; that comes out of `solve`.
`dt` (assuming I'm right about what you meant by the matrix)? `solve` will take care of that. Eventually you may even want to use a more advanced solver, which would do way more than multiplying by `dt`.
V[view(V,:, t) .> np.Vₜ, t+1] .= 0.0 # This makes the neurons over the threshold spike
V[view(V,:, t) .== 0.0, t+1] .= np.Vᵣ # This resets the neurons that spiked to a low potential
`V` will have a marker for spiking timesteps, and you won't clobber it on the same step (as these lines do now, with 0.0 as the marker)
Graham Smith [2:00 PM]
Oooooo wait you can probably use callbacks to handle spike thresholding
[2:00 PM] https://diffeq.sciml.ai/v2.0/features/callback_functions.html
Chris Rackauckas [3:08 PM]
you probably want to link to recent docs. that's v2.0
[slack] <ginkobab> Thanks for your answer!
Just to clarify, the weird block about updating sets the potential of the subsequent timestep equal to Vr or to 0, so the order is important, otherwise since 0 > threshold, the reset wouldn't work. So effectively it marks a spike that we can then extract!
I'm gonna look into callbacks now, thanks again!
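A minimal sketch of the callback idea mentioned above (assuming simple leaky integrate-and-fire dynamics; N, Vₜ, Vᵣ, τ, and I_ext below are hypothetical stand-ins, not taken from the clipped implementation):
using DifferentialEquations
N     = 10      # number of neurons (hypothetical)
Vₜ    = 1.0     # spike threshold (hypothetical)
Vᵣ    = 0.0     # reset potential (hypothetical)
τ     = 10.0    # membrane time constant (hypothetical)
I_ext = 1.2     # constant input current (hypothetical)
# simple leaky-integrator dynamics for each neuron
lif!(dV, V, p, t) = (dV .= (I_ext .- V) ./ τ)
# fire when V[i] crosses the threshold, then reset just that neuron
condition(out, V, t, integrator) = (out .= V .- Vₜ)
affect!(integrator, idx) = (integrator.u[idx] = Vᵣ)
cb = VectorContinuousCallback(condition, affect!, N)
prob = ODEProblem(lif!, fill(Vᵣ, N), (0.0, 100.0))
sol = solve(prob, Tsit5(), callback = cb)
The solver then handles the time stepping, and the reset happens at the detected crossing rather than on the next fixed step.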
Hi all. Is there a way to deal with this problem in Julia?
function explicit(y,p,t)
sqrt(1-y[1]^2)
end
tspan = (0.0,π)
y0 = 0.0
problem = ODEProblem(explicit,y0,tspan)
sol = solve(problem, Rosenbrock23())
In MATLAB, Cleve Moler solves this problem by adding a bound like this:
f = @(t,y) sqrt(1-min(y,1)^2)
This could also be used in Python with SciPy's odeint:
def f(t, y): return np.sqrt(1 - np.minimum(y, 1)**2)
But Julia doesn't allow this.
Hi everyone, first off, thanks for developing an awesome DE library/framework. It's really impressive just how much functionality you all have crammed into this library. I hope I'm in the right place to ask this.
I'm attempting to install the diffeqpy package to utilize the DifferentialEquations.jl library as a sort of "drop-in" solver for our already existing systems biology models written in Python, since we need a speedup and easier parallelization options.
I'm on an Ubuntu 18.04 system and I've:
- installed the DifferentialEquations.jl package through Pkg
- installed PyCall.jl and built it with the Python binary I'm using with my pipenv virtual environment
- added the diffeqpy package to the virtual environment via pipenv install (backended with pip, just in the virtual environment)
I then benchmarked:
- DifferentialEquations.jl via Julia
- diffeqpy via python-jl
- diffeqpy via python-jl, with only benchmarking the solving of the problem, not the specification via de.ODEProblem()
- diffeqpy via python-jl, with only benchmarking the solving of the problem as in (d.) and also pre-compiling the problem via numba
These are the results I've obtained:
You can see the full code at https://github.com/dcolli23/benchmarking_diffeqpy
I'm almost certain I've done something wrong here, since this performance is vastly different and I know DifferentialEquations.jl to have some of the fastest ODE solver implementations out there. Do you all have any idea what I could have done wrong in my setup of the tech stack or if I'm just interfacing with the solver incorrectly?
I seriously appreciate all your help with this! Thank you so much! And I'm happy to provide any extra information as needed!
Hello, I am trying to fit an ODE with 8 external inputs from measurement data (which are interpolated), and I find BFGS very slow compared to BlackBoxOptim. I also tried ADAM from DiffEqFlux and it is also very slow. Does anyone here have any thoughts on why?
I am using LinearInterpolation() objects to get the inputs at each time t.
I have more details here: https://discourse.julialang.org/t/bfgs-very-slow-compared-to-blackboxoptim-how-to-improve-performance/49253/14
The *_itp variables are LinearInterpolation() objects from Interpolations.jl; the rest are constants enclosed in a wrapper function.
function thermal_model!(du, u, p, t)
# scaling parameters
p_friction, p_forcedconv, p_tread2road, p_deflection,
p_natconv, p_carcass2air, p_tread2carcass, p_air2ambient = p
# state variables (from u): tread, carcass, and air temperatures used below
t_tread, t_carcass, t_air = u
fxtire = fxtire_itp(t)
fytire = fytire_itp(t)
fztire = fztire_itp(t)
vx = vx_itp(t)
alpha = alpha_itp(t)
kappa = kappa_itp(t)
r_loaded = r_loaded_itp(t)
h_splitter = h_splitter_itp(t)
# arc length of tread area
theta_1 = acos(min(r_loaded - h_splitter, r_unloaded) / r_unloaded)
theta_2 = acos(min(r_loaded, r_unloaded) / r_unloaded)
area_tread_forced_air = r_unloaded * (theta_1 - theta_2) * tire_width
area_tread_contact = tire_width * 2 * sqrt(max(r_unloaded^2 - r_loaded^2, 0))
q_friction = p_friction * vx * (abs(fytire * tan(alpha)) + abs(fxtire * kappa))
q_tread2ambient_forcedconv = p_forcedconv * h_forcedconv * area_tread_forced_air * (t_tread - t_ambient) * vx^0.805
q_tread2ambient_natconv = p_natconv * h_natconv * (area_tread - area_tread_contact) * (t_tread - t_ambient)
q_tread2carcass = p_tread2carcass * h_tread2carcass * area_tread * (t_tread - t_carcass)
q_carcass2air = p_carcass2air * h_carcass2air * area_tread * (t_carcass - t_air)
q_carcass2ambient_natconv = p_natconv * h_natconv * area_sidewall * (t_carcass - t_ambient)
q_tread2road = p_tread2road * h_tread2road * area_tread_contact * (t_tread - t_track)
q_deflection = p_deflection * h_deflection * vx * abs(fztire)
q_air2ambient = p_air2ambient * h_natconv * area_rim * (t_air - t_ambient)
du[1] = der_t_tread = (q_friction - q_tread2carcass - q_tread2road - q_tread2ambient_forcedconv - q_tread2ambient_natconv)/(m_tread * cp_tread)
du[2] = der_t_carcass = (q_tread2carcass + q_deflection - q_carcass2air - q_carcass2ambient_natconv)/(m_carcass * cp_carcass)
du[3] = der_t_air = (q_carcass2air - q_air2ambient)/(m_air * cp_air)
end
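As a minimal sketch of the wrapper/closure pattern described above (a single made-up input channel and constant stand in for the eight real ones; LinearInterpolation comes from Interpolations.jl):
using DifferentialEquations, Interpolations
ts      = 0.0:0.1:10.0              # hypothetical measurement times
vx_data = 20.0 .+ sin.(ts)          # hypothetical measured input channel
function make_model(ts, vx_data)
    vx_itp = LinearInterpolation(ts, vx_data)   # interpolated external input
    τ = 2.0                                     # hypothetical constant captured by the closure
    function model!(du, u, p, t)
        du[1] = (vx_itp(t) - u[1]) / τ          # evaluate the input at time t inside the RHS
    end
    return model!
end
prob = ODEProblem(make_model(ts, vx_data), [0.0], (0.0, 10.0))
sol  = solve(prob, Tsit5())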
function explicit(y,p,t)
sqrt(1-y^2)
end
would be fine for your problem (just like the other cases)
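For reference, a minimal runnable sketch of that suggestion; the clamped variant mirrors the MATLAB min trick from the question (an illustration only, not necessarily what the original poster ended up using):
using DifferentialEquations
explicit(y, p, t) = sqrt(1 - y^2)                    # scalar form, no indexing needed
# if floating-point error pushes y slightly above 1, the same clamp works in Julia:
explicit_clamped(y, p, t) = sqrt(1 - min(y, 1.0)^2)
tspan = (0.0, π)
y0 = 0.0
prob = ODEProblem(explicit_clamped, y0, tspan)
sol  = solve(prob, Rosenbrock23())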
[slack] <torkel.loman> Is there something I can do to help solve https://discourse.julialang.org/t/strange-maxiters-numeric-instability-occured-when-solving-certain-sde/49392/9 ?
Happy to do some digging, but unsure what I should be looking for.
((((((1.0 * α₁ * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) / (α₂ + (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0))))))))) - (((1.0 * α₄) * x₁(t)) / (α₅ + x₁(t)))) - ((x₁(t) * α₆₂) / 0.1048)) + (((α₆₅ * x₃(t)) / ((α₆₆ + x₃(t)) * ((0.1048 * ((x₁(t) + x₇(t)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * 2))) / α₇₁))) / 0.1048)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * α₂₃)) - ((x₁(t) * x₅(t)) * α₂₄)
ERROR: Failed to apply rule ~~(z::_isone) * ~~x => ~x on expression (1.0 * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) * α₁
[slack] <Peter J> My code does a lot of parameter-parallel ODE solving, and I'd like to do it on the GPU, but since a lot of Julia is not supported on the GPU by DiffEqGPU (broadcast, matrix multiply, ...), would this idea work?
Given an ODE described by f, an IC u_0, and a list of parameters [p_1, p_2, ..., p_n], create a new ODE function big_f(du,u,p,t) and a new IC w_0, where w_0 is a CuArray consisting of u_0 concatenated n times, and big_f applies f separately to each copy of u_0, each with a different p_i.
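A CPU-side sketch of that construction, just to make the idea concrete (the dynamics f, the IC, and the parameter list below are made-up placeholders; a GPU version would additionally need w0 as a CuArray and a broadcast/kernel-friendly loop body):
using DifferentialEquations
f(u, p, t) = p .* u                         # hypothetical per-copy dynamics
u0 = [1.0, 2.0]                             # the IC u_0
ps = [0.1, 0.5, 1.0]                        # the parameter list [p_1, ..., p_n]
n, m = length(ps), length(u0)
w0 = repeat(u0, n)                          # u_0 concatenated n times
function big_f(dw, w, p, t)                 # apply f separately to each copy, each with its own p_i
    for i in 1:n
        idx = (i - 1) * m + 1 : i * m
        dw[idx] .= f(view(w, idx), p[i], t)
    end
end
prob = ODEProblem(big_f, w0, (0.0, 1.0), ps)
sol  = solve(prob, Tsit5())
For what it's worth, EnsembleProblem (and EnsembleGPUArray from DiffEqGPU) is built around the same per-parameter batching idea, so it may be worth trying that interface before hand-rolling big_f.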