Adam
@adamglos92
When I didn't use Fminbox, the arguments sometimes just kept getting larger and larger. When I use Fminbox, I sometimes get stuck close to the boundary. Imagine I optimize the sin function on [0, 2π]. Then 0 is a local minimum, but it is not interesting. The problem is much worse for multivariate optimization.
Antoine Levitt
@antoine-levitt
I wouldn't use fminbox
It should be fine without it
Adam
@adamglos92
OK, maybe I misunderstood how it works without Fminbox, I will recheck it
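
For reference, the scenario Adam describes can be sketched as follows (a minimal example, not part of the original conversation; the interior minimizer of sin on [0, 2π] is 3π/2 ≈ 4.71):

```julia
using Optim

f(x) = sin(x[1])              # wrap the 1-D objective for Optim
lower, upper = [0.0], [2pi]
x0 = [3.0]                    # start in the interior

# Fminbox wraps an inner first-order method; with no gradient supplied,
# Optim falls back to finite-difference gradients.
res = optimize(f, lower, upper, x0, Fminbox(GradientDescent()))
Optim.minimizer(res)          # should land near 3pi/2, not the boundary point 0
```
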
Christopher Ross
@cpross90

I recently made a post on the Discourse forum about finding a way to access the approximated Hessian from a converged solution, to use as the preconditioner for a future minimization of a slightly altered initial state; I'm using BFGS to bifurcate from previous minimizers. After looking at Optim.jl, libLBFGS, and LBFGS++ I couldn't find an interface, so I imagine it is most likely not available in Optim.jl either. I wanted to double check here whether that's viable with the current interface, whether there would be any interest in adding it to the package, or whether I should try to extend this on my own in Julia somehow.

So far, the only package outside of Julia that implements this is SciPy:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_bfgs.html

Antoine Levitt
@antoine-levitt
this forum isn't used much, you might be better off opening an issue
do you want BFGS or LBFGS?
in any case it should simply be a matter of getting out the data you need from the function
I'd suggest hacking it to see if it works
but optim does a good job of storing all its info in a state object
so you just need access to that
in principle I guess the state should just be added to MultivariateOptimizationResults?
Antoine Levitt
@antoine-levitt
or actually you can do it very simply: the optimize function is
optimize(d::D, initial_x::Tx, method::M,
         options::Options = Options(;default_options(method)...),
         state = initial_state(method, options, d, initial_x))
so if you just pass a state it'll get mutated and you can get what you want from it after
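
Concretely, that pattern might look like the sketch below (the `OnceDifferentiable` wrapper comes from NLSolversBase; the fields on the LBFGS state object are internals and may differ between Optim versions):

```julia
using Optim, NLSolversBase

f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x0 = zeros(2)

method  = LBFGS()
options = Optim.Options()
d       = OnceDifferentiable(f, x0; autodiff = :forward)
state   = Optim.initial_state(method, options, d, x0)

res = optimize(d, x0, method, options, state)
# `state` has been mutated in place by the run. For LBFGS the curvature
# information is held as stored correction pairs (history fields) rather
# than an explicit inverse-Hessian matrix, so it would need to be
# assembled if a dense preconditioner is required.
```
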
Christopher Ross
@cpross90
@antoine-levitt Thanks for the response. Sorry about the confusion, LBFGS is what I am trying to work with. That's awesome, I'll try that out now!
Antoine Levitt
@antoine-levitt
cool, let me know how that goes!
jxc100
@jxc100
Is Optim reentrant? Can I optimize a function that itself calls optimize()? In a message of May 10, @antoine-levitt mentions a "state" object; is this one global object (i.e., non-reentrant) or local to a specific run?
Antoine Levitt
@antoine-levitt
It should be ok
Yeah pretty sure it's OK, report an issue if it's not the case
jxc100
@jxc100
Thanks, so far so good!
Ilja Kantorovitch
@IljaK91
Hey everyone, not sure if many people are active here, but is there a way to use box-constrained optimization without providing a gradient? There is no example in the documentation for that.
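
For reference, Optim does support this; a sketch under the assumption that the Fminbox and SAMIN interfaces work as documented (when no gradient function is supplied, the inner first-order method uses finite differences):

```julia
using Optim

f(x) = (x[1] - 2.0)^2 + (x[2] + 1.0)^2
lower = [0.0, -2.0]
upper = [1.5,  2.0]
x0    = [1.0,  0.0]

# Box constraints with a finite-difference gradient under the hood:
res = optimize(f, lower, upper, x0, Fminbox(LBFGS()))

# A fully derivative-free alternative: simulated annealing with bounds.
res2 = optimize(f, lower, upper, x0, SAMIN(),
                Optim.Options(iterations = 10^5))
```
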
Antoine Levitt
@antoine-levitt
You'd be better off pinging @pkofod on slack or opening an issue
John Washbourne
@jkwashbourne

Hi All, new here ... We have been using Optim.jl and LBFGS for a while, and for our particular problems the combination of alphaguess = LineSearches.InitialQuadratic() and linesearch = LineSearches.MoreThuente() works best. I thought I would try that on the Rosenbrock example from the front page, and it beats BFGS pretty handily, at least in terms of operation count: almost 2x. Thought that was worth mentioning.

Where is a pointer to the slack channel please? Cheers

Antoine Levitt
@antoine-levitt
There's no specific slack channel, but there's #math-optimization
In my experiments a simple backtracking worked best, I guess it depends on the problem...
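
The two line-search setups being compared would look roughly like this (a sketch using the LineSearches.jl API):

```julia
using Optim, LineSearches

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Quadratic initial step guess + More-Thuente line search
res_mt = optimize(rosenbrock, zeros(2),
                  LBFGS(alphaguess = LineSearches.InitialQuadratic(),
                        linesearch = LineSearches.MoreThuente());
                  autodiff = :forward)

# Simple backtracking, for comparison
res_bt = optimize(rosenbrock, zeros(2),
                  LBFGS(linesearch = LineSearches.BackTracking());
                  autodiff = :forward)
```
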
John Washbourne
@jkwashbourne
Thanks @antoine-levitt for the reply. Our problem has very costly operations, so we scrutinize them. I think I may have to dive down the linesearch rabbit hole eventually, as there is some behavior I think I want to change. cheers
Segawa@動く人形キボウちゃん
@segawachobbies_twitter
I started learning the Julia language yesterday. I'm hoping that Julia might be useful for numerical optimization in my research.
One question: is there any way to get or check, in symbolic form, the Hessian obtained by automatic differentiation in Optim.jl?
Segawa@動く人形キボウちゃん
@segawachobbies_twitter

fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

For example, we want to get the following Hessian from the above objective function.

function fun_hess!(h, x)
    h[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
    h[1, 2] = -400.0 * x[1]
    h[2, 1] = -400.0 * x[1]
    h[2, 2] = 200.0
end

Christopher Rackauckas
@ChrisRackauckas
@segawachobbies_twitter you can use ModelingToolkit to test it
Segawa@動く人形キボウちゃん
@segawachobbies_twitter

@ChrisRackauckas

@segawachobbies_twitter you can use ModelingToolkit to test it

using ModelingToolkit

@variables x[1:2]

result = ModelingToolkit.hessian((1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2, x)

I tried the above code in ModelingToolkit and successfully got the ideal result! Thank you very much.

Segawa@動く人形キボウちゃん
@segawachobbies_twitter
I used ModelingToolkit to derive the gradient and Hessian matrices from the objective function alone, and then optimized with them.
I compared this against automatic differentiation, and during optimization this method is roughly 100 times faster.
I think this is one of the options for fast optimization starting from just the objective function.
using ModelingToolkit
using Optim

@variables x[1:2]

f = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
g = ModelingToolkit.gradient(f, x)
h = ModelingToolkit.hessian(f, x)

buildedF = build_function(f, x[1:2])
buildedG = build_function(g, x[1:2])
buildedH = build_function(h, x[1:2])

newF = eval(buildedF)
newG! = eval(buildedG[2])
newH! = eval(buildedH[2])

initial_x = zeros(2)

@time Optim.minimizer(optimize(newF, initial_x, Newton(); autodiff=:forward))
@time Optim.minimizer(optimize(newF, newG!, newH!, initial_x, Newton()))
Christian Rorvik
@ancapdev
Hi everyone. An open question more than a proposal: looking at Optim, I don't think there's any way to evaluate an objective function over a set of parameter vectors (e.g., for solvers that work on a population, like particle swarms). My specific use case involves a problem where I can amortize a lot of computation by evaluating multiple parameter vectors simultaneously, but I can't easily factor the problem to extract and pre-compute parameter-independent parts. Has anyone given any thought to how the API might be adapted to support this, or would there be any interest in contributed solutions?
Antoine Levitt
@antoine-levitt
I don't understand why you can't precompute exactly. You could do this easily with a closure for instance
Christian Rorvik
@ancapdev
For additional context, I have two use cases. One is building signals over high-frequency market data. Signals may be stateful, i.e., a function over a long history of events, and their state is also a function of the parameters under optimization. Market data is played back through an event-processing API, which involves building order books and a large amount of IO, all of which can be amortized over a set of signal parameters but whose cost can't be factored out ahead of time. The second use case is simulating trading strategies over historical data, which again, and even more so, doesn't lend itself to precomputing much; but if a set of parameter vectors is available, they can be evaluated simultaneously to amortize shared costs, or even scaled out on a cluster (something I've hand-built a parallel optimizer for in the past). What I really want from an optimization package, though, is not for it to handle parallelism or simultaneous evaluation of the objective, but simply to give me a set of parameters to evaluate. Those two problems are fairly orthogonal.
Antoine Levitt
@antoine-levitt
OK that makes sense
Gradient-type solvers are not parallel, but you might have more success with PSO-type methods
I don't think it's implemented in optim though
Jan Lukas Bosse
@jlbosse
Hey everyone,
I recently implemented simultaneous perturbation stochastic approximation (SPSA) and model gradient descent (MGD, first developed in https://arxiv.org/abs/2005.11011) in a roughly Optim.jl-compatible way. Is there interest in adding these methods to Optim.jl? If yes, I would make my code fully compatible with the Optim.jl interface and open a PR
Antoine Levitt
@antoine-levitt
Optim is in the process of being rewritten (allegedly 😂), open an issue on github to see what @pkofod says
evrenmturan
@evrenmturan
Hi, I found Optim.KrylovTrustRegion but I don't see any documentation for it. It seems to have been added 15 months ago. Does anyone know if it's usable/stable?
Christopher Rackauckas
@ChrisRackauckas
It's usable
we make use of it in the DiffEqFlux stuff
Christoph Ortner
@cortner
For parameter optimisation?
Ahmed
@AhmedAlreweny
I am looking for a Julia implementation of this L-BFGS-B code: http://users.iems.northwestern.edu/~nocedal/lbfgsb.html
Bryson Cale
@astrobc1
Is there anything like a dedicated "model parameters" type/package compatible with Optim.jl, or possibly with another mathematical optimization package (as opposed to using a plain vector to store parameters)? If not, is this something the Optim.jl package could benefit from?
Bryson Cale
@astrobc1
Hi, I'm looking into making an Optim.jl-compatible solver/optimizer following the instructions here: https://julianlsolvers.github.io/Optim.jl/stable/#dev/contributing/. I'm also trying to use https://github.com/JuliaNLSolvers/Optim.jl/blob/master/src/multivariate/solvers/zeroth_order/nelder_mead.jl as a sort of template, but I'm confused about where the methods value, value!, and value!! are defined. Thanks!
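
For what it's worth, those functions live in NLSolversBase.jl, the package Optim builds its objective wrappers on; a minimal sketch of the caching behavior they provide:

```julia
using NLSolversBase

f(x) = sum(abs2, x)
od = OnceDifferentiable(f, zeros(2))

value!!(od, [1.0, 2.0])  # always evaluates f and caches the result
value(od)                # returns the cached value without re-evaluating
value!(od, [1.0, 2.0])   # evaluates only if x differs from the cached x
```
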
Louis Ponet
@louisponet
Hi All, has anyone ever looked into the firefly optimization algorithm? It's basically a superset of particle swarm (see https://link.springer.com/chapter/10.1007/978-3-642-04944-6_14)
Would Optim.jl be a good place to implement it? One thing I was wondering about is that the algorithm needs some notion of distance between two fireflies, which in the problem I'm working on would be more complicated than a simple Euclidean distance
If that doesn't really clash with the rest of Optim, I'd be happy to give implementing it a try