jerry
@jerbear_twitter
Is there a way to print out more information like what JuMP does?
Adam
@adamglos92
Hi! What happens when I pass negative values to x_tol and f_tol when using LBFGS? Will it only use norm of the gradient as a convergence measure?
Antoine Levitt
@antoine-levitt
I believe so yes, but you can check
Adam
@adamglos92
the thing is that the output is still |x - x'| = 9.96e-06 ≰ 0.0e+00
Antoine Levitt
@antoine-levitt
sure, i think it just sets it to zero instead of negative
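The behaviour discussed above can be sketched as follows (untested sketch; assumes the `x_tol`/`f_tol`/`g_tol` keyword names of `Optim.Options` — check your Optim.jl version's documentation):

```julia
# Sketch: disable the x- and f-based convergence checks so that only the
# gradient norm decides convergence. Negative tolerances appear to be
# clamped to zero, so |x - x'| and |f(x) - f(x')| tests become
# effectively unsatisfiable except under exact stagnation.
using Optim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

res = optimize(rosenbrock, zeros(2), LBFGS(),
               Optim.Options(x_tol = -1.0,   # effectively disabled
                             f_tol = -1.0,   # effectively disabled
                             g_tol = 1e-8))  # gradient norm is the real test
```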
@rpetit i just saw your messages, glad the functionality is useful!
Adam
@adamglos92
never mind, I ran different code; the output is correct ;)
Adam
@adamglos92

Hi! I've got the following output:

```
21.085590 seconds (3.80 M allocations: 105.695 MiB, 0.04% gc time)

 * Status: success

 * Candidate solution
   Minimizer: [4.09e-01, 5.23e+00, 6.28e+00, ...]
   Minimum:   8.416426e-01

 * Found with
   Algorithm:     Fminbox with L-BFGS
   Initial Point: [4.54e-01, 6.00e+00, 5.54e+00, ...]

 * Convergence measures
   |x - x'|               = 0.00e+00 ≰ -1.0e+00
   |x - x'|/|x'|          = 0.00e+00 ≰ -1.0e+00
   |f(x) - f(x')|         = 0.00e+00 ≰ -1.0e+00
   |f(x) - f(x')|/|f(x')| = 0.00e+00 ≰ -1.0e+00
   |g(x)|                 = 1.53e-03 ≰ 1.0e-06

 * Work counters
   Seconds run: 21 (vs limit Inf)
   Iterations:  6
   f(x) calls:  11450
   ∇f(x) calls: 11450
```

My question is: why is the status "success" when none of the convergence measures is satisfied?
Antoine Levitt
@antoine-levitt
yeah I've run into similar issues in the past
@pkofod
can you open an issue? probably will be fixed in @pkofod's upcoming rewrite...
Adam
@adamglos92
I can, although I may have a problem producing an MWE. My function is really complicated...
Adam
@adamglos92
Hi again. I receive the error Value and slope at step length = 0 must be finite. when using L-BFGS. My function is bounded and all its derivatives are bounded. Do you know the reason for this behaviour?
Antoine Levitt
@antoine-levitt
probably it's a nan or something?
print or debug your objective function and see at which point it's evaluated
Adam
@adamglos92
I just printed it. All the values of the objective function, argument, and gradient are at most 10 in absolute value, and there is no NaN or Inf. I forgot to mention that I actually use Fminbox on L-BFGS. Might that cause the effect?
Adam
@adamglos92
although I don't think that is the case, as the argument is far from the boundary in every dimension
Adam
@adamglos92
It seems that passing negative values for the convergence measures may cause this effect
Adam
@adamglos92
Hi again, is there a standard way to work with periodic functions? I wrote a manifold in which I take the argument modulo the period, but I wonder whether there is a more standard way. Fminbox is not always good, as I sometimes get stuck close to the bounds, which are not the global minimum.
Antoine Levitt
@antoine-levitt
can't you modulo your objective function?
Adam
@adamglos92
what do you mean by "modulo your objective function"? You mean modulo inside?
Antoine Levitt
@antoine-levitt
I don't understand the problem actually, what's the problem if you have a periodic function?
Adam
@adamglos92
When I didn't use Fminbox, sometimes the arguments just kept getting larger and larger. When I use Fminbox, sometimes I get stuck close to the boundary. Imagine optimizing the sin function on [0, 2π]. Then 0 is a local minimum, but it is not interesting. The problem is much worse for multivariate optimization.
Antoine Levitt
@antoine-levitt
I wouldn't use fminbox
It should be fine without it
Adam
@adamglos92
OK, maybe I misunderstood how it works without Fminbox, I will recheck it
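The "modulo inside the objective" idea above can be sketched like this (untested sketch; the objective and starting point are made up for illustration):

```julia
# Sketch: a 2π-periodic objective can be optimized without Fminbox —
# the iterates may wander outside [0, 2π), but since the function is
# periodic the value is unaffected, and the answer can be folded back
# into the fundamental domain afterwards.
using Optim

periodic_f(x) = sin(x[1]) + cos(2 * x[2])  # 2π-periodic in both variables

res  = optimize(periodic_f, [10.0, -7.0], LBFGS())
xmin = mod2pi.(Optim.minimizer(res))  # map the minimizer back into [0, 2π)
```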
Christopher Ross
@cpross90

I recently made a post to the Discourse forum about trying to find a way to access the approximated Hessian from a converged solution, to use as the preconditioner on a future minimization of a slightly altered initial state; I'm using BFGS to bifurcate from previous minimizers. After looking at Optim.jl, libLBFGS, and LBFGS++ I couldn't find an interface, so I imagine it is most likely not available in Optim.jl either. I wanted to double-check here whether that's viable with the current interface, whether there would be any interest in adding it to the package, or whether I should try to extend this on my own in Julia somehow.

So far, the only package outside of Julia that implements this is SciPy:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_bfgs.html

Antoine Levitt
@antoine-levitt
this forum is not very used, you might be better off opening an issue
do you want BFGS or LBFGS?
in any case it should simply be a matter of getting out the data you need from the function
I'd suggest hacking it to see if it works
but optim does a good job of storing all its info in a state object
so you just need access to that
in principle I guess the state should just be added to MultivariateOptimizationResults?
Antoine Levitt
@antoine-levitt
or actually you can do it very simply: the optimize function is

```julia
optimize(d::D, initial_x::Tx, method::M,
         options::Options = Options(;default_options(method)...),
         state = initial_state(method, options, d, initial_x))
```

so if you just pass a state it'll get mutated and you can get what you want from it after
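The trick described above might look like this (untested sketch; `OnceDifferentiable` is the objective wrapper Optim uses, but the internal state field names at the end are assumptions and may differ between Optim.jl versions):

```julia
# Sketch: build the state yourself, pass it to optimize, and inspect the
# mutated state after the call returns.
using Optim

f(x)  = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x0    = zeros(2)

method  = LBFGS()
options = Optim.Options()
d       = OnceDifferentiable(f, x0; autodiff = :finite)
state   = Optim.initial_state(method, options, d, x0)

res = optimize(d, x0, method, options, state)

# For L-BFGS, the curvature information lives in the state; the field
# names below are assumptions — check with fieldnames(typeof(state)):
# state.dx_history, state.dg_history
```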
Christopher Ross
@cpross90
@antoine-levitt Thanks for the response. Sorry about the confusion, LBFGS is what I am trying to work with. That's awesome, I'll try that out now!
Antoine Levitt
@antoine-levitt
cool, let me know how that goes!
jxc100
@jxc100
Is Optim reentrant? Can I optimize a function that itself calls optimize()? In a message of May 10, @antoine-levitt mentions a "state" object: is this one global object (i.e., non-reentrant) or local to a specific refinement?
Antoine Levitt
@antoine-levitt
It should be ok
Yeah pretty sure it's OK, report an issue if it's not the case
jxc100
@jxc100
Thanks, so far so good!
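A minimal sketch of the nested use discussed above (untested; the toy objectives are made up), which works because each optimize call builds its own local state rather than sharing a global one:

```julia
# Sketch: the outer objective solves an inner optimization problem on
# every evaluation — reentrancy means the two solves don't interfere.
using Optim

# Inner solve: minimum over x of (x - p)^2 + 1, for a given parameter p.
inner(p) = Optim.minimum(optimize(x -> (x[1] - p[1])^2 + 1.0, [0.0], BFGS()))

# Outer solve: minimize inner(p) + p^2 over p.
outer = optimize(p -> inner(p) + p[1]^2, [3.0], BFGS())
```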
Ilja Kantorovitch
@IljaK91
Hey everyone, not sure if many people are active here, but is there a way to use box-constrained optimization without providing a gradient? There is no example in the documentation for that.
Antoine Levitt
@antoine-levitt
You'd be better off pinging @pkofod on slack or opening an issue
John Washbourne
@jkwashbourne

Hi All, new here ... We have been using Optim.jl and LBFGS for a while, and for our particular problems the combination of alphaguess = LineSearches.InitialQuadratic() and linesearch = LineSearches.MoreThuente() works best. I thought I would try that on the Rosenbrock example on the front page, and it beats BFGS pretty handily, at least in terms of operation count: almost 2x. Thought that was worth mentioning.

Where is a pointer to the slack channel please? Cheers

Antoine Levitt
@antoine-levitt
There's no specific slack channel, but there's #math-optimization
In my experiments a simple backtracking worked best, I guess it depends on problems...
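The combination mentioned above can be sketched as follows (untested sketch; assumes the `alphaguess`/`linesearch` keyword names of the LBFGS constructor):

```julia
# Sketch: configure L-BFGS with InitialQuadratic as the initial step-length
# guess and MoreThuente as the line search, then run it on Rosenbrock.
using Optim, LineSearches

method = LBFGS(alphaguess = LineSearches.InitialQuadratic(),
               linesearch = LineSearches.MoreThuente())

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, zeros(2), method)
```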
John Washbourne
@jkwashbourne
Thanks @antoine-levitt for the reply. Our problem has very costly operations, so we scrutinize performance. I think I may have to dive down the line-search rabbit hole eventually, as there is some behavior I think I want to change. Cheers