
Alexandre Brilhante
@brilhana
what is the new dev workflow for julia 1.0? dev Optim doesn’t seem to create a new fork or new branch I can push to
Christopher Rackauckas
@ChrisRackauckas
You need to fork and set your upstream to your fork
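A minimal sketch of that fork-based workflow (YOURUSER and the branch name are placeholders; paths assume the default Julia depot):

```shell
# Sketch of a fork-based dev workflow on Julia 1.0.
julia -e 'using Pkg; Pkg.develop("Optim")'    # clones the repo into ~/.julia/dev/Optim
cd ~/.julia/dev/Optim
git remote rename origin upstream             # the registered repo becomes "upstream"
git remote add origin https://github.com/YOURUSER/Optim.jl.git   # your fork
git checkout -b my-feature
# ...edit, commit...
git push -u origin my-feature                 # push to your fork, then open a PR
```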
Antoine Levitt
@antoine-levitt
Re https://discourse.julialang.org/t/optimize-performance-comparison-optim-jl-vs-scipy/12588/8 I remember we discussed changing the default linesearch to backtracking but can't find an issue for this, do I remember wrong?
Patrick Kofod Mogensen
@pkofod
We discussed that, and I agree we should do it
I think it was just in this channel
Asbjørn made a pr but closed it again
Patrick Kofod Mogensen
@pkofod
I would just prefer to see a few benchmarks/convergence profiles first
Patrick Kofod Mogensen
@pkofod
I'll run some benchmarks with some extra starting values to the standard problems we use and make a pr later
Asbjørn Nilsen Riseth
@anriseth
@cortner and @ChrisRackauckas ; I don't know if the papers on deflation by Patrick Farrell mention his code, but in case you're interested in playing with an existing implementation for producing the bifurcation diagrams with his method you can check out https://bitbucket.org/pefarrell/defcon
Christopher Rackauckas
@ChrisRackauckas
Nope GPL
Can't look unless it's actually open sourced
Patrick Kofod Mogensen
@pkofod
Interesting to see Robert Kirby on the contributors list
Asbjørn Nilsen Riseth
@anriseth
Nope GPL
Damn GPL
Is LGPL equally problematic?
Interesting to see Robert Kirby on the contributors list
Do you know him @pkofod ?
Christopher Rackauckas
@ChrisRackauckas
LGPL is usually fine
To link
But looking at the source still has a derived work clause
Patrick Kofod Mogensen
@pkofod
I met him this summer at CEF [Computing in Economics and Finance] (the main conference of Society for Computational Economics)
I was part of a panel discussion on open source software development
He has a project called "VFI toolkit" that he was talking about. It's a Matlab project, so there was some discussion about being able to call a Matlab toolbox open source :)
(I'm currently on leave working for an open source economics related project [in Python], and the PI on that project was also part of the panel, and the one arguing against a Matlab toolbox being meaningfully free and open source)
chriselrod
@chriselrod
I met Robert Kirby a couple of weeks ago. I've heard from a grad student that he encourages grad students in the math department to use Julia, but when I met him he said he mostly uses Python himself.
Patrick Kofod Mogensen
@pkofod
@anriseth the docs build doesn't seem to be firing, do you have any idea why?
Asbjørn Nilsen Riseth
@anriseth
So the version must be updated to 1.0
We set it up like that to copy the approach by @fredrikekre and @KristofferC in Literate.jl and JuAFEM.jl
But I see that they have now moved to a new approach using the new Julia 1.0 Projects
Patrick Kofod Mogensen
@pkofod
Related to the recent default line search thoughts we had, I was thinking of benchmarking backtracking with a low c_2 as the default method. Anyone wanna place any bets on its efficiency compared to the default 0.9? :]
Asbjørn Nilsen Riseth
@anriseth
On (quasi)-Newton, I bet lower is worse. On CG and Gradient Descent, I bet lower is better :p
Patrick Kofod Mogensen
@pkofod
You mean worse as in slower? I was thinking robustness here
Asbjørn Nilsen Riseth
@anriseth
Ah, worse as in slower; take more iterations/objective evaluations
(actually, maybe it will take more objective evals but fewer iterations :p )
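A rough sketch of the kind of comparison being discussed, using Optim.jl with LineSearches.jl (the Rosenbrock function here is just an illustrative test problem, not one of the standard benchmark set mentioned above):

```julia
using Optim, LineSearches

# Rosenbrock, a classic test problem
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x0 = [-1.2, 1.0]

# Current default line search for LBFGS: HagerZhang (c_2 = 0.9)
res_hz = optimize(f, x0, LBFGS(linesearch = LineSearches.HagerZhang()))

# Candidate default under discussion: backtracking
res_bt = optimize(f, x0, LBFGS(linesearch = LineSearches.BackTracking()))

# Compare iteration counts and objective evaluations
println("HagerZhang:   ", Optim.iterations(res_hz), " iters, ",
        Optim.f_calls(res_hz), " f-calls")
println("BackTracking: ", Optim.iterations(res_bt), " iters, ",
        Optim.f_calls(res_bt), " f-calls")
```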
Jeremy
@jebej
Hello all, I'd like to implement a projected gradient descent algorithm with Optim, where, after a gradient step, the state vector is modified to satisfy some constraint. I see some references in the Manifold optimization section of the docs suggesting this is possible, but I don't understand how to implement the retract! function. If the current state is x, and I have a function y = project(x) that I want to use to project the state, how would that work?
cossio
@cossio
How do I diagnose LoadError: ArgumentError: Value and slope at step length = 0 must be finite? Any hints on what it might be? (I'm using LBFGS.) The weird thing is that I compute the gradient manually and it's finite.
Antoine Levitt
@antoine-levitt
Hi, just logged here for the first time in forever, @jebej if you still have questions on manifolds stuff don't hesitate to open an issue on the repo asking for clarifications
Romain Petit
@rpetit
Hi! I'd like to use Optim to perform a standard gradient descent, except that after each step I would like to run an instruction that modifies the state vector. Would that be possible?
Romain Petit
@rpetit
Just to provide a bit more information: what I'm actually doing is minimizing an energy defined over closed curves in the plane. The way I handle curves numerically is to store a list of points and assume the curve is a polygon whose vertices are the stored points. I already implemented functions to compute the gradient of my energy, and conducted some experiments by coding my own gradient descent loop. Everything behaves nicely provided that after each gradient step I somehow resample points along my curve to ensure they are regularly spaced (with respect to the curvilinear abscissa). That's the step I would like to perform using Optim.jl's gradient descent (resampling points along the curve, i.e. modifying the state vector right after a gradient step). Sorry for posting such a long message!
Christoph Ortner
@cortner
That sounds a bit like projecting onto a manifold. @antoine-levitt has implemented that capability.
Romain Petit
@rpetit
@cortner Thanks, I finally managed to do what I wanted by implementing a new manifold and the proper retract! method!
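The kind of implementation described above can be sketched as follows (ResampledCurve and resample are hypothetical names standing in for the user's own types and re-spacing routine; retract!/project_tangent! is Optim.jl's manifold interface):

```julia
using Optim

# Hypothetical manifold: curves stored as point lists, resampled after each step
struct ResampledCurve <: Optim.Manifold end

# Called after each step: map the state back onto the constraint set.
# `resample` stands in for the user's own re-spacing routine.
function Optim.retract!(::ResampledCurve, x)
    x .= resample(x)
    return x
end

# Map the gradient onto the tangent space; the identity is the simplest choice
# when only the post-step resampling matters.
Optim.project_tangent!(::ResampledCurve, g, x) = g

# Usage, given an objective f, in-place gradient g!, and initial curve x0:
# result = optimize(f, g!, x0, GradientDescent(manifold = ResampledCurve()))
```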
Christoph Ortner
@cortner
that's great. May I ask what you are implementing? String method, NEB, MAP? Or something entirely different?
Romain Petit
@rpetit
I'm not sure if that's what you wanted to know but I'm using this to approximately solve a calculus of variations problem to perform super-resolution on signals with a special structure (you can think of piecewise constant images)!
Christoph Ortner
@cortner
Ok - so very different from what I had in mind. But it sounded similar, so I'd be interested to learn more about it. Do you have a paper or can you recommend a paper on this?
Romain Petit
@rpetit
I believe this approach to be new, but I'll be happy to send you a preprint as soon as I manage to write something about all this!
Christoph Ortner
@cortner
Sounds great - thanks!
Octoboye54
@Octoboye54
Guys?
import jerry
@import_jerbear_twitter
Is there a way to print out more information like what JuMP does?
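Assuming the question is about per-iteration output, Optim.jl's Options support tracing, e.g.:

```julia
using Optim

f(x) = sum(abs2, x)
res = optimize(f, randn(3), LBFGS(),
               Optim.Options(show_trace = true,       # print a line per iteration
                             show_every = 1,          # how often to print
                             extended_trace = true))  # also show x and the gradient
```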