Miles Lubin
@mlubin
One big change I've been thinking about is whether we can choose a set of core solvers and move their MOI wrappers (but not binary dependencies or C wrappers) into the MOI repo. Then a new feature can be merged only when the wrappers are passing the new tests.
This is how cvxpy and other less modular modeling languages work.
Robert Schwarz
@rschwarz
But this would only help these core solvers?
Robert Schwarz
@rschwarz
Rather than moving the solver wrapper code into MOI, this could also be done with a separate CI stage, just like with MINLPTests? Then one would avoid the circular dependencies somewhat, and keep the repos separate.
Eric Hanson
@ericphanson
as a maintainer of a non-core solver (SDPAFamily.jl), I agree it would be nice to not have our tests break with new MOI patch releases. Maybe MOI tests should use an opt-in model instead of opt-out? (i.e. specify which tests run based on which features you support, instead of the current system where you specify which features you don't support)
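(For illustration, a minimal sketch of the opt-out pattern being discussed, roughly as solver test suites looked around MOI 0.9; MyOptimizer is hypothetical, and the exact test-set and config names may differ between releases.)

```julia
using MathOptInterface
const MOI = MathOptInterface
const MOIT = MOI.Test

# Hypothetical solver wrapper under test.
optimizer = MyOptimizer()
config = MOIT.TestConfig(atol = 1e-6, rtol = 1e-6)

# Opt-out style: every test in the set runs unless it is excluded by name,
# so a new test added in an MOI release runs (and may fail) automatically.
MOIT.unittest(optimizer, config, ["number_threads", "solve_result_index"])
```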
Miles Lubin
@mlubin
@rschwarz How would a separate CI stage work? Changes in MOI require changes in solver repos.
Robert Schwarz
@rschwarz
I guess there remains the cyclic dependency of newly tagged versions, between MOI and solvers, so I don't know.
Miles Lubin
@mlubin
@ericphanson we haven't started discussing which solvers would be core solvers
Robert Schwarz
@rschwarz
At least the MOI CI would show which solvers will fail, so that these can be notified.
You can also run CI and accept failure in the PR context.
Miles Lubin
@mlubin
@rschwarz If you just want notifications, you can set Travis to run daily and send you an email when it fails.
Robert Schwarz
@rschwarz
Good to know, thanks.
Eric Hanson
@ericphanson
@mlubin true, I shouldn't have assumed one way or another.
Miles Lubin
@mlubin
I would also like the cost of adding new features to be more apparent on the MOI PR and more of the responsibility of the person who adds the feature
Joaquim Dias Garcia
@joaquimg
I'd like the first step to be: "if there are changes in tests, bump the minor version, not the patch".
Another option would be to change the solvers' tests to explicitly enumerate the desired tests, so no new default test appears out of the blue.
Miles Lubin
@mlubin
Opting-into tests just means that users are going to be the first to discover the breakage or some basic JuMP function not working with a given solver
Using JuMP shouldn't be like playing minesweeper to find unimplemented attributes
Joaquim Dias Garcia
@joaquimg
Sure thing.
The least we could do is automatically open GitHub issues in all solver repos after some feature is added
Robert Schwarz
@rschwarz
Nice idea, linking to the MOI PR. Then, when solvers start to implement the change, other solvers can easily find the linked solver PRs and look at the diffs.
Miles Lubin
@mlubin
A separate CI stage would work for that
Henrique Becker
@henriquebecker91
@odow sorry for the delay, I had a conference yesterday. I committed that little tidbit of code concerning deepcopy now. I used AbstractModel (instead of Model) as you suggested.
mtanneau
@mtanneau
I just noticed that the MOI definition of the primal exponential cone (here) does not match Mosek's (here)
Specifically, if (x, y, z) is in MOI's version of the exponential cone, then (z, y, x) is in Mosek's version of the cone.
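(A small illustration of the reordering described above; `moi_to_mosek` is a hypothetical helper, not part of MOI or MosekTools.)

```julia
# MOI's exponential cone is {(x, y, z) : y * exp(x / y) <= z, y > 0},
# while Mosek expects the entries in the reverse order, so a point
# (x, y, z) in MOI's convention maps to (z, y, x) in Mosek's.
moi_to_mosek(v::AbstractVector) = v[[3, 2, 1]]

v_moi = [1.0, 2.0, 2.0 * exp(0.5)]   # on the boundary of MOI's cone
v_msk = moi_to_mosek(v_moi)          # [2.0 * exp(0.5), 2.0, 1.0]
```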
mtanneau
@mtanneau
I was solving exponential cone problems with Mosek (via MOI) and getting wrong results. Using the above index correspondence before passing the conic constraints fixed it.
Looks like ECOS, SCS and CVX use the same index convention as MOI. I've opened JuliaOpt/MosekTools.jl#42.
Benoît Legat
@blegat

Should it be ArgumentError or MOI.UnsupportedAttribute? (or else)

@mtanneau It cannot be UnsupportedAttribute as MOI.is_copyable(MOI.ConstraintDual()) is false.

Benoît Legat
@blegat

Using JuMP shouldn't be like playing minesweeper to find unimplemented attributes

I agree, I prefer opt-out. Automatically opening issues in all solver repos when new tests such as number_threads and solve_result_index are added is a good idea.
We might use supports_constraint to disable unit tests, as we can assume that supports_constraint is already tested in contlinear and contconic.

mtanneau
@mtanneau
Update on Clp.jl: only two items left to do.
1) MOI.DualObjectiveValue: I can't find the corresponding function in the C interface. Not supporting this means I'd have to disable quite a few tests for the build to pass. (A possible fallback is sketched after this message.)
2) MOI.ConstraintDual for infeasible models. I'm currently looking into this; the question is whether it's a Clp bug or a me bug. If it's a Clp bug, I think the safe option would be to raise an error when querying a primal infeasibility certificate. ConstraintDual appears to work fine for the other cases.
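(Regarding item 1, a rough sketch of how a dual objective value could be assembled from row duals and reduced costs for a minimization LP with ranged constraints and variable bounds; this is generic LP duality, not a description of Clp's C interface.)

```julia
# For  min c'x  s.t.  L <= Ax <= U,  l <= x <= u,  the dual objective is the
# sum of bound terms picked by the signs of the row duals y and reduced costs d.
bound_term(lo, hi, dual) = dual > 0 ? lo * dual : dual < 0 ? hi * dual : 0.0

function dual_objective_value(L, U, l, u, y, d)
    rows = sum(bound_term(L[i], U[i], y[i]) for i in eachindex(y))
    cols = sum(bound_term(l[j], u[j], d[j]) for j in eachindex(d))
    return rows + cols
end
```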
Henrique Becker
@henriquebecker91
I am using Gurobi 0.7.2 and JuMP 0.20.0. I deleted about 30k variables from a model with 50k variables and it took many minutes (much longer than creating/solving the model); after deleting there was a message: "Warning: excessive time spent in model updates. Consider calling update less frequently." I see that a similar problem was already dealt with in the recent past: JuliaOpt/Gurobi.jl#216. Are those changes already included in the latest version? Or is there a version to be released that solves them? Should I open an issue? If the issue is not too deep I may try to contribute.
Oscar Dowson
@odow
Deleting variables and constraints is slow and hard to fix. Why do you want to delete so many?
BridgingBot
@GitterIRCbot
[slack] <ExpandingMan> hello all, any chance of getting an 0.20.1 release to fix the sparse matrix issues in Julia 1.3?
mtanneau
@mtanneau
@henriquebecker91 Gurobi updates models in a lazy fashion. That means all modifications are buffered until an update function is called.
To ensure consistency of variable/constraint indices, the wrapper performs that update every time a variable/constraint is deleted. Hence, deleting lots of variables/constraints is slow.
There's no simple fix because of the need to ensure that the indices will remain consistent: when deleting a variable, the indices of all subsequent variables are shifted in the Gurobi object, while the MOI indices are not. Thus the MOI<-->GRB index correspondence must be updated.
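(A toy illustration of the bookkeeping described above, not the actual Gurobi.jl code: after one column is deleted, every MOI variable mapped to a later Gurobi column must be shifted down by one.)

```julia
# moi_to_grb maps MOI variable indices (as integers here) to Gurobi columns.
function delete_column!(moi_to_grb::Dict{Int,Int}, moi_index::Int)
    deleted_col = moi_to_grb[moi_index]
    delete!(moi_to_grb, moi_index)
    for k in collect(keys(moi_to_grb))
        if moi_to_grb[k] > deleted_col
            moi_to_grb[k] -= 1   # Gurobi shifted this column down by one
        end
    end
    return moi_to_grb
end

delete_column!(Dict(1 => 1, 2 => 2, 3 => 3), 2)   # Dict(1 => 1, 3 => 2)
```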
Henrique Becker
@henriquebecker91

@odow I am reproducing and improving on the method presented in a paper (https://pubsonline.informs.org/doi/abs/10.1287/ijoc.2016.0710). It involves a "final pricing" step done after solving the continuous relaxation which, if done right, should remove 40~70% of the variables, making the model considerably easier to solve. The authors have lost their code (one of the reasons I am re-implementing it in Julia), but by their description the deletion was done the same way I am trying to do it.

@mtanneau hmm, are you saying that JuMP is eager by default (and should stay this way)? Or that it was too much work and nobody has implemented a lazy layer inside the wrapper that keeps an index correspondence table and only updates when optimize is called? It seems to me that this kinda throws away the utility of using JuMP over a solver with lazy update semantics.

I will try to find a workaround for the time being. Probably I will need to create the model again, without the variables, and copy what I can from the old model, translating from the old indexes to the new ones.

Miles Lubin
@mlubin
FYI we are transferring juliaopt.org from a private account to NumFOCUS today. This could involve some downtime as the DNS is reconfigured.
Robert Schwarz
@rschwarz
@henriquebecker91 Instead of removing the variables on the JuMP side, could you instead fix them to 0 (or some lower bound)? Then Gurobi would be able to remove all of these variables in presolve, but on the JuMP side the indices remain valid.
I understand (from reading the comments above) that adding variables and constraints to Gurobi is lazy, but removing variables is eager, or else one might end up in an inconsistent state.
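(A minimal sketch of this workaround; `model` is assumed to exist already and `to_remove` is a hypothetical vector of the JuMP variables to neutralize.)

```julia
using JuMP

# Instead of deleting, fix the unwanted variables to zero and let Gurobi's
# presolve remove them; the JuMP variable references stay valid.
for v in to_remove               # to_remove::Vector{VariableRef}, assumed given
    fix(v, 0.0; force = true)    # force = true overrides any existing bounds
end
```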
Miles Lubin
@mlubin

by their description the deletion was done the same way I am trying to do it

Was it done using the Gurobi C API or the C++/Python API? The C++/Python APIs provide references that aren't invalidated when you delete variables. When deleting a variable in the C API, it potentially invalidates all previous indices.

Oscar Dowson
@odow
There is definitely room for improvement in how we manage the lazy updating. It would involve more caching on the Julia side, and you would have to think through different sequences of actions. For example, if you've just deleted a variable, we require an update before you can add one, or query something about the model state, since if we don't update, Gurobi will return information from the un-updated model. However, if you delete a variable, we probably don't need to update if the previous operation was also deleting a variable.
The work-around is probably to use JuMP in manual mode, which isn't documented very well. Build a JuMP model, attach the solver and solve. Then detach, make a lot of changes, like deleting variables. Then re-attach and solve again
It also looks like the paper used CPLEX, which doesn't have the same lazy updating semantics
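(A rough sketch of the manual-mode workflow described above, approximately as of JuMP 0.20; the `caching_mode` keyword and the `MOIU.reset_optimizer`/`MOIU.attach_optimizer` helpers are my best guess at the right incantation and may differ between versions.)

```julia
using JuMP, Gurobi
import MathOptInterface
const MOI = MathOptInterface
const MOIU = MOI.Utilities

# Build the model with a manually managed caching optimizer.
model = Model(with_optimizer(Gurobi.Optimizer), caching_mode = MOIU.MANUAL)
@variable(model, x[1:1000] >= 0)
@objective(model, Min, sum(x))

MOIU.attach_optimizer(backend(model))   # copy the cached model into Gurobi
optimize!(model)

# Detach, apply many modifications against the cache only, then re-attach.
MOIU.reset_optimizer(backend(model))
for xi in x[1:600]
    delete(model, xi)
end
MOIU.attach_optimizer(backend(model))
optimize!(model)
```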
Henrique Becker
@henriquebecker91
Yes. But CPLEX is not working with JuMP 0.20, and I need JuMP 0.20 to use the warm-start methods that are only present in it.
So instead of implementing the whole thing again using CPLEX without JuMP, I decided to keep JuMP and use Gurobi underneath.
I mean, I could use JuMP 0.19 + CPLEX.jl 0.5 and write a warm-start method myself that uses the underlying solver object, but I do not know how hard that would be.
Oscar Dowson
@odow
@jd-lara and I are going to look at updating CPLEX.jl next week
Henrique Becker
@henriquebecker91

@rschwarz This is a good idea, if I can trust Gurobi to do so. I will check for flags to make presolve as aggressive as possible, and run some tests to see if presolve reduces the number of variables by the expected amount.

@mlubin See my answer to odow: the paper uses CPLEX, but I cannot use CPLEX+JuMP and warm-start the variables, which is also necessary for the method.

@odow I will try what you propose (direct method) and compare with @rschwarz's idea of fixing the variables. This kind of lazy update is something I would like to contribute to, but I am not sure I have the time right now to contribute to something that needs more than a whole day of work.

Henrique Becker
@henriquebecker91
@odow, updating CPLEX.jl would kinda solve all my headaches, but I have experiments to run and a paper to write. How much work do you think it is? I could contribute one or two days of work if it meant this would be solved next week. This would save me from testing workarounds and give me a more solid paper; I just cannot delay my work too much, and I am not very knowledgeable in MOI internals. Is it something a beginner can help with, or would it take more work for you to babysit me than to solve the problem alone?
mtanneau
@mtanneau

If you create a Gurobi model, then add (scalar) variables x, y, z, their respective indices are 1, 2, 3 (or maybe they start at 0, not sure, anyway...).
Delete variable y, and now the indices of x, z are 1, 2, which means to query the value of z you must query the value of the variable whose index is 2.
Gurobi only ever works with variable indices (unless you're using names but that's much slower), which means that you, as a Gurobi user, must keep track of which index corresponds to which variable, otherwise things go wrong.

In JuMP/MOI, if you create variables x, y, z, you get three objects x, y, and z, for which you can query whatever you want.
Delete y from the model, and the JuMP/MOI convention is that x, z still allow you to access information about the corresponding variables, without having to make any modification to the objects x, z themselves. That means the index shift must be accounted for somehow. If you're using Gurobi directly, you're responsible for it. If you're using JuMP/MOI, then MOI is responsible for it.
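(A tiny example of the JuMP/MOI convention just described; this uses only a caching model, no solver attached.)

```julia
using JuMP

model = Model()
@variable(model, x)
@variable(model, y)
@variable(model, z)

delete(model, y)               # y is gone, but x and z remain valid references
@constraint(model, x + z <= 1)
name(z)                        # "z" -- no re-indexing needed on the user's side
```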

Henrique Becker
@henriquebecker91
@mtanneau Yes, but something like what @odow mentioned is possible, right? When you ask JuMP to delete a variable, JuMP does not really need to delete it; it may just mark the variable as to-be-deleted in an internal structure, and anything that needs to query solver state flushes this structure before doing its work. Basically, making JuMP also have lazy updates, because being eager when a solver is lazy is a problem, but being lazy when a solver is eager often is not a problem.
mtanneau
@mtanneau
@odow, is there such a thing as deleting several variables/constraints simultaneously, i.e., MOI.delete(model, x::Vector{MOI.VariableIndex})? (without using an individual delete fallback)
If so, it may be possible (hopefully without requiring too much work) to wrap GRBdelvars so that all variables are eliminated in bulk, thus resulting in a single update.
Just an idea, I haven't looked at how it would impact the re-building of wrapper-level data structures.
mtanneau
@mtanneau

JuMP does not really needs to delete a variable, it may just mark the variable as to-be-deleted in an internal structure

This is kind-of already implemented in the MOI wrapper of Gurobi: there is a needs_update flag that is updated when modifying the model.
What @odow suggested was to be less conservative as to when updates should be performed, and when not.
If you delete x, you don't need to update before deleting y. But if you add a constraint c, then you need to update before deleting y. However, that requires extensive tracking of the stack of buffered modifications and of whether a flush is needed before performing some modification X. Sorting all this out is not impossible, it's just cumbersome (and someone has to do it and then maintain it)...
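(A toy sketch of the kind of bookkeeping described above, purely illustrative and not Gurobi.jl's actual mechanism: modifications are buffered, and a flush only happens when the next operation actually needs an up-to-date model.)

```julia
# Buffered deletions with an explicit flush, mimicking the "needs_update" idea.
mutable struct LazyModel
    pending_deletions::Vector{Int}   # columns marked for deletion
    flush!::Function                 # performs the batched delete + model update
end

mark_deleted!(m::LazyModel, col::Int) = push!(m.pending_deletions, col)

function ensure_updated!(m::LazyModel)
    if !isempty(m.pending_deletions)
        m.flush!(sort(m.pending_deletions))
        empty!(m.pending_deletions)
    end
    return
end

# Deleting twice in a row needs no flush in between; a query does:
# mark_deleted!(m, 3); mark_deleted!(m, 7)   # buffered
# ensure_updated!(m)                          # flush before querying the solver
```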