GAMS can apparently output models to JuMP now. https://twitter.com/svigerske/status/1290275543112224771

I'm getting

```
MathOptInterface.UnsupportedConstraint{MathOptInterface.VectorAffineFunction{Float64},MathOptInterface.LogDetConeSquare}: `MathOptInterface.VectorAffineFunction{Float64}`-in-`MathOptInterface.LogDetConeSquare` constraint is not supported by the model.
```

where my `model` is

```
model = MOIB.full_bridge_optimizer(MOIU.CachingOptimizer(MOIU.UniversalFallback(MOIU.Model{T}()),
                                                         SCS.Optimizer()), T)
```

Does that mean I need to formulate the constraint differently? What way would be accepted?
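One possibility (a hedged sketch, not a confirmed fix): express the constraint in `MOI.LogDetConeTriangle` instead of the square variant, since the triangle form vectorizes only the upper triangle of the matrix and tends to have more complete bridge coverage down to SCS's conic form. The names below are standard MOI; the choice of `d = 2` is just illustrative.

```julia
using MathOptInterface
const MOI = MathOptInterface

# LogDetConeTriangle(d) is the set of (t, u, vec(X)) where X is a d-by-d
# symmetric matrix stored as its upper triangle, so the cone's dimension
# is 2 + d * (d + 1) / 2.
d = 2
set = MOI.LogDetConeTriangle(d)
MOI.dimension(set)  # 5 when d == 2
```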

[slack] <mtanneau> I think Optim.jl has a Newton interior-point algorithm somewhere, but I don't know how efficient/robust it is. Otherwise, not that I know of.

pure_interpreter: It follows from the "method error principle" (https://jump.dev/JuMP.jl/v0.21.3/style/#User-facing-MethodError-1) that you should get errors that describe what you, the user, did wrong. If you directly asked for the attribute, then `UnsupportedAttribute` is clearly a user error. If you called function X, which called function Y, which called function Z that asked for an attribute, then it's probably the responsibility of one of those functions to catch the error.
[slack] <mtanneau> That error essentially means that the attribute you called is not supported by the underlying solver.

The first step should be to look at the MOI documentation to rule out typos, a wrong signature, etc.

The second step should be to look at that solver’s docs and see whether it indeed supports said attribute. For instance, does it support setting the number of threads? If yes, then it should probably be exposed through MOI, and if it isn’t, do open an issue with that solver’s wrapper. If no, then there’s no way MOI can expose functionality that does not exist :man-shrugging:
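As a sketch of that workflow (assuming MOI ≥ 0.9; `SCS` stands in for whatever solver wrapper you are using), the thread-count example can be checked programmatically:

```julia
using MathOptInterface
const MOI = MathOptInterface
import SCS  # example solver wrapper; substitute your own

optimizer = SCS.Optimizer()
# Query whether the wrapper exposes the attribute before setting it.
if MOI.supports(optimizer, MOI.NumberOfThreads())
    MOI.set(optimizer, MOI.NumberOfThreads(), 4)
else
    @warn "this solver wrapper does not expose NumberOfThreads"
end
```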

Re GAMS: “New output format: JuMP scalar model.” I wonder what they mean by “scalar model”. I could only find this: https://www.gams.com/blog/2020/06/new-and-improved-gams-links-for-pyomo-and-jump/

[slack] <pure_interpeter> Could https://github.com/ohinder/OnePhase.jl be a drop-in replacement for Ipopt in Julia? Paper: https://arxiv.org/pdf/1801.03072.pdf

[slack] <oxinabox> Optim.jl has IPNewton. It’s one of Optim.jl’s (only?) constrained optimisers. It really should be hooked up to MOI, no?

https://julianlsolvers.github.io/Optim.jl/stable/#algo/ipnewton/

[slack] <pure_interpeter> I am not super familiar with non-convex optimization. However, lots of Julia packages working on non-linear programming seem to call Ipopt in the end. Currently the optimization landscape in Julia is not tied in with automatic differentiation through the solver. However, for number/matrix type overloading (or similar Julia features) to ever become popular, I think having a Julia package which can replace Ipopt is necessary.

I think you usually don't want to AD through an iterative solver anyway, but rather find an adjoint for it, in which case it doesn't matter what language it was written in (there was a similar discussion for eigenvectors here, btw: https://discourse.julialang.org/t/native-eigenvals-for-differentiable-programming/27126/9)

Right, there's no need for the solver to take generic number types. See https://link.springer.com/chapter/10.1007%2F978-3-540-68942-3_7.
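For the record, the adjoint argument in one line: if the solver returns a solution $x^*(\theta)$ characterized by an optimality condition $F(x^*, \theta) = 0$, implicit differentiation gives

```latex
\frac{\partial x^*}{\partial \theta}
  = -\left(\frac{\partial F}{\partial x}\right)^{-1}
     \frac{\partial F}{\partial \theta},
```

so sensitivities need only the residual $F$ at the solution, not differentiation through the solver's iterates.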

I believe "scalar model" means "not using sets" for constraints. So, if you had the same kind of constraint for every node in a graph, say, you would use a `for` loop in GAMS or JuMP, but the GAMS-generated JuMP model would contain many individual constraints.

In addition, I would assume that all coefficients are computed and inserted as literals at this time.

At least, we did something like this when we had a model definition in a C++ codebase, but generated (scalar) GAMS models as strings so that we could get access to more solvers :see_no_evil:
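A small sketch of the distinction (assuming JuMP ≥ 0.21; the names and data here are made up):

```julia
using JuMP

n = 3
model = Model()
@variable(model, x[1:n] >= 0)

# "Set" style: one macro call indexed over a set.
@constraint(model, cap[i in 1:n], x[i] <= 2i)

# "Scalar" style, which a code generator might emit instead: every
# constraint written out individually, coefficients inserted as literals.
scalar_model = Model()
@variable(scalar_model, y1 >= 0)
@variable(scalar_model, y2 >= 0)
@variable(scalar_model, y3 >= 0)
@constraint(scalar_model, y1 <= 2)
@constraint(scalar_model, y2 <= 4)
@constraint(scalar_model, y3 <= 6)
```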

I had a confusing bug yesterday where I accidentally created a constraint with `MOI.RelativeEntropyCone(n)`, where `n` was the length of my vectors, instead of `2n + 1`, which is the dimension of the cone. The tricky thing is that it solved the problem without emitting errors and even found almost the right optimal values of the variables, but the objective value was 0 instead of what it should be. I was wondering if there is a way to add a dimension check so that it would error if the wrong dimension is supplied.
pure_interpreter: I think there is a need for a variety of pure-Julia NLP solvers. Optim.jl likely provides a good baseline. Matching Ipopt's reliability and performance is a good goal but likely takes a non-trivial amount of engineering. Still, having less mature NLP solvers to test through the MOI interface would be fantastic.

ah, glad to hear I'm not the only one, and thanks for the link! Btw, I'm quite excited for Hypatia; I'm hoping Convex can be a good frontend for it once I finish jump-dev/Convex.jl#393

We now have a SCIP_jll package (thanks @matbesancon), and are working on SCIP-Interfaces/SCIP.jl#177 to integrate it with SCIP.jl.

It works fine on Linux and OS X (via Travis), but fails on Windows (via AppVeyor). I have the feeling that there is an issue with the shared library generated by BinaryBuilder, because the same Julia version works when we download the binaries from the SCIP website.

Do you have any suggestions on debugging this, preferably not using a Windows machine?

since there are Julia images already, the process is very simple

It's not too hard to reformulate a knapsack problem for a new interface.

Google's OR-tools has a CP-SAT solver that can solve pure-integer problems. That would be useful to have available.

[slack] <Wikunia> @pure_interpeter have you checked out https://github.com/dourouc05/JuCP.jl ?

I'm working on https://github.com/Wikunia/ConstraintSolver.jl but it is currently a hobby project and you're probably looking for useable solvers 😂

[slack] <Wikunia> And yes it's also not directly SAT Solver but might still be interesting

@/all We'll be online again this weekend for another sprint continuing to work on JuMP. You can find more details here: https://docs.google.com/document/d/1kwtC3-vzfxKE2Kkgi0Yup5MFSgkL1DEA49U9U0B75g0/edit?usp=sharing We spent last sprint isolating and fixing a range of performance issues which should improve the initial model build time for JuMP models.

Trying to revive my NLOptControl package, I ran into an issue creating variables: my variables are continuous, but somehow my call to the `@variable` macro defines them as discrete, and I get this error

```
ERROR: LoadError: Solver does not support discrete variables
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] _buildInternalModel_nlp(::JuMP.Model, ::JuMP.ProblemTraits) at /home/febbo/.julia/packages/JuMP/I7whV/src/nlp.jl:1249
```

because I am using Ipopt. I checked out the new documentation (http://jump.dev/JuMP.jl/v0.21.3/variables/) to see if I can make sure that the variables are continuous, but did not notice anything. Is there a way to ensure that the variables are continuous when they are defined?

Currently I am calling the macro here: https://github.com/JuliaMPC/NLOptControl.jl/blob/master/src/setup.jl#L180
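For what it's worth, a sketch of how integrality works in current JuMP (assuming JuMP ≥ 0.21): variables are continuous unless explicitly flagged, and the flag can be queried and removed.

```julia
using JuMP

model = Model()
@variable(model, x)       # continuous by default
@variable(model, y, Int)  # integer only because of the explicit `Int` flag

is_integer(x)     # false
is_integer(y)     # true
unset_integer(y)  # relax y back to a continuous variable
```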