Simon Spoorendonk
@spoorendonk
@erohe_gitlab ^
Simon Spoorendonk
@spoorendonk

Ok. The big one is hard

Process Node 514 (algo = PRICE_AND_CUT, phaseLast = PHASE_CUT) gLB = 56379.5 gUB = 59044 gap = 0.04726 time = 861.580

5 % after 15 min

tt_r18.1_12.csv

objval: 372254.0000000172

real 1m14.640s
user 15m30.381s
sys 0m10.323s

60 seconds without the std::cout output
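The 0.04726 gap in the log line above is consistent with measuring the relative gap against the lower bound; a quick check, assuming that convention:

```python
# Relative optimality gap computed as (upper bound - lower bound) / lower bound,
# using the bounds reported in the "Process Node 514" log line above.
g_lb = 56379.5   # global lower bound (gLB) from the log
g_ub = 59044.0   # global upper bound (gUB) from the log

gap = (g_ub - g_lb) / g_lb
print(f"gap = {gap:.5f}")  # matches the logged gap = 0.04726
```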
Simon Spoorendonk
@spoorendonk
On the strong inequalities. Are we not talking about $x_{ij}^k \leq y_{ij}$?
Simon Spoorendonk
@spoorendonk
and $x_{ij}^k \leq d^k y_{ij}$ for the other model
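Written out in full, the two families being discussed would look as follows; this is a sketch assuming standard FCMCFP notation, where $K$ is the commodity set, $A$ the arc set, and $d^k$ the demand of commodity $k$:

```latex
% Disaggregated (per-unit flow) model: each commodity's arc flow is
% bounded directly by the arc design variable.
x_{ij}^k \le y_{ij} \qquad \forall (i,j) \in A,\ \forall k \in K

% Flow-valued model: commodity k's flow on an open arc is at most
% its demand d^k.
x_{ij}^k \le d^k\, y_{ij} \qquad \forall (i,j) \in A,\ \forall k \in K
```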
Simon Spoorendonk
@spoorendonk

with and without cuts in 06 example

Alps0208I Search completed.
Alps0261I Best solution found had quality 250351 and was found at depth 32
Alps0265I Number of nodes fully processed: 20
Alps0266I Number of nodes partially processed: 15
Alps0267I Number of nodes branched: 17
Alps0268I Number of nodes pruned before processing: 0
Alps0270I Number of nodes left: 0
Alps0272I Tree depth: 7
Alps0274I Search CPU time: 143.07 seconds
Alps0278I Search wall-clock time: 83.62 seconds

================ DECOMP Statistics [BEGIN]: ===============
Total Decomp = 83.60 100.00 35 3.51
Total Solve Relax = 0.00 0.00 0 0.00
Total Solve Relax App = 0.00 0.00 0 0.00
Total Solution Update = 0.79 0.94 109 0.05
Total Generate Cuts = 72.43 86.63 48 1.59
Total Generate Vars = 6.59 7.88 82 0.10
Total Compress Cols = 0.04 0.05 14 0.01
================ DECOMP Statistics [END ]: ===============

Node 32 process stopping on bound. This LB= 250366 Global UB= 250351.

Alps0208I Search completed.
Alps0261I Best solution found had quality 250351 and was found at depth 30
Alps0265I Number of nodes fully processed: 18
Alps0266I Number of nodes partially processed: 15
Alps0267I Number of nodes branched: 16
Alps0268I Number of nodes pruned before processing: 0
Alps0270I Number of nodes left: 0
Alps0272I Tree depth: 7
Alps0274I Search CPU time: 48.34 seconds
Alps0278I Search wall-clock time: 3.56 seconds

================ DECOMP Statistics [BEGIN]: ===============
Total Decomp = 3.54 100.00 33 0.31
Total Solve Relax = 0.00 0.00 0 0.00
Total Solve Relax App = 0.00 0.00 0 0.00
Total Solution Update = 0.86 24.17 134 0.05
Total Generate Cuts = 0.00 0.00 49 0.00
Total Generate Vars = 0.66 18.52 97 0.01
Total Compress Cols = 0.06 1.70 21 0.00
================ DECOMP Statistics [END ]: ===============
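Reading the two statistics blocks side by side makes the contrast explicit (numbers taken directly from the logs above):

```python
# Compare the two DECOMP runs logged above: in the first run, cut
# generation dominates the wall-clock time; the second run (no cut time)
# finishes the search roughly 23x faster.
with_cuts_total = 83.60    # Total Decomp wall time, first run (seconds)
with_cuts_cutgen = 72.43   # Total Generate Cuts time, first run (seconds)
no_cuts_total = 3.54       # Total Decomp wall time, second run (seconds)

cut_share = with_cuts_cutgen / with_cuts_total
speedup = with_cuts_total / no_cuts_total
print(f"cut generation share: {cut_share:.1%}, speedup without cuts: {speedup:.1f}x")
```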

room for improvement
wonder why the pricing became so hard
Erik Hellsten
@erohe_gitlab
Hi!
I pushed some initial results for the smaller instances. Still need to run some of the bigger ones, but I'll do it tonight.
Simon Spoorendonk
@spoorendonk
Cool. Awesome
Simon Spoorendonk
@spoorendonk
and they are with the strong flow inequalities right?
Erik Hellsten
@erohe_gitlab
they are with dynamic strong inequalities, yes =)
ehm, yeah. More or less, at least. I solve almost everything single-core, except when I solve the integer problem with the root-node columns to generate an initial upper bound, which for some reason I solve with 4 cores. But that makes up but a sliver of the runtime, so I don't think that would have any major impact
Simon Spoorendonk
@spoorendonk
need to do some optimizations I can see :)
Erik Hellsten
@erohe_gitlab
You'd bloody better not beat me ;) I've spent too much time on this ^^
Simon Spoorendonk
@spoorendonk
Me too 😀
Simon Spoorendonk
@spoorendonk
boom. No optimizations yet. But now I can do callbacks for initialization, solution feasibility checks, primal heuristics, and your very own pricing algorithm. Some docs and then a new version coming up
Erik Hellsten
@erohe_gitlab
Sweet As! The development of tomorrow is underway =) Good to hear!
Simon Spoorendonk
@spoorendonk
moved to github now. They support builds on Windows. Added you to my example project to see the callback (and other) functionality. The new version with the callbacks is available
Erik Hellsten
@erohe_gitlab
Ok, managed to download it. Though I didn't see the GitHub invite. Which account did you invite?
Simon Spoorendonk
@spoorendonk
@ErikOrm
Erik Hellsten
@erohe_gitlab
ok, thanks! found it
Simon Spoorendonk
@spoorendonk
does it make sense to initialize the master with some nice columns?
Erik Hellsten
@erohe_gitlab
Oh look at that! =) For the FCMCFP? I mean, I suppose, if there were a good way to generate them. Of course one could solve a shortest path for each commodity and add that, but I don't think that would make any major difference. What I believe is good practice for many problems, including the FCMCFP, is to initialise the problem with artificial rejection "slack variables" for the commodities (or for each separate subproblem in the general case), with a high cost, and then generate columns until all the slack variables are 0, meaning that we have sufficient columns to find a feasible solution to the problem. I think this is generally more likely to find a good initial column pool than generating columns straight from Farkas' lemma. I don't know how big the difference is, but maybe this could be applied as a general setting, to handle problems whose master problems are infeasible with empty column pools. Maybe you already do this? Just a thought.
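The phase-1 scheme described here can be sketched in a few lines; everything below is illustrative, not the actual flowty or DIP API:

```python
# Sketch of initializing a restricted master problem (RMP) with high-cost
# artificial "rejection" columns, one per commodity. Names and the column
# encoding (dict of row -> coefficient) are hypothetical.

BIG_M = 1e6  # artificial cost; must dominate any real column cost

def initial_master(commodities):
    """Return (columns, costs): one artificial column per commodity.

    Each artificial column covers only its own commodity's row, so the
    RMP is feasible from the first iteration and column generation can
    price with ordinary reduced costs instead of Farkas pricing.
    """
    columns = []
    costs = []
    for k in commodities:
        columns.append({k: 1.0})  # unit coefficient in commodity k's row only
        costs.append(BIG_M)
    return columns, costs

def is_artificial_free(solution, n_commodities):
    """Phase 1 can stop once every artificial variable (assumed to occupy
    indices 0..n_commodities-1) is zero in the RMP solution."""
    return all(solution.get(j, 0.0) == 0.0 for j in range(n_commodities))
```

The idea matches the "phase1/phase2" distinction mentioned in the next message: phase 1 drives the artificial columns out, phase 2 optimizes with real costs.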
Simon Spoorendonk
@spoorendonk
Artificial columns to begin with is already done. It is called phase1 in the output. Phase2 is where the real cost is used. Didn’t think initialisation would make a big difference. For other problems it may make more sense
Erik Hellsten
@erohe_gitlab
sweet! Yeah, no, I would certainly believe that that would be sufficient. =)
Simon Spoorendonk
@spoorendonk
long time. I have updated to version 0.2. Not many big new things performance-wise. But you can read/write instances for path MIPs. And you can add packing sets $S$ where $\sum_{i \in S} x_i \leq 1$ to hint at good branching decisions. Not super useful in MCF but good in routing.
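To make the packing-set idea concrete, here is a hypothetical violation check (the names and the solution encoding are illustrative, not the library's interface): in routing, $S$ could be all route variables visiting the same customer, so a fractional LP solution splitting that customer across two routes violates the set and marks it as a strong branching candidate.

```python
# A packing set S requires sum_{i in S} x_i <= 1 over binary variables x_i.
# This helper flags (possibly fractional) LP solutions that violate it.

def violates_packing_set(x, S, tol=1e-9):
    """True if solution x puts more than one unit of total value on S."""
    return sum(x[i] for i in S) > 1.0 + tol

# Hypothetical example: two routes r1 and r2 both visit the same customer.
x = {"r1": 0.6, "r2": 0.6, "r3": 0.0}
S = {"r1", "r2"}
print(violates_packing_set(x, S))  # → True: the fractional split violates S
```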
Erik Hellsten
@erohe_gitlab
Hi Simon, good to hear from you! Time goes so quickly once one gets caught up in something. In the midst of a few things I'm trying to finish up right now, but hopefully I will be able to come back and write the code for the cuts soon enough. =)
Simon Spoorendonk
@spoorendonk
No problem. Maybe I will beat you to it