reproduced a crash in the sparse-direct least squares
will rerun in Debug mode to try to reproduce
Jack Poulson
@poulson
apparently it was just an std::bad_alloc
also, I have committed a step towards addressing the performance of ApplyPackedReflectors when there are only a few right-hand sides: elemental/Elemental@85572b9
Jeff Hammond
@jeffhammond
There should be a T-shirt or similar with this on it :-)
something in the middle of my preference to add functionality forever and your desire to release every day is likely the right compromise
Ryan H. Lewis
@rhl-
@poulson: thanks for doing that. I'm looking into writing BFGS in El. Is it possible to use the existing Reflectors to store arbitrary rank 1 updates to the identity?
Jack Poulson
@poulson
why use the reflection abstraction instead of using a product of tall-skinny dense matrices applied via a Gemm?
Ryan H. Lewis
@rhl-
Let me think about how to do that
Ryan H. Lewis
@rhl-
I'm not immediately seeing how you can use a Gemm here
Ryan H. Lewis
@rhl-
I guess if you think of B_k = B_{k-1} + U_k + S_k, then B_k*v = B_{k-1}*v + U_k*v + S_k*v, so each term is a Gemv
But each U_k and S_k is rank one
Ryan H. Lewis
@rhl-
But you need to apply the inverse
Ryan H. Lewis
@rhl-
@poulson: I'm seeing IPM-based BPDN use about 1000x the resources of ADMM.
Jack Poulson
@poulson
the sparse IPM BPDN should be much faster than the dense ADMM...
not that it is a fair comparison
but it's also not fair to compare the two without also looking at the achieved accuracy
Ryan H. Lewis
@rhl-
we have a dense matrix
does that mean that IPM BPDN is trying to create some kind of sparse matrix from our dense one? That would be horrible..
Jack Poulson
@poulson
no, it is creating a larger dense matrix for the KKT system
Ryan H. Lewis
@rhl-
Ah that makes sense.
Should ADMM have decent accuracy?
Jack Poulson
@poulson
its convergence is eventually linear, but in the pre-asymptotic regime it tends to reach a couple of digits of accuracy pretty quickly
so the answer depends upon how many digits of accuracy you want