
A Python library for quantum information and many-body calculations, including tensor networks.

Recent activity (jcmgray, synchronize #51 on branch tensor_2d):

- Aug 03 22:45: "TN: graph further fix for depre…"
- Aug 03 19:55: "TN: graph fix deprecation warni…"
- Jul 30 22:25: "TN: add basic tensor 2d SU + FU…"
- Jul 30 18:24: "azure: try installing tensorflo…"
- Jul 30 18:18: "azure: try tweaking tensorflow …"
- Jul 30 17:52: "docs and azure tweak"
- Jul 29 21:39: "TN optimize: add NADAM Adam wi… update docs"
- Jul 13 23:44: "fallback to toolz (might be use…"
- Jun 27 19:36: "azure: try disabling numba para…"
- Jun 27 19:29: "azure: try disabling numba para…"

If the slepc and mpi4py tests are passing, then trying backend='slepc-nompi' would be the first thing to check, then launching the program in MPI but using the 'syncro' mode. Which version of MPI do you have installed? OpenMPI 1.10.7 seems to be one of the few versions that can reliably spawn processes.

If it's of any interest to you @jcmgray , there's this paper that just came out: https://arxiv.org/abs/1905.08394 . I'm of the opinion that Quimb + opt_einsum + Dask would do it just as well given the same hardware though.

Ha yes! Paper and code out soon hopefully. That's good to hear you are working on dask SVD. It's such a convenient lazy/distributed backend, though I'm still not totally sure it can be scaled up well; for instance, it can't handle contractions with 32+ dimensions yet. Do you know the cyclops tensor framework? It has a numpy-like API and should be mostly compatible with quimb already via autoray. In fact I have briefly tested it, using MPI to contract quantum circuits. I think the SVD it uses is from scalapack, however.

I knew of CTF during my masters days a couple of years ago but it didn't have SVD back then, which is why I wrote my own MPS code using Elemental. Does it seem reasonable that we could do MPI TN calculations through quimb + autoray + CTF in Python? If so, it might be worth me plugging in a faster MPI SVD directly into CTF than continuing on with Dask.

The SVD I am working on for Dask is based on the polar decomposition https://doi.org/10.1137/120876605 which is a much better fit for distributed task-scheduling than the divide-and-conquer approach (which Lapack gesdd, Eigen BDCSVD and Elemental's SVD are based on). In principle, it should be very easy to hook up an MPI implementation of this new SVD (https://github.com/ecrc/ksvd) to CTF since the function signature is almost identical to Scalapack's.
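To illustrate why the polar route yields an SVD, here is a toy numpy sketch (illustration only, not the QDWH-type iteration that KSVD or a real distributed implementation would use): compute the Hermitian polar factor H from the eigendecomposition of A^T A, then the SVD falls out directly.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(6, 4))

# Polar decomposition A = Up @ H, computed here via eigendecomposition of
# A^T A purely for illustration (a distributed code would use an iteration).
s2, V = np.linalg.eigh(A.T @ A)        # A^T A = V diag(s^2) V^T
s = np.sqrt(np.clip(s2, 0.0, None))    # the singular values of A
H = (V * s) @ V.T                      # Hermitian positive factor
Up = A @ np.linalg.pinv(H)             # orthonormal-column factor

# SVD follows directly: A = Up H = (Up V) diag(s) V^T
U = Up @ V
assert np.allclose((U * s) @ V.T, A)
```

The point is that once the polar factor is available (by any means), recovering the SVD only needs one dense symmetric eigendecomposition.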

quimb + autoray + cyclops is absolutely one of the combinations I'll be targeting (I might even open a pull-request to support ctf in `opt_einsum` now, since it's just a one-liner). The idea of autoray is basically that most array operations, and specifically tensor network operations, can be defined very simply, so supporting numpy/tensorflow/cyclops *should* be effortless, and in principle I don't see any reason that good performance cannot be achieved. In a few months' time I'll be trying this stuff properly, but until then any additional attempts are obviously welcome as well!
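The dispatch idea behind autoray can be sketched roughly like this (a greatly simplified toy version, not autoray's actual code): infer the backend module from the array's type and look the function up there, so one definition serves numpy, tensorflow, ctf, etc.

```python
import importlib

import numpy as np


def infer_backend(x):
    # e.g. numpy.ndarray -> 'numpy'; a ctf tensor would map to 'ctf'
    return type(x).__module__.split('.')[0]


def do(fn_name, x, *args, **kwargs):
    # look the function up in whichever module the array came from
    backend = importlib.import_module(infer_backend(x))
    return getattr(backend, fn_name)(x, *args, **kwargs)


total = do('sum', np.arange(4))  # dispatches to numpy.sum
```

Real autoray handles many more cases (translated function names, submodules, dtype handling), but the core mechanism is this kind of lookup.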
With regard to implementing a good, distributed, dense SVD in dask or ctf, I think both would be super useful. I guess ctf might be the more natural place, given that communication performance, and thus using MPI, is presumably relatively important.

My experience so far is that for simply *contracting* large tensors, ctf performs not much slower than the usual numpy/BLAS, but the SVD was very slow (I compiled with OpenBLAS rather than Intel MKL, which has a custom scalapack implementation, so that might be a factor). It may also be worth checking my claim that the SVD used really is the one from scalapack!

Finally, I'll also mention `mars` (https://github.com/mars-project/mars) in case you have not seen it. Very similar to dask in lots of ways, but unlike dask it currently supports chunking in both dimensions for SVD. Not sure about its general performance! But it follows the numpy API, so it will work nicely with autoray.
Thanks for your thoughts. One other thing I found that required Elemental was that non-MKL implementations of Scalapack do not have support for 64-bit integer indexing, leaving a relatively small cap on the matrix sizes. This was something I was hoping to avoid by going to Python.

It also looks like CTF does not support the 64-bit int indexing in MKL Scalapack. So unfortunately there is no SVD implementation that fulfils all my needs: performance, distributed parallelism, 64-bit integer indexing, active support, and a Python interface. I think for now I will continue working on my SVD in Dask, but I'll raise an issue with the CTF developers regarding SVD.

That makes sense regarding `mars`. And yes, it's annoying that nothing quite meets all those requirements, which I suppose is why you're working on it! Is the limitation for 32-bit indexing simply that all dimensions have to be smaller than 2^32? Let me know if you want any input/changes from the quimb side of things.
That's right for the 32-bit indexing. It applies even to the global dimension of a Scalapack matrix (due to the scalapack descriptors using ints). In Dask though, the individual chunks would all definitely be smaller than this limit, and the indexing into individual chunks is handled by Python ints.

Hello everyone, I am new to tensor networks and I would like to know if quimb can be used to solve ML tasks in the way Stoudenmire shows, using the DMRG algorithm to optimize the MPS tensors. I saw that in quimb you can use DMRG, but I don't know if I can use a custom loss function with it.

Hi Daniel, the DMRG algorithm probably doesn't work out of the box with that method, but I don't suppose it would be super tricky to modify it, since it tries to be quite general. Note there are also direct pytorch/tensorflow/autograd+scipy/jax+scipy tensor network optimizers in quimb, which might be an easier way to get started!

I believe TensorFlow should be able to backpropagate through SVD according to the rules in https://journals.aps.org/prx/abstract/10.1103/PhysRevX.9.031041 so there might not be a need for DMRG in this case.

Hi Johnnie, 2 questions:

- Would it be possible to add an `absorb=None` option to `tensor_split`, which would just return `U, s, V` without absorbing the singular values? I'd like to have separate access to the svals for easy gauge switching without calling `Tensor.singular_values` each time. Looking at the source, I'm worried that jit might have some issues with a variable number of return values (`U, V` vs `U, s, V`).
- When trimming the results of the SVD (e.g. `U = U[..., :n_chi]` in `quimb.tensor.decomp`), I am worried that this creates a view that does not release the potentially much larger underlying array (https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html). Would an explicit copy be more suitable here?
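The view-vs-copy concern can be demonstrated directly in numpy: a basic slice keeps the whole parent array alive via its `.base` attribute, while an explicit copy owns its own (smaller) buffer.

```python
import numpy as np

big = np.ones((1000, 1000))

trimmed_view = big[:, :10]         # basic slicing returns a view ...
assert trimmed_view.base is big    # ... which keeps all of `big` alive

trimmed_copy = big[:, :10].copy()  # an explicit copy owns its own memory
assert trimmed_copy.base is None
# once `big` and the view go out of scope, only the small copy remains
```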

Yeah, I think an option that returns the decomposition as well as the singular values is needed. Just to understand what would be best: what's your use case? The options are to return the raw U, s, VH; the same but as tensors (then with s as a diagonal array, or as a vector with a 'hyperindex'); or just to return s in addition to whatever option is chosen.

And yes, an explicit copy would be good here; I'll look into what the best broadly backend-agnostic option is.

I'd like to control contraction of the singular values myself. For space saving, I think `tensor_split` should return the singular values in a vector, which I can specify in contractions with the hyperedge as you suggest (e.g. `einsum('ij,j->ij', U, s)` or `einsum('ij,j,jk->ik', U, s, VH)`). So with `absorb=None`, `get='values'` would be unchanged, `get='arrays'` would return the raw `U, s, VH`, `get='tensors'` would return `Tensor`s for each of `U, s, VH`, and the default `get=None` would return the equivalent tensor network for `U_ij s_j VH_jk`.
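The einsum expressions above behave exactly as hoped on plain numpy arrays, treating `s` as a vector contracted over the shared index `j`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
U, s, VH = np.linalg.svd(A, full_matrices=False)

# absorb the singular values into U via the shared index j ...
Us = np.einsum('ij,j->ij', U, s)
# ... or contract all three factors straight back to the original matrix
A2 = np.einsum('ij,j,jk->ik', U, s, VH)
assert np.allclose(A2, A)
```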
I started writing some binary tree tensor network stuff using quimb for 2D TN states, but I think it would be good to contribute once #51 is merged.

Sounds cool regarding 2D TN states! If you fancied raising an issue with the scope / any details of what you have done then maybe I can keep it in mind while finishing off the 2D stuff (#51), but obviously up to you!

I've started some work on DMRG-like stuff, and I'm also interested in implementing the 1-site DMRG with subspace expansion for quimb: https://doi.org/10.1103/PhysRevB.91.155115

Hi Johnnie, I have started playing with quimb. It's a really great library. I wanted to point out a bug in MatrixProductState.partial_trace(): the function works fine for two or more sites, but it runs into an error for one site. The error appears to be related to the rescaling, but switching off the rescaling of sites also returns an error.

I just submitted an imaginary time TEBD MPS pull request for you to give feedback on. Thinking about extending it to MPOs, I believe I need to use `superop_TN_1D`, and also keep track of an accumulating trace during the sweep so the last tensor can be appropriately normalised, since QR sweeping the MPO does not enforce the normalisation Tr(rho) = 1 as it does the MPS normalisation <psi|psi> = 1. Does that sound reasonable?

Hi Aidan, sorry, I will try to get to the PR this morning! The superop is probably not needed, since it's for general quantum channels rather than Hamiltonian evolution. QR sweeping the MPO does enforce Tr(rho^2) = 1, which is probably sufficient to keep the numerics under control, with the state just being normalized when it's retrieved at the end.

Though I do think having the functionality to apply general superops to particular sites of the MPO would be quite cool, and unitary operators would be the specific "rank 1" case of the superop.

If a density matrix is represented by the double-layered MPO (https://arxiv.org/abs/1812.04011 Fig 2h; Orus calls this an MPDO), then QR sweeping will enforce the proper normalisation Tr(rho) = 1. So maybe another option is to write a new MPDO class that inherits from `TensorNetwork1DFlat`.
Do you have some use case for these things? I generally find it helpful in terms of design to think about the end point!

For testing some MPO tomography techniques, I would like to generate thermal states of e.g. transverse Ising Hamiltonians. Perhaps I will start by implementing the MPDO class, which will have a method to export it as a single layer MPO. I'm worried that MPO normalisation Tr(rho^2) = 1 might have issues for fully mixed states.

The MPDO is also known as a Locally Purified State (LPS); I'm actually just writing up a paper on using these for tomography at the moment. And to clarify: keeping Tr[rho^2] == 1 (or in fact any way of simply rescaling the tensor entries) would just be to stop the scale of the entries from exploding, the assumption being that the MPO would be properly scaled (normalized) before being returned to the user.
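The two normalisations being discussed can be made concrete on a small dense matrix: rescaling so Tr[rho^2] = 1 (i.e. unit Frobenius norm) just controls the scale of the entries, while Tr[rho] = 1 is the physical normalisation applied at the end.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
rho = X @ X.T                        # a positive semidefinite 'rho'

# rescaling so Tr[rho^2] == 1 keeps the entries from exploding ...
rho_f = rho / np.linalg.norm(rho)    # Frobenius norm == sqrt(Tr[rho^2])
assert np.isclose(np.trace(rho_f @ rho_f), 1.0)

# ... whereas the physical normalisation Tr[rho] == 1 is applied only
# when the state is actually returned to the user
rho_n = rho / np.trace(rho)
assert np.isclose(np.trace(rho_n), 1.0)
```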

Learning about the term LPS has led me to https://arxiv.org/abs/1412.5746, which basically describes the TEBD extension to MPDOs that I was thinking about. So I think I'll try creating an LPS class that inherits from `TensorNetwork1DFlat` and implementing TEBD on that.
Hi Johnnie. Quick question if you don't mind: with quimb, if I have an MPS and apply an (n>2)-site gate, is there built-in functionality to split this back down to an MPS again (like there is for n=2), or do I just have to split and compress etc. manually afterwards? I'm just scanning through the docs to find the best way to do this.
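For context, the manual route is just a sweep of SVDs that peels the n-site blob back into MPS tensors. A hand-rolled numpy sketch (not quimb's actual API, which may well offer this directly):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 3
psi = rng.normal(size=(d,) * n)   # an n-site 'blob', e.g. after a big gate

# sweep of SVDs turns the blob back into (left-canonical) MPS tensors
tensors, chi, rest = [], 1, psi.reshape(1, -1)
for _ in range(n - 1):
    U, s, VH = np.linalg.svd(rest.reshape(chi * d, -1), full_matrices=False)
    tensors.append(U.reshape(chi, d, -1))  # left tensor for this site
    chi = len(s)
    rest = s[:, None] * VH                 # carry s into the remainder
tensors.append(rest.reshape(chi, d, 1))    # last site

# contract the MPS back together and check it matches the original blob
out = tensors[0]
for t in tensors[1:]:
    out = np.tensordot(out, t, axes=1)
assert np.allclose(out.reshape(psi.shape), psi)
```

Compression would then just mean truncating `U`, `s`, `VH` to the largest singular values at each step instead of keeping them all.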

Johnnie, did you have anything to do with this LPS-based tomography paper? https://arxiv.org/abs/2006.02424 I was working towards eventually implementing something like this.