@cgranade Please correct me wherever I am wrong. 1) Can't there be cases where we don't really need to map our data into a higher dimension, i.e. cases where we can prepare our state in a Euclidean space without applying any tensor product? (This assumes the data is simple enough that no further mapping is required.)
2) If the above is true, why is having our dataset in a limited space a bad thing? Why are we treating the state preparation step as if it were the point at which we train our model? Isn't this analogous to zero initialization of weights in classical machine learning? We start at the same limited value (zero) and then learn the appropriate weights and biases.
3) If I am completely off track here, could you please point me to some resources for getting started with this part of the subject? I think it's getting a bit over my head.
> i.e. we can prepare our state in a euclidean space without needing to apply any tensor product to prepare them
The state of a quantum system is always a vector in a Hilbert space rather than a Euclidean space, but that's largely a mathematical artifact of how we describe, model, and simulate quantum systems. In particular, just as with the kernel trick in traditional ML, we never write down those vectors in Hilbert space explicitly, but prepare them implicitly.
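To make the analogy concrete, here is a minimal classical sketch of the kernel trick mentioned above (the function name and parameters are illustrative, not from the original thread): the RBF kernel equals an inner product of feature vectors in an infinite-dimensional space, yet we only ever evaluate that inner product, never the feature vectors themselves. Quantum state preparation plays a similar role, giving implicit access to vectors in Hilbert space.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Evaluate k(x, y) = exp(-gamma * ||x - y||^2) directly.

    Mathematically, this equals the inner product <phi(x), phi(y)>
    for a feature map phi into an infinite-dimensional space, but
    phi is never constructed explicitly; only the kernel value is.
    """
    diff = np.asarray(x) - np.asarray(y)
    return np.exp(-gamma * np.dot(diff, diff))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
# The inner product in the implicit feature space, computed in O(d) time:
print(rbf_kernel(x, y))
```

The point is the same in both settings: the high-dimensional space is a bookkeeping device, and all the useful quantities (kernel values, measurement statistics) are obtained without ever materializing vectors in it.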
Sweet, I'll get something moving soon for the chat platform stuff!
In the meantime, version 0.12 of the QDK is out; I wrote up some thoughts/context on the release notes here: https://twitter.com/crazy4pi314/status/1280174137973981201?s=20