An ApplyUnitaryOperation operation that takes a Complex[][] input describing a unitary and a LittleEndian register to apply it to would be a nice parallel to the existing PrepareArbitraryState operation.
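A hypothetical signature for such an operation might look like the sketch below. To be clear, everything here is illustrative only: ApplyUnitaryOperation does not exist in the canon today, and the namespace name is made up.

```
namespace FeatureSketch {
    open Microsoft.Quantum.Math;        // Complex
    open Microsoft.Quantum.Arithmetic;  // LittleEndian

    /// Hypothetical API sketch only: accept a unitary as a matrix of
    /// complex amplitudes and synthesize it onto the given register.
    operation ApplyUnitaryOperation(matrix : Complex[][], target : LittleEndian)
    : Unit is Adj + Ctl {
        // A real implementation would decompose `matrix` into primitive gates here.
    }
}
```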
When I have two using blocks one after another, I notice some odd behavior with Reset() when looking at the output of DumpRegister(). For example:
using (q1 = Qubit()) {
    X(q1);
    Rz(PI() / 2.0, q1);
    DumpRegister((), [q1]);
    let m1 = M(q1);
    Reset(q1);
}
using (q1 = Qubit()) {
    ResetAll([q1]);
    DumpRegister((), [q1]);
    Reset(q1);
}
Output:
# wave function for qubits with ids (least to most significant): 0
∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ]
∣1❭: 0.707107 + 0.707107 i == ********************* [ 1.000000 ] / [ 0.78540 rad ]
# wave function for qubits with ids (least to most significant): 0
∣0❭: 0.707107 + 0.707107 i == ********************* [ 1.000000 ] / [ 0.78540 rad ]
∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ]
It did reset back to |0>, but it looks like the state isn't reflecting that correctly. Any idea why?
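For reference, the numbers in the dump can be reproduced by hand. A sketch, assuming the QDK convention $R_z(\theta) = \operatorname{diag}(e^{-i\theta/2}, e^{i\theta/2})$:

$$R_z(\tfrac{\pi}{2})\, X \,|0\rangle = e^{i\pi/4}\,|1\rangle, \qquad e^{i\pi/4} = \tfrac{1}{\sqrt{2}}(1 + i) \approx 0.707107 + 0.707107\,i,$$

which matches the first dump, phase $\pi/4 \approx 0.78540$ rad included. After $M$ returns One, Reset applies $X$, leaving $e^{i\pi/4}\,|0\rangle$. The factor $e^{i\pi/4}$ multiplies the entire state (a global phase), so no measurement can detect it, even though the simulator evidently keeps it in its state vector and shows it again when qubit id 0 is reused in the second block.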
What's the easiest way to create a phased operator (e.g., -XZ)? Ideally, I'd like to make it generic like ApplyPauli, but I haven't figured out the syntax yet. A function doesn't work, because it seems I can only return one operation and can't apply several in sequence.
For example:
function FixedR(theta : Double, op : ((Double, Qubit) => Unit is Adj + Ctl)) : (Qubit => Unit is Adj + Ctl) {
    return op(theta, _);
}
allowed me to create a testing harness for A5 from the Q# challenge.
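As a usage sketch, relying on the FixedR function above together with the standard Rx operation and PI() from Microsoft.Quantum.Math:

```
operation ApplyFixedRx(q : Qubit) : Unit {
    // Partial application pins the angle; rx180 has type Qubit => Unit is Adj + Ctl.
    let rx180 = FixedR(PI(), Rx);
    rx180(q);
}
```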
operation negXZ(qubit : Qubit) : Unit is Adj + Ctl {
    body (...) {
        R(PauliI, PI(), qubit);
        X(qubit);
        Z(qubit);
    }
}
You can use DumpRegister() and DumpMachine() to explore those samples and tutorials.
The Controlled functor can turn what were global phases into locally observable phases.
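As a sketch of how that plays out for the example above (assuming the negXZ operation defined earlier in the thread):

```
operation ShowRelativePhase() : Unit {
    using ((control, target) = (Qubit(), Qubit())) {
        H(control);
        // With a superposed control, the phase on negXZ multiplies only the
        // |1> branch of the control, so it becomes a relative (observable)
        // phase rather than a global one.
        Controlled negXZ([control], target);
        ResetAll([control, target]);
    }
}
```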
Bound can be used to return a single operation representing a sequence of operations, without needing to wrap them in a new operation. For example, your negXZ could be written as let negXZ = BoundCA([R(PauliI, PI(), _), X, Z]);.
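Spelled out as a complete snippet (using BoundCA from Microsoft.Quantum.Canon), that suggestion looks something like:

```
namespace Demo {
    open Microsoft.Quantum.Canon;       // BoundCA
    open Microsoft.Quantum.Intrinsic;   // R, X, Z
    open Microsoft.Quantum.Math;        // PI

    operation ApplyNegXZ(q : Qubit) : Unit is Adj + Ctl {
        // BoundCA composes an array of Adj+Ctl operations into a single
        // Adj+Ctl operation that applies them in sequence.
        let negXZ = BoundCA([R(PauliI, PI(), _), X, Z]);
        negXZ(q);
    }
}
```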
ApplyToEach, ApplyToFirst, and so forth can be really useful operations for cases like op(qubits[0]);. If you want lambda support in Q#, @bettinaheim has been discussing that feature request at microsoft/qsharp-compiler#181.
LittleEndian is a single atomic value. If you have an array of little-endian registers (that is, LittleEndian[]), then ApplyToEach works great over that array. On the other hand, if you want to apply an operation to each qubit making up a single LittleEndian register, you can unwrap it with the unwrap operator (!) to get an array of type Qubit[].
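A short sketch of the unwrap case:

```
open Microsoft.Quantum.Arithmetic;  // LittleEndian
open Microsoft.Quantum.Canon;       // ApplyToEach
open Microsoft.Quantum.Intrinsic;   // H

operation HOnEachQubit(register : LittleEndian) : Unit {
    // register! unwraps the UDT, yielding the underlying Qubit[],
    // so ApplyToEach can act on each qubit individually.
    ApplyToEach(H, register!);
}
```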
The LittleEndian UDT marks that a register of qubits should be interpreted with x₀ as the least-significant (little-end) bit in the binary expansion of x. From that perspective, a big-endian paper can be converted to a little-endian one by reversing the convention used to order qubits.
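Concretely, encoding the integer 5 = 0b101 into a three-qubit LittleEndian register flips qubits 0 and 2, with qubit 0 as the little end (ApplyXorInPlace from Microsoft.Quantum.Arithmetic performs exactly this encoding):

```
open Microsoft.Quantum.Arithmetic;  // LittleEndian, ApplyXorInPlace
open Microsoft.Quantum.Intrinsic;   // ResetAll

operation EncodeFive() : Unit {
    using (qs = Qubit[3]) {
        // 5 = 0b101: qs[0] = 1 (least significant), qs[1] = 0, qs[2] = 1.
        ApplyXorInPlace(5, LittleEndian(qs));
        ResetAll(qs);
    }
}
```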
Hi there! I've got another difficulty in understanding part of the topic of circuit-centric classifiers.
Here's the excerpt from the paper that I am having difficulty with.
I don't even know where to begin reasoning about this paragraph; can someone help me justify how this is the case?
Hello,
I am measuring in the Bell basis: I apply an H gate and then a CNOT. I measured several times, and these are the outputs I got:
Measured CNOT · H |00⟩ and observed (Zero, Zero)
Measured CNOT · H |00⟩ and observed (Zero, Zero)
Measured CNOT · H |00⟩ and observed (Zero, Zero)
Measured CNOT · H |00⟩ and observed (One, One)
Measured CNOT · H |00⟩ and observed (Zero, Zero)
Measured CNOT · H |00⟩ and observed (One, One)
Measured CNOT · H |00⟩ and observed (One, One)
Measured CNOT · H |00⟩ and observed (Zero, Zero)
I did not get a (One, Zero) output. If the input is (One, One), I think the output should be (One, Zero), and if the input is (Zero, One), the output should be (Zero, One). Why do I not get these outputs?
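For reference, the missing outcomes can be worked out directly from the state before measurement (a sketch, with H on the first qubit and the first qubit as the CNOT control):

$$\mathrm{CNOT}\,(H \otimes I)\,|00\rangle = \mathrm{CNOT}\,\tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |10\rangle\bigr) = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr).$$

The amplitudes of $|01\rangle$ and $|10\rangle$ are exactly zero, so the only possible results are (Zero, Zero) and (One, One), each with probability 1/2, which matches the statistics above.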
@cgranade Please correct me wherever I am wrong. 1) Can't we have cases where we don't really need to map our data into a higher dimension, i.e., cases where we can prepare our state in a Euclidean space without needing to apply any tensor product? (This assumes that the data is simple enough that we don't really require any further mapping.)
2) If the above statement is true, how is having our dataset in a limited space a bad thing? Why are we treating the state-preparation step as if it is at this point that we are training our model? Isn't this analogous to zero initialization of weights in classical machine learning? We start at the same limited value (zero) and then learn the appropriate weights and biases.
3) If I am completely off track here, could you please point me to some resources for getting started on this part of the subject? I think this is getting to be too much for me.