I propose 3 alternatives:
1) Engage the community and move the operators out of experimental. This will require seeing a) demand and b) implementations from different classes of partners.
2) Leave them in experimental, and thus not compel backends to implement them. Users can still use them if their front-end and back-end support them, but no one is compelled to support them if doing so makes analysis/compliance harder.
3) Remove the operators. If we do not see any signal of demand for these operators (please let me know if there is demand outside FB), we may just remove them. This would simplify the specification.
Please let me know your thoughts.
Thanks to everyone who joined the control flow and loop track at the ONNX workshop. Here is a summary of the discussion and what's next:
`If` node: no disagreement; it should be moved out of experimental and into ONNX core. The description needs to clarify that the variadic outputs must have the same type in both branches. The `If` node only works on a scalar condition and only executes either the then branch or the else branch. So we also need a `Select` node, which is an element-wise select: both branches are tensors with the same shape, and the result is another tensor that combines both based on the select condition. The condition in this case is a tensor, not a scalar.
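As a rough sketch of the proposed element-wise `Select` semantics (the op was not yet specified at this point, so the name and signature below are assumptions), it behaves like `numpy.where`:

```python
import numpy as np

def select(condition, then_branch, else_branch):
    # Element-wise select: condition is a boolean tensor with the same
    # shape as both branches; each output element is taken from
    # then_branch where condition is True, else from else_branch.
    assert condition.shape == then_branch.shape == else_branch.shape
    return np.where(condition, then_branch, else_branch)

cond = np.array([True, False, True])
a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
print(select(cond, a, b))  # [ 1 20  3]
```

Unlike `If`, no branch is skipped: both input tensors are fully evaluated and the condition only picks elements.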
`Scan` node: formerly named ScanLoop. We need to add a `reverse` attribute and a list of sequence lengths; here is the current PR: onnx/onnx#1177. ScanWhile is gone.
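A minimal sketch of the intended `Scan` semantics, assuming a simplified single-state, single-input form (the real op in the PR is more general, and sequence lengths are not modeled here):

```python
import numpy as np

def scan(init_state, inputs, body, reverse=False):
    """Iterate `body` over the leading axis of `inputs`, threading a
    state through each step, and stack the per-step outputs.
    `reverse=True` walks the sequence back to front."""
    steps = range(len(inputs))
    if reverse:
        steps = reversed(steps)
    state, outputs = init_state, []
    for t in steps:
        state, out = body(state, inputs[t])
        outputs.append(out)
    if reverse:
        outputs.reverse()  # keep outputs aligned with input order
    return state, np.stack(outputs)

# Running sum as the loop body: new state and output are both s + x.
final, ys = scan(0, np.array([1, 2, 3, 4]), lambda s, x: (s + x, s + x))
print(final, ys)  # 10 [ 1  3  6 10]
```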
`LoopTensorIndex` is not needed, and `Loop` will be the more generic while loop. We should be able to implement `Scan` on top of `Loop` if needed. Also, we should remove some of the current restrictions and address feedback.
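The claim that `Scan` can be built on top of a generic `Loop` can be sketched as follows (assumed, simplified semantics: a trip count, a condition, a loop-carried state, and stacked per-iteration scan outputs — not the exact ONNX signatures):

```python
import numpy as np

def loop(max_trip_count, cond, state, body):
    """Generic while loop: run `body` while the condition holds and the
    trip count is not exhausted. `body` returns the new condition, the
    new state, and a per-iteration scan output."""
    scan_outputs = []
    i = 0
    while cond and i < max_trip_count:
        cond, state, out = body(i, state)
        scan_outputs.append(out)
        i += 1
    return state, np.stack(scan_outputs)

def scan_via_loop(init_state, inputs, scan_body):
    # Scan expressed on top of Loop: the trip count is the sequence
    # length, the condition is always true, and the body reads one
    # input slice per iteration using the iteration counter.
    def body(i, state):
        new_state, out = scan_body(state, inputs[i])
        return True, new_state, out
    return loop(len(inputs), True, init_state, body)

final, ys = scan_via_loop(0, np.array([1, 2, 3]), lambda s, x: (s + x, s + x))
print(final, ys)  # 6 [1 3 6]
```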
Next steps: I will send an email to the participants and the owner of each of the above work items. Please provide feedback; so far the feedback on both Loop and Scan has been pretty light.
Back to references to the enclosing scope:
Actually, the current description of the Loop operator says "Values from the enclosing scope (i.e. variable `a` here) are in scope" (https://github.com/onnx/onnx/blob/master/docs/Operators.md#Loop).
The rationale for this is that otherwise referring to many initializers becomes too verbose (imagine passing every weight tensor as a loop input). And lexical analysis is a simple pass to carry out in the backend.
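A Python analogy for the verbosity argument (this is not ONNX syntax, just an illustration): the loop body below reads `weights` from the enclosing scope rather than receiving it as an explicit loop-carried input, which is what the Loop description permits for graph values.

```python
import numpy as np

# `weights` lives in the enclosing scope; the body captures it
# implicitly instead of taking it as an extra parameter.
weights = np.array([0.5, 0.25])

def body(state, x):
    # Reads `weights` from the enclosing scope, not from its arguments.
    return state + float(weights @ x)

state = 0.0
for x in [np.array([1.0, 2.0]), np.array([3.0, 4.0])]:
    state = body(state, x)
print(state)  # 3.5
```

With many weight tensors, threading each one through the loop signature explicitly would bloat both the graph and the body's input list, which is the verbosity the enclosing-scope rule avoids.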