`Op`-level, it can always be provided to a
`Scan` are very related
Is this behavior of Aesara variables intended (i.e. does it serve some purpose), or is it simply a bug?
```python
import copy

import aesara.tensor as aet

x = aet.dscalar("x")
z = copy.deepcopy(x)
assert x == z  # Fails
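The failure follows from Python's default identity-based `__eq__`: a class that does not override `__eq__` compares by object identity, and `deepcopy` always returns a distinct object. A stdlib-only sketch of the same behavior (the `Node` class is a hypothetical stand-in, not Aesara's `Variable`):

```python
import copy


class Node:
    # Stand-in for an Aesara Variable: like Variable, it does not
    # override __eq__, so `==` falls back to object identity.
    def __init__(self, name):
        self.name = name


x = Node("x")
z = copy.deepcopy(x)

assert x == x            # comparison with itself holds (same object)
assert not (x == z)      # the deep copy is a distinct object, so == fails
assert x.name == z.name  # the copied *data* is still equal
```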
`==` to work like it generally should, we would need to implement consistent
`==` effectively does what
`g_1 == g_2` is a little expensive when they're both large graphs that are very similar except in the "leaf" nodes/inputs.
dicts all the time, it would surely have an effect
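The cost concern comes from structural equality having to visit every node of both graphs, even when only the leaves differ. A toy sketch of that recursion (hypothetical `Apply`/`equal_graphs` names, not Aesara's actual classes):

```python
class Apply:
    # Toy graph node: an operation applied to a list of inputs.
    def __init__(self, op, inputs):
        self.op = op
        self.inputs = inputs


def equal_graphs(a, b):
    # Structural equality recurses through every inner node, so the
    # cost grows with graph size even when only leaves differ.
    if isinstance(a, Apply) and isinstance(b, Apply):
        return (
            a.op == b.op
            and len(a.inputs) == len(b.inputs)
            and all(equal_graphs(x, y) for x, y in zip(a.inputs, b.inputs))
        )
    return a == b  # leaves compared by value


g1 = Apply("add", [Apply("mul", [1, 2]), 3])
g2 = Apply("add", [Apply("mul", [1, 2]), 3])
g3 = Apply("add", [Apply("mul", [1, 2]), 4])  # differs only in a leaf

assert equal_graphs(g1, g2)
assert not equal_graphs(g1, g3)
```

Every dict lookup keyed on a graph would pay this full traversal, which is why value-based `==` on graphs would have a noticeable effect.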
This seems like a whole other problem in itself.
As of now, I was looking for ways to copy a nested structure (for instance, a dict containing lists of `TensorVariable`s). I wanted to track changes within such structures by comparing the old structure to the new, modified one, but I ran into this issue when I deep-copied the nested structure.
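Since `==` on variables is identity-based, comparing a deep copy against the original won't detect changes. One workaround is to skip copying entirely and snapshot the `id()` of each leaf, then compare snapshots of the same (mutated-in-place) structure. A sketch with a hypothetical `snapshot_ids` helper:

```python
def snapshot_ids(struct):
    # Map each leaf's path to the id() of the object stored there.
    # Anything that is not a dict or list is treated as a leaf.
    out = {}

    def walk(node, path):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(v, path + (k,))
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(v, path + (i,))
        else:
            out[path] = id(node)

    walk(struct, ())
    return out


class Var:  # stand-in for a TensorVariable
    pass


state = {"a": [Var(), Var()]}
before = snapshot_ids(state)

state["a"][1] = Var()  # replace one entry in place
after = snapshot_ids(state)

changed = [p for p in before if after.get(p) != before[p]]
assert changed == [("a", 1)]  # only the replaced leaf is flagged
```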
`math` seems a bit cluttered. There is also
```python
import aesara
import aesara.tensor as at

rv, updates = aesara.scan(
    fn=lambda: at.random.normal(0, 1),
    n_steps=5,
)
print(rv.eval())  # [-1.5294442 -1.5294442 -1.5294442 -1.5294442 -1.5294442]
print(rv.eval())  # [-1.5294442 -1.5294442 -1.5294442 -1.5294442 -1.5294442]
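The repeated values appear because `.eval()` never applies `updates`, so every draw replays the same RNG state. A stdlib-only analogy of "drawing without applying the state update":

```python
import random


def draw(state):
    # A "pure" draw: return a value *and* the updated RNG state,
    # analogous to a RandomVariable output plus its update pair.
    rng = random.Random()
    rng.setstate(state)
    value = rng.random()
    return value, rng.getstate()


state0 = random.Random(0).getstate()

# Ignoring the returned state replays the same draw -- the analogue
# of evaluating the scan output without applying `updates`:
v1, _ = draw(state0)
v2, _ = draw(state0)
assert v1 == v2

# Threading the state through produces fresh draws:
v1, state = draw(state0)
v2, state = draw(state)
assert v1 != v2
```

Compiling with `aesara.function([], rv, updates=updates)` threads the state through in the same way, which is why the compiled function returns fresh draws on each call.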
`numpy` and start working my way through the list. Could do that with `math`, but this can get unwieldy.
`jax.scipy`) but it would make a lot of sense
`from aesara.tensor.special import softmax`?
`from aesara.tensor.math import softmax`