@steven-johnson @zvookin - I posted one small commit after you guys approved this PR halide/Halide#2871. Ok to merge?

@steven-johnson @zvookin Does anyone remember why indeterminate expressions need to propagate upwards in the simplifier? Isn't it enough for them to just exist somewhere in the expression tree?

Also, @pranavb-ca while you're here, please delete any of your old defunct branches that are no longer relevant: https://github.com/halide/Halide/branches/yours

Trying to slowly erase branches that are holding onto the old github history

@abadams re: why they propagate upward, IIRC the idea is that any expr that contains indeterminate is itself indeterminate

but it seems unnecessary to do that folding

looks like constraints set on buffer dimensions don't propagate. E.g. if you declare an RDom(input), the RDom dimensions will use input.extent.<D>, not input.extent.<D>.constrained. Two examples here: inazarenko/Halide@3243736 . Do you think it's worth fixing? If so, any advice as to how best to do that?

IIUC, it happens because add_image_checks injects constrained variables into the IR that exists at that point, but not into the RDom definitions still hiding in the env, nor into the other constraint expressions

That sounds worth fixing, but I'm not clear on how there are still Exprs hiding in the RDoms that haven't made it into the Stmt at the time add_image_checks runs

at that point, loops over RDom are defined in terms of .min and .max variables, but the LetStmts for those variables won't be created until the next pass (bounds inference)

Example:


```
produce f {
let f.s0.x.loop_max = f.s0.x.max
let f.s0.x.loop_min = f.s0.x.min
let f.s0.x.loop_extent = ((f.s0.x.max + 1) - f.s0.x.min)
for (f.s0.x, f.s0.x.loop_min, f.s0.x.loop_extent) {
f(f.s0.x) = undef()
}
let f.s1.input$x.loop_extent = ((f.s1.input$x.max - f.s1.input$x.min) + 1)
let f.s1.input$x.loop_max = f.s1.input$x.max
let f.s1.input$x.loop_min = f.s1.input$x.min
let f.s1.x.loop_max = f.s1.x.max
let f.s1.x.loop_min = f.s1.x.min
let f.s1.x.loop_extent = ((f.s1.x.max + 1) - f.s1.x.min)
for (f.s1.input$x, f.s1.input$x.loop_min, f.s1.input$x.loop_extent) {
f(f.s1.input$x) = (uint8)42
}
}
```

I see. In general, the thing that is supposed to happen is that passes that introduce new symbols into the Stmt after add_image_checks runs check whether there's a .constrained version of the symbol already in scope. It's not a great design, but that's the status quo.

We could try switching to a design where add_image_checks and unpack_buffers cooperate to unpack the true values using a different name, and then the constrained values are just called .min/.max
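Roughly (hypothetical variable names, just to sketch the idea), the lets emitted for a buffer `input` might then look like:

```
let input.min.0.unconstrained = <true value unpacked from the buffer>
let input.extent.0.unconstrained = <true value unpacked from the buffer>
let input.min.0 = <constrained value, may reference input.min.0.unconstrained>
let input.extent.0 = <constrained value>
```

so downstream code (RDoms, bounds inference) could just reference `input.min.0` / `input.extent.0` without knowing about constraints at all.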

One snag there is that bounds inference inserts definitions of `.min` and `.max` *before* the constrained variables

my very blurry idea of the fix was to (a) sort the definitions of `.constrained` variables in topological order, so that one constraint can use another; and (b) change the bounds inference to get underneath all those definitions and use them if they exist

Another option is to have add_image_checks mutate the environment along with the Stmt so far

We've made a deep copy of it at that point

That might be the most general change. It wouldn't do your point (a) though

yes, that sounds promising; though then we still need to change the bounds inference to insert its LetStmts after the `.constrained` definitions