`libhexagon_remote_skel.so` and friends are checked in, in binary form.
`run_main_on_hexagon` is still only in the Hexagon SDK, though, I'd imagine. Any idea why the changes I make in the Halide source aren't appearing in the simulator? It looks like `libhexagon_remote_skel.so` is the only library I need a copy of, but there are several others (`hexagon_sim_remote`, `libsimqurt.a`) that are built in Halide's src/runtime/hexagon_remote dir but whose absence doesn't trigger any linker errors.
Searching for `halide_malloc` in binary files, nothing that I've built in Halide matches, but the Halide_TOOLS copy inside the Hexagon SDK does match. It looks like this symbol isn't being included in my build. My pre-build configuration command is `cmake -B ./build/ -S . -G Ninja -DCMAKE_BUILD_TYPE=Release -DTARGET_WEBASSEMBLY=OFF -DWITH_TESTS=OFF -DWITH_TUTORIALS=OFF -DWITH_UTILS=OFF -DWITH_PYTHON_BINDINGS=OFF`.
Regarding `factor`: I read this as "my dimension is X, so width, split by a factor of 4, is then width / 4", and I'm not sure how to read it to get the behaviour you're explaining. If, on the other hand, the parameter were called `num_elements_in_inner_dimension`, there would be little room for confusion! I'm not proposing this name, just trying to illustrate how a more explicit name (or more text in the docs) could clear up any confusion once and for all.
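For what it's worth, here is a minimal sketch of how the `factor` argument to `Func::split` behaves (the names `f`, `x`, `xo`, `xi` are just illustrative): the factor is the extent of the new inner dimension, and the outer dimension then covers roughly width / factor iterations.

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    Func f("f");
    Var x("x"), xo("xo"), xi("xi");
    f(x) = x;

    // split(x, xo, xi, 4): xi ranges over [0, 4) -- the factor is the extent
    // of the new inner dimension -- while xo covers ceil(width / 4) iterations.
    f.split(x, xo, xi, 4);

    f.realize({16});  // width 16 -> 4 outer iterations, 4 inner elements each
    return 0;
}
```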
Is there a way to create a `Halide::Runtime::Buffer` that is only allocated on device? I have a few AOT-compiled functions that will run on device only, and it would be nice not to allocate their inputs and outputs on the host.
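Not an authoritative answer, but one pattern that should do this, assuming a CUDA target (other device interfaces expose the same calls): construct the buffer with a null host pointer, which records the shape without allocating host memory, then allocate device storage explicitly. The sizes and names below are illustrative.

```cpp
#include "HalideBuffer.h"
#include "HalideRuntimeCuda.h"

int main() {
    // Shape-only buffer: a null host pointer describes the dimensions
    // without allocating any host storage.
    Halide::Runtime::Buffer<float> buf(nullptr, 1024, 768);

    // Allocate backing storage on the device only (CUDA here as an example).
    buf.device_malloc(halide_cuda_device_interface());

    // buf can now be passed to an AOT-compiled pipeline that runs entirely
    // on the device; no host allocation ever happens.

    buf.device_free();
    return 0;
}
```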
Halide is an open-source domain-specific language which iterates over up to four dimensions to apply computations.
There is no such restriction...
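As a quick illustration of that point, a Func is not limited to four dimensions; a minimal sketch (the names and sizes are arbitrary):

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    // A six-dimensional Func, just to show there is no four-dimension limit.
    Var a, b, c, d, e, f;
    Func g("g");
    g(a, b, c, d, e, f) = a + b + c + d + e + f;

    Buffer<int> out = g.realize({4, 4, 4, 4, 4, 4});
    return 0;
}
```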
I see these in the stmt files:
let t7162 = ((t7077 - input.min.0) + t7170)
So I figured that adding something like this:
pipeline.add_requirement(input.dim(0).min()==0, "Min should be 0");
would simplify the stmt and hopefully make things faster.
While it did simplify the stmt, the performance was significantly slower. Any thoughts on why adding the requirement that all buffers have a min of 0 might negatively impact performance?
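Not an explanation of the slowdown, but for comparison, the idiom usually used to promise zero mins to the compiler is `set_min` on the input/output buffer dimensions rather than `add_requirement`; a minimal sketch (the pipeline and names are made up for illustration):

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    ImageParam input(UInt(8), 2, "input");
    Var x("x"), y("y");
    Func out("out");
    out(x, y) = input(x, y) + 1;

    // Promise (rather than merely require) that the buffer mins are zero,
    // so lowering can drop the "- input.min.0" style terms.
    input.dim(0).set_min(0);
    input.dim(1).set_min(0);
    out.output_buffer().dim(0).set_min(0);
    out.output_buffer().dim(1).set_min(0);

    out.compile_to_lowered_stmt("out.stmt", {input}, Text);
    return 0;
}
```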
Does anyone know if there is a way to map the Func fields in a Halide::Generator class to their corresponding Funcs in a compiled pipeline (as seen in the *.schedule.h file dumped by the 2019 auto-scheduler from Andrew Adams)?
Does Halide provide an automated way of extracting this mapping, or any other information that could help derive it?
For example, the auto-scheduler pipeline has the func1_1 Func local variable, which corresponds to the func1 private field in the Halide::Generator class. A less trivial mapping is with pipeline Func local variables that start with "repeat_edge*". I understand that these map to Funcs produced by a BoundaryConditions call in the Halide::Generator class, although I am not sure about this. Thanks.
For the question above, here is the sample code. I am looking for a way to automatically map func1 from the HalideGenerator1 class to func1_1 in the scheduled pipeline below, and func2 to repeat_edge_1.
class HalideGenerator1 : public Halide::Generator<HalideGenerator1> {
public:
    ...
    void generate() {
        ...
        func2(x, y) = BoundaryConditions::constant_exterior(func1, 0)(x, y);
        ...
    }

    void schedule() {
        ...
    }

private:
    ...
    Func func1{"func1"};
    Func func2{"func2"};
    ...
};
inline void apply_schedule_HalideGeneratorName(
    ::Halide::Pipeline pipeline,
    ::Halide::Target target
) {
    using ::Halide::Func;
    ...
    Func func1_1 = pipeline.get_func(28);
    Func repeat_edge_1 = pipeline.get_func(27);
    ...
    func1_1
        .split(...)
        .vectorize(...)
        .compute_root()
        .parallel(...);
    ...
    repeat_edge_1
        .split(...)
        .vectorize(...)
        .compute_root()
        .parallel(...);
    ...
}
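One low-tech way to recover the mapping (a sketch, not an official API for this): Funcs declared with an explicit name in the Generator, e.g. Func func1{"func1"}, carry that name into the compiled pipeline (possibly with a uniquifying suffix), and the BoundaryConditions helpers introduce Funcs whose names reflect the boundary condition (e.g. repeat_edge, constant_exterior). So printing `Func::name()` for the indices the schedule passes to `get_func()` makes the correspondence explicit. The helper below is hypothetical and assumes you can get at the same Pipeline object the auto-scheduler saw:

```cpp
#include "Halide.h"
#include <iostream>

// Hypothetical helper: given the pipeline the auto-scheduler operated on,
// print the name behind a get_func() index so it can be matched against
// the Funcs declared in the Generator (e.g. Func func1{"func1"}).
void print_func_name(Halide::Pipeline pipeline, int index) {
    Halide::Func f = pipeline.get_func(index);
    std::cout << "get_func(" << index << ") -> " << f.name() << "\n";
}
```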