These are chat archives for Deskshell-Core/PhoenixEngine

21st
Feb 2015
Ingwie Phoenix
@IngwiePhoenix
Feb 21 2015 02:10
This friend of mine is messing with me. :) He reminded me of drag0n and how I told him that - which is true - I planned to use this to create a series of amazing developer tools and the "drag0n Store" - an advanced app store. I told him that I wanted to use the Phoenix Engine as that software's foundation, and now he is like, "why not make an OS designed for devs? Make a desktop, bundle Phoenix Engine, and use the store to deliver PE-based apps."
I am like, "...well, it definitely isn't impossible."
Ingwie Phoenix
@IngwiePhoenix
Feb 21 2015 02:33
Time to do this. Rewriting the IceTea core to use Ninja's build execution logic. It was explained to me how it works, so now I can implement it. :)
Ingwie Phoenix
@IngwiePhoenix
Feb 21 2015 03:18

So there I am, stuck on a little decision. The situation is the following: Ninja creates a pool of spawned subprocesses based on the number of available CPU cores (i.e. available threads) and fills that pool with commands that it generates from build statements and rules. So from this:

rule CC
  command = gcc -c $in -o $out
rule LINK
  command = gcc $in -o $out
build foo.o: CC foo.c
build bar.o: CC bar.c
build baz.o: CC baz.c
build app: LINK foo.o bar.o baz.o

This effectively fills Ninja first with the three CC tasks (compiling all the .c files), and once they are done, all dependencies for the LINK task are ready and it runs that. So on a 4-core CPU:

1 = CC foo.c
2 = CC bar.c
3 = CC baz.c
4 = empty.
# After 1, 2 and 3 are done…
1 = LINK app
2, 3, 4 = empty.
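
To check my own mental model, here is a tiny self-contained C++ toy that only simulates that slot-filling order - it just prints what would run in which slot; none of the names here are Ninja's real internals:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Toy task: a command plus the indices of the tasks it depends on.
struct Task {
    std::string command;
    std::vector<std::size_t> deps;
    bool done = false;
};

int main() {
    std::vector<Task> tasks = {
        {"CC foo.c", {}}, {"CC bar.c", {}}, {"CC baz.c", {}},
        {"LINK app", {0, 1, 2}},  // only ready once all CC tasks are done
    };
    const std::size_t slots = 4;  // pretend we have a 4-core CPU
    std::size_t wave = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        std::size_t used = 0;
        std::vector<std::size_t> started;
        // Fill free slots with tasks whose dependencies are all done.
        for (std::size_t i = 0; i < tasks.size() && used < slots; ++i) {
            if (tasks[i].done) continue;
            bool ready = true;
            for (std::size_t d : tasks[i].deps) ready = ready && tasks[d].done;
            if (!ready) continue;
            std::cout << "wave " << wave << ", slot " << ++used
                      << " = " << tasks[i].command << "\n";
            started.push_back(i);
            progress = true;
        }
        // Pretend the whole wave finishes before the next one starts.
        for (std::size_t i : started) tasks[i].done = true;
        ++wave;
    }
    return 0;
}

Running that prints exactly the picture above: the three CC tasks land in slots 1-3, and LINK app only shows up in the next wave.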

I am about to implement this very logic in IceTea. However, I have a tiny decision problem. Ninja wrote its own async subprocess routines that I cannot use. Instead, I already have that functionality through the STLPlus library I am using, namely stlplus::async_process. Now here is the thing I am stuck on:

1: void Run() is one giant while(!tasks.empty() && !subprocesses.done()) loop that picks up one task at a time and also checks whether all subprocesses went well. If not, it spills their STDERR and exits the app. However, IceTea also supports scripted tasks. So imagine we have

task("limbo", "exe") {
    input: ["limbo.c", "limbo_resources.bbq"]
}

and the rules for .c and .bbq are:

rule("C", "c") {
    accepts: ["*.c"],
    outputs: {
        pattern: "*.o",
        expected: "out/%t-%b.o"
    },
    build: function(input, output, targetName, target) {
        return shell.sanitize(["gcc", "-c", input.join(" "), "-o", output]);
    }
}
rule("Resource generator", "bbq-rsrc") {
    accepts: ["*.bbq"],
    outputs: {
        pattern: "*.c", // <---
        expected: "out/%t-%b.c"
    },
    build: function(input, output, targetName, target) {
        return (bool)bbq2c(/*...*/);
    }
}

That would mean that the compiler job goes into the async pool first. But then the process will focus on the BBQ file and no checks on the subprocesses are done. So by the time the BBQ archive is done, we could maybe already have reported an error, or even terminated the script execution to save time, CPU and memory. The question is - should I have such a monolithic loop?
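
For reference, option 1 would roughly look like the sketch below. SubprocessPool and Task are hypothetical stand-ins for whatever ends up wrapping stlplus::async_process and the IceTea task objects - not the real API, it just shows where the loop blocks:

#include <cstdlib>
#include <iostream>
#include <queue>
#include <string>

// Hypothetical wrapper around the async subprocess pool; stands in for
// whatever ends up wrapping stlplus::async_process (not the real API).
struct SubprocessPool {
    void spawn(const std::string& cmd) { std::cout << "spawn: " << cmd << "\n"; }
    bool done() const { return true; }            // have all children exited?
    bool all_succeeded() const { return true; }   // any non-zero exit codes?
    std::string collected_stderr() const { return ""; }
};

// Hypothetical task: either a shell command or a scripted build function.
struct Task {
    bool scripted = false;
    std::string command;       // used when !scripted
    bool run_script() const {  // used when scripted (e.g. the bbq2c call)
        std::cout << "running scripted task...\n";
        return true;
    }
};

// Option 1: one monolithic loop. A scripted task blocks it, so subprocess
// failures are only noticed after that task returns.
void Run(std::queue<Task> tasks, SubprocessPool& subprocesses) {
    while (!tasks.empty() || !subprocesses.done()) {
        if (!tasks.empty()) {
            Task t = tasks.front();
            tasks.pop();
            if (t.scripted)
                t.run_script();                   // the loop is stuck here
            else
                subprocesses.spawn(t.command);
        }
        // Checks only ever happen between tasks.
        if (!subprocesses.all_succeeded()) {
            std::cerr << subprocesses.collected_stderr();
            std::exit(1);
        }
    }
}

The problem is visible right in the sketch: all_succeeded() is only consulted between tasks, so a long scripted task (the BBQ step) delays any error report and wastes work.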

2: The build executor is split in two: the loop is the same as above, BUT it externalizes the checks on subprocesses to another thread. That way, when something goes wrong, the sub-thread can already show the error message and maybe trigger build termination or prepare clean-ups while the main loop is stuck at the BBQ part.
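
And option 2 would roughly split out like this - again just a hedged sketch with std::thread and std::atomic, not actual IceTea code; the SubprocessPool stub stands in for the real pool:

#include <atomic>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Same hypothetical pool as in the option 1 sketch; only what the watcher needs.
struct SubprocessPool {
    bool all_succeeded() const { return true; }
    std::string collected_stderr() const { return ""; }
};

int main() {
    SubprocessPool subprocesses;
    std::atomic<bool> failed{false};
    std::atomic<bool> stop{false};

    // Watcher thread: polls the pool and flags failure as soon as it happens,
    // even while the main loop is busy with a scripted (BBQ) task.
    std::thread watcher([&] {
        while (!stop.load()) {
            if (!subprocesses.all_succeeded()) {
                std::cerr << subprocesses.collected_stderr();
                failed.store(true);
                return;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    });

    // Main loop: feeds tasks exactly like option 1, but bails out as soon
    // as the watcher reports a failure.
    while (!failed.load()) {
        // ... pick up the next task, spawn it or run the scripted part ...
        break;  // placeholder so this toy actually terminates
    }

    stop.store(true);
    watcher.join();
    return failed.load() ? 1 : 0;
}

Clean-ups (killing still-running children, removing half-written outputs) could then start from the watcher, or be signaled back to the main loop through the same flags.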

What would your suggestion be? I know it's a lengthy question, but an important one in the long run.