I think in this case you are better off using forward chaining. You would need two rules for that. First, you have a rule similar to the one you had originally, where you match a single person along with their cars, and wrap the result in a new fact, which you yield from the right-hand side of the rule.
```csharp
When()
    .Match<Person>(() => person, p => p.Age < 30)
    .Let(() => cars, () => person.Cars.Where(c => c.Year > 2016))
    .Having(() => cars.Any());

Then()
    .Yield(ctx => new YoungPersonWithNewCar(person, cars));
```
Then another rule matches these new facts and collects them.
```csharp
When()
    .Query(() => youngPeopleWithNewCars, q => q
        .Match<YoungPersonWithNewCar>()
        .Collect()
        .Where(c => c.Any()));

Then()
    .Do(ctx => DoSomethingWithNewCarsThatBelongToYoungPeople(youngPeopleWithNewCars));
```
@snikolayev I saw that you're doing some work on rete optimization, such as this commit here:
If it'd be helpful, I'd be happy to try a new version with our project and provide a comparison using the Unity 3D Engine's profiler again, as I did in this issue:
If there are two commits you'd like me to do an A/B comparison with, just let me know.
Or I can wait a bit if there's more you're planning to do.
@przemekwojcik Check out ISession.Events.LhsExpressionEvaluatedEvent. This event fires every time a When (LHS) expression is evaluated, and references all rules that contain this expression. You can monitor these events over time to see what is matching for a given rule.
In a Rete network, you have to do a bit of legwork to figure out exactly which pattern matches cause a rule to fire, because expressions can be shared between multiple rules and their results are cached. You therefore need to build some tooling to monitor evaluations and associate them with a given rule firing.
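As a starting point, monitoring could look something like the sketch below. It assumes the `LhsExpressionEventArgs` exposes the rules that reference the evaluated expression (the exact member names should be checked against your NRules version):

```csharp
// Sketch: counting LHS expression evaluations per rule over time.
// Assumes args.Rules lists every rule referencing the shared expression;
// verify the actual LhsExpressionEventArgs members in your NRules version.
var evaluationCounts = new Dictionary<string, int>();

session.Events.LhsExpressionEvaluatedEvent += (sender, args) =>
{
    // An expression can be shared between rules, so attribute
    // the evaluation to every rule that references it.
    foreach (var rule in args.Rules)
    {
        evaluationCounts.TryGetValue(rule.Name, out var count);
        evaluationCounts[rule.Name] = count + 1;
    }
};
```

Correlating these counts with rule firings (e.g. via the `RuleFiredEvent`) is the part that takes the extra tooling mentioned above.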
@snikolayev I'm curious to experiment a bit with some resource pooling to try to reduce allocations in NRules. I'd love to hear your thoughts on it when you have a moment.
One low-hanging-fruit spot I'd like to try is Tuple reuse. It's very clear where Tuples are constructed, and they could instead come from a pool of Tuples that can be reused. Any ideas to help me track down the point (or points) in the code where a Tuple can be safely returned to the pool for reuse?
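For reference, the kind of pool I have in mind is roughly the following minimal sketch (the names here are illustrative, not NRules APIs):

```csharp
using System.Collections.Generic;

// Minimal object pool sketch: rent an instance if one is available,
// otherwise allocate; return instances for later reuse.
// Callers must reset an item's state before (or after) returning it.
public sealed class ObjectPool<T> where T : class, new()
{
    private readonly Stack<T> _items = new Stack<T>();

    public T Rent() => _items.Count > 0 ? _items.Pop() : new T();

    public void Return(T item) => _items.Push(item);

    public int Count => _items.Count;
}
```

The hard part, as noted above, is knowing when a Tuple is no longer referenced anywhere in the network so that `Return` can be called safely.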
@snikolayev Thanks for the pointers. Yes, it makes sense to start with the easiest options that offer the greatest optimization for the amount of effort and risk involved. I'll openly admit that I'm not especially well versed in this kind of optimization, and I'd definitely want to keep you in the loop as I experiment to make sure I'm going in the right direction. I know there can be subtle implementation details that make all the difference in how much allocation savings we actually reap.
The object arrays created in the LHSExpression Invoke methods look like they'd be pretty straightforward to pool. Do you believe that swapping these arrays out for List&lt;object&gt; instances that are cleared and repopulated on each call would offer an allocation advantage?
If that won't help significantly, I could also imagine caching a Dictionary&lt;ITuple, object[]&gt; to be used by these LHSExpression methods. I believe this would work because the argument array will always be the same size for a given Tuple. Obviously, this cache would need to be updated whenever a given tuple's facts change. That could either be done by iterating through the facts and updating them on each call to Invoke, or by checking a "dirty" flag on the tuple indicating that the facts have changed since the last invoke, though that would be more complex.
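To make the dirty-flag idea concrete, here's a rough sketch of what I'm picturing. Everything here (`TupleArgumentCache`, `IsDirty`, the `Facts` walk) is hypothetical and not existing NRules code:

```csharp
// Hypothetical sketch only: none of these names exist in NRules.
// Caches one argument array per tuple and repopulates it in place
// only when the tuple's facts have changed since the last Invoke.
public sealed class TupleArgumentCache
{
    private readonly Dictionary<ITuple, object[]> _cache =
        new Dictionary<ITuple, object[]>();

    public object[] GetArguments(ITuple tuple)
    {
        if (!_cache.TryGetValue(tuple, out var args))
        {
            args = new object[tuple.Count]; // array size is fixed per tuple
            _cache.Add(tuple, args);
            tuple.IsDirty = true;           // force initial population
        }

        if (tuple.IsDirty)
        {
            // Refill the existing array instead of allocating a new one.
            int i = 0;
            foreach (var fact in tuple.Facts)
                args[i++] = fact.Value;
            tuple.IsDirty = false;
        }

        return args;
    }
}
```

Entries would also need to be evicted when a tuple is retracted, otherwise the cache itself becomes a leak.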
For the Tuple.Facts IEnumerable, I could give each Tuple a List<Fact> that's cleared and repopulated by walking the tuples on each call to get the IEnumerable. However, again, maybe there'd be a more efficient way to do that and only update the Facts list as needed. Think that'd be possible?