These are chat archives for ChaiScript/ChaiScript

16th
Mar 2018
Daniel Church
@anprogrammer
Mar 16 2018 12:50
I have a script. In the script there is an array of around 20 objects, each of which has an "update" method. These are objects created/written in ChaiScript, not from the C++ side
Every frame, I have some ChaiScript code which loops through the array and calls draw on each object. With a debugger attached I noticed every call was throwing multiple guard_error exceptions
ChaiScript has been slow for me, so I figured maybe it's the potentially 100 or more exceptions per frame being thrown and caught that's slowing things down
I modified ChaiScript to check if a function can be called for an object-type before trying to do so (preventing the guard_errors from triggering). Got a 25-50% performance improvement for my particular use-case. Would this code be interesting for anyone, or am I approaching this from the wrong direction?
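For context, the setup is roughly this shape (the Entity name, fields, and numbers here are made up for illustration, not the actual project code); every update/draw call inside the frame loop goes through ChaiScript's dispatch:

```cpp
// Host side: a minimal sketch that defines the objects entirely in script
// and pumps a per-frame entry point from C++.
#include <chaiscript/chaiscript.hpp>

int main() {
  chaiscript::ChaiScript chai;

  chai.eval(R"(
    attr Entity::x;
    def Entity::Entity() { this.x = 0.0; }
    def Entity::update(dt) { this.x += dt; }
    def Entity::draw() { print(this.x); }  // stand-in for real rendering

    global entities = [];
    for (var i = 0; i < 20; ++i) { entities.push_back(Entity()); }

    def frame(dt) {
      for (var i = 0; i < entities.size(); ++i) {
        entities[i].update(dt);
        entities[i].draw();
      }
    }
  )");

  // Called once per render frame by the host application.
  chai.eval("frame(0.016);");
}
```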
Jason Turner
@lefticus
Mar 16 2018 14:17
I have actually gone many, many rounds trying to reduce the cases where those exceptions are caught and handled internally. If you have something I've not seen before, I would love to see it
can you put it on a github fork @anprogrammer and open a pull request?
That's the best way for me to evaluate it
Daniel Church
@anprogrammer
Mar 16 2018 14:22
Sure, I'll do that. On a more general note, I'm guessing the (very flexible and powerful) dispatch system is a pretty big performance hog? Building a list of functions, sorting them, searching them for each call? Someday I may try to work on some sort of system that adds a map from function name to the actual call on individual object instances. Cache methods on the object instance itself to skip the lookup the second time you call it. Would that be a useful idea... or crazy talk?
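In spirit, something like this (just a sketch of the idea with hypothetical names and plain std types, not ChaiScript's actual dispatch code):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct Instance;
using Method = std::function<double(Instance &, double)>;

struct Instance {
  std::map<std::string, double> fields;
  // Per-instance cache: method name -> resolved call, filled on first use.
  std::map<std::string, Method> method_cache;
};

// Stand-in for the expensive global dispatch (collect overloads, sort, match).
Method resolve_by_full_dispatch(const std::string &name) {
  std::cout << "full dispatch for '" << name << "'\n";
  return [](Instance &self, double dt) { return self.fields["x"] += dt; };
}

// Check the instance's own cache first; only pay for full dispatch once.
Method &lookup(Instance &obj, const std::string &name) {
  auto it = obj.method_cache.find(name);
  if (it == obj.method_cache.end()) {
    it = obj.method_cache.emplace(name, resolve_by_full_dispatch(name)).first;
  }
  return it->second;
}

int main() {
  Instance e;
  lookup(e, "update")(e, 0.016);  // pays for the full dispatch
  lookup(e, "update")(e, 0.016);  // served from the instance's own cache
  std::cout << e.fields["x"] << "\n";
}
```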
Jason Turner
@lefticus
Mar 16 2018 14:28
I've gone down that road before. Any caching system that I was able to come up with ended up being more expensive than the dispatch itself
but yes, the dispatch is where all the hard work happens. I also (probably because of a lack of imagination) have not come up with any kind of caching system that would not potentially break things that are currently allowed
I have debated doing parse-time resolution of the functions being dispatched. Kind of like a "compile" option, for users who don't care about the possibility that new functions with different overloads could be added later during execution
I think that's probably the option with the most potential. No cache to manage, the func_call nodes would just know if they have a fixed thing to call
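Sketching that option (with hypothetical names, not the real AST types): a func_call node carries an optional pre-resolved target filled in by an opt-in "compile" pass, and only falls back to dynamic dispatch when it is empty:

```cpp
#include <functional>
#include <iostream>
#include <optional>
#include <string>

using Function = std::function<void()>;

// Stand-in for the normal runtime overload resolution.
void dispatch_at_runtime(const std::string &name) {
  std::cout << "runtime dispatch of '" << name << "'\n";
}

struct FuncCallNode {
  std::string name;
  // Filled by an opt-in "compile" pass for users who accept that the overload
  // set will not change during execution; otherwise left empty.
  std::optional<Function> fixed_target;

  void eval() const {
    if (fixed_target) {
      (*fixed_target)();          // direct call, no lookup and no cache to manage
    } else {
      dispatch_at_runtime(name);  // normal dynamic dispatch
    }
  }
};

int main() {
  FuncCallNode dynamic_call{"update", std::nullopt};
  FuncCallNode compiled_call{"update", Function([] { std::cout << "direct call to update\n"; })};
  dynamic_call.eval();
  compiled_call.eval();
}
```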
Daniel Church
@anprogrammer
Mar 16 2018 14:31
Good to know. I may be foolhardy and take a crack at it someday, but I can see why that would be problematic
btw, just wanted to say thank you. It's an amazing piece of software, and the level of support you provide to end users is even more amazing
Jason Turner
@lefticus
Mar 16 2018 14:35
I attempt to. I have lots of travel and business right now, so I know there are a few support issues and PRs languishing
I'm glad it's working out for you
Daniel Church
@anprogrammer
Mar 16 2018 14:38
Hope your business trip goes well!