Which way do you guys use `try` most of the time?
`if error? try [...] [..after-error recovery..]`
`set/any 'err try [...] (... later err is processed ...)`
Please post your answers into the thread if possible. Thanks.
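For reference, a minimal sketch of the two idioms side by side (hypothetical recovery code; a zero-divide is assumed as the test error):

```red
;; Idiom 1: test the result of TRY directly
if error? err: try [1 / 0] [
    print ["recovered from:" err/id]    ;; after-error recovery
]

;; Idiom 2: capture the result with SET/ANY, process it later
set/any 'err try [1 / 0]
;; ... other code may run in between ...
if error? :err [print "err is processed later"]
```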
`try/catch` isn't a thing. We have `throw`/`catch`, which is not specifically error-related, but is about general non-local flow control.
Do we have a function that creates a context and binds the following code to it before running it?
I want to create a `with`:
`with [a: 33 c: 33] [...code...]`
Which is equivalent to:
`do bind [...code...] make object! [a: 33 c: 33]`
But I would like to know if it already exists.
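It doesn't appear to be a stock function, but a minimal `with` sketch following exactly that equivalence could be (hypothetical, untested against all edge cases):

```red
;; Hypothetical WITH: evaluates CODE bound to a fresh context built from SPEC
with: func [spec [block!] code [block!]][
    do bind code make object! spec
]

with [a: 33 c: 33] [print a + c]    ;; prints 66
```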
Here I am again: why not allow `set` with a path in parse? It would make setting values in a context very easy. (Please pardon the wrong usage of `if`, I have still not studied it.)

```red
valid?: func [data test-name] [
    result: switch test-name [
        size     [either data ... [true] [false]]
        position [either data ... [true] [false]]
    ]
]
options: [size 00x200 position above]
ctx: make object! [size: none position: none]
parse options [
    'size set ctx/size pair! (if not valid? ctx/size 'size [fail!])
    'position set ctx/position word! (if not valid? ctx/position 'position [fail!])
]
```
Correction: `options: [size 300x200 position above]` (I lost a `3` for no reason!)
You can bind the rules block to `ctx`. E.g. a working example:

```red
between?: func [data min-test max-test][
    all [min-test <= data max-test >= data]
]
valid?: func [data test-name] [
    switch test-name [
        size     [all [between? data/x 250 500 between? data/y 200 400]]
        position [find [above below left right] data]
    ]
]
ctx: make object! [size: none position: none]
options: [size 300x200 position above]
parse options bind [
    'size set size pair! if (valid? size 'size)
    'position set position word! if (valid? position 'position)
] ctx
== true
ctx
== make object! [
    size: 300x200
    position: 'above
]
```
Alternatively, you can write `parse options [set ctx/size ..]` as `parse options reshape [set !(in ctx 'size) ..]`. See reshape.
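For illustration, a hedged sketch of the `reshape`-based form (this assumes the community `reshape` function, which is not a stock Red function, where `!(expr)` is replaced by the evaluated expression at reshape time):

```red
;; Assumes the non-stock community RESHAPE, where !(expr) splices
;; the value of expr into the block at reshape time.
ctx: make object! [size: none]
rule: reshape ['size set !(in ctx 'size) pair!]
;; rule is now ['size set size pair!], with SIZE already bound into CTX
parse [size 300x200] rule
print ctx/size    ;; expected: 300x200
```

Note the binding is static: the word is bound once, when the rules block is built, not at match time.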
```red
struct: [size "aaa" position "bbb" color [background none text none]]
options: [size 300x200 position above background red]
parse options [
    'size set struct/size pair! if (valid? struct/size 'size)
    'position set struct/position word! if (valid? struct/position 'position)
    'background set struct/color/background word! if (valid? struct/color/background 'colors)
]
```
`Bind` and `reshape` are nice functions, but the former does not use path notation; the latter uses it, but statically, at rules-block binding time.
`set word` is expressed as a `set` rule:
`set-rule: [set word]`
Anyway, paths are cleaner, but they are not gonna be there anytime soon (unless you PR it and defend it ;)
The ultimate defense is simplicity. By "stacking `compose`" I mean the code practice where you nest multiple `compose` calls, which can be generalized as "the code practice where you have to transform the block of code multiple times with `reshape`-like functions". This forces you to keep in mind the reshaping/transformation mechanisms of the functions you are using for this purpose, multiple meta-states of the code block, the relevant elements, and the final version of the code and where it will interact. Paths are simple, visible, and straightforward, with no layer of complications: what you see is what you manage.
The second defense is @greggirwin's: if you have heavy overhead only when using them, well, it's your choice, as no one else will pay a penalty for these features.
I have looked at red/red#3528: Doc refers to tight-loop overhead in a scenario where parse reduces words to their content for rules, like:
```red
>> rules: [end: [to end]]
== [end: [to end]]
>> parse "foo" [rules/end]
== true
```
This usually happens far more frequently than `set ctx/key` in my usage scenario. But even for rule-word reduction, you should not have any overhead in standard usage if you put the check for `path!` after parse's check for `word!`: once a word is found, the extra check for a path is not performed. In any scenario, you pay overhead only if you actually use paths; it's your choice!
```red
w-rule: [to end]
b-rule: [end: [to end]]
o-rule: object [end: [to end]]
oo-rule: object [a: object [end: [to end]]]
ooo-rule: object [a: object [b: object [end: [to end]]]]
count: 1'000'000
x: [loop count [[to end]]]
w: [loop count [w-rule]]
b: [loop count [b-rule/end]]
o: [loop count [o-rule/end]]
oo: [loop count [oo-rule/a/end]]
ooo: [loop count [ooo-rule/a/b/end]]
profile/show [x w b o oo ooo]
```
```
Time         | Time (Per)   | Memory | Code
0:00:00.035  | 0:00:00.035  | 440    | x
0:00:00.042  | 0:00:00.042  | 284    | w
0:00:00.074  | 0:00:00.074  | 284    | o
0:00:00.084  | 0:00:00.084  | 284    | b
0:00:00.093  | 0:00:00.093  | 284    | oo
0:00:00.112  | 0:00:00.112  | 284    | ooo
```
`parse` needs to check for the new datatype as well, which adds to this. And this is just reading the value, not setting it. But if the user isn't paying the cost when the feature isn't used, this seems like a useful way to modularize rules. Of course, creating "modules" of common rules, assuming people use them, means many people are paying that price for common rules.