lit-paren! is a fine name, as a parallel to lit-path!, but we have to draw a line somewhere, e.g. what would a get-paren! mean? But since we already have a value that represents what you want, why not use that? Of course, this is all part of exploring and balancing the core language and what it lets you create. Constraints can be very helpful. e.g., you're using path notation, but if you described your dialect, what are the key elements and actions?
Are you telling me to take into consideration the key elements of my ideas when I think about path dialects, or are you asking me to write them down here?
@greggirwin I have read my last year's notes and your replies again. You are right, many things have to be considered, but I now think parens are not a good solution. The current behavior must be maintained; we should just extend path segments with a skip operation using some symbol, while keeping the current mechanism. Let's make the hypothesis that we choose >> or > as the skip symbol.
This skips 2 positions to the right
word/>>2/word2
This skips 4 positions to the left
word/>>-4/word2
As before, but it gets the number from :value
word/>>:value/word
As before, but it gets the number from the result of evaluating (code)
word/>>(code)/word
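Just to ground these, here is a minimal sketch of the same moves using today's skip function (blk, pos and n are only illustrative names); the >> forms above would fold such calls into a single path:
blk: [a b c d e f g h]
pos: at blk 5           ; pretend the path has already navigated here
skip pos 2              ; >>2       skips 2 positions to the right
skip pos -4             ; >>-4      skips 4 positions to the left
n: 3
skip pos n              ; >>:value  the offset comes from a word's value
skip pos (1 + 2)        ; >>(code)  the offset comes from evaluating a paren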
But we can go even further.
Paths navigate through series and we have another great Red tool to navigate through them: PARSE! Why don't we use it?
Please take a look at this path:
word/<:[skip 2 thru 'section-start <:]/word
If a path segment has <:[] then the current path series position is passed to PARSE, and the block is its RULES argument. While parsing, each time <: is encountered, the path's current series position is changed to the parse position. If PARSE fails, the path fails and returns none.
(Conceptually this last solution seems so easy and straightforward)
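To make that concrete, here is a minimal sketch of capturing a position inside a PARSE rule and continuing from it, which is roughly what a <:[...] segment would do inside one path (data and pos are illustrative names, and I use today's rule syntax, where skipping two values is written 2 skip):
data: [a b c section-start x y]
pos: none
parse data [2 skip thru 'section-start pos: to end]
pos      ;== [x y]  the rest of the path would continue from here; a PARSE failure would map to none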
I like both ideas, but the second one is even more powerful and can be implemented by reusing 99% of the code we already have in Red.
[Id <object> "section2" id 33]
parts and similar names let you express the elements after <object> in one path selection, without using blocks and while reusing the same words: word/>>3/id returns 33.
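A rough equivalent with today's functions (blk is an illustrative name) of what word/>>3/id would do on that block:
blk: [Id <object> "section2" id 33]
blk/id                 ;== <object>  a plain path stops at the first (case-insensitive) match, Id
select skip blk 3 'id  ;== 33        skipping 3 values first reaches the second id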
Excessive nesting is sometimes also a problem, and not all structures are better when nested; sometimes they are better expressed linearly...
(<< is really >>.) The last usage I forgot to write down is SKIP to a value or datatype. Let's take the first example and extend it: [Id word word <object> data data id 33]
The first part, as before, ends at <object>. With TO-style shifting you can have variable-length data in the first segment, and in the second as well, just with word/>>object!/>>id. With this syntax a full horizontal navigation is possible in just one line.
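And a sketch of the skip-to-datatype / skip-to-value navigation that word/>>object!/>>id would condense, again using today's find (names illustrative; the marker in the example block is literally a tag!, so I search for tag! here):
blk: [Id word word <object> data data id 33]
pos: find blk tag!   ; >>object!  jump forward to the first value of the marker datatype
pos: find pos 'id    ; >>id       then jump forward to the word id
pos/2                ;== 33       the value right after it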
It is essentially find expressed in one efficient and compact path.
Another option would be a struct! datatype, which would be more memory friendly than block! and would provide named accessors to all values.
[1 2 3 4 5 6 7 8 9] vs [[1 2 3] [4 5 6] [7 8 9]]. Nesting, and keys vs positional values, add overhead, but also information. There is no single solution that is best in every case, so it will be interesting to see what comes of your work.
Everyone I hear from who is using the M1 is a bit shocked at how fast it is, even on a Mac that only has 8 GB RAM. The key to the design is very wide memory buses and, of course, having all the memory right on the main chip. Also, each CPU core can saturate the memory controller, which can handle around 60-70 GB/s; an Intel/AMD CPU core can only do 15-25 GB/s. That's pretty astounding.
Then of course the low TDP of less than 30 W lets it run long tasks with little cooling needed. The GPU isn't very powerful, though, and another current limitation is the maximum of 16 GB RAM.
What is even more interesting is that the Mx series CPUs should get much faster in the near future, if Apple maintains the speed ramp it achieved with the earlier Ax CPUs. There's lots of low-hanging fruit left for Apple to pick.