@greggirwin I have read my last year's notes and your replies again. You are right, many things have to be considered, but I now think parens are not a good solution. The current behavior must be maintained; we should just make a path segment skip when it uses some symbol, and keep the current mechanism. Let's make the hypothesis that we choose >> or > as the skip symbol.
This skips 2 positions to the right:
word/>>2/word2
This skips 4 positions to the left:
word/>>-4/word2
As before, but gets the number from :value:
word/>>:value/word
As before, but gets the number from the result of evaluating (code):
word/>>(code)/word
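For comparison, a rough sketch of what those segments would map to with today's skip and select outside of paths (blk and value are names I made up, not part of the proposal):
blk: [word1 a b word2 10]
value: 2
select skip blk 2 'word2          ;-- 10, like word/>>2/word2
select skip blk value 'word2      ;-- 10, like word/>>:value/word2
select skip blk (1 + 1) 'word2    ;-- 10, like word/>>(code)/word2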
But we can go further. Paths navigate through series, and we have another great Red tool to navigate through them: PARSE! Why don't we use it? Please take a look at this path:
word/<:[skip 2 thru 'section-start <:]/word
If a path segment has the form <:[...], the current path series position is passed to PARSE and the block is used as its rules argument. While parsing, each time a <: is encountered, the path's current series position is changed to the parse position. If parse fails, the path fails and returns none.
(Conceptually this last solution seems so easy and straightforward)
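As a point of reference, the same navigation can already be written today outside of paths by capturing a position in PARSE; a hedged sketch, with blk, pos and section-start being names I made up (note that parse wants 2 skip rather than skip 2):
blk: [a b section-start id 33]
pos: none
parse blk [2 skip thru 'section-start pos: to end]
select pos 'id    ;-- 33, the position captured at pos: plays the role of <: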
I like both ideas, but the second one is even more powerful and could be implemented by reusing 99% of the code we already have in Red.
[Id <object> "section2" id 33]
Structures with repetitive parts and similar names: skip segments let you express the elements after object in one path selection, without using blocks and while reusing the same words. word/>>3/id returns 33. Excessive nesting is sometimes also a problem, and not all structures are better when nested; sometimes they are better when expressed linearly...
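For reference, that selection can be reproduced today with skip and select (blk is just an illustrative name):
blk: [Id <object> "section2" id 33]
select skip blk 3 'id    ;-- 33, the value word/>>3/id is meant to return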
(<< is really >>.) The last usage I forgot to write down is the SKIP to a value or datatype. Let's take the first example and extend it:
[Id word word <object> data data id 33]
The first part, as before, ends at object. With TO-style shifting you can have variable data lengths in the first data segment and in the second, just with word/>>object!/>>id. With this syntax a full horizontal navigation is possible in just one line: it is essentially a find expressed in one efficient and compact path.
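That skip-to-a-datatype reads a lot like today's find; a hedged sketch with an illustrative blk (note that <object> in the example is a tag! value):
blk: [Id word word <object> data data id 33]
select find blk tag! 'id    ;-- 33, roughly what word/>>object!/>>id is meant to return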
There is also the struct! datatype, which would be more memory friendly than block! and would provide named accessors to all values.
Compare [1 2 3 4 5 6 7 8 9] vs [[1 2 3] [4 5 6] [7 8 9]]. Nesting, and keys vs positional values, add overhead, but also information. There is no single solution that is best in every case, so it will be interesting to see what comes of your work.
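A tiny illustration of that trade-off, same data in two shapes:
flat:   [1 2 3 4 5 6 7 8 9]
nested: [[1 2 3] [4 5 6] [7 8 9]]
pick flat 5    ;-- 5, purely positional
nested/2/2     ;-- 5, the nesting encodes the row/column structure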
Everyone I hear from who is using the M1 is a bit shocked at how fast it is, even on a Mac with only 8 GB of RAM. The key to the design is very wide memory buses and, of course, having all the memory right on the main chip. Also, each CPU core can saturate the memory controller, which can handle around 60-70 GB/s; an Intel/AMD CPU core can only do 15-25 GB/s. That's pretty astounding.
Then of course the low TDP of less than 30 W lets it run long tasks with little cooling needed. The GPU isn't very powerful, though, and it is presently limited to a maximum of 16 GB of RAM.
What is even more interesting is that the Mx-series CPUs should get much faster in the near future, if Apple maintains the speed ramp of its earlier Ax CPUs. There's lots of low-hanging fruit left for Apple to pick.
We are in the plateau region of the development curve, where almost nothing changes. If you put more than 10 cores in a chip you don't get much useful work out of it, so we are flattening out everywhere in the computer biz.
We still have a large margin of improvement in software. Algorithms are tuned for single-processor tasks; once they move to multi-core operation, we will see a lot of improvements again.