GiuseppeChillemi
@GiuseppeChillemi
We are at the end of the journey. Be positive and look at all the good things we have. This will help keep the spirit high and create the grounds for success.
LFReD
@LFReD
Good news! append blk "a" is 1.2x faster with Red over Rebol (and really fast in general)
Petr Krenzelok
@pekr
The boss using quotes to squash someone, should not be boss in the first place ...
LFReD
@LFReD

Writing to a file is faster in Red as well, by 1.04x over Rebol.

I've run about 10 of these now, and in general Red is 4x slower, which we knew, and yeah, who needs to do a: 1 100,000 times in a row. It's not a deal breaker.

These two are weird:

a: 1 is 5x slower in Red
a: 1 + 1 is 2.4x slower in Red

GiuseppeChillemi
@GiuseppeChillemi
Have you tried compiling it?
LFReD
@LFReD
I believe it's the interpreter that has the issue, so no. Give it a try.

Hey, going back to my marketing rant the other day, there's a ton of free low-hanging fruit out there to promote Red. Here's one example.

https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)

GaryMiller
@GaryMiller
I've noticed slightly better execution speeds when compiling my Red AI app. I've not done any benchmarks, but it definitely feels a bit faster. I'm looping through around 16,000 Levenshtein comparisons of sentence-length text strings against the user-entered input. I don't usually compile though, because the whole compile/link process with -r takes over 10 minutes.
hiiamboris
@hiiamboris
Perhaps we should implement some common string distance algorithms as routines? I have a use for that too
Gregg Irwin
@greggirwin
@PeterWAWood not clairvoyant, I just put a bunch of blocks in profile. :^)
Count: 10000
Time         | Time (Per)   | Memory      | Code
0:00:00      | 0:00:00      | 284         | [blk/:key]
0:00:00.002  | 0:00:00      | 440         | [e-funk "bob" "email"] ; empty func
0:00:00.012  | 0:00:00      | 1200284     | [rejoin [s1 "-" p1]]
0:00:00.424  | 0:00:00      | 1016504     | [to-word rejoin [s1 "-" p1]]
0:00:00.425  | 0:00:00      | -1646932    | [to word! rejoin [s1 "-" p1]]
0:00:00.435  | 0:00:00      | -1383324    | [dafunk "bob" "email"]
LFReD
@LFReD

Edited this Wikipedia page: https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)

The Redbol languages really stand out. (if you see any errors, let me know)

Gregg Irwin
@greggirwin
:point_up: December 12, 2019 3:04 AM The reason it's a non-issue @LFReD is because Red isn't magic. Brute force will go a long way, but it has its limits.
LFReD
@LFReD
@greggirwin If I knew it was going to be referred to, I would have renamed dafunk :)
With the memory, is that in bytes? And what's with the negative values?
Gregg Irwin
@greggirwin

Memory is simply a stats call and diff, so can vary depending on GC, etc.

Thanks for the wikipedia update. copy/part is tricky, because it's richer in Red than in other langs. I also don't know whether the 1..10 range syntax in some langs can take vars (m..n), or whether there's an alternate syntax for that.
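For anyone unfamiliar with it, copy/part takes either a count or a position, so roughly (standard console behavior):

>> s: [1 2 3 4 5]
>> copy/part s 3               ;-- by count
== [1 2 3]
>> copy/part next s 3          ;-- from an offset
== [2 3 4]
>> copy/part s back tail s     ;-- up to another position in the same series
== [1 2 3 4]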

LFReD
@LFReD

Like I said, it's not a deal breaker; with one or two exceptions, there are workarounds as well. In going through the conversation, I've found an alternative that suits my desired purpose even better.

But that was never the point. I expected Red to be 4x faster than Rebol, not the other way around.
And hopefully one day it will be. But I don't think "ah, good enough" is a good response.

Gregg Irwin
@greggirwin
I don't know why you expected that but, again, it's not magic. It gives you easy options for blazing speed in some cases, e.g. your math example could be done in R/S. If you're doing math and need it fast, you win big time there.
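A rough sketch of what that could look like (sum-to is a made-up example name; routines only run in compiled scripts, not in the interpreter):

Red []

sum-to: routine [n [integer!] return: [integer!] /local acc [integer!] i [integer!]][
    ;-- the body is Red/System, so this loop runs as native code
    acc: 0
    i: 1
    while [i <= n][
        acc: acc + i
        i: i + 1
    ]
    acc
]

print sum-to 100000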
LFReD
@LFReD
Yep, I get it. Same with compiling I would imagine.
Gregg Irwin
@greggirwin
Compiling will speed some things up, but you're still working in a very high level language. And you should, 99% of the time.
GaryMiller
@GaryMiller
@hiiamboris That would be wonderful! As you can see on RosettaCode, quite a few of the higher-level languages have built-in edit distance (Levenshtein) functions that run at compiled speeds.
hiiamboris
@hiiamboris
@GaryMiller There's also a problem of choice ;) I love the graph in the beginning of this article
GaryMiller
@GaryMiller
@hiiamboris Levenshtein still seems to be the most accurate (although slowest) for Bot applications, but I haven't found a code implementation of what the author suggested in this paragraph. "What string distance to use depends on the situation. If we want to compensate for typos then the variations of the Levenshtein distances are of good use, because those are taking into account the three or four usual types of typos. The metric could be improved f.x. by factoring the keyboard layout into the calculation. On an English keyboard the distance between “test” and “rest” would then be smaller than the difference between “test” and “best” for obvious reasons."
Also, any Levenshtein function should include a third parameter to return early if the distance calculation reaches the value of that last integer parameter. Call that MaxDistance. The reason this optimization is important: say we are searching a million strings and we find a string at a distance of only 1 from what we are comparing to. For any other string we examine, we can return early once we hit a distance of 2, because the one we found earlier with a distance of 1 is already known to be a better match. This is a very good speed optimization.
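A minimal, unoptimized sketch of that early exit in plain Red (levenshtein and max-dist are just illustrative names, not an existing routine):

levenshtein: func [
    a [string!] b [string!] max-dist [integer!]
    /local prev curr best cost del ins sub d i j ca cb
][
    prev: collect [repeat j (length? b) + 1 [keep j - 1]]   ;-- row 0: 0..length of b
    i: 1
    foreach ca a [
        curr: reduce [i]
        best: i
        j: 1
        foreach cb b [
            cost: either ca = cb [0][1]
            del: (pick curr j) + 1                           ;-- deletion
            ins: (pick prev j + 1) + 1                       ;-- insertion
            sub: (pick prev j) + cost                        ;-- substitution
            append curr d: min del min ins sub
            best: min best d
            j: j + 1
        ]
        if best > max-dist [return max-dist + 1]             ;-- bail out: can't beat the match we already have
        prev: curr
        i: i + 1
    ]
    last prev
]

e.g. levenshtein "kitten" "sitting" 5 gives 3, while any candidate that can no longer beat max-dist returns max-dist + 1 early.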
GaryMiller
@GaryMiller
Levenshtein can also run about 11 times faster if converted to code that runs on GPU https://www.researchgate.net/publication/300042590_Using_GPUs_to_Speed-Up_Levenshtein_Edit_Distance_Computation
hiiamboris
@hiiamboris

return early once we hit a distance of 2 because the one we found earlier with a distance of 1 we already know is a better match

Yes, a good idea.
I myself was using the OSA variant of Levenshtein to compare 2 file-name lists (names can easily be 100 chars), where I used another optimization: if the number of errors exceeds a given limit during the metric computation, consider it a zero match and continue with the other candidates (reducing the time required by ~2 orders of magnitude). The use case here is not looking for a best match, but for a set of acceptable matches (it turns out that in practice everything <0.7 is a false match). Still, it's pretty dumb to compare lists like this; it's just the easiest solution to implement, not the optimal one.

GiuseppeChillemi
@GiuseppeChillemi
A question and a proposal: what about organizing a RedCon in Europe? It could be done for the next big Red announcement!
LFReD
@LFReD

Red tip # 157

To speed up your code, use an expression when assigning an integer to a word.

(loop 100K)

a: 1             0.015 secs
a: 2 * 2 - 3     0.010 secs
a: 1 * 1         0.010 secs

Henrik Mikael Kristensen
@henrikmk
I'd expect the opposite result there.
LFReD
@LFReD
Unless it's my method?
timeit: func [loops] [
    t1: now/precise
    loop loops [a: 1 * 1]
    t2: now/precise
    print difference t2 t1
]
Gabriele Santilli
@giesse
>> profile/show [[a: 1] [a: 1 * 1]]
Time                 | Memory      | Code
1.0x (92ns)          | 752         | [a: 1]
2.12x (194ns)        | 168         | [a: 1 * 1]
LFReD
@LFReD
The timeit function is real time. Set it to 1M and it literally takes nearly two seconds to complete.
Is it the loop function itself that's killing it?
Gregg Irwin
@greggirwin
@LFReD first, what do you mean by "expression"? All your examples are expressions. Second, be sure to run small tight loop tests a number of times. At those timescales any system influence can change results. Third, please be careful giving advice about language use and optimization at this stage of your Red knowledge, and confirm your findings.
LFReD
@LFReD

@greggirwin I have tried 'small tight loop tests a number of times'... which brings up another issue I've found: time/precise doesn't seem to kick in until a certain amount of time has passed?

a: 1

timeit 1000      0.000 seconds
timeit 10000     0.0010009 seconds

Shouldn't that first value be 0.0001000 ?

By advice, I'm guessing you're referring to my 'tip'.
It's a joke.

'in the usual notation of arithmetic, the expression 1 + 2 × 3 is well-formed...' https://en.wikipedia.org/wiki/Expression_(mathematics)

I posted this particular observation to try to determine if this is an issue or not, and if not, why does it happen? Is it my method?

Gregg Irwin
@greggirwin

I thought we had hi-res timing on Windows since red/red#3476 was merged. If that's not the case, it's an OS limitation on Windows.

To speed up your code, use an expression when assigning an integer to a word.

Makes it sound authoritative.

On math, if you're referring to precedence of operators, Red is strictly left-to-right in this regard. If you want precedence, use parens or the math dialect.
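For example, in the console this looks roughly like:

>> 1 + 2 * 3          ;-- strictly left-to-right: (1 + 2) * 3
== 9
>> 1 + (2 * 3)        ;-- parens force the grouping you want
== 7
>> math [1 + 2 * 3]   ;-- the math dialect applies the usual precedence
== 7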

Henrik Mikael Kristensen
@henrikmk
I observed a difference in time resolution between 0.6.4 and the latest nightly build, so increased time resolution should be working.
Gabriele Santilli
@giesse
Timing stuff is not as straightforward as one might think at first glance. Use profile if you don't know better. https://gist.github.com/giesse/1232d7f71a15a3a8417ec6f091398811
Gregg Irwin
@greggirwin
:+1:
Jose Luis
@planetsizecpu

@magicmouse > "Debugging consumes 85% of all programmer time by my long-term measurements. Ignore these speed freaks. The real battle is with the new programmers who are looking for a much simpler system."

+1

GiuseppeChillemi
@GiuseppeChillemi
+1
GaryMiller
@GaryMiller
I am not suggesting that months be spent optimizing. But, as in a hospital: if a patient arrives in great pain, and triage finds that the pain can be remedied in a matter of a few hours by removing the nail in their leg that keeps them from walking, then the patient is cured, and that patient wasn't left to sit around in pain or to seek out another hospital. I think serious performance problems should be triaged and given rough time estimates to fix; if a performance problem was just an oversight by the original developer and could be fixed in a short amount of time, then that would be significant progress at a minor cost, and it would not delay new features by a significant amount.
LFReD
@LFReD

Forget all that, the real elephant in the room... a 37MB data file (a block of strings) that takes over 3 minutes to load and, once loaded, consumes nearly 400MB of system memory. According to my math, that's 10MB of memory per 1MB of data. Once it's loaded, it's absolutely blazing: find a string among 3M strings in 1ns.

The load time seems to be exponential? I tried to load an 84MB file and gave up after half an hour. It was loading at 1MB per 10 seconds.

If I 'didn't know better', I would say that's a big issue. And I know you're working on it, but this...

"it means that with an optimizing backend (Red/Pro), our lexer will be 4-8 times faster than R3's. It should then be possible to load gigabytes of Red data in memory in just a few seconds (using the future 64-bit version)"

Pro only? ETA on 64-bit version?

'gigabytes'... Is 2GB of data going to consume 20GB of memory?
Oldes Huhuman
@Oldes
@LFReD instead of writing a never-ending list of complaints, maybe you should check for yourself where the activity is. You would then have noticed that there is a fast-lexer branch, which should address exactly this issue: https://github.com/red/red/tree/fast-lexer
And yes... your way of dealing with data is not efficient, because you are using high-level values, like strings, which require more bytes than you think, and that is by design.
I mean, when you have key: "abc" it does not consume just 4 bytes like in C.
LFReD
@LFReD
@Oldes Explain to me how you would make a db that doesn't contain strings?
Oldes Huhuman
@Oldes
Sorry, I have my own work; do your own study... but usually DBs work as binary blobs with a strictly defined structure, e.g. keys are 8 chars, and so on.
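A hypothetical sketch of that fixed-width idea in Red (find-key and key-size are made-up names; 8-byte keys packed into a single binary! value, scanned without allocating a string! per key):

key-size: 8
find-key: func [blob [binary!] key [binary!] /local pos][
    pos: blob
    while [not tail? pos][
        if key = copy/part pos key-size [return index? pos]   ;-- 1-based byte offset of the match
        pos: skip pos key-size
    ]
    none
]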
LFReD
@LFReD
Yeah, binary blobs that need to be converted to and from strings so people can understand the syntax.
GaryMiller
@GaryMiller
Most strings in business apps today take more than one byte per character, due to foreign-language characters and the need for localization. Some languages give you the choice of a plain array of ASCII characters or an array of runes, which are 2-byte codes used to represent the extended character sets.