    Snektron
    @snektron:matrix.org
    [m]
    thanks
    Troels Henriksen
    @athas
    Although that's not strictly true. Futhark does not define an in-memory representation. It's whatever the compiler thinks will make your code run faster.
    But when you flatten or reshape arrays, the semantic ordering is row-major.
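    (A minimal sketch of that row-major view, not from the chat: flattening a small two-dimensional array concatenates its rows in order.)

    -- Hypothetical example: the flat order follows the rows.
    let rows: [2][3]i32 = [[1, 2, 3], [4, 5, 6]]
    let flat = flatten rows  -- [1, 2, 3, 4, 5, 6]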
    Snektron
    @snektron:matrix.org
    [m]
    Do I need to do something to update the tag? I've been compiling Futhark for myself, but if I just pull and run stack install, the tag doesn't seem to change, and I'm wondering if I'm doing that right.
    Troels Henriksen
    @athas
    You mean the one in the version number? The build system doesn't understand the module dependencies for that bit of metaprogramming. Do a stack clean if you want to be sure that it's right.
    Snektron
    @snektron:matrix.org
    [m]
    alright, thanks
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    Ah, one less tab
    Snektron
    @snektron:matrix.org
    [m]
    Yeah, Matrix and Gitter merged a while ago.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    I know, but my old matrix account had a joke name that felt unsuitable for use in this context. I've never been good at coming up with pseudonyms.
    Snektron
    @snektron:matrix.org
    [m]
    Do like my friend did and use an online hacker name generator. I think he still goes by shadowdestroyer or something.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    That is one way to do it. Since I use this for work I thought I'd just use my name though.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    Is there any benefit to compiling the compiler to use multiple threads? My compile times are starting to get longer. It's still very acceptable in the grand scheme of things, and I should probably remove some dead code in my libraries, which I guess would speed it up a bit. But waiting around a minute is more than I'm used to with Futhark.
    Troels Henriksen
    @athas
    Unless you specifically ask it not to, the Futhark compiler is already compiled as a parallel program (although the benefit isn't that great).
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    Oh, I missed that. Cleaning and waiting it is.
    Troels Henriksen
    @athas
    If you compile with -v, the compiler will give time deltas for various parts of the compiler pipeline, and then maybe you can figure out what is taking so long. My guess (because this is always the answer) is the aggressive inlining, which can massively blow up the program. You can put #[noinline] attributes on large functions that are called in multiple places.
    But you have to be careful, because the compiler cannot handle function calls in awkward places (specifically, inside code that it wants to put on GPUs), and it's not yet able to ignore noinline attributes that would cause errors in code generation.
    It will never result in silent failures, though.
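    (A hedged sketch of that attribute, using a made-up helper: marking it #[noinline] keeps it as a single function called from two places, instead of having its body duplicated at each call site.)

    -- Hypothetical helper; the attribute asks the compiler not to inline it.
    #[noinline]
    let expensive (xs: []f32): f32 =
      f32.sum (map (\x -> x * x + 1) xs)

    let main (a: []f32) (b: []f32): (f32, f32) =
      (expensive a, expensive b)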
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    I guess this is pretty low priority, but is there potential for more thread utilisation in the compiler, or is the process inherently very single-threaded?
    Troels Henriksen
    @athas
    Not at all, I think it could be parallelised a lot more than it is.
    One problem is that parallel Haskell isn't very effective for the kind of pointer-chasing code you find in a compiler (the GC isn't good enough), but we could probably do better than we currently are.
    But the main performance problem is the naive inlining policy. I think significant gains could be made just by making that a bit smarter.
    It will never be a fast compiler, though.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    I'd rather have fast code than a fast compiler :)
    Troels Henriksen
    @athas
    Well, that's the idea. Originally my success criterion was for Futhark to be regarded like Stalin Scheme: technically impressive, but not practically useful due to excessive compile times. We've grown more ambitious since then, though.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    This is of course only until Snektron implements Futhark in Futhark.
    Troels Henriksen
    @athas
    And algorithmically, the Futhark compiler isn't doing anything ludicrous like Stalin is. The most dubious thing we have is a general policy of writing passes that generate excessive code and then applying a heavy-duty simplifier to shrink it again. That means every single pass doesn't have to care about producing minimal code, but it does slow down compile times.
    Yes!
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    I need to read more about Stalin. What a thing to say.
    Troels Henriksen
    @athas
    It's "brutally optimizing".
    Its main trick is intense (and super expensive) control flow analysis to aggressively monomorphise and unbox values.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    "Stalin is free and open(source)"
    Troels Henriksen
    @athas
    A similar design is found in the MLton compiler, although MLton is much more practical.
    Snektron
    @snektron:matrix.org
    [m]
    I've grown to love the |> operator
    I did not realize I needed to have something like that in my life
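    (A small made-up pipeline to illustrate |>: each stage's result feeds the next stage.)

    -- Hypothetical use of |>: double, keep the large values, then sum.
    let main (xs: []i32): i32 =
      xs
      |> map (\x -> x * 2)
      |> filter (\x -> x > 10)
      |> reduce (+) 0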
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    Makes sense that a snake would like a pipe if you ask me
    Troels Henriksen
    @athas
    I took it from F#. Despite years of Haskell, I think I already like it better than Haskell's $ operator.
    Of course, in Futhark it should be called the thorn (ᚦ) operator.
    munksgaard
    @philip:matrix.munksgaard.me
    [m]
    That's why I've started using & in Haskell! It's probably not idiomatic, but it is nicer than $ imo.
    Snektron
    @snektron:matrix.org
    [m]
    I did some preliminary testing for the lexer vs. flex: flex reaches about 150-180 MB/s depending on the compiler, on my Ryzen 3700X. Running the Futhark version on an RTX 2080 Ti yielded about 5 GB/s, though I can't actually double-check since my uni broke that machine. On my RX 580 it does about 650 MB/s, and futhark c does about 80 MB/s.
    The latter can probably be explained by the large lookup table I use, which isn't very cache-friendly of course.
    Gusten Isfeldt
    @gusten:matrix.org
    [m]
    They broke the computer?
    Troels Henriksen
    @athas
    That sounds quite impressive (although I have no intuition for whether flex is considered fast). Too bad about your university messing with your machines.
    Snektron
    @snektron:matrix.org
    [m]
    I don't think it's physically broken, but my faculty has been messing with these machines ever since the start of this academic year.
    I haven't tried other flex alternatives, although the name (fast lexical analyzer generator) would lead me to believe that it should be decently competitive. It's GNU software though, so its methods may be dated by now.
    Snektron
    @snektron:matrix.org
    [m]
    This isn't even a cluster system. There are just a few machines you can use as a student, and another few you can use as staff. All fair use, although that won't stop students from running R programs that run for several months and require 700 GB of memory <grumble>
    Up until recently students could log into staff machines, so I had just been using those.
    Every once in a while a machine breaks magically, and usually it's fixed within a few hours, but not this machine, I guess.
    Snektron
    @snektron:matrix.org
    [m]
    Is it intentional that (a).b is an invalid module expression? I guess it's kind of weird, but I'm trying to import a parametric module and apply it simultaneously.

    so I have something like
    a.fut:

    module a (T: integral) = {
      let add (a: T.t) (b: T.t): T.t = T.(a + b)
    }

    and then in b.fut:

    module a_u8 = (import "a").a u8
    Troels Henriksen
    @athas
    Good question. No, I think that could be allowed.
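    (A hedged sketch of a two-step form that does work, assuming the same a.fut as above: bind the imported file to a module name first, then apply the parametric module.)

    -- b.fut, hypothetical workaround: name the import before applying it.
    module a_file = import "a"
    module a_u8 = a_file.a u8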