    Michael Merrill
    @mhmerrill
    sure, and you need semantics at the top to tie them together
    yuck
    I've written my share of all of those libraries
    you forgot CUDA ;-)
    Louis Jenkins
    @LouisJenkinsCS

    And they should all work together in a way; i.e., communication libraries can have specific LLVM optimizations and use the official tasking standard to manage their own communication threads, if they decide to.

    Yeah that's a good point, CUDA as well.

    CUDA can still go on thriving, providing low-level optimizations specific to NVIDIA GPUs, but you'd need the tasking layer to be the abstraction layer (or two) above it
    Michael Merrill
    @mhmerrill
    I think Brad went to lunch ;-)
    Brad Chamberlain
    @bradcray
    I just didn’t want to interrupt the conversation and was planning on chiming in when things died down a bit. :)
    Michael Merrill
    @mhmerrill
    did we clear things up or muddy things up?
    Brad Chamberlain
    @bradcray
    I understand the original question better now. :D
    Brad Chamberlain
    @bradcray
    To Louis’s original question (substituting “[a] runtime that understands task dependencies” for “[a] dynamic task-based runtime”), I think the short answer is that (a) we didn’t want to force users to specify dependences for every task they created and (b) we were / are sufficiently skeptical about a compiler's ability to determine such dependences automatically without being unduly conservative.
    Louis Jenkins
    @LouisJenkinsCS

    we didn’t want to force users to specify dependences for every task they created

    If the user doesn't want to specify dependencies, then it is no different from a normal static scheduler, right? I.e., the task graph would consist of tasks without any dependences, and they could be executed in whatever order is desired, similar to how it is done now.

    However, it adds the ability to specify dependencies for cases where it matters, right?

    we were / are sufficiently skeptical about a compiler's ability to determine such dependences automatically without being unduly conservative.

    That's fine too, but I believe it could have been an invaluable piece of infrastructure to handle the rare cases that were found to be parallelizable.
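A hedged sketch, using only today's Chapel, of the two situations Louis describes: independent tasks with no dependences (any execution order is fine), and a task that explicitly depends on another's result, expressed here with a sync variable. The stepA()/stepB() procs are made up for illustration.

```chapel
// Independent tasks vs. an explicit dependence, sketched with today's
// Chapel features (sync statement, begin, and a sync variable).
proc stepA(): int { return 10; }   // stand-in work
proc stepB(): int { return 32; }   // stand-in work

// No dependences: the two tasks may run in either order.
sync {
  begin writeln("A says ", stepA());
  begin writeln("B says ", stepB());
}

// A dependence: the consumer blocks on aResult$ until the producer writes it.
var aResult$: sync int;
begin aResult$ = stepA();                      // producer task
begin writeln("A+B = ", aResult$ + stepB());   // consumer waits for A's result
```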

    Michael Merrill
    @mhmerrill
    you are overlooking a whole class of algorithms which have divide-and-conquer control parallelism, I think
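As a rough illustration of that class, here is a minimal Chapel sketch (not from the chat) of divide-and-conquer control parallelism: the two recursive calls run as concurrent tasks via cobegin, with an arbitrary serial cutoff.

```chapel
// Divide-and-conquer control parallelism: each recursive call above the
// cutoff becomes its own task; cobegin waits for both before returning.
proc fib(n: int): int {
  if n < 2 then return n;
  if n < 20 then return fib(n-1) + fib(n-2);   // serial below the cutoff
  var a, b: int;
  cobegin with (ref a, ref b) {
    a = fib(n-1);
    b = fib(n-2);
  }
  return a + b;
}

writeln(fib(30));
```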
    Brad Chamberlain
    @bradcray
    (continuing before reading Louis's most recent responses): The original plan for expressing such task dependence graphs in Chapel (coming from the Tera MTA / Cray XMT side of the house) was to lean on single variables to express such dependencies. There was also some compelling work by Vivek Sarkar's group at Rice quite a while back about integrating futures into the language as more of a first-class citizen that I was initially skeptical about, but liked more over time. A similar proposal was to support "begin expressions" rather than just begin statements as we currently do. The current Futures module contributed by Nick Park was an attempt to express similar patterns without any language changes. I think all of these options are still on the table, but we haven't had sufficiently compelling use cases to invest further in any of them.
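For reference, a hedged sketch of the Futures-module pattern mentioned above, assuming the module's async()/get() interface; the expensive() proc is a made-up stand-in.

```chapel
// async() runs a function in its own task and returns a Future;
// get() blocks until the result is available.
use Futures;

proc expensive(x: int): int {
  return x * x;                    // stand-in for real work
}

const f = async(expensive, 21);    // spawn the computation
// ...other work could overlap here...
writeln(f.get());                  // blocks until the value is ready
```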
    Michael Merrill
    @mhmerrill
    that is what I would lobby for…^^^
    some interesting data structures have nice "sub-tree" or "sub-part" parallelism which could benefit
    but we just concentrate on arrays
    as our primary abstraction
    Michael Merrill
    @mhmerrill
    anybody ever tried to implement a trie in Chapel?
    Louis Jenkins
    @LouisJenkinsCS

    was to lean on single variables to express such dependencies.

    Was it ever planned to have the single be injected implicitly? I've never seen single used in practice today, either. I do like the option of having native first-class support for futures, though I have a feeling this won't happen in the near future (no pun intended)

    (gotta go to class)
    Brad Chamberlain
    @bradcray
    I don’t mean to imply that there aren’t interesting use cases, just that we arguably haven’t had anyone breathing down our necks about them enough. I’m not aware of anyone trying to build a trie in Chapel.
    @LouisJenkinsCS: The "begin expression" concept would've injected singles implicitly (and uses of that pattern would've likely been good motivation to work more on optimizing it). E.g., `var total = left.total() + begin right.total();` could be thought of as being converted into `var tmp$: single; begin tmp$ = right.total(); var total = left.total() + tmp$;`
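Laid out as a runnable sketch, with hypothetical leftTotal()/rightTotal() procs standing in for left.total()/right.total(), and an element type added to the single (which today's Chapel requires):

```chapel
proc leftTotal(): int  { return 1; }   // stand-in work
proc rightTotal(): int { return 2; }   // stand-in work

// The proposed form would be:  var total = leftTotal() + begin rightTotal();
// Its rough translation using today's features:
var tmp$: single int;                  // full/empty synchronization variable
begin tmp$ = rightTotal();             // compute the right operand in a new task
var total = leftTotal() + tmp$;        // reading tmp$ blocks until it is written
writeln(total);
```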
    Michael Merrill
    @mhmerrill
    one of the fastest string sorts, Burstsort, is implemented using a trie
    used in genome data sets, I think
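For anyone curious, a minimal, unofficial trie sketch in Chapel, assuming lowercase ASCII keys; each child slot roots an independent subtree, which is where the sub-tree parallelism mentioned above could apply.

```chapel
// A bare-bones trie: one child slot per letter, insert only.
class TrieNode {
  var children: [0..25] owned TrieNode?;   // one slot per letter 'a'..'z'
  var isWord: bool;
}

proc insert(root: borrowed TrieNode, word: string) {
  var cur = root;
  for b in word.bytes() {
    const i = b:int - 97;                  // 97 == byte value of 'a'
    if cur.children[i] == nil then
      cur.children[i] = new TrieNode();
    cur = cur.children[i]!;
  }
  cur.isWord = true;
}

var root = new TrieNode();
insert(root.borrow(), "chapel");
insert(root.borrow(), "chpl");
writeln("built a trie with 2 words");
```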
    Greg Titus
    @gbtitus
    @LouisJenkinsCS only slightly snarky, but ... if US DOD + one or more big corporations were enough to force use of a programming model and/or language, we'd all be writing in Ada.
    Louis Jenkins
    @LouisJenkinsCS
    Well, so long as the big corporations aren't dictating it, it should go well enough. Perhaps "force" is too strong; more like "suggest" a standard and back it.
    Greg Titus
    @gbtitus
    Back then it was DOD that was backing and mandating Ada, and private industry that didn't want to go along.
    Louis Jenkins
    @LouisJenkinsCS
    Right, but if the present-day U.S. Government plus big corporations want a successful product, then I think an open discussion of "What should a unified interface for X look like?" is the right starting point. Given we're talking HPC, and since the vast majority of HPC in the U.S. appears to be at DoE national labs, and since each national lab seems to have its own runtime environment, I think this has a much larger chance of success.
    Plus, this isn't a language, nor an implementation of X; it's just the specification of X we're talking about here
    Louis Jenkins
    @LouisJenkinsCS
    Ah, there's a buzzword people like: "co-design"
    Louis Jenkins
    @LouisJenkinsCS
    It's so crisp and clean; something like this for Task Graphs would be great
    Rahul Ghangas
    @rahulghangas
    @bradcray thanks for the link to the Arkouda project.
    Manthan Gupta
    @Manthan109
    Hello everyone!
    My name is Manthan Gupta, a 2nd-year B.Tech CSE student from Bennett University,
    looking forward to contributing for GSoC 2020.
    Is there a GitHub link where I can get started?
    Rahul Ghangas
    @rahulghangas
    The documentation/tutorials at https://chapel-lang.org/ are a good place to learn the ins and outs of the language. Then have a look at https://chapel-lang.org/contributing.html
    Brad Chamberlain
    @bradcray
    It could also be useful to look at the GSoC 2019 pages: https://chapel-lang.org/gsoc/index.html
    Manthan Gupta
    @Manthan109
    Thanks @rahulghangas
    Rahul Ghangas
    @rahulghangas
    Has anyone been successful in using lli to run the bitcode files generated with --savec along with --llvm?
    Brad Chamberlain
    @bradcray
    @rahulghangas: That sounds like a question for @mppf or possibly @ronawho. I'm personally not familiar with using --savec and --llvm together.
    Elliot Ronaghan
    @ronawho
    Definitely not a question for me :)
    Michael Ferguson
    @mppf
    @rahulghangas - I sometimes use the bitcode files generated with --llvm --savec=tmp but I haven't ever tried to run them with lli. I would imagine that to run the .bc with lli, it would also need some symbols we normally link with, and I'm not sure how that would work (does lli have a facility to work with compiled C libraries in .a files?).
    Brad Chamberlain
    @bradcray
    I meant to mention earlier that @milthorpe and his students have been doing some exploration of targeting GPUs from Chapel and may have experience to share.
    Rahul Ghangas
    @rahulghangas
    Ahahaha, I am the student
    @mppf thanks for your reply. I’ll have to look if I can do that with lli.
    Brad Chamberlain
    @bradcray
    Oh, great, nice to meet you! (I wondered when I typed that, but either didn’t recognize your name, or Josh never mentioned it to me). Keep asking questions and helping us help you make progress!
    Rahul Ghangas
    @rahulghangas
    Nice to meet you too. I will keep posting any queries I have and updates on the project.
    mohitg55555
    @mohitg55555
    Hello everyone, my name is Mohit Pandey, a 2nd-year B.Tech student. I am looking forward to contributing to this organization in GSoC 2020. Where should I start?