    Ray Donnelly
    @mingwandroid
    heh!
    but ok, that could be our "real hardware" anyway
    jayfurmanek
    @jayfurmanek
    you get format_Exec errors
    Ray Donnelly
    @mingwandroid
    and if a QEMU build can also be performed we can compare things like testsuite outputs for really big things like numpy and scipy against "real hardware".
    jayfurmanek
    @jayfurmanek
    sent out some requests for answers - will let you know what I hear back on qemu quality
    Ray Donnelly
    @mingwandroid
    I want to introduce something like dejagnu into conda/conda-build someday too so we can run baked-in package tests (at least, low-bar) or else actual test: sections in builds, so you can have something of a test matrix if you want, using different backends.
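    for reference, the "low-bar" flavour is roughly what a recipe's test: section already gives you, e.g. (an illustrative sketch only, not from any particular feedstock):

        test:
          requires:
            - pytest
          commands:
            - python -c "import numpy"
            - pytest --pyargs numpy -k "not slow"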
    jayfurmanek
    @jayfurmanek
    Sounds interesting. I've not used dejagnu myself tho, so I have no opinion on if it fits
    Ray Donnelly
    @mingwandroid
    by "something like" I don't mean literally that, of course, just the idea of a test matrix with command substitution and comms considerations built in
    jayfurmanek
    @jayfurmanek
    yeah, that'd be great
    Marius van Niekerk
    @mariusvniekerk
    @mingwandroid provided you have the correct qemu-static things in that container you can totally run x86 in it too
    ie to run it on a ppc64 machine i guess?
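    e.g. a rough sketch, assuming the multiarch/qemu-user-static image and binfmt_misc support on the ppc64le host:

        # register qemu interpreters for foreign architectures via binfmt_misc
        docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
        # after that an x86_64 image runs on the ppc64le host through qemu
        docker run --rm amd64/ubuntu:18.04 uname -m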
    Mike Wendt
    @mike-wendt

    Hi all, I've been trying to follow this guide, with mixed success, to add ppc64le support for RAPIDS deps.

    I see some active feedstocks with ppc64le support already using linux_ppc64le: azure and others using linux_ppc64le: default; which is the correct one?

    Secondly, when I run conda smithy rerender using version 3.4.7, I do not get updated configs in .ci_support/ or .azure-pipelines/azure-pipelines-linux.yml specifying the ppc64le builds like here or here.

    Appreciate the help as we work toward full ppc64le support for RAPIDS; I have a long list of libraries that need ppc64le support, and it just keeps growing.

    jayfurmanek
    @jayfurmanek
    linux_ppc64le: default is what you probably want. It runs on native hardware on Travis-ci.org
    also, if you don't have a provider listed, smithy won't generate the scripts for you
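    i.e. something like this in conda-forge.yml (a sketch; check the smithy docs for the exact keys):

        provider:
          linux_ppc64le: default

    and then conda smithy rerender should regenerate .ci_support/ plus the Travis config for that arch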
    Do you have a PR you are working on where we can take a look?
    Mike Wendt
    @mike-wendt
    One sec I'll push what I have locally as I'm trying to build and test before opening a PR
    jayfurmanek
    @jayfurmanek
    oh ok, no problem.
    Mike Wendt
    @mike-wendt
    The issue might be that I'm trying to do this on the llvmdev feedstock, because I just added linux_ppc64le: default to the arrow-cpp feedstock and it generated the Travis files
    jayfurmanek
    @jayfurmanek
    hmm. I've not looked at the llvm feedstock. Glad it worked for arrow tho
    Isuru Fernando
    @isuruf
    We can't build it on Travis as it times out, and nobody was interested in building them
    Mike Wendt
    @mike-wendt
    so far my first stab is at arrow-cpp -> clangdev=7 -> llvm,llvmdev=8.0.1 and I have ~20 more to investigate
    jayfurmanek
    @jayfurmanek
    yeah.. build times of llvm are long in my experience. And Travis-ci.org has like a 45 min limit I think
    Mike Wendt
    @mike-wendt
    ok I'll build them locally and upload to our channel for now; this is so we can start testing/support for ppc64le
    thanks for catching that, I know to look there now
    jayfurmanek
    @jayfurmanek
    sure
    Mike Wendt
    @mike-wendt
    also just so I know what best practice is... should I only be adding ppc64le or should I always pair ppc64le with aarch64?
    Isuru Fernando
    @isuruf
    Travis-CI probably gives better timeouts for paid customers, but we are on the free-tier
    jayfurmanek
    @jayfurmanek
    I'm sure they do :)
    Isuru Fernando
    @isuruf
    Pair both. If you run into trouble with aarch64, ping the conda-forge/arm-arch team
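    i.e. carry both arches in the provider block, roughly (sketch):

        provider:
          linux_aarch64: default
          linux_ppc64le: default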
    Mike Wendt
    @mike-wendt
    yeah I moved RAPIDS away from Travis-CI last year to our own CI setup with GPU testing and builds
    Isuru Fernando
    @isuruf
    If you are interested in llvmdev=7, send a PR downgrading and we can merge it to a new branch. When the PR is merged, you can build locally and upload to your channel and we can copy it to conda-forge
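    the local build + upload step is roughly (a sketch; the channel name is a placeholder):

        conda build recipe/ --output-folder ./pkgs
        anaconda upload ./pkgs/linux-ppc64le/llvmdev-7*.tar.bz2 --user your-channel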
    jayfurmanek
    @jayfurmanek
    now you're talking! I didn't know we could do that!
    Mike Wendt
    @mike-wendt
    I know I have to do that for clangdev=7; arrow-cpp=0.14.1 wants that version and only clangdev=8 has ppc64le support
    IIRC llvm=8.0.1 is needed and that's the current version, so no downgrade needed for that one
    Isuru Fernando
    @isuruf
    @jayfurmanek, :smile: I thought I mentioned this in an issue in llvmdev-feedstock
    @mike-wendt, clangdev=7 depends on llvmdev=7
    Mike Wendt
    @mike-wendt
    gotta love dep trees haha
    thx for the heads up I'll add it to my spreadsheet of the trail of ppc64le pkgs to clobber
    jayfurmanek
    @jayfurmanek
    FYI @mingwandroid some feedback on QEMU from some folks that know a bit more:
    it mostly works, lots of people use it for testing.
    but it's not officially/properly supported by any team.
    
    there are issues, but it's not too bad
    
    I'd recommend getting the latest version.  I'm still trying to push some fixes into 4.2 (not released yet)
    Mark Harfouche
    @hmaarrfk
    @jayfurmanek that one dependency on poppler seems to be all we need. It will update its dependencies, so the rest of the tree should be able to move forward.
    jayfurmanek
    @jayfurmanek
    That's great. Do we still need a branch created there?
    Mark Harfouche
    @hmaarrfk
    Ideally a branch is created, but if somebody wants to merge into master, I've commented out the difference for an easy rollback
    Konstantin Maksimov
    @knm3000

    I'm trying to create a feedstock that uses build requirement packages (in recipe meta.yaml) from https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda conda channel.
    https://github.com/knm3000/libmxnet-feedstock/blob/master/recipe/meta.yaml#L27
    Can anyone please advise if it is possible to:

    1. Set env. variable IBM_POWERAI_LICENSE_ACCEPT=yes
    2. Use the custom conda channel https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda to install meta.yaml build requirements from that channel during conda build?

    When building the recipe manually I use these steps:
    export IBM_POWERAI_LICENSE_ACCEPT=yes
    conda build . -c https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda
    However, I cannot find how to do this in the feedstock.

    jayfurmanek
    @jayfurmanek
    Are you working on a feedstock to contribute to forge? or just for your own use?
    conda-forge feedstocks generally try to be self-contained - only pulling other things from forge. If this is just for your own use, you could manually edit the build_steps.sh script and set the env var there.
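    e.g. near the top of .scripts/build_steps.sh (a sketch; the exact file and path depend on how your feedstock was rendered):

        # accept the PowerAI license non-interactively inside the build container
        export IBM_POWERAI_LICENSE_ACCEPT=yes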
    Isuru Fernando
    @isuruf
    See https://github.com/conda-forge/conda-smithy#making-a-new-feedstock on how to use a custom channel on a custom feedstock
    For the env vars, see conda-forge/conda-smithy#1162
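    the channel part ends up in conda-forge.yml, roughly like this (a sketch based on the README linked above; the upload target is a placeholder):

        channels:
          sources:
            - conda-forge
            - https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda
          targets:
            - ["your-channel", "main"]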
    Konstantin Maksimov
    @knm3000
    @jayfurmanek @isuruf Thanks for your help, will try these suggestions.