jmgpeeters
@jmgpeeters
Am I doing the idiomatic thing here? Essentially, I need to copy my .so somewhere in the conda package so that, upon installation, it ends up in site-packages. Is using SP_DIR the way to go?
Jonathan J. Helmus
@jjhelmus
Copying the .so into SP_DIR will work; typically this is done via a cmake install or make install step.
conda build works by creating an environment with all the packages and dependencies from the host section, running the build script, and then packaging up any new files that appear in the environment.
For most Python packages this is done via pip/setuptools using pip install . or the equivalent, although it sounds like your use case might be a bit different.
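A minimal build.sh sketch of the manual-copy route (mymodule.so is a placeholder name; conda-build sets SP_DIR to the host environment's site-packages directory):

# Configure and build with CMake, then copy the extension into site-packages.
# A cmake/ninja install step targeting $SP_DIR accomplishes the same thing.
mkdir -p build && cd build
cmake "$SRC_DIR" -DCMAKE_BUILD_TYPE=Release
cmake --build .
cp mymodule.so "$SP_DIR/"   # mymodule.so is a stand-in for the real extension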
jmgpeeters
@jmgpeeters
OK, cool, thanks. (I am indeed using ninja install.)
Yeah, going the setup.py way is another option; it seems quite well documented.
I'm building some other C++ shared libraries as well, though, alongside the CPython extension, so to start with, the CMake + conda build.sh route seemed more straightforward.
Brad Buran
@bburan

Having a bit of trouble with Anaconda being installed to a location with a space in the path. When I run conda-build to create my packages, it incorrectly splits the path into a list. (When I set up Windows 10, I didn't realize it would name my home folder "First Last", and I'm not in the mood to re-install to fix this folder issue.)

Rewriting env in output: {'BUILD_PREFIX': 'C:\\Users\\First '
                 'Last\\bin\\Anaconda3\\envs\\decibel-build\\conda-bld\\tdtpy_1580496454031\\_build_env',
 'PREFIX': 'C:\\Users\\First '
           'Last\\bin\\Anaconda3\\envs\\decibel-build\\conda-bld\\tdtpy_1580496454031\\_h_env',
 'SRC_DIR': 'C:\\Users\\First '
            'Last\\bin\\Anaconda3\\envs\\decibel-build\\conda-bld\\tdtpy_1580496454031\\work'}

Is there any workaround for this issue? I'm running conda-build 3.18.11.

Jonathan J. Helmus
@jjhelmus
You can use the --croot option to point at a location without spaces. conda-build will perform the build there.
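For example (a sketch; C:\cb stands in for any directory without spaces):

conda build <recipe-dir> --croot C:\cb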
Sean Yen
@seanyen

Hi all, just want to check: has anyone run into this issue when using multiple outputs?
conda/conda-build#3860

Is there any workaround (without touching the conda-build code)? I am trying to get my conda-smithy CI build working.

Jan Pöppel
@jpoeppel

Hi everyone, I have a (hopefully quick) question regarding build variants: I want a package that depends on boost to be built against different versions of boost, to increase compatibility with other packages, e.g. from conda-forge, that may still use 1.70 instead of 1.71, etc.
I followed the instructions in the docs regarding build variants, i.e. I set up a conda_build_config.yaml with:

boostcpp:
    - 1.68
    - 1.69
    - 1.70
    - 1.71

but when I try to build my package, I get an "Index out of range" exception in metadata.py line 1698 (extract_single_output_text). If I only specify 2 boost versions, however, it works. Any idea why this may be the case?

The recipe for this is:
{% set protobufversion = "3.8.0" %}

package:
        name: ipaaca
        version: "0.1.2"

source:
        - path: ../ipaacalib/cpp
          folder: ipaaca-cpp
        - path: ../ipaacalib/proto
          folder: proto
        - path: ../ipaacalib/python
          folder: ipaaca-py

build:
        number: 8


requirements:
        run:
                - ipaaca-cpp
                - ipaaca-py


outputs:
        - name: ipaaca-cpp
          script: install_cpp.sh        # [unix]
          requirements:
                  build:
                        - {{ compiler('c') }}
                        - {{ compiler('cxx') }}
                        - cmake >=3.10
                  host:
                        - libprotobuf {{ protobufversion }}
                        - mosquitto
                        - boost-cpp {{ boostcpp }}
                  run:
                        - mosquitto
                        - libprotobuf {{ protobufversion }}
                        - boost-cpp {{ boostcpp }}

        - name: ipaaca-py
          noarch: python
          script: install_python.sh
          requirements:
                  host:
                          - python
                          - pip
                          - setuptools
                          - protobuf {{ protobufversion }}
                  run:
                          - python
                          - mosquitto
                          - paho-mqtt
                          - protobuf {{ protobufversion }}
Jan Pöppel
@jpoeppel
I am using miniconda2 4.8.2 with conda-build 3.18.11
Jan Pöppel
@jpoeppel
Interestingly enough, I just found out that it appears to work with more than 2 variants if I do not use 2 outputs but only build the cpp version as its own package. It looks like something goes wrong when combining multiple outputs with multiple variants; maybe related to conda/conda-build#3860?
Jan Pöppel
@jpoeppel
Found out something new: when I do not define an implicit meta-package (i.e. I drop the run requirements defined directly in the recipe, outside the outputs), it appears to work. I followed the description in the documentation regarding "implicit metapackages" when I created the recipe, but apparently this does not play well with build variants (although I still do not understand why it would cause a problem). Should I create an issue for this?
Matthew R. Becker
@beckermr
Are test commands supposed to be run if there is a test script too?
Isuru Fernando
@isuruf
@mingwandroid, do you know the reason behind the naming of target_platform instead of host_platform? For building cross-compiler packages they are different things, but target_platform is used for the host section, right?
Ray Donnelly
@mingwandroid
Just unfortunate parallel naming of things. I would have preferred target to retain its GNU meaning, for cross compilers etc.
Frederic De Groef
@sevas

Hi, I have a question I cannot find an answer to in the docs: is there a way to copy files generated during the test section of conda build?

I'd like to keep the junit XML file and the coverage.py reports from pytest next to where the package is saved (ideally in the same location as --output-folder).

Jonathan J. Helmus
@jjhelmus
The package has already been created when the test phase is run; nothing can be added to the package at that point. You could copy files to a local folder (e.g. $HOME) in the test phase to examine the output, but they would not be part of the package. If you want the files to be part of the package, you would need to run the tests as part of the build phase.
Frederic De Groef
@sevas

Ah sorry, I don't want these files in the package; I just want them copied as build artifacts to be published alongside the package. Right now they are generated in the conda-build workspace, which I don't control.

I suppose $HOME can work when running as a CI build.

Mike Sarahan
@msarahan
No way currently, but that's a good idea for a feature request. Please file an issue on the conda build issue tracker. A PR would be most appreciated if you have time. If you need help getting started, let us know.
Frederic De Groef
@sevas
Alright, thanks for the answer. I'll have a look at putting together a PR.
Marcelo Duarte Trevisani
@marcelotrevisani

I think you can do it using the

test:
  commands:

section. Does that make sense?

Frederic De Groef
@sevas

I am missing either a variable or an outputs/artifacts section to tell conda-build to copy certain files from its build workspace to a target location (which can then be picked up by GitLab CI's artifacts section, or something similar in other CI systems).

The goal is to publish these further down the build pipeline (either publishing test results to Artifactory, generating a static Allure test report, or merging coverage reports in a downstream job).

Right now my test command looks like:
commands:
    - pytest tests/ -v --cov --junit-xml="{{ dist_dir }}/testresults/pytest_{{ python }}.xml"
I've tried several values for dist_dir from the existing env variables listed in the docs, but they all point to the workspace, which is deleted immediately after the build succeeds.
Mike Sarahan
@msarahan
Conda build has no notion of artifacts created other than the packages. It would need some way to specify additional artifacts to collect from the test phase.
Frederic De Groef
@sevas
Got it, I just wanted to check that assumption.
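For reference, a minimal sketch of the $HOME workaround mentioned above, e.g. in a run_test.sh (the artifacts subdirectory is an assumed path that the CI system is configured to collect):

# Write test reports outside the conda-build workspace so they survive cleanup.
mkdir -p "$HOME/artifacts"
pytest tests/ -v --cov --junit-xml="$HOME/artifacts/pytest_results.xml"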
Isuru Fernando
@isuruf
Can I have a review of conda/conda-build#3868 ?
Marcelo Duarte Trevisani
@marcelotrevisani

Can I have a review of conda/conda-build#3868 ?

done! thanks! sorry for the delay

Wim Van Leuven
@wimvanleuven-kbc
Hey guys, if I run conda skeleton pypi --pypi-url <our-internal-proxying-artifact-repo>, it tries to download files from files.pythonhosted.org, which fails in our secured corporate environment. I understand that the metadata of the package at hand points conda skeleton to that source.
Is there a parameter combination that would make conda skeleton work against our artifact repository? E.g. conda install correctly downloads from the internal repo.
Anthony Scopatz
@scopatz
In a build.sh script, how do I know if I am on linux-64, linux-aarch64, or linux-ppc64le?
Anthony Scopatz
@scopatz
I think you can use ${target_platform} for this
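A build.sh sketch of that (assuming target_platform is available in the build environment, as suggested above; the echo bodies are placeholders):

# Branch on the platform conda-build is targeting; values match conda subdir names.
case "${target_platform}" in
    linux-64)      echo "configuring for x86_64"  ;;
    linux-aarch64) echo "configuring for aarch64" ;;
    linux-ppc64le) echo "configuring for ppc64le" ;;
esac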
mattchan-tencent
@mattchan-tencent
Hi, I'm getting an undefined symbol on macOS for a build involving mkl-2020. The _mkl_blas_caxpy symbol is unresolved when I inspect the dylib with nm:
> nm libmkl_intel_lp64.dylib | grep _mkl_blas_caxpy
                 U _mkl_blas_caxpy
                 U _mkl_blas_caxpyi
mattchan-tencent
@mattchan-tencent
It's defined in libmkl_sequential.dylib, but I still get failed symbol resolution at runtime, even though they're both on the link line...
mattchan-tencent
@mattchan-tencent
This is the exact error reported:
dyld: Symbol not found: _mkl_blas_caxpy
  Referenced from: $PREFIX/lib/libmkl_intel_lp64.dylib
  Expected in: flat namespace
 in $PREFIX/lib/libmkl_intel_lp64.dylib
mattchan-tencent
@mattchan-tencent
Argh. I finally fixed it. The default conda LDFLAGS include -Wl,-dead_strip_dylibs, which conveniently decides to strip away all the MKL sequential, core, etc. libraries...

The hint is in the dyld debug output:

dyld: loaded: <5DDFA61A-C829-3B91-BC98-271F81B355BF> $WORKDIR/work/bin/pw.x
dyld: loaded: <803DD8AC-4799-3838-8344-55D4FC373664> $PREFIX/lib/libmkl_intel_lp64.dylib
dyld: loaded: <87036AAE-73CD-3C93-B6C8-E7C4F766FE0A> $PREFIX/lib/libgfortran.3.dylib

and when -dead_strip_dylibs is removed:

dyld: loaded: <4A8D5DBE-1988-3E5B-8C36-660876CCEB1F> $WORKDIR/work/bin/pw.x
dyld: loaded: <803DD8AC-4799-3838-8344-55D4FC373664> $PREFIX/lib/libmkl_intel_lp64.dylib
dyld: loaded: <4EDD7B70-4F4B-3A52-A420-25B282BCAC65> $PREFIX/lib/libmkl_sequential.dylib
dyld: loaded: <735589BA-904A-3D27-9264-01DB2C877ADF> $PREFIX/lib/libmkl_core.dylib
dyld: loaded: <87036AAE-73CD-3C93-B6C8-E7C4F766FE0A> $PREFIX/lib/libgfortran.3.dylib
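A build.sh sketch of the fix (an assumption that stripping the flag from LDFLAGS in the recipe is the route taken, rather than patching the compiler activation scripts):

# Drop the dead-strip flag so the linker keeps libmkl_sequential/libmkl_core,
# which libmkl_intel_lp64 needs at runtime.
export LDFLAGS="${LDFLAGS//-Wl,-dead_strip_dylibs/}"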
mattchan-tencent
@mattchan-tencent
I filed a bug report on AnacondaRecipes/intel_repack-feedstock; I don't know if that's the right location for it. AnacondaRecipes/intel_repack-feedstock#8
Anthony Scopatz
@scopatz
Is there a way with conda-build to render the meta.yaml for a given platform?
Jonathan J. Helmus
@jjhelmus
CONDA_SUBDIR is supported by the CLI:
CONDA_SUBDIR=win-64 conda render recipe
Anthony Scopatz
@scopatz
OK cool! thanks!
Marcelo Duarte Trevisani
@marcelotrevisani

Hey all,

I just released grayskull :skull:. For now it just creates recipes for packages on PyPI.
In general it generates much better recipes than conda-build skeleton (maybe in the future we can replace skeleton with grayskull :smile:).
So please give it a go; you can install it with pip or conda.

pip install grayskull

or

conda install -c conda-forge grayskull

Usage is similar to conda-build skeleton:

grayskull pypi PACKAGE_NAME

For the next major version we are planning to make it easy to load and partially update existing recipes. After that, we are going to focus on generating recipes for R.

For any problems or suggestions, please feel free to open an issue.

Repository: https://github.com/marcelotrevisani/grayskull

Anthony Scopatz
@scopatz
Congrats @marcelotrevisani!
Marcelo Duarte Trevisani
@marcelotrevisani
thanks!