    arjunsrivatsa
    @arjunsrivatsa
    yeah that worked
    A. R. Shajii
    @arshajii
    Ok
    I think the PYTHONPATH needs to be different, hmm..
    A. R. Shajii
    @arshajii
    Can you try /home/assrivat/miniconda3/pkgs/numpy-base-1.20.3-py37h74d4b33_0/lib/python3.7/site-packages
    Maybe the numpy-base-1.20.3-py37h74d4b33_0 part will be different
    arjunsrivatsa
    @arjunsrivatsa
    right
    well, it found the package
    PyError: 
    
    IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
    
    Importing the numpy C-extensions failed. This error can happen for
    many reasons, often due to issues with your setup or how NumPy was
    installed.
    
    We have compiled some common reasons and troubleshooting tips at:
    
        https://numpy.org/devdocs/user/troubleshooting-importerror.html
    
    Please note and check the following:
    
      * The Python version is: Python3.8 from "/home/assrivat/miniconda3/bin/python3"
      * The NumPy version is: "1.21.0"
    
    and make sure that they are the versions you expect.
    Please carefully study the documentation linked above for further help.
    
    Original error was: No module named 'numpy.core._multiarray_umath'
    
    
    Raised from: pyobj.exc_check
    A. R. Shajii
    @arshajii
    Oh wait, I think it's actually /afs/csail.mit.edu/u/a/arshajii/miniconda3/lib/python3.7/site-packages
    Sry that's my one
    Replace with your home dir
    arjunsrivatsa
    @arjunsrivatsa
    That didn't work. There doesn't look to be a numpy package under that dir
    A. R. Shajii
    @arshajii
    Really? For me there is a numpy folder there
    What about conda install numpy?
    arjunsrivatsa
    @arjunsrivatsa
    I usually use pip. But I guess this wouldn't install under any environment?
    That did work though
    A. R. Shajii
    @arshajii
    Well actually you can run python shell then do import site; print(''.join(site.getsitepackages()))
    That's what I just did
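    [The one-liner above, expanded as standard Python; joining with newlines (rather than ''.join) keeps multiple entries readable when there is more than one site-packages directory:]

    ```python
    import site

    # List every site-packages directory this interpreter searches.
    # Inside a conda env this typically points into the env's own tree.
    paths = site.getsitepackages()
    print("\n".join(paths))
    ```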
    arjunsrivatsa
    @arjunsrivatsa
    How would I use a pip package installed under a conda env?
    would I have to port it under that dir
    A. R. Shajii
    @arshajii
    What happens if you run the code above in Python?
    What path do you get?
    I think you can just set PYTHONPATH to that..
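    [A minimal sketch of that step, not from the thread: query the interpreter for its site-packages path instead of hard-coding it, since the exact directory varies per machine and per env:]

    ```shell
    # Ask the active python3 for its first site-packages directory,
    # then export it so tools doing Python interop can find packages there.
    SITE_PKGS="$(python3 -c 'import site; print(site.getsitepackages()[0])')"
    export PYTHONPATH="$SITE_PKGS"
    echo "$PYTHONPATH"
    ```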
    arjunsrivatsa
    @arjunsrivatsa
    I get the same path as last time: /home/assrivat/miniconda3/lib/python3.8/site-packages
    arjunsrivatsa
    @arjunsrivatsa
    got pip to work
    I think everything is good now, thanks
    A. R. Shajii
    @arshajii
    Great!
    Mark Henderson
    @markhend
    seqc run on v0.10.1 thru v0.10.3 works just fine on any source file. seqc build works on v0.10.1 but not .2 or .3. I haven't quite been able to sort out the issue. I've tried on both Ubuntu 20.04 on WSL on Windows and Ubuntu 21.04 on Linux. Typical output:
    mhenders@DESKTOP-3B448LI:~/seq-code$ seqc build hello.seq
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `BZ2_bzBuffToBuffCompress'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `BZ2_bzBuffToBuffDecompress'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_code'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_stream_buffer_bound'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_easy_decoder_memusage'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_easy_buffer_encode'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_stream_decoder'
    /usr/bin/ld: /home/mhenders/.seq/lib/seq/libseqrt.so: undefined reference to `lzma_end'
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    error: process for 'clang' exited with status 1
    A. R. Shajii
    @arshajii
    ^ that'll be fixed in the next version
    was a static linking issue
    arjunsrivatsa
    @arjunsrivatsa
    Hey, is there a way to sample from a list over some probability distribution list. I see random.choices, but i guess that requires me to multiply through the distribution by a fixed integer. That would probably keep the sampling properties I am looking for but I'm not sure
    A. R. Shajii
    @arshajii
    Hey @arjunsrivatsa -- yes random.choices is probably the right thing to use for that. We should actually allow float weights there, but for now you can do exactly what you said by multiplying by an integer (e.g. 1000000) and then converting the entries to int. This'll be the same as if you had used floats (the only difference would be in the 6th decimal and beyond if multiplying by a million)
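    [The integer-scaling workaround above, sketched in standard Python (where random.choices already accepts float weights, so the two calls can be compared directly; the scale factor of 1_000_000 is the example from the message):]

    ```python
    import random

    random.seed(0)  # reproducible sampling for the example
    items = ["A", "C", "G", "T"]
    float_weights = [0.1, 0.2, 0.3, 0.4]

    # Scale float weights to ints; rounding error only affects weight
    # ratios beyond the 6th decimal place at this scale factor.
    int_weights = [int(w * 1_000_000) for w in float_weights]

    sample = random.choices(items, weights=int_weights, k=10)
    print(sample)
    ```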
    arjunsrivatsa
    @arjunsrivatsa
    Thanks
    Is there any way to check the memory usage at a certain point in the program? i.e. a print call to check the memory
    It looks like my memory is blowing up in the cluster when I didn't expect it to
    A. R. Shajii
    @arshajii
    Yes, the GC has some functions for getting memory usage information
    e.g. you should be able to do this:
    from C import GC_get_heap_size() -> int
    
    print(GC_get_heap_size())
    There's also GC_get_total_bytes to get total allocated bytes up to that point (i.e. that value will never decrease)
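    [For comparison, the same "print memory usage at this point" check in plain Python can be done with the standard-library tracemalloc module; this is a Python analogue, not the Seq GC API above:]

    ```python
    import tracemalloc

    tracemalloc.start()
    data = [i for i in range(100_000)]  # some allocation to measure

    # current = bytes allocated right now; peak = high-water mark
    # since start() (like GC_get_total_bytes, peak never decreases).
    current, peak = tracemalloc.get_traced_memory()
    print(current, peak)
    ```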
    Gert Hulselmans
    @ghuls
    @arshajii htslib should be built with libdeflate for much faster reading/writing of BAM/BGZF files: --with-libdeflate
    A. R. Shajii
    @arshajii
    ^ I think the main issue we were having was that we couldn't specify a custom libdeflate
    Otherwise we can just add a libdeflate dependency and statically link it
    Gert Hulselmans
    @ghuls
    statically linking is not always easy indeed
    A. R. Shajii
    @arshajii
    I can look into this again, there might be some obscure option I overlooked
    I remember htslib's libdeflate detection to be very fragile though
    Ricardo Lebrón
    @rlebron-bioinfo
    Do you have any plans to create a Seq kernel for Jupyter?
    A. R. Shajii
    @arshajii
    @rlebron-bioinfo Yes! That's one of the next things we plan to work on.
    Ricardo Lebrón
    @rlebron-bioinfo
    Cool! Many thanks!
    Gert Hulselmans
    @ghuls
    @arshajii seq-lang/seq#241 will speed up reading BAM/SAM/CRAM/VCF/BCF files a bit.