Rokas Maciulaitis
@roksys
would it be possible for you to use another deployment type?
jordidem
@jordidem

Dear all, finally we managed to solve the issue using Rancher with the following workaround, extracted from the comment by gitlawr on rancher/rancher#14836. Literally, we did the following steps:

1) Edit cluster, Edit as YAML
2) Add the following flags for kubelet:

services:
  kubelet:
    extra_args:
      containerized: "true"
    extra_binds:
      - "/:/rootfs:rshared"

3) Click save and wait till the cluster is updated.

Notes:
The community is planning to deprecate the "--containerized" flag for kubelet (kubernetes/kubernetes#74148), but the flag is essential for this capability as there is no alternative at the moment.

Now our REANA deployment on Rancher works! Yihaaa!
Thanks for the help and for pointing out the link!

Diego
@diegodelemos
@jordidem nice find! Would you like to share the solution on reanahub/reana-cluster#117? I think it would be really useful for people who might be in the same situation as you were :)
Oriol
@zakeruga
Hi REANA team, I'm a co-worker of @jordidem. I have a question: can I have another extra volume, like /var/reana, that is visible to all the job pods?
The reason I need this is that I have a lot of big files (GBs) as input and I don't want to replicate these files for each user's workflow.
Rokas Maciulaitis
@roksys
Hi @zakeruga, unfortunately there is no such feature yet. The job-controller mounts only the workspace (/var/reana/users/<user_id>/workflows/<workflow_id>) for the job pods.
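Roughly, the resulting job pod mount looks like this (an illustrative Kubernetes sketch only, with made-up names and IDs, not the exact manifest the job-controller generates):

apiVersion: v1
kind: Pod
metadata:
  name: reana-job-example            # hypothetical job pod name
spec:
  containers:
    - name: job
      image: python:3.8
      # only the workflow workspace is mounted into the job container
      volumeMounts:
        - name: reana-workspace
          mountPath: /var/reana/users/1234/workflows/5678
  volumes:
    - name: reana-workspace
      hostPath:
        path: /var/reana/users/1234/workflows/5678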
Rokas Maciulaitis
@roksys
We have this feature on master, but it is not released yet - reanahub/reana-job-controller@ed125c0
Lukas
@lukasheinrich
hi all.. just wanted to check in on what the status is for workflows needing authentication?
is there something I could try out?
Tibor Šimko
@tiborsimko
Kerberos is ready and confirmed working with Kubernetes; however, we have an issue with CVMFS needing an upgrade, hence the production instance deployment is still delayed... I also owe you several replies on other threads regarding the workshop etc... It's been a busy few weeks. More news next week
Lukas
@lukasheinrich
no worries, great to hear it's working.. looking forward to trying it out
maybe we can meet next week or so
Tibor Šimko
@tiborsimko
:+1: it would be good, any day except Wed should work for me
Rok Roškar
@rokroskar
Hi REANA team - I'm reading through https://t.co/6KTnvEOA9x and it looks like some really exciting developments are mentioned in there! I'm curious how much work it is for the user to run REANA workflows on HPC with this setup? From the extra slides it looks like it's the user's job to configure the VC3 cluster which binds to a specific resource allocation on the HPC resource?
Tibor Šimko
@tiborsimko
Hi @rokroskar, VC3 will run a web site where the user clicks on an option "I want to deploy REANA" and VC3 will create a personal REANA cluster for the given user with credentials etc. Note that this use case is linked to VC3, but we also have native HPC/Slurm support in REANA. It works via ssh-to-headnode; we have run successful examples on a generic Slurm deployment and on the CERN Slurm deployment, and @roksys is finishing the last touches... So this second option may be easier for you if RENKU users have access to a Slurm cluster. Note also that we are using a Kerberos keytab secret for passwordless-ssh-rsync-like actions to exchange info and files between the Kubernetes and Slurm clusters. Dunno if a keytab is an option for you...
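For illustration, such a keytab could be stored roughly like this (a minimal Kubernetes Secret sketch with hypothetical names; REANA's actual secrets handling may differ):

apiVersion: v1
kind: Secret
metadata:
  name: reana-krb5-keytab            # hypothetical secret name
type: Opaque
data:
  user.keytab: ""                    # base64-encoded keytab contents go here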
Rok Roškar
@rokroskar
Thanks @tiborsimko for the info! yes, "native" slurm support would be much better. I suppose supporting LSF would then also be possible? They use slurm at CSCS but LSF on the ETH clusters
Tibor Šimko
@tiborsimko
Yeah, it should be possible; we have not tried LSF, but for a start it would mean a new subclass of our abstract job manager, see here: https://cds.cern.ch/record/2696223/files/CERN-IT-2019-004.pdf
Rok Roškar
@rokroskar
Great!
Mingrui Zhao
@zhaomr13
Hi guys
Where can I find the recent news of REANA?
Tibor Šimko
@tiborsimko
Hi, for general news you can always check our Twitter feed https://twitter.com/reanahub, for development news we are currently working on stabilising and releasing v0.6.0 (see project description and kanban board with detailed tasks here https://github.com/orgs/reanahub/projects), and finally for mid-term and long-term roadmap you can see what is roughly planned on the triage board (see https://github.com/orgs/reanahub/projects/7). Are you interested in anything in particular?
Mingrui Zhao
@zhaomr13
Thank you Tibor. Sorry that I have not followed your recent work. I am mostly interested in whether there are some major changes on the user side?
Tibor Šimko
@tiborsimko
For the user side, we have added the possibility to open interactive sessions (such as Jupyter notebooks) while batch workflows are running (see release news https://reana-client.readthedocs.io/en/latest/changes.html), we have added restricted resource access (e.g. Kerberos), and the possibility to run parts of workflows on different compute backends (Kubernetes, HTCondor, Slurm). There were a number of other improvements, such as running workflows partially to ease analysis development, or the forthcoming GitLab bridge so that REANA can be used in Continuous Integration mode. Just a few glimpses...
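As a rough illustration, a serial workflow step in reana.yaml can be pointed at a specific backend along these lines (a sketch only; the exact backend names such as htcondorcern or slurmcern depend on the deployment and release):

workflow:
  type: serial
  specification:
    steps:
      # runs on the default Kubernetes backend
      - environment: 'python:3.8'
        commands:
          - python prepare.py
      # runs on the HTCondor backend (backend name is an example)
      - environment: 'python:3.8'
        compute_backend: htcondorcern
        commands:
          - python analyze.py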
Lukas
@lukasheinrich
hi all .. just checking in on the status of authenticated workflows
is there something I could try?
Tibor Šimko
@tiborsimko
Hi, it'll be possible next week... This week we are all on either CERN Open Data or Invenio sprints
Lukas
@lukasheinrich
hey all, just following up on the auth stuff
i'm available to test
Diego
@diegodelemos
Hello @lukasheinrich, it is done and integrated, but we are waiting for some deployment troubles to be fixed by the cloud team; we will come back to you once there is an instance to connect to
Lukas
@lukasheinrich
thanks @diegodelemos .. have the cloud issues been resolved?
Mattias de Hollander
@mdehollander
Hi! I just found out about this cool project. Is support for other workflow systems on the roadmap? I am most interested in Snakemake. Or is there something that would block integration?
Tibor Šimko
@tiborsimko
@mdehollander Snakemake is actually something that has been on our radar for some time, e.g. it is used by some LHCb physics groups as well. There are two possible ways: (1) running Snakemake workflows via CWL; (2) running Snakemake workflows directly. Ad (1), we have tested Snakemake -> CWL export -> REANA in the past, with mixed results; direct support would probably be more advantageous, but we can retest that with the latest Snakemake/CWL versions. Ad (2), we have no immediate plans to develop reana-workflow-engine-snakemake in the near future, but if there is momentum and some people willing to contribute, we can perhaps revive this?!
Lukas
@lukasheinrich
hi all, what's the status on the authenticated workflows?
Diego
@diegodelemos

Hello @lukasheinrich! Good news: everything is working, also EOS support (with which we were having some troubles as well). We will have everything deployed soon on the REANA DEV instance

Lukas
@lukasheinrich
great.. it would be really good if we could test this before the break
Mattias de Hollander
@mdehollander

@tiborsimko Thanks for your answer. I think direct support for Snakemake is indeed better, because I see that the CWL export has quite some limitations: https://snakemake.readthedocs.io/en/stable/executing/interoperability.html#cwl-export I could see if there is interest in the Snakemake community; I use Snakemake quite often myself. But first I need to see if REANA would be of added value for us. From the website/paper it is, but hands-on experience always works better. I could set up a local installation, but if there is a demo login I am happy to hear about it. For our end-users, the REANA web interface would also be a good feature. Is there already a place where I can see how the UI will look?

Tibor Šimko
@tiborsimko
  • The local installation is possible relatively easily, using kubectl, minikube and helm as the prerequisites. We have a make-based system that can bring up a development version of REANA on a laptop in an automated manner.
  • We don't have any publicly accessible REANA cluster that you could use, although we have been musing about setting one up; it would require some CPU accounting limits etc which we did not get to doing yet.
Tibor Šimko
@tiborsimko
  • The web interface is something that we are actively developing these months, so while there is a reana-ui component where you could see some mock-ups, it is better to wait for a more complete "live" version coming in February. We have user logins connected to GitLab for easy Continuous Integration use cases, and we are working on workflow list and workflow run detail visualisations. The important design principle is doing the heavy lifting as React components, so that the UI could eventually be plugged into JupyterLab notebooks and one could easily launch workflows from within notebooks and monitor their progress, for example.
Lukas
@lukasheinrich
hi all, just checking in again (sorry for being annoying :) ). I should be online until the end of the week, so if testing is possible at any point, I'm happy to do it
Tibor Šimko
@tiborsimko
@lukasheinrich We had a last-minute surprise with cvmfs/influxdb/fluentd killing the cluster (INC2249437), but as long as nobody asks for CVMFS runtime resources, the cluster is ready for testing; more via email
Lukas
@lukasheinrich
awesome! I don't think I need cvmfs for now
Michael R. Crusoe
@mr-c
Hey all, congratulations on the latest release. The hybrid scheduling feature looks really cool! Are there plans to support hybrid scheduling with CWL?
Tibor Šimko
@tiborsimko
Hi @mr-c, yes, hybrid scheduling is fully supported with CWL already, via the REANA-specific compute_backend hint (using Kubernetes by default)
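For example, a CWL step could carry the hint roughly like this (a sketch of the compute_backend hint mentioned above; the exact hint structure and backend names may vary per release):

# example-step.cwl (illustrative)
cwlVersion: v1.0
class: CommandLineTool
baseCommand: echo
inputs: []
outputs: []
hints:
  reana:
    compute_backend: htcondorcern    # backend name is an example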
Michael R. Crusoe
@mr-c
@tiborsimko great news! I guess I misread the release notes
Francois Lanusse
@EiffL
Hi everyone, just saw the release notes of the latest version and couldn't be more excited to see the support for the SLURM backend. I'm part of the LSST Dark Energy Science Collaboration (https://lsstdesc.org) and would absolutely love to be able to deploy our analysis pipelines using REANA!
Our main computing facility is the NERSC Cori machine at Berkeley National Lab (which runs SLURM for compute nodes, Shifter for containers, and a separate k8s service called Spin). Has anyone already successfully set up REANA there? And if not, any recommendations for how to go about it?
Tibor Šimko
@tiborsimko
@EiffL Sounds interesting! You may want to get in touch with Kenyi Hurtado and/or Cody Kankel from Notre Dame University, who are actually developing REANA workflows for NERSC/Shifter (via VC3 integration). See some details in Kenyi's CHEP 2019 presentation here
Francois Lanusse
@EiffL
Fantastic! will contact them, thanks Tibor.
Lukas
@lukasheinrich
hi all.. for the workshop I'm trying to run some workflows
is it a good time now or should I try later.. I seem to hit some issues with reana.cern.ch
Tibor Šimko
@tiborsimko
Yeah, now is good. If you hit some issues just ping us via MM