Maxat Kulmanov
@coolmaksat
I found that it happens when workflow has an output of type Directory
Peter Amstutz
@tetron
@coolmaksat do you want to post some of your CWL on here https://forum.arvados.org/
Maxat Kulmanov
@coolmaksat
Okay, thank you
Peter Amstutz
@tetron
@/all Arvados 2.2.1 is released: https://arvados.org/release-notes/2.2.1/
Jarett DeAngelis
@jdkruzr
@tetron is 2.2 compatible with Ubuntu 20.04 now?
I feel like that was in a changelog at some point
or a plan somewhere
Ward Vandewege
@cure
@jdkruzr yes - cf. the 2.2.0 release notes: https://arvados.org/release-notes/2.2.0/
Jarett DeAngelis
@jdkruzr
thanks @cure. also is it normal that previews of items in a collection work in wb1 but are not present in wb2?
images for example
Cibin S B
@cibinsb
Hi All,
Following the instructions here: https://doc.arvados.org/v2.0/install/arvados-on-kubernetes-minikube.html, I installed Arvados on minikube successfully. However, when I ran the tests/minikube.sh command:
(py36) cibin@cibins-beast-13-9380:~/EBI/arvados-k8s/tests$ ./minikube.sh 
Monday 26 July 2021 10:26:47 PM IST
Monday 26 July 2021 10:26:48 PM IST
cluster health OK
uploading requirements for CWL hasher
2021-07-26 22:26:48 arvados.arv_put[202690] INFO: Creating new cache file at /home/cibin/.cache/arvados/arv-put/349acdb1f48e6a0369aa03e95afda6c7
0M / 0M 100.0% 2021-07-26 22:26:49 arvados.arv_put[202690] INFO: 

2021-07-26 22:26:49 arvados.arv_put[202690] INFO: Collection saved as 'Saved at 2021-07-26 16:56:48 UTC by cibin@cibins-beast-13-9380'
vwxyz-4zz18-zkx2s1uzrhznn4d
uploading Arvados jobs image for CWL hasher
running CWL hasher
INFO /home/cibin/anaconda3/envs/py36/bin/cwl-runner 2.2.1, arvados-python-client 2.2.1, cwltool 3.0.20210319143721
INFO Resolved 'hasher-workflow.cwl' to 'file:///home/cibin/EBI/arvados-k8s/tests/cwl-diagnostics-hasher/hasher-workflow.cwl'
INFO hasher-workflow.cwl:1:1: Unknown hint WorkReuse
INFO Using cluster vwxyz (https://192.168.49.2/)
INFO Using collection cache size 256 MiB
INFO hasher-workflow.cwl:1:1: Unknown hint WorkReuse
INFO [container hasher-workflow.cwl] submitted container_request vwxyz-xvhdp-z35egb6cw09u3a3
INFO Monitor workflow progress at https://192.168.49.2/processes/vwxyz-xvhdp-z35egb6cw09u3a3
INFO [container hasher-workflow.cwl] vwxyz-xvhdp-z35egb6cw09u3a3 is Final
ERROR [container hasher-workflow.cwl] (vwxyz-dz642-ua97ck3433k2qvx) error log:
  ** log is empty **
ERROR Overall process status is permanentFail
INFO Final output collection None
INFO Output at https://192.168.49.2/collections/None
{}
WARNING Final process status is permanentFail
Ward Vandewege
@cure
@cibinsb I can reproduce this failure, I will have a look
Ward Vandewege
@cure

thanks @cure. also is it normal that previews of items in a collection work in wb1 but are not present in wb2?

@jdkruzr cf. https://doc.arvados.org/v2.2/install/install-keep-web.html, the note:

Whether you choose to serve collections from their own subdomain or from a single domain, it’s important to keep in mind that they should be served from the same site as Workbench for the inline previews to work.

Please check keep-web’s URL pattern guide to learn more.

https://doc.arvados.org/api/keep-web-urls.html#same-site
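For context, a sketch of what the same-site setup can look like in config.yml — the cluster ID xxxxx and the domains are hypothetical, and the exact URL patterns are described in the keep-web docs:

```yaml
# Hypothetical cluster ID and domains -- adjust to your deployment.
# Serving collections from a wildcard subdomain under the same parent
# domain as Workbench keeps them "same-site", so inline previews work.
Clusters:
  xxxxx:
    Services:
      Workbench1:
        ExternalURL: https://workbench.xxxxx.example.com
      WebDAV:
        ExternalURL: https://*.collections.xxxxx.example.com
      WebDAVDownload:
        ExternalURL: https://download.xxxxx.example.com
```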

Ward Vandewege
@cure
Can you try again @cibinsb ? Problem found and fixed.
Cibin S B
@cibinsb
Thanks @cure, it's working now
Ward Vandewege
@cure
excellent! thanks for confirming. We'll be adding some monitoring of the package repositories to make sure this (missing published packages in the stable repo) does not happen again.
Ryan Golhar
@golharam
Hi all - I'm going through the user guide and discovered a discrepancy in the steps. Specifically https://doc.arvados.org/v2.2/user/tutorials/wgs-tutorial.html, from step 3a (setting up a new project) to step 3b (working with collections). There is no transition.
EDIT: Never mind, it's there, it just wasn't clear as I was reading it.
Peter Amstutz
@tetron:matrix.org
[m]
hi @golharam
Peter Amstutz
@tetron
we'll take that as feedback to improve the tutorial text
we have a ticket of details to polish in the tutorial, I added your comment https://dev.arvados.org/issues/17846
Peter Amstutz
@tetron
@/all the Arvados user group meeting is starting soon: https://meet.google.com/eig-fvsw-xvd
Gudule JR
@GuduleJR_twitter
Hi all... Coming back to Arvados... I would like to know if the "Multi host Arvados" setup is possible on a private network, that is, installing all of the services for storage management using some local CA certificates? I need to prove to my staff that Arvados is viable before we can get public IPs....
Peter Amstutz
@tetron
yes, you can use a local CA for your certificates
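As a rough sketch of what a local CA might look like with openssl — the hostnames and validity periods here are made up; each Arvados service host then gets a certificate signed by the same local CA, and clients must trust ca.crt:

```shell
# All names below are hypothetical; adjust for your private domain.
# 1. Create a local CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=Local Arvados CA" -keyout ca.key -out ca.crt
# 2. Create a key and signing request for one service host.
openssl req -newkey rsa:4096 -nodes \
  -subj "/CN=keep.xxxxx.internal" -keyout host.key -out host.csr
# 3. Sign the host certificate with the local CA.
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out host.crt
```

Each machine that talks to the cluster then needs ca.crt in its trust store; properly trusting the CA is preferable to insecure-mode workarounds.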
crusoe
@mr-c:matrix.org
[m]
I've seen multiple references to https://gitlab.com/iidsgt/arv-helm but that returns 404 ; where did that get moved to?
Peter Amstutz
@tetron
maybe @osmanwa knows?
Gudule JR
@GuduleJR_twitter
@tetron ok, thanks! I suppose I have to use Salt to automatically transfer the certificates to each server....
Tom Schoonjans
@tschoonj

Hi all,

We are currently setting up a test Arvados cluster and I ran into some unusual behaviour regarding the default replication number: even though I have set this number to 1 in config.yml (Clusters.ClusterID.Collections.DefaultReplication), and this is confirmed in the output of http://ClusterID.our.domain.com/arvados/v1/config, it does not appear to be reflected in the Python SDK:

$ /usr/share/python3/dist/python3-arvados-python-client/bin/python
Python 3.8.10 (default, Jun  2 2021, 10:49:15) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import arvados
>>> arvados.api('v1')._rootDesc['defaultCollectionReplication']
2

Any thoughts on why this happens? Thanks in advance!!

Peter Amstutz
@tetron:matrix.org
[m]
good catch, let me see
Tom Schoonjans
@tschoonj
thanks Peter
Peter Amstutz
@tetron:matrix.org
[m]
oh, perhaps you have a cached discovery document?
Tom Schoonjans
@tschoonj
not sure what that is
I actually ran into this problem through arv-put
Peter Amstutz
@tetron:matrix.org
[m]
rm -r ~/.cache/arvados
Tom Schoonjans
@tschoonj
ok
this works!
thanks Peter!!
Peter Amstutz
@tetron:matrix.org
[m]
by the way, we have an Arvados user group meeting today in half an hour
Tom Schoonjans
@tschoonj
I know, but won't be able to make it due to childcare :-(
Peter Amstutz
@tetron
@/all The user group video chat is happening soon https://forum.arvados.org/t/arvados-user-group-video-chat/47/8
Tom Schoonjans
@tschoonj

Hello again,

We are still setting up our test Arvados infrastructure, and now have a single VM running the API server, PostgreSQL, keepstore and keepproxy. Our issue now is with keepproxy: the docs stipulate that the output of arv keep_service accessible should contain a reference to the keepproxy server. This works fine when running the command on the office network, but fails when trying it from home over VPN, as the output contains the keepstore domain name instead.

I assume that this is related to the geo settings in the nginx config?

Thanks in advance!

Peter Amstutz
@tetron
yes
it is controlled by the geo setting
is the home VPN considered to be on the same network?
Tom Schoonjans
@tschoonj
apparently not :-)
I will ask our IT department what IP range we need to add to support our VPN connections
Peter Amstutz
@tetron
if you are outside the private network, you should get keepproxy from "keep_services accessible", if you are inside the private network, you should get the keepstore servers instead. it doesn't matter which one you get as long as it is reachable
so it sounds like either the keepstore needs to be reachable from the home VPN or your geo section needs to send the home VPN to keepproxy (which needs to be reachable?)
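The mechanism Peter describes is the geo block in the nginx config in front of the API server — a sketch with made-up address ranges (substitute your own office and VPN CIDRs):

```nginx
# Hypothetical address ranges -- clients matching 0 are "internal".
geo $external_client {
  default          1;   # unknown sources: external -> advertised keepproxy
  10.20.30.0/24    0;   # office LAN: internal -> direct keepstore access
  192.168.99.0/24  0;   # add the VPN client range here to treat it as internal
}
```

The resulting variable is then passed upstream (via something like proxy_set_header X-External-Client $external_client;) so the API server can decide which keep services to advertise to that client.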
Tom Schoonjans
@tschoonj
aha
so what we are seeing here is actually ok?
Peter Amstutz
@tetron
does it work?