Jarett DeAngelis
@jdkruzr
right, which is different for things inside and outside the cluster
or it should be
Ward Vandewege
@cure
it can be yeah, hence split dns is useful
(or /etc/hosts entries on certain machines if you must)
jdkruzr @jdkruzr rubs head
Jarett DeAngelis
@jdkruzr
why do split DNS instead of just telling each component "go to this address to access this node, and externally we will use this URL to get to it"
Ward Vandewege
@cure
yeah, I agree
Jarett DeAngelis
@jdkruzr
like it seems to me there should be
ListenURL
InternalURL
ExternalURL
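For context, this is roughly the shape the Services config already has, pairing per-service internal URLs with one external URL; a minimal sketch, with the cluster ID and addresses below made up:

  Clusters:
    xxxxx:
      Services:
        Controller:
          InternalURLs:
            "http://10.0.0.2:8003": {}   # address used inside the cluster
          ExternalURL: "https://arvados.example.com/"   # address clients use from outside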
Ward Vandewege
@cure
uhuh
Jarett DeAngelis
@jdkruzr
I don't know how my last cluster worked at all, lol
Ward Vandewege
@cure
in practice you can work around this with a few strategic /etc/hosts entries (or split dns) on e.g. the wb machine
basically, you want to have an /etc/hosts entry on the wb machine for whatever the controller ExternalURL is, if you want that to resolve differently from inside
that should solve most (all?) of your problems
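Concretely, assuming the controller's ExternalURL hostname is arvados.example.com and its internal address is 192.168.1.10 (both made up), the entry on the wb machine would look like:

  # /etc/hosts on the wb machine (hypothetical name and address)
  192.168.1.10   arvados.example.com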
Jarett DeAngelis
@jdkruzr
okay
so I could make every InternalURL 0.0.0.0 theoretically and it could work
(did I misunderstand that)
this is where I don't understand how this is supposed to work
  Keepstore:
    # No ExternalURL because they are only accessed by the internal subnet.
    InternalURLs:
      "{{ keep_internal_url1 }}": {}
will the keepstore only listen to itself if I put http://MY_KEEP_IP:25175 or whatever it is there?
if so how does everything else know where to contact it?
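For reference, a filled-in version of that stanza with a made-up address (25107 is keepstore's usual default port); since every node reads the same cluster config file, this same entry is how the other services find the keepstore:

  Keepstore:
    InternalURLs:
      # keepstore binds to this address, and other services on the
      # private subnet use the identical URL to contact it
      "http://10.0.0.7:25107": {}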
Ward Vandewege
@cure
(dinner biab)
Ward Vandewege
@cure
keepstores by definition you only want accessible inside your cluster so... don't change their internalURL?
btw the alternative to changing the internalURL for your controller is to update your nginx config so that it talks to that 192.168... address instead of 127.0.0.1
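That nginx change is a one-line edit to the upstream block, along these lines (addresses hypothetical):

  # nginx config fragment for the controller vhost
  upstream controller {
    server 192.168.1.10:8003;   # instead of 127.0.0.1:8003
  }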
Ward Vandewege
@cure
@jdkruzr does that help?
Jarett DeAngelis
@jdkruzr

@jdkruzr does that help?

it did, thank you

next question: I'm trying to set up git. 1) is it possible to configure Arvados to just use my dang Gitlab instance? 2) if I must use gitolite, what is this intended to do?
git@gitserver:~$ ssh -o stricthostkeychecking=no localhost cat .ssh/id_rsa.pub
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7aBIDAAgMQN16Pg6eHmvc+D+6TljwCGr4YGUBphSdVb25UyBCeAEgzqRiqy0IjQR2BLtSirXr+1SJAcQfBgI/jwR7FG+YIzJ4ND9JFEfcpq20FvWnMMQ6XD3y3xrZ1/h/RdBNwy4QCqjiXuxDpDB7VNP9/oeAzoATPZGhqjPfNS+RRVEQpC6BzZdsR+S838E53URguBOf9yrPwdHvosZn7VC0akeWQerHqaBIpSfDMtaM4+9s1Gdsz0iP85rtj/6U/K/XOuv2CZsuVZZ52nu3soHnEX2nx2IaXMS3L8Z+lfOXB2T6EaJgXF7Z9ME5K1tx9TSNTRcYCiKztXLNLSbp git@gitserver
git@gitserver:~$ rm .ssh/authorized_keys
when I try to do that first line after I become_user git with Ansible, I get this: FATAL: unknown git/gitolite command: 'cat .ssh/id_rsa.pub'
in fact it also happens on the command line without Ansible:
git@arvados-api:/home/jtd$ ssh -o stricthostkeychecking=no localhost cat .ssh/id_rsa.pub
FATAL: unknown git/gitolite command: 'cat .ssh/id_rsa.pub'
why is it trying to execute gitolite? did gitolite become its shell at some point?
(I just checked, the answer is no)
and why do we want to delete .ssh/authorized_keys after all that work?
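(For what it's worth: gitolite does not replace the login shell; its setup writes a forced command into ~/.ssh/authorized_keys, so any SSH login with that key runs gitolite-shell regardless of the command requested, presumably also why the instructions delete the bootstrap authorized_keys once gitolite manages that file itself. An illustrative entry, with the path and key abbreviated:)

  # ~/.ssh/authorized_keys line as written by gitolite (illustrative)
  # command="..." forces gitolite-shell for this key, producing the
  # "unknown git/gitolite command" error seen above
  command="/usr/share/gitolite3/gitolite-shell git",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... git@gitserver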
Peter Amstutz
@tetron
@/all the Arvados biweekly user group meeting will be starting shortly https://forum.arvados.org/t/arvados-user-group-video-chat/47
Maxat Kulmanov
@coolmaksat
Hi,
I'm getting this error after upgrading to the latest version:
2021-07-13T07:58:13.785717454Z ERROR Unhandled error:
2021-07-13T07:58:13.785717454Z 'NoneType' object has no attribute 'target'
2021-07-13T07:58:13.785717454Z Traceback (most recent call last):
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/cwltool/main.py", line 1205, in main
2021-07-13T07:58:13.785717454Z     tool, initialized_job_order_object, runtimeContext, logger=_logger
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/arvados_cwl/executor.py", line 775, in arv_executor
2021-07-13T07:58:13.785717454Z     self.final_output, self.final_output_collection = self.make_output_collection(self.output_name, storage_classes, self.output_tags, self.final_output)
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/arvados_cwl/executor.py", line 467, in make_output_collection
2021-07-13T07:58:13.785717454Z     adjustFileObjs(outputObj, rewrite)
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/cwltool/utils.py", line 353, in adjustFileObjs
2021-07-13T07:58:13.785717454Z     visit_class(rec, ("File",), op)
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/cwltool/utils.py", line 295, in visit_class
2021-07-13T07:58:13.785717454Z     visit_class(rec[d], cls, op)
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/cwltool/utils.py", line 293, in visit_class
2021-07-13T07:58:13.785717454Z     op(rec)
2021-07-13T07:58:13.785717454Z   File "/usr/share/python3/dist/python3-arvados-cwl-runner/lib/python3.7/site-packages/arvados_cwl/executor.py", line 461, in rewrite
2021-07-13T07:58:13.785717454Z     fileobj["location"] = generatemapper.mapper(fileobj["location"]).target
2021-07-13T07:58:13.785717454Z AttributeError: 'NoneType' object has no attribute 'target'
All steps in the workflow finished successfully
and it ran fine with version 2.1
Maxat Kulmanov
@coolmaksat
I found that it happens when workflow has an output of type Directory
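A minimal sketch of that situation, i.e. a CWL workflow whose outputs section returns a Directory (names invented):

  # tail of a hypothetical workflow that triggers the error above
  outputs:
    results:
      type: Directory
      outputSource: last_step/outdir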
Peter Amstutz
@tetron
@coolmaksat do you want to post some of your CWL on here https://forum.arvados.org/
Maxat Kulmanov
@coolmaksat
Okay, thank you
Peter Amstutz
@tetron
@/all Arvados 2.2.1 is released: https://arvados.org/release-notes/2.2.1/
Jarett DeAngelis
@jdkruzr
@tetron is 2.2 compatible with Ubuntu 20.04 now?
I feel like that was in a changelog at some point
or a plan somewhere
Ward Vandewege
@cure
@jdkruzr yes - cf. the 2.2.0 release notes: https://arvados.org/release-notes/2.2.0/
Jarett DeAngelis
@jdkruzr
thanks @cure. also is it normal that previews of items in a collection work in wb1 but are not present in wb2?
images for example
Cibin S B
@cibinsb
Hi All,
Following the instructions here: https://doc.arvados.org/v2.0/install/arvados-on-kubernetes-minikube.html, I installed Arvados on minikube successfully; however, when I ran the tests/minikube.sh command:
(py36) cibin@cibins-beast-13-9380:~/EBI/arvados-k8s/tests$ ./minikube.sh 
Monday 26 July 2021 10:26:47 PM IST
Monday 26 July 2021 10:26:48 PM IST
cluster health OK
uploading requirements for CWL hasher
2021-07-26 22:26:48 arvados.arv_put[202690] INFO: Creating new cache file at /home/cibin/.cache/arvados/arv-put/349acdb1f48e6a0369aa03e95afda6c7
0M / 0M 100.0% 2021-07-26 22:26:49 arvados.arv_put[202690] INFO: 

2021-07-26 22:26:49 arvados.arv_put[202690] INFO: Collection saved as 'Saved at 2021-07-26 16:56:48 UTC by cibin@cibins-beast-13-9380'
vwxyz-4zz18-zkx2s1uzrhznn4d
uploading Arvados jobs image for CWL hasher
running CWL hasher
INFO /home/cibin/anaconda3/envs/py36/bin/cwl-runner 2.2.1, arvados-python-client 2.2.1, cwltool 3.0.20210319143721
INFO Resolved 'hasher-workflow.cwl' to 'file:///home/cibin/EBI/arvados-k8s/tests/cwl-diagnostics-hasher/hasher-workflow.cwl'
INFO hasher-workflow.cwl:1:1: Unknown hint WorkReuse
INFO Using cluster vwxyz (https://192.168.49.2/)
INFO Using collection cache size 256 MiB
INFO hasher-workflow.cwl:1:1: Unknown hint WorkReuse
INFO [container hasher-workflow.cwl] submitted container_request vwxyz-xvhdp-z35egb6cw09u3a3
INFO Monitor workflow progress at https://192.168.49.2/processes/vwxyz-xvhdp-z35egb6cw09u3a3
INFO [container hasher-workflow.cwl] vwxyz-xvhdp-z35egb6cw09u3a3 is Final
ERROR [container hasher-workflow.cwl] (vwxyz-dz642-ua97ck3433k2qvx) error log:
  ** log is empty **
ERROR Overall process status is permanentFail
INFO Final output collection None
INFO Output at https://192.168.49.2/collections/None
{}
WARNING Final process status is permanentFail
Ward Vandewege
@cure
@cibinsb I can reproduce this failure, I will have a look
Ward Vandewege
@cure

thanks @cure. also is it normal that previews of items in a collection work in wb1 but are not present in wb2?

@jdkruzr cf. https://doc.arvados.org/v2.2/install/install-keep-web.html, the note:

Whether you choose to serve collections from their own subdomain or from a single domain, it’s important to keep in mind that they should be served from the same site as Workbench for the inline previews to work.

Please check keep-web’s URL pattern guide to learn more.

https://doc.arvados.org/api/keep-web-urls.html#same-site
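For illustration, the corresponding keep-web service entries in the cluster config; the point of the note is that these hostnames should live under the same site as the Workbench hostname for previews to render inline (all names here are made up):

  Services:
    WebDAV:
      # one subdomain per collection, same parent domain as Workbench
      ExternalURL: "https://*.collections.example.com/"
    WebDAVDownload:
      ExternalURL: "https://download.example.com/"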