<api_url> or can I skip it as long as I have it in the backend Python file as OIDC_ENDPOINT?
The documentation at https://galaxyproject.org/authnz/config/oidc/ does not mention it at all.
<url> appears as an option, and in the
oidc_backends_config.xml it is replaced, in some providers, by
<url> only appears in the example in the docs (https://galaxyproject.org/authnz/config/oidc/#oidc-configuration-options-for-identity-providers) but it is not explained in the text, while all the other tags are explained in detail. Maybe, if it is not required, we should remove it from the docs?
<redirect_url> points to the IdP. My question was more about the opposite direction: how your client will connect to the IdP ...
Regarding the <url> tag in
oidc_backends_config.xml: it seems that the OIDC_ENDPOINT variable in the backend Python file is enough. I am using our own IdP, and the (very basic) backend is based on the OpenIdConnectAuth class.
Hi! I am now trying to run a notebook for ML that needs to use the GPUs on the host machine. I have installed the NVIDIA Container Toolkit, and can run Docker successfully and see the GPUs in the container with:
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Can I make Galaxy add the
--gpus all option to the docker run command it uses? I have naively tried
<param id="docker_run">--gpus all</param> in the job_conf.xml of the destination that is used, but that has no effect. In the meantime I will look deeper into the documentation to see if I can find the solution myself.
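For reference, here is a sketch of the kind of destination I would expect to work, assuming the destination already runs tools in Docker. The param name docker_run_extra_arguments is my assumption for how extra docker run flags are passed; please verify it against job_conf.xml.sample_advanced before relying on it.

```xml
<destination id="gpu_docker" runner="local">
    <!-- run this destination's tools inside Docker containers -->
    <param id="docker_enabled">true</param>
    <!-- assumption: extra flags appended to the generated `docker run`
         command; here we expose all host GPUs to the container -->
    <param id="docker_run_extra_arguments">--gpus all</param>
</destination>
```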