    Alexandru Mahmoud
    @almahmoud
    Is the auth meeting happening in 15?
    Pablo Moreno
    @pcm32
    Hi guys, do you still have the issue of the newer Kubernetes job runner capturing the stdout and stderr of the jobs in a way that kubectl logs -f <pod> doesn’t see any output? Thanks!
    Nuwan Goonasekera
    @nuwang
    @pcm32 I haven’t checked job logs since GCC, but there was no output up till that point...
    Pablo Moreno
    @pcm32
    thanks… I will open an issue, as I think this is a problem with the runner...
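    For reference, a quick way to check whether the runner is redirecting output to files instead of the pod's stdout; the pod name and job directory path below are assumptions about a typical deployment:
        # tail the job pod directly; if the runner captures stdout/stderr itself, this stays empty
        kubectl logs -f <galaxy-job-pod-name>
        # look for captured output files in the job's working directory instead
        kubectl exec <galaxy-job-pod-name> -- ls -l /galaxy/server/database/jobs_directory/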
    MoHeydarian
    @MoHeydarian
    When launching Cloudman 2.0, what is the username or email that grants access on the (green colored) GVL landing page?
    Alexandru Mahmoud
    @almahmoud
    admin
    I think
    Alexandru Mahmoud
    @almahmoud
    Other possibilities could be ubuntu, cbuser, or cluser
    But I think those were on the VM, and admin was on Keycloak
    MoHeydarian
    @MoHeydarian
    admin worked! Thanks, Alex :)
    MoHeydarian
    @MoHeydarian
    How would a user make themselves a Galaxy admin with Cloudman 2.0?
    Alexandru Mahmoud
    @almahmoud
    Is this the newest one, or the old 2.0?
    In the newest one (GVL alpha release), you'd just add admin_users in the galaxy.yml
    I don't remember for the old Cloudman 2.0;
    it might have to be done by manually changing galaxy.yml on the VM
    Enis Afgan
    @afgane
    @MoHeydarian On the CloudMan 2.0 landing page, there is an edit/pencil icon next to the galaxy line; click that and an editor will show up. Select the galaxy.yml tab and add a line at the bottom of the file like so: admin_users: email@address. After you click Save, a Galaxy restart will be initiated, and soon after you'll see the Admin tab in Galaxy.
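    In other words, the added entry would look roughly like this (the emails are placeholders; multiple admins are comma-separated, and whether it sits under a galaxy: section depends on the file shown in the editor):
        galaxy:
          admin_users: "alice@example.org,bob@example.org"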
    MoHeydarian
    @MoHeydarian
    Thanks, Alex and Enis. When I add `admin_users: "my@email.com"` to `galaxy.yml` and click Save, I see I/O changes on the Cluster Status page, but it doesn't result in Galaxy admin access.
    If I refresh the Cloudman 2.0 landing page after adding admin info to galaxy.yml, then check the Configure Galaxy menu (edit/pencil) the changes to galaxy.yml aren't reflected.
    Enis Afgan
    @afgane
    @MoHeydarian seems we have a regression in CloudMan that does not propagate the config values. I've created an issue here to keep track of it: galaxyproject/cloudman#95. Meanwhile, I'll set the value directly in Kubernetes to get this working and will share that cluster info directly with you.
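    For anyone hitting the same regression, a rough sketch of setting it directly in Kubernetes; the namespace, configmap, and pod names below are assumptions and will differ per deployment:
        # find the configmap that holds the rendered galaxy.yml
        kubectl get configmaps -n default | grep galaxy
        # add the admin_users line to it
        kubectl edit configmap <galaxy-configmap-name> -n default
        # delete the Galaxy web pod so it comes back up with the new config
        kubectl delete pod <galaxy-web-pod-name> -n default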
    Alexandru Mahmoud
    @almahmoud
    Thoughts on this as a diagram for our K8S setup?
    Ignore first one, was an older version (different legend)
    Alexandru Mahmoud
    @almahmoud
    Untitled Diagram (2).png
    Alexandru Mahmoud
    @almahmoud
    New version
    Untitled Diagram.png
    Alexandru Mahmoud
    @almahmoud
          database_connection: 'postgresql://{{.Values.postgresql.galaxyDatabaseUser}}:{{.Values.postgresql.galaxyDatabasePassword}}@{{ template "galaxy-postgresql.fullname" . }}/galaxy'
          integrated_tool_panel_config: "/galaxy/server/config/mutable/integrated_tool_panel.xml"
          sanitize_whitelist_file: "/galaxy/server/config/mutable/sanitize_whitelist.txt"
          tool_config_file: "{{.Values.persistence.mountPath}}/config/editable_shed_tool_conf.xml,/galaxy/server/config/tool_conf.xml,{{ .Values.cvmfs.main.mountPath }}/config/shed_tool_conf.xml"
          tool_data_table_config_path: "{{ .Values.cvmfs.main.mountPath }}/config/shed_tool_data_table_conf.xml,{{.Values.cvmfs.data.mountPath}}/managed/location/tool_data_table_conf.xml,{{.Values.cvmfs.data.mountPath}}/byhand/location/tool_data_table_conf.xml"
          tool_dependency_dir: "{{.Values.persistence.mountPath}}/deps"
          builds_file_path: "{{.Values.cvmfs.data.mountPath}}/managed/location/builds.txt"
          datatypes_config_file: "{{ .Values.cvmfs.main.mountPath }}/config/datatypes_conf.xml"
          containers_resolvers_config_file: "/galaxy/server/config/container_resolvers_conf.xml"
          workflow_schedulers_config_file: "/galaxy/server/config/workflow_schedulers_conf.xml"
          build_sites_config_file: "/galaxy/server/config/build_sites.yml"
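    A quick way to sanity-check how these templated values render is to expand the chart locally; the chart path and values file below are placeholders:
        # render the chart templates without installing and inspect the resulting Galaxy config values
        helm template ./galaxy -f values.yaml | grep -A 3 database_connection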
    Enis Afgan
    @afgane
    find . -type f -name '*.xml'
    Nuwan Goonasekera
    @nuwang
    @almahmoud I meant to comment on the diagram earlier - who's the intended audience and what is the intended idea? Is it to communicate the k8s objects being created? Depending on the intention, perhaps we need additional items like where the jobs execute, another PVC for the database, etc.
    Alexandru Mahmoud
    @almahmoud
    Yes, the intention was to show the Gen3 folks the resources deployed. I did add the database PVC in a newer version; I can share it in a bit when I get back to my computer. In terms of where jobs are run (and also what to attach the RBAC rectangle to), I wasn't sure how to represent that
    Alexandru Mahmoud
    @almahmoud
    sorry for delay
    Gen3-single-user (1).png
    Untitled Diagram (2).png
    these were for gen3
    Enis Afgan
    @afgane
    To speed up start times for us during dev: AWS now supports instance hibernation for Ubuntu, so we could have a stand-by instance ready that starts in seconds vs. minutes. https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ec2-hibernation-now-available-ubuntu-1804-lts
    Vahid
    @VJalili
    A similar concept to serverless; wondering if we can initialize an instance, hibernate it, then create a pool of such hibernated instances, and use them for users.
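    A rough sketch of what that pool would involve (the AMI and instance IDs are placeholders); hibernation has to be enabled at launch and needs an encrypted root volume large enough to hold RAM:
        # launch the stand-by instance with hibernation enabled
        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m5.large --hibernation-options Configured=true
        # park it until a user needs it
        aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate
        # resume in seconds instead of booting from scratch
        aws ec2 start-instances --instance-ids i-0123456789abcdef0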
    Vahid
    @VJalili
    If nothing urgent to talk about, I suggest we cancel tomorrow's meeting.
    Alexandru Mahmoud
    @almahmoud
    Can we discuss the documentation you made at https://galaxyproject.org/cloud/k8s/gke/ ?
    I thought it was going to be a Google Doc so that we could edit it collaboratively and update it easily, given that everything is still in dev?
    There are some things that don't particularly make sense (e.g. "./helm, which is a separate installation in user's space as opposed to helm that is a version installed on the server."), and some errors that will never appear if helm is installed properly (e.g. the incompatible versions: that happened because you had installed another version of helm on the PATH and initialized that tiller pod, then tried to use the other client; it should not happen, and if it does it means the previous installation was never cleaned up). The solutions you suggest are at best masking the errors, at worst making them worse. E.g., if we know that the latest helm doesn't work, we should not encourage people to do ./helm init --upgrade. Also, regarding the portions I had left comments on in the previous Google Doc about being explicitly bad practice as per the helm docs: I don't think we should encourage people to manually delete resources, because it can leave other residuals, and I believe the helm docs or tutorials specifically warn about that...
    Also, I was not aware that the commands I sent you were going to make it into official documentation, or I would've cleaned them up a bit more. There is no reason to use ./helm instead of putting the binary on the PATH, and we should probably mention how to clean up all the residuals from the helm installation after properly installing it on the PATH, given that Cloud Shell is persistent. The method I gave you was supposed to be temporary, just for development, and to allow an easy upgrade to another helm version once newer ones become compatible with GKE, not a permanent solution in docs... Also, I'm going to test newer versions, given that it's been a while since 2.10 was the newest working one, so that we can recommend the newest working version...
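    For reference, a rough sketch of the cleanup and PATH installation I had in mind; tiller lives in kube-system by default, and the local paths are assumptions about the Cloud Shell setup:
        # remove the stray tiller deployment that the mismatched client initialized
        kubectl -n kube-system delete deployment tiller-deploy
        # put the working client on the PATH instead of keeping a local ./helm copy, then remove the leftovers
        sudo mv ./linux-amd64/helm /usr/local/bin/helm && rm -rf ./helm ./linux-amd64
        # re-initialize tiller with the client that is now on the PATH and confirm the versions match
        helm init
        helm version --short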
    Vahid
    @VJalili
    Let's discuss it next week; I have a deadline next Monday.
    Enis Afgan
    @afgane
    Other than Vahid then, I think we should have the regular meeting.
    Nuwan Goonasekera
    @nuwang
    call link?
    Alexandru Mahmoud
    @almahmoud
    Some notes from the live debugging with Vahid:
    1) The configmap changes don't get propagated (even with a manual redeploy) if you added new sections; you would also have to manually add the subPath mounts in the Volumes section of the deployments (see the sketch after these notes)
    2) When redeploying Keycloak (in a CloudMan setup), all the information is lost (i.e. the Rancher and Galaxy clients are deleted and the settings revert to defaults; I did not check whether users are kept). We should probably do something to persist that data, so that if Keycloak fails we don't lose the clients
    and 3) there is a bug in the Galaxy code that causes the Galaxy-Keycloak integration not to work without SSL validation for now. Vahid PR-ed a fix but it's not merged (galaxyproject/galaxy#7632); we'll probably have to merge it into the image if we want the demo to work with everything
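    As an illustration of 1), a new file added to the configmap also needs a matching subPath mount in the deployment spec; the names here are placeholders:
        volumeMounts:
          - name: galaxy-conf-files               # volume backed by the configmap
            mountPath: /galaxy/server/config/my_new_conf.xml
            subPath: my_new_conf.xml              # key that was added to the configmap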
    Nuwan Goonasekera
    @nuwang
    Regarding 1), this is not happening with helm, right?
    Alexandru Mahmoud
    @almahmoud
    No, 1) is about changes in Rancher. Sorry for the lack of context; I was previously talking to Enis about changing configmaps in Rancher and the easiest way to apply them, and was just noting that they take effect but need to be manually mounted as well
    Nuwan Goonasekera
    @nuwang
    Regarding 2), is Keycloak not using dedicated storage? I don't recall running into this issue...