Erik Sundell
@consideRatio
I think you would do something like...
jupyterhub:
  hub:
    extraConfig:
      spawnerHook: |
        def my_hook_function():
            pass
        c.KubeSpawner.pre_stop_hook = my_hook_function
Sarah Gibson
@sgibson91
I don't think it's possible on mybinder.org though, you'd need to be running your own BinderHub
syn4ps1s
@syn4ps1s
awesome!!
thank you guys.
Mikołaj Rybiński
@mikolajr_gitlab
Morning BinderHub team, I've got a deploy-related question, more specifically about where the code for a BinderHub-based "end app" should live. I guess values.yaml is supposed to be provided to the k8s deployment via a ConfigMap; if so, why does the BinderHub repo put the BinderSpawner class code directly in the "values.yaml" file? It seems to me that a more "correct" way would be to put it in a separate code file that ships with the "end app" image, and only import the class in "values.yaml", much like KubeSpawner is imported now. Would that indeed be more "correct", or am I missing something here?
Ned Letcher
@ned2

Hey folks :) Running into an issue during initial creation of a container on Binder:

  Attempting uninstall: terminado
    Found existing installation: terminado 0.8.3
ERROR: Cannot uninstall 'terminado'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Removing intermediate container 1c4d639a6120
The command '/bin/sh -c ${KERNEL_PYTHON_PREFIX}/bin/pip install --no-cache-dir -r "requirements.txt"' returned a non-zero code: 1

Conventional Python wisdom appears to be to install the offending package with: pip install --ignore-installed PACKAGE, but I'm not sure how to do that with a requirements.txt.

Thought this might be an issue that's come up before on Binder?

Tim Head
@betatim
@ned2 do you have a link to the repository for which this happens? without it it's super hard to say anything. off the top of my head my answer is: not seen this before, probably something related to which packages exactly are asked for by the repo and those that are installed by default?
Ned Letcher
@ned2
yeah should have included: https://github.com/ned2/melbviz
Tim Head
@betatim
@mikolajr_gitlab i don't remember why we put it in values.yaml. Overall i think the answer is "history" and it makes it easy to make small edits to it without having to rebuild the image (which might be why it got put there at the very beginning and then never got cleaned up?)
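For illustration, a minimal sketch of the alternative Mikołaj describes: shipping the spawner class inside an image and only importing it from the helm values, instead of inlining the class body. The myapp package and snippet key here are hypothetical, not how the BinderHub chart actually ships.

jupyterhub:
  hub:
    extraConfig:
      useCustomSpawner: |
        # hypothetical: import the class from a package baked into the hub image
        from myapp.spawner import MyBinderSpawner
        c.JupyterHub.spawner_class = MyBinderSpawner

The trade-off Tim mentions cuts the other way for the inline approach: with the class defined in values.yaml, a plain helm upgrade is enough to tweak it, whereas the import approach needs an image rebuild for every change.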
wild guess: https://github.com/ned2/melbviz/blob/ca6ee7372266a5ecc1cfe512c0cfb10e13002b87/requirements.txt#L66 requests (via pip) a version of terminado which is different from the version that is already installed (repo2docker installs it via conda). Because it wasn't installed via pip, you now get the warning about a "partial uninstall". Generally we recommend not listing every package in your requirements.txt, only those you "care" about. in particular for things which are dependencies of your dependencies i wouldn't specify an explicit version
at least that is what i'd try
Ned Letcher
@ned2
ahh, right. yeah that one is made with pip-tools (compiles the requirements.in --> requirements.txt)
I bet you're right. it was working before I switched to compiling a fully pinned requirements.txt
thanks will try!
Ned Letcher
@ned2

success @betatim! seems kind of obvious in hindsight :P

thanks for the help :D

Tim Head
@betatim
no worries
it is a bit particular to binder. i think in general having a "fully pinned" requirements.txt is a good idea/practice but in binder it gets used "on top of" an existing environment so the recommendation changes a bit
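To make that recommendation concrete, here is an illustrative requirements.txt (the package names are just examples): list the top-level packages you directly depend on and leave transitive dependencies such as terminado to the environment repo2docker already provides.

# only the packages the project directly cares about
pandas
plotly>=4.0
dash
# no pins for transitive dependencies (e.g. terminado); repo2docker's
# base conda environment already provides those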
Min RK
@minrk
I think there is an issue in that the "distutils installed" bit shouldn't happen. This is probably a problem in the conda package not installing the right metadata.
Our terminado is also outdated, so this may have been fixed and a refreeze will do it
Loic Tetrel
@ltetrel
Hi all,
I was wondering if it is possible to select, via helm, which node the hub and binder pods will be scheduled on? I want to keep my worker nodes "free" of any orchestration pods.
For example, for cert-manager I added this option to the helm install: --set nodeSelector."node-role\.kubernetes\.io/master="
Sarah Gibson
@sgibson91
@ltetrel (sorry, can't do threads in the app) Yep, just repeat the steps using "user" instead of "core"
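For reference, a hedged sketch of the kind of scheduling config being discussed, assuming node pools labelled with hub.jupyter.org/node-purpose (core/user) as on mybinder.org; the exact keys depend on your BinderHub and Zero-to-JupyterHub chart versions, so check your chart's values.yaml.

# sketch only; the labels must already be applied to your node pools
nodeSelector:                          # the binder pod
  hub.jupyter.org/node-purpose: core
jupyterhub:
  hub:
    nodeSelector:                      # the hub pod
      hub.jupyter.org/node-purpose: core
  scheduling:
    userPods:
      nodeAffinity:
        matchNodePurpose: require      # keep user pods on "user"-labelled nodes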
Loic Tetrel
@ltetrel
thanks, the config you shared will be useful!
pulsargranular
@pulsargranular
Hi there, I tried to add a new RepoProvider according to the docs here: https://binderhub.readthedocs.io/en/latest/developer/repoproviders.html#adding-a-new-repository-provider
Is there a config-based way to register it?
Min RK
@minrk
Yeah, you can add providers in binderhub_config.py:
c.BinderHub.repo_providers.update({"myprefix": MyRepoProvider})
It can only be done with Python config, though, not the declarative helm config.
in helm:
extraConfig:
  myRepoProvider: |
    from binderhub.repoproviders import RepoProvider

    class MyRepoProvider(RepoProvider):
        ...

    c.BinderHub.repo_providers.update({"myprefix": MyRepoProvider})
pulsargranular
@pulsargranular
cool, thanks a lot!
Francisco Bischoff
@franzbischoff
Hello binders!
Is there any API to retrieve the status of a binder? So I can put a badge on my repo showing whether the binder is ready or not?
Tim Head
@betatim
@franzbischoff that endpoint will trigger a new build. off the top of my head i can't think of a way you can use it to make a badge :(
you'd have to make a little service that looks at the content it gets back from that endpoint and then generates a badge
Francisco Bischoff
@franzbischoff
@betatim yup, the ideal thing would be to just check the build status and not trigger a new build...
the URL above will return the build log... if it quickly answers with this: "data: {"phase": "built", "imageName": "gcr.io/binderhub-288415/r2d-staging-72d7634-heads-2duporto-2drstudio-5fenv-49b1f3:cb08bf5fa6d39dab1f94e2e798fe4bba9a3fff24", "message": "Found built image, launching...\n"}" I can assume it is built
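A hedged sketch of the "little service" idea, in Python: stream the build endpoint and read the phase field of each event. The repo spec below is a placeholder, and remember that this same endpoint triggers a build/launch if no image exists yet, so it only behaves like a pure status check in the already-built case.

import json
import requests

# placeholder spec: provider/owner/repo/ref as used in mybinder.org build URLs
BUILD_URL = "https://mybinder.org/build/gh/OWNER/REPO/HEAD"

def binder_phase(url=BUILD_URL, timeout=30):
    """Return the first build phase reported by the event stream."""
    with requests.get(url, stream=True, timeout=timeout) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data:"):
                continue  # skip keep-alives and non-data lines
            event = json.loads(raw[len("data:"):])
            if "phase" in event:
                return event["phase"]  # e.g. "built", "building", "failed"
    return None

print(binder_phase())

A badge could then be generated from the returned phase, for example by exposing it in the JSON format that shields.io endpoint badges consume.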
Francisco Bischoff
@franzbischoff
I found something here: jupyterhub/mybinder.org-deploy#448 but it seems to be related to the running container, not the build process
Francisco Bischoff
@franzbischoff
Another question:
is there any way to store the GITHUB_TOKEN when someone checks out the repository, so the user can push back changes without having to type a user/pass? I'm thinking of using this for private repositories
(for GitHub Classroom)
well, the first problem is that nbgitpuller only clones public repos :-/
Sarah Gibson
@sgibson91
@franzbischoff I'd watch this PR jupyterhub/binderhub#1169
This would be for authenticated BinderHubs only (so not mybinder.org)
Francisco Bischoff
@franzbischoff
ok, thanks!
Tim Head
@betatim
nods. in general for mybinder.org we recommend people do not enter any kind of private credentials or passwords. this is a simple and easy message for people. maybe one particular repo is not evil so people could maybe enter something secret into it. but how does the average user know/check that some other binder doesn't contain something evil that will steal their credentials? this is super hard to do. hence our simple message "please don't type secret things into your binder sessions."
but you can run your own binderhub with authentication etc which solves some/all of these worries
Nalin Bhardwaj
@nalinbhardwaj
Hey guys! I'm trying to deploy BinderHub and use it to create kernels and connect to them via websockets locally with the Jupyter messaging API (https://jupyter-client.readthedocs.io/en/latest/messaging.html#python-api), but I've been having trouble getting an authorisation token that lets me do this. Am I just doing something wrong, or is this not possible to do with BinderHub?
I can connect to a different self-hosted Jupyter server, so I think I'm doing something wrong with BinderHub specifically (or it's not possible to use BinderHub the way I'm trying to)
Tim Head
@betatim
@nalinbhardwaj maybe take a look at how https://github.com/executablebooks/thebe and https://github.com/ines/juniper do it?
they use a binderhub to launch kernel(s) and then use them to have code executed from a webpage
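For what it's worth, a hedged sketch of the approach those libraries take: launch through the BinderHub build endpoint, take the url and token from the final "ready" event, and authenticate against the launched Jupyter server's REST/WebSocket API with that token. The repo spec is a placeholder and error handling is omitted.

import json
import requests

BUILD_URL = "https://mybinder.org/build/gh/OWNER/REPO/HEAD"  # placeholder spec

def launch_binder(build_url=BUILD_URL):
    """Stream build events until the server is ready; return (server_url, token)."""
    with requests.get(build_url, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            if raw and raw.startswith("data:"):
                event = json.loads(raw[len("data:"):])
                if event.get("phase") == "ready":
                    return event["url"], event["token"]
    raise RuntimeError("binder launch never reached the 'ready' phase")

server_url, token = launch_binder()
headers = {"Authorization": f"token {token}"}

# start a kernel via the Jupyter server REST API; the websocket channels for the
# messaging protocol then live at {server_url}api/kernels/{id}/channels
kernel = requests.post(server_url + "api/kernels", headers=headers).json()
print(kernel["id"])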