Matt Waller

hi y'all! I'm thinking about using JupyterHub for a project.

My company makes scientific Python packages and Dash apps, and I wondered if creating custom, preloaded JupyterHubs would be a good thing to offer our client companies. Ideally we could make a highly customized front end that could launch the Dash apps or other custom GUI web apps, but also link to a JupyterHub with all of our software packages on it. Does this sound like a good use case?

Sarah Gibson
Yes! But you might also be interested in Binder https://mybinder.readthedocs.io/en/latest/

Hello, I have two questions regarding a jovyan user's account suddenly requiring a refresh (no commands are run; the browser displays a pop-up requesting a restart of the server).
Looking at the logs for our hub, I found that it could be because of the MIME request header parsing.

  1. Are there any ways to improve the stability of a user pod so that it won't randomly restart a user's server?

  2. With respect to the below issue, is there a way to inspect or guarantee a */* type?

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1704, in _execute
    result = await result
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/handlers/base.py", line 1496, in get
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/utils.py", line 636, in get_accepted_mimetype
    for (mime, params, q) in _parse_accept_header(accept_header):
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/utils.py", line 591, in _parse_accept_header
    typ, subtyp = media_type.split('/')
ValueError: not enough values to unpack (expected 2, got 1)
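The ValueError above means one entry in the Accept header contained no '/' (e.g. a bare `*` sent by some clients). As a hypothetical illustration (this is standalone sketch code, not JupyterHub's actual implementation), a tolerant parser would skip such malformed entries instead of raising:

```python
def parse_accept_header(accept_header):
    """Parse an HTTP Accept header, skipping malformed entries.

    Hypothetical tolerant variant: media types without a '/'
    (the cause of the ValueError above) are ignored rather than
    crashing the request handler.
    """
    results = []
    for part in accept_header.split(","):
        fields = part.split(";")
        media_type = fields[0].strip()
        if "/" not in media_type:
            # e.g. a bare "*"; skip instead of raising ValueError
            continue
        q = 1.0
        for param in fields[1:]:
            key, _, value = param.strip().partition("=")
            if key == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass
        results.append((media_type, q))
    # highest-preference media types first
    return sorted(results, key=lambda item: item[1], reverse=True)
```

Until something like this lands upstream, inspecting which client or health check sends the malformed header (via the hub's access logs) is the practical workaround.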
The user didn't explicitly run a command; however, the logs show the below:
 Uncaught exception GET /hub/user/<user_name>/files/<folder_name>/<file_name>.ipynb?_xsrf=2%7C998969d3%7Cdc244f802e575ffd753fcdbf6efd17de%7C1658510380 (::ffff:

Proxy logs

18:33:13.220 [ConfigProxy] error: 503 GET /user/<user>/api/terminals connect ECONNREFUSED IP:8888
18:33:17.372 [ConfigProxy] error: 503 GET /user/<user>/api/kernels/413753d7-67bb-4a49-ae10-84c1f80f3c00/channels connect EHOSTUNREACH IP:8888
18:33:17.372 [ConfigProxy] error: 503 GET /user/<user>/api/contents/file_name connect EHOSTUNREACH IP:8888
18:33:17.373 [ConfigProxy] error: 503 GET /user/<user>/api/kernels connect EHOSTUNREACH IP:8888
18:33:17.373 [ConfigProxy] error: 503 GET /user/<user>/api/sessions connect EHOSTUNREACH IP:8888
18:33:20.540 [ConfigProxy] error: 503 GET /user/<user>/api/kernels/413753d7-67bb-4a49-ae10-84c1f80f3c00 connect EHOSTUNREACH IP:8888
18:33:22.186 [ConfigProxy] info: Removing route /user/<user>
18:33:22.186 [ConfigProxy] info: 204 DELETE /api/routes/user/<user>
18:33:39.731 [ConfigProxy] info: 200 GET /api/routes
18:34:39.731 [ConfigProxy] info: 200 GET /api/routes
18:34:41.856 [ConfigProxy] info: Adding route /user/<user> -> http://NEW_IP:8888
18:34:41.856 [ConfigProxy] info: Route added /user/<user> -> http://NEW_IP:8888
18:34:41.857 [ConfigProxy] info: 201 POST /api/routes/user/<user>

Could this IP reassignment be the issue?

Nikhil Jha
I have a tiny PR for kubespawner to fix a regression in IPv6 address handling: jupyterhub/kubespawner#619 🥺
João Victor Carvalho
Hi, good morning! I want to configure HTTPS on my JupyterHub installation. I am running it on a Kubernetes cluster... Likewise, I don't understand what more I need to do to make HTTPS work. If someone can help me, thanks!
This is my config in a yaml file:
I try to access the port that HTTPS is running on, but I always get connection refused.
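For reference (the asker's own config wasn't pasted), the Zero to JupyterHub helm chart's documented way to enable HTTPS via Let's Encrypt looks roughly like this; the hostname and email are placeholders, and your DNS record must already point at the proxy's load balancer:

```yaml
# values.yaml sketch -- host and email are placeholders
proxy:
  https:
    enabled: true
    hosts:
      - hub.example.org               # placeholder: your DNS name
    letsEncrypt:
      contactEmail: admin@example.org # placeholder
```

With this enabled, the chart terminates TLS on port 443; "connection refused" on the HTTPS port is often worth checking against certificate acquisition not having completed yet, or the load balancer not forwarding port 443.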
Tung Lam
Hi all!
I am new to JupyterHub. Currently, I want to set up a JupyterHub server where each user can use a GPU (the server has multiple GPUs). I followed this guide jupyterhub/dockerspawner#331, but all users end up on the same GPU. I want each user to get a different GPU. Is that possible?
@tunglambk It is certainly possible in a k8s environment. As for Docker, I haven't tried it.
Tung Lam
@Armadik Currently, I just use DockerSpawner because of the requirement.
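One hedged sketch for the per-user GPU question with DockerSpawner: use a `pre_spawn_hook` to pin each user's container to one GPU via docker-py's `DeviceRequest`. This is untested against a real multi-GPU host; it assumes the NVIDIA container runtime is installed, and the hook name and hashing strategy are just illustrations.

```python
# jupyterhub_config.py sketch -- untested; assumes the NVIDIA container
# runtime and docker-py are available on the host.
try:
    from docker.types import DeviceRequest
except ImportError:  # docker-py not installed; the hook below won't run
    DeviceRequest = None

N_GPUS = 4  # placeholder: number of GPUs on the host


def pick_gpu(username, n_gpus=N_GPUS):
    """Deterministically map a username to a GPU index."""
    return sum(ord(c) for c in username) % n_gpus


def assign_gpu(spawner):
    """pre_spawn_hook: pin this user's container to a single GPU."""
    gpu = pick_gpu(spawner.user.name)
    spawner.extra_host_config = {
        "device_requests": [
            DeviceRequest(device_ids=[str(gpu)], capabilities=[["gpu"]])
        ]
    }

# in jupyterhub_config.py:
# c.Spawner.pre_spawn_hook = assign_gpu
```

Hash-based assignment can collide when users outnumber GPUs; tracking GPUs in use and handing out the first free index would be the next refinement.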

Hey there! I'm having some issues with a Littlest JupyterHub environment on AWS. Users authenticate to JupyterHub through Azure AD. We've got a proxy in the environment, and I've been trying back and forth to get the TLJH server to use the proxy when users log in (otherwise authentication won't complete; it needs to go through the corporate proxy), but I can't get it to work. I've been tracing the traffic with tcpdump and I can see the server sends the requests directly without going through the proxy, and then they time out. I've been looking at documentation and already-opened issues (such as jupyterhub/oauthenticator#217), but I haven't been able to make it work on a TLJH installation.

Do you guys have any idea what else I could try to get it to work? Thanks! :)

Good morning! Today my admins deleted a bunch of EBS volumes that were used by our JupyterHub deployment in Kubernetes. When a user's pod tries to start up, it says it can't find the volume. What is the easiest way to fix this? I could delete the pods or redeploy, but I think I might need to delete the PVCs and redeploy. Any thoughts?
2 replies
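If the backing EBS volumes are really gone, the PVC/PV objects can't be reused, so the fix matches the asker's guess: delete the stuck pod and the orphaned claim, then respawn. A sketch (the `jhub` namespace is a placeholder; `jupyter-<user>` and `claim-<user>` follow the z2jh default naming; data on the deleted volumes is lost either way):

```shell
kubectl -n jhub get pvc                     # find the orphaned claims
kubectl -n jhub delete pod jupyter-<user>   # stop the stuck user pod
kubectl -n jhub delete pvc claim-<user>     # spawner recreates it on next spawn
```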
Tung Lam
Hi all, I'm running into a folder permission problem with the above config.
This is the folder created by the pre_spawn_hook when I log in with user "tung".
When I run the notebook as user tung, the work folder's owner is jovyan and I can create files in the folder.
However, when I log in with other users, I cannot create files in work.
Tung Lam
Please help me :(
Can jupyterhub-chp be configured with a regex?
i.e., can a route of /user/{username}/{suffix} be mapped to a dynamic target, e.g. http://service.com:8888/{username}/someStringHere/{suffix}?
Min RK
No, it only supports prefix matching
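Since CHP only does prefix matching, the dynamic-target pattern has to be unrolled into one prefix route per user, added through CHP's documented REST API (`POST /api/routes/<prefix>` with a `{"target": ...}` body, authenticated by the `CONFIGPROXY_AUTH_TOKEN`). A stdlib sketch; the API URL, token, and target are placeholders:

```python
import json
from urllib import request


def route_request(api_url, token, prefix, target):
    """Build the Request for adding one route (split out for testing)."""
    return request.Request(
        f"{api_url}/api/routes{prefix}",
        data=json.dumps({"target": target}).encode(),
        headers={"Authorization": f"token {token}"},
        method="POST",
    )


def add_route(api_url, token, prefix, target):
    """POST the route to CHP; returns the HTTP status (201 on success)."""
    with request.urlopen(route_request(api_url, token, prefix, target)) as resp:
        return resp.status

# e.g. add_route("http://127.0.0.1:8001", TOKEN,
#                "/user/alice", "http://service.com:8888/alice")
```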
1 reply

I'm trying to make "rooms" that users get routed into. The "rooms" are pre-launched instances of JupyterLab. I am able to programmatically spawn the instances and to programmatically configure the CHP to route requests to the newly spawned instances; however, the instances that I spawn are unable to communicate with the Hub, presumably because they have not completed the OAuth flow with the Hub.

Anyhow, I am wondering if there is an approach to spawning JupyterLab instances that aren't necessarily owned by a single user. In other words, I would like to spawn JupyterLab instances that have URLs like this: "https://example.org/user/the-name-of-the-room/lab?"

Min RK
You can run jupyterlab as a jupyterhub 'service'
You can run any number of these that you like
and grant different users access to one or more services via roles
These can't be launched programmatically, though (services come from config and aren't dynamic)
You can also grant users access to another user's server via the access:servers!user=name scope. So if you create 'fake' users, you can grant a group of real users access to that fake user's server.
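Min's suggestions above might look roughly like this in jupyterhub_config.py. This is a sketch: the service name, port, launch command, and usernames are all placeholders; only the scope syntax (`access:services`, `access:servers!user=`) comes from the JupyterHub RBAC docs.

```python
# jupyterhub_config.py sketch -- names, port, command, and users are
# placeholders.
c.JupyterHub.services = [
    {
        "name": "room-1",
        "url": "http://127.0.0.1:9999",
        # placeholder command: however you launch JupyterLab as a service
        "command": ["jupyter", "lab", "--port=9999", "--ip=127.0.0.1"],
    }
]

c.JupyterHub.load_roles = [
    {
        # grant selected users access to the shared service
        "name": "room-1-access",
        "scopes": ["access:services!service=room-1"],
        "users": ["alice", "bob"],  # placeholders
    },
    {
        # the 'fake user' variant: access to another user's server
        "name": "shared-room-access",
        "scopes": ["access:servers!user=room-user"],
        "users": ["alice", "bob"],
    },
]
```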

Hello! I'm doing some work on an existing JupyterHub for Kubernetes cluster and I'd like to allow users to leave work running on their notebook servers overnight.

So far I've considered:

  • disabling the jupyterhub-idle-culler and asking users to shut down their server when they are done
  • extending the idle timeout so that servers are culled after 12 hours of inactivity

Are there any other good options for achieving this? Is it possible at all for JupyterHub admins/users to disable culling on individual servers (while keeping culling turned on by default)?

Thanks in advance for any thoughts or advice!

Min RK
There isn't a simple, clean option for temporarily marking a server as not-for-shutdown. Of course, increasing your idle shutdown time and/or disabling it (requiring users to shut down) definitely works.
The 'clean' solution would be to allow a server extension to set a flag that the idle culler would consider. This doesn't quite work yet (there's no writable area for the server to put that info).
The slightly hacky solution that should work reliably is an extension (the UI for users is the same: a button that sets an internal state in the server) that just constantly registers activity, so the server never appears idle.
The extra-hacky solution (that real people often use) is to register real activity by leaving a browser open running an operation that always registers activity (e.g. a terminal posting output forever). That takes no implementation, but requires an always-on browser.
(or a virtual browser via selenium, I guess)
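The "constantly registers activity" hack above could be sketched as a small loop that periodically makes an authenticated request to the single-user server. This assumes, per the discussion, that any API request counts as activity; the URL, token, and interval are placeholders, and the fetch is injectable so the loop itself is testable:

```python
import time
from urllib import request


def touch(base_url, token):
    """One authenticated request to the server's status endpoint."""
    req = request.Request(
        f"{base_url}/api/status",
        headers={"Authorization": f"token {token}"},
    )
    with request.urlopen(req) as resp:
        return resp.status


def keep_alive(base_url, token, interval=300, max_iters=None, _touch=None):
    """Register activity every `interval` seconds.

    `max_iters=None` loops forever; `_touch` is injectable for tests.
    """
    touch_fn = _touch or (lambda: touch(base_url, token))
    n = 0
    while max_iters is None or n < max_iters:
        touch_fn()
        n += 1
        if max_iters is None or n < max_iters:
            time.sleep(interval)
    return n
```

Run from a terminal inside the user server, this is effectively a tidier version of the "terminal posting output forever" trick.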
Min RK
Here's something that might be useful, though: how much does your cluster scale up/down and how many unique users do you have in a day vs your max concurrent users? Because it may well be that extending your idle timeout a bunch won't cost you anything, or at least not more than the cost of developing a better opt-in 'stay-alive' implementation.
Thank you very much @minrk for the great discussion of options; I was indeed slightly curious whether there was a way to do this via an extension. I think the simplest and most reliable way for now is just to extend the idle timeout; glad you also think that's not a bad choice. Thanks again!
Any idea why my hub pod complains with the below error post-authentication?

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1703, in _execute
    result = await result
  File "/usr/local/lib/python3.6/dist-packages/oauthenticator/oauth2.py", line 213, in get
    user = await self.login_user()
  File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py", line 699, in login_user
    authenticated = await self.authenticate(data)
  File "/usr/local/lib/python3.6/dist-packages/jupyterhub/auth.py", line 383, in get_authenticated_user
    authenticated = await maybe_future(self.authenticate(handler, data))
  File "/usr/local/lib/python3.6/dist-packages/oauthenticator/otds.py", line 185, in authenticate
    resp = yield http_client.fetch(req)
tornado.httpclient.HTTPClientError: HTTP 404: Not Found
Min RK
That probably means that a URL in either the otds authenticator class or its configuration is incorrect. But this otds.py file is not part of oauthenticator, so I'm not sure where it could come from.
1 reply

Hello again, I've been trying to allow users to create their own conda environments for notebooks on our JupyterHub for Kubernetes. Everything seems to be set up according to that guide, but I've been trying it out and I can't seem to use my own conda environment in a notebook:

I use conda to create an environment with the ipykernel and an arbitrary python package (suds) that isn't in our base environment: conda create -n myenv ipykernel suds

This environment shows up in my configured user directory ~/my-conda-envs/myenv and the kernel appears in the list of available kernels (as 'Python [conda env:myenv]').

However, I can't seem to use this environment/kernel; the notebook doesn't seem to be able to connect (when I run a cell, the status goes from 'Connecting' to 'Disconnected', sometimes to 'No Kernel').

Any advice on debugging this? Happy to provide any info

3 replies
I'm having trouble getting widgets working, and according to the advice on this page: https://ipywidgets.readthedocs.io/en/latest/user_install.html it looks like a JavaScript loading issue, but I can't see a way to debug this.
Wayne Motycka
On my admin panel, all the Spawn Page links except my own have the string 'undefined' appended to the spawn paths of my users, e.g. ".../spawn/login_idundefined". This appears to be the :servername value, which I can't seem to find a way to set or initialize (to a blank value, in my case). Does anyone have any guidance on how to get these links to initialize properly?
Hello! I've been having an issue similar to what @vanyae-cqc mentioned. I have a JupyterLab extension which requires the conda base environment to be activated so that the extension can find all the files it requires. However, I haven't found a way to activate the base environment when the JupyterHub profile is launched. Locally, I can activate the base environment by changing the CMD in my Dockerfile, but this gets overwritten when added to the JupyterHub. We're also using KubeSpawner. Any information you can give me for configuring this is highly appreciated. Thanks!
3 replies
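One hedged approach for the base-environment question above: since KubeSpawner replaces the image's CMD with its own `cmd` setting, the activation has to happen in the same shell that execs the single-user server. A sketch (untested; the conda path is a placeholder for wherever conda lives in your image):

```python
# jupyterhub_config.py sketch -- conda path is a placeholder; untested.
# KubeSpawner overrides the image CMD, so wrap its `cmd` in a shell that
# activates the env first and then execs the normal entrypoint.
c.KubeSpawner.cmd = [
    "/bin/bash", "-lc",
    "source /opt/conda/bin/activate base && exec jupyterhub-singleuser",
]
```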
I am getting the below error, as I am unable to find a way to import the OAuth provider's SSL certificate into my JupyterHub pod. How do we import the OAuth SSL certificate?

raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='X.X.X.X', port=8443): Max retries exceeded with url: /otdsws/v1/resources/18af9d04-5f85-4912-bda8-ffbba9d1c3df/activate (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
Gauraang Khurana

Hey guys, I have a small question. Is it possible to have multiple users connect to the same Jupyter notebook pod?

Context: I am trying to provide Jupyter notebook access to a large volume of users.
Right now, every user spawns a new notebook pod in the Kubernetes cluster, and this is not very efficient. I am looking for a way to connect many users to a single notebook pod in Kubernetes.