Daniel Rothenberg
@darothen
@kmpaul yeah it would be nice to stay in the loop - I've rotated back to being in charge of cloud datastores at ClimaCell, so I may be able to contribute a bit more directly
Kevin Paul
@kmpaul
@darothen Wonderful! I’ll make sure you are on the list, then. Everyone who volunteered is new, so it will be helpful for them to have a prior perspective.
Sarah Bird
@birdsarah
@yuvipanda I couldn't find you, I'm in Alder 105 whenever is good
Joe Hamman
@jhamman
@jcrist - if you are around this afternoon, I’m going to spend a bit more time on dask-gateway and would love to ask you a few questions.
Filipe
@ocefpaf
HackMD for the pangeo ODVC sprint: https://hackmd.io/svbpE6qeRfiztcRDDST-og
Jim Crist
@jcrist
@jhamman, sorry I didn't see this earlier. I'm available tomorrow if you're still interested. I'm more likely to see responses via github than gitter, so it'd be best to comment here: pangeo-data/pangeo-cloud-federation#371
Joe Hamman
@jhamman
yes, tomorrow AM would be great.
I’ll provide an update on the gh issue.
Scott
@scollis
hey @jhamman @scottyhq, any advice on my question? It seems that when building on an existing Docker image, binder/start does not get hit
Scott Henderson
@scottyhq
@scollis you’re right, I think the pangeo binder isn’t set up for a start script, so you’ll have to stick to your workaround for now https://github.com/pangeo-data/pangeo-stacks/blob/0425c2df72c0badd2f37dde4b6f02f2a55b8de2b/onbuild/r2d_overlay.py#L125
Scott
@scollis
awesome.. thanks
Joe Hamman
@jhamman
Seattle pangeans can expect a few boxes of donuts at Alder Hall c. 9am. It will pay to be there in time 😉
Scott
@scollis
Is it just my imagination, or is it taking longer than normal to spin up a server with an existing image?
Ryan Abernathey
@rabernat
Pangeo publication reporting form: https://forms.gle/yfCHEHU2LMZhtqgF8
Joe Hamman
@jhamman
@rsignell-usgs - are you around today?
epifanio
@epifanio
Hello @all, I will attend Oceanhackweek in the next few days.. since I am already in Seattle, I thought I'd join you at the Pangeo meetings. Hope that's ok :)
Ryan Abernathey
@rabernat
@phaustin :
import json

import requests


def test_launch_binder(binder_url, repo, ref):
    # Kick off a build and follow the event stream until the server is ready.
    build_url = binder_url + '/build/gh/{repo}/{ref}'.format(repo=repo, ref=ref)
    r = requests.get(build_url, stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        line = line.decode('utf8')
        # BinderHub streams server-sent events; payloads arrive as 'data: {...}' lines.
        if line.startswith('data:'):
            data = json.loads(line.split(':', 1)[1])
            if data.get('phase') == 'ready':
                notebook_url = data['url']
                token = data['token']
                break
    else:
        # The stream ended without ever reaching the 'ready' phase, so the launch failed.
        assert False

    # Ping the notebook server's API to confirm it is actually up.
    headers = {
        'Authorization': 'token {}'.format(token)
    }
    r = requests.get(notebook_url + '/api', headers=headers)
    assert r.status_code == 200
    assert 'version' in r.json()

    # Shut the server down again so the test cleans up after itself.
    r = requests.post(notebook_url + '/api/shutdown', headers=headers)
    assert r.status_code == 200


binder_url = 'https://binder.pangeo.io'
repo = 'pangeo-data/pangeo-ocean-examples'
ref = 'f73b92a'

test_launch_binder(binder_url, repo, ref)
Takes a few minutes
Lindsey Heagy
@lheagy
All, this was published yesterday on the growth of Syzygy, the Canadian deployment of JupyterHubs for researchers that @phaustin talked about yesterday, and on the suggestion of a consortium that could serve universities and research groups who want to use interactive computing. Could be an interesting avenue for connection! https://blog.jupyter.org/national-scale-interactive-computing-2c104455e062
Kevin Paul
@kmpaul
Preliminary results of the STAC sprint from the Pangeo community meeting: https://docs.google.com/presentation/d/17MdoxIqw4hyfZC3L9UyyM8VG4Of-LBki9qZBHp8FWgU/edit?usp=sharing
Philip Austin
@phaustin
@rabernat -- is the markdown editor https://stackedit.io/ ?
Filipe
@ocefpaf
@rabernat this is a list of the new packages added to conda-forge during the sprints: https://github.com/conda-forge/staged-recipes/issues?utf8=%E2%9C%93&q=label%3Apangeo-sprint+
The list will be updated tomorrow with at least one more.
Sarah Bird
@birdsarah
@yuvipanda @ocefpaf if you're curious, my un-upgradeable conda was because conda-meta/pinned had pinned conda to 4.5.* (conda/conda#6941)
Filipe
@ocefpaf
Geez. Glad you resolved it.
Ryan Abernathey
@rabernat

@rabernat -- is the markdown editor https://stackedit.io/ ?

No, it was https://manubot.org

I guess you have to do a little setup first (e.g. kubectl apply -f shared-nfs-staging.yaml --namespace nasa-staging)
Ryan Abernathey
@rabernat
Does anyone have an example of an intake catalog pointing at zarr datasets on S3?
Aimee Barciauskas
@abarciauskas-bgse
I was hoping to do this as a part of my sprint but didn't get to it :disappointed:
Joe Hamman
@jhamman
@andersy005 :arrow_heading_up:
Ryan Abernathey
@rabernat
For some reason, @tjcrone and I are having trouble making intake work with s3. I think Tim will open an intake issue.
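
A minimal, untested sketch of the kind of catalog being asked about here, using intake-xarray's zarr driver (which requires the intake-xarray and s3fs packages); the bucket path and source name are hypothetical:

import intake

# Hypothetical catalog.yml contents:
#
# sources:
#   my_ocean_data:
#     driver: zarr
#     args:
#       urlpath: 's3://my-bucket/my-dataset.zarr'
#       storage_options:
#         anon: true    # public bucket; drop for credentialed access
#
cat = intake.open_catalog('catalog.yml')
ds = cat.my_ocean_data.to_dask()  # lazily opens the store as an xarray.Dataset
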
Rob Fatland
@robfatland
Two questions: Who is going to the Princeton Workshop on Next Gen Cloud Research Infrastructure? My abstract is only tangentially about pangeo, and I was thinking someone was going to attend to carry that banner, but I'm not sure who that is. Second: Is the gitter Lobby just any and all conversations?
Joe Hamman
@jhamman
Rob, @rabernat is submitting on cloud-native data formats (e.g. Zarr) and I am submitting on the Pangeo Cloud Architecture/Principles.
This gitter lobby is a place for general conversation, but we tend to limit the depth of any conversation because it's hard to track.
Anderson Banihirwe
@andersy005
@jhamman, @rabernat, I still haven't had time to create a static intake catalog for the CESM1 LENS data in S3. What I have today is something that works with intake-esm: https://intake-esm.readthedocs.io/en/latest/notebooks/examples/cesm1-lens-aws.html
Rob Fatland
@robfatland
Ok first: Great. Second: The Princeton thing is Mon/Tues and I'm at the Cornell Cloud Forum the prior Wed/Thu/Fri, so I plan to kill the weekend wandering around Manhattan looking for sandbagger climbing gyms. And third: I'll keep my gitter discourse... oh wait, cool! PewDiePie > 1e8 subscribers!

plan to kill the weekend wandering around Manhattan looking for sandbagger climbing gyms

Would love to meet up when you're in the city

Scott
@scollis
Hey folks.. I am using the KubeCluster and want to write data back to the main binder machine for analysis in my workflow.. but the dask workers get a permission denied error when trying to write back to the file system
Joe Hamman
@jhamman
Your workers don’t have access to the home directory of your notebook session.
Scott
@scollis
Thanks @jhamman .. is there a FS the workers can see?
Joe Hamman
@jhamman
No. Well, they all have their own persistent disk.
But if you want the data locally, you need to compute first.
Scott
@scollis
right, is there a way to copy back from the workers to the notebook session FS? I am generating grid data on the workers that would be too big to fit in memory if I brought it all back via a client.gather(futures)
I guess I could pull the data back through the futures, then loop over them and dispose of the data as I save.. something like the sketch below
Let me experiment with that.. fun :)
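
A minimal sketch of that pattern using dask.distributed's as_completed; make_grid, radar_files, and save_grid are hypothetical stand-ins for the workflow described above:

from dask.distributed import Client, as_completed

client = Client()  # or the client attached to the existing KubeCluster

# Each future produces one grid on a worker (make_grid/radar_files are placeholders).
futures = [client.submit(make_grid, f) for f in radar_files]

# Stream results back one at a time so the full dataset never sits in
# notebook-session memory all at once.
for future in as_completed(futures):
    grid = future.result()  # copy one grid to the notebook session
    save_grid(grid)         # hypothetical: write it to the notebook's filesystem
    future.release()        # let the scheduler free the worker's copy
    del grid                # drop the local copy before fetching the next
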