Avishek2020
@Avishek2020
@rufuspollock I need the DrugBank and SIDER datasets for linking two datasets. If I can get a good SPARQL endpoint, that would also help...
Rufus Pollock
@rufuspollock
what is the SIDER dataset?
And can you point to the exact original datasets you wanted as RDF?
Avishek2020
@Avishek2020
The Linked Data version of SIDER, which contains information on marketed drugs and their adverse effects.
I found one for SIDER at http://linkeddata.finki.ukim.mk/sparql but I'm not sure whether it's healthy / good or not
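A minimal sketch (Python, using requests) of how one might sanity-check whether that endpoint is up and answering queries; the endpoint URL is the one mentioned above, and the query is just an illustrative placeholder, not SIDER-specific:

```python
# Sketch: sanity-check a SPARQL endpoint with a trivial query.
# The endpoint URL is from the conversation; the query is a placeholder.
import requests

ENDPOINT = "http://linkeddata.finki.ukim.mk/sparql"
QUERY = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print a few triples just to confirm the endpoint returns sensible results.
for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```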
Dilyan
@DilyanPetkov
hi guys, I am kinda new here
I need some help
I have a generated JAXB class with a list segment in it
and I send the request from Postman
but the issue with this list segment is that I can only process one element from it, and I need to input some more
I am thinking it could be the inbound adapter, because it is a Map<String, String>, so every next element in the list overwrites the previous element; or I could make a publication handler that checks for delimiters in the request
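As an aside, a small sketch of the overwriting behaviour described above, shown in Python rather than the Java/JAXB code in question: a plain string-keyed map keeps only the last value per key, while a list-valued map keeps them all.

```python
# Illustration only: the Java/JAXB specifics from the question are not shown here.
from collections import defaultdict

incoming = [("item", "A"), ("item", "B"), ("item", "C")]  # repeated keys, like a list segment

# A plain dict (analogous to Map<String, String>): later values overwrite earlier ones.
flat = {}
for key, value in incoming:
    flat[key] = value
print(flat)  # {'item': 'C'} -- only the last element survives

# A list-valued map (analogous to Map<String, List<String>>): all elements are kept.
multi = defaultdict(list)
for key, value in incoming:
    multi[key].append(value)
print(dict(multi))  # {'item': ['A', 'B', 'C']}
```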
Rufus Pollock
@rufuspollock
@DilyanPetkov can you give us a bit more info about what you are trying to do with DataHub.io / Frictionless Data stuff?
Xeverus01
@Xeverus01
Hey guys, does anyone know if this data set will be continually updated? https://datahub.io/core/co2-ppm
Anuar Ustayev
@anuveyatsu
Hi @Xeverus01 yes, we will set up a monthly update for this dataset.
@Xeverus01 by the way, we also have daily data here - https://datahub.io/core/co2-ppm-daily
by the way @svetozarstojkovic and @stefanbole I think the daily data doesn’t contain entries from 2019, could you check, please?
Xeverus01
@Xeverus01
@anuveyatsu Hey, awesome - yeah, I used to use Carbon Doomsday, which I believe is now co2-ppm-daily, but they changed it to only go back to 1973, so co2-ppm has more historical data but hasn't been updated in a while. Thanks for updating it!
Anuar Ustayev
@anuveyatsu
@Xeverus01 it is definitely on our list :smile: I believe it should be automated sometime soon!
Anuar Ustayev
@anuveyatsu
let’s track it here @Xeverus01 - datasets/co2-ppm#12
Sahana Chandrakumar
@mel_kumar_gitlab
Hi, I am creating a report and used your data. I was wondering if I could get a small blurb on how your website started and what it is now?
Rufus Pollock
@rufuspollock
@mel_kumar_gitlab sure! You can see https://datahub.io/docs/about
Mahmoud
@GeniusItech_twitter
hi all
Rufus Pollock
@rufuspollock
hi
zeroexp
@zeroexp
Hi - we had a dataset published on the old datahub.io but I cannot find it on the new datahub.io -- is there something I need to do on my end to have it included and findable on the new datahub.io site?
the link to the dataset on the old.datahub.io is: https://old.datahub.io/dataset?q=Imagesnippets+
Rufus Pollock
@rufuspollock
@zeroexp yes, you will need to republish on datahub.io if you want it to show up there. Please go for that and let us know if you have any issues.

Carbon Doomsday update

@titojankowski we’ve been busy working on a port of your carbondoomsday - it is still in progress but you can see how it looks here https://carbon.datahub.io/

If you or any colleagues want to help out the repo is here https://github.com/datahq/carbondoomsday

Tito Jankowski
@titojankowski
@rufuspollock that’s terrific! wow
look at that! cool
glad to see the port is up and running!
any questions I can help answer for the team?
Tito Jankowski
@titojankowski
looks like a ton of work went into it, completely new graphing system
Rufus Pollock
@rufuspollock
@titojankowski :smile: - yes, we switched to the standard graphing system we’ve used on datahub.io as it’s easier to maintain going forward (we think). We loved what you guys already had and have tried to imitate it as closely as possible whilst moving away from custom SVG :wink:
Michael Brunnbauer
@michaelbrunnbauer
Regarding the Imagesnippets dataset (https://old.datahub.io/dataset/imagesnippets): What do you mean exactly by "republish on datahub.io"? I cannot find anything for submitting dataset metadata - only things for uploading data. Are we supposed to make an out-of-sync copy of our triples here?
Rufus Pollock
@rufuspollock

Regarding the Imagesnippets dataset (https://old.datahub.io/dataset/imagesnippets): What do you mean exactly by "republish on datahub.io"? I cannot find anything for submitting dataset metadata - only things for uploading data. Are we supposed to make an out-of-sync copy of our triples here?

So the new datahub can do “metadata” only - you’d need to create a datapackage.json with an empty resources array and push that. If you want you can do that :smile: - or you can push the dataset itself if that is possible (e.g. if it is bulk and reasonably static).
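A minimal sketch of what such a metadata-only descriptor could look like, written out with Python; the name, title and description below are placeholders rather than the real ImageSnippets metadata, and the exact set of supported properties should be checked against the Data Package spec:

```python
# Sketch: a metadata-only datapackage.json with an empty resources array.
# All field values here are placeholders, not the actual ImageSnippets metadata.
import json

descriptor = {
    "name": "imagesnippets",
    "title": "ImageSnippets (metadata only)",
    "description": "Linked-data descriptions of images; the triples themselves live elsewhere.",
    "homepage": "https://old.datahub.io/dataset/imagesnippets",
    "resources": [],  # empty resources array, as suggested above
}

with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```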

Tito Jankowski
@titojankowski
@rufuspollock nice thinking ahead!
Rufus Pollock
@rufuspollock
@titojankowski we’ve got a little more polishing to do. Once that is done, what do you think of organizing a redirect - that way this will have a permanent, reliable home URL?
Michael Brunnbauer
@michaelbrunnbauer
@rufuspollock the old datahub has a download link for the datapackage.json but all the interesting metadata is in the resources array: https://old.datahub.io/dataset/imagesnippets/datapackage.json You said the resources array has to be empty but in that case I would not be able to provide a single link (to triple dump, SPARQL endpoint, dataset homepage, etc.). Are you sure about it?
@rufuspollock Also I cannot find any link to upload that file. Do I have to install any of your software? Would the dataset be findable by other users of datahub.io after uploading the metadata (e.g. in https://datahub.io/search)?
Anuar Ustayev
@anuveyatsu

@rufuspollock Also I cannot find any link to upload that file. Do I have to install any of your software? Would the dataset be findable by other users of datahub.io after uploading the metadata (e.g. in https://datahub.io/search)?

Hi @michaelbrunnbauer yes, you need to install the data CLI tool to publish datasets - https://datahub.io/download. Once it is published, it will be findable by other users.

Rufus Pollock
@rufuspollock

https://old.datahub.io/dataset/imagesnippets/datapackage.json You said the resources array has to be empty but in that case I would not be able to provide a single link (to triple dump, SPARQL endpoint, dataset homepage, etc.). Are you sure about it?

You could add links to the remote resources in the resources array - that should work, I think.
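Picking up that suggestion, a sketch of a descriptor whose resources array points at remote URLs (a triple dump, a SPARQL endpoint) instead of being empty; all URLs and names below are illustrative placeholders, not the real ImageSnippets links:

```python
# Sketch: datapackage.json resources pointing at remote URLs rather than uploaded files.
# All URLs and resource names below are placeholders.
import json

descriptor = {
    "name": "imagesnippets",
    "title": "ImageSnippets",
    "resources": [
        {"name": "triple-dump", "path": "https://example.org/imagesnippets/dump.nt.gz", "format": "nt"},
        {"name": "sparql-endpoint", "path": "https://example.org/sparql"},
    ],
}

with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)

# The descriptor would then be published with the data CLI mentioned above
# (https://datahub.io/download); check its documentation for the exact push command.
```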

Lynn Greenwood
@lynngre_twitter
Hi all, I just found datahub.io & wondered if anyone could let me know if it's possible to pick up the open data sets using the API, & if so, how do we go about that without an API key, or is it just an open read API?
Anuar Ustayev
@anuveyatsu
@lynngre_twitter Hi there! You can take a look at this tutorial - https://datahub.io/docs/getting-started/getting-data
Lynn Greenwood
@lynngre_twitter
Thanks!
Lynn Greenwood
@lynngre_twitter
Ok, sorry, I'm not a developer. I still don't see how I can pick up specific datasets with the API without having an API key or OAuth of some kind.
We want to pick up open data sets & display them in widgets that click through to DataHub from a website.
Anuar Ustayev
@anuveyatsu
@lynngre_twitter once you locate a dataset you can use its r (resource) links, e.g., if you want to get this dataset https://datahub.io/core/finance-vix you’d use the following URLs:
also, on each dataset page, you can see the "Integrate this dataset into your favourite tool" section, see https://datahub.io/core/finance-vix#curl
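For the no-API-key question, a sketch of reading a dataset anonymously over plain HTTP; it follows the finance-vix example above and assumes the dataset serves a datapackage.json at its URL, so the exact resource paths should be taken from the dataset page itself:

```python
# Sketch: fetch a datahub.io dataset without any API key by reading its
# datapackage.json and then the first CSV resource it lists.
# The dataset URL is the finance-vix example from the conversation; the
# datapackage.json location and resource fields are assumptions to verify.
import csv
import io
import requests

DATASET = "https://datahub.io/core/finance-vix"

descriptor = requests.get(f"{DATASET}/datapackage.json", timeout=30).json()

# Pick the first resource that looks like a CSV file.
resource = next(r for r in descriptor["resources"] if r.get("format") == "csv")
csv_url = resource["path"]
if not csv_url.startswith("http"):
    csv_url = f"{DATASET}/{csv_url}"  # resolve a relative path against the dataset URL

rows = csv.DictReader(io.StringIO(requests.get(csv_url, timeout=30).text))
for row in list(rows)[:5]:
    print(row)
```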
Lynn Greenwood
@lynngre_twitter
Ahha, thanks a million