Folks, I just spent a couple of hours uploading 43 datasets. It was very frustrating to find that only 3 of those datasets made it to the datahub website, even though the data utility uploaded everything without an issue. Here are the results:
@MAliNaqvi Hi Ali! As I can see, all datasets were uploaded successfully; however, most of them have validation/processing issues. You need to be logged in to see those errors. I know that you're using an org account, so the best way to check would be to pass your JWT within the query params, e.g., try this: https://datahub.io/JohnSnowLabs/chicago-traffic-tracker/v/2?jwt=<your-jwt> so that you are able to see the FAILED dataset page.
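With that many datasets, checking each page by hand is tedious, so here is a minimal sketch of the same check from a script using Python requests. The jwt query parameter comes from the message above; the URL and <your-jwt> are placeholders to substitute.

import requests

# Placeholder values: substitute your own dataset path and JWT
url = 'https://datahub.io/JohnSnowLabs/chicago-traffic-tracker/v/2'
resp = requests.get(url, params={'jwt': '<your-jwt>'})

# The page is HTML, so this only confirms the page is reachable while
# authenticated; open it in a browser to read the validation errors.
print(url, resp.status_code)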
Medical diagnostics in general, or only for one disease, like heart disease?
@akariv dataflows' sort_rows processor does not seem to be working as expected, any ideas?
from dataflows import Flow, printer, sort_rows

data = [
    {'data': 'B'},
    {'data': 'E'},
    {'data': 'C'},
    {'data': 'D'},
    {'data': 'A'},
]

f = Flow(
    data,
    sort_rows('data'),
    printer()
)
f.process()
results in:

res_1:
  #  data
     (string)
---  ----------
  1  B
  2  E
  3  C
  4  D
  5  A
{}
You need to put curly braces around data, see https://github.com/frictionlessdata/datapackage-pipelines#sort

from dataflows import Flow, printer, sort_rows
data = [
    {'data': 'B'},
    {'data': 'E'},
    {'data': 'C'},
    {'data': 'D'},
    {'data': 'A'},
]

f = Flow(
    data,
    sort_rows('{data}'),
    printer()
)
f.process()
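For reference, the key argument of sort_rows is interpreted as a Python format string, which is why the curly braces matter; it also means several fields can be combined into one composite sort key. A small sketch of that (the year/month rows are made up for illustration):

from dataflows import Flow, printer, sort_rows

rows = [
    {'year': 2001, 'month': 12},
    {'year': 2001, 'month': 2},
    {'year': 2000, 'month': 7},
]

Flow(
    rows,
    # zero-pad month so the string keys sort chronologically:
    # '200007' < '200102' < '200112'
    sort_rows('{year}{month:02}'),
    printer()
).process()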
babbage, which provides facts and aggregation endpoints over such DBs.
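For anyone wanting to try it, babbage's README wires it into a Flask app roughly like the sketch below. Treat JSONCubeManager, configure_api, and the models directory layout as assumptions to verify against the README; the engine URL is a placeholder.

from flask import Flask
from sqlalchemy import create_engine

from babbage.manager import JSONCubeManager
from babbage.api import configure_api

app = Flask('demo')
engine = create_engine('sqlite:///output.db')  # any SQLAlchemy engine

# Assumption: 'models/' holds one JSON cube model per file
manager = JSONCubeManager(engine, 'models/')
blueprint = configure_api(app, manager)
app.register_blueprint(blueprint, url_prefix='/api/analytics')

app.run()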
Does dump_to_sql mean that you can automatically reconstitute a relational database with all of the same constraints, types, and inter-table relationships that are specified within a data package? Is there any mechanism for specifying relationships between tables originating from resources in different data packages?
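For context, a minimal sketch of dataflows' dump_to_sql, following the table-mapping shape in the dataflows docs. The SQLite engine URL and table name are placeholders, and whether constraints and inter-table relationships survive the round trip is exactly the open question above.

from dataflows import Flow, dump_to_sql

data = [
    {'id': 1, 'name': 'A'},
    {'id': 2, 'name': 'B'},
]

Flow(
    data,
    # map the flow's first (default-named) resource to a SQL table
    dump_to_sql(
        {'my_table': {'resource-name': 'res_1'}},
        engine='sqlite:///output.db',
    ),
).process()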