These are chat archives for DataBrewery/cubes

26th
Apr 2015
Stefan Urbanek
@Stiivi
Apr 26 2015 21:44
@psychok7 hey. json_record_limit is for not killing your server and yes, the very low limit is on purpose. Another reason for such a low limit is usability on the front-end side, where you don’t want to kill the browser or the browsing experience of the end user. If you want to get more data for purposes other than the user interface, it is recommended to use a different format, such as json_lines – it maintains data types and is one JSON record per row
@psychok7 ah, I forgot to mention the real problem with JSON and memory – to be able to spit out the whole JSON you need to get all the data into memory, whereas if you are generating CSV or json_lines you can stream the data through a series of iterators. That’s what cubes does: for CSV/json_lines no more than one record is held in memory (well, just theoretically; practically some layers are caching a bit here and there) → record is converted to a CSV line or JSON record line → passed to the Flask framework which streams the data to the client
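The streaming idea described above can be sketched with a plain generator – this is a hypothetical minimal illustration of the technique, not cubes’ actual code; the function name `iter_json_lines` and the sample rows are made up:

```python
import json

def iter_json_lines(records):
    """Yield one JSON-encoded line per record. Only the record
    currently being serialized is held in memory, so the whole
    result set never has to fit in RAM at once."""
    for record in records:
        yield json.dumps(record) + "\n"

# A Flask view would wrap such a generator in a streaming response,
# e.g. flask.Response(iter_json_lines(rows), mimetype="text/plain"),
# so rows reach the client as they are produced.

rows = [{"city": "Berlin", "amount": 10}, {"city": "Prague", "amount": 7}]
for line in iter_json_lines(rows):
    print(line, end="")
```

Contrast this with a single JSON array, where `json.dumps(list(records))` forces every record into memory before the first byte can be sent.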