These are chat archives for juttle/juttle

28th Jan 2016
David Cook
@davidbcook
Jan 28 2016 17:59
I'm reading from 3 data files that are between 2.2 and 3.3 MB each, joining them together, then splitting the stream out to multiple visualizations. Ideally, I want to display 4 visualizations, but the number of visualizations seems to have an impact on the range of data that each vis covers. With 4 timecharts, each timechart has about 53 minutes of data. With 3 timecharts, they all have about 63 minutes of data. And with 2 timecharts, each timechart displays the full 88 minutes of data. The range of data displayed varies from run to run with both 3 and 4 timecharts. Only in the 2 timechart version do the context charts appear. Is there some sort of resource constraint on my computer preventing the full charts from being drawn?
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:02
can you provide the first and last timestamp of each file? I'd guess that some files have a larger time range than others, and when you join the data sets that increases the perceived data range? (hopefully that was understandable)
David Cook
@davidbcook
Jan 28 2016 18:08
The files do have different last timestamps, but I don't see how changing the number of charts should have an impact. At no point do I change the range of data. Also, the fact that the context charts don't appear signals that outrigger isn't done with the chart generation.
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:08
sorry I think it just sank in that you were changing the visualizations
are they all timecharts? I can give this a shot locally with a few emitters
David Cook
@davidbcook
Jan 28 2016 18:09
yeah
each file has 1 point per second and 16 fields per point
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:10
ok give me about 5 minutes to see if I can repro by just adding more timecharts and seeing what happens to the time ranges on each one...
@davidbcook any special options being passed to those timecharts? you can share those, right?
David Cook
@davidbcook
Jan 28 2016 18:12
here's the code:
(
  read file -file 'E4 data/Rollups/LeftArmRollup1Second.json' -to :2016-01-26T21:13:00.969:
  | put la_avg_rms = avg_accel_rms, la_max_rms = max_rms, la_stan_dev_rms = stan_dev_accel_rms
  | keep time, la_avg_rms, la_max_rms, la_stan_dev_rms
  ;
  read file -file 'E4 data/Rollups/RightArmRollup1Second.json' -to :2016-01-26T21:13:00.969:
  | put ra_avg_rms = avg_accel_rms, ra_max_rms = max_rms, ra_stan_dev_rms = stan_dev_accel_rms
  | keep time, ra_avg_rms, ra_max_rms, ra_stan_dev_rms
  ;
  read file -file 'E4 data/Rollups/RightLegRollup1Second.json' -to :2016-01-26T21:13:00.969:
  | put rl_avg_rms = avg_accel_rms, rl_max_rms = max_rms, rl_stan_dev_rms = stan_dev_accel_rms
  | keep time, rl_avg_rms, rl_max_rms, rl_stan_dev_rms
) | join
| (
   keep time, la_avg_rms, ra_avg_rms, rl_avg_rms
   | split
   | view timechart -title 'Average RMS Per Limb' -display.dataDensity 0 -keyField 'name' -valueField 'value'
   ;
   keep time, la_max_rms, ra_max_rms, rl_max_rms
   | split
   | view timechart -title 'Max RMS Per Limb' -display.dataDensity 0 -keyField 'name' -valueField 'value'
   ;
   //keep time, la_stan_dev_rms, ra_stan_dev_rms, rl_stan_dev_rms
   //| split
   //| view timechart -title 'Standard Deviation RMS Per Limb' -display.dataDensity 0 -keyField 'name' -valueField 'value'
   //;
   //keep time, la_stan_dev_rms, ra_stan_dev_rms, rl_stan_dev_rms
   //| reduce -every :1s: -over :5s: la_avg_sd_rms = avg(la_stan_dev_rms), ra_avg_sd_rms = avg(ra_stan_dev_rms), rl_avg_sd_rms = avg(rl_stan_dev_rms)
   //| split
   //| view timechart -title 'Average Standard Deviation RMS Per Limb Over 5 Second Window' -display.dataDensity 0 -keyField 'name' -valueField 'value'
)
Daria Mehra
@dmehra
Jan 28 2016 18:14
say, could you add these debugging statements before each chart:
| (pass; (head 1; tail 1) | view table -title 'Chart One')
(with different titles)
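applied to the first chart of your program, it would look something like this (titles are just placeholders):
keep time, la_avg_rms, ra_avg_rms, rl_avg_rms
| split
| (pass; (head 1; tail 1) | view table -title 'Chart One')
| view timechart -title 'Average RMS Per Limb' -display.dataDensity 0 -keyField 'name' -valueField 'value'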
David Cook
@davidbcook
Jan 28 2016 18:15
yeah
Daria Mehra
@dmehra
Jan 28 2016 18:15
so we see the first and last data point being fed into each chart.
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:18
I can produce a smaller time range for the last chart where you're doing the reduction, but that's because it's doing a window reducer (-over :5s:); the other 3 charts still have the same duration in my experiment... still poking at things on my side
what's the range on your data? a few hours?
David Cook
@davidbcook
Jan 28 2016 18:19
@dmehra each table says "waiting for data" and the charts have stopped updating
@rlgomes ~88 minutes
... stopped updating but no context charts are present
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:21
did you get the head 1; tail 1 values in a few tiny tables?
David Cook
@davidbcook
Jan 28 2016 18:21
sidenote, it would be great if pass was documented here: https://juttle.github.io/juttle/processors/. I was trying to find pass/merge the other day and couldn't
nope, the tables are waiting for data
Daria Mehra
@dmehra
Jan 28 2016 18:22
i’m writing up a doc on debugging juttle that will cover pass. will also list it in processors - didn’t realize it wasn’t there
David Cook
@davidbcook
Jan 28 2016 18:22
cool
Daria Mehra
@dmehra
Jan 28 2016 18:22
can you change the debugging step to do only head 1 without the tail? surely we’ll get that, at least
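i.e. something like | (pass; head 1 | view table -title 'Chart One') in place of the earlier tap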
David Cook
@davidbcook
Jan 28 2016 18:24
Yeah: Tue Jan 26 19:45:45 2016 UTC, Tue Jan 26 19:45:45 2016 UTC and Tue Jan 26 19:45:45 2016 UTC
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:30
@davidbcook you have subsecond timestamps in those files? (still trying to reproduce)
and could you give me the count on this: read file -file 'E4 data/Rollups/LeftArmRollup1Second.json' -to :2016-01-26T21:13:00.969: | reduce count()
David Cook
@davidbcook
Jan 28 2016 18:31
no, they look like this: "time": "2016-01-26T19:28:27.000Z"
6256
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:47
@davidbcook when you run the program with 4 timecharts and it only shows 50-something minutes, does an error appear in your javascript console? (it may not be noticeable at this point without opening the javascript console)
David Cook
@davidbcook
Jan 28 2016 18:48
Uncaught SyntaxError: Unexpected end of input, but that appears in all outrigger tabs regardless of whether all of the data and context charts appear or not
it appears for completely different programs too
Daria Mehra
@dmehra
Jan 28 2016 18:48
yes that’s an issue @go-oleg knows about, not specific to this case
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:49
I'm just wondering if it was something like a websocket disconnect that wasn't reported (since our error reporting isn't the greatest right now)
Daria Mehra
@dmehra
Jan 28 2016 18:50
just a thought, do the charts render if you remove -display.dataDensity 0 from them?
you must have approximately 7000 points going into each chart if it’s every 1 second and about a 2-hour time period covered
David Cook
@davidbcook
Jan 28 2016 18:54
that looks like it could be the problem; without -display.dataDensity 0 the whole time range appears
Daria Mehra
@dmehra
Jan 28 2016 18:55
what’s the reason you were setting that parameter? do you not want the chart to downsample?
David Cook
@davidbcook
Jan 28 2016 18:55
correct
Daria Mehra
@dmehra
Jan 28 2016 18:55
ok, then your program should do a reduce in such a way that the resulting points can fit onto the pixels of your chart
Rodney Lopes Gomes
@rlgomes
Jan 28 2016 18:56
you can "fix" the data with a desired downsampling instead of the average done by the timechart itself
Daria Mehra
@dmehra
Jan 28 2016 18:56
(and since there are multiple charts, perhaps there’s an overall limit on points we can pass to the browser without overwhelming it? calling @go-oleg for comment)
David Cook
@davidbcook
Jan 28 2016 18:56
well it can't be a pixel issue because the number of charts doesn't change the number of pixels available
Daria Mehra
@dmehra
Jan 28 2016 18:57
correct, so it must be this overall limit we’re running into
it’s new to me because i never tried to render multiple charts where each is so dense, i tend to reduce -every :5 min: max() or something like that
David Cook
@davidbcook
Jan 28 2016 19:01
that doesn't work for us because we're looking at quick motions. The raw data is at about 30 points per second, so the rollups that this program draws from are already a significant reduction
Daria Mehra
@dmehra
Jan 28 2016 19:04
then a workaround for right now would be to look at half-hour time period instead of 2 hours, just to get you unblocked
David Cook
@davidbcook
Jan 28 2016 19:05
yeah, but we want the whole time range to know where to drill down. I guess I have no choice but to cut the time range, though
Daria Mehra
@dmehra
Jan 28 2016 19:06
to know where to drill down: would plotting both max() and min() reductions for each series help with that? you could do -every :10s:; that alone should bring it to a tolerable volume.
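e.g. something along these lines for the averages branch (field names are just placeholders; same idea for the other branches):
keep time, la_avg_rms, ra_avg_rms, rl_avg_rms
| reduce -every :10s: la_max = max(la_avg_rms), la_min = min(la_avg_rms), ra_max = max(ra_avg_rms), ra_min = min(ra_avg_rms), rl_max = max(rl_avg_rms), rl_min = min(rl_avg_rms)
| split
| view timechart -title 'Max/Min RMS Per Limb' -keyField 'name' -valueField 'value'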
David Cook
@davidbcook
Jan 28 2016 19:08
no, because there is tons of variability in the data. The data is motion data from someone dancing, and we're trying to find the key moments, which are milliseconds long
we'll figure it out, but it would be nice to know why this cap is there
David Cook
@davidbcook
Jan 28 2016 19:47

Sorry to keep bugging you guys, but I'm having another issue with timechart. The context chart isn't appearing regardless of whether I include the -display.dataDensity option or not.

read file -file 'E4 data/Rollups/LeftArmSnapshotTimecode.json' -to :2016-01-26T21:09:53:
| keep accel_rms, time
//| (pass; (head 1; tail 1) | view table -title 'Chart One')
| view timechart -valueField 'accel_rms' -display.dataDensity 0

The data file has 5760 points in it.
If I shorten the time range queried in the read statement to ~25 seconds (800 points) then the context chart appears. Seems like a similar issue to earlier.

Daria Mehra
@dmehra
Jan 28 2016 19:51
with just that one chart in the program? that’s the whole program?
David Cook
@davidbcook
Jan 28 2016 19:51
yep
Daria Mehra
@dmehra
Jan 28 2016 19:54
i’m able to reproduce. waiting for @go-oleg to return from lunch and we’ll look at it.
reproduces both with and without density parameter, as you said.
David Cook
@davidbcook
Jan 28 2016 19:56
yay thanks for looking into it @dmehra!
Jonathan Dunlap
@jadbox
Jan 28 2016 20:03
How do I kill the Outrigger daemon?
Daria Mehra
@dmehra
Jan 28 2016 20:05
this works: pkill -f outriggerd
Jonathan Dunlap
@jadbox
Jan 28 2016 20:07
Thanks! Might be good to have something for it as part of the cli.
Daria Mehra
@dmehra
Jan 28 2016 20:14
@mstemm see what you think ^^
Mark Stemm
@mstemm
Jan 28 2016 20:18
I don’t think it would be a good fit for the juttle cli. Maybe the outrigger client program.
David Cook
@davidbcook
Jan 28 2016 20:31
Another context chart question: is 10 seconds the smallest amount of time you can zoom in on? If I try to zoom in on less than that, the x axis range stays at 10 seconds and the line doesn't take up the whole axis.
Oleg Seletsky
@go-oleg
Jan 28 2016 20:31
Yea, the 10s min time range is something we have always had.
Not 100% sure on the reasoning; it may have something to do with when emit … starts: we wanted to have a chart with a minimum time range instead of no time range. And for the metrics/events we dealt with in Jut 1.0, which occurred at a few-second granularity, it made sense.
David Cook
@davidbcook
Jan 28 2016 20:55
yeah that makes sense
Daria Mehra
@dmehra
Jan 28 2016 20:59
@davidbcook if that's something you need adjusted, please file an issue in juttle-viz repo
David Cook
@davidbcook
Jan 28 2016 21:12
it's much less of an issue than the point-limit thing I was running into earlier, so if we really need less than a 10-second window, I'll file an issue
Mark Stemm
@mstemm
Jan 28 2016 22:17
I published a preview release of outrigger 0.5.0-rc.0 an hour or so ago. I also accidentally set the ‘latest’ tag to that preview. I cleaned up the tags so latest points back to the official release 0.4.0 and next points to the preview release. Sorry for the confusion.