slackbot6<hsiliev> Status update from the team: doing some profiling of reporting. Most of the time is spent in the vm/vm2 sandbox. We applied caching and reduced report generation time from 2:40 to 0:25. Started productizing our internal Service Bridge MVP. Prepared a demo recording for the Abacus Broker describing different aspects of Abacus for Resource Providers. It should be available next week, hidden on YouTube; I'll send a link to the team for comments.
slackbot1<dr.max> <here> anyone else joining? I am on
slackbot1<hsiliev> @here I'm on vacation the whole month. Sorry for the late notice.
slackbot1<hsiliev> Not sure the plan really has previous and current. @keviny Is this normal in your pov?
slackbot1<hsiliev> It may happen if we have retries after the collector
slackbot1<hsiliev> Actually after the meter
slackbot1<carrolp> retrying now....
slackbot1<carrolp> Did not recreate with a second attempt either. @keviny I didn't quite follow your example of "failing at the duplicate doc". Would this have happened if I had emptied the databases but not deleted them, and then tried to send in the same data again? The CouchDB I'm using would still have the old documents marked as `_deleted=true`, which could potentially lead to a document conflict.
slackbot1<carrolp> Could this explain the problem I saw originally perhaps?
slackbot1<hsiliev> Did you restart Abacus after clearing the DB?
slackbot1<carrolp> @hsiliev It was a couple days ago, I can't recall for sure. If I do not restart Abacus after clearing the databases, I think I just recreated the problem -- I sent in two records, deleted them, sent in two more records, and the consumption report returned '40' instead of the correct '20'
slackbot1<keviny> Yeah, if you cleaned the db but did not restart the app
slackbot1<keviny> Abacus keeps a cache of the accumulated usage doc (which holds the 20)
slackbot1<keviny> And since the db is cleaned, when it checks whether the record was submitted previously (by fetching the duplicate doc from the db), it will say it is not a duplicate.
slackbot1<keviny> so it will go ahead and accumulate it to 40
slackbot1<carrolp> I can confirm now that the 10+10=30 problem was purely due to my experimentation -- clearing the databases and retrying without restarting abacus. I wasn't deliberately trying to keep abacus running originally, but could easily have done it by accident when resetting for another test. Thanks for the help!
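The failure mode discussed above can be modeled in a few lines: an in-memory accumulator cache survives a database wipe, so the duplicate check (which goes to the db) misses and the cached total is accumulated again. This is a toy model of the behavior, not Abacus code:

```javascript
// In-memory model of the stale-cache bug: the accumulator cache
// outlives the database, so wiping the db without restarting the
// app double-counts resubmitted usage.
const db = new Set();        // stands in for the duplicate docs in CouchDB
const cache = { total: 0 };  // stands in for the cached accumulated usage doc

function submitUsage(id, quantity) {
  if (db.has(id)) return cache.total; // duplicate: ignore
  db.add(id);
  cache.total += quantity;            // accumulate into the cached doc
  return cache.total;
}

submitUsage('u1', 10);
submitUsage('u2', 10);       // cache.total is now 20
db.clear();                  // "clear the databases" without restarting
submitUsage('u1', 10);
submitUsage('u2', 10);       // duplicate check misses, total becomes 40
console.log(cache.total);    // 40 instead of the correct 20
```

Restarting the app drops the cache along with the db, which is why the problem disappears after a restart.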
slackbot12<hsiliev> Since I won’t be able to make it on Friday evening (public holiday here in Bulgaria), <here> is the status update for the team:
- bumped supported node version to 8.1.3 (latest version with node.js snapshot optimizations for eval)
- increased poll interval for app & services bridges to reduce CC load
- reporting will skip missing consumers from previous months
- brainstorming ideas about v2
- working on support for updates from Resource Providers. First version should land next week.
slackbot12<dr.max> My status is I am reviewing the v2 items. You’ll see my comments as I get a chance to add them
slackbot12<dr.max> Did you schedule the v2 inception? I can schedule for you if you need
slackbot12<dr.max> FYI to <here> that I will be at Pivotal for the next two weeks, in case folks there want to chat about Abacus
slackbot6<hsiliev> If you want to deploy Abacus on CF (bosh-lite or cloud setup) you might need to put the zip binary in the PATH due to cloudfoundry-incubator/cf-abacus#239
slackbot6<hsiliev> you can also check this script here: https://github.com/hsiliev/workstation-scripts/blob/master/scripts/abacus-local-get.sh
slackbot6<hsiliev> to filter it you can use GraphQL or do it client side
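Client-side filtering of a usage report can be as simple as walking the JSON. The report shape below is a simplified assumption for illustration, not the exact Abacus report document:

```javascript
// Filter an Abacus-style usage report on the client side.
// The document shape here is a simplified assumption.
const report = {
  organization_id: 'org-1',
  spaces: [
    { space_id: 'dev',  consumers: [{ consumer_id: 'app-a', quantity: 10 }] },
    { space_id: 'prod', consumers: [{ consumer_id: 'app-b', quantity: 25 }] }
  ]
};

// Keep only the spaces whose id matches, dropping the rest client side
function filterBySpace(rep, spaceId) {
  return { ...rep, spaces: rep.spaces.filter((s) => s.space_id === spaceId) };
}

const prodOnly = filterBySpace(report, 'prod');
console.log(prodOnly.spaces.length);                   // 1
console.log(prodOnly.spaces[0].consumers[0].quantity); // 25
```

A GraphQL query against the reporting endpoint would push the same selection to the server instead, saving bandwidth on large reports.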