slackbot5
<hsiliev> @here This week the team in Sofia was busy with:
* eval with timeout hangs on node 6 and node 7 (vm & vm2), but so far it seems to be working on node 8. Timeout is disabled by default
* npm 5 breaks Abacus on CF, and so does node 8. Disabled the use of npm 5 until we adapt to the node 8/npm 5 combo, as it will be the LTS release
* we started looking at providing a shrinkwrap by default for Abacus, to have a reproducible build out of the box
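For context, a minimal sketch of the kind of sandboxed evaluation with a timeout discussed above, using Node's built-in vm module; the function and document shapes are illustrative and not Abacus's actual code:
```js
// Illustrative only: evaluate a resource provider's metering function in a
// vm sandbox, optionally with a timeout (the option that reportedly hangs
// on node 6/7 but works on node 8).
const vm = require('vm');

const runMeteringFn = (source, usage, timeoutMs) => {
  const script = new vm.Script(source);
  const sandbox = vm.createContext({ usage, result: undefined });
  // Passing { timeout } aborts long-running scripts; omitting it matches
  // the "timeout disabled by default" behavior mentioned above.
  script.runInContext(sandbox, timeoutMs ? { timeout: timeoutMs } : {});
  return sandbox.result;
};

// Example: convert reported bytes to GiB
const source = 'result = usage.storage / (1024 * 1024 * 1024);';
console.log(runMeteringFn(source, { storage: 2 * 1024 * 1024 * 1024 }, 100)); // 2
```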
slackbot5
<hsiliev> We want to disable the use of the API endpoint to fetch UAA coordinates. This should allow us to use the oauth module for both CF and non-CF OAuth servers, plus it should reduce the number of roundtrips. If you have any objections, please comment on cloudfoundry-incubator/cf-abacus#639
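A rough sketch of what fetching a token from a directly configured OAuth server (rather than discovering the UAA coordinates through the CF API) could look like; AUTH_SERVER_URL is an assumed environment variable, the code relies on Node 18+'s global fetch, and none of it is Abacus's actual implementation:
```js
// Hypothetical: request a client_credentials token straight from a configured
// OAuth server URL instead of asking the CF API for the UAA coordinates first.
const authServer = process.env.AUTH_SERVER_URL; // assumed configuration

const getToken = async (clientId, clientSecret) => {
  const credentials = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
  const res = await fetch(`${authServer}/oauth/token`, {
    method: 'POST',
    headers: {
      Authorization: `Basic ${credentials}`,
      'Content-Type': 'application/x-www-form-urlencoded'
    },
    body: 'grant_type=client_credentials'
  });
  if (!res.ok)
    throw new Error(`token request failed with status ${res.status}`);
  return (await res.json()).access_token;
};
```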
slackbot5
<hsiliev> @dr.max @here We did “risk analysis” and “cloud qualities” workshops and identified the following top priority items that we need to improve:
* the availability check (we are already working on a healthcheck aggregator)
* the ability to do blue-green (or red-black) updates
* scalability
The update & scalability issues seem to us connected to the fact that we have app-level partitioning, so I would propose to meet on th
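A minimal sketch of the healthcheck-aggregator idea mentioned above; the route, environment variable, and response shape are assumptions, not the actual Abacus design:
```js
// Hypothetical aggregator: fan out to each app's /healthcheck endpoint and
// report a combined status. Uses Express and Node 18+'s global fetch.
const express = require('express');

const apps = (process.env.MONITORED_APPS || '').split(',').filter(Boolean);
const app = express();

app.get('/healthcheck', async (req, res) => {
  const checks = await Promise.all(apps.map(async (url) => {
    try {
      const r = await fetch(`${url}/healthcheck`);
      return { url, healthy: r.ok };
    } catch (err) {
      return { url, healthy: false };
    }
  }));
  const healthy = checks.every((c) => c.healthy);
  res.status(healthy ? 200 : 500).json({ healthy, checks });
});

app.listen(process.env.PORT || 9880);
```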
slackbot5
<hsiliev> Having said that… I don’t think we should go for 1.0 before either fixing the issues we have already faced or at least building a plan for how to approach them.
slackbot5
<hsiliev> @here One more thing. We're facing problems with reporting. It cannot handle orgs with more than 2000 consumers (restarted/restaged apps, basically). Any idea if this is only our installation or a general problem?
slackbot5
<hsiliev> @here: Can you please have a look at PR cloudfoundry-incubator/cf-abacus#652? We intend to fix the module dependencies. This is perhaps just the first step towards a reproducible build
slackbot6
<dr.max> Ok, thx @keviny, about week after next?
slackbot6
<keviny> We should be available by then
slackbot6
<dr.max> For me two things to highlight: 1. Various acceptance 2. I got a proposal from @hsiliev for two committers from his team (one a replacement, as the previous one left SAP). I am reviewing it and might run a vote later today unless I have questions
slackbot6
<dr.max> Please chime in when you have a minute and send questions to me about ^^^
slackbot6
<dr.max> Finally, the IBM team is looking forward to chatting on July 12th @ 11a. Can someone schedule the usual room we've used in the past in Foster City? Thx
slackbot6
<hsiliev> @here Team was busy fixing:
1. negative usage caused by the bridge, due to a) restarts being reported as stop & start events with the same timestamp, and b) events being reported with seconds accuracy by the CC, so we might get events with the same timestamp
2. reporting timeouts, due to a) reporting being killed since heartbeat & healthcheck fail, and b) request timeouts. Both are caused by the event loop being occupied by summary/charge computation for > 3000 consumers
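One way to keep the event loop from being monopolized by the summary/charge computation for very large orgs; this is purely a sketch, and summarizeConsumer is a stand-in for the real per-consumer work:
```js
// Process consumers in small batches and yield with setImmediate between
// batches, so heartbeat and healthcheck requests can still be served while
// a large org (> 3000 consumers) is being summarized.
const summarizeAll = (consumers, summarizeConsumer, batchSize = 100) =>
  new Promise((resolve) => {
    const results = [];
    let i = 0;
    const step = () => {
      const end = Math.min(i + batchSize, consumers.length);
      while (i < end)
        results.push(summarizeConsumer(consumers[i++]));
      if (i < consumers.length)
        setImmediate(step); // give other work a chance to run
      else
        resolve(results);
    };
    step();
  });
```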
slackbot6
<hsiliev> Both bugs imho are prerequisites for 1.0 release
slackbot6
<dr.max> Hi @here, remember our call at 10a PDT today. Just join by clicking on this Zoom URL: https://zoom.us/j/788691850
slackbot6
<dr.max> See you soon. Cheers
chihyi88
For example, I am supposed to send usage #1, usage #2, and usage #3 to Abacus, but for some reason I failed to send usage #2. If it is a discrete type, I only lose usage #2
chihyi88
But if it is time-based, the charge in the report would go crazy. My question is: for this problem, is there a way I can recover from this and avoid the charge going crazy?
slackbot6
<hsiliev> You need to wait a bit for the doc to go through the pipeline and end up processed in the final database. This usually takes up to 2 minutes
slackbot6
<hsiliev> Then the doc would be accumulated/aggregated right away
slackbot6
<david.wu> Got it. Thank you very much.
slackbot6
<dr.max> Hi @here, remember our call at 10a PDT today. I have a small conflict, would it be possible to delay 30 mins? Sorry about the late notice. Please chime in here. If it's not possible, let's do status/updates here, and next week we can do it live since I'll be at Foster City. Thx!!
slackbot6
<dr.max> I'll assume we do status here. My status is twofold: 1. Doing some acceptance and cleanup, and adding the new committers from SAP 2. Using Abacus as a test for the new cf-Extensions project, so you will see a few commits from me adding metadata to the repo. I plan to discuss this at the cf-Extensions PMC call on Monday 7/31 if you can join
slackbot6
<dr.max> Let me know if you are waiting on me for anything. For the IBM folks, I am looking forward to chatting in person in Foster City. However, I don't have an invite. Can someone send one? We agreed on Thursday or Friday at 11a
slackbot6
<hsiliev> Status update from the team:
* doing some profiling of reporting. Most of the time seems to be spent in the vm/vm2 sandbox. We applied caching and managed to reduce the time for generating a report from 2:40 minutes to 25 seconds
* started productizing our internal Service Bridge MVP
* prepared a demo recording for the Abacus Broker. Should be available next week, hidden on YouTube. Will send a link to the team for comments g
slackbot6
describing different aspects of Abacus for Resource Providers
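The caching mentioned in the status above is described only at a high level; one plausible shape for it is memoizing the compiled vm script per metering-function source so the compile cost is paid once rather than per usage doc (names and structure here are made up):
```js
// Sketch: cache vm.Script compilation per metering-function source so that
// repeated report generation reuses the compiled script instead of paying
// the sandbox compile cost for every document.
const vm = require('vm');

const scriptCache = new Map();

const evalMeteringFn = (source, usage) => {
  let script = scriptCache.get(source);
  if (!script) {
    script = new vm.Script(source);
    scriptCache.set(source, script);
  }
  const sandbox = vm.createContext({ usage, result: undefined });
  script.runInContext(sandbox);
  return sandbox.result;
};
```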
slackbot1
<dr.max> @here anyone else joining? I am on
slackbot1
<hsiliev> Not sure the plan really has previous and current. @keviny Is this normal in your pov?
slackbot1
<hsiliev> It may happen if we have retries after the collector
slackbot1
<hsiliev> Actually after the meter
slackbot1
<carrolp> retrying now....
slackbot1
<carrolp> Did not recreate with a second attempt either. @keviny I didn't quite follow your example of "failing at the duplicate doc". Would this have happened if I had emptied the databases but not deleted them and then tried to send in the same data again? The CouchDB I'm using would still have the old documents marked as _deleted=true, potentially leading to a document conflict.
slackbot1
<carrolp> Could this explain the problem I saw originally perhaps?
slackbot1
<hsiliev> Did you restart Abacus after clearing the DB?
slackbot1
<carrolp> @hsiliev It was a couple of days ago, I can't recall for sure. Without restarting after clearing the databases, I think I just recreated the problem -- I sent in two records, deleted them, sent in two more records, and the consumption report returned '40' instead of the correct '20'
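For reference, one way to check whether an "emptied" database still contains tombstones is to read CouchDB's _changes feed, which keeps reporting deleted documents with "deleted": true; the database name, CouchDB URL, and lack of authentication here are placeholders, not the actual Abacus setup:
```js
// Sketch: list the ids of deleted (tombstoned) documents still present in a
// CouchDB database whose documents were deleted rather than the database
// itself being dropped. Uses Node 18+'s global fetch.
const couch = process.env.COUCHDB_URL || 'http://localhost:5984';

const listTombstones = async (dbName) => {
  const res = await fetch(`${couch}/${dbName}/_changes`);
  const { results } = await res.json();
  return results.filter((change) => change.deleted).map((change) => change.id);
};

// Hypothetical usage with a placeholder database name:
// listTombstones('some-abacus-db').then((ids) => console.log(ids));
```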