    slackbot2 <keviny> Do you mean the posts field when setting up dataflow?
    slackbot2 <keviny> Yeah, so it is configured to be flexible
    slackbot2 <keviny> The outputs of the aggregator are the org aggregation and consumer aggregation documents
    slackbot2 <keviny> So it's one of each
    slackbot2 <hsiliev> Right. So we'll post each of those to the sink host
    slackbot2 <keviny> Yeah, that is configurable. We can actually make them 2 different endpoints
    slackbot2 <hsiliev> And your fix will now stop error docs from going to the next host in the sink?
    slackbot2 <keviny> if we want to post the org aggregation to one, and the consumer aggregation to the other
    slackbot2 <keviny> The problem is this: the aggregator produces a business error. Dataflow checks whether there is any sink configured. It doesn't see a sink configured, so it didn't check the business error.
    slackbot2 <keviny> So my change makes it check the business error whether there is a sink or not
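A minimal sketch of the bug and fix described above. The names (`processDoc`, `businessError`, the return shapes) are illustrative, not the actual cf-abacus dataflow API; the point is only that the business-error check must run regardless of sink configuration.

```javascript
// Sketch: check business errors whether or not a sink is configured.
// Before the fix, the business-error check only ran when a sink existed,
// so without a sink an error doc slipped through unnoticed.
const processDoc = (doc, sink) => {
  // Always check for a business error first (the fix)
  if (doc.businessError)
    return { stored: 'error-db', posted: false };

  // Only post downstream when a sink host is actually configured
  if (sink)
    return { stored: 'output-db', posted: true };

  return { stored: 'output-db', posted: false };
};
```

With this ordering, `processDoc({ businessError: true }, null)` lands in the error db even though no sink is set, matching the behavior the fix introduces.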
    slackbot2 <hsiliev> I see
    slackbot2 <hsiliev> Getting back to "2 different endpoints" ... I don't see that in the code. Now we have only one afaict
    slackbot2 <keviny> I notice the reason we don't see this on our side is because we have a sink
    slackbot2 <keviny> yeah, in https://github.com/cloudfoundry-incubator/cf-abacus/blob/master/lib/aggregation/aggregator/src/index.js#L583-L584, I noticed that we set them to point to the same endpoint.
    slackbot2 <hsiliev> Yep. We want to have one too. That's why I'm asking so many questions :slightly_smiling_face:
    slackbot2 <hsiliev> They still need to be on the same host though
    slackbot2 <keviny> ok, the current design is this: [org aggregation, consumer aggregation, duplicate doc]. When posting to the sink it will post the org aggregation to /v1/metering/aggregated/usage and the consumer aggregation to /v1/metering/aggregated/usage.
    slackbot2 <keviny> I think the Dataflow was designed so that it can post the docs produced to different apps, hence it is structured like that
    slackbot2 <keviny> In this case we don't want to post to different endpoints
    slackbot2 <keviny> so they become the same
    slackbot2 <keviny> I think we have a dataflow app that will post the docs to different endpoints
    slackbot2 <hsiliev> Endpoints might be different but the host is not really an array, is it?
    slackbot2 <keviny> Yeah the host is just one
    slackbot2 <hsiliev> We are using the partitioning?
    slackbot2 <keviny> The partitioning is taken care of by the apps field
    slackbot2 <keviny> L582
    slackbot2 <keviny> it would forward it to the partition based on that
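A sketch of the sink shape the thread converges on: a single host, both doc types currently posted to the same endpoint (per index.js#L583-L584), and partitioning driven by an apps count with the partition picked from the doc key. All field names, the host URL, and the hash function are illustrative assumptions, not the actual cf-abacus configuration.

```javascript
// Illustrative sink config: one host, two doc types, same endpoint path
const sink = {
  host: 'https://abacus-sink.example.com', // hypothetical host; only one
  apps: 4, // number of app partitions; the doc key selects the partition
  posts: {
    // Both currently resolve to the same path (could be made different)
    orgAggregation: '/v1/metering/aggregated/usage',
    consumerAggregation: '/v1/metering/aggregated/usage'
  }
};

// Illustrative partition pick: hash the doc key into one of `napps` buckets
const partition = (key, napps) => {
  let h = 0;
  for (const c of key)
    h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % napps;
};
```

The design point from the chat: endpoints could differ per doc type, but the host is a single value, and the `apps` field (not the host) is what drives partition-aware forwarding.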
    slackbot2 <hsiliev> Thanks. I want to dig deeper, so I'll add some tests on top of your PR. Would be a good exercise
    slackbot2 <hsiliev> Good day/night :slightly_smiling_face:
    slackbot2 <keviny> I’ll add a test on the case where no sink is configured when i get the chance. Have some urgent stuff to do :(
    slackbot2 <keviny> Good night and thanks
    slackbot2 <hsiliev> @jsdelfino Any progress with npm publishing?
    slackbot2 <hsiliev> If you think an account/org in npm can help, SAP can cover the costs
    slackbot2 <keviny> @hsiliev I added the test for my PR yesterday :slightly_smiling_face:
    slackbot2 <hsiliev> @keviny Thanks. I had these halfway done :slightly_smiling_face:
    slackbot2 <hsiliev> At least I know what this part of the tests does
    slackbot2 <hsiliev> @here We investigated an issue with 401 from the provisioning plugin. If we mark a doc as "error" and later submit the same usage we'll get 409, I guess?
    slackbot2 <hsiliev> I can see we call dataflow.replay on app start, but we are not calling it with a timer or anything
    slackbot2 <keviny> Wait I meant just now, not yesterday
    slackbot2 <keviny> 401 is authorization issue though
    slackbot2 <keviny> We don't mark the doc as an error; it is stored in the error db
    slackbot2 <keviny> yeah, if we don't configure the replay, we are actually not calling it
    slackbot2 <keviny> so if we don't pass a time or env.REPLAY it is not going to attempt to replay
    slackbot2 <hsiliev> And this replay is only going to be executed once on start?
    slackbot2 <keviny> Yeah
    slackbot2 <keviny> It is basically to take care of stuff that might not have been processed successfully when CF restarts the app
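A sketch of the replay behavior just described: replay runs once at app start, and only when a replay window is configured via an argument or the `REPLAY` environment variable. The function shape is illustrative, not the real `dataflow.replay` signature in cf-abacus.

```javascript
// Sketch: replay happens once on startup, and only if configured
const startApp = (replayTime, env) => {
  const events = [];

  // Take the explicit time if given, else fall back to env.REPLAY;
  // if neither is set, no replay is attempted at all
  const time = replayTime || (env.REPLAY && parseInt(env.REPLAY, 10));
  if (time)
    // Reprocess docs from the last `time` ms that may not have been
    // handled successfully before a restart
    events.push(`replay:${time}`);

  // Normal startup continues either way; replay is never re-run later
  events.push('listen');
  return events;
};
```

So `startApp(0, {})` just starts listening, while `startApp(0, { REPLAY: '500' })` replays once before listening, which matches the "only executed once on start" answer above.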
    slackbot2 <hsiliev> I see. Thanks
    slackbot2 <dr.max> All, FYI, the following blog post on microservices using CF container networking. Might help inspire how to configure Abacus microservices when deploying that many services. Perhaps solving perf issues?