Alan Klikić
@aklikic_twitter
I personally use a cluster singleton for similar cases and haven't faced any problems
Renato Cavalcanti
@renatocaval
yes, cluster singleton is much simpler, but then you have only one stream with all the data. I understood that each stream instantiation was streaming a subset of the data.
isn’t that the case?
nikhilaroratgo
@nikhilaroratgo
it goes like this: request comes via POST to service --> graph/flows running (on each node) --> flow to query persistent entity --> flow for some business logic --> flow to send message to external system --> flow to handle the response
and I think if cluster singleton is safe, then I can replace the flow to the external system with a singleton actor (which internally can have some throttling and a circuit breaker)
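(For readers following along: Akka ships this as `akka.pattern.CircuitBreaker`, and throttling as `throttle` on Akka Streams sources/flows. To make the idea concrete, here is a stdlib-only sketch of the closed/open state machine such a singleton actor could delegate to; all names are hypothetical, not an Akka API.)

```scala
import scala.util.{Failure, Success, Try}

// Minimal, stdlib-only sketch of a circuit-breaker state machine.
// akka.pattern.CircuitBreaker provides this (plus half-open probing and
// timers) out of the box; this is only to illustrate the idea.
final class SimpleCircuitBreaker(maxFailures: Int,
                                 resetTimeoutMs: Long,
                                 now: () => Long = () => System.currentTimeMillis()) {
  private var failures = 0
  private var openedAt: Option[Long] = None

  // open while the reset timeout has not yet elapsed
  private def isOpen: Boolean =
    openedAt.exists(t => now() - t < resetTimeoutMs)

  def call[A](body: => A): Try[A] =
    if (isOpen) Failure(new IllegalStateException("circuit open"))
    else Try(body) match {
      case ok @ Success(_) =>
        failures = 0; openedAt = None; ok // a success closes the circuit
      case err @ Failure(_) =>
        failures += 1
        if (failures >= maxFailures) openedAt = Some(now()) // trip open
        err
    }
}
```

The singleton actor would then wrap each call to the external system in `breaker.call(...)` and back off while the breaker is open.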
kotdv
@kotdv
this is pretty far from starting simple :smile:
nikhilaroratgo
@nikhilaroratgo
@renatocaval what do you say?
kotdv
@kotdv
looks like a simple sequential "monadic program" overcomplicated by akka-streams
Renato Cavalcanti
@renatocaval
I don’t get how you go from one single POST to a service to a flow running on each node.
kotdv
@kotdv
@renatocaval that's what I mean exactly haha
Renato Cavalcanti
@renatocaval
is that a process that should be triggered and affect all events in your journal?
kotdv
@kotdv
he's starting the graph on { request => .......... } it seems
or materializing is a better word
nikhilaroratgo
@nikhilaroratgo
no, the graph is materialised only once --> in between I am using Kafka, from which I use a Kafka Source
Renato Cavalcanti
@renatocaval
@nikhilaroratgo, how I see it now is that you have something similar to a read-side processor, but on demand
nikhilaroratgo
@nikhilaroratgo
but I did not mention everything
POST request --store to Kafka topic--> use Alpakka Kafka source to create a source out of the topic, and so on
the graph is materialised during the startup of the node (only once)
Sorry, I was not very clear explaining it upfront :-)
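(The shape being described, where the POST only writes to a topic and a graph materialised once at startup drains it through the flows, can be sketched with plain Scala collections. This is purely an in-memory analogue for illustration, not the Alpakka API; the real pipeline would build a `ConsumerSettings` and use `Consumer.plainSource` from akka-stream-kafka as the source. All names below are invented.)

```scala
import scala.collection.mutable

// Stdlib-only analogue of the described pipeline. In the real service the
// queue would be a Kafka topic and each function an akka-stream Flow.
object PipelineSketch {
  private val topic = mutable.Queue.empty[String] // stands in for the Kafka topic

  // the POST handler only enqueues; it does not run the flows itself
  def handlePost(payload: String): Unit = topic.enqueue(payload)

  // the "flows": query entity, business logic, call external system
  val queryEntity:   String => String = p => s"entity($p)"
  val businessLogic: String => String = s => s.toUpperCase
  val callExternal:  String => String = s => s"sent:$s"

  // materialised once at startup; drains whatever is on the topic
  def runOnce(): List[String] =
    topic.dequeueAll(_ => true).toList
      .map(queryEntity andThen businessLogic andThen callExternal)
}
```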
kotdv
@kotdv
it's still not clear :smile: it's possible to do everything, even the wildest things... but the intention is not clear here; you might be creating unnecessary pipelines to replace existing "features"
e.g. a POST request would usually send an update/create-like command to the PE, the PE would persist an event, and the event would be sent to Kafka, or processed by a read-side processor, or both, and so on
and you sound like you're doing all of that in the service impl, that's why it is unclear
plus using Akka Streams
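(The canonical flow kotdv is describing, a command validated by the persistent entity, an event persisted, and that event fanned out to Kafka and/or a read-side processor, can be sketched without any Lagom API. Lagom's real abstraction is `PersistentEntity`; the types below are invented for illustration.)

```scala
// Stdlib-only sketch of the command --> event --> state flow.
sealed trait Command
final case class AddItem(id: String, qty: Int) extends Command

sealed trait Event
final case class ItemAdded(id: String, qty: Int) extends Event

final case class CartState(items: Map[String, Int] = Map.empty) {
  // command handler: validate and emit events, never mutate state directly
  def handle(cmd: Command): Either[String, List[Event]] = cmd match {
    case AddItem(_, qty) if qty <= 0 => Left("quantity must be positive")
    case AddItem(id, qty)            => Right(List(ItemAdded(id, qty)))
  }
  // event handler: pure state transition, also used when replaying the journal
  def update(e: Event): CartState = e match {
    case ItemAdded(id, qty) =>
      copy(items = items.updated(id, items.getOrElse(id, 0) + qty))
  }
}
```

The persisted events are exactly what a read-side processor or a Kafka topic publisher would then consume.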
nikhilaroratgo
@nikhilaroratgo
@kotdv yes, something like that. It depends on what you want to do with the POST request. But I now have clarity that a cluster singleton is also safe to use
Thanks :-)
@kotdv what you mentioned is one of my other use cases, where the POST creates a command and an event, then a read-side processor, and basically what's in CQRS
but in this use case the POST directly sends the message to Kafka, and then I have created a source out of this Kafka topic, plus some flows and so on.
@kotdv I hope this is okay and not an issue :-)
kotdv
@kotdv
might be an issue for your employer lol, not for me obviously :smile:
but there's a very high chance of that flow being logically incorrect, error-prone, and impossible to replay/recover from failure.
that's from my perspective
impossible to test too, but that's a consequence
Jeffrey van Aswegen
@jeffmess

seeing the following error from one of our listeners...

[warn] org.apache.kafka.common.utils.AppInfoParser [] - Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=recon-2

Is it because the groupId is not set?
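(Most likely not the `group.id`: this warning usually means two Kafka consumers in the same JVM share the same `client.id`, here `recon-2`, so the second one cannot register its JMX MBean under that name. Giving each consumer a distinct client id, e.g. via Alpakka's `consumerSettings.withProperty("client.id", ...)`, silences it. A stdlib-only sketch of handing out unique ids; the object name is invented.)

```scala
import java.util.concurrent.atomic.AtomicInteger

// The MBean name kafka.consumer:type=app-info,id=<client.id> is derived from
// client.id, so two consumers in one JVM with the same client.id collide.
// This sketch hands out process-unique ids to use as client.id values.
object ClientIds {
  private val counter = new AtomicInteger(0)
  def next(prefix: String): String = s"$prefix-${counter.incrementAndGet()}"
}
```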

David Leonhart
@leozilla
if I want to log every request/response made by an auto-implemented ServiceClient, where would be a good place to do that?
I don't see an easy extension point to hook into. The only option I see so far would be a custom MessageSerializer that adds logging next to the serialization.
Sergey Morgunov
@ihostage
@leozilla What do you want to log? HTTP traffic?
David Leonhart
@leozilla
@ihostage yes for HTTP, basically I would like to add debug logging of the messages which are sent and received
more precisely the HTTP message body
Sergey Morgunov
@ihostage
You have more than one option for this. :smile:
David Leonhart
@leozilla
Can you briefly tell me some of those?
Sergey Morgunov
@ihostage
Yes, of course. One minute
David Leonhart
@leozilla
No stress, actually I also have to leave now, but I'd appreciate it if you could post them here so I can have a look later
Sergey Morgunov
@ihostage
  1. Lagom uses Akka HTTP (by default). You can use the out-of-the-box configuration of Akka HTTP
    https://doc.akka.io/docs/akka-http/current/configuration.html
    a) Set akka.http.server.log-unencrypted-network-bytes = 65536
    b) Enable debug level for akka.stream.Materializer
    <logger name="akka.stream.Materializer" level="DEBUG" />
  2. Lightbend Telemetry (requires a Lightbend Subscription)
  3. Kamon, an open-source analogue of Lightbend Telemetry. See the kamon-akka-http module. Note: Kamon doesn't work in Lagom DEV mode yet.
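(For option 1b, the logger line goes into the service's `logback.xml`; a minimal skeleton, assuming your existing appenders stay as they are:)

```xml
<configuration>
  <!-- existing appenders ... -->
  <!-- DEBUG on the materializer prints the network bytes enabled by
       akka.http.server.log-unencrypted-network-bytes -->
  <logger name="akka.stream.Materializer" level="DEBUG" />
</configuration>
```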
Chris Bowden
@cbcwebdev
we are evaluating projections in the 1.6.x milestones. For observability, is the intent to periodically poll Projections#getStatus? Just curious if there is future intent to add a per-node listener/subscriber that is only notified when the [observed|requested] state changes?
Chris Wong
@lightwave
To use PostgreSQL with Lagom, do you use the standard PostgreSQL Java JDBC driver from https://jdbc.postgresql.org/download.html? Or do you use something else, like a more Scala-friendly async driver?
Sergey Morgunov
@ihostage
@lightwave You can use standard Postgresql Java JDBC driver.
Chris Wong
@lightwave
@ihostage Thanks!

I noticed the following configuration in https://github.com/lagom/lagom-samples/blob/1.6.x/shopping-cart/shopping-cart-scala/shopping-cart/src/main/resources/application.conf

Is this still something that needs to be explicitly configured in application.conf in Lagom 1.6.0-M6? I understand that Lagom 1.6 depends on Akka 2.6. Why not simply have this as the default in Lagom 1.6?

# Enable the serializer provided in Akka 2.5.8+ for akka.Done and other internal
# messages to avoid the use of Java serialization.
akka.actor.serialization-bindings {
  "akka.Done"                 = akka-misc
  "akka.NotUsed"              = akka-misc
  "akka.actor.Address"        = akka-misc
  "akka.remote.UniqueAddress" = akka-misc
}
Tim Moore
@TimMoore
@lightwave you're right, that's not needed in 1.6. Well spotted!
Tim Moore
@TimMoore