I am using MySQL via JPA to store read-side data for my Lagom project, implemented in Java. The read-side tables are created by JPA, but no data is being populated into them. I am following the shopping-cart example as well as the mixed-persistence example, and I have Cassandra as the write-side DB. I am doing everything the same way as explained in the examples, so I must be missing something. If someone has already faced the same issue, could you help me out here?
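For anyone comparing notes: in my experience the usual cause of "tables created but never populated" is the processor not being registered, or the events not being tagged. A rough sketch of the shopping-cart-style wiring (this is a fragment, not standalone code — it assumes the `lagom-javadsl-persistence-jpa` dependency, and `ReportProcessor`/`ReportEntity` are hypothetical names):

```java
// Hypothetical read-side processor; assumes lagom-javadsl-persistence-jpa.
public class ReportProcessor extends ReadSideProcessor<ShoppingCartEvent> {

  private final JpaReadSide jpaReadSide;

  @Inject
  public ReportProcessor(JpaReadSide jpaReadSide) {
    this.jpaReadSide = jpaReadSide;
  }

  @Override
  public ReadSideHandler<ShoppingCartEvent> buildHandler() {
    return jpaReadSide.<ShoppingCartEvent>builder("report-offset")
        .setGlobalPrepare(em -> { /* create schema if needed */ })
        .setEventHandler(ShoppingCartEvent.CheckedOut.class,
            (em, evt) -> em.persist(new ReportEntity(evt)))
        .build();
  }

  @Override
  public PSequence<AggregateEventTag<ShoppingCartEvent>> aggregateTags() {
    // If events are not tagged, the processor will never see them.
    return ShoppingCartEvent.TAG.allTags();
  }
}
```

And the part that is easy to miss: somewhere in the service implementation's constructor you need `readSide.register(ReportProcessor.class);` — without that call the handler is never run, and the tables stay empty.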
java.lang.UnsatisfiedLinkError: C:\Users\jkobe\AppData\Local\Temp\jna-101190673\jna8952672481501973987.dll: %1 is not a valid Win32 application
...from what I assume is the bundled Cassandra.
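For what it's worth, "%1 is not a valid Win32 application" on a DLL load is typically a bitness mismatch: a 32-bit JVM trying to load a 64-bit native library (or vice versa). A quick stdlib-only way to check which JVM you're actually running under:

```java
// Quick diagnostic for the usual cause of "%1 is not a valid Win32
// application": a 32/64-bit mismatch between the JVM and the native DLL.
public class JvmBitness {
    public static String bitness() {
        // "sun.arch.data.model" is "32" or "64" on HotSpot JVMs;
        // fall back to os.arch when the property is absent.
        String model = System.getProperty("sun.arch.data.model");
        return model != null ? model : System.getProperty("os.arch");
    }

    public static void main(String[] args) {
        System.out.println("JVM data model: " + bitness());
    }
}
```

If this reports 32 on a 64-bit machine, pointing `JAVA_HOME` at a 64-bit JDK is the first thing I'd try.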
invokes the service interface (which returns a `CompletionStage`, and the `get` will want a resolved entity)
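To make the `CompletionStage` point concrete, here is a stdlib-only sketch: the service call is faked with an async future (in Lagom the equivalent would be `serviceCall.invoke()`), and `toCompletableFuture().get()` blocks until the stage resolves:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;

public class InvokeDemo {
    // Stand-in for a Lagom service call: invoke() returns a CompletionStage
    // that resolves asynchronously.
    static CompletionStage<String> invoke() {
        return CompletableFuture.supplyAsync(() -> "resolved entity");
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // get() blocks until the stage is resolved -- fine in a test or demo,
        // but inside a service you would compose with thenApply/thenCompose
        // rather than block a dispatcher thread.
        String entity = invoke().toCompletableFuture().get();
        System.out.println(entity); // prints "resolved entity"
    }
}
```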
Having problems implementing pac4j-lagom (Java) OAuth in my project:
Configure your OAuth 2 library with your client_id, client_secret, and redirect_uri. Tell it to use https://launchpad.37signals.com/authorization/new to request authorization and https://launchpad.37signals.com/authorization/token to get access tokens.
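As a sanity check of the first leg, here is a stdlib-only sketch that builds the authorization URL by hand. I'm using the standard OAuth 2 parameter names (`client_id`, `redirect_uri`, `response_type=code`); the exact names Launchpad expects may differ, so check the 37signals docs, and the client id and callback below are placeholders:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthUrlDemo {
    // Builds the 37signals authorization URL manually. Parameter names
    // follow generic OAuth 2; verify them against the Launchpad docs.
    static String authorizationUrl(String clientId, String redirectUri) {
        return "https://launchpad.37signals.com/authorization/new"
                + "?response_type=code"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(authorizationUrl("my-client-id", "https://example.com/callback"));
    }
}
```

If the URL you end up redirecting to doesn't look like this (correct host, encoded redirect_uri), the library is misconfigured before OAuth even starts.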
Hi Team; I've hunted the internet and this channel, and have yet to find a clue, so here goes:
Kafka Server closed unexpectedly.
...has become the bane of my existence. I am 100% sure it's not a problem in Lagom 1.6.4 but in my use of it. It's possible it's this machine (maybe it's been crippled somehow by IT). However, I need to get to the bottom of it.
...`runAll` above that "closed" message.
...the `<your-project-root>/target/lagom-dynamic-projects/lagom-internal-meta-project-kafka` folder, in case it was ZooKeeper.
Might I ask for advice on tracking this down?
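Not an answer, but one thing that has worked for me when the embedded broker gets wedged: stop `runAll`, wipe the generated Kafka/ZooKeeper state so sbt recreates it from scratch, and start again. The path below is the standard Lagom sbt layout (run from your project root):

```shell
# Stop runAll first, then remove the embedded Kafka/ZooKeeper working
# directory; sbt regenerates it on the next runAll.
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka
```

Stale ZooKeeper offsets in that folder are a known way for the embedded Kafka to die on startup, and deleting it is safe because everything in `target/` is generated.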
...the `<x>-impl/src/main/resources/application.conf` files. However, when I start the service with `sbt runAll`, I still see `akka.persistence.cassandra.query.EventsByTagStage` logs saying "starting with EC delay 5000ms", and the timing doesn't seem to have changed at all for events being picked up in my read-side processor. Any suggestions as to what I might be doing wrong?
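For comparison, this is the override I would expect to need. It's a sketch: the exact config path depends on which akka-persistence-cassandra version your Lagom release pulls in, and the "5000ms" in the log line is the number to verify your override against:

```hocon
# In <x>-impl/src/main/resources/application.conf
# akka-persistence-cassandra 1.x path; older 0.x releases used
# cassandra-query-journal.events-by-tag.eventual-consistency-delay instead.
akka.persistence.cassandra.events-by-tag.eventual-consistency-delay = 200ms
```

Also worth double-checking that the `application.conf` you edited is on the classpath of the impl that is actually logging — if the log still says 5000ms, the setting is simply not being picked up.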
I'm using the shopping-cart example from lagom-samples. While running on a single-node cluster I observe that, on making multiple concurrent requests, the `akka.actor.default-dispatcher` thread count (observed from VisualVM) keeps increasing (proportional to the concurrent requests), and the threads remain in the PARK state even after the request is served.
Is this expected behaviour? How do we control this and reuse the same threads, or kill them after the request has been served?
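From what I understand, parked threads are usually just idle pool threads the fork-join executor keeps around for reuse, not a leak. If you want to bound the pool anyway, these are the standard Akka knobs (a sketch — the values here are illustrative, not recommendations):

```hocon
akka.actor.default-dispatcher {
  executor = "fork-join-executor"
  fork-join-executor {
    # Pool size is roughly (cores * parallelism-factor), clamped to
    # the min/max below; parked threads within these bounds are
    # reused rather than recreated per request.
    parallelism-min = 4
    parallelism-factor = 2.0
    parallelism-max = 16
  }
}
```

So the behaviour you see should plateau at the pool's upper bound rather than grow forever; if it genuinely grows without limit, that would be worth a separate report.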
...`bytea` — can this be because my keys are UUIDs and I'm using the native Postgres `uuid` type?
`@Id @Type(type = "pg-uuid") private UUID eventId;`
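In case it helps, this is the shape of mapping I've seen work on Hibernate 5 with native Postgres `uuid` columns. It's a fragment, not standalone code (it assumes Hibernate's `pg-uuid` type and a Postgres dialect), and the `columnDefinition` is there only to push schema generation toward `uuid` instead of the default binary mapping:

```java
// Fragment (not standalone): without a uuid-aware type, Hibernate maps
// java.util.UUID to a binary column, which Postgres surfaces as bytea.
@Id
@Type(type = "pg-uuid")
@Column(columnDefinition = "uuid")
private UUID eventId;
```

If the table was already created with a `bytea` column, JPA won't alter it for you — you'd need to recreate or migrate the column to `uuid` for the mapping to line up.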