    guys, how can I implement a database connection provider for akka-persistence-jdbc that uses rotating credentials?
    Ruud Welling
@alexmnyc I think you should look for this solution in the database connection pool you are using. I have little experience with this, but all you can do is make sure Slick is configured with a DataSource that automatically handles the credential rotation
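A minimal sketch of the rotation side, assuming a hypothetical `fetch` callback that reads the latest secret (e.g. from a secrets manager); the idea is that whatever DataSource you hand to Slick would call `current()` before opening new connections:

```scala
import java.time.{Duration, Instant}

final case class DbCredentials(username: String, password: String)

// Caches credentials for a TTL and re-fetches them when stale.
// `fetch` is a hypothetical callback, e.g. a secrets-manager lookup.
final class RotatingCredentials(
    fetch: () => DbCredentials,
    ttl: Duration = Duration.ofMinutes(5)
) {
  @volatile private var cached: (DbCredentials, Instant) = (fetch(), Instant.now())

  def current(): DbCredentials = {
    val (creds, loadedAt) = cached
    if (Instant.now().isAfter(loadedAt.plus(ttl))) {
      val fresh = fetch()           // credentials expired: fetch the rotated secret
      cached = (fresh, Instant.now())
      fresh
    } else creds                    // still within the TTL: reuse the cached secret
  }
}
```

This only covers the refresh logic; the connection pool (e.g. HikariCP) still has to be wired so that new connections are created with the current credentials.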

    Hello guys!

    I have a pretty easy question.
    What is the meaning of jdbc-read-journal.refresh-interval and jdbc-read-journal.journal-sequence-retrieval.query-delay configuration settings?
For example, if I want a polling interval of 500ms, do I set them both to 500ms? Is that correct, or have I misunderstood something?

    What is the recommended value?
    I have 3 nodes of my application with 30 shards for each entity (I'm using Lagom). The default value for both settings is '1s'.
    But during my experiments I found that for good performance I have to set it to <100ms. But in this case my DB experiences huge load by polling requests.

    Ruud Welling

    The "journal-sequence-retrieval" configuration is a mechanism to ensure that the events are returned in the correct order (it only applies to eventsByTag queries). It basically checks which ids exist in the database, and it detects "missing" ids (which can happen in case a transaction is not committed yet, but also if a transaction fails). The "query-delay" configured how often it checks for new ids. In cases of missing ids, the "max-tries" setting determines the number of tries untill the assumption is made that the id has been skipped over. (note: setting this value too low might cause events to be missing in an eventsByTag query).

The refresh-interval setting determines how often polling for new events is done (for all queries).

    It makes no sense to have jdbc-read-journal.journal-sequence-retrieval.query-delay bigger than jdbc-read-journal.refresh-interval. Setting these to equal values seems to be okay to me.
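For reference, a sketch of where these settings live in application.conf; the 500ms values below are just the example from the question, not a recommendation:

```hocon
jdbc-read-journal {
  # How often all queries poll the journal for new events.
  refresh-interval = 500ms

  journal-sequence-retrieval {
    # How often the ordering ids are checked (eventsByTag only).
    query-delay = 500ms
    # Illustrative value: attempts before a missing id is assumed to be
    # skipped; too low risks missing events in eventsByTag results.
    max-tries = 10
  }
}
```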

    You can find the defaults here https://github.com/akka/akka-persistence-jdbc/blob/master/core/src/main/resources/reference.conf
    These are recommended unless you need something different

    @WellingR Thanks a lot! I tried to read the source code of the plugin several times and still couldn't figure this out. You helped a lot!
    But it seems that I have a problem with these settings. When I set them to 10ms I get normal performance, but when I fall back to the defaults the performance is VERY low.
    And I see the difference in a profiler (I'm using YourKit): with 10ms my threads are loaded with work, and with the defaults they are almost "yellow" all the time. That's the signal for me. But the 10ms setting is killing my DB.
    Ruud Welling
    what is the real difference between the things you call normal and low performance?
    I don't know the thread colours of YourKit, but it could very well be that yellow means the thread is idle because there is nothing to do
    Setting these settings to very low values only leads to unnecessary polling
    Keep in mind that it is normal for there to be a small delay before these events are delivered. With the default settings this delay is at most 10 seconds
    Yes, my threads are yellow despite the fact that there is actually a lot of work. But my application is consuming requests slowly (1-4 rps). That's very slow.
    I have 40 concurrent connections that are just waiting. Also, the latency for processing a single command is 30 seconds - that's not normal in my world.
    So, my question is: why is the default setting 1s? How was it determined?
    Ruud Welling
    Are you using eventsByTag or eventsByPersistenceId? What version of akka-persistence-jdbc are you using?
    As far as I know the refresh-interval determines how long the db will wait in case the last attempt did not retrieve any more events
    If the last retrieved batch was complete, the next batch of events is retrieved immediately. In other words, even if the refresh-interval is set to a high value (e.g. 10 seconds), a long-running stream will see no significant slowdown. The only thing is that some events might arrive a bit later
    The lower you set these values, the lower the delays might be, but this also means that more unnecessary polling will be done
    Ruud Welling
    I cannot answer how the default of 1s was determined. I guess that no value is perfect for all situations.
    I'm using Lagom 1.6.1 with the latest version of the plugin. I do not call any methods directly; Lagom does it for me.
    Enno Runne
    A release candidate for Akka Persistence JDBC 4.0 is now available! https://discuss.lightbend.com/t/akka-persistence-jdbc-4-0-0-release-candidate/6377
    Abel Miguez

    Hello all, I am testing Amazon Aurora PostgreSQL with Akka Persistence JDBC. Does anybody have feedback on its use? Searching the chat I saw that some years ago @calvinlfer asked about it, though I could not find more details.
    I am interested in Akka Persistence JDBC configuration tuning specific to Aurora.

    From my observation the queries to recover the journal are taking much longer than against a local instance of PostgreSQL (Docker): 29s vs 1s.
    select "ordering", "deleted", "persistence_id", "sequence_number", "message", "tags" from "system"."journal" where ((("persistence_id" = 'processor') and ("deleted" = false)) and ("sequence_number" >= 1)) and ("sequence_number" <= 11854) order by "sequence_number" limit 9223372036854775807;

    Abel Miguez
    Re, I will just update... what I am experiencing is more related to the size of the events and the bandwidth between the AZs than to anything Aurora-specific.
    Anyway, if there is something I need to know configuration-wise, I'm listening :raising_hand:
    Renato Cavalcanti
    @amiguez, I'm not aware of anything related with Amazon Aurora
    1 reply

    Hey guys
    I've just plugged jdbc into my poc application and persistence stopped working. The only logs it's giving me are

    [2020-08-02 20:50:59,258] [ERROR] [com.poc.License] [] [License-akka.actor.default-dispatcher-18] - Supervisor RestartSupervisor saw failure: Exception during recovery from snapshot. PersistenceId [License|arturs_license]. Circuit Breaker is open; calls are failing fast
    akka.persistence.typed.internal.JournalFailureException: Exception during recovery from snapshot. PersistenceId [License|arturs_license]. Circuit Breaker is open; calls are failing fast

    Is there any way to make it give me more information? log level is debug already
    Was working fine with cassandra before

    And AFAIK it should be configured correctly
    Ruud Welling
    This is odd, I would expect more logging details in there. Are you sure that the logging is configured correctly?
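In case it helps, a sketch of the Akka side of the logging setup, assuming an SLF4J/Logback backend (the logger names mentioned below are the usual ones; verify against your versions):

```hocon
akka {
  # Route Akka's internal logging through SLF4J so Logback levels apply.
  loglevel = "DEBUG"
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
```

On top of that, raising the `akka.persistence` and `slick.jdbc.JdbcBackend.statement` loggers to DEBUG in logback.xml should show the SQL being executed and the underlying exception that trips the circuit breaker.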

    hey guys, I am getting errors while saving the snapshot:
    (persistence_id, sequence_number)=(MEMBERSHIP-d8ec1971-f095-43d2-a69f-39772463fd90, 43) already exists.
    The issue is intermittent. But sometimes the latest snapshot gets deleted, and all the events are replayed for that actor for any new sequence number.
    More Info at:

    I am using postgres as the backend persistence store. Any leads will be helpful.


    I tried running Slick in debug mode; it fails in the upsert for the same sequence id on the third attempt - the first two times the operation succeeds. The third time it fails with the error I quoted in the message above.
    The SQL is the same in all the upserts:

    Executing prepared update: HikariProxyPreparedStatement@160991021 wrapping update "event"."snapshot" set "created"=?,"snapshot"=? where "persistence_id"=? and "sequence_number"=?; insert into "event"."snapshot" ("persistence_id","sequence_number","created","snapshot") select ?,?,?,? where not exists (select 1 from "event"."snapshot" where "persistence_id"=? and "sequence_number"=?)

    That is, first it tries to update and then it tries to insert. I don't know why the insert fails the third time, because the exists check is already there on the insert.
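One plausible reading: the update-then-insert pair is two separate statements, so two concurrent snapshot writes can interleave - both updates match zero rows, both not-exists checks pass (each transaction cannot see the other's uncommitted insert under read committed), and the later insert then hits the unique key. A sketch of an atomic alternative on Postgres, using the table and column names from the log above (whether your plugin/Slick version can emit this depends on the profile):

```sql
insert into "event"."snapshot" ("persistence_id", "sequence_number", "created", "snapshot")
values (?, ?, ?, ?)
on conflict ("persistence_id", "sequence_number")
do update set "created" = excluded."created", "snapshot" = excluded."snapshot";
```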

    Per Wiklander
    Is this the right room for akka-persistence-jdbc as of now, or is there a new room after the Lightbend takeover?
    Ignasi Marimon-Clos
    Hi @PerWiklander, it’s better to use akka/akka or akka/dev. We generally don’t maintain gitter rooms for each specific library under the akka org.