    Tim Moore
    @TimMoore
    But it's usually OK for Journal Queries to be eventually consistent.
    Tulio Gomez Rodrigues
    @tuliogrodrigues

    Hi everyone,
    I'm trying to delete all events from my journal based on a condition on the tags column. I managed to do it like this, but I don't like having this import in the middle of the code and would like to remove it.
    Does anyone know how to do it without the import in the middle?
    ```scala
    def deleteJournal(companyId: CompanyId): EitherF[Done] = {
      val query: PostgresReadJournal =
        PersistenceQuery(actorSystem).readJournalFor[PostgresReadJournal](PostgresReadJournal.Identifier)

      import query.driver.api._

      val deleteQuery = query.journals
        .filter(t => t.tags @> Map("companyId" -> companyId.id.toString))
        .delete

      EitherT.liftF(
        query.database
          .run(deleteQuery)
          .map(_ => Done))
    }
    ```

    Ruud Welling
    @WellingR
    Are you using another extension on top of akka-persistence-jdbc? As far as I know, PostgresReadJournal is not part of akka-persistence-jdbc, and akka-persistence-jdbc does not provide access to the driver/profile this way.
    The only officially supported API for message deletion is the following.
    This is done by a persistent actor
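    As a sketch of the officially supported approach (assuming Akka classic persistence; the actor and command names below are illustrative, not from the original discussion), a persistent actor deletes its own events via deleteMessages:

    ```scala
    import akka.persistence.{ DeleteMessagesFailure, DeleteMessagesSuccess, PersistentActor }

    // Illustrative command name, not from the original discussion.
    case object DeleteEvents

    class MyPersistentActor extends PersistentActor {
      override def persistenceId: String = "my-persistence-id"

      override def receiveRecover: Receive = {
        case _ => // rebuild state from replayed events
      }

      override def receiveCommand: Receive = {
        case DeleteEvents =>
          // Marks all events up to the given sequence number as deleted.
          deleteMessages(toSequenceNr = lastSequenceNr)
        case DeleteMessagesSuccess(toSeqNr) =>
          // deletion up to toSeqNr completed
        case DeleteMessagesFailure(cause, toSeqNr) =>
          // deletion failed; cause holds the error
      }
    }
    ```

    Note this deletes by persistence id and sequence number, not by tag, which is why a bulk tag-based delete has to bypass the plugin's supported API.
    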
    David Roon
    @adridadou
    hi everyone, I'm trying to figure out a way to configure Slick so I can use the connection pool for akka-persistence-jdbc but also for the Slick table configuration in my application
    the configuration for Slick is slick.dbs.default, but all the examples use slick.db
    is there a way to use slick.dbs.default?
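    One way this can be sketched (an assumption on my part, using HOCON substitution; the config paths follow the akka-persistence-jdbc 3.x layout, so verify them against the reference.conf of the version in use) is to point the plugin's slick blocks at the existing slick.dbs.default block:

    ```hocon
    # Sketch: reuse the application's Slick config for the persistence plugin
    # via HOCON substitution. Verify these paths against your plugin version.
    jdbc-journal {
      slick = ${slick.dbs.default}
    }
    jdbc-snapshot-store {
      slick = ${slick.dbs.default}
    }
    jdbc-read-journal {
      slick = ${slick.dbs.default}
    }
    ```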
    David Roon
    @adridadou
    on the same subject, I am pretty sure my configuration is wrong: the Akka Persistence query hangs when I use it, but I see no error message. What should I do to see the errors in the logs?
    David Roon
    @adridadou
    ok, I think I figured out the issue. There were still some configurations that pointed to LevelDB instead of JDBC
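    For context, these are the settings that switch the journal and snapshot store from the default LevelDB plugin to akka-persistence-jdbc; any leftover leveldb values here would produce the silent-hang behaviour described above (a sketch, verify against the plugin's reference.conf):

    ```hocon
    # Sketch: route Akka Persistence to the JDBC plugin instead of LevelDB.
    akka.persistence {
      journal.plugin = "jdbc-journal"
      snapshot-store.plugin = "jdbc-snapshot-store"
    }
    ```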
    OlafurD
    @OlafurD
    Hi guys. I was wondering about the performance of the eventsByTag query: does it use any wildcards in the SQL query (e.g. where tags like '%some_tag%'), or is it free of wildcards?
    Renato Cavalcanti
    @octonato
    it is using wildcards indeed
    and that's not great
    we want to refactor it
    OlafurD
    @OlafurD
    gotcha, thanks
    Enno Runne
    @ennru
    We intend to move Akka Persistence JDBC to the Akka Github organisation dnvriend/akka-persistence-jdbc#252
    Enno Runne
    @ennru
    The repository has now been moved to https://github.com/akka/akka-persistence-jdbc
    Ihor Zadyra
    @TomoHavvk
    Hi, I am using your implementation of akka-persistence-jdbc. The snapshot table has grown to a large size. Please tell me, is it possible to configure automatic removal of old snapshots? If not, how can this be implemented correctly? Thanks a lot for the help earlier ;)
    Ruud Welling
    @WellingR
    @TomoHavvk see https://doc.akka.io/docs/akka/current/persistence.html#snapshot-deletion. Whenever you receive a SaveSnapshotSuccess, you can choose to automatically delete the old snapshots
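    The approach from the linked docs can be sketched like this (assuming Akka classic persistence; the actor name is illustrative): after each successful snapshot, drop everything older than it.

    ```scala
    import akka.persistence.{ PersistentActor, SaveSnapshotSuccess, SnapshotSelectionCriteria }

    // Illustrative actor, not from the original discussion.
    class MyActor extends PersistentActor {
      override def persistenceId: String = "my-entity-1"

      override def receiveRecover: Receive = {
        case _ => // rebuild state from events/snapshot
      }

      override def receiveCommand: Receive = {
        case SaveSnapshotSuccess(metadata) =>
          // Keep only the snapshot that was just written; delete all older ones.
          deleteSnapshots(SnapshotSelectionCriteria(maxSequenceNr = metadata.sequenceNr - 1))
        case _ =>
          // handle domain commands, persist events, call saveSnapshot(...) periodically
      }
    }
    ```
    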
    alexmnyc
    @alexmnyc
    guys, how can I implement a database connection provider for akka-persistence-jdbc that uses rotating credentials?
    Ruud Welling
    @WellingR
    @alexmnyc I think you should look for this solution in the database connection pool you are using; I have little experience with this. What you can do is make sure Slick is configured with a DataSource which automatically handles the credential rotation
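    As a sketch of that direction (my assumption, not a confirmed recipe from this discussion): Slick can be configured with a custom javax.sql.DataSource implementation via its dataSourceClass setting, so that the DataSource itself resolves fresh credentials on each connection. The class name below is hypothetical.

    ```hocon
    # Sketch: hand Slick a DataSource (hypothetical class name) that fetches
    # rotating credentials itself, e.g. from a secrets manager.
    jdbc-journal {
      slick {
        profile = "slick.jdbc.PostgresProfile$"
        db {
          connectionPool = "HikariCP"
          dataSourceClass = "com.example.RotatingCredentialsDataSource"
        }
      }
    }
    ```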
    Michael
    @grouzen

    Hello guys!

    I have a pretty easy question.
    What is the meaning of the jdbc-read-journal.refresh-interval and jdbc-read-journal.journal-sequence-retrieval.query-delay configuration settings?
    For example, if I want a polling interval of 500ms, I set them both to 500ms. Is that correct, or did I misunderstand something?

    What is the recommended value?
    I have 3 nodes of my application with 30 shards for each entity (I'm using Lagom). The default value for both settings is '1s'.
    But during my experiments I found that for good performance I have to set them to <100ms, and in that case my DB experiences a huge load from the polling requests.

    Ruud Welling
    @WellingR

    The "journal-sequence-retrieval" configuration is a mechanism to ensure that events are returned in the correct order (it only applies to eventsByTag queries). It basically checks which ids exist in the database and detects "missing" ids (which can happen when a transaction is not committed yet, but also when a transaction fails). The "query-delay" setting configures how often it checks for new ids. In the case of missing ids, the "max-tries" setting determines the number of tries until the assumption is made that the id has been skipped. (Note: setting this value too low might cause events to be missing from an eventsByTag query.)

    The refresh-interval setting determines how often polling for new events is done (for all queries).

    It makes no sense to have jdbc-read-journal.journal-sequence-retrieval.query-delay bigger than jdbc-read-journal.refresh-interval. Setting them to equal values seems okay to me.

    You can find the defaults here: https://github.com/akka/akka-persistence-jdbc/blob/master/core/src/main/resources/reference.conf
    These are recommended unless you need something different
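    The two polling knobs discussed above live under jdbc-read-journal and can be sketched like this (the values shown are my reading of the defaults; the linked reference.conf is authoritative):

    ```hocon
    # Sketch of the read-journal polling settings discussed above.
    jdbc-read-journal {
      refresh-interval = "1s"     # how often all queries poll for new events
      journal-sequence-retrieval {
        query-delay = "1s"        # how often eventsByTag checks for new/missing ids
        max-tries = 10            # attempts before assuming a missing id was skipped
      }
    }
    ```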

    Michael
    @grouzen
    @WellingR Thanks a lot! I tried to read the source code of the plugin several times and still can't figure out this. You helped a lot!
    But it seems that I have a problem with these settings. When I set them to "10ms" I get normal performance, but when I fall back to the defaults the performance is VERY low.
    And I see the difference in the profiler (I'm using YourKit): with 10ms my threads are loaded with work, and with the defaults they are almost "yellow" all the time. That's the signal for me. But the "10ms" setting is killing my DB.
    Ruud Welling
    @WellingR
    what is the real difference between the things you call normal and low performance?
    I don't know the thread colours of YourKit, but it could very well be that yellow means the thread is idle because there is nothing to do
    Setting these settings to very low values only leads to unnecessary polling
    Keep in mind that it is normal for there to be a small delay when these events are delivered. With the default settings this delay is at most 10 seconds
    Michael
    @grouzen
    Yes, my threads are yellow despite the fact that there is actually a lot of work. But my application is consuming requests slowly (1-4 rps). That's very slow.
    I have 40 concurrent connections that are just waiting. Also, the latency for processing just a single command is 30 seconds, which is not normal in my world.
    So, my question is why the default setting is 1s. How was it determined?
    Ruud Welling
    @WellingR
    Are you using eventsByTag or eventsByPersistenceId? What version of akka-persistence-jdbc are you using?
    As far as I know the refresh-interval determines how long the db will wait in case the last attempt did not retrieve any more events
    If the last retrieved batch was complete, the next batch of events is retrieved immediately. In other words, even if the refresh-interval is set to a high value (e.g. 10 seconds), a long-running stream will see no significant slowdown. The only thing is that some events might arrive a bit later
    The lower you set these values, the lower the delays might be, but this also means more unnecessary polling will be done
    Ruud Welling
    @WellingR
    I cannot answer how the default of 1s was determined. I guess that no value is perfect for all situations.
    Michael
    @grouzen
    I'm using Lagom 1.6.1 with the latest version of the plugin. I do not call any methods directly; Lagom does that for me.
    Enno Runne
    @ennru
    A release candidate for Akka Persistence JDBC 4.0 is now available! https://discuss.lightbend.com/t/akka-persistence-jdbc-4-0-0-release-candidate/6377
    Abel Miguez
    @amiguez

    Hello all, I am testing Amazon Aurora PostgreSQL with Akka Persistence JDBC. Does anybody have feedback on the use of it? Searching the chat I saw that some years ago @calvinlfer asked about it, though I could not find more details.
    I am interested in Akka Persistence JDBC configuration tuning specific to Aurora.

    From my observation, the queries to recover the journal take much longer than on a local instance of PostgreSQL (Docker): 29 s vs 1 s.

    ```sql
    select "ordering", "deleted", "persistence_id", "sequence_number", "message", "tags"
    from "system"."journal"
    where ((("persistence_id" = 'processor') and ("deleted" = false)) and ("sequence_number" >= 1)) and ("sequence_number" <= 11854)
    order by "sequence_number"
    limit 9223372036854775807;
    ```

    Thanks!

    Abel Miguez
    @amiguez
    Re: I will just update... what I am experiencing is more related to the size of the events and the bandwidth between the AZs than to anything Aurora-specific.
    Anyway, if there is something I need to know configuration-wise, I'm listening :raising_hand:
    Renato Cavalcanti
    @octonato
    @amiguez, I'm not aware of anything related with Amazon Aurora
    vector3f
    @vector3f

    Hey guys
    I've just plugged JDBC into my PoC application and persistence stopped working. The only logs it gives me are:

    ```
    [2020-08-02 20:50:59,258] [ERROR] [com.poc.License] [] [License-akka.actor.default-dispatcher-18] - Supervisor RestartSupervisor saw failure: Exception during recovery from snapshot. PersistenceId [License|arturs_license]. Circuit Breaker is open; calls are failing fast
    akka.persistence.typed.internal.JournalFailureException: Exception during recovery from snapshot. PersistenceId [License|arturs_license]. Circuit Breaker is open; calls are failing fast
    ```

    Is there any way to make it give me more information? The log level is already debug.
    It was working fine with Cassandra before

    And AFAIK it should be configured correctly
    Ruud Welling
    @WellingR
    This is odd; I would expect more logging details there. Are you sure that the logging is configured correctly?
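    For reference, these are the Akka settings that typically surface persistence failures in the logs (a sketch; it assumes an SLF4J backend such as Logback is on the classpath, and its root level must also allow DEBUG):

    ```hocon
    # Sketch: route Akka's internal logging through SLF4J at DEBUG level.
    akka {
      loglevel = "DEBUG"
      loggers = ["akka.event.slf4j.Slf4jLogger"]
      logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
    }
    ```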
    tavisca-asingla
    @tavisca-asingla

    hey guys, I am getting errors while saving snapshots:

    ```
    (persistence_id, sequence_number)=(MEMBERSHIP-d8ec1971-f095-43d2-a69f-39772463fd90, 43) already exists.
    ```

    The issue is intermittent, but sometimes the latest snapshot gets deleted, and all the events are replayed for that actor for any new sequence number.
    More info at:
    https://discuss.lightbend.com/t/error-while-saving-snapshot-persistence-id-sequence-number-already-exists/7389

    I am using Postgres as the backend persistence store. Any leads would be helpful.

    tavisca-asingla
    @tavisca-asingla

    I tried running Slick in debug mode; it fails in the upsert for the same sequence id on the third attempt. The first two times the operation succeeds; the third time it fails with the error I quoted in the message above.
    The SQL is the same in all the upserts:

    ```sql
    Executing prepared update: HikariProxyPreparedStatement@160991021 wrapping update "event"."snapshot" set "created"=?,"snapshot"=? where "persistence_id"=? and "sequence_number"=?; insert into "event"."snapshot" ("persistence_id","sequence_number","created","snapshot") select ?,?,?,? where not exists (select 1 from "event"."snapshot" where "persistence_id"=? and "sequence_number"=?)
    ```

    That is, it first tries to update and then tries to insert. I don't know why it tries to insert the third time, because the exists check is already there on the insert.

    Per Wiklander
    @PerWiklander
    Is this the right room for akka-persistence-jdbc as of now, or is there a new room after the Lightbend takeover?
    Ignasi Marimon-Clos
    @ignasi35
    Hi @PerWiklander, it’s better to use akka/akka or akka/dev. We generally don’t maintain gitter rooms for each specific library under the akka org.