    Tim Moore
    @mikla the format of the first-time-bucket setting changed. You need to add the time to your configuration.
    Artsiom Miklushou
    @TimMoore Oh, thanks, I completely forgot about this :)
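    For anyone else hitting this: the new value includes the time of day, not just the date. A sketch of the change, assuming the 0.x key name `cassandra-query-journal.first-time-bucket` (check the reference.conf shipped with your version):

```hocon
cassandra-query-journal {
  # old format was date-only: first-time-bucket = "20190101"
  # the new format adds the time of day:
  first-time-bucket = "20190101T00:00"
}
```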
    Andrew Ryno
    Hey, I posted this yesterday on the boards but wanted to see if I could get some more visibility. We had a fairly major outage last week due to this issue and haven't been able to track down the true root cause apart from a Future returned by akka-persistence-cassandra that never completed/failed (still waiting on it even 6+ days later on our old/saved hosts). Any help would be greatly appreciated.
    ayo is it better to explicitly deploy seed nodes or to let the cluster dynamically elect seeds at deployment time?
    Gustavo Momenté
    Hello guys, I'm having trouble debugging what's happening with my usage of akka-persistence.
    Under heavy load the CircuitBreaker opens and seems to never close.
    And no errors are being reported on the Cassandra side, nor in the JMX metrics exposed by the persistence backend.
    A little after 20h05 the CircuitBreaker opens and stays that way until all messages in the actor mailboxes are processed, then it seems to close. I fear that maybe there is some thread starvation going on and then the CircuitBreaker never closes.
    I couldn't find any errors in the Cassandra log.
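    For context, the journal's circuit breaker is tunable. A sketch of the relevant settings (key names and defaults assumed from the plugin's reference.conf — verify against your version):

```hocon
cassandra-journal.circuit-breaker {
  max-failures  = 10    # failures in a row before the breaker opens
  call-timeout  = 10s   # calls slower than this count as failures
  reset-timeout = 30s   # how long the breaker stays open before retrying
}
```

    Note that under heavy load, writes that are slow but eventually succeed still count as failures once they exceed call-timeout, so the breaker can stay open even though Cassandra itself reports no errors.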
    Christopher Batey
    Do you have stats for number of requests incoming to Cassandra and response time? Do they steadily increase to the timeout? I'd still expect to see something in the plugin's logs
    Hi all, I am trying to swap in-memory for Cassandra in my unit tests, but the embedded Cassandra instance could not start on port 9042. The akka-persistence and akka-persistence-launcher dependencies are version 0.90, which is the latest. There is another ERROR about failing to load the JNR C library, but I think that's about timestamp generation, which should probably not be blocking. Can anyone share your thoughts?
    It seems that the akka-persistence-launcher did not launch Cassandra; the default Cassandra directory target/embedded-cassandra hasn't been created
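    For comparison, a minimal test configuration that points the plugins at an embedded instance (a sketch — plugin ids assumed from the 0.x reference.conf, and the port must match whatever the launcher was started with):

```hocon
akka.persistence.journal.plugin = "cassandra-journal"
akka.persistence.snapshot-store.plugin = "cassandra-snapshot-store"
cassandra-journal.port = 9042
cassandra-snapshot-store.port = 9042
```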
    Andrew Moreland
    Does anyone happen to know how to kill a materialized view or keyspace without being able to boot Cassandra? I have an "eventsbytag1" materialized view that is crashing Cassandra before it can serve cqlsh. When I delete /data/the_relevant_keyspace, that folder magically reappears after a restart.
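    One possible explanation: the view's definition lives in the system_schema keyspace, so deleting the data directory doesn't remove it, and Cassandra recreates the folder on startup. A common recovery approach (generic Cassandra operations, not plugin-specific — adjust paths to your install) is to move the view's sstables aside so the node can boot, then drop the view properly:

```
# move the view's sstables out of the way so the node can start
mv /data/the_relevant_keyspace/eventsbytag1-*/ /tmp/
# once the node is serving requests, drop the view via cqlsh
cqlsh -e 'DROP MATERIALIZED VIEW the_relevant_keyspace.eventsbytag1;'
```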
    Every time I execute PersistenceQuery(as).readJournalFor[EventsByTagQuery](pluginId).eventsByTag(tag, offset), is a new connection to Cassandra opened, or is there some connection pooling mechanism in place?
    Christopher Batey
    Currently we have at most 3 Cluster objects from the Cassandra Java driver per journal you configure (normally just 1 journal). The 3 are for the read, write and snapshot paths. We plan to fix this in 1.0 and share the same Cluster for all 3. The Cluster object from the driver maintains a connection pool.

    Hi everyone,

    I'm working on a module that uses akka-persistence-cassandra.

    As part of the akka-persistence-cassandra code, an event that arrives with a sequence number != 1 triggers a "looking for missing" procedure when sequence number 1 is not in the current time bucket.

    Since events for the same persistence ID can arrive at very different times (and therefore persist into different time buckets), this flow is very common in our system.

    Each "looking for missing" run blocks our stream until it fails to find the missing event, because the event was persisted in an earlier time bucket and the search only looks at the current and previous buckets - this takes ~10 seconds. During those 10 seconds no new events are emitted from the stream, which delays the processing of events we actually want to perform.

    Is there any workaround to disable this "looking for missing" procedure and just emit the new event regardless of the sequence number it arrives with?
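    The ~10 second stall lines up with the events-by-tag gap search timing out. I'm not aware of a documented switch to skip the search entirely, but the delay itself is configurable — key names below are assumed from the 0.9x reference.conf, so check the one shipped with your version:

```hocon
cassandra-query-journal.events-by-tag {
  # how long the stage searches for a missing sequence nr before giving up
  gap-timeout = 10s
  # how long to scan for earlier events of a persistence id first seen mid-stream
  new-persistence-id-scan-timeout = 250ms
}
```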

    Hi everyone.. I am using akka-persistence-cassandra v0.59. I am seeing a lot of errors with the following stack trace (in prod, where I unfortunately cannot enable debug logs):
    java.lang.IllegalStateException: Previous query was not exhausted
        at akka.persistence.cassandra.query.EventsByPersistenceIdStage$$anon$1.query(EventsByPersistenceIdStage.scala:291)
        at akka.persistence.cassandra.query.EventsByPersistenceIdStage$$anon$1.tryPushOne(EventsByPersistenceIdStage.scala:381)
        at akka.persistence.cassandra.query.EventsByPersistenceIdStage$$anon$1.$anonfun$newResultSetCb$1(EventsByPersistenceIdStage.scala:173)
        at akka.persistence.cassandra.query.EventsByPersistenceIdStage$$anon$1.$anonfun$newResultSetCb$1$adapted(EventsByPersistenceIdStage.scala:160)
        at ...$AsyncInput.execute(ActorGraphInterpreter.scala:468)
        at ...$akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:745)
        at ...$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:760)
        at ... (Actor.scala:515)
        ...
    is there a known bug in v0.59 where the query method is called before queryState is set back to QueryIdle?
    Phil Kesel
    Hi gang. Anyone using akka-persistence-cassandra with Lightbend Telemetry (aka Cinnamon)? The DataStax driver uses a different version of the io.dropwizard.metrics packages, and it causes the Cassandra driver to crash. I've forked the akka-persistence-cassandra code and added a config option to disable JMX metrics in the Cassandra cluster initialization, and it works, but I don't think I should have to go that far to get Cinnamon and Cassandra persistence to play nicely.
    @patriknw @chbatey hi, do you have any input on akka/akka-persistence-cassandra#411? It's affecting lagom/lagom#1667 as well. Trying to speed things up - it's a bit of a blocking issue...
    Patrik Nordwall
    @kotdv Thanks for the ping. I think I know the reason. I'll take a closer look.
    @patriknw very appreciated :smile_cat:
    @patriknw and thanks for releasing it under 0.92 without breaking compatibility with current lagom ^_^
    Patrik Nordwall
    You're welcome.
    I'm losing events with 0.92
    2019-01-20T13:32:17.920Z [info] akka.persistence.cassandra.query.EventsByTagStage [, akkaTimestamp=13:32:17.919UTC, akkaSource=EventsByTagStage(akka://application), sourceActorSystem=application] - [84f791e0-7a9e-436c-bf9b-c80e11f22437] com.example.notification.impl.NotificationEvent6: Failed to find missing sequence nr: Some(LookingForMissing{previousOffset=e98f2000-1caa-11e9-8080-808080808080 bucket=TimeBucket(1547989200000, Hour, inPast: false, currentBucket: true. time: 2019-01-20 13:00:00:000 ) queryPrevious=true maxOffset=be179440-1cb7-11e9-8eb9-e9f100215ba8 persistenceId=NotificationEntity|55176160-a264-49cc-85c9-a2e90cfef9f1 maxSequenceNr=2 missing=Set(1) deadline=Deadline(344528908042307 nanoseconds) failIfNotFound=false). PersistenceId: NotificationEntity|55176160-a264-49cc-85c9-a2e90cfef9f1
    hi guys, out of the blue... I'm very sorry to inform you, but you're doing very bad things with batches. I have no time to investigate or dive into your code right now, but FYI you must not use batches if you want decent throughput, and for a general solution like persistence cassandra this is critical. Cassandra and batches is a VERY bad combination, apart from some rare cases like by_user / by_email tables storing the same data under different partition keys - and even then you need to understand how it hurts performance.
    Using LWT would have been at least 100-fold better, because LWTs only degrade performance by maybe 4-5x, instead of simply killing the nodes for good and triggering Full GC cycles.
    Christopher Batey
    We only use unlogged batches, all for the same partition key; they increase throughput significantly.
    LWTs would not fit the use case and would have been considerably slower. The 4x slowdown is for the uncontended case - once contended, LWTs tend to kill Cassandra clusters.
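    To make the distinction concrete, here's a generic CQL sketch (illustrative table and column names, not the plugin's actual statements). An unlogged batch whose rows all share the same partition key is applied as a single mutation to one replica set, which is the cheap case; the anti-pattern is batching across many partitions, or using LOGGED batches, which add a distributed batch log on top:

```sql
BEGIN UNLOGGED BATCH
  INSERT INTO messages (persistence_id, partition_nr, sequence_nr, event)
    VALUES ('pid-1', 0, 1, 0x00);
  INSERT INTO messages (persistence_id, partition_nr, sequence_nr, event)
    VALUES ('pid-1', 0, 2, 0x01);
APPLY BATCH;
-- both rows share the partition key (persistence_id, partition_nr),
-- so the batch lands on a single replica set as one mutation
```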
    Any reason why you can't switch to properly wrapped session.executeAsync ?
    LWT was not a suggestion, usually it's for some odd case too :/
    If the statements are idempotent as they should be... and you're not using logged batch...
    the only error scenario I see is if you in fact get an error (exception) from session.executeAsync
    in any other case you can be like 100% sure that your query passed
    I'm raising this question, because I'm encountering a spam of 10-12KB batches and losing events in 0.92 btw
    and getting to Full GC after some time
    I/O beyond 100 writes per second basically causes loss of events
    I was barely scratching the thing with 2k removal requests and lost like 600 events
    losing events is most likely another barely related bug... but who knows.
    @chbatey ehh checked your page and posts about anti-patterns and logged batches... and unlogged batches... and there we go :smile:
    akka persistence cassandra is using unlogged batches...
    lagom is relying on it and using logged batches...
    and everybody's having fun
    both things combined... create a very pleasant experience under quite some load... to debug :smile:
    need to mark some sentences with some "sarcasm" emoticon
    Copying from Lagom channel
    Adam Ritter @adamgerst it's interesting you mention Akka Persistence Cassandra using batching and losing events. Someone on my team wrote a project using Akka Persistence and we are backing everything with Cassandra. They are complaining that Cassandra is not getting all the writes into the database and aren't sure why. I'll relay this information to them and see if we can figure out what's going on.