    Dan Di Spaltro
    @dispalt
    Is there a way to read from the journal in reverse order, newest first?
    Patrik Nordwall
    @patriknw
    @dispalt no, that is not supported
    Released akka-persistence-cassandra 0.103 with a new Cleanup tool for deleting events and snapshots. https://doc.akka.io/api/akka-persistence-cassandra/0.103/akka/persistence/cassandra/cleanup/Cleanup.html
    Dan Di Spaltro
    @dispalt
    @patriknw cool so I would have to basically copy the code (reuse as much as possible) to expose something like that?
    Jacek Ewertowski
    @jewertow
    @patriknw @chbatey, when do you plan to release version 1.0?
    Patrik Nordwall
    @patriknw
    we are about to release 1.0.0-RC1 later this week or early next week
    anilkumble
    @anilkumble

    Hi @patriknw Patrik
    I have used the Cleanup tool to delete all the events related to persistence id: node1
    It deleted all events successfully, but it left one single entry with all null values. Why?
    Below is that entry.

    persistence_id | partition_nr | sequence_nr | timestamp | timebucket | used | event | event_manifest | message | meta | meta_ser_id | meta_ser_manifest | ser_id | ser_manifest | tags | writer_uuid
    ----------------+--------------+-------------+-----------+------------+------+-------+----------------+---------+------+-------------+-------------------+--------+--------------+------+-------------
    node1 | 0 | null | null | null | True | null | null | null | null | null | null | null | null | null | null

    Patrik Nordwall
    @patriknw
    @anilkumble ok, that is expected because used is a "static column". That column has been optionally removed/replaced in version 0.101+, but complete removal requires the steps described in the migration guide: https://doc.akka.io/docs/akka-persistence-cassandra/0.103/migrations.html#migrations-to-0-101
    anilkumble
    @anilkumble
    Thank you @patriknw,
    May I know the purpose of the below columns?
    1. event_manifest
    2. message
    3. ser_id
    4. meta
    5. meta_ser_id
    6. meta_ser_manifest
    7. ser_manifest
    8. tags
    9. writer_uuid
    Patrik Nordwall
    @patriknw
    Latest documentation for this isn't published yet but you can see a preview at https://github.com/akka/akka-persistence-cassandra/blob/master/docs/src/main/paradox/journal.md#messages-table
    anilkumble
    @anilkumble
    Thank you
    anilkumble
    @anilkumble
    Can anyone help me choose minimum hardware requirements for a typical production Cassandra server (RAM, CPU cores, etc.)?
    anilkumble
    @anilkumble
    Is there any limitation on persistence_id length?
    Can I have a very lengthy persistence id?
    Also, is there any size limitation for the event (persistable message)?
    anilkumble
    @anilkumble
    @patriknw Can I use the latest Akka Persistence Cassandra version 0.103 with Akka Persistence (and all other Akka modules like cluster, ddata) version 2.5.23?
    Patrik Nordwall
    @patriknw
    short answer: no
    longer: 0.103 is built with Akka 2.5.29, which means that you can use it with 2.5.29 or later version of Akka (including 2.6.x) but not an earlier version. We are backwards binary compatible, but not forwards binary compatible.
    persistence_id is a text column, so whatever limits Cassandra has for text apply to it
    event size: it will be inefficient with too large events, and I think Cassandra has some limit, such as 16 MB (but that is anyway way more than what I'd recommend)
    anilkumble
    @anilkumble
    Okay, I can use Scala 2.11 or 2.12. The latest Akka Persistence Cassandra version 0.103 is available for both Scala versions, right?
    anilkumble
    @anilkumble

    @anilkumble ok, that is expected because used is a "static column". That column has been optionally removed/replaced in version 0.101+, but complete removal requires the steps described in the migration guide: https://doc.akka.io/docs/akka-persistence-cassandra/0.103/migrations.html#migrations-to-0-101

    Here you mentioned the used column is optional in version 0.101+, so while creating the messages table I can remove the used column, right?

    p-andreas
    @p-andreas

    @patriknw regarding

    short answer: no
    longer: 0.103 is built with Akka 2.5.29, which means that you can use it with 2.5.29 or later version of Akka (including 2.6.x) but not an earlier version. We are backwards binary compatible, but not forwards binary compatible.

    How can I use akka-persistence-cassandra 0.103 in combination with Akka 2.6.x?

    As far as I noticed, it is not possible to use akka-persistence-cassandra together with the current version of Akka 2.6.x.
    ManifestInfo.checkSameVersion does not allow such a scenario:

    java.lang.IllegalStateException: Detected possible incompatible versions on the classpath. Please note that a given Akka version MUST be the same across all modules of Akka that you are using, e.g. if you use [2.6.4] all other modules that are released together MUST be of the same version. Make sure you're using a compatible set of libraries. Possibly conflicting versions [2.6.4, 2.5.29] in libraries [akka-persistence:2.6.4, akka-discovery:2.6.4, akka-coordination:2.6.4, akka-actor:2.6.4, akka-remote:2.6.4, akka-cluster:2.6.4, akka-protobuf-v3:2.6.4, akka-distributed-data:2.6.4, akka-cluster-typed:2.6.4, akka-cluster-tools:2.6.4, akka-actor-typed:2.6.4, akka-persistence-query:2.5.29, akka-slf4j:2.6.4, akka-persistence-typed:2.6.4, akka-stream:2.6.4]
    [error]     at akka.util.ManifestInfo.checkSameVersion(ManifestInfo.scala:206)

    Please let me know if I am missing something or if akka 2.6.x is not compatible with akka persistence cassandra

    Patrik Nordwall
    @patriknw
    Scala version: use 2.12 then
    used column: yes, you can skip creating it if you disable it with config as described in migration guide. Good to do that if you start now.
    Akka 2.6: that’s fine, you just have to add all of the problematic dependencies explicitly in your build, with version 2.6.x
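    The dependency fix Patrik describes can be sketched in sbt (a sketch only, assuming sbt is the build tool; the module list is taken from the error message above, and the version numbers are illustrative):

    ```scala
    // build.sbt: pin the Akka modules pulled in transitively at 2.5.29 by
    // akka-persistence-cassandra 0.103 to the Akka version the app uses.
    val AkkaVersion = "2.6.4" // illustrative; use your Akka 2.6.x version

    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-persistence-cassandra" % "0.103",
      // bump the transitive 2.5.29 modules explicitly to 2.6.x
      "com.typesafe.akka" %% "akka-persistence"       % AkkaVersion,
      "com.typesafe.akka" %% "akka-persistence-query" % AkkaVersion,
      "com.typesafe.akka" %% "akka-cluster-tools"     % AkkaVersion
    )
    ```

    Declaring the 2.6.x modules explicitly makes them win over the 2.5.29 versions that akka-persistence-cassandra would otherwise bring in transitively.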
    Patrik Nordwall
    @patriknw
    akka-persistence-query, from that error message
    We are also releasing 1.0.0-RC1 on Monday, so you could start right there.
    anilkumble
    @anilkumble

    used column: yes, you can skip creating it if you disable it with config as described in migration guide. Good to do that if you start now.

    I am getting the below exception if I do not have the used column in the messages column family

    java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: Undefined column name used

    I am using version 0.103
    Why am I getting this?

    RAHUL REDDY KATIKI REDDY
    @katikireddy622
    Hi all
    osboxes@osboxes:~/jmx$ nodetool status
    Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
    at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
    Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
    at sun.net.httpserver.ServerImpl.bind(ServerImpl.java:133)
    at sun.net.httpserver.HttpServerImpl.bind(HttpServerImpl.java:54)
    at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer.<init>(HTTPServer.java:145)
    at io.prometheus.jmx.shaded.io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:31)
    ... 6 more
    FATAL ERROR in native method: processing of -javaagent failed
    Aborted (core dumped)
    my Cassandra Status is ------------------------------------

    osboxes@osboxes:~/jmx$ sudo service cassandra status
    ● cassandra.service - LSB: distributed storage system for structured data
    Loaded: loaded (/etc/init.d/cassandra; generated)
    Active: active (running) since Mon 2020-03-23 13:29:27 EDT; 24min ago
    Docs: man:systemd-sysv-generator(8)
    Process: 6803 ExecStop=/etc/init.d/cassandra stop (code=exited, status=0/SUCCESS)
    Process: 6920 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
    Tasks: 44 (limit: 4915)
    CGroup: /system.slice/cassandra.service
    └─6998 java -javaagent:/home/osboxes/jmx/jmx_prometheus_javaagent-0.12.0.jar=8888:/home/osboxes/jmx/config.yaml -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:ThreadPriority

    Mar 23 13:29:27 osboxes systemd[1]: Starting LSB: distributed storage system for structured data...
    Mar 23 13:29:27 osboxes systemd[1]: Started LSB: distributed storage system for structured data.
    osboxes@osboxes:~/jmx$

    Andrew
    @mcandre
    Who's running BOINC?
    Patrik Nordwall
    @patriknw
    @anilkumble Did you add akka.persistence.cassandra.journal-static-column-compat = off to your application.conf?
    anilkumble
    @anilkumble
    Yes I added this entry in application.conf
    Patrik Nordwall
    @patriknw
    Do you also have a stack trace in the logs that you can share?
    Patrik Nordwall
    @patriknw
    @/all akka-persistence-cassandra 1.0.0-RC1 has been released. See https://discuss.lightbend.com/t/akka-persistence-cassandra-1-0-0-rc1-released/ and make sure you follow the migration guide https://doc.akka.io/docs/akka-persistence-cassandra/current/migrations.html
    Let us know if you find any problems.
    anilkumble
    @anilkumble

    Do you also have a stack trace in the logs that you can share?

    I am using this version: akka-persistence-cassandra_2.11 » 0.103
    Below is the stack trace

    [INFO] [03/24/2020 12:26:06.440] [ReactorCluster-akka.actor.default-dispatcher-31] [akka.cluster.Cluster(akka://ReactorCluster)] Cluster Node [akka.tcp://ReactorCluster@127.0.0.1:4001] - Leader is moving node [akka.tcp://ReactorCluster@127.0.0.1:4001] to [Up]
    [ERROR] [03/24/2020 12:26:06.476] [ReactorCluster-akka.actor.default-dispatcher-31] [akka.tcp://ReactorCluster@127.0.0.1:4001/user/supervisor] Failed to persist event type [java.util.HashMap] with sequence number [1] for persistenceId [node1].
    java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: Undefined column name used
    at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:476)
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:435)
    at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
    at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2$$anonfun$run$2.apply(package.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.run(package.scala:50)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Undefined column name used
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:181)
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:250)
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:219)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
    at com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
    at com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)

    @/all akka-persistence-cassandra 1.0.0-RC1 has been released. See https://discuss.lightbend.com/t/akka-persistence-cassandra-1-0-0-rc1-released/ and make sure you follow the migration guide https://doc.akka.io/docs/akka-persistence-cassandra/current/migrations.html
    Let us know if you find any problems.

    What is the difference between version 0.103 and 1.0.0-RC1?

    Johan Andrén
    @johanandren
    Patrik Nordwall
    @patriknw
    and the release announcement describes most of the changes at a high level: https://discuss.lightbend.com/t/akka-persistence-cassandra-1-0-0-rc1-released/
    Patrik Nordwall
    @patriknw
    @anilkumble I have double checked and all statements where the used column is included are guarded by the cassandra-journal.write-static-column-compat. I think your config is not working as you intend. My previous comment was wrong for that version. The config property is cassandra-journal.write-static-column-compat=off, not akka.persistence.cassandra.journal-static-column-compat as I said previously.
    That must have been changed accidentally in the migration guide. I'll fix.
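    For the 0.x series, the corrected setting from Patrik's message would go into application.conf like this (a sketch; the key name is taken from the message above, everything else is standard HOCON layout):

    ```hocon
    # application.conf — for akka-persistence-cassandra 0.101–0.103:
    # stop writing the static "used" column, so the messages table
    # can be created without it.
    cassandra-journal {
      write-static-column-compat = off
    }
    ```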
    anilkumble
    @anilkumble

    @anilkumble I have double checked and all statements where the used column is included are guarded by the cassandra-journal.write-static-column-compat. I think your config is not working as you intend. My previous comment was wrong for that version. The config property is cassandra-journal.write-static-column-compat=off, not akka.persistence.cassandra.journal-static-column-compat as I said previously.

    Thanks @patriknw It is working now

    anilkumble
    @anilkumble

    When working with persistence,
    below is the receiveRecover method

    @Override
    public Receive createReceiveRecover() {
        return receiveBuilder()
            .match(String.class, str -> {
                System.out.println(str);
            })
            .match(RecoveryCompleted.class, recoveryCompleted -> {
                int i = 10 / 0;
            })
            .build();
    }

    Here it will throw an ArithmeticException while receiving the RecoveryCompleted message.
    In this situation it automatically kills the actor and recreates it.
    Is this expected?
    If yes, why?

    struijenID
    @struijenID
    To me this is expected because you are dividing 10 by 0 when a RecoveryCompleted event arrives 😁
    -> int i=10/0;
    anilkumble
    @anilkumble

    int i=10/0; — yes, I am adding this statement purposefully,
    so that if there is an exception it can kill the actor.
    I am wondering why it is being recreated?

    @Override
    public Receive createReceiveRecover() {
        return receiveBuilder()
            .match(String.class, str -> {
                int i = 10 / 0;
            })
            .match(RecoveryCompleted.class, recoveryCompleted -> {
            })
            .build();
    }

    For the above case, it just kills the actor.
    But if there is an exception for the RecoveryCompleted message, then it kills and recreates the actor,
    and it goes into an infinite loop (kill and recreate forever).

    anilkumble
    @anilkumble
    @patriknw Kindly help with the above doubt
    Patrik Nordwall
    @patriknw
    The restart is done by the default supervision for actors. You can add a supervision strategy to handle ArithmeticException differently: https://doc.akka.io/docs/akka/current/fault-tolerance.html Exceptions when replaying events (the second example) are different: they are treated as fatal, stopping the actor.
    By the way, since it looks like you are starting a new project I would strongly recommend that you use the new Actor APIs. For persistence the EventSourcedBehavior https://doc.akka.io/docs/akka/current/typed/persistence.html
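    The "handle ArithmeticException differently" part Patrik mentions can be sketched with the classic Java supervision API (a sketch, assuming the Akka classic dependency is on the classpath; the `Parent` class and the idea of routing `Props` through it are hypothetical, not from the original discussion):

    ```java
    import java.time.Duration;

    import akka.actor.AbstractActor;
    import akka.actor.OneForOneStrategy;
    import akka.actor.Props;
    import akka.actor.SupervisorStrategy;
    import akka.japi.pf.DeciderBuilder;

    // A parent that stops (rather than restarts) a child that throws
    // ArithmeticException, instead of the default restart behavior.
    public class Parent extends AbstractActor {

        @Override
        public SupervisorStrategy supervisorStrategy() {
            return new OneForOneStrategy(
                10,                     // max restarts...
                Duration.ofMinutes(1),  // ...within this time window
                DeciderBuilder
                    .match(ArithmeticException.class, e -> SupervisorStrategy.stop())
                    .matchAny(o -> SupervisorStrategy.restart())
                    .build());
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                // create supervised children on request (hypothetical protocol)
                .match(Props.class, props ->
                    getSender().tell(getContext().actorOf(props), getSelf()))
                .build();
        }
    }
    ```

    Note that this only affects exceptions thrown while processing commands; as Patrik says, exceptions during event replay are always treated as fatal and stop the actor regardless of the strategy.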
    anilkumble
    @anilkumble

    May I know the reason for the below exception?

    "" "127.0.0.1" "kumble-8174" "" "" "" "47" "com.adventnet.mfw.Starter$SysLogStream" "log" "INFO" "31-03-2020 21:05:15:100" "78" "[ERROR] [03/31/2020 21:05:15.098] [CircuitExecutionCluster-akka.actor.default-dispatcher-2] [akka.tcp://CircuitExecutionCluster@127.0.0.1:2551/user/supervisorActor] Persistence failure when replaying events for persistenceId [supervisorActor]. Last known sequence number [0]
    java.lang.IllegalStateException: Sequence number [1] still missing after [10.00 s], saw unexpected seqNr [7] for persistenceId [supervisorActor].
    at akka.persistence.cassandra.query.EventsByPersistenceIdStage

    KaTeX parse error: Can't use function '$' in math mode at position 5: anon$̲1.lookForMissin…: anon$1.lookForMissingSeqNr(EventsByPersistenceIdStage.scala:377)
        at akka.persistence.cassandra.query.EventsByPersistenceIdStage
    anon$1.onTimer(EventsByPersistenceIdStage.scala:359)
    at akka.stream.stage.TimerGraphStageLogic.akka$stream$stage$TimerGraphStageLogiconInternalTimer(GraphStage.scala:1593)atakka.stream.stage.TimerGraphStageLogiconInternalTimer(GraphStage.scala:1593) at akka.stream.stage.TimerGraphStageLogicanonfun$akka$stream$stage$TimerGraphStageLogic
    KaTeX parse error: Can't use function '$' in math mode at position 22: …erAsyncCallback$̲1.apply(GraphSt…: getTimerAsyncCallback$1.apply(GraphStage.scala:1582)
        at akka.stream.stage.TimerGraphStageLogic
    anonfun$akka$stream$stage$TimerGraphStageLogic
    KaTeX parse error: Can't use function '$' in math mode at position 22: …erAsyncCallback$̲1.apply(GraphSt…: getTimerAsyncCallback$1.apply(GraphStage.scala:1582)
        at akka.stream.impl.fusing.GraphInterpreter.runAsyncInput(GraphInterpreter.scala:452)
        at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:481)
        at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
        at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter
    processEvent(ActorGraphInterpreter.scala:749)
    at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:539)
    at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
    at akka.actor.ActorCell.invoke(ActorCell.scala:581)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
    at akka.dispatch.Mailbox.run(Mailbox.scala:229)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    " "" "" "" "" "" "" "1585668915100" "" "" "" "" "" "" "ROOT" "{""service"":""worker"",""logger_name"":""SYSOUT""}"

    Aleksandar Skrbic
    @aleksandarskrbic
    Hello everyone, I was wondering if using PersistentActor backed by Cassandra would be a good fit for the following use case. I have a service that constantly consumes one Kafka topic, transforms the incoming messages, and writes them to the Apache Ignite KV store. The service is implemented using Java and Kafka. I realized that Cassandra would be a better fit than Ignite, and it is definitely going to be replaced by Cassandra, but I would also like to introduce Akka into the project, and implementing a small service like the one described can be pretty straightforward since it's not critical and I have some free time to experiment. So, to sum up, I want to implement a service that will consume a Kafka topic using Alpakka Kafka, transform each message, and pass it to a persistent actor, and I only need to query it by id. I want to know how efficient it is to read the Cassandra journal by id, or is it better just to use plain CQL and avoid using a persistent actor at all? Also, holding the whole state in memory wouldn't be so efficient.