    Dan Di Spaltro
    @dispalt
    @nolah I think you should file a ticket
    pruthvi578
    @pruthvi578
    I get the error below when trying to configure Akka Persistence Cassandra:
    [rajupru@ftc-svt-dev-sdap-dev-vm hb_movecervice]$
    Uncaught error from thread [SimplivityHeartbeatInfoSightCluster-cassandra-plugin-default-dispatcher-25]: com/codahale/metrics/JmxReporter, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[SimplivityHeartbeatInfoSightCluster]
    java.lang.NoClassDefFoundError: com/codahale/metrics/JmxReporter
        at com.datastax.driver.core.Metrics.<init>(Metrics.java:146)
        at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1501)
        at com.datastax.driver.core.Cluster.init(Cluster.java:208)
        at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:376)
        at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:355)
        at akka.persistence.cassandra.ConfigSessionProvider$$anonfun$connect$1.apply(ConfigSessionProvider.scala:50)
        at akka.persistence.cassandra.ConfigSessionProvider$$anonfun$connect$1.apply(ConfigSessionProvider.scala:44)
        at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
        at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:92)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:92)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:92)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    Uncaught error from thread [SimplivityHeartbeatInfoSightCluster-cassandra-plugin-default-dispatcher-24]: com/codahale/metrics/JmxReporter, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[SimplivityHeartbeatInfoSightCluster]
    java.lang.NoClassDefFoundError: com/codahale/metrics/JmxReporter
    Please help — how can I resolve this?
    I have added the dependencies below for the Akka Cassandra build:
    val akkaCsdraPrst = "com.typesafe.akka" %% "akka-persistence-cassandra" % "0.104"
    val dropwizardMetics = "io.dropwizard.metrics" % "metrics-core" % "4.0.0"
    val dropwizardMetricsJmx = "io.dropwizard.metrics" % "metrics-jmx" % "4.0.0"

    In application.conf:
    cassandra-journal {
      contact-points = ["127.0.0.1:9042"]
      authentication.username = "test"
      authentication.password = "password"
    }

    cassandra-snapshot-store {
      keyspace = "move-service_test"
      contact-points = ["127.0.0.1:9042"]
      authentication.username = "test"
      authentication.password = "password"
    }

    datastax-java-driver.advanced.reconnect-on-init = true
    datastax-java-driver {
      basic.contact-points = ["127.0.0.1:9042"]
    }

    akka {
      persistence {
        journal.plugin = "cassandra-journal"
        snapshot-store.plugin = "cassandra-snapshot-store"

        cassandra {
          journal.keyspace = "move_service_test"
          journal.authentication.username = "test"
          journal.authentication.password = "password"
        }
      }
    }
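The NoClassDefFoundError above points at a Dropwizard Metrics version mismatch: in Metrics 4.x, JmxReporter moved out of metrics-core into the separate metrics-jmx module (package com.codahale.metrics.jmx), while the DataStax driver 3.x used by akka-persistence-cassandra 0.x still references com.codahale.metrics.JmxReporter from metrics-core 3.x. A build sketch along those lines (the 3.2.6 version number is an illustrative 3.x release, not something prescribed by the plugin):

```scala
// build.sbt sketch: pin metrics-core to the 3.x line so that
// com.codahale.metrics.JmxReporter is on the classpath for the
// DataStax driver 3.x. In Metrics 4.x that class moved to the
// metrics-jmx module under a different package, which is why
// metrics-core 4.0.0 triggers the NoClassDefFoundError.
val akkaCsdraPrst     = "com.typesafe.akka"     %% "akka-persistence-cassandra" % "0.104"
val dropwizardMetrics = "io.dropwizard.metrics"  % "metrics-core"               % "3.2.6"
```

Alternatively, driver 3.x lets you skip the JMX reporter entirely via `Cluster.Builder#withoutJMXReporting()` if you build the `Cluster` yourself.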

    Zhenhao Li
    @Zhen-hao
    does the cassandra plugin support "Persist multiple events" as in def persist[Event, State](events: im.Seq[Event]): EffectBuilder[Event, State] ?
    I've found some strange behavior
    Zhenhao Li
    @Zhen-hao
    I have a case of Effect.persist(Seq(event1)) that is supposed to bring the state from s0 to s1. I can see that event1 gets persisted and handled, and the logs show s1 as the output of the event handler. However, the final state is still s0.
    Zhenhao Li
    @Zhen-hao
    never mind, it was a bug caused by a wrong variable reference: there is always an update event followed by a deletion event, so the net effect is no change.
    Zhenhao Li
    @Zhen-hao
    so, as a side effect, I can confirm that persisting multiple events works just fine.
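The behavior described above follows from how a persisted sequence is applied: the event handler is folded over the events in order, so the final state is the handler applied to each event in turn. A minimal plain-Scala sketch (no Akka; `Updated`/`Deleted` and the state type are hypothetical stand-ins) showing how an update immediately followed by a deletion nets out to "still s0":

```scala
// Plain-Scala model of persisting multiple events: the final state is
// eventHandler(eventHandler(s0, e1), e2), i.e. a left fold over the events.
sealed trait Event
case object Updated extends Event
case object Deleted extends Event

// Hypothetical state: an optional value.
def eventHandler(state: Option[String], event: Event): Option[String] =
  event match {
    case Updated => Some("updated")
    case Deleted => None
  }

def applyAll(s0: Option[String], events: Seq[Event]): Option[String] =
  events.foldLeft(s0)(eventHandler)

// An update followed by a deletion leaves the state unchanged,
// which is exactly the "final state is still s0" symptom above.
val result = applyAll(None, Seq(Updated, Deleted))
```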
    pruthvi578
    @pruthvi578
    I get this error from the application; the username and password are correct:
    2020-11-19 06:29:51.714UTC WARN [SimplivityHeartbeatInfoSightCluster-akka.actor.default-dispatcher-20] a.p.c.q.s.CassandraReadJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /15.112.131.35:9042: Provided username cassandraadmin and/or password are incorrect
    2020-11-19 06:29:51.739UTC WARN [SimplivityHeartbeatInfoSightCluster-akka.actor.default-dispatcher-20] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /15.112.131.35:9042: Provided username cassandraadmin and/or password are incorrect
    2020-11-19 06:29:51.742UTC WARN [SimplivityHeartbeatInfoSightCluster-akka.actor.default-dispatcher-20] a.p.c.s.CassandraSnapshotStore - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /15.112.131.35:9042: Provided username cassandraadmin and/or password are incorrect
    2020-11-19 06:29:51.754UTC ERROR [SimplivityHeartbeatInfoSightCluster-akka.actor.default-dispatcher-18] c.h.s.a.RdaTaskManagerActor - Persistence failure when replaying events for persistenceId [hou4-heartbeat-rda-manager]. Last known sequence number [0] com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /15.112.131.35:9042: Provided username cassandraadmin and/or password are incorrect
    Didac
    @umbreak
    Good morning. Akka Persistence provides eventsByTag, and with akka-persistence-cassandra you get all events with a certain tag, ordered. Is there a simple way to get an ordered stream of all events, regardless of tag (or even without a tag)?
    Or the way to go is to add the same tag to all events and then do eventsByTag(myConstantTag)?
    Vadim Bondarev
    @haghard
    @umbreak When you say “a stream of all events” it is not clear what exactly you’re asking about. Do you have multiple persistentIds ?
    Didac
    @umbreak
    yes sure, many persistenceIds
    eventsByTag streams, in order, all events that have been tagged with a certain name. What if I want to stream all events regardless of the tag?
    Vadim Bondarev
    @haghard
    You may go with “add the same tag to all events and …”, but this approach has some deficiencies, e.g. processing all events in one single place could become a bottleneck. To improve it you can combine tags and cluster-sharding (https://www.youtube.com/watch?v=6ECsFlNNIAM)
    Didac
    @umbreak
    the approach of merging the streams would work too, right?
    eventsByTag(“a”), eventsByTag(“b”) and then using something like ZipWith[A,B,...,Out] from Akka Streams
    Vadim Bondarev
    @haghard
    Well yes, but you need to know all your persistenceIds upfront
    Didac
    @umbreak
    why? which element to pick in the ZipWith should be decided by the timestamp value. Wouldn’t that be enough?
    Vadim Bondarev
    @haghard
    Sorry, I mean all tags
    [a, b] in your example
    Didac
    @umbreak
    Ah yes, that’s no problem :)
    Vadim Bondarev
    @haghard
    BTW, if you want to merge more than 2 sources, MergeHub would probably be a better choice.
    MergeHub allows a dynamic number of sources
    Didac
    @umbreak
    Thanks for the tip. Although in my case it is not dynamic; we know the number of tags beforehand. So in this case I guess ZipWith should be the one, discriminating by timestamp. Maybe MergeSequence works too
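For merging a fixed set of per-tag streams by timestamp, Akka Streams' `mergeSorted` combinator (which picks the next element by an `Ordering`, much like the ZipWith idea above) is arguably the closest fit. A plain-collections sketch of the idea, with illustrative names that are not from the plugin API:

```scala
// Stand-in for sourceA.mergeSorted(sourceB)(Ordering.by(_.timestampMs)):
// combine per-tag event streams into one stream ordered by timestamp.
final case class Envelope(tag: String, timestampMs: Long, payload: String)

def mergeByTimestamp(a: List[Envelope], b: List[Envelope]): List[Envelope] =
  (a ++ b).sortBy(_.timestampMs) // mergeSorted does this incrementally

val tagA = List(Envelope("a", 1, "e1"), Envelope("a", 3, "e3"))
val tagB = List(Envelope("b", 2, "e2"))
val merged = mergeByTimestamp(tagA, tagB).map(_.payload)
```

Note that `mergeSorted` assumes each input is already ordered, which holds for each individual eventsByTag stream; global ordering across tags is only as good as the timestamps being comparable across writers.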
    pruthvi578
    @pruthvi578
    Hello.. We have an Akka application which persists its events in Cassandra. We observed that some actors were stopped due to a memory issue. When we restarted the application, the actors were not able to restart and failed with an AskTimeoutException. We then deleted the Cassandra keyspace and restarted the application, and it started without any failure. Is there any way to recover the corrupted entries in Cassandra?
    Dan Di Spaltro
    @dispalt
    Is there a way to configure Akka Persistence Cassandra to dual-write? My use case: I am trying to migrate infrastructure.
    Kazunari Mori
    @kazzna
    Hi.
    How can I turn off the DEBUG logs from akka-persistence-cassandra? I'm using version 1.0.4.
    I put akka.loglevel = OFF and akka.stdout-loglevel = OFF in my application.conf; that worked for my application logs, but I still get DEBUG logs like the ones below.
    23:56:41.709 [Blogs-akka.persistence.cassandra.default-dispatcher-7] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
    23:56:41.762 [Blogs-akka.persistence.cassandra.default-dispatcher-7] DEBUG com.datastax.oss.driver.internal.core.util.Reflection - Could not load com.esri.core.geometry.ogc.OGCGeometry: java.lang.ClassNotFoundException: com.esri.core.geometry.ogc.OGCGeometry
    java.lang.ClassNotFoundException: com.esri.core.geometry.ogc.OGCGeometry
        at java.lang.Class.forNameImpl(Native Method)
        at java.lang.Class.forName(Class.java:345)
        at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:55)
        at com.datastax.oss.driver.internal.core.util.DependencyCheck.isPresent(DependencyCheck.java:68)
        at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildCodecRegistry(DefaultDriverContext.java:570)
        at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.<init>(DefaultDriverContext.java:255)
        at com.datastax.oss.driver.api.core.session.SessionBuilder.buildContext(SessionBuilder.java:721)
        at com.datastax.oss.driver.api.core.session.SessionBuilder.buildDefaultSessionAsync(SessionBuilder.java:666)
        at com.datastax.oss.driver.api.core.session.SessionBuilder.buildAsync(SessionBuilder.java:598)
        at akka.stream.alpakka.cassandra.DefaultSessionProvider.connect(CqlSessionProvider.scala:53)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSession.<init>(CassandraSession.scala:53)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry.startSession(CassandraSessionRegistry.scala:99)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry.$anonfun$sessionFor$1(CassandraSessionRegistry.scala:84)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry$$Lambda$39711/0x0000000014690220.apply(Unknown Source)
        at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry.sessionFor(CassandraSessionRegistry.scala:84)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry.sessionFor(CassandraSessionRegistry.scala:74)
        at akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry.sessionFor(CassandraSessionRegistry.scala:64)
        at akka.persistence.cassandra.snapshot.CassandraSnapshotStore.<init>(CassandraSnapshotStore.scala:64)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at akka.util.Reflect$.instantiate(Reflect.scala:73)
        at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:101)
        at akka.actor.Props.newActor(Props.scala:226)
        at akka.actor.ActorCell.newActor(ActorCell.scala:613)
        at akka.actor.ActorCell.create(ActorCell.scala:640)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:513)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:535)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:295)
        at akka.dispatch.Mailbox.run(Mailbox.scala:230)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
    23:56:41.763 [Blogs-akka.persistence.cassandra.default-dispatcher-7] INFO com.datastax.oss.driver.internal.core.context.InternalDriverContext - Could not register Geo codecs; this is normal if ESRI was explicit
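A likely explanation for the logs above: `akka.loglevel` only filters events that go through Akka's logging adapter, while the DataStax driver and Netty log directly to SLF4J, so their levels must be set in the logging backend itself. A sketch assuming Logback is the backend (logger names taken from the log lines above):

```xml
<!-- logback.xml sketch: silence SLF4J loggers that bypass akka.loglevel. -->
<configuration>
  <logger name="com.datastax.oss.driver" level="WARN"/>
  <logger name="io.netty" level="WARN"/>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```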
    James Nowell
    @jamescnowell

Maybe someone here can help debug an issue we've been seeing on v0.102.

    We're using eventsByTag with a partitioned set of tags (baseTag-{shardId}) to create some projections on the read side.

    Under light or medium load, we're able to "keep up" quite successfully (we measure the TimeBasedUUID we're currently writing vs. the current time).

    When we increase the load, we see our delay "sawtoothing". For some amount of time, a given tag may keep up, but eventually slows down and gets further and further behind.

    We have an internal check to detect the lag, and restart the stream if the lag is above a certain threshold. After the restart, the stream is very easily able to catch up almost instantly, but eventually lags behind again.

    Most of our effort so far has been focused on the Sink side of the stream, assuming we're somehow writing data slower than we can read it. After extensive testing, I'm no longer sure this is the case. We've added relatively large buffers to give us the best chance of writing larger, high-throughput batches to our projected views, but those batches are nowhere near full.

    It seems as if the eventsByTag simply isn't pulling enough events for some reason, or otherwise the events are not being written to tag_views in a timely fashion. It seems more likely to be query related, not write related since we're able to catch up almost instantly.

    Any ideas where to look to understand more about why eventsByTag is slowing down?
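For a lag investigation like this, the query-side and tag-writer settings are the usual place to start. The sketch below lists the knobs I would inspect first; the setting names and values are assumptions recalled from the 0.x reference.conf and should be verified against the reference.conf shipped with your exact version before use:

```hocon
# Sketch only — confirm names/defaults in your version's reference.conf.
cassandra-journal.events-by-tag {
  # How often buffered tagged events are flushed to the tag_views table.
  flush-interval = 0ms
  # How many tagged events may be written to tag_views per batch.
  max-message-batch-size = 150
  # How far behind "now" the live query deliberately stays to avoid
  # missing late writes; too large a value shows up as constant lag.
  eventual-consistency-delay = 5s
}
cassandra-query-journal {
  # Polling interval and demand buffer for the live eventsByTag query.
  refresh-interval = 3s
  max-buffer-size = 500
}
```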

    Laurynas Tretjakovas
    @n3ziniuka5
    Hello everyone. The documentation says that cassandra-plugin.journal.table-autocreate should only be used for local testing. But why is the default schema not good for production? With keyspace it is obvious since you need to create it manually with desired replication.
    Zhenhao Li
    @Zhen-hao
    I think it is more for preventing mistakes. You are in HUGE trouble if the keyspace name gets changed at any time in production. During dev, people may change it, and it can get merged to master without anyone noticing.
    Johan Andrén
    @johanandren
    One reason is that schema creation is not atomic and if you run a cluster you will have N nodes trying to create schema potentially at the same time which wreaks havoc with what actually gets created.
    Zhenhao Li
    @Zhen-hao
    do you recommend using DURABLE_WRITES = false?
    it seems to me safe enough with NetworkTopologyStrategy @johanandren
    Johan Andrén
    @johanandren
    I don’t see how NetworkTopologyStrategy has any effect on that each Akka cluster node tries to create the schema, potentially concurrently, but maybe I’m missing something.
    Zhenhao Li
    @Zhen-hao
    sorry, my question was not related to the previous topic. it is just about configuring Cassandra keyspace for good throughput + durability
    Laurynas Tretjakovas
    @n3ziniuka5
    Thanks @johanandren
    Johan Andrén
    @johanandren
    @Zhen-hao Ah, I see. I’m afraid I don’t have expertise in Cassandra enough to say if that is safe or not.
    Roy de Bokx
    @roy-tc
    @n3ziniuka5 @Zhen-hao, I can confirm that @johanandren's answer is correct (regarding the havoc wreaked when multiple nodes start to create a schema). We've experienced this in our test environment, where each PR is tested in its own environment created from scratch. Having multiple services update the schema (e.g. create keyspaces and tables) simultaneously through different Cassandra nodes drove Cassandra absolutely insane. It resulted in a weird "echoing" effect, where we saw resource consumption skyrocket, caused by Cassandra's gossip protocol.
    It took a while before we figured out what caused it (and that we should have RTFM); in the worst case it took several hours before our Cassandra cluster calmed down, even after all services were turned off.
    We now run a Jenkins script directly against Cassandra to initialize the keyspaces before starting the services, and it hasn't failed since. This also decreases the startup time of the service itself, as it doesn't have to check keyspaces first.
    I hope this helps :)
    Zhenhao Li
    @Zhen-hao
    thanks for the tip! @roy-tc
    we didn't notice the issue in our local test environment, but we do use a script to set up Cassandra in our cloud environment
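A pre-deployment schema script of the kind described above might look like the following CQL, run once before any service starts. The keyspace names and replication settings here are illustrative assumptions; the exact journal and snapshot table DDL should be taken from the plugin's schema documentation for your version:

```sql
-- Illustrative keyspace bootstrap; run via cqlsh before starting services.
CREATE KEYSPACE IF NOT EXISTS akka
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
CREATE KEYSPACE IF NOT EXISTS akka_snapshot
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
-- ...followed by the plugin's journal/snapshot table DDL for your version.
```

Running this from a single place (CI job, init container) sidesteps the concurrent-schema-creation problem entirely, since only one client ever issues DDL.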
    Zhenhao Li
    @Zhen-hao

it seems that the Cassandra connection happens only on the first read/write request, because I see logs like

    [com.datastax.oss.driver.internal.core.adminrequest.AdminRequestHandler] [] [s0-io-2] - [s0] Executing query 'SELECT * FROM system_schema.tables' {}

is there a way to make it not lazy? I want the cluster to fully load the Cassandra metadata at boot, before handling requests
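One approach (a sketch, not an officially documented eager-init switch: it assumes the Alpakka `CassandraSessionRegistry` API visible in the stack traces earlier in this thread, and the 1.x config path "akka.persistence.cassandra") is to touch the session during startup so the driver connects and loads metadata before the system serves traffic:

```scala
// Sketch: force the plugin's session to initialize at boot by issuing a
// trivial query. Blocking with Await here is acceptable only because this
// runs once during startup, before the system takes traffic.
import akka.actor.typed.ActorSystem
import akka.stream.alpakka.cassandra.scaladsl.CassandraSessionRegistry
import scala.concurrent.Await
import scala.concurrent.duration._

def warmUpCassandra(system: ActorSystem[_]): Unit = {
  val session = CassandraSessionRegistry(system).sessionFor("akka.persistence.cassandra")
  Await.result(session.selectOne("SELECT release_version FROM system.local"), 30.seconds)
}
```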

    pasikon
    @pasikon
    Hello, I'm experimenting with CosmosDB. It works OK, but after several hours I get the errors below. The interesting thing is that the service keeps working: events-by-tag are still projected to the read side, etc. Maybe I have missed some general DataStax Java driver setting? After restarting the service, the errors are gone for several hours.
    [2021-02-25 20:06:27,877] [WARN] [akka.persistence.cassandra.query.EventsByTagStage] [SonarBookerActorSystem-akka.actor.default-dispatcher-11] [EventsByTagStage(akka://SonarBookerActorSystem)] - Backtrack failed, this will retried. java.lang.IllegalStateException: Tried to execute unprepared query 0x2bfde86b11f8ced501fa245f3bb2b83c but we don't have the data to reprepare it
    [2021-02-25 20:06:28,897] [WARN] [akka.persistence.cassandra.query.EventsByTagStage] [SonarBookerActorSystem-akka.actor.default-dispatcher-5] [EventsByTagStage(akka://SonarBookerActorSystem)] - Backtrack failed, this will retried. java.lang.IllegalStateException: Tried to execute unprepared query 0x2bfde86b11f8ced501fa245f3bb2b83c but we don't have the data to reprepare it
    [2021-02-25 20:06:29,046] [WARN] [akka.persistence.cassandra.query.EventsByTagStage] [SonarBookerActorSystem-akka.actor.default-dispatcher-12] [EventsByTagStage(akka://SonarBookerActorSystem)] - Cassandra query failed: No node was available to execute the query
    [2021-02-25 20:06:29,049] [WARN] [akka.stream.scaladsl.RestartWithBackoffSource] [SonarBookerActorSystem-akka.actor.default-dispatcher-3] [RestartWithBackoffSource(akka://SonarBookerActorSystem)] - Restarting graph due to failure. stack_trace: com.datastax.oss.driver.api.core.NoNodeAvailableException: No node was available to execute the query
    Sergei Egorov
    @bsideup
    Hi! Is there any cure for "Column family ID mismatch"?
    Sergei Egorov
    @bsideup
    Also, does anyone know what can cause "Circuit Breaker Timed out" on journal writes? Cassandra is running and Akka is able to connect to it, yet I consistently get "Circuit Breaker Timed out". Thanks in advance!