
Chris Bowden
@cbcwebdev
what is the relative priority of: akka/akka-grpc#252
Jason Pickens
@steinybot
Do the TestSource and TestSink add an async boundary?
Jason Pickens
@steinybot
Hmm nope it seems to be my flow...
Jason Pickens
@steinybot
Can’t tell why my flow is on an island by itself (Wilson!!!). The wiring says the currentPhase is SourceModulePhase and targetSegment.phase is GraphStagePhase if that means anything to anyone.
Jason Pickens
@steinybot
Oh it’s not on its own, it is with the ProbeSource. Is it the ProbeSink separating them?
Jason Pickens
@steinybot
Ah so actually both ProbeSource and ProbeSink create islands for themselves. That is annoying. I don’t remember that being the case.
How can I test that my flow is pulling at the correct times if there is this automatic input buffer put in between it and the source probe? Do I just have to set the buffer to 1 and expect it to always be there?
Jason Pickens
@steinybot
Since 2.5.0-RC1 by the looks.
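If pinning the buffer down is acceptable, Akka Streams lets you shrink the internal input buffer, either per stage via `Attributes.inputBuffer(initial = 1, max = 1)` (e.g. with `.addAttributes(...)` on the flow under test) or globally in configuration. A sketch of the global variant:

```hocon
akka.stream.materializer {
  initial-input-buffer-size = 1
  max-input-buffer-size = 1
}
```

With a buffer of 1 the probe still sees one element buffered ahead of demand, but the timing becomes deterministic enough to assert on.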
anilkumble
@anilkumble
What is the read/write consistency level in Akka Persistence Cassandra?
Igmar Palsenberg
@igmar
QUORUM by default.
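For reference, both levels are configurable. A hedged sketch of the relevant settings (key names as used by akka-persistence-cassandra 0.x; check your plugin version, as 1.x moved consistency configuration into the DataStax driver profile):

```hocon
cassandra-journal {
  read-consistency  = "QUORUM"
  write-consistency = "QUORUM"
}
```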
anilkumble
@anilkumble
Okay :thumbsup:
Can we use simple snitching in a production environment?
Igmar Palsenberg
@igmar
?
anilkumble
@anilkumble
There is a property in Cassandra called snitching. I am asking whether SimpleSnitch is advisable in production? Any idea on this?
Igmar Palsenberg
@igmar
SimpleSnitch is for single-DC Cassandra deployments.
anilkumble
@anilkumble
Okay, what should be used in a multi-DC environment?
Igmar Palsenberg
@igmar
Snitches are server side, not client side.
And they depend on the use case and infra setup. There is no single answer to that question.
But as said: that is server side, the client doesn't care.
anilkumble
@anilkumble

But as said : That is serverside, the client doesn't care.

Oh, you mean we don't have to worry about it?
I mean, don't we have control over this property?

Igmar Palsenberg
@igmar
It's a property set in cassandra.yaml.
And yes, you need to pick the right one. But as said: that is done once; you can't (easily) change it after it's set and data is present.
anilkumble
@anilkumble
Okay
RG-707
@RG-707
How can I limit the threads used by Akka?
6 replies
Igmar Palsenberg
@igmar
Why would you?
RG-707
@RG-707
For comparability with CAF and Erlang; in my test I spawn many systems in a Mininet virtual network.
3 replies
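One common knob for bounding threads is the default dispatcher's executor settings. A sketch (the values here are illustrative, not recommendations; each extra dispatcher you define adds its own pool on top):

```hocon
akka.actor.default-dispatcher {
  fork-join-executor {
    parallelism-min = 1
    parallelism-factor = 1.0
    parallelism-max = 2
  }
}
```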
anilkumble
@anilkumble

I am using the Cleanup API to delete data. Everything gets deleted except tag_scanning.

cleanup.deleteAll("id", true);
cleanup.deleteAllTaggedEvents("id");
May I know the reason for this?

Trond Bjerkestrand
@tbjerkes_twitter
Hi, personally I'm not a big fan of the new runtime check for 'Unsupported access to ActorContext'. It's certainly nice when you catch these exceptions during testing and avoid a potential bug, but as context.log is often used in exceptional circumstances and easily slips into a future callback by accident, it makes me scared to use Akka 2.6.6 and later in production.
Would it be possible to add further configuration to this feature, so one could replace the exceptions with logged warnings or turn it off?
9 replies
corentin
@corenti13711539_twitter
Hi, anyone know why the Akka v2.6 Scheduler API doesn't provide a scheduleWithFixedDelay variant that accepts a function instead of a Runnable, even though such a variant of scheduleOnce does exist?
6 replies
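One workaround, relying on Scala 2.12+ SAM conversion: a `() => Unit` literal can be passed directly where a `Runnable` is expected, so no explicit anonymous class is needed. A minimal sketch with `runTwice` as a stand-in for the Runnable-taking scheduler API:

```scala
// Stand-in for a Runnable-taking API like Scheduler.scheduleWithFixedDelay.
def runTwice(task: Runnable): Unit = { task.run(); task.run() }

var count = 0
// Scala 2.12+ converts a () => Unit literal to a Runnable (SAM conversion),
// so no `new Runnable { ... }` wrapper is required.
runTwice(() => count += 1)
```

Against the real classic Scheduler the same trick should let you write `scheduler.scheduleWithFixedDelay(initialDelay, delay)(() => doWork())` directly.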
Paul DeMarco
@pauldemarco
Is there a book out there someone can recommend that lays out various business processes and how they are implemented in the Actor Model?
I have the following process: an order is created, a carrier is found to ship the order, and the order is added to the carrier's todo list.
I'm looking for a reference that explains the various actors I can use to implement such functionality.
Igmar Palsenberg
@igmar
It doesn't differ from any other approach.
Miklos Szots
@smiklos
Hi folks,
is this the right channel to ask for advice on akka streams?
Eli
@mosteli
Is it safe to pass the classic Akka scheduler through a Future? Are schedulers thread-safe?
Seeta Ramayya
@Seetaramayya

Hi,

I am trying to consume the entity data on the server into memory and then create a strict entity from it, but I am getting the following error message. Can you please help me?

Substream Source cannot be materialized more than once

Here is the code,

if (!entity.isChunked()) {
  entity.dataBytes.runFold(ByteString.empty)(_ ++ _).flatMap { encryptedBody =>
    decrypt(encryptedBody).map { decrypted =>
      HttpEntity(decrypted)
    }
  }
}

AkkaHttp Version: “10.0.13”

7 replies
Brian Maso
@bmaso
@smiklos Yes, this is a good place to ask streams questions.
Dave New
@davenewza
Hi. I'd like to hear thoughts on sharing immutable "singleton" state between actors in a non-clustered system. It's quite clear as to why a mutable shared state works against a lot of the guarantees that the actor model provides, but what if that state is immutable? In our case, we have a large dataset which needs to be readily available in memory in order for us to perform operations on it (it's a complex graph which is traversed). For every actor to hydrate the dataset from persistence and hold onto their own copy in memory is quite expensive. A solution to this is to inject a single instance of this data into each actor, thus having them use the same memory space. What are some drawbacks of this approach? Actors are essentially then sharing objects on the heap.
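Constructor injection of the shared immutable value is indeed the usual shape of this. A minimal stand-alone sketch (names `GraphData` and `Worker` are hypothetical; in Akka the dataset would be passed through `Props` or the behavior factory), showing that each worker holds a reference rather than a copy:

```scala
// Hypothetical stand-in for the large immutable dataset.
final case class GraphData(edges: Map[String, List[String]])

// Stand-in for an actor that receives the dataset at construction time.
class Worker(val graph: GraphData)

val shared = GraphData(Map("a" -> List("b", "c")))
val w1 = new Worker(shared)
val w2 = new Worker(shared)
// Both workers reference the same object on the heap, so memory is paid once.
```

Since the structure is deeply immutable, the usual drawbacks are operational rather than correctness-related: you can't refresh it for one actor without a coordination mechanism, and a reference accidentally capturing mutable internals would reintroduce shared mutable state.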
Miklos Szots
@smiklos

@bmaso Thanks. Here it goes:

I've a stream of job requests coming in and need to do some processing on them. The thing is that I would like to refresh some enrichment related data periodically that is needed for the processing of these jobs.
I've come up with two solutions.

  1. A cached stream that repeatedly emits the enrichment loaded from the DB, and via a timeout on the stream I recursively reload the source. This works OK, but it's a bit strange, since the enrichment is rather part of a processing stage than of the messages/jobs themselves, so zipping the two looks odd. The idea here is that the repeated source will always emit, so it's safe to zip it with the jobs stream to form a tuple for processing a job with some enrichment.
  2. I could create an actor that does the caching and receives a periodic message via a scheduled task to reload the enrichment. Then when processing the jobs, each job would be handled via a flatMapConcat where I can ask the actor for the latest enrichment and create the graphStage according to that information for every message.

I think the second approach is better. I'm wondering whether there is any other way or recommendation for doing such a thing. (I could of course reload the data via some crappy counter that reloads on every nth hit, but the timer would be better, and I'm also looking for a rather 'pure' streaming/Akka solution.)

anilkumble
@anilkumble
By default, how many threads will an Akka actor system spawn?
And how can we control that?
anilkumble
@anilkumble
What is the use of the Receptionist?
Can we use it to communicate messages between app server groups?
Brian Maso
@bmaso
@smiklos I agree the second approach seems simpler. Use of flatMapConcat makes it much more obvious what the code is doing. Similarly, I prefer any mapping combinator to more arcane or hard-to-follow code that uses lower-level graph APIs.
1 reply
hbergmey
@hbergmey
Hello everyone. I am on akka-http 10.1.12, implementing a data processing flow that consecutively accesses several REST APIs and POSTs a result to another service. All of the services may respond with 503 at any time when overloaded, so I have to respect the Retry-After header and backpressure my data flow, which begins with a QueueSource. I am having a hard time finding good examples for this most probably very common use case. The Akka HTTP documentation states that the common pool behind Http().singleRequest(...) does retries by default, but I do not know for which HTTP statuses, and I would like to apply explicit rules about which kinds of responses are actually retried, how often, and with delays based on the Retry-After header if available. To me this sounds so basic that it should not be that complicated to accomplish. Does anybody know of good code examples?
hbergmey
@hbergmey
Basically I would like to wrap the common behavior around Http().singleRequest() and just use my wrapper in several flow stages. I could as well instantiate a pool flow for each stage, but I still do not see how to implement the custom retry behaviors. Do I actually have to build a recursive flow with a delay stage myself?
Jan Ypma
@jypma

@hbergmey Yes, custom code probably would be the way to go. Akka's built-in retry, I think, can only be configured via max-retries, and reading https://github.com/akka/akka-http/blob/324f985b9d6a66518b789eaf0cccb9f6bf400079/docs/src/main/paradox/client-side/host-level.md#retrying-a-request, I think it is only meant to retry requests for which no response was received at all. So retrying on 503 you'd have to code yourself.

However, I don't think you'd need a Flow for that... just a simple recursive method tryRequest(rq: HttpRequest, retriesLeft: Int): Future[HttpResponse] could do it, which would call itself with retriesLeft - 1, after a delay, on error.
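A sketch of that recursive approach using only the standard library (`tryRequest`, `after`, and the plain `ScheduledExecutorService` delay are stand-ins; in a real Akka app, `akka.pattern.after` with the system scheduler would be the idiomatic delay, and `request` would wrap `Http().singleRequest(...)` mapped to a failure on 503):

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{Await, ExecutionContext, Future, Promise}
import scala.concurrent.duration._

object RetrySketch {
  implicit val ec: ExecutionContext = ExecutionContext.global

  // Single-threaded scheduler used only for the retry delay (daemon so the JVM can exit).
  private val scheduler = Executors.newSingleThreadScheduledExecutor { (r: Runnable) =>
    val t = new Thread(r); t.setDaemon(true); t
  }

  // Completes the returned Future with `body` after `delay`.
  def after[T](delay: FiniteDuration)(body: => Future[T]): Future[T] = {
    val p = Promise[T]()
    scheduler.schedule(
      new Runnable { def run(): Unit = p.completeWith(body) },
      delay.toMillis, TimeUnit.MILLISECONDS)
    p.future
  }

  // Recursive retry: on failure, wait `delay` and try again until retries run out.
  def tryRequest[T](request: () => Future[T], retriesLeft: Int, delay: FiniteDuration): Future[T] =
    request().recoverWith {
      case _ if retriesLeft > 0 => after(delay)(tryRequest(request, retriesLeft - 1, delay))
    }
}
```

For example, a request function that fails twice and then succeeds completes after two retries; a Retry-After-aware variant would parse the header from the failed response and pass that duration to `after` instead of a fixed delay.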

hbergmey
@hbergmey
@jypma thanks for your response. I thought about a recursive method too. I just wonder, whether error handling and retry rules should not be handled on the pool level. Especially because an overloaded service will most probably respond to separate invocations in the same way until resources are free again. But I guess as a first step, I am going to wrap this in a retrying method, as you suggested.
Jan Ypma
@jypma

It's still going to hit the http pool underneath, where queueing is implemented (according to max-open-requests / max-connections), so that'll give you some safety w.r.t. concurrency.

A side remark: if your downstream services are indeed yielding 503 because they're overloaded, you may want to consider a proper circuit breaker instead (akka has one). That'll make your downstream services a lot happier than a blind retry.

hbergmey
@hbergmey
I am working on a non-real-time data pipeline, so back-pressure should be sufficient because the number of clients is low. But I have several consecutive requests to different endpoints on the same server in the flow, and the 503s can be caused by AWS's limit of a thousand requests per client IP per minute. That's why I had hoped for a solution at the pool level. I guess I will have to implement this behavior in my HTTP client layer then.
Brian Maso
@bmaso
@hbergmey Is your app a clustered app or just a single-node app? Connection pooling is a lot different in a single VM vs. across a cluster.