Jean-Julien ALVADO
@jjalvado
Hi @leviramsey. I switched my code to use the StatusReply with the textual error and now I have no more issues! Thanks a lot :-)
Hamed Nourhani
@hnourhani

Hi guys, a question regarding akka-http and Http().newServerAt(...): we can configure server.max-connections, and as I checked the docs:

The number of concurrently accepted connections can be configured by overriding the akka.http.server.max-connections setting

https://github.com/akka/akka-http/blob/89ccd6bd69f2d408ec73d4923adf7cf170a5bdde/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala#L245

Question: what happens to incoming connections when max-connections is exceeded? It seems akka-stream will not drop any connection but will hold them until a currently open connection is closed by some party; and if no current connection closes, new incoming connections stay on hold and get a timeout on the client side.
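For reference, the relevant settings live under akka.http.server. A minimal application.conf sketch (the values shown are the library defaults at the time of writing, included only for illustration):

```hocon
akka.http.server {
  # Cap on concurrently open, accepted connections. When the cap is
  # reached the server simply stops accepting; further clients wait in
  # the OS-level accept backlog and may time out on their side.
  max-connections = 1024

  # Size of the TCP accept backlog handed to the OS listener socket.
  backlog = 100
}
```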

Hamed Nourhani
@hnourhani
Second question: can we be notified via the akka-http public API when the server drops or terminates a connection (any exception or event to listen for)?
Merlijn van Ittersum
@merlijn

Hi good people, I was trying to replicate this video streaming example using akka-http here:
https://medium.com/@awesomeorji/video-streaming-with-akka-http-9e95985bfe86

But then I encountered this new standard file directive here:
https://doc.akka.io/docs/akka-http/current/routing-dsl/directives/file-and-resource-directives/getFromFile.html

It seems to work out of the box, but I have questions about how it works and whether it is performant.

The blog example clearly uses RandomAccessFile to seek to the requested range. However, looking at the implementation of the akka directive, it seems FileIO.fromPath(path, chunkSize, position) is always started at position 0 (the beginning of the file) and then pulled through a sliceBytesTransformer which drops the bytes that are not needed.

Does this mean when I seek to the (near) end of a video the whole file is read (and promptly discarded / garbage collected)? I just do not see any path in FileSource advancing the position without reading the prior bytes. Maybe I am missing something?

1 reply
raboof
@raboof:matrix.org
[m]
@merlijn: interesting observation! indeed it looks like withRangeSupportAndPrecompressedMediaTypeSupport is nice and generic, but always starts at the beginning of the stream, and doesn't have a way to open the stream 'in the middle'. I agree that would be a useful optimization.
1 reply
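For comparison, the seek-based approach the blog post takes can be sketched with plain java.io (a toy illustration, not the directive's actual implementation): seeking moves the file pointer without reading the skipped bytes, so only the requested range is read from disk.

```scala
import java.io.RandomAccessFile
import java.nio.file.Files

// Toy illustration: write a 1000-byte file, then serve only the
// byte range [900, 1000) by seeking instead of reading from 0.
val tmp = Files.createTempFile("range-demo", ".bin")
Files.write(tmp, Array.tabulate[Byte](1000)(i => (i % 128).toByte))

val raf = new RandomAccessFile(tmp.toFile, "r")
raf.seek(900)                   // jump straight to the range start
val buf = new Array[Byte](100)
raf.readFully(buf)              // only these 100 bytes are read
raf.close()
Files.delete(tmp)
```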
Jean-Julien ALVADO
@jjalvado

Hello all. We are currently testing the clustering capabilities of Akka with our solution, which implements Cluster Singleton and Akka PubSub.

For the test, we are deploying our solution with a local config discovery method:

    management.cluster.bootstrap {
      new-cluster-enabled = on
      contact-point-discovery {
        required-contact-point-nr = 1
        service-name = "local-cluster"
        discovery-method = config
      }
    }
    cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"

And the nodes are defined like this:

  discovery {
    config.services = {
      local-cluster = {
        endpoints = [
          {
            host = "node1"
            port = 8558
          },
          {
            host = "node2"
            port = 8558
          },
          {
            host = "node3"
            port = 8558
          }
        ]
      }
    }
  }

As we did not specify the downing strategy, my understanding is that the default strategy is keep majority (which is fine for us).

Our tests show the following:

  • A three-node cluster works as expected. We kill a node; the cluster evicts it and keeps operating as a two-node cluster. We kill a second node; the cluster (reduced to the last node) evicts it and keeps working as a single-node cluster;
  • In a two-node cluster we see the following behaviour: when we kill a node, the last node standing considers itself in a split-brain situation (which we do understand) and automatically shuts down.

Long story short, my question is: assuming we are willing to take the risk of a split-brain situation, is there a way to tell Akka cluster that if a node fails abruptly in a two-node cluster, the last standing node should keep operating as a single-node cluster?

Thanks a lot in advance for your help
Merlijn van Ittersum
@merlijn

@jjalvado Have you thought this through? I would think that (generally speaking) the chance of a network partition is much greater than a node failure. You might be setting yourself up for disaster. I think there used to be an auto-downing option in akka cluster, I don't know about the current state. It might have been removed, which is probably a good thing.

I would have a read at the available strategies for SBR here:
https://doc.akka.io/docs/akka/current/split-brain-resolver.html

If none suits you there is also the option to write your own downing class using akka.cluster.downing-provider-class. This is a bit more involved. There might also be 3rd party downing providers, I don't know.

Jean-Julien ALVADO
@jjalvado
Hi @merlijn. We know having a two-node cluster is a spectacularly bad idea because of split brain. Still, we are wondering if there is any way to make it work anyway :-). We did take a look at the other strategies after carefully reading the link you provided, and we even tested static-quorum with the minimal number of nodes set to 1... Same behaviour.
Anyway, due to the role of the Cluster Singleton, if the application were to actually create two separate clusters, we would have strange behavior in the application but we would not mess with the data stored in our backend
So I would say that this would be a mess but not a disaster :smile:
but it can be reactivated. I will try that, thanks!
Merlijn van Ittersum
@merlijn

I know that there used to be a way:
https://doc.akka.io/docs/akka/2.4/java/cluster-usage.html

Search on this page for "auto-down-after". From what I remember it does exactly what you want. But the akka developers (which I am not, FYI) have reworked this part quite extensively. It still might be possible using a custom downing provider

Aha, it can be re-activated? Well, then... good luck!
Levi Ramsey
@leviramsey
It's worth explicitly noting that "majority" means, effectively, ceil(nodes/2 + 0.5), so for 2 nodes, the majority is 2 nodes.
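Levi's formula can be sanity-checked with a few lines of plain Scala (just arithmetic, nothing Akka-specific):

```scala
// Majority = strictly more than half of the cluster size:
// ceil(n / 2 + 0.5), equivalently n / 2 + 1 in integer division.
def majority(clusterSize: Int): Int =
  math.ceil(clusterSize / 2.0 + 0.5).toInt

// majority(2) == 2: a lone survivor of a 2-node cluster is a minority.
// majority(3) == 2: a 3-node cluster survives losing one node.
// majority(4) == 3, majority(5) == 3, ...
```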
Jean-Julien ALVADO
@jjalvado
Hi @leviramsey, then what would the math be for a 3-node cluster? What I don't get is why a three-node cluster would tolerate losing 2 nodes while a 2-node cluster would not tolerate losing one... For 3 nodes, would the math be ceil(nodes/3 + 0.5), i.e. ~0.88, so one node is enough?
because having a three-node cluster lose 2 nodes is, in my opinion, as bad as a 2-node cluster losing 1....
Merlijn van Ittersum
@merlijn

@jjalvado
Obviously losing 2 nodes in a 3 nodes cluster is worse than losing 1.
When losing 1 node, assuming the 2 remaining nodes can still communicate, they can form a majority. The math for calculating the majority stays the same regardless. Think about it: what does majority mean? Majority means more than half, so you divide by 2 (nodes/2). The 0.5 is added because you could end up in an even split, like 2-2 or 3-3, etc. So, for example, the majority in a 4-node cluster is 3.

I also do not understand the outcome of the tests you did. If you are purely using the Keep Majority strategy, the last remaining instance should also down itself, even when starting with 3 nodes. How exactly are you killing the node? There might be a JVM shutdown hook that gracefully leaves the cluster, in which case the remaining node knows it is the only one left and can safely remain up. But tbh I am not sure about this last point; it has been a while since I operated akka clusters.

Merlijn van Ittersum
@merlijn
Sorry, I misread your remark, in a 2 node cluster losing 1 node IS as bad as losing 2 nodes in a 3 node cluster. You are right
In both cases you lose majority
Jean-Julien ALVADO
@jjalvado
Hi @merlijn. First, auto-down-after was removed from Akka, i.e. adding the 'auto-down-after' configuration does not affect the behaviour of the cluster. Regarding your last point, that would make sense, but it is not what the results of our test show. We set up a three-node cluster on three VMs. Singletons are running on node 1. We kill the java process on node 1 (kill -9) -> after 10s node 1 is evicted from the cluster and a 2-node cluster is reformed. The singleton migrates to node 2. We kill the java process on node 2 (kill -9) -> after 10s node 2 is evicted from the cluster and the singleton is moved to node 3. Node 3 keeps operating as a single-node cluster and does not shut down. That would make sense if the formula to calculate majority were (alive nodes / cluster size) + 0.5. With that formula, majority would be reached with 0.88 node, i.e. 1 node for a three-node cluster. Am I correct? Thanks a lot for your help guys. This is really appreciated.
Levi Ramsey
@leviramsey
That's not the majority calculation
The third node shouldn't operate as a single node cluster in that situation
The only way it should happen is if kill -9 leads to a graceful exit, which would shrink the cluster size to 1 (thus majority is 1, so a single-node cluster is OK).
Jean-Julien ALVADO
@jjalvado
Hi @leviramsey, kill -9 is a hard kill, so the akka/play stack should not terminate gracefully. I will make sure of this with some more tests...
Jean-Julien ALVADO
@jjalvado
well, you are right @leviramsey. Killing two nodes the ugly way -> the last node shuts down properly! Thanks
aflox
@aflox:matrix.org
[m]
Hi guys. I have a question about akka-http. I want to measure response times, similarly to the example given in the custom directives documentation, but it seems it only measures 'time to first byte', not the time until the whole response has been sent. Is this a correct observation?
Arunav Sanyal
@Khalian
Hi. Question -> Is it possible to set up akka streams with programmatic thread pools for dispatchers, instead of using config as described here: https://doc.akka.io/docs/akka/current/general/stream/stream-configuration.html ?
2 replies
Arunav Sanyal
@Khalian
And if that's not possible, is it instead possible to get a handle on the fork-join pool / other executor service that was used to set up the dispatcher?
benthecarman
@benthecarman
Hi, I am trying to use the UnixDomainSocket to make rpc requests to an application but some requests never get a response. I think it either might not be waiting long enough for a response or there is something wrong with how I am making the requests. Does anyone have any ideas?
ScalaBender
@ScalaBender
Hi, when upgrading to the latest Akka, using classic actors, we notice that TestProbes have become unresolvable when doing actorSelection. For instance, in a test case using ScalaTest with Akka TestKit, if I do TestProbe("MyProbe") and then system.actorSelection("*/MyProbe*").resolveOne(), no instance will be resolved. It seems that the TestProbe has become lazy, because if I poke it with ".ref" before doing the selection, it will be resolved. Backtracking this indicates that the change of behavior was introduced in Akka 2.6.13.
I have not seen any reports of this change of behavior, so does anyone know if it was deliberate? In my case the named TestProbe is injected and acts as a blackhole; no other calls are made on it, and production code does the lookup of it before usage. Does anyone know of a good workaround for materializing the probe, other than side-effecting on it?
Levi Ramsey
@leviramsey
That change was introduced in akka/akka#29956. It wouldn't surprise me if it's down to the testkit previously using a Scala feature (early initializers) which was deprecated in one of the 2.x versions and removed in Scala 3.
olgakabirova
@olgakabirova

Hi there! I have a question about Akka streams; I hope this is the right place. If not, please point me to where I can ask.

I have an Akka stream

      Source
        ....
        .groupBy(3, _.id % 3)
        .mapAsync(1) { record =>
          someAsyncAction(record)
        }
        .mergeSubstreams
        ....
        .Sink

I need to test that I have parallel processing after splitting into substreams. How can I do it? Thank you in advance
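One way to observe overlap, sketched here with plain Futures rather than an actual stream (someAsyncAction, inFlight, and maxSeen are hypothetical names): bump an atomic counter on entry, record the high-water mark, and decrement on exit. Putting the same counters inside the real someAsyncAction would reveal whether the three substreams actually run concurrently.

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val inFlight = new AtomicInteger(0)   // actions currently running
val maxSeen  = new AtomicInteger(0)   // high-water mark of overlap

def someAsyncAction(record: Int): Future[Int] = Future {
  val now = inFlight.incrementAndGet()
  maxSeen.getAndUpdate(m => math.max(m, now))
  Thread.sleep(50)                    // simulate work
  inFlight.decrementAndGet()
  record
}

val results =
  Await.result(Future.sequence((1 to 6).map(someAsyncAction)), 10.seconds)
// maxSeen.get() is typically > 1 on a multi-core pool when calls overlap
```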

charego
@charego
I see that 2.6.17 was tagged recently. Looking forward to the release notes :)
https://github.com/akka/akka/releases/tag/v2.6.17
(And the publish to maven central)
volgad
@volgad
Hi all, I am trying to interface a simple python component to a larger akka actor/stream system. The akka system is fully functional and running on a server, and I am trying to send a small json from python running on another machine to a specific actor (address/system fully known). I have a kafka server to which both the python component and the scala component are sending or receiving data, so I could use this as an interface, but is there any quicker way of sending a json to a remote actor (no response or ack, just fire and forget here)? I saw some people arguing for grpc, akka http or MQ, but this seems very complex for just sending a small json. Any reference/blog post would be super appreciated. Thank you
To be more precise, the akka actor is already receiving and handling json, since I am using a json representation to send/receive messages between remote actors. All I want is to send it from python running on another machine
Thanks
Per Wiklander
@PerWiklander
An Akka Http endpoint for this would be a few lines of code, not overly complex.
2 replies
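To illustrate Per's point about size, here is a sketch using the JDK's built-in com.sun.net.httpserver as a stand-in (not Akka HTTP itself; the /ingest path is made up): a fire-and-forget JSON ingest endpoint is only a handful of lines, and the handler body is where you would forward the payload to the target actor.

```scala
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.nio.charset.StandardCharsets

// Stand-in sketch: a tiny ingest endpoint. In the real system the
// handler would do `targetActor ! json` instead of println.
val server = HttpServer.create(new InetSocketAddress(0), 0) // port 0 = any free port
server.createContext("/ingest", exchange => {
  val json = new String(exchange.getRequestBody.readAllBytes, StandardCharsets.UTF_8)
  println(s"received: $json")            // forward to the actor here
  exchange.sendResponseHeaders(202, -1)  // 202 Accepted, no response body
  exchange.close()
})
server.start()
val port = server.getAddress.getPort     // where the python side should POST
```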
zvuki
@zvuki_twitter

Hi all, I am trying to take advantage of graceful shutdown using the code from the docs https://doc.akka.io/docs/akka-http/current/server-side/graceful-termination.html

val bindingFuture = Http().newServerAt("localhost", 8080).bind(routes)
  .map(_.addToCoordinatedShutdown(hardTerminationDeadline = 10.seconds))

I can't make it work though, it seems the connections are being dropped right away, no matter what deadline I set:

Oct 15, 2021 @ 16:18:34.106 -05:00    Aborting tcp connection to /127.0.0.1:41870 because of upstream failure: akka.http.impl.engine.server.ServerTerminationDeadlineReached: 
Server termination deadline reached, shutting down all connections and terminating server...
akka.http.impl.engine.server.GracefulTerminatorStage$$anon$1.installTerminationHandlers(ServerTerminator.scala:265)
akka.http.impl.engine.server.GracefulTerminatorStage$$anon$1.$anonfun$preStart$1(ServerTerminator.scala:211)
akka.http.impl.engine.server.GracefulTerminatorStage$$anon$1.$anonfun$preStart$1$adapted(ServerTerminator.scala:209)

Oct 15, 2021 @ 16:18:34.105 -05:00    [terminator] Initializing termination of server, deadline: 10.00 s

I can confirm that I have ~5 in-flight requests when it happens.

zvuki
@zvuki_twitter
hm maybe it's this akka/akka-http#3209
mgutmanis
@mgutmanis
Hi, would it be possible to know when the next akka-http version will be released with the latest fixes? I'm particularly interested in receiving the fix in akka/akka-http#3904
Sriram Sundharesan
@sriramsundhar
hey!! We are trying out akka EventSourcedBehaviorWithEnforcedReplies and want to use Lightbend Telemetry to publish some custom metrics, but I can't find how to access the actor system... Is there a way to access the actor system, or to push an ActorRef into an EventSourcedBehaviorWithEnforcedReplies implementation?
2 replies
Nicholas Connor
@nkconnor
Looking at Jackson's streaming JSON generator... it takes an OutputStream. I can't figure out how to adapt this to something like Source[ByteString], and I'm wondering if anyone from Javaland has suggestions
the gist of the Jackson API is:
ByteArrayOutputStream stream = new ByteArrayOutputStream();
JsonFactory jfactory = new JsonFactory();
JsonGenerator jGenerator = jfactory
  .createGenerator(stream, JsonEncoding.UTF8);

jGenerator.writeRaw("first json part");
jGenerator.writeRaw("second json part");
i'd like to be able to do something like:
def streamJson: Flow[CaseClass, ByteString] = 
    Flow[CaseClass]
        .fromFunction(cc => {
                jGenerator.writeField("key", cc.value);
        })
Nicholas Connor
@nkconnor
I'm not sure how to achieve this without implementing OutputStream
Levi Ramsey
@leviramsey

StreamConverters.asOutputStream materializes to an OutputStream and produces ByteStrings downstream... which seems to be what you're asking for? https://doc.akka.io/docs/akka/current/stream/operators/StreamConverters/asOutputStream.html

Something like:

def streamJson: Flow[CaseClass, ByteString] =
  Flow[CaseClass]
    .flatMapConcat { cc =>
      val (outputStream, src) = StreamConverters.asOutputStream().preMaterialize()
      val jGenerator = jfactory.createGenerator(outputStream, JsonEncoding.UTF8)
      // jGenerator calls... remember to have jGenerator close the stream!
      src
    }
4 replies