anilkumble
@anilkumble
How do I do load balancing in an Akka gRPC client-server architecture? Since I have multiple servers, I want my client to send requests to them in a distributed fashion. Is this possible?
3 replies
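One approach is to let the client resolve all server endpoints through Akka Discovery and spread calls over them. A minimal, hedged sketch; the service name is hypothetical and the exact config keys (notably load-balancing-policy) should be checked against your Akka gRPC version:

import akka.actor.ActorSystem
import akka.grpc.GrpcClientSettings

implicit val system: ActorSystem = ActorSystem("client")

// application.conf (assumed keys, see the Akka gRPC client docs):
// akka.grpc.client."helloworld.GreeterService" {
//   service-discovery { mechanism = "config", service-name = "greeter" }
//   load-balancing-policy = "round_robin"  // spread calls over all resolved hosts
//   use-tls = false
// }
val settings = GrpcClientSettings.fromConfig("helloworld.GreeterService")
// val client = GreeterServiceClient(settings)  // generated client for the hypothetical service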
Henry Story
@bblfish
Just wondering what the story on the TechEmpower benchmarks is. Is it that they don't really test the interesting parts of Akka?
4 replies
Henry Story
@bblfish
Btw, I opened an issue on typelevel/cats#3729 for a use case to do with Akka. I am not sure that is the right way to do things anymore, but I think it describes a rarely discussed use of Free Monads: one Free Monad with multiple completely different interpretations, as the Script is sent around to different actors.
stela-leon
@stela-leon
Hi there, I am new to the Alpakka Cassandra connector. Is it possible to insert a UDT via CassandraFlow? Using a bound statement I could insert the data as a raw string, but not directly as a case class that models the UDT. Before falling back to the DataStax Java driver, I was wondering if any of you have managed to do this via the Alpakka connector? :)
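A hedged sketch of one way this might look with CassandraFlow's statement binder, building a UdtValue from the prepared statement's metadata (the keyspace, table, and the Address/User case classes are hypothetical):

import akka.NotUsed
import akka.stream.alpakka.cassandra.CassandraWriteSettings
import akka.stream.alpakka.cassandra.scaladsl.{CassandraFlow, CassandraSession}
import akka.stream.scaladsl.Flow
import com.datastax.oss.driver.api.core.`type`.UserDefinedType

case class Address(street: String, city: String) // models the UDT
case class User(id: String, address: Address)

def userFlow(implicit session: CassandraSession): Flow[User, User, NotUsed] =
  CassandraFlow.create[User](
    CassandraWriteSettings.defaults,
    "INSERT INTO ks.users(id, address) VALUES (?, ?)",
    (user, prepared) => {
      // look up the UDT definition from the prepared statement's metadata
      val udtType = prepared.getVariableDefinitions.get("address")
        .getType.asInstanceOf[UserDefinedType]
      val udtValue = udtType.newValue()
        .setString("street", user.address.street)
        .setString("city", user.address.city)
      prepared.bind(user.id, udtValue)
    }
  )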
David Salgado
@DavidPDP
Hello, a question: if I want independent two-way communication (a WebSocket), do I have to have two Flows? One going from the source (the client WebSocket) to an actor, for example, and another taking the actor as a source back to the client?
5 replies
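For reference, the two directions can also be packaged as a single Flow by joining a Sink (client-to-actor) with a Source (actor-to-client), e.g. via Flow.fromSinkAndSource. A rough sketch with a hypothetical chat-room actor:

import akka.NotUsed
import akka.actor.ActorRef
import akka.http.scaladsl.model.ws.Message
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Flow, Sink, Source}

def wsFlow(room: ActorRef): Flow[Message, Message, NotUsed] =
  Flow.fromSinkAndSource(
    Sink.foreach[Message](m => room ! m), // client -> room actor
    Source.queue[Message](64, OverflowStrategy.dropHead)
      .mapMaterializedValue { queue =>
        room ! queue // hand the outgoing queue to the room so it can push
        NotUsed
      }
  )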
brabo-hi
@brabo-hi
Hi all, how do I add MDC context (Behaviors.withMdc) to an EventSourcedBehavior?
3 replies
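Since an EventSourcedBehavior is itself a Behavior, one hedged sketch is simply wrapping it (the Command/Event/State types and the handlers are placeholders):

import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.EventSourcedBehavior

def apply(entityId: String): Behavior[Command] =
  Behaviors.withMdc[Command](Map("entityId" -> entityId)) {
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State.empty,
      commandHandler = (state, command) => ???, // fill in real handlers
      eventHandler = (state, event) => ???
    )
  }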
eyal farago
@eyalfa
Is there a due date for the next Akka release? (2.6.11, I think)
1 reply
Adrien Mannocci
@amannocci
Hi everyone, is there a way to create a Source from a Flow by discarding its inlet?
I am referring to this use case https://doc.akka.io/docs/akka/current/stream/stream-io.html where I am only interested in the flow's output, not its input.
If this is possible, then I can use flatMapMerge after a decoding phase.
Or am I totally wrong?
raboof
@raboof:matrix.org [m]
you could attach a Source.empty, but sometimes that will cause the connection to be torn down before you want to stop consuming its output. In such cases using Source.maybe might help.
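A minimal sketch of that suggestion (host and port are placeholders): Source.maybe materializes a Promise that you complete only when you want the input side closed, so the connection stays up while you consume its output.

import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Source, Tcp}
import akka.util.ByteString

implicit val system: ActorSystem = ActorSystem("io")

val connection: Flow[ByteString, ByteString, _] =
  Tcp(system).outgoingConnection("example.com", 4242)

// never emits; the flow's output becomes the only interesting side
val outputOnly: Source[ByteString, _] =
  Source.maybe[ByteString].via(connection)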
Adrien Mannocci
@amannocci
Thank you very much, I missed this part.
David Salgado
@DavidPDP
Hi, excuse me, I wanted to ask whether Streams is a concept isolated from actors. I understand that through the materializer I can create actors that help process the stream, but what happens when I already have an actor (for example, in a chat, a ChatRoom actor created with the list of connected users as its state)? Could an already-created actor (an ActorRef) be handed to the stream? I am thinking, for example, of exposing a REST API or a WebSocket where the stream would need to process the requests.
3 replies
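For what it's worth, an existing actor can be wired into a stream without the materializer creating it; one hedged sketch uses ActorFlow.ask from akka-stream-typed, with a hypothetical chat-room protocol:

import akka.NotUsed
import akka.actor.typed.ActorRef
import akka.stream.scaladsl.Flow
import akka.stream.typed.scaladsl.ActorFlow
import akka.util.Timeout

final case class Post(text: String, replyTo: ActorRef[Ack])
final case class Ack(text: String)

// each element is sent to the existing room actor; the stream waits for the
// reply, so backpressure is preserved
def toRoom(room: ActorRef[Post])(implicit t: Timeout): Flow[String, Ack, NotUsed] =
  ActorFlow.ask(room)((text: String, replyTo: ActorRef[Ack]) => Post(text, replyTo))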
Henry Story
@bblfish
Akka relies a lot on Futures (or did; I may have missed some changes). There is some criticism of Futures as not being referentially transparent, and some frameworks have sought to address that. Is there a story for Akka going forward on that issue?
raboof
@raboof:matrix.org [m]
@bblfish: that distinction never seemed very relevant to me in the Akka context - do you have an example?
Henry Story
@bblfish
I am just trying to decide which framework to use, and am leaning towards Akka since I know it, it supports HTTP/2.0, and I really like the graph programming (I wrote a simple crawler in a page of code with it)...
My use case is writing an implementation of Tim Berners-Lee's Solid web server, with each resource represented as an Akka actor and a guard to determine whether a request is allowed access. I'd like it to be usable for streaming videos and, later, enterprise integration.
raboof
@raboof:matrix.org [m]
interesting!
Henry Story
@bblfish
I have received EU funding to do this open source (luckily! I had run out of savings learning Category Theory). I wonder if there is a community where I could get feedback on architectural choices I make for this.
2 replies
brabo-hi
@brabo-hi

Hi,
I generated a key for the production environment using playGenerateSecret, however I am still getting an exception when making a POST request:

io.jsonwebtoken.security.WeakKeyException: The signing key's size is 168 bits which is not secure enough for the HS256 algorithm.  The JWT JWA Specification (RFC 7518, Section 3.2) states that keys used with HS256 MUST have a size >= 256 bits (the key size must be greater than or equal to the hash output size).  Consider using the io.jsonwebtoken.security.Keys class's 'secretKeyFor(SignatureAlgorithm.HS256)' method to create a key guaranteed to be secure enough for HS256.  See https://tools.ietf.org/html/rfc7518#section-3.2 for more information.
    at io.jsonwebtoken.SignatureAlgorithm.assertValid(SignatureAlgorithm.java:387)
    at io.jsonwebtoken.SignatureAlgorithm.assertValidSigningKey(SignatureAlgorithm.java:315)
    at io.jsonwebtoken.impl.DefaultJwtBuilder.signWith(DefaultJwtBuilder.java:122)
    at io.jsonwebtoken.impl.DefaultJwtBuilder.signWith(DefaultJwtBuilder.java:134)
    at io.jsonwebtoken.impl.DefaultJwtBuilder.signWith(DefaultJwtBuilder.java:142)
    at play.api.mvc.JWTCookieDataCodec$JWTFormatter.format(Cookie.scala:776)
    at play.api.mvc.JWTCookieDataCodec.encode(Cookie.scala:631)
    at play.api.mvc.JWTCookieDataCodec.encode$(Cookie.scala:629)
    at play.api.mvc.DefaultJWTCookieDataCodec.encode(Cookie.scala:816)
    at play.api.mvc.FallbackCookieDataCodec.encode(Cookie.scala:798)
    at play.api.mvc.FallbackCookieDataCodec.encode$(Cookie.scala:798)
    at play.api.mvc.DefaultSessionCookieBaker.encode(Session.scala:123)
    at play.api.mvc.CookieBaker.encodeAsCookie(Cookie.scala:466)
    at play.api.mvc.CookieBaker.encodeAsCookie$(Cookie.scala:465)
    at play.api.mvc.DefaultSessionCookieBaker.encodeAsCookie(Session.scala:123)
    at play.api.mvc.Result.$anonfun$bakeCookies$2(Results.scala:354)
    at scala.Option.map(Option.scala:242)
    at play.api.mvc.Result.bakeCookies(Results.scala:353)
    at play.core.server.common.ServerResultUtils.prepareCookies(ServerResultUtils.scala:291)
    at play.core.server.AkkaHttpServer.$anonfun$runAction$5(AkkaHttpServer.scala:432)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:434)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)

I also generated a key using

 SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);
 String secret = Encoders.BASE64.encode(key.getEncoded());

But I am still getting the same exception.
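For context: HS256 requires a key of at least 256 bits (32 bytes), and the 168 bits in the error suggests the old, shorter secret is still the one being picked up, since Play derives the JWT signing key from the application secret. A hedged startup check, assuming the secret reaches the app via play.http.secret.key / the APPLICATION_SECRET environment variable:

// verify the configured secret is long enough for HS256 before Play uses it
val secret = sys.env.getOrElse("APPLICATION_SECRET", "")
val bits = secret.getBytes("UTF-8").length * 8
require(bits >= 256, s"Secret is only $bits bits; HS256 needs at least 256")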

David Salgado
@DavidPDP
Hi, I've been looking at the documentation, and in the examples I always find that the root actor (ActorSystem) exposes the entire protocol. This means it exposes all the possible commands in the entire hierarchy; isn't that redundant? I mean, if I want to invoke a specific command on a child actor, can't I obtain the child actor from a non-actor class and start the protocol with that child directly? Example (chat): a gabbler has a command to send a message to a room; for another (non-actor) class to use it, the command has to be raised to the root actor, causing the root to send a message to the gabbler, which then sends it to the room.
5 replies
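One pattern that avoids funnelling everything through the root is letting the child register itself with the Receptionist, so that non-actor code (or any other actor) can look it up and talk to it directly. A hedged sketch; Gabbler and its Command protocol are hypothetical:

import akka.actor.typed.receptionist.{Receptionist, ServiceKey}
import akka.actor.typed.scaladsl.Behaviors

val GabblerKey = ServiceKey[Gabbler.Command]("gabbler")

val gabbler = Behaviors.setup[Gabbler.Command] { ctx =>
  // make this child discoverable without going through the guardian
  ctx.system.receptionist ! Receptionist.Register(GabblerKey, ctx.self)
  Gabbler()
}

// from outside the hierarchy:
// system.receptionist ! Receptionist.Find(GabblerKey, replyToAdapter)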
Youngoh Kim
@Kyocfe
Hello, I have a question.
I am developing with Akka Cluster. The reference example is this link (https://github.com/elleFlorio/akka-cluster-playground), and my code structure is almost the same.
I have seen that actors are created on cluster members in a round-robin fashion according to the Akka actor deployment configuration.
object ProcessorActor {
  case object Hello

  case object PingBroad

  def props(roomID: Int, userID: Int) = Props(new ProcessorActor(roomID, userID))
}

class ProcessorActor(roomID: Int, userID: Int) extends Actor with ActorLogging {
  override def preStart(): Unit = {
    log.info("Room {} User {}", roomID, userID)
  }

  override def receive: Receive = {
    case Hello => {
      log.info("@@ PING ACTORS My actor path {}  @@", context.self.path)
    }
    case PingBroad => {
      log.info("@@ PING BROADCAST My actor path {}  @@", context.self.path)
    }
    case _ =>
  }
}
class Node(ip: String, nodeId: String) extends Actor with ActorLogging {
  var host: String = ip
  val processor: ActorRef = context.actorOf(Processor.props(nodeId), "processor")
  val processorRouter: ActorRef = context.actorOf(FromConfig.props(Props.empty), "processorRouter")
  val clusterManager: ActorRef = context.actorOf(ClusterManager.props(nodeId), "clusterManager")

  private var actors: Set[ActorRef] = Set.empty

  override def receive: Receive = {
    case GetClusterMembers => clusterManager forward GetMembers
    case GetFibonacci(value) => processorRouter forward ComputeFibonacci(value)
    case GenActor(roomID: Int, userID: Int) => processorRouter forward CreateActor(roomID, userID)
    case CreatedActor(result: ActorRef, replyTo: ActorRef) => {
      log.info("{} created", result.path)
      actors += result
      replyTo ! ProcessorResponse(nodeId, actors.size)
    }
    case GetActorCount => {
      val replyTo = sender()
      replyTo ! ProcessorResponse(nodeId, actors.size)
    }
    case PingActors => {
      actors.foreach(_ ! Hello)
    }
    case PingRoom(roomID: Int) => {
      actors.foreach(_ ! PingBroad)
      context.actorSelection(s"akka.tcp://cluster-playground@*/user/node/processor/$roomID-*") ! PingBroad
    }
  }
}
Youngoh Kim
@Kyocfe
Is there any way to reach the actors at /user/node/processor/$roomID-* on all members of the cluster at once?
context.actorSelection(s"akka.tcp://cluster-playground@*/user/node/processor/$roomID-*") ! PingBroad
15 replies
Hamed Nourhani
@hnourhani
Hi guys, one question: how can we shut down the ActorSystem based on exceptions thrown in the main App?
object Boot extends App {

  implicit val system: ActorSystem = ActorSystem("Boot")
  implicit val materializer: ActorMaterializer = ActorMaterializer()


  throw new Exception("Boom!!!")

}
In this case the ActorSystem will still be alive and the App will never stop.
Josep Prat
@jlprat
I replied on the discuss site, but what about catching the exception and calling system.terminate() from there?
Hamed Nourhani
@hnourhani
So the question is: do we need to put the whole logic in a try/catch, or have access to the ActorSystem instance everywhere we might need to terminate it?
Is there any way to catch JVM exceptions indirectly? Maybe some crazy idea?
Josep Prat
@jlprat
If your program is stopping after the exception, you can also try:
sys.addShutdownHook {
   system.terminate()
}
Hamed Nourhani
@hnourhani

The main problem is that I want to shut down the system based on a thrown exception, for example in this snippet:

object Boot extends App {

  implicit val system: ActorSystem = ActorSystem("Boot")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  sys.addShutdownHook(system.terminate())

  throw new Exception("Boom!!!")

}

It still does not work for my situation. The problem is that we have a lot of logic outside the ActorSystem, and for some reason the exception stack reaches Boot.scala; but since the ActorSystem does not shut down, the App is never terminated.

We want to terminate the App if any exception escalates to the Boot object (our main singleton).
4 replies
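Two hedged options that fit this: wrap the startup wiring once, at the edge, instead of try/catch everywhere, and install a default uncaught-exception handler for exceptions that escape on any thread. A sketch:

import scala.util.control.NonFatal
import akka.actor.ActorSystem

object Boot extends App {
  implicit val system: ActorSystem = ActorSystem("Boot")

  // last resort: any thread dying with an uncaught exception tears the system down
  Thread.setDefaultUncaughtExceptionHandler((_, e) => {
    system.log.error(e, "Uncaught exception, terminating ActorSystem")
    system.terminate()
  })

  try {
    // ... application wiring that may throw ...
    throw new Exception("Boom!!!")
  } catch {
    case NonFatal(e) =>
      system.log.error(e, "Startup failed, terminating ActorSystem")
      system.terminate()
  }
}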
Patrik Nordwall
@patriknw
Zhenhao Li
@Zhen-hao
nice features!
Gaël Ferrachat
@gael-ft

Hello guys, I can't find a solution using existing stream operators for the following problem:

I have a source of elements (here Strings) which are sent to an actor for processing.
I want this actor to return Done and not the string itself to be sure it can't modify the string.
But I need to keep the string in scope to publish it downstream; how can I do that?

val actorFlow: Flow[String, Done, NotUsed] = ActorFlow.ask ...
val statefulFlowOfString: Flow[String, String, NotUsed] = ???

Source(List("A", "B", "C"))
 .via(statefulFlowOfString)

It could look like statefulMapConcat, except that that method must return an IterableOnce, whereas I would like to return a Flow[String, String, NotUsed].

Thanks
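One hedged sketch of a pass-through: run each element through the ask flow and then re-emit the original, so whatever the actor replies (Done) never reaches downstream. For higher throughput the same idea is often built with GraphDSL (Broadcast + Zip), but this is the compact form:

import akka.{Done, NotUsed}
import akka.stream.scaladsl.{Flow, Source}

def passThrough(actorFlow: Flow[String, Done, NotUsed]): Flow[String, String, NotUsed] =
  Flow[String].flatMapConcat { s =>
    // the actor only ever sees the element and answers Done;
    // the original string is re-emitted from this scope
    Source.single(s).via(actorFlow).map(_ => s)
  }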

Levi Ramsey
@leviramsey
When using custom message serialization, is there any guidance on serializing EntityRefs? There doesn't seem to me to be an analogue to the ActorRefResolver.
1 reply
Patrik Nordwall
@patriknw
Current guidance would be to use the String representation of the entityId in the serialized form.
2 replies
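A hedged sketch of that suggestion: serialize only the entityId, and re-resolve the EntityRef through ClusterSharding when deserializing (the Counter protocol and type key are hypothetical):

import java.nio.charset.StandardCharsets.UTF_8
import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, EntityRef, EntityTypeKey}

class EntityRefCodec(system: ActorSystem[_]) {
  private val TypeKey = EntityTypeKey[Counter.Command]("Counter")
  private val sharding = ClusterSharding(system)

  def toBinary(ref: EntityRef[Counter.Command]): Array[Byte] =
    ref.entityId.getBytes(UTF_8)

  def fromBinary(bytes: Array[Byte]): EntityRef[Counter.Command] =
    sharding.entityRefFor(TypeKey, new String(bytes, UTF_8))
}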
raboof
@raboof:matrix.org [m]
@Kyocfe: indeed, actorSelection doesn't work across the cluster without knowing IP addresses, and usually other mechanisms for obtaining ActorRefs (like Cluster Sharding) are nicer in that case
1 reply
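A hedged sketch of the Cluster Sharding alternative: give each (roomID, userID) actor a stable entityId, and it becomes addressable from any node without knowing where it lives (ProcessorActor here stands for a hypothetical typed rewrite of the example above):

import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

val TypeKey = EntityTypeKey[ProcessorActor.Command]("processor")

def init(system: ActorSystem[_]): Unit =
  ClusterSharding(system).init(Entity(TypeKey) { ctx =>
    ProcessorActor(ctx.entityId) // behavior is created on whichever node hosts the shard
  })

def ping(system: ActorSystem[_], roomID: Int, userID: Int): Unit =
  ClusterSharding(system).entityRefFor(TypeKey, s"$roomID-$userID") ! ProcessorActor.PingBroad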
BenCalus-RKA
@BenCalus-RKA

Hi all,
I'm trying to upload a file with the Akka Http client using:

def createEntity(file: File): Future[RequestEntity] = {
    require(file.exists())
    val formData =
      Multipart.FormData(
        Source.single(Multipart.FormData.BodyPart.fromFile("file", MediaTypes.`application/octet-stream`, file)))
    Marshal(formData).to[RequestEntity]
  }

However:

[2021-01-18 11:53:02,377] [ERROR] [akka.actor.ActorSystemImpl] [MediaGatewayHttpServer-akka.actor.default-dispatcher-240] [akka.actor.ActorSystemImpl(MediaGatewayHttpServer)] - Error during processing of request: 'EntityStreamSizeException: incoming entity size (while streaming) exceeded size limit (8388608 bytes)! This may have been a parser limit (set via `akka.http.[server|client].parsing.max-content-length`), a decoder limit (set via `akka.http.routing.decode-max-size`), or a custom limit set with `withSizeLimit`.'. Completing with 413 Payload Too Large response. To change default exception handling behavior, provide a custom ExceptionHandler.

Anyone able to help me out?

6 replies
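The 8388608 bytes in the error is the server's default max-content-length (8 MiB), so the receiving side is rejecting the upload. Two hedged ways to raise it (the 64 MiB figure is arbitrary):

import akka.http.scaladsl.server.Directives._

// globally, in application.conf:
//   akka.http.server.parsing.max-content-length = 64m

// or per route, only where large uploads are expected:
val uploadRoute =
  withSizeLimit(64 * 1024 * 1024) {
    extractRequestEntity { entity =>
      complete("uploaded") // hypothetical handling of the entity
    }
  }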
Dennis van der Bij
@DennisVDB
Hi! Question on an implementation detail of the SBR.
What is the reason for interpreting exited (exiting confirmed?) and downed nodes as unreachable? During a rolling update this ends up triggering the SBR with a no-op and logging "SBR found unreachable members...", which is not true.
  def receive: Receive = {
    // ...
    case UnreachableMember(m)         => unreachableMember(m)
    case MemberDowned(m)              => unreachableMember(m)
    case MemberExited(m)              => unreachableMember(m)
    // ...
  }
3 replies
Johan Andrén
@johanandren
Henry Story
@bblfish
Very cool.
Is there going to be work on HTTP/3.0 soon? It seems to be shipped in most browsers now. In the Safari shipped with Big Sur it is on by default (in the others it needs to be switched on).
1 reply
Henry Story
@bblfish
Ah, I found there is an issue for QUIC here: akka/akka-http#3692
sh-kobi
@sh-kobi
Hi,
we are using Akka Cluster & DistributedPubSub on Kubernetes.
We have a problem: in some cases, when we call "mediator.ask", the target actor does not receive the message and it goes to dead letters.
It happens even when the target actor is on the same Kubernetes pod as the sender.
Does anyone have an idea why this can happen?
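One common cause, for what it's worth: with point-to-point Send, the target actor has to be registered with the mediator via Put on its own node, and the path in Send has to match the registered path exactly; otherwise the message ends up in dead letters. A hedged sketch of the matching pair (names are hypothetical):

import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.pubsub.{DistributedPubSub, DistributedPubSubMediator}
import DistributedPubSubMediator.{Put, Send}

class TargetActor extends Actor {
  def receive = { case msg => println(s"got $msg") }
}

val system = ActorSystem("cluster")
val mediator = DistributedPubSub(system).mediator

// receiver side: register under the actor's own path, /user/target
val target = system.actorOf(Props[TargetActor](), "target")
mediator ! Put(target)

// sender side (any node): the path must match the registered one exactly
mediator ! Send("/user/target", "hello", localAffinity = true)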
Tom
@tarossi
Hi there! I have a question about the standard Kinesis connector. Does it support aggregation of messages into a single record (as KPL does, see https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html )? I mean, besides supporting batching of records and retries of failed PUTs.
Thanks in advance!