José Luis Colomer Martorell
@beikern
akka/akka-management#910 Oh, there's an issue describing the problem.
Hope it helps others :)
James Phillips
@jdrphillips
Is there a way in akka-http to easily turn a Route and an HttpRequest into an HttpResponse, without using the testkit and without starting a server? Starting a server not bound to a port could be a solution, but I didn't spot a way of doing that.
1 reply
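One possibility, assuming akka-http 10.2+: Route.toFunction turns a Route into a plain handler function, with no server binding and no testkit. A minimal sketch:

    import akka.actor.ActorSystem
    import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
    import akka.http.scaladsl.server.Directives._
    import akka.http.scaladsl.server.Route

    import scala.concurrent.Future

    object RouteAsFunction extends App {
      implicit val system: ActorSystem = ActorSystem("route-as-function")

      val route: Route = path("ping")(complete("pong"))

      // No binding involved: the Route becomes an HttpRequest => Future[HttpResponse].
      val handler: HttpRequest => Future[HttpResponse] = Route.toFunction(route)

      val response: Future[HttpResponse] = handler(HttpRequest(uri = "/ping"))
    }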
Adesh Atole
@AdeshAtole

Hello people. Would anyone recommend Akka for developing CRUD apps at scale?

Just a bit of background about myself: I have been developing in Spring for the last 2 years, and now came across Akka. I am trying to understand what kinds of use cases Akka can be a perfect fit for. I went through some docs and other material, but didn't find anything that addresses my question of use-case fit, so I am not able to comprehend where it can be used. Please help me understand the use cases it can cater to in a real-world production app.

14 replies
Objektwerks
@objektwerks
Does anyone on the Akka-Http team know why JetBrains has been unable to resolve false Akka-Http DSL errors? Sad to see no comment by Lightbend on my above post and question. Despite multiple bugs filed on this issue, JetBrains has not resolved it. The productivity hit is such that IntelliJ is virtually unusable, except for big code refactoring efforts. Metals is no IntelliJ; but at least it doesn't falsely report Akka-Http DSL errors. I've enjoyed using Akka-Http for many years, going back to its predecessor, Spray. But perhaps it's time for a change, at least from a coding-sanity perspective. Umm... I will say no more. All the best to the Akka community going forward. Cheers!
Raymond Te Hok
@rtehok

Hi there, I am looking for some help with this [issue](https://stackoverflow.com/questions/68376263/akka-stream-check-source-and-upload) I have.

I managed to produce this code

    val source: Source[akka.util.ByteString, Any] = ???

    val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] =
      S3.multipartUpload(bucket = "csvfiles", key = s"${user.id.value}/$dataSetId.csv")

    val firstRowsSink: Sink[ByteString, Future[Seq[String]]] =
      Flow[akka.util.ByteString]
        .via(Framing.delimiter(akka.util.ByteString.fromString(System.lineSeparator()), Int.MaxValue, allowTruncation = true))
        .map(_.utf8String)
        .filterNot(_ == "")
        .take(10)
        .toMat(Sink.seq)(Keep.right)

    val (eventualUpload: Future[MultipartUploadResult], eventualFirstRows: Future[Seq[String]]) =
      RunnableGraph
        .fromGraph(GraphDSL.create(s3Sink, firstRowsSink)(Tuple2.apply) {
          implicit builder =>
            (s3S, firstRowsS) =>
              import GraphDSL.Implicits._
              val broadcast = builder.add(Broadcast[ByteString](2, eagerCancel = false))
              source ~> broadcast.in
              broadcast.out(0) ~> Flow[ByteString].async ~> s3S
              broadcast.out(1) ~> Flow[ByteString].async ~> firstRowsS
              ClosedShape
        })
        .run()

    val result = for {
      seq <- eventualFirstRows
      fileMetaData <- extractMetadata(seq)
      projectId <- addProject(fileMetaData)
    } yield (projectId, fileMetaData)

    onComplete(result) {
      case Success((projectId, fileMetaData)) =>
        Future(
          eventualUpload.onComplete {
            case Failure(exception) =>
              logError(exception.getStackTrace.mkString("\n"))
            case Success(_) =>
              logDebug(s"File uploaded (name: $fileName)")
          }
        )
        genericCreatedWithJsonContent(projectId.toJsonResponse)
      case Failure(e) =>
        internalServerError(Some(s"Error while uploading the file $fileName"))
    }

While I receive the projectId early in the HTTP response (as expected), I get an akka.stream.SubscriptionWithCancelException$StageWasCompleted$ and the file uploaded to S3 has ~1000 rows (out of a 10k+ row file). Am I doing something wrong (maybe in the Future)? Why would the second broadcast stream be cancelled?
Thank you for your help and input.

2 replies
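One thing worth ruling out: take(10) cancels the firstRows branch as soon as it is satisfied, so the Broadcast has to absorb an early cancellation while the S3 branch is still running. A sketch (not a verified fix) that collects the first 10 lines without ever cancelling the branch, by folding over the whole stream instead:

    import akka.stream.scaladsl.{Flow, Framing, Keep, Sink}
    import akka.util.ByteString

    import scala.concurrent.Future

    // Drop-in replacement for firstRowsSink above: consumes the entire stream
    // but only accumulates the first 10 non-empty lines, so no early cancel.
    val firstRowsSink: Sink[ByteString, Future[Seq[String]]] =
      Flow[ByteString]
        .via(Framing.delimiter(ByteString(System.lineSeparator()), Int.MaxValue, allowTruncation = true))
        .map(_.utf8String)
        .filterNot(_.isEmpty)
        .toMat(Sink.fold(Vector.empty[String]) { (acc, line) =>
          if (acc.size < 10) acc :+ line else acc
        })(Keep.right)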
brightinnovator
@brightinnovator
What open source tools are available to monitor/browse Kafka messages?
SeymurMammadrza
@SeymurMammadrza

Hi people. I am a newbie (intern) Scala developer coming from Java. Our new project is using Akka, and I have really fundamental problems resolving some issues. I'm preparing a demo project based on the Akka Digital Platform guide to learn some things better. However, I don't know how to store my data coming from gRPC in the state of an actor. I need it so I can alter it at some points: adding, removing, or updating a field of a task are my goals. Below, I have a "task" coming from .proto and a "summary" to keep these tasks in the state. What is the best practice for keeping tasks? With a map I could only save the id and name of tasks, but I want to keep them as objects; that way, I could reach their fields to change them. Sorry for my long post, I really couldn't find the way through research on the internet and it has already been some days...

    message Task {
      string taskId = 1;
      string name = 2;
      Category category = 3;
      Priority priority = 4;
    }

    enum Category {
      UNKNOWN_CATEGORY = 0;
      LEISURE = 1;
      WORK = 2;
      CHORE = 3;
      OTHER = 4;
    }

    enum Priority {
      UNKNOWN_PRIORITY = 0;
      LOW = 1;
      MIDDLE = 2;
      HIGH = 3;
    }

    message TodoList {
      string todoListId = 1;
      repeated Task tasks = 2;
    }

    final case class Summary(tasks: Map[String, String], finished: Boolean) extends CborSerializable

Nathan Fischer
@nrktkt:matrix.org
[m]
why can you only save the id and name in a map?

    case class Task(id: String, name: String, category: Category, priority: Priority)
    Map[String, Task]

?
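A sketch expanding on that suggestion; Category, Priority and CborSerializable here are placeholders mirroring the proto enums and the guide's marker trait, not the generated proto classes:

    sealed trait Category
    case object Work extends Category
    sealed trait Priority
    case object High extends Priority
    trait CborSerializable // marker trait as in the guide

    final case class Task(id: String, name: String, category: Category, priority: Priority)

    final case class Summary(tasks: Map[String, Task], finished: Boolean) extends CborSerializable {
      // Keeping the whole Task as the map value keeps every field reachable:
      def addOrUpdate(task: Task): Summary = copy(tasks = tasks + (task.id -> task))
      def remove(id: String): Summary      = copy(tasks = tasks - id)
    }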
razertory
@razertory
Hey guys, is Akka a good way of implementing an online code interview?
Sean Farrow
@SeanFarrow
Hi all, are there any examples of asynchronously downloading a file using Akka HTTP? The file is very large and is compressed in the 7-Zip format; any help appreciated.
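A minimal sketch of the usual streaming approach, with a hypothetical URL and target path (note Akka HTTP does not decompress 7z itself; that would need a separate library after the download):

    import java.nio.file.Paths

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.HttpRequest
    import akka.stream.IOResult
    import akka.stream.scaladsl.FileIO

    import scala.concurrent.Future

    object StreamingDownload extends App {
      implicit val system: ActorSystem = ActorSystem("download")
      import system.dispatcher

      // The response entity is a Source[ByteString, _]; streaming it straight
      // into FileIO.toPath keeps memory usage flat regardless of file size.
      val done: Future[IOResult] =
        Http()
          .singleRequest(HttpRequest(uri = "https://example.com/archive.7z")) // hypothetical URL
          .flatMap(_.entity.dataBytes.runWith(FileIO.toPath(Paths.get("archive.7z"))))
    }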
Johannes Rudolph
@jrudolph
@/all we are happy to announce the latest Akka HTTP release 10.2.5, see https://akka.io/blog/news/2021/07/27/akka-http-10.2.5-released for more information
SeymurMammadrza
@SeymurMammadrza
Dear @nrktkt:matrix.org, I kept the id out of the task entity and mapped it to the task. Thank you for your time and help. 🙂
volgad
@volgad
Hi all, I am looking for a reference / article / blog post on how to handle a TimeoutException from a remote actor. I already checked that every message is handled properly, and that doesn't seem to be the issue; it seems more related to thread starvation. I tried to change the dispatcher from fork-join to a dedicated dispatcher, but that doesn't seem to have solved the issue. From the Akka docs, my understanding is that the next step is to try recovering my stream / supervising the actor with retry, but is there any approach specific to timeouts? Any reference would be appreciated. Thank you!
9 replies
I can also do a recover on the result of the future of ask or askWithStatus, but I was looking for something more Akka-centric.
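One Akka-centric option is akka.pattern.retry around the ask. A sketch with hypothetical names (remote, msg), and with the caveat that retrying masks thread starvation rather than fixing it:

    import akka.actor.{ActorRef, ActorSystem, Scheduler}
    import akka.pattern.{ask, retry}
    import akka.util.Timeout

    import scala.concurrent.Future
    import scala.concurrent.duration._

    // `remote` and `msg` are placeholders for the remote actor and its protocol.
    def askWithRetry(remote: ActorRef, msg: Any)(implicit system: ActorSystem): Future[Any] = {
      import system.dispatcher
      implicit val timeout: Timeout     = 3.seconds
      implicit val scheduler: Scheduler = system.scheduler
      // Retries the ask up to 3 times, 2 seconds apart; each AskTimeoutException
      // fails the attempt and triggers the next retry.
      retry(() => remote ? msg, 3, 2.seconds)
    }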
sebarys
@sebarys

Hello everyone,
I have a question about the akka-http logRequestResult directive. I've added the function below to it, to log successful responses at debug level and error responses and rejections at info level:

  private def requestBasicInfoAndResponseStatus(req: HttpRequest): RouteResult => Option[LogEntry] = {
    case RouteResult.Complete(res) if res.status.isFailure() => Some(
      LogEntry(s"Request ${req.method} to ${req.uri} resulted in failure with status ${res.status}", Logging.InfoLevel)
    )
    case RouteResult.Complete(res) => Some(
      LogEntry(s"Request ${req.method} to ${req.uri} resulted in response with status ${res.status}", Logging.DebugLevel)
    )
    case RouteResult.Rejected(rejections) => Some(
      LogEntry(s"Request ${req.method} to ${req.uri} was rejected with rejections: $rejections", Logging.InfoLevel)
    )
  }

In the logs I very often see

    Request HttpMethod(POST/GET/PATCH) to http://URL/PATH was rejected with rejections: List()

I've read in the docs that an empty rejection list could be related to there being no route for a given path, but that is not the case here: the provided (path, method) is covered by the Route. Is there any way to determine why this happens?

Carlos De Los Angeles
@cdelosangeles
Hello everyone! I have been dealing with a problem with Akka Cluster since last week. The general problem is that once the cluster is up and running, if I take down the primary seed node (the main seed for starting the cluster), it cannot rejoin the cluster when I restart it, essentially creating a split brain with one node inside. I see the other nodes receiving and successfully replying to the main seed node's join requests within milliseconds of receiving them; I just never see the main node processing the response. This cluster has been running for a couple of years without issues, and we just recently increased the number of nodes from 20 to 30 in two DCs, for a total of 60 nodes. It is multi-DC, but only for metadata transfer; a request received in one DC will never be processed in the other DC. Has anyone experienced something similar?
1 reply
Carlos De Los Angeles
@cdelosangeles
One more note on the question above. Other nodes are able to leave and join the cluster without issue.
Max
@maxstreese
Hi everyone, me again, the guy asking unlimited questions. This time it's Akka Streams' turn: I have a TCP connection and everything is working fine. But I am wondering if there is an established pattern for the following: the API I connect to offers compression which one can enable and disable. This means that ideally I would like to have some decompression stage in the flow that can be turned on and off. Is there any Akka Streams pattern for this? Currently I handle both the delimiting of messages and the decompression in my connection handler actor, but I would like to move this part to the stream if possible.
Carlos De Los Angeles
@cdelosangeles
Hello Max. How would you determine if a message needs to be decompressed or not? Message parameter/type? Or is it purely on/off for all messages?
7 replies
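If the setting is fixed per connection, one sketch is to choose the decompression stage when the connection-level flow is assembled (this assumes whole-stream gzip; if the API compresses per message, or toggles mid-stream, a custom stage or rebuilding the flow would be needed):

    import akka.NotUsed
    import akka.stream.scaladsl.{Compression, Flow, Framing}
    import akka.util.ByteString

    // Picks the decompression stage once, at flow-construction time.
    def inbound(compressionEnabled: Boolean, maxFrameLength: Int): Flow[ByteString, ByteString, NotUsed] = {
      val decompress: Flow[ByteString, ByteString, NotUsed] =
        if (compressionEnabled) Compression.gunzip() else Flow[ByteString]

      decompress.via(Framing.delimiter(ByteString("\n"), maxFrameLength, allowTruncation = true))
    }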
Stephen Hogan
@hogiyogi597
I am trying to serve my Swagger yaml using the getFromResourceDirectory route, but it isn't working. I have split my Swagger yaml into multiple yaml files that are referenced by the main yaml file. However, when my server is loading the index.html, it can load the initial yaml file perfectly, but then when it requests the referenced yamls it fails to marshal them, saying that the content type is incorrect. Has anyone had any experience working with Swagger (and not using the Swagger Akka libraries)?
1 reply
Quynh Anh "Emma" Nguyen
@qaemma

Hi, after a recent Akka version update I started seeing this error: akka.stream.SubscriptionWithCancelException$NoMoreElementsNeeded$. How can I handle it?

Looking at https://doc.akka.io/api/akka/current/akka/stream/Attributes$$CancellationStrategy$.html, it seems to be a new behavior in 2.6.x. The default is PropagateFailure, which is not configurable, and has this documentation:

Strategy that treats cancelStage in different ways depending on the cause that was given to the cancellation.
If the cause was a regular, active cancellation (SubscriptionWithCancelException.NoMoreElementsNeeded), the stage receiving this cancellation is completed regularly.
If another cause was given, this is treated as an error and the behavior is the same as with failStage.

So it seems it shouldn't raise this error to user level. What am I missing? I have tried with 2.6.10, 2.6.15 etc. after seeing akka/akka#30071, but it doesn't work in my case.

4 replies
Todor Kolev
@todor-kolev
Hi, is there a way to generate an OpenAPI spec from akka-http routes directly, without using annotations, or do I need to use a wrapper for that (tapir, endpoints4s etc.)? Thanks!
federico cocco
@federico.cocco_gitlab
Hello. Can somebody explain the recover method for Akka Streams to me? My understanding is that instead of just catching the error and transforming it, it will also complete the stream. Is there another way I can catch an upstream error and transform it so that the stream continues? Thanks
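That understanding matches the docs: recover emits at most one fallback element and then completes the stream. A sketch of two ways to keep the stream going instead:

    import akka.stream.{ActorAttributes, Supervision}
    import akka.stream.scaladsl.{Flow, Source}

    // 1. Resume on failure: the failing element is dropped and the stream continues.
    val resilient = Flow[Int]
      .map(n => 100 / n) // throws ArithmeticException on 0
      .withAttributes(ActorAttributes.supervisionStrategy(Supervision.resumingDecider))

    // 2. recoverWithRetries: on upstream failure, switch to a fallback Source
    //    instead of just emitting one element and stopping.
    val withFallback = Source(List(1, 0, 2))
      .map(n => 100 / n)
      .recoverWithRetries(attempts = 1, { case _: ArithmeticException => Source.single(-1) })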
Shayne Koestler
@skoestler
Hello all, I'm using akka typed cluster sharding and I'm curious if there's a way to watch/supervise remote actors?
8 replies
sebarys
@sebarys

I've read in the docs that an empty rejection list could be related to there being no route for a given path, but that is not the case here: the provided (path, method) is covered by the Route. Is there any way to determine why this happens?

Is anyone able to help with such a case, or at least redirect me to where I could describe this issue?

Domantas Petrauskas
@ptrdom
Hi, does Akka Projection have something similar to Lagom's ReadSideProcessor globalPrepare? The only thing I found is an alternative to prepare in HandlerLifecycle start. Looking at how the API is constructed, this could also be the responsibility of ShardedDaemonProcess, but I haven't found anything relevant there either.
constantOut
@constantOut
Hi everyone, could you please tell me about the internals of new actor creation?
I was assigned to a legacy project which has a single actor responsible for creating an actor per connection (we don't use Akka HTTP, we use another web framework). I have the impression that this actor is a bottleneck, as we sometimes have spikes in new connections; a memory dump shows that we have more open sockets than worker actors (created per connection).
springdev2022
@springdev2022
Which is the best open source reporting tool for a Java application, one that has an open source designer as well?
Youngoh Kim
@Kyocfe
I changed a classic actor to use the ask pattern a lot through Await.result; could this be a problem?
If we want to replace this pattern, how should we develop it? The code becomes much less readable with just the tell pattern.
How can I improve this without performance issues? Currently I think this part causes a container-down issue, due to CPU rising after the server has been running for a few days.
2 replies
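Blocking with Await.result inside an actor parks a dispatcher thread per outstanding ask, which fits the CPU/starvation symptom. A sketch of the usual non-blocking alternative, with hypothetical Query/QueryResult messages and a hypothetical backend actor:

    import akka.actor.{Actor, ActorRef}
    import akka.pattern.{ask, pipe}
    import akka.util.Timeout

    import scala.concurrent.duration._

    // Hypothetical protocol.
    case class Query(id: String)
    case class QueryResult(payload: String)

    class NonBlockingActor(backend: ActorRef) extends Actor {
      import context.dispatcher
      implicit val timeout: Timeout = 3.seconds

      def receive: Receive = {
        case Query(id) =>
          // Instead of Await.result, forward the eventual reply to the original
          // sender; no dispatcher thread is blocked while waiting.
          (backend ? Query(id)).mapTo[QueryResult].pipeTo(sender())
      }
    }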
Gaël Ferrachat
@gael-ft

Hello here,

From my understanding, by default Akka completes a stream operator on an UpstreamFinish.
I would like to know how to keep an operator alive after upstream completion. Do I have to create my own GraphStage and override the method onUpstreamFinish()?

My code currently looks like:

    val someFuture: Future[MyObject] = ???
    val longlifeSink: Sink[MyObject, Future[Done]] = ???

    Source
      .fromFuture(someFuture)
      .wireTap(longlifeSink)
      .runWith(Sink.head)

In the example above, I would like my longlifeSink to continue even after the UpstreamFinish; how do I do that?
Note I am asking the question because I would like to avoid having to create a GraphStage...

raboof
@raboof:matrix.org
[m]
@gael-ft: typically after upstream completes there is nothing left to do; after the sink has processed its last incoming element, what is the value of keeping it 'alive'?
Levi Ramsey
@leviramsey
One approach, if you want the same sink to be fed by multiple streams with independent lifecycles, is to use Source.queue or Source.actorRef (or similar) and have your wireTap just send elements to the queue/actor.
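A sketch of that idea, with placeholder types standing in for the question's MyObject and longlifeSink:

    import akka.actor.ActorSystem
    import akka.stream.OverflowStrategy
    import akka.stream.scaladsl.{Sink, Source}

    implicit val system: ActorSystem = ActorSystem("queue-sketch")

    // Placeholders for the question's types:
    case class MyObject(value: String)
    val longlifeSink: Sink[MyObject, _] = Sink.foreach(println)

    // Materialize the heavy sink ONCE, fed by a queue with its own lifecycle.
    val queue =
      Source.queue[MyObject](bufferSize = 256, OverflowStrategy.backpressure)
        .to(longlifeSink)
        .run()

    // Any number of short-lived streams can then tap into it; with the
    // backpressure strategy, wait for each offer's Future before the next one.
    // someSource.wireTap(elem => queue.offer(elem)).runWith(Sink.ignore)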
Gaël Ferrachat
@gael-ft

Hello @raboof:matrix.org ,

The idea is that the longlifeSink is a quite heavy operator.
But when the Source emits the value from the Future, it then emits onUpstreamFinish, and it looks like that closes the ongoing wireTap before it actually completes.

I don't want it to remain alive forever; I want it to run all the code of the wireTap Sink and then close the operator.
Am I missing something?

Hello @leviramsey ,

As I said earlier, reusing the Sink is not what I am trying to achieve; I want it to run completely before it gets closed by the default onUpstreamFinish, which (by default) calls completeStage.

I was not very clear in my first message...
raboof
@raboof:matrix.org
[m]
What does your sink look like? is it a custom graph?
Tom Burke
@TomB_Signify_twitter
Hey guys, I am new to the Gitter community and not sure if this is the right room to be posting this; if not, it would be great if someone could direct me. I am currently working on behalf of a company that uses Akka as their main backend and is looking for 5-6 new contractors to join their team. You could be based anywhere, as they have teams that work in all timezones, and the rates are €350 - €450 per day on 12-month contracts. Let me know if this could be for you.
Patrik Nordwall
@patriknw
We don't encourage job ads in this forum, but I'll leave it this time. Hope you find the right people for the job.
Gaël Ferrachat
@gael-ft
Here is what my Sink looks like:

    // in fact a method arg
    val wuid: String = ...

    // method body
    Flow
      .setup { case (mat, _) =>
        implicit val materializer: ActorMaterializer = mat
        implicit val classicSystem: classic.ActorSystem = mat.system

        val workspaceDb = ReactiveDatabase(classicSystem.toTyped).forWorkspace(wuid)

        Flow[Urn]
          .flatMapConcat { urn =>
            val dbReadQuery: org.reactivestreams.Publisher[Metadata] = ...

            Source
              .fromPublisher(dbReadQuery)
              .map(replaceValueOf(urn, _)) // some changes in the Metadata itself, but anyway ...
              .grouped(20)
              .async
              .zipWithIndex // just a trick to get the batch number available for logs
              .flatMapConcat { case (metaSeq, index) =>
                val dbWriteQuery = ...

                Source
                  .fromPublisher(dbWriteQuery)
                  .log("test", extractBatchLog(urn, index))
              }
          }
      }
      .mapMaterializedValue(_ => NotUsed)
      .toMat(Sink.ignore)(Keep.right)
Levi Ramsey
@leviramsey
Remember that wireTap by design drops elements if the sink backpressures (e.g. because it's currently processing a message)... is the sink in the wireTap busy when upstream finishes emitting?
Gaël Ferrachat
@gael-ft
I tried with alsoTo as well :(
In fact it is alsoTo, until it works hehe
Levi Ramsey
@leviramsey
If the element-dropping isn't desirable, alsoTo can be used, at the expense of propagating backpressure upstream. Alternatively, especially if the upstream is bursty, you can put a backpressuring buffer in front of the sink
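A sketch of the buffer idea (MyObject and longlifeSink again stand in for the question's types):

    import akka.stream.OverflowStrategy
    import akka.stream.scaladsl.{Flow, Sink}

    // Placeholders for the question's types:
    case class MyObject(value: String)
    val longlifeSink: Sink[MyObject, _] = Sink.foreach(println)

    // The buffer absorbs a burst in front of the slow sink, so the alsoTo
    // branch only backpressures upstream once the buffer is full.
    val bufferedSink =
      Flow[MyObject]
        .buffer(128, OverflowStrategy.backpressure)
        .to(longlifeSink)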
Gaël Ferrachat
@gael-ft
I'll try some of the proposals you gave me, review the workflow a little more, and then come back to let you know.
Really appreciate the help here ;)
Vipin Tiwari
@vtiwari227
Hello, I'm seeing the error below:
"akka.http.impl.engine.client.OutgoingConnectionBlueprint$UnexpectedConnectionClosureException: The http server closed the connection unexpectedly before delivering responses for 1 outstanding requests" from an app which uses an internal library. From research I found that I can add a keep-alive timeout to resolve it: https://doc.akka.io/docs/akka-http/current/common/timeouts.html#keep-alive-timeout.
My question is, shall I add this config (akka.http.host-connection-pool.keep-alive-timeout) in the main app or in the internal lib where the actual call takes place? Thanks for the help.
Levi Ramsey
@leviramsey
It depends on how the internal library starts the ActorSystem. I would try in the main app first, since that's where Akka would look by default.
1 reply
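For reference, a sketch of setting that key in the main app (programmatically here; putting the same line in the main app's application.conf is equivalent):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // The pool key from the linked docs; the 30s value is just an example.
    val config = ConfigFactory
      .parseString("akka.http.host-connection-pool.keep-alive-timeout = 30s")
      .withFallback(ConfigFactory.load())

    val system = ActorSystem("app", config)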
Vadim Bondarev
@haghard

Hello, has anyone thought about implementing an additional "zombie fencing" mechanism for akka-cluster + sharding applications in an Akka-idiomatic way, without some external linearizable system like ZooKeeper or etcd? SBR doesn't give a 100% guarantee against multiple writers (specifically zombie processes) existing at the same time. I understand that preventing concurrent writes can be done at the db level for some dbs that support serializable or higher consistency levels, which is not the case for most dbs out there.

My best idea right now is to use a custom CRDT (MVRegister) to detect concurrent writes; I recently implemented exactly that for my project. Although this approach might work well for some applications, it still has drawbacks, mainly the fact that even though sharding gives us the single-writer principle, all writes have to go through the ddata layer, while we only take advantage of that during network partitions.
I wonder if anyone has had other ideas about how this can be achieved in an Akka-idiomatic way.

31 replies
Chaminda Bandara
@jmcabandara

When I connect to WildFly 10 I am getting the error below...

[root@3gq4 jmxquery-1.3]# ./check_jmx -U service:jmx:rmi:///jndi/rmi://194.135.92.51:9990/jmxrmi -O java.lang:type=Memory -A HeapMemoryUsage -K used -I HeapMemoryUsage -J used -vvvv -w 731847066 -c 1045495808 -username admin -password password
JMX CRITICAL - Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
java.net.SocketTimeoutException: Read timed out] connecting to java.lang:type=Memory by URL service:jmx:rmi:///jndi/rmi://194.135.92.51:9990/jmxrmi
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
java.net.SocketTimeoutException: Read timed out]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369)
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270)
at jmxquery.DefaultJMXProvider.getConnector(DefaultJMXProvider.java:17)
at jmxquery.JMXQuery.connect(JMXQuery.java:91)
at jmxquery.JMXQuery.runCommand(JMXQuery.java:68)
at jmxquery.JMXQuery.main(JMXQuery.java:114)
Caused by: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
java.net.SocketTimeoutException: Read timed out]
at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:136)
at com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:205)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1955)
at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1922)
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:287)
... 5 more
Caused by: java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
java.net.SocketTimeoutException: Read timed out
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:307)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:343)
at sun.rmi.registry.RegistryImpl_Stub.lookup(RegistryImpl_Stub.java:116)
at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:132)
... 10 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:246)
... 14 more