Ricky Blankenaufulland
@ZoolWay
@Silv3rcircl3 To be honest, I was curious myself about when that Task completes and when the continuation is triggered. There is some kind of OnCompleted for an Akka.Stream, but I did not figure out how to apply it to my scenario. The docs on Akka.Streams feel rather limited to me.
Ricky Blankenaufulland
@ZoolWay
@Silv3rcircl3 You are right: when I apply a small delay before disposing the materializer, it works. So the idea of doing it when the returned Task completes was bad; the streaming process is not yet finished at that point. I will have to find out how to get a callback or a message to myself once the stream has completed.
Marc Piechura
@marcpiechura
@ZoolWay maybe a better solution would be to use a Select stage where you send the bytes to your actor and simply return Unit.Default. As the sink, use Sink.Ignore, which provides a Task that completes once the stream has actually processed all elements
or SelectAsync + actor.Ask if you want backpressure
but that's obviously quite unsafe over the network ;)
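A rough sketch of both variants Marc describes might look like this; `source`, `materializer`, `targetActor`, and the `Ack` reply type are hypothetical stand-ins, not Akka.NET-provided names:

    // Sketch only: 'source', 'materializer', 'targetActor', and 'Ack' are
    // hypothetical stand-ins.
    using System;
    using System.Reactive;          // for Unit.Default
    using Akka.Actor;
    using Akka.Streams;
    using Akka.Streams.Dsl;

    // Fire-and-forget: Tell each element to the actor; Sink.Ignore's
    // materialized Task completes once all elements have been processed.
    var completion = source
        .Select(bytes =>
        {
            targetActor.Tell(bytes);
            return Unit.Default;
        })
        .RunWith(Sink.Ignore<Unit>(), materializer);

    // Backpressured: SelectAsync + Ask waits for an Ack before pulling more.
    var acked = source
        .SelectAsync(1, bytes => targetActor.Ask<Ack>(bytes, TimeSpan.FromSeconds(5)))
        .RunWith(Sink.Ignore<Ack>(), materializer);

    // Dispose the materializer only after the stream's Task completes.
    completion.ContinueWith(_ => materializer.Dispose());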
Janusz Fijałkowski
@JohnnyTheAwesome

Hello everyone! I'm currently working on a clustered Akka.NET application. While trying to catch some exceptions thrown in one microservice, aggregate them into a list, and send them to another microservice, I've run into the following error:

08/03/2017 17:31:00 [ERROR] [akka://TestHub/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FTestHub%4010.16.4.33%3A57197-2] "No parameterless constructor defined for this object."
System.MissingMethodException: No parameterless constructor defined for this object.
   at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
   at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
   at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache, StackCrawlMark& stackMark)
   at System.Activator.CreateInstance(Type type, Boolean nonPublic)
   at System.Activator.CreateInstance(Type type)
   at Hyperion.SerializerFactories.ExceptionSerializerFactory.<>c__DisplayClass9_0.<BuildSerializer>b__0(Stream stream, DeserializerSession session)
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue(Stream stream, DeserializerSession session)
   at Hyperion.SerializerFactories.EnumerableSerializerFactory.<>c__DisplayClass5_0.<BuildSerializer>b__1(Stream stream, DeserializerSession session)
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue(Stream stream, DeserializerSession session)
   at lambda_method(Closure , Stream , DeserializerSession )
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue(Stream stream, DeserializerSession session)
   at lambda_method(Closure , Stream , DeserializerSession )
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue(Stream stream, DeserializerSession session)
   at Hyperion.Serializer.Deserialize[T](Stream stream)
   at Akka.Serialization.HyperionSerializer.FromBinary(Byte[] bytes, Type type)
   at Akka.Serialization.Serialization.Deserialize(Byte[] bytes, Int32 serializerId, String manifest)
   at Akka.Remote.MessageSerializer.Deserialize(ActorSystem system, SerializedMessage messageProtocol)
   at Akka.Remote.DefaultMessageDispatcher.Dispatch(IInternalActorRef recipient, Address recipientAddress, SerializedMessage message, IActorRef senderOption)
   at Akka.Remote.EndpointReader.<Reading>b__11_1(InboundPayload inbound)
   at lambda_method(Closure , Object , Action`1 , Action`1 , Action`1 )
   at Akka.Tools.MatchHandler.PartialHandlerArgumentsCapture`4.Handle(T value)
   at Akka.Actor.ReceiveActor.ExecutePartialMessageHandler(Object message, PartialAction`1 partialAction)
   at Akka.Actor.ReceiveActor.OnReceive(Object message)
   at Akka.Actor.UntypedActor.Receive(Object message)
   at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
   at Akka.Actor.ActorCell.ReceiveMessage(Object message)
   at Akka.Actor.ActorCell.Invoke(Envelope envelope)
--- End of stack trace from previous location where exception was thrown ---
   at Akka.Actor.ActorCell.HandleFailed(Failed f)
   at Akka.Actor.ActorCell.SysMsgInvokeAll(EarliestFirstSystemMessageList messages, Int32 currentState)

Both Exception and List<T> have parameterless constructors, so I'm at a loss here. Does anybody know how to figure out which type "this object" is?

David Rivera
@mithril52
Question for anyone who might know: from reading the documentation, it sounds as though if you have a cluster with multiple seed nodes and one seed node goes down, new nodes can't join the cluster until that seed node comes back up. That's all fine and good. However, in my testing, it seems this limitation also applies when a normal, non-seed node goes down: I can't get any new nodes to join the cluster until the node that died comes back up and rejoins. Is this the way it is supposed to work?
David Rivera
@mithril52
Aha. I found the `auto-down-unreachable-after` configuration
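For reference, the setting goes under akka.cluster; a minimal sketch (the 10s grace period is an arbitrary example, not a recommendation):

    # Sketch: automatically down a node that stays unreachable past the grace period
    akka.cluster {
        auto-down-unreachable-after = 10s
    }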
Maxim Cherednik
@maxcherednik
@mithril52 if any node is unreachable, no new node can join the cluster.
It's also considered a bad thing to use auto-downing.
David Rivera
@mithril52
What would I use besides auto-downing?
Maxim Cherednik
@maxcherednik
It's stated that you should not use it. I guess some kind of external monitoring tooling, probably with semi-automatic downing.
Apart from that, there is always the option to down the unreachable node manually through the console.
David Rivera
@mithril52
Yeah, manually kinda goes against elasticity :) I've tried that route, but I'm having problems getting pbm to work. It's unable to load System.Collections.Immutable, and I haven't been able to figure out why yet. But for my purposes, I think auto-down will work perfectly.
Maxim Cherednik
@maxcherednik
Yeah, manual doesn't fit the idea:-)
Bartosz Sypytkowski
@Horusiath
@maxcherednik regarding cluster singleton: by default the singleton lives on the oldest node in the cluster. I'm not sure, but I think the singleton may not move off an unreachable node, only off a removed one (since the actual alive/dead state of an unreachable node is unknown, and we don't want to risk having two singletons at the same time). If you're using singleton proxies, they will buffer messages until the new singleton location is known and then forward their buffers to it.
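As a sketch of the manager/proxy pairing Bartosz describes (the actor type, names, and the `system` variable here are hypothetical):

    // Sketch: host the singleton on the oldest node, reach it via a buffering proxy.
    using Akka.Actor;
    using Akka.Cluster.Tools.Singleton;

    // Started on every node that may host the singleton:
    system.ActorOf(
        ClusterSingletonManager.Props(
            singletonProps: Props.Create<MySingletonActor>(), // hypothetical actor
            terminationMessage: PoisonPill.Instance,
            settings: ClusterSingletonManagerSettings.Create(system)),
        name: "my-singleton");

    // The proxy buffers messages while the singleton's location is unknown
    // and forwards the buffer once a new location is resolved:
    var proxy = system.ActorOf(
        ClusterSingletonProxy.Props(
            singletonManagerPath: "/user/my-singleton",
            settings: ClusterSingletonProxySettings.Create(system)),
        name: "my-singleton-proxy");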
Maxim Cherednik
@maxcherednik
Ok, that means it's by design.
Because I couldn't find it in the documentation.
Again, it pushes me towards turning on auto-downing.
Maxim Cherednik
@maxcherednik
Still can't wrap my head around it. "Do not do auto-downing. It's going to be a split brain." But nothing works without it:-)
Bartosz Sypytkowski
@Horusiath
@maxcherednik this phrase about auto-downing is more valid when you have alternatives (i.e. on the JVM side you can buy a subscription with better solutions for that issue ;) )
Maxim Cherednik
@maxcherednik
What about remote death watch? It seems it also doesn't work when a node is unreachable; the node needs to be down.
Bartosz Sypytkowski
@Horusiath
but unless your clusters grow big, this can work fairly well
Maxim Cherednik
@maxcherednik
I see.
So it's the way to go until we really face a split brain:-)
Bartosz Sypytkowski
@Horusiath
if I were you, I'd just set auto-downing (or otherwise implement a custom split brain resolver, but that doesn't come out of the box)
Maxim Cherednik
@maxcherednik
Ok, thanks. Could you please confirm the remote death watch behavior?
Again, my expectation would be: I receive a Terminated event in case of a network issue.
Is this correct?
Bartosz Sypytkowski
@Horusiath
@maxcherednik I wouldn't count on it (Unreachable is treated as a sort of temporary, undefined state, so in most cases that state change is not decisive enough to trigger any actions), but you can check it easily if you already have a cluster spinning up. /cc @Aaronontheweb
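A minimal way to test it, as a sketch (the remote address/path is a made-up example):

    // Sketch: resolve a remote actor, watch it, and log Terminated.
    using System;
    using Akka.Actor;

    public class Watcher : ReceiveActor
    {
        public Watcher()
        {
            Receive<IActorRef>(remoteRef => Context.Watch(remoteRef));
            // Whether this fires for a merely unreachable node (vs. a downed
            // one) is exactly the behavior in question.
            Receive<Terminated>(t => Console.WriteLine($"Terminated: {t.ActorRef}"));
        }

        protected override void PreStart()
        {
            // Made-up remote path; resolve the ref, then watch it via a self-message.
            Context.ActorSelection("akka.tcp://TestHub@10.16.4.33:4053/user/worker")
                   .ResolveOne(TimeSpan.FromSeconds(5))
                   .PipeTo(Self);
        }
    }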
Maxim Cherednik
@maxcherednik
@Horusiath I did actually... but as far as I remember I had it working before.
and I am also reading this: http://getakka.net/docs/remoting/deathwatch
This section: When Will You Receive a Terminated Message?
As far as I remember, I was receiving the Terminated message right away...
Maxim Cherednik
@maxcherednik
I'm starting to remember something. It seems I already asked exactly the same question, and the answer was: Remote and Cluster watch work slightly differently, and there is no documentation for this. @Aaronontheweb
Ricky Blankenaufulland
@ZoolWay
@Silv3rcircl3 Hm, that would work. In the Task.ContinueWith handler I can dispose the stream and materializer without problems. I will have to manually send my stream-completed message there too, which Sink.ActorRef() previously did for me. Also, it does not send Akka.Actor.Status.Failure on failures; they are just silently ignored as far as I can see. So that approach does not seem suitable.
There must be some kind of after-completion callback for Sinks, I guess
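One possible shape for that, as a sketch: feed the stream's completion Task back to an actor by hand. `source`, `materializer`, `notifyTarget`, and `StreamCompleted` are hypothetical names:

    // Sketch: rebuild Sink.ActorRef-style notifications from the Task
    // materialized by Sink.Ignore.
    var done = source.RunWith(Sink.Ignore<ByteString>(), materializer);

    done.ContinueWith(t =>
    {
        if (t.IsFaulted)
            notifyTarget.Tell(new Akka.Actor.Status.Failure(t.Exception)); // failures surface here
        else
            notifyTarget.Tell(new StreamCompleted()); // hypothetical completion message
        materializer.Dispose(); // safe: the stream is finished either way
    });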
Vagif Abilov
@object
Good morning. Question on persistent actors. A persistent actor will always get a RecoveryCompleted message upon restoration of its state. What if the state restoration is slow and the persistent actor receives other messages while its state is being rebuilt? Does the actor system guarantee that those messages are processed only after the actor's state is recovered, or does the actor need to stash such messages if they expect a correct state?
Arjen Smits
@Danthar
By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages. New messages sent to a persistent actor during recovery do not interfere with replayed messages. They are cached and received by a persistent actor after recovery phase completes.
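As a sketch of that lifecycle (the class, id, and event type below are hypothetical), commands arriving mid-recovery are only handled after RecoveryCompleted:

    // Sketch: events replay first; live commands queue until recovery completes.
    using Akka.Persistence;

    public class CounterActor : ReceivePersistentActor
    {
        private int _state;

        public override string PersistenceId => "counter-1"; // hypothetical id

        public CounterActor()
        {
            Recover<int>(delta => _state += delta);                  // replayed events
            Recover<RecoveryCompleted>(_ => { /* state fully rebuilt here */ });

            // Commands received during recovery are cached by the framework and
            // only reach this handler once replay has finished:
            Command<int>(delta => Persist(delta, e => _state += e));
        }
    }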
Vagif Abilov
@object
Thank you @Danthar
Vagif Abilov
@object
I wonder if others have experienced problems caused by persistent actor snapshots. We are using the MS SQL Server adapter for persistent actors and often see Akka.Persistence.RecoveryTimedOutException with the message "Recovery timed out, didn't get snapshot within 30s". They occur regardless of whether an actor has snapshots, even if we clean out all snapshots for all actors.
We also see Circuit Breaker exceptions from akka.persistence.journal.sql-server ("Circuit Breaker is open; calls are failing fast"). They often come right after the application starts.
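If tuning is an option, the journal's circuit-breaker block is presumably the knob to look at; a sketch under the assumption that the sql-server plugin exposes the standard Akka.Persistence circuit-breaker keys (worth verifying against your version's reference config; values are arbitrary examples):

    # Sketch: loosen the journal circuit breaker (assumed keys, example values)
    akka.persistence.journal.sql-server.circuit-breaker {
        max-failures = 10
        call-timeout = 30s
        reset-timeout = 60s
    }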
tstojecki
@tstojecki
@object really nice talk on Akka.Streams at NDC Oslo... did you publish the sample code anywhere?
Vagif Abilov
@object
Thank you @tstojecki, you can grab sample code here: https://dl.dropboxusercontent.com/u/8734289/ReactiveTweets-NDC.zip
tstojecki
@tstojecki
thanks!
Kosta Petan
@kostapetan
Hey guys, can anyone help me with setting up Akka remoting under Docker?
Basically I have a docker-compose file defining an API service, a seed node (worker), and a regular node (worker), but so far I've had no success in connecting the nodes to each other.
Jalal EL-SHAER
@jalchr
@object Yes, I tried persistent actors a few days ago and experienced the same issues you describe.
Kosta Petan
@kostapetan
my remote config looks like this:
    remote {
        log-remote-lifecycle-events = INFO
        log-received-messages = on

        dot-netty.tcp {
            transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            # will be populated with a dynamic host name at runtime if left commented out
            # public-hostname = "127.0.0.1"
            hostname = 0.0.0.0
            port = 4053
            maximum-frame-size = 256000b
        }
    }
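(A common shape for the Docker case, as a sketch: bind to 0.0.0.0 inside the container but advertise an address other containers can resolve. "seed-node" below is a hypothetical docker-compose service name, not something from this thread.)

    # Sketch: advertise the compose service name while binding to all interfaces
    dot-netty.tcp {
        hostname = 0.0.0.0              # bind address inside the container
        public-hostname = "seed-node"   # hypothetical service name resolvable by peers
        port = 4053
    }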