Aaron Stannard
@Aaronontheweb
I have a failover scenario where a cluster singleton has to migrate from one node to another
Andrey Leskov
@andreyleskov
it is interesting
Aaron Stannard
@Aaronontheweb
and the other nodes relying on data from that singleton must be able to receive notifications about the location of the new singleton on the network
I'm going to be using the MNTR to test my recovery mechanism there
because a simple unit test can't do it justice
I have a similar scenario where the node with the singleton running on it gets blacked out temporarily and comes back
"does my cluster react according to my plan?" is what my MNTR spec tries to answer there
Andrey Leskov
@andreyleskov
ok, it seems I should use MNTR tests only for high-level scenarios, and test all application logic with TestKit
by the way - is there any good profiling tool for Akka.NET, like dotTrace? I'm using NBench to measure performance and I'm looking for a simpler way to find bottlenecks
or any guide on how to analyze an Akka-based application's performance with common tools
best practices, maybe
@thomaslazar please provide a minimal example for reproducing the issue
Aaron Stannard
@Aaronontheweb
working on something for doing much more detailed system tracing and monitoring
but it's in early stages still
Andrey Leskov
@andreyleskov
ok, I'm sure it will be great !
Jalal EL-SHAER
@jalchr

Hi @Aaronontheweb
In your post here: https://petabridge.com/blog/intro-to-persistent-actors/ , you mentioned the following:

The PersistenceId and the SequenceNr, together, form the primary key. And the sequence number is a value that monotonically increases in-memory inside the persistent actor - so imagine if you have two actors with the same PersistenceId but different sequence numbers writing to the same store. It will be chaos and will inevitably error out - so that’s why it’s crucial that every PersistenceId be globally unique within your ActorSystem (at least for all actors writing to that store.)

I have a question regarding the "PersistenceId" in Akka.net in a cluster. Does it work like this or should I need a specifier per node ?

Joshua Garnett
@joshgarnett
Good morning everyone
Thusitha
@dnnbuddy
how stable is the .NET port of the framework compared with the original Akka version? Do we have any information on that?
Joshua Garnett
@joshgarnett
1.3 brought in some breaking changes; mostly that was to bring it closer to the original Akka version
Bartosz Sypytkowski
@Horusiath
@jalchr PersistenceId is a unique logical identifier of a target component - think of it as, e.g., a user id.
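A minimal sketch of that idea (the `UserActor`/user-id names here are hypothetical, not from the thread): each entity derives its PersistenceId from its own unique id, so no two persistent actors writing to the same store ever share one.

```csharp
using Akka.Persistence;

// Hypothetical example: a persistent actor whose PersistenceId embeds
// the entity's own unique id, keeping ids globally unique in the store.
public class UserActor : ReceivePersistentActor
{
    private readonly string _userId;
    private int _eventCount; // in-memory state rebuilt from the journal

    // "user-" + the entity id: unique per user, and identical no matter
    // which cluster node the actor happens to be recovered on.
    public override string PersistenceId => $"user-{_userId}";

    public UserActor(string userId)
    {
        _userId = userId;

        Recover<string>(evt => _eventCount++);
        Command<string>(cmd => Persist(cmd, evt => _eventCount++));
    }
}
```

So no per-node specifier is needed: the id identifies the entity, not the node, and a given persistent actor should only be running on one node at a time anyway (e.g. under cluster sharding or a singleton).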
Ricardo Abreu
@codenakama

hey guys, I keep getting this error using actors in a cluster and sending messages:

[ERROR][10/15/2017 11:13:38][Thread 0010][remoting] Cannot find serializer with id [9]. The most probable reason is that the configuration entry 'akka.actor.serializers' is not in sync between the two systems.

however the config is the same in all projects
I tried NewtonsoftJson and MessagePack without success. In my previous tests it was working though :/

my hocon config

    akka {
      actor {
        provider = "Akka.Cluster.ClusterActorRefProvider, AkkA.Cluster"
      }
      remote {
        log-remote-lifecycle-events = DEBUG
        helios.tcp {
          hostname = "localhost"
          port = 7000
        }
      }
      cluster {
        seed-nodes = ["akka.tcp://ClusterSystem@localhost:7000"]
      }
      serializers {
        messagepack = "Akka.Serialization.MessagePack"
      }
      serialization-bindings {
        "System.Object" = messagepack
      }
    }

Ricardo Abreu
@codenakama
can anybody share their hocon config within a cluster?
one with serializers setup
also, how is HOCON better than JSON or XML? it seems to not work properly when I have line breaks in the wrong place (sometimes added by the editor) and I have to escape braces
actually that was the issue the whole time

having

serializers
{
}

doesn't work

you have to have serializers {.... }
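For comparison, a fully written-out serializer section might look like the sketch below. The `MsgPackSerializer` class and assembly names are my assumption based on the Akka.Serialization.MessagePack package (verify the exact type name against the version you have installed); note also that `serializers` and `serialization-bindings` belong under `akka.actor`, not at the `akka` root.

```hocon
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    serializers {
      # assumed fully-qualified type name - check your installed package
      messagepack = "Akka.Serialization.MessagePack.MsgPackSerializer, Akka.Serialization.MessagePack"
    }
    serialization-bindings {
      "System.Object" = messagepack
    }
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    helios.tcp {
      hostname = "localhost"
      port = 7000
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem@localhost:7000"]
  }
}
```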

Bartosz Sypytkowski
@Horusiath
@codenakama from what I've checked last time, serializer with Id 9 is a dedicated one for the DistributedPubSub feature.
Ricardo Abreu
@codenakama
@Horusiath the error was a few line breaks. I changed packages and tried many things only to realise that it was the line breaks....
Visual Studio for Mac adds weird ones when you write HOCON in the appsettings JSON file.
The configuration is parsed without errors and the ActorSystem assumes it's all OK. But then when you send messages between actors on different nodes it throws those errors
Thomas Denoréaz
@ThmX

Hi all! I have a small problem with Akka.IO, very rarely I get this error: Resource temporarily unavailable
However, no Tcp.Closed nor Tcp.PeerClosed event is sent to either of the connection handlers.
After a bit of debugging I was also able to see that my handlers are correctly registered (stored inside the ConnectionInfo).
The error is coming from this line TcpConnection.cs#L835:

[DEBUG][10/15/17 11:56:01 PM][Thread 0006][[akka://benchmark-a/system/IO-TCP/$a#359412270]] Closing connection due to IO error System.Net.Sockets.SocketException (0x80004005): Resource temporarily unavailable
   at System.Net.Sockets.Socket.Send(IList`1 buffers)
   at Akka.IO.TcpConnection.PendingBufferWrite.<DoWrite>g__WriteToChannel7_0(ByteString data, <>c__DisplayClass7_0& )
   at Akka.IO.TcpConnection.PendingBufferWrite.DoWrite(ConnectionInfo info)
[DEBUG][10/15/17 11:56:01 PM][Thread 0003][[akka://benchmark-a/system/IO-TCP/$a/$a#757063873]] Closing connection due to IO error System.Net.Sockets.SocketException (0x80004005): Connection reset by peer
[INFO][10/15/17 11:56:01 PM][Thread 0006][akka://benchmark-a/system/IO-TCP/$a] Message SocketReceived from akka://benchmark-a/deadLetters to akka://benchmark-a/system/IO-TCP/$a was not delivered. 1 dead letters encountered.

Unfortunately, I was not able to create a reproducible example as it happens only from time to time during the launch of my benchmarks with BenchmarkDotNet.

Thomas Denoréaz
@ThmX
It is also worth mentioning that I tried adding a Context.Watch(...) on both TcpConnections, but they are both still there
Jessie Wadman
@JessieWadman
Looks like remoting (and clustering) breaks when communicating cross-platform using .NET base types like DateTimeOffset, List and Dictionary, etc. Consider when a Windows Forms application remotes to an actor system running on Linux built with .NET Core. The problem isn't with Akka.Remoting itself, but with the underlying serializers (both Newtonsoft.Json and Hyperion), because of the strict type naming configuration in Akka, resulting in the serializers not being able to resolve types between mscorlib.dll/System.Core.dll (on .NET 4.6) and System.Private.CoreLib.dll (.NET Core). Anyone already bumped into this problem and solved it? It's only a problem for BCL types, obviously, but you'd be surprised at how often you use List and Dictionary and DateTimeOffset :-)
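One workaround that is sometimes suggested for this (a sketch of mine, not from the thread) is a custom Newtonsoft serialization binder that rewrites the core-library assembly name before type resolution, so names written by .NET Core resolve on .NET Framework. All names below except the Newtonsoft types are hypothetical, and how you wire the binder into Akka's JSON serializer settings varies by version:

```csharp
using System;
using Newtonsoft.Json.Serialization;

// Hypothetical binder: maps .NET Core's core-library assembly name onto
// .NET Framework's mscorlib so BCL type names (List<T>, DateTimeOffset,
// Dictionary<K,V>, ...) written on one platform resolve on the other.
public class CrossPlatformBinder : DefaultSerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        var fixedAssembly = assemblyName == "System.Private.CoreLib"
            ? "mscorlib"
            : assemblyName;

        // Generic type arguments embed assembly names too, so rewrite
        // the full type name as well.
        var fixedType = typeName.Replace("System.Private.CoreLib", "mscorlib");

        return base.BindToType(fixedAssembly, fixedType);
    }
}
```

The other common approach is to keep BCL generics and DateTimeOffset off the wire entirely and exchange flat, explicitly defined message contracts instead, converting at the edges.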
Thomas Denoréaz
@ThmX
Ok, it was my bad - I found the problem. I'm still surprised though, as the message was not displayed as an unhandled message. I was only handling Tcp.Closed and Tcp.PeerClosed, and it was firing a Tcp.ErrorClosed, so now I'm directly handling Tcp.ConnectionClosed.
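A minimal sketch of that fix (the handler actor name is made up): Tcp.ConnectionClosed is the common base type of Tcp.Closed, Tcp.PeerClosed and Tcp.ErrorClosed, so a single handler covers every close reason.

```csharp
using Akka.Actor;
using Akka.IO;

// Hypothetical connection handler: matching on the Tcp.ConnectionClosed
// base type catches Closed, PeerClosed, ErrorClosed and the other
// close variants in one place.
public class ConnectionHandler : ReceiveActor
{
    public ConnectionHandler()
    {
        Receive<Tcp.Received>(msg =>
        {
            // process msg.Data here
        });

        Receive<Tcp.ConnectionClosed>(closed =>
        {
            // closed.IsErrorClosed / closed.IsPeerClosed indicate why
            // the connection went away, if you need to branch on it.
            Context.Stop(Self);
        });
    }
}
```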
Robert Stiff
@uatec
so here's a fun story
i have very valuable data in my FSM
and i just corrupted it
i am storing my state in redis and I can see that the "akka:persistence:snapshotssnapshot:myfsm" key is intact, but "akka:persistence:snapshotsjournal:persisted:myfsm" is corrupted
i have a backup of akka:persistence:snapshotsjournal:persisted:myfsm though, but it might not be in sync with the current state of akka:persistence:snapshotssnapshot:myfsm
can anyone help me figure out how best to integrate these remaining bits of state?
Robert Stiff
@uatec
phew, actually, my FSM had not persisted its state, so the journal was corrupt, but I removed the journal and reset back to 10 minutes ago
Jalal EL-SHAER
@jalchr

I'm trying to make Akka.Persistence.AtLeastOnceDeliveryReceiveActor (v1.3.1) work:

                Deliver(commandRouter.Path,
                    messageId =>
                    new ReliableDeliveryEnvelope<StartJob>(startJob, messageId));

Where commandRouter is defined like this

            var commandRouter = ClusterSystem.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "tasker");

and has following hocon config:

            /tasker {
                router = consistent-hashing-group
                routees.paths = ["/user/api"]
                virtual-nodes-factor = 8
                cluster {
                    enabled = on
                    max-nr-of-instances-per-node = 1
                    allow-local-routees = on
                    use-role = web
                }
            }

The message never reaches the destination actor. If I use an actor instead of a router, it works fine !

So this works
                Deliver(SystemActors.ApiMaster.Path,
                    messageId =>
                    new ReliableDeliveryEnvelope<StartJob>(startJob, messageId));
Robert Stiff
@uatec
with things like that, i find that you have to create the router on every single node
that's definitely the case with singletons
Jalal EL-SHAER
@jalchr
I think this is more like a bug
Bartosz Sypytkowski
@Horusiath
@jalchr if you have a cluster group router, you're basically creating an endpoint on the local node that will redirect all incoming messages to the actors defined by routees.paths, living on every node in the cluster that has the configured role. This also means that those actors (in your case /user/api) need to be created manually, as group routers won't create them by themselves
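In other words, a sketch (assuming some `ApiActor` class stands in for whatever implements /user/api - both it and the startup helper are hypothetical names): every node carrying the "web" role has to start the routee itself before the group router can reach it.

```csharp
using Akka.Actor;
using Akka.Routing;

// Hypothetical routee implementation living at /user/api
public class ApiActor : ReceiveActor
{
    public ApiActor()
    {
        Receive<object>(msg => { /* handle routed work here */ });
    }
}

public static class NodeStartup
{
    public static void StartActors(ActorSystem clusterSystem)
    {
        // Run on every node with the "web" role: create the routee
        // explicitly - the group router only resolves routees.paths,
        // it never creates the actors behind them.
        clusterSystem.ActorOf(Props.Create(() => new ApiActor()), "api");

        // The router itself is just a local endpoint over those paths,
        // configured from the /tasker deployment section above.
        clusterSystem.ActorOf(
            Props.Empty.WithRouter(FromConfig.Instance), "tasker");
    }
}
```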