Peter Bergman
@peter-bannerflow
@Danthar ok, thanks for verifying my assumptions
Peter Bergman
@peter-bannerflow
What about if I pass an IActorRef in a message to some other node in the cluster, will an actor receiving that message be able to send a direct message to that IActorRef?
Marc Piechura
@marcpiechura
@peter-bannerflow yes
Peter Bergman
@peter-bannerflow
Right, thanks 😊
Marc Piechura
@marcpiechura
Np
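To illustrate the pattern discussed above, here is a minimal sketch (with hypothetical WorkRequest/WorkerActor names) of shipping a reply address inside a message: the IActorRef is serialized along with the message, and the receiving node gets it back as a (possibly remote) reference it can Tell directly.

```csharp
using Akka.Actor;

// hypothetical message type carrying the reply address
public sealed class WorkRequest
{
    public WorkRequest(IActorRef replyTo, string payload)
    {
        ReplyTo = replyTo;
        Payload = payload;
    }

    public IActorRef ReplyTo { get; }
    public string Payload { get; }
}

// hypothetical actor running on another node in the cluster
public sealed class WorkerActor : ReceiveActor
{
    public WorkerActor()
    {
        Receive<WorkRequest>(req =>
        {
            // req.ReplyTo resolves back to the original actor, so this reply
            // goes straight to it, even across node boundaries
            req.ReplyTo.Tell($"done: {req.Payload}");
        });
    }
}
```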
JaspritBola
@JaspritBola
I'm looking at some logs here, and I'm noticing that the "Association failure ---> Akka.Remote.Transport.AkkaProtocolException: The remote system has a UID that has been quarantined. Association aborted." eventually turns into a "Akka.Remote.Transport.InvalidAssociationException: Association failure ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown."
JaspritBola
@JaspritBola
Another odd behaviour is that I'm still getting Association failures after I downed the address. The cluster state shows the status is down. Do I also have to call leave on the address to stop association attempts?
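For reference, the operations being asked about here are exposed by the Akka.Cluster extension; a minimal sketch (the address value is hypothetical, and whether Leave is needed on top of Down wasn't answered in this thread):

```csharp
using Akka.Actor;
using Akka.Cluster;

// hypothetical address of the node producing the association failures
var address = Address.Parse("akka.tcp://MySystem@10.0.0.5:4053");

var cluster = Cluster.Get(system); // 'system' is the local ActorSystem
cluster.Down(address);             // marks the node as down in the cluster membership
cluster.Leave(address);            // asks the node to leave gracefully
```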
Pablo Castilla
@pablocastilla
Hi! I have just posted a question on StackOverflow. Basically, it's about the best way to ensure that a timeout message is sent; maybe someone could help. Thanks!
Bartosz Sypytkowski
@Horusiath
@pablocastilla what do you mean by timeout message? With fire-and-forget messages, there is nothing to wait for
Pablo Castilla
@pablocastilla
@Horusiath I mean deferred messages: when you want to get notified at a certain exact time.
Bartosz Sypytkowski
@Horusiath
this is a hard case when you start thinking about consistency, i.e. should the same scheduled timeout be allowed to fire twice?
Pablo Castilla
@pablocastilla
that would be ok, the actor could discard the second message. But I have to make sure it gets there; we have business timeouts
another option could be to make sure certain actors are recreated after a reboot or when a machine goes missing? :S
Bartosz Sypytkowski
@Horusiath
this would require having some scheduler working on each node and a distributed "database" that the timeout requests would be written to, so they could be taken over by other nodes in case of an unexpected crash
@pablocastilla with cluster sharding I guess you could
(but I haven't checked if actors would be recreated in case of crash, only when migrated between machines)
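As a rough illustration of the "write the timeout request somewhere durable and re-schedule it" idea, here is a minimal single-node sketch using Akka.Persistence (hypothetical TimeoutRequested/BusinessTimeoutActor names, assuming a journal is configured); it does not cover the multi-node takeover part, and the timeout can fire more than once after recovery, so receivers must deduplicate as discussed above.

```csharp
using System;
using Akka.Actor;
using Akka.Persistence;

// hypothetical event describing a requested business timeout
public sealed class TimeoutRequested
{
    public TimeoutRequested(DateTime dueUtc) { DueUtc = dueUtc; }
    public DateTime DueUtc { get; }
}

public sealed class BusinessTimeoutActor : ReceivePersistentActor
{
    public override string PersistenceId => "business-timeouts";

    public BusinessTimeoutActor()
    {
        // persist the request before scheduling so it survives a crash/restart
        Command<TimeoutRequested>(req => Persist(req, Schedule));

        Command<string>(msg =>
        {
            if (msg == "timeout")
            {
                // notify the interested actors here; they must treat duplicates as no-ops
            }
        });

        // on recovery, replay the stored requests and schedule them again
        Recover<TimeoutRequested>(Schedule);
    }

    private void Schedule(TimeoutRequested req)
    {
        var delay = req.DueUtc - DateTime.UtcNow;
        if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
        Context.System.Scheduler.ScheduleTellOnce(delay, Self, "timeout", Self);
    }
}
```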
Bart de Boer
@boekabart
@pablocastilla I suggested a solution at SO
Pablo Castilla
@pablocastilla
so I would have to write it, wouldn't I?
Bartosz Sypytkowski
@Horusiath
also, for the first case the ddata module would be good, but it's still in development
eventually something like redis would do the job
Pablo Castilla
@pablocastilla
@boekabart I think I got the idea and it should work. Thanks so much. Tell me what you think about the comments plz
Bart de Boer
@boekabart
After wrapping my Akka.Remoting application in a Topshelf service, it won't quit anymore. CTRL-C does 'stop' the service OK, but then the process hangs (with the console window open). Attaching a debugger shows 6 threads waiting for UnfairSemaphore.Wait():
Not Flagged > 12944 8 Worker Thread akka.remote.default-remote-dispatcher_1 Akka.dll!Helios.Concurrency.DedicatedThreadPool.UnfairSemaphore.Wait Normal
any clue what might cause this?
Tomasz Jaskula
@tjaskula
Hi
Bart de Boer
@boekabart
@Aaronontheweb any idea what those threads are actually waiting for?
Aaron Stannard
@Aaronontheweb
@boekabart that's what the ForkJoinDispatcher runs on top of
as for why they're all waiting - should be because they're expecting work
however, if you've terminated the actor system by then
and they still aren't shut down
it means we're not properly disposing of them
Bart de Boer
@boekabart
according to the log, remoting is correctly shut down
Aaron Stannard
@Aaronontheweb
ok
that's a bug then
I'll file an issue for it
need to dispose all dispatchers on shutdown
thanks for reporting it Bart
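Until dispatchers are disposed automatically, one workaround sketch is to shut the ActorSystem down explicitly in the Topshelf stop callback; the AkkaService name here is hypothetical, and ActorSystem.Terminate() assumes a reasonably recent Akka.NET (older versions used Shutdown()/AwaitTermination() instead).

```csharp
using System;
using Akka.Actor;
using Topshelf;

// hypothetical Topshelf service wrapper around the actor system
public sealed class AkkaService
{
    private ActorSystem _system;

    public void Start()
    {
        _system = ActorSystem.Create("my-remote-system");
        // ... create top-level actors here ...
    }

    public void Stop()
    {
        // block until the system (including remoting) has fully terminated
        _system?.Terminate().Wait(TimeSpan.FromSeconds(30));
    }
}

public static class Program
{
    public static int Main()
    {
        return (int)HostFactory.Run(cfg =>
        {
            cfg.Service<AkkaService>(s =>
            {
                s.ConstructUsing(_ => new AkkaService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });
            cfg.RunAsLocalSystem();
        });
    }
}
```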
Bart de Boer
@boekabart
It seems to only happen when using topshelf
Aaron Stannard
@Aaronontheweb
hmmm...
is it possible that, with the way Topshelf is configured in your instance,
that those threads are running in the foreground?
and not the background?
latter being the default
hi @tjaskula
Bart de Boer
@boekabart
I haven't changed any dispatcher configuration
hold on, I did in fact create a dispatcher in the foreground, for some system tasks
Aaron Stannard
@Aaronontheweb
I use Topshelf too
ohhhhhhhhh
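For completeness, whether a custom ForkJoinDispatcher uses background or foreground threads can be declared in its HOCON block. A sketch, with the dedicated-thread-pool keys taken from the Akka.NET dispatcher docs (they may differ between versions); background (the default) lets the process exit, while foreground threads keep it alive:

```csharp
using Akka.Actor;
using Akka.Configuration;

var config = ConfigurationFactory.ParseString(@"
    my-forkjoin-dispatcher {
        type = ForkJoinDispatcher
        throughput = 100
        dedicated-thread-pool {
            thread-count = 3
            deadlock-timeout = 3s
            threadtype = background  # or foreground
        }
    }");

var system = ActorSystem.Create("my-system", config);
// assign it to an actor with Props.Create<MyActor>().WithDispatcher("my-forkjoin-dispatcher")
```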