Vasily Kirichenko
@vasily-kirichenko
@Horusiath ^^
Bartosz Sypytkowski
@Horusiath
@vasily-kirichenko It potentially could.
I don't remember right now whether singleton migration is triggered when a node becomes unreachable, or when it's marked as dead
but I'm almost sure the node needs to be identified as dead - unreachable nodes are quite common during a network partition, so it would be bad to spawn another singleton in reaction to simple unreachability
Vasily Kirichenko
@vasily-kirichenko
the log is:
```
12:47:29 INFO  [ClusterSingletonManager, 6] Previous oldest removed [akka.tcp://xxx@localhost:4888]
12:47:29 INFO  [ClusterSingletonManager, 6] Younger observed OldestChanged: [ -> myself]
12:47:29 INFO  [ClusterSingletonManager, 6] Singleton manager started singleton actor [akka://xxx]
12:47:29 INFO  [ClusterSingletonManager, 6] ClusterSingletonManager state change [Younger -> Oldest] Akka.Cluster.Tools.Singleton.YoungerData
12:47:29 INFO  [0, Culture=neutral, PublicKeyToken=null]], 9] Getting server list...
12:47:30 ERROR [OneForOneStrategy, 11] Object reference not set to an instance of an object. Object reference not set to an instance of an object.
```
BTW, I have no idea how to catch the exception on the last line.
What can it be?
Bartosz Sypytkowski
@Horusiath
No idea. Do you use some custom supervision strategy?
Vasily Kirichenko
@vasily-kirichenko
No.
Don't even specify one explicitly.
Vasily Kirichenko
@vasily-kirichenko
@Horusiath I have another question :) That cluster singleton actor spawns a couple of other actors and a stream. As far as I understand, it's its responsibility to shut down the stream when it's stopping. Do you find the following code good, or is there a better / simpler way to do it?
```
// Keep.both pairs the upstream materialized value (the task queue)
// with the kill switch, so both come back from Graph.run.
let stream mat =
    Source...
    |> Source.viaMat (KillSwitches.Single()) Keep.both
    |> Source...
    |> Source.``to`` Sink.ignore
    |> Graph.run mat

let props (mat: IMaterializer) : Props =
    props(
        let rec loop (streamKillSwitch: IKillSwitch) (msg: obj) =
            match msg with
            // Shut the stream down when the actor stops.
            | LifecycleEvent PostStop ->
                streamKillSwitch.Shutdown()
                ignored ()
            | _ -> unhandled ()

        fun (ctx: Actor<obj>) ->
            let taskQueue, streamKillSwitch = stream mat
            spawn ctx "another-actor" (Another.props taskQueue) |> ignore
            become (loop streamKillSwitch)
    ).ToProps()
```
Deniz İrgin
@Blind-Striker
Is it possible to define more than one dispatcher, each with a different throughput, to use in different actors? Like this:
```
custom-dispatcher-300 {
    type = Dispatcher
    throughput = 300
}

custom-dispatcher-400 {
    type = Dispatcher
    throughput = 400
}

custom-dispatcher-500 {
    type = Dispatcher
    throughput = 500
}
```
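Multiple named dispatcher blocks like these can indeed coexist; each actor opts into one, for example through deployment configuration. A minimal sketch, assuming hypothetical actor paths `/worker-a` and `/worker-b`:

```
akka.actor.deployment {
    /worker-a {
        dispatcher = custom-dispatcher-300
    }
    /worker-b {
        dispatcher = custom-dispatcher-400
    }
}
```

Alternatively, a dispatcher can be selected in code with `Props.WithDispatcher("custom-dispatcher-300")`.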
Onur Gumus
@OnurGumus
@vasily-kirichenko how would you gracefully handover a singleton ?
Vasily Kirichenko
@vasily-kirichenko
@OnurGumus Interesting question. The singleton mechanism seems to be based on cluster membership events, so a singleton is not moved until the node is unreachable (IMHO).
Onur Gumus
@OnurGumus
I am thinking of calling Cluster.Leave
but I'm not sure how graceful it would be
Vasily Kirichenko
@vasily-kirichenko
but why do you need to move it to another node?
Onur Gumus
@OnurGumus
The reason is I have two processes running. Occasionally we want to upgrade the application. So we stop one, handover happens, we upgrade the stopped one, then start it and stop the 2nd one.
So that we don't have downtime
during the upgrade process.
I am not sure if this is the best way though
Vasily Kirichenko
@vasily-kirichenko
so you want to switch fast? I dunno.
Onur Gumus
@OnurGumus
Yes. I think this should be combined with a coordinated shutdown
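A minimal sketch of the graceful-leave idea being discussed, using Akka.NET's `Cluster` API (the `leaveCluster` helper name is ours; leaving, rather than killing the process, lets the singleton hand over to the next-oldest node first):

```
open Akka.Actor
open Akka.Cluster

// Ask this node to leave the cluster gracefully. The cluster
// moves the node through Leaving/Exiting, triggering singleton
// hand-over, instead of treating it as a crashed node.
let leaveCluster (system: ActorSystem) =
    let cluster = Cluster.Get system
    cluster.Leave cluster.SelfAddress
```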
Vasily Kirichenko
@vasily-kirichenko
I played with it today and it took about a minute :(
Onur Gumus
@OnurGumus
No that's obviously wrong
Vasily Kirichenko
@vasily-kirichenko
The docs promise it should take "seconds"
Onur Gumus
@OnurGumus
It is almost instant for me, but do you use split brain?
How do you down the other node?
Vasily Kirichenko
@vasily-kirichenko
I suspect the Consul was the issue
Onur Gumus
@OnurGumus
Even if you close the application, you need to "down" it.
Vasily Kirichenko
@vasily-kirichenko
Ctrl+C :)
Onur Gumus
@OnurGumus
that's not sufficient for handover
you also have to down it.
Vasily Kirichenko
@vasily-kirichenko
so the other nodes log that it's unreachable for that minute
Onur Gumus
@OnurGumus
Either use auto down config or use split brain
Then it is normal
Vasily Kirichenko
@vasily-kirichenko
auto down is evil! evil!
:)
Onur Gumus
@OnurGumus
Use split brain then.
Vasily Kirichenko
@vasily-kirichenko
how?
resolver you mean?
Onur Gumus
@OnurGumus
It is very simple and documented
Yes
Vasily Kirichenko
@vasily-kirichenko
ah. will read about it.
Onur Gumus
@OnurGumus
to tackle network partitioning
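For reference, in Akka.NET the split brain resolver is enabled through the downing provider. A minimal sketch, with illustrative strategy and timing values:

```
akka.cluster {
    downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
    split-brain-resolver {
        active-strategy = keep-majority
        stable-after = 20s
    }
}
```

`keep-majority` downs the minority side of a partition; other strategies (e.g. `static-quorum`, `keep-oldest`) are described in the docs.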
Vasily Kirichenko
@vasily-kirichenko
However, it's not very important in my case. The actor is just a batch worker; nobody even sends messages to it.
so a minute downtime is ok
Onur Gumus
@OnurGumus
No, it is not okay
you have to down it by some means
if you don't down it, then... I don't know, maybe then it is okay
Bartosz Sypytkowski
@Horusiath
@vasily-kirichenko kill switch invoked on PostStop seems fine.