Aaron Stannard
@Aaronontheweb
Akka.NET Community Standup coming in 45 minutes https://www.youtube.com/watch?v=nWPZIfSi2uE
Calven Yow
@CalvenYow_twitter
Hi,
Calven Yow
@CalvenYow_twitter
Which Kubernetes deployment strategy is preferred for deploying a cluster sharding system: a Deployment or a StatefulSet? I'm currently using a StatefulSet, but during a rolling deployment I can see the shards being handed over to the old instances first, then rebalanced again once the new instance is deployed. Is there a way to hand the shards over to the new instance directly, without going through an old instance and waiting for a rebalance?
David Mercer
@dbmercer
Is it OK to process a message in an async void method that is called with Receive instead of ReceiveAsync so long as you don't touch shared state after the await? Basically, I'm doing an await and then a Self.Tell, which seems pretty much equivalent to using PipeTo, right?
Ismael Hamed
@ismaelhamed
@dbmercer No - if you try to access the context after an await you will get a "There is no active ActorContext" exception, since Akka does not preserve the context in this case. Use PipeTo instead; it exists precisely to keep you away from the anti-patterns you mentioned.
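
For reference, a minimal sketch of the PipeTo pattern being recommended here (the message types, actor, and URL are hypothetical):

using System.Net.Http;
using System.Threading.Tasks;
using Akka.Actor;

// Hypothetical messages, purely for illustration.
public sealed record FetchUser(int Id);
public sealed record UserFetched(int Id, string Json);

public class UserActor : ReceiveActor
{
    private static readonly HttpClient Http = new HttpClient();

    public UserActor()
    {
        Receive<FetchUser>(msg =>
        {
            // Kick off the async work and pipe the result back to Self as a message.
            // Nothing actor-related is touched inside the async continuation.
            FetchAsync(msg.Id).PipeTo(Self, sender: Sender);
        });

        Receive<UserFetched>(result =>
        {
            // Back inside normal message processing - safe to touch actor state here.
        });
    }

    private static async Task<UserFetched> FetchAsync(int id)
    {
        var json = await Http.GetStringAsync($"https://example.com/users/{id}");
        return new UserFetched(id, json);
    }
}
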
David Mercer
@dbmercer
@ismaelhamed Ah, gotcha. The code looked better without the ContinueWiths and PipeTo, but that makes sense.
David Mercer
@dbmercer
Having said that, I can make my code look fine just by encapsulating all that logic in another method, but your explanation still improved my understanding.
Shukhrat Nekbaev
@snekbaev
@dbmercer When I researched this topic some years ago, my understanding was that if .PipeTo is used, the actor will process those messages "concurrently": if you send 10 messages to fetch a user by id, it will make 10 calls to the DB. However, if you await those calls, the messages will be processed one after another, with only one DB call at a time. Which one to use depends on your situation; at least in the solution I built back then, with tens and tens of actors and async processing, I didn't use .PipeTo. You can also achieve behavior similar to PipeTo with actor pools, for example - it all depends on what your needs are, and then you choose how to implement it. If you need the context/sender/self after the async call, save it into a local variable first (var sender = Sender; var context = Context; var self = Self;), then do the async call, and finally invoke methods on those variables. AFAIR there was also a RunTask method that allowed running async work within a non-async method. Like I said, it's been a while, so please check the docs to validate :)
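
A rough sketch of the "capture what you need first" idea described above, using a plain Receive and a fire-and-forget async helper. All names are hypothetical, and as noted in this thread PipeTo is generally the safer default, since exceptions in the detached task are easy to lose:

using System.Threading.Tasks;
using Akka.Actor;

public sealed record DoWork(string Payload);
public sealed record WorkDone(string Result);

public class WorkerActor : ReceiveActor
{
    public WorkerActor()
    {
        Receive<DoWork>(msg =>
        {
            // Capture context-dependent references BEFORE going async;
            // Sender/Self/Context are not reliable once this handler has returned.
            var sender = Sender;
            var self = Self;

            // Fire-and-forget continuation that only uses the captured locals.
            _ = HandleAsync(msg, sender, self);
        });
    }

    private static async Task HandleAsync(DoWork msg, IActorRef sender, IActorRef self)
    {
        var result = await SomeExternalCallAsync(msg.Payload);
        sender.Tell(new WorkDone(result), self);
    }

    private static Task<string> SomeExternalCallAsync(string payload) =>
        Task.FromResult(payload.ToUpperInvariant()); // stand-in for a real async call
}
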
David Mercer
@dbmercer
Unless I am misunderstanding, await in a ReceiveAsync will not process additional messages while awaiting. My idea, though, was to use await in a synchronous Receive. I thought so long as I don’t access my actor’s internal state after the await I would be fine. But as @ismaelhamed pointed out, even if I am not explicitly using the state, I’m probably using the internal context. My thought is that I might be able to figure out a way to get it to work, but using PipeTo is going to be safer, so unless I have a compelling reason to use my unorthodox approach, I should stick to PipeTo.
Michael Handschuh
@mchandschuh
How do you define different persistence configurations on a per-actor basis? I have requirements to persist to different stores. I'm only familiar with configuring persistence via HOCON, and I didn't see a mechanism for linking different persistence configs to a particular set of actors, or vice versa, linking specific actors from code to specific persistence configuration sections. I was expecting to find a setting on Props for specifying the persistence configuration section to use. I also expected there to be a mechanism within the deployment section of HOCON to specify which persistence configuration section to use, but I couldn't find any examples of this in the documentation. Thanks!
gcohen12
@gcohen12
@Aaronontheweb
OpenTelemetry
A year ago I read the comparison of OpenTelemetry vs. OpenTracing.
Has the status changed since version 1.0 was released?
I'm considering adding tracing, but I'm not sure whether to use OpenTelemetry or OpenTracing.
Ismael Hamed
@ismaelhamed
@dbmercer @snekbaev I'm having this same conversation in akkadotnet/akka.net#5066
@mchandschuh you can use the JournalPluginId property available on PersistentActors to configure a journal programmatically.
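
A quick sketch of what that looks like; the plugin paths below are made-up examples and must match journal / snapshot-store sections you define in HOCON:

using Akka.Persistence;

public class OrderActor : ReceivePersistentActor
{
    public override string PersistenceId => "order-" + Self.Path.Name;

    // Point this actor at specific journal / snapshot-store configuration sections.
    // The paths are hypothetical - use whatever sections exist in your config.
    public override string JournalPluginId => "akka.persistence.journal.orders-sqlite";
    public override string SnapshotPluginId => "akka.persistence.snapshot-store.orders-sqlite";

    public OrderActor()
    {
        Command<string>(cmd => Persist(cmd, evt => { /* update state */ }));
    }
}
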
Michael Handschuh
@mchandschuh
@ismaelhamed -- awesome! I missed that, many thanks!
Michael Handschuh
@mchandschuh
Are mailbox message counts decremented as soon as a message is dequeued? I'm planning on using the smallest-mailbox router pool/logic, but I want the pool to count a message that's currently being processed; otherwise each worker actor will have one message processing and one in the mailbox before a new worker actor is created for the pool. Am I misunderstanding something, or do I need to write a custom pool that looks at whether or not an actor is currently processing (not sure what APIs are available at this level, haven't looked yet) instead of looking at the number of messages in the mailbox? If all actors are currently processing a message, then spin up a new actor (or several); otherwise send the message to an idle actor. Settings would define min/max pool size, size increments, etc. Thoughts? Thanks!
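
For context, this is roughly how the smallest-mailbox pool gets wired up (a sketch with hypothetical names). As far as I know the routing logic only compares mailbox sizes, and a message currently being processed has already left its mailbox, which is exactly the gap described above:

using Akka.Actor;
using Akka.Routing;

// Hypothetical worker.
public class JobWorker : ReceiveActor
{
    public JobWorker()
    {
        Receive<string>(job => { /* do the work */ });
    }
}

public static class RouterSetup
{
    public static IActorRef CreateWorkerPool(ActorSystem system) =>
        // Routes each message to the routee with the fewest messages sitting in its mailbox.
        system.ActorOf(
            Props.Create<JobWorker>().WithRouter(new SmallestMailboxPool(5)),
            "workers");
}
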
Edvard Pitka
@epitka
I am looking for a way to ensure that all messages in an actor's mailbox are processed before the actor is stopped. I know there is a graceful stop, but I'm not sure that's what I need. I have one actor that receives one type of message (EventApplied) and adds them to a buffer, to be flushed later. The flush is triggered either by the number of received messages, or when "live processing" starts, at which point this "batching" actor should stop. As I receive messages, each one carries a dispatched sequence number, and I add them to the buffer ensuring that each message is +1 in sequence; if not, I re-send the message back to the actor. At any point "live processing" can start. What I would like to do after receiving the "live processing" message is either consume all messages already in the mailbox (all are the same type), or consume all subsequent messages and then shut the actor down. It is guaranteed that no more messages will be sent to this actor after "live processing", but I am concerned about the messages it sends to itself, which could arrive after the live-processing message.
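
One common way to get "process everything already queued, then stop" is to enqueue the stop signal behind the pending messages, e.g. with a PoisonPill. A rough sketch under the assumptions in the question (message types and the flush threshold are hypothetical); note the caveat about self-sends in the comments:

using System.Collections.Generic;
using Akka.Actor;

public sealed record EventApplied(long SequenceNumber, object Payload);
public sealed record LiveProcessingStarted();

public class BatcherActor : ReceiveActor
{
    private readonly List<EventApplied> _buffer = new();

    public BatcherActor()
    {
        Receive<EventApplied>(evt =>
        {
            _buffer.Add(evt);
            if (_buffer.Count >= 100) Flush();
        });

        Receive<LiveProcessingStarted>(_ =>
        {
            // PoisonPill is enqueued at the BACK of the mailbox, so every EventApplied
            // already queued is processed before the actor stops.
            // Caveat: anything the actor sends to itself AFTER this point lands
            // behind the PoisonPill and will never be handled.
            Self.Tell(PoisonPill.Instance);
        });
    }

    protected override void PostStop() => Flush(); // final flush on shutdown

    private void Flush()
    {
        // write _buffer to the store, then clear it
        _buffer.Clear();
    }
}
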
Edvard Pitka
@epitka
What is the suggested pattern to use when I want to kill all "sibling" actors when one particular actor dies, for whatever reason other than being gracefully stopped? I have two actor types, let's call them Projector and Batcher. Both are created by a StreamSubscriber actor. There will be many Projector actors, one per stream id, but there is only one Batcher instance. Projectors send model transformations to the Batcher. I want to kill all Projectors if the Batcher dies, unless I stop it gracefully. I remember reading something about "death watch" a while ago.
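
Death watch is exactly that mechanism: the StreamSubscriber Watches the Batcher and reacts to Terminated. A sketch with hypothetical stub classes; two caveats are worth noting - by default a crashed child is restarted rather than stopped (so Terminated only fires if the supervision strategy actually stops it), and Terminated also fires on a graceful stop, so a deliberate shutdown would need to be signalled separately:

using System.Collections.Generic;
using Akka.Actor;

// Minimal stubs for illustration.
public class BatcherActor : ReceiveActor { }
public class ProjectorActor : ReceiveActor { }
public sealed record CreateProjector(string StreamId);

public class StreamSubscriber : ReceiveActor
{
    private readonly List<IActorRef> _projectors = new();
    private readonly IActorRef _batcher;

    public StreamSubscriber()
    {
        _batcher = Context.ActorOf(Props.Create<BatcherActor>(), "batcher");
        Context.Watch(_batcher); // death watch: we receive Terminated when it stops

        Receive<CreateProjector>(msg =>
            _projectors.Add(Context.ActorOf(Props.Create<ProjectorActor>(), $"projector-{msg.StreamId}")));

        Receive<Terminated>(t =>
        {
            if (t.ActorRef.Equals(_batcher))
            {
                // The Batcher is gone - take all Projectors down with it.
                foreach (var projector in _projectors)
                    Context.Stop(projector);
            }
        });
    }

    protected override SupervisorStrategy SupervisorStrategy() =>
        // Stop (rather than restart) a failing child so that death watch fires.
        new OneForOneStrategy(_ => Directive.Stop);
}
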
Gustavo
@sainzg

Is this the correct configuration for a cluster node to be able to send messages to another actor that resides on a node with a different cluster role?

AdminService - Cluster Node - app.conf partial

deployment {
  /SystemRouter {
    router = round-robin-pool
    nr-of-instances = 2
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 1
      allow-local-routees = off
      use-role = "SystemNode"
    }
  }
  "/SystemRouter/*/AdminRouter" {
    router = round-robin-pool
    nr-of-instances = 2
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 1
      allow-local-routees = on
      use-role = "AdminNode"
    }
  }

Is this needed here so that any admin actor can, via an absolute path, send messages to a notification actor in the NotificationNode cluster role?

Context.System.ActorSelection("user/SystemRouter/*/NotificationRouter").Tell(message);

  "/SystemRouter/*/NotificationRouter" {
    router = round-robin-pool
    nr-of-instances = 2
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 1
      allow-local-routees = off
      use-role = "NotificationNode"
    }
  }    
}

}

Remoting configuration

remote {
  log-remote-lifecycle-events = DEBUG
  log-received-messages = on
  retry-gate-closed-for = 15s
  startup-timeout = 10s
  dot-netty {
    transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
    transport-protocol = tcp
    tcp {
      port = 0 # set it to zero (0) to bind to a dynamic port assigned by the OS
      hostname = "127.0.0.1"
      maximum-frame-size = 1000000b

      # tcp performance tuning based on batching transmissions
      batching {
        enabled = true
      }
    }
  }
  transport-failure-detector {
    heartbeat-interval = 4s
    acceptable-heartbeat-pause = 120s
  }
  watch-failure-detector {
    heartbeat-interval = 7s
    acceptable-heartbeat-pause = 10s
  }
}

cluster {
  failure-detector {
    heartbeat-interval = 3s
    acceptable-heartbeat-pause = 5s
    expected-response-after = 5s
  }
  retry-unsuccessful-join-after = 10s
  shutdown-after-unsuccessful-join-seed-nodes = off
  auto-down-unreachable-after = off
  log-info = on
  publish-stats-interval = 3s
  gossip-time-to-live = 4s
  heartbeat-interval = 7s
  threshold = 15.0
  min-std-deviation = 500 ms
  acceptable-heartbeat-pause = 10s
  downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
  split-brain-resolver {
    active-strategy = keep-majority
    stable-after = 30s
  }
  down-removal-margin = 30s
  seed-nodes = ["akka.tcp://TestSystem@127.0.0.1:5010"]
  roles = ["AdminNode", "AdminApi"]
}

Continuation

NotificationService - Cluster Node - app.conf partial

deployment {
  /SystemRouter {
    router = round-robin-pool
    nr-of-instances = 2
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 1
      allow-local-routees = off
      use-role = "SystemNode"
    }
  }
  "/SystemRouter/*/NotificationRouter" {
    router = round-robin-pool
    nr-of-instances = 2
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 1
      allow-local-routees = on
      use-role = "NotificationNode"
    }
  }
}
}

Remoting configuration

remote {
  log-remote-lifecycle-events = DEBUG
  log-received-messages = on
  retry-gate-closed-for = 15s
  startup-timeout = 10s
  dot-netty {
    transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
    transport-protocol = tcp
    tcp {
      port = 0 # set it to zero (0) to bind to a dynamic port assigned by the OS
      hostname = "127.0.0.1"
      maximum-frame-size = 1000000b

      # tcp performance tuning based on batching transmissions
      batching {
        enabled = true
      }
    }
  }
  transport-failure-detector {
    heartbeat-interval = 4s
    acceptable-heartbeat-pause = 120s
  }
  watch-failure-detector {
    heartbeat-interval = 7s
    acceptable-heartbeat-pause = 10s
  }
}

cluster {
  failure-detector {
    heartbeat-interval = 3s
    acceptable-heartbeat-pause = 5s
    expected-response-after = 5s
  }
  retry-unsuccessful-join-after = 10s
  shutdown-after-unsuccessful-join-seed-nodes = off
  auto-down-unreachable-after = off
  log-info = on
  publish-stats-interval = 3s
  gossip-time-to-live = 4s
  heartbeat-interval = 7s
  threshold = 15.0
  min-std-deviation = 500 ms
  acceptable-heartbeat-pause = 10s
  downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
  split-brain-resolver {
    active-strategy = keep-majority
    stable-after = 30s
  }
  down-removal-margin = 30s
  seed-nodes = ["akka.tcp://TestSystem@127.0.0.1:5010"]
  roles = ["NotificationNode", "NotificationApi"]
}
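
For completeness, the code side that picks up deployments like /SystemRouter from HOCON like the above typically uses FromConfig. A sketch, with SystemActor as a hypothetical routee type:

using Akka.Actor;
using Akka.Routing;

// Hypothetical routee.
public class SystemActor : ReceiveActor
{
    public SystemActor()
    {
        Receive<object>(msg => { /* handle work routed to this node */ });
    }
}

public static class RouterBootstrap
{
    public static IActorRef CreateSystemRouter(ActorSystem system) =>
        // Router settings (round-robin-pool, cluster.enabled, use-role, ...) are read
        // from the matching akka.actor.deployment."/SystemRouter" section.
        system.ActorOf(
            Props.Create<SystemActor>().WithRouter(FromConfig.Instance),
            "SystemRouter");
}
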

Markus Schaber
@markusschaber
Is there some library (maybe 3rd party) that builds an Orleans-style RPC layer on top of Akka? (So I can define interfaces, async calls, and events/observers instead of defining message classes?)
Preferably with compile-time code generation.
Edvard Pitka
@epitka
Is there a way to push a message to the front of an actor's mailbox from within the actor? While processing a message, if a certain condition is satisfied, I want to push a new message to the front of the queue.
mrxrsd
@mrxrsd
@epitka, maybe using priority mailboxes
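
A sketch of the priority-mailbox suggestion: there's no public API (that I'm aware of) for splicing a message into the front of a normal mailbox, but a priority mailbox gets close by always dequeuing higher-priority messages first. All names and the HOCON key below are made up for illustration:

using Akka.Actor;
using Akka.Configuration;
using Akka.Dispatch;

// Hypothetical high-priority message.
public sealed record UrgentFlush();

public class BufferingActor : ReceiveActor
{
    public BufferingActor()
    {
        Receive<UrgentFlush>(_ => { /* handled ahead of anything still queued */ });
        Receive<object>(_ => { /* normal work */ });
    }
}

// Lower return value = higher priority.
public class UrgentMailbox : UnboundedPriorityMailbox
{
    public UrgentMailbox(Settings settings, Config config) : base(settings, config) { }

    protected override int PriorityGenerator(object message) =>
        message is UrgentFlush ? 0 : 1;
}

public static class MailboxSetup
{
    // The mailbox type is registered in config and attached to the actor via Props.
    private const string Hocon = @"
        urgent-mailbox {
            mailbox-type = ""MyApp.UrgentMailbox, MyApp""
        }";

    public static IActorRef CreateBufferingActor()
    {
        var system = ActorSystem.Create("demo", ConfigurationFactory.ParseString(Hocon));
        return system.ActorOf(Props.Create<BufferingActor>().WithMailbox("urgent-mailbox"), "buffer");
    }
}
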
Aaron Stannard
@Aaronontheweb
@/all https://twitter.com/AkkaDotNET/status/1405158695995125760 - Akka.NET v1.4.21 is now live on NuGet

@Aaronontheweb
OpenTelemetry
A year ago I read the comparison of OpenTelemetry vs. OpenTracing.
Has the status changed since version 1.0 was released?
I'm considering adding tracing, but I'm not sure whether to use OpenTelemetry or OpenTracing.

@gcohen12 sure

so the metrics half of OTel is still a work in progress
the only supported provider is Prometheus, and I'm not sure what the state of that is right now
on the tracing side of things, it looks like that's ready to go for the most part
we're planning on migrating Phobos off of OpenTracing and App.Metrics and onto OpenTelemetry once the metrics piece is fully baked
we won't even consider doing it before then, simply because I don't think our current customers will accept more than one "bite at the apple" for that type of thing
Aaron Stannard
@Aaronontheweb
I will say, generally speaking, that project has been ridiculously slow at releasing things, considering that they were already working from other well-defined and, in some cases, already-adopted industry standards
but that's probably also an issue of working within standards bodies
the biggest benefit of OpenTelemetry is that it's going to blow apart all of the weird, opinionated stuff that various APM vendors force you to do
for instance - there's no feasible way to record custom metrics for Elastic APM via the current .NET agent APIs
that's... pretty absurd TBH
that this is even an issue in 2021
but every APM vendor has weird crap like this
OTel should commoditize the consumption side of APM and beat that weakness out of the entire industry, God-willing
and leave it up to each APM vendor how best to turn that data into useful information for developers
Aaron Stannard
@Aaronontheweb
so I'm bullish on it - we built an entire Phobos prototype around OTel over a year ago and the results were pretty nasty (memory leaks, escaped context, brittle APIs, etc) but we'll retry that again probably after .NET 6 releases, since that's supposed to include BCL support for OTel
Aaron Stannard
@Aaronontheweb

Which Kubernetes deployment strategy is preferred for deploying a cluster sharding system: a Deployment or a StatefulSet? I'm currently using a StatefulSet, but during a rolling deployment I can see the shards being handed over to the old instances first, then rebalanced again once the new instance is deployed. Is there a way to hand the shards over to the new instance directly, without going through an old instance and waiting for a rebalance?

@CalvenYow_twitter StatefulSet is the way to go when deploying any Akka.NET application that retains state in K8s

Is there a way to hand the shards over to the new instance directly, without going through an old instance and waiting for a rebalance?
I've thought about this - sadly, probably the best way to do it is to abort the old process without leaving the cluster and simply replace it right away
that's the way to have the least amount of churn when updating a sharded deployment
I haven't tried doing that myself yet
if a node becomes unreachable, restarts, and rejoins the cluster without being downed (edit: AND, in order for this to work node addresses have to be identical - mynode-0 has to be mynode-0 again when it restarts)
the new incarnation is immediately marked as up and the old incarnation is immediately removed
that would guarantee that the rebalance happens once: to the new node only
none of the remaining nodes (that are also going to be replaced) would receive any actors
honestly that's probably hard to get right though - you'll have to tweak timing settings in both Akka.NET and K8s to get it just right