Aaron Stannard
@Aaronontheweb
in a scenario where the client is actively writing to the cluster you should always be using ClusterClient.Send to do that
I mean, in order to keep track of available receptionist nodes it would need to monitor member up and down events... right? Why wouldn't it handle the scenario where the receptionist it's currently connected to is going down and simply connect to one that's available? I have a feeling I'm missing something obvious :( or perhaps I'm not fully understanding the purpose of a ClusterClient
the ClusterClient does keep track of this
so you don't have to worry about it
but the actor ON THE NODE YOU ARE TALKING TO which gets created by the ClusterClient.Send
that actor can die if a node terminates
if you're expecting a stream of data to come back from one of those nodes, you need to monitor that actor
the Context.Watch will signal you that the node is gone - or it will signal you that the actor was programmatically terminated (which automatically occurs if the cluster doesn't try to send that client a message within 60 seconds - but that value is configurable)
that client is called a "ClientTunnelActor"
it's the cluster's handle for being able to communicate back to the ClusterClient running outside of the cluster
Aaron Stannard
@Aaronontheweb
https://petabridge.com/cluster/lesson4 - read the section on "Working with ClusterClient" here
this image in particular
if the response tunnel closes you won't receive any more data from the server
for simple request-response interactions via ClusterClient that's probably not a big deal
but for a really long-running stream of data coming back from the cluster
i.e. in our case, continuously streaming stock price and volume updates
you have to monitor that response tunnel actor's health
because if I do a deployment that actor is going to be killed and we'll need to re-create our subscription to ticker data
does that help clarify things?
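A minimal sketch of the monitoring pattern described above (illustrative only, not code from the workshop): the message types, the "/user/pricing" path, and the assumption that the service's ack carries the cluster-side IActorRef to watch are all made up for the example.

```csharp
using Akka.Actor;
using Akka.Cluster.Tools.Client;

// hypothetical message contracts - the ack carries the cluster-side IActorRef to watch
public sealed record SubscribeToPrices();
public sealed record PriceSubscriptionAck(IActorRef Publisher);
public sealed record PriceUpdate(string Symbol, decimal Price);

public sealed class PriceSubscriberActor : ReceiveActor
{
    private readonly IActorRef _clusterClient; // the ClusterClient actor running outside the cluster

    public PriceSubscriberActor(IActorRef clusterClient)
    {
        _clusterClient = clusterClient;

        Receive<PriceSubscriptionAck>(ack =>
        {
            // DeathWatch the cluster-side actor: Terminated fires if its node goes
            // down during a deployment or the actor is stopped programmatically
            Context.Watch(ack.Publisher);
        });

        Receive<PriceUpdate>(update =>
        {
            // consume the long-running stream of updates here
        });

        Receive<Terminated>(_ =>
        {
            // the subscription is dead - re-create it against whichever node is alive now
            Subscribe();
        });
    }

    protected override void PreStart() => Subscribe();

    private void Subscribe() =>
        _clusterClient.Tell(new ClusterClient.Send("/user/pricing", new SubscribeToPrices()));
}
```

Whether the thing you watch is the service actor itself or the response tunnel depends on what the cluster side hands back; the re-subscribe-on-Terminated shape is the same either way.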
Robbert
@Robbert-Driven-It
Okay, I get that. I'm not doing anything like streaming data from the cluster. My case is pretty simple: a backend cluster and a web API using a ClusterClient to send commands and sometimes request-response messages. But once the original cluster node is down, not a single ClusterClient.Send message gets processed. So even though there is a perfectly healthy cluster node, the ClusterClient isn't able to reach it. I think, from what you're telling me, my expectation that this should work is correct, so perhaps there is some configuration or something that's causing this.
Btw thanks for taking the time to respond.
Robbert
@Robbert-Driven-It
The only complicating factor I can think of is that this remaining node isn't a seed node; it has a dynamic port, so it isn't in the initial-contacts list either. It joins the cluster correctly by contacting the initial node, and then (after a while) I shut down the initial node. The remaining node moves up to become the leader, but then the web API's ClusterClient stops being able to deliver messages to the cluster; it's like it doesn't know the remaining node is even there.
Michael Handschuh
@mchandschuh
Is the source code available for Akka.DistributedData and Akka.DistributedData.LightningDb? If so, where? I was unable to find it in the akka.net and petabridge GitHub accounts.
rlugmania
@rlugmania
[image attachment]
Aaron Stannard
@Aaronontheweb
But once the original cluster node is down, not a single ClusterClient.Send message gets processed. So even though there is a perfectly healthy cluster node, the ClusterClient isn't able to reach it. I think, from what you're telling me, my expectation that this should work is correct, so perhaps there is some configuration or something that's causing this.
what do your role settings look like for the ClusterClientReceptionist?
issue might be how the two halves are configured @Robbert-Driven-It
if you can post your HOCON either here or in a GitHub Discussion I'd be happy to help diagnose
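For orientation, a rough sketch of the two halves under discussion - the system name, host, port and the role are placeholders, not Robbert's actual configuration.

```csharp
using System.Collections.Immutable;
using Akka.Actor;
using Akka.Cluster.Tools.Client;

public static class ClusterClientWiring
{
    // Cluster side: register the service with the receptionist. If
    // akka.cluster.client.receptionist.role is set in HOCON, only nodes carrying
    // that role run a receptionist - worth checking when clients lose contact
    // after one particular node goes down.
    public static void RegisterOnCluster(ActorSystem clusterSystem, IActorRef serviceActor)
    {
        ClusterClientReceptionist.Get(clusterSystem).RegisterService(serviceActor);
    }

    // Client side: the initial contacts only need to reach one live receptionist;
    // after first contact the ClusterClient learns about the others on its own.
    public static IActorRef StartClient(ActorSystem clientSystem)
    {
        var initialContacts = ImmutableHashSet.Create(
            ActorPath.Parse("akka.tcp://BackendCluster@seed-host:4053/system/receptionist"));

        var settings = ClusterClientSettings.Create(clientSystem)
            .WithInitialContacts(initialContacts);

        return clientSystem.ActorOf(ClusterClient.Props(settings), "cluster-client");
    }
}
```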
@mchandschuh the answer @rlugmania gave you is correct - that stuff is tucked away in the /src/contrib folder
even though it's all part of "core" akka.net
guess that's a bit of legacy baggage from when those modules were still experimental years ago
Michael Handschuh
@mchandschuh
I did end up stumbling upon it :) for some reason I had never looked in that folder, lots of goodies in there!
Markus Schaber
@markusschaber
Are there any more practical examples for cluster sharding than on https://getakka.net/articles/clustering/cluster-sharding.html?
Aaron Stannard
@Aaronontheweb
(I'm in the process of updating that workshop so it can be done entirely online - it used to be part of an in-person workshop back in 2019 when we first wrote it)
Seth B Spearman
@sethspearman
I just read Aaron's "How to Build Headless Akka.NET Services with IHostedService" and watched the video. So does IHostedService mean that you no longer need Topshelf to easily convert my console app to a Windows service? Are the technologies incompatible?
I have a console app built with Akka that makes HttpClient calls... and I was trying to make the API calls resilient using Polly (retry, circuit breaker, timeout, etc.), but I was not successful in setting all of this up because I could not get the HttpClientFactory into the actor through the constructor. It is not necessary to use DI, but those are the only examples I can find.
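One pattern that may help here, sketched under some assumptions (the generic host, Microsoft.Extensions.Http, and a named "resilient" client whose Polly policies are registered elsewhere via AddHttpClient): pass IHttpClientFactory into the actor through a Props closure from an IHostedService, with no Akka-specific DI integration needed. With the generic host, UseWindowsService() from Microsoft.Extensions.Hosting.WindowsServices also covers the Windows-service part that Topshelf used to handle. The class names below are hypothetical.

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Akka.Actor;
using Microsoft.Extensions.Hosting;

public sealed class ApiCallerActor : ReceiveActor
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ApiCallerActor(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;

        ReceiveAsync<string>(async url =>
        {
            // the named client carries whatever retry/circuit-breaker/timeout
            // policies were attached to it at registration time
            var client = _httpClientFactory.CreateClient("resilient");
            var response = await client.GetAsync(url);
            Sender.Tell(await response.Content.ReadAsStringAsync());
        });
    }
}

public sealed class AkkaHostedService : IHostedService
{
    private readonly IHttpClientFactory _httpClientFactory;
    private ActorSystem _system;

    public AkkaHostedService(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _system = ActorSystem.Create("ConsoleApp");
        // constructor injection via a Props closure - the factory comes from the
        // host's DI container, but Akka itself doesn't need to know about DI
        _system.ActorOf(Props.Create(() => new ApiCallerActor(_httpClientFactory)), "api-caller");
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
        => _system?.Terminate() ?? Task.CompletedTask;
}
```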
Manuel Islas
@trentcioran
@trentcioran that sounds like a great use case for Akka.Streams IMHO
Thank you @Aaronontheweb I'll take a look at Akka.Streams
Aaron Stannard
@Aaronontheweb
@/all Akka.NET v1.4.22 is now live on nuget - thanks to everyone who contributed to this release and to all of our users https://twitter.com/AkkaDotNET/status/1423432856114761731
Markus Schaber
@markusschaber
@Aaronontheweb Thanks a lot, looks nice!
Markus Schaber
@markusschaber
Is there any explanation of how the EntityId and ShardId are constructed?
My assumption is that EntityId is a string completely under my control, and ShardId is derived somehow, like a hash code of the EntityId mod the number of shards?
And is there any way to control which cluster members are utilized to hold shard instances? I'd like to restrict entity types to certain groups of nodes, while being able to send messages to them from other groups.
mijoki
@mijoki
You can set the ShardId manually. It can just be difficult to evenly distribute the entities over the entire cluster, so people often use consistent hashing. If you want to, though, you can absolutely pick and choose where entities go.
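To make that concrete, and to touch the "certain groups of nodes" part of the question above, here is a hedged sketch where ShardId is chosen by hand and the shard region is limited to nodes carrying a role. DeviceEnvelope, the "devices" type name and the "backend" role are all invented for the example.

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// hypothetical envelope: entity id, a manually chosen grouping key, and the payload
public sealed record DeviceEnvelope(string DeviceId, string Region, object Payload);

public sealed class ManualShardExtractor : IMessageExtractor
{
    public string EntityId(object message) => (message as DeviceEnvelope)?.DeviceId;

    public object EntityMessage(object message) => (message as DeviceEnvelope)?.Payload;

    // ShardId is whatever string you return: here entities are grouped by region,
    // at the cost of keeping the distribution balanced yourself
    public string ShardId(object message) => (message as DeviceEnvelope)?.Region;
}

public static class DeviceSharding
{
    public static IActorRef Start(ActorSystem system, Props deviceProps)
    {
        // WithRole limits which cluster members host shards for this entity type
        var settings = ClusterShardingSettings.Create(system).WithRole("backend");

        return ClusterSharding.Get(system).Start(
            "devices",           // typeName
            deviceProps,         // props used to create each entity actor
            settings,
            new ManualShardExtractor());
    }
}
```

Nodes outside that role can still send to the entities by starting a shard region proxy (ClusterSharding.Get(system).StartProxy) instead of a full region.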
Seth B Spearman
@sethspearman
I got my app working with DI! Yeah. Still wondering if you can use Topshelf with IHostedService (or if my Akka IHostedService can be easily made into a Windows service without Topshelf). Sorry, I know it's a little bit off-topic.
Aaron Stannard
@Aaronontheweb
@markusschaber EntityId is totally under your control - that gets extracted via the IMessageExtractor you use to configure the sharding system
ShardId can be extracted the same way - but the default implementation in Akka.Cluster.Sharding that most users adopt, the HashCodeMessageExtractor, uses a Murmur3 hash (a consistent hashing algorithm) of your EntityId to determine which shard [0, n] your entity belongs to
it's possible to customize this so you can collocate shards together
which is very useful for certain types of applications, e.g. graph search, where you want all related entities to be collocated in-memory (by being in the same shard) in order to reduce latency
but 99% of Akka.NET users never really need to do this
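A quick sketch of that default path, with an invented StockEnvelope message and a shard count of 100 purely for illustration: EntityId comes straight off your message, and ShardId falls out of the inherited consistent-hash implementation.

```csharp
using Akka.Cluster.Sharding;

// hypothetical envelope: EntityId (e.g. a ticker symbol) plus the actual payload
public sealed record StockEnvelope(string Symbol, object Payload);

public sealed class StockMessageExtractor : HashCodeMessageExtractor
{
    // shard ids will land in [0, 100); keep this number stable once deployed
    public StockMessageExtractor() : base(100) { }

    // EntityId is entirely under your control
    public override string EntityId(object message)
        => (message as StockEnvelope)?.Symbol;

    // what actually gets delivered to the entity actor
    public override object EntityMessage(object message)
        => (message as StockEnvelope)?.Payload;

    // ShardId is inherited from HashCodeMessageExtractor: a consistent hash of
    // EntityId modulo the max shard count. Override it only if you need to
    // collocate related entities in the same shard.
}
```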