I'm getting "has 0 routees" message with web-crawler sample
when linking the locally built NuGet packages it's the same behavior as with the prerelease: no Akka assemblies are under References
@ramusbucket 0 routees means your cluster hasn't formed.
You should be able to work out what's going on via the console output
@thelegendofando thank you for your comments
It seems that I was ignorant about how to run those samples. The instructions on the GitHub wiki pages are not sufficient. I somehow figured out that I need to run "WebCrawler.CrawlService" and "WebCrawler.TrackingService" along with the "WebCrawler.Web" app and the "Lighthouse" app
so I've just discovered that actors are created asynchronously (when the ActorOf call completes, my actor isn't guaranteed to have been constructed). Is there a way to control that? And presumably, if I send a message to a not-yet-created actor, the message will be buffered until it is created?
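For context, a minimal C# sketch of the behavior being described (the actor and message here are made up for illustration): `ActorOf` returns a usable `IActorRef` before the actor's constructor has necessarily run, and messages sent in the meantime sit in the actor's mailbox until it starts.

```csharp
using Akka.Actor;

public class SlowActor : ReceiveActor
{
    public SlowActor()
    {
        // This constructor runs asynchronously on another thread;
        // the ActorOf call below may already have returned by now.
        Receive<string>(msg => Sender.Tell("got: " + msg));
    }
}

class Program
{
    static void Main()
    {
        var system = ActorSystem.Create("demo");

        // Returns immediately with a usable IActorRef; the actor
        // itself is created asynchronously.
        var actorRef = system.ActorOf(Props.Create<SlowActor>(), "slow");

        // Safe even if the actor hasn't finished constructing yet:
        // the message is queued in the mailbox and delivered once
        // the actor has started.
        actorRef.Tell("hello");

        system.Terminate().Wait();
    }
}
```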
@ramusbucket if you run the solution it should fire up one of each service, although yes, you need a seed node; it doesn't have to be Lighthouse.
@thelegendofando you are correct
What if I run those services from the command prompt, from the Debug folder? No logs show up for Lighthouse or the crawler service
Hi! The simplest, dumbest question of the entire book, I guess, but I have a use case where I have to correlate two messages, and those messages carry data I cannot afford to lose. For this particular use case, one of the messages knows (has an ID for) its correlation with the second, but the second can either never be correlated with anything, or be correlated with one of the first kind.
The first message is handled by an actor that tries to reach the second if it is available; the second can arrive after the first, so the first one is in charge of searching for the second. But the actor for the first needs to be started even if a node crashes. So my question is: how can I achieve high availability in the event of a failure of one or more nodes in my cluster?
Thank you all in advance.
@Horusiath If I want to contribute to the cluster tools and cluster sharding, where in the code would be a good place to start reading and getting a sense of it?
@feanor41 your case can be solved by cluster sharding, but in general it's a case for some third service actor that both actors use to establish the connection with each other
concerning Akka.Cluster.Tools and Akka.Cluster.Sharding - most (if not all) of the code is already ported, but I need to get the multinode test runner specs working. If you want to help a little, I can try to find something that could interest you. Just give me a while to reorganize the code ;)
@Horusiath Thanks, no rush. I was looking at the available Cluster Sharding documentation and arrived at the cluster sharding path myself. Thanks for confirming it.
@Horusiath #1455 should fix it - proof above
if you say so ;)
the akka.net spec suite has so many races and is so resource-consuming that I want to tell the xUnit guys to build their test runner on top of akka.net, so that their runner will finally stop failing from the pressure we generate
Anybody else running on Windows and using Akka.Remoting getting a runtime exception where a specific google.protobuf version (521) is required? (NuGet restores the latest - 555.) Works at home under Mono. Just breaks at work...
@feanor41 if you want to play a little with the Akka.Cluster.Tools, take a look at PR #1408 - under Examples/Cluster/ClusterTools you can find projects with samples for each of the 3 major features:
cluster distributed pub/sub mechanism
cluster client -> to make this one more visible, you will need to set the .Node project's actor ref provider to RemoteActorRefProvider, while keeping the .Seed project's actor ref provider as ClusterActorRefProvider (the purpose is to show that a non-cluster actor system is able to call into the cluster)
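That provider split could be sketched in HOCON roughly as follows (the provider class names are the standard Akka.NET ones; the sample projects' actual config files may differ):

```hocon
# In the .Node project's config: a plain remote actor system,
# not a member of the cluster.
akka.actor.provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"

# In the .Seed project's config: a full cluster member.
akka.actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
```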
@kelly-cliffe you can manage this using assembly bindings in your app.config file
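For example, a redirect for that protobuf mismatch might look roughly like this in app.config (the publicKeyToken and version numbers below are illustrative and must match the assembly NuGet actually restored):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Google.ProtocolBuffers"
                          publicKeyToken="55f7125234beb589"
                          culture="neutral" />
        <!-- Redirect the old version the referencing assembly was
             compiled against to the version actually on disk. -->
        <bindingRedirect oldVersion="0.0.0.0-2.4.1.555"
                         newVersion="2.4.1.555" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```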
@Horusiath Yes, thanks. That's what I've done. Just wondered if everybody else was currently taking that approach, or if I was alone and something sinister was happening on my dev machine.
yeah, ultimately I think that we should be able to cover those cases automagically
@Horusiath Excellent, I'll take a look at it, per your suggestion!
@Horusiath could Akkling's IActorRef<'Msg> use flexible types? If I make an actor of type obj then I have to upcast every message with :> obj
step-by-step docs for setting that up within NuGet
Chris G. Stevens
I have 2 websites that are part of my cluster. Sometimes one website will become unreachable, and my logic on all of the other members determines that after 120 seconds they should do a Cluster.Down(ThatWebsiteAddress). I can see that all of the members get a MemberRemoved for that website, and then it is reported as Down. My problem is that when my other service detects that this website is down and has been removed from the cluster, it tries to restart that website. I can see it trying to join, and I get this message: [[akka://MyService/system/cluster/core/daemon]] - New incarnation of existing member [UniqueAddress: (akka.tcp://My@220.127.116.11:57771, 303375918)] is trying to join. Existing will be removed from the cluster and then new member will be allowed to join. But it never gets removed from the cluster, so the member is never able to join. I can try to have a member .Leave(ThatWebAddress) and do another .Down(ThatWebAddress), but it never gets removed. Basically I have a ClusterStatus actor that monitors the status of the cluster from its view and determines if it needs to restart itself, or whether a member that has been unreachable for x seconds should be downed. If so, it shuts that service down and logs to the event log for SolarWinds to determine if the service or website needs to be started back up.
@cgstevens I'll give you a full answer when I get into the office, but the short answer is that nodes have to be manually issued a Cluster.Down command to remove them from the cluster
node being disconnected != node leaving the cluster
Akka.Cluster achieves partition tolerance by treating unexpected disconnects as transient failures, and for now relies on humans to manually tell the cluster when a disconnect is a permanent failure or not
definitely do not recommend turning auto-down on in HOCON config, which it sounds like might be the case here
you'll end up with a split brain most of the time
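For reference, the setting in question is akka.cluster.auto-down-unreachable-after; it is off by default, and the advice above is to keep it that way:

```hocon
akka.cluster {
  # When set to a timeout (e.g. 10s), unreachable nodes are automatically
  # marked Down after that period - this is what invites split-brain.
  # The default (and the recommendation here) is to leave it off, so that
  # downing stays a manual Cluster.Down decision:
  auto-down-unreachable-after = off
}
```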
Chris G. Stevens
right, I was using that but it killed my cluster.
so I created my own ClusterStatus to down nodes per our business rules.
ah, very cool - so are you sure the right node is being downed when that happens?