can you comment on what ActorProducerPluginBase might be replaced with? We are using it currently and want to switch to the new thing if it's in place already, or keep it on our radar for whenever its replacement arrives :)
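A hedged aside, since the answer depends on the Akka.NET version: in later releases (1.4.21+), the Akka.DependencyInjection package's DependencyResolver covers the common dependency-injection scenario that ActorProducerPluginBase is often used for. The service types below are placeholders; verify the API against your version before migrating.

using Akka.Actor;
using Akka.Actor.Setup;
using Akka.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

// Register dependencies in any IServiceProvider-compatible container.
var services = new ServiceCollection()
    .AddSingleton<IMyService, MyService>() // placeholder service
    .BuildServiceProvider();

// Hand the container to the ActorSystem via a Setup.
var bootstrap = ActorSystemSetup.Create(DependencyResolverSetup.Create(services));
var system = ActorSystem.Create("demo", bootstrap);

// Props whose constructor arguments are resolved from the container.
var props = DependencyResolver.For(system).Props<MyDiActor>(); // MyDiActor is a placeholder
var actor = system.ActorOf(props, "di-actor");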
Props is just really odd today
using (var system = ActorSystem.Create(_systemName, _config
    .WithFallback(ClusterClientReceptionist.DefaultConfig())
    .WithFallback(DistributedPubSub.DefaultConfig())))
{
    // Start the shard region hosting SenderActor entities on nodes with the sharding role.
    var shardRegion = ClusterSharding.Get(system).Start(
        nameof(SenderActor),
        Props.Create<SenderActor>(),
        ClusterShardingSettings.Create(system).WithRole(Roles.Sharding),
        new MessageExtractor());

    // Send a message every two seconds; the envelope carries the shardId/entityId.
    for (int i = 0; i < 100; i++)
    {
        Thread.Sleep(2000);
        shardRegion.Tell(new ShardEnvelope("1", "1", "test" + i));
    }

    system.WhenTerminated.Wait();
}
public class SenderActor : ReceiveActor
{
    private readonly IActorRef _mediator = DistributedPubSub.Get(Context.System).Mediator;

    public SenderActor()
    {
        Receive<string>(msg =>
        {
            Console.WriteLine(msg);
            // Forward every received string to the pub/sub topic.
            _mediator.Tell(new Publish(Topics.SendMessageTopic, msg));
        });
    }
}
public class ReceiverActor : ReceiveActor
{
    private readonly IActorRef _mediator = DistributedPubSub.Get(Context.System).Mediator;

    public ReceiverActor()
    {
        Receive<string>(msg =>
        {
            Console.WriteLine(msg);
        });
    }

    // Subscribe to the topic when the actor starts...
    protected override void PreStart()
    {
        base.PreStart();
        _mediator.Tell(new Subscribe(Topics.SendMessageTopic, Self));
    }

    // ...and unsubscribe when it stops.
    protected override void PostStop()
    {
        _mediator.Tell(new Unsubscribe(Topics.SendMessageTopic, Self));
        base.PostStop();
    }
}
[09:59:17 ERR] Exception in ReceiveRecover when replaying event type [Akka.Cluster.Sharding.PersistentShardCoordinator+ShardHomeAllocated] with sequence number [9] for persistenceId [/system/sharding/customerCoordinator/singleton/coordinator]
System.ArgumentException: Region [akka.tcp://imburse@10.240.0.79:8081/system/sharding/customer#594538619] not registered
Parameter name: e
at Akka.Cluster.Sharding.PersistentShardCoordinator.State.Updated(IDomainEvent e)
at Akka.Cluster.Sharding.PersistentShardCoordinator.ReceiveRecover(Object message)
at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
at Akka.Persistence.Eventsourced.<>c__DisplayClass91_0.<Recovering>b__1(Receive receive, Object message)
Hi guys, I have a question.
I generated a huge number of messages and sent them to a remote actor. The connection was then closed and quarantined by the system after the following warning:
[WARNING] ... You should probably implement flow control to avoid flooding the remote connection.
Is it quarantined because the connection was flooded?
Is it possible to throttle messages? Any suggestions for avoiding this issue?
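A hedged sketch of one way to add flow control on the sending side, using Akka.Streams' built-in Throttle stage (the remote address and message count below are placeholders):

using System;
using System.Linq;
using Akka.Actor;
using Akka.Streams;
using Akka.Streams.Dsl;

var system = ActorSystem.Create("sender");
var materializer = system.Materializer();
// Placeholder address; substitute your real remote actor path.
var remoteRef = system.ActorSelection("akka.tcp://remote@localhost:8081/user/receiver");

// Shape the outgoing stream to at most 100 messages per second
// instead of dumping everything onto the remoting transport at once.
Source.From(Enumerable.Range(1, 1_000_000))
    .Throttle(100, TimeSpan.FromSeconds(1), 100, ThrottleMode.Shaping)
    .RunForeach(i => remoteRef.Tell($"msg-{i}"), materializer);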
@Horusiath I found that the sender can set a shardId, but the receiver cannot. So I have sender1 sending messages to shardId1 and sender2 sending messages to shardId2. How do I make receiver1 receive only the messages from sender1, and receiver2 only the messages from sender2?
sender:
var shardRegion = ClusterSharding.Get(system).Start(
    nameof(SenderActor),
    Props.Create<SenderActor>(),
    ClusterShardingSettings.Create(system).WithRole(Roles.Sharding),
    new MessageExtractor());

for (int i = 0; i < 100; i++)
{
    Thread.Sleep(2000);
    shardRegion.Tell(new ShardEnvelope("1", "1", "TestFromSender1")); // or new ShardEnvelope("2", "1", "TestFromSender1")
}
receiver:
var clusterSharding = ClusterSharding.Get(system);
SenderShardActor = clusterSharding.StartProxy(nameof(SenderActor), Roles.Sharding, new MessageExtractor()); // Did I miss any settings here?
ReceiverActor = system.ActorOf<ReceiverActor>("receiver");
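A hedged sketch of one way to get that routing with DistributedPubSub alone: publish each sender's messages to its own topic and have each receiver subscribe only to the topic it cares about (the topic names and constructor parameter here are illustrative, not part of the original code):

using System;
using Akka.Actor;
using Akka.Cluster.Tools.PublishSubscribe;

public class TopicReceiverActor : ReceiveActor
{
    private readonly IActorRef _mediator = DistributedPubSub.Get(Context.System).Mediator;
    private readonly string _topic;

    public TopicReceiverActor(string topic) // e.g. "from-sender-1" or "from-sender-2"
    {
        _topic = topic;
        Receive<string>(msg => Console.WriteLine($"[{_topic}] {msg}"));
    }

    protected override void PreStart() => _mediator.Tell(new Subscribe(_topic, Self));
    protected override void PostStop() => _mediator.Tell(new Unsubscribe(_topic, Self));
}

// SenderActor for shard "1" would publish with new Publish("from-sender-1", msg),
// and receiver1 subscribes to the same topic:
// system.ActorOf(Props.Create(() => new TopicReceiverActor("from-sender-1")), "receiver1");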
@Aaronontheweb Somehow your 'looks cool' comment about my job scheduler gave me some encouragement... After getting some feedback from my team at work, I did some refactoring. I now have an example where various actors in a pipeline can get Jobs as wire-friendly results aggregated in a collection, such that the final effects of a command can be committed atomically.
I think it could be a great way to bridge the gap for distributed systems in scenarios where people want transactional behavior with minimum effort.
@heyixiaoran if you have two nodes (both running sharding), they will split new shards more or less evenly. But if you already have a node with a few shards on it and then add a new node to the cluster, the shards from the old node don't have to migrate to the new node immediately.
The heuristic here is that shard migration is expensive, so it doesn't happen right away, and it doesn't need to happen at all if the difference in the number of shards hosted on each node is small. Only when nodes differ substantially in the number of hosted shards (see the reference config) will a rebalance occur.
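For reference, the knobs behind this heuristic live under akka.cluster.sharding.least-shard-allocation-strategy. A hedged sketch; the values shown are illustrative, not the defaults, so check the reference.conf shipped with your version:

using Akka.Configuration;

// Rebalance only when the busiest node hosts at least rebalance-threshold
// more shards than the least-loaded node, and move at most
// max-simultaneous-rebalance shards per rebalance round.
var shardingConfig = ConfigurationFactory.ParseString(@"
    akka.cluster.sharding.least-shard-allocation-strategy {
        rebalance-threshold = 3
        max-simultaneous-rebalance = 3
    }");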
Hey everyone. I've been using cluster sharding in a development/CI environment for a while now, and this error is a recurring theme. The only way I can get my application back on track is to clear the event journal and let the application start fresh...
Is there a bug here, or have I configured something incorrectly?
I am using Kubernetes pods, so each deployment will (or is highly likely to) have a new address. As the error states, there is no region deployed to that address (verified by Lighthouse and the PBM tool).
If I'm correct, in this instance, shouldn't the shard region be reallocated?
Error:
[09:59:17 ERR] Exception in ReceiveRecover when replaying event type [Akka.Cluster.Sharding.PersistentShardCoordinator+ShardHomeAllocated] with sequence number [9] for persistenceId [/system/sharding/customerCoordinator/singleton/coordinator]
System.ArgumentException: Region [akka.tcp://imburse@10.240.0.79:8081/system/sharding/customer#594538619] not registered
Parameter name: e
   at Akka.Cluster.Sharding.PersistentShardCoordinator.State.Updated(IDomainEvent e)
   at Akka.Cluster.Sharding.PersistentShardCoordinator.ReceiveRecover(Object message)
   at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
   at Akka.Persistence.Eventsourced.<>c__DisplayClass91_0.<Recovering>b__1(Receive receive, Object message)
Sorry to be a pain, but I'm not sure whether this is poor configuration on my part or if I should chalk it up as a bug. Does anyone have experience with how shard regions are allocated/reallocated?
Specifically, I clear the EventJournal, SnapshotStore, and Metadata tables.
Could this be a bug in the Cluster.Sharding module? I accept that it's in BETA.
@heyixiaoran looks like you're talking about two different things:
It's like the difference between memory and CPU.
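Back to the coordinator-recovery error above: a hedged sketch of one commonly suggested mitigation for ephemeral environments like Kubernetes, where node addresses change on every deployment. Akka.NET 1.4 can keep the shard coordinator's state in Distributed Data instead of the persistence journal (ddata mode was itself marked beta around this time, so verify against your version):

using Akka.Configuration;

// With state-store-mode = ddata the shard coordinator's state lives in
// Akka.DistributedData and disappears with the cluster, so stale region
// addresses from old deployments are never replayed during recovery.
var shardingConfig = ConfigurationFactory.ParseString(
    "akka.cluster.sharding.state-store-mode = ddata");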