Exception while calling IQueueAdapter.CreateNewReceiver.
System.NullReferenceException: Object reference not set to an instance of an object.
at OrleansAWSUtils.Storage.SQSStorage.CreateClient() in D:\build\agent\_work\23\s\src\AWS\Orleans.Streaming.SQS\Storage\SQSStorage.cs:line 86
at OrleansAWSUtils.Streams.SQSAdapterReceiver.Create(SerializationManager serializationManager, ILoggerFactory loggerFactory, QueueId queueId, String dataConnectionString, String serviceId) in D:\build\agent\_work\23\s\src\AWS\Orleans.Streaming.SQS\Streams\SQSAdapterReceiver.cs:line 30
at OrleansAWSUtils.Streams.SQSAdapter.CreateReceiver(QueueId queueId) in D:\build\agent\_work\23\s\src\AWS\Orleans.Streaming.SQS\Streams\SQSAdapter.cs:line 39
at Orleans.Streams.PersistentStreamPullingAgent.InitializeInternal(IQueueAdapter qAdapter, IQueueAdapterCache queueAdapterCache, IStreamFailureHandler failureHandler) in D:\build\agent\_work\23\s\src\Orleans.Runtime\Streams\PersistentStream\PersistentStreamPullingAgent.cs:line 107
http://localhost:4576/123456789012/test_queue
@illia-maier
My question is about tooling for deployment-related features like "suspending" a cluster, where all grains deactivate and save their state and no new ones get created. I'm currently implementing this for Blue-Green deployments in k8s; we have many inputs (HTTP, message bus), so we can't use the built-in k8s features, which only cover HTTP.
I'd like to understand this more - could you open an issue so we can discuss?
@jdom
I assume a pluggable grain directory would bring some out-of-the-box options that would allow, for example, guaranteeing a single activation for a certain grain instance... is that correct?
It's certainly a step in that direction. There would need to be an active deactivation protocol or leasing mechanism to give strong guarantees (because a silo might not know that it's been declared dead, for example, and a grain could still be processing some long-running request, etc)
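Purely to illustrate what such a leasing mechanism could look like, here is a minimal sketch. ILease, ILeaseProvider, and all of their members are hypothetical placeholders invented for this example, not Orleans APIs; only the grain base-class calls (OnActivateAsync, DeactivateOnIdle, OnDeactivateAsync) are from Orleans 2.x/3.x.

using System;
using System.Threading.Tasks;
using Orleans;

// Hypothetical lease abstraction -- NOT an Orleans API, invented for illustration.
public interface ILease
{
    Task<bool> TryRenewAsync(TimeSpan duration);
    Task ReleaseAsync();
}

public interface ILeaseProvider
{
    // Returns null if another activation already holds the lease for this grain id.
    Task<ILease> TryAcquireAsync(string grainId, TimeSpan duration);
}

public interface IWorkerGrain : IGrainWithStringKey
{
    Task DoWork();
}

public class WorkerGrain : Grain, IWorkerGrain
{
    private readonly ILeaseProvider _leases;
    private ILease _lease;

    public WorkerGrain(ILeaseProvider leases) => _leases = leases;

    public override async Task OnActivateAsync()
    {
        _lease = await _leases.TryAcquireAsync(this.GetPrimaryKeyString(), TimeSpan.FromSeconds(30));
        if (_lease == null)
        {
            // Another activation (possibly on a silo that doesn't yet know it was declared dead) holds the lease.
            throw new InvalidOperationException("Duplicate activation detected");
        }
    }

    public async Task DoWork()
    {
        // Renew before each request so a long-running request cannot outlive the lease unnoticed.
        if (!await _lease.TryRenewAsync(TimeSpan.FromSeconds(30)))
        {
            DeactivateOnIdle();
            throw new InvalidOperationException("Lost the single-activation lease");
        }
        // ... actual work ...
    }

    public override Task OnDeactivateAsync()
        => _lease?.ReleaseAsync() ?? Task.CompletedTask;
}

The renew-before-work step is the part that addresses the long-running-request case mentioned above: the lease, not the membership view, decides whether this activation may still act as "the" activation.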
@OracPrime
Presumably you've checked that having string ids doesn't hurt performance? A non-fixed size sounds like memory overhead and trouble.
Yes, they're cheap and internable (poolable). There is a mechanism for amortizing allocations even when they're frequently sent over the wire.
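Not how Orleans actually implements it, just a rough sketch of the amortization idea, assuming "internable (poolable)" means one canonical instance is reused per id value:

using System.Collections.Concurrent;

// Illustrative sketch only: ids received repeatedly over the wire are swapped
// for a single pooled instance, so the per-message string allocation stays short-lived.
public static class IdPool
{
    private static readonly ConcurrentDictionary<string, string> Pool =
        new ConcurrentDictionary<string, string>();

    // Returns the canonical instance for this id, adding it on first sight.
    public static string Intern(string id) => Pool.GetOrAdd(id, id);
}

A deserializer could call IdPool.Intern on each incoming id so that long-lived references all point at the same instance instead of keeping a fresh copy per message.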
UseDevelopmentClustering (server) + UseStaticClustering (client) (http://dotnet.github.io/orleans/Documentation/clusters_and_clients/configuration_guide/typical_configurations.html#unreliable-deployment-on-a-cluster-of-dedicated-servers) instead?
(UseDevelopmentClustering with one IP in each case, not trying to put all the clusters together, since they are meant to work independently.)
// server 1
.UseOrleans(builder =>
{
    builder
        .UseDevelopmentClustering(new IPEndPoint(IPAddress.Parse(myIp), 11111))
        .ConfigureEndpoints(siloPort: 11111, gatewayPort: 30001)
        .Configure<ClusterOptions>(options =>
        {
            options.ClusterId = "ResourceManagerCluster";
            options.ServiceId = "ResourceManagerService";
        })
        .ConfigureApplicationParts(parts => parts.AddApplicationPart(typeof(AllocatorGrain).Assembly).WithReferences())
        .AddMemoryGrainStorage(name: "ResourceManagerStorage");
})

// server 2
.UseOrleans(builder =>
{
    builder
        .UseDevelopmentClustering(new IPEndPoint(IPAddress.Parse(myIp), 11112))
        .ConfigureEndpoints(siloPort: 11112, gatewayPort: 30002)
        .Configure<ClusterOptions>(options =>
        {
            options.ClusterId = "WorkloadManagerCluster";
            options.ServiceId = "WorkloadManagerService";
        })
        .ConfigureApplicationParts(parts => parts.AddApplicationPart(typeof(SchedulerGrain).Assembly).WithReferences())
        .AddMemoryGrainStorage(name: "WorkloadManagerStorage");
})

// client 1
Client = new ClientBuilder()
    .UseStaticClustering(new[] { new IPEndPoint(IPAddress.Parse(myIp), 30001) })
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = "ResourceManagerCluster";
        options.ServiceId = "ResourceManagerService";
    })
    .Build();

// client 2
Client = new ClientBuilder()
    .UseStaticClustering(new[] { new IPEndPoint(IPAddress.Parse(myIp), 30002) })
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = "WorkloadManagerCluster";
        options.ServiceId = "WorkloadManagerService";
    })
    .Build();
An unhandled exception occurred while processing the request.
TimeoutException: Response did not arrive on time in 00:00:30 for message: Request *cli/a29af79c@b0dd2faa->S10.123.248.74:30002:0*grn/3413AF80/00000000 #16: . Target History is: <S10.123.248.74:30002:0:*grn/3413AF80/00000000:>.
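The 00:00:30 in that message is the client-side response timeout. If the request is genuinely expected to take longer (as opposed to never reaching its target), the limit can be raised on the client builder; a minimal sketch extending the client 2 snippet above, assuming ClientMessagingOptions.ResponseTimeout is the relevant setting in this Orleans version:

// client 2, with a longer response timeout
Client = new ClientBuilder()
    .UseStaticClustering(new[] { new IPEndPoint(IPAddress.Parse(myIp), 30002) })
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = "WorkloadManagerCluster";
        options.ServiceId = "WorkloadManagerService";
    })
    // Assumption: ResponseTimeout is the knob behind "did not arrive on time in 00:00:30".
    .Configure<ClientMessagingOptions>(options => options.ResponseTimeout = TimeSpan.FromMinutes(1))
    .Build();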