Deniz İrgin
@Blind-Striker
@muratmert I have 3 different nodes
Deniz İrgin
@Blind-Striker
@Horusiath we added DistributedPubSub.DefaultConfig() as a fallback but we're still getting the same error
Stijn Herreman
@stijnherreman
I have an actor that needs to handle one type of message only, RetrieveFoo, to retrieve a resource from a REST API. Once the resource is retrieved, it is cached indefinitely. Is this a good case for using async/await (and thus blocking the actor), or should I still attempt to use PipeTo and keep track in some way of RetrieveFoo messages that need to be responded to?
I want to avoid multiple requests to retrieve the resource.
Stijn Herreman
@stijnherreman
After thinking about this some more, I think I can just store a task in a variable when receiving the first message and use PipeTo, and then for all subsequent messages skip creating the task and just use PipeTo on the existing task. That should work even when the task is completed already.
Stijn Herreman
@stijnherreman
I came up with the following (initial) implementation. It feels a bit awkward to use ContinueWith, but maybe that's because of my habit of using async/await.
    public sealed class Specification
        : ReceiveActor
    {
        private readonly IClient specsDataServiceClient;

        // Cached task: created on the first GetSpecification and reused for all later requests.
        private Task<Messages.Models.Specification> modelTask;

        public Specification(IClient specsDataServiceClient)
        {
            this.specsDataServiceClient = specsDataServiceClient ?? throw new ArgumentNullException(nameof(specsDataServiceClient));

            this.Receive<GetSpecification>(message => this.GetSpecification(message));
        }

        private void GetSpecification(GetSpecification message)
        {
            if (this.modelTask == null)
            {
                var getSpecificationTask = this.specsDataServiceClient.Specification_GetSpecificationAsync(message.SpecificationId);
                var getOperationsTask = this.specsDataServiceClient.Operations_GetOperationsForSpecIdAsync(message.SpecificationId);
                var getTestsTask = this.specsDataServiceClient.Test_GetTestsForSpecIdAsync(message.SpecificationId);

                // Assign to the field (not a new local) so subsequent messages reuse the same task.
                this.modelTask = Task.WhenAll(getSpecificationTask, getOperationsTask, getTestsTask)
                    .ContinueWith(_ =>
                    {
                        return new Messages.Models.Specification(getSpecificationTask.Result, getOperationsTask.Result, getTestsTask.Result);
                    });
            }

            var senderClosure = this.Sender;
            this.modelTask.PipeTo(senderClosure);
        }
    }
Arsene
@Tochemey
@stijnherreman Since you are using an actor, if it is possible simply use the synchronous version of your function call.
@stijnherreman Also, if you can explain in detail what you want to do, I can be of help.
Stijn Herreman
@stijnherreman
@Tochemey no sync versions available unfortunately. The IClient implementation code is generated by a third-party tool, from an OpenAPI (Swagger) spec.
Arsene
@Tochemey
@stijnherreman Okay
Stijn Herreman
@stijnherreman
I was reading http://gigi.nullneuron.net/gigilabs/asynchronous-and-concurrent-processing-in-akka-net-actors/ (and the previous article), about using ReceiveAsync or using PipeTo.
Arsene
@Tochemey
@stijnherreman That is cool
Stijn Herreman
@stijnherreman
Basically, the actor is responsible for retrieving a REST resource and supplying it back to whoever asked for it. But it should only ever retrieve the resource once, and then respond with the cached resource.
Arsene
@Tochemey
@stijnherreman From the REST API controller, use Ask<> to send the message to the actor.
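As a rough illustration of the Ask<> suggestion above, here is a minimal sketch of a controller asking the actor for the cached resource. The ASP.NET Core controller class, the injected IActorRef, the GetSpecification constructor and the 5-second timeout are assumptions made for the example, not something stated in the chat.

    // Hypothetical ASP.NET Core controller bridging HTTP requests into the actor system via Ask<>.
    [ApiController]
    [Route("api/specifications")]
    public sealed class SpecificationsController : ControllerBase
    {
        private readonly IActorRef specificationActor; // assumed to be registered in DI

        public SpecificationsController(IActorRef specificationActor)
        {
            this.specificationActor = specificationActor;
        }

        [HttpGet("{id}")]
        public async Task<IActionResult> Get(int id)
        {
            // Ask<T> sends the message and awaits the actor's reply (with a timeout),
            // which is the usual way to do request/response from outside the actor system.
            var model = await this.specificationActor.Ask<Messages.Models.Specification>(
                new GetSpecification(id), TimeSpan.FromSeconds(5));

            return this.Ok(model);
        }
    }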
Stijn Herreman
@stijnherreman
@Tochemey ok, I'll take a look at that. I had previously read that Ask<> should be avoided, but to be honest I haven't taken a proper look at it yet.
Thank you :)
Arsene
@Tochemey
Then within the Actor where you are doing async stuff use:
CheckAsync()
    .ContinueWith(task =>
    {
        var response = task.Result;
        return response;
    },
    // Combine the options with bitwise OR; using & here would cancel them out.
    TaskContinuationOptions.AttachedToParent | TaskContinuationOptions.ExecuteSynchronously)
    .PipeTo(closure);
@stijnherreman I have a production app using Rest API and Akka.NET
@stijnherreman CheckAsync() is a function that runs asynchronously.
@stijnherreman closure = Sender
Stijn Herreman
@stijnherreman
I'll read the article, it looks like a good resource.
Arsene
@Tochemey
@stijnherreman These articles helped me get grounded in the Ask/PipeTo pattern
Arjen Smits
@Danthar
@stijnherreman avoid Ask if you can. In your scenario, why not use the Become feature?
So you receive a request for your REST data. You detect that you don't have it in the cache
Arsene
@Tochemey
@Danthar from the REST API controller I think Ask is the best bet.
Arjen Smits
@Danthar
then perform your async -> pipeto stuff. At the end, switch behaviors with the Become feature
where you wait for your response message
and stash everything else
once you receive your response message
unstash all. And serve all the other stuff from your cache
that way you don't have to block your actor's execution, and you are sure you only pay the hit once
From your REST API controller, Ask is the only way to do request -> response style communication with an actor. So yes, then you should use Ask
But then you're communicating from outside your actor system, so you don't have many options there :)
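To make the flow described above concrete, here is a minimal sketch of the Become/Stash approach, reusing the hypothetical IClient and message types from Stijn's earlier snippet; handling of a failed retrieval (PipeTo would deliver a Status.Failure) is left out for brevity.

    // Sketch only: the first request triggers the REST calls, everything else is stashed until
    // the piped result arrives, after which all requests are served from the cached model.
    public sealed class SpecificationCache : ReceiveActor, IWithUnboundedStash
    {
        private readonly IClient specsDataServiceClient;

        public IStash Stash { get; set; }

        public SpecificationCache(IClient specsDataServiceClient)
        {
            this.specsDataServiceClient = specsDataServiceClient ?? throw new ArgumentNullException(nameof(specsDataServiceClient));

            this.Receive<GetSpecification>(message => this.OnFirstRequest(message));
        }

        private void OnFirstRequest(GetSpecification message)
        {
            var getSpecificationTask = this.specsDataServiceClient.Specification_GetSpecificationAsync(message.SpecificationId);
            var getOperationsTask = this.specsDataServiceClient.Operations_GetOperationsForSpecIdAsync(message.SpecificationId);
            var getTestsTask = this.specsDataServiceClient.Test_GetTestsForSpecIdAsync(message.SpecificationId);

            // Pipe the assembled model back to ourselves instead of replying directly.
            Task.WhenAll(getSpecificationTask, getOperationsTask, getTestsTask)
                .ContinueWith(_ => new Messages.Models.Specification(getSpecificationTask.Result, getOperationsTask.Result, getTestsTask.Result))
                .PipeTo(this.Self);

            // Stash the current request so it is replayed once the cache is filled,
            // then switch behaviour to wait for the piped result.
            this.Stash.Stash();
            this.Become(this.WaitingForModel);
        }

        private void WaitingForModel()
        {
            this.Receive<Messages.Models.Specification>(model =>
            {
                // Cache is filled: replay the stashed requests against the serving behaviour.
                this.Stash.UnstashAll();
                this.Become(() => this.Serving(model));
            });

            // Any request arriving while we wait is stashed rather than triggering a second call.
            this.Receive<GetSpecification>(_ => this.Stash.Stash());
        }

        private void Serving(Messages.Models.Specification model)
        {
            // From here on, every request is answered from the cached model.
            this.Receive<GetSpecification>(_ => this.Sender.Tell(model));
        }
    }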
Stijn Herreman
@stijnherreman
@Danthar thank you for the insight, it sounds like a viable path. I'll try out some things tomorrow, end of the (work) day for me here.
Jack Wild
@jackowild
Hi all, just submitted a PR to upgrade the Akka.Persistence.MongoDB plugin to 1.3.1. If somebody could review this it would be much appreciated.
Here's the link: AkkaNetContrib/Akka.Persistence.MongoDB#30
Michel van den Berg
@promontis
I'm having trouble injecting a specific HOCON config for my cluster-sharding section into my persistence plugin. When I look at other implementations of persistence plugins, I see two flavors: constructors with and without a Config parameter. E.g. SqliteJournal (https://github.com/akkadotnet/akka.net/blob/1595e1e832fb42b4a70a4ebf0b3dc52b87b40a96/src/contrib/persistence/Akka.Persistence.Sqlite/Journal/SqliteJournal.cs) has public SqliteJournal(Config journalConfig), whereas other persistence plugins do not provide the Config parameter. What's the deal with the Config parameter? Will the cluster sharding plugin inject the current config into the persistence plugin via that ctor?
Arjen Smits
@Danthar
@promontis check out the cluster sharding settings
it uses journal-plugin-id and snapshot-plugin-id settings, which contain the absolute HOCON path to the journal or snapshot plugin config entry
if you don't define those it uses the system default (whatever you have defined)
So, in short, cluster sharding does not initialise the persistence plugin; the plugin is responsible for initialising itself, either through its respective HOCON config or manually, before you initialise the cluster sharding system
The Config parameter is there to allow you to manually provide override configs or a whole config yourself.
If you don't define anything it will fall back to whatever is in the HOCON config
And it's also used for testing :P
@jackowild we've probably already seen your PR come up, but I forwarded your request internally as well. Can't make any promises as to when someone will get around to it though.
Michel van den Berg
@promontis
@Danthar but how does the cluster sharding plugin tell the persistence plugin which config to use? I mean, if I look at the persistence plugins they all know how to persist to a journal using the default journal config (eg. akka.persistence.journal.sql-server). They do not check the config for the cluster setting, so how does the cluster sharding persistence config (as configured via the journal-plugin-id, as you said) flow to the persistence plugin?
For example, this plugin (https://github.com/alexvaluyskiy/Akka.Persistence.Azure/blob/dev/src/Akka.Persistence.AzureTable/AzureTablePersistence.cs#L85) always uses the same config section 'akka.persistence.journal.azure-table' for its persistence. It does not seem able to use a different config for, e.g., cluster sharding. This means both normal actor state and cluster sharding state will end up in the same table (as configured by that one config section).
with the sql-server persistence plugin it seems to work when I configure it like this (https://github.com/promontis/stylister.sample/blob/master/Liking/Stylister.Liking.Downloader/settings.hocon), but I don't get how the akka.persistence.journal.sharding config section is passed to the sql-server persistence plugin
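For reference, a sketch of the HOCON wiring described above: cluster sharding's journal-plugin-id and snapshot-plugin-id point at dedicated plugin sections, so sharding state goes to its own tables. The class names, table names and connection strings below are illustrative assumptions, not taken from the linked settings.hocon.

    akka.cluster.sharding {
      # Absolute HOCON paths to the plugin sections cluster sharding should use.
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    akka.persistence.journal.sharding {
      # Illustrative: a second SQL Server journal dedicated to sharding state.
      class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
      connection-string = "<your connection string>"
      table-name = "ShardingJournal"
      auto-initialize = on
    }

    akka.persistence.snapshot-store.sharding {
      class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
      connection-string = "<your connection string>"
      table-name = "ShardingSnapshotStore"
      auto-initialize = on
    }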