Stijn Herreman
@stijnherreman
Basically, the actor is responsible for retrieving a REST resource and supplying it back to whoever asked for it. But it should only ever retrieve the resource once, and then respond with the cached resource.
Arsene
@Tochemey
@stijnherreman From the REST API controller, use Ask<> to send the message to the actor.
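A minimal sketch of that suggestion, assuming a hypothetical GetResource message type and an injected _cacheActor reference (both names are illustrative, not from the conversation):

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Microsoft.AspNetCore.Mvc;

// Hypothetical request message for illustration.
public sealed class GetResource
{
    public GetResource(string id) => Id = id;
    public string Id { get; }
}

[ApiController]
[Route("resources")]
public class ResourceController : ControllerBase
{
    private readonly IActorRef _cacheActor; // injected at startup

    public ResourceController(IActorRef cacheActor) => _cacheActor = cacheActor;

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        // Ask<T> sends the message and awaits the actor's reply; passing a
        // timeout makes the HTTP request fail fast if no reply ever comes.
        var resource = await _cacheActor.Ask<string>(
            new GetResource(id), TimeSpan.FromSeconds(5));
        return Ok(resource);
    }
}
```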
Stijn Herreman
@stijnherreman
@Tochemey ok, I'll take a look at that. I had previously read that Ask<> should be avoided, but to be honest I haven't taken a proper look at it yet.
Thank you :)
Arsene
@Tochemey
Then within the Actor where you are doing async stuff use:
CheckAsync().ContinueWith(task =>
    {
        var response = task.Result;
        return response;
    },
    // Note: these flags must be combined with |; using & would clear both.
    TaskContinuationOptions.AttachedToParent |
    TaskContinuationOptions.ExecuteSynchronously)
    .PipeTo(closure);
@stijnherreman I have a production app using Rest API and Akka.NET
@stijnherreman CheckAsync() is a method that runs asynchronously.
@stijnherreman closure = Sender
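For comparison, the same idea can be sketched without ContinueWith by piping the task straight to the captured sender. This assumes CheckAsync returns a Task<string>; the actor and message types here are illustrative:

```csharp
using System.Threading.Tasks;
using Akka.Actor;

public sealed class FetchActor : ReceiveActor
{
    public FetchActor()
    {
        Receive<string>(_ =>
        {
            // Sender is only valid while the current message is being
            // processed, so capture it in a local first (the "closure").
            var replyTo = Sender;
            CheckAsync().PipeTo(replyTo);
        });
    }

    // Stand-in for the real asynchronous call.
    private static Task<string> CheckAsync() => Task.FromResult("result");
}
```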
Stijn Herreman
@stijnherreman
I'll read the article, it looks like a good resource.
Arsene
@Tochemey
@stijnherreman These articles helped me get grounded in the Ask/PipeTo pattern
Arjen Smits
@Danthar
@stijnherreman avoid Ask if you can. In your scenario, why not use the Become feature?
So you receive a request for your REST data. You detect that you don't have it in cache
Arsene
@Tochemey
@Danthar from REST API controller I think Ask is the best bet.
Arjen Smits
@Danthar
then perform your async -> pipeto stuff. At the end, switch behaviors with the Become feature
where you wait for your response message
and stash everything else
once you receive your response message
unstash all. And serve all the other stuff from your cache
that way you don't have to block your actor's execution, and you're sure you only pay the hit once
From your REST API controller, Ask is the only way to do request -> response style communication with an actor. So yes, then you should use Ask
But then you're communicating from outside your actor system, so you don't have many options there :)
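The Become/Stash/PipeTo flow described above could be sketched roughly like this. All type names, the FetchAsync helper, and the use of string as both the fetch result and the reply are illustrative assumptions, not code from the conversation:

```csharp
using System.Threading.Tasks;
using Akka.Actor;

public sealed class GetResource { } // hypothetical request message

public sealed class CachedResourceActor : ReceiveActor, IWithUnboundedStash
{
    private string _cache;
    public IStash Stash { get; set; }

    public CachedResourceActor()
    {
        Receive<GetResource>(_ =>
        {
            var replyTo = Sender;          // capture before async work
            FetchAsync().PipeTo(Self);     // kick off the one-time fetch
            Become(() => Fetching(replyTo));
        });
    }

    private void Fetching(IActorRef originalSender)
    {
        Receive<string>(result =>
        {
            _cache = result;
            originalSender.Tell(_cache);
            Stash.UnstashAll();            // replay requests that queued up
            Become(Cached);
        });
        Receive<GetResource>(_ => Stash.Stash()); // park requests meanwhile
    }

    private void Cached()
    {
        // Every later request is served from the cache; the hit is paid once.
        Receive<GetResource>(_ => Sender.Tell(_cache));
    }

    // Stand-in for the real REST call.
    private static Task<string> FetchAsync() => Task.FromResult("resource");
}
```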
Stijn Herreman
@stijnherreman
@Danthar thank you for the insight, it sounds like a viable path. I'll try out some things tomorrow, end of the (work) day for me here.
Jack Wild
@jackowild
Hi all, just submitted a PR to upgrade the Akka.Persistence.MongoDB plugin to 1.3.1. If somebody could review this it would be much appreciated.
Here's the link: AkkaNetContrib/Akka.Persistence.MongoDB#30
Michel van den Berg
@promontis
I'm having trouble injecting a specific HOCON config for my cluster-sharding section into my persistence plugin. When I look at other implementations of persistence plugins, I see two flavors: a constructor with and without a config parameter. E.g. SqliteJournal (https://github.com/akkadotnet/akka.net/blob/1595e1e832fb42b4a70a4ebf0b3dc52b87b40a96/src/contrib/persistence/Akka.Persistence.Sqlite/Journal/SqliteJournal.cs) has public SqliteJournal(Config journalConfig), whereas other persistence plugins do not provide the Config parameter. What's the deal with the Config parameter? Will the cluster sharding plugin inject the current config into the persistence plugin via that ctor?
Arjen Smits
@Danthar
@promontis check out the cluster sharding settings
it uses journal-plugin-id and snapshot-plugin-id settings, which contain the absolute path to the journal or snapshot plugin config entity
if you don't define that, it uses the system default (whatever you have defined)
So in short, cluster sharding does not initialise the persistence plugin; the plugin is responsible for doing that itself, either through its respective HOCON config or manually, before you initialise the cluster sharding system.
The Config parameter is there to allow you to manually provide override configs, or a whole config, yourself.
If you don't define anything it will fall back to whatever is in the HOCON config
And its also used for testing :P
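Putting those settings together, a sketch of what such a config could look like (the section names after akka.persistence.journal, the plugin class, and the connection string are illustrative; match them to your actual plugin):

```hocon
akka.cluster.sharding {
  # Point sharding at a dedicated journal/snapshot plugin; when these are
  # left empty, the system-wide persistence defaults are used instead.
  journal-plugin-id = "akka.persistence.journal.sharding"
  snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
}

# A separate journal config entry just for sharding state.
akka.persistence.journal.sharding {
  class = "Akka.Persistence.Sqlite.Journal.SqliteJournal, Akka.Persistence.Sqlite"
  connection-string = "Datasource=sharding.db"
  auto-initialize = on
}
```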
@jackowild we've probably already seen your PR come up, but I forwarded your request internally as well. Can't make any promises as to when someone will get around to it though.
Michel van den Berg
@promontis
@Danthar but how does the cluster sharding plugin tell the persistence plugin which config to use? I mean, if I look at the persistence plugins, they all know how to persist to a journal using the default journal config (e.g. akka.persistence.journal.sql-server). They do not check the config for the cluster setting, so how does the cluster sharding persistence config (as configured via journal-plugin-id, as you said) flow to the persistence plugin?
For example, this plugin (https://github.com/alexvaluyskiy/Akka.Persistence.Azure/blob/dev/src/Akka.Persistence.AzureTable/AzureTablePersistence.cs#L85) always uses the same config section 'akka.persistence.journal.azure-table' for its persistence. It seems unable to get a different config for e.g. cluster sharding. This means both normal actor state and cluster sharding state will end up in the same table (as configured by that one config section)
with the sql-server persistence plugin it seems to work when I configure it like this (https://github.com/promontis/stylister.sample/blob/master/Liking/Stylister.Liking.Downloader/settings.hocon), but I don't get how the akka.persistence.journal.sharding config section is passed to the sql-server persistence plugin
Arjen Smits
@Danthar
Eh, I don't understand your question, because you answered it yourself already?
Michel van den Berg
@promontis
haha
Arjen Smits
@Danthar
Maybe a quick PM?
Michel van den Berg
@promontis
sure :)
Jack Wild
@jackowild
thanks @Danthar, I'm not sure how the build/CI process works, so I'll need some help with that when one of you gets round to it :+1:
Michal Dabrowski
@defrag2_twitter
Hey guys, question: when using a supervising actor for persistent actors (in cases where they cannot really be created or recreated on demand), is the approach of creating actors while replaying events, mentioned in the "Akka in Action" book (https://github.com/RayRoestenburg/akka-in-action/blob/master/chapter-persistence/src/main/scala/aia/persistence/ShoppersSingleton.scala#L71), the best way to go? Or would streaming from some other source be more applicable?
Ricardo Abreu
@codenakama

hey guys. I have this scenario where I need to be able to do things like determining, for a given user: nearby shops, distance from a shop, and distance and time away from a delivery person/courier.

For this I decided to create a "Locations Service" and the idea is to have other services actors make requests to it to get location related data using Akka.Remote .

For example: Shop Service asks locationsService which shops are nearby for a given point/user location (latitude, longitude). My problem is that currently I have the dbs separated per service. How would shop service know which shops to present to the user? Right now all I have is geospatial data stored in the locations service db and a name for each entry. I feel like I'm missing something and I might be even confusing myself :/

Ricardo Abreu
@codenakama
maybe I should kill the locations service and have each service that needs location data handle its own spatial queries without having to call a separate service
Bartosz Sypytkowski
@Horusiath
@codenakama isn't geospatial something that can be simply handled by the database itself?
@defrag2_twitter As long as it's idempotent and won't fail (in your example it's get-or-create an actor), it should be good to go
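An idempotent get-or-create during recovery could look roughly like this in Akka.NET. The Shopper actor and naming scheme are illustrative stand-ins for whatever the replayed events reference:

```csharp
using Akka.Actor;

// Hypothetical child actor re-created during event replay.
public sealed class Shopper : ReceiveActor
{
    public Shopper(long shopperId) { }
}

public static class ChildHelpers
{
    // Idempotent: replaying the same event twice returns the existing
    // child instead of creating a duplicate (or crashing on a name clash).
    public static IActorRef GetOrCreateShopper(IUntypedActorContext context, long shopperId)
    {
        var name = $"shopper-{shopperId}";
        var child = context.Child(name); // ActorRefs.Nobody if it doesn't exist
        return child.IsNobody()
            ? context.ActorOf(Props.Create(() => new Shopper(shopperId)), name)
            : child;
    }
}
```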
Ricardo Abreu
@codenakama
@Horusiath indeed, the logic can be done in the db. I think I should keep that inside each microservice and just kill my "location service", which was a bad idea
however, I could have a persistent actor registering "login" events for each shop, so this way I could tell which shops are available to order from