Arjen Smits
@Danthar
From your REST API Controller, Ask is the only way to do request -> response style communication with an actor. So yes, you should use Ask.
But then you're communicating from outside your actor system, so you don't have many options there :)
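A minimal sketch of what @Danthar describes, assuming a hypothetical controller and message types (none of these names come from the chat); the only essential part is the `Ask<T>` call with a timeout:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Microsoft.AspNetCore.Mvc;

// Hypothetical request/response messages.
public sealed class GetOrder
{
    public GetOrder(string orderId) => OrderId = orderId;
    public string OrderId { get; }
}

public sealed class OrderResult
{
    public OrderResult(string orderId, string status) { OrderId = orderId; Status = status; }
    public string OrderId { get; }
    public string Status { get; }
}

[Route("api/orders")]
public class OrdersController : Controller
{
    private readonly IActorRef _ordersActor; // injected from wherever you keep your top-level actor refs

    public OrdersController(IActorRef ordersActor) => _ordersActor = ordersActor;

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        // Ask bridges the outside world (the HTTP request) into the actor system
        // and awaits a single reply; always pass a timeout.
        var result = await _ordersActor.Ask<OrderResult>(new GetOrder(id), TimeSpan.FromSeconds(3));
        return Ok(result);
    }
}
```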
Stijn Herreman
@stijnherreman
@Danthar thank you for the insight, it sounds like a viable path. I'll try out some things tomorrow, end of the (work) day for me here.
Jack Wild
@jackowild
Hi all, just submitted a PR to upgrade the Akka.Persistence.MongoDB plugin to 1.3.1. If somebody could review this it would be much appreciated.
Here's the link: AkkaNetContrib/Akka.Persistence.MongoDB#30
Michel van den Berg
@promontis
I'm having trouble injecting a specific HOCON config for my cluster-sharding section into my persistence plugin. When I look at other implementations of persistence plugins, I see two flavors: a constructor with and one without a config parameter. E.g. SqliteJournal (https://github.com/akkadotnet/akka.net/blob/1595e1e832fb42b4a70a4ebf0b3dc52b87b40a96/src/contrib/persistence/Akka.Persistence.Sqlite/Journal/SqliteJournal.cs) has public SqliteJournal(Config journalConfig), whereas other persistence plugins do not provide the Config parameter. What's the deal with the Config parameter? Will the cluster sharding plugin inject the current config into the persistence plugin via that ctor?
Arjen Smits
@Danthar
@promontis check out the cluster sharding settings
It uses journal-plugin-id and snapshot-plugin-id settings, which contain the absolute path to the journal or snapshot plugin config entity.
If you don't define those, it uses the system default (whatever you have defined).
So, in short, cluster sharding does not initialise the persistence plugin; the plugin is responsible for doing that itself, either through its respective HOCON config or manually, before you initialise the cluster sharding system.
The Config parameter is there to allow you to manually provide override configs or a whole config yourself.
If you don't define anything, it will fall back to whatever is in the HOCON config.
And it's also used for testing :P
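A minimal HOCON sketch of the setup @Danthar describes; the section name, journal class, and table name below are assumptions for illustration, not taken from the chat:

```hocon
# Point cluster sharding at a dedicated journal/snapshot plugin instead of the
# system-wide default journal.
akka.cluster.sharding {
  journal-plugin-id = "akka.persistence.journal.sharding"
  snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
}

# The plugin config entity that the absolute path above points to.
akka.persistence.journal.sharding {
  class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
  connection-string = "..."
  table-name = "ShardingJournal"
  auto-initialize = on
}
```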
@jackowild we've probably already seen your PR come up, but I forwarded your request internally as well. Can't make any promises as to when someone will get around to it though.
Michel van den Berg
@promontis
@Danthar but how does the cluster sharding plugin tell the persistence plugin which config to use? I mean, if I look at the persistence plugins, they all know how to persist to a journal using the default journal config (e.g. akka.persistence.journal.sql-server). They don't check the config for the cluster setting, so how does the cluster sharding persistence config (as configured via the journal-plugin-id, as you said) flow to the persistence plugin?
For example, this plugin (https://github.com/alexvaluyskiy/Akka.Persistence.Azure/blob/dev/src/Akka.Persistence.AzureTable/AzureTablePersistence.cs#L85) always uses the same config section 'akka.persistence.journal.azure-table' for its persistence. It doesn't seem able to use a different config for, e.g., cluster sharding. This means both normal actor state and cluster sharding state will end up in the same table (as configured by that one config section).
With the sql-server persistence plugin it seems to work when I configure it like this (https://github.com/promontis/stylister.sample/blob/master/Liking/Stylister.Liking.Downloader/settings.hocon), but I don't get how the akka.persistence.journal.sharding config section is passed to the sql-server persistence plugin.
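Roughly how that section reaches the plugin (a sketch of the mechanism, not the actual Akka.NET source): the Persistence extension resolves a journal by its plugin id, which is an absolute HOCON path, and uses the Config section found at that path to instantiate and configure the journal actor; that is what the optional Config ctor parameter receives.

```csharp
using Akka.Actor;
using Akka.Persistence;

public static class Example
{
    public static void Main()
    {
        var system = ActorSystem.Create("my-system");

        // The Persistence extension looks up the HOCON section at the given path
        // ("akka.persistence.journal.sharding" here) and uses it to create and
        // configure that journal actor; an empty plugin id falls back to the default journal.
        var persistence = Persistence.Instance.Apply(system);
        IActorRef shardingJournal = persistence.JournalFor("akka.persistence.journal.sharding");
    }
}
```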
Arjen Smits
@Danthar
Eh, I don't understand your question, because you answered it yourself already?
Michel van den Berg
@promontis
haha
Arjen Smits
@Danthar
Maybe a quick PM?
Michel van den Berg
@promontis
sure :)
Jack Wild
@jackowild
thanks @Danthar, I'm not sure how the build/CI process works, so I'll need some help with that when one of you gets round to it :+1:
Michal Dabrowski
@defrag2_twitter
Hey guys, question: when using a supervising actor for persistent actors (in cases where they cannot really be created or recreated on demand), is the approach of creating actors while replaying events, mentioned in the "akka in action" book (https://github.com/RayRoestenburg/akka-in-action/blob/master/chapter-persistence/src/main/scala/aia/persistence/ShoppersSingleton.scala#L71), the best way to go? Or would streaming from some other source be more applicable?
Ricardo Abreu
@codenakama

hey guys. I have this scenario where I need to be able to do things like determining, for a given user: nearby shops, distance from a shop, and distance and time away from a delivery person/courier.

For this I decided to create a "Locations Service", and the idea is to have other services' actors make requests to it to get location-related data using Akka.Remote.

For example: the Shop Service asks the Locations Service which shops are near a given point/user location (latitude, longitude). My problem is that currently I have the dbs separated by service. How would the Shop Service know which shops to present to the user? Right now all I have is geospatial data stored in the Locations Service db and a name for each entry. I feel like I'm missing something and I might even be confusing myself :/

Ricardo Abreu
@codenakama
maybe I should kill the Locations Service and have each service that requires location data handle its own spatial queries, without having to call a separate service
Bartosz Sypytkowski
@Horusiath
@codenakama isn't geospatial something that can be simply handled by the database itself?
@defrag2_twitter As long as it's idempotent and won't fail (in your example it's get-or-create an actor), it should be good to go
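A minimal C# sketch of the get-or-create pattern @Horusiath refers to (class and message names are hypothetical, loosely following the linked ShoppersSingleton example): the supervisor persists a "created" event and recreates the child idempotently, both when handling commands and when replaying.

```csharp
using Akka.Actor;
using Akka.Persistence;

public sealed class ForwardToShopper
{
    public ForwardToShopper(string shopperId, object message) { ShopperId = shopperId; Message = message; }
    public string ShopperId { get; }
    public object Message { get; }
}

public sealed class ShopperCreated
{
    public ShopperCreated(string shopperId) => ShopperId = shopperId;
    public string ShopperId { get; }
}

public class ShopperActor : ReceiveActor
{
    public ShopperActor() => ReceiveAny(_ => { /* per-shopper logic */ });
}

public class ShoppersActor : ReceivePersistentActor
{
    public override string PersistenceId => "shoppers";

    public ShoppersActor()
    {
        Command<ForwardToShopper>(cmd =>
        {
            if (Context.Child(cmd.ShopperId).Equals(ActorRefs.Nobody))
            {
                // First time we see this shopper: persist the fact, then create the child and forward.
                Persist(new ShopperCreated(cmd.ShopperId),
                    evt => GetOrCreateShopper(evt.ShopperId).Forward(cmd.Message));
            }
            else
            {
                GetOrCreateShopper(cmd.ShopperId).Forward(cmd.Message);
            }
        });

        // During recovery the same idempotent get-or-create runs again; it must not fail.
        Recover<ShopperCreated>(evt => GetOrCreateShopper(evt.ShopperId));
    }

    private IActorRef GetOrCreateShopper(string shopperId)
    {
        var child = Context.Child(shopperId);
        return child.Equals(ActorRefs.Nobody)
            ? Context.ActorOf(Props.Create(() => new ShopperActor()), shopperId)
            : child;
    }
}
```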
Ricardo Abreu
@codenakama
@Horusiath indeed, the logic can be done in the db. I think I should keep that inside each microservice and just kill my "Locations Service", which was a bad idea
however, I could have a persistent actor registering "login" events for each shop; that way I could tell which shops are available to order from
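A minimal sketch of that idea (all type and event names are assumptions): a persistent actor that records shop login/logout events and can answer which shops are currently available, rebuilding that set from the journal on recovery.

```csharp
using System.Collections.Generic;
using System.Linq;
using Akka.Actor;
using Akka.Persistence;

public sealed class ShopLoggedIn
{
    public ShopLoggedIn(string shopId) => ShopId = shopId;
    public string ShopId { get; }
}

public sealed class ShopLoggedOut
{
    public ShopLoggedOut(string shopId) => ShopId = shopId;
    public string ShopId { get; }
}

public sealed class GetAvailableShops { }

public class ShopAvailabilityActor : ReceivePersistentActor
{
    private readonly HashSet<string> _available = new HashSet<string>();

    public override string PersistenceId => "shop-availability";

    public ShopAvailabilityActor()
    {
        // Persist each login/logout event, then update the in-memory set.
        Command<ShopLoggedIn>(cmd => Persist(cmd, Apply));
        Command<ShopLoggedOut>(cmd => Persist(cmd, Apply));
        Command<GetAvailableShops>(_ => Sender.Tell(_available.ToArray()));

        // On recovery the same events rebuild the set of available shops.
        Recover<ShopLoggedIn>(Apply);
        Recover<ShopLoggedOut>(Apply);
    }

    private void Apply(ShopLoggedIn evt) => _available.Add(evt.ShopId);
    private void Apply(ShopLoggedOut evt) => _available.Remove(evt.ShopId);
}
```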
Thomas Lazar
@thomaslazar
good morning
I have a question... I can't seem to get Akka testing to run with NUnit 3 (any version). I've seen there were some fixes done around an issue regarding NUnit 3... but I can't seem to make it work. Anyone got any clues?
Jose Carlos Marquez
@oeaoaueaa
what kind of error are you having?
Thomas Lazar
@thomaslazar
when I try to run the tests, the ReSharper test runner tells me "no test fixtures found"
Andrey Leskov
@andreyleskov
Hey guys, I'm scaling my application with cluster sharding and am having a look at the multinode test kit (http://getakka.net/articles/networking/multi-node-test-kit.html). How is it supposed to be debugged? Is there any guide? Maybe runners for ReSharper tests, like for NBench? (https://github.com/Pro-Coded/Pro.NBench.xUnit) I can only imagine attaching VS to the multinode test runner, and that seems to be a painful and long process :(
Aaron Stannard
@Aaronontheweb
@thomaslazar running .NET Core or .NET desktop?
@andreyleskov debugging MNTR specs is a dark art
having done it for years, I have a couple of approaches I use
first - relying on logging as your primary source of information is the lowest-friction path
trying to debug 4-5 processes running concurrently
is going to end poorly usually
especially if you don't know where the problem is
Andrey Leskov
@andreyleskov
oh, >_< yep
Aaron Stannard
@Aaronontheweb
you can use the child process debugger extension
it's a free add-on to Visual Studio
zero clue if that supports .NET Core or not
that can help get the debugger attached to breakpoints in the child process
so my methodology is this: use logging to smoke out the error
the MNTR runner will produce detailed logs for each node
and the overall test run report will show you which nodes had failed assertions
get enough log data to form a theory as to why the thing might be failing