Alex Achinfiev
@aachinfiev
Akka.Persistence.Cassandra (1.0.6) atm with Akka.Persistence 1.0.6 because it's not compatible with 1.2.0 yet
If I could spawn a fixed number (say M = 1000) of persistent actors to handle N (say 100k) entities and each new id dynamically replays state for new message that would work. But I don't know if it works that way.
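The fixed-pool idea above can be sketched with a consistent-hashing router, which Akka.NET provides out of the box. This is a hedged illustration only: the `EntityCommand`/`EntityWorker` types are made up, and whether each worker can replay per-entity persistent state this way is exactly the open question in this thread.

```csharp
using Akka.Actor;
using Akka.Routing;

// Hypothetical message/worker types for illustration.
public sealed class EntityCommand { public string EntityId; }
public class EntityWorker : ReceiveActor { }

public static class EntityRouterSetup
{
    public static IActorRef Create(ActorSystem system)
    {
        // Route every command for the same entity id to the same worker
        // in a fixed pool of M = 1000 instances.
        var hashMapping = new ConsistentHashMapping(msg =>
            msg is EntityCommand cmd ? cmd.EntityId : null);

        return system.ActorOf(
            Props.Create<EntityWorker>()
                 .WithRouter(new ConsistentHashingPool(1000).WithHashMapping(hashMapping)),
            "entity-router");
    }
}
```

With this shape, N entities map onto M workers, but each worker would still have to load/replay state per entity id itself.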
Bartosz Sypytkowski
@Horusiath
@aachinfiev having 100k-500k actors may not be that bad (you'd probably need to relax the amount of memory used by the process). If you're going to use them anyway within a close time range, it's better to keep them in memory than to try to kill some of them and respawn on demand -> the latter is much more expensive.
Alex Achinfiev
@aachinfiev
Using sharding won't help me here, since I don't want to keep all 100k of them in memory regardless.
100k-500k is an arbitrary number. I can have other record types during import that may go into millions.
Most of the requests are for a new entity, with occasional updates to an existing one during the same import
Bartosz Sypytkowski
@Horusiath
I would measure how much memory it takes. Probably a few GB
Alex Achinfiev
@aachinfiev
And after each import send a purge request to reclaim memory? I have a number of services running and the box has only 8-16 GB in total.
Bartosz Sypytkowski
@Horusiath
@Kavignon you must clearly define the actors' relationship, because actors that can either be created or already active don't fit a master-slave scenario (the slave may already be there while the master is being created, or I may have your case wrong)
@aachinfiev actor-based systems are stateful by default, and this also means they'll consume a lot of memory. Maybe actor-per-entity is not a good option for you, and you need something lighter indeed
Bartosz Sypytkowski
@Horusiath
(if you really need, you may fabricate a message that persistent actor sends to its journal when it's going to persist an event: see example)
Alex Achinfiev
@aachinfiev
So you communicate directly with the journal rather than making a persistent actor handle one message at a time?
I.e. pool a set of messages and then batch them to the journal to persist
Bartosz Sypytkowski
@Horusiath
I'm using this when I want to have compatibility with the akka.persistence protocol without actually creating thousands of actors
for persisting multiple events at once you may just use the PersistAll method
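A minimal sketch of `PersistAll` inside an Akka.NET persistent actor; the `ImportBatch`/`EntityImported` types here are made up for illustration:

```csharp
using System.Collections.Generic;
using Akka.Persistence;

// Hypothetical command and event types for illustration only.
public sealed class ImportBatch { public IReadOnlyList<string> Ids; }
public sealed class EntityImported
{
    public string Id;
    public EntityImported(string id) => Id = id;
}

public class ImportActor : ReceivePersistentActor
{
    public override string PersistenceId => "import-actor";

    private readonly List<string> _state = new List<string>();

    public ImportActor()
    {
        Command<ImportBatch>(batch =>
        {
            // Build all events up front, then persist them as one batch;
            // the handler runs once per event after it has been written.
            var events = new List<EntityImported>();
            foreach (var id in batch.Ids) events.Add(new EntityImported(id));
            PersistAll(events, evt => _state.Add(evt.Id));
        });

        Recover<EntityImported>(evt => _state.Add(evt.Id));
    }
}
```

The batch goes to the journal as a group, which is what makes this cheaper than calling `Persist` once per event.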
Alex Achinfiev
@aachinfiev
Your first point is my use case for the import case. Looks very interesting. I need to check that out. Thanks :)
Michael Chandler
@optiks
Hello. Is it possible to intercept ICanTell.Tell()? I'd like to add some logging. Or would this be better done by monitoring the mailbox or some other means?
Basically, I want to be able to visualise the messages flying around. I'd like to generate a sequence diagram or similar. AroundReceive works for the receiving side, but not the sending side.
Aaron Stannard
@Aaronontheweb
@optiks on the sending side...
unfortunately I don't think you can intercept the .Tell operation without using something like PostSharp to do IL weaving
since that's built into objects like all of the IActorRef implementations
Kevin Avignon
@Kavignon
@Horusiath Well, I'd like Actor A to start first, and it starts 4 actors at different moments. And once they're no longer useful, I'd like to dispose of them
Michael Chandler
@optiks
@Aaronontheweb Thanks. I had a poke around and came to the same conclusion. It would be a nice extensibility point :)
Michael Chandler
@optiks
A hacky workaround is to use an extension method, i.e. duplicate ActorRefImplicitSenderExtensions. It just needs to be defined in each project due to scoping.
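A sketch of that workaround, assuming you shadow the built-in implicit-sender extension with your own copy that logs before forwarding (the class name and the console logging are made up; only `ActorCell.GetCurrentSelfOrNoSender` is the same trick the built-in extension uses):

```csharp
using System;
using Akka.Actor;

// Hypothetical logging duplicate of ActorRefImplicitSenderExtensions.
// Define one of these per project so it wins resolution in that scope.
public static class LoggingTellExtensions
{
    public static void Tell(this IActorRef receiver, object message)
    {
        // Capture the implicit sender the same way the built-in extension does.
        var sender = ActorCell.GetCurrentSelfOrNoSender();
        Console.WriteLine($"{sender.Path} -> {receiver.Path}: {message.GetType().Name}");
        receiver.Tell(message, sender);
    }
}
```

This only catches sends that go through the extension method, not calls that pass an explicit sender, which is why it's hacky rather than a real interception point.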
Aaron Stannard
@Aaronontheweb
@to11mtm yeah, we're going to bring that back into the picture
I'd be up for releasing that now
seen a couple of interesting ideas
one is the actor ref caching
the other is lazily creating the remote actor ref
for the sender
if it's not needed
@schepersk dude, I'm so sorry - looks like I totally missed your messages over the past week
do you still need help with those cluster issues?
Bartosz Sypytkowski
@Horusiath
@Kavignon with standard akka.fsharp you can use mailbox.Context.Child(name) to try to get the child from the parent by name. If such an actor doesn't exist, it will return ActorRefs.Nobody as the reference.
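The same lookup-or-create pattern in C#, for comparison (a sketch; the actor types are made up, but `Context.Child` and `ActorRefs.Nobody` are the same API surface):

```csharp
using Akka.Actor;

public class ChildActor : ReceiveActor { }

public class ParentActor : ReceiveActor
{
    public ParentActor()
    {
        Receive<string>(name =>
        {
            // Context.Child returns ActorRefs.Nobody when no child
            // with that name exists, so check before creating one.
            var child = Context.Child(name);
            if (child.Equals(ActorRefs.Nobody))
                child = Context.ActorOf(Props.Create<ChildActor>(), name);
            child.Forward(name);
        });
    }
}
```

This gives you the "create on first use, reuse afterwards" lifecycle without tracking the children yourself.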
Aaron Stannard
@Aaronontheweb
seems weird to me that now that we have TeamCity formatting for NBench all of our performance specifications suddenly stop being racy again
the concurrent programming version of the observer effect :p
Sean Farrow
@SeanFarrow
@aaronontheweb is that just on the server, or building locally as well?
Aaron Stannard
@Aaronontheweb
@SeanFarrow building locally it's worked fine for the most part. We had issues on the server where the time-sensitive Akka.Streams NBench specs would fail
5ms deadlines on some of them
Aaron Stannard
@Aaronontheweb
there we go
we're going to be adding this to the Multi node test runner as well
that'll allow TeamCity's "flaky tests" report to help signal to us where we need to go spend some time hardening stuff
the benchmarking stuff is a bit harder to fix mostly because benchmarking concurrent code is a dark art
we do things like toggle the processor priority to help it get scheduled ahead of everything else, but we're still at the mercy of the OS when it comes to that
in the grand scheme of things, relativistic benchmarking is the way to solve that
build up a history of every benchmark run for the same hardware profile and judge any new benchmark relative to the old ones
and use an estimated weighted moving average or some sort of threshold system to determine if a new change is seriously out of line or not
but that requires storing the state somewhere and adding a bunch of fun network calls to the assertion engine NBench uses
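The relativistic-benchmarking idea above can be sketched as a plain moving-average threshold check. This is pure illustration, not NBench code; the class name, the exponentially weighted average (one common choice for the "weighted moving average" mentioned above), and the default thresholds are all assumptions:

```csharp
using System;

// Sketch: judge a new benchmark run against an exponentially weighted
// moving average of past runs from the same hardware profile.
public class RelativeBaseline
{
    private readonly double _alpha;      // weight given to the newest sample
    private readonly double _tolerance;  // allowed fractional regression
    private double? _ewma;               // running baseline, null until seeded

    public RelativeBaseline(double alpha = 0.2, double tolerance = 0.15)
    {
        _alpha = alpha;
        _tolerance = tolerance;
    }

    // Returns false when the new run is seriously out of line with history.
    public bool Record(double elapsedMs)
    {
        if (_ewma is null)
        {
            _ewma = elapsedMs;   // first run just seeds the baseline
            return true;
        }
        bool ok = elapsedMs <= _ewma.Value * (1 + _tolerance);
        _ewma = _alpha * elapsedMs + (1 - _alpha) * _ewma.Value;
        return ok;
    }
}
```

In a real system the baseline would be loaded from and persisted to external storage per hardware profile, which is the "storing the state somewhere" cost mentioned above.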