Roman Golenok
@shersh
How do I use a database in Akka.NET? Any samples? Do I need to create an actor for DB read operations, or should I perform the action inside the actor where it's needed?
Natan Vivo
@nvivo
@Aaronontheweb, @rogeralsing, I found some ways to optimize the queue. A simple thing is that Count is quite expensive in ConcurrentQueue, and it's being called twice in the PoolWorker. Just by calling it once you can improve perf by ~15%.
For more gains, the test needs to be changed, because as it is it assumes the worst possible scenario where messages are being queued and dequeued at the same time on all threads all the time; ConcurrentQueue is already very optimized for this and it will be hard to beat. But if you change the pattern to add some messages, wait for a while, then add more, you can measure other improvements with some tricks.
I tested a variation of your swap pattern, where I basically return the entire ConcurrentQueue at once and create another one. It locks only on Add and DequeueAll. I saw about 40% improvement doing that vs dequeueing one at a time
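The swap pattern being described can be sketched like this. This is a minimal illustration, not Akka.NET's actual PoolWorker or mailbox code; `SwapQueue` is a hypothetical name:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the swap pattern described above: producers lock
// only long enough to append, and the consumer swaps the entire backing
// list out in one operation instead of dequeueing items one at a time.
public sealed class SwapQueue<T>
{
    private readonly object _gate = new object();
    private List<T> _items = new List<T>();

    public void Enqueue(T item)
    {
        lock (_gate) { _items.Add(item); }
    }

    // Returns everything queued so far and installs a fresh list, so the
    // consumer can iterate over the snapshot without any further locking.
    public List<T> DequeueAll()
    {
        lock (_gate)
        {
            var snapshot = _items;
            _items = new List<T>();
            return snapshot;
        }
    }
}
```

As the discussion notes, this only pays off when batches get a chance to accumulate between swaps; swapping on every message just moves the cost around.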
Natan Vivo
@nvivo
well.. just some ideas...
Roger Johansson
@rogeralsing
Neat, that was what I tried to solve locklessly (but failed due to stale reads)
Is the perf gain still true in a multi-producer scenario?
Or does the gain vanish then due to contention?
Bartosz Sypytkowski
@Horusiath
@rogeralsing @Aaronontheweb Do we have some release planned in the near future?
Roger Johansson
@rogeralsing
Not that I know of, unless @Aaronontheweb has something planned
Natan Vivo
@nvivo
@rogeralsing I didn't change the tests, so it remained a single producer. But it seems most of the time spent in ConcurrentQueue is not actually waiting, but dealing with its internal structures.
What helped is that I grabbed the ConcurrentQueue code from corefx, added it locally to the project, and ran the profiler.
We can see that in the code that is on GitHub, ~30% of the time is spent on TryDequeue and 30% on Count
most of this time is just looping through segments inside the ConcurrentQueue
so it's not actually locks, just regular code running. It's hard to optimize it further for general use cases.
(when I say locks in ConcurrentQueue, I mean SpinWait, not the lock keyword)
Natan Vivo
@nvivo
if you were to push all the overhead to adding and zero to retrieving, the most optimal solution would be a simple List<T> with a predefined size, using locks/SpinWait on Add; for dequeue you have a single method that locks/SpinWaits and exchanges the list with a new one. You can then loop over the list with tasks in the pool directly as an array. The issue is that it's hard to know what size this array should be
so it becomes a trade-off with a linked list doing the same.
after that, the path would be a linked list of arrays, and that's exactly what ConcurrentQueue is...
Natan Vivo
@nvivo
the point I made about the test is that unless you have some time to fill a list with items without dequeueing, the gains from swapping the list are lost. In the test it keeps producing and swapping all the time, so the cost of swapping becomes higher. If you give it some time to fill a list and only swap with a minimum number of items, swapping becomes cheaper and these other solutions apply
another thing: if using ConcurrentQueue, replacing Count with IsEmpty is much cheaper, as IsEmpty only checks the local segment, while Count needs to loop through all of them
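The Count vs. IsEmpty point boils down to a one-line change wherever the code only needs to know whether any work exists. A small sketch (not the actual PoolWorker code):

```csharp
using System;
using System.Collections.Concurrent;

// As noted above, ConcurrentQueue<T>.IsEmpty only inspects the head
// segment, while Count walks every segment, so prefer IsEmpty when all
// you need to know is whether there is any work at all.
var queue = new ConcurrentQueue<int>();
queue.Enqueue(42);

// Cheaper check:
if (!queue.IsEmpty)
    Console.WriteLine("work available");

// More expensive; only use this when you need the actual number:
if (queue.Count > 0)
    Console.WriteLine($"items: {queue.Count}");
```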
Roger Johansson
@rogeralsing
:+1:
Roger Johansson
@rogeralsing
another solution would be to add a true DequeueAll in a modified version of ConcurrentQueue. That would be feasible too, right? We could just make the ConcurrentQueue head point to where the last segment is, and then enumerate over all of the segments we just removed... haven't checked the code for it, but that should work, right?
Natan Vivo
@nvivo
It could work. One of the things I thought of is just using that code and tweaking some parameters to match the usage. ConcurrentQueue has a lot of parameters inside that were probably calculated for the most general cases
as I understand it, what it does inside is create a 32-item array for each segment, and it grows by linking them. Each segment has its own lock, so you only lock if you are modifying across that 32-item boundary
From the profiler, even with the 4th iteration (10 million actions I guess?), the time it wasted on SpinWait was only about 4%. 30% was just moving through segments
so, in theory reducing the # of segments (that is, increasing the segment size) could help...
but at some point I wonder if you're not optimizing only for the benchmark
in the case of DequeueAll, if it needs to loop and lock through all segments, there might not be a lot of gain. But it could take each entire segment as an array and work through it before going to the next
Aaron Stannard
@Aaronontheweb
@Horusiath @rogeralsing 1.0 should be ready to go shortly
1-2 weeks IIRC
Bartosz Sypytkowski
@Horusiath
Recently I've asked myself two basic questions that I have a hard time finding answers for:
  1. Why JSON.NET? Why not, for example, protocol buffers by default?
  2. Why is there no globally configurable receive timeout?
Roger Johansson
@rogeralsing
  1. because of friction: json.net is pretty much the only serializer that allows serializing messages of any shape or form by default, w/o altering constructors or adding attributes or interfaces
and surrogate support is mandatory, as ActorRef needs special handling to be resolved into a real actor ref, and it can be embedded deeply inside another message
I do want to get rid of Json.NET, but most other serializers lack some features
e.g. polymorphic serialization
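The polymorphic-serialization point can be illustrated with Json.NET's `TypeNameHandling` setting, which round-trips messages of any shape without attributes or special constructors. This is an illustrative sketch, not Akka.NET's actual serializer configuration; the `Message`/`Ping` types are hypothetical:

```csharp
using Newtonsoft.Json;

// With TypeNameHandling enabled, Json.NET embeds a "$type" hint so the
// concrete type survives deserialization even when the declared type is
// an abstract base: exactly the polymorphic case described above.
var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.All
};

string json = JsonConvert.SerializeObject(
    new Ping { From = "actor-a" }, typeof(Message), settings);
var message = JsonConvert.DeserializeObject<Message>(json, settings);
// "message" comes back as a Ping, even though we only asked for a Message.

public abstract class Message { }
public sealed class Ping : Message { public string From { get; set; } }
```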
Bartosz Sypytkowski
@Horusiath
and what about the 2nd? Don't you think we should add a globally configurable HOCON option for this? I think it's very risky that right now we don't have any kind of timeout configured by default. It's not safe: if a user forgets at any point to set a receive timeout, their service could hang forever
Roger Johansson
@rogeralsing
I'm not following here. Do you mean a timeout for the receive method? I thought you were talking about ReceiveTimeout, the callback feature for actors
if you mean a way to ensure that no receive method runs forever: I have looked into this, but there is no fast way to deal with it. There would need to be some supervisor thread that polls that no thread has been executing a receive block for too long
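For reference, the ReceiveTimeout callback feature mentioned above looks like this: the actor asks the system to send it a `ReceiveTimeout` message when no message has arrived for a given interval. It guards idle actors; it cannot interrupt a receive handler that never returns. A minimal sketch (`IdleGuard` is a hypothetical actor):

```csharp
using System;
using Akka.Actor;

// Sketch of the ReceiveTimeout feature: if no message arrives within
// 30 seconds, the system delivers a ReceiveTimeout message to the actor.
public sealed class IdleGuard : ReceiveActor
{
    public IdleGuard()
    {
        Context.SetReceiveTimeout(TimeSpan.FromSeconds(30));

        Receive<ReceiveTimeout>(_ =>
        {
            // No traffic for 30 seconds; stop the actor.
            Context.Stop(Self);
        });

        // Any normal message resets the timer.
        Receive<string>(msg => Sender.Tell("ack"));
    }
}
```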
Roger Johansson
@rogeralsing
there is a dispatcher setting, "deadlinetime", which decides how long each mailbox run may run, but that does not deal with frozen/zombie threads. It only decides when the mailbox run should exit a batch op
During the very early stages of Akka.NET, I put together this: https://gist.github.com/rogeralsing/8472797 it does what you want, but it's expensive, and the implementation would need to be cleaned up
Bartosz Sypytkowski
@Horusiath
I think the most common case is Ask. AFAIK it doesn't have any timeout by default
Michal Franc
@michal-franc
there is no default timeout. I just crashed my VS because of that, due to an NCrunch test running and running infinitely :)
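As discussed above, Ask without a timeout waits for a reply indefinitely, which is what hung the test run. Passing an explicit timeout makes the returned Task fault instead of hanging. A sketch, where `greeter` is a hypothetical `IActorRef` that replies with a string:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;

// With an explicit timeout, an unanswered Ask faults the Task after
// 3 seconds (with an AskTimeoutException) instead of waiting forever.
static async Task<string> GreetOrFail(IActorRef greeter)
{
    return await greeter.Ask<string>("hello", TimeSpan.FromSeconds(3));
}
```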
Roger Johansson
@rogeralsing
that's pretty much the same as for any code. You can't make a function break after x time in OOP either
Dave Bettin
@dbettin
I plan on using Akka.NET; is it a terrible idea to read Akka in Action? I am concerned it will lead me down a path of confusion.
Roger Johansson
@rogeralsing
No, it's a great book. Actually I'm reading it right now, halfway through, and everything applies except for the Scala-specific stuff like their Future library
Dave Bettin
@dbettin
Perfect! Thanks!
Bartosz Sypytkowski
@Horusiath
is there any way to override actor system settings after it's created?
Bartosz Sypytkowski
@Horusiath
I've got a lot of freakin' problems with the persistence specs. When I run them one by one, it's all OK; when I run them all at once, random failures occur almost every time