Natan Vivo
@nvivo
We can see in the code that is on GitHub that ~30% of the time is spent on TryDequeue and 30% on Count
most of this time is just looping through segments inside the ConcurrentQueue
so it's not actually locks, just regular code running. it's hard to optimize it more for general use cases.
(when I say locks in ConcurrentQueue, I mean spinwait, not the lock keyword)
Natan Vivo
@nvivo
if you were to push all the overhead onto adding and none onto retrieving, the most optimal solution would be to use a simple List<T> with a predefined size, using locks/spinwait on Add; for dequeue you have a single method that locks/spinwaits and exchanges the list with a new one. you can then loop over the list with tasks in the pool, directly as an array. the issue is that it's hard to know what size this array should be
so it becomes a trade-off against a linked list doing the same.
after that, the path would be a linked list of arrays, and that's exactly what ConcurrentQueue is...
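Roughly, the swap idea looks like this (a minimal sketch with hypothetical names, not actual Akka.NET mailbox code): producers pay a small synchronization cost on Add, and the consumer takes the whole batch in a single exchange.

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the "swap the whole list" idea described above.
public class SwappingQueue<T>
{
    private readonly object _gate = new object();
    private List<T> _items = new List<T>(1024); // the predefined size is the hard part

    public void Add(T item)
    {
        lock (_gate)            // could also be a SpinLock/spinwait
        {
            _items.Add(item);
        }
    }

    // Returns the whole batch and starts a fresh list; the caller can then
    // iterate the returned list directly, like an array, without further locking.
    public List<T> DequeueAll()
    {
        var fresh = new List<T>(1024);
        lock (_gate)
        {
            var current = _items;
            _items = fresh;
            return current;
        }
    }
}
```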
Natan Vivo
@nvivo
the point I made about the test is that unless you have some time to fill a list with some items without dequeueing, the gains you get from swapping the list are lost. in the test, it keeps producing and swapping all the time, so the cost of swapping becomes higher. if you give it some time to fill a list and only swap once there is a minimum number of items, swapping becomes cheaper and these other solutions apply
another thing.. if using ConcurrentQueue, replacing Count with IsEmpty is much cheaper, as IsEmpty only checks the local segment, while Count needs to loop through all of them
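For reference, the difference looks like this; IsEmpty and Count are both real members of System.Collections.Concurrent.ConcurrentQueue<T>:

```csharp
using System.Collections.Concurrent;

var queue = new ConcurrentQueue<int>();

// Cheaper: IsEmpty only has to inspect the head segment.
if (!queue.IsEmpty)
{
    // schedule a mailbox run, etc.
}

// More expensive: Count walks the segments to produce an exact number,
// so prefer IsEmpty when all you need is "is there anything to process?".
if (queue.Count > 0)
{
    // ...
}
```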
Roger Johansson
@rogeralsing
:+1:
Roger Johansson
@rogeralsing
another solution would be to add a true DequeueAll in a modified version of ConcurrentQueue. that would be feasible too, right? we could just make the ConcurrentQueue head point to where the last segment is, and then enumerate over all of the segments we just removed... haven't checked the code for it, but that should work, right?
Natan Vivo
@nvivo
It could work. One of the things I thought about is just using that code and tweaking some parameters to match the usage. ConcurrentQueue has a lot of parameters inside that were probably calculated for the most general cases
as I understand it, what it does inside is create a 32-item array for each segment, and it grows by linking them. Each segment has its own lock, so you only lock if you are modifying that 32-item boundary
From the profiler, even with the 4th iteration (10 million actions I guess?), the time it wasted on spinwait was only about 4%. 30% was just moving through segments
so, in theory reducing the # of segments (that is - increasing the segment size) could help...
but at some point I wonder if you're not optimizing only for the benchmark
in the case of DequeueAll, if it needs to loop and lock through all segments, there might not be a lot of gain. But it could grab the entire segment as an array and work through those before going to the next
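A toy illustration of that DequeueAll idea (a single-lock sketch of the "linked list of 32-item arrays" shape, not the real ConcurrentQueue internals, which use per-segment synchronization and lock-free fast paths):

```csharp
using System;
using System.Collections.Generic;

public class SegmentedQueue<T>
{
    private const int SegmentSize = 32;   // same segment size mentioned above

    private class Segment
    {
        public readonly T[] Items = new T[SegmentSize];
        public int Count;
        public Segment Next;
    }

    private readonly object _gate = new object();
    private Segment _head;
    private Segment _tail;

    public SegmentedQueue()
    {
        _head = _tail = new Segment();
    }

    public void Enqueue(T item)
    {
        lock (_gate)
        {
            if (_tail.Count == SegmentSize)
            {
                _tail.Next = new Segment();   // grow by linking a new segment
                _tail = _tail.Next;
            }
            _tail.Items[_tail.Count++] = item;
        }
    }

    // Detach the whole segment chain in one operation, then hand each
    // segment back as an array so the consumer works one batch at a time.
    public List<T[]> DequeueAll()
    {
        Segment detached;
        lock (_gate)
        {
            detached = _head;
            _head = _tail = new Segment();
        }

        var batches = new List<T[]>();
        for (var s = detached; s != null; s = s.Next)
        {
            if (s.Count == 0) continue;
            var batch = new T[s.Count];
            Array.Copy(s.Items, batch, s.Count);
            batches.Add(batch);
        }
        return batches;
    }
}
```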
Aaron Stannard
@Aaronontheweb
@Horusiath @rogeralsing 1.0 should be ready to go shortly
1-2 weeks IIRC
Bartosz Sypytkowski
@Horusiath
Recently I've asked myself two basic questions that I have a hard time finding answers for:
  1. Why JSON.NET? Why not for example protocol buffers by default?
  2. Why is there no globally configurable receive timeout?
Roger Johansson
@rogeralsing
  1. because of friction, json.net is pretty much the only serializer that allows serializing messages of any shape or form by default w/o altering constructors, adding attributes or interfaces
and as for surrogates: surrogate support is mandatory, as ActorRef needs special handling to be resolved into a real ActorRef.. and it can be embedded deeply inside another message
I do want to get rid of Json.NET, but most other serializers lack some features
e.g. polymorphic serialization
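For context, the polymorphic serialization Json.NET gives almost for free is its TypeNameHandling setting; the message types below are hypothetical:

```csharp
using Newtonsoft.Json;

// Hypothetical message hierarchy; no attributes or special ctors required.
public abstract class Command { }
public class StartWork : Command { public string JobId { get; set; } }

class Demo
{
    static void Main()
    {
        var settings = new JsonSerializerSettings
        {
            // Embeds the concrete CLR type in the payload so it can be
            // deserialized back into StartWork, not just Command.
            TypeNameHandling = TypeNameHandling.All
        };

        Command msg = new StartWork { JobId = "job-42" };
        string json = JsonConvert.SerializeObject(msg, settings);
        var roundTripped = (Command)JsonConvert.DeserializeObject(json, settings);
        // roundTripped is a StartWork instance again.
    }
}
```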
Bartosz Sypytkowski
@Horusiath
and what about the 2nd? Don't you think we should add a globally configurable HOCON option for this? I think it's very risky that right now we don't have any kind of timeout configured by default. It's not safe: if a user forgets at any point to set a receive timeout, his/her service could possibly hang forever
Roger Johansson
@rogeralsing
I'm not following here; do you mean a timeout for the receive method? I thought you were talking about ReceiveTimeout, the callback feature for actors
if you mean a way to ensure that no receive method runs forever.. I have looked into this, but there is no fast way to deal with it. there would need to be some supervisor thread that polls to check that no thread has been executing a receive block for too long
Roger Johansson
@rogeralsing
there is a dispatcher setting "deadlinetime" which decides for how long each mailbox run may run, but that does not deal with frozen/zombie threads. it only decides when the mailbox run should exit a batch op
During the very early stages of akka.net, I put together this: https://gist.github.com/rogeralsing/8472797. It does what you want, but it's expensive, and the implementation would need to be cleaned up
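For readers mixing the two features up: ReceiveTimeout is the per-actor idle-timeout callback, and a minimal sketch of it looks like this (it does nothing about a handler that blocks forever):

```csharp
using System;
using Akka.Actor;

// ReceiveTimeout fires when the actor has been idle (no messages) for the
// configured period; it is not a watchdog for a receive block that never returns.
public class IdleAwareActor : ReceiveActor
{
    public IdleAwareActor()
    {
        Context.SetReceiveTimeout(TimeSpan.FromSeconds(30));

        Receive<string>(msg =>
        {
            // normal work...
        });

        Receive<ReceiveTimeout>(_ =>
        {
            // no message for 30 seconds: e.g. passivate or stop
            Context.Stop(Self);
        });
    }
}
```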
Bartosz Sypytkowski
@Horusiath
I think that the most common case is Ask - AFAIK it doesn't have any timeout by default
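A minimal sketch of guarding an Ask with an explicit timeout (the exact overloads may differ between Akka.NET versions; the helper and message are hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;

public static class AskExample
{
    // Without the TimeSpan argument, the Ask can wait indefinitely
    // if the target actor never replies.
    public static async Task<string> QueryAsync(IActorRef target)
    {
        // Fails the task if no reply arrives within 5 seconds.
        return await target.Ask<string>("give me data", TimeSpan.FromSeconds(5));
    }
}
```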
Michal Franc
@michal-franc
there is no default timeout. I just crashed my VS because of that, due to an infinite NCrunch test running and running :)
Roger Johansson
@rogeralsing
that's pretty much the same as for any code. you can't make a function break after x time in OOP either
Dave Bettin
@dbettin
I plan on using Akka.net; Is it a terrible idea to read Akka in Action? I am concerned it will lead me down a path of confusion.
Roger Johansson
@rogeralsing
No, it's a great book, actually I'm reading it right now. I'm halfway through and everything applies except for the Scala-specific stuff like their Future library
Dave Bettin
@dbettin
Perfect! Thanks!
Bartosz Sypytkowski
@Horusiath
is there any way to override actor system settings after it's created?
Bartosz Sypytkowski
@Horusiath
I've got a lot of freakin' problems with persistence specs: when I run them one by one, it's all ok, but when I run them all at once, random failures occur almost every time
Roger Johansson
@rogeralsing
Depends on what settings you want to change. the core settings are parsed only once at start-up, but you can inject top-level fallbacks (you do that in the F# module, right?)
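Roughly what injecting a fallback looks like (ConfigurationFactory and WithFallback are real Akka.NET APIs; the keys and system name here are just examples):

```csharp
using Akka.Actor;
using Akka.Configuration;

// Settings are parsed once when the ActorSystem starts, so "changing them later"
// really means composing the config up front and falling back to defaults
// for anything you don't specify yourself.
var userConfig = ConfigurationFactory.ParseString(@"
    akka.loglevel = DEBUG
");

var config = userConfig.WithFallback(ConfigurationFactory.Default());

var system = ActorSystem.Create("my-system", config);
```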
Bartosz Sypytkowski
@Horusiath
@rogeralsing nope, just for some specs (I've noticed that one of my tests using TestLatches was hanging when message serialization was turned on)
but eventually I just moved it to separate spec class
Roger Johansson
@rogeralsing
I've seen a similar problem before. back then, it was that some static resources, like NoRouter or DefaultDeploy or whatever, were overwritten by the serializer.. e.g. some primitives took a static resource as an input argument, then the serializer overwrote the properties of the primitive, which was backed by the static resource, and tada, the static resource was corrupted
could that be a similar problem here?
e.g. public Props() : this(Some.Static.Resource) {} ... then when the deserializer tried to deserialize an object with such a ctor, the injected resource would be corrupted
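A stripped-down illustration of that failure mode, with hypothetical types rather than the real Props code:

```csharp
// Hypothetical example of the bug pattern described above.
public class Deploy
{
    // Shared default instance used as a convenient fallback everywhere.
    public static readonly Deploy Default = new Deploy();

    public string Path { get; set; } = "";
}

public class Props
{
    public Deploy Deployment { get; private set; }

    // Parameterless ctor (often needed by serializers) delegates to the shared static.
    public Props() : this(Deploy.Default) { }

    public Props(Deploy deployment) { Deployment = deployment; }
}

// A deserializer that creates Props via the parameterless ctor and then writes
// properties onto Props.Deployment is mutating Deploy.Default itself, silently
// corrupting the shared static resource for every other Props instance.
```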
Bartosz Sypytkowski
@Horusiath
I know that the test latch uses System.Threading.CountdownEvent, so I suppose it may not be safe to try to serialize/deserialize it
Roger Johansson
@rogeralsing
were you passing a testlatch object as a message?
Bartosz Sypytkowski
@Horusiath
I have to for test verification ;)
Roger Johansson
@rogeralsing
and if so, shouldn't that be marked with INoSerializationNeeded or whatever the name is
NoSerializationVerificationNeeded
that will ofc not help if the testlatch is part of a bigger message, but if it is the root, that would make it bypass serialization in in-proc systems
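For reference, a sketch of marking a test-only message with the Akka.NET marker interface (the message type and its latch field are hypothetical):

```csharp
using System.Threading;
using Akka.Actor;

// Marking the message with INoSerializationVerificationNeeded makes the local
// transport skip serialization verification even when
// akka.actor.serialize-messages = on, so the CountdownEvent inside is never
// pushed through a serializer.
public class LatchMessage : INoSerializationVerificationNeeded
{
    public LatchMessage(CountdownEvent latch)
    {
        Latch = latch;
    }

    public CountdownEvent Latch { get; }
}
```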
Bartosz Sypytkowski
@Horusiath
it worked, I didn't know about that interface