Aaron Stannard
@Aaronontheweb
I've literally never had this issue happen to me on v0.4.6 but other users have reported it
I would strongly, strongly encourage rolling back to v0.4.6
Jesse Connor
@jesseconnr
I was having the issue on 0.4.6 and an older version of Akka, and upgraded to see if it would fix it. No dice.
Aaron Stannard
@Aaronontheweb
well shucks
can you guys do me a favor and open an issue on the DotNetty github
I don't have a window where I can even look into the issue for another 5 days
you're going to get a faster resolution on it if we can get their org to start looking at the problem
this is not an Akka.NET issue
it's a problem with length-frame encoding not being written or read correctly
Jesse Connor
@jesseconnr
Ah yeah I see the bug report and the mention. I get that same error. DotNetty.Codecs.TooLongFrameException: Adjusted frame length exceeds 128000: 419561476 - discarded
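The 128000 in that exception lines up with Akka.Remote's default maximum-frame-size of 128000b on the DotNetty TCP transport. Below is a minimal sketch of where that knob lives, using a placeholder system name and value; raising the limit only matters for genuinely large messages and would not fix a corrupted length header like the 419561476 above.

    // Sketch: the 128000-byte limit comes from Akka.Remote's maximum-frame-size.
    // "Demo" and 256000b are placeholder values, not recommendations.
    using Akka.Actor;
    using Akka.Configuration;

    class FrameSizeSketch
    {
        static void Main()
        {
            var config = ConfigurationFactory.ParseString(@"
                akka.remote.dot-netty.tcp {
                    maximum-frame-size = 256000b  # default is 128000b
                }");

            var system = ActorSystem.Create("Demo", config);
            // ... remoting-enabled actors as usual ...
            system.Terminate().Wait();
        }
    }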
Aaron Stannard
@Aaronontheweb
Azure/DotNetty#360
Jesse Connor
@jesseconnr
hmm, v0.5 was "next week" back in February, but I guess it never got there
Aaron Stannard
@Aaronontheweb
I have the DotNetty guys on the horn in their gitter now
please go post a copy of your error message et al
that would help
have to demonstrate the pervasiveness of the issue
Jesse Connor
@jesseconnr
Would having an old 1.3.0 version of Akka.Serialization.Hyperion possibly have an effect?
I just realized that when everything else was updated, that one wasn't, because I hadn't checked "include prerelease" in Visual Studio.
Aaron Stannard
@Aaronontheweb
try an upgrade and see what happens
but this looks really clearly like a DotNetty issue
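For context on the Hyperion question: the package is normally wired in through a serializer binding in HOCON, so a stale Akka.Serialization.Hyperion sitting next to newer Akka packages is exactly the kind of mismatch the upgrade is meant to rule out. A rough sketch of the usual wiring, using the standard class and assembly names from that package and a placeholder system name:

    // Sketch: typical Hyperion serializer wiring in Akka.NET.
    // "Demo" is a placeholder system name.
    using Akka.Actor;
    using Akka.Configuration;

    class HyperionSketch
    {
        static void Main()
        {
            var config = ConfigurationFactory.ParseString(@"
                akka.actor {
                    serializers {
                        hyperion = ""Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion""
                    }
                    serialization-bindings {
                        ""System.Object"" = hyperion
                    }
                }");

            var system = ActorSystem.Create("Demo", config);
            // ... remote messages now serialize via Hyperion ...
            system.Terminate().Wait();
        }
    }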
Ryan Anthony
@ryandanthony_twitter
Can you use a Lighthouse type of solution with Akka.Cluster.Sharding?
Aaron Stannard
@Aaronontheweb
Ryan Anthony
@ryandanthony_twitter
ok let me try that
thanks
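For anyone following along: a Lighthouse node is just a dedicated seed node, so the usual recipe is to point cluster.seed-nodes at it and start the shard region on the worker nodes. Here is a rough sketch under assumed names (worker-host, lighthouse-host:4053, MyEntity, Envelope); the system name in the seed-node address has to match the one the Lighthouse process runs with.

    // Sketch: joining via a Lighthouse-style seed node and starting a shard region.
    // Host names, the port, and the MyEntity/Envelope types are placeholders.
    using Akka.Actor;
    using Akka.Cluster.Sharding;
    using Akka.Configuration;

    class ShardingSketch
    {
        static void Main()
        {
            var config = ConfigurationFactory.ParseString(@"
                akka {
                    actor.provider = cluster
                    remote.dot-netty.tcp {
                        hostname = ""worker-host""
                        port = 0
                    }
                    cluster.seed-nodes = [""akka.tcp://MySystem@lighthouse-host:4053""]
                }");

            var system = ActorSystem.Create("MySystem", config);

            var region = ClusterSharding.Get(system).Start(
                "my-entities",                           // shard region type name
                Props.Create<MyEntity>(),                // entity actor Props
                ClusterShardingSettings.Create(system),
                new EnvelopeExtractor());                // routes by entity id

            // region.Tell(new Envelope { Id = "42", Payload = ... }) reaches entity "42"
            system.WhenTerminated.Wait();
        }
    }

    class Envelope
    {
        public string Id { get; set; }
        public object Payload { get; set; }
    }

    class MyEntity : ReceiveActor
    {
        public MyEntity()
        {
            Receive<Envelope>(e => { /* handle e.Payload for this entity */ });
        }
    }

    class EnvelopeExtractor : HashCodeMessageExtractor
    {
        public EnvelopeExtractor() : base(100) { }  // max number of shards
        public override string EntityId(object message) => (message as Envelope)?.Id;
        public override object EntityMessage(object message) => message;
    }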
Jesse Connor
@jesseconnr
Ah well, I would try, but using NuGet from within Visual Studio to downgrade was apparently a bad idea; everything is wonderfully jacked now. :)
Aaron Stannard
@Aaronontheweb
lol
well, got some answers from the DotNetty crew that make some sense
due to some issues with .NET Core, it's possible that the byte buffer pools DotNetty uses can be released early
by the socket itself
especially true on Linux
we use the default byte buffer allocator in our DotNetty transport, which uses pooling
this normally helps performance big time, because it eliminates allocations and keeps GC pressure down
guess I could expose some sort of setting to change the pooling strategy within Akka.Remote's DotNetty transport ("none" being an option)
but anyway, that issue is fixed in DotNetty v0.5.0 and they have one last PR they're trying to sort out before moving ahead with that
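To unpack the allocator bit at the DotNetty level: the pooled allocator reuses buffers across channels, which is where the early-release problem bites, while the unpooled one hands out fresh buffers each time. The sketch below only illustrates how a DotNetty bootstrap's allocator can be swapped; it is not Akka.Remote's actual transport code, and the usePooling flag is an assumed stand-in for the setting Aaron mentions.

    // Sketch: choosing between DotNetty's pooled and unpooled byte buffer
    // allocators on a client bootstrap. Illustration only; not Akka.Remote internals.
    using DotNetty.Buffers;
    using DotNetty.Transport.Bootstrapping;
    using DotNetty.Transport.Channels;
    using DotNetty.Transport.Channels.Sockets;

    class AllocatorSketch
    {
        static Bootstrap ConfigureClient(bool usePooling)
        {
            // Pooling avoids per-message allocations and GC pressure;
            // disabling it trades throughput for simpler buffer lifetimes.
            IByteBufferAllocator allocator = usePooling
                ? (IByteBufferAllocator)PooledByteBufferAllocator.Default
                : UnpooledByteBufferAllocator.Default;

            return new Bootstrap()
                .Group(new MultithreadEventLoopGroup())
                .Channel<TcpSocketChannel>()
                .Option(ChannelOption.Allocator, allocator)
                .Handler(new ActionChannelInitializer<ISocketChannel>(channel =>
                {
                    // length-frame encoder/decoder and other handlers would go here
                }));
        }
    }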
Ryan Anthony
@ryandanthony_twitter
@Aaronontheweb thanks for the help, that did it. Now onto the next problem.
If I have 2+ nodes (cluster sharding) running and I start sending messages, none get processed. Once I drop down to 1 node, processing starts on that node, and then I can later add additional nodes and they will sometimes pick up the load. Is this by design?
am I missing another config?
Aaron Stannard
@Aaronontheweb
oh man, this is running v1.3.5 right?
Ryan Anthony
@ryandanthony_twitter
yea
beta60
Aaron Stannard
@Aaronontheweb
sounds like this bug we fixed
that we're trying to ship in 1.3.6
Ryan Anthony
@ryandanthony_twitter
ok
Aaron Stannard
@Aaronontheweb
if you try the nightlies that should work
Ryan Anthony
@ryandanthony_twitter
ok
Aaron Stannard
@Aaronontheweb
my bad on the delay on the 1.3.6 release
Ryan Anthony
@ryandanthony_twitter
I understand
when do you think that the cluster sharding stuff will be out of beta?
Aaron Stannard
@Aaronontheweb
soon, hopefully - we just fixed the biggest bug with it in 1.3.6