Nikita Tsukanov
@kekekeks
It doesn't actually have built-in worker support, but the first variant can be easily implemented with it
Roman Golenok
@shersh
@kekekeks thanks, I did see that project. My question is just for self-learning
Joshua Benjamin
@annymsMthd
@smalldave About running the Node tests in an AppDomain instead of starting a new process for each
Joshua Benjamin
@annymsMthd
@smalldave @Aaronontheweb My local branch now uses AppDomain for the individual nodes in multinode tests, and I can debug the process without additional add-ons. Xunit actually does this under the covers when it runs a test suite. It has helped me spot a few race conditions. I am also seeing an issue with the ClusterDeathWatchSpec though: the node that was downed by the leader isn't recognizing that it itself is down, and therefore isn't shutting down.
Nikita Tsukanov
@kekekeks
Dumb question. How does Akka.Persistence deal with renamed classes/namespaces?
I've seen "$type" in snapshots with FQ type names
Bartosz Sypytkowski
@Horusiath
It's pretty simple: it doesn't :)
It depends on the serializer used (I don't remember how JSON.NET behaves when the $type field refers to a non-existent type)
Roger Johansson
@rogeralsing
It would be pretty easy to add some resolver for that
Bartosz Sypytkowski
@Horusiath
If you have to, you could always use a custom serializer that handles that problem for persistent events
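A minimal sketch of the kind of resolver being discussed here, assuming the default JSON.NET serializer; the type names (OrderPlaced/OrderSubmitted) are hypothetical, and how the settings get wired into Akka.Persistence depends on your serializer setup:

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace MyApp.Events
{
    // Hypothetical event class; older snapshots persisted it as "MyApp.Events.OrderPlaced".
    public class OrderSubmitted
    {
        public int OrderId { get; set; }
    }
}

namespace MyApp.Serialization
{
    // Maps old "$type" names found in persisted data onto the renamed classes.
    public class RenamedTypesBinder : DefaultSerializationBinder
    {
        public override Type BindToType(string assemblyName, string typeName)
        {
            if (typeName == "MyApp.Events.OrderPlaced")
                return typeof(MyApp.Events.OrderSubmitted);

            return base.BindToType(assemblyName, typeName);
        }
    }

    public static class JsonSettingsExample
    {
        // Settings that emit "$type" and resolve old names through the binder.
        public static JsonSerializerSettings Settings => new JsonSerializerSettings
        {
            TypeNameHandling = TypeNameHandling.All,
            SerializationBinder = new RenamedTypesBinder()
        };
    }
}
```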
Nikita Tsukanov
@kekekeks
Is there some way to access the default serializer's configuration?
Bartosz Sypytkowski
@Horusiath
e.g. you could persist your events with a Protocol Buffers serializer - since it uses explicit proto schemas, your mapped classes are good to go as long as they satisfy the schema (which depends only on field order and type, not names)
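For illustration, a sketch of that point using protobuf-net (the library choice and the event type are assumptions, not from the chat): the wire format keys on field numbers and types, so property names can change between versions without breaking old payloads.

```csharp
using ProtoBuf;

// Hypothetical persisted event. The schema is defined by the field numbers
// and types in [ProtoMember]; renaming the properties does not affect it.
[ProtoContract]
public class AccountCredited
{
    [ProtoMember(1)]
    public string AccountId { get; set; }   // was called "OwnerId" in an older version

    [ProtoMember(2)]
    public long AmountCents { get; set; }
}
```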
Nikita Tsukanov
@kekekeks
Well, I'm usually against using human-unreadable formats for persistence
Aaron Stannard
@Aaronontheweb
the serialization layer is fully pluggable
Nikita Tsukanov
@kekekeks
And with protobuf you can't switch data types (e.g. from int to double, string, etc.)
Aaron Stannard
@Aaronontheweb
the strong typing protobuf introduces is where all of the serialization performance benefits come from
Bartosz Sypytkowski
@Horusiath
but you can change field names, and protobuf is faster and more compact
Nikita Tsukanov
@kekekeks
Protobuf is great for transferring data on the wire, but for something that will be stored for years? Probably not.
Aaron Stannard
@Aaronontheweb
yep, I agree with that
serialization overhead isn't something that matters much in the context of durable stores
Aaron Stannard
@Aaronontheweb
alrighty, meeting time
Joshua Benjamin
@annymsMthd
Is the meeting in here?
Natan Vivo
@nvivo
@Aaronontheweb @rogeralsing about the dispatchers: from what I understood, the main use cases for setting a dispatcher are a) to specify a thread model for performance reasons, so your actor is bound to one or more specific threads, or b) to limit the damage area in case too much work is being done, so you don't starve the rest of the system. Is that right? Are there other common use cases?
Nikita Tsukanov
@kekekeks
To limit the number of concurrent connections to something like Postgres (which really doesn't like it when you have >100 connections to it)
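As a sketch of that idea (the dispatcher name and numbers are illustrative, not from the chat): a dedicated dispatcher with a capped thread pool, assigned only to the DB-facing actors, bounds how many of them run at the same time.

```hocon
# Hypothetical HOCON: a dedicated dispatcher with a small, fixed thread pool
db-dispatcher {
  type = ForkJoinDispatcher
  throughput = 100
  dedicated-thread-pool {
    thread-count = 4    # at most 4 DB-bound actors execute concurrently
  }
}
```

An actor would then be deployed with something like Props.Create&lt;DbWorkerActor&gt;().WithDispatcher("db-dispatcher"), where DbWorkerActor is a hypothetical actor class.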
David Smith
@smalldave
sorry guys. as you might have gathered I'm not going to make the meeting tonight
@annymsMthd sounds good. We did discuss using AppDomains instead of processes before; somebody came up with what I remember being a good reason not to, possibly @rogeralsing. Certainly not against the idea though. It would make things a lot easier.
Arjen Smits
@Danthar
@skotzko interesting email
Bartosz Sypytkowski
@Horusiath

To limit the number of concurrent connections to something like Postgres

@kekekeks This doesn't sound like a job for a dispatcher; it's rather a matter of designing a constrained pool of actors to manage the connections
Nikita Tsukanov
@kekekeks
Do you use the database from a single actor type?
Natan Vivo
@nvivo
@Horusiath I'm looking for things to write in the dispatcher documentation, like "reasons you might want to do this". If you have more recommendations, I'd be glad to hear them
Nikita Tsukanov
@kekekeks
I.e. one can have a ton of different actors that act like repositories
Natan Vivo
@nvivo
@kekekeks you can restrict concurrent connections to a database with routers; I have been doing it with actors only
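A minimal sketch of that approach (DbQueryActor and the message shape are made up for illustration): a fixed-size router pool caps the number of query actors, and therefore concurrent connections, at five.

```csharp
using Akka.Actor;
using Akka.Routing;

// Hypothetical worker: opens a connection, runs the query it receives, replies.
public class DbQueryActor : ReceiveActor
{
    public DbQueryActor()
    {
        Receive<string>(sql =>
        {
            // ... open a connection, execute the query ...
            Sender.Tell($"executed: {sql}");   // stand-in reply
        });
    }
}

public static class DbRouterSetup
{
    public static IActorRef CreatePool(ActorSystem system) =>
        // RoundRobinPool(5): only five DbQueryActor instances exist,
        // so at most five queries (and connections) are in flight at once.
        system.ActorOf(
            Props.Create<DbQueryActor>().WithRouter(new RoundRobinPool(5)),
            "db-queries");
}
```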
Bartosz Sypytkowski
@Horusiath
@kekekeks you may wrap the database access layer in an actor
Nikita Tsukanov
@kekekeks
And lose access to ORM?
nope
Natan Vivo
@nvivo
you don't need to lose ORM
Nikita Tsukanov
@kekekeks
You lost me here
You can't use one pool, since the actors are actually of different types
i.e. UsersRepository, MessageRepository and so on
and you might even have pools for groups of actors of one type
Natan Vivo
@nvivo
it all depends on how you see the problem
Nikita Tsukanov
@kekekeks
In this case you'll have a lot of actors that do exclusively DB operations but are limited by the DB connection pool
Arjen Smits
@Danthar
Besides leveraging the supervision capabilities, and perhaps error handling, why would you want your data access, which is already abstracted into repositories, abstracted behind an actor?
Natan Vivo
@nvivo
if you want to restrict all queries to a database, you could create an actor that routes or executes only 5 queries at a time, for example (just an idea)
if you want to abstract per repository, then you can use all of your ORM inside that actor
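And a sketch of the per-repository variant (the UsersRepositoryActor, the FindUser message, and the commented-out ORM call are all hypothetical): the actor owns the ORM context, so every query against that aggregate is funneled through one mailbox.

```csharp
using Akka.Actor;

// Hypothetical query message
public sealed class FindUser
{
    public FindUser(int id) => Id = id;
    public int Id { get; }
}

public class UsersRepositoryActor : ReceiveActor
{
    public UsersRepositoryActor()
    {
        Receive<FindUser>(msg =>
        {
            // using (var db = new AppDbContext())          // placeholder for your ORM
            //     Sender.Tell(db.Users.Find(msg.Id));
            Sender.Tell($"user {msg.Id}");                  // stand-in reply
        });
    }
}
```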
Nikita Tsukanov
@kekekeks
in this case I'll have to do everything in one god-class
Arjen Smits
@Danthar
@nvivo true, but if you are using an ORM, access at that level, for that kind of manipulation, is abstracted away
Natan Vivo
@nvivo
the point is that there is not a single way to do it. Akka doesn't require you to stop using an ORM
depends on the ORM, right?