Bartosz Sypytkowski
@Horusiath
@snekbaev sorry, but I don't have an obvious answer for you. What I can think of in your scenario is to use ClusterClient from Akka.Cluster.Tools to connect the web apps as clients, and to hook into your web application's lifecycle so you can order it to shut down gracefully (via the ActorSystem's shutdown).
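A minimal sketch of the ClusterClient approach suggested above: the web app stays outside the cluster and talks to it through receptionist contact points. The actor path, service name, and system name here are illustrative, not from the conversation.

```csharp
using Akka.Actor;
using Akka.Cluster.Tools.Client;

var system = ActorSystem.Create("web-app");

// Initial contact points are normally supplied via HOCON
// (akka.cluster.client.initial-contacts).
var client = system.ActorOf(
    ClusterClient.Props(ClusterClientSettings.Create(system)),
    "cluster-client");

// Send to a service actor registered with the ClusterClientReceptionist
// on the cluster side.
client.Tell(new ClusterClient.Send("/user/my-service", "hello"));

// On web app shutdown, terminate the ActorSystem gracefully so the
// cluster never tries to reconnect to the client.
await system.Terminate();
```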
Bartosz Sypytkowski
@Horusiath
@Dan-Albrecht depending on how massive these substitutions are:
  • if I'm right, ${ENV_VAR} should work for environment variable substitution by now
  • if I need to substitute a few config values from code, I just build a HOCON string directly in code (lame, I know, but we're working on making dynamic in-code configs possible)
  • if I need to substitute a lot (like dev/prod), I just have my-project.dev.conf and my-project.prod.conf and, depending on the build specifics, load one file or the other.
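The in-code option above can be sketched as follows; the keys and values are illustrative, but `ConfigurationFactory.ParseString` and `WithFallback` are the standard Akka.NET configuration APIs.

```csharp
using Akka.Configuration;

// Per-environment values built as a small HOCON string in code.
var envSpecific = ConfigurationFactory.ParseString(@"
    my-app.connection-string = ""Server=prod-db;Database=app""
    akka.loglevel = INFO
");

// Shared defaults (in practice loaded from a shared .conf file).
var defaults = ConfigurationFactory.ParseString(@"
    my-app.connection-string = ""Server=localhost;Database=app""
    akka.loglevel = DEBUG
");

// Keys defined in envSpecific win; defaults fill in the rest.
var config = envSpecific.WithFallback(defaults);
```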
AndreSteenbergen
@AndreSteenbergen
@angrybug ah that does make sense
Shukhrat Nekbaev
@snekbaev
@Horusiath thank you for your reply. For that I will need to start using clustering in this stand-alone VM's service; that's the downside. But the upside is that the cluster client (coming from the WebAPI) will be able to connect directly to it, and the cluster itself will never try to connect to the client, so disassociation will never happen. Is my understanding correct? And two more questions: what happens if the WebAPI doesn't shut down its cluster client gracefully? And do you happen to remember a good example that deals with Akka IO? I made my own version, which is a hybrid of the docs + your blog post. It seems to be working OK; I just want to make sure I'm handling/disposing things correctly. For example, when shutting down the server part, I'm getting a DeathPactException coming from the ReceiveListener and some dead letters with SocketEventArgs or something like that.
Dan Albrecht
@Dan-Albrecht
Thanks @Horusiath. We currently have 15 environments, so I want to avoid going down the one-file-per-environment route. I'll play around with having a single shared defaults file, constructing a HOCON string in code for the per-environment stuff, and slapping them together with WithFallback. I was initially opposed to using environment variables, because I'd have to write code to set them, but that might be the cleanest solution...
Aaron Dandy
@aarondandy
Maybe somebody can help me out with this. My context is that I'm coming from a background where I was playing with Orleans and using Service Fabric (virtual) actors. I understand that Akka actors are not virtual, but what I don't understand is how I avoid running out of RAM. I'm assuming there is some pattern involving the actor hierarchy, but I can't find a clean way to understand what that would look like in practice. If I have a parent manage the lifetimes of the child actors, reactivating them and putting them to sleep, that sounds like a bit more work for lazy old me. And even with more motivation, I still have the problem that the parent actor becomes a single bottleneck for messaging. What am I missing? What does a long-lived actor system look like? How are you all approaching problems where you have new entities appearing in a system and slowly fading out of common use, but not vanishing entirely?
Ismael Hamed
@ismaelhamed

@aarondandy I see two main design patterns, depending on your use case:

1) Actors with identity (like users, games, devices, etc.), when you need to evenly distribute actors across the nodes in a cluster. Also, see passivation in cluster sharding for getting actors out of memory when they're idle.
2) Routers, for those cases in which you just need a bunch of actors to perform some work in a load-balancing fashion. If you need to route work with a certain key to the same routee, see the ConsistentHashing routers.

Otherwise, I think you're pretty much on your own when it comes to managing an actor lifecycle.
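The passivation pattern mentioned above can be sketched like this with Akka.Cluster.Sharding: an idle entity asks its parent shard to stop it, freeing memory until the next message re-creates it. The entity class and timeout are illustrative.

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;

public class DeviceEntity : ReceiveActor
{
    public DeviceEntity()
    {
        // Consider this entity idle after 2 minutes without messages.
        Context.SetReceiveTimeout(TimeSpan.FromMinutes(2));

        // Ask the parent shard to stop this entity gracefully; the shard
        // will buffer and redeliver messages that arrive while stopping.
        Receive<ReceiveTimeout>(_ =>
            Context.Parent.Tell(new Passivate(PoisonPill.Instance)));

        ReceiveAny(msg => { /* handle domain messages here */ });
    }
}
```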

Havret
@Havret
Is it possible to use Akka.Persistence without binary serialization? I mean, to persist events in the event journal (SQL Server) in a human-readable way?
Aaron Dandy
@aarondandy
Thanks for the leads ♥️
Ismael Hamed
@ismaelhamed
@Havret like the JSON serializer?
Havret
@Havret
@ismaelhamed Yep, something like that.
Aaron Stannard
@Aaronontheweb
@Havret I thought JSON.NET was the default for Akka.Persistence.SqlServer?
Havret
@Havret
@Aaronontheweb Yes it is, but it saves the data in binary format. I would like to have it as a plain string.
Onur Gumus
@OnurGumus
@ismaelhamed when a new persistent actor is created, how does it know whether it is a new actor or one that needs recovery from the database?
Bartosz Sypytkowski
@Horusiath
@OnurGumus every persistent actor has to define its persistenceId - it's a reference identifier used to track all of the events that correspond to this logical entity. Akka.Persistence requires that there be only one actor with a given persistenceId at a time (otherwise you can corrupt the actor's state); it's up to you to fulfill that requirement. One simple way is to keep all actors under the same parent (if possible). Other parts of the platform, like Akka.Cluster.Sharding, can also help you keep one entity at a time.
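A minimal sketch of a persistent actor with a stable persistenceId as described above; the id scheme and event type are illustrative, but the `PersistenceId`, `Recover`, `Command`, and `Persist` members are the standard Akka.Persistence API.

```csharp
using Akka.Persistence;

public class UserActor : ReceivePersistentActor
{
    private readonly string _persistenceId;
    private int _eventCount;

    // All events for this logical entity are tracked under this id;
    // only one live actor per persistenceId at a time.
    public override string PersistenceId => _persistenceId;

    public UserActor(string userId)
    {
        _persistenceId = $"user-{userId}";

        // Replayed on startup from the journal.
        Recover<string>(evt => _eventCount++);

        // Commands persist an event, then update state in the callback.
        Command<string>(cmd => Persist(cmd, evt => _eventCount++));
    }
}
```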
Shukhrat Nekbaev
@snekbaev
@Horusiath, @Aaronontheweb when Akka.NET Remote is used with a WebApp, it needs special remoting settings (enforce-ip-family = true, dns-use-ipv6 = false, etc.). Those are documented. I've refactored the old remoting logic into Akka IO. It works locally and on the LAN; however, when the WebApp is published it doesn't seem to connect. Are you aware of config/code changes required to make it connect? Thank you!
Amongst logs: Could not establish connection because finishConnect never returned true (consider increasing akka.io.tcp.finish-connect-retries)
Shukhrat Nekbaev
@snekbaev
in code I'm using _remoteAddress = new IPEndPoint( ipAddress, port ); where ipAddress is an IPAddress, for both client and server
Shukhrat Nekbaev
@snekbaev

could it be the

public TcpOutgoingConnection(TcpExt tcp, IActorRef commander, Tcp.Connect connect)
            : base(tcp, new Socket(SocketType.Stream, ProtocolType.Tcp) { Blocking = false }, connect.PullMode)

it creates a new socket, which internally seems to assume it's IPv6 and turns on dual mode. Given the remote config settings, I think they strictly disable anything related to IPv6. Maybe that's why it can't connect...

Shukhrat Nekbaev
@snekbaev
ok, I can confirm it is exactly that. Added the address family and bingo; will open a GitHub issue
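The fix described above can be sketched as follows: constructing the socket with an explicit AddressFamily avoids the IPv6 dual-mode default, which the IPv4-only remoting settings cannot use.

```csharp
using System.Net.Sockets;

// What the two-argument constructor gives you: an IPv6 socket with
// dual mode enabled under the hood.
var dualMode = new Socket(SocketType.Stream, ProtocolType.Tcp)
{
    Blocking = false
};

// IPv4-only, matching dns-use-ipv6 = false / enforce-ip-family = true:
var ipv4Only = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
{
    Blocking = false
};
```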
Chris G. Stevens
@cgstevens
@Havret and @AndreSteenbergen
Thanks for the replies! Sorry I have not been on, as I have been heads-down.
I did end up just injecting the DistributedPubSub into my actor, and everything seems to be working great.
Thank you for your help last week.
Peter Huang
@ptjhuang
Version tolerance in Hyperion - if a new version of an assembly adds a new property to a type, and the VersionTolerance setting is true, should it successfully deserialize a stream to the new version? I had a look at DefaultCodeGenerator.cs, and it seems the field orders are important. But even when that's observed, I've got a gist that seems to fail when adding a new property: https://gist.github.com/angrybug/5e63b0ac8945d51fd62f5279a5db6c0d Often we upgrade a microservice adding a few properties, and it would be nice to just swap over to the new version without recompiling all the dependent services. Is it general best practice to turn version tolerance off?
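A round-trip with Hyperion's version tolerance enabled can be sketched like this; whether deserializing into a type with an added property succeeds is exactly the open question above, so treat this as a repro template rather than an answer. The Person type is illustrative.

```csharp
using System.IO;
using Hyperion;

var serializer = new Serializer(new SerializerOptions(
    versionTolerance: true,          // embed field manifests in the stream
    preserveObjectReferences: true));

using var stream = new MemoryStream();
serializer.Serialize(new Person { Name = "Ada" }, stream);
stream.Position = 0;

// In the failing scenario, Person here would come from the *new* assembly
// version that has an extra property.
var copy = serializer.Deserialize<Person>(stream);

public class Person
{
    public string Name { get; set; }
}
```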
Shukhrat Nekbaev
@snekbaev
here's the issue: akkadotnet/akka.net#3679
joowon
@HIPERCUBE
Peter Huang
@ptjhuang
@HIPERCUBE doesn't look like it. It's tricky to implement the query API, as it is supposed to be able to monitor log additions. For many RDBMSs that means polling, which is hard to make performant.
joowon
@HIPERCUBE
@angrybug https://github.com/AkkaNetContrib/Akka.Persistence.PostgreSql/tree/dev/src/Akka.Persistence.PostgreSql.Tests/Query
I found persistence-query-related test code in the PostgreSQL provider repo.
Is that test code meaningless?
joowon
@HIPERCUBE
```
ReadJournal = Sys.ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);
```
It seems like SqlReadJournal is used for persistence queries when using PostgreSQL.
I tried SqlReadJournal with PostgreSQL, but it doesn't work.
Onur Gumus
@OnurGumus
@Horusiath thanks, though my real question is this: if I create a persistent actor from scratch, does it go to the database during its recovery phase?
Peter Huang
@ptjhuang
@HIPERCUBE that's the generic polling implementation for all SQL - it should work. What errors do you see?
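For reference, a working setup of the read journal being discussed typically looks like the sketch below; the system name, persistence id, and sequence-number bounds are illustrative.

```csharp
using System;
using Akka.Actor;
using Akka.Persistence.Query;
using Akka.Persistence.Query.Sql;
using Akka.Streams;
using Akka.Streams.Dsl;

var system = ActorSystem.Create("query-test");
var materializer = system.Materializer();

// SqlReadJournal.Identifier points at the akka.persistence.query.journal.sql
// config section, which must be present in the loaded HOCON.
var readJournal = PersistenceQuery.Get(system)
    .ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);

// Live query: keeps emitting as new events are written for this id.
readJournal.EventsByPersistenceId("user-1", 0L, long.MaxValue)
    .RunForeach(envelope => Console.WriteLine(envelope.Event), materializer);
```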
Peter Huang
@ptjhuang
and from then on, it just uses newly written events to inform the query side (my mistake - I thought it polls)
joowon
@HIPERCUBE
@angrybug I can't find any error logs.
(image attachment: the query code snippet)
I tried to select events by PersistenceId with the code above, but it doesn't work. Nothing was printed at all.
I also checked the DB log, but there are no SELECT requests at all.
Peter Huang
@ptjhuang
Can you paste in the config?
joowon
@HIPERCUBE
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }
  }
}
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }

    query.journal.sql {
      class = "Akka.Persistence.Query.Sql.SqlReadJournalProvider, Akka.Persistence.Query.Sql"
      refresh-interval = 1s
      max-buffer-size = 100
    }
  }
}
I tried both of them
Bartosz Sypytkowski
@Horusiath

if I create a persistent actor from scratch, does it go to database on its recovery phase ?

@OnurGumus by default, yes.

Onur Gumus
@OnurGumus
@Horusiath This is actually causing an issue for me. My event journal table contains millions of records. It is inefficient to go to the database whenever I create a brand-new persistent actor.
What can I do about it?
by checking ToSequenceNr == 0
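One option for the situation above can be sketched as follows: a persistent actor can override its Recovery to skip the journal round-trip entirely. This is only safe if you can guarantee the entity really is brand new, which is exactly the caveat raised in the reply below. The class name is illustrative.

```csharp
using Akka.Persistence;

public class FreshEntity : ReceivePersistentActor
{
    public override string PersistenceId => "fresh-entity-1";

    // Skip replay from the database on start. The default is
    // Recovery.Default, which replays all events for this PersistenceId.
    public override Recovery Recovery => Recovery.None;

    public FreshEntity()
    {
        Command<string>(cmd => Persist(cmd, _ => { }));
    }
}
```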
Onur Gumus
@OnurGumus
@Horusiath and regarding that, do you think extending FunPersistentActor as below is OK?
type FunPersistentActor2<'Message>(actor: Eventsourced<'Message> -> Effect<'Message>, recovery) =
    inherit FunPersistentActor<'Message>(actor)
    override __.Recovery = recovery

let propsPersist2 (receive: Eventsourced<'Message> -> Effect<'Message>, recovery) : Props<'Message> =
    Props<'Message>.ArgsCreate<FunPersistentActor2<'Message>, Eventsourced<'Message>, 'Message>([| receive, recovery |])
joowon
@HIPERCUBE

@angrybug Here is my configuration

I tried both of them

Bartosz Sypytkowski
@Horusiath
@OnurGumus it was originally created so that you don't need to think about whether an actor is new or not (how would you know, if a machine can crash?)
Peter Huang
@ptjhuang
@HIPERCUBE I had another look at your first snippet; have you tried adding await/Wait()?