Onur Gumus
@OnurGumus
@ismaelhamed when a new persistent actor is created, how does it know whether it is a new actor or one that needs recovery from the database?
Bartosz Sypytkowski
@Horusiath
@OnurGumus every persistent actor has to define its persistenceId - it's a reference identifier used to track all of the events that correspond to this logical entity. Akka.Persistence requires that only one actor with a given persistenceId exists at a time (otherwise you can corrupt the actor's state); it's up to you to fulfill that requirement. One simple way is to keep all actors under the same parent (if possible). Other parts of the platform, like Akka.Cluster.Sharding, can also help you keep one entity alive at a time.
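For illustration, a minimal sketch of a persistent actor defining its persistenceId (class and event names here are made up, not from the chat):

```
using Akka.Persistence;

// Sketch only: one logical entity maps to one persistenceId, and only one live
// actor should use that id at a time.
public class AccountActor : ReceivePersistentActor
{
    private decimal _balance;

    public AccountActor(string accountId)
    {
        PersistenceId = $"account-{accountId}";

        Recover<decimal>(amount => _balance += amount);                   // replayed on recovery
        Command<decimal>(amount => Persist(amount, e => _balance += e));  // persisted, then applied
    }

    public override string PersistenceId { get; }
}
```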
Shukhrat Nekbaev
@snekbaev
@Horusiath, @Aaronontheweb when Akka.NET Remote is used with a WebApp it needs special remoting settings (enforce-ip-family = true, dns-use-ipv6 = false, etc.). Those are documented. I've refactored the old remoting logic to Akka IO. It works locally and on the LAN; however, when the WebApp is published it doesn't seem to connect. Are you aware of config/code changes required to make it connect? Thank you!
Amongst the logs: Could not establish connection because finishConnect never returned true (consider increasing akka.io.tcp.finish-connect-retries)
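For reference, the settings mentioned above are typically supplied like this (a sketch; the akka.remote.dot-netty.tcp section name assumes the default transport in Akka.NET 1.3.x - adjust to your setup):

```
using Akka.Actor;
using Akka.Configuration;

// Sketch of the remoting settings referred to above.
var config = ConfigurationFactory.ParseString(@"
    akka.remote.dot-netty.tcp {
        enforce-ip-family = true   # stick to a single address family
        dns-use-ipv6 = false       # resolve host names to IPv4 addresses
    }");

var system = ActorSystem.Create("web", config);
```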
Shukhrat Nekbaev
@snekbaev
In code I'm using _remoteAddress = new IPEndPoint(ipAddress, port); where ipAddress is an IPAddress, for both client and server.
Shukhrat Nekbaev
@snekbaev

could it be this?

public TcpOutgoingConnection(TcpExt tcp, IActorRef commander, Tcp.Connect connect)
    : base(tcp, new Socket(SocketType.Stream, ProtocolType.Tcp) { Blocking = false }, connect.PullMode)

It creates a new socket, which internally seems to assume IPv6 and turns on dual mode. Given the remote config settings, I think those strictly disable anything related to IPv6. Maybe that's why it can't connect...

Shukhrat Nekbaev
@snekbaev
ok, I can confirm it is exactly that. Added the address family and bingo, will open a GitHub issue
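For context, the kind of change described here is passing the endpoint's address family into the Socket constructor rather than relying on the dual-mode IPv6 default - roughly like this (a sketch, not the actual patch; it assumes the connect target is an IPEndPoint):

```
using System.Net;
using System.Net.Sockets;

// Sketch only: build the outgoing socket with the same address family as the
// endpoint being connected to, instead of the dual-mode IPv6 default.
var remoteEndPoint = (IPEndPoint)connect.RemoteAddress;
var socket = new Socket(remoteEndPoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp)
{
    Blocking = false
};
```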
Chris G. Stevens
@cgstevens
@Havret and @AndreSteenbergen
Thanks for the replies! Sorry I have not been on, as I have been heads down.
I did end up just injecting the DistributedPubSub into my actor and everything seems to be working great.
Thank you for your help last week.
Peter Huang
@ptjhuang
Version tolerance of Hyperion - if a new version of an assembly adds a new property to a type, and the VersionTolerance setting is true, should it successfully deserialize a stream into the new version? I had a look at DefaultCodeGenerator.cs, and it seems the field orders are important. But even when that's observed, I've got a gist that seems to fail when adding a new property: https://gist.github.com/angrybug/5e63b0ac8945d51fd62f5279a5db6c0d Often we upgrade a microservice adding a few properties, and it would be nice to just swap over to the new version without recompiling all the dependent services. Is it general best practice to turn version tolerance off?
Shukhrat Nekbaev
@snekbaev
here's the issue: akkadotnet/akka.net#3679
joowon
@HIPERCUBE
Peter Huang
@ptjhuang
@HIPERCUBE doesn't look like it. It's tricky to implement the query API, as it is supposed to be able to monitor log additions. For many RDBMSes that means polling, which is hard to make performant.
joowon
@HIPERCUBE
@angrybug https://github.com/AkkaNetContrib/Akka.Persistence.PostgreSql/tree/dev/src/Akka.Persistence.PostgreSql.Tests/Query
I found persistence-query related test code in the PostgreSql provider repo.
Is it meaningless test code?
joowon
@HIPERCUBE
```
ReadJournal = Sys.ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);
```
It seems like it uses `SqlReadJournal` for persistence query when using PostgreSql.
I tried with SqlReadJournal and PostgreSql, but it doesn't work.
Onur Gumus
@OnurGumus
@Horusiath thanks , though my real question is this, if I create a persistent actor from scratch, does it go to database on its recovery phase ?
Peter Huang
@ptjhuang
@HIPERCUBE that's the generic polling implementation for all SQL providers - it should work. What errors do you see?
Peter Huang
@ptjhuang
and from then on, it just uses newly written events to inform the query side (my mistake - I thought it polls)
joowon
@HIPERCUBE
@angrybug I can't find any error logs.
image.png
I tried to select events by PersistenceId with the above code, but it doesn't work. Nothing was printed at all.
Also I checked the DB log, but there's no select request at all.
Peter Huang
@ptjhuang
Can you paste in the config?
joowon
@HIPERCUBE
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }
  }
}
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }

    query.journal.sql {
      class = "Akka.Persistence.Query.Sql.SqlReadJournalProvider, Akka.Persistence.Query.Sql"
      refresh-interval = 1s
      max-buffer-size = 100
    }
  }
}
I tried both of them
Bartosz Sypytkowski
@Horusiath

if I create a persistent actor from scratch, does it go to the database during its recovery phase?

@OnurGumus by default, yes.

Onur Gumus
@OnurGumus
@Horusiath This is actually causing an issue for me. My event journal table contains millions of records. It is inefficient to go to the database whenever I create a brand new persistent actor.
What can I do about it?
Maybe by checking ToSequenceNr == 0?
Onur Gumus
@OnurGumus
@Horusiath and regarding that, do you think extending FunPersistentActor as below is OK?
type FunPersistentActor2<'Message>(actor : Eventsourced<'Message> -> Effect<'Message>, recovery)  =
    inherit  FunPersistentActor<'Message>(actor)
    override __.Recovery  = recovery

let propsPersist2 (receive: Eventsourced<'Message> -> Effect<'Message>, recovery) : Props<'Message> =

        Props<'Message>.ArgsCreate<FunPersistentActor2<'Message>, Eventsourced<'Message>, 'Message>([|receive,recovery|])
joowon
@HIPERCUBE

@angrybug Here is my configuration

I tried both of them

Bartosz Sypytkowski
@Horusiath
@OnurGumus it was originally designed so that you don't need to think about whether an actor is new or not (how would you know, if the machine can crash?)
Peter Huang
@ptjhuang
@HIPERCUBE had another look at your first snippet - have you tried adding await/Wait()?
Ilya Komendantov
@IlyaKomendantov_twitter

Hey guys,
The situation:

  1. Players have an inventory of items
  2. Shops have the same items
  3. Items have different parameters
    The parameters of an item can change.

How do I store this data correctly?

I'm going to store only the Guid and the amount of each item.
Also I'm going to have a Resolver that keeps the description of each item (by Guid).
But every time a player is asked about an item, it needs to be resolved first (the same for shops). This will probably create a huge load on the Resolver.
I can create a PersistentActor for each item with PersistenceId = Guid. Then I can get this actor by Context.ActorSelection("../Guid") - how about performance here?
Projections could be used, but I need the item parameters in a lot of different places - will this work?

Can you suggest the best practice for such a scenario?

Peter Huang
@ptjhuang
@HIPERCUBE the using block kills the materializer before it has a chance to run.
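The snippet in question was posted as an image, so this is only a guess at its shape, but the pitfall being described usually looks like this (Sys being the ActorSystem from the earlier snippet; ids and names assumed):

```
using System;
using Akka.Persistence.Query;
using Akka.Persistence.Query.Sql;
using Akka.Streams;

var readJournal = Sys.ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);

// Problematic: the materializer is disposed when the using block ends,
// usually before the fire-and-forget stream has emitted anything.
using (var mat = Sys.Materializer())
{
    readJournal.EventsByPersistenceId("some-id", 0L, long.MaxValue)
               .RunForeach(e => Console.WriteLine(e.Event), mat);
}

// Better: keep the materializer alive and await the stream's completion.
var materializer = Sys.Materializer();
await readJournal.EventsByPersistenceId("some-id", 0L, long.MaxValue)
                 .RunForeach(e => Console.WriteLine(e.Event), materializer);
```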
Onur Gumus
@OnurGumus
@Horusiath just in case, will my above code work with Akkling?
Aaron Stannard
@Aaronontheweb
@/all Akka.NET v1.3.11 is now live on NuGet https://twitter.com/AkkaDotNET/status/1075064280804925442
@HIPERCUBE having some trouble following all of the threads in Gitter chat here - I'd be happy to help, though. Would you mind opening a GitHub issue with all of this in a single thread?
that'd be really helpful for me!
joowon
@HIPERCUBE
@angrybug Thanks so much!
Now it works :)
atresnjo
@atresnjo
When using Become, for some reason one of my Receive handlers never gets triggered and the message ends up Unhandled. Does anyone have an idea what I could be doing wrong? I checked the type, and it's 100% correct. It feels like my last Receive doesn't replace the previous one or something.
Chris G. Stevens
@cgstevens
After my demo at work I was asked how Akka.NET compares to a service mesh. The implementation to compare against is Istio.
Can you even compare the two? I have some reading to do, but figured I would ask if anyone knows which concepts overlap and why you'd use one over the other.
Besides the fact that it doesn't do anything with actors... it's more about managing the microservices, I guess. Will need to read up on it.
Any info would be great! I am trying to sell Akka.NET here at my new place of work. I had the first part of my demo today, which I felt went really well.
I will finish tomorrow, but this was one of the questions.
Peter Huang
@ptjhuang
What's a good design for Akka.Persistence using different event stores? I.e. in a multi-tenancy setting where each tenant needs a separate event store database (read: connection string)? Looks like someone tried to do this: https://stackoverflow.com/questions/49776339/akka-net-config-multi-tenant, but what if you have a large number of tenants (does HOCON scale to 50k lines)? Another alternative I'm considering is a custom AsyncWriteJournal/SnapshotStore that changes the write location based on PersistenceId - is that a good idea?
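Not an existing feature, but a minimal sketch of the routing idea in that last question - deriving a tenant from the PersistenceId and looking up its connection string (a real implementation would plug this lookup into a custom AsyncWriteJournal/SnapshotStore; all names and formats here are hypothetical):

```
using System.Collections.Generic;

// Hypothetical sketch: map a persistenceId such as "tenant42/order-17"
// to that tenant's event-store connection string.
public static class TenantRouting
{
    private static readonly IReadOnlyDictionary<string, string> ConnectionStrings =
        new Dictionary<string, string>
        {
            ["tenant42"] = "Host=db-42;Database=events;Username=...;Password=...",
            ["tenant43"] = "Host=db-43;Database=events;Username=...;Password=..."
        };

    public static string ConnectionStringFor(string persistenceId)
    {
        var tenant = persistenceId.Split('/')[0];
        return ConnectionStrings[tenant];
    }
}
```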
to11mtm
@to11mtm

@cgstevens I almost feel like the two are on slightly different layers... Akka.NET with clustering can act as a sort of service mesh, but it looks like Istio is a higher level of abstraction.

IMHO, from an architectural standpoint Istio looks like it does a lot... possibly too much. That's an opinionated statement, but concepts like authentication should be handled at the API gateway level and not at the service mesh level.

Also, it looks like if you're not using Kubernetes... good luck?
(I don't trust Google.)