joowon
@HIPERCUBE
@angrybug https://github.com/AkkaNetContrib/Akka.Persistence.PostgreSql/tree/dev/src/Akka.Persistence.PostgreSql.Tests/Query
I found persistence-query-related test code in the PostgreSQL provider repo.
Is that test code meaningless?
joowon
@HIPERCUBE
```
ReadJournal = Sys.ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);
```
It seems like `SqlReadJournal` is used for persistence query when using PostgreSQL.
I tried SqlReadJournal with PostgreSQL, but it doesn't work.
Onur Gumus
@OnurGumus
@Horusiath thanks, though my real question is this: if I create a persistent actor from scratch, does it go to the database during its recovery phase?
Peter Huang
@ptjhuang
@HIPERCUBE that's the generic polling implementation for all SQL providers - it should work. What errors do you see?
Peter Huang
@ptjhuang
and from then on it just uses newly written events to inform the query side (my mistake - I thought it polls)
joowon
@HIPERCUBE
@angrybug I can't find any error logs.
(attached image: image.png)
I tried to select events by PersistenceId with the code above, but it doesn't work - nothing was printed at all.
I also checked the DB log, and there is no SELECT request at all.
Peter Huang
@ptjhuang
Can you paste in the config?
joowon
@HIPERCUBE
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }
  }
}
akka {
  cluster.sharding {
    journal-plugin-id = "akka.persistence.journal.sharding"
    snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
  }

  persistence {
    journal {
      plugin = "akka.persistence.journal.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = metadata
        stored-as = BYTEA
        refresh-interval = 1s
      }
      sharding {
        connection-string = "Correct connection string"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.PostgreSql.Journal.PostgreSqlJournal, Akka.Persistence.PostgreSql"
        connection-timeout = 30s
        schema-name = public
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = sharding_metadata
      }
    }

    sharding {
      journal-plugin-id = "akka.persistence.journal.sharding"
      snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }

    snapshot-store {
      plugin = "akka.persistence.snapshot-store.postgresql"
      postgresql {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = snapshot_store
        auto-initialize = on
        stored-as = BYTEA
      }
      sharding {
        class = "Akka.Persistence.PostgreSql.Snapshot.PostgreSqlSnapshotStore, Akka.Persistence.PostgreSql"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Correct connection string"
        connection-timeout = 30s
        schema-name = public
        table-name = sharding_snapshot_store
        auto-initialize = on
      }
    }

    query.journal.sql {
      class = "Akka.Persistence.Query.Sql.SqlReadJournalProvider, Akka.Persistence.Query.Sql"
      refresh-interval = 1s
      max-buffer-size = 100
    }
  }
}
I tried both of them
Bartosz Sypytkowski
@Horusiath

if I create a persistent actor from scratch, does it go to the database during its recovery phase?

@OnurGumus by default, yes.

Onur Gumus
@OnurGumus
@Horusiath This is actually causing an issue for me. My event journal table contains millions of records, and it is inefficient to go to the database every time I create a brand-new persistent actor.
What can I do about it?
by checking ToSequenceNr == 0
Onur Gumus
@OnurGumus
@Horusiath and regarding that, do you think extending FunPersistentActor as below is OK?
type FunPersistentActor2<'Message>(actor: Eventsourced<'Message> -> Effect<'Message>, recovery) =
    inherit FunPersistentActor<'Message>(actor)
    override __.Recovery = recovery

let propsPersist2 (receive: Eventsourced<'Message> -> Effect<'Message>, recovery) : Props<'Message> =
    Props<'Message>.ArgsCreate<FunPersistentActor2<'Message>, Eventsourced<'Message>, 'Message>([| receive, recovery |])
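For comparison, in stock Akka.Persistence the same effect is achieved by overriding the actor's Recovery property. A minimal C# sketch follows, with a hypothetical actor class and persistence id that are not from this discussion:

```
using Akka.Persistence;

// Minimal sketch (hypothetical actor and id): limit or skip replay by
// overriding the Recovery property exposed by persistent actors.
public class MyEntity : ReceivePersistentActor
{
    public override string PersistenceId => "my-entity-1"; // hypothetical id

    // Recovery.None skips replay entirely; a Recovery instance with a lower
    // ToSequenceNr only limits how far the journal is replayed.
    public override Recovery Recovery => Recovery.None;

    public MyEntity()
    {
        Command<string>(cmd => Persist(cmd, evt => { /* update state */ }));
        Recover<string>(evt => { /* rebuild state */ });
    }
}
```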
joowon
@HIPERCUBE

@angrybug Here is my configuration

I tried both of them

Bartosz Sypytkowski
@Horusiath
@OnurGumus it was originally designed so that you don't need to think about whether an actor is new or not (how would you know, if a machine can crash?)
Peter Huang
@ptjhuang
@HIPERCUBE had another look at your first snippet - have you tried adding await/Wait()?
Ilya Komendantov
@IlyaKomendantov_twitter

Hey guys,
The situation:

  1. Players have an inventory with items
  2. Shops have the same items
  3. Items have different parameters, and the parameters of an item can change.

How do I store this data correctly?

I'm going to store only the Guid and the amount of each item.
I'm also going to have a Resolver that keeps the description of each item (by Guid).
But every time a player is asked about an item, it needs to be resolved first (the same goes for shops). This will probably create a huge load on the Resolver.
I could create a PersistentActor for each item with PersistenceId = Guid and then get that actor via Context.ActorSelection("../Guid") - how is the performance there?
Projections could be used, but I need the item's parameters in a lot of different places - will that work?

Can you suggest the best practice for such a scenario?

Peter Huang
@ptjhuang
@HIPERCUBE the using block disposes the materializer before the query stream has a chance to run.
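A minimal C# sketch of that fix, assuming an already-created ActorSystem and a hypothetical persistence id (none of the names below come from the original snippet): the materializer is created outside any using block and kept alive until the query stream finishes.

```
using System;
using Akka.Actor;
using Akka.Persistence.Query;
using Akka.Persistence.Query.Sql;
using Akka.Streams;

// Sketch: keep the materializer alive for as long as the query stream runs,
// instead of disposing it in a `using` block before the stream is scheduled.
var system = ActorSystem.Create("example"); // hypothetical system name
var readJournal = PersistenceQuery.Get(system)
    .ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);
var materializer = ActorMaterializer.Create(system); // NOT wrapped in `using`

var done = readJournal
    .EventsByPersistenceId("some-persistence-id", 0L, long.MaxValue) // hypothetical id
    .RunForeach(envelope => Console.WriteLine(envelope.Event), materializer);

done.Wait(); // or `await done` in an async context
```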
Onur Gumus
@OnurGumus
@Horusiath just in case, will my code above work with Akkling?
Aaron Stannard
@Aaronontheweb
@/all Akka.NET v1.3.11 is now live on NuGet https://twitter.com/AkkaDotNET/status/1075064280804925442
@HIPERCUBE I'm having some trouble following all of the threads in Gitter chat here - I'd be happy to help, though. Would you mind opening a GitHub issue with all of this in a single thread?
that'd be really helpful for me!
joowon
@HIPERCUBE
@angrybug Thanks so much!
Now it works :)
atresnjo
@atresnjo
When using Become, for some reason one of my Receive handlers never gets triggered and the message ends up Unhandled. Does anyone have an idea what I could be doing wrong? I checked the type, and it's 100% correct. It feels like my last Receive doesn't replace the previous one or something.
Chris G. Stevens
@cgstevens
After my demo at work I was asked how Akka.NET compares to a service mesh. The implementation to compare against is Istio.
Can you even compare the two? I have some reading to do, but I figured I would ask if anyone knows which concepts overlap and why you would use one over the other.
Besides the fact that it doesn't do anything with actors... it's more about managing the microservices, I guess. I will need to read up on it.
Any info would be great! I am trying to sell Akka.NET here at my new place of work. I had the first part of my demo today, which I felt went really well.
I will finish tomorrow, but this was one of the questions.
Peter Huang
@ptjhuang
What's a good design for Akka Persistence using different event stores, i.e. in a multi-tenancy setting where each tenant needs a separate event store database (read: connection string)? It looks like someone tried to do this: https://stackoverflow.com/questions/49776339/akka-net-config-multi-tenant, but what if you have a large number of tenants (does HOCON scale to 50k lines)? Another alternative I'm considering is a custom AsyncWriteJournal/SnapshotStore that changes the write location based on PersistenceId - is that a good idea?
to11mtm
@to11mtm

@cgstevens I almost feel like the two are on slightly different layers... Akka.NET with clustering can act as sort of a service mesh, but it looks like Istio is a higher level of abstraction.

IMHO, from an architectural standpoint Istio looks like it does a lot.... possibly too much. That's an opinionated statement, but concepts like Authentication should be handled at the API Gateway level and not at a service mesh level.

Also, looks like if you're not using Kubernetes... good luck?
(I don't trust google.)
to11mtm
@to11mtm

(Broader opinionated statement) I'm always wary of libraries/frameworks that are backed by massive corps like Google/IBM (and even MS; see: EF). They have a lot of organizational/process strength to enforce the rules that go with making large frameworks work well. For smaller shops, it's often a very difficult battle.

OTOH, this has been re-hammered into me because I'm on a team with only 4 devs in the entire company right now. ;_;

Bartosz Sypytkowski
@Horusiath
@angrybug you can use different stores per actor type, but not per PersistenceId, if that's what you're asking about
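A minimal C# sketch of what a per-actor-type store looks like in practice, assuming hypothetical tenant-specific plugin sections (e.g. akka.persistence.journal.tenant-a) are defined in HOCON alongside the defaults; the actor class and ids below are illustrative only:

```
using Akka.Persistence;

// Sketch: each actor *type* can point at its own journal/snapshot plugin by
// overriding JournalPluginId/SnapshotPluginId, but the choice cannot vary
// per PersistenceId within the same type.
public class TenantAEntity : ReceivePersistentActor
{
    public override string PersistenceId => "tenant-a-entity-1"; // hypothetical id

    // Hypothetical HOCON sections configured like the postgresql blocks above
    public override string JournalPluginId => "akka.persistence.journal.tenant-a";
    public override string SnapshotPluginId => "akka.persistence.snapshot-store.tenant-a";

    public TenantAEntity()
    {
        Command<string>(cmd => Persist(cmd, evt => { /* update state */ }));
        Recover<string>(evt => { /* rebuild state */ });
    }
}
```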
Peter Huang
@ptjhuang
@Horusiath that's a shame - what would you suggest for a multi-tenancy requirement like that, where the event store needs to be separate for each tenant? I was hoping there was some way for each actor instance to signal which store it should live in, but the design doesn't seem to allow that naturally. Would you say a better approach is to use the persistence store as a "temporary" event store and then replay it to separate the entries into tenant-specific stores (duplicated data, etc.)?
On another note, how often do AkkaContrib pull requests get processed? I noticed the last one was in March for the MongoDb persistence repo. I was hoping not to have to make our own NuGet packages in a private repo in the meantime.
Bartosz Sypytkowski
@Horusiath
@angrybug it really depends on how busy people are. I can review your PR eventually, but it's 2k lines of code and I'm pretty tight on time myself ;)
Regarding multi-tenant event stores, you'd probably need something custom.
Aaron Stannard
@Aaronontheweb

After my demo at work I was asked how Akka.NET compares to a service mesh. The implementation to compare against is Istio

Honestly, Akka.NET eliminates the need for a lot of infrastructure like Istio

Istio and service mesh technologies in general are designed for making it easy to connect stateless microservices together, because these types of technologies inherently have no concept of "topology" built into them
(because they are stateless)
Akka.Cluster pushes topology awareness into every node inside the system, because fundamentally all of those services are peers in a peer-to-peer network
not stateless client-server services
therefore, I don't need a central point for looking up the address of services