Aaron Stannard
@Aaronontheweb
without knowing its address
a better way to do that is to create a ClusterSingletonProxy
that will act like a router
which can point to the ClusterSingleton regardless of where it is or even if it moves
gcohen12
@gcohen12
how do I know, on the ClusterSingletonProxy side, whether I'm the singleton itself?
Aaron Stannard
@Aaronontheweb
the singleton proxy hides that from you, so you can transparently communicate with the singleton without needing to know where it is
if you need to know where the singleton is for some reason (e.g. telemetry), you can grab its address using the two methods I mentioned earlier
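A minimal sketch of that setup, assuming the singleton manager was already started elsewhere in the cluster (the path, actor names, and the `DoWork` message are placeholders for illustration):

```csharp
using Akka.Actor;
using Akka.Cluster.Tools.Singleton;

// Assumes the ClusterSingletonManager runs at "/user/my-singleton" somewhere
// in the cluster; the proxy tracks the singleton's current location for you.
var proxy = system.ActorOf(
    ClusterSingletonProxy.Props(
        singletonManagerPath: "/user/my-singleton",
        settings: ClusterSingletonProxySettings.Create(system)),
    name: "my-singleton-proxy");

// Messages sent to the proxy are routed to wherever the singleton lives,
// even if it moves after a node leaves the cluster.
proxy.Tell(new DoWork());
```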
gcohen12
@gcohen12
thank you very much
Aaron Stannard
@Aaronontheweb
hope it helps!
mrxrsd
@mrxrsd
Hi everyone! I have two questions about pools. Is it bad practice to reuse the same message type between the "coordinator" and the "workers"? The other one: I've seen in akka-bootcamp a "waiting phase" before sending jobs to workers; is that a good practice?
Coordinator.cs
private void CreateTicketHandler(CreateTicket input)
{
    _ticketSequencer.Ask<NextTicketSequence>(new GetNextTicketSequence()).ContinueWith(tr => new ProcessTicket(tr.Result.Id, input.Ticket)).PipeTo(Self);
}

private void ProcessTicketHandler(ProcessTicket input)
{
    // reuse message
    _ticketWorker.Tell(new CreateTicket());
    // or create a new type
    _ticketWorker.Tell(new CreateTicketWorker());
}
to11mtm
@to11mtm

@mrxrsd RE: reusing message types, it depends somewhat on intent and composability...

By that I mean: sometimes, if you use an Envelope of some fashion for your coordinators, it can make it easier to abstract parts of them out.

For instance, consider an AtLeastOnceDelivery implementation (which is, in some ways, similar to a coordinator). If it were to just process everything that came in the same way, it could be less intuitive to override that behavior, whereas if I'm working with envelopes, the handling the coordinator does can be abstracted away. Of course, there's more than one way to solve the problems I mentioned, and you might not care depending on your use case.

I will say, in my experience, even if you decide to use envelopes later, if you were good about abstracting your communication into the coordinator, going from reused messages to an envelope setup can be a fairly simple refactoring.

Do you have an example of the waiting phase? I've never actually done the bootcamp lol
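The envelope idea above can be sketched like this (the `Envelope<T>` type and its fields are hypothetical, not an Akka.NET API):

```csharp
// Hypothetical envelope the coordinator unwraps before dispatching to workers.
// The payload type stays reusable; coordinator-specific metadata lives on the
// envelope, so the worker message types never need to change.
public sealed class Envelope<T>
{
    public Envelope(T payload, string correlationId)
    {
        Payload = payload;
        CorrelationId = correlationId;
    }

    public T Payload { get; }
    public string CorrelationId { get; }
}

// Inside the coordinator's receive handler:
// Receive<Envelope<CreateTicket>>(env => _ticketWorker.Tell(env.Payload));
```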

Fabio Catunda Marreco
@fabiomarreco
Thank you @Aaronontheweb!
mrxrsd
@mrxrsd
@to11mtm, thanks, I think you're right. In the end I decided to reuse the message type: something like a decorator pattern, or, in the messaging world, a pipeline.
mrxrsd
@mrxrsd
Another one... possibly there is no right or wrong, it's a matter of taste: how do you usually implement a retry pattern? There are no headers, so I can't requeue with metadata containing the retry count. I don't know if creating a generic wrapper message to deal with this is a good approach. Another option is the old-fashioned try/catch with a counter, or implementing the "error kernel pattern" with at-least-once delivery and a max number of retries.
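One possible shape for the generic-wrapper option (the `Retryable<T>` type and the retry limit are assumptions for illustration, not a built-in Akka.NET feature):

```csharp
// Hypothetical wrapper that carries its own attempt count, so the retry
// metadata travels with the message instead of living in a header.
public sealed class Retryable<T>
{
    public Retryable(T message, int attempt = 0)
    {
        Message = message;
        Attempt = attempt;
    }

    public T Message { get; }
    public int Attempt { get; }

    public Retryable<T> NextAttempt() => new Retryable<T>(Message, Attempt + 1);
}

// In the receiving actor (MaxRetries is an assumed constant):
// Receive<Retryable<ProcessTicket>>(r =>
// {
//     try { Handle(r.Message); }
//     catch when (r.Attempt < MaxRetries) { Self.Tell(r.NextAttempt()); }
// });
```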
Swoorup Joshi
@Swoorup
I am having difficulties wrapping my head around streams. I have streams such as:
stream1: Dsl.Source<'TEvent, Akkling.ActorRefs.IActorRef<EventSourcedMessage<'TCommand,'TEvent>>>
stream2: Dsl.Source<FeedData, unit>
I want to feed the 'TCommand output of stream 2 into stream 1.
What the heck do I do lol
driving me nuts
to11mtm
@to11mtm
So I'm still wrapping my head around parts of this, but you should try to make stream1 into a flow and plug that in. I'm wording it poorly...
Swoorup Joshi
@Swoorup
I still want to be able to use IActorRef though, and it is a persistent stream.
stream1 is defined as
type EventSourcedMessage<'TCommand, 'TEvent> =
  | Command of 'TCommand
  | Event of 'TEvent
  | GetState

type EventSourceStream<'TEvent,'TCommand> = Source<'TEvent,IActorRef<EventSourcedMessage<'TCommand,'TEvent>>>

// Persistent actor that executes commands against the aggregate and offers
// each persisted event to the stream's source queue.
let persistActor (aggregate: Aggregate<'TState, 'TCommand, 'TEvent>) (queue: ISourceQueue<'TEvent>) =
  propsPersist (fun mailbox ->
    let rec loop state =
      actor {
        let! msg = mailbox.Receive()

        match msg with
        | Event changed ->
            // Push the event into the stream, then fold it into the state.
            queue.AsyncOffer(changed) |!> retype mailbox.Self
            return! loop (aggregate.ApplyEvent state changed)
        | Command cmd ->
            // Validate the command, then persist all resulting events.
            let events = aggregate.ExecuteCommand state cmd
            let events = events |> Validation.unwrap
            return PersistAll(List.map Event events)
        | GetState ->
            mailbox.Sender() <! state
            return! loop state
      }

    loop aggregate.Zero)

let persistentQueue aggregate system pid (overflowStrategy: OverflowStrategy) (maxBuffer: int):EventSourceStream<_,_> =
  Source.queue overflowStrategy maxBuffer
  |> Source.mapMaterializedValue (persistActor aggregate >> spawn system pid)
Swoorup Joshi
@Swoorup
anybody interested in putting together a git repo for awesome-akka.net?
there are plenty in the Scala/JVM world
Swoorup Joshi
@Swoorup
are there any plans for supporting Typed Akka?
It looks like a more controlled and safe way to pass messages using behaviours
Popov Sergey
@F0b0s
Hello, can someone explain to me how to use a different serializer for Akka.NET Persistence? I use the JSON serializer for snapshots and it's very inefficient in my case; my snapshots are 200+ MB. I would like to use a binary serializer like Protobuf or Hyperion, but I can't figure out from the documentation which config section needs to be modified.
to11mtm
@to11mtm
@F0b0s What Persistence module are you using?
@Swoorup I think it's being looked into on some level, and I would be very interested in doing something like an adn-awesome repo
I know there's some stuff on my github that could be interesting/useful to others
to11mtm
@to11mtm

Umm, so this stream code is F#, which I'm not perfect at reading...

Can you materialize Stream 1 and then feed the materialized result into the flow you build from Source2?

Fabio Catunda Marreco
@fabiomarreco

Hello all, I have a question on how to best organize my code.

I have an entity using cluster sharding, but now I need to create a satellite entity
that I want deployed on the same shard as my first entity. I thought of 2 solutions:
1) Create a parent actor that is sharded and forwards each message to the correct entity
2) Create a separate shard region for my second entity, and use the same consistent-hashing algorithm in both cases.

What is the best practice in this case?
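Option 2 above can be sketched by sharing one message extractor between the two shard regions, so related entity keys map to the same shard ID in each region (the type names, the `IHasEntityKey` interface, and the shard count are assumptions for illustration):

```csharp
using Akka.Cluster.Sharding;

// Hypothetical extractor shared by both shard regions. HashCodeMessageExtractor
// derives the shard ID from the entity ID's hash, so as long as both entities
// use the same key, they land on the same shard number in their regions.
public sealed class SharedKeyExtractor : HashCodeMessageExtractor
{
    public SharedKeyExtractor(int maxNumberOfShards) : base(maxNumberOfShards) { }

    public override string EntityId(object message)
        => message is IHasEntityKey m ? m.EntityKey : null;
}

// Both regions are started with an extractor built the same way:
// ClusterSharding.Get(system).Start("primary", primaryProps, settings, new SharedKeyExtractor(100));
// ClusterSharding.Get(system).Start("satellite", satelliteProps, settings, new SharedKeyExtractor(100));
```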

Swoorup Joshi
@Swoorup

@to11mtm I managed to fix my issue. I wish the F# Akkling library had more documentation.

let feedCmdSourceToPersistentQueue (system: IActorRefFactory) (feeds: Source<_,unit> list) (persistentQueue: EventSourceStream<_,_>) =
  let mat = system.Materializer()

  let (actorRef, b) = 
    persistentQueue
    |> Source.toMat (Sink.forEach (printfn "Piu: %A")) Keep.both
    |> Graph.run mat

  feeds
  |> List.iter (
    Source.map Command
    >> Source.toSink (Sink.toActorRef GetState actorRef)
    >> Graph.run mat)

  b |> Async.Start
  actorRef

I can put together an awesome-adn repository if anybody is interested in contributing.

Popov Sergey
@F0b0s
@to11mtm this is my config:
"Akka": { "StdoutLogLevel": "ERROR", "LogLevel": "INFO", "Loggers": [ "Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog" ], "LogConfigOnStart": "on", "Actor": { "Debug": { "Receive": "on", "Autoreceive": "on", "Lifecycle": "on", "Eventstream": "off", "Unhandled": "on" }, "Serializers": { "Hyperion": "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion" }, "SerializationBindings": { "Object": "hyperion" }, "Provider": "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster" }, "Remote": { "DotNettyTcp": { "Port": 0, "HostName": "localhost", "MaximumFrameSize": "10240000b" } }, "Cluster": { "AutoDownUnreachableAfter": "off", "SeedNodes": [], "Roles": [ "participation" ], "Sharding": { "RebalanceThreshold": 3, "RememberEntities": "true", "Role": "participation", "JournalPluginId": "akka.persistence.journal.sharding", "SnapshotPluginId": "akka.persistence.snapshot-store.sharding" } }, "Persistence": { "Journal": { "Plugin": "akka.persistence.journal.sql-server", "SqlServer": { "Class": "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer", "PluginDispatcher": "akka.actor.default-dispatcher", "ConnectionString": "", "ConnectionTimeout": "30s", "SchemaName": "dbo", "TableName": "EventJournal", "AutoInitialize": "on", "TimestampProvider": "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common", "MetadataTableName": "Metadata" }, "Sharding": { "Class": "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer", "PluginDispatcher": "akka.actor.default-dispatcher", "ConnectionString": "", "ConnectionTimeout": "30s", "SchemaName": "dbo", "TableName": "ShardingJournal", "AutoInitialize": "on", "TimestampProvider": "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common", "MetadataTableName": "ShardingMetadata" } }, "SnapshotStore": { "Plugin": "akka.persistence.snapshot-store.sql-server", "SqlServer": { "Class": 
"Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer", "PluginDispatcher": "akka.actor.default-dispatcher", "ConnectionString": "", "ConnectionTimeout": "30s", "SchemaName": "dbo", "TableName": "SnapshotStore", "AutoInitialize": "on" }, "Sharding": { "Class": "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer", "PluginDispatcher": "akka.actor.default-dispatcher", "ConnectionString": "", "ConnectionTimeout": "30s", "SchemaName": "dbo", "TableName": "ShardingSnapshotStore", "AutoInitialize": "on" } }, "JournalFallback": { "RecoveryEventTimeout": "60s" } }
I use Hyperion, but in fact my snapshots are saved with serializerId = 1 (JSON)
to11mtm
@to11mtm
@F0b0s can you try "System.Object" instead of "Object" in your serialization bindings?
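In the config style above, that suggestion would look like the following (a sketch, assuming only the binding key changes and the rest of the file stays as posted):

```json
"SerializationBindings": { "System.Object": "hyperion" }
```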
Fabio Catunda Marreco
@fabiomarreco
Is it possible to get the timestamp of an event in a PersistentQuery?
Fabio Catunda Marreco
@fabiomarreco
I thought it would be present on EventEnvelope
Swoorup Joshi
@Swoorup
Are there any plans to merge Akkling to main?
Aaron Stannard
@Aaronontheweb
@Swoorup it's been discussed, but I think @Horusiath likes being able to experiment with the Akkling API in ways that he's less free to do with the main Akka.NET API
we maintain pretty strict version-tolerance and backwards-compatibility guidelines
I would really like to redo the F# API though, since it's out of date
and if everyone is using Akkling de facto
then we should integrate some of those innovations into the official API
and evolve it
kkolstad
@kkolstad

Has anyone had any success with using Lighthouse in Fargate containers on AWS ECS? My containers seem to run fine using Service Discovery to get the initial seed address, but I am unable to figure out a health check that will work.

I've tried several permutations of pbm commands (including pbm cluster status | grep 'akka.tcp://mysystem@lighthouse.service-discovery.dev.local:4053') that work locally with docker-compose, but nothing seems to work with ECS.

Swoorup Joshi
@Swoorup
@Aaronontheweb sounds good.
I know it's asking a lot, but first-class F# support would really be good, as it fits the Scala equivalent more closely; Scala 3 contains interesting features that I would like to see in F#.
to11mtm
@to11mtm
The more I read Scala it's the language I wish I could write
haha