Arjen Smits
@Danthar
@philiplaureano the default-dispatcher has a throughput property in the config. If you set that to 1 (it's 30 by default), the dispatcher would basically behave in a more pre-emptive way, although it won't be perfect :P. https://github.com/akkadotnet/akka.net/blob/dev/src/core/Akka/Configuration/Pigeon.conf
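For illustration, a minimal sketch of the setting described above, assuming the standard Akka.NET HOCON layout from Pigeon.conf:

akka.actor.default-dispatcher {
  # the dispatcher hands each actor at most this many messages
  # before moving on to another actor; the default is 30
  throughput = 1
}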
Bartosz Sypytkowski
@Horusiath
@Danthar not quite the case - a message still needs to be processed before control goes back to the dispatcher ;) A true preemptive scheduler should be able to interrupt executing code at arbitrary points.
that's the difference:
  • cooperative - the task/process says when it's ready to release control back to the scheduler
  • preemptive - the scheduler is able to interrupt the task/process and take back control whenever it thinks it's right
Arjen Smits
@Danthar
@Horusiath that is true. However, with preemptive scheduling the result is generally that each task gets a certain time-slice in which work can be done, which also prevents misbehaving code from constantly claiming a lot of resources.
Reducing the throughput in the dispatcher to 1 can be a way to try and simulate that behavior, although it would be most apparent in a saturated environment. But then again, it would not be perfect, because like you say: it won't be able to aggressively take back control from a certain task.
So it would basically become a very-poor-man's preemptive scheduler :P
Aaron Stannard
@Aaronontheweb
@belenmorenate saw your SO issue! I'll take a look at that this morning
Belén
@belenmorenate
Great! thanks
Alex Michel
@amichel
@Horusiath I see. That could possibly lead to a race condition and to out-of-order messages being sent from both the stash and the clients
Alex Michel
@amichel
When creating a cluster shard proxy, can I use an interface as the type name? I want to avoid referencing and deploying the actual actor implementation in client-side applications.
Bartosz Sypytkowski
@Horusiath
@amichel the type name is used only as an identifier of a segment within the actor path. It can be any URI-compatible string
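For illustration, a hedged C# sketch of starting such a proxy; the type name, role, and extractor below are made-up placeholders, not values from this conversation:

// start a proxy that routes messages to shards hosted on other nodes,
// without deploying the entity actor itself on this node
var proxy = ClusterSharding.Get(system).StartProxy(
    "my-entity",                 // any URI-compatible string; must match the name used on the host nodes
    "sharding",                  // role of the nodes that actually host the shards
    new MyMessageExtractor());   // hypothetical extractor shared with the host nodes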
Aaron Stannard
@Aaronontheweb
looks like we need to be ignoring *project.json.lock files in our .gitignore files
those things are a nuisance
Aaron Stannard
@Aaronontheweb
@belenmorenate answered your question on SO and submitted #2424 to test if this is an issue. Looks like DistributedPubSub will work fine under the case you outlined, but you have to make sure that the TestKit actor system is configured to use the ClusterActorRefProvider via HOCON configuration
showed a sample on the SO post
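For reference, the key line is the provider setting; a minimal HOCON sketch, assuming the TestKit actor system is created with a config like this:

akka {
  actor {
    # without this, the TestKit system uses the default local provider
    # and cluster features like DistributedPubSub won't work
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
}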
Alex Michel
@amichel
@Horusiath found it thanks - CoordinatorSingletonManagerName
Belén
@belenmorenate
@Aaronontheweb It works, thanks!
Arsene T. Gandote
@Tochemey
Merry xmas to all the akka geeks. Wonderful work this year
Hyungho Ko
@hhko
I read "Using custom transports" document on etakka.net
however, I don't know how to get Google's Quic protocol.
I tried to find it on nuget.org.
Could you give a hit to solve it?
Bartosz Sypytkowski
@Horusiath
@hhko AFAIK, the QUIC specification still isn't finished. Also, there's a client implementation for .NET, but not a server.
Hyungho Ko
@hhko
@Horusiath this means that Google's QUIC isn't yet ready for akka.net remote.
Is that right?
Bartosz Sypytkowski
@Horusiath
@hhko yes
Hyungho Ko
@hhko
@Horusiath Thank you, as always, for your time and consideration
ssathasivan
@ssathasivan

As part of changing our application from a giant monolith to microservices, we are developing a Notification microservice that can be used by all the other modules for sending notifications like Email, SMS, Push Notifications, etc.

One client of this notification service is a Windows service that we are planning to develop, which triggers email notifications for various events like User Registration, Password Reset, etc. The Windows service will have two parts:

1. A REST-based API that can be called by modules like User Registration to trigger the notifications. When the REST API is called, this service will load the appropriate template, fill in the necessary values, and call the notification service to send the email. In case the API call fails, the details of the notification shall be sent to a background task.
2. The background task, which will retry the action a fixed number of times before giving up and raising an error.
Our initial plan was to use a queue to communicate between the two parts. The flow will be like this:

Client --> REST API --> load template and fill values --> call Notification Service --> add the message to the queue (in case notification fails) --> background task pulls the message off the queue --> retries the action --> mark the notification as failed in case maximum retries have failed (which will be taken up manually)

Instead of using a queue, another approach is to use persistent Akka.NET actors for this, since Akka.NET actors have a mailbox, but we have not found any similar use cases... Are we on the right path if we choose Akka.NET? Please send us your comments.
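As a rough illustration of the mailbox-as-queue idea, a minimal C# sketch; NotificationClient, the message type, and the retry counts are invented for this example, and real at-least-once delivery across restarts would need Akka.Persistence (e.g. AtLeastOnceDeliveryActor) layered on top:

using System;
using Akka.Actor;

// stand-in for the real notification service client (hypothetical)
public static class NotificationClient
{
    public static void Send(string payload) { /* call the notification service here */ }
}

public sealed class SendNotification
{
    public SendNotification(string payload, int attempt)
    {
        Payload = payload;
        Attempt = attempt;
    }
    public string Payload { get; }
    public int Attempt { get; }
}

public class NotificationRetryActor : ReceiveActor
{
    private const int MaxRetries = 5;

    public NotificationRetryActor()
    {
        Receive<SendNotification>(msg =>
        {
            try
            {
                NotificationClient.Send(msg.Payload);
            }
            catch (Exception)
            {
                if (msg.Attempt >= MaxRetries)
                {
                    // give up: mark the notification as failed for manual follow-up
                }
                else
                {
                    // retry later by re-sending to our own mailbox with exponential backoff,
                    // playing the role the external queue would have played
                    Context.System.Scheduler.ScheduleTellOnce(
                        TimeSpan.FromSeconds(Math.Pow(2, msg.Attempt)),
                        Self,
                        new SendNotification(msg.Payload, msg.Attempt + 1),
                        Self);
                }
            }
        });
    }
}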

Arsene T. Gandote
@Tochemey
Hello, I have put up a quick sample repo for Akka Streams: https://github.com/Tochemey/StreamTcpServer. I would like you guys to look at it and advise me. I am very, very new to Akka Streams and the whole reactive streams stuff. Thank you
Vagif Abilov
@object
I am getting strange results when using CurrentEventsByPersistenceId with persistence queries. Instead of 1 result per combination of PersistenceId+SequenceNr, I get 1 result for each event adapter. In the HOCON section I have 4 event adapters, and even though there is only 1 event with the given PersistenceId+SequenceNr in the whole database, I get back 4 events for each combination: one correct event and 3 empty events for the other adapters.
Here is my HOCON section:
event-adapters {
  akamai-disk-volume = "Nrk.Odd.Tanner.PersistenceUtils+EventAdapter`1[[Nrk.Odd.Tanner.PersistentTypes+AkamaiDiskUsage, Tanner]], Tanner"
  akamai-storage-assignment = "Nrk.Odd.Tanner.PersistenceUtils+EventAdapter`1[[Nrk.Odd.Tanner.PersistentTypes+AkamaiStorageAssignment, Tanner]], Tanner"
  origin-storage-assignment = "Nrk.Odd.Tanner.PersistenceUtils+EventAdapter`1[[Nrk.Odd.Tanner.PersistentTypes+OriginStorageAssignment, Tanner]], Tanner"
  file-distribution = "Nrk.Odd.Tanner.PersistenceUtils+EventAdapter`1[[Nrk.Odd.Tanner.PersistentTypes+FileDistribution, Tanner]], Tanner"
}
And here is the code:
open Akka.Persistence.Query
open Akka.Persistence.Query.Sql
open Akka.Streams

let queries = PersistenceQuery.Get(system).ReadJournalFor<SqlReadJournal>("akka.persistence.query.journal.sql")
let mat = ActorMaterializer.Create(system)
let aggregateEvents acc item = acc + 1
queries.CurrentEventsByPersistenceId("file-distribution:ps~msus13001916~msus13001916aa~msus13001916aa_id270.mp4", 0L, System.Int64.MaxValue)
       .RunAggregate(0, System.Func<int, EventEnvelope, int>(aggregateEvents), mat)
|> Async.AwaitTask
|> Async.RunSynchronously
Vagif Abilov
@object
UPDATE: What I found is that if I replace this HOCON line in event-adapter-bindings:
"Newtonsoft.Json.Linq.JObject, Newtonsoft.Json" = [akamai-disk-volume,akamai-storage-assignment,origin-storage-assignment,file-distribution]
with this one:
"Newtonsoft.Json.Linq.JObject, Newtonsoft.Json" = [file-distribution]
then everything works OK. But I don't think this is the correct configuration, because I only list one event adapter (and I have 3 others for the other persistent actor types).
Vagif Abilov
@object
I think I've figured it out: I can only have 1 adapter that converts events from a given stored type (JSON in my case) to the journal.
Actually quite logical :-)
Alex Michel
@amichel
What do I need to change in HOCON when running a ClusterSharding proxy instead of a node? I copied the config and system creation from the node console app, which works, but when using StartProxy I can't manage to connect to the cluster
I am using Lighthouse as the seed node
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    serializers {
      wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
    }
    serialization-bindings {
      "System.Object" = wire
    }
  }
  remote {
    helios.tcp {
      #public-hostname = "localhost"
      hostname = "localhost"
      port = 0
    }
  }
  cluster {
    auto-down-unreachable-after = 5s
    sharding {
      least-shard-allocation-strategy.rebalance-threshold = 3
      role = sharding
    }
    seed-nodes = ["akka.tcp://sharded-cluster-system@127.0.0.1:4053"]
    roles = [sharding]
  }
  persistence {
    publish-plugin-commands = on
    journal {
      plugin = "akka.persistence.journal.sql-server"
      sql-server {
        class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        table-name = EventJournal
        schema-name = dbo
        auto-initialize = on
        connection-string-name = "Sandbox"
      }
    }
    snapshot-store {
      plugin = "akka.persistence.snapshot-store.sql-server"
      sql-server {
        class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        table-name = SnapshotStore
        schema-name = dbo
        auto-initialize = on
        connection-string-name = "Sandbox"
      }
    }
  }
}
Mike Clark
@mclark1129
I'd like to better understand how Akka.NET might be used in a microservices architecture. Is it common to use a single actor system and just deploy your microservices as individual actors under the same system? Or is it more common to implement each microservice as an instance of an actor system, with its components made up of actors?
simonpetty
@simonpetty
Hi all. I have a question about Persistence.Query.Sql... hoping someone can shed some light.
What about this project is SQL-specific, if it all hooks into the standard Persistence API?
It seems odd to couple the query logic to a particular backend store.
to11mtm
@to11mtm
@simonpetty Shoot, I'm fairly certain someone here will give it a go. Posting a question on StackOverflow is also worth a try; contributors and other folks look there, and that way the knowledge is retained in a better format (in my experience it's frustrating to go through pages of history to find an old answer).
oh... scrolling blocked some of that.
to11mtm
@to11mtm
@Horusiath can probably explain that one better than I.
to11mtm
@to11mtm

@mclark1129 In our team's experience, it's usually a little of both. Some of our microservices are at the moment little more than the old class logic exposed in actors; for others we've started to optimize the pipeline and use more actors throughout.

That said, we typically use separate application instances for the microservices; separate instances make it a bit easier to deploy changes to just that microservice and a little easier to reason about scaling up/out later. Also, you don't have as much communication I/O competing in said service.

The biggest tradeoff we consider in that (whether to bundle parts inside one microservice versus others) is locality; if two actors are in the same local VM, things move much faster than between two different running apps on the same machine. So if there are two microservices doing a lot of communication between them, we consider whether it makes sense to bundle them up. We know that's not the most technically correct choice, but it is perhaps the most pragmatic.

to11mtm
@to11mtm

As a general example, we have a service for refreshing data from two different databases into a single target DB. It must run FAST (the design goal was to refresh status data for up to 20k inventory units a minute across 1000 customers and provide aggregate data; I'm aware this wasn't the right way to do so much of this, yay brownfield engineering with a shotgun deadline).

Side A has actors for: 1. dispatching requests per customer, 2. 'internal data collectors', 3. 'external request buffers' for talking to Side B, 4. 'collection writers', 5. 'calculators', and 6. 'result writers'.

Side B has: 1. an 'external data collector' and 2. 'external result buffers' for sending the result back to Side A.
There's a Web API on each end of that; there are business/technical/contractual reasons why we won't communicate between the sides using Akka instead of HTTP.

One of the really cool things about designing that process was tuning how many of each of those actors were present throughout: make more of the actors that have to do the heavier lifting (DB queries), and fewer of the ones that just do simple things in memory.
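To make that tuning point concrete, a hedged HOCON sketch in the style of the configs above; the actor paths and instance counts are invented for illustration:

akka.actor.deployment {
  /collection-writers {
    router = round-robin-pool
    nr-of-instances = 8   # more of the actors doing the heavy lifting (DB queries)
  }
  /calculators {
    router = round-robin-pool
    nr-of-instances = 2   # fewer of the ones doing simple in-memory work
  }
}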