These are chat archives for akkadotnet/akka.net

5th Nov 2015
Yin Zhang
@melcloud
Nov 05 2015 03:59
Hi guys, can I specify a consistent hash mapping in config? Or if I configure a consistent hash router through HOCON, can I add a consistent hash mapping later on?
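(The hash mapping is a delegate, so as far as I know it can't be expressed in HOCON; a minimal sketch of the usual alternatives - the FetchUser and Worker names are made up, the types are from Akka.Routing:)

using Akka.Actor;
using Akka.Routing;

// Option 1: the message carries its own hash key.
public class FetchUser : IConsistentHashable
{
    public FetchUser(string userId) { UserId = userId; }
    public string UserId { get; private set; }
    public object ConsistentHashKey { get { return UserId; } }   // the router hashes on this
}

public class Worker : ReceiveActor
{
    public Worker() { ReceiveAny(msg => { /* handle */ }); }
}

public static class HashingExample
{
    public static void Run(ActorSystem system)
    {
        // Option 2: define the mapping in code when the pool is created.
        var router = system.ActorOf(
            Props.Create<Worker>()
                 .WithRouter(new ConsistentHashingPool(5)
                     .WithHashMapping(msg => msg is FetchUser ? ((FetchUser)msg).UserId : null)),
            "hash-pool");

        // Option 3: wrap any message in an envelope that supplies the key.
        router.Tell(new ConsistentHashableEnvelope("some payload", 42));
    }
}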
Hyungho Ko
@hhko
Nov 05 2015 04:21
Can I get a sample of CQRS on Akka.NET?
Arjen Smits
@Danthar
Nov 05 2015 06:51
@hhko @Horusiath made a sample, although note that it's not based on the current version of Akka. https://github.com/Horusiath/AkkaCQRS
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 07:19
@MrTortoise IMO F# is an even better fit than C#; only a lack of time and F# contributors stops us from making the F# API more and more beautiful
@Zetanova I think we should drop this CS_PID field; I'll take that on once I finish our problems with the build manager
Hussein Ait-Lahcen
@hussein-aitlahcen
Nov 05 2015 07:41
@Horusiath Is the FSM implementation finished, or should I still use Become for my behavior?
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 07:42
FSM is finished and fuels things such as Akka.Remote and Cluster ;) but using it is a matter of personal preference
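(For comparison, a minimal sketch of the Become-based alternative being referred to - a behaviour-switching ReceiveActor; the OvenActor name and messages are made up:)

using Akka.Actor;

public class OvenActor : ReceiveActor
{
    public OvenActor()
    {
        Off();   // start in the "off" behaviour
    }

    private void Off()
    {
        // Become replaces the current behaviour with a new set of Receive handlers
        Receive<string>(s => s == "turn-on", s => Become(On));
    }

    private void On()
    {
        Receive<string>(s => s == "turn-off", s => Become(Off));
        Receive<string>(s => Sender.Tell("baking: " + s));
    }
}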
Zetanova
@Zetanova
Nov 05 2015 07:50
@Horusiath yes, SQL Server supports an index on strings. CHECKSUM performs well on ntext and blob or composite keys. Nevertheless, in both situations a check on the key itself is required:
 'WHERE CS_PID = CHECKSUM(@PersistenceId) AND PersistenceID = @PersistenceId'
Zetanova
@Zetanova
Nov 05 2015 08:17
I ran into this problem because I am using a sequenced Guid generator, so that a Guid index performs well. But in Akka it's encoded to Base36 with a prefix, e.g. 'ARTypeName-5m6w1yqafkxkqb4o74fbfc1z9'. A row-sequenced Guid as the PersistenceId would perform much better.
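(For reference, a rough sketch of the kind of sequenced "COMB" Guid generator being described here, which keeps new ids roughly ordered for a SQL Server uniqueidentifier index; not from the chat, just an illustration of the general technique:)

using System;

public static class CombGuid
{
    // SQL Server orders uniqueidentifier values primarily by the last 6 bytes,
    // so writing a timestamp into them keeps newly generated ids roughly sequential.
    public static Guid NewCombGuid()
    {
        var bytes = Guid.NewGuid().ToByteArray();
        var now = DateTime.UtcNow;

        // days since 1900-01-01 (2 bytes) + 1/300-second ticks of the day (4 bytes),
        // mirroring the layout SQL Server uses for datetime comparisons
        var days = (ushort)(now.Date - new DateTime(1900, 1, 1)).TotalDays;
        var ticks = (uint)(now.TimeOfDay.TotalMilliseconds * 0.3);

        var dayBytes = BitConverter.GetBytes(days);
        var tickBytes = BitConverter.GetBytes(ticks);
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(dayBytes);
            Array.Reverse(tickBytes);
        }

        Array.Copy(dayBytes, 0, bytes, 10, 2);
        Array.Copy(tickBytes, 0, bytes, 12, 4);
        return new Guid(bytes);
    }
}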
John Nicholas
@MrTortoise
Nov 05 2015 15:12
hmm, I have a load of actors that stop logging (Context.GetLogger()) after a few calls down. The actor code hits lots of task-based code that messes with schedulers and then heads back into an Akka actor... no idea what thread it is on at this point... but it seems like logging isn't working. Does that imply I need to marshal back onto the thread that the console would be on?
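(For anyone hitting the same thing: the usual advice is to capture the ILoggingAdapter and Self before leaving the actor's context, since Context itself can't be touched from arbitrary task threads, and to marshal results back as messages with PipeTo. A hedged sketch with made-up names:)

using System.Threading.Tasks;
using Akka.Actor;
using Akka.Event;

public class WorkerActor : ReceiveActor
{
    // captured once on the actor's own context; the adapter is safe to use
    // from other threads, but Context is not
    private readonly ILoggingAdapter _log = Context.GetLogger();

    public WorkerActor()
    {
        Receive<string>(work =>
        {
            var self = Self;                     // capture Self before going async
            Task.Run(() => DoHeavyWork(work))
                .PipeTo(self);                   // result comes back as an ordinary message
            _log.Info("started work: {0}", work);
        });

        Receive<int>(result => _log.Info("work finished with {0}", result));
    }

    private static int DoHeavyWork(string work)
    {
        return work.Length;                      // placeholder for the real task-based code
    }
}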
jweimann
@jweimann
Nov 05 2015 15:47
@MrTortoise do you have a gist of the generics problem you're having? Sounds like it may be something I had to do a while ago, but I'm not 100% clear.
John Nicholas
@MrTortoise
Nov 05 2015 16:07
@jweimann the problem I have is a lack of an overload. I don't know what the inner type of the generic is at the time of writing the receive handler.
@jweimann I have worked around that by simply not using a generic - there was a problem elsewhere which meant it was opportune to factor it out and use a different approach anyway
Chris G. Stevens
@cgstevens
Nov 05 2015 16:43
Not sure how to explain this... I have a dev server which is located on a separate part of the network and has 7 IP addresses.
My Tasker is configured to use 1.1.98.230; nothing specifies the hostname. The errors in my logs are "[akka://MyService/system/endpointManager/reliableEndpointWriter-akka.tcp%3a%2f%2fMyService%401.9.2.8%3a60133-9] - Invalid address: akka.tcp://MyService@1.9.2.8:60133", which is my ClusterViewer app.
Afterwards it becomes disassociated and eventually causes my Tasker to lose its leader role, and everything seems to fall apart from there. I don't understand why it would no longer be the leader, which means I can't connect to it even when its status is UP.
I ran WireShark on the server and found that when this happens WireShark shows my source to be a different address (1.1.98.234).
When everything runs well, WireShark shows the IP 1.1.98.230 talking with 1.9.2.8.
So I guess my question is: why is my cluster not working only on the IP 1.1.98.230 specified in the HOCON configs? To me the cluster shouldn't even be trying to communicate on 1.1.98.234.
Thanks for any info!
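(A hedged note on the usual config for multi-homed boxes: pinning both the bind address and the advertised address explicitly, as below, at least rules out Akka picking a hostname itself; if the OS routing table still sends traffic out via 1.1.98.234, that part is outside Akka's control. The port is omitted on purpose - keep whatever the Tasker already uses.)

akka.remote.helios.tcp {
  hostname = "1.1.98.230"          # address the transport binds to
  public-hostname = "1.1.98.230"   # address other nodes use to reach this node
  # port = ...                     # keep the existing Tasker port
}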
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 16:50
@Aaronontheweb to fix #1399 I'll need some of your assistance; I see that there is a heavy amount of custom code back there - could you explain it to me?
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:01
@Horusiath this is just for the NuGet push?
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:03
yes
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:03
ok
//--------------------------------------------------------------------------------
// Pack nuget for all projects
// Publish to nuget.org if nugetkey is specified

let createNugetPackages _ =
    let removeDir dir = 
        let del _ = 
            DeleteDir dir
            not (directoryExists dir)
        runWithRetries del 3 |> ignore

    ensureDirectory nugetDir
    for nuspec in !! "src/**/*.nuspec" do
        printfn "Creating nuget packages for %s" nuspec

        let project = Path.GetFileNameWithoutExtension nuspec 

        let workingDir = workingDir </> project

        CleanDir workingDir

        let projectDir = Path.GetDirectoryName nuspec
        let projectFile = (!! (projectDir @@ project + ".*sproj")) |> Seq.head
        let releaseDir = projectDir @@ @"bin\Release"
        let packages = projectDir @@ "paket.references"
        let packageDependencies = if (fileExists packages) then (Paket.GetDependenciesForReferencesFile packages |> Seq.toList) else []
        let dependencies = packageDependencies @ getAkkaDependency project
        let releaseVersion = getProjectVersion project

        let pack outputDir symbolPackage =
            NuGetHelper.NuGet
                (fun p ->
                    { p with
                        Description = description
                        Authors = authors
                        Copyright = copyright
                        Project =  project
                        Properties = ["Configuration", "Release"]
                        ReleaseNotes = release.Notes |> String.concat "\n"
                        Version = releaseVersion
                        Tags = tags |> String.concat " "
                        OutputPath = outputDir
                        WorkingDir = workingDir
                        SymbolPackage = symbolPackage
                        Dependencies = dependencies })
                nuspec

        // Copy dll, pdb and xml to libdir = workingDir/lib/net45/
        ensureDirectory libDir
        !! (releaseDir @@ project + ".dll")
        ++ (releaseDir @@ project + ".pdb")
        ++ (releaseDir @@ project + ".xml")
        ++ (releaseDir @@ project + ".ExternalAnnotations.xml")
        |> CopyFiles libDir

        // Copy all src-files (.cs and .fs files) to workingDir/src
        let nugetSrcDir = workingDir @@ @"src/"
        // CreateDir nugetSrcDir

        let isCs = hasExt ".cs"
        let isFs = hasExt ".fs"
        let isAssemblyInfo f = (filename f).Contains("AssemblyInfo")
        let isSrc f = (isCs f || isFs f) && not (isAssemblyInfo f) 
        CopyDir nugetSrcDir projectDir isSrc

        //Remove workingDir/src/obj and workingDir/src/bin
        removeDir (nugetSrcDir @@ "obj")
        removeDir (nugetSrcDir @@ "bin")

        // Create both normal nuget package and symbols nuget package. 
        // Uses the files we copied to workingDir and outputs to nugetdir
        pack nugetDir NugetSymbolPackage.Nuspec

let publishNugetPackages _ = 
    let rec publishPackage url accessKey trialsLeft packageFile =
        let tracing = enableProcessTracing
        enableProcessTracing <- false
        let args p =
            match p with
            | (pack, key, "") -> sprintf "push \"%s\" %s" pack key
            | (pack, key, url) -> sprintf "push \"%s\" %s -source %s" pack key url

        tracefn "Pushing %s Attempts left: %d" (FullName packageFile) trialsLeft
        try 
            let result = ExecProcess (fun info -> 
                    info.FileName <- nugetExe
                    info.WorkingDirectory <- (Path.GetDirectoryName (FullName packageFile))
                    info.Arguments <- args (packageFile, accessKey,url)) (System.TimeSpan.FromMinutes 1.0)
            enableProcessTracing <- tracing
            if result <> 0 then failwithf "Error during NuGet symbol push. %s %s" nugetExe (args (packageFile, "key omitted",url))
        with exn -> 
            if (trialsLeft > 0) then (publishPackage url accessKey (trialsLeft-1) packageFile)
            else raise exn
    let shouldPushN
so these are the two methods that are failing
actually, I take that back
it's just publishNuGetPackages
let me double check the build log
yep, that's correct - just publishNuGetPackages
let publishNugetPackages _ = 
    let rec publishPackage url accessKey trialsLeft packageFile =
        let tracing = enableProcessTracing
        enableProcessTracing <- false
        let args p =
            match p with
            | (pack, key, "") -> sprintf "push \"%s\" %s" pack key
            | (pack, key, url) -> sprintf "push \"%s\" %s -source %s" pack key url

        tracefn "Pushing %s Attempts left: %d" (FullName packageFile) trialsLeft
        try 
            let result = ExecProcess (fun info -> 
                    info.FileName <- nugetExe
                    info.WorkingDirectory <- (Path.GetDirectoryName (FullName packageFile))
                    info.Arguments <- args (packageFile, accessKey,url)) (System.TimeSpan.FromMinutes 1.0)
            enableProcessTracing <- tracing
            if result <> 0 then failwithf "Error during NuGet symbol push. %s %s" nugetExe (args (packageFile, "key omitted",url))
        with exn -> 
            if (trialsLeft > 0) then (publishPackage url accessKey (trialsLeft-1) packageFile)
            else raise exn
    let shouldPushNugetPackages = hasBuildParam "nugetkey"
    let shouldPushSymbolsPackages = (hasBuildParam "symbolspublishurl") && (hasBuildParam "symbolskey")

    if (shouldPushNugetPackages || shouldPushSymbolsPackages) then
        printfn "Pushing nuget packages"
        if shouldPushNugetPackages then
            let normalPackages= 
                !! (nugetDir @@ "*.nupkg") 
                -- (nugetDir @@ "*.symbols.nupkg") |> Seq.sortBy(fun x -> x.ToLower())
            for package in normalPackages do
                publishPackage (getBuildParamOrDefault "nugetpublishurl" "") (getBuildParam "nugetkey") 3 package

        if shouldPushSymbolsPackages then
            let symbolPackages= !! (nugetDir @@ "*.symbols.nupkg") |> Seq.sortBy(fun x -> x.ToLower())
            for package in symbolPackages do
                publishPackage (getBuildParam "symbolspublishurl") (getBuildParam "symbolskey") 3 package
just realized that not all of it made it onto the last code snippet
so all this code does is locate all of the *.nupkg and .symbols.nupkg files
and upload them based on the arguments passed into FAKE
i.e. the publishKey and target
which are different for our build server (MyGet) and production (NuGet)
honestly the easiest way to fix this would be to just add back in build.cmd and build.sh the instructions to dynamically download nuget.exe
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:09
because the way this code works is by spawning a process and passing in the commandline args directly
relevant batchfile from DotNetty
@echo off

pushd %~dp0

SETLOCAL
SET CACHED_NUGET=%LocalAppData%\NuGet\NuGet.exe

IF EXIST %CACHED_NUGET% goto copynuget
echo Downloading latest version of NuGet.exe...
IF NOT EXIST %LocalAppData%\NuGet md %LocalAppData%\NuGet
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest 'https://www.nuget.org/nuget.exe' -OutFile '%CACHED_NUGET%'"

:copynuget
IF EXIST .nuget\nuget.exe goto restore
md .nuget
copy %CACHED_NUGET% .nuget\nuget.exe > nul

:restore

.nuget\NuGet.exe update -self

.nuget\NuGet.exe install FAKE -OutputDirectory packages -ExcludeVersion -Version 3.28.8

.nuget\NuGet.exe install xunit.runner.console -ConfigFile .nuget\Nuget.Config -OutputDirectory packages\FAKE -ExcludeVersion -Version 2.0.0

if not exist packages\SourceLink.Fake\tools\SourceLink.fsx ( 
  .nuget\nuget.exe install SourceLink.Fake -OutputDirectory packages -ExcludeVersion
)
rem cls

set encoding=utf-8
packages\FAKE\tools\FAKE.exe build.fsx %*

popd
and BASH equivalent
#!/bin/bash

SCRIPT_PATH="${BASH_SOURCE[0]}";
if ([ -h "${SCRIPT_PATH}" ]) then
  while([ -h "${SCRIPT_PATH}" ]) do SCRIPT_PATH=`readlink "${SCRIPT_PATH}"`; done
fi
pushd . > /dev/null
cd `dirname ${SCRIPT_PATH}` > /dev/null
SCRIPT_PATH=`pwd`;
popd  > /dev/null

if ! [ -f $SCRIPT_PATH/.nuget/nuget.exe ] 
    then
        wget "https://www.nuget.org/nuget.exe" -P $SCRIPT_PATH/.nuget/
fi

mono $SCRIPT_PATH/.nuget/nuget.exe update -self

mono $SCRIPT_PATH/.nuget/nuget.exe install FAKE -OutputDirectory $SCRIPT_PATH/packages -ExcludeVersion -Version 3.28.8

mono $SCRIPT_PATH/.nuget/nuget.exe install xunit.runners -OutputDirectory $SCRIPT_PATH/packages/FAKE -ExcludeVersion -Version 2.0.0

if ! [ -e $SCRIPT_PATH/packages/SourceLink.Fake/tools/SourceLink.fsx ] ; then
    mono $SCRIPT_PATH/.nuget/nuget.exe install SourceLink.Fake -OutputDirectory $SCRIPT_PATH/packages -ExcludeVersion

fi

export encoding=utf-8

mono $SCRIPT_PATH/packages/FAKE/tools/FAKE.exe build.fsx "$@"
that would need to be merged with whatever Paket code is there
use Paket for restore
NuGet for push
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:10
I'm not 100% sure, but Paket can publish nuget packages and it's self-updating
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:11
yeah, but why do we need that?
this code works fine as long as the binary is there
Paket's primary value is on restore
do whatever you need to, but something needs to be fixed in order to get nightlies back
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:21
@Horusiath any thoughts on that?
btw - going to be upgrading the build server to support C# 6, F# 4, and Code Contracts soon
I just have to install a new version of Visual Studio and do some other song and dance
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:26
+1000 for code contracts
I'll see what I can do
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:26
yeah, I fell in love with them on DotNetty
started using them in NBench, which I'm hoping to have a demo of in our contributors meeting
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:28
a shame, really - they should basically be a compiler feature ;)
yeah, I agree with that haha
NBench is something I came up with while I was on vacation - to build a robust history going forward around performance in key areas of Akka.NET, Helios, DotNetty, and our serializers
combines profiling techniques with unit testing
i.e. write a unit test that guarantees that a benchmark never has more than N GC2 collections during each run
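(Roughly the idea, sketched with plain BCL counters and xunit rather than NBench's actual API, which wasn't published yet at this point; all names below are made up:)

using System;
using Xunit;

public class SerializationGcSpec
{
    [Fact]
    public void Serialization_run_should_not_trigger_gen2_collections()
    {
        GC.Collect();                              // start from a clean slate
        var gen2Before = GC.CollectionCount(2);

        RunBenchmarkWorkload();                    // the code under measurement

        var gen2Delta = GC.CollectionCount(2) - gen2Before;
        Assert.Equal(0, gen2Delta);                // fail loudly if any gen-2 GC happened
    }

    private static void RunBenchmarkWorkload()
    {
        // placeholder for the real benchmark body (serializer round-trips, etc.)
        for (var i = 0; i < 100000; i++) { var s = i.ToString(); }
    }
}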
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:30
@Aaronontheweb I'd also focus on that: http://www.privateeye.io/
it's in beta and it's pretty impressive right now - I've talked with Greg Young, and he wants to keep it free for OSS projects
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:32
cool - what I want is automated performance testing though
i.e. something that screams and fails loudly when a key metric gets fucked up on a PR
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:32
yeah I know, this is more for profiling
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:32
.NET needs better profiling tools
PerfView has been what I've been using
and it's a bit of a bear to be honest
if you're a Microsoft engineer and have spent a lot of time working with Event Tracing for Windows, PerfView is probably amazing
PrivateEye looks cool
Bartosz Sypytkowski
@Horusiath
Nov 05 2015 17:33
this one has metrics for bytes used, number of calls, allocations and more - and best of all, it works from the F# REPL, can be used on live code, and can be composed via observables with any UI you want
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:33
would love to use that for troubleshooting issues with the performance-critical parts
Ashit Shakrani
@ashshak
Nov 05 2015 17:35
Hi Guys,
I am just spiking the FSM and can't get it to transition to the 2nd state, am I missing anything here:
public MyFsm()
{
    MyStates myStates = new MyStates();
    StartWith(myStates.Ready, new MyData());

    When(myStates.Ready, @event =>
    {
        State<MyState, MyData> nextState = null;

        @event.FsmEvent.Match()
            .With<MyEvents.Calculate>(calculate =>
            {
                StateData.Item1 = calculate.Item1;
                StateData.Item2 = calculate.Item2;
                nextState = GoTo(myStates.Calculating, StateData);
            })
            .Default(o =>
            {
                nextState = Stay();
            });

        return nextState;
    });

    When(myStates.Calculating, @event =>
    {
        StateData.Sum = StateData.Item1 + StateData.Item2;
        return GoTo(myStates.Calculated, StateData);
    });
    Initialize();
}
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:36
@ashshak can you show the full FSM code along with the messages you're sending it?
Ashit Shakrani
@ashshak
Nov 05 2015 17:40
sure, here's a gist:
https://gist.github.com/ashshak/211a7323a78065558b43
Thanks for having a look.
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:42
ah, I think I may know why.. can you verify that the GoTo(myStates.Calculating, StateData) is being called?
with the debugger?
Ashit Shakrani
@ashshak
Nov 05 2015 17:46
yes the event is being pattern-matched and nextState is being set.
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:53
ok, I think I might know what's up
this is from the internals of the FSM<TState, TData> class
private void ProcessEvent(Event<TData> fsmEvent, object source)
{
    if(DebugEvent)
    {
        var srcStr = GetSourceString(source);
        _log.Debug("processing {0} from {1}", fsmEvent, srcStr);
    }
    var stateFunc = _stateFunctions[_currentState.StateName];
    var oldState = _currentState;
    State<TState, TData> upcomingState = null;

    if(stateFunc != null)
    {
        upcomingState = stateFunc(fsmEvent);
    }

    if(upcomingState == null)
    {
        upcomingState = HandleEvent(fsmEvent);
    }

    ApplyState(upcomingState);
    if(DebugEvent && !Equals(oldState, upcomingState))
    {
        _log.Debug("transition {0} -> {1}", oldState, upcomingState);
    }
}
this is where a state transition occurs
the stateFunc in this case is your lambda inside the When( clause
ah, nevermind - I'm an idiot
turn on the following setting in HOCON
Ashit Shakrani
@ashshak
Nov 05 2015 17:57
yup, I saw the source file. And even the handler for OnTransition() is firing. But it never enters the stateFunc of the 2nd state.
Aaron Stannard
@Aaronontheweb
Nov 05 2015 17:58
akka.loglevel = DEBUG
this will print out the _log.Debug("transition {0} -> {1}", oldState, upcomingState); messages to the console
so you'll know for certain if the transition occurs
I think you are transitioning to the second state
but you need to send another event
right now nothing is handled in that state
events don't cascade
they're handled and cause a transition
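(In code terms, something like this - the Calculate constructor shape and the second message are guesses, since the full gist isn't shown here:)

// each Tell is handled by the current state's handler and causes at most one
// transition; the new state's handler only runs when the *next* message arrives
var fsm = system.ActorOf(Props.Create<MyFsm>(), "my-fsm");
fsm.Tell(new MyEvents.Calculate(2, 3));   // Ready -> Calculating
fsm.Tell("compute");                      // handled in Calculating -> Calculated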
Ashit Shakrani
@ashshak
Nov 05 2015 18:03
ok, I see. I have just fired another message and it now hits the handler. I thought I could just move from one state to another. Thanks for your help.
Aaron Stannard
@Aaronontheweb
Nov 05 2015 18:03
no worries @ashshak - it was a good gut check for me too
the thing I was worried about was using nullable types to define states
since we do so many STATE1 == STATE2 comparisons
but you handled that by using singletons
we really need to document FSMs anyway - @skotzko just put together a big sample the other day showing how to unit test them
Ashit Shakrani
@ashshak
Nov 05 2015 18:08
Where can I find that? http://getakka.net/docs/FSM seems incomplete to me, as there are no GoTos. Your MarkedUp blog post on the FSM was helpful.
Aaron Stannard
@Aaronontheweb
Nov 05 2015 18:16
we'll be publishing it soon - the getakka.net docs for the FSM are scarce
@rogeralsing @Horusiath btw, this is supposed to be getting merged today Azure/DotNetty#40
has all of the stuff needed to replace Helios AFAIK
Roger Johansson
@rogeralsing
Nov 05 2015 18:18
nice!
Aaron Stannard
@Aaronontheweb
Nov 05 2015 18:18
only thing missing is the multi-threaded event loop
which helps throughput across multiple connections
makes no difference over a 1:1 connection
Arjen Smits
@Danthar
Nov 05 2015 18:22
"Commit message Motivation:" - let me guess, commit messages are mandatory? ^^
Aaron Stannard
@Aaronontheweb
Nov 05 2015 18:22
lol what's that from?
Arjen Smits
@Danthar
Nov 05 2015 18:23
oh wait, it's followed by a newline :D
Azure/DotNetty@0722ade
Aaron Stannard
@Aaronontheweb
Nov 05 2015 18:23
haha
Max has a very precise communication style
Arjen Smits
@Danthar
Nov 05 2015 18:26
The System.Diagnostics.Contracts stuff is great. I've been using it more and more in my own code as well
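(For anyone following along, a tiny illustration of the kind of contract being discussed; the MessageQueue type is made up:)

using System.Diagnostics.Contracts;

public class MessageQueue
{
    private readonly object[] _items;
    private int _count;

    public MessageQueue(int capacity)
    {
        Contract.Requires(capacity > 0);                            // precondition
        _items = new object[capacity];
    }

    public void Enqueue(object message)
    {
        Contract.Requires(message != null);
        Contract.Requires(_count < _items.Length);
        Contract.Ensures(_count == Contract.OldValue(_count) + 1);  // postcondition (needs the ccrewrite step)
        _items[_count++] = message;
    }
}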
l3igmike
@l3igmike
Nov 05 2015 20:05
is it possible to kill an actor, recreate it somewhere else and include its former mailbox messages?
Aaron Stannard
@Aaronontheweb
Nov 05 2015 20:36
@l3igmike Akka.Persistence can do this
backs up its state, but not its messages, to a durable store
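(Roughly, the pattern looks like this - a hedged sketch with made-up names using ReceivePersistentActor; the journaled events replay wherever the actor is recreated, but anything still sitting in the old mailbox is not recovered:)

using Akka.Persistence;

public class CounterActor : ReceivePersistentActor
{
    private int _state;

    // events stored under this id are replayed wherever the actor is recreated
    public override string PersistenceId { get { return "counter-1"; } }

    public CounterActor()
    {
        Recover<int>(delta => _state += delta);      // replay on (re)start

        Command<int>(delta =>
            Persist(delta, e =>
            {
                _state += e;                         // apply only after the event is durable
                Sender.Tell(_state);
            }));
    }
}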
Yin Zhang
@melcloud
Nov 05 2015 20:54
Morning guys. Is there a way to filter out death watch and heartbeat log messages under debug? I want to see all sent/received messages between two remote nodes, but those heartbeat messages are really annoying! :smile:
Aaron Stannard
@Aaronontheweb
Nov 05 2015 20:55
if you're debugging Akka.Remote
or Cluster
one thing I recommend doing is just setting the heartbeat interval way the hell long
so you can stay at a breakpoint for a few minutes without a bunch of failure detectors going off
I have a snippet for that...
akka {
  log-config-on-start = on
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }

  remote {
    log-remote-lifecycle-events = DEBUG

    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      #will be populated with a dynamic host-name at runtime if left uncommented
      #public-hostname = "POPULATE STATIC IP HERE"
      hostname = "127.0.0.1"
      port = 800
    }
    transport-failure-detector {
      implementation-class = "Akka.Remote.DeadlineFailureDetector,Akka.Remote"
      heartbeat-interval = 400 s
      acceptable-heartbeat-pause = 300000 s
      monitored-by-nr-of-members = 5
      expected-response-after = 5000 s
    }
  }

  loggers = ["Akka.Logger.Serilog.SerilogLogger, Akka.Logger.Serilog"]

  cluster {
    #will inject this node as a self-seed node at run-time
    seed-nodes = []
    roles = [lighthouse]
    failure-detector {
      implementation-class = "Akka.Remote.DeadlineFailureDetector,Akka.Remote"
      heartbeat-interval = 400 s
      acceptable-heartbeat-pause = 300000 s
      monitored-by-nr-of-members = 5
      expected-response-after = 5000 s
    }
  }
}
pardon the weird tabbing
Yin Zhang
@melcloud
Nov 05 2015 20:57
@Aaronontheweb nice... you kill two birds with one stone... :+1:
Aaron Stannard
@Aaronontheweb
Nov 05 2015 20:57
but that lets me debug a cluster for effectively forever
that setting would be terrible to run in production
obviously
Yin Zhang
@melcloud
Nov 05 2015 20:58
yeah, for sure. I just need it for development testing
Aaron Stannard
@Aaronontheweb
Nov 05 2015 20:58
we have an issue where we're considering automatically doing that if the debugger is attached
but that probably wouldn't fix it if the service you're debugging is connected to a bunch of other ones that aren't
better just to modify the config
let me know if that works
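(A hedged sketch of that idea - only stretch the failure detectors when a debugger is attached; the system name is made up and the two settings mirror the snippet above:)

using System.Diagnostics;
using Akka.Actor;
using Akka.Configuration;

public static class SystemFactory
{
    public static ActorSystem Create()
    {
        var baseConfig = ConfigurationFactory.Load();   // the normal HOCON from app.config

        // stretch heartbeats only under a debugger so breakpoints don't trip
        // the failure detectors; production keeps the defaults
        var config = Debugger.IsAttached
            ? ConfigurationFactory.ParseString(@"
                akka.cluster.failure-detector.acceptable-heartbeat-pause = 300000 s
                akka.remote.transport-failure-detector.acceptable-heartbeat-pause = 300000 s")
                .WithFallback(baseConfig)
            : baseConfig;

        return ActorSystem.Create("MySystem", config);
    }
}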
Yin Zhang
@melcloud
Nov 05 2015 21:03
By the way, we really need IntelliSense for the Akka config!
Aaron Stannard
@Aaronontheweb
Nov 05 2015 21:12
yeah, that's a popularly requested one
Yin Zhang
@melcloud
Nov 05 2015 21:13
@Aaronontheweb Yeah, that config works well. thanks
Aaron Stannard
@Aaronontheweb
Nov 05 2015 21:19
:thumbsup: glad to hear it
Yin Zhang
@melcloud
Nov 05 2015 23:22
hi guys, can I use Become in AtLeastOnceDelivery? Is it a bad idea?