Btw, just to add to this, if you run the test from the command line tool (`coyote test` or `coyote replay`) you can add the option `-b`, which will start the debugger before it runs the test (it basically instruments a `System.Diagnostics.Debugger.Launch()`). This typically opens a Visual Studio selector for the debug session, as @akashlal described. So far I have only tried this approach of debugging with the VS IDE on Windows.
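For reference, you can get the same behavior without the `-b` flag by launching the debugger yourself at the start of the test body. A minimal sketch (the test method and its name are hypothetical; `Debugger.Launch()` is the standard .NET API the flag instruments):

```csharp
using System.Diagnostics;
using Microsoft.Coyote.Actors;
using Microsoft.Coyote.SystematicTesting;

public static class Tests
{
    [Test]
    public static void TestShoppingCart(IActorRuntime runtime)
    {
        // Same effect as the -b flag: opens the debugger selector before
        // the test body runs (no-op if a debugger is already attached).
        Debugger.Launch();

        // ... test logic under Coyote's control ...
    }
}
```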
Another way is to run the Coyote test programmatically. We plan to add documentation on this asap, but basically you can programmatically create a Coyote `TestingEngine` and run the test from a test method (the `coyote` command line tool uses this logic under the hood). That way, you can debug the test with whatever IDE or process you have been using so far (e.g. add breakpoints, right click the test and click debug). This is also useful if you want to run Coyote tests on a unit testing framework like xUnit or MSTest, etc. Until we add this info to our docs, you can see my answer on this closed GitHub issue, which provides a code snippet showing how to do this (for both test and replay): microsoft/coyote#23
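Until that documentation lands, here is a minimal sketch of the programmatic approach inside an xUnit test (the actor type and test names are hypothetical placeholders; the `TestingEngine` API shown is from `Microsoft.Coyote.SystematicTesting`, but double-check the signatures against the Coyote version you are using):

```csharp
using System.Linq;
using Microsoft.Coyote;
using Microsoft.Coyote.Actors;
using Microsoft.Coyote.SystematicTesting;
using Xunit;

public class CoyoteTests
{
    [Fact]
    public void SystematicTest()
    {
        var config = Configuration.Create().WithTestingIterations(100);

        // Wrap the Coyote test in a testing engine; this is roughly
        // what the `coyote test` CLI does under the hood.
        TestingEngine engine = TestingEngine.Create(config,
            (IActorRuntime runtime) =>
            {
                // Hypothetical test body: create actors, send events, assert.
                runtime.CreateActor(typeof(ShoppingCartActor));
            });

        engine.Run();

        // Fail the xUnit test if Coyote found a bug in any iteration.
        Assert.True(engine.TestReport.NumOfFoundBugs == 0,
            engine.TestReport.BugReports.FirstOrDefault());
    }
}
```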
We do not support the `ValueTask` type yet, only the original `Task` type and other common types like `TaskCompletionSource` (please note that, compared to our actor/state-machine programming model, our task programming model is still in preview, so we keep adding new features and supported types to make it easier to consume, prioritizing them based on user demand and feedback). That said, supporting `ValueTask` should not be too hard (a lot of the logic to control and systematically test it is the same as for `Task`), so we can try to add this asap.
You can define a custom `Shutdown` event, to which an actor will respond by serializing and storing its state, and then halting itself. There should not be any complications here, unless you have special requirements like shutting down actors without pausing client requests.
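A minimal sketch of that pattern (the `Shutdown` event, the actor class, and `SaveStateAsync` are hypothetical placeholders; `RaiseHaltEvent` and `OnHaltAsync` are the Coyote actor APIs discussed in this thread):

```csharp
using System.Threading.Tasks;
using Microsoft.Coyote.Actors;

public class Shutdown : Event { } // hypothetical custom event

[OnEventDoAction(typeof(Shutdown), nameof(HandleShutdown))]
public class WorkerActor : Actor
{
    private async Task HandleShutdown()
    {
        await this.SaveStateAsync(); // hypothetical: serialize and store state
        this.RaiseHaltEvent();       // halt this actor; OnHaltAsync runs next
    }

    protected override Task OnHaltAsync(Event e)
    {
        // Last chance for cleanup before the actor is removed.
        return Task.CompletedTask;
    }

    private Task SaveStateAsync() => Task.CompletedTask; // placeholder
}
```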
I overrode `OnHaltAsync(Event e)` and called `runtime.Stop()`, which according to the docs "Terminates the runtime and notifies each active actor to halt execution." However, `OnHaltAsync` was never called. Does that seem like a bug, or should I move that to an issue?
The `OnHaltAsync` method is called when an actor is about to halt. Note that halting an actor has to be done explicitly, either by sending the `HaltEvent` to it, or by the actor itself calling `RaiseHaltEvent`. Were you trying to halt an actor, but `OnHaltAsync` never got invoked? If so, do file an issue.
I checked the code and indeed `Runtime.Stop` was not designed to notify all running actors via `OnHaltAsync`. It just does this :-)

```csharp
public void Stop() => this.IsRunning = false;
```
It is an interesting idea, but we'd have to change the API to make `Stop` async. Perhaps a new `HaltAsync` on `ICoyoteRuntime` would make sense; it would be implemented by the `ActorRuntime` and ignored by the task runtime. So how about this: microsoft/coyote#36
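For concreteness, the proposed shape would look roughly like this (purely hypothetical until that issue is resolved; this is not a real Coyote API today):

```csharp
using System.Threading.Tasks;

// Hypothetical sketch of the proposal in microsoft/coyote#36.
public interface ICoyoteRuntimeProposal
{
    // Notify each active actor to halt (invoking OnHaltAsync) and complete
    // once all actors have halted. The actor runtime would implement this;
    // the task runtime would treat it as a no-op.
    Task HaltAsync();
}
```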
I have some questions about using Coyote and Orleans. I think I understand the conceptual programming model with the System `Task` vs the Coyote `Task`: one in which the `Task` behavior is transparent when running in runtime/release mode vs under test using the Coyote test engine. However, w.r.t. Coyote's Actor model, there doesn't seem to be a 1:1 comparison, because there is no `Actor` object in the .NET Framework. The closest models we have would be something like Akka.NET or Orleans.
So, how does one conceptualize the usage of Coyote's `Actor` in relation to an Orleans `Grain`? Is there some way to make the Coyote `Actor`'s behavior become transparent when running under the Orleans runtime?
For `Actor`s, Coyote provides a runtime. However, the runtime is in-memory and very lightweight. One option is the following. Each Orleans `Grain` hosts a single Coyote `Actor`. When the grain gets a message, it passes it to the actor. For testing, the `Grain` is erased away and the actors talk directly to each other. In production, the `Grain` provides all the distributed goodies. This way, you do incur a small overhead from using the Coyote runtime in production, but it should be very minor. This seems to be a common question -- if you're up for it, perhaps you can prototype a simple solution for "test with Coyote, deploy with Orleans" so we can discuss more concretely.
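A rough sketch of that layering (grain, actor, and event names are hypothetical, and the grain lifecycle follows the Orleans 3.x-style `OnActivateAsync` signature; treat this as a starting point rather than a tested integration):

```csharp
using System.Threading.Tasks;
using Microsoft.Coyote.Actors;
using Orleans;

public class AddItem : Event // hypothetical event type
{
    public readonly string Sku;
    public AddItem(string sku) => this.Sku = sku;
}

public interface IShoppingCartGrain : IGrainWithGuidKey
{
    Task AddItemAsync(string sku);
}

public class ShoppingCartGrain : Grain, IShoppingCartGrain
{
    private IActorRuntime runtime;
    private ActorId cartActor;

    public override Task OnActivateAsync()
    {
        // In production, each grain hosts one in-memory Coyote actor.
        // In a Coyote test, the grain layer is erased and the actors
        // are created directly on a single test runtime instead.
        this.runtime = RuntimeFactory.Create();
        this.cartActor = this.runtime.CreateActor(typeof(ShoppingCartActor));
        return base.OnActivateAsync();
    }

    public Task AddItemAsync(string sku)
    {
        // Forward the Orleans grain call to the hosted Coyote actor.
        this.runtime.SendEvent(this.cartActor, new AddItem(sku));
        return Task.CompletedTask;
    }
}
```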
Hi @akashlal, I have the basic Orleans <-> Coyote grain/actor interop working. However, the Coyote testing part isn't done yet.
The example below tries to model the operation of a shopping cart:
I wanted to pause here because I'm noticing some call-semantics issues between the Coyote and Orleans runtimes, and I'd like to get some feedback on them.
The first thing I notice is `actorRuntime.SendEvent`: this operation returns immediately once the event is enqueued in the Coyote Actor's inbox. The ideal semantics I'm looking for is "awaiting" the event being processed before returning control to the Orleans runtime. I think this is important because if control is returned to the Orleans runtime as soon as the event is enqueued, there's a chance that the Coyote Actor will "lag behind" the Orleans runtime, which I don't think is a good situation to be in.
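One conventional way to get that "await until processed" behavior is to carry a completion source inside the event, so the caller can await it. A hand-rolled sketch, not a built-in Coyote feature (the `Request` event and the surrounding names are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.Coyote.Actors;

// Hypothetical request event that carries its own completion signal.
public class Request : Event
{
    public readonly TaskCompletionSource<bool> Done =
        new TaskCompletionSource<bool>(
            TaskCreationOptions.RunContinuationsAsynchronously);
}

// Caller side (e.g., inside the Orleans grain):
//   var request = new Request();
//   actorRuntime.SendEvent(actorId, request);
//   await request.Done.Task;  // resumes after the actor handled the event
//
// Actor side, at the end of the event handler:
//   ((Request)e).Done.SetResult(true);
```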
The second issue is getting simple values "out of" the Coyote runtime. Again, I think I need some kind of construct to "await" the Coyote Actor "finishing its work" before returning control to the Orleans runtime. One approach that seemed to work is the `AwaitableEventGroup<T>` mechanism, but it feels like a hack: I think I should be responding with something like `.SendEvent(backToCaller, response);`, but instead I'm using a completely different response mechanism, `this.CurrentEventGroup`, to acquire the means to respond, while other interactions with the Coyote Actor use `.SendEvent`.
Ideally, the natural semantics I'm looking for are shown below (when interacting with the Coyote runtime from outside):

```csharp
actorRuntime.SendEvent(event);
await actorRuntime.ReceiveEventAsync(actorId, typeof(Response));
```
I guess I could create C# extension methods to hack in the `this.CurrentEventGroup` mechanism, but I'm not sure. Let me know your thoughts. Thanks!
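For reference, here is roughly what the `AwaitableEventGroup<T>` version looks like end to end (the `GetCountEvent`, `CartClient`, and `Count` names are hypothetical; check the Coyote docs for the exact `SendEvent` overloads in your version):

```csharp
using System.Threading.Tasks;
using Microsoft.Coyote.Actors;

public class GetCountEvent : Event { } // hypothetical query event

public static class CartClient
{
    public static async Task<int> GetCountAsync(IActorRuntime runtime, ActorId cartActor)
    {
        // Attach an awaitable event group to the send, then await it.
        var group = new AwaitableEventGroup<int>();
        runtime.SendEvent(cartActor, new GetCountEvent(), group);
        return await group; // completes when the actor calls SetResult
    }
}

// Actor side, at the end of the GetCountEvent handler:
//   (this.CurrentEventGroup as AwaitableEventGroup<int>)?.SetResult(this.Count);
```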
@Psiman62 We don't have specific guidance documented at the moment. The basic strategy follows what we have in our samples: use the distributed machinery in production (e.g., use the "Send" that the distributed actor framework provides), but erase it all away for testing (e.g., use the Coyote Send instead).
We have some experience doing this with Service Fabric Actors, but it was on an internal codebase. @bchavez is putting together a solution with Orleans Actors that can be a useful guide as it comes together. The "serverless" aspect of Durable Functions is perhaps new, so I would be interested in following your progress. And happy to answer questions, of course, as you go along.
I'm a little confused by that. If you are treating Coyote tests as some form of unit tests, where you fake the boundaries between your system and an I/O system, how are you ever going to find any locking issues?
For example, if I was using files to save and load information in my application, and there was a locking issue in that particular part of my code, but in my Coyote tests I replaced the actual file system with a fake that just talks to an in-memory file system, how would it ever find the locking issue?