Hi @nanderto, if I understand correctly, you are trying to run an MSTest or xUnit test whose return type is Microsoft.Coyote.Tasks.Task? This one https://github.com/nanderto/FirstAspNetCoyote/blob/master/SecondAspNetCoyoteTests/Pages/CounterViewModelTests.cs#L27, right? I don't think that is possible, as MSTest and xUnit do not understand our type (similar to how an async Main method cannot return a Coyote task) and require a regular C# task. We still need to add this information to our documentation, but to run a Coyote test programmatically (from inside a unit test in MSTest or xUnit) rather than using the 'coyote' CLI tool, you need to do something like the following:
using Microsoft.Coyote;
using Microsoft.Coyote.Specifications;

[TestMethod]
public async System.Threading.Tasks.Task IncrementCountTestAsync()
{
    // Create a test configuration; this is a basic one using 100 test iterations.
    // Each iteration executes the test method from scratch, exploring different interleavings.
    Configuration configuration = Configuration.Create().WithTestingIterations(100);

    // Create the Coyote runner programmatically,
    // passing it the configuration and the test method to run.
    TestingEngine engine = TestingEngine.Create(configuration, async () =>
    {
        var viewmodel = new SecondCoyoteLibrary.Pages.CounterViewModel(runtime);
        viewmodel.CurrentCount = 10;
        viewmodel.IncrementAmount = 3;
        await viewmodel.IncrementCount();
        Specification.Assert(viewmodel.CurrentCount == 13);
    });

    // Run the Coyote test.
    engine.Run();

    // Check for bugs.
    Console.WriteLine($"Found #{engine.TestReport.NumOfFoundBugs} bugs.");
    if (engine.TestReport.NumOfFoundBugs == 1)
    {
        Console.WriteLine($"Bug: {engine.TestReport.BugReports.First()}");
    }
}
Basically, you create a Coyote TestingEngine and pass it a test configuration and a lambda (this lambda uses Coyote tasks). You then run it (which works as if you were running the coyote tool from the command line), and finally you can check the report for bugs.
@lanekelly IIRC, the DGML feature currently only works with the Actors programming model, not with Tasks.
@lanekelly @akashlal that's correct, DGML tracing currently only works for actors (+ @lovettchris)
@pdeligia Thanks for taking a look. Yes, I was returning a Coyote task instead of a System.Threading.Tasks task. Interestingly, if I change to a System.Threading.Tasks.Task then my code still compiles and runs in the website. Below is the method that is called; it returns a System.Threading.Tasks.Task:
public async Task IncrementCount()
{
    var request = new RequestEvent<int, int>(IncrementAmount);
    AddActor = runtime.CreateActor(typeof(AddActor), request);
    var response = await request.Completed.Task;
    CurrentCount = CurrentCount + response;
    runtime.SendEvent(AddActor, HaltEvent.Instance);
}
This actually works in the website even though request.Completed.Task is a Coyote task, but it does not work in the unit test.
The sample you provided also does not work, as there is no runtime available to pass into the method. Besides which, I am not looking to replace how the Coyote test engine works; I just want to be able to run unit tests.
@nanderto sorry my code snippet was incomplete, this is how you pass the runtime:
TestingEngine engine = TestingEngine.Create(configuration, async runtime =>
{
    ...
});
Btw, my code does not try to replace how the Coyote test engine works; it shows you how to run a Coyote test from inside another unit testing framework (MSTest, xUnit) using the Coyote systematic testing engine :)
You can think of the Coyote testing engine as its own "unit test runner for Coyote". The TestingEngine will create a special "systematic testing mode" Coyote runtime and pass it to your test via dependency injection. You are not supposed to create your own Coyote runtime during systematic testing (only in production).
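For reference, here is a minimal sketch combining the two snippets above (it reuses the configuration from the first one); the IActorRuntime parameter type and the assertion message are my assumptions, based on the view model creating actors:

TestingEngine engine = TestingEngine.Create(configuration, async (IActorRuntime runtime) =>
{
    // The runtime here is injected by the TestingEngine, not created by the test.
    var viewmodel = new SecondCoyoteLibrary.Pages.CounterViewModel(runtime);
    viewmodel.CurrentCount = 10;
    viewmodel.IncrementAmount = 3;
    await viewmodel.IncrementCount();
    Specification.Assert(viewmodel.CurrentCount == 13, "Unexpected count.");
});

engine.Run();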
E.g. see how tests are written here: https://github.com/microsoft/coyote-samples/blob/master/HelloWorldActors/Program.cs. Notice that a runtime is passed as an input parameter (and not created by the user) when writing a test method using the Microsoft.Coyote.TestingServices.Test attribute (https://github.com/microsoft/coyote-samples/blob/master/HelloWorldActors/Program.cs#L38), whereas in the Main method you explicitly create a runtime (https://github.com/microsoft/coyote-samples/blob/master/HelloWorldActors/Program.cs#L16) since that is invoked in production.
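Roughly, that structure looks like the following sketch (not the actual sample code; the Greeter actor name and the RuntimeFactory call are illustrative assumptions):

using System;
using Microsoft.Coyote;
using Microsoft.Coyote.Actors;

public class Greeter : Actor
{
    // A trivial actor, just so the sketch is complete.
}

public static class Program
{
    // Production entry point: you create the actor runtime yourself here.
    public static void Main()
    {
        IActorRuntime runtime = RuntimeFactory.Create();
        Execute(runtime);
        Console.ReadLine(); // keep the process alive so the actors can run
    }

    // Test entry point discovered by the 'coyote test' tool: the runtime is
    // injected by Coyote, never created by the test itself.
    [Microsoft.Coyote.TestingServices.Test]
    public static void Execute(IActorRuntime runtime)
    {
        runtime.CreateActor(typeof(Greeter));
    }
}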
However, using the Microsoft.Coyote.TestingServices.Test attribute only works when running tests with our coyote test command line tool (see https://microsoft.github.io/coyote/learn/tools/testing). What I described above does the same thing but allows you to integrate with other unit testing frameworks like MSTest or xUnit.
Hope this answers your question?
@nanderto if you just want to run a regular unit test, without the systematic testing capability that coyote test gives you, then you can do this:
[TestMethod]
public async System.Threading.Tasks.Task IncrementCountTestAsync()
{
    // Create a production (non-testing) actor runtime to drive the view model.
    IActorRuntime runtime = Microsoft.Coyote.Actors.RuntimeFactory.Create();
    var viewmodel = new SecondCoyoteLibrary.Pages.CounterViewModel(runtime);
    viewmodel.CurrentCount = 10;
    viewmodel.IncrementAmount = 3;
    await viewmodel.IncrementCount();
    Assert.AreEqual(13, viewmodel.CurrentCount);
}
Notice how I changed the signature of the method to return a regular C# task. That's all you need to run this; System tasks and Coyote tasks can work together in production :) But this approach will result in a flaky test, as the concurrency is not controlled, whereas the approach I described in my last post results in Coyote controlling the concurrency and systematically exploring interleavings to find bugs in your unit test!
Regarding ConcurrentDictionary: I don't see any of its methods returning a native Task. For improved test coverage, make sure to insert an ExploreContextSwitch before calling any method on a ConcurrentDictionary.
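A minimal sketch of that suggestion, assuming ExploreContextSwitch is the static hint on the Microsoft.Coyote.Tasks.Task type (the Counters class is just an illustration):

using System.Collections.Concurrent;
using CoyoteTask = Microsoft.Coyote.Tasks.Task;

public class Counters
{
    private readonly ConcurrentDictionary<string, int> counts =
        new ConcurrentDictionary<string, int>();

    public void Increment(string key)
    {
        // Hint to the Coyote scheduler that this is an interesting point to
        // explore a context switch, improving coverage around the dictionary call.
        CoyoteTask.ExploreContextSwitch();
        this.counts.AddOrUpdate(key, 1, (_, v) => v + 1);
    }
}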
Let me look up the Channel type, haven't used it before ...
Regarding the Channel type: ideally, you should mock it in the same way that we mocked TaskCompletionSource. But that implementation uses some internal APIs that make the mocking easy. We are going to consider making these APIs public so that your exercise will be simpler. For now, maybe you can write a simple mock.
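Until then, a very rough sketch of what a hand-written mock could look like; the MockChannel name is made up, and it assumes the Microsoft.Coyote.Tasks Task, Task.FromResult and TaskCompletionSource.Create<T>() APIs, so treat it as illustrative only:

using System.Collections.Generic;
using Microsoft.Coyote.Tasks;

// Illustrative unbounded channel mock; under systematic testing Coyote serializes
// execution, so the plain queues here do not need extra locking.
public class MockChannel<T>
{
    private readonly Queue<T> items = new Queue<T>();
    private readonly Queue<TaskCompletionSource<T>> readers =
        new Queue<TaskCompletionSource<T>>();

    public void Write(T item)
    {
        // Complete a waiting reader if there is one, otherwise buffer the item.
        if (this.readers.Count > 0)
        {
            this.readers.Dequeue().SetResult(item);
        }
        else
        {
            this.items.Enqueue(item);
        }
    }

    public Task<T> ReadAsync()
    {
        // Hand back a buffered item, or park the reader on a Coyote-controlled
        // task completion source until the next Write.
        if (this.items.Count > 0)
        {
            return Task.FromResult(this.items.Dequeue());
        }

        var tcs = TaskCompletionSource.Create<T>();
        this.readers.Enqueue(tcs);
        return tcs.Task;
    }
}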
Btw, just to add to this, if you run the test from the command line tool (coyote test or coyote replay) you can add the option --break (-b), which will start the debugger before it runs the test (it basically instruments a System.Diagnostics.Debugger.Launch()). This typically opens a Visual Studio selector for the debug session, as @akashlal described. But I have only tried this approach of debugging using the VS IDE on Windows so far.
Another way is to run the Coyote test programmatically. We plan to add documentation on this asap, but basically you can programmatically create a Coyote TestingEngine and run the test based on a test Configuration (the coyote command line tool uses this logic under the hood). That way, you can debug the test with whatever IDE or process you are using today (e.g. add breakpoints, right click the test and click debug). This is also useful if you want to run Coyote tests in a unit testing framework like xUnit or MSTest. Until we add this info to our docs, you can see my answer on this closed GitHub issue, which provides a code snippet showing how to do this (both test and replay): microsoft/coyote#23
We do not support the ValueTask type yet, only the original Task type and other common types like TaskCompletionSource (please note that, compared to our actor/state-machine programming model, our task programming model is still in preview, so we keep adding new features and supported types to make it easier to consume, and we prioritize those based on user demand and feedback). That said, supporting ValueTask should not be too hard (a lot of the logic to control and systematically test it is the same as for Task), so we can try to add this asap.
You could define a Shutdown event, to which an actor responds by serializing and storing its state and then halting itself. There should not be any complications here, unless you have special requirements like shutting down actors without pausing client requests.
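A minimal sketch of that pattern; the Shutdown event, Worker actor and SaveStateAsync method are illustrative names I made up, not Coyote APIs:

using System.Threading.Tasks;
using Microsoft.Coyote;
using Microsoft.Coyote.Actors;

public class Shutdown : Event { }

[OnEventDoAction(typeof(Shutdown), nameof(HandleShutdown))]
public class Worker : Actor
{
    private async Task HandleShutdown()
    {
        // Serialize and persist whatever state needs to survive the shutdown...
        await this.SaveStateAsync();

        // ...and then halt this actor.
        this.RaiseHaltEvent();
    }

    private Task SaveStateAsync()
    {
        // User-defined persistence goes here.
        return Task.CompletedTask;
    }
}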
I overrode OnHaltAsync(Event e) and called runtime.Stop(), which according to the docs "Terminates the runtime and notifies each active actor to halt execution." However, OnHaltAsync was never called. Does that seem like a bug, or should I move this to an issue?
The OnHaltAsync method is called when an actor is about to halt. Note that halting an actor has to be done explicitly, by either sending the HaltEvent to it, or by the actor itself calling RaiseHaltEvent. Were you trying to halt an actor, but OnHaltAsync never got invoked? If so, do file an issue.
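For example, a minimal sketch (the MyActor name is illustrative) where the halt is explicit and OnHaltAsync then fires:

using System.Threading.Tasks;
using Microsoft.Coyote;
using Microsoft.Coyote.Actors;

public class MyActor : Actor
{
    protected override Task OnInitializeAsync(Event initialEvent)
    {
        // Halting is explicit: here the actor raises the halt event on itself
        // (alternatively, another actor could send it HaltEvent.Instance).
        this.RaiseHaltEvent();
        return Task.CompletedTask;
    }

    protected override Task OnHaltAsync(Event e)
    {
        // This callback runs just before the actor halts.
        return Task.CompletedTask;
    }
}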
I checked the code and indeed Runtime.Stop was not designed to notify all existing running actors via OnHaltAsync. It just does this :-)

public void Stop() => this.IsRunning = false;

It is an interesting idea, though; we'd have to change the API to make Stop async. Perhaps a new HaltAsync on ICoyoteRuntime would make sense, which would be implemented by the ActorRuntime and ignored by the task runtime. So how about this: microsoft/coyote#36
I have some questions about using Coyote and Orleans. I think I understand the conceptual programming model with System Task vs Coyote Task: one in which the Task behavior is transparent when running in runtime/release mode vs under test using the Coyote test engine. However, w.r.t. Coyote's Actor model, there doesn't seem to be a 1:1 comparison, because there is no Actor object in the .NET Framework. The closest models we have would be something like Akka.NET or Orleans.

So, how does one conceptualize the usage of Coyote's Actor in relation to an Orleans Actor/Grain? Is there some way to make Coyote's Actor behavior become transparent when running under the Orleans runtime?
For both Task and Actors, Coyote provides a runtime. However, the runtime is in-memory and very lightweight. One option is the following. Each Orleans Grain hosts a single Coyote Actor. When the grain gets a message, it passes it to the actor. For testing, the Grain is erased away and the actors talk directly to each other. In production, the Grain provides all the distributed goodies. This way, you do incur a small overhead from using the Coyote runtime in production, but it should be very minor. This seems to be a common question -- if you're up for it, perhaps you can prototype a simple solution for "test with Coyote, deploy with Orleans" so we can discuss it more concretely.
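To make the idea concrete, here is a rough sketch (not working code from any sample; the ICartGrain, CartActor and AddItemEvent names, and the per-grain RuntimeFactory.Create() call, are all illustrative assumptions):

using System.Threading.Tasks;
using Microsoft.Coyote;
using Microsoft.Coyote.Actors;
using Orleans;

// The event the grain forwards to its hosted Coyote actor.
public class AddItemEvent : Event
{
    public readonly string Sku;
    public AddItemEvent(string sku) => this.Sku = sku;
}

// The Coyote actor owning the cart logic; this is the part that gets
// systematically tested (with the grain layer erased away).
[OnEventDoAction(typeof(AddItemEvent), nameof(HandleAddItem))]
public class CartActor : Actor
{
    private void HandleAddItem(Event e)
    {
        // Update in-memory cart state here.
    }
}

public interface ICartGrain : IGrainWithStringKey
{
    Task AddItem(string sku);
}

// In production the grain provides the distributed goodies and simply forwards
// calls as events to the hosted actor.
public class CartGrain : Grain, ICartGrain
{
    private IActorRuntime actorRuntime;
    private ActorId cartActor;

    public override Task OnActivateAsync()
    {
        this.actorRuntime = RuntimeFactory.Create();
        this.cartActor = this.actorRuntime.CreateActor(typeof(CartActor));
        return base.OnActivateAsync();
    }

    public Task AddItem(string sku)
    {
        this.actorRuntime.SendEvent(this.cartActor, new AddItemEvent(sku));
        return Task.CompletedTask;
    }
}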
Hi @akashlal, I have the basic Orleans <-> Coyote grain/actor working. However, the Coyote testing part isn't done yet.
The example below tries to model the operation of a shopping cart:
https://github.com/bchavez/coyote-samples/tree/master/OrleansActors
I wanted to pause here because I'm noticing some call-semantics issues between the Coyote and Orleans runtimes, and I'd like to get some feedback on them.
The first thing I notice is this part here:
https://github.com/bchavez/coyote-samples/blob/0b1d4bb43aa4c8a1549a356840f23913d72802a4/OrleansActors/Grains/CartGrain.cs#L32-L42
Orleans calls actorRuntime.SendEvent, and this operation returns immediately once the item is enqueued in the Coyote actor's inbox. The ideal semantics I'm looking for is "awaiting" the event being processed before returning control to the Orleans runtime. I think this is important because if control is returned to the Orleans runtime (based on Coyote inbox semantics), there's a chance that the Coyote actor will "lag behind" the Orleans runtime, which I don't think is a good situation to be in.
The second issue is getting simple values "out of" the Coyote runtime here:
https://github.com/bchavez/coyote-samples/blob/0b1d4bb43aa4c8a1549a356840f23913d72802a4/OrleansActors/Grains/CartGrain.cs#L65-L76
Again, I think I need some kind of construct to "await" the Coyote actor finishing its work before returning control to the Orleans runtime. One approach that seemed to work is using the AwaitableEventGroup<T> mechanism, but it feels like a hack: I think I should be responding with something like .SendEvent(backToCaller, response);, but instead I'm using a completely different response mechanism, this.CurrentEventGroup, to obtain the means to respond, whereas other code paths use .SendEvent inside the Coyote actor.
Ideally, the natural semantics I'm looking for are shown below (when interacting with the Coyote runtime from outside):
actorRuntime.SendEvent(event);
await actorRuntime.ReceiveEventAsync(actorId, typeof(Response));
I guess I could create C# extension methods to hack in using the this.CurrentEventGroup mechanism, but I'm not sure. Let me know your thoughts. Thanks!
@Psiman62 We don't have specific guidance documented at the moment. The basic strategy follows what we have in our samples: use the distributed machinery in production (e.g., use the "Send" that the distributed actor framework provides), but erase it all away for testing (e.g., use the Coyote Send instead).
We have some experience doing this with Service Fabric Actors, but it was on an internal codebase. @bchavez is putting together a solution with Orleans Actors that can be a useful guide as it comes together. The "serverless" aspect of Durable Functions is perhaps new, so I would be interested in following your progress. And happy to answer questions, of course, as you go along.