{ fulfillmentText: 'intent response', outputContexts: [] }
... that might be OK if you always override the Dialogflow agent's response with your own (at the function level)... but if you still want to get the agent's response, which is defined in the Dialogflow UI, it won't work as expected... The GH project I've shared contains code which extracts the response from the agent... and it fails, unfortunately... I guess due to the mocked request/response?..
Hi. I'm working on building an AudioPlayer-based Alexa skill and I'm trying to use virtual-alexa for testing. Specifically, I'm trying to exercise the PlaybackNearlyFinished() handler that I've built.
From the reading I've done, it appears there's no simple way to just simulate a PlaybackNearlyFinished event.
I tried doing it at first by generating a launch event, then using a filter to change the type to PlaybackNearlyFinished. This worked in that it directly generated calls to my handler, but the behavior wasn't entirely predictable. Specifically, I had trouble with the save/load of persistent data in my request/response interceptors. In addition, it appears to automatically fire off a PlaybackStarted event for the next stream that I've enqueued. I don't care about that event and want to test it separately.
I did a little more reading and discovered that there is the audioPlayer().playbackNearlyFinished() method. The examples, though, appear to imply that I need to go through a series of utterances or intents to set up the system before being able to announce that playback is nearly finished. This ends up behaving less like a unit test and more like an integration or end-to-end test. An issue in kicking off playback, for example, would break the test even if the handler I'm trying to test is working as expected.
Can you provide some guidance on the best way to unit test a single handler? How do I focus in on the smallest unit possible to ensure that it is behaving as expected?
Well... FWIW, it turned out the problem I was having may have been related to a mock that I was using for the persistence layer. I fixed it by moving it from a beforeEach/afterEach init/reset pattern to a beforeAll/afterAll pattern. In the former case I was calling "restore" and then re-mocking the library each time. In the latter (which ended up working) I used the "reset" functionality after each test runs, so the mock is only set once but its history is cleared after each run.
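To illustrate the "stub once, reset history between tests" pattern described above, here is a minimal, self-contained sketch. It uses a tiny hand-rolled stub (a stand-in for sinon's, so the snippet runs on its own); with real sinon you'd call stub.resetHistory() in afterEach instead of restore() plus re-stubbing.

```javascript
// A tiny sinon-like stub: records calls, can clear its own history.
function makeStub(impl) {
  const stub = (...args) => {
    stub.calls.push(args);
    return impl(...args);
  };
  stub.calls = [];
  stub.resetHistory = () => { stub.calls = []; };  // like sinon's resetHistory()
  return stub;
}

// beforeAll: install the stub ONCE; code under test keeps this reference.
const persistence = { getPersistenceAdapter: makeStub(() => "fake-adapter") };

// ...tests run, call history accumulates...
persistence.getPersistenceAdapter();
persistence.getPersistenceAdapter();

// afterEach: only clear history; the stub itself stays installed, so no
// stale references to a restored-and-re-stubbed function can leak between tests.
persistence.getPersistenceAdapter.resetHistory();
```

The restore/re-stub variant fails when something under test captured a reference to the old stub; resetting history keeps one stable function for the whole suite.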
More importantly to this project: I was able to generate the PlaybackNearlyFinished request by using the launch() method with a filter that changed the request type. I used the mock to set up the environment properly, called the filtered launch, and then tracked the response.
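A sketch of that filtered-launch approach (the virtual-alexa Builder/filter wiring follows its documentation but is shown commented; the stream token is a hypothetical placeholder). The filter itself is a plain function, so it can be demonstrated directly:

```javascript
// Filter that rewrites an outgoing request into a PlaybackNearlyFinished
// event before it reaches the skill's handler.
const toPlaybackNearlyFinished = (requestEnvelope) => {
  requestEnvelope.request.type = "AudioPlayer.PlaybackNearlyFinished";
  requestEnvelope.request.token = "stream-token";  // hypothetical token
};

// Wiring into virtual-alexa (not executed here):
//   const va = require("virtual-alexa");
//   const alexa = va.VirtualAlexa.Builder()
//     .handler("index.handler")
//     .interactionModelFile("./models/en-US.json")
//     .create();
//   alexa.filter(toPlaybackNearlyFinished);
//   const response = await alexa.launch();

// Standalone demonstration of what the filter does to a launch request:
const envelope = { request: { type: "LaunchRequest" } };
toPlaybackNearlyFinished(envelope);
```

This keeps the test focused on one handler: the launch is just a vehicle for delivering the rewritten request.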
Me too :)
The v-a is a powerful tool. Thanks for building it and making it available. It's also pretty unique in the marketplace. When I couldn't get it to do exactly what I wanted I searched for an alternative. There really isn't one. So kudos all around.
I'd love to see more access to individual intents/requests. I'd love to see the audioplayer more able to work in isolation (better setup to create the state you want) and I'd really love to see a simulator for S3 that's similar to the one you guys have for Dynamo. The Dynamo one is cool... if you're using dynamo for persistence. I'm not :)
Again though, great work and thank you.
Makes sense. I ended up just mocking the persistence layer with a fake persistence adapter.
I have a dedicated persistence.js module with a method getPersistenceAdapter. That initializes my S3PersistenceAdapter with a bucket name (and any other settings) and is called during the custom skill initialization.
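The point of that indirection is that the skill asks one module for its adapter instead of constructing it inline, so tests can swap it. A sketch, with S3PersistenceAdapter replaced by a placeholder class so the snippet is self-contained (the real one comes from ask-sdk-s3-persistence-adapter, and the bucket name here is hypothetical):

```javascript
// Placeholder for the real S3PersistenceAdapter class.
class S3PersistenceAdapter {
  constructor(config) { this.bucketName = config.bucketName; }
}

// persistence.js: the single place that knows how persistence is configured.
const persistence = {
  getPersistenceAdapter: () =>
    new S3PersistenceAdapter({ bucketName: "my-skill-state" }),  // hypothetical bucket
};

// Skill initialization calls it; a test can replace
// persistence.getPersistenceAdapter before the skill is built.
const adapter = persistence.getPersistenceAdapter();
```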
In tests I use sinon to stub out that method:
stubPersistenceAdapter(sinon.stub(persistence, 'getPersistenceAdapter'));
I then pass that into the method below to get a standard mock for the persistence adapter
/**
 * Creates a consistent stub for the persistence adapter which saves/loads attributes
 * @param stub - a sinon stub which has stubbed the getPersistenceAdapter
 * @returns a sinon stub which has been set up as a persistence adapter
 */
export const stubPersistenceAdapter = (stub) => {
  let savedAttributes = null;
  stub.returns({
    getAttributes: (requestEnvelope) => Promise.resolve(savedAttributes),
    saveAttributes: (requestEnvelope, attributes) => {
      savedAttributes = attributes;
      return Promise.resolve();
    }
  });
  stub.getSavedAttributes = () => savedAttributes;
  stub.setSavedAttributes = (val) => { savedAttributes = val; };
  return stub;
};
The nice thing about this solution is that it works regardless of the persistence layer (S3, Dynamo, custom?). It also allows me to get/set the attributes that have been written, so I can examine them directly in my tests. Most of my unit tests look at the responseBuilder-generated output from the handler as well as the data written to persistence. In all cases I can easily populate that data before the test runs.
Anyway, that's what's worked for me so far :)
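A sketch of the seed-run-inspect test pattern described above. The adapter stub is reproduced inline (sinon-free) so the snippet runs on its own, and loadAndBump is a hypothetical stand-in for a real skill handler that loads, mutates, and saves attributes:

```javascript
// In-memory persistence adapter stub with get/set accessors for tests.
const makeAdapterStub = () => {
  let saved = null;
  return {
    adapter: {
      getAttributes: () => Promise.resolve(saved),
      saveAttributes: (_env, attrs) => { saved = attrs; return Promise.resolve(); },
    },
    getSavedAttributes: () => saved,
    setSavedAttributes: (val) => { saved = val; },
  };
};

// Hypothetical handler logic: load attributes, bump a counter, save.
async function loadAndBump(adapter) {
  const attrs = (await adapter.getAttributes({})) || { plays: 0 };
  attrs.plays += 1;
  await adapter.saveAttributes({}, attrs);
}

// Test pattern: seed state before the request, run, inspect what was written.
(async () => {
  const stub = makeAdapterStub();
  stub.setSavedAttributes({ plays: 4 });
  await loadAndBump(stub.adapter);
  console.log(stub.getSavedAttributes());  // { plays: 5 }
})();
```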
virtualAlexa.dynamoDB().mock();
Hi @davinci26, with virtual-alexa, this is the documentation for DynamoDB:
https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#dynamodb
In order to put some items, use the dynamo put method, and then you can use the get.
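Conceptually, the DynamoDB mock behaves like an in-memory table that intercepts put and get. A simplified sketch (keying on a single "id" attribute is an assumption for illustration, as is the table/item shape), with the virtual-alexa wiring shown commented per the linked docs:

```javascript
// Wiring (not executed here):
//   const va = require("virtual-alexa");
//   const alexa = va.VirtualAlexa.Builder()
//     .handler("index.handler")
//     .interactionModelFile("./models/en-US.json")
//     .create();
//   alexa.dynamoDB().mock();  // DynamoDB get/put now hit an in-memory store

// Simplified model of what such a mock does: put stores an item, get reads it back.
const mockDynamo = () => {
  const items = new Map();
  return {
    put: (params) => { items.set(params.Item.id, params.Item); },
    get: (params) => ({ Item: items.get(params.Key.id) }),
  };
};

const db = mockDynamo();
db.put({ TableName: "UserState", Item: { id: "user-1", plays: 3 } });
console.log(db.get({ TableName: "UserState", Key: { id: "user-1" } }).Item.plays);  // 3
```

So in a test you seed the table with put before invoking the skill, then read the result back with get afterwards.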