Sergey Korol
@sskorol
Note that all required info for getting responses is present in the agent's JSON files... is there any plan to support parsing them to retrieve responses, contexts, utterances, etc.?
Juan Perata
@jperata
@sskorol At the moment we are still not using the responses and elements from the agent.json or the ones set up in the Dialogflow intent configurations. There are some requests for that functionality, so there are plans to support it, but we haven't set a time for those improvements yet.
Sergey Korol
@sskorol
@jperata ok, got it, thanks
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Hi. I'm working on building an AudioPlayer-based Alexa skill and I'm trying to use virtual-alexa for testing. Specifically, I'm trying to exercise the PlaybackNearlyFinished() handler that I've built.

If I understand the reading I've done, it appears there's no simple way to just simulate a PlaybackNearlyFinished event.

I tried doing it at first by generating a launch event, then using a filter to change the type to PlaybackNearlyFinished. This worked in that it directly generated calls to my handler, but the behavior wasn't entirely predictable. Specifically, I had trouble with the save/load of persistent data in my request/response interceptors. In addition, it appears to automatically fire off a PlaybackStarted event for the next stream that I've enqueued. I don't care about that event and want to test it separately.

I did a little more reading and discovered that there is the audioPlayer().playbackNearlyFinished() method. The examples, though, appear to imply that I need to go through a series of utterances or intents to set up the system before being able to announce that playback is nearly finished. This ends up behaving less like a unit test and more like an integration or end-to-end test. An issue in kicking off playback, for example, would break the test even if the handler I'm trying to test is working as expected.

Can you provide some guidance on the best way to unit test a single handler? How do I focus on the smallest unit possible to ensure that it is behaving as expected?

Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Well... FWIW, it turned out the problem I was having may have been related to a mock that I was using for the persistence layer. I fixed it by moving it from a beforeEach/afterEach init/reset pattern to a beforeAll/afterAll one. In the former case I was calling "restore" and then re-mocking the library each time. In the latter (which ended up working) I used the "reset" functionality after each test runs, so the mock is only set once but its history is cleared after each run.

More importantly for this project: I was able to generate the PlaybackNearlyFinished request by using the launch() method with a filter that changed the request type. I used the mock to set up the environment properly, called the filtered launch, and then tracked the response.
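For reference, a minimal sketch of that filter approach, assuming a skill handler at index.handler and a model file at ./models/en-US.json (both placeholders); the filter callback receives the request JSON before it is sent:

const va = require("virtual-alexa");

const alexa = va.VirtualAlexa.Builder()
  .handler("index.handler")                      // placeholder handler path
  .interactionModelFile("./models/en-US.json")   // placeholder model file
  .create();

// Rewrite the launch request into a PlaybackNearlyFinished request before it is delivered
alexa.filter((request) => {
  request.request.type = "AudioPlayer.PlaybackNearlyFinished";
  request.request.token = "current-stream-token";   // placeholder token
  request.request.offsetInMilliseconds = 10000;
});

// Inside an async test:
const response = await alexa.launch();
// assert on the directives/enqueued stream in `response` here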

John Kelvie
@jkelvie
Hi @damiansilbergleithcunniff thanks for the update on this
The point you make about unit-testing versus integration testing is a good one - we automate a lot of the behavior around the AudioPlayer simply because we found this to be very useful
However, I agree that allowing it to be broken apart and used in isolation is also useful, and that is more appropriately deemed a unit test
John Kelvie
@jkelvie
I'm glad to hear, though, that you were able to get it working with filter
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Me too :)

The v-a is a powerful tool. Thanks for building it and making it available. It's also pretty unique in the marketplace. When I couldn't get it to do exactly what I wanted I searched for an alternative. There really isn't one. So kudos all around.

I'd love to see more access to individual intents/requests. I'd love to see the AudioPlayer better able to work in isolation (better setup to create the state you want), and I'd really love to see a simulator for S3 similar to the one you guys have for Dynamo. The Dynamo one is cool... if you're using Dynamo for persistence. I'm not :)

Again though, great work and thank you.

John Kelvie
@jkelvie
Thanks @damiansilbergleithcunniff - appreciate the feedback
With regard to S3 support, that is not planned near-term, but we use Nock to mock Dynamo - you might find a similar approach works for S3
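If anyone does try the Nock route for S3, a rough sketch of what the interception might look like; the bucket name, key, and stored attributes are placeholders, and the exact host/path depends on how the AWS SDK and S3PersistenceAdapter are configured:

const nock = require("nock");

// Intercept the S3PersistenceAdapter's object reads/writes for a hypothetical bucket
nock("https://my-skill-attributes.s3.amazonaws.com")
  .persist()
  .get(/.*/)
  .reply(200, JSON.stringify({ playCount: 3 }))   // what getAttributes should "load"
  .put(/.*/)
  .reply(200);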
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Makes sense. I ended up just mocking the persistence layer with a fake persistence adapter.

I have a dedicated persistence.js module with a method getPersistenceAdapter. That initializes my S3PersistenceAdapter with a bucket name (and any other settings) and is called during the custom skill initialization.

In test I use sinon to stub out that method.

  stubPersistenceAdapter(sinon.stub(persistence, 'getPersistenceAdapter'));

I then pass that into the method below to get a standard mock for the persistence adapter

/**
 * Creates a consistent stub for the persistence adapter which saves/loads attributes
 * @param stub - a sinon stub which has stubbed the getPersistenceAdapter
 * @returns a sinon stub which has been setup as a persistence adapter
 */
export const stubPersistenceAdapter = (stub) => {
  let savedAttributes = null;
  stub.returns({
    // Mimics the ASK SDK persistence adapter interface: load and save attributes
    getAttributes: (requestEnvelope) => Promise.resolve(savedAttributes),
    saveAttributes: (requestEnvelope, attributes) => {
      savedAttributes = attributes;
      return Promise.resolve();
    }
  });
  // Test helpers to seed or inspect the "persisted" attributes directly
  stub.getSavedAttributes = () => savedAttributes;
  stub.setSavedAttributes = (val) => savedAttributes = val;
  return stub;
};

The nice thing about this solution is that it works regardless of the persistence layer (S3, Dynamo, custom?). It also allows me to get/set the attributes that have been written so that I can examine them directly in my tests. Most of my unit tests look at the responseBuilder-generated output from the handler as well as the data written to persistence. In all cases I can easily populate that data before the test runs.

Anyway, that's what's worked for me so far :)
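As a small illustration of that last point, a sketch of how the stub might be used in a mocha test; the persistence module path, utterance, and attribute names are placeholders:

const sinon = require("sinon");
const { expect } = require("chai");
const persistence = require("../src/persistence");        // hypothetical module exposing getPersistenceAdapter
const { stubPersistenceAdapter } = require("./helpers");   // the helper shown above

const adapterStub = stubPersistenceAdapter(sinon.stub(persistence, "getPersistenceAdapter"));

it("advances to the next track", async () => {
  adapterStub.setSavedAttributes({ currentTrack: 2 });      // seed state before the handler runs
  await alexa.utter("next");                                // `alexa` = a VirtualAlexa instance, built as usual
  expect(adapterStub.getSavedAttributes().currentTrack).to.equal(3);  // inspect what the handler wrote back
});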

John Kelvie
@jkelvie
That's a really nice approach
Siwani Agrawal
@siwaniagrawal
Hey, I'm Siwani. I am working on a Google Assistant project; the testing is done manually, but I need support for deployment-level testing. Can I use Bespoken for that purpose?
Juan Perata
@jperata
Hi @siwaniagrawal , Bespoken is intended for that purpose: you can set up a mixture of unit tests against your action code and e2e tests against your deployed action.
Here is a blog post detailing a complete CI/CD setup for Alexa; for Google the setup would be the same, except for specifying the platform and setting up the back-end of your choice. The differences in configuration are explained in detail at https://read.bespoken.io/ depending on whether you are doing unit or e2e testing.
Siwani Agrawal
@siwaniagrawal
Thanks :+1:
Siwani Agrawal
@siwaniagrawal
Hey @jperata! I was going through the blog post and docs you suggested, but as I'm new to writing tests I'm finding it hard to figure out how to configure and create the test files for Actions on Google. Can you suggest a GitHub link to any Actions project where I can see where to unzip the JSON files, where to write the testing.yml files, and how to configure everything?
Juan Perata
@jperata
Hi @siwaniagrawal , we have https://github.com/bespoken/GuessThePriceForGoogle . It works by starting a server instead of a Google Cloud function, but for guidance on the setup I think it will be helpful to you. We are missing the CI configuration on that one, but the test setup is the same as in this other one: https://github.com/bespoken-samples/GuessThePrice/blob/master/.circleci/config.yml
Siwani Agrawal
@siwaniagrawal
Hi @jperata! I was running the unit tests for my Actions on Google project and this is the error I get every time: "TypeError: googleFunction is not a function". How should I fix this?
Juan Perata
@jperata
Hi @siwaniagrawal , this error happens when we cannot execute the handler of your function; this could be due to the handler being set up incorrectly, or the function not being exported. Could you share your testing.json file with us and ensure that you are exporting your function? You can share it with me via DM if you don't want to share it in public.
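For what it's worth, a minimal sketch of the export side, assuming the actions-on-google v2 library; "googleFunction" stands in for whatever name the Bespoken handler configuration points at:

// index.js -- the exported name must match the handler configured for Bespoken
const { dialogflow } = require("actions-on-google");

const app = dialogflow();
app.intent("Default Welcome Intent", (conv) => conv.ask("Hello!"));

// If this export is missing (or named differently), the runner fails with
// "TypeError: googleFunction is not a function"
exports.googleFunction = app;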
Sotiris Nanopoulos
@davinci26
Hey, thanks for the project. This has probably been asked a million times :) but is there a way to mock the get function of a DynamoDB in the node version of VA?
Juan Perata
@jperata
Hi @davinci26 , that is a common concern, here is our documentation on how to approach Dynamo in general
https://read.bespoken.io/unit-testing/use-cases/#testing-with-dynamo
Sotiris Nanopoulos
@davinci26
@jperata thanks for the reply :). I am not using the YAML-style files; I am working with mocha and the node API. Even with the YAML style I do not see from the documentation how I can pre-load the DB with data. Am I missing something?
What I tried is to add some put items after I use virtualAlexa.dynamoDB().mock();
Juan Perata
@jperata

Hi @davinci26 , with virtual-alexa , this is the documentation for dynamo
https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#dynamodb

To put some items, use the Dynamo put method, and then you can use the get.

John Kelvie
@jkelvie
I would also add with regard to get and put - you can access those as methods as Juan mentions
But by enabling the dynamoDB mock, if you are using ASK SDK, it should "just work" - you don't need to interact directly with dynamo at all
Instead, ASK SDK will interact with our mock dynamo and behave appropriately without any additional action on your part
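Putting those two answers together, a rough sketch of pre-loading data and then letting the skill read it back through the mock; the table name, key names, and item shape are placeholders (the "id"/"attributes" keys follow the ASK SDK DynamoDbPersistenceAdapter defaults), and this assumes the mock intercepts AWS SDK calls as described in the Externals doc above:

const AWS = require("aws-sdk");
const va = require("virtual-alexa");

const alexa = va.VirtualAlexa.Builder()
  .handler("index.handler")                      // placeholder handler path
  .interactionModelFile("./models/en-US.json")   // placeholder model file
  .create();

alexa.dynamoDB().mock();   // from here on, DynamoDB calls hit the mock, not AWS

// Inside an async test: pre-load the table the skill's persistence adapter reads from
const dynamo = new AWS.DynamoDB.DocumentClient({ region: "us-east-1" });
await dynamo.put({
  TableName: "my-skill-table",                                // placeholder table
  Item: { id: "my-user-id", attributes: { playCount: 3 } }    // key must match the request's userId
}).promise();

const response = await alexa.utter("play the game");           // placeholder utterance
// the handler now sees playCount === 3 when it loads persistent attributes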
ash-at-github
@ash-at-github
Per the documentation, https://read.bespoken.io/unit-testing/guide/#test-environment, the environment variable will be set, but I think that would only be applicable to Lambda, right? Since for a webservice we are using an endpoint, is there a way to indicate that a request comes from virtual alexa?
Juan Perata
@jperata
Hi @ash-at-github , if your endpoint runs on the same machine as the tests, you can set your own variables. But if you want to differentiate a request coming from virtual alexa from one coming from Amazon directly, we have some generated fields that differ in format. For example, the sessionId for a real Alexa request is formatted "amzn1.echo-api.session.<uuid>", while virtual alexa generates one like "SessionID.<uuid>".
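For example, a quick check along those lines on the webservice side; `requestBody` stands for the parsed Alexa request envelope:

// Real Alexa: "amzn1.echo-api.session.<uuid>"; virtual alexa: "SessionID.<uuid>"
const sessionId = requestBody.session && requestBody.session.sessionId;
const fromVirtualAlexa = typeof sessionId === "string" && sessionId.startsWith("SessionID.");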
ash-at-github
@ash-at-github
@jperata That sounds good, we can use the different session ID formats here. Thanks!
Siwani Agrawal
@siwaniagrawal
Does Bespoken take care of Firebase function testing as well?
Juan Perata
@jperata
Hi @siwaniagrawal , you can set up your Google Cloud function in the configuration section of the YAML. Here is our documentation on the handler setup:
https://read.bespoken.io/unit-testing/guide-google/#google-cloud-function-configuration
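A minimal test file along those lines might look roughly like this; the handler path and locale are placeholders, and the exact keys should be checked against the linked guide:

---
configuration:
  locale: en-US
  platform: google
  handler: index.js

---
- test: invoke the welcome intent
- "talk to my test action": "*"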
ash-at-github
@ash-at-github
We are seeing this error: "No match for dialog name: <Intent Name>" even after specifying the intent name instead of an utterance.
We have verified that the model file contains that intent. Interestingly, we see the error even when we use an utterance corresponding to <Intent Name>, so it is doing some mapping of that utterance to the intent.
Juan Perata
@jperata
Hi @ash-at-github , I believe you were using an external endpoint instead of a lambda. Do you have the model path set to your latest model in the testing.json file or virtual alexa instance? That's what we use to generate the request and interpret the utterances.
ash-at-github
@ash-at-github
yes @jperata we have included the model and it's interpreting other intents correctly
And as I mentioned, this is happening irrespective of whether we supply the utterance or the intent name.
The utterance is getting mapped to the intent shown in the error message,
so some mapping is happening somewhere
ash-at-github
@ash-at-github
Detailed error log: No match for dialog name: TEXT_INTENT
Error: No match for dialog name: TEXT_INTENT
at DialogManager.Object.<anonymous>.DialogManager.handleDirective (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/dialog/DialogManager.js:31:31)
at RemoteSkillInteractor.<anonymous> (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:84:63)
at step (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:32:23)
at Object.next (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:13:53)
at fulfilled (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:4:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
Timestamp:
2019-03-28T12:20:25.960
Juan Perata
@jperata
Could you share your interaction model with us? You can do it by DM to me if you don't want to share it publicly.
ash-at-github
@ash-at-github
Ok, let me ping you separately about this issue
Harshit Agrawal
@agharshit08
I want to add Bespoken to my Google Assistant action for testing. How should I proceed?
Juan Perata
@jperata
Hi @agharshit08 , answered you in the bst channel.
Colin MacAllister
@dcoli
Hello. I'm working with your virtual-google-assistant node package to test my media actions. In one test I'm launching the service by awaiting a welcome utterance -- not using the .launch() method -- and then awaiting a 'next' utterance, which should move to the next story in my setlist. The welcome works fine, but the next intent returns a "sorry, I can't help you with that." I'm new to this implementation, so I might be missing something, but these commands all work outside of mocha.
Colin MacAllister
@dcoli
We haven't currently implemented the Default Welcome Intent, so .launch() doesn't work. Hoping I can sidestep that with a simple utterance of our welcome phrase
Juan Perata
@jperata
Hi @dcoli , you are right to use a simple utterance for the welcome if you have a different one. If we are returning a "sorry, I can't help you with that", that means we are matching a different intent (if we couldn't find any, we would have returned an error). It might be the case that you have more than one intent matching that utterance; could you try using the "intend" method instead of the utter one?
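For reference, a rough sketch of the difference between the two, assuming the usual virtual-google-assistant builder; the handler path, agent directory, and intent name are placeholders:

const vga = require("virtual-google-assistant");

const assistant = vga.VirtualGoogleAssistant.Builder()
  .handler("index.handler")           // placeholder path to the fulfillment
  .directory("./dialogflow-agent")    // placeholder: the unzipped Dialogflow agent folder
  .create();

// Inside an async test:
// utter() resolves the intent by matching the text against training phrases...
await assistant.utter("welcome to my media action");

// ...while intend() targets the intent by name, skipping utterance matching
const response = await assistant.intend("next.story");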
Colin MacAllister
@dcoli
Thanks, Juan. I've tried follow-up utterances as well as intends