Sergey Korol
@sskorol
@jperata here's a link: https://github.com/sskorol/fulfillment-test -> it's a clean project with a simple function that handles 2 common Dialogflow intents... the agent is pushed as well...
Sergey Korol
@sskorol
note that the webhook url refers to a local ngrok instance... so you may need to spin up your own instance or deploy this function to the cloud...
Juan Perata
@jperata
Hi @sskorol , the issue with the fulfillment is addressed in the latest beta version of virtual-google-assistant (0.3.1). Version 0.3.0 already supported the body generation correctly (it does so by mocking proper request and response objects during the tests), but it was also missing a json method, which is what was added in 0.3.1, and with that it works fine.
Michael Hargiss
@mycargus
:wave: Hullo! I want to use virtual-alexa to test a new Alexa skill I'm writing. I'm trying to decide whether to use Ruby or Node.js. From http://docs.bespoken.io/en/latest/tutorials/tutorial_alexa_unit_testing/ I see this: "And please note - though this example is written in Javascript, the emulator can be used to test non-Javascript-based skills!" How can I do this? Are there any drawbacks to testing a non-Javascript-based skill with virtual-alexa?
Juan Perata
@jperata
Hi @mycargus , the virtual-alexa emulator instance can be pointed at a URL; that way you can start your local server in Ruby and then send requests to and receive responses from that server. But you would still need to write your tests in JavaScript. We now have that tool as part of our BST package, which lets you write your tests in YAML so you don't need to switch between two different languages to do the testing. Here is our getting started documentation.
Other than needing to start your server there are no major drawbacks when using a different language.
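For illustration, a Bespoken YAML test file has roughly this shape (the utterances and expected responses here are hypothetical, not from a real skill):

```yaml
---
configuration:
  locale: en-US

---
- test: launch and play
- LaunchRequest: "welcome to my skill"
- "play the quiz": "here is your first question"
```

Each interaction pairs an utterance (or request type) with the response text expected from the skill.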
Michael Hargiss
@mycargus
Excellent! And I can still use the virtual-alexa mocks if I write the tests in YAML?
Juan Perata
@jperata
yes, here is the detail on how to use mocks with YAML
The Dynamo mock works by replacing the JavaScript library, though, so that one is a JavaScript-only feature for now.
Michael Hargiss
@mycargus
Thank you, these links are helpful. To be sure I understand correctly—which feature(s) only works when writing tests in javascript?
Juan Perata
@jperata
everything available in virtual-alexa should have an equivalent in YAML. In virtual-alexa you can handle some more complex cases, like using the response from your skill as part of generating the next request, for example.
For the Dynamo part, I mean that the skill itself has to be written in JavaScript in order to be able to use it (the tests can be in virtual-alexa or YAML).
Michael Hargiss
@mycargus
Ah okay, I understand now. Thank you much for your help! I’m excited to use virtual-alexa.
Sergey Korol
@sskorol
@jperata thanks for the update!.. I'll try it today / tomorrow
Sergey Korol
@sskorol
@jperata it seems to be working on 0.3.1, but the result contains only the following: { fulfillmentText: 'intent response', outputContexts: [] }... that might be ok if you always override the Dialogflow agent's response with your own (at the function level)... but if you still want to get the agent's response, which is defined in the Dialogflow UI, it won't work as expected... the GH project I've shared contains code which extracts the response from the agent... and it fails, unfortunately... I guess due to the mocked request / response?..
note that all the info required for getting responses is present in the agent's json files... is there any plan to support parsing them to retrieve responses, contexts, utterances, etc.?
Juan Perata
@jperata
@sskorol At the moment we are still not using the responses and elements from the agent.json or the ones set up in the Dialogflow intent configurations. There are some requests for that functionality, so there are plans to support it, but we haven't set a time to make those improvements yet.
Sergey Korol
@sskorol
@jperata ok, got it, thanks
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Hi. I'm working on building an AudioPlayer-based Alexa skill and I'm trying to use virtual-alexa for testing. Specifically, I'm trying to exercise the PlaybackNearlyFinished() handler that I've built.

If I understand the reading that I've done, it appears there's no simple way to just simulate a PlaybackNearlyFinished event.

I tried doing it at first by generating a launch event, then using a filter to change the type to be a PlaybackNearlyFinished. This worked in that it directly generated calls to my handler, but the behavior wasn't entirely predictable. Specifically I had trouble with the save/load of persistent data in my request/response interceptors. In addition it appears to automatically fire off a PlaybackStarted event for the next stream that I've enqueued. I don't care about that event and want to test that separately.

I did a little more reading and discovered that there is the audioPlayer().playbackNearlyFinished() method. The examples, though, appear to imply that I need to go through a series of utterances or intents to set up the system before being able to announce that playback is nearly finished. This ends up behaving less like a unit test and more like an integration or end-to-end test. An issue, for example, in kicking off playback would break the test, even if the handler I'm trying to test is working as expected.

Can you provide some guidance on the best way to unit test a single handler? How do I focus in on the smallest unit possible to ensure that it is behaving as expected?

Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Well... FWIW, it turned out the problem I was having may have been related to a mock that I was using for the persistence layer. I fixed it by moving from a beforeEach/afterEach init/reset pattern to a beforeAll/afterAll pattern. In the former case I was calling "restore" and then re-mocking the library each time. In the latter (which ended up working) I used the "reset" functionality after each test runs, so the mock is only set once but its history is cleared after each run.
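A self-contained sketch of that distinction (hand-rolled rather than sinon itself, and the persistence module here is hypothetical): restore removes the stub entirely, so it would have to be re-created, while reset keeps the stub in place and only clears its recorded call history.

```javascript
// Hand-rolled stub illustrating sinon's reset vs. restore semantics.
const makeStub = (obj, method, impl) => {
  const original = obj[method];
  const stub = (...args) => {
    stub.calls.push(args);                 // record call history
    return impl(...args);
  };
  stub.calls = [];
  stub.reset = () => { stub.calls = []; };          // keep stub, clear history
  stub.restore = () => { obj[method] = original; }; // remove stub entirely
  obj[method] = stub;
  return stub;
};

// Hypothetical persistence module, stubbed once ("beforeAll").
const persistence = { getPersistenceAdapter: () => "realAdapter" };
const stub = makeStub(persistence, "getPersistenceAdapter", () => "mockAdapter");

persistence.getPersistenceAdapter();  // a test runs...
stub.reset();                         // "afterEach": stub stays installed
persistence.getPersistenceAdapter();  // still returns "mockAdapter"

stub.restore();                       // "afterAll": real method is back
```

With reset the mock survives across tests; with restore each test would have to re-mock the module, which is where the flakiness crept in.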

More importantly to this project: I was able to generate the PlaybackNearlyFinished request by using the launch() method with a filter that changed the request type. I used the mock to set up the environment properly, called the filtered launch, and then tracked the response.
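As a sketch, such a filter just rewrites the generated request JSON before it reaches the handler (with virtual-alexa it would be registered via the filter method before calling launch()). The token and offset values below are hypothetical placeholders.

```javascript
// Shape of a filter that turns a generated launch request into an
// AudioPlayer.PlaybackNearlyFinished request. Field names follow the
// Alexa request JSON; token and offset values are placeholders.
const toPlaybackNearlyFinished = (requestJSON) => {
  requestJSON.request.type = "AudioPlayer.PlaybackNearlyFinished";
  requestJSON.request.token = "my-stream-token";
  requestJSON.request.offsetInMilliseconds = 10000;
  return requestJSON;
};

// Applied here to a minimal stand-in for the generated request:
const request = { request: { type: "LaunchRequest" } };
toPlaybackNearlyFinished(request);
console.log(request.request.type); // "AudioPlayer.PlaybackNearlyFinished"
```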

John Kelvie
@jkelvie
Hi @damiansilbergleithcunniff thanks for the update on this
The point you make about unit-testing versus integration testing is a good one - we automate a lot of the behavior around the AudioPlayer simply because we found this to be very useful
However, I agree that allowing it to be broken apart and used in isolation is also useful, and that is more appropriately deemed a unit test
John Kelvie
@jkelvie
I'm glad to hear, though, that you were able to get it working with filter
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Me too :)

The v-a is a powerful tool. Thanks for building it and making it available. It's also pretty unique in the marketplace. When I couldn't get it to do exactly what I wanted I searched for an alternative. There really isn't one. So kudos all around.

I'd love to see more access to individual intents/requests. I'd love to see the audioplayer more able to work in isolation (better setup to create the state you want) and I'd really love to see a simulator for S3 that's similar to the one you guys have for Dynamo. The Dynamo one is cool... if you're using dynamo for persistence. I'm not :)

Again though, great work and thank you.

John Kelvie
@jkelvie
Thanks @damiansilbergleithcunniff - appreciate the feedback
With regard to S3 support, that is not planned near-term, but we use Nock to mock Dynamo - you might find a similar approach would work for S3
Damian Silbergleith Cunniff
@damiansilbergleithcunniff

Makes sense. I ended up just mocking the persistence layer with a fake persistence adapter.

I have a dedicated persistence.js module with a method getPersistenceAdapter. That initializes my S3PersistenceAdapter with a bucket name (and any other settings) and is called during the custom skill initialization.

In test I use sinon to stub out that method.

  stubPersistenceAdapter(sinon.stub(persistence, 'getPersistenceAdapter'));

I then pass that into the method below to get a standard mock for the persistence adapter

/**
 * Creates a consistent stub for the persistence adapter which saves/loads attributes
 * @param stub - a sinon stub which has stubbed getPersistenceAdapter
 * @returns the same stub, set up as a persistence adapter with get/set helpers
 */
export const stubPersistenceAdapter = (stub) => {
  let savedAttributes = null;
  stub.returns({
    getAttributes: (requestEnvelope) => Promise.resolve(savedAttributes),
    saveAttributes: (requestEnvelope, attributes) => {
      savedAttributes = attributes;
      return Promise.resolve();
    }
  });
  stub.getSavedAttributes = () => savedAttributes;
  stub.setSavedAttributes = (val) => { savedAttributes = val; };
  return stub;
};

The nice thing about this solution is that it works regardless of the persistence layer (S3, Dynamo, custom?). It also allows me to get/set the attributes that have been written, so I can examine them directly in my tests. Most of my unit tests look at the responseBuilder output generated by the handler as well as the data written to persistence. In all cases I can easily populate that data before the test runs.
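To make the get/set inspection concrete, here is a self-contained usage sketch. The sinon stub is replaced by a minimal stand-in exposing only the returns() method the helper relies on, and the helper body repeats the stubPersistenceAdapter above; the playCount attribute is hypothetical.

```javascript
// Minimal stand-in for sinon.stub(persistence, 'getPersistenceAdapter'):
// all the helper needs from it is a returns() method.
const stub = {
  returns(adapter) { this.adapter = adapter; }
};

// Same helper as above, repeated so this sketch runs on its own.
const stubPersistenceAdapter = (stub) => {
  let savedAttributes = null;
  stub.returns({
    getAttributes: () => Promise.resolve(savedAttributes),
    saveAttributes: (requestEnvelope, attributes) => {
      savedAttributes = attributes;
      return Promise.resolve();
    }
  });
  stub.getSavedAttributes = () => savedAttributes;
  stub.setSavedAttributes = (val) => { savedAttributes = val; };
  return stub;
};

// Pre-populate state before a test, then inspect what was written.
stubPersistenceAdapter(stub);
stub.setSavedAttributes({ playCount: 3 });          // seed data for the test
stub.adapter.saveAttributes({}, { playCount: 4 });  // what the handler would do
console.log(stub.getSavedAttributes());             // { playCount: 4 }
```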

Anyway, that's what's worked for me so far :)

John Kelvie
@jkelvie
That's a really nice approach
Siwani Agrawal
@siwaniagrawal
Hey, I'm Siwani. I am working on a Google Assistant project; the testing is done manually, but I need support for deployment-level testing. Can I use Bespoken for this purpose?
Juan Perata
@jperata
Hi @siwaniagrawal , Bespoken is intended for exactly that purpose: you can set up a mixture of unit tests against your action code and e2e tests against your deployed action.
Here is a blog post detailing a complete CI and CD setup for Alexa; for Google the setup would be the same, except for setting the specific platform and the back-end of your choice. The difference in configuration is explained in detail at https://read.bespoken.io/ depending on whether you are doing unit or e2e testing.
Siwani Agrawal
@siwaniagrawal
Thanks :+1:
Siwani Agrawal
@siwaniagrawal
Hey! @jperata , I was going through the blog post and doc you suggested, but as I'm new to writing tests I'm finding it hard to configure things and create the testing files for Actions on Google. Can you suggest a GitHub link to any Actions project where I can see where to unzip the json files, where to write the testing.yml files, and how to configure them?
Juan Perata
@jperata
Hi @siwaniagrawal , we have https://github.com/bespoken/GuessThePriceForGoogle . It works by starting a server instead of a Google Cloud function, but I think it will be helpful for guidance on the setup. We are missing the CI configuration on that one, but the test setup is the same as in this other one: https://github.com/bespoken-samples/GuessThePrice/blob/master/.circleci/config.yml
Siwani Agrawal
@siwaniagrawal
Hi! @jperata I was running the unit tests for an Actions on Google project and this is the error I get every time: TypeError: googleFunction is not a function. How should I fix this?
Juan Perata
@jperata
Hi @siwaniagrawal , this error happens when we cannot execute the handler of your function. This could be due to the handlers being set up incorrectly, or the function not being exported. Could you share your testing.json file with us and make sure that you are exporting your function? You can share it with me via DM if you don't want to share it in public.
Sotiris Nanopoulos
@davinci26
Hey, thanks for the project. This has been asked probably a million times :) but is there a way to mock the get function of a DynamoDB table in the Node version of VA?
Juan Perata
@jperata
Hi @davinci26 , that is a common concern, here is our documentation on how to approach Dynamo in general
https://read.bespoken.io/unit-testing/use-cases/#testing-with-dynamo
Sotiris Nanopoulos
@davinci26
@jperata thanks for the reply :). I am not using the yaml style files; I am working with mocha and the Node API. Even with the yaml style I don't see from the documentation how I can pre-load the db with data. Am I missing something?
What I tried is to add some put items after I use virtualAlexa.dynamoDB().mock();
Juan Perata
@jperata

Hi @davinci26 , with virtual-alexa, this is the documentation for Dynamo:
https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#dynamodb

In order to put some items, use the Dynamo put method, and then you can use get.

John Kelvie
@jkelvie
I would also add, with regard to get and put, that you can access those as methods as Juan mentions.
But by enabling the dynamoDB mock, if you are using the ASK SDK, it should "just work" - you don't need to interact directly with Dynamo at all.
Instead, the ASK SDK will interact with our mock Dynamo and behave appropriately without any additional action on your part.
ash-at-github
@ash-at-github
Per the documentation, https://read.bespoken.io/unit-testing/guide/#test-environment, the environment variable will be set, but I think that would only be applicable to Lambda, right? Since for a web service we are using an endpoint, is there a way to indicate that a request comes from Virtual Alexa?
Juan Perata
@jperata
Hi @ash-at-github , if your endpoint runs on the same machine as the tests, you can set your own variables. But if you want to differentiate a request coming from Virtual Alexa from one coming from Amazon directly, we have some generated fields that differ in format. For example, the sessionId for a real Alexa request is formatted "amzn1.echo-api.session.<uuid>", while Virtual Alexa generates one like "SessionID.<uuid>".
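A quick sketch of such a check, based on the formats just described (the uuid fragments are placeholders):

```javascript
// Distinguish Virtual Alexa requests from real Alexa requests by the
// sessionId prefixes described above.
const isVirtualAlexa = (sessionId) => sessionId.startsWith("SessionID.");
const isRealAlexa = (sessionId) =>
  sessionId.startsWith("amzn1.echo-api.session.");

console.log(isVirtualAlexa("SessionID.37b52a02"));              // true
console.log(isRealAlexa("amzn1.echo-api.session.37b52a02"));    // true
console.log(isVirtualAlexa("amzn1.echo-api.session.37b52a02")); // false
```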
ash-at-github
@ash-at-github
@jperata That sounds good, we can rely on the different session id formats here. thanks!
Siwani Agrawal
@siwaniagrawal
Does Bespoken take care of Firebase function testing as well?
Juan Perata
@jperata
Hi @siwaniagrawal , you can set up your Google Cloud function in the configuration section of the YAML. Here is our documentation on the handler setup:
https://read.bespoken.io/unit-testing/guide-google/#google-cloud-function-configuration
ash-at-github
@ash-at-github
We are seeing this error: "No match for dialog name: <Intent Name>", even after specifying the intent name instead of an utterance.
We have verified that the model file contains that intent. Interestingly, we see that error even when we use an utterance corresponding to <Intent Name>, so it is doing some mapping of that utterance to the intent.
Juan Perata
@jperata
Hi @ash-at-github , I believe you were using an external endpoint instead of a lambda. Do you have the model path set up to your latest model in the testing.json file or the Virtual Alexa instance? That's what we use to generate the request and interpret the utterances.
ash-at-github
@ash-at-github
yes @jperata we have included the model and it's interpreting other intents correctly