Juan Perata
@jperata
Hi @siwaniagrawal , this error happens when we cannot execute the handler of your function. It could be due to the handler being set up incorrectly, or the function not being exported. Could you share your testing.json file with us and confirm that you are exporting your function? You can share it with me via DM if you don't want to post it publicly
Sotiris Nanopoulos
@davinci26
Hey, thanks for the project. This has been asked probably a million times :) but is there a way to mock the get function of a dynamo db in the node version of VA?
Juan Perata
@jperata
Hi @davinci26 , that is a common concern, here is our documentation on how to approach Dynamo in general
https://read.bespoken.io/unit-testing/use-cases/#testing-with-dynamo
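For the YAML-style tests, the linked page describes enabling a mocked Dynamo from the test configuration. A hedged sketch of what such a test file could look like (the `dynamo: mock` key and the test body are illustrative, taken from memory of the linked docs — verify against your bespoken version):

```yaml
---
configuration:
  dynamo: mock

---
- test: Launch with previously stored session state
- LaunchRequest: "welcome back"
```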
Sotiris Nanopoulos
@davinci26
@jperata thanks for the reply :). I am not using the yaml style files. I am working with mocha and the node api. Even with a yaml style file, I do not see from the documentation how I can pre-load the db with data. Am I missing something?
What I tried is to add some put items after I use virtualAlexa.dynamoDB().mock();
Juan Perata
@jperata

Hi @davinci26 , with virtual-alexa , this is the documentation for dynamo
https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#dynamodb

In order to put some items, use the dynamo put method, and then you can use the get method

John Kelvie
@jkelvie
I would also add with regard to get and put - you can access those as methods as Juan mentions
But by enabling the dynamoDB mock, if you are using ASK SDK, it should "just work" - you don't need to interact directly with dynamo at all
Instead, ASK SDK will interact with our mock dynamo and behave appropriately without any additional action on your part
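Conceptually, the mock behaves like an in-memory table that `put` writes to and `get` reads from, so pre-loading data is just a `put` before the test runs. This is a self-contained sketch of that idea only, not the virtual-alexa API (class and method names here are hypothetical):

```javascript
// Minimal in-memory DynamoDB-style mock, for illustration only.
// It shows the behavior described above: put() stores an item under its
// key attribute, and a later get() with the same key returns that item.
class InMemoryDynamoMock {
  constructor() {
    this.tables = {}; // tableName -> { serializedKey -> item }
  }

  put(tableName, item, keyAttribute) {
    const table = this.tables[tableName] || (this.tables[tableName] = {});
    table[JSON.stringify(item[keyAttribute])] = item;
  }

  get(tableName, keyAttribute, keyValue) {
    const table = this.tables[tableName] || {};
    return table[JSON.stringify(keyValue)]; // undefined if not preloaded
  }
}

// Preload state before exercising the skill; the handler's get() then sees it:
const mock = new InMemoryDynamoMock();
mock.put("SessionTable", { userId: "user-1", plays: 3 }, "userId");
console.log(mock.get("SessionTable", "userId", "user-1").plays); // 3
```

With the real library, the same pre-load step would go through the mocked Dynamo before invoking the skill under test.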
ash-at-github
@ash-at-github
Per the documentation, https://read.bespoken.io/unit-testing/guide/#test-environment, the environment variable will be set, but I think that would only be applicable to lambda, right? Since for a webservice we are using an endpoint, is there a way to indicate that the request comes from virtual alexa?
Juan Perata
@jperata
Hi @ash-at-github , if your endpoint runs on the same machine as the tests, you can set your own environment variables. But if you want to differentiate a request coming from virtual alexa from one coming from Amazon directly, we have some generated fields that differ in format. For example, the sessionId for a real alexa request is formatted "amzn1.echo-api.session.<uuid>", while virtual alexa generates one like "SessionID.<uuid>".
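That format difference can be checked directly in the endpoint. A small sketch, using only the two prefixes quoted above (the function names are illustrative):

```javascript
// Distinguish a real Alexa request from a virtual-alexa one by sessionId prefix.
// Real Alexa:    "amzn1.echo-api.session.<uuid>"
// virtual-alexa: "SessionID.<uuid>"
function isVirtualAlexaSession(sessionId) {
  return sessionId.startsWith("SessionID.");
}

function isRealAlexaSession(sessionId) {
  return sessionId.startsWith("amzn1.echo-api.session.");
}

console.log(isVirtualAlexaSession("SessionID.1234-abcd")); // true
console.log(isRealAlexaSession("amzn1.echo-api.session.1234-abcd")); // true
```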
ash-at-github
@ash-at-github
@jperata That sounds good, we can use the different session id format here. Thanks!
Siwani Agrawal
@siwaniagrawal
Does bespoken take care of firebase function testing as well?
Juan Perata
@jperata
Hi @siwaniagrawal , you can set up your Google Cloud function in the YAML configuration section. Here is our documentation on the handler setup
https://read.bespoken.io/unit-testing/guide-google/#google-cloud-function-configuration
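A hedged sketch of what the configuration described in that guide could look like; every key name below is illustrative and should be checked against the linked documentation for your version:

```json
{
  "handler": "index.js",
  "functionName": "myGoogleAction",
  "platform": "google",
  "type": "unit"
}
```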
ash-at-github
@ash-at-github
We are seeing this error: No match for dialog name: <Intent Name>, even after specifying the intent name instead of an utterance
We have verified that the model file contains that intent. Interestingly, we see the error even when we use the utterance corresponding to <Intent Name>, so it is doing some mapping of that utterance to the intent
Juan Perata
@jperata
Hi @ash-at-github , I believe you were using an external endpoint instead of a lambda. Do you have the model path set up to point to your latest model in the testing.json file or the virtual Alexa instance? That's what we use to generate the request and interpret the utterances
ash-at-github
@ash-at-github
yes @jperata we have included the model and it's interpreting other intents correctly
And as I mentioned, this is happening irrespective of whether we supply the utterance or the intent name.
The utterance is getting mapped to the intent shown in the error message
so some mapping is happening somewhere
ash-at-github
@ash-at-github
Detailed error log: No match for dialog name: TEXT_INTENT
Error: No match for dialog name: TEXT_INTENT
at DialogManager.Object.<anonymous>.DialogManager.handleDirective (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/dialog/DialogManager.js:31:31)
at RemoteSkillInteractor.<anonymous> (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:84:63)
at step (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:32:23)
at Object.next (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:13:53)
at fulfilled (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:4:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
Timestamp:
2019-03-28T12:20:25.960
Juan Perata
@jperata
Could you share your interaction model with us? You can send it via DM to me if you don't want to share it publicly
ash-at-github
@ash-at-github
Ok, let me ping you separately about this issue
Harshit Agrawal
@agharshit08
I want to add bespoken to my Google assistant action for testing. How should I proceed?
Juan Perata
@jperata
Hi @agharshit08 , answered you in the bst channel.
Colin MacAllister
@dcoli
Hello. I'm working with your virtual-google-assistant node package to test my media actions. In one test I'm launching the service by awaiting a welcome utterance -- not using the .launch() method -- and then awaiting a 'next' utterance, which should move to the next story in my setlist. The welcome works fine, but the next intent returns a "sorry, I can't help you with that." I'm new at this implementation, so I might be missing something. But these commands all work outside of mocha
Colin MacAllister
@dcoli
We haven't currently implemented the Default Welcome Intent, so .launch() doesn't work. Hoping I can sidestep that with a simple utterance of our welcome phrase
Juan Perata
@jperata
Hi @dcoli , you are right to use a simple utterance for the welcome if you have a different one. If we are returning a "sorry, I can't help you with that", that means we are matching a different intent (if we couldn't find any, we would have returned an error). It might be the case that you have more than one intent that matches that utterance. Could you try using the "intend" method instead of the "utter" one?
Colin MacAllister
@dcoli
Thanks, Juan. I've tried follow-up utterances as well as intends
One thing I notice is that the query sent with the request to the vga is "GOOGLE_ASSISTANT_WELCOME"
but I've already passed that stage
so it may be that there's something I'm not setting that's defaulting to the welcome intent
Colin MacAllister
@dcoli
(@jperata )
Juan Perata
@jperata
@dcoli the "GOOGLE_ASSISTANT_WELCOME" is part of our base request, which we modify depending on the utter or intend used; it's not used for the conversation. Could you check what is inside queryResult.intent.name and validate it against your intents?
Colin MacAllister
@dcoli
@jperata For now I'm adding a filter to change the responseQuery to something we have implemented
Juan Perata
@jperata
glad that you could find something that works for you
Colin MacAllister
@dcoli
Thanks. It looks like data coming back from the second utterance is missing stuff we expected. In our case, the list of audio files to play. I'm going to just insert these into the response object in my tests, but it would be great to figure this out so we're truly testing
ash-at-github
@ash-at-github
Hi, since we use virtual-alexa with a webservice, we had to use SkillURL and disable the signature check. But we are concerned that we are making our service insecure by doing that. Is there any other way to avoid disabling the signature check?
Juan Perata
@jperata
Hi @ash-at-github , since the signature check validates whether a request comes from Alexa and we are using an emulator, Virtual Alexa won't be able to generate the proper headers if you enable it. We can only suggest that you keep two different environments: the production one still using signature checks, and a local one for tests.
ash-at-github
@ash-at-github
@jperata yes, pretty much what we did, but we would like those tests to run on the prod environment too, and we aren't able to
Juan Perata
@jperata
For production, our e2e tests might be more appropriate, since those use real Alexa. I believe that if you disable signature checks, the app won't be approved for certification. I don't think there's any other way, since our filters can only make modifications to the request and not to the headers.
ash-at-github
@ash-at-github
@jperata Ah I see..got it, thanks!
Colin MacAllister
@dcoli
@jperata, in our app we're sending data with the fulfillment response conversation in a property called "data." So what we need in the Virtual GA is to intercept and set conversation tokens and persist them between calls to the assistant.
Juan Perata
@jperata
Hi @dcoli , I will create a feature request with that detail on the virtual google assistant project. For that, it would be very helpful if you could send us a JSON request example with the fields you are setting up, and a response received with that data persisted. That way we can ensure that we are replicating your problem.
Colin MacAllister
@dcoli
@jperata, I have an example of a typical request and a successful response, obtained by posting the request via curl (Postman). Do you have an email where I can send it? We'd like to keep it private.
Juan Perata
@jperata
Hi @dcoli , I saw your DM with the attachment, I will create the issue and keep you posted.
Colin MacAllister
@dcoli
Thanks!
ash-at-github
@ash-at-github
We upgraded to the latest bespoken-tools version and we are seeing some html issues in the report. It is not rendered correctly: the icon before every command shows up as a broken image with 32*32 written below it. It's not a major functional roadblock, but the report has become unshareable with others, as the repeated broken icon image makes it jarring to look at. Is there any known workaround for this?
Juan Perata
@jperata
Could you share a screenshot of this with us? Dragging it onto the window works to upload files
ash-at-github
@ash-at-github
Screenshot 2019-04-15 15.04.12.png