Juan Perata
@jperata
Hi @siwaniagrawal , you can set up your Google Cloud function in the configuration section of the YAML file. Here is our documentation on the handler setup:
https://read.bespoken.io/unit-testing/guide-google/#google-cloud-function-configuration
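For reference, the YAML configuration section Juan mentions typically sits at the top of a bespoken test file. A minimal sketch, with key names that are illustrative and should be checked against the linked guide:

```yaml
# Illustrative sketch only - verify key names against the guide linked above.
configuration:
  locales: en-US
  platform: google       # target the Google Assistant platform
  handler: index.js      # entry point of your Cloud Function code
```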
ash-at-github
@ash-at-github
We are seeing this error: No match for dialog name: <Intent Name>, even after specifying the intent name instead of an utterance.
We have verified that the model file contains that intent. Interestingly, we see the error even when we use an utterance corresponding to <Intent Name>, so it is doing some mapping of that utterance to the intent.
Juan Perata
@jperata
Hi @ash-at-github , I believe you were using an external endpoint instead of a lambda. Do you have the model route set up to point to your latest model in the testing.json file or the Virtual Alexa instance? That's what we use to generate the request and interpret the utterances.
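A minimal sketch of the kind of model route Juan describes, in the configuration section of a test file. The interactionModel key name and the paths are assumptions; check bespoken's unit-testing docs:

```yaml
# Sketch only - key names and paths are assumptions.
configuration:
  handler: index.handler
  interactionModel: models/en-US.json   # point this at your latest model
```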
ash-at-github
@ash-at-github
Yes @jperata , we have included the model and it's interpreting other intents correctly.
And as I mentioned, this is happening irrespective of whether we supply the utterance or the intent name.
The utterance is getting mapped to the intent shown in the error message,
so some mapping is happening somewhere
ash-at-github
@ash-at-github
Detailed error log: No match for dialog name: TEXT_INTENT
Error: No match for dialog name: TEXT_INTENT
at DialogManager.Object.<anonymous>.DialogManager.handleDirective (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/dialog/DialogManager.js:31:31)
at RemoteSkillInteractor.<anonymous> (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:84:63)
at step (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:32:23)
at Object.next (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:13:53)
at fulfilled (/usr/local/Cellar/node/8.9.1/lib/node_modules/bespoken-tools/node_modules/skill-testing-ml/node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:4:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
Timestamp:
2019-03-28T12:20:25.960
Juan Perata
@jperata
Could you share your interaction model with us? You can DM it to me if you don't want to share it publicly.
ash-at-github
@ash-at-github
Ok, let me ping you separately about this issue
Harshit Agrawal
@agharshit08
I want to add bespoken to my Google assistant action for testing. How should I proceed?
Juan Perata
@jperata
Hi @agharshit08 , I answered you in the bst channel.
Colin MacAllister
@dcoli
Hello. I'm working with your virtual-google-assistant node package to test my media actions. In one test I'm launching the service by awaiting a welcome utterance -- not using the .launch() method -- and then awaiting a 'next' utterance, which should move to the next story in my setlist. The welcome works fine, but the next intent returns a "sorry, I can't help you with that." I'm new at this implementation, so I might be missing something, but these commands all work outside of mocha.
Colin MacAllister
@dcoli
We haven't currently implemented the Default Welcome Intent, so .launch() doesn't work. Hoping I can sidestep that with a simple utterance of our welcome phrase
Juan Perata
@jperata
Hi @dcoli , you are right to use a simple utterance for the welcome if you have a different one. If we are returning a "sorry, I can't help you with that", that means we are matching a different intent (if we couldn't find any, we would have returned an error). It might be the case that more than one intent matches that utterance invocation; could you try using the "intend" method instead of the utter one?
Colin MacAllister
@dcoli
Thanks, Juan. I've tried follow-up utterances as well as intends.
One thing I notice is that the query sent with the request to the vga is "GOOGLE_ASSISTANT_WELCOME"
but I've already passed that stage
so it may be there's something I'm not setting that's defaulting to the welcome intent
Colin MacAllister
@dcoli
(@jperata )
Juan Perata
@jperata
@dcoli the "GOOGLE_ASSISTANT_WELCOME" is part of our base request, which we modify depending on the utter or intend used; it's not used for the conversation. Could you see what is inside queryResult.intent.name and validate it against your intents?
Colin MacAllister
@dcoli
@jperata For now I'm adding a filter to change the responseQuery to something we have implemented
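A filter along the lines Colin describes might look like the following sketch. The request shape (queryResult.queryText) and the replacement utterance "next" are assumptions; adjust them to the requests your fulfillment actually receives.

```javascript
// Sketch of a request filter: swap the default GOOGLE_ASSISTANT_WELCOME
// query for an utterance the fulfillment actually implements, so the
// right intent is matched. Property names here are assumptions.
function rewriteWelcomeQuery(requestJSON) {
    if (requestJSON.queryResult &&
        requestJSON.queryResult.queryText === "GOOGLE_ASSISTANT_WELCOME") {
        requestJSON.queryResult.queryText = "next";
    }
    return requestJSON;
}

// Registration would look something like:
// assistant.addFilter(rewriteWelcomeQuery);
```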
Juan Perata
@jperata
Glad that you could find something that works for you.
Colin MacAllister
@dcoli
Thanks. It looks like data coming back from the second utterance is missing stuff we expected. In our case, the list of audio files to play. I'm going to just insert these into the response object in my tests, but it would be great to figure this out so we're truly testing
ash-at-github
@ash-at-github
Hi, since we use virtual-alexa with a webservice, we had to use SkillURL and disable the signature check. But we are concerned that we are making our service insecure by doing that. Is there any other way to avoid disabling the signature check?
Juan Perata
@jperata
Hi @ash-at-github , since the signature check validates that a request comes from Alexa and we are using an emulator, Virtual Alexa won't be able to generate the proper headers if you enable it. We can only suggest keeping two different environments: a production one that still uses the signature checks, and a local one for tests.
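The two-environment split can be sketched as a simple guard in the skill service itself. The environment variable name and the verification call are assumptions about your own service, not part of bespoken:

```javascript
// Sketch: keep Alexa request-signature verification on in production and
// skip it only in the dedicated local test environment that Virtual Alexa
// drives. The "test" environment name is an assumption.
function shouldVerifySignature(env) {
    return env !== "test";
}

// e.g. in the request handler (verifyAlexaSignature is hypothetical):
// if (shouldVerifySignature(process.env.NODE_ENV)) verifyAlexaSignature(req);
```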
ash-at-github
@ash-at-github
@jperata yes, pretty much what we did, but we would like those tests to run on the prod environment too, and we aren't able to.
Juan Perata
@jperata
For production, our e2e tests might be more appropriate since those use the real Alexa. I believe that if you disable signature checks the app won't be approved for certification. I don't think there's any other way, since our filters can only make modifications to the request, not to the headers.
ash-at-github
@ash-at-github
@jperata Ah I see..got it, thanks!
Colin MacAllister
@dcoli
@jperata, in our app we're sending data with the fulfillment response conversation in a property called "data." So what we need in the Virtual GA is to intercept and set conversation tokens and persist them between calls to the assistant.
Juan Perata
@jperata
Hi @dcoli , I will create a feature request with that detail on the virtual google assistant project. It would be very helpful if you could send us a JSON request example with the fields you are setting up, and a response received with that data persisted. That way we can ensure that we are replicating your problem.
Colin MacAllister
@dcoli
@jperata, I have an example of a typical request and a successful response, obtained by posting the request via curl (Postman). Do you have an email where I can send it? We'd like to keep it private.
Juan Perata
@jperata
Hi @dcoli , I saw your DM with the attachment, I will create the issue and keep you posted.
Colin MacAllister
@dcoli
Thanks!
ash-at-github
@ash-at-github
We upgraded to the latest bespoken-tools version and we are seeing some HTML issues in the report: it is not rendered correctly. The icon before every command shows up as a broken image with 32*32 written below it. It's not a major functional roadblock, but the report has become unsharable with others since the repeated broken icon image makes it jarring to look at. Is there any known workaround for this?
Juan Perata
@jperata
Could you share a screenshot of this with us? Dragging a file onto the window uploads it.
ash-at-github
@ash-at-github
Screenshot 2019-04-15 15.04.12.png
OK, I just uploaded it; not sure if it's visible to you.
Juan Perata
@jperata
Yes, it is. I will investigate what is causing it.
ash-at-github
@ash-at-github
Thank you @jperata
ash-at-github
@ash-at-github
@jperata Had another related question: when we send the HTML file that's generated in "test_output/report" via email, the HTML is not rendered properly at all. Guessing that's because its styling is in separate CSS. What can we do if we want to email the HTML report so that it's rendered correctly on the client's machine?
ash-at-github
@ash-at-github
Tried embedding the HTML in the email body via the Jenkins editable email notification plugin; however, the email content is mainly JSON, even after setting the content type to HTML as mentioned in the docs @jperata
Juan Perata
@jperata

@ash-at-github regarding embedding the HTML: emails usually need to have the HTML inlined in order for them to render correctly. Using an HTML inliner like this one can help, and it also provides options to do this process programmatically.
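As a toy illustration of what an inliner does: most email clients ignore <style> blocks, so rules must be moved onto the elements themselves. Real tools (for example the npm package juice) handle full stylesheets; this sketch inlines a single class rule.

```javascript
// Toy sketch: replace class="..." attributes for one class with an inline
// style attribute. Real inliners parse the CSS and HTML properly.
function inlineClass(html, className, style) {
    return html.split(`class="${className}"`).join(`style="${style}"`);
}
```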

By the way, we were not able to reproduce the issue with the broken image that you reported. Could you zip the complete report folder and send it to us? You can do it via DM if you want to keep it private.

ash-at-github
@ash-at-github
Thanks @jperata , will take a look; will also DM you with more details.
ng-username
@ng-username
Hey Bespoken team. I've got a custom skill written in Java that I'm trying to write unit tests for, referring to http://docs.bespoken.io/en/latest/tutorials/tutorial_alexa_unit_testing/ for documentation. The documentation states "though this example is written in Javascript, the emulator can be used to test non-Javascript-based skills" but I cannot find any documentation regarding how to test non-js skills. Could you point me in the right direction?
Juan Perata
@jperata
Hi @danielvu95 , if you are using virtual-alexa as a library, the usual methods will apply: set it up in the console where you are running node, or in your IDE.
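For a non-JavaScript skill, one approach mentioned elsewhere in this thread is to run the skill as a local web service and point the tests at its URL. A sketch, where the skillURL key name, the URL, and the paths are assumptions to confirm against bespoken's docs:

```yaml
# Sketch only - key names, URL, and paths are assumptions.
configuration:
  skillURL: http://localhost:8080/alexa   # e.g. a Java skill exposed locally
  interactionModel: models/en-US.json
```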
danielvu95
@danielvu95
Thanks for the response, Juan.
danielvu95
@danielvu95
@jperata Can you help me out with the intend method?
I tried using utter for my custom intent but that didn't seem to work.
My custom intent catches all the commands issued by the user.
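If you are using the YAML test runner rather than the library, an interaction can name the intent directly instead of an utterance, which sidesteps utterance-to-intent mapping for a catch-all intent. A sketch, where the intent name and slot name are hypothetical and the slot syntax should be checked against bespoken's test-spec docs:

```yaml
# Sketch only - intent and slot names are hypothetical.
---
- test: invoke the catch-all intent directly
- MyCatchAllIntent phrase="play the next story": "*"
```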