Colin MacAllister
@dcoli
One thing I notice is that the query sent with the request to the VGA is "GOOGLE_ASSISTANT_WELCOME"
but I've already passed that stage
so it may be there's something I'm not setting that's defaulting to the welcome intent
Colin MacAllister
@dcoli
(@jperata )
Juan Perata
@jperata
@dcoli the "GOOGLE_ASSISTANT_WELCOME" is part of our base request that we modify depending on the utter or intend used, it's not used for the conversation. Could you see what is inside queryResult.intent.name and validate against your intents?
Colin MacAllister
@dcoli
@jperata For now I'm adding a filter to change the responseQuery to something we have implemented
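(A rough sketch of what that filter-based workaround might look like, assuming virtual-google-assistant exposes an addFilter hook that receives the outgoing Dialogflow request JSON; the method name, setup values, and the replacement query below are assumptions, not confirmed against the library.)

const vga = require("virtual-google-assistant");

// Hypothetical setup; the action URL and Dialogflow directory are placeholders
const assistant = vga.VirtualGoogleAssistant.Builder()
    .actionUrl("http://localhost:3000/fulfillment")
    .directory("./dialogflow")
    .create();

// Inspect the intent the emulator resolved (per Juan's suggestion) and
// override the query so it maps to an intent we have implemented
assistant.addFilter((request) => {
    console.log("Resolved intent:", request.queryResult.intent.name);
    request.queryResult.queryText = "play the next chapter";
});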
Juan Perata
@jperata
glad that you could find something that works for you
Colin MacAllister
@dcoli
Thanks. It looks like data coming back from the second utterance is missing stuff we expected. In our case, the list of audio files to play. I'm going to just insert these into the response object in my tests, but it would be great to figure this out so we're truly testing
ash-at-github
@ash-at-github
Hi, since we use virtual-alexa with a webservice, we had to use SkillURL and disable the signature check. But we are concerned that we are making our service insecure by doing that. Is there any other way to avoid disabling the signature check?
Juan Perata
@jperata
Hi @ash-at-github , the signature check validates that a request really comes from Alexa, and since Virtual Alexa is an emulator it won't be able to generate the proper headers if you enable it. We can only suggest keeping two different environments: a production one that still uses the signature checks, and a local one for tests.
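(If the webservice is Node-based, one way to follow that two-environment suggestion is to toggle signature verification by environment. A minimal sketch assuming ask-sdk-core and ask-sdk-express-adapter; adapt it if the service uses a different stack.)

const express = require("express");
const { SkillBuilders } = require("ask-sdk-core");
const { ExpressAdapter } = require("ask-sdk-express-adapter");

const skill = SkillBuilders.custom()
    // .addRequestHandlers(...) your handlers here
    .create();

// Keep signature/timestamp verification on in production, off only for tests
const verify = process.env.NODE_ENV !== "test";
const adapter = new ExpressAdapter(skill, verify, verify);

const app = express();
app.post("/", adapter.getRequestHandlers());
app.listen(3000);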
ash-at-github
@ash-at-github
@jperata yes, pretty much what we did, but we would like those tests to run on the prod environment too and we aren't able to
Juan Perata
@jperata
For production, our e2e tests might be more appropriate since those use real Alexa. I believe that if you disable signature checks the app won't be approved for certification. I don't think there's any other way, since our filters can only make modifications to the request and not to the headers.
ash-at-github
@ash-at-github
@jperata Ah I see..got it, thanks!
Colin MacAllister
@dcoli
@jperata, in our app we're sending data with the fulfillment response conversation in a property called "data." So what we need in the Virtual GA is to intercept and set conversation tokens and persist them between calls to the assistant.
Juan Perata
@jperata
Hi @dcoli , I will create a feature request with that detail on the virtual-google-assistant project. For that, it would be very helpful if you could send us a JSON request example with the fields you are setting up, and a response received with that data persisted. That way we can ensure that we are replicating your problem.
Colin MacAllister
@dcoli
@jperata, I have an example of a typical request and a successful response, obtained by posting the request via curl (Postman). Do you have an email where I can send it? We'd like to keep it private.
Juan Perata
@jperata
Hi @dcoli , I saw your DM with the attachment, I will create the issue and keep you posted.
Colin MacAllister
@dcoli
Thanks!
ash-at-github
@ash-at-github
We upgraded to the latest bespoken-tools version and we are seeing some HTML issues in the report: it is not rendered correctly. The icon before every command shows up as a broken image with 32*32 written below it. It's not a major functional roadblock, but the report has become unsharable with others because the repeated broken icon image makes it jarring to look at. Is there any known workaround for this?
Juan Perata
@jperata
could you share a screenshot of this with us? Dragging a file into the window works to upload it
ash-at-github
@ash-at-github
Screenshot 2019-04-15 15.04.12.png
ok, I just uploaded it, not sure if it's visible to you
Juan Perata
@jperata
yes, it is, I will validate what is causing it
ash-at-github
@ash-at-github
Thank you @jperata
ash-at-github
@ash-at-github
@jperata Had another related question: when we send the HTML file that's generated in "test_output/report" via email, the HTML is not rendered properly at all. Guessing that's because its styling is separated out into CSS. What can we do if we want to email the HTML report so that it's rendered correctly on the recipient's machine?
ash-at-github
@ash-at-github
Tried embedding the HTML in the email body via the Jenkins Editable Email Notification plugin; however, the email content is mainly JSON, even after setting the content type to HTML as mentioned in the documentation @jperata
Juan Perata
@jperata

@ash-at-github regarding embedding the HTML, emails usually need to have the HTML inlined in order for it to render correctly. Using an HTML inliner like this one can help, and it also provides options to do this process programmatically.
By the way, we were not able to reproduce the issue with the broken image that you reported. Could you zip the complete report folder and send it to us? You can do it via DM if you want to keep it private.
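(One concrete option for the programmatic route, since the inliner Juan links isn't visible in this log, is the juice package on npm. A minimal sketch, assuming the report's entry point is test_output/report/index.html.)

const fs = require("fs");
const juice = require("juice");

// juiceFile reads the report, resolves its linked stylesheets, and moves the
// rules into inline style attributes, which most email clients can render
juice.juiceFile("test_output/report/index.html", {}, (err, inlinedHtml) => {
    if (err) throw err;
    fs.writeFileSync("test_output/report/index.inline.html", inlinedHtml);
});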
ash-at-github
@ash-at-github
Thanks @jperata will take a look, also will DM you with more details
ng-username
@ng-username
Hey Bespoken team. I've got a custom skill written in Java that I'm trying to write unit tests for, referring to http://docs.bespoken.io/en/latest/tutorials/tutorial_alexa_unit_testing/ for documentation. The documentation states "though this example is written in Javascript, the emulator can be used to test non-Javascript-based skills" but I cannot find any documentation regarding how to test non-js skills. Could you point me in the right direction?
Juan Perata
@jperata
Hi @danielvu95 , if you are using virtual-alexa as a library the usual methods will apply, whether you set it up from the console where you are running node or in the IDE.
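(For a non-JavaScript skill such as Java, that usually means pointing virtual-alexa at the skill's HTTP endpoint rather than a local handler. A minimal sketch; the URL, model path, and application ID are placeholders, and signature checks need to be off on that endpoint, as discussed above.)

const va = require("virtual-alexa");

const alexa = va.VirtualAlexa.Builder()
    .skillURL("https://localhost:8080/alexa")        // the Java skill's endpoint
    .interactionModelFile("./models/en-US.json")     // exported interaction model
    .applicationID("amzn1.ask.skill.REPLACE-ME")
    .create();

// Inside an async test function
const reply = await alexa.launch();
console.log(reply.response.outputSpeech.ssml);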
danielvu95
@danielvu95
thanks for the response Juan
danielvu95
@danielvu95
@jperata Can you help me out with the intend method?
I tried using utter for my custom intent but that didn't seem to work
my custom intent catches all the commands issued from the user
when using utter, I get unable to match utterance
Juan Perata
@jperata
Hi @danielvu95 , we use utter to simplify the tests, but it is not real Alexa, so we only apply regular expressions to match what you have in your interaction model. If your custom intent catches everything it's probable that we won't be able to apply the correct intent.
To test with intend, you pass the intent name and the slot values as parameters. Here is an example line:
 const response = await virtualAlexa.intend("SlottedIntent", { SlotName: "Value" });
you can also use the request builder to provide what you want in the request and finally send it:
https://github.com/bespoken/virtual-alexa#using-the-request-builder-new
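(A rough sketch of the same call via the request builder from that README section; the handler and model paths are placeholders.)

const va = require("virtual-alexa");

const alexa = va.VirtualAlexa.Builder()
    .handler("index.handler")                        // or .skillURL(...) for a remote skill
    .interactionModelFile("./models/en-US.json")
    .create();

// Build up the request explicitly, then send it (inside an async test function)
const reply = await alexa.request()
    .intent("SlottedIntent")
    .slot("SlotName", "Value")
    .send();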
danielvu95
@danielvu95
Thanks Juan that worked!
danielvu95
@danielvu95
Does anyone know if there's a way to create the apiAccessToken that is normally generated in the context block of a skill request? Certain Alexa APIs, such as getting the timezone, require this as a header
John Kelvie
@jkelvie
Hi @danielvu95 - there is no way to create a valid token locally
We recommend using nock or other mocking tools to fake these requests - we already do this with the address API - an example is here
You can configure mocks for other services in a similar way:
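(A small sketch of what such a mock might look like, using the Alexa Settings API timezone endpoint as the example; the endpoint path and return value here are illustrative, so adjust them for whichever API the skill calls.)

const nock = require("nock");

// Intercept the timezone lookup so the skill's call succeeds without a real
// apiAccessToken; the Settings API returns the timezone as a JSON string
nock("https://api.amazonalexa.com")
    .get(/\/v2\/devices\/.+\/settings\/System\.timeZone/)
    .reply(200, JSON.stringify("America/New_York"));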
danielvu95
@danielvu95
Thanks for the info John, I'll take a look
ash-at-github
@ash-at-github
Hi, it looks like our virtual-alexa based scripts suddenly started failing with this error for every test: "Error: Invalid response: 400 Message: Timestamp: 2019-05-02T14:55:46.762". Nothing has been changed at our end, we also tried getting the latest version but that did not help either
Juan Perata
@jperata
Hi @ash-at-github , virtual-alexa only uses networking to connect to your webhook endpoint. If it's failing with that for every request, it's likely that your webhook is returning that error for some reason. It could be anything from a self-signed certificate (node won't accept requests to those unless you set an environment variable to bypass the security check) to having added the Alexa validation that ensures requests come from Alexa.
To get a better understanding, you can copy a request generated with virtual-alexa (you can use the filter functionality for that) and send it directly through Postman. It's likely you will get the same 400 error
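(A quick sketch of using the filter to capture that request JSON for replaying through Postman, assuming an already-configured virtual-alexa instance named alexa.)

// Log the exact request virtual-alexa sends to the webhook before it goes out
alexa.filter((requestJSON) => {
    console.log(JSON.stringify(requestJSON, null, 2));
});

await alexa.utter("hello");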
ash-at-github
@ash-at-github
@jperata it runs fine locally (where we use version 2.3.7 of bespoken-tools) but fails on the server (which uses version 2.1.22). Is that version no longer supported?
Juan Perata
@jperata
@ash-at-github I'm going to try out that version and see if I can reproduce the issue.
ash-at-github
@ash-at-github
ok, thanks @jperata