Juan Perata
@jperata
Hi @danielvu95 , if you are using virtual-alexa as a library, the usual methods apply: set it up in the console where you are running node, or in the IDE.
danielvu95
@danielvu95
thanks for the response Juan
danielvu95
@danielvu95
@jperata Can you help me out with the intend method?
I tried using utter for my custom intent but that didn't seem to work
my custom intent catches all the commands issued from the user
when using utter, I get "unable to match utterance"
Juan Perata
@jperata
Hi @danielvu95 , we use utter to simplify the tests, but it is not real Alexa, so we only apply regular expressions to match what you have in your interaction model. If your custom intent catches everything it's probable that we won't be able to apply the correct intent.
In order to use intend to test, you pass the intent name and the slot values as parameters. Here is an example line:
 const response = await virtualAlexa.intend("SlottedIntent", { SlotName: "Value" });
you can also use the request builder, and provide what you want to the request and finally send it:
https://github.com/bespoken/virtual-alexa#using-the-request-builder-new
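To make the two options concrete, either way the skill ultimately receives a standard Alexa IntentRequest JSON. Here is a self-contained sketch of that payload shape; the buildIntentRequest helper is illustrative only, not part of the virtual-alexa API:

```javascript
// Illustrative sketch of the IntentRequest JSON that intend() or the
// request builder ultimately sends to your handler. This helper is
// hypothetical; virtual-alexa constructs the payload for you.
function buildIntentRequest(intentName, slots) {
  const slotEntries = {};
  for (const [name, value] of Object.entries(slots)) {
    slotEntries[name] = { name, value, confirmationStatus: "NONE" };
  }
  return {
    version: "1.0",
    request: {
      type: "IntentRequest",
      requestId: "amzn1.echo-api.request.fake-id", // placeholder
      timestamp: new Date().toISOString(),
      locale: "en-US",
      intent: {
        name: intentName,
        confirmationStatus: "NONE",
        slots: slotEntries,
      },
    },
  };
}

const req = buildIntentRequest("SlottedIntent", { SlotName: "Value" });
console.log(req.request.intent.name); // SlottedIntent
console.log(req.request.intent.slots.SlotName.value); // Value
```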
danielvu95
@danielvu95
Thanks Juan that worked!
danielvu95
@danielvu95
Does anyone know if there's a way to create the apiAccessToken that is normally generated in the context block of a skill request? Certain Alexa APIs, such as getting the time zone, require this as a header
John Kelvie
@jkelvie
Hi @danielvu95 - there is no way to create a valid token locally
We recommend using nock or other mocking tools to fake these requests - we already do this with the address API - an example is here
You can configure mocks for other services in a similar way:
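As an illustration of the same idea without nock, a test can inject a fake client in place of the real Settings (time zone) API call, since no valid apiAccessToken can be produced locally. Every name below (handleTimeZoneIntent, settingsClient, getTimeZone) is hypothetical, not a virtual-alexa or ASK SDK API:

```javascript
// Since a valid apiAccessToken cannot be generated locally, tests can
// substitute a fake client for the real Settings API call.
// All names here are hypothetical.
async function handleTimeZoneIntent(event, settingsClient) {
  const deviceId = event.context.System.device.deviceId;
  const timeZone = await settingsClient.getTimeZone(deviceId);
  return { outputSpeech: `Your time zone is ${timeZone}` };
}

// Fake client used in tests instead of calling the real API with a token.
const fakeSettingsClient = {
  getTimeZone: async () => "America/New_York",
};

const fakeEvent = {
  context: { System: { device: { deviceId: "test-device-id" } } },
};

handleTimeZoneIntent(fakeEvent, fakeSettingsClient).then((response) => {
  console.log(response.outputSpeech); // Your time zone is America/New_York
});
```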
danielvu95
@danielvu95
Thanks for the info John, I'll take a look
ash-at-github
@ash-at-github
Hi, it looks like our virtual-alexa based scripts suddenly started failing with this error for every test: "Error - Invalid response: 400 Message: Timestamp: 2019-05-02T14:55:46.762". Nothing has been changed at our end; we also tried getting the latest version but that did not help either
Juan Perata
@jperata
Hi @ash-at-github , virtual-alexa only uses networking to connect to your webhook endpoint. If it's failing like that for every request, it's likely that your webhook is returning that error for some reason. It could be anything from a self-signed certificate (node won't send requests to those unless you set an environment variable for security) to having added the Alexa request validation that ensures requests come from Alexa.
To get a better understanding, you can copy a request generated by virtual-alexa (you can use the filter functionality for that) and send it directly through Postman. It's likely you will get the same 400 error
ash-at-github
@ash-at-github
@jperata it runs fine locally (where we use version 2.3.7 of bespoken-tools) but only fails on the server (which uses version 2.1.22). Is that version no longer supported?
Juan Perata
@jperata
@ash-at-github I'm going to try out that version and see if I can reproduce the issue.
ash-at-github
@ash-at-github
ok, thanks @jperata
Meanwhile, how do we get the request that virtual-alexa sends? Is it via trace=true? You mentioned filter, but I could not find anything relevant to extracting the request using that: https://read.bespoken.io/end-to-end/guide/#filtering-during-test Note that we use skillURL instead of handler, as we use a Java webservice
ash-at-github
@ash-at-github
@jperata Have some updates. Tried with 2.3.7 on the server and it gave the same error there as well, so you need not look into the version issue. When it runs on the server, it fails with this error: "java.lang.SecurityException: Request with id amzn1.echo-external.request.ad2b1d6f-6158-47a6-bbfc-e361bb216815 and timestamp 1556915707000 failed timestamp validation with a delta of 35889 at com.amazon.ask.servlet.verifiers.SkillRequestTimestampVerifier.verify(SkillRequestTimestampVerifier.java:79) ~[ask-sdk-servlet-support-2.9.2.jar!/:?]" Basically, timestamp validations are failing. Any idea about this?
ash-at-github
@ash-at-github
Even if we set the timestamp validation to the max value (150 seconds) per the documentation here: https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-a-web-service.html#timestamp, it looks like it will still fail? Assuming that the delta is 358 seconds per the error message above?
Juan Perata
@jperata
This is the request validation that verifies the request is a proper request sent by Amazon. Our emulator cannot generate a request that passes that validation. In order to continue your tests,
you must either create an exception for requests that come from our test tools, or test against a webhook that doesn't have that validation enabled
ash-at-github
@ash-at-github
We set the timestamp validation value to the max and it worked. I just wonder why this issue would only happen on the server, when locally the tests run fine
Juan Perata
@jperata
Are you sure that the validation is enabled in both places? It doesn't make sense to have it enabled locally
ash-at-github
@ash-at-github
it's the same backend running in both places. Moreover, this validation is done by the Amazon SDK code and is not part of our code
xcobbler
@xcobbler
Hello, I have a question about virtual-alexa. How can I do an intent confirmation with this library on an intent that doesn't require confirmation (and hence doesn't have a dialog with the same name as the intent)? If I manually add a dialog to the model, the test framework does work, but in manual testing this breaks upsells.
Juan Perata
@jperata
Hi @xcobbler, could you share how your intent looks with and without the dialog in the model, so I can understand your issue a little better?
ash-at-github
@ash-at-github
We again started seeing "Invalid response: 400 Message:" for our virtual-alexa scripts executed via Jenkins. We saw this before and increased the timestamp validation limit to 150 seconds, the max value suggested by Amazon (https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-a-web-service.html#timestamp), which resolved the issue, but it has started happening again. Any suggestions?
Juan Perata
@jperata
Hi @ash-at-github , have you had any recent changes in your Jenkins environment or development server? A likely culprit, if it was working before, is that the timestamp check fails due to the development server and Jenkins being on different machines with their clocks set to different times.
ash-at-github
@ash-at-github
no changes as far as we are aware. Is there any way to debug this on the virtual-alexa end?
Juan Perata
@jperata
If you are using virtual-alexa directly as a JavaScript library, you can use the filter property to log the requests and verify the timestamp.
If you are using virtual-alexa through YML tests, you can enable the "trace" property and it will print out the complete requests and responses for each interaction.
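For the YML case, the option goes in the configuration block, roughly like this (a sketch; check the Bespoken testing docs for the exact schema):

```yml
configuration:
  locale: en-US
  trace: true
```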
ash-at-github
@ash-at-github
@jperata I have used the trace option before, but how will it help in debugging this issue? To get the timestamps? We are getting timestamps right now after the error, even without trace, and they seem OK. Anything else we should be using from trace?
Juan Perata
@jperata
Yes, to get the timestamp that is being sent inside the request, and validate it against the server time on your Jenkins server
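The check itself is simple arithmetic on the captured timestamp; a self-contained sketch (the 150-second tolerance matches the maximum Amazon permits, and the helper name is hypothetical):

```javascript
// Compare a captured request's timestamp against the local clock, the
// same comparison the ask-sdk timestamp verifier performs (in ms).
const TOLERANCE_MS = 150 * 1000; // maximum tolerance Amazon allows

function timestampDeltaMs(requestTimestamp, nowMs = Date.now()) {
  return Math.abs(nowMs - Date.parse(requestTimestamp));
}

// Example: a request stamped 36 seconds in the past passes validation.
const now = Date.now();
const delta = timestampDeltaMs(new Date(now - 36000).toISOString(), now);
console.log(delta, delta <= TOLERANCE_MS); // 36000 true
```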
Adam Elmore
@adamelmore
Hi Bespoken Team! Is it possible to use the virtual-alexa library to test interaction models only (for skills that don't use lambdas, and instead rely on external services)? I'm hoping to write unit tests to test the interaction model only; basically a bunch of "utter" tests where I'm asserting on intents and slots. Make sense?
John Kelvie
@jkelvie
Hi @adamelmore - it's likely possible with a bit of tweaking. But a quick question - what are you trying to test? Is it primarily making sure the interaction model is configured correctly? Or are you trying to make sure that the speech recognition and NLU are working right?
Adam Elmore
@adamelmore
I'll be looking into e2e later (with bespoken) to test speech reco, but right now I'm looking to build tests that assert that given an utterance, a specific intent is resolved with specific slot values. Maybe this isn't advised given that there may be a delta between what virtual-alexa resolves and what the actual Alexa would resolve?
John Kelvie
@jkelvie
Yes, there will definitely be a delta - our resolution mechanism is simplistic, and is only meant as a convenience mechanism
Adam Elmore
@adamelmore
One hack I'm considering is adding a fake handler that just parrots back the request as the response. In that way, I could assert on response.intent, etc.
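That parrot-handler hack can be sketched like this (self-contained; parrotHandler and the echoedIntent field are hypothetical, not part of any SDK):

```javascript
// A "parrot" handler: no skill logic, it just echoes the resolved intent
// and slots back so a test can assert on what was matched.
const parrotHandler = async (event) => {
  const intent = event.request.intent || { name: "none", slots: {} };
  return {
    version: "1.0",
    response: {
      shouldEndSession: true,
      outputSpeech: { type: "PlainText", text: intent.name },
    },
    // Non-standard field carrying the echoed intent for test assertions.
    echoedIntent: intent,
  };
};

// Hypothetical usage with a hand-built IntentRequest:
const fakeIntentEvent = {
  request: {
    type: "IntentRequest",
    intent: {
      name: "PlayIntent",
      slots: { Title: { name: "Title", value: "jazz" } },
    },
  },
};

parrotHandler(fakeIntentEvent).then((res) => {
  console.log(res.echoedIntent.name); // PlayIntent
  console.log(res.echoedIntent.slots.Title.value); // jazz
});
```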
John Kelvie
@jkelvie
It would be useful for ensuring all phrases you think are associated with an intent are actually associated with it - but that's about it. It won't tell you anything about ASR or NLU performance
Adam Elmore
@adamelmore
Cool, maybe I should just rely on e2e tests to validate that my interaction model is good and hasn't regressed.
John Kelvie
@jkelvie
The e2e is for full regression testing of code and AI - we actually have a new product that is squarely focused on testing just interaction models, in a very complete way
Adam Elmore
@adamelmore
oh, can you point me to the interaction model product?
John Kelvie
@jkelvie
It's called Usability Performance Testing - our most recent case study on it is here: https://bespoken.io/blog/the-mars-agency-case-study/
Adam Elmore
@adamelmore
:tada: thanks a ton!
John Kelvie
@jkelvie
And a general overview is on our website: https://bespoken.io/usability-testing/
Our pleasure - and of course reach out if you have any additional questions