mbparis
@mbparis
Have people noticed situations where a skill will run well when deployed on a Lambda but give a payload.code: 'SKILL_ENDPOINT_ERROR' when running on bst proxy ?
Specifically, it looks like the attached screenshot: Screen Shot 2019-06-11 at 12.45.59 PM.png
mbparis
@mbparis
The code path that triggers this error doesn't behave this way on Lambda, and the error state occurs 9 times out of 10 when running with the proxy
mbparis
@mbparis
The response JSON I see in the terminal for bst proxy is the desired response, but instead of that being sent back to the dev console, the dev console receives a SkillDebugger Capture Error Directive, followed by a CaptureDebug Directive with a SessionEndedRequest
Juan Perata
@jperata
Hi @mbparis does the terminal show any error at all?
mbparis
@mbparis
None
The JSON looks how I would expect/want it to look
Juan Perata
@jperata
@mbparis Checking on Alexa issues, it seems a SKILL_ENDPOINT_ERROR can appear for self-signed servers. That shouldn't be the case with the proxy, since we verify the bespoken.link domain with an appropriate certificate. When you selected the proxy URL as the endpoint, did you set the next selector to "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority"? That might be the issue, since that error usually refers to security around the endpoint.
mbparis
@mbparis
It is set to that. What I don't understand, if that is the case, is why the skill would launch and I would be able to interact with it, but then it would fail after a couple of turns of interaction
Juan Perata
@jperata
You are correct; if that were the case, it would fail right away. The only other scenario I can think of is if you are using something like progressive responses (https://developer.amazon.com/docs/custom-skills/send-the-user-a-progressive-response.html). I think our proxy in that case may show the responses correctly in the log, but only send forward the first one.
mbparis
@mbparis
We are not using that feature.
Thanks for the help anyway.
mbparis
@mbparis
It seems this might be an Alexa Developer Console problem, and not a bespoken-tools problem
The error never seems to occur when testing with an Echo device, but reproduces very reliably in the Console
Juan Perata
@jperata
Hi @mbparis , if you bring forward this in an Alexa forum or similar, please copy the link here so that we can follow up too in case we need to do something ourselves.
333ashok333
@333ashok333
Guys,
I have installed the Bespoken tools on Windows 10 using the npm command.
After typing the 'bst' command, I get the error:
'bst' is not recognized as an internal or external command,
operable program or batch file.
Do I need to configure the path in an environment variable?
Even after doing that, it's not working.
Juan Perata
@jperata
Hi @333ashok333, did you do a global install? The command is npm install -g bespoken-tools. Also, after you install, it is better to open a new console so you ensure the PATH variable is there.
333ashok333
@333ashok333
Yes, I did a global install and used the above command, which is in the documentation.
I also checked by opening a new console, and typing 'bst' still didn't work.
@jperata
John Kelvie
@jkelvie
This is almost certainly due to it not being added correctly to the PATH for your environment @333ashok333
You need to identify where the global modules are installed, and then add it to your system environment variables
You can find where your node_modules are installed with this command: npm list bespoken-tools -g
The directory it prints out should then be added to your global PATH
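The steps above can be sketched as a shell session. The node_modules path below is a placeholder; substitute whatever directory `npm list bespoken-tools -g` prints on your machine (on Windows 10, you would add the same directory via System Properties → Environment Variables rather than `export`):

```shell
# Sketch, POSIX shell. NPM_GLOBAL_DIR is a placeholder --
# replace it with the directory printed by: npm list bespoken-tools -g
NPM_GLOBAL_DIR="/usr/local/lib/node_modules"

# Prepend the global bin directory so the `bst` executable resolves
PATH="$NPM_GLOBAL_DIR/.bin:$PATH"
export PATH

# Confirm the directory is now on the search path
case ":$PATH:" in
  *":$NPM_GLOBAL_DIR/.bin:"*) echo "PATH updated" ;;
esac
```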
333ashok333
@333ashok333
Thanks @jkelvie it worked..
gabbiano69
@gabbiano69
How do I auto-populate audio data in an Alexa audio skill? I have a podcast feed; how can I auto-populate the list of episodes, which grows each day, from the feed?
Juan Perata
@jperata
Hi @gabbiano69, I don't quite get what you need. If you are using the YML tests and need to change something in the request, you can use our request expressions.
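For reference, a request expression is a JSON path on the left and a value on the right, applied to the request before it is sent. A minimal sketch (the attribute path and values are hypothetical, following the syntax used elsewhere in this chat):

```yaml
---
- test: override part of the request before sending it
- LaunchRequest:
  # request expression: sets this path in the outgoing request JSON
  - request.session.attributes.visitCount: 3
  - prompt: "Welcome back"
```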
sattpat
@sattpat
Hi folks, I am using the awesome bespoken-tools to debug my Alexa skill. While this is great, the problem I am running into is that as I am debugging, sometimes the skill times out and the Alexa service sends the response "The requested skill did not provide a valid response". I was wondering if there is a solution for this.
John Kelvie
@jkelvie
Hi @sattpat, are you using any external services or doing any other operations with the skill that may be slow?
The combination of the additional network time to send requests to your laptop plus an external service may be too much
You might also check the speed of the network you are on - if there is anything to speed that up, that can help as well
Piero Dotti
@ProGM

Hi guys! I've a question about bespoken unit tests.
I'd like to test Dynamic Entities in Alexa. Is it possible to specify the id of a given slot, when defining a test?

Currently my test looks like this:

- I like vegetables:
  - intent: LikeIntent
    food: vegetables
  - response.outputSpeech.ssml: "/Great! you liked vegetables.*/"

I'd like to specify the slot id, other than the name. i.e.

- I like vegetables:
  - intent: LikeIntent
  # Something like this
    food:
      id: VEGETABLES
      value: vegetables
  - response.outputSpeech.ssml: "/Great! you liked vegetables.*/"
Edgar Cruzado
@ecruzado
Hi @ProGM you can write your test like this
- LikeIntent slot=slotValue: result
Piero Dotti
@ProGM
@ecruzado Thank you! the value you put into slotValue is... the value? or the ID? Is there a reference for the available options I can pass to slot=..?
Edgar Cruzado
@ecruzado
@ProGM it is the name; that is the only option you can pass
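Putting the two answers together, the earlier test could be written with the inline slot syntax. Only the slot name and value can be passed, not an id (a sketch based on the test shown above):

```yaml
---
- test: pass a slot by name and value (slot ids are not supported)
- LikeIntent food=vegetables:
  - response.outputSpeech.ssml: "/Great! you liked vegetables.*/"
```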
Siwani Agrawal
@siwaniagrawal
I am working on Actions on Google and writing test suites for different intents. Here is a sample. I'm using basic cards and display texts along with speech as the response. prompt helps to test the speech; is there a way I can test the basic card title and the display texts?
- test: land direct query test
- land_intent land_type=cropland land_region=india: # Both the parameters are provided
  - request.originalDetectIntentRequest.payload.user.userStorage: "{\"data\":{\"noPermission\":false,\"name\":{\"display\":\"Stream Co\",\"family\":\"Co\",\"given\":\"Stream\"},\"location\":{\"zipCode\":\"94043\",\"formattedAddress\":\"Googleplex 1600 Amphitheatre Parkway, Mountain View, California 94043\",\"city\":\"Mountain View\",\"coordinates\":{\"latitude\":37.4219806,\"longitude\":-122.0841979}}}}"
  - prompt: "What else would you like to know next?"
John Kelvie
@jkelvie
Hi @siwaniagrawal - you can access card data via the JSON payload, the same way you have tested the userStorage
We use JSON path for the tests, so just write out the card property - I believe it would be something like this for dialog flow:
- payload.google.richResponse.items[1].basicCard.title: My card title
I hope that makes sense - please let me know
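Combined with the earlier test, the assertion might sit alongside the prompt check like this (the title value is hypothetical):

```yaml
---
- test: land direct query test with a card assertion
- land_intent land_type=cropland land_region=india:
  - prompt: "What else would you like to know next?"
  # JSON path into the Dialogflow response payload
  - payload.google.richResponse.items[1].basicCard.title: "My card title"
```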
Siwani Agrawal
@siwaniagrawal
Yes, thanks it works fine now👍
SameerahLibrary
@SameerahLibrary
Hello, I have set the correct type and platform, similar to the sample testing.json file; however, executing the yml test file gives an output like this: valid type and platform must be defined either in the testing.json or the test file itself under the config element
John Kelvie
@jkelvie
Hi @SameerahLibrary - can you share the testing.json file with me? You can send it to me via direct message if you are concerned about privacy
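For reference, a minimal testing.json that defines both fields the error message asks for might look like this (a sketch assuming unit tests against an Alexa skill; the handler path is a placeholder):

```json
{
  "type": "unit",
  "platform": "alexa",
  "handler": "index.handler"
}
```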
whetstonesanj
@whetstonesanj_twitter
Hello, are there any examples of using SSML for utterances? Specifically, I'm trying to use a Polly voice to open a skill with an utterance like this: <speak><voice name="Conchita">open My Skill</voice></speak>, but the utility fails with a Cannot read property 'error' of null. If I take out the 'voice' tags so only the 'speak' tags are in place, it works fine.
Juan Perata
@jperata
Hi @whetstonesanj_twitter, our SSML in utterances supports the Polly-supported tags except voice; for the voice-id, you can set it as part of the test configuration
https://read.bespoken.io/end-to-end/getting-started/#overview
You can see in our sample that we indicate we want to use Joey; you can do the same to use the "Conchita" voiceId
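So a test configuration selecting Conchita might look like this (a sketch following the getting-started sample linked above; exact keys may vary by version):

```json
{
  "type": "e2e",
  "platform": "alexa",
  "voiceId": "Conchita"
}
```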
John Kelvie
@jkelvie
I would also note @whetstonesanj_twitter that the voice tag is part of the SSML configuration for Alexa TTS but not for Polly (which may seem odd since Alexa uses Polly, but so it is)
whetstonesanj
@whetstonesanj_twitter
@jkelvie @jperata - thanks guys! Will try poking around with that today - much appreciated!!! BTW, if we specify a voice in the configuration, since that doesn't involve SSML in the utterances, will that work on Google as well?
Juan Perata
@jperata
@whetstonesanj_twitter yes, the voice used for e2e testing comes from Polly for both Alexa and Google