Bela Vizy
@OpenDog
thank you!
Diego Martín
@dmarvp
Hi @OpenDog , which version of the @assistant/conversation library are you using? It took me a while, but I can confirm that setting an id through the user.params property works correctly and preserves the given id through the simulator, my Google Nest, and a virtual device.
btw, user.params is where data is stored now; it seems that user.storage was deprecated
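A sketch of what that looks like with the @assistant/conversation Node library; the handler name and the uuid dependency are illustrative, not from this conversation:

    const { conversation } = require('@assistant/conversation');
    const { v4: uuidv4 } = require('uuid');

    const app = conversation();

    // 'welcome' is a hypothetical handler name for this sketch
    app.handle('welcome', (conv) => {
      // user.params persists across conversations, but only for VERIFIED users
      if (!conv.user.params.userId) {
        conv.user.params.userId = uuidv4();
      }
      conv.add('Hi! How can I help you?');
    });

    // export for Cloud Functions deployment
    exports.ActionsOnGoogleFulfillment = app;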
Bela Vizy
@OpenDog
Hi @dmarvp , we have our own SDK. Are we talking about the same thing? This is what we send out to Dialogflow, for instance:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "displayText": "Hi! How can I help you?",
              "ssml": "<speak>Hi! How can I help you?</speak>"
            }
          }
        ]
      },
      "userStorage": "{\"userId\": \"71cfb72b-c21e-6458-b443-c4816b626f06\"}",
      "noInputPrompts": [
        {
          "displayText": "How can I help you?",
          "ssml": "<speak>How can I help you?</speak>"
        }
      ]
    }
  }
}
Bela Vizy
@OpenDog
Oh, I see why. It's a guest user. We get this from your virtual tester:

    "payload": {
      "user": {
        "locale": "en-US",
        "userVerificationStatus": "GUEST"
      },
      "conversation": {

Can I be a verified user on the virtual tester?
Diego Martín
@dmarvp
Getting the VERIFIED status is something that comes from Google, not from us. I did struggle a bit with that at first, using the latest @assistant/conversation Node library; I didn't know you were using another SDK.
Anyway, there are a series of steps to get the VERIFIED status. First of all, you need to enable "personal results" on the virtual device from your Google Home app, as shown here: https://read.bespoken.io/end-to-end/setup/#enabling-personal-results-on-your-google-virtual-device
Diego Martín
@dmarvp
You also need to enable Web & App Activity, as explained here: https://support.google.com/googlenest/answer/7382500
It's also important to tick the box that says "Include Chrome history..."
All of this using the account that owns the Google virtual device. Please try that and let me know if it works for you.
Bela Vizy
@OpenDog
@dmarvp Thank you! It's working now. I admit I actually knew about this. I had this problem with "real" devices, but your virtual device always seemed like some voodoo to me :-) Yes, they are actual devices! My only excuse is that they are registered under my office Google account and my phone is set to control my home devices, so I never noticed them. Gracias again! You are awesome!
Diego Martín
@dmarvp
Glad that it's working now, @OpenDog . This has also been an exercise for me in learning more about how Google Actions work :)
Shashi Adhikari
@AdhikariShashi_twitter
Hi everyone. I would like to use Bespoken to test in Dialogflow.
Shashi Adhikari
@AdhikariShashi_twitter
Hi everyone, is there anyone who can help me with Bespoken?
Diego Martín
@dmarvp
Hi @AdhikariShashi_twitter , I believe we talked by email. Let me know if that helped :)
00aixxia00
@00aixxia00

I am working on an Alexa skill written in Python, using the ask-sdk for Python. The skill is running self-hosted on a Flask server listening on port 5000. Now I would like to integrate unit tests using the Bespoken framework.

My testing.json file looks like this:

{
    "locales": "de-DE",
    "interactionModel": "models/de-DE.json",
    "trace": true,
    "jest": {
        "silent": true
    },
    "skillURL": "http://localhost:5000"
}

I am locally running an http.server on port 9998 to get the report.html. The tests passed successfully, but the code coverage is not working correctly. It seems like the files used in this skill are not recognized at all; instead, it is looking up the files slides.js, debugger.js, and jquery.js.

What am I missing? I am thankful for any help!
John Kelvie
@jkelvie
Hi @00aixxia00 , the code coverage we provide only works when we are directly calling the code in-process. In this case, since your Python code is being run inside a server, you will need to set up something separate there to capture the code coverage data.
I'm not that familiar with Python, but something like this may do the trick:
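One possible approach, purely as a suggestion and assuming coverage.py (the entry point app.py is hypothetical): run the Flask server under coverage measurement, exercise it with the bst tests, then generate a report.

    pip install coverage

    # start the skill's Flask server under coverage measurement
    coverage run app.py

    # in another terminal, run the Bespoken tests against it
    bst test

    # after stopping the server, produce an HTML coverage report
    coverage html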
00aixxia00
@00aixxia00
Alright, thank you!
Tom V
@TomV
I set up Bespoken for an Alexa skill a while ago to do some unit testing, then shifted to just using the command-line ASK CLI to play back JSON files (less capable, but built in). Now I'm looking to upgrade my tests to use the Bespoken unit tests, but I'm running into a problem with my tests.
I am getting a failure with the message:
No match for dialog name: AdditionalConcern
How can I troubleshoot that?
Tom V
@TomV
The skill is working fine, and I manage all the dialogs in my code, not using the dialog management feature. I really just want to write tests for now that are similar to this:
-- User: "I say W",
-- Alexa: "asking X? ",
-- User: "answer is Y",
-- Alexa: "asking Z",
Can I do that with the Bespoken unit tests? I thought I could, but now I'm a bit stumped as to how to get it running like it used to.
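For reference, a sketch of how that exchange might look in Bespoken's YAML unit-test syntax; the utterances are placeholders from the pseudocode above:

    ---
    configuration:
      locale: en-US

    ---
    - test: Two-turn conversation
    - "I say W": "asking X?"
    - "answer is Y": "asking Z"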
John Kelvie
@jkelvie
Hi @TomV - that error comes from this point in our code:
We are looking for an intent that corresponds to the dialog under the dialog section of the interaction model, like this:
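Roughly like the following sketch of the dialog section of an Alexa interaction model (the intent name is taken from the error above):

    "dialog": {
      "intents": [
        {
          "name": "AdditionalConcern",
          "confirmationRequired": false,
          "prompts": {},
          "slots": []
        }
      ]
    }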
Tom V
@TomV
@jkelvie thanks for the links. I've looked at the code, and it seems the only way to make my stuff work would be to add dialogs to my interaction model. That's too bad; I've mostly been avoiding that, since I prefer much more dynamic responses than you can get from the Alexa dialog manager.
Also, much of my core code is multi-platform targeted, so relying on the Alexa dialog manager here creates another permutation to handle for the non-Alexa cases.
Tom V
@TomV

@jkelvie Thanks for pointing me in the right direction.

For anyone who hits the same issue: my workaround is to add a dummy "stub" in my interaction model for the intents that don't use Alexa's dialog manager, like this:
Context: my local interaction model, under en-US.json/interactionModel/dialog/intents/

        {
          "name": "AdditionalConcern",
          "confirmationRequired": false,
          "slots": []
        }
Tom V
@TomV

And more background for anyone curious: if you are issuing a Dialog.ElicitSlot directive, you can do that in your programmatic JSON response without defining a dialog delegation strategy in the interaction model. In order to create the dialog delegation strategy (or 'dialog model'), you need to specify at least one of the following:

  • configure required slots,
  • specify slot validation, or
  • specify intent confirmation

These are all very simple string prompts that will not have any context and can't be dynamic. In my code, I generate the elicit-slot prompt based on the context of the conversation (how many times have I asked, do I have an idea of possible values for this conversation, etc.).

But to work with Bespoken's Virtual Alexa, you will need to specify at least a stub dialog model for any intent that you wish to elicit. @jkelvie , can you confirm my understanding here, or clarify anything I got wrong? Thanks!
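For concreteness, a response carrying a Dialog.ElicitSlot directive looks roughly like this; the slot name and prompt text are hypothetical, while the intent name is from the stub above:

    {
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "Which concern would you like to discuss?"
        },
        "directives": [
          {
            "type": "Dialog.ElicitSlot",
            "slotToElicit": "concern",
            "updatedIntent": {
              "name": "AdditionalConcern",
              "confirmationStatus": "NONE",
              "slots": {
                "concern": {
                  "name": "concern",
                  "confirmationStatus": "NONE"
                }
              }
            }
          }
        ],
        "shouldEndSession": false
      }
    }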
Tom V
@TomV
Different topic: when I run
bst test --config test/bst-config/testing.json
the tests run, then the process stays open, apparently watching the files. But the tests never re-run for me, even after I've changed files. Is there a simple way, within the bst context, to automatically re-run the right tests after files change (like standard Jest)? (Running on macOS.)
Tom V
@TomV
(Looks like I asked something similar in Dec 2019; even after reading the response, I'm not able to get it working.)
Tom V
@TomV

@jperata - I'd love to show both the input utterance and the expected result phrase (or some of it) in the test results, e.g. (pseudocode):

 {
    input: "Hello Bespoken!",
    expected: "Hello, how are you doing?"
}

Right now, either from the command line or the HTML (jest-stare based) output, I only see the input phrase (or intent & slots) and not the expected response. It's like hearing one side of a conversation.

Can you help me find where to look for how the describe block gets specified? Is it possible for me to easily modify that to add a bit of the expected result for passing tests?

Diego Martín
@dmarvp
Hi @TomV , the bst test command is not meant to keep the process open. Are you on the latest version of bst?
Tom V
@TomV

@dmarvp BST: v2.4.65, Node: v12.14.1 is what I'm running. It might be the new VS Code (it can auto-attach to a Node process for debugging, but I have that disabled). I'll try in a standard terminal.

Should I upgrade to 2.4.72?

Diego Martín
@dmarvp
we have updated the latest tag of bst this morning, please try using version v2.4.74 and see if that helps
Tom V
@TomV
Yes, I am doing that now! Thanks. It does seem to work better in the standard terminal :-)
Tom V
@TomV

@dmarvp No luck; I still need to control-C at the end of the tests, even from a standard terminal window (vs. the VS Code terminal). It ends like this:

Test Suites: 2 passed, 2 total
Tests:       4 passed, 4 total
Snapshots:   0 total
Time:        26.993s, estimated 29s

Stuck there, and I need to control-C before I can run again.

Tom V
@TomV
It just occurred to me that it may be waiting for something on my side, like a database handle to close. My test is
bst test && echo All Done
and I never see the All Done msg.
Question: how would I do cleanup after the tests have run, to close a database connection for example? Is there a hook for that, or do I need to figure out the Jest tests?
Diego Martín
@dmarvp
You could use the filter property inside your testing.json file. It allows you to run custom code before and after each test or test suite: https://read.bespoken.io/end-to-end/guide/#filtering-during-test
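A sketch of how that can look, assuming the hook names from the Bespoken docs; the database module is illustrative:

    // filter.js, referenced from testing.json via "filter": "./filter.js"
    module.exports = {
        // runs once after every test in the suite has finished
        onTestSuiteEnd: async (testResults) => {
            // hypothetical module that holds the open database connection
            const db = require('./db');
            await db.close();
        }
    };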
Tom V
@TomV
@dmarvp Thanks! Sounds like exactly the right tool... and it was an RT(F)M answer ... I may have missed it since I'm working on unit tests.. Thanks for the link 👍
Tom V
@TomV

FYI to anyone who sees something like I did, where the tests are not closing the process after running:

If you are using a database, even for unit tests :-( ... make sure you are closing the connections to the db after some idle period.

For those using Redis, you can just call redis_client.unref() immediately after you create the Redis client object. That will ensure the connection closes after the commands run. You may want to put this behind a conditional context flag, since performance will be better in production if you are not aggressive about closing connections.

Test your configs; your mileage may vary :-)
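A minimal sketch of that Redis tip, with the environment flag as an illustrative stand-in for a context flag:

    const redis = require('redis');

    const client = redis.createClient();

    // unref() lets Node exit once the event loop has nothing else pending,
    // so the test process isn't held open by the idle Redis connection.
    if (process.env.NODE_ENV === 'test') {
        client.unref();
    }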

Tom V
@TomV
FYI: for anyone seeing the error could not read source map for ...bespoken-tools/bin/bst.js.map (and many more like it), this is a VS Code bug, not directly related to bespoken-proxy.
TL;DR: you can fix it by updating the launch.json file that kicks off the Bespoken proxy.
More info here: microsoft/vscode#102042 (and links from there).
John Kelvie
@jkelvie
Thanks for sharing this @TomV