    Bülent Özden
    @bozden:mozilla.org
    [m]
    Huh! I've got it working by playing with the execution order.
    I'm not sure if this helps with C though...
    Bülent Özden
    @bozden:mozilla.org
    [m]
    I also could run a couple of epochs without any problems... It introduced one more hassle, though: manually restarting the runtime, and thus not being able to use run-all...
    dexterp37
    @dexterp37:matrix.org
    [m]
    Ciaran (ccoreilly): we did improve on that after turning on optimization :) I found out that the demo is much slower on Firefox Nightly, but it's perfectly fine on Release
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    dexterp37: where is your demo? I could give it a try as well
    dexterp37: I guess you can assign yourself to https://bugzilla.mozilla.org/show_bug.cgi?id=1248897 and ship an extension for that :)
    3 replies
    dexterp37
    @dexterp37:matrix.org
    [m]
    But my attention got diverted to another thing :P I'm trying to do STT on WhatsApp Web audio messages via an extension
    dexterp37
    @dexterp37:matrix.org
    [m]
    gerard-majax: let me know what you think of the demo :)
    dexterp37
    @dexterp37:matrix.org
    [m]
    I tried it in today's Firefox Nightly and the performance regression is gone. It now takes the same time as Firefox Release
    1 reply
    Jason Barbee
    @jasonbarbee
    Question: I see an example websocket streaming service in the example repo, but the coqui Python service looks like it waits for all the audio before inferring through the engine. Is there a way to live-stream data, or is it required to have the full audio file?
    1 reply
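    For what it's worth, the published Coqui STT Python bindings do expose a streaming interface, so audio can be fed in chunks rather than as one complete file. A minimal sketch below; the chunk size, model path, and variable names are illustrative assumptions, not taken from the example repo:

    ```python
    # Hedged sketch: feed raw PCM to Coqui STT incrementally via the
    # streaming API (Model.createStream / feedAudioContent /
    # intermediateDecode / finishStream) instead of one full file.

    def chunk_pcm(pcm: bytes, chunk_bytes: int = 2048):
        """Yield fixed-size slices of a raw 16-bit mono PCM buffer."""
        for i in range(0, len(pcm), chunk_bytes):
            yield pcm[i:i + chunk_bytes]

    # Usage against a real model (needs `pip install stt numpy` and a model file):
    #
    #   import numpy as np
    #   import stt
    #
    #   model = stt.Model("model.tflite")
    #   stream = model.createStream()
    #   for chunk in chunk_pcm(pcm_buffer):
    #       stream.feedAudioContent(np.frombuffer(chunk, dtype=np.int16))
    #       print(stream.intermediateDecode())   # partial transcript so far
    #   print(stream.finishStream())             # final transcript
    ```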
    dexterp37
    @dexterp37:matrix.org
    [m]
    I'll definitely profile if that happens again. Will also try to mozregression it, but I'm currently lacking... bandwidth :D So can't promise it this time :°(
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    I have a lot of it; if you have a range to suggest I can have a look
    dexterp37
    @dexterp37:matrix.org
    [m]
    105.0a1 (2022-08-07) (64-bit) - 105.0a1 (2022-08-09) (64-bit), Windows 11
    Should only be 4-5 nightlies
    reuben
    @reuben_m:matrix.org
    [m]
    dexterp37: one thing I noticed in the wasm example is that it blocks the UI; for longer inferences this leads to Firefox showing a warning bar asking the user if they want to stop the script. Is it possible to run inference in a worker?
    1 reply
    dexterp37
    @dexterp37:matrix.org
    [m]
    reuben: it is one of the things we were planning on experimenting with
    Bernardo
    @bernardohenz
    Has anyone trained a model using an .ogg dataset?
    I think read_ogg_vorbis might have a memory leak during the training phase. Can someone shed some light on the pyogg code?
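    One generic way to test a suspected leak like this is to call the decoder repeatedly and watch heap growth with the stdlib `tracemalloc` module. A sketch below, with `decode` standing in for a call such as pyogg's `read_ogg_vorbis` (the function name and thresholds here are illustrative); note that `tracemalloc` only sees Python-level allocations, so a leak inside pyogg's C code would need process-RSS monitoring instead:

    ```python
    # Hedged sketch: detect whether repeated decode() calls keep growing
    # the Python heap. Only catches Python-level leaks, not C-level ones.
    import tracemalloc

    def grows_heap(decode, calls=50, per_call_tolerance=512):
        """Return True if repeated decode() calls keep growing the heap."""
        decode()                                  # warm-up allocations
        tracemalloc.start()
        before = tracemalloc.take_snapshot()
        for _ in range(calls):
            decode()
        after = tracemalloc.take_snapshot()
        tracemalloc.stop()
        growth = sum(s.size_diff for s in after.compare_to(before, "lineno"))
        return growth > per_call_tolerance * calls
    ```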
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    dexterp37: do you think we could hope to revive the speechrecognition bug?
    1 reply
    dexterp37: I always felt sad not being able to go further than the few All Hands demos I did
    maybe wasm might make it easier to land
    1 reply
    mariano_balto
    @mariano_balto:matrix.org
    [m]

    hello, I am trying to use the recently published artifact stt-1.4.0a5-cp39-cp39-linux_aarch64.whl. However, I am getting a Segmentation Fault error when trying to use the streaming interface.

    I have put together a sample project that can be run against any aarch64 workstation in order to easily replicate the error.

    Should I also log a ticket?

    Edit:
    The same error happens when running against a Mac with an M1 chip, but I'm not sure if we publish M1-specific wheels (in theory, it should be the same?).

    Thanks!
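    On the M1 question: one quick sanity check is whether the interpreter's reported machine actually matches the wheel's platform tag, since macOS on M1 reports `arm64` while Linux wheels are tagged `aarch64`, so a `linux_aarch64` wheel is not the right artifact for an M1 Mac even though the ISA family is the same. A naive, illustrative sketch (the filename parsing is an assumption, not packaging-spec-complete):

    ```python
    # Hedged sketch: check that platform.machine() fits a wheel's platform
    # tag before filing a ticket. Naive string handling for illustration.

    def wheel_matches_machine(wheel_name: str, machine: str) -> bool:
        """True if the wheel's platform tag ends with this machine name."""
        tag = wheel_name.rsplit("-", 1)[-1].removesuffix(".whl").lower()
        return tag.endswith(machine.lower())

    # Usage:  wheel_matches_machine(wheel_file, platform.machine())
    ```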

    Daniel Souza
    @dsouza95
    Hello, does anyone know if it is possible to run the GitHub Actions in a fork of the coqui repo?
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    @dsouza95: yes, it will run against your repo though
    Daniel Souza
    @dsouza95
    great, that would be very helpful
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    you need to enable GitHub Actions on your fork
    (and it means you'll have to wait a long time for the big tensorflow artifacts on the first builds)
    Daniel Souza
    @dsouza95
    I tried running on my fork, but I got some errors on the Mac|Build libstt+client (arm64) step
    maybe it is passing on the coqui repo because it is leveraging some cache and I am not?
    Error: Invalid formula: /Users/runner/arm-target/arm-brew/Library/Taps/homebrew/homebrew-core/Formula/mit-scheme.rb mit-scheme: undefined method `on_intel' for #<Resource:0x00007fa8b537fc98>
    gerard-majax
    @gerard-majax:mozilla.org
    [m]
    brew breaking randomly? that's really not surprising
    Daniel Souza
    @dsouza95
    it seems updating brew might fix that
    waiting for the action to run to confirm if that is the case
    Daniel Souza
    @dsouza95
    that did it, will open a PR :)
    Ciaran (ccoreilly)
    @ccoreilly:matrix.org
    [m]
    Sorry for not contributing the example yet, I am a bit busy lately :(
    1 reply
    stevenm15
    @stevenm15:mozilla.org
    [m]
    Hi everyone! I'm trying to use Coqui. I installed it with pip install stt, but when I run the code I get this error. Does anyone know how I could solve it? I read that the problem is due to AVX support
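    The prebuilt stt wheels are generally compiled assuming AVX, so on Linux it is worth confirming the CPU actually advertises the flag. A stdlib-only sketch; the helper name and parsing are illustrative, not part of the stt package:

    ```python
    # Hedged sketch: parse /proc/cpuinfo-style text (Linux) and check the
    # flags line for a given CPU feature such as "avx".

    def cpu_has_flag(flag: str, cpuinfo_text: str) -> bool:
        """Check the flags line from /proc/cpuinfo for a CPU feature."""
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return flag in line.split(":", 1)[1].split()
        return False

    # Usage on Linux:
    #   cpu_has_flag("avx", open("/proc/cpuinfo").read())
    ```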
    Bilal Haider
    @BilalMH
    Hello everyone - does anyone have suggestions to improve the accuracy and speed of the Coqui API on Android? I'm trying to use it for messaging and there's a second of delay + the accuracy can be quite low with the large_vocab.scorer
    3 replies
    Bilal Haider
    @BilalMH
    Also, are there any other places I can download .tflite models and .scorers for Android STT other than the Coqui model page?
    comodoro
    @comodorovo:matrix.org
    [m]
    That ^. Maybe increase the beam width (a tradeoff with speed)
    dayllon-bot
    @dayllon-bot
    Hi! I have a couple of questions: 1) is there any option to freeze layers during training using coqui_stt_training.train? 2) By default, dev loss is only evaluated after each epoch; is there any option to do that more often, like every X steps?
    3 replies
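    On question (2), the generic shape of the change is to interleave a dev evaluation every N training steps instead of once per epoch. A minimal sketch of that pattern below; all names are illustrative and this is not the coqui_stt_training API:

    ```python
    # Hedged sketch: train over one epoch, evaluating dev loss every
    # `eval_every` steps rather than only at the epoch boundary.

    def run_epoch(batches, train_step, eval_dev, eval_every):
        """Return (step, dev_loss) pairs collected every `eval_every` steps."""
        dev_losses = []
        for step, batch in enumerate(batches, start=1):
            train_step(batch)
            if step % eval_every == 0:
                dev_losses.append((step, eval_dev()))
        return dev_losses
    ```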
    josh 🐸
    @josh-coqui:matrix.org
    [m]
    we did it before
    took it out because it wasn't helpful, but maybe someone could figure out how to make it work :)
    would make training much faster!
    spectie
    @spectie:matrix.org
    [m]
    or lower learning rate
    josh 🐸
    @josh-coqui:matrix.org
    [m]
    or higher dropout 👍️
    0/0
    @0/0:matrix.org
    [m]
    hey there, is there any chance someone can look into coqui-ai/STT#1897 and see if they can perhaps help out with it? it’d be a huge help