@/all Don't forget the Long Tail Language Challenge just launched!
You can get 1 week of free V100 GPU time to train an STT model for one of many languages. Check out the details here
```python
from coqpit import Coqpit
from dataclasses import dataclass
from typing import Optional

@dataclass
class BaseConfig(Coqpit):
    checkpoint_dir: Optional[str] = None

@dataclass
class AppConfig(BaseConfig):
    src_file: Optional[str] = None

print(AppConfig.new_from_dict(dict(
    checkpoint_dir="/bar",
    src_file="foo",
)))
```
reuben: does it make sense to do this? (creating Coqpit classes dynamically from generic classes so you don't need to create a separate static Coqpit)
Below I intend to create a Coqpit object from the BaseCharacters class dynamically, via to_coqpit:

```python
class MyModelConfig(Coqpit):
    field_1: int = 0
    field_2: str = ""
    characters: Coqpit = BaseCharacters().to_coqpit()
    ...
```

For now I've no idea how, but it looks useful.
and yeah, I have no idea how either
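One way the idea above could work, sketched with the standard library only (the `to_coqpit` helper is hypothetical, and `BaseCharacters` here is a stand-in plain class, not the real one): `dataclasses.make_dataclass` can build a dataclass type at runtime from an existing class's public attributes, and the same pattern could pass `bases=(Coqpit,)` to get a Coqpit subclass.

```python
# Hypothetical sketch: build a dataclass dynamically from a plain class's
# attributes, which is one way a to_coqpit()-style helper could work.
from dataclasses import make_dataclass, asdict

class BaseCharacters:
    # stand-in config-style class with simple defaults
    pad = "_"
    eos = "~"
    bos = "^"

def to_dynamic_dataclass(obj):
    """Create a dataclass type from an instance's public class attributes."""
    fields = [
        (name, type(value), value)
        for name, value in vars(type(obj)).items()
        if not name.startswith("_")
    ]
    return make_dataclass(type(obj).__name__ + "Config", fields)

CharactersConfig = to_dynamic_dataclass(BaseCharacters())
cfg = CharactersConfig()
print(asdict(cfg))  # {'pad': '_', 'eos': '~', 'bos': '^'}
```

To get a dynamic Coqpit instead of a plain dataclass, the same call would presumably take `bases=(Coqpit,)`, assuming Coqpit subclasses tolerate being created this way.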
Good morning 👋
I've created a Wiki for ALL OPEN VOICE Enthusiasts (STT, TTS, Voice Assistants, Paper Stuff).
Maybe we can share/collect our knowledge (lessons learned, best practices, ...) to make it publicly available.
Your feedback is highly appreciated - useful or useless?
Sparse is Enough in Scaling Transformers - https://arxiv.org/abs/2111.12763
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach. We address this problem by leveraging sparsity. We study sparse variants for all layers in the Transformer and propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly, the sparse layers are enough to obtain the same perplexity as the standard Transformer with the same number of parameters. We also integrate with prior sparsity approaches to attention and enable fast inference on long sequences even with limited memory. This results in performance competitive to the state-of-the-art on long text summarization.
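To give a flavor of the sparsity idea (this is a minimal illustrative sketch, not the paper's actual controller-based method): one simple form of activation sparsity keeps only the k largest feedforward activations per token, so only k rows of the output projection contribute.

```python
# Minimal sketch (NOT the paper's exact mechanism): top-k activation
# sparsity in a Transformer-style feedforward layer.
import numpy as np

def sparse_ff(x, W1, W2, k):
    h = np.maximum(x @ W1, 0.0)        # ReLU activations, shape (d_ff,)
    idx = np.argpartition(h, -k)[-k:]  # indices of the k largest activations
    mask = np.zeros_like(h)
    mask[idx] = 1.0
    return (h * mask) @ W2             # only k rows of W2 contribute

rng = np.random.default_rng(0)
d_model, d_ff, k = 4, 16, 2
x = rng.normal(size=d_model)
W1 = rng.normal(size=(d_model, d_ff))
W2 = rng.normal(size=(d_ff, d_model))
y = sparse_ff(x, W1, W2, k)
print(y.shape)  # (4,)
```

The paper's Scaling Transformers use a learned controller to pick the active units rather than a post-hoc top-k, but the compute saving comes from the same observation: most of the layer's output is determined by a few active units.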
No matching distribution found for tensorflow==1.15
Does anyone know how to solve this?
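One likely cause (an assumption, since the full pip output isn't shown): tensorflow 1.15 only ships wheels for Python 3.5-3.7, so on a newer interpreter pip reports "No matching distribution found". A sketch of one possible fix, assuming conda is available:

```shell
# TF 1.15 has no wheels for Python >= 3.8, so install it
# into a Python 3.7 environment instead.
conda create -n tf115 python=3.7
conda activate tf115
pip install tensorflow==1.15
```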
Just a reminder. Tomorrow we're gonna have our first 🐸TTS community meeting v0.5
We plan to answer live questions then jump to the link below in the order of votes. Feel free to post your questions starting from today.
We'll post the recording somewhere in case you miss the call
So let's see how it's gonna turn out 😄
👉 Meeting link
👉 Meeting time
Dec 2, 2021 Thursday 17:30 - 18:15 CET
(Add to your calendar from here)
👉 Ask or upvote questions here
Hello everyone, I am Seun and I am a frontend developer turned community manager.
It is awesome to join a fantastic community like this.
I think more awareness about this project needs to be made. I am excited to start making meaningful contributions to the community.