    Rohit Kumar
    @aquatiko
    While fixing an issue I was told to add a test for the case. Can someone clarify what this means?
    Julian Gonggrijp
    @jgonggrijp
    Hi, I'm about to start creating w2v models from a large corpus (up to 50GB of text per model, English and Dutch). I'm told gensim may require large-ish amounts of memory. Assuming reading the text from disk can be done in small chunks at a time, could somebody give me a ballpark estimate of how much RAM I'll need to request for the VPS in order for gensim to be able to do its job at least somewhat smoothly? Thanks in advance!
    Stergiadis Manos
    @steremma
    One of gensim's most important properties is its ability to perform out-of-core computation, using generators instead of, say, lists. This means you might not even need to write the chunking logic yourself, and RAM is not a constraint, at least not in terms of gensim's ability to complete the task. That said, the more RAM you have, the faster the training will presumably go!
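    The out-of-core idea above can be sketched as a restartable iterable that gensim-style APIs accept in place of an in-memory list. This is a minimal sketch; the file path and the whitespace tokenization are placeholder assumptions, not gensim code:

```python
# Minimal sketch of a streaming corpus: Word2Vec and friends accept any
# restartable iterable of token lists, so the full corpus never has to
# fit in RAM. The path and tokenization here are illustrative only.
class StreamingCorpus:
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        # Re-opened on every pass, so multiple training epochs work.
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                yield line.lower().split()
```

    An instance can then be passed wherever a list of sentences is expected, e.g. `Word2Vec(sentences=StreamingCorpus("corpus.txt"))`; only one line is held in memory at a time.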
    Stergiadis Manos
    @steremma
    @aquatiko You would need to add a set of unit tests that prove your feature works (yields the expected output for a sample input). For example, if you wrote a tokenizer, something like assert tokenize("That's a sentence") == ["that", "a", "sentence"]. You can see examples in how most modules are currently tested.
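    As a concrete illustration of the kind of test meant here, a toy tokenizer plus a unittest case pinning down its expected output. The tokenizer and its rules are hypothetical, not gensim code:

```python
import unittest

def tokenize(text):
    # Toy rules for illustration: lowercase, split on apostrophes
    # and whitespace, drop one-character tokens.
    return [t for t in text.lower().replace("'", " ").split() if len(t) > 1]

class TestTokenize(unittest.TestCase):
    def test_simple_sentence(self):
        # Pin the expected output for a sample input.
        self.assertEqual(tokenize("That's a sentence"), ["that", "sentence"])

if __name__ == "__main__":
    unittest.main()
```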
    Julian Gonggrijp
    @jgonggrijp
    @steremma Thanks for answering my question. So in general, for a Debian machine running gensim, would you say that 4GB of RAM is an OK amount, or likely to be a bit on the tight side?
    Stergiadis Manos
    @steremma
    I'm far from an expert, but on Ubuntu I didn't have any trouble with 4GB (of course, another machine with 16GB was faster, but that one also had a better CPU, etc.). In theory the out-of-core feature guarantees completion regardless of RAM, but for timing estimates I would experiment myself: for example, estimate the complexity by running on 10, 100, and 1000 docs, then extrapolate to your real dataset. Or look at the literature for the given model, since gensim in most cases closely follows the papers mentioned in the docstrings.
    You can also check the callbacks package and pass some of them to your model for inspection purposes. For example, you can use a logging callback's on_epoch_end so that you can visualize the progress at runtime.
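    A logging callback along those lines might look like this sketch. It is duck-typed here so the snippet stands alone; in gensim it would subclass gensim.models.callbacks.CallbackAny2Vec and be passed to the model via the callbacks parameter:

```python
# Sketch of an epoch-logging callback; during training gensim calls
# on_epoch_begin / on_epoch_end with the model instance.
class EpochLogger:
    def __init__(self):
        self.epoch = 0

    def on_epoch_begin(self, model):
        print("epoch %d starting" % self.epoch)

    def on_epoch_end(self, model):
        print("epoch %d finished" % self.epoch)
        self.epoch += 1
```

    Usage would then be along the lines of Word2Vec(sentences, callbacks=[EpochLogger()]), with the corpus and other parameters assumed.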
    Julian Gonggrijp
    @jgonggrijp
    Thanks!
    Stergiadis Manos
    @steremma
    np :)
    Stergiadis Manos
    @steremma

    Hello everyone. I spent some time over the weekend playing with a news dataset I found on Kaggle and thought of showing Gensim's capabilities to the machine learning community. I was inspired by the fact that Kaggle recently partnered with SpaCy by promoting kernels that use that package, so I thought of showing an alternative. The kernel is only the result of 6-7 hours of Sunday work, but it achieves some interesting results with no more than 5 lines of Gensim code.

    You can take a look here: https://www.kaggle.com/steremma/news-exploration-using-gensim-and-sklearn

    PS: It's my first kaggle kernel ever so please go easy on me :D

    Radim Řehůřek
    @piskvorky
    Hi @steremma that's very interesting :) What's the advantage of "promoting kernels", how does that work?
    Stergiadis Manos
    @steremma
    Well, they added a custom tag called SpaCy, and kernels featuring the package with many votes would win some reward or something.
    Actually, I took my kernel private for now because I realized the topics produced are not reproducible unless I set the random state. I will fix it and publish tomorrow.
    As a result there was an explosion of kernels featuring SpaCy. However, I don't know how much those affect popularity.
    Stergiadis Manos
    @steremma

    Here is the new public link: https://www.kaggle.com/steremma/news-exploration-using-gensim-and-sklearn?scriptVersionId=6727890

    Looking forward to any feedback in case you have time to go through it @piskvorky

    Nabanita Dash
    @Naba7
    Hi!! I am new to gensim.
    I want to understand the code base. I need help!!
    Jeff Schneider
    @jeffrschneider
    Are there any good examples for the creation of a topic model on a specific taxonomy such as IAB or IPTC?
    mik8142
    @mik8142
    hi there! First of all, sorry for my poor English. I read about word2vec, and I think I found a mistake in movie-plots-by-genre, in the Jupyter notebook. I opened an issue and created a pull request on GitHub (RaRe-Technologies/movie-plots-by-genre#14). For now I am trying to understand whether it is a real mistake or my fault.
    Joseph Bullock
    @JosephPB

    Hi, I am trying to run dynamic topic modelling through the wrapper DtmModel. I am running this on a dataset of around 1.5M documents, each several sentences long. I'm getting the error:

    subprocess.CalledProcessError: Command '['/efs/data/jpb/Qatalog/dtm/dtm/main', '--ntopics=10', '--model=dtm', '--mode=fit', '--initialize_lda=true', '--corpus_prefix=/tmp/2beb46_train', '--outname=/tmp/2beb46_train_out', '--alpha=0.01', '--lda_max_em_iter=10', '--lda_sequence_min_iter=6', '--lda_sequence_max_iter=10', '--top_chain_var=0.005', '--rng_seed=0']' died with <Signals.SIGABRT: 6>.

    Any thoughts would be appreciated.

    By the way, I know it works on smaller datasets

    Syed Farhan
    @born-2learn
    Hi everyone, I am an engineering student from Bangalore, India looking forward to contributing to gensim as part of GSoC 2019. Please guide me with the steps to get started.
    Harshal Mittal
    @harshalmittal4
    Hi, I am a junior at IIT Roorkee, India, and would like to contribute to the Gensim project for GSoC 2019. May I get some beginner's guidance for the same? Some idea about this year's GSoC projects would also help. Thanks :)
    Julian Gonggrijp
    @jgonggrijp
    The first line of a w2v file contains two numbers in plaintext. What do these numbers mean? The number of unique tokens and the vector size?
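    For what it's worth, in the word2vec text format that header line is indeed the vocabulary size followed by the vector dimensionality. A small stdlib sketch for reading it:

```python
def read_w2v_header(path):
    # First line of a word2vec-format text file: "<vocab_size> <vector_size>".
    # Each subsequent line is "<token> <v1> <v2> ... <v_vector_size>".
    with open(path, encoding="utf-8") as f:
        vocab_size, vector_size = map(int, f.readline().split())
    return vocab_size, vector_size
```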
    Philippe Rivière
    @Fil
    hello everyone; what kind of clustering and visualization techniques do you usually apply to the embeddings you compute with gensim?
    I'm trying UMAP+HDBSCAN with various parameters
    Ahmed T. Hmmad
    @athammad
    Hi guys, any idea on how to bring the prototype vector generated from the HDP model function to the list of documents? I have seen many examples with LDA but none with HDP
    V.Prasanna kumar
    @VpkPrasanna
    Hey everyone. Can anyone help me with labelling topics using the Gensim module?
    Brendan Reed
    @breedy231
    hi all, does anyone have experience debugging a Windows install? I have MinGW in my path but gensim still can't find the C compiler. Using Win10, Python 3.7.
    phalexo
    @phalexo
    @piskvorky Normally documents are tagged with a single unique identifier when training a Doc2Vec model. That said, one can use multiple tags. If I use a single tag associated with multiple documents, a vector is generated for that multi-document tag. My current understanding is that a multi-document tag vector will end up somewhere in the center of documents' vectors (close to the center of mass so to speak). Is my understanding correct? Thanks.
    phalexo
    @phalexo
    @/all Has this channel died? If so, where is the community support now? Thanks.
    Brenner Haverlock
    @officialbrenner
    Ello
    Radim Řehůřek
    @piskvorky
    @officialbrenner why not install from the precompiled Python Wheel?
    other than that, I'm afraid we have little ability to debug Windows: none of us have Windows, or use that platform.
    @phalexo the primary support channel is the mailing list: https://radimrehurek.com/gensim/support.html . I personally don't check Gitter at all.
    lengockyquang
    @lengockyquang
    Hi everyone, I'm using gensim to create a word embedding model that includes additional linguistic features such as POS tags, lemmas, named entities, etc. Are there any options for implementing this idea?
    ggqshr
    @ggqshr
    Hi everyone, can somebody tell me why the gensim LDA model outputs a different topic distribution for the same sentence? Every time I run it the result is different, please?
    Joseph Bullock
    @JosephPB
    @ggqshr the model is randomly seeded every time before training, which means sentences can sometimes end up in different topics. However, each sentence has a probability of being assigned to each possible topic, so you might also be seeing that the probability isn't changing massively, but it is changing enough to alter the most likely topic.
    ggqshr
    @ggqshr
    @JosephPB it's very helpful! Thank you very much!
    ggqshr
    @ggqshr
    @JosephPB Hey, I want to ask another stupid question: how do I avoid the previous situation? I have increased the value of the passes parameter, but the same sentence still gets different results, and the results vary greatly. Can someone tell me how to avoid this and why it happens?
    Joseph Bullock
    @JosephPB

    Hi @ggqshr No problem. If you want to reproduce the same result each time, you can set random_state to an integer value. See the parameters on the gensim page: https://radimrehurek.com/gensim/models/ldamodel.html

    Hope this helps :)
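    The reproducibility point can be sketched with the stdlib: a stochastic procedure gives identical runs only when its RNG is seeded. The commented LdaModel call shows where random_state goes; the corpus, dictionary, and other values are placeholder assumptions:

```python
import random

# Same seed, same draws: this is what a fixed random_state buys you.
run_a = [random.Random(42).random() for _ in range(3)]
run_b = [random.Random(42).random() for _ in range(3)]
assert run_a == run_b

# With gensim's LdaModel (sketch, assuming corpus/dictionary exist):
# lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10,
#                random_state=42)
```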

    ggqshr
    @ggqshr
    @JosephPB Unfortunately, I have set random_state, but the results are still different each time. My situation is the same as on the following page, but the passes parameter does not work.
    Philippe Rivière
    @Fil
    hello, I'm using gensim to generate an LDA model of my documents. Then I export the vectors to MatrixMarket format and create a 2D embedding with UMAP in JavaScript. So far so good. Now I would like to do the UMAP transform in Python, but I can't find out how to "convert" the document vectors in the LDA topic space… It should be "obvious" in the sense that what I need is an n * m matrix, where n is the number of documents and m the number of topics.
    Philippe Rivière
    @Fil

    I'm blocked here:

    import numpy as np
    import umap

    transformed = lda[corpus_lda]
    X = np.array(transformed)  # each document is a variable-length list of (topic_id, prob) pairs
    embedding = umap.UMAP().fit_transform(X)

    the value of X is an array of lists instead of the numpy array expected by umap.

    Philippe Rivière
    @Fil
    I built the np.array by hand and it works
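    Building that array by hand can be sketched as padding each document's sparse (topic_id, probability) list out to a fixed-width row; gensim.matutils.corpus2dense does the same job (note it returns a topics-by-documents array, so transpose it before UMAP). A stdlib-only sketch:

```python
# Sketch of densifying gensim's sparse output: each document is a list
# of (topic_id, probability) pairs, and near-zero topics are omitted,
# so the topic count must be supplied explicitly.
def to_dense(transformed, num_topics):
    rows = []
    for doc in transformed:
        row = [0.0] * num_topics
        for topic_id, prob in doc:
            row[topic_id] = prob
        rows.append(row)
    return rows
```

    Then something like np.array(to_dense(lda[corpus_lda], lda.num_topics)) gives the n * m matrix that umap.UMAP().fit_transform expects.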
    Herli Menezes
    @herlimenezes
    Hi, is there any gensim module for the Portuguese language?
    Herli Menezes
    @herlimenezes
    More specifically: how do I manage diacritics in gensim?
    Lambda Developer
    @chetkhatri
    Hi All, is this channel active?
    Ajda
    @ajdapretnar
    @piskvorky Quick question. I know that LSI can return fewer than the requested number of topics (usually for short texts). I think LDA does that too. How about HDP? Could it ever return fewer than the requested number of topics (in my interpretation, that is the m_T property)?