vector.shape. See the "numpy vector of a word" example at https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors
corpus2dense: it's true that NLP produces sparse matrices under the bag-of-words representation. But some later transformations, such as LSI, transform the sparse vectors into a dense space. In that case, using a dense representation actually saves you memory (less overhead than representing a dense matrix with sparse structures).
corpus2dense. Even scikit-learn didn't have sparse support for a while ;)
import gensim.downloader as api
from gensim.models import TfidfModel
from gensim.corpora import Dictionary
dataset = api.load("fake-news")
dct = Dictionary(dataset) # fit dictionary
corpus = [dct.doc2bow(line) for line in dataset] # convert corpus to BoW format
model = TfidfModel(corpus) # fit model
vector = model[corpus[0]] # apply model to the first corpus document
corpus2dense: I had it overflow the memory of a large machine on just 250,000 documents...
def read_dataset(fpath):
    with open(fpath, "r") as f:
        for line in f:
            # <sentence creation logic>
            yield sentence

model = FastText(min_count=1, size=50, window=5, workers=8, sg=1,
                 word_ngrams=1, min_n=3, max_n=6, iter=5, negative=0)
model.build_vocab(read_dataset(args.fpath))
model.train(read_dataset(args.fpath), total_examples=model.corpus_count,
            epochs=model.iter)
model.save("custom_model")
assert tokenize("That's a sentence") == ["that", "a", "sentence"]. You can see examples in how most modules are currently tested.
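A minimal tokenize that would satisfy such a test might look like this (a hypothetical helper, not Gensim's own tokenizer; the possessive-stripping rule is an assumption to match the expected output):

```python
import re

def tokenize(text):
    # Hypothetical sketch: drop possessive "'s", lowercase,
    # then keep contiguous alphabetic runs as tokens.
    return re.findall(r"[a-z]+", text.lower().replace("'s", ""))

print(tokenize("That's a sentence"))  # ['that', 'a', 'sentence']
```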
callbacks package and pass some of them to your model for inspection purposes. For example, you can use a logging callback with on_epoch_end so that you can visualize the progress at runtime.
Hello everyone. I spent some time over the weekend playing with a news dataset I found on Kaggle, and thought of showing Gensim's capabilities to the machine learning community. I was inspired by the fact that Kaggle recently partnered with SpaCy, promoting kernels that use that package, so I thought of showing an alternative. The kernel is only the result of 6-7 hours of Sunday work; however, it achieves some interesting results with no more than 5 lines of Gensim code.
You can take a look here: https://www.kaggle.com/steremma/news-exploration-using-gensim-and-sklearn
PS: It's my first kaggle kernel ever so please go easy on me :D