Docstrings in open source Python

Dmitry Berdov gensim, Open Source, Student Incubator

Hi everyone, my name is Dmitry Berdov. I'm a graduate student at Ural Federal University, currently working in QA test automation. I had no experience with writing documentation before joining the RARE Incubator, where my task has been to refactor and improve the poor state of the Gensim docs. Now, after several months of shooting myself hard in the ...

RRP #4: Leonid Boytsov on kNN search and information retrieval

Radim Řehůřek podcast

Episode Summary: Leo Boytsov, a PhD researcher at the Language Technologies Institute of Carnegie Mellon University, talks about fast approximate search in modern information retrieval. We discuss the curse of dimensionality, hard-to-beat baselines and NMSLIB, Leo's super-fast library for nearest-neighbour search. How does NMSLIB compare to Facebook's FAISS and Spotify's Annoy? Warning: very technical. Links & resources: NMSLIB: Leonid's ...
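For readers who want to try the library Leo describes, here is a minimal sketch of approximate nearest-neighbour search with NMSLIB's Python bindings. The random data and the HNSW index parameters are purely illustrative assumptions, not taken from the episode:

    import numpy as np
    import nmslib

    # 1,000 random 64-dimensional vectors standing in for real embeddings
    data = np.random.rand(1000, 64).astype(np.float32)

    # Build an HNSW index over cosine similarity
    index = nmslib.init(method='hnsw', space='cosinesimil')
    index.addDataPointBatch(data)
    index.createIndex({'M': 16, 'efConstruction': 100}, print_progress=False)

    # Query: the 5 approximate nearest neighbours of the first vector
    ids, distances = index.knnQuery(data[0], k=5)
    print(ids, distances)

FAISS and Annoy expose comparable build-then-query workflows; the podcast discusses how the three libraries trade off speed, recall and ease of use.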

Machine learning mega-benchmark: GPU providers (part 2)

Shiva Manne Deep Learning, Machine Learning, Open Source 14 Comments

We recently published a large-scale machine learning benchmark using word2vec, comparing several popular hardware providers and ML frameworks on pragmatic aspects such as cost, ease of use, stability, scalability and performance. Since that benchmark only looked at CPUs, we also ran an analogous ML benchmark focused on GPUs.
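As a rough illustration of the kind of workload being benchmarked (not the actual benchmark code), here is a hedged sketch of training word2vec with gensim; the toy corpus and gensim 4.x parameter names (vector_size, epochs) are assumptions on my part:

    from gensim.models import Word2Vec

    # Tiny toy corpus; the benchmark itself trained on much larger text
    sentences = [
        ["training", "word2vec", "on", "gpu", "and", "cpu"],
        ["benchmarking", "cost", "and", "performance", "of", "providers"],
    ]

    # vector_size/epochs are gensim 4.x names (earlier versions used size/iter)
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1,
                     workers=4, epochs=5)

    print(model.wv.most_similar("gpu", topn=3))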

Counting Efficiently with Bounter pt. 2: CountMinSketch

Filip Štefaňák Machine Learning, Open Source 2 Comments

In my previous post on the new open source Python Bounter library, we discussed how to use its HashTable to quickly count approximate item frequencies in very large item sequences. Now we turn our attention to the second algorithm in Bounter, CountMinSketch (CMS), which is also optimized in C for top performance.
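A minimal sketch of the two counters side by side, assuming the bounter package is installed; the memory sizes and keys are just placeholders:

    from bounter import bounter, CountMinSketch

    # HashTable-backed counter from the previous post:
    # approximate counts within a fixed memory budget
    counts = bounter(size_mb=64)
    counts.update([u'a', u'few', u'words', u'a', u'few', u'times'])
    print(counts[u'few'])   # 2

    # CountMinSketch: even smaller footprint; counts can only be
    # over-estimated, never under-estimated
    cms = CountMinSketch(size_mb=16)
    cms.update([u'a', u'few', u'words', u'a', u'few', u'times'])
    print(cms[u'few'])      # 2, or a slight over-estimate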

Implementing Poincaré Embeddings

Jayant Jain gensim, Open Source 10 Comments

I have been working on implementing a model called Poincaré embeddings over the last month or so. The model comes from an interesting paper by Facebook AI Research, Poincaré Embeddings for Learning Hierarchical Representations [1]. This post describes the model at a relatively high level and details the technical challenges faced while implementing it.
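The implementation that came out of this work is available in gensim as PoincareModel. Here is a minimal usage sketch on a toy (child, parent) hierarchy, assuming a gensim version that ships the poincare module; it is not the full training setup from the post:

    from gensim.models.poincare import PoincareModel

    # Toy hierarchy as (child, parent) relations
    relations = [
        ('kangaroo', 'marsupial'),
        ('marsupial', 'mammal'),
        ('cat', 'mammal'),
        ('mammal', 'animal'),
    ]

    # 2-dimensional embeddings in the Poincaré ball,
    # with a few negative samples per positive pair
    model = PoincareModel(relations, size=2, negative=2)
    model.train(epochs=50)

    print(model.kv['mammal'])                        # learned 2-d vector
    print(model.kv.most_similar('mammal', topn=2))   # nearest nodes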