Bin Wang

Hi team, I am new to Cython but really want to play with the internals of sklearn. I want to test out some of the cdef classes in the pyx files, but it looks like the methods are inaccessible from Python. Any thoughts?

For example:

from sklearn.tree import _utils
ph = _utils.PriorityHeap(100)

And I cannot find or call methods like pop and push.
What does the workflow usually look like if I want to play with the internals of sklearn within a Jupyter notebook?

Hello everyone. I'm really new to machine learning in general and I have been working with some sklearn regressors. I need some help :). My question is: how do I know if the RMSE I have is low enough for good predictions? What do I compare this RMSE to?
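One common way to ground that question (my own suggestion, not something answered in this thread): compare the model's RMSE against a naive baseline, such as always predicting the mean of the targets, which is what sklearn's DummyRegressor(strategy="mean") does. A minimal stdlib-only sketch with made-up numbers:

```python
import math

# Made-up targets and predictions, purely for illustration.
y_true = [3.0, 5.0, 2.5, 7.0, 4.5]
y_pred = [2.8, 5.3, 2.9, 6.4, 4.6]

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

model_rmse = rmse(y_true, y_pred)

# Naive baseline: always predict the mean of the targets.
mean_y = sum(y_true) / len(y_true)
baseline_rmse = rmse(y_true, [mean_y] * len(y_true))

# A model worth keeping should beat this baseline by a clear margin.
print(model_rmse, baseline_rmse)
```

There is no universal "good" RMSE; it is only meaningful relative to the scale of the target and to such baselines.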
I was able to create a model by curve fitting a set of data that has 5 variables using GaussianProcessRegressor. The problem is I am unable to export/load this model into an older version of Python (version 2.5.2). Is there a way to dump the equation/formula in mathematical terms, in relation to these 5 variables, so that I can use this prediction on the older Python? Thanks
Adrin Jalali
@enoch-sun We don't really support those Python versions anymore. You can try to figure it out with other model-persistence formats such as ONNX or PMML, but you'll be mostly on your own
Thomas J. Fan
@biwa7636 The PriorityHeap functions pop and push are cdef, which means they are not available in python.
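For context, an illustrative sketch (not sklearn's actual code; the class and method names here are made up): in a .pyx file, a cdef method compiles to a C-level function with no Python wrapper, while a cpdef method gets both, so only the latter is visible from a Python session.

```cython
# illustrative .pyx sketch -- Heap, _push_c and push are invented names
cdef class Heap:
    cdef list _data

    def __init__(self):
        self._data = []

    cdef void _push_c(self, object x):   # C-level only: invisible from Python
        self._data.append(x)

    cpdef push(self, object x):          # also gets a Python wrapper
        self._push_c(x)
```

From Python, Heap().push(x) would work while Heap()._push_c(x) would raise AttributeError, which matches the behaviour described above.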
Jesse Leigh Patsolic
Is there a scikit-learn preferred way to store a vector using Cython? I've seen libcpp.vector, array.array and numpy used in the code base. @NicolasHug @amueller
Nicolas Hug
The way we do it now is to allocate numpy arrays (in python or in cython), and then use a memory view for pure cython parts. You can take a look at how we do it in e.g. ensemble/_hist_gradient_boosting
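A small sketch of the pattern described above (the function names are mine, not from the codebase): allocate the numpy array while the GIL is held, then hand a typed memoryview to the nogil parts.

```cython
# illustrative Cython sketch of the allocate-then-view pattern
import numpy as np

cdef void _scale(double[::1] view, double factor) nogil:
    # typed memoryview access is safe without the GIL
    cdef Py_ssize_t i
    for i in range(view.shape[0]):
        view[i] *= factor

def scale(double factor):
    arr = np.zeros(10)            # allocation happens under the GIL
    cdef double[::1] view = arr   # the view shares the array's memory
    with nogil:
        _scale(view, factor)
    return arr
```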
Hi, does apply in df.apply(fun) iterate over each column in the 'df' data-frame and pass it to the 'fun' function as a Series?
Bin Wang
@thomasjpfan, you are right. However, I also tried to execute the above code using the %%cython magic, with from sklearn.tree cimport _utils, but it still did not work. Was it supposed to be like that?
# requires numpy headers
from sklearn.tree._utils cimport Stack
s = Stack(10)
>>> AttributeError: 'sklearn.tree._utils.Stack' object has no attribute 'top'
I find the source code so well written and fascinating, and I really want to be able to get the development environment up and running.
Bin Wang
Weird: the above code works if I replace s = Stack(10) with cdef Stack s = Stack(10). I believe this must have something to do with the static type declaration.
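That matches how Cython works, as far as I can tell: cdef attributes and methods are resolved at compile time, so the variable has to be statically typed for them to be reachable. A sketch (assuming the numpy headers and sklearn's .pxd files are available to the %%cython magic):

```cython
# %%cython cell sketch; requires numpy headers and sklearn's .pxd files
from sklearn.tree._utils cimport Stack

def demo():
    s_untyped = Stack(10)      # seen as a plain Python object:
                               # cdef attributes/methods stay invisible
    cdef Stack s = Stack(10)   # statically typed: cdef members are
                               # resolved at compile time
```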
Jesse Leigh Patsolic
Does anyone know why the base estimator for ExtraTreesClassifier is ExtraTreeClassifier, instead of DecisionTreeClassifier with splitter='random'? I am working on adding a new type of tree. @NicolasHug @amueller
Nicolas Hug
No idea. It doesn't make much sense for ExtraTreeClassifier to allow for a splitter that isn't 'random' IMO.
Would you want to submit a PR to deprecate the parameter?

Hi all, I'm getting the following error while executing python setup.py install:
error: Command "cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Users\Moti\Anaconda3\envs\motidevs\lib\site-packages\numpy\core\include /EHsc /Tpsklearn\svm\src\libsvm\libsvm_template.cpp /Fobuild\temp.win-amd64-3.7\sklearn\svm\src\libsvm\libsvm_template.obj" failed with exit status 127

Do you have any idea? Thanks!

Any scikit devs who can shed some light on why calibration_curve is only for binary estimators?
Anjali Singh
How can I start contributing to open source?
Adrin Jalali
@Anj-ali you can start by going through our contributing guides: https://scikit-learn.org/dev/developers/contributing.html#contributing
Anjali Singh
Thank you sir, I surely will do that
Olivier Grisel
Heads up: if you use conda and upgrade your env, you might get a crash when using n_jobs>=2. This is caused by an updated version of intel-openmp in the default channel of conda. I reported the issue upstream as ContinuumIO/anaconda-issues#11294 and the problem is tracked in this PR on the scikit-learn side: scikit-learn/scikit-learn#15020
The error message is OMP: Error #13: Assertion failure at z_Linux_util.cpp(2361) reported by the dying worker process.
Which in turn causes loky to raise: TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGABRT(-6)}.
Samesh Lakhotia
If someone is free to review, please take a look at scikit-learn/scikit-learn#14993 and scikit-learn/scikit-learn#15045.
Andreas Mueller
hm is there a pandas gitter? Or is @jorisvandenbossche around lol? For a pandas dtype, how do I get the closest numpy dtype to cast to?
Joris Van den Bossche
there is pandas gitter actually (pydata/pandas)
I don't think there is a typical way to do it
If I remember correctly, there is an issue about it
Basically, you would like to know the dtype of np.asarray(obj).dtype right? (but without needing to do the actual conversion?)
Andreas Mueller
it's for scikit-learn/scikit-learn#15094, which is currently failing because np.result_type(pd.CategoricalDtype) raises an error
Joris Van den Bossche
the issue that I remembered is pandas-dev/pandas#22791
Andreas Mueller
ok. so no solution :-/ is there a work-around?
like what does actually happen when you do the conversion?
is it from the pd.DataFrame.__array__ method or something?
Andreas Mueller
yeah it is, no way to figure that one out :-/
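A sketch of the workaround being discussed (my own suggestion, not something settled in the thread): since np.result_type does not understand pandas extension dtypes, probe the conversion on a tiny slice to see which numpy dtype np.asarray would produce for the whole column.

```python
import numpy as np
import pandas as pd

# A categorical column: np.result_type raises on this extension dtype.
s = pd.Series(["a", "b", "a"], dtype="category")

# Converting a one-element slice reveals the numpy dtype np.asarray
# would produce, without paying for the full conversion.
probe_dtype = np.asarray(s.iloc[:1]).dtype
print(probe_dtype)
```

For categoricals with string categories this yields the object dtype, i.e. the conversion still goes through Python objects.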
Jesse Leigh Patsolic

Hello all (I am new to Cython),

I am currently working on adding an augmented version of Breiman's forest-RC algorithm (similar to RandomForest) to my fork of scikit-learn. In short, the algorithm takes linear combinations of features, projecting them with weights randomly selected from {-1, 1} to form a new feature to split on. The number of features combined at each split is a random variable.

The current SplitRecord only holds one feature, I need something to store a vector of features and a vector to hold weights.

  1. I tried initializing an np.ndarray and using memoryviews, but ran into GIL issues.
  2. I tried to make an ObliqueSplitRecord class, but that can't be passed as a pointer into functions because it is a Python object.
  3. I tried to augment the SplitRecord struct in _splitter.pxd but that didn't seem to work because vectors would then be of fixed length.
  4. I tried to use something similar to the tree/_utils:Stack but fell into the same problem as it was a class and couldn't be passed as a pointer into a function.

I am looking into using cppclass, but am not sure if that will solve my problem.

Does anyone have suggestions on how best to implement this in a Cythonic way, i.e. storing a vector of things while avoiding the GIL and not using Python objects?
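For reference, the projection step described above can be sketched in plain numpy (illustrative only, not how it would be written inside the Cython splitter):

```python
# Illustrative numpy sketch of forest-RC's oblique split feature:
# combine k randomly chosen features with random {-1, +1} weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))               # toy data: 6 samples, 5 features

k = int(rng.integers(1, X.shape[1] + 1))  # number of combined features is random
features = rng.choice(X.shape[1], size=k, replace=False)
weights = rng.choice([-1.0, 1.0], size=k)

projected = X[:, features] @ weights      # one synthetic feature per sample
print(projected.shape)
```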

Adrin Jalali
@MrAE you can use a C++ vector in Cython. But since you're changing the SplitRecord struct, you'll need to change the code in quite a lot of places.
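A minimal sketch of that suggestion (the function and variable names are illustrative): libcpp.vector gives a growable container that holds no Python objects and can be used and passed by reference with the GIL released.

```cython
# illustrative sketch of using a C++ vector from Cython
# distutils: language = c++
from libcpp.vector cimport vector

cdef void pick_features(vector[int]& features,
                        vector[double]& weights) nogil:
    # vectors grow without touching the GIL or any Python object
    features.push_back(3)
    weights.push_back(-1.0)
```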
Mateusz Sokół
Hi, I have a basic question about the local docs build for scikit-learn. I've been trying to modify the docs inside the API reference for a file in sklearn/linear_model and followed the instructions in the Contributors Guide. But after a few attempts, the make command inside /docs does not seem to modify the local docs build inside _build. In the browser, the API docs didn't change although I modified the sources. Am I missing something?
Nicolas Hug
@mtsokol it seems that you're doing it right... maybe double check that 1. you're actually changing the sources, i.e. not anything in the _build folder, 2. the doc that you're changing is about public estimators/tools (private tools aren't rendered in the doc anyway) and 3. that you're looking at the generated html in doc/_build/html/stable/
Nicolas Hug


re 1. you can't use (let alone allocate) numpy arrays when the GIL is released because these are Python objects. Is there a way for you to allocate the arrays somewhere where the GIL is held, and use memory views when the GIL is released? Memory views are safe to use without the GIL

re 2. is it still considered a Python object if you use a cdefed class and all the attributes are cdefed as well?

re 3. what vectors? can't you use a view as a field of the struct?

Nicolas Hug
Also @MrAE I happen to have been writing about Cython over the weekend... maybe that could help http://nicolas-hug.com/blog/cython_notes
Jan-Benedikt Jagusch
Could somebody share a good example of class docstrings in scikit-learn that we could use as a sort of template? Thanks!
Alessandro Surace
Here is the issue search string "is:issue is:open examples class docs involves:adrinjalali"
Alessandro Surace
Hey guys, who is veerlosar on GitHub? I just want to talk about the OneVsRestClassifier example
Benjamin Bossan
@zioalex I can talk to veerlosar, we're at the same sprint

> Hey guys, who is veerlosar on GitHub? I just want to talk about the OneVsRestClassifier example

@zioalex what did you want to talk about?