Here is another wildcard question. I'll check myself, but if someone knows the answer off the top of their head it will save me some work: when oversampling creates new observations, are they appended to the bottom/end of the dataframe/array, or are they placed adjacent to the observations from which they were created?
There we copy the original dataset
and then append the new samples for each class.

I'm using a multiclass dataset (CIC-IDS-2017); the target column is categorical (more than 4 classes), and I used pd.get_dummies for one-hot encoding. The dataset is very imbalanced, and when I tried to oversample it with SMOTE it didn't work. I also tried putting the steps into a pipeline, but the pipeline cannot use get_dummies, so I replaced it with OneHotEncoder. Unfortunately, it is still not working:

X = dataset.drop(columns=['Label'])
y = dataset.Label
steps = [('onehot', OneHotEncoder()), ('smt', SMOTE())]
pipeline = Pipeline(steps=steps)
X, y = pipeline.fit_resample(X, y)
Does anyone have a suggestion?

My correlation matrix did not change after using SMOTE. What could be the cause?
I'm using resting-state fMRI correlation matrices, which are 4D, and I want to use SMOTE+ENN, but it only allows me to use 2D data... How can I address this problem without losing information from my original data?
Akilu Rilwan Muhammad

This question has to do with the SMOTEBoost implementation found here: https://github.com/gkapatai/MaatPy, but I believe the issue is related to the imblearn library.

I tried using the library to re-sample all classes in a multiclass problem and was caught by an AttributeError: 'int' object has no attribute 'flatten' error:

How to reproduce (in Colab nb):
Clone repo:

!git clone https://github.com/gkapatai/MaatPy.git
cd MaatPy/

from maatpy.classifiers import SMOTEBoost

Dummy data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6, weights=[.1, .15, .75])
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=.2, random_state=123)

And then:

from maatpy.classifiers import SMOTEBoost
model = SMOTEBoost()
model.fit(xtrain, ytrain)

/usr/local/lib/python3.7/dist-packages/imblearn/over_sampling/_smote.py in _make_samples(self, X, y_dtype, y_type, nn_data, nn_num, n_samples, step_size)
    106         random_state = check_random_state(self.random_state)
    107         samples_indices = random_state.randint(
--> 108             low=0, high=len(nn_num.flatten()), size=n_samples)
    109         steps = step_size * random_state.uniform(size=n_samples)
    110         rows = np.floor_divide(samples_indices, nn_num.shape[1])

AttributeError: 'int' object has no attribute 'flatten'
hi, I have a question
when using SMOTE, I get this ValueError: Found array with dim 4. Estimator expected <= 2.
it's a binary-class problem
not sure how to fix it
please help
@krinetic1234 as far as I know, SMOTE only works on 2D data... I have the same problem and I don't know how to solve it.
interesting, yeah, I have a CSV of grayscale images basically, where each image is 224 by 224
so in that case it wouldn't work...
is there an alternative to SMOTE that works well for images?
apparently you just flatten the image dimensions
and then reshape back
I don't know how well it'll work though

I used SMOTEENN and SMOTETomek on my initial data; they took between 1.5 and 2.5 hours. But when I added some data, they ran for 5 hours before I interrupted them.

  • Initial data: 49.77 MB
  • Added data: 79.25 MB

  • All data: 129.02 MB

NB: plain SMOTE takes just a few seconds on all the data.

Really interesting @krinetic1234... but won't that reshape cause a loss of information?
I thought so too... do any of you know of a better way
to do this "SMOTE" idea for images?
and by the way, I tried the reshape and it didn't really work properly
Akilu Rilwan Muhammad

You just need to do the reshaping properly. I once worked with time-series activity data in which I created chunks of N time-steps. The shape of each input was (1, 100, 4), so the training set had shape (n_samples, 1, 100, 4); it was a five-class, multi-minority problem that I wanted to oversample with SMOTE.

The way I went about it was to flatten the input, like so:

# Reshape (flatten) Train_X for SMOTE resampling
nsamples, k, nx, ny = Train_X.shape
Train_X = Train_X.reshape((nsamples, k * nx * ny))

smote = SMOTE(sampling_strategy='not majority', random_state=42, k_neighbors=5)
X_resample, Y_resample = smote.fit_resample(Train_X, Train_Y)

And then reshape the instance back to the original input shape, like so:

# Reshape back to the original 4-D input shape for the CNN
X_resample = X_resample.reshape(len(X_resample), k, nx, ny)
ok but does SMOTE actually augment images? @arilwan
like let's say I have tons of images of cats and few images of dogs: does it actually augment the dog images? and if so, how does it oversample them?
I haven't seen much where people use SMOTE for oversampling images specifically, which is why I'm surprised
thanks by the way, I'll definitely check what you sent
I believe I did something similar but got an error
(screenshot: Screen Shot 2021-06-28 at 10.54.48 PM.png)
so I was also wondering something
I tried to do that and it didn't work
do you have any advice on what to do differently?
I didn't do it exactly how you did, but I thought this was a simpler approach, conceptually
Soledad Galli
In the instance hardness threshold, where the docs say "InstanceHardnessThreshold is a specific algorithm in which a classifier is trained on the data and the samples with lower probabilities are removed" (https://imbalanced-learn.org/stable/under_sampling.html#instance-hardness-threshold), what probability exactly is it referring to? The probability of the majority class, or the probability of the minority? It is not clear to me from the docs.
Assuming that the target has 2 classes, 0 and 1, and 1 is the minority class: cross_val_predict will return an array with the probabilities of class 0 and 1. Then the code takes the first vector (https://github.com/scikit-learn-contrib/imbalanced-learn/blob/f177b05/imblearn/under_sampling/_prototype_selection/_instance_hardness_threshold.py#L156), that is the probability of being of the majority class, and keeps those with the highest probability, so those that are easier to classify correctly as members of the majority. So far, I think I understand.
But if the target has 3 classes, 0, 1 and 2, and only 2 is the minority, the code (https://github.com/scikit-learn-contrib/imbalanced-learn/blob/f177b05/imblearn/under_sampling/_prototype_selection/_instance_hardness_threshold.py#L156) will only take the first vector of probabilities, that is, of class 0. But for class 1, should it not take the second vector? Is this a bug, or am I misunderstanding the code?
Amila Wickramasinghe
I am trying to use a custom generator, but I get the following error from the _generator.py file, at the lines

import keras
ParentClass = keras.utils.Sequence

AttributeError: module 'keras.utils' has no attribute 'Sequence'

What can I do to overcome this error?
Soledad Galli
In random oversampling, when applying shrinkage we multiply the std of each variable by the shrinkage (arbitrary, entered by the user) and by a smoothing constant. The smoothing constant is (4 / ((n_features + 2) * n_samples)) ** (1 / (n_features + 4)). What is the logic behind this constant?
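For what it's worth, that expression has the same form as Silverman's rule-of-thumb bandwidth factor for a Gaussian kernel density estimate in d dimensions, which is presumably where it comes from. A quick sketch of the computation:

```python
# Silverman's rule-of-thumb factor for a Gaussian KDE in d dimensions is
# (4 / (d + 2))**(1/(d + 4)) * n**(-1/(d + 4)), which is algebraically the
# same as the smoothing constant quoted above.
def silverman_factor(n_samples, n_features):
    return (4 / ((n_features + 2) * n_samples)) ** (1 / (n_features + 4))

print(round(silverman_factor(1000, 2), 4))  # 0.3162 for n=1000, d=2
```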
Also, per the docs on RandomOverSampler: "When generating a smoothed bootstrap, this method is also known as Random Over-Sampling Examples (ROSE) [1]." But in the ROSE paper, don't the authors select samples with probability 1/2? In RandomOverSampler the smoothing is applied to all randomly extracted samples, regardless of their original probability.
James Proctor
Is there a simpler/built-in way to convert/cast imblearn objects to their sklearn object base/equivalent? My workaround is to create the base using all matching values from the object's dict and then once created update the sklearn object's dict with the imblearn object's dict.
Guillaume Lemaitre
imblearn objects inherit from scikit-learn's BaseEstimator, so I am not sure what you mean by converting them.
Which operation would you like to apply that is blocked?
James Proctor
I actually just needed to convert/downcast the type for compatibility with an external package that only supports sklearn objects. I wouldn't be calling fit, which is where I expect the major changes to occur, but I thought it might be possible in a less hacky way.
Guillaume Lemaitre
But what is a sklearn object? scikit-learn essentially just provides the BaseEstimator class. Which check is done in the external package?
Hello All!
Quick question - what is the recommended way to grid search all samplers?