sklearn.neighbors.KernelDensity alongside scikit-multiflow. As you mention, the sklearn implementation works in batches of data, so if you want to update the densities you have to define a data-update strategy. This is very similar to how the KNNClassifier is implemented: you will see there that the data is stored in a sliding window. Regarding drift detection, ADWIN, like all other drift detectors, takes 1-dimensional data as input. You can check the KNNADWINClassifier, which uses ADWIN to monitor the classification performance of the basic KNN model. If ADWIN detects a change in classification performance, the model is reset.
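The sliding-window update strategy mentioned above can be sketched in plain Python. This is an illustrative standalone example, not the library's implementation: the window size and bandwidth are arbitrary choices, and a naive 1-D Gaussian kernel stands in for sklearn's KernelDensity so the idea is self-contained.

```python
import math
from collections import deque

class SlidingWindowKDE:
    """Naive 1-D Gaussian KDE over a fixed-size sliding window."""

    def __init__(self, window_size=100, bandwidth=0.5):
        # deque with maxlen evicts the oldest sample automatically
        self.window = deque(maxlen=window_size)
        self.bandwidth = bandwidth

    def update(self, x):
        # Adding a sample may push the oldest one out once the window is full
        self.window.append(x)

    def density(self, x):
        # Average of Gaussian kernels centered at the stored samples
        if not self.window:
            return 0.0
        h = self.bandwidth
        norm = 1.0 / (h * math.sqrt(2 * math.pi))
        return sum(norm * math.exp(-0.5 * ((x - xi) / h) ** 2)
                   for xi in self.window) / len(self.window)

kde = SlidingWindowKDE(window_size=3, bandwidth=1.0)
for value in [0.0, 0.1, -0.1, 5.0]:   # appending 5.0 evicts the oldest sample
    kde.update(value)
print(len(kde.window))                       # 3
print(kde.density(0.0) > kde.density(10.0))  # True: mass concentrates near recent values
```

The same pattern applies when refitting sklearn's KernelDensity on the window contents after each update: the window defines *which* data the density reflects, and the estimator is simply rebuilt from it.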
Hi @dossy, here is one:
from skmultiflow.data import DataStream
import numpy as np

n_features = 10
n_samples = 50
X = np.random.random(size=(n_samples, n_features))
y = np.random.randint(2, size=n_samples)

stream = DataStream(data=X, y=y)
# stream.prepare_for_use()  # if using the stable version (0.4.1)
stream.n_remaining_samples()
The last line returns 50.
numpy.ndarray data is supported, as long as you define the index of the target column (last column by default). pandas.DataFrame objects are also supported, following the same indications.
scikit-multiflow
is only a small part of the puzzle and there’s a lot of stuff you have to develop yourself around it?
I know this is a n00b question, but if I’m working with strings, I have to vectorize them first? I can’t just pass in a
pandas.DataFrame
containing strings - only real/int values?
All questions are welcome. Currently, we only support numerical data, so your data must be pre-processed first. As you mention, scikit-multiflow is focused on the learning part; the idea is that you can take it and integrate it as part of your workflow.
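As a sketch of that pre-processing step for strings, the hashing trick maps tokens to a fixed-length numeric vector and therefore works one sample at a time, with no vocabulary to fit in advance. This is an illustrative standalone example (the vector size of 8 is an arbitrary choice), not part of scikit-multiflow:

```python
import hashlib

def hash_vectorize(text, n_features=8):
    """Map a string to a fixed-length count vector via feature hashing."""
    vec = [0.0] * n_features
    for token in text.lower().split():
        # Stable hash so the same token always lands in the same bucket
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % n_features
        vec[idx] += 1.0
    return vec

sample = hash_vectorize("error disk full error")
print(len(sample))   # 8  -- fixed length, regardless of vocabulary
print(sum(sample))   # 4.0 -- one count per token
```

Because the output length is fixed, each incoming string can be converted on the fly and fed to any numerical stream model.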
scikit-multiflow processes data one sample at a time. We provide the FileStream and DataStream classes for the cases when you have data in a file or in memory; both are extensions of the Stream class. If you want to read from a log file, you could process each log entry as it arrives (convert it to numerical values) and then pass it to a model. The operation to receive and process the log entry can be wrapped in an extension of the Stream class; the most relevant method is next_sample.
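To make the idea concrete, here is a self-contained toy class in the spirit of Stream's next_sample/has_more_samples interface. The log format, the level encoding, and the class itself are invented for illustration; a real implementation would subclass skmultiflow's Stream instead.

```python
class LogStream:
    """Toy stream that converts raw log lines into numeric samples on demand."""

    LEVELS = {'INFO': 0, 'WARN': 1, 'ERROR': 2}  # hypothetical encoding

    def __init__(self, lines):
        self._lines = iter(lines)
        self._next = next(self._lines, None)   # pre-fetch so has_more_samples works

    def has_more_samples(self):
        return self._next is not None

    def next_sample(self):
        # Parse one "LEVEL,latency" entry into a 2-D feature array and a label,
        # mirroring the (X, y) shapes skmultiflow streams return
        level, latency = self._next.split(',')
        X = [[float(self.LEVELS[level]), float(latency)]]
        y = [1 if level == 'ERROR' else 0]
        self._next = next(self._lines, None)
        return X, y

stream = LogStream(["INFO,12.5", "ERROR,250.0"])
while stream.has_more_samples():
    X, y = stream.next_sample()
    print(X, y)
# [[0.0, 12.5]] [0]
# [[2.0, 250.0]] [1]
```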
Hi, I am building a HoeffdingTree classifier on a heavily imbalanced data stream (only ~1 in 1000 data points are of the positive class). Using the EvaluatePrequential evaluator I am able to plot the precision and recall; however, the recall is extremely low because the model learns to almost always predict the negative class (only 50 positive predictions in my stream of 10 million data points).
Tree classifiers often give me class probabilities rather than discrete class outputs, and the actual recall (and precision) is of course threshold-dependent. Is there a way to control the threshold for which I am evaluating the recall?
Tree classifiers often give me class probabilities rather than discrete class outputs, and the actual recall (and precision) is of course threshold-dependent. Is there a way to control the threshold for which I am evaluating the recall?
You can get probabilities via the predict_proba method of the HoeffdingTree; however, there is currently no support for this in the EvaluatePrequential class. In this case you might want to implement the prequential evaluation process yourself. Something like this:
# Imports
from skmultiflow.data import SEAGenerator
from skmultiflow.trees import HoeffdingTreeClassifier

# Setting up a data stream
stream = SEAGenerator(random_state=1)

# Setup Hoeffding Tree estimator
ht = HoeffdingTreeClassifier()

# Setup variables to control loop and track performance
n_samples = 0
correct_cnt = 0
max_samples = 200

# Train the estimator with the samples provided by the data stream
while n_samples < max_samples and stream.has_more_samples():
    X, y = stream.next_sample()
    y_pred = ht.predict(X)
    if y[0] == y_pred[0]:
        correct_cnt += 1
    ht = ht.partial_fit(X, y)
    n_samples += 1
The metrics can be calculated using the ClassificationPerformanceEvaluator
and WindowClassificationPerformanceEvaluator
in the development branch.
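To make the evaluation threshold-dependent, the loop above can collect predict_proba outputs and apply a custom cut-off when counting hits. A minimal pure-Python sketch of thresholded precision/recall over a stream of (probability, label) pairs; the data here is made up for illustration:

```python
def thresholded_precision_recall(scores, labels, threshold=0.5):
    """Precision/recall when predicting positive iff P(class=1) >= threshold."""
    tp = fp = fn = 0
    for p, y in zip(scores, labels):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 1:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.9, 0.4, 0.6, 0.2]   # hypothetical P(class=1) from predict_proba
labels = [1,   1,   0,   0]
print(thresholded_precision_recall(scores, labels, 0.5))  # (0.5, 0.5)
print(thresholded_precision_recall(scores, labels, 0.3))  # precision 2/3, recall 1.0
```

Lowering the threshold recovers the positive sample scored at 0.4, which is exactly the lever the question asks about: on a heavily imbalanced stream, a threshold below 0.5 trades precision for recall.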
@nuwangunasekara
Is there a way to achieve something similar to sklearn.preprocessing.OneHotEncoder() in scikit-multiflow in a streaming setting?
The StreamTransform class can be used as a base class to implement it. There are two main scenarios (with challenges) that I can see:
- If you know the number of distinct values in the nominal attribute, then it should be as simple as mapping each value to the corresponding binary attribute (a dict would help).
- If you don't know the distinct values in the nominal attribute, this is more challenging: first, the mapping must be maintained dynamically; second, if a new value appears, the length of the sample will change as a new binary attribute is added. This is complex, as it is not guaranteed that methods will support "emerging" attributes in this fashion.
I would explore the first scenario first, as the second one seems more like a corner case.
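The first scenario can be sketched in a few lines: a dict gives the column index for each known value, and encoding each sample is a constant-time lookup. This is an illustrative standalone class (the category names are invented), not part of scikit-multiflow:

```python
class KnownValuesOneHot:
    """One-hot encoder for a nominal attribute whose value set is known up front."""

    def __init__(self, values):
        # value -> column index; fixed once, so the sample length never changes
        self.index = {v: i for i, v in enumerate(values)}

    def transform_one(self, value):
        vec = [0] * len(self.index)
        vec[self.index[value]] = 1
        return vec

enc = KnownValuesOneHot(['red', 'green', 'blue'])
print(enc.transform_one('green'))  # [0, 1, 0]
print(enc.transform_one('blue'))   # [0, 0, 1]
```

Because the mapping is fixed, the output length is constant, which sidesteps the "emerging attributes" problem of the second scenario.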
Thanks for the tip @jacobmontiel !
# Imports
from skmultiflow.data import SEAGenerator
from skmultiflow.anomaly_detection import HalfSpaceTrees

# Setup a data stream
stream = SEAGenerator(random_state=1)

# Setup Half-Space Trees estimator
half_space_trees = HalfSpaceTrees(random_state=1, n_estimators=5)

# Pre-train the model with one sample
X, y = stream.next_sample()
half_space_trees.partial_fit(X, y)

# Setup variables to control loop and track performance
n_samples = 0
max_samples = 5000
anomaly_cnt = 0

# Train the estimator(s) with the samples provided by the data stream
while n_samples < max_samples and stream.has_more_samples():
    X, y = stream.next_sample()
    y_pred = half_space_trees.predict(X)
    if y_pred[0] == 1:
        anomaly_cnt += 1
    half_space_trees = half_space_trees.partial_fit(X, y)
    n_samples += 1

# Display results
print('{} samples analyzed.'.format(n_samples))
print('Half-Space Trees anomalies detected: {}'.format(anomaly_cnt))
Some comments:
- The pre-train phase is needed in this case to avoid an error when predicting while the model is empty.
- The SEA generator does not really provide data with actual anomalies; we just use it to show how the detector interacts with a Stream object. You must replace the generator with the proper data.
- This example corresponds to the development version, where the parameter n_features has been removed from the signature.
Thank you, appreciate it.
# Imports
from skmultiflow.data import SEAGenerator
from skmultiflow.anomaly_detection import HalfSpaceTrees

# Setup a data stream
stream = SEAGenerator(random_state=1)
stream.prepare_for_use()

# Setup Half-Space Trees estimator (the SEA generator produces 3 features)
half_space_trees = HalfSpaceTrees(random_state=1, n_estimators=5, n_features=3)

# Pre-train the model with one sample
X, y = stream.next_sample()
half_space_trees.partial_fit(X, y)

# Setup variables to control loop and track performance
n_samples = 0
max_samples = 5000
anomaly_cnt = 0

# Train the estimator(s) with the samples provided by the data stream
while n_samples < max_samples and stream.has_more_samples():
    X, y = stream.next_sample()
    y_pred = half_space_trees.predict(X)
    if y_pred[0] == 1:
        anomaly_cnt += 1
    half_space_trees = half_space_trees.partial_fit(X, y)
    n_samples += 1

# Display results
print('{} samples analyzed.'.format(n_samples))
print('Half-Space Trees anomalies detected: {}'.format(anomaly_cnt))
# Imports
from skmultiflow.anomaly_detection import HalfSpaceTrees
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(30, 3), columns=['x', 'y', 'z'])

# Access the raw numpy array inside the dataframe
X_array = df.values

# Setup Half-Space Trees estimator
half_space_trees = HalfSpaceTrees(random_state=1, n_estimators=5)  # , n_features=2)

# Pre-train the model with one sample
# the sample is a 1D array and we must pass a 2D array, thus np.asarray([X_array[0]])
half_space_trees.partial_fit(np.asarray([X_array[0]]), [0])

anomaly_cnt = 0

# Train the estimator(s) with the samples provided by the data stream
for X in X_array[1:]:
    y_pred = half_space_trees.predict([X])
    if y_pred[0] == 1:
        anomaly_cnt += 1
    half_space_trees = half_space_trees.partial_fit(np.asarray([X]), [0])

# Display results
print('Half-Space Trees anomalies detected: {}'.format(anomaly_cnt))
from skmultiflow.data import SEAGenerator
import pandas as pd
import numpy as np

X, y = SEAGenerator(random_state=12345).next_sample(1000)
df = pd.DataFrame(np.hstack((X, y.reshape(-1, 1))),
                  columns=['attr_{}'.format(i) for i in range(X.shape[1])] + ['target'])
df.target = df.target.astype(int)
df.to_csv('stream.csv', index=False)  # index=False so the row index is not saved as an extra column
RandomForest is the batch version, based on decision trees. AdaptiveRandomForest is the stream version, based on Hoeffding Trees. AdaptiveRandomForest can be used with or without drift detection; if you want to use it without drift detection, you must initialize it as AdaptiveRandomForest(drift_detection_method=None).
Thank you so much @jacobmontiel !
from skmultiflow.data.data_stream import DataStream
from skmultiflow.evaluation import EvaluatePrequential
from skmultiflow.trees import HoeffdingTree

stream = DataStream(X_train, y=y_train)
stream.prepare_for_use()

ht = HoeffdingTree()

evaluator = EvaluatePrequential(show_plot=True,
                                pretrain_size=5000,
                                max_samples=20000,
                                metrics=['accuracy', 'running_time', 'model_size'],
                                output_file='results.csv')

evaluator.evaluate(stream=stream, model=ht)