Eduardo Gonzalez
Joao Madeira
I will. Thanks Eduardo
I'm trying to run a machine learning model on AWS EMR and I'm getting the error java.lang.ClassNotFoundException: org.nd4j.linalg.learning.config.IUpdater, even though the model works in IntelliJ. Is there a good way for me to debug this and make sure that I have the appropriate libraries?
Just to add: I'm already importing org.nd4j.linalg.learning.config.IUpdater, even though it isn't used directly in my model.
Adam Gibson
@karlmcphee it's hard to tell what your issue is. Can you please post a completely reproducible problem (dependencies, how you're running your commands, versions of software, ...) on the forums? https://community.konduit.ai Thanks!
I realized that I didn't have an ND4J backend set up. Is there a good practice for setting up Maven dependencies so that my project will work both on an EMR cluster with CUDA 9.2 and with my local settings? Do I need CUDA version 9.2 for this?
Hello, everyone! Just a quick question, to see if there is a better way. I know that it is possible to use CnnSentenceDataSetIterator for RNN and CNN training/classification using WordVectors. Is it also possible to use it with a Vectorizer (e.g. TfidfVectorizer)?
Hi! I am wondering whether ComposableRecordReader can compose different numbers of records from different record readers. Say we have an imageRecordReader, a wavFileRecordReader, and a CSVRecordReader, with 3 records of images, 3 records of wav files, and 2 records of labels. Technically, can we use ComposableRecordReader to compose them, producing 2 records containing images, wav files, and labels, and 1 record with only an image and a wav file? (I know this is meaningless in practice; I just wonder because I looked into the source code of ComposableRecordReader and there seems to be no such constraint.)
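For intuition, the composition described above is essentially a zip over the underlying readers. Whether ComposableRecordReader truncates to the shortest reader depends on its actual hasNext() logic, but a zip-style composition can be sketched in plain Java (illustrative only, not the DataVec API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ComposeSketch {
    // Zip-style composition: each composed record concatenates one record
    // from every underlying reader; iteration stops as soon as any reader
    // is exhausted (shortest-reader semantics), so 3 images + 3 wavs +
    // 2 labels would yield 2 composed records.
    public static List<List<String>> compose(List<List<String>> readers) {
        List<Iterator<String>> its = new ArrayList<>();
        for (List<String> r : readers) its.add(r.iterator());
        List<List<String>> out = new ArrayList<>();
        while (allHaveNext(its)) {
            List<String> record = new ArrayList<>();
            for (Iterator<String> it : its) record.add(it.next());
            out.add(record);
        }
        return out;
    }

    private static boolean allHaveNext(List<Iterator<String>> its) {
        for (Iterator<String> it : its) if (!it.hasNext()) return false;
        return true;
    }
}
```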
Rhys Compton

I've converted a bunch of Keras models into the Dl4j .zip format for WekaDeeplearning4j, a Weka wrapper for Dl4j. I was wondering whether the Dl4j maintainers would be interested in using these zip files for Dl4j? It would expand the Dl4j Model Zoo substantially:

  • Keras DenseNet121
  • Keras DenseNet169
  • Keras DenseNet201
  • Keras EfficientNetB0
  • Keras EfficientNetB1
  • Keras EfficientNetB2
  • Keras EfficientNetB3
  • Keras EfficientNetB4
  • Keras EfficientNetB5
  • Keras EfficientNetB6
  • Keras EfficientNetB7
  • Keras InceptionV3
  • Keras NASNetLarge
  • Keras NASNetMobile
  • Keras ResNet101
  • Keras ResNet101V2
  • Keras ResNet152
  • Keras ResNet152V2
  • Keras ResNet50
  • Keras ResNet50V2
  • Keras VGG16
  • Keras VGG19
  • Keras Xception

You can download them from here

I ask because I don't have the capacity to code up the architectures in Dl4j for each one of them, so the model architectures would only be available in pretrained format, but maybe that's ok (up to the maintainers I suppose).

Is there an example somewhere that loads a JSON file and splits it into train and test, like
DataSet trainingData = testAndTrain.getTrain();
DataSet testData = testAndTrain.getTest();
I am getting the error java.lang.IllegalStateException: Cannot split DataSet with <= 1 rows (data set has 1 example) when I try to use JacksonLineRecordReader.
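That exception means the DataSet being split holds a single example, so there is nothing to divide; a likely cause (an assumption, not confirmed here) is that the iterator's batch size is 1, so each next() yields one record and the whole set needs to be collected before splitting. The split itself is just a shuffled cut over example indices, sketched here in plain Java (illustrative only, not the splitTestAndTrain implementation):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SplitSketch {
    // Shuffle the example indices and cut them into a train part and a
    // test part -- the same idea splitTestAndTrain applies to a DataSet.
    public static List<List<Integer>> split(int numExamples, double trainFraction, long seed) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < numExamples; i++) indices.add(i);
        Collections.shuffle(indices, new Random(seed));
        int cut = (int) (numExamples * trainFraction);
        List<List<Integer>> result = new ArrayList<>();
        result.add(new ArrayList<>(indices.subList(0, cut)));           // train
        result.add(new ArrayList<>(indices.subList(cut, numExamples))); // test
        return result;
    }
}
```

With only one example, any fraction leaves one side empty, which is why the split is rejected.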
Hey, I feel kind of dumb here, so sorry in advance. For the Doc2Vec sample: how do I structure the input data? I only found info for FileLabelAwareIterator, with the labels taken from the file structure. But I do not want to use a file system, so I would use LabelAwareDocumentIterator(?). How do I fill that, and how do I structure the data? Is there a sample/example available?
Adam Gibson
@basedrhys @azanux:matrix.org @lw7M mind moving the discussion to the forums at https://community.konduit.ai? It's a Discourse instance, which allows async discussions and actually indexable search results
@agibsonccc hey, sure and thanks for the reply :)
Luke Czapla
greetings all, long time no see! I have been working with dropout for regularization and was wondering if there are built-in methods to actually drop out nodes at scoring time, not just during the training process
Luke Czapla
I seem to have a poorly posed problem with way too many inputs (features) and not enough samples, so I want to try a Monte Carlo technique
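The technique being described is Monte Carlo dropout: keep dropout active at scoring time and average several stochastic forward passes, which gives an uncertainty-aware prediction. A minimal plain-Java sketch of the idea (illustrative only; this is not a DL4J API):

```java
import java.util.Random;

public class McDropoutSketch {
    // One stochastic pass: apply a Bernoulli dropout mask to the
    // activations, scaling survivors by 1/(1-p) (inverted dropout).
    static double[] dropoutPass(double[] activations, double p, Random rng) {
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            out[i] = rng.nextDouble() < p ? 0.0 : activations[i] / (1.0 - p);
        }
        return out;
    }

    // Monte Carlo estimate: average T stochastic passes at scoring time.
    // The spread across passes can also serve as an uncertainty estimate.
    public static double[] mcAverage(double[] activations, double p, int T, long seed) {
        Random rng = new Random(seed);
        double[] mean = new double[activations.length];
        for (int t = 0; t < T; t++) {
            double[] pass = dropoutPass(activations, p, rng);
            for (int i = 0; i < mean.length; i++) mean[i] += pass[i] / T;
        }
        return mean;
    }
}
```

Because of the 1/(1-p) scaling, the Monte Carlo mean converges to the deterministic activations as T grows.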
hey guys, I am fairly new to Java. I want to keep running a model for inference in a while-true loop. For all pre- and post-processing I am using ND4J INDArrays. I have written quite a lot of functions, and each one creates many NDArrays and gets called at every iteration of the loop.
How can I improve the performance, and is there a better way to manage this?
Adam Gibson
@MankaranSingh @lukeczapla please post over on the forums at https://community.konduit.ai/ with more details and we'll reply when we can. Thanks!
Luke Czapla
ok thanks! posted and hopefully someone has some ideas or could point out the class / source code.

the TfidfVectorizer has a transform method

What is the difference between the transform and vectorize methods?
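For context on the terminology: in the usual TF-IDF design, fitting learns the vocabulary and IDF weights from a corpus, while transform applies those learned weights to a (possibly unseen) document. Whether DataVec's vectorize differs from transform would need the Javadoc to confirm; the general fit/transform split can be sketched as:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class TfidfSketch {
    // "Fit": learn IDF weights from a corpus of tokenized documents.
    public static Map<String, Double> fitIdf(List<List<String>> docs) {
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs) {
            for (String term : new HashSet<>(doc)) df.merge(term, 1, Integer::sum);
        }
        Map<String, Double> idf = new HashMap<>();
        for (Map.Entry<String, Integer> e : df.entrySet()) {
            idf.put(e.getKey(), Math.log((double) docs.size() / e.getValue()));
        }
        return idf;
    }

    // "Transform": weight one (possibly unseen) document with the
    // already-fitted IDF table; terms outside the vocabulary are dropped.
    public static Map<String, Double> transform(List<String> doc, Map<String, Double> idf) {
        Map<String, Double> tfidf = new HashMap<>();
        for (String term : doc) {
            if (idf.containsKey(term)) tfidf.merge(term, idf.get(term) / doc.size(), Double::sum);
        }
        return tfidf;
    }
}
```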

Daniel Cotter
Hey all, apologies for the rudimentary question, but which DataSetIterator are you supposed to use with a single-layer LSTM that is not using regression, for calculating precise numerical sequences of data? I'm attempting to use CSVSequenceRecordReader, but according to the code it attempts to turn all labels into a one-hot representation and fails because the label is not a 1 or 0. I've attempted reading the documentation, but I can't seem to find the nugget of information I need. What am I missing here? Happy to answer any follow-up questions, as I'm not sure what information you would need to discern this up front.
@agibsonccc Will the next release be 1.0.0-M1?

Hello everyone. I need some help to make a pipeline between TfidfRecordReader and RecordReaderDataSetIterator. This is my code:

    int batchSize = 100;
    int seed = 123;

    File rootTrainingTxtFolder = new File("./TRAINING/TXT");
    String[] allowedFormats = new String[]{".txt"};
    FileSplit fileSplit = new FileSplit(rootTrainingTxtFolder, allowedFormats, new Random(seed));

    Configuration config = new Configuration();
    config.setBoolean(RecordReader.APPEND_LABEL, true);
    config.setInt(TextVectorizer.MIN_WORD_FREQUENCY, 1);

    TfidfRecordReader recordReader = new TfidfRecordReader();
    recordReader.initialize(config, fileSplit);

    int nbrLabel = recordReader.getLabels().size();

    DataSetIterator trainIter = new RecordReaderDataSetIterator.Builder(recordReader, batchSize)
            .classification(1, nbrLabel)
            .build();


And I get an exception:
Exception in thread "main" java.lang.IllegalStateException: Cannot put array: array should have leading dimension of 1 and equal rank to output array. Attempting to put array of shape [9418] into output array of shape [100]
at org.nd4j.common.base.Preconditions.throwStateEx(Preconditions.java:641)
at org.nd4j.common.base.Preconditions.checkState(Preconditions.java:340)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.putExample(RecordReaderMultiDataSetIterator.java:544)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.convertWritablesHelper(RecordReaderMultiDataSetIterator.java:516)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.convertWritables(RecordReaderMultiDataSetIterator.java:454)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.convertFeaturesOrLabels(RecordReaderMultiDataSetIterator.java:364)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.nextMultiDataSet(RecordReaderMultiDataSetIterator.java:327)
at org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator.next(RecordReaderMultiDataSetIterator.java:213)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:378)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:453)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:85)

What is wrong? Do I need to use a WritableConverter?

Adam Gibson
@lzd010 @AdrienDS72 please post on the forums at https://community.konduit.ai/. All your questions have either been answered there already or will be.

hey guys, I wanted to calculate an inner product using ND4J
A is a 3x4 matrix and B is 4x4
this is how we do it in numpy: https://numpy.org/doc/stable/reference/generated/numpy.inner.html

how do I do this in ND4J?
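For 2-D inputs, numpy.inner(A, B) computes the dot product of each row of A with each row of B, i.e. A times B-transpose, so a 3x4 A and a 4x4 B yield a 3x4 result. In ND4J that should correspond to A.mmul(B.transpose()). The underlying computation, sketched in plain Java:

```java
public class InnerProductSketch {
    // numpy.inner for 2-D inputs: result[i][j] = dot(A[i], B[j]),
    // i.e. A (m x k) times B-transpose (k x n) -> m x n.
    // In ND4J the same result should come from A.mmul(B.transpose()).
    public static double[][] inner(double[][] a, double[][] b) {
        int m = a.length, n = b.length, k = a[0].length;
        double[][] out = new double[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                double dot = 0.0;
                for (int t = 0; t < k; t++) dot += a[i][t] * b[j][t];
                out[i][j] = dot;
            }
        }
        return out;
    }
}
```

Note this requires the row lengths of A and B to match (both 4 here), which they do for a 3x4 and a 4x4 matrix.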

Adam Gibson
@MankaranSingh please post over on the community forums so people can see the post and discussion in search results, thanks! https://community.konduit.ai/

Hello, can anyone help me to run an image from file through the graph produced by the MobileNetTransferLearningExample? I have run the example, here modified to use the CIFAR-100 dataset rather than CIFAR-10, gotten a 78% accuracy rate, and saved the SameDiff to disk using asFlatFile(). In this test code I load the SameDiff from disk and run a jpg through it. I get a result from Predictions/Output, but I don't know how to interpret it. How can I map this 4D array to a single label? Thank you.


public class LoadMobileNet {

    public static void main(String[] args) throws Exception {

        int numClasses = 100;

        SameDiff sd = SameDiff.fromFlatFile(new File("/tmp/frozenGraph.dat"));

        String urlString = "https://cdn.britannica.com/16/126516-050-2D2DB8AC/Triumph-Rocket-III-motorcycle-2005.jpg";
        int w = 1600;
        int h = 1131;
        URL url = new URL(urlString);
        INDArray testImage = new ImageLoader(h, w, 3).asMatrix(url.openStream());

        INDArray out = sd.batchOutput()
            .input("input", testImage)
            .output("Predictions/Output")
            .execSingle();

        System.out.println("Result: " + out.shapeInfoToString());
    }
}


Result: Rank: 4, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,30,44,100],  Stride: [132000,4400,100,1]
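A [1, 30, 44, 100] output looks like a per-spatial-location score map (height x width x numClasses). Assuming that interpretation is right for this graph, one common way to collapse such a map to a single label is to average over the two spatial axes and take the argmax over the class axis (in ND4J, roughly out.mean(1, 2) followed by argMax). The reduction, sketched in plain Java:

```java
public class ScoreMapToLabel {
    // Collapse a [H][W][C] score map to a single class index:
    // average the scores over the spatial axes, then take the
    // argmax over the class axis.
    public static int predictedClass(double[][][] scores) {
        int h = scores.length, w = scores[0].length, c = scores[0][0].length;
        double[] mean = new double[c];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int k = 0; k < c; k++)
                    mean[k] += scores[y][x][k] / (h * w);
        int best = 0;
        for (int k = 1; k < c; k++) if (mean[k] > mean[best]) best = k;
        return best;
    }
}
```

The resulting index would then be mapped to a label name via the dataset's class list (CIFAR-100 here).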
Adam Gibson
@MrForum mind posting your question over at the community forums so others can benefit from the discussion? https://community.konduit.ai/
@agibsonccc sure
@agibsonccc posted, though it wouldn't let me use "MobileNetTransferLearningExample" in the title, seemingly because the word is too long
Hi, is the Nd4j.createComplex function removed from ND4J? How can I use it?
Eduardo Gonzalez
I thought we had to remove complex types due to lack of maintenance.
But some examples require it
Adam Gibson
@eix128 that's not true. I'm not sure what version of dl4j you're using, but we removed complex years ago, same with the Java backend you were asking about
Please make sure you're using the up to date version of the project at https://github.com/eclipse/deeplearning4j and https://github.com/eclipse/deeplearning4j-examples not the old examples at https://github.com/deeplearning4j
Giovanni Berti
Hi! I'm trying to load a CSV dataset containing sequence data that I want to use to train a generative model. The CSV file also contains some other heading columns that I'd like to remove
I've tried using datavec with SequenceSchema, and CSVSequenceRecordReader, but I can't find a corresponding CSVSequenceRecordWriter to write data to
I've tried skipping the preprocessing part, and using a SequenceRecordReaderDataSetIterator, but it assumes that every record has a label, which is not my case
Hi, what's the difference between NeuralNetConfiguration.Builder's optimizationAlgo and updater methods? They look the same.
Also, there is no AMSGrad in optimizationAlgo; it's only available via the updater method.
Adam Gibson
@eix128 please migrate your questions to the forums
We post there so other people can benefit from questions other people asked.
To answer your question: much of the confusion is due to new APIs and backwards compatibility. Things used to be called optimization algorithms; now they're updaters, to be more in line with the rest of the industry. Please use the examples. I've seen you looking at old code and posting on old repos. I'm not sure where you keep looking (the book, maybe?), but many things are there for backwards-compatibility reasons and not much else. Whatever you're doing, stick to https://github.com/eclipse/deeplearning4j, https://github.com/eclipse/deeplearning4j-examples, and https://community.konduit.ai/. I mention this so you use the most up-to-date code from us, the most up-to-date examples, and join the community where the discussion moved long ago
Everything else is likely out of date (blog posts, old repositories that haven't been touched in years etc)
Okay, so you should flag the optimizationAlgo method as deprecated, and in the Javadoc you should point to the updater method as the alternative
Any idea why this Huber loss function would fail? The same network works correctly with the MSE loss function.
public class HuberLoss extends SameDiffLoss {

    @Override
    public SDVariable defineLoss(SameDiff sd, SDVariable yPred, SDVariable yTrue) {
        return sd.loss.huberLoss(yTrue, yPred, null, 1.0);
    }
}

NDArray::applyScalarArr BoolOps: this dtype: [5]; scalar dtype: [6]
Exception in thread "main" 14:24:54.952 [main] ERROR o.n.l.c.n.ops.NativeOpExecutioner - Failed to execute op huber_loss_grad. Attempted to execute with 3 inputs, 3 outputs, 1 targs,0 bargs and 1 iargs. Inputs: [(FLOAT,[1,3],c), (FLOAT,[],c), (FLOAT,[1,3],c)]. Outputs: [(FLOAT,[1,3],c), (FLOAT,[],c), (FLOAT,[1,3],c)]. tArgs: [1.0]. iArgs: [3]. bArgs: -. Input var names: [layerInput, sd_var, labels]. Output var names: [layerInput-grad, sd_var-grad, labels-grad] - Please see above message (printed out from c++) for a possible cause of error.
java.lang.RuntimeException: NDArray::applyScalarArr bool method: this and scalar arrays must have the same type!
    at org.nd4j.linalg.cpu.nativecpu.ops.NativeOpExecutioner.exec(NativeOpExecutioner.java:1918)
    at org.nd4j.linalg.factory.Nd4j.exec(Nd4j.java:6575)
    at org.nd4j.autodiff.samediff.internal.InferenceSession.doExec(InferenceSession.java:487)
    at org.nd4j.autodiff.samediff.internal.InferenceSession.getOutputs(InferenceSession.java:214)
    at org.nd4j.autodiff.samediff.internal.InferenceSession.getOutputs(InferenceSession.java:60)
    at org.nd4j.autodiff.samediff.internal.AbstractSession.output(AbstractSession.java:386)
    at org.nd4j.autodiff.samediff.SameDiff.directExecHelper(SameDiff.java:2579)
    at org.nd4j.autodiff.samediff.SameDiff.batchOutputHelper(SameDiff.java:2547)
Adam Gibson
@somegituser123 it looks like you filed an issue. So others can benefit (especially in Google search results), please migrate posts like this over to the forums, or to GH issues for bugs, if you want discussion. Thanks!
Donatien Dinyad Yeto
Hi !