@SidneyLann
Hi, when will the next snapshot version be built?
cqiaoYc
@cqiaoYc
@AlexDBlack Red Hat's Shenandoah is a pauseless GC. If using it, how should Nd4j.getMemoryManager().togglePeriodicGc() be set?
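For context, the periodic GC controls in question look like this (a minimal sketch; whether these calls should be disabled under a pauseless collector like Shenandoah is exactly the open question here):

```java
import org.nd4j.linalg.factory.Nd4j;

public class GcConfig {
    public static void main(String[] args) {
        // Disable ND4J's periodic System.gc() calls, e.g. when the JVM's own
        // collector is expected to keep up with off-heap deallocation on its own
        Nd4j.getMemoryManager().togglePeriodicGc(false);

        // Alternatively, keep it enabled but widen the window between
        // forced GC invocations (milliseconds):
        // Nd4j.getMemoryManager().setAutoGcWindow(10000);
    }
}
```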
Nitin
@nitinnat

@AlexDBlack Hi, needed a little bit of help with training an MLP with external errors. I've looked at the sample code that's out there but for some reason, the training differs from the standard method (i.e. with OutputLayer rather than DenseLayer). In this method, the loss steadily increases (I've tried both addi and subi with no luck). I'm using SquaredLoss with Sigmoid activation on a simple binary classification problem. Here is the gist - https://gist.github.com/nitinnat/88562f236326f5058e36789986b50707

If you could point out any possible errors I am making, it would be greatly appreciated!!
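For reference, the usual external-errors pattern (paraphrasing the shape of the DL4J external-errors example; a sketch, not a drop-in fix for the gist, and import paths have moved between releases) passes dL/dOutput into backpropGradient and then applies the update manually. A steadily increasing loss usually points at a sign error in exactly this epsilon term:

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.primitives.Pair;

public class ExternalErrorsSketch {
    static void fitWithExternalErrors(MultiLayerNetwork model,
                                      INDArray features, INDArray labels,
                                      int iteration, int epoch, int minibatch) {
        INDArray output = model.output(features);

        // Epsilon here means dL/dOutput. For squared loss it is proportional
        // to (output - label); flipping this sign makes the loss climb.
        INDArray externalError = output.sub(labels);

        Pair<Gradient, INDArray> p =
                model.backpropGradient(externalError, LayerWorkspaceMgr.noWorkspaces());
        Gradient gradient = p.getFirst();

        // Let the updater (SGD/Adam/...) transform the raw gradient in place,
        // then apply it as a descent step (subtract, not add)
        model.getUpdater().update(model, gradient, iteration, epoch, minibatch,
                LayerWorkspaceMgr.noWorkspaces());
        model.params().subi(gradient.gradient());
    }
}
```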

gitterBot
@raver120
@nitinnat Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
raguenets
@montardon

Hi, I trained the Cifar example for 200 epochs. I saved the model as cifarmodel.dl4j.zip. When I restore it to do inference on the test set, I get this confusion matrix.

   0    1    2    3    4    5    6    7    8    9

   0    0    0 1000    0    0    0    0    0    0 | 0 = 0
   0    0    0 1000    0    0    0    0    0    0 | 1 = 1
   0    0    0 1000    0    0    0    0    0    0 | 2 = 2
   0    0    0 1000    0    0    0    0    0    0 | 3 = 3
   0    0    0 1000    0    0    0    0    0    0 | 4 = 4
   0    0    0 1000    0    0    0    0    0    0 | 5 = 5
   0    0    0 1000    0    0    0    0    0    0 | 6 = 6
   0    0    0 1000    0    0    0    0    0    0 | 7 = 7
   0    0    0 1000    0    0    0    0    0    0 | 8 = 8
   0    0    0 1000    0    0    0    0    0    0 | 9 = 9

public class InferenceOnTrainedModel {
    public static void main(String[] args) {
        try {
            MultiLayerNetwork model1 = ModelSerializer.restoreMultiLayerNetwork(new File(System.getProperty("java.io.tmpdir"), "cifarmodel.dl4j.zip"),true);
            Cifar10DataSetIterator cifarEval = new Cifar10DataSetIterator(Utils.BATCH_SIZE, new int[]{Utils.HEIGHT, Utils.WIDTH}, DataSetType.TEST, new ColorConversionTransform(COLOR_RGB2BGR), Utils.SEED);
            ImagePreProcessingScaler scaler = new ImagePreProcessingScaler();
            scaler.fit(cifarEval);
            cifarEval.setPreProcessor(scaler);
            Evaluation eval = model1.evaluate(cifarEval);
            System.out.println(eval.confusionMatrix());
            System.out.println(eval.accuracy());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Am I missing something?
ThomasTrepanier
@thomtrep_gitlab
(I am new here so if I miss anything don't hesitate to tell me)
gitterBot
@raver120
@thomtrep_gitlab Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
ThomasTrepanier
@thomtrep_gitlab
I am trying to write a new listener for my MultiLayerNetwork, but I am having a hard time converting the Model passed as a parameter to the onEpochEnd event. Is there a way to convert this Model to a MultiLayerNetwork? I have tried taking the model's configuration, serializing it to JSON, and creating a MultiLayerNetwork from that, but it doesn't work. I've also tried to create a MultiLayerConfiguration from the NeuralNetConfiguration; I think that could be a solution, but I can't figure out how to do it. Thanks!
I am on Windows, using Java 12 and Maven 3.6.3
ThomasTrepanier
@thomtrep_gitlab
Basically what I want to do is save my model's performance to an Excel sheet every epoch. I don't know if there is another way to do this, but a listener was my first idea.
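A minimal sketch of that listener idea (names and the CSV path are illustrative; BaseTrainingListener spares you implementing the whole TrainingListener interface, and a plain cast recovers the MultiLayerNetwork):

```java
import java.io.FileWriter;
import java.io.IOException;

import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

public class ScoreToCsvListener extends BaseTrainingListener {
    private int epoch = 0;

    @Override
    public void onEpochEnd(Model model) {
        // The Model handed to the listener IS the network; a cast is enough
        MultiLayerNetwork net = (MultiLayerNetwork) model;
        try (FileWriter w = new FileWriter("scores.csv", true)) {
            // Append "epoch,score" so the file can be opened in Excel
            w.write(epoch++ + "," + net.score() + "\n");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```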
ThomasTrepanier
@thomtrep_gitlab
Well, I found a way: simply cast my Model into a MultiLayerNetwork lol
Jasper Andrew
@jasperandrew
Hello there. Hopefully this isn't a dumb question. I'm inexperienced with working on large Java projects, but I'm attempting to use ND4J in an old project that uses Ant to build. Is there a way to do that? Is there a .jar file I can use as a library? I'd love to just use maven, but this is a pretty old codebase, and it hasn't been brought into the modern era yet. Thanks!
gitterBot
@raver120
@jasperandrew Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
DataHorizon
@DataHorizon2_twitter
I am creating my RDD[DataSet] in Spark by mapping over an RDD[List] with a method that turns it into an RDD[DataSet], but I am getting the following error. Can someone provide some feedback:
image.png
gitterBot
@raver120
@iluvmf Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
Samuel Audet
@saudet
@jasperandrew I would say, create an uber JAR and use that with Ant
@montardon Make sure to save and load whatever needed for preprocessing too
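Concretely, the fitted normalizer can travel inside the same zip as the model, so inference reuses exactly the preprocessing seen at training time (a sketch; the file name is illustrative, and it assumes the zip already contains a model saved via ModelSerializer.writeModel):

```java
import java.io.File;

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.dataset.api.preprocessor.DataNormalization;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;

public class NormalizerRoundTrip {
    public static void main(String[] args) throws Exception {
        File f = new File("cifarmodel.dl4j.zip");

        // Training side: store the fitted normalizer next to the weights
        ImagePreProcessingScaler scaler = new ImagePreProcessingScaler();
        ModelSerializer.addNormalizerToModel(f, scaler);

        // Inference side: restore both, and set the SAME normalizer
        // on the evaluation iterator instead of fitting a fresh one
        MultiLayerNetwork model = ModelSerializer.restoreMultiLayerNetwork(f, true);
        DataNormalization restored = ModelSerializer.restoreNormalizerFromFile(f);
    }
}
```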
Kebechet
@Kebechet
Hello... is it possible to manually set epsilon in RL4J? I wasn't able to find any function for that.
Kebechet
@Kebechet
@saudet but that is the minimal epsilon... not the actual one. Or am I missing something?
raguenets
@montardon
Hi, I used Keras import to load a customized model (the activation function and loss are customized). I got an error, as expected, since the list of known activations and losses does not contain my custom ones. To work around this problem, I intend to change the custom activation and custom loss in the Keras model (to known ones), import the Keras model into DL4J, and then reset the custom activation and custom loss in the DL4J layers and model. I guess it should work provided I correctly code the custom activation and custom loss as done in the examples. Is this the right way to load a custom Keras model?
Benjamin Theriault
@iluvmf
Hello everyone, I am currently having an issue while trying to transform data using DataVec. The problem comes when I use a stringToTimeTransform. Basically, I have a CSV file which uses this format for the time: "01.01.2005 00:00:00.000 GMT-0500". Sometimes, because of daylight saving time, the offset becomes -0400, and this is where it seems to have issues, because I get this error at some point in the transform process:
java.lang.IllegalArgumentException: Invalid format: "13.06.2012 07:00:00.000 GMT-0400" is malformed at "-0400"
This is what my stringToTimeTransform call looks like: ".stringToTimeTransform("Local time", "dd.MM.yyyy HH:mm:ss.SSS zzzZZZZZ", DateTimeZone.UTC)".
I am able to fix the issue by modifying my CSV file to remove the offset part, but I would prefer not having to do that and just use my CSV file as it is. Is there any way to fix this without modifying my CSV file?
Thank you for answering!
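The pattern above mixes a zone-name token with text that is really the literal "GMT" fused to a numeric offset. One workaround (an assumption, not verified against DataVec) is to quote 'GMT' as a literal and let a single Z pattern letter consume the -0500/-0400 part, i.e. "dd.MM.yyyy HH:mm:ss.SSS 'GMT'Z" in the Joda-Time syntax DataVec uses. The same idea, illustrated with dependency-free java.time:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class TimeParseDemo {
    // 'GMT' is a quoted literal; Z parses the numeric offset (-0500, -0400, ...)
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("dd.MM.yyyy HH:mm:ss.SSS 'GMT'Z");

    static String parseOffset(String s) {
        return OffsetDateTime.parse(s, FMT).getOffset().toString();
    }

    public static void main(String[] args) {
        System.out.println(parseOffset("01.01.2005 00:00:00.000 GMT-0500")); // -05:00
        System.out.println(parseOffset("13.06.2012 07:00:00.000 GMT-0400")); // -04:00
    }
}
```

Both the winter (-0500) and summer (-0400) timestamps now parse with one pattern, so no edits to the CSV should be needed.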
Samuel Audet
@saudet
@Kebechet We can set the other parameter to 1 so that it takes effect immediately...
AllenWGX
@AllenWGX
hi, I'm using 1.0.0-beta5 RL4J and have found what may be a bug in the abstract class Policy (org.deeplearning4j.rl4j.policy.Policy). The play method has a while loop, and inside this loop the "obs" variable might not be updated; it always keeps its initial value. Please check it.
rl4j-bug.png
Jasper Andrew
@jasperandrew
@saudet I've attempted to build an ND4J uberjar, but encountered the following error:
[ERROR] Failed to execute goal on project jackson: Could not resolve dependencies for project org.nd4j:jackson:jar:1.0.0-SNAPSHOT: 
Could not find artifact org.deeplearning4j:dl4j-test-resources:jar:1.0.0-SNAPSHOT in sonatype-nexus-snapshots 
(https://oss.sonatype.org/content/repositories/snapshots)
I know this is probably basic stuff, but I don't have experience with maven. Thank you.
To be clear, that was from running mvn package -P testresources on the pom.xml in the ND4J repo
Samuel Audet
@saudet
@AllenWGX It's probably been fixed, please try again with 1.0.0-SNAPSHOT, but if not, please file an !issue.
gitterBot
@raver120
To file an issue, please click 'New Issue' at https://github.com/deeplearning4j/deeplearning4j/issues and provide as many details about your problem as possible
Samuel Audet
@saudet
@jasperandrew Use -Dmaven.test.skip to skip that as mentioned in the README.md file
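Putting the two answers together, the build would look roughly like this (a sketch; run from the ND4J repo root, and the exact modules and flags may differ by version, so check the repo's README.md):

```shell
# Skip compiling and running tests so the dl4j-test-resources
# snapshot dependency is never needed
mvn clean package -Dmaven.test.skip=true

# The resulting shaded/uber JAR (if the module produces one) goes on
# Ant's classpath like any other library, e.g. in build.xml:
#   <classpath><pathelement location="lib/nd4j-uber.jar"/></classpath>
```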
AllenWGX
@AllenWGX
@saudet got it. Shall I compile it myself if I want to use 1.0.0-SNAPSHOT?
Samuel Audet
@saudet
We can use precompiled binaries for !snapshots as well
gitterBot
@raver120
Please check out this guide to set up snapshots in your project: https://deeplearning4j.org/docs/latest/deeplearning4j-config-snapshots
AllenWGX
@AllenWGX
okay, let me check then, thanks
François Dupire
@dupirefr

Hi everybody!

I'm working for the blog Baeldung and am currently writing an article on matrix multiplication. ND4J is performing remarkably well on large matrices (less than 1 sec) compared to the other libraries I used (at least 25 sec, up to 500 sec), and I was wondering if it relies, somehow, on parallelism?

I searched on Google but didn't manage to find anything, and I dug a bit into the code but got down to native calls.

gitterBot
@raver120
@dupirefr Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
RainerHerrler
@RainerHerrler
Hi everyone, I want to create a net where part of my input should be permutation invariant. I came across the DeepSets paper and wanted to realize this in my network configuration. I now wonder how I can split inputs, feed them to different subnets, and merge them back together into some dense layers. I cannot find anything similar in the examples. Can someone give me a hint on how to approach this problem?
gitterBot
@raver120
@RainerHerrler Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
raguenets
@montardon
Screenshot from 2019-12-10 11-09-03.png
Hi, I was training a UNet network when the score suddenly shot up after iteration 3860. I'm curious to understand this phenomenon.
ocean211
@ocean211
It seems that the order of the INDArray shape obtained from ComputationGraph.rnnTimeStep is not preserved after saving:
gitterBot
@raver120
@ocean211 Welcome! Here's a link to Deeplearning4j's Gitter Guidelines, our documentation and other DeepLearning resources online. Please explore these and enjoy! https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/GITTER_GUIDELINES.md
ocean211
@ocean211
Test1 shape: Rank: 3, DataType: FLOAT, Offset: 0, Order: f, Shape: [20,1,1], Stride: [1,20,20]
Test2 shape: Rank: 3, DataType: FLOAT, Offset: 0, Order: c, Shape: [1,1,20], Stride: [20,20,1]
Test1 is after training; Test2 is after saving and reloading with ModelSerializer.restoreComputationGraph.
The INDArray[] parameter passed to rnnTimeStep is the same for Test1 and Test2.