Activity
  • Aug 26 22:03
    User @agibsonccc unbanned @farizrahman4u
  • Aug 26 22:02
    User @agibsonccc unbanned @RobAltena
  • Aug 26 22:02
    User @agibsonccc unbanned @gsw85
  • Aug 26 22:02
    User @agibsonccc unbanned @raver120
  • Aug 26 22:01
    User @agibsonccc unbanned @cagneymoreau
  • Aug 20 21:19
    @chrisvnicholson banned @cagneymoreau
  • Aug 20 21:15
    @chrisvnicholson banned @konduitops
  • Aug 20 21:13
    User @chrisvnicholson unbanned @eraly
  • Aug 20 21:12
    User @chrisvnicholson unbanned @treo
  • Aug 20 21:12
    User @chrisvnicholson unbanned @AlexDBlack
  • Jul 07 22:38
    @chrisvnicholson banned @raver120
  • May 31 20:46
    @chrisvnicholson banned @ShamsUlAzeem
  • May 31 20:46
    @chrisvnicholson banned @farizrahman4u
  • May 31 20:44
    @chrisvnicholson banned @gsw85_twitter
  • May 31 20:44
    @chrisvnicholson banned @gsw85
  • May 31 20:43
    @chrisvnicholson banned @AlexDBlack
  • May 31 20:43
    @chrisvnicholson banned @agibsonccc
  • May 31 20:43
    @chrisvnicholson banned @RobAltena
  • May 31 20:42
    @chrisvnicholson banned @eraly
  • May 31 20:42
    @chrisvnicholson banned @treo
Eduardo Gonzalez
@wmeddie
wow you don’t see that every day.
DarkWingMcQuack
@DarkWingMcQuack
when this happens, this is what happens to the score
[attached image: grafik.png]
@izzat1996 would you go higher or lower?
could this be a vanishing gradient problem?
Eduardo Gonzalez
@wmeddie
I’d double check the data.
DarkWingMcQuack
@DarkWingMcQuack
@wmeddie it always happens at the same data interval. if i restart the training it happens at another timestamp
      .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
      .gradientNormalizationThreshold(1.0)
made it a bit better
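For anyone following along, here is a minimal sketch of where those two clipping settings sit in a full DL4J configuration. The layer sizes, activations, and loss function are placeholders, not the actual model from the screenshots:

    import org.deeplearning4j.nn.conf.GradientNormalization;
    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.LSTM;
    import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class ClippedLstmSketch {
        public static void main(String[] args) {
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    // clip each gradient element to [-threshold, threshold] so a single
                    // bad minibatch cannot blow up the update
                    .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
                    .gradientNormalizationThreshold(1.0)
                    .list()
                    .layer(new LSTM.Builder().nIn(10).nOut(64).activation(Activation.TANH).build())
                    .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MSE)
                            .nIn(64).nOut(1).activation(Activation.IDENTITY).build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
        }
    }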
izzat1996
@izzat1996
i would go a bit higher, because the same problem happened to my model before when i tried to train it
DarkWingMcQuack
@DarkWingMcQuack
@izzat1996 thanks i will try that
i just discovered this
[attached image: grafik.png]
i guess this is not normal either. This is my LSTM layer:
DarkWingMcQuack
@DarkWingMcQuack
Does no one have an idea what is going on here? I tried different batch sizes and the problem remains. I checked the data but was not able to find any NaNs or other unusual values. Maybe there are some outliers in the data, but could that lead to such behaviour?
Eduardo Gonzalez
@wmeddie
Yeah.
Once before, when I was doing e-mail processing, a few of the e-mails had huge sections of base64-encoded files in-line. Caused huge spikes in the loss.
DarkWingMcQuack
@DarkWingMcQuack
so i guess i need to do some outlier removal :/
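One rough way to do that with ND4J itself, assuming the features already live in an INDArray, is to clamp anything outside a 3-sigma band instead of dropping rows. The array shape and the injected value below are made up for illustration:

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;
    import org.nd4j.linalg.indexing.BooleanIndexing;
    import org.nd4j.linalg.indexing.conditions.Conditions;

    public class ClampOutliersSketch {
        public static void main(String[] args) {
            INDArray features = Nd4j.rand(300, 3);           // stand-in for the real dataset
            features.putScalar(new long[]{42, 1}, 500.0);    // inject one extreme value

            double mean = features.meanNumber().doubleValue();
            double std = features.stdNumber().doubleValue();
            double upper = mean + 3 * std;
            double lower = mean - 3 * std;

            // replace anything outside the band with the band edge (winsorizing)
            BooleanIndexing.replaceWhere(features, upper, Conditions.greaterThan(upper));
            BooleanIndexing.replaceWhere(features, lower, Conditions.lessThan(lower));
        }
    }

In practice you would probably compute the statistics per column, or simply drop the offending records, but the idea is the same.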
Giri
@Giribushan
cudacudacudacudacuda
Adam Gibson
@agibsonccc
@nicoladaoud hard to tell what it is, you likely have a bad set of deps. Maybe use nd4j-native-platform instead?
ADAMYA850
@ADAMYA850
HI
I AM ADAMYA
I AM LEARNING PYTHON
CAN ANYBODY OF YOU HELP ME
cawthorne
@cawthorne

Hi.

When I try to install nd4j on an arm64 server that doesn't have glibc 2.27 I get the error:

Caused by: java.lang.UnsatisfiedLinkError: /home/*/.javacpp/cache/benchmarks.jar/org/nd4j/nativeblas/linux-arm64/libjnind4jcpu.so: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /home/*/.javacpp/cache/benchmarks.jar/org/nd4j/nativeblas/linux-arm64/libnd4jcpu.so)

However it does work when installed on a server with that glibc.

Is there any way to support other glibc environments? Or is everyone who doesn't have glibc 2.27 considered unsupported?

Thanks,
Greg

Eilons
@Eilons
[attached screenshot: image.png]

Hi All.
I have a mac and use maven.
In my pom I added:

<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-core</artifactId>
    <version>1.0.0-beta7</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native</artifactId>
    <version>1.0.0-beta7</version>
</dependency>
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-nlp</artifactId>
    <version>1.0.0-beta7</version>
</dependency>
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-modelimport</artifactId>
    <version>1.0.0-beta7</version>
</dependency>

I try to call the Word2Vec class but I get an error that the class is not recognized (attached).
Can anyone help?

Note that when I use my Windows computer, everything works fine.
Adam Gibson
@agibsonccc
Use nd4j-native-platform and ensure IntelliJ/Eclipse updates its cache properly
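Concretely, that presumably means replacing the nd4j-native dependency in the pom above with the platform artifact, which bundles the native binaries for all supported operating systems, so the same pom should work on the Mac as well as on Windows:

    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-native-platform</artifactId>
        <version>1.0.0-beta7</version>
    </dependency>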
Eilons
@Eilons
Wow! perfect! Thanks!
cawthorne
@cawthorne
bump for my post :)
Eduardo Gonzalez
@wmeddie
Are you using a really new OS?
cawthorne
@cawthorne
Ubuntu 16
I have glibc 2.23 installed and it needs 2.27
what would be the best way to resolve this do you think?
Eduardo Gonzalez
@wmeddie
Ah, the opposite problem.
for ubuntu… not sure.
cawthorne
@cawthorne
What are the general steps for an os you are familiar with?
Eduardo Gonzalez
@wmeddie
In Red Hat/CentOS there's devtoolset that you can possibly install.
And you run it after you’ve activated a recent enough devtoolset.
cawthorne
@cawthorne
Ah cool. Can it also downgrade if needed?
Eduardo Gonzalez
@wmeddie
otherwise, you can create a chroot with a newer version.
You have to “activate it” every time you want to use it.
cawthorne
@cawthorne
Ah okay good to know thanks
Eduardo Gonzalez
@wmeddie
same goes for the chroot.
cawthorne
@cawthorne
Ah I thought you were talking about chroot. Okay thanks.
NT
@nimishatandon
Am looking for some guidance so that my models are reproducible when using dl4j beta-7. When using beta-2 and training with the exact same test and train data I was getting consistent models, but that's not the case with beta-7.
Eduardo Gonzalez
@wmeddie
If you can share something reproducible in a bug report we will look into it. Properly seeding all of the libraries we integrate with is non-trivial.
NT
@nimishatandon
It's a vanilla LSTM model that I am training. Would you still like me to share the configuration and the stopping criteria of my models? I read on your website that setting the seed in the configuration object would help make the models reproducible. If there are other such recommendations I could follow, that would be really helpful.
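For reference, a minimal sketch of that seeding advice, with 12345 as an arbitrary placeholder value rather than anything recommended:

    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.nd4j.linalg.factory.Nd4j;

    public class SeedSketch {
        public static void main(String[] args) {
            long seed = 12345L;              // arbitrary placeholder
            Nd4j.getRandom().setSeed(seed);  // ND4J's global RNG

            NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
                    .seed(seed);             // network-level seed used for weight init, dropout, etc.
            // ...add layers and build as usual; also make sure any shuffling done by the
            // DataSetIterator or record readers uses a fixed seed.
        }
    }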
Eduardo Gonzalez
@wmeddie
If it was reproducible in beta2 and is no longer the case (and nothing about that is mentioned in the release notes) then it's a bug.