    Pratik Parikh
    Is there a bootstrap project with datanucleus-ldap + JPA that I can reference as an example? http://www.datanucleus.org/products/accessplatform_5_0/datastores/ldap.html does not explain how to start from scratch.
    Andy Jefferson
    JPA is designed for RDBMS only, hence no ready-made samples are available for it. You could take https://github.com/datanucleus/samples-jpa/tutorial and add mapping for LDAP using the docs.
    Pratik Parikh
    Thanks @andyjefferson. In persistence.xml, what would be the right provider for LDAP? Would it be org.datanucleus.api.jpa.PersistenceProviderImpl?
    Andy Jefferson
    Yes, the JPA provider (that you mention) is the same for all datastores. The javax.persistence.jdbc.url etc. values are given on the first page you referenced.
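    For reference, a minimal persistence.xml unit for an LDAP datastore could look roughly like this. This is a sketch only: the unit name, entity class, host/port, and credentials below are placeholders, and the exact connection property values should be taken from the LDAP datastore page linked above.

    ```xml
    <persistence-unit name="MyLdapUnit">
        <provider>org.datanucleus.api.jpa.PersistenceProviderImpl</provider>
        <class>mydomain.model.Person</class>
        <exclude-unlisted-classes/>
        <properties>
            <!-- Connection to the LDAP server (placeholder host/port/credentials) -->
            <property name="javax.persistence.jdbc.url" value="ldap://localhost:10389"/>
            <property name="javax.persistence.jdbc.user" value="uid=admin,ou=system"/>
            <property name="javax.persistence.jdbc.password" value="secret"/>
            <property name="datanucleus.schema.autoCreateAll" value="true"/>
        </properties>
    </persistence-unit>
    ```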
    Pratik Parikh

    Sorry to bother you again, but does the following give you any insight into what I might be doing wrong?

    SEVERE: Found Meta-Data for class org.keycloak.model.jpa.ldap.Group but this class is either not enhanced or you have multiple copies of the persistence API jar in your CLASSPATH!! Make sure all persistable classes are enhanced before running DataNucleus and/or the CLASSPATH is correct.
    org.datanucleus.exceptions.NucleusUserException: Found Meta-Data for class org.keycloak.model.jpa.ldap.Group but this class is either not enhanced or you have multiple copies of the persistence API jar in your CLASSPATH!! Make sure all persistable classes are enhanced before running DataNucleus and/or the CLASSPATH is correct.
    at org.datanucleus.metadata.MetaDataManagerImpl.initialiseClassMetaData(MetaDataManagerImpl.java:2813)
    at org.datanucleus.metadata.MetaDataManagerImpl.initialiseFileMetaData(MetaDataManagerImpl.java:2746)
    at org.datanucleus.metadata.MetaDataManagerImpl.initialiseFileMetaDataForUse(MetaDataManagerImpl.java:1381)
    at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1142)
    at org.datanucleus.api.jpa.JPAEntityManagerFactory.initialiseNucleusContext(JPAEntityManagerFactory.java:845)
    at org.datanucleus.api.jpa.JPAEntityManagerFactory.initialise(JPAEntityManagerFactory.java:448)
    at org.datanucleus.api.jpa.JPAEntityManagerFactory.<init>(JPAEntityManagerFactory.java:404)
    at org.datanucleus.api.jpa.PersistenceProviderImpl.createEntityManagerFactory(PersistenceProviderImpl.java:112)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:38)
    at org.keycloak.model.ldap.tests.Main.main(Main.java:10)

    Andy Jefferson
    As it says ... you haven't run the enhancer over the class(es), which the samples-jpa/tutorial would do.
    Pratik Parikh
    @andyjefferson would using datanucleus-maven-plugin with the enhance goal be the same as using Ant? I am using Maven and just want to confirm. Sorry, some of these questions are because I am new to DataNucleus. Thanks for your patience in advance.
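    For reference, a typical datanucleus-maven-plugin setup that binds the enhance goal into the build looks roughly like this (a sketch: the version number and persistence-unit name are illustrative; pick the plugin release matching your DataNucleus version):

    ```xml
    <plugin>
        <groupId>org.datanucleus</groupId>
        <artifactId>datanucleus-maven-plugin</artifactId>
        <version>5.0.2</version>
        <configuration>
            <api>JPA</api>
            <persistenceUnitName>MyUnit</persistenceUnitName>
            <verbose>true</verbose>
        </configuration>
        <executions>
            <execution>
                <!-- Run bytecode enhancement after compilation -->
                <phase>process-classes</phase>
                <goals>
                    <goal>enhance</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
    ```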
    Pratik Parikh
    @andyjefferson thanks a bunch. I am now able to use DataNucleus successfully with your assistance.
    Much appreciated.
    Hi everybody,
    I'm trying to get started with DataNucleus and I've hit a newbie issue.
    I have done a full install with Maven, and when I try to run the example I get the error: could not find or load main class org.datanucleus.enhancer.DataNucleusEnhancer
    this is my pom
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    thanks for helping me
    Andy Jefferson
    datanucleus-core.jar has the enhancer and is required for all usage.
    OK, thanks. I had removed it because the docs say "Note that this plugin step will automatically try to bring in the latest applicable version of datanucleus-core for use by the enhancer. It does this since you don't need to have datanucleus-core in your POM for compilation/enhancement. If you want to use an earlier version then you need to add exclusions to the maven-datanucleus-plugin", but OK.
    I have added it to my pom and still have the same problem. Do I also need to configure the build path in Eclipse?
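    For reference, the dependency in question is just the plain datanucleus-core artifact in the pom (the version range below is illustrative):

    ```xml
    <dependency>
        <groupId>org.datanucleus</groupId>
        <artifactId>datanucleus-core</artifactId>
        <version>[5.0.0-m1, 5.9)</version>
    </dependency>
    ```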
    Hi again, I have managed to solve my problem but now I've hit another one. I'm still trying to connect to HBase, and I have set up my persistence.xml like this:
    <persistence-unit name="HBase">
        <exclude-unlisted-classes/>
        <properties>
            <!-- <property name="datanucleus.storeManagerType" value="hbase"/> -->
            <property name="datanucleus.ConnectionURL" value="hbase:localhost:2181"/>
            <property name="datanucleus.validateTables" value="true"/>
            <property name="datanucleus.validateConstraints" value="false"/>
            <property name="datanucleus.Optimistic" value="false"/>
            <property name="datanucleus.Multithreaded" value="true"/>
            <property name="datanucleus.schema.autoCreateAll" value="true"/>
            <property name="datanucleus.mapping.Schema" value="pandora"/>
        </properties>
    </persistence-unit>
    I get no error, but I see that DataNucleus tries to call hbase-master.
    I have seen some mentions of hbase-site.xml, but I'm not sure whether I also need that file on the client side, and if so where to put it. From my knowledge, this file is set up on the server side.
    thanks for your answer
    Andy Jefferson
    datanucleus-core.jar is in the original tutorial, so no idea why you took it out in the first place.
    And the persistence.xml in the tutorial is adequate to demonstrate it.
    For datanucleus-core.jar, it was a misunderstanding on my part; I have re-added it, and I found an example in your GitHub which helped me solve that issue. For the second problem, I'm not sure: mvn enhance builds with success, but when I try the exec I get the following stack in Eclipse:
    '''
    Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x2e772ffc0x0, quorum=localhost:2181, baseZNode=/hbase
    zookeeper.disableAutoWatchReset is false
    Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
    Socket connection established to 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, initiating session
    Session establishment request sent on 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181
    Session establishment complete on server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, sessionid = 0x15b7fff4d70015e, negotiated timeout = 40000
    hconnection-0x2e772ffc0x0, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
    hconnection-0x2e772ffc-0x15b7fff4d70015e connected
    Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,15024,0 request:: '/hbase/hbaseid,F response:: s{15,14910,1492095748876,1492819063769,5,0,0,0,67,0,15}
    Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 2,4 replyHeader:: 2,15024,0 request:: '/hbase/hbaseid,F response:: #ffffffff000146d61737465723a3136303030312171ffffff9effffffe07a264250425546a2437396438303036332d616336372d343431362d396462372d393262613333313932393666,s{15,14910,1492095748876,1492819063769,5,0,0,0,67,0,15}
    Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2785bf1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
    Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,15024,0 request:: '/hbase,F response:: s{2,2,1492095741141,1492095741141,0,36,0,0,0,16,14931}
    Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 4,4 replyHeader:: 4,15024,0 request:: '/hbase/master,F response:: #ffffffff000146d61737465723a3136303030484241ffffffd0ffffffe32dfffffffdffffffc150425546a1aae686d61737465722d312e766e657410ffffff807d18ffffffeaffffffcaffffffccffffff97ffffffb92b10018ffffff8a7d,s{14906,14906,1492819062622,1492819062622,0,0,0,97812551411171664,62,0,14906}
    Use SIMPLE authentication for service MasterService, sasl=false
    Connecting to hmaster-1.vnet/
    '''
    At the end I can see the HMaster, but maybe it's ZooKeeper which calls it and returns this stack.
    In any case, I don't manage to communicate with HBase, and I think it's due to my configuration.
    Andy Jefferson
    The JDO tutorial has an hbase-site.xml under https://github.com/datanucleus/samples-jdo/tree/master/tutorial/src/main/resources Either way, it works for me, and this is all down to your chosen database, not DataNucleus (and JDO is recommended for good reasons, hence why it has the necessary config files).
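    For reference, a minimal client-side hbase-site.xml, like the one in that tutorial, typically just tells the client where ZooKeeper is (the hostname and port below are placeholders for your own environment):

    ```xml
    <configuration>
        <!-- Comma-separated list of ZooKeeper hosts the HBase client should contact -->
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>localhost</value>
        </property>
        <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value>
        </property>
    </configuration>
    ```

    It goes on the client's classpath (e.g. src/main/resources in a Maven project), alongside the server-side copy.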
    OK, for the moment I have tried with the JPA tutorial; I'll test with the JDO tutorial.
    thanks for your help
    Hi Andy, I finally found my issue. My Docker image of HBase had changed its ports, so ZooKeeper failed to connect to the hbase-master; after fixing that, it was OK.
    Now that the tutorial works well, I'm trying to do a join query between the Inventory and Product tables, like this:
    SELECT I.name, P.name FROM Inventory I INNER JOIN I.products P WHERE P.price > 150.00
    I've checked that the tables are not empty and contain the right data.
    My issue is that no data is returned, and I can see in the stack trace the following explanation: Impossible to evaluate all of filter in-datastore : null
    I didn't manage to find the reason in the docs or on the forum.
    Is this possible or not in the case of HBase?
    Andy Jefferson
    The reason would be in the LOG. Clearly HBase itself does not do "JOIN"s, which is one of the reasons why JPA is not recommended for non-RDBMS datastores.
    OK thanks, so I will try with JDO.
    Ivan D. Herazo E.
    Hi all.
    I'm new to the DataNucleus API. Does anyone here have a code example of statement batching at hand? (I know Hibernate supports that functionality.) According to the DataNucleus documentation (http://www.datanucleus.org/products/accessplatform_4_1/datastores/rdbms_statement_batching.html), it also supports statement batching, but I can't find a code example.
    Please help. Thanks in advance.
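    For anyone landing here later: per the linked docs page, RDBMS statement batching in DataNucleus is enabled through a persistence property rather than an explicit API call, so something like the following in persistence.xml should be all that's needed (the limit value of 50 is illustrative; consecutive identical inserts/deletes in a flush are then batched up to that limit):

    ```xml
    <property name="datanucleus.rdbms.statementBatchLimit" value="50"/>
    ```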