I have no error, but I see that DataNucleus tries to call hbase-master
I have seen some mentions of hbase-site.xml, but I'm not sure whether I also need this file on the client side, and if so, where to put it. To my knowledge, this file must be set on the server side
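For illustration, something like this (placeholder host/port values, put under the client's classpath, e.g. `src/main/resources`) is what I mean by a client-side hbase-site.xml:

```xml
<!-- hbase-site.xml sketch for the CLIENT classpath; host/port are placeholders -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value> <!-- placeholder: your ZooKeeper host(s) -->
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value> <!-- placeholder: your ZooKeeper client port -->
  </property>
</configuration>
```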
thanks for your answer
Andy Jefferson
@andyjefferson
datanucleus-core.jar is in the original tutorial so no idea why you took it out in the first place.
and the persistence.xml in the tutorial is adequate to demonstrate it.
mferlay
@mferlay
For datanucleus-core.jar, it was a misunderstanding on my part; I have re-added it, and I found an example in your GitHub which helped me solve my issue. For the second point, I'm not sure: I have run the mvn enhance, which builds with success, and when I try the exec I get the following stack in Eclipse:
```
Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x2e772ffc0x0, quorum=localhost:2181, baseZNode=/hbase
zookeeper.disableAutoWatchReset is false
Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, initiating session
Session establishment request sent on 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181
Session establishment complete on server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2181, sessionid = 0x15b7fff4d70015e, negotiated timeout = 40000
hconnection-0x2e772ffc0x0, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
hconnection-0x2e772ffc-0x15b7fff4d70015e connected
Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,15024,0 request:: '/hbase/hbaseid,F response:: s{15,14910,1492095748876,1492819063769,5,0,0,0,67,0,15}
Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 2,4 replyHeader:: 2,15024,0 request:: '/hbase/hbaseid,F response:: #ffffffff000146d61737465723a3136303030312171ffffff9effffffe07a264250425546a2437396438303036332d616336372d343431362d396462372d393262613333313932393666,s{15,14910,1492095748876,1492819063769,5,0,0,0,67,0,15}
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2785bf1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 3,3 replyHeader:: 3,15024,0 request:: '/hbase,F response:: s{2,2,1492095741141,1492095741141,0,36,0,0,0,16,14931}
Reading reply sessionid:0x15b7fff4d70015e, packet:: clientPath:null serverPath:null finished:false header:: 4,4 replyHeader:: 4,15024,0 request:: '/hbase/master,F response:: #ffffffff000146d61737465723a3136303030484241ffffffd0ffffffe32dfffffffdffffffc150425546a1aae686d61737465722d312e766e657410ffffff807d18ffffffeaffffffcaffffffccffffff97ffffffb92b10018ffffff8a7d,s{14906,14906,1492819062622,1492819062622,0,0,0,97812551411171664,62,0,14906}
Use SIMPLE authentication for service MasterService, sasl=false
Connecting to hmaster-1.vnet/172.18.0.5:16000
```
At the end I have seen the HMaster, but maybe it is ZooKeeper which calls it and returns this stack
In any case, I don't manage to communicate with HBase, and I think it's due to my configuration.
OK, for the moment I have tried with the JPA tutorial; I'll test with the JDO tutorial
thanks for your help
mferlay
@mferlay
Hi Andy, I finally found my issue. My Docker image of HBase changed its ports, so ZooKeeper failed to connect to the hbase-master; after fixing that, it was OK
now, with the tutorial working well, I try to do a join request between the Inventory and Product tables, like this:
SELECT I.name, P.name FROM Inventory I INNER JOIN I.products P WHERE P.price > 150.00
I have checked that the tables are not empty and contain the right data
my issue is that no data is returned, and I can see in the stack trace the following explanation: Impossible to evaluate all of filter in-datastore : null
I didn't manage to find the reason in the docs or the forum
Is it possible to do this or not in the case of HBase?
Andy Jefferson
@andyjefferson
The reason would be in the LOG. Clearly HBase itself does not do JOINs, which is one of the reasons why JPA is not recommended for non-RDBMS datastores
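Since the datastore can't evaluate the JOIN, the usual workaround is to fetch the candidate objects and apply the filter in memory. A minimal plain-Java sketch of what that JPQL query does, using hypothetical Inventory/Product classes modelled on the tutorial:

```java
import java.util.ArrayList;
import java.util.List;

public class ClientSideJoinSketch {
    // Hypothetical minimal model mirroring the tutorial's Inventory/Product
    record Product(String name, double price) {}
    record Inventory(String name, List<Product> products) {}

    // In-memory equivalent of:
    //   SELECT I.name, P.name FROM Inventory I INNER JOIN I.products P
    //   WHERE P.price > 150.00
    static List<String> expensiveProductPairs(List<Inventory> inventories) {
        List<String> rows = new ArrayList<>();
        for (Inventory inv : inventories) {
            for (Product p : inv.products()) {
                if (p.price() > 150.00) {
                    rows.add(inv.name() + " : " + p.name());
                }
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Inventory> data = List.of(
            new Inventory("Main Inventory", List.of(
                new Product("Laptop", 1250.00),
                new Product("Mouse", 25.00))));
        // prints [Main Inventory : Laptop]
        System.out.println(expensiveProductPairs(data));
    }
}
```

In practice the candidate set would come from a simple per-class query against HBase; the nested loop then replaces the JOIN the datastore can't perform.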
mferlay
@mferlay
ok thanks so i will try with JDO
Ivan D. Herazo E.
@idherazo_twitter
Hi all. I'm new to the DataNucleus API. Does anyone here have at hand a code example of statement batching? (I know Hibernate has support for that functionality.) According to the DataNucleus documentation (http://www.datanucleus.org/products/accessplatform_4_1/datastores/rdbms_statement_batching.html), it also supports statement batching, but I can't find a code example. Please help. Thanks in advance.
Andy Jefferson
@andyjefferson
There is no possible 'code example', because statement batching is what is performed in communication with the RDBMS. It sees that there are two batchable SQL statements following each other to be executed, so it batches them ... just like Hibernate does.
Ivan D. Herazo E.
@idherazo_twitter
Let me see if I understand. Say, for example, I have a "Person" entity with 4 fields: id, name, lastName and email. If I execute the "EntityManager.persist()" JPA operation multiple times with different instances of "Person", then DataNucleus performs a batch when I invoke "EntityManager.flush()". Am I correct?
Andy Jefferson
@andyjefferson
If you have SEVERAL Person objects to persist one after another, and "em.persist" is called and the inserts are queued, then when it comes time to flush changes to the datastore, IF there are multiple INSERT statements of the same structure following each other, then they will be batched (when that persistence property is set, from the link) ... yes. IF, on the other hand, there are other SQL statements in between those INSERT statements (e.g. due to needing to persist related objects), then they won't be batched. The best way to understand is to actually try it and look at the SQL statements in the log ... it tells you when batching happens
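For reference, the persistence property in question is, per that doc page, `datanucleus.rdbms.statementBatchLimit`; set it in persistence.xml (the value here is just an example limit):

```xml
<!-- persistence.xml sketch: cap on how many statements go in one batch (RDBMS only) -->
<property name="datanucleus.rdbms.statementBatchLimit" value="50"/>
```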