    aley2003
    @aley2003

    Hi, I have a problem with a NullPointerException at org.datanucleus.ExecutionContextImpl.performDetachAllOnTxnEndPreparation(ExecutionContextImpl.java:4402).
    It occurs randomly if I set the L1 cache type to WEAK.
    There seems to be no problem if the L1 cache type is SOFT.

    I use datanucleus-api-jdo 5.1.9 with datanucleus-core 5.1.12 and datanucleus-rdbms 5.1.11.

    My relevant properties are

    • datanucleus.cache.level1.type=WEAK,
    • datanucleus.cache.level2.type=none,
    • javax.jdo.option.Optimistic=true,
    • javax.jdo.option.Multithreaded=true,
    • datanucleus.rdbms.fetchUnloadedAutomatically=false,
    • datanucleus.persistenceByReachabilityAtCommit=false,
    • datanucleus.query.flushBeforeExecution=true,
    • datanucleus.DetachOnClose=false,
    • datanucleus.attachSameDatastore=true,
    • javax.jdo.option.DetachAllOnCommit=true,
    • javax.jdo.option.CopyOnAttach=true,
    • javax.jdo.option.RetainValues=true,
    • javax.jdo.option.NontransactionalRead=false

    The stack trace is:
    java.lang.NullPointerException
    at org.datanucleus.ExecutionContextImpl.performDetachAllOnTxnEndPreparation(ExecutionContextImpl.java:4402)
    at org.datanucleus.ExecutionContextImpl.preCommit(ExecutionContextImpl.java:4208)
    at org.datanucleus.ExecutionContextThreadedImpl.preCommit(ExecutionContextThreadedImpl.java:546)
    at org.datanucleus.ExecutionContextImpl.transactionPreCommit(ExecutionContextImpl.java:728)
    at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:397)
    at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:287)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
    ...

    Before the NPE occurs there are ">> EC.close L1Cache op IS NULL!" and ">> EC.preCommit L1Cache op IS NULL!" messages in the log.

    My guess is that the garbage collector removes the objects from the weak cache and the ObjectProvider becomes null.
    I tried calling System.gc() before commit(), but that only triggers the NPE sometimes, not always.

    What can I do to find out what's going on?
    Any idea how to fix the NPE, or how to avoid null values in the L1 cache?
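
    For reference, a minimal sketch (not taken from the application above) of how these properties might be set when bootstrapping the PMF programmatically; switching datanucleus.cache.level1.type to SOFT is the workaround observed above:

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManagerFactory;

    public class PmfBootstrap {
        public static PersistenceManagerFactory createPmf() {
            Properties props = new Properties();
            // DataNucleus JDO implementation, as used in the messages above
            props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                    "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
            // SOFT avoids the randomly occurring NPE reported with WEAK
            props.setProperty("datanucleus.cache.level1.type", "SOFT");
            props.setProperty("datanucleus.cache.level2.type", "none");
            props.setProperty("javax.jdo.option.DetachAllOnCommit", "true");
            props.setProperty("javax.jdo.option.Optimistic", "true");
            // connection URL, driver, user and password omitted here
            return JDOHelper.getPersistenceManagerFactory(props);
        }
    }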

    bpair
    @bpair
    Newbie question - Is there a way to have the REST API return undefined attributes from Mongo? So basically returning the whole document. (Using JDO)
    Andy Jefferson
    @andyjefferson
    @bpair The REST API returns only what is mapped and defined in the docs ... if something is "undefined" then how could it ever?! If you want something other than that, get the code and see how the JDO API can be used to achieve what you want in a database-neutral way.
    bpair
    @bpair
    @andyjefferson I understand that, but since Mongo accepts undefined JSON attributes, the likelihood that over time there are more JSON attributes in Mongo than what I have defined for DataNucleus seems high. I was hoping there was a way to get the whole JSON document. The concept of returning "undefined" data in a REST API happens all the time in Node and other non-typed languages. The Jackson library parses the whole JSON document as well before picking the attributes defined for DataNucleus. The parallel for an RDBMS would be performing a SELECT * and inspecting the column attributes for type information. Databases like Elasticsearch take unstructured JSON and index every attribute, inferring the type of the data as string, int, float, or boolean. As I said, I am new to DataNucleus and was wondering if this had been implemented.
    Maciej Wegorkiewicz
    @wegorkie_gitlab
    Hello. In one of my RDBMS tables (H2 database) I need fast querying for a row at a specific position when ordered by two columns (COL1, COL2). The query is something like:
    SELECT id FROM table
    WHERE col1 <= something
    ORDER BY col1 DESC,col2 DESC
    It is efficient provided that I define an index:
    CREATE INDEX iname ON table(col1 DESC,col2 DESC)
    Is it possible to define such an index in JDO mapping schema?
    All I could do was to define:
    Maciej Wegorkiewicz
    @wegorkie_gitlab
    <index name="iname">
    <field name="col1"/>
    <field name="col2"/>
    </index>
    However, without DESC ordering it is not as effective.
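    For comparison, a sketch of the same index in annotation form (field names and types are illustrative only); as the question notes, this metadata carries no ASC/DESC ordering attribute, which is exactly the limitation described:

    import javax.jdo.annotations.Index;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.PrimaryKey;

    // Annotation equivalent of the <index> XML above.
    @PersistenceCapable
    @Index(name = "iname", members = {"col1", "col2"})
    public class MyTable {
        @PrimaryKey
        private long id;
        private long col1;   // hypothetical types, for illustration only
        private long col2;
    }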
    Andy Jefferson
    @andyjefferson
    Brad Beck
    @bradbeck
    Trying to get a JPA example working which uses Flyway for db creation and test data insertion. Flyway appears to create the schema ok, but DataNucleus doesn't seem to find it. Do I need to set some additional properties in my persistence.xml?
    [main] INFO org.flywaydb.core.internal.command.DbValidate - Successfully validated 2 migrations (execution time 00:00.013s)
    [main] INFO org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory - Creating Schema History table: "public"."flyway_schema_history"
    [main] INFO org.flywaydb.core.internal.command.DbMigrate - Current version of schema "public": << Empty Schema >>
    [main] INFO org.flywaydb.core.internal.command.DbMigrate - Migrating schema "public" to version 1 - create database
    [main] INFO org.flywaydb.core.internal.command.DbMigrate - Migrating schema "public" to version 2 - insert test data
    [main] INFO org.flywaydb.core.internal.command.DbMigrate - Successfully applied 2 migrations to schema "public" (execution time 00:00.031s)
    Mar 04, 2019 2:55:39 PM org.datanucleus.store.rdbms.query.JPQLQuery compileQueryFull
    WARNING: Query for candidates of com.example.datanucleus.entity.Bucket and subclasses resulted in no possible candidates : Required table missing : ""BUCKET"" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
    Brad Beck
    @bradbeck
    NVM, answered my own question. Needed to set the following property:
          <property name="datanucleus.identifier.case" value="LowerCase"/>
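    If you prefer not to edit persistence.xml, the same setting can also be passed as an override map when creating the EntityManagerFactory; a minimal sketch, assuming a persistence unit named "myUnit" (hypothetical name):

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class EmfBootstrap {
        public static EntityManagerFactory createEmf() {
            Map<String, String> overrides = new HashMap<>();
            // lower-case identifiers so DataNucleus matches the tables created by Flyway
            overrides.put("datanucleus.identifier.case", "LowerCase");
            return Persistence.createEntityManagerFactory("myUnit", overrides);
        }
    }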
    Ivan
    @advancedwebdeveloper
    hello
    can anyone tell me about options for commercial support?
    Are there any success stories that could be presented with a customer?
    Ivan
    @advancedwebdeveloper
    @andyjefferson, would anyone from your team like to speak on March 29 or March 30?
    Ivan
    @advancedwebdeveloper
    I also wonder if your company might speak about doing business in commercial support of software libraries - there is http://thinkstage.com.ua/en/
    Andy Jefferson
    @andyjefferson
    @advancedwebdeveloper Thanks for the invite. I don't do conferences and speaking, and prefer to concentrate on the software itself. Hope both go well.
    aley2003
    @aley2003
    There was a commit in datanucleus-api-jdo on 23 Nov 2018 which "adds hashCode for completeness" to JDOPersistenceManagerFactory. The added hashCode() does not obey the rules for hashCode() and equals(), because equals() checks for identity while hashCode() delivers the properties-based hashCode of Configuration. The Spring framework uses JDOPersistenceManagerFactory as a key for a Map. This is broken with the new JDOPersistenceManagerFactory, and Spring throws an IllegalStateException.
    I think the new JDOPersistenceManagerFactory.hashCode() and also the equals() method should be removed completely.
    Andy Jefferson
    @andyjefferson
    Deleting equals and hashCode isn't going to happen. If you want to CONTRIBUTE an implementation of such methods then you are, of course, free to do so.
    aley2003
    @aley2003
    JDOPersistenceManagerFactory.equals() does the same as Object.equals(): it checks for identity. So hashCode() has to deliver identical hash codes for identical objects, which is also what Object.hashCode() implements. I see two solutions: remove both methods from JDOPersistenceManagerFactory, or copy both implementations from Object to JDOPersistenceManagerFactory. I have tested the first solution and it works.
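    For illustration, a minimal sketch (not the actual JDOPersistenceManagerFactory code) of an equals()/hashCode() pair that keeps the contract described above, identity-based equality paired with an identity-based hash code:

    public class IdentityKeyed {
        @Override
        public boolean equals(Object other) {
            return this == other;                  // identity comparison
        }

        @Override
        public int hashCode() {
            // consistent with identity-based equals(); same behaviour as Object.hashCode()
            return System.identityHashCode(this);
        }
    }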
    Dan Haywood
    @danhaywood
    @andyjefferson - I think that datanucleus.org is down
    Dan Haywood
    @danhaywood
    back up now
    ah, no, that was a page in my browser's cache
    Andy Jefferson
    @andyjefferson
    @danhaywood Should be back now.
    RCG
    @rebecacortazar
    Hi, I am new to this and this is a silly question, but do DataNucleus libraries support Java 12? If not, any idea of the expected release date? Any help will be very much welcome, thank you.
    Andy Jefferson
    @andyjefferson
    @rebecacortazar Not a silly question. Simple answer is I've no idea since I'm using Java 8 (what I develop DataNucleus with). The codebase certainly includes ASM v7 and so supports Java 11 bytecode (see datanucleus/datanucleus-core#314 ). Whether that is enough for Java 12 would be for someone to actually try.
    RCG
    @rebecacortazar
    Hi there! Thanks for being so swift. I have the answer: no, it does not work, Java 12 is not supported.
    Andy Jefferson
    @andyjefferson
    @rebecacortazar, Perhaps you can provide a more informative statement than 'it does not work', like what error you get doing what. Better still, get the code and contribute whatever update is needed to get it to work (e.g. use the latest ASM? See the repackaged ASM in datanucleus-core).
    Norbert Bartels
    @nbartels
    hi all, I'm migrating to DN 5.1.2 atm and found a strange problem. We have objects in the DB with a java.util.Date field. In older versions (we are coming from 4.1) the field is populated with a plain java.util.Date object, and in 5.1.2 it is an org.datanucleus.store.types.wrappers.Date object. It looks like getValue is not used on the wrapper somewhere in the code. Other objects (at least the ones we use) are okay so far. Is this change intended, and do we have to adapt to it?
    Andy Jefferson
    @andyjefferson
    A database holds a DATE or DATETIME column (assuming you have an RDBMS database). You haven't defined WHERE you get these Java types from (what operation etc). What is "intended" is what is in the test suites. There is no way of knowing whether your situation is covered by the test suites, but that's why people have been advised for 16 years to contribute. Oh, and v5.2 is the latest, so don't get the idea of moving to 5.1.
    RCG
    @rebecacortazar
    @andyjefferson Sorry for my late response, and my apologies. I cannot replicate what I did, but I remember that the problem was related to a class file version mismatch (version 56). In any case, my code is running perfectly well on Java 12. Sorry for the inconvenience.
    Norbert Bartels
    @nbartels
    @andyjefferson you can simply check the test here https://github.com/nbartels/test-jdo/tree/date-test it should explain everything
    Andy Jefferson
    @andyjefferson
    @nbartels You mean to say "I access a Date field of a persistable object WITHIN A TRANSACTION after retrieving it via getObjectById, and it is of a wrapper type". Yes, how else would a JDO provider be able to intercept mutating methods on that field if it didn't replace it with a wrapper that extends the DEFINED type? What it did in 4.x I've little interest in. If you don't intend to mutate that field, then choose an SCO type that doesn't need a "proxy", as per http://www.datanucleus.org:15080/products/accessplatform_5_2/jdo/mapping.html#_temporal_types_java_util_java_sql_java_time_jodatime
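    As an aside, since the wrapper extends the declared type, a defensive copy yields a plain java.util.Date when one is really needed; a small sketch (the helper and its use are illustrative only, not from the test case above):

    import java.util.Date;

    public class DateUnwrapExample {
        // Inside a transaction the field may hold an
        // org.datanucleus.store.types.wrappers.Date; copying via getTime()
        // produces a plain java.util.Date without mutating the managed field.
        static Date toPlainDate(Date possiblyWrapped) {
            return possiblyWrapped == null ? null : new Date(possiblyWrapped.getTime());
        }
    }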
    Alex Ilyin
    @engilyin

    @andyjefferson are there any plans to integrate DataNucleus with Micronaut Data (https://github.com/micronaut-projects/micronaut-data), also known as Predator? You can read about it at https://objectcomputing.com/news/2019/07/18/unleashing-predator-precomputed-data-repositories

    Micronaut is a good new alternative to Spring Boot, the de facto standard in the current Java world. Micronaut is similar to Spring, but it completely eliminates reflection, runtime proxies, and dynamic classloading, so you get significant performance gains, a small footprint, and fast startup.
    It is especially interesting with GraalVM Substrate native images for building efficient native cloud microservices and even FaaS.

    Currently they support just JDBC and JPA based on Hibernate. And of course, due to the proxy nature of Hibernate, they cannot get all the benefits of AOT on their platform when using JPA.

    I believe this is an exciting opportunity for DataNucleus, which uses compile-time code enhancement instead of proxies and the reflection API, to become the primary JPA solution for Micronaut.

    Andy Jefferson
    @andyjefferson
    @engilyin No plans. Not something I'm interested in. Besides, any work for "integrating" would go on their side, not ours. Same would apply for "integrating" with Spring Data. DN provides / supports published APIs so it would be for third party software (Spring, Micronaut, etc) to use those. If they find that they need access to other DataNucleus info then they can request addition of DN internal APIs and I could add those. But this would be down to somebody who actually wants that integration to do the work.
    Alex Ilyin
    @engilyin
    Thank you Andy!
    Thirumurthi S
    @thirumurthis
    Hi All,
    I am working on an application which uses JDO and DataNucleus to query an Oracle database. The application was developed using DataNucleus 2.1.0 and now I am using 5.2.1; the application fetches the data in most cases.
    There is one specific mapping for which I am seeing "Exception in thread "main" java.lang.NoSuchMethodError: com.package.dao.car.dnSetValue", where the Java/POJO mapping has a package.jdo defined for a table and the corresponding Java class extends
    another class which defines the field name. The enhancer executed successfully. https://stackoverflow.com/questions/58273407/jdo-table-mapping-to-pojo-with-no-annotation-and-extending-a-class-for-proper
    Andy Jefferson
    @andyjefferson
    @thirumurthis If the enhancer executed successfully then you can easily decompile the classes and inspect them for methods etc, as per http://www.datanucleus.org:15080/products/accessplatform_5_2/jdo/enhancer.html#_decompilation This would reveal whether it did indeed ... as would the log, which tells you about the addition of methods and fields to classes during the enhancement process.
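    Complementing the decompile approach, a quick runtime check is also possible; a sketch (assuming the enhanced class from the error above is on the classpath) that lists the dn-prefixed methods the enhancer should have added:

    import java.lang.reflect.Method;

    public class EnhancementCheck {
        public static void main(String[] args) throws Exception {
            // class name taken from the NoSuchMethodError above
            Class<?> cls = Class.forName("com.package.dao.car");
            for (Method m : cls.getDeclaredMethods()) {
                if (m.getName().startsWith("dn")) {   // methods added by the enhancer
                    System.out.println(m);
                }
            }
        }
    }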
    Thirumurthi S
    @thirumurthis

    @andyjefferson I decompiled the class with 'javap -c' and I could see the getValue and setValue methods.
    I also noticed the comments for dnGet/dnSet as well:
    30: invokevirtual #408 // Method dnGetValue:()Ljava/lang/String;
    33: invokevirtual #390 // Method dnSetValue:(Ljava/lang/String;)V

    I debugged into AbstractClassMetaData and noticed that getManagedMembers() returns 2 FieldMetaData and 1 PropertyMetaData (corresponding to the field tagged with the <property> tag in the jdo file) for this case.
    Could there be any difference between using fields and properties in the JDO mapping with respect to the latest version of DataNucleus?

    Andy Jefferson
    @andyjefferson
    @thirumurthis I personally use just fields, always. Others use just properties. We have some tests with mixed, and all work (though nobody has ever convinced me of the point of doing that). What happened in some ancient version is of little interest because nobody here will look at old versions; far better to concentrate your time looking at WHY. The only time I'd see any possible problem is when a getter or a setter is missing ... i.e. only one specified at some inheritance level. A runtime NoSuchMethodError together with the decompiled classes should be enough to tell you which class is missing what.
    Andy Jefferson
    @andyjefferson
    Or maybe you have this situation ... datanucleus/datanucleus-core#257 If so, you just provide the other method and relay it to the superclass (or contribute a fix to the DN code ... as the issue says).
    Thirumurthi S
    @thirumurthis
    I think I found the root cause of my issue: the older DataNucleus enhancer expects the getter/setter as jdoGetXXX/jdoSetXXX, whereas in the newer version they should be dnGetXXX/dnSetXXX. The decompiler input helped since I compared the older and newer classes locally. It is now working in my case. @andyjefferson I appreciate your help/support/time. Thank you.