DataNucleus Development Guide

DataNucleus is an Open Source development. It is written by many people, yet has a broad scope and so is in need of help. It is released under an Apache 2 license, so you are welcome to develop it further. Since the software is free, you are benefiting from this work, and you have a duty to contribute to the project, particularly if/when you have problems. It should be noted that many people use open source software without the slightest intention of contributing anything back to it; that’s ok, but if everyone adopted that attitude then things would not get improved, and additionally those people are not in a position to request/expect any improvements that they would like to see (without putting something back in).

You can help the project in many ways. Here are some examples

  • Answer questions on Groups.IO or Gitter. New users need to learn somewhere just like you did when you first used DataNucleus. You could monitor Groups.IO when you have free time and attempt to answer people’s questions. People will appreciate it even if you don’t know all of the answers.

  • Debug problems in the code before reporting them on Groups.IO. You can isolate the issue to a particular area of code even if you don’t know the solution, and post your findings on Groups.IO for one of the developers to pick up and progress further. Better still, contribute updates so that we save time in working out where the problem lies; these will be much more valuable to us. Since we use Git for our source code repositories, the best way is to FORK/CLONE the particular GitHub project and submit a GitHub PULL REQUEST with your changes. Please see the DataNucleus coding standards.

  • Volunteer to be part of the DataNucleus project. To do this you need to start off by contributing patches for issues, demonstrating your commitment to the project. We will then invite you to join the project. Thereafter you can push DataNucleus further yourself when you have time. Be aware though that once you have been given commit rights, inactivity for a period of time (e.g a year) can mean that you lose these rights. Being a committer implies responsibility. You could volunteer to work on a particular part of DataNucleus (e.g RDBMS) or all of it; up to you.

  • Donate to the project. This will help to fund the time of one of the main developers and will mean that DataNucleus functionality will be expanded faster due to your help.

  • Contribute documentation or worked examples that can be included in the online documentation. If you think there is something missing from our documents then you can write it and contribute it and others will benefit from your work just like you have benefited from our DataNucleus product. The easiest way to contribute is to FORK/CLONE the appropriate GitHub project and then raise a GitHub PULL REQUEST and we will process it asap. You can also contribute via either Groups.IO or Gitter or by raising an issue against the DataNucleus plugin in question.

Note that involvement in an open source project can significantly improve your employability, since potential employers can see the quality of your code, so this is an opportunity to do something positive. Note also if you contribute to the system in some way then we are more likely to answer your questions, or listen to your ideas, so it’s in your interests to participate. We hope that the above has given you some idea how your time can be used to benefit the common goal of having a quality Java persistence solution free for all to use.

1. DataNucleus Development Process

DataNucleus uses Test Driven Development (TDD). It has many test suites available and all should be run to provide stability in the codebase.

When you have DataNucleus commit rights the development process should be as below. Please abide by these simple rules

  • Identify an issue to work on for a particular DataNucleus plugin. Raise an issue if it doesn’t yet exist, and allocate to yourself.

  • Develop code, unit tests (as appropriate), and documentation for the issue. DataNucleus is developed using Java 8+.

  • Run all DataNucleus tests and the (public) JDO TCK, and when all pass to the same level as before you can check your code into GitHub. Broken unit tests or JDO TCK tests must be fixed ASAP. Others are using GitHub latest too, and if you break either the build or the tests then they often cannot work effectively. Breakage of unit tests or JDO TCK tests means that your changes can be rolled back.

  • Issues that involve many changes should be split, where appropriate, into smaller steps so that you can still pass point 3 above (all tests passing) with each check-in.

  • Changes that are significant and cannot be split into smaller check-ins (that pass the tests) should be checked in to your own GitHub branch and, when complete, merged into GitHub 'master'. If help is needed at this point then other developers should help in merging large changes.

  • All check-ins should refer to the issue being worked on in their commit message ("#34" contained in the commit message links the commit to issue #34 of that plugin).

  • Mark the issue as "Resolved" for the next release.
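
The commit-message rule above can be illustrated in a throwaway repository; the file name and issue number (#34) are purely illustrative:

```shell
# Illustrative only: commit in a scratch repository, referencing issue #34
# in the commit message so GitHub links the commit to that issue.
cd "$(mktemp -d)"
git init -q .
echo "fix" > SomeClass.java
git add SomeClass.java
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "#34 Fix NPE when metadata is missing"
git log -1 --format=%s
```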

1.1. Mailing Lists

DataNucleus has no mailing lists as such, but each plugin is developed in GitHub and you can subscribe to notifications about a particular plugin.

1.2. Issue Tracking

The DataNucleus project uses GitHub issues for all issue tracking from April 2016 onwards. Please navigate to the source code of the plugin in question and use the Issues tab to raise/inspect issues for that plugin.

DataNucleus used JIRA for issue tracking from 2004 until April 2016.

If you think some fix that is in a released version is now a problem again, you should open a new issue and refer to the old one, rather than reopening the original issue. If you think some fix that is not yet in a released version is still a problem, you should reopen the original issue.

1.3. Metrics

DataNucleus originated as JPOX in 2003, becoming DataNucleus in 2008, and has been open source since the outset; consequently many people have contributed to its development over the years. You can get a measure of the codebase by looking at the DataNucleus page on OpenHub. For automated code quality tool reviews, you can see the results on LGTM.com.

1.4. Development Team

The management and direction of DataNucleus is provided by a small team of experienced individuals.

  • Andy Jefferson : Founder of the DataNucleus project. Has worked on the project since its initiation as JPOX in 2003, and has worked on all plugins. Initiated the support for db4o, ODF, MongoDB, NeoDatis, Neo4j, OOXML, Cassandra, javax.cache, RDBMS connection pools, and much more. Also contributed to version 2 of the plugin for GAE/Datastore, subject to the constraints that Google put on it.

  • Renato Garcia : worked on the migration to GitHub, test suite flexibility, auto-generation of MANIFESTs, Jenkins/Sonar automation, and some more. Currently inactive

  • Erik Bengtson : Co-founder of the DataNucleus project. Worked on the project since its initiation as JPOX in 2003, until around 2011. Initiated support for Excel, XML, JSON and HBase. Currently inactive

  • Baris Ergun : Took over responsibility for the geospatial plugin. Currently inactive

  • Stefan Seelmann : Wrote the vast majority of the LDAP plugin. Currently inactive

  • Thomas Marti : Wrote part of the spatial plugin (now renamed to geospatial). Currently inactive

  • Stefan Schmid : Wrote part of the spatial plugin (now renamed to geospatial). Currently inactive

  • Joerg von Frantzius : worked on the test suite, and RDBMS areas. Currently inactive

In addition to those above, other people have contributed to varying degrees in the development of this software or documentation.

DataNucleus tackles a wide range of problem areas. Are you interested in contributing to the DataNucleus project and want to get involved? The best way is to become part of the DataNucleus community and start by contributing patches. From that you can gain commit rights. Where you take the project from there is up to you.

2. Source Code : GitHub Repositories

GitHub

DataNucleus source code is hosted on GitHub and uses the Git (distributed) source code version control system.

DataNucleus source code for versions up to and including 3.3.5 was hosted by SourceForge. You can still get the code for those earlier versions by searching on SourceForge, but it is not maintained or supported by us.

You can check out from GitHub using the following.

# Using SSH
git clone git@github.com:datanucleus/{repository-name}.git

# Using HTTPS
git clone https://github.com/datanucleus/{repository-name}.git

Obviously, not everyone will want to check out all DataNucleus project repositories, so use this command for the particular repositories that you require. GitHub repositories are all browsable via the web, for example the DataNucleus Neo4j datastore plugin. Note that all plugin repositories are Maven projects, so you need to understand how to build with Maven to build these plugins.

DataNucleus can be easily developed using Maven, Eclipse (plus the m2e plugin), or other IDEs (let us know if you write docs for how to develop DataNucleus with a different IDE). You require Java 8+, a Git client (to download/commit DataNucleus Git-based projects) and an editor.

If developing the DataNucleus Eclipse plugin you will need to install the Eclipse PDE addon.

All code in v6+ must be compilable using Java 11 currently, whilst code in v5 must be compilable with Java 8.

2.1. GitHub : Plugins

Within the DataNucleus project over on GitHub you have various repositories providing actual DataNucleus plugins. These are currently

All plugins are independently versioned ("master" is the latest branch). This is because they have their own lifecycle, and plugins are bundled together into the "products" (e.g AccessPlatform). So we could have AccessPlatform version 1.1 using version X of a plugin, and AccessPlatform version 2.0 using version Y of that plugin because it needs some new functionality.

2.1.1. Building with Maven

All DataNucleus plugins are Maven projects, with a pom.xml. To build and install the plugin using Maven simply type

mvn clean install

and the plugin is built and installed in your local Maven repository; how simple can it get? If you are developing some feature that requires updates to, for example, core (datanucleus-core), an API (e.g datanucleus-api-jdo) and a datastore (e.g datanucleus-rdbms), then you will need to build them in that order: core first, then the API, then the datastore.
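
As a sketch of that build order, a simple loop over the repositories in dependency order could look like the following. The directory layout is an assumption, so this only prints the plan, with the actual mvn invocation left commented out:

```shell
# Build dependent plugins in dependency order: core first, then API, then datastore.
# Directory names match the GitHub repository names; the checkout layout is an assumption.
for repo in datanucleus-core datanucleus-api-jdo datanucleus-rdbms
do
    echo "Building $repo"
    # (cd "$repo" && mvn clean install)    # run this line in a real checkout
done
```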

2.1.2. Building with Eclipse

When building/developing using Eclipse the first thing you need to do is install the Eclipse m2e plugin (if not already done). This means that the build of any plugin will build using Maven (and get its dependencies from Maven). You then need to import all DataNucleus projects you are working on. Since each plugin in Eclipse will build using Maven you don’t need to have all dependent projects present too, just the ones you’re working on.

2.2. GitHub : Tests

In order to test DataNucleus capabilities we have many end-to-end tests. In the GitHub DataNucleus project these are available in the repository tests. Below that you have some framework projects that need building first, then there are tests split by the persistence API they are for (JDO, JPA, REST, etc).

2.3. GitHub : Tools

The GitHub DataNucleus project also provides some tools to help in using DataNucleus. The repositories providing tools are

Like with all plugins, the tools are independently versioned since they have their own lifecycle.

2.4. GitHub : Documentation

The GitHub DataNucleus project also provides the documentation for DataNucleus.

2.4.1. Documentation : datanucleus.org Project Site

DataNucleus has a main site, datanucleus.org, for the overall project and the commercial services that we offer. This site uses AsciiDoc format documents, and uses the Maven AsciiDoctor plugin to generate HTML. The documents make use of the Bootstrap CSS/Javascript "library" (originally from Twitter), as well as Font-Awesome and some others. The documentation is in GitHub in the repository docs-datanucleus. You can build the site by typing

mvn clean compile

The site is then available under target/site. The documentation is also generated every night from what is in GitHub, and this appears on the website.

2.4.2. Documentation : AccessPlatform Product Site

The DataNucleus AccessPlatform (JDO, JPA, REST) docs use AsciiDoc format documents, and use the Maven AsciiDoctor plugin to generate HTML. The documents make use of the Bootstrap CSS/Javascript "library" (originally from Twitter), as well as Font-Awesome and some others. The documentation is in GitHub in the repository docs-accessplatform. You can build the site by typing

mvn clean compile

The site is then available under target/site. The documentation is also generated every night from what is in GitHub, and this appears on the website.

2.5. Branching

As a general rule, we branch when we’re at a point of changing internal/external DataNucleus APIs of a plugin. So, for example, we are developing datanucleus-core and want to change some API. The process is as follows

  • Create branch with name "{version}" where the version is the version that is stored in that branch (so the current version).

  • Update pom.xml of master branch to the next version number (so if we have just created a branch for 4.2 then this will be 4.3.0-SNAPSHOT).

If you are developing a new feature then you can optionally develop it in its own branch, so choose a branch name that is not a version (and so doesn’t conflict with the naming convention above). When you have fully implemented the feature, merged it into master, and fully tested it, you should delete the feature branch.
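
The branch-then-bump steps above might look like this in practice, shown here in a scratch repository (the version numbers are illustrative, and the pom.xml change is left as a comment):

```shell
# Illustrative only: freeze the current version on its own branch,
# leaving master free to move on to the next version.
cd "$(mktemp -d)"
git init -q .
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "4.2 development"
git branch "4.2"    # the "4.2" branch now holds the 4.2 line
# master's pom.xml would then be bumped to 4.3.0-SNAPSHOT
git branch --list
```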

2.6. Tagging

Tagging is performed when we release a plugin, and the Maven release plugin automatically creates tags with names of {artifact_id}_{version} (e.g datanucleus-core-4.2.11). Developers shouldn’t have any need for adding other tags.

2.7. Versioning

There are many different versioning systems in use by software. All typically use major.minor.revision (occasionally with finer grain below that). Some use an alpha or beta suffix to reflect how close the software is to full release. Some start versioning at, say, 1.0.0 and go up from that until some release (e.g 1.0.4) is considered the full release. DataNucleus uses the following versioning policy

  • When we start a release lifecycle for a product it starts typically at Milestone 1 (e.g 1.0.0.M1). If we are developing a new plugin that will be used from DataNucleus v5.0 then we will start at 5.0 Milestone 1 (5.0.0.M1) for the new plugin.

  • Subsequent releases increment the milestone, e.g 1.0.0.M2, 1.0.0.M3 etc

  • When we feel we are close to a full release we can optionally have a Release Candidate (e.g 1.0.0.RC1)

  • We have as many release candidates as necessary to get everything feature complete

  • The full release is Release (e.g 1.0.0.Release)

  • Any subsequent releases (after the full release) in that lifecycle increment the revision number, e.g 1.0.1, 1.0.2 etc

The use of "Milestone" rather than "alpha" or "beta" is because all DataNucleus releases are run against all unit tests and TCKs and so stability is typically inherent.

We increment the minor version number when we are changing internal APIs (but not client facing APIs). We increment the major version number when we are changing external (client-facing) APIs.

People are always advised to use the latest version on a particular branch, since that release will have all available bug fixes and backports.

2.8. Product Release Process

DataNucleus AccessPlatform is simply a bundled distribution and is typically released at intervals every 1-2 months, usually taking the latest release of each related plugin and packaging it. When there is a DataNucleus product to be released (e.g AccessPlatform), the following is the release process (administrator role)

  • Make sure that all required plugin versions are released. See the plugin release process below for details.

  • Go to the product that needs releasing and update build.properties so that the versions of all required plugins are set, as is the version of the product. Check this change in.

  • Generate the required product archives (zips), and the documentation.

  • Release the archives and documentation on SourceForge.

2.9. Plugin Release Process

Since DataNucleus development policy is that all current tests should always pass for checked-in code, a plugin is always in a releasable state. In general, changes don’t sit in the source repo for more than a month without release. If a commercial client requires a particular feature or bug fix then a release is expedited. If some upstream plugin needs releasing first then it is released before the plugin in question. We try to avoid any changes in internal APIs that would affect downstream plugins unless in a new release cycle. The following is the release process (administrator role)

  • Check that there are no unresolved issues marked against the release version in question and, if necessary, complete the outstanding ones or move them to a later version.

  • Should a developer have a good reason for the release not happening (e.g wanting to get another fix in first) then the release could be delayed, but this is the exception not the norm.

  • Use the Maven release plugin as follows : mvn clean release:clean release:prepare release:perform. This updates the version to remove SNAPSHOT, builds source/javadocs/jar, creates a tag and finally copies it to Sonatype ready for release.

  • Go to the DataNucleus Sonatype account and check/release/close the repository artifacts.

2.10. Sample Release Process

We try to make all samples use Maven and have the same release process, so then the released archive is consistent. This is the release process for a sample

  • Go to the samples project and edit the pom.xml for version and dependencies

  • Create the assembly using mvn clean assembly:assembly

  • Copy the zip file on to SourceForge (or get an admin to do it)

3. Coding Standards

Here we provide an overview of the coding standards employed in the DataNucleus source code. If you want to work on DataNucleus or contribute code to DataNucleus, you are expected to use these coding standards. We know everyone has their own preference, but these are ours, so please follow them; otherwise any contributed code will not be directly included as-is. They may differ from Oracle’s coding conventions, but those are the conventions of some US company, and that doesn’t mean that they are necessarily "the best", "the official" or any such title. These are ours, so best get used to it ;-).

  • Indentation : 4 characters indent

  • Tabs : no tabs please!

  • Braces : insert a new line before opening brace and a new line before closing brace. Opening and closing braces should line up vertically.

  • Line Length : max line length 140

  • Imports : fully specify imports. Do NOT use asterisk notation!

  • Java Language Level : write for Java 1.8 as a minimum

  • Fields positioning : place fields at the top of a class.

  • Logging : use org.datanucleus.util.NucleusLogger which wraps Log4j, java.util.logging etc. Log as much info as is considered necessary at the appropriate level. See org.datanucleus.util.NucleusLogger for details

  • Localisation : all output exception and log messages should be localised. See org.datanucleus.util.Localiser for details

  • Licensing : make sure you include the standard DataNucleus/Apache 2 license copyright header to all files

If you are using Eclipse then we have an XML configuration to specify in Eclipse. Please do NOT reformat existing code; just format any code that you add.

Here we have some examples of brace policy and other things

        package mypackage;

        import java.util.LinkedList;

        public class MyIntStack
        {
            private final LinkedList<Integer> fStack;

            public MyIntStack()
            {
                fStack = new LinkedList<>();
            }

            public int pop()
            {
                return fStack.removeFirst().intValue();
            }

            public void push(int elem)
            {
                fStack.addFirst(Integer.valueOf(elem));
            }

            public boolean isEmpty()
            {
                return fStack.isEmpty();
            }
        }

Example of indentation

        class Example
        {
            int[] myArray = {1, 2, 3, 4, 5, 6};
            int theInt = 1;
            String someString = "Hello";
            double aDouble = 3.0;

            void foo(int a, int b, int c, int d, int e, int f)
            {
                switch (a)
                {
                    case 0 :
                        Other.doFoo();
                        break;
                    default :
                        Other.doBaz();
                }
            }

            void bar(List<Integer> v)
            {
                for (int i = 0; i < 10; i++)
                {
                    v.add(Integer.valueOf(i));
                }
            }
        }

Example with if ... else

        class Example
        {
            void bar()
            {
                do
                {
                }
                while (true);
                try
                {
                }
                catch (Exception e)
                {
                }
            }

            void foo2()
            {
                if (true)
                {
                    return;
                }
                if (true)
                {
                    return;
                }
                else if (false)
                {
                    return;
                }
                else
                {
                    return;
                }
            }
        }

4. Source Code Licensing

All contributions to the DataNucleus project must adhere to the Apache 2 license. Notwithstanding that, at the discretion of the administrators of the project, DataNucleus project downloads may include separately licensed code from third parties as a convenience and where permitted by the third party license, provided this is clearly indicated.

All original contributions must contain the following copyright notice.

	/**********************************************************************
	Copyright (c) 2017 {your name} and others. All rights reserved.
	Licensed under the Apache License, Version 2.0 (the "License");
	you may not use this file except in compliance with the License.
	You may obtain a copy of the License at

	    http://www.apache.org/licenses/LICENSE-2.0

	Unless required by applicable law or agreed to in writing, software
	distributed under the License is distributed on an "AS IS" BASIS,
	WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
	See the License for the specific language governing permissions and
	limitations under the License.

	Contributors:
	{year} {contributor1} - {description of contribution}
	{year} {contributor2} - {description of contribution}
	    ...
	**********************************************************************/

5. DataNucleus Extensions and Extension-Points

DataNucleus products are built using a plugin mechanism, allowing plugins to operate together. This plugin mechanism involves the use of a file plugin.xml situated at the root of the CLASSPATH, containing a definition of the extension points and extensions that the plugin utilises/provides. The plugin mechanism originated in the Eclipse IDE, but has no dependencies on Eclipse. This plugin mechanism is useful also from a user viewpoint in that you, the user, could provide plugins that use these extension points and extend the capabilities of DataNucleus. Plugins are loaded by a plugin manager when DataNucleus is initialised at runtime, and this plugin manager uses a registry mechanism, inspecting jars in the CLASSPATH. The three steps necessary for creating a DataNucleus plugin are

  1. Review the DataNucleus Extension-Point that you will need to implement to generate the plugin, and implement it.

  2. Create a file plugin.xml at the top level of your JAR defining the plugin details (see the actual Extension docs).

  3. Update the META-INF/MANIFEST.MF file contained in the jar so that it includes necessary information for OSGi.

Extension points differ from one version of DataNucleus to another, so you should consult the documentation of the version of DataNucleus that you are using. The documentation for extensions in the latest version of DataNucleus can be found here
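
As a sketch of step 2, a minimal plugin.xml for a plugin providing a store manager might look like the following. The extension point name is a real DataNucleus extension point, but the plugin id, provider name, class name and "mystore" URL key are invented for illustration:

```xml
<?xml version="1.0"?>
<plugin id="mydomain.mystore" name="MyDomain DataNucleus plug-in" provider-name="MyDomain">
    <!-- Extension of the DataNucleus store_manager extension point;
         the class and the "mystore" URL key are hypothetical. -->
    <extension point="org.datanucleus.store_manager">
        <store-manager key="mystore" url-key="mystore" class-name="mydomain.MyStoreManager"/>
    </extension>
</plugin>
```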

6. Development Testing

6.1. Component Tests

DataNucleus uses JUnit for unit testing. Traditional "unit" tests are included with the project that they are testing. For example "core" has some "unit" tests, and these are run when you build the project. In general DataNucleus doesn’t have many tests of this style (you’re welcome to write some) because the more important aspect of testing is the actual reading/writing involving the datastore (see below).

6.2. End-to-End Tests

End-to-end tests also use JUnit, and are available in the GitHub DataNucleus repository tests. In these tests we persist objects in a datastore, and test the result. There are several scenarios for this type of test, because JDO/JPA have many aspects; by this we mean that we separate our tests by what they test. When developing anything, the tests should be the guiding light as to whether you should be checking anything in to GitHub. If your change breaks things, you shouldn’t check it in. We have tests divided by the persistence API that they are testing; look under the directories jdo, jpa and rest.

6.2.1. framework

All DataNucleus end-to-end tests are based around a framework and this requires building first before running any tests. Navigate into the framework directory, and build and install it by typing

mvn clean install

This builds the jar under "target" and installs it into your Maven repository.

6.2.2. framework.maven

This is a Maven plugin that allows some of the tests to be run with different configurations. You should build and install it by typing

mvn clean install

This builds the jar under "target" and installs it into your Maven repository.

6.2.3. samples

The samples project provides typical model classes to be persisted, with all different types of relations. This project doesn’t define how the classes are persisted, just what the classes are. They are then usable by all persistence APIs. You should build and install it by typing

mvn clean install

This builds the jar under "target" and installs it into your Maven repository.

6.2.4. End-to-End Test Scenarios

So we have now built the framework, framework.maven and samples projects and are ready to run any of the "scenarios". There are many scenarios including

  • Tests for JDO API

    • jdo/general - General tests for JDO

    • jdo/identity - Tests for JDO that run with both application and datastore identity types

    • jdo/jta - Tests for using JTA transactions with JDO

    • jdo/geospatial - Tests for using spatial types with JDO

    • jdo/rdbms - Tests specific to RDBMS, using JDO

    • jdo/ldap - Tests specific to LDAP, using JDO

    • jdo/spreadsheet - Tests specific to Excel and ODF, using JDO

    • jdo/hbase - Tests for basic handling with HBase, using JDO

    • jdo/mongodb - Tests for basic handling with MongoDB, using JDO

  • Tests for JPA API

    • jpa/general - General tests for JPA

    • jpa/jta - Tests for using JTA transactions with JPA

    • jpa/geospatial - Tests for using spatial types with JPA

    • jpa/rdbms - Tests specific to RDBMS, using JPA

  • Tests for REST API

    • rest/general - Tests using the REST API

To run a test scenario, go into the scenario project and type

mvn clean test

This then runs the tests for that scenario. There are also occasionally additional tests under org.datanucleus.tests.knownbugs and org.datanucleus.tests.newfeatures intended for incorporation into the test scenario at some later point; these are not run by default.

To run the tests on a different datastore (default=H2) type

mvn clean test -Pmysql

or replace "mysql" with "postgresql", "oracle", "mongodb", "hbase" etc

The long-term strategy is to have only the overall test scenarios that apply to JDO or JPA, and to drop the datastore-specific scenarios except where they cover some feature specific to that datastore. The reality is that we don’t have the resources to do this yet, so we typically run, for example, jdo/mongodb for testing MongoDB, which obviously only tests a small subset of what ought to be tested. Offering your time to make all store plugins more feature complete is the only way this task will be performed.

6.2.5. Adding Unit Tests

Where you feel that our unit tests do not adequately cover functionality, you should add a test. Please follow this process

  1. Decide which scenario your test fits into (e.g jdo/general, jpa/general)

  2. Look at the available model samples and choose one.

  3. Write your unit test, extending one of the common base classes, for example JDOPersistenceTestCase, or JPAPersistenceTestCase

  4. Run your test.

  5. Raise an issue and attach your testcase to the issue.

6.2.6. Alternate Test Configurations

All tests run with a default test configuration (see the files in framework under src/conf). You can, with some test suites, run alternate test configurations. This is achieved using the framework.maven Maven plugin.

  1. Look for a configuration file under src/conf of framework such as optimistic-conf.properties. These properties are used to override the default properties

  2. Run the tests with -Dtest.configs=optimistic or -Dtest.configs=optimistic,pessimistic for example

6.3. JDO TCK

DataNucleus GitHub "master" passes the JDO TCK, and hence is a fully-compliant implementation of JDO. All developers should run this TCK from time to time to validate any changes. The following shows the results of the TCK as proof of compatibility. You can, of course, simply run DataNucleus yourself with the JDO TCK (downloadable from the Apache JDO project). JDO is an open standard, developed by the Apache JDO project. As yet no other implementation has published fully-compliant JDO TCK results on a publicly visible website; don’t believe claims of compliance unless output like that below is posted. The tests below were run with DataNucleus GitHub "master" (v5.0) on 25/Mar/2016.

	dsid-runonce-junit.txt:
	    OK Tests run: 002, Time: 002 seconds.
	dsid-instancecallbacks-junit.txt:
	    OK Tests run: 016, Time: 003 seconds.
	dsid-jdohelper-junit.txt:
	    OK Tests run: 045, Time: 002 seconds.
	dsid-pm-junit.txt:
	    OK Tests run: 168, Time: 027 seconds.
	dsid-pmf-junit.txt:
	    OK Tests run: 068, Time: 017 seconds.
	dsid-detach-junit.txt:
	    OK Tests run: 018, Time: 003 seconds.
	dsid-embeddedInheritance-junit.txt:
	    OK Tests run: 004, Time: 002 seconds.
	dsid-enhancement-junit.txt:
	    OK Tests run: 031, Time: 001 seconds.
	dsid-extents-junit.txt:
	    OK Tests run: 013, Time: 004 seconds.
	dsid-fetchplan-junit.txt:
	    OK Tests run: 021, Time: 002 seconds.
	dsid-fetchgroup-junit.txt:
	    OK Tests run: 035, Time: 002 seconds.
	dsid-lifecycle-junit.txt:
	    OK Tests run: 017, Time: 005 seconds.
	dsid-models-junit.txt:
	    OK Tests run: 050, Time: 075 seconds.
	dsid-models1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-query-junit.txt:
	    OK Tests run: 162, Time: 019 seconds.
	dsid-jdoql-junit.txt:
	    OK Tests run: 153, Time: 029 seconds.
	dsid-jdoql1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-transactions-junit.txt:
	    OK Tests run: 030, Time: 003 seconds.
	dsid-companyNoRelationships-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyEmbedded-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-company1-1Relationships-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-company1-MRelationships-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyM-MRelationships-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-companyAllRelationships-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyMapWithoutJoin-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-companyListWithoutJoin-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-companyPMClass-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-companyPMInterface-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotated1-1RelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotated1-MRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-companyAnnotatedAllRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedAllRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedAllRelationshipsPCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedAllRelationshipsJPAConcrete-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-companyAnnotatedAllRelationshipsJPAPM-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-companyAnnotatedAllRelationshipsPCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedAllRelationshipsPIPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedEmbeddedFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedM-MRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 006 seconds.
	dsid-companyAnnotatedM-MRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedNoRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 004 seconds.
	dsid-companyAnnotatedNoRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedNoRelationshipsPCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedEmbeddedJPAConcrete-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-companyAnnotatedEmbeddedJPAPM-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-companyAnnotatedNoRelationshipsPCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyAnnotatedNoRelationshipsPIPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-companyOverrideAnnotatedAllRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-inheritance1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-inheritance2-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-inheritance3-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-inheritance4-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	dsid-relationshipAllRelationships-junit.txt:
	    OK Tests run: 034, Time: 007 seconds.
	dsid-relationshipNoRelationships-junit.txt:
	    OK Tests run: 015, Time: 005 seconds.
	dsid-schemaAttributeClass-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-schemaAttributeOrm-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-schemaAttributePackage-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	dsid-compoundIdentity-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	dsid-throwOnUnknownStandardProperties-junit.txt:
	    OK Tests run: 002, Time: 000 seconds.
	app-instancecallbacks-junit.txt:
	    OK Tests run: 016, Time: 002 seconds.
	app-jdohelper-junit.txt:
	    OK Tests run: 045, Time: 002 seconds.
	app-pm-junit.txt:
	    OK Tests run: 168, Time: 026 seconds.
	app-pmf-junit.txt:
	    OK Tests run: 068, Time: 017 seconds.
	app-detach-junit.txt:
	    OK Tests run: 018, Time: 002 seconds.
	app-embeddedInheritance-junit.txt:
	    OK Tests run: 004, Time: 002 seconds.
	app-enhancement-junit.txt:
	    OK Tests run: 031, Time: 001 seconds.
	app-extents-junit.txt:
	    OK Tests run: 013, Time: 003 seconds.
	app-fetchplan-junit.txt:
	    OK Tests run: 021, Time: 002 seconds.
	app-fetchgroup-junit.txt:
	    OK Tests run: 035, Time: 001 seconds.
	app-lifecycle-junit.txt:
	    OK Tests run: 017, Time: 004 seconds.
	app-models-junit.txt:
	    OK Tests run: 050, Time: 068 seconds.
	app-models1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-query-junit.txt:
	    OK Tests run: 162, Time: 016 seconds.
	app-jdoql-junit.txt:
	    OK Tests run: 153, Time: 022 seconds.
	app-jdoql1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-transactions-junit.txt:
	    OK Tests run: 030, Time: 003 seconds.
	app-companyNoRelationships-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-companyEmbedded-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-company1-1Relationships-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-company1-MRelationships-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-companyM-MRelationships-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-companyAllRelationships-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyMapWithoutJoin-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-companyListWithoutJoin-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyPMClass-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyPMInterface-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotated1-1RelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotated1-MRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedAllRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedAllRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedAllRelationshipsPCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedAllRelationshipsJPAConcrete-junit.txt:
	    OK Tests run: 001, Time: 004 seconds.
	app-companyAnnotatedAllRelationshipsJPAPM-junit.txt:
	    OK Tests run: 001, Time: 004 seconds.
	app-companyAnnotatedAllRelationshipsPCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedAllRelationshipsPIPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedEmbeddedFCPM-junit.txt:
	    OK Tests run: 001, Time: 004 seconds.
	app-companyAnnotatedM-MRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedM-MRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedNoRelationshipsFCConcrete-junit.txt:
	    OK Tests run: 001, Time: 004 seconds.
	app-companyAnnotatedNoRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedNoRelationshipsPCConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedEmbeddedJPAConcrete-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedEmbeddedJPAPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedNoRelationshipsPCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyAnnotatedNoRelationshipsPIPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-companyOverrideAnnotatedAllRelationshipsFCPM-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-inheritance1-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-inheritance2-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	app-inheritance3-junit.txt:
	    OK Tests run: 001, Time: 001 seconds.
	app-inheritance4-junit.txt:
	    OK Tests run: 001, Time: 003 seconds.
	app-relationshipAllRelationships-junit.txt:
	    OK Tests run: 034, Time: 008 seconds.
	app-relationshipNoRelationships-junit.txt:
	    OK Tests run: 015, Time: 005 seconds.
	app-schemaAttributeClass-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-schemaAttributeOrm-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-schemaAttributePackage-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-compoundIdentity-junit.txt:
	    OK Tests run: 001, Time: 002 seconds.
	app-throwOnUnknownStandardProperties-junit.txt:
	    OK Tests run: 002, Time: 000 seconds.

	Total tests run: 1846.
	All (117) configurations passed.

6.4. JPA 1 TCK

All releases of DataNucleus since v1 pass the JPA1 (JSR 220) TCK. JPA1 is a standard developed in private by the JCP; its discussions were not open, and the TCK is not freely downloadable, so users cannot verify any claims of compliance. This is in direct contrast to the JDO standard, and we leave it to users to decide how they feel about that. The tests below were run with JPA TCK 1.0b and DataNucleus GitHub "master" (v5.0) on 25/Mar/2016 against PostgreSQL 9.4.

	Completed running 435 tests.
	Number of Tests Passed      = 435
	Number of Tests Failed      = 0
	Number of Tests with Errors = 0

Sadly we are not legally allowed to disclose any further details about these tests, having had to sign an NDA just to get hold of the TCK.

6.5. JPA 2+ TCK

As mentioned in this blog post, we applied for the JPA 2.0 TCK on 8th February 2010. The request was handled by Jonathan Nimer and Patrick Curran at Sun/Oracle, who (eventually) provided us with a form to sign and return in order to gain access to the JPA2 TCK. This was returned to them at the end of April 2010, to their address of 4150 Network Circle, Santa Clara, CA 95054, USA. We have since asked them on more than three occasions when we would be getting access to this secret TCK. They have still not provided it, and as a result we are forced to claim full compliance with the JPA2 spec, since the means of testing it is being withheld from us. The only possible explanations for this unwillingness to provide the TCK under their own terms and conditions are either incompetence or a deliberate prevention of access. How do you, the user, feel about an organisation like Oracle preventing a level playing field for such technologies?

Since the JPA "group" has still not published an official JPA 2.0 / 2.1 API jar to Maven Central, we really feel that their priorities are not in the best interests of you, the user. The basic minimum should be to publish the official JPA API jars and to provide a public, open source TCK. Once those are in place, then we can talk.

As a measure of how well DataNucleus complies with JPA, in addition to our own tests it is also exercised by the test suite written for the Blaze Persistence framework. It passes those tests as well as the likes of Hibernate do, and better than EclipseLink does. Bear in mind that those tests are centred around Hibernate features rather than DataNucleus features, and cover not just standard JPA behaviour but also items that have been requested for inclusion in later releases of JPA.

6.6. Jakarta Persistence TCK

There does not appear to be a separate TCK for Jakarta Persistence; it seems to be part of one huge Jakarta EE TCK. Since DataNucleus implements just one part of that, we have no time to work out what this means for a Jakarta Persistence implementation. The JPA1 TCK had no placeholder for bytecode enhancement and we had to hack in a way of incorporating enhancement; the files are seemingly all different here anyway, so it is unclear how to proceed on that either. If it is important to you, then go and work it out and report back.

6.7. Databases Notes

6.7.1. Database setup for running tests

Each test project is run against a datastore (as defined above). The configuration of the datastores is stored under src/conf of framework. Some tests require two database instances, which is why two files exist for every database; e.g. maven.datanucleus.datastore=hsql refers to both

  • framework/src/conf/datanucleus-hsql.1.properties, and

  • framework/src/conf/datanucleus-hsql.2.properties
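As a hypothetical sketch of what one of these files contains (the actual settings live in the files themselves; the values below are illustrative assumptions using the standard JDO connection properties):

```properties
# Hypothetical excerpt of framework/src/conf/datanucleus-hsql.1.properties.
# The second file (datanucleus-hsql.2.properties) would point at a
# different database so that tests needing two instances can run.
javax.jdo.option.ConnectionDriverName=org.hsqldb.jdbcDriver
javax.jdo.option.ConnectionURL=jdbc:hsqldb:mem:nucleus1
javax.jdo.option.ConnectionUserName=sa
javax.jdo.option.ConnectionPassword=
```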

The default database configured in the test projects is H2.

Following are notes about running the DataNucleus unit tests with particular databases.

6.7.2. Oracle 10.2

If you face the error ORA-12519, TNS:no appropriate service handler, try increasing the sessions and processes parameters to 300 and open_cursors to 1000. To change these values in Oracle, issue the following statements.

	alter system set open_cursors = 1000 scope=spfile
	alter system set sessions = 300 scope=spfile
	alter system set processes = 300 scope=spfile

These values can also be set in the Oracle spfile (see initXE.ora or init.ora); note that changes made with scope=spfile only take effect after the instance is restarted.

	*.processes=300
	*.sessions=300
	*.open_cursors=1000

If you face the error ORA-01000: maximum open cursors exceeded, try increasing the open_cursors parameter to 1000 in the file initXE.ora or init.ora.

	*.open_cursors=1000

If you face OutOfMemoryError problems, increase the -Xms and -Xmx JVM args used when running the JUnit tests, for example via MAVEN_OPTS="-Xms512m -Xmx1024m".