DataNucleus requires that all classes that are persisted implement the PersistenceCapable interface defined by JDO.
Why should we do this when Hibernate/TopLink don't need it? Well, that's a simple question really.
DataNucleus uses this interface, and adds it using bytecode enhancement techniques so that you never need to actually change your classes. This means that you get transparent persistence, and your classes always remain your own. ORM tools that use a mix of reflection and/or proxies are not totally transparent.
DataNucleus' use of PersistenceCapable provides transparent change tracking. When any change is made to an object, the change creates a notification to DataNucleus, allowing it to be optimally persisted. ORM tools that don't have access to such change tracking have to use reflection to detect changes. The performance of this process degrades as soon as you read a large number of objects but modify just a handful, since these tools have to compare all object states for modification at transaction commit time.
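The difference can be sketched in plain Java (hypothetical names, not DataNucleus code): with change notification, commit-time work scales with the number of modified objects, whereas reflective state comparison scales with everything loaded.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of notification-based dirty tracking.
public class DirtyTracking {
    // Enhanced style: the object notifies a tracker on every write.
    static List<Object> dirty = new ArrayList<>();

    static class Enhanced {
        private int field;
        void setField(int v) {       // the write is intercepted...
            field = v;
            dirty.add(this);         // ...so commit only looks at this list
        }
    }

    public static void main(String[] args) {
        List<Enhanced> loaded = new ArrayList<>();
        for (int i = 0; i < 1000; i++) loaded.add(new Enhanced());
        loaded.get(3).setField(42);  // modify just one of 1000 objects

        // Commit with notifications: work proportional to modified objects.
        System.out.println(dirty.size());
        // A reflection-based tool would instead compare all 1000 snapshots.
    }
}
```

Prints `1`: only the single modified object needs attention at commit, however many objects were read.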
In a JDO-enabled application there are 3 categories of classes. These are persistence-capable, persistence-aware, and normal classes. The Meta-Data defines which classes fit into these categories. To give an example for JDO, we have 3 classes. Class A is to be persisted in the datastore. Class B directly updates the fields of class A but doesn't need persisting. Class C is not involved in the persistence process. We would define JDO MetaData for these classes like this:
<class name="A" persistence-modifier="persistence-capable"/>
<class name="B" persistence-modifier="persistence-aware"/>
So our MetaData is mainly for those classes that are persistence-capable and are to be persisted to the datastore (we don't really need the persistence-modifier attribute for these classes since this is the default). For persistence-aware classes we simply notate that the class knows about persistence. We don't define MetaData for any class that has no knowledge of persistence.
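To make these roles concrete, here is a hypothetical sketch of classes A and B (the class names come from the MetaData above; the fields and methods are assumptions for illustration):

```java
// Class A is persistence-capable: its instances are stored in the datastore.
public class A {
    String value; // package-access field, updated directly by B

    public static void main(String[] args) {
        A a = new A();
        new B().update(a, "hello");
        System.out.println(a.value);
    }
}

// Class B is persistence-aware: it writes A's fields directly, so it must
// also be enhanced for those accesses to be intercepted, but instances of B
// are never themselves persisted.
class B {
    void update(A a, String newValue) {
        a.value = newValue; // direct field write into a persisted class
    }
}
```

A normal class C would neither be persisted nor touch A's fields directly, so it needs no MetaData and no enhancement.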
JDO requires that all classes to be persisted must implement the PersistenceCapable interface. Users could manually do this themselves but this would impose work on them. JDO permits the use of a byte-code enhancer that converts the users' normal classes to implement this interface. DataNucleus provides its own byte-code enhancer (this can be found in the DataNucleus distribution). This section describes how to use this enhancer with DataNucleus. The DataNucleus enhancer fully implements the JDO2 enhancement contract and so is the recommended choice when persisting using the JDO2 API. The enhancement process adds the necessary methods to the user's class in order to implement PersistenceCapable.
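As a rough illustration, a hand-written sketch of the shape an enhanced class takes (simplified names and members; this is NOT actual enhancer output, which adds considerably more, such as state-manager plumbing):

```java
// Simplified sketch only -- not actual DataNucleus enhancer output.
public class A {
    private String name;                  // the user's persistent field

    // Added by enhancement: all generated members carry the "jdo" prefix
    protected transient boolean jdoIsDirty;

    // Added accessor pair for the persistent field, so every access is
    // intercepted and the "dirty" state can be managed.
    public String jdoGetname() {
        return name;
    }

    public void jdoSetname(String value) {
        name = value;
        jdoIsDirty = true;                // change notified, not discovered later
    }

    public static void main(String[] args) {
        A a = new A();
        a.jdoSetname("test");
        System.out.println(a.jdoIsDirty); // writes mark the object dirty
    }
}
```

The user's own fields and methods are untouched; only the "jdo"-prefixed members are added around them.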
The example above doesn't show all PersistenceCapable methods, but demonstrates that all added methods and fields are prefixed with "jdo" to distinguish them from the user's own methods and fields. Also, each persistent field of the class is given jdoGetXXX and jdoSetXXX methods so that accesses to these fields are intercepted, allowing JDO to manage their "dirty" state.
The MetaData defines which classes are required to be persisted, and also defines which aspects of persistence each class requires. For example if a class has the detachable attribute set to true, then that class will be enhanced to also implement Detachable. Again, the example above doesn't show all methods added for the Detachable interface, but the main thing to know is that the detached state (the object id of the datastore object, the version of the datastore object when it was detached, and which fields were detached) is stored in "jdoDetachedState". Please see the JDO spec for more details.
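In the MetaData this is enabled via the detachable attribute; a minimal example, following the same form as the class definitions earlier in this section:

```xml
<class name="A" persistence-modifier="persistence-capable" detachable="true"/>
```

With this in place the enhancer generates both the PersistenceCapable and the Detachable members for class A.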
If the MetaData is changed in any way during development, the classes should always be recompiled
and re-enhanced afterwards.
Some groups (e.g. Hibernate) perpetuated arguments against "byte-code enhancement", saying that it was somehow 'evil'. The most common were :-
Slows down the code-test cycle
. This is erroneous since you only need to enhance just
before test, and the provided plugins for Ant, Eclipse and Maven all do the enhancement job
automatically and rapidly.
Is less "lazy" than the proxy approach since you have to load the object as soon as you get
a pointer to it
. In a 1-1 relation you have to load the object then anyway, since you would
cause issues with null pointers otherwise. With 1-N relations you load the elements of the
collection/map only when you access them, and not when you access the collection/map itself.
Hardly an issue then, is it?
Fail to detect changes to public fields unless you enhance your client code
. Firstly, very few people will be writing code with public fields since it is bad practice in an OO
design, and secondly, this is why we have "PersistenceAware" classes.
So as you can see, there are no valid reasons against byte-code enhancement, and the pluses are
that runtime detection of dirty events on objects is much quicker, hence your persistence layer
operates faster without any need for iterative reflection-based checks.
The fact is that Hibernate itself now also has a mode whereby you can do bytecode enhancement, although it is not the default mode of Hibernate. So maybe it wasn't so evil after all?
Many people will wonder what actually happens to a class upon bytecode enhancement. In simple terms, the necessary methods and fields are added so as to implement PersistenceCapable. If you want to check this, just use a Java decompiler such as JD. It has a nice GUI allowing you to just select your class to decompile, and it shows you the source.