A Transaction forms a unit of work. The Transaction manages what happens within that unit of work, and when an error occurs the Transaction can roll back any changes performed. Transactions can be managed by the user's application, by a framework (such as Spring), or by a JEE container. These are described below.
If using DataNucleus JPA in a J2SE environment the normal type of transaction is RESOURCE_LOCAL. With this type of transaction the user manages the transactions themselves, starting, committing or rolling back each transaction. With JPA you would do something like this
```java
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try
{
    tx.begin();

    // {users code to persist objects}

    tx.commit();
}
finally
{
    if (tx.isActive())
    {
        tx.rollback();
    }
}
em.close();
```
In this case you will have defined your persistence-unit to be like this
<persistence-unit name="MyUnit" transaction-type="RESOURCE_LOCAL"> <properties> <property key="javax.persistence.jdbc.url" value="jdbc:mysql:..."/> ... </properties> ... </persistence-unit>
or
<persistence-unit name="MyUnit" transaction-type="RESOURCE_LOCAL"> <non-jta-data-source>java:comp/env/myDS</properties> ... </persistence-unit>
The basic idea with locally-managed (RESOURCE_LOCAL) transactions is that you manage the transaction start and end yourself.
The other type of transaction with JPA is JTA. With this type you have a JTA data source from which you obtain a UserTransaction, and resources can be "joined" to that UserTransaction. In the case of JPA there are two scenarios. The first scenario is where the UserTransaction is created before you create your EntityManager; the creation of the EntityManager will automatically join it to the current UserTransaction, like this
```java
UserTransaction ut = (UserTransaction)new InitialContext().lookup("java:comp/UserTransaction");
ut.setTransactionTimeout(300);

EntityManager em = emf.createEntityManager();
try
{
    ut.begin();

    // .. perform persistence/query operations

    ut.commit();
}
finally
{
    em.close();
}
```
so we control the transaction using the UserTransaction. The second scenario is where the UserTransaction is started after you have created the EntityManager. In this case we need to join the EntityManager to the newly created UserTransaction, like this
```java
EntityManager em = emf.createEntityManager();
try
{
    // .. perform persistence/query operations

    UserTransaction ut = (UserTransaction)new InitialContext().lookup("java:comp/UserTransaction");
    ut.setTransactionTimeout(300);
    ut.begin();

    // Join the EntityManager operations to this UserTransaction
    em.joinTransaction();

    // Commit the persistence/query operations performed above
    ut.commit();
}
finally
{
    em.close();
}
```
In the JTA case you will have defined your persistence-unit to be like this
<persistence-unit name="MyUnit" transaction-type="JTA"> <jta-data-source>java:comp/env/myDS</properties> ... </persistence-unit>
When using a JEE container you hand over control of the transactions to the container; these are Container-Managed Transactions. In terms of your code, you would do as in the above examples except that you OMIT the tx.begin(), tx.commit() and tx.rollback() calls, since the JEE container does this for you, as in the sketch below.
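For illustration only, a minimal sketch of a container-managed transaction, assuming a stateless session bean with a container-injected EntityManager; the ProductService and Product names are hypothetical.

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ProductService
{
    // The container injects an EntityManager bound to the current JTA transaction
    @PersistenceContext(unitName = "MyUnit")
    private EntityManager em;

    // Business methods of a stateless bean run in a container-managed transaction
    // (REQUIRED by default), so there is no tx.begin()/tx.commit()/tx.rollback() here
    public void newProduct(Product product)
    {
        em.persist(product);
    }
}
```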
Similarly, when you use a framework like Spring you do not need to call tx.begin(), tx.commit() or tx.rollback(), since that is done for you, typically via declarative transaction demarcation as sketched below.
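Again purely as a hedged sketch, assuming Spring's declarative @Transactional demarcation over a JPA EntityManager; the class and entity names are illustrative.

```java
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SpringProductService
{
    @PersistenceContext
    private EntityManager em;

    // Spring begins the transaction before this method and commits it afterwards
    // (rolling back on a runtime exception), so no explicit tx calls appear here
    @Transactional
    public void newProduct(Product product)
    {
        em.persist(product);
    }
}
```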
DataNucleus also allows operation without transactions. With JPA this is enabled by default (the two persistence properties datanucleus.NontransactionalRead and datanucleus.NontransactionalWrite default to true). This means that you can read objects and make updates outside of transactions, effectively an "auto-commit" mode.
```java
EntityManager em = emf.createEntityManager();

// {users code to persist objects}

em.close();
```
When using non-transactional operations you need to pay attention to the persistence property datanucleus.nontx.atomic. If this is true then any persist/delete/update is committed to the datastore immediately. If this is false then any persist/delete/update is queued up until the next transaction (or em.close()) and committed with that. One way of setting this property is sketched below.
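For example, a fragment (in the style of the examples above) that passes the property when creating the EntityManagerFactory; it could equally be set in your persistence-unit properties.

```java
Map<String, Object> props = new HashMap<>();

// Commit each non-transactional persist/delete/update immediately,
// rather than queueing it until the next transaction or em.close()
props.put("datanucleus.nontx.atomic", "true");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyUnit", props);
```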
During a transaction, depending on the configuration, operations don't necessarily go to the datastore immediately, often waiting until commit. In some situations you need persists/updates/deletes to be in the datastore so that subsequent operations can be performed that rely on those being handled first. In this case you can flush all outstanding changes to the datastore using
```java
em.flush();
```
A convenient vendor extension is to find which objects are waiting to be flushed at any time, like this
```java
List<ObjectProvider> objs = ((JPAEntityManager)em).getExecutionContext().getObjectsToBeFlushed();
```
DataNucleus also allows specification of the transaction isolation level. This is specified via the EntityManagerFactory property datanucleus.transactionIsolation. It accepts the standard JDBC values of read-uncommitted (1), read-committed (2), repeatable-read (4) and serializable (8).
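As a sketch, the property could be passed at EntityManagerFactory creation (it could equally go in the persistence-unit definition):

```java
Map<String, Object> props = new HashMap<>();

// Request the "serializable" isolation level for transactions of this EntityManagerFactory
props.put("datanucleus.transactionIsolation", "serializable");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyUnit", props);
```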
The default is read-committed. If the datastore doesn't support a particular isolation level then it will be silently changed to one that is supported. As an alternative you can also specify it on a per-transaction basis as follows (using the numeric values in parentheses above).
```java
org.datanucleus.api.jpa.JPAEntityTransaction tx = (org.datanucleus.api.jpa.JPAEntityTransaction)em.getTransaction();
tx.setOption("transaction.isolation", 2);
```
Obviously transactions are intended for committing changes. If you come across a situation where you don't want to commit anything under any circumstances you can mark the transaction as "read-only" by calling setRollbackOnly() on it, like this
```java
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
try
{
    tx.begin();
    tx.setRollbackOnly();

    // {users code to persist objects}

    tx.rollback();
}
finally
{
    if (tx.isActive())
    {
        tx.rollback();
    }
}
em.close();
```
Any call to commit on the transaction will throw an exception forcing the user to roll it back.
A transaction also controls how the objects it touches are locked. There are two locking types for a transaction, pessimistic (datastore) locking and optimistic locking, described below.
Pessimistic locking isn't directly supported in JPA but is provided as a vendor extension. It is suitable for short-lived operations where no user interaction is taking place, and so it is possible to block access to datastore entities for the duration of the transaction. You select pessimistic locking by setting the persistence property datanucleus.Optimistic to false.
By default DataNucleus does not currently lock the objects fetched with pessimistic locking, but you can configure this behaviour for RDBMS datastores by setting the persistence property datanucleus.rdbms.useUpdateLock to true. This will result in all "SELECT ... FROM ..." statements being changed to "SELECT ... FROM ... FOR UPDATE". This is applied only where the underlying RDBMS supports the "FOR UPDATE" syntax. Both properties could be set as sketched below.
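A sketch of setting both properties, again assuming they are passed at EntityManagerFactory creation (they could equally go in the persistence-unit definition):

```java
Map<String, Object> props = new HashMap<>();

// Select pessimistic (datastore) locking rather than the default optimistic locking
props.put("datanucleus.Optimistic", "false");

// For RDBMS, issue "SELECT ... FOR UPDATE" so that fetched objects are locked
props.put("datanucleus.rdbms.useUpdateLock", "true");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyUnit", props);
```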
With pessimistic locking DataNucleus will grab a datastore connection at the first operation, and maintain it for the duration of the transaction. A single connection is used for the transaction (with the exception of any Identity Generation operations which need datastore access, so these can use their own connection).
In terms of the process of pessimistic (datastore) locking, we demonstrate this below.
| Operation | DataNucleus process | Datastore process |
|---|---|---|
| Start transaction | | |
| Persist object | Prepare object (1) for persistence | Open connection. Insert the object (1) into the datastore |
| Update object | Prepare object (2) for update | Update the object (2) in the datastore |
| Persist object | Prepare object (3) for persistence | Insert the object (3) into the datastore |
| Update object | Prepare object (4) for update | Update the object (4) in the datastore |
| Flush | No outstanding changes so do nothing | |
| Perform query | Generate query in datastore language | Query the datastore and return selected objects |
| Persist object | Prepare object (5) for persistence | Insert the object (5) into the datastore |
| Update object | Prepare object (6) for update | Update the object (6) in the datastore |
| Commit transaction | | Commit connection |
So here, whenever an operation is performed, DataNucleus pushes it straight to the datastore. Consequently any queries will always reflect the current state of all objects in use. However this mode of operation has no version checking of objects, so if they were updated by external processes in the meantime then this transaction will overwrite those changes.
It should be noted that DataNucleus provides two persistence properties that give some control over when flushing happens with pessimistic locking.
Optimistic locking is the only official option in JPA. It is suitable for longer-lived operations, perhaps where user interaction is taking place, and where it would be undesirable to block access to datastore entities for the duration of the transaction. The assumption is that data altered in this transaction will not be updated by other transactions during its duration, so the changes are not propagated to the datastore until commit()/flush(). The data is checked just before commit to ensure integrity in this respect. The most convenient way of checking data for updates is to maintain a column on each table that holds optimistic locking data. The user decides this when generating their MetaData.
Rather than placing version/timestamp columns on all user datastore tables, JPA allows the user to mark particular classes as requiring optimistic treatment. This is performed by specifying in MetaData or annotations the details of the field/column to use for storing the version - see versioning for JPA. With JPA1 you must have a field in your class ready to store the version.
In JPA1 you can read the version by inspecting the field marked as storing the version value, as in the sketch below.
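A minimal sketch of such a class, assuming annotations are used; the Account name and field types are illustrative.

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account
{
    @Id
    private long id;

    // Field holding the optimistic locking version, updated by the persistence provider on each change.
    // With JPA1 you can read the current version simply by inspecting this field.
    @Version
    private long version;

    public long getVersion()
    {
        return version;
    }
}
```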
In terms of the process of optimistic locking, we demonstrate this below.
| Operation | DataNucleus process | Datastore process |
|---|---|---|
| Start transaction | | |
| Persist object | Prepare object (1) for persistence | |
| Update object | Prepare object (2) for update | |
| Persist object | Prepare object (3) for persistence | |
| Update object | Prepare object (4) for update | |
| Flush | Flush all outstanding changes to the datastore | Open connection. Version check of object (1). Insert the object (1) in the datastore. Version check of object (2). Update the object (2) in the datastore. Version check of object (3). Insert the object (3) in the datastore. Version check of object (4). Update the object (4) in the datastore. |
| Perform query | Generate query in datastore language | Query the datastore and return selected objects |
| Persist object | Prepare object (5) for persistence | |
| Update object | Prepare object (6) for update | |
| Commit transaction | Flush all outstanding changes to the datastore | Version check of object (5). Insert the object (5) in the datastore. Version check of object (6). Update the object (6) in the datastore. Commit connection. |
Here no changes make it to the datastore until the user either commits the transaction or invokes flush(). The impact of this is that, by default, query results may not contain the modified objects unless those changes are flushed to the datastore before invoking the query. Whether you need the modified objects to be reflected in the query results governs whether you flush first: if you invoke flush() just before running the query, the query results will include the changes, as sketched below. The obvious benefit of optimistic locking is that all changes are made in a block and version checking of objects is performed before application of changes, so this mode copes better with external processes updating the objects.
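For illustration, a fragment (with a hypothetical Product entity and query) showing a flush before a query so that the pending persist is visible to it:

```java
tx.begin();

// Not yet in the datastore at this point
em.persist(new Product("Book", 12.99));

// Push the outstanding persist to the datastore so the query below can see it
em.flush();

List<Product> results = em.createQuery("SELECT p FROM Product p", Product.class).getResultList();

tx.commit();
```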
Please note that for some datastores (e.g. RDBMS) the version check followed by the update/delete is performed in a single statement.
See also :-