Long-term memory

free range storage

A graphical desktop application has quite different transaction boundaries than application services: services must be considered part of a bigger transaction, whereas a desktop application is the outermost part of a transaction.

So there's no reason to accept an EntityManager as the transaction boundary, or to pollute the application space with EntityManagers.

With SR-JRC plenty of EntityManagers can participate in a transaction. The same is true for remote application services. In particular, no EntityManager is visible to the application. They do their job, but you don't have to bother with them.

Anyway - a developer must know about the existence of EntityManagers, and that an EntityManager is the boundary of a certain storage.

Example

TransactionFactory taFactory = ApplicationServiceProvider.getService(TransactionFactory.class);
Transaction ta = taFactory.createTransaction();

ta.add(new TOSave(getAppConfig()));
ta.execute();
The code above is from a loadable application that saves its configuration as preferences; the code below saves an entity to a SQL database.
TransactionFactory taFactory = ApplicationServiceProvider.getService(TransactionFactory.class);
Transaction ta = taFactory.createTransaction();

ta.add(new TOSave(any));
ta.execute();
As you can see, there's no difference where you store your entity or which format will be used for storage. And of course it does not matter which classes perform the real storage.

a question of competence

The responsibility of an EntityManager is determined by the ApplicationContext and the class hierarchy of the entity to store. The definition of EntityManagers looks like:

<bean id="repository" class="de.schwarzrot.data.access.Repository">
    <property name="exporter" ref="entityExporter" />
    <property name="importer" ref="entityImporter" />
    <property name="managers">
        <map>
            <entry key="de.schwarzrot.app.config.support.AbstractConfigBase"
             value-ref="prefsEntityManager" />
            <entry key="de.schwarzrot.data.VdrEntity"
             value-ref="vdrEntityManager" />
            <entry key="de.schwarzrot.data.Entity"
             value-ref="jdbcEntityManager" />
            <entry key="java.lang.Object"
             value-ref="baseEntityManager" />
        </map>
    </property>
</bean>
This way you can add new entity types at any time without having to care about your application setup. The responsibility is clear and deterministic:
  • all instances of AbstractConfigBase, as well as all its children, will be treated by PreferencesEntityManager
  • all instances of VdrEntity, as well as all its children, will be handled by VdrEntityManager
  • all instances of Entity, as well as all its children, will be handled by JDBCEntityManager
  • finally, BaseEntityManager will deal with all instances of any other type
Both AbstractConfigBase and VdrEntity are descendants of Entity - and they all are java.lang.Object, obviously! So when determining the responsibility, it's essential to take ancestry into account. SR-JRC offers this at no extra charge :)
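That ancestry-based lookup can be sketched in a few lines. This is a simplified illustration, not the actual SR-JRC implementation - it only walks superclasses and ignores interfaces, and it uses made-up names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// toy lookup: walks the class hierarchy of an entity until it
// finds a registered manager, mirroring the map shown above
class ManagerLookup {
    private final Map<Class<?>, String> managers = new LinkedHashMap<>();

    void register(Class<?> type, String managerName) {
        managers.put(type, managerName);
    }

    String managerFor(Class<?> entityType) {
        for (Class<?> c = entityType; c != null; c = c.getSuperclass()) {
            String manager = managers.get(c);
            if (manager != null)
                return manager; // the most specific ancestor wins
        }
        return null;
    }
}
```

With `Object` mapped to a base manager and a more specific ancestor registered as well, a subclass automatically picks the most specific manager - exactly the behavior the bean definition above relies on.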

richness in species

VdrAssistant contains three different EntityManagers: one for Java Preferences, one for SQL databases and another for special cases (the JDBCEntityManager uses several helper classes to support the SQL dialects of different database vendors, but that's another story).

One such special case is maintaining a proprietary system in parallel to a SQL database. All read access can be handled by the SQL database, but each write access has to be duplicated, so that the SQL database stores the record as well as the proprietary system. In the case of VdrAssistant, the interface to the proprietary system is a proprietary TCP/IP protocol, and the EntityManager is a wrapper around JDBCEntityManager: most commands are handled by the inner JDBCEntityManager, but write commands are routed to the inner JDBCEntityManager AND performed by the special entity manager itself.

The application does not need to know anything about the dualism. It treats a VdrEntity as any other Entity.
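The dual write described above can be sketched as a simple wrapper. This is a minimal sketch around a hypothetical, much simplified manager interface - the real SR-JRC EntityManager API is richer, and the proprietary TCP/IP backend is replaced here by a plain list:

```java
import java.util.ArrayList;
import java.util.List;

// hypothetical, simplified manager interface for illustration only
interface SimpleEntityManager {
    Object read(long id);
    void save(Object entity);
}

// wrapper in the spirit of the special entity manager: reads are
// served by the inner (JDBC) manager alone, writes are duplicated
class DuplicatingEntityManager implements SimpleEntityManager {
    private final SimpleEntityManager inner;
    private final List<Object> proprietaryWrites = new ArrayList<>(); // stand-in for the TCP/IP backend

    DuplicatingEntityManager(SimpleEntityManager inner) {
        this.inner = inner;
    }

    public Object read(long id) {
        return inner.read(id); // read access: SQL database only
    }

    public void save(Object entity) {
        inner.save(entity);            // write to the SQL database ...
        proprietaryWrites.add(entity); // ... AND to the proprietary system
    }

    List<Object> getProprietaryWrites() {
        return proprietaryWrites;
    }
}
```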

Airbag wanted?

There are two types of transactions: read- and write-transactions. Write-transactions have already been introduced above. Read-transactions can save a lot of resources, so if you know in advance that you will only read, tell the TransactionManager by using the method setRollbackOnly.

Have a look at this example, where the job-processor reads the recording it needs to transform:

Transaction ta = taFactory.createTransaction();
TORead<Recording> tor = new TORead<Recording>(Recording.class);

tor.addCondition(new EqualConditionElement("id", job.getSubject()));
tor.setReadRelated(true);
ta.add(tor);
ta.setRollbackOnly();
ta.execute();

if (tor.getResult() != null && tor.getResult().size() > 0) {
    job.setSubject(tor.getResult().get(0));
}
   
Additionally, the example above shows the usage of a condition, as well as the trigger that resolves all references of the entity.

One of the goals of SR-JRC is that you decide by API usage whether you'd like to resolve references or not.

Creating the condition shows that job.getSubject() returns an Entity, whereas the property type of "id" is Long - so the types of the condition value and the property value do not need to match. But the helper class that translates the condition needs to find a Converter that is able to translate an Entity into a Long.

After executing the transaction, the job-processor checks whether the read operation was successful and returned a result. If so, the subject will be replaced by the received instance.

variations

SR-JRC supports these transaction operations:


TOCount
determines the number of records that meet the criteria

TORead
reads records (even partially) that meet the criteria. Optionally you can ask for reference resolution.

TORemove
deletes records that meet the criteria

TOSave
stores the given instance(s)

TOSetProperty
stores single attributes of the given instance(s)

TOContextOperation
offers the possibility to modify objects inside a transaction

I say, she says ...

Many values have different representations in the java world and outside of it (database or file). Therefore the data access layer uses translation helpers whenever a value is read or written. Such translation helpers are of type de.schwarzrot.data.support.Converter. The framework already provides the most important converters. The ConverterFactory is a published application service, so the application can add its own Converters or replace existing ones.

The interface of translation helpers is defined as:

public interface Converter {
    public Object fromPersistence(Class type, Object value);

    public Class getValueType();

    public Object toPersistence(Object value, int physicalSize);
}
   
With fromPersistence you'll need to specify "type", the type of the property in the java world. toPersistence does not need any type information, but it may be necessary to cut the given value down to fit into database fields.
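As an illustration, here is a hypothetical String converter that truncates on the way to persistence - my own sketch against the interface above, not one of the converters shipped with the framework:

```java
// the Converter interface as published above
interface Converter {
    Object fromPersistence(Class type, Object value);
    Class getValueType();
    Object toPersistence(Object value, int physicalSize);
}

// hypothetical converter: cuts strings down to the physical
// column size on write, passes values through on read
class TruncatingStringConverter implements Converter {
    public Object fromPersistence(Class type, Object value) {
        return value == null ? null : value.toString();
    }

    public Class getValueType() {
        return String.class;
    }

    public Object toPersistence(Object value, int physicalSize) {
        if (value == null)
            return null;
        String s = value.toString();
        return (physicalSize > 0 && s.length() > physicalSize)
                ? s.substring(0, physicalSize)
                : s;
    }
}
```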