1. Introduction
This documentation describes the GORM API mechanics and how a datastore implementation can be built to provide a GORM API on top of any database. It is mainly targeted at developers interested in creating implementations of GORM on top of alternative datastores.
As of this writing, the project has several implementations of GORM against a variety of different datastores. Current implementations include:
- Hibernate 3, 4 and 5
- MongoDB
- Redis
- Neo4j
- Cassandra
- java.util.ConcurrentHashMap (the fastest datastore in the world)
The remainder of this document describes how the project is structured, how to build the project and how to implement a GORM provider.
2. Getting Started
2.1. Checking out and Building
The project is hosted on GitHub.
You are free to fork the project from there or clone it anonymously using git:
git clone https://github.com/grails/grails-data-mapping.git
cd grails-data-mapping
The project has a Gradle build. To build the project you can run the assemble task:
./gradlew assemble
To install the jar files for the various subprojects into your local Maven repository you can run:
./gradlew publishToMavenLocal
2.2. Project Structure
The project is essentially a multi-project Gradle build. There is a core API and then subprojects that implement that API. The core API subprojects include:
- grails-datastore-core - The core API. This provides the core interfaces for implementing a GORM provider.
- grails-datastore-gorm - The runtime meta-programming and AST transformation infrastructure behind GORM. This also provides end users with APIs like grails.gorm.CriteriaBuilder and grails.gorm.DetachedCriteria.
- grails-datastore-gorm-support - Support classes for easing the writing of a GORM plugin for Grails.
- grails-datastore-gorm-tck - The TCK that includes hundreds of Spock specifications that a GORM implementation will need to pass.
- grails-datastore-web - Classes required to integrate GORM into a web tier.
In addition to this, there are separate subprojects for the GORM implementations for the various datastores listed in the introduction.
3. Understanding the GORM API
3.1. Introduction
The GORM Developer API is divided into a low-level API that implementors must implement for each specific datastore, and a set of higher-level APIs that enhance domain classes with features visible to regular users, such as dynamic finders, criteria queries, and so on.
The low-level API classes are located within the grails-datastore-core subproject, whereas the higher-level APIs used to enhance domain classes can be found in grails-datastore-gorm. In this section, we will discuss the low-level API.
3.2. Datastore Basics
3.2.1. The MappingContext
The org.grails.datastore.mapping.model.MappingContext interface is used to obtain metadata about the classes that are configured for persistence. There are org.grails.datastore.mapping.model.PersistentEntity and org.grails.datastore.mapping.model.PersistentProperty interfaces that represent a class and its properties respectively. These can be obtained and introspected via the MappingContext.
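For example, given a configured MappingContext, the metadata can be introspected like this (a minimal sketch, assuming a Book domain class has already been registered with the context):

import org.grails.datastore.mapping.model.PersistentEntity
import org.grails.datastore.mapping.model.PersistentProperty

// look up the metadata for a registered class and inspect its properties
PersistentEntity entity = mappingContext.getPersistentEntity(Book.name)
for (PersistentProperty prop in entity.persistentProperties) {
    println "${prop.name} -> ${prop.type.simpleName}"
}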
There are various concrete implementations of the MappingContext interface, such as:
- DocumentMappingContext - Used for document stores, subclassed by MongoMappingContext
- JpaMappingContext - Used for JPA
- KeyValueMappingContext - Used by key/value stores
Creating a new MappingContext may be useful because it allows users to configure how a class is mapped to the underlying datastore using GORM’s mapping block, as well as allowing registration of custom type converters and so on. The implementation for Neo4j looks like this:
class Neo4jMappingContext extends AbstractMappingContext {

    MappingFactory<Collection, Attribute> mappingFactory
    MappingConfigurationStrategy syntaxStrategy

    Neo4jMappingContext() {
        mappingFactory = new GraphGormMappingFactory()
        syntaxStrategy = new GormMappingConfigurationStrategy(mappingFactory)
        //addTypeConverter(new StringToNumberConverterFactory().getConverter(BigDecimal))
        addTypeConverter(new StringToShortConverter())
        addTypeConverter(new StringToBigIntegerConverter())
        ...
    }

    @Override
    protected PersistentEntity createPersistentEntity(Class javaClass) {
        GraphPersistentEntity persistentEntity = new GraphPersistentEntity(javaClass, this)
        mappingFactory.createMappedForm(persistentEntity) // populates mappingFactory.entityToPropertyMap as a side effect
        persistentEntity
    }

    MappingConfigurationStrategy getMappingSyntaxStrategy() {
        syntaxStrategy
    }

    MappingFactory getMappingFactory() {
        mappingFactory
    }
}
Notice how Neo4j provides a custom GraphGormMappingFactory and GraphPersistentEntity to allow the domain class configuration to be changed for a given Neo4j Node.
3.2.2. The Datastore Interface
The org.grails.datastore.mapping.core.Datastore interface is the equivalent of a SQL DataSource, in that it provides the necessary capability to create a connection. In most cases one can simply subclass the AbstractDatastore super class and implement the createSession method. The following implementation is from the SimpleMapDatastore, which implements GORM on top of a ConcurrentHashMap:
@Override
protected Session createSession(PropertyResolver connDetails) {
    return new SimpleMapSession(this, getMappingContext(), getApplicationEventPublisher());
}
The implementation depends a lot on the underlying datastore. For example, for MongoDB the following implementation is used:

@Override
protected Session createSession(PropertyResolver connDetails) {
    return new MongoSession(this, getMappingContext(), getApplicationEventPublisher(), false);
}
Notice that the Datastore also has a reference to the MappingContext discussed in the previous section.
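For example, a session can be obtained from a datastore via the connect() method. The following sketch mirrors the TCK setup code shown later in this document, using the ConcurrentHashMap implementation:

import org.grails.datastore.mapping.simple.SimpleMapDatastore
import org.springframework.context.support.GenericApplicationContext

def ctx = new GenericApplicationContext()
ctx.refresh()

def datastore = new SimpleMapDatastore(ctx)
datastore.mappingContext.addPersistentEntity(Book)

// connect() creates a new Session via the protected createSession method
def session = datastore.connect()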
3.2.3. The Session Interface
The org.grails.datastore.mapping.core.Session interface represents an active connection. It can be either stateful or stateless, depending on the implementation. For an embedded database where there is no network connection, a stateful session is not particularly useful, but for a datastore that accesses the database over the network you may want to cache returned instances to reduce load.
The AbstractSession class provides some support for creating stateful sessions; if you prefer a stateless implementation then simply implement Session or subclass AbstractAttributeStoringSession.
In general, if you subclass AbstractSession, the minimum you need to do is implement the createPersister method:
protected Persister createPersister(Class cls, MappingContext mappingContext) {
    PersistentEntity entity = mappingContext.getPersistentEntity(cls.getName());
    if (entity == null) {
        return null;
    }
    return new SimpleMapEntityPersister(mappingContext, entity, this,
        (SimpleMapDatastore) getDatastore(), publisher);
}
The example above is from the SimpleMapSession implementation, which creates a SimpleMapEntityPersister instance and returns it. Returning null indicates that the class cannot be persisted, and an exception will be thrown.
3.3. Implementing CRUD
3.3.1. The EntityPersister Interface
The EntityPersister interface is used to implement the basic Create, Read, Update and Delete (CRUD) operations. There are individual methods to implement such as persistEntity, updateEntity, deleteEntity and so on.
In many cases there is a representation of an entity in its "native" form, as supplied by the datastore driver. For example, in Cassandra this could be a ColumnFamily, or in MongoDB a DBCollection.
To support implementing such cases, there is an abstract NativeEntryEntityPersister<T, K> super class that provides the basis for an implementation that maps a native entry, such as a MongoDB DBObject or a Neo4j Node, to a persisted entity and back again.
The two generic types of this superclass indicate the native entry type (for example, DBObject in MongoDB) and the native key type (for example, ObjectId in MongoDB). The MongoDB implementation looks like this:
public class MongoEntityPersister extends NativeEntryEntityPersister<DBObject, Object>
Note that Object is used for the key, since MongoDB also supports Long- and String-based identifiers.
The key methods that need implementing are defined below:
- getEntityFamily() - Defines the name of the entity group or family. This could be a database table, a Cassandra column family or a MongoDB collection.
- T createNewEntry(String family) - Creates a native entry ready to be inserted.
- Object getEntryValue(T nativeEntry, String property) - Retrieves a value of an entry and returns it in its Java object form. For example, a "date" property stored as a String in the datastore would need to be returned as a java.util.Date at this point.
- setEntryValue(T nativeEntry, String key, Object value) - Sets a value of the native entry, converting any Java objects to the required native format.
- deleteEntry(String family, K key, Object entry) - Deletes an entry for the given family, native key and entry.
- T retrieveEntry(PersistentEntity persistentEntity, String family, Serializable key) - Retrieves a native entry for the given entity, family and key.
- K storeEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, K storeId, T nativeEntry) - Stores a native entry for the given id.
- updateEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, K key, T entry) - Updates an entry.
- K generateIdentifier(PersistentEntity persistentEntity, T entry) - Generates an identifier for the given native entry.
- PropertyValueIndexer getPropertyIndexer(PersistentProperty property) - If the datastore requires manual indexing, you’ll need to implement a PropertyIndexer; otherwise return null.
- AssociationIndexer getAssociationIndexer(T nativeEntry, Association association) - If the datastore requires manual indexing, you’ll need to implement an AssociationIndexer; otherwise return null.
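To make this concrete, here is a hypothetical sketch of a map-backed persister implementing a few of these methods. The constructor and the remaining abstract methods are elided, and MapEntityPersister is an illustrative name rather than an actual class:

class MapEntityPersister extends NativeEntryEntityPersister<Map, Long> {

    // constructor and the remaining abstract methods omitted
    ...

    @Override
    String getEntityFamily() {
        persistentEntity.name // use the class name as the "table"/"collection" name
    }

    @Override
    protected Map createNewEntry(String family) {
        [:] // a plain map serves as the native entry
    }

    @Override
    protected Object getEntryValue(Map nativeEntry, String property) {
        nativeEntry[property]
    }

    @Override
    protected void setEntryValue(Map nativeEntry, String key, Object value) {
        nativeEntry[key] = value
    }
}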
3.3.2. Create
The createNewEntry method is used to create a native record that will be inserted into the datastore. In MongoDB this is a DBObject, whilst in the implementation for ConcurrentHashMap it is another Map:
@Override
protected DBObject createNewEntry(String family) {
    return new BasicDBObject();
}
3.3.3. Read
The retrieveEntry method is used to retrieve a native record for a given key:
protected DBObject retrieveEntry(final PersistentEntity persistentEntity,
        String family, final Serializable key) {
    return mongoTemplate.execute(new DbCallback<DBObject>() {
        public DBObject doInDB(DB con) throws MongoException, DataAccessException {
            DBCollection dbCollection = con.getCollection(getCollectionName(persistentEntity));
            return dbCollection.findOne(key);
        }
    });
}
Here you can see the MongoDB implementation, which uses a Spring Data MongoTemplate to find a DBObject for the given key. There is a separate storeEntry method that is used to actually store the native object. In MongoDB this looks like:
@Override
protected Object storeEntry(final PersistentEntity persistentEntity, final EntityAccess entityAccess,
        final Object storeId, final DBObject nativeEntry) {
    return mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            nativeEntry.put(MONGO_ID_FIELD, storeId);
            return storeId;
        }
    });
}
Notice that it doesn’t actually do a native insert into a MongoDB collection. This is because the Datastore API supports the notion of batch insert operations and flushing. In the case of MongoDB, the MongoSession implementation overrides the flushPendingInserts method of AbstractSession and performs a batch insert of multiple MongoDB documents (i.e. DBObjects) at once:
collection.insert(dbObjects.toArray(DBObject::new), writeConcernToUse);
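A hedged sketch of what such an override might look like for a new provider follows. The exact batch-write call depends on the driver, and this assumes that PendingInsert exposes the queued native object via getNativeEntry():

@Override
protected void flushPendingInserts(Map<PersistentEntity, Collection<PendingInsert>> inserts) {
    inserts.each { PersistentEntity entity, Collection<PendingInsert> pending ->
        // gather the queued native entries for this entity type...
        def nativeEntries = pending.collect { it.nativeEntry }
        // ...and write them in a single driver-specific batch operation, e.g.
        // collection.insert(nativeEntries, writeConcern)
    }
}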
Other datastores that do not support batch inserts would instead do the insert in the storeEntry method itself. For example, the implementation for ConcurrentHashMap looks like this (in Groovy):
protected storeEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, storeId, Map nativeEntry) {
    if (!persistentEntity.root) {
        nativeEntry.discriminator = persistentEntity.discriminator
    }
    datastore.put(storeId, nativeEntry)
    return storeId
}
3.3.4. Update
The updateEntry method is used to update an entry:
public void updateEntry(final PersistentEntity persistentEntity, final EntityAccess ea,
        final Object key, final DBObject entry) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            String collectionName = getCollectionName(persistentEntity, entry);
            DBCollection dbCollection = con.getCollection(collectionName);
            DBObject dbo = createDBObjectWithKey(key); // build the { _id: key } query object
            if (isVersioned(ea)) {
                // TODO this should be done with a CAS approach if possible
                DBObject previous = dbCollection.findOne(dbo);
                checkVersion(ea, previous, persistentEntity, key);
            }
            MongoSession mongoSession = (MongoSession) session;
            dbCollection.update(dbo, entry, false, false, mongoSession.getWriteConcern());
            return null;
        }
    });
}
As you can see, the underlying database-specific update method is used again, in this case the DBCollection's update method.
3.3.5. Delete
The deleteEntry method is used to delete an entry. For example, in the ConcurrentHashMap implementation it is simply removed from the map:
protected void deleteEntry(String family, key, entry) {
    datastore.remove(key)
}
Whilst in MongoDB the DBCollection object’s remove method is called:
@Override
protected void deleteEntry(String family, final Object key, final Object entry) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            DBCollection dbCollection = getCollection(con);
            MongoSession mongoSession = (MongoSession) session;
            dbCollection.remove(createDBObjectWithKey(key), mongoSession.getWriteConcern());
            return null;
        }

        protected DBCollection getCollection(DB con) {
            return con.getCollection(getCollectionName(getPersistentEntity()));
        }
    });
}
Note that if the underlying datastore supports batch delete operations, you may want to override and implement the deleteEntries method, which allows deleting multiple entries in a single operation. The implementation for MongoDB looks like:
protected void deleteEntries(String family, final List<Object> keys) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            String collectionName = getCollectionName(getPersistentEntity());
            DBCollection dbCollection = con.getCollection(collectionName);

            MongoSession mongoSession = (MongoSession) getSession();
            MongoQuery query = mongoSession.createQuery(getPersistentEntity().getJavaClass());
            query.in(getPersistentEntity().getIdentity().getName(), keys);

            dbCollection.remove(query.getMongoQuery());
            return null;
        }
    });
}
You’ll notice that this implementation uses a MongoQuery instance. Also, it’s important to note that when implementing an EntityPersister you enable basic CRUD operations, but not querying. The latter is a subject we’ll explore in the following sections. However, before delving into that, we need to cover secondary indices, as they are required for querying.
3.4. Secondary Indexing
Many datastores do not support secondary indexing or require you to build your own. In cases like this, you will need to implement a PropertyIndexer.
Note: if the underlying datastore supports secondary indexes, it is OK to just return a null PropertyIndexer and let the datastore handle the indexing.
For example, the ConcurrentHashMap implementation creates secondary indices by populating another Map containing the indices:
void index(value, primaryKey) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (indexed == null) {
        indexed = []
        indices[index] = indexed
    }
    if (!indexed.contains(primaryKey)) {
        indexed << primaryKey
    }
}
The implementation for Redis is very similar and stores the primary key in a Redis set:
public void index(final Object value, final Long primaryKey) {
    if (value == null) {
        return;
    }
    final String primaryIndex = createRedisKey(value);
    redisTemplate.sadd(primaryIndex, primaryKey);
}
An index name is typically built from the entity name, property name and property value. The primary key of the entity is stored in this index for later querying. In fact, there is a query method that needs to be implemented on PropertyIndexer. The ConcurrentHashMap implementation looks like this:
List query(value, int offset, int max) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (!indexed) {
        return Collections.emptyList()
    }
    return indexed[offset..max]
}
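The getIndexName helper used in these snippets is not shown in this document; a hypothetical implementation following the convention described above (entity name, property name and value) might look like:

String getIndexName(value) {
    // entity and property are assumed to be fields of the indexer
    // e.g. "Book:title:The Stand"
    "${entity.name}:${property.name}:${value}"
}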
Depending on the characteristics of the underlying database, you may want to do the indexing asynchronously, or you may want to index into a search library such as Lucene. For datastores that are eventually consistent, for example, it makes sense to do all indexing asynchronously.
Finally, when an object is deleted it will need to be removed from the indices. This can be done with the deindex method:
void deindex(value, primaryKey) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (indexed) {
        indexed.remove(primaryKey)
    }
}
3.5. Implementing Querying
3.5.1. Introduction
The org.grails.datastore.mapping.query.Query abstract class defines the query model, and it is the job of the GORM implementor to translate this query model into an underlying database query. This differs depending on the implementation and may involve:

- Generating a String-based query such as SQL or JPA-QL
- Creating a query object, such as MongoDB’s use of a Document to define queries
- Generating a query for use with manually created secondary indices, as is the case with Redis
The Query object defines the following:

- One or many Criterion instances that define the criteria to query by
- Zero or more Projection instances that define what the returned data will look like
- Pagination parameters such as max and offset
- Sorting parameters
There are many types of Criterion for each specific type of query; examples include Equals, Between, Like etc. Depending on the capabilities of the underlying datastore, you may implement only a few of these.
There are also many types of Projection, such as SumProjection, MaxProjection and CountProjection. Again, you may implement only a few of these.
Note: if, for instance, the underlying datastore does not support the calculation of a sum or max for a specific property, you can use the ManualProjections class to carry out these operations in memory on the client.
Writing a Query implementation is probably the most complex part of implementing a GORM provider, but it starts by subclassing the Query class and implementing the executeQuery method:
public class MongoQuery extends Query implements QueryArgumentsAware {
    ...
}
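At its core, the subclass must translate the criteria tree when executeQuery is invoked. A hedged skeleton for a new provider, assuming the protected Query(Session, PersistentEntity) constructor and executeQuery signature from grails-datastore-core, might look like:

class XyzQuery extends Query {

    XyzQuery(Session session, PersistentEntity entity) {
        super(session, entity)
    }

    @Override
    protected List executeQuery(PersistentEntity entity, Query.Junction criteria) {
        // walk the criteria tree here, translate it into a native query,
        // then execute it and return the matching instances
        []
    }
}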
3.5.2. Using the Query Model
To implement querying you need to understand the query model. As discussed, a Query contains a list of Criterion. However, the root Criterion could be a conjunction (an AND query) or a disjunction (an OR query). The Query may also contain a combination of regular criteria (=, !=, LIKE etc.) and junctions (AND, OR or NOT). Implementing a Query therefore requires writing a recursive method. The implementation for ConcurrentHashMap looks like:
Collection executeSubQueryInternal(criteria, criteriaList) {
    SimpleMapResultList resultList = new SimpleMapResultList(this)
    for (Query.Criterion criterion in criteriaList) {
        if (criterion instanceof Query.Junction) {
            resultList.results << executeSubQueryInternal(criterion, criterion.criteria)
        }
        else {
            PersistentProperty property = getValidProperty(criterion)
            def handler = handlers[criterion.getClass()]
            def results = handler?.call(criterion, property) ?: []
            resultList.results << results
        }
    }
}
Note that if a Junction is encountered (representing AND, OR or NOT), the method recursively handles the junctions. Otherwise, it obtains and executes a handler for the Criterion class. The handlers map is a map of Criterion class to query handler. The implementation for Equals appears as follows:
def handlers = [
    ...
    (Query.Equals): { Query.Equals equals, PersistentProperty property ->
        def indexer = entityPersister.getPropertyIndexer(property)
        final value = subqueryIfNecessary(equals)
        return indexer.query(value)
    }
    ...
]
This approach simply employs the property indexer to query for all identifiers. However, it’s worth noting that this is a scenario involving a datastore, such as ConcurrentHashMap, that lacks support for secondary indices. Instead of manually querying secondary indices in this manner, an alternative might be to construct a String-based or native query. For instance, in MongoDB, this process appears as follows:
queryHandlers.put(Equals.class, new QueryHandler<Equals>() {
    public void handle(PersistentEntity entity, Equals criterion, Document query) {
        String propertyName = getPropertyName(entity, criterion);
        Object value = criterion.getValue();
        PersistentProperty property = entity.getPropertyByName(criterion.getProperty());
        MongoEntityPersister.setDBObjectValue(query, propertyName, value, entity.getMappingContext());
    }
});
Observe that in this case the query takes the form of a MongoDB Document. In the context of Gemfire, the implementation differs as follows:
queryHandlers.put(Equals.class, new QueryHandler() {
    public int handle(PersistentEntity entity, Criterion criterion, StringBuilder q, List params, int index) {
        Equals eq = (Equals) criterion;
        final String name = eq.getProperty();
        validateProperty(entity, name, Equals.class);

        q.append(calculateName(entity, name));
        return appendOrEmbedValue(q, params, index, eq.getValue(), EQUALS);
    }
});
In this case a StringBuilder is used to construct an OQL query from the Query model.
3.6. GORM Enhancer
Once you have implemented the lower-level APIs you can trivially provide a GORM API to a set of Grails domain classes. For example consider the following simple domain class:
import grails.persistence.*

@Entity
class Book {
    String title
}
The following setup code can be written to enable GORM for MongoDB:
// create context
def context = new MongoMappingContext(databaseName)
context.addPersistentEntity(Book)
// create datastore
def mongoDatastore = new MongoDatastore(context)
mongoDatastore.afterPropertiesSet()
// enhance
def enhancer = new MongoGormEnhancer(mongoDatastore, new DatastoreTransactionManager(datastore: mongoDatastore))
enhancer.enhance()
// use GORM!
def books = Book.list()
The key element for enabling the use of all GORM methods (list(), dynamic finders, etc.) is the use of the MongoGormEnhancer. This class is a subclass of org.grails.datastore.gorm.GormEnhancer and offers extensions to GORM specifically tailored for MongoDB. However, a subclass is not mandatory, and if you don’t need any datastore-specific extensions you can equally use the standard GormEnhancer:
def enhancer = new GormEnhancer(mongoDatastore, new DatastoreTransactionManager(datastore: mongoDatastore))
enhancer.enhance()
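Once the enhancer has run, the full GORM API is available on the domain class. For example:

// standard GORM methods now work against the MongoDB datastore
new Book(title: "The Stand").save(flush: true)

def book = Book.findByTitle("The Stand")
def total = Book.count()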
3.7. Adding to GORM APIs
By default, the GORM compiler ensures that all GORM entities implement the GormEntity trait, which provides them with all of the default GORM methods. Nevertheless, if there’s a need to extend GORM functionality to incorporate additional methods tailored to a specific datastore, you can achieve this by extending the GormEntity trait.
For example Neo4j adds methods for Cypher querying:
trait Neo4jEntity<D> extends GormEntity<D> {

    static Result cypherStatic(String queryString, Map params) {
        def session = AbstractDatastore.retrieveSession(Neo4jDatastore)
        def graphDatabaseService = (GraphDatabaseService) session.nativeInterface
        graphDatabaseService.execute(queryString, params)
    }
}
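Domain classes that implement the trait then gain this method. For example, given a hypothetical Person entity (the Cypher parameter placeholder syntax varies between Neo4j versions):

def result = Person.cypherStatic("MATCH (p:Person) WHERE p.name = {name} RETURN p", [name: 'Graeme'])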
With this addition, you need to instruct the GORM compiler to make entities implement this trait. To achieve this, implement a GormEntityTraitProvider:
package org.grails.datastore.gorm.neo4j

import grails.neo4j.Neo4jEntity
import groovy.transform.CompileStatic
import org.grails.compiler.gorm.GormEntityTraitProvider

@CompileStatic
class Neo4jEntityTraitProvider implements GormEntityTraitProvider {
    final Class entityTrait = Neo4jEntity
}
And then add a src/main/resources/META-INF/services/org.grails.compiler.gorm.GormEntityTraitProvider file specifying the name of your trait provider:
org.grails.datastore.gorm.neo4j.Neo4jEntityTraitProvider
GORM will automatically inject the trait into any domain class discovered in grails-app/domain or annotated with the Entity annotation. However, if Hibernate is present on the classpath, you must inform GORM to map the domain class with Neo4j:
static mapWith = "neo4j"
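For example (Person here is illustrative):

import grails.persistence.*

@Entity
class Person {
    String name

    static mapWith = "neo4j"
}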
4. Using the Test Compatibility Kit
The grails-datastore-gorm-tck project provides several hundred tests to guarantee that a particular GORM implementation is compliant. To use the TCK, you need to define a dependency on the TCK in the subproject’s build.gradle file:
testImplementation project(':grails-datastore-gorm-tck')
Then create a Setup.groovy file that sets up your custom datastore in your implementation. For example, the ConcurrentHashMap implementation has one defined in grails-datastore-gorm-test/src/test/groovy/org/grails/datastore/gorm/Setup.groovy:
class Setup {

    static destroy() {
        // noop
    }

    static Session setup(classes) {
        def ctx = new GenericApplicationContext()
        ctx.refresh()
        def simple = new SimpleMapDatastore(ctx)
        ...
        for (cls in classes) {
            simple.mappingContext.addPersistentEntity(cls)
        }
        ...
        def enhancer = new GormEnhancer(simple, new DatastoreTransactionManager(datastore: simple))
        enhancer.enhance()
        simple.mappingContext.addMappingContextListener({ e -> enhancer.enhance e } as MappingContext.Listener)

        simple.applicationContext.addApplicationListener new DomainEventListener(simple)
        simple.applicationContext.addApplicationListener new AutoTimestampEventListener(simple)

        return simple.connect()
    }
}
Some setup code has been omitted for clarity, but essentially the Setup.groovy class should initialize the Datastore and return a Session from the static setup method, which is passed a list of classes to configure.
With this setup, all the TCK tests will be run against the subproject. If a specific test cannot be implemented because the underlying datastore lacks support for a particular feature, you can create a test with the same name as the failing test, and that will then override the corresponding test in the TCK.
For example, SimpleDB doesn’t support pagination, so you can add a grails.gorm.tests.PagedResultSpec class that overrides the one from the TCK. Each test is a Spock specification, and Spock has an @Ignore annotation that can be used to ignore a particular test:
/**
 * Ignored for SimpleDB because SimpleDB doesn't support pagination
 */
@Ignore
class PagedResultSpec extends GormDatastoreSpec {
    ...
}
5. Step-by-Step Guide to Creating an Implementation
To get started with a new GORM implementation, the following steps are required:
5.1. Initial Directory Creation
git clone https://github.com/grails/grails-data-mapping.git
cd grails-data-mapping
mkdir grails-datastore-gorm-xyz
5.2. Setup Gradle Build
Create build.gradle:
vi grails-datastore-gorm-xyz/build.gradle
With contents:
dependencies {
    implementation project(':grails-datastore-gorm'),
                   project(':grails-datastore-web'),
                   project(':grails-datastore-gorm-support')

    testImplementation project(':grails-datastore-gorm-tck')
    testRuntime "javax.servlet:javax.servlet-api:$servletApiVersion"
}
Add new project to settings.gradle in root project:
vi settings.gradle
Changes shown below:
// GORM Implementations
'grails-datastore-gorm-neo4j',
'grails-datastore-gorm-xyz',
...
5.3. Create Project Source Directories
mkdir -p grails-datastore-gorm-xyz/src/main/groovy
mkdir -p grails-datastore-gorm-xyz/src/test/groovy
5.4. Generate IDE Project Files and Import into IDE (Optional)
./gradlew grails-datastore-gorm-xyz:idea
Or
./gradlew grails-datastore-gorm-xyz:eclipse
5.5. Implement Required Interfaces
In src/main/groovy create the following implementations:

- org.grails.datastore.xyz.XyzDatastore extends org.grails.datastore.mapping.core.AbstractDatastore
- org.grails.datastore.xyz.XyzSession extends org.grails.datastore.mapping.core.AbstractSession
- org.grails.datastore.xyz.engine.XyzEntityPersister extends org.grails.datastore.mapping.engine.NativeEntryEntityPersister
- org.grails.datastore.xyz.query.XyzQuery extends org.grails.datastore.mapping.query.Query
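As a starting point, a hedged skeleton of the Datastore implementation might look like the following. The no-argument constructor and the KeyValueMappingContext used here are illustrative choices; check the constructor signatures against the version of grails-datastore-core you are building against:

package org.grails.datastore.xyz

import org.grails.datastore.mapping.core.AbstractDatastore
import org.grails.datastore.mapping.core.Session
import org.grails.datastore.mapping.keyvalue.mapping.config.KeyValueMappingContext
import org.springframework.core.env.PropertyResolver

class XyzDatastore extends AbstractDatastore {

    XyzDatastore() {
        super(new KeyValueMappingContext("xyz")) // the keyspace name is illustrative
    }

    @Override
    protected Session createSession(PropertyResolver connectionDetails) {
        new XyzSession(this, mappingContext, applicationEventPublisher)
    }
}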
5.6. Create Test Suite
In src/test/groovy create an org.grails.datastore.gorm.Setup class to configure the TCK:
class Setup {

    static xyz

    static destroy() {
        xyz.disconnect()
    }

    static Session setup(classes) {
        def ctx = new GenericApplicationContext()
        ctx.refresh()
        xyz = new XyzDatastore(ctx)
        for (cls in classes) {
            xyz.mappingContext.addPersistentEntity(cls)
        }

        def enhancer = new GormEnhancer(xyz, new DatastoreTransactionManager(datastore: xyz))
        enhancer.enhance()
        xyz.mappingContext.addMappingContextListener({ e -> enhancer.enhance e } as MappingContext.Listener)

        xyz.applicationContext.addApplicationListener new DomainEventListener(xyz)
        xyz.applicationContext.addApplicationListener new AutoTimestampEventListener(xyz)

        xyz.connect()
    }
}
Then in src/test/groovy create a test suite class to allow running tests in the IDE (without this you won’t be able to run TCK tests from the IDE). Example test suite:
package org.grails.datastore.gorm

import org.junit.runners.Suite.SuiteClasses
import org.junit.runners.Suite
import org.junit.runner.RunWith
import grails.gorm.tests.*

/**
 * @author graemerocher
 */
@RunWith(Suite)
@SuiteClasses([
    FindByMethodSpec,
    ListOrderBySpec
])
class XyzTestSuite {
}
5.7. Implement the TCK!
Keep iterating until you have implemented all the tests in the TCK.
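The TCK can be run for the new subproject with the standard Gradle test task:

./gradlew grails-datastore-gorm-xyz:test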