Introduction
This documentation describes the mechanics of the GORM API and how a datastore implementation can be built to provide a GORM API on top of any database. It is mainly targeted at developers interested in creating implementations of GORM on top of alternative datastores.
As of this writing the project has several implementations of GORM against a variety of different datastores. Current implementations include:
- Hibernate 3, 4 and 5
- MongoDB
- Redis
- Neo4j
- Cassandra
- java.util.ConcurrentHashMap (the fastest datastore in the world)
The remainder of this document describes how the project is structured, how to build the project and how to implement a GORM provider.
Getting Started
Checking out and Building
The project is currently hosted on GitHub at [https://github.com/grails/grails-data-mapping].
You are free to fork the project from there or clone it using git:
git clone git@github.com:grails/grails-data-mapping.git
cd grails-data-mapping
The project has a Gradle build. You will need IntelliJ IDEA 15 or greater to work with the source code; use its Gradle tooling to import the project.
To build the project you can run the assemble task:
./gradlew assemble
To install the jar files for the various subprojects into your local Maven repository you can run:
./gradlew install
To build all of the documentation run the command:
./gradlew allDocs
Documentation will be produced in the build/docs directory.
Note: If you experience PermGen errors when building the documentation you may need to increase the JVM PermGen space inside GRADLE_OPTS.
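For example, setting a larger PermGen size before running Gradle might look like this (the 512m value is an assumption; adjust as needed, and note that PermGen only applies to Java 7 and earlier):

export GRADLE_OPTS="-XX:MaxPermSize=512m"
./gradlew allDocs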
Project Structure
The project is essentially a multi-project Gradle build. There is a core API and then subprojects that implement that API. The core API subprojects include:
- grails-datastore-core - The core API; provides the core interfaces for implementing a GORM provider
- grails-datastore-gorm - The runtime meta-programming and AST transformation infrastructure behind GORM. Also provides end-user APIs such as grails.gorm.CriteriaBuilder and grails.gorm.DetachedCriteria
- grails-datastore-gorm-support - Support classes for easing the writing of a GORM plugin for Grails
- grails-datastore-gorm-tck - The TCK, which includes hundreds of Spock specifications that a GORM implementation will need to pass
- grails-datastore-web - Classes required to integrate GORM into a web tier
Beyond these core subprojects there are implementations for various datastores. For example:
- grails-datastore-hibernate/grails-datastore-gorm-hibernate - GORM for Hibernate
- grails-datastore-mongodb/grails-datastore-gorm-mongo - GORM for MongoDB [https://grails.org/plugin/mongodb]
- grails-datastore-neo4j - GORM for Neo4j [https://grails.org/plugin/neo4j]
- grails-datastore-redis/grails-datastore-gorm-redis - GORM for Redis [https://grails.org/plugin/redis]
- grails-datastore-cassandra/grails-datastore-gorm-cassandra - GORM for Cassandra [https://grails.org/plugin/cassandra]
The documentation for each implementation is kept in the documentation subprojects that start with grails-documentation. There are documentation projects for the core API, MongoDB, Neo4j, Redis, and Cassandra.
Finally, the Grails 3 plugins that are used to distribute the GORM implementations to end users can be found in the grails-plugins directory, and in the grails2-plugins directory for Grails 2.x.
Understanding the GORM API
Introduction
The GORM Developer API is split into a low-level API that implementors need to implement for each individual datastore, and a set of higher-level APIs that enhance domain classes with the features end users see, such as dynamic finders, criteria queries and so on.
The low-level API classes are found in the grails-datastore-core subproject, whilst the higher-level APIs used to enhance domain classes are found in grails-datastore-gorm. In this section we will discuss the low-level API.
Datastore Basics
The MappingContext
The org.grails.datastore.mapping.model.MappingContext interface is used to obtain metadata about the classes that are configured for persistence. There are org.grails.datastore.mapping.model.PersistentEntity and org.grails.datastore.mapping.model.PersistentProperty interfaces that represent a class and its properties respectively. These can be obtained and introspected via the MappingContext.
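As a minimal sketch, a MappingContext can be created and introspected as follows (assuming a key/value store context; "test" is an arbitrary keyspace name, Book is an illustrative domain class, and import paths may vary between versions):

import org.grails.datastore.mapping.keyvalue.mapping.config.KeyValueMappingContext

def context = new KeyValueMappingContext("test")
def entity = context.addPersistentEntity(Book)
for (prop in entity.persistentProperties) {
    // each PersistentProperty exposes the property name and Java type
    println "${prop.name}: ${prop.type.simpleName}"
}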
There are various concrete implementations of the MappingContext interface, such as:
- DocumentMappingContext - Used for document stores, subclassed by MongoMappingContext
- JpaMappingContext - Used for JPA
- KeyValueMappingContext - Used by key/value stores
Creating a new MappingContext may be useful because it allows users to configure how a class is mapped to the underlying datastore using GORM's mapping block, as well as allowing registration of custom type converters and so on. The implementation for Neo4j looks like this:
class Neo4jMappingContext extends AbstractMappingContext {

    MappingFactory<Collection, Attribute> mappingFactory
    MappingConfigurationStrategy syntaxStrategy

    Neo4jMappingContext() {
        mappingFactory = new GraphGormMappingFactory()
        syntaxStrategy = new GormMappingConfigurationStrategy(mappingFactory)
        //addTypeConverter(new StringToNumberConverterFactory().getConverter(BigDecimal))
        addTypeConverter(new StringToShortConverter())
        addTypeConverter(new StringToBigIntegerConverter())
        ...
    }

    @Override
    protected PersistentEntity createPersistentEntity(Class javaClass) {
        GraphPersistentEntity persistentEntity = new GraphPersistentEntity(javaClass, this)
        mappingFactory.createMappedForm(persistentEntity) // populates mappingFactory.entityToPropertyMap as a side effect
        persistentEntity
    }

    MappingConfigurationStrategy getMappingSyntaxStrategy() {
        syntaxStrategy
    }

    MappingFactory getMappingFactory() {
        mappingFactory
    }
}
Notice how Neo4j provides a custom GraphGormMappingFactory and GraphPersistentEntity to allow the domain class configuration to be changed for a given Neo4j Node.
The Datastore Interface
The org.grails.datastore.mapping.core.Datastore interface is the equivalent of a SQL DataSource, whereby it provides the necessary capability to create a connection. In most cases one can simply subclass the AbstractDatastore super class and implement the createSession method. The following implementation is from the SimpleMapDatastore, which implements GORM on top of a ConcurrentHashMap:
@Override
protected Session createSession(PropertyResolver connDetails) {
    return new SimpleMapSession(this, getMappingContext(), getApplicationEventPublisher());
}
The implementation depends a lot on the underlying datastore. For example for MongoDB the following implementation is used:
@Override
protected Session createSession(PropertyResolver connDetails) {
    return new MongoSession(this, getMappingContext(), getApplicationEventPublisher(), false);
}
Notice that the Datastore also has a reference to the MappingContext discussed in the previous section.
The Session Interface
The org.grails.datastore.mapping.core.Session interface represents an active connection. It can be either stateful or stateless, depending on the implementation. For example, for embedded databases where there is no network connection, a stateful session is not particularly useful, but for a datastore that communicates over the network you may want to cache returned instances to reduce load.
The AbstractSession class provides some support for creating stateful sessions. If you prefer a stateless implementation then simply implement Session or subclass AbstractAttributeStoringSession.
In general if you subclass AbstractSession the minimum you need to do is implement the createPersister method:
protected Persister createPersister(Class cls, MappingContext mappingContext) {
    PersistentEntity entity = mappingContext.getPersistentEntity(cls.getName());
    if (entity == null) {
        return null;
    }
    return new SimpleMapEntityPersister(mappingContext, entity, this,
            (SimpleMapDatastore) getDatastore(), publisher);
}
The example above is from the SimpleMapSession implementation, which creates a SimpleMapEntityPersister instance and returns it. Returning null indicates that the class cannot be persisted and an exception will be thrown.
Implementing CRUD
The EntityPersister Interface
The EntityPersister interface is used to implement the basic Create, Read, Update and Delete (CRUD) operations. There are individual methods to implement such as persistEntity, updateEntity, deleteEntity and so on.
In many cases there is a representation of an entity in its "native" form as supplied by the datastore driver. For example in Cassandra this could be a ColumnFamily, or in MongoDB a DBCollection.
To support such cases there is an abstract NativeEntryEntityPersister<T, K> super class that provides the basis for an implementation that maps a native entry, such as a MongoDB DBObject or a Neo4j Node, to a persistent entity and back again.
The two generic types of this superclass indicate the native entry type (for example DBObject in MongoDB) and the native key type (for example ObjectId in MongoDB). The MongoDB implementation looks like this:
public class MongoEntityPersister extends NativeEntryEntityPersister<DBObject, Object>
Note that Object is used for the key since MongoDB also supports Long- and String-based identifiers.
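For a hypothetical datastore whose native entry is a Map and whose keys are Longs, the equivalent declaration would be (XyzEntityPersister is an illustrative name, not part of the project):

public class XyzEntityPersister extends NativeEntryEntityPersister<Map, Long>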
The key methods that need implementing are defined below:
- getEntityFamily() - Defines the name of the entity group or family. This could be a database table, a Cassandra column family or a MongoDB collection.
- T createNewEntry(String family) - Creates a native entry ready to be inserted
- Object getEntryValue(T nativeEntry, String property) - Retrieves a value of the entry and returns its Java object form. For example a "date" property stored as a String in the datastore would need to be returned as a java.util.Date at this point (see the sketch after this list)
- setEntryValue(T nativeEntry, String key, Object value) - Sets a value of the native entry, converting any Java objects to the required native format
- deleteEntry(String family, K key, Object entry) - Deletes an entry for the given family, native key and entry
- T retrieveEntry(PersistentEntity persistentEntity, String family, Serializable key) - Retrieves a native entry for the given entity, family and key
- K storeEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, K storeId, T nativeEntry) - Stores a native entry for the given id
- updateEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, K key, T entry) - Updates an entry
- K generateIdentifier(PersistentEntity persistentEntity, T entry) - Generates an identifier for the given native entry
- PropertyValueIndexer getPropertyIndexer(PersistentProperty property) - If the datastore requires manual indexing you'll need to implement a PropertyIndexer, otherwise return null
- AssociationIndexer getAssociationIndexer(T nativeEntry, Association association) - If the datastore requires manual indexing you'll need to implement an AssociationIndexer, otherwise return null
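As a minimal sketch, for the hypothetical Map-backed persister declared earlier, the entry accessor methods might simply delegate to the Map (any type conversion is elided):

@Override
protected Object getEntryValue(Map nativeEntry, String property) {
    // a Map-based native entry already stores Java objects, so no conversion is required
    nativeEntry[property]
}

@Override
protected void setEntryValue(Map nativeEntry, String key, Object value) {
    nativeEntry[key] = value
}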
Create
The createNewEntry method is used to create a native record that will be inserted into the datastore. In MongoDB this is a DBObject, whilst in the implementation for ConcurrentHashMap it is another Map:
@Override
protected DBObject createNewEntry(String family) {
    return new BasicDBObject();
}
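The corresponding ConcurrentHashMap implementation simply returns another Map; a minimal sketch in Groovy:

@Override
protected Map createNewEntry(String family) {
    return [:]
}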
Read
The retrieveEntry method is used to retrieve a native record for a given key:
protected DBObject retrieveEntry(final PersistentEntity persistentEntity,
        String family, final Serializable key) {
    return mongoTemplate.execute(new DbCallback<DBObject>() {
        public DBObject doInDB(DB con) throws MongoException, DataAccessException {
            DBCollection dbCollection = con.getCollection(getCollectionName(persistentEntity));
            return dbCollection.findOne(key);
        }
    });
}
Here you can see the MongoDB implementation, which uses a Spring Data MongoTemplate to find a DBObject for the given key. There is a separate storeEntry method that is used to actually store the native object. In MongoDB this looks like:
@Override
protected Object storeEntry(final PersistentEntity persistentEntity, final EntityAccess entityAccess,
        final Object storeId, final DBObject nativeEntry) {
    return mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            nativeEntry.put(MONGO_ID_FIELD, storeId);
            return storeId;
        }
    });
}
Notice that it doesn't actually do a native insert into a MongoDB collection. This is because the Datastore API supports the notion of batch insert operations and flushing. In the case of MongoDB, the MongoSession implementation overrides the flushPendingInserts method of AbstractSession and performs a batch insert of multiple MongoDB documents (i.e. DBObjects) at once:
collection.insert(dbObjects.toArray(new DBObject[dbObjects.size()]), writeConcernToUse);
Other datastores that do not support batch inserts would instead do the insert in the storeEntry method itself. For example the implementation for ConcurrentHashMap looks like this (note: Groovy code):
protected storeEntry(PersistentEntity persistentEntity, EntityAccess entityAccess, storeId, Map nativeEntry) {
    if (!persistentEntity.root) {
        nativeEntry.discriminator = persistentEntity.discriminator
    }
    datastore[family].put(storeId, nativeEntry)
    return storeId
}
Update
The updateEntry method is used to update an entry:
public void updateEntry(final PersistentEntity persistentEntity, final EntityAccess ea,
        final Object key, final DBObject entry) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            String collectionName = getCollectionName(persistentEntity, entry);
            DBCollection dbCollection = con.getCollection(collectionName);
            DBObject dbo = createDBObjectWithKey(key); // build a query object matching the native key
            if (isVersioned(ea)) {
                // TODO this should be done with a CAS approach if possible
                DBObject previous = dbCollection.findOne(dbo);
                checkVersion(ea, previous, persistentEntity, key);
            }
            MongoSession mongoSession = (MongoSession) session;
            dbCollection.update(dbo, entry, false, false, mongoSession.getWriteConcern());
            return null;
        }
    });
}
As you can see, again the underlying database-specific update method is used, in this case the DBCollection's update method.
Delete
The deleteEntry method is used to delete an entry. For example in the ConcurrentHashMap implementation it is simply removed from the map:
protected void deleteEntry(String family, key, entry) {
    datastore[family].remove(key)
}
Whilst in MongoDB the DBCollection object's remove method is called:
@Override
protected void deleteEntry(String family, final Object key, final Object entry) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            DBCollection dbCollection = getCollection(con);
            DBObject dbo = createDBObjectWithKey(key); // build a query object matching the native key
            MongoSession mongoSession = (MongoSession) session;
            dbCollection.remove(dbo, mongoSession.getWriteConcern());
            return null;
        }

        protected DBCollection getCollection(DB con) {
            return con.getCollection(getCollectionName(getPersistentEntity()));
        }
    });
}
Note that if the underlying datastore supports batch delete operations you may want to override and implement the deleteEntries method, which allows for deleting multiple entries in a single operation. The implementation for MongoDB looks like this:
protected void deleteEntries(String family, final List<Object> keys) {
    mongoTemplate.execute(new DbCallback<Object>() {
        public Object doInDB(DB con) throws MongoException, DataAccessException {
            String collectionName = getCollectionName(getPersistentEntity());
            DBCollection dbCollection = con.getCollection(collectionName);
            MongoSession mongoSession = (MongoSession) getSession();
            MongoQuery query = mongoSession.createQuery(getPersistentEntity().getJavaClass());
            query.in(getPersistentEntity().getIdentity().getName(), keys);
            dbCollection.remove(query.getMongoQuery());
            return null;
        }
    });
}
You'll notice this implementation uses a MongoQuery instance. Note that by implementing an EntityPersister you have enabled basic CRUD operations, but not querying, which is the topic of the following sections. First, however, secondary indices need to be covered, since they are required for querying.
Secondary Indexing
Many datastores do not support secondary indexing, or require you to build your own. In cases like this you will need to implement a PropertyIndexer.
Note: If the underlying datastore supports secondary indexes then it is OK to just return a null PropertyIndexer and let the datastore handle the indexing.
For example the ConcurrentHashMap implementation creates secondary indices by populating another Map containing the indices:
void index(value, primaryKey) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (indexed == null) {
        indexed = []
        indices[index] = indexed
    }
    if (!indexed.contains(primaryKey)) {
        indexed << primaryKey
    }
}
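The getIndexName method referenced above composes the index key. A hedged sketch, assuming the indexer holds a reference to the PersistentProperty it indexes (the exact format is an implementation detail):

String getIndexName(value) {
    // entity name + property name + value, e.g. "~com.example.Book:title:It"
    "~${property.owner.name}:${property.name}:${value}"
}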
The implementation for Redis is very similar and stores the primary key in a Redis set:
public void index(final Object value, final Long primaryKey) {
    if (value == null) {
        return;
    }
    final String primaryIndex = createRedisKey(value);
    redisTemplate.sadd(primaryIndex, primaryKey);
}
An index name is typically built from the entity name, property name and property value. The primary key of the entity is stored in this index for later querying. In fact there is a query method that needs to be implemented on PropertyIndexer. The ConcurrentHashMap implementation looks like this:
List query(value, int offset, int max) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (!indexed) {
        return Collections.emptyList()
    }
    return indexed[offset..max]
}
Depending on the characteristics of the underlying database you may want to do the indexing asynchronously, or you may want to index into a search library such as Lucene. For datastores that are eventually consistent, for example, it makes sense to do all indexing asynchronously.
Finally, when an object is deleted it will need to be removed from the indices. This can be done with the deindex method:
void deindex(value, primaryKey) {
    def index = getIndexName(value)
    def indexed = indices[index]
    if (indexed) {
        indexed.remove(primaryKey)
    }
}
Implementing Querying
Introduction
The org.grails.datastore.mapping.query.Query abstract class defines the query model, and it is the job of the GORM implementor to translate this query model into an underlying database query. This differs depending on the implementation and may involve:
- Generating a String-based query such as SQL or JPA-QL
- Creating a query object, such as MongoDB's use of a Document to define queries
- Generating a query for use with manually created secondary indices, as is the case with Redis
The Query object defines the following:
- One or many Criterion that define the criteria to query by
- Zero or many Projection instances that define what the data you want back will look like
- Pagination parameters such as max and offset
- Sorting parameters
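To get a feel for the model from the consumer side, here is a hedged sketch of constructing and executing a query against an active Session (Book is an illustrative entity class):

Query query = session.createQuery(Book)
query.eq("title", "The Stand") // adds an Equals criterion
query.max(10)                  // pagination
def results = query.list()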
There are many types of Criterion for each specific type of query; examples include Equals, Between, Like etc. Depending on the capabilities of the underlying datastore you may implement only a few of these.
There are also many types of Projection, such as SumProjection, MaxProjection and CountProjection. Again you may implement only a few of these.
Note: If the underlying datastore doesn't, for example, support calculating a sum or max of a particular property, there is a ManualProjections class that you can use to perform these operations in memory on the client.
Writing a Query implementation is probably the most complex part of implementing a GORM provider, but it starts by subclassing the Query class and implementing the executeQuery method:
public class MongoQuery extends Query implements QueryArgumentsAware {
    ...
}
Using the Query Model
To implement querying you need to understand the query model. As discussed, a Query contains a list of Criterion; however, the root Criterion could be a conjunction (an AND query) or a disjunction (an OR query). The Query may also contain a combination of regular criteria (=, !=, LIKE etc.) and junctions (AND, OR or NOT). Implementing a Query therefore requires writing a recursive method. The implementation for ConcurrentHashMap looks like this:
Collection executeSubQueryInternal(criteria, criteriaList) {
    SimpleMapResultList resultList = new SimpleMapResultList(this)
    for (Query.Criterion criterion in criteriaList) {
        if (criterion instanceof Query.Junction) {
            resultList.results << executeSubQueryInternal(criterion, criterion.criteria)
        }
        else {
            PersistentProperty property = getValidProperty(criterion)
            def handler = handlers[criterion.getClass()]
            def results = handler?.call(criterion, property) ?: []
            resultList.results << results
        }
    }
    return resultList
}
Notice that if a Junction is encountered (which represents an AND, OR or NOT) then the method recurses to handle the junctions, otherwise a handler for the Criterion class is obtained and executed. The handlers map is a map of Criterion class to query handler. The implementation for Equals looks like this:
def handlers = [
    ...
    (Query.Equals): { Query.Equals equals, PersistentProperty property ->
        def indexer = entityPersister.getPropertyIndexer(property)
        final value = subqueryIfNecessary(equals)
        return indexer.query(value)
    }
    ...
]
This simply uses the property indexer to query for all identifiers. Of course here we are describing the case of a datastore (in this instance ConcurrentHashMap) which doesn't support secondary indices. It may be that instead of manually querying the secondary indices in this way you simply build a String-based or native query. For example in MongoDB this looks like:
queryHandlers.put(Equals.class, new QueryHandler<Equals>() {
    public void handle(PersistentEntity entity, Equals criterion, Document query) {
        String propertyName = getPropertyName(entity, criterion);
        Object value = criterion.getValue();
        PersistentProperty property = entity.getPropertyByName(criterion.getProperty());
        MongoEntityPersister.setDBObjectValue(query, propertyName, value, entity.getMappingContext());
    }
});
Notice how the query in this case is built up as a native MongoDB query object. For GemFire, again, the implementation is different:
queryHandlers.put(Equals.class, new QueryHandler() {
    public int handle(PersistentEntity entity, Criterion criterion, StringBuilder q, List params, int index) {
        Equals eq = (Equals) criterion;
        final String name = eq.getProperty();
        validateProperty(entity, name, Equals.class);
        q.append(calculateName(entity, name));
        return appendOrEmbedValue(q, params, index, eq.getValue(), EQUALS);
    }
});
In this case a StringBuilder is used to construct an OQL query from the Query model.
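Other criterion types follow the same pattern. As a hedged sketch (not the actual source), a Between handler entry for the ConcurrentHashMap-style handlers map shown earlier could filter entries in memory using the from and to values of the criterion:

(Query.Between): { Query.Between between, PersistentProperty property ->
    // naive in-memory scan over all native entries in the family (illustrative only)
    datastore[family].findAll { id, entry ->
        def value = entry[between.property]
        value >= between.from && value <= between.to
    }.keySet().toList()
}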
GORM Enhancer
Once you have implemented the lower-level APIs you can trivially provide a GORM API to a set of Grails domain classes. For example consider the following simple domain class:
import grails.persistence.*

@Entity
class Book {
    String title
}
The following setup code can be written to enable GORM for MongoDB:
// create context
def context = new MongoMappingContext(databaseName)
context.addPersistentEntity(Book)
// create datastore
def mongoDatastore = new MongoDatastore(context)
mongoDatastore.afterPropertiesSet()
// enhance
def enhancer = new MongoGormEnhancer(mongoDatastore, new DatastoreTransactionManager(datastore: mongoDatastore))
enhancer.enhance()
// use GORM!
def books = Book.list()
The key part to enabling the usage of all the GORM methods (list(), dynamic finders etc.) is the usage of the MongoGormEnhancer. This class subclasses org.grails.datastore.gorm.GormEnhancer and provides some extensions to GORM specific to MongoDB. A subclass is not required, however, and if you don't require any datastore-specific extensions you can just as easily use the regular GormEnhancer:
def enhancer = new GormEnhancer(mongoDatastore, new DatastoreTransactionManager(datastore: mongoDatastore))
enhancer.enhance()
Adding to GORM APIs
By default the GORM compiler will make all GORM entities implement the GormEntity trait, which provides all of the default GORM methods. However, if you want to extend GORM to provide more methods specific to a given datastore you can do so by extending this trait.
For example Neo4j adds methods for Cypher querying:
trait Neo4jEntity<D> extends GormEntity<D> {

    static Result cypherStatic(String queryString, Map params) {
        def session = AbstractDatastore.retrieveSession(Neo4jDatastore)
        def graphDatabaseService = (GraphDatabaseService) session.nativeInterface
        graphDatabaseService.execute(queryString, params)
    }
}
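With the trait applied, end users can call the new method statically on a domain class; for example (the Cypher string and Book class are illustrative):

def result = Book.cypherStatic("MATCH (b:Book) RETURN b", [:])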
With this addition you then need to tell the GORM compiler to make entities implement this trait. To do that, implement a GormEntityTraitProvider:
package org.grails.datastore.gorm.neo4j

import grails.neo4j.Neo4jEntity
import groovy.transform.CompileStatic
import org.grails.compiler.gorm.GormEntityTraitProvider

@CompileStatic
class Neo4jEntityTraitProvider implements GormEntityTraitProvider {
    final Class entityTrait = Neo4jEntity
}
And then add a src/main/resources/META-INF/services/org.grails.compiler.gorm.GormEntityTraitProvider file specifying the name of your trait provider:
org.grails.datastore.gorm.neo4j.Neo4jEntityTraitProvider
GORM will automatically inject the trait into any domain class found in grails-app/domain or annotated with the Entity annotation, unless Hibernate is on the classpath, in which case you have to tell GORM to map the domain class with Neo4j:
static mapWith = "neo4j"
Using the Test Compatibility Kit
The grails-datastore-gorm-tck project provides a few hundred tests that ensure a particular GORM implementation is compliant. To use the TCK you need to define a dependency on the TCK in the subproject's build.gradle file:
testCompile project(':grails-datastore-gorm-tck')
Then create a Setup.groovy file that sets up your custom datastore in your implementation.
For example the ConcurrentHashMap implementation has one defined in grails-datastore-gorm-test/src/test/groovy/org/grails/datastore/gorm/Setup.groovy:
class Setup {

    static destroy() {
        // noop
    }

    static Session setup(classes) {
        def ctx = new GenericApplicationContext()
        ctx.refresh()
        def simple = new SimpleMapDatastore(ctx)
        ...
        for (cls in classes) {
            simple.mappingContext.addPersistentEntity(cls)
        }
        ...
        def enhancer = new GormEnhancer(simple, new DatastoreTransactionManager(datastore: simple))
        enhancer.enhance()

        simple.mappingContext.addMappingContextListener({ e -> enhancer.enhance e } as MappingContext.Listener)

        simple.applicationContext.addApplicationListener new DomainEventListener(simple)
        simple.applicationContext.addApplicationListener new AutoTimestampEventListener(simple)

        return simple.connect()
    }
}
Some setup code has been omitted for clarity, but basically the Setup.groovy class should initialise the Datastore and return a Session from the static setup method, which gets passed a list of classes that need to be configured.
With this done all of the TCK tests will run against the subproject. If a particular test cannot be implemented because the underlying datastore doesn’t support the feature then you can create a test that matches the name of the test that is failing and it will override said test.
For example SimpleDB doesn't support pagination, so there is a grails.gorm.tests.PagedResultSpec class that overrides the one from the TCK. Each test is a Spock specification, and Spock has an Ignore annotation that can be used to ignore a particular test:
/**
 * Ignored for SimpleDB because SimpleDB doesn't support pagination
 */
@Ignore
class PagedResultSpec extends GormDatastoreSpec {
    ...
}
Step by Step Guide to Creating an Implementation
To get started with a new GORM implementation the following steps are required:
Initial Directory Creation
$ git clone git@github.com:grails/grails-data-mapping.git
$ cd grails-data-mapping
$ mkdir grails-datastore-gorm-xyz
Setup Gradle Build
Create build.gradle:
$ vi grails-datastore-gorm-xyz/build.gradle
With contents:
dependencies {
    compile project(':grails-datastore-gorm'),
            project(':grails-datastore-web'),
            project(':grails-datastore-gorm-support')

    testCompile project(':grails-datastore-gorm-tck')
    testRuntime "javax.servlet:javax.servlet-api:$servletApiVersion"
}
Add new project to settings.gradle in root project:
$ vi settings.gradle
Changes shown below:
// GORM Implementations
'grails-datastore-gorm-neo4j',
'grails-datastore-gorm-xyz',
....
Create Project Source Directories
$ mkdir grails-datastore-gorm-xyz/src/main/groovy
$ mkdir grails-datastore-gorm-xyz/src/test/groovy
Generate IDE Project Files and Import into IDE
$ ./gradlew grails-datastore-gorm-xyz:idea
Or
$ ./gradlew grails-datastore-gorm-xyz:eclipse
Implement Required Interfaces
In src/main/groovy create the following implementations (a minimal sketch of the Datastore class follows this list):
- org.grails.datastore.xyz.XyzDatastore - extends org.grails.datastore.mapping.core.AbstractDatastore
- org.grails.datastore.xyz.XyzSession - extends org.grails.datastore.mapping.core.AbstractSession
- org.grails.datastore.xyz.engine.XyzEntityPersister - extends org.grails.datastore.mapping.engine.NativeEntryEntityPersister
- org.grails.datastore.xyz.query.XyzQuery - extends org.grails.datastore.mapping.query.Query
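As a starting point, a minimal sketch of the XyzDatastore might look like the following (the AbstractDatastore super constructor signature and import paths vary between versions of grails-datastore-core, and all Xyz names are illustrative, so treat this as a sketch rather than a definitive implementation):

package org.grails.datastore.xyz

import org.grails.datastore.mapping.core.AbstractDatastore
import org.grails.datastore.mapping.core.Session
import org.grails.datastore.mapping.keyvalue.mapping.config.KeyValueMappingContext
import org.springframework.context.ConfigurableApplicationContext
import org.springframework.core.env.PropertyResolver

class XyzDatastore extends AbstractDatastore {

    XyzDatastore(ConfigurableApplicationContext ctx) {
        // "xyz" is an arbitrary keyspace name; a custom MappingContext could be used instead
        super(new KeyValueMappingContext("xyz"), null, ctx)
    }

    @Override
    protected Session createSession(PropertyResolver connectionDetails) {
        // XyzSession is your Session implementation from the list above
        new XyzSession(this, mappingContext, applicationEventPublisher)
    }
}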
Create Test Suite
In src/test/groovy create an org.grails.datastore.gorm.Setup class to configure the TCK:
class Setup {

    static xyz

    static destroy() {
        xyz.disconnect()
    }

    static Session setup(classes) {
        def ctx = new GenericApplicationContext()
        ctx.refresh()
        xyz = new XyzDatastore(ctx)
        for (cls in classes) {
            xyz.mappingContext.addPersistentEntity(cls)
        }

        def enhancer = new GormEnhancer(xyz, new DatastoreTransactionManager(datastore: xyz))
        enhancer.enhance()

        xyz.mappingContext.addMappingContextListener({ e -> enhancer.enhance e } as MappingContext.Listener)

        xyz.applicationContext.addApplicationListener new DomainEventListener(xyz)
        xyz.applicationContext.addApplicationListener new AutoTimestampEventListener(xyz)

        xyz.connect()
    }
}
Then in src/test/groovy create a test suite class to allow running tests in the IDE (without this you won't be able to run TCK tests from the IDE). Example test suite:
package org.grails.datastore.gorm

import org.junit.runners.Suite.SuiteClasses
import org.junit.runners.Suite
import org.junit.runner.RunWith
import grails.gorm.tests.*

/**
 * @author graemerocher
 */
@RunWith(Suite)
@SuiteClasses([
    FindByMethodSpec,
    ListOrderBySpec
])
class XyzTestSuite {
}
Implement the TCK!
Keep iterating until you have implemented all the tests in the TCK.