I have been a huge fan of the Grails Searchable plugin; it's one of the main selling points when I talk to people about Grails. But something has made me go sour on it recently, and a great blog post from Boris Goykhman about the plugin has prompted me to write my own.
For those of you who don't know: the Searchable plugin is a Grails wrapper around the Compass search engine that abstracts away much of the domain-class-to-search-index mapping, and Compass in turn is an extended implementation of the robust open source Lucene search engine. It's that "domain model to search index" abstraction that gives me cause for concern.
So let's start – I think Searchable is absolutely brilliant!!! It simplifies the process of adding a search engine to a Grails application. But beyond straightforward indexing and searching, I don't believe it's designed to handle complex search scenarios. Sure, you can spend a lot of time bastardising your application with the various mapping elements (such as searchable component and searchable reference) to try and get deeply nested relationships correctly mapped, but more often than not I find myself also bastardising my domain model to handle what I want search to do!!
I will continue to use Searchable for basic search mappings, but I have recently decided that it is more efficient to separate the concerns of domain data and search indexes. Domain models are inherently tree structures, with relationships, abstractions and extensions, whereas your search index is a flattened version of all that. Trying to do this flattening through decorators and mapping files is just not effective.
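To make the flattening idea concrete, here's a rough sketch (the domain classes and field names are hypothetical, purely for illustration) of what it means to collapse a nested domain structure into the flat field list a search index wants:

```groovy
// A nested domain structure, as GORM would model it
class Address {
    String city
    String country
}

class Hotel {
    Long id
    String name
    Address address   // a relationship in the domain tree
}

// Flattening it for the index: one level deep, plain field names,
// nulls handled with the safe-navigation operator
Map flatten(Hotel h) {
    [
        id      : h.id,
        name    : h.name,
        city    : h.address?.city,
        country : h.address?.country
    ]
}
```

The point is that this conversion is a few lines of plain code you own outright, instead of a mapping DSL you have to coax into producing the same result.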
In a recent project, I took the approach of handling the domain-to-search-index conversion myself, and it has already started to pay off! I have moved to the Solr search engine, another implementation built on the Lucene engine. I favour it these days for its added features, particularly its geospatial search capabilities.
Quite simply, my application has a SolrService that takes the domain objects I pass it and converts them to a flattened Solr object (which I also implemented), which is then sent to the Solr search engine through SolrJ. All of this is managed through the afterInsert, afterUpdate and afterDelete events on the domain objects. It means I don't have to write complex mapping files to try and map my domain structures to the search index I want to create.
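As a sketch of that wiring – the class and method names are mine, and the SolrJ calls are from memory, so treat this as illustrative rather than a definitive implementation:

```groovy
import org.apache.solr.common.SolrInputDocument

class SolrService {
    def solrServer   // a SolrJ server/client instance, configured elsewhere

    void index(domainObject) {
        // Convert the domain object to its flattened field map,
        // then copy each field into a SolrInputDocument
        def doc = new SolrInputDocument()
        domainObject.toSolrFields().each { name, value ->
            doc.addField(name, value)
        }
        solrServer.add(doc)
        solrServer.commit()
    }

    void remove(domainObject) {
        solrServer.deleteById(domainObject.id as String)
        solrServer.commit()
    }
}

// In the domain class, the GORM event hooks simply delegate to the service
class Hotel {
    def solrService

    def afterInsert() { solrService.index(this) }
    def afterUpdate() { solrService.index(this) }
    def afterDelete() { solrService.remove(this) }
}
```

With this in place the index stays in sync automatically – every save, update or delete of a domain object flows through the same flattening code.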
What are the benefits? Most of you would say this approach takes more time, but I have found the contrary. I have spent so much time (sometimes weeks) tweaking Searchable configurations and mappings to see how they would affect my index, inspecting indexes with the Luke tool to work out what is actually going on, and then running searches against that data to see if I get back what I am after.
I have found that the search indexes are much smaller and much more performant as a result. Because you have much more control over the indexes, I build mine so that I don't have to run any GORM queries at all when rendering search results. You can do this with Searchable too, but the translation is much harder to understand.
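The trick behind skipping GORM at render time is simply storing every field the results page needs directly in the index. A sketch of what that looks like on the query side (again, field names are hypothetical and the SolrJ calls illustrative):

```groovy
import org.apache.solr.client.solrj.SolrQuery

// Because the flattened documents store name and city as stored fields,
// the results page can be rendered straight from the Solr response,
// with no trip back to the database
def query = new SolrQuery("name:grand")
def results = solrService.solrServer.query(query).results
results.each { doc ->
    println "${doc.getFieldValue('name')}, ${doc.getFieldValue('city')}"
}
```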
Soon I will try to post some code showing how I actually did this.