This document captures the design of enhancements to data discovery in 4.0. Its main goal is to serve the Listing Center Home Page of CDAP 4.0.
- User stories documented (Bhooshan)
- User stories reviewed (Nitin)
- User stories reviewed (Todd)
- Requirements documented (Bhooshan)
- Requirements Reviewed (Nitin/Todd)
- Design Documented (Bhooshan)
- Design Reviewed (Andreas/Terence/Poorna)
The main requirements influencing these enhancements are:
- Support configurable sorting for search results. Preferably, both the sort field and the sort order should be configurable. In addition, it would be nice to support multiple sort field/order combinations.
- Support pagination for search results. The API should accept `offset` (defines the start position in the search results) and `limit` (defines the number of results to show at a time) parameters.
- Search queries should be able to filter results by one or more entity types
- Metadata for every search result should include (**needs confirmation**):
- Status - Composed of statistics, current state, etc. of the entity.
- Potential requirement: Ability to annotate (if not filter) an entity by scope.
- As a CDAP user, I should be able to search all entities (artifacts, applications, programs, datasets, streams, views) sorted by name and/or creation time
- As a CDAP user, I should be able to paginate search results by specifying a page size. In addition, I should be able to specify the offset from where to return search results.
- As a CDAP user, I should be able to filter search results by a given entity type
The CDAP search backend today is implemented using an `IndexedTable`. Implementing sorting and pagination on top of this implementation may be difficult, and may introduce performance bottlenecks due to multiple potential HBase scans. Also, an index would have to be stored per sort field/order combination. An alternative is to fetch the results for the provided search query and then sort them in-memory, but in a big data scenario that option is not viable.
The eventual goal of CDAP is to move from the current `IndexedTable`-backed search to an external search engine. The major motivations are to facilitate richer search queries and full-text search. Some initial investigation of alternatives is at External Search and Indexing Engine Investigation. A summary of the two most viable alternatives, Apache Solr and Elasticsearch, can be found at these links:
Most research indicates feature parity between the two options, although Elasticsearch seems to have better REST API and JSON support. However, given that Apache Solr is more favored in Hadoop-land (it is supported by more distributions, is the only search engine that Cloudera supports, and has support in Slider to run on YARN), it makes more sense as the first candidate for the search backend. The search backend can, however, be made pluggable (as an extension loaded via an SPI using its own classloader), so it could be swapped out for Elasticsearch if users wish to in the future.
4.0 Requirements in Apache Solr
Sorting (including multiple sort orderings) is supported in Apache Solr using the `sort` parameter.
Filtering is supported using the `fq` parameter.
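As a sketch of how the proposed parameters could map onto Solr's query API, the snippet below builds the query string of a Solr `/select` request using `sort`, `fq`, `start`, and `rows`. The field names (`name`, `creation-time`, `type`) are illustrative assumptions, not the final index schema.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch, not CDAP code: mapping the proposed search parameters
// onto Solr's query API.
public class SolrQueryBuilder {

  // Builds the query-string portion of a Solr /select request.
  public static String buildQuery(String q, String sort, String typeFilter,
                                  int offset, int limit) {
    StringBuilder sb = new StringBuilder();
    sb.append("q=").append(enc(q));
    if (sort != null) {
      sb.append("&sort=").append(enc(sort));   // e.g. "name asc,creation-time desc"
    }
    if (typeFilter != null) {
      sb.append("&fq=").append(enc("type:" + typeFilter)); // filter by entity type
    }
    sb.append("&start=").append(offset);       // Solr's offset parameter
    sb.append("&rows=").append(limit);         // Solr's page-size parameter
    return sb.toString();
  }

  private static String enc(String s) {
    return URLEncoder.encode(s, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    // Sorted by name, filtered to applications, first page of 20 results.
    System.out.println(buildQuery("purchase*", "name asc", "application", 0, 20));
  }
}
```

Note that `sort` accepts a comma-separated list of field/order pairs, which is what allows multiple sort combinations in a single query.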
Running Apache Solr
Solr can be run either as a separate Twill runnable, using logic like https://github.com/lucidworks/yarn-proto/blob/master/src/main/java/org/apache/solr/cloud/yarn/SolrMaster.java, or housed inside the `DatasetOpExecutorTwillRunnable`. This decision depends on some prototyping. Solr will be configured to use HDFS for persistence.
Solr supports a standalone mode, which starts up a separate Solr process. However, in standalone CDAP we prefer to use `EmbeddedSolrServer`, running in the same process. In-memory CDAP will use `EmbeddedSolrServer` in in-memory mode.
Like in 3.5, there would be a call to update the index every time the metadata of an entity is updated. Unlike in 3.5, though, this call would be an HTTP call to the Search Service (which runs Solr in 4.0).
Note: Since this call is now an HTTP call:
- Should it be asynchronous?
- It will happen outside of the transaction that updates the Metadata Dataset.
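The asynchronous option could look like the following sketch, in which index updates are handed to an executor after the Metadata Dataset transaction commits, so the HTTP call never runs inside the transaction. This is not CDAP code: the `Consumer` stands in for a hypothetical HTTP client that POSTs the update to the Search Service, and all names are illustrative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of asynchronous index updates, decoupled from the metadata transaction.
public class AsyncIndexUpdater {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private final Consumer<String> indexCall; // stand-in for an HTTP POST to the Search Service

  public AsyncIndexUpdater(Consumer<String> indexCall) {
    this.indexCall = indexCall;
  }

  // Called after the transaction updating the Metadata Dataset commits.
  // Returns a Future so callers needing synchronous behavior can block on it.
  public Future<?> onMetadataChanged(String entityId) {
    return executor.submit(() -> indexCall.accept(entityId));
  }

  public void shutdown() throws InterruptedException {
    executor.shutdown();
    executor.awaitTermination(5, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    List<String> sent = new CopyOnWriteArrayList<>();
    AsyncIndexUpdater updater = new AsyncIndexUpdater(sent::add);
    updater.onMetadataChanged("dataset:default.purchases").get();
    updater.shutdown();
    System.out.println(sent); // the update reached the (stubbed) Search Service
  }
}
```

A single-threaded executor also preserves the order of updates for an entity, which matters if the index stores only the latest metadata.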
Since the persistence stores for metadata and the search index will be different, we will need a utility to keep them in sync. This could be a service/thread that runs periodically (preferred), or a tool that is invoked manually.
There should be a way to upgrade existing indexes to be stored in the new Search backend. The index sync tool should be developed in a way that it can be run via the Upgrade Tool to update existing metadata in the new search backend.
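The preferred periodic-service option could be sketched as below. The reconciliation logic itself (scan the metadata store, re-index drifted entries) is passed in as a `Runnable`; the class and method names are illustrative assumptions, not CDAP APIs.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a periodic sync service that keeps the search index consistent
// with the metadata store.
public class IndexSyncService {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  // Runs reconcileOnce immediately, then repeatedly at the given period.
  public void start(Runnable reconcileOnce, long periodMillis) {
    scheduler.scheduleAtFixedRate(reconcileOnce, 0, periodMillis, TimeUnit.MILLISECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }

  public static void main(String[] args) throws Exception {
    AtomicInteger runs = new AtomicInteger();
    CountDownLatch threeRuns = new CountDownLatch(3);
    IndexSyncService service = new IndexSyncService();
    // The Runnable stub stands in for the real metadata-to-index reconciliation.
    service.start(() -> { runs.incrementAndGet(); threeRuns.countDown(); }, 10);
    threeRuns.await(5, TimeUnit.SECONDS);
    service.stop();
    System.out.println(runs.get() >= 3); // reconciliation ran repeatedly
  }
}
```

Factoring the reconciliation into a `Runnable` is also what lets the same logic be driven once, on demand, by the Upgrade Tool.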
- What is the schema of data to be indexed in the new search backend?
REST API changes
The following changes would be made to the metadata search RESTful API:
- A `sort` parameter that specifies the sort query. It contains a comma-separated list of sort fields, each with a sort order.
- An `offset` parameter that specifies the offset into the search results. Defaults to 0.
- A `size` parameter that specifies the number of results to return, starting at the `offset`.
The response would contain two fields in addition to the above input parameters:
- `results` - contains the set of search results matching the search query
- `total` - specifies the total number of matched entities. This can be used to calculate the number of pages.
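A client could page through results using `offset`, `size`, and the `total` field as sketched below; the names follow this document's draft API and are not a final contract.

```java
// Sketch of client-side pagination arithmetic over the proposed API.
public class SearchPagination {

  // Number of pages needed to show `total` results at `size` per page.
  public static int pageCount(int total, int size) {
    return (total + size - 1) / size;   // ceiling division
  }

  // Offset to request for a given zero-based page number.
  public static int offsetForPage(int page, int size) {
    return page * size;
  }

  public static void main(String[] args) {
    System.out.println(pageCount(45, 20));    // 45 results, 20 per page -> 3 pages
    System.out.println(offsetForPage(2, 20)); // third page starts at offset 40
  }
}
```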
TODO: Given the format of the entityId object in the search response, figure out if sorting can be applied on the entity name.
Status of an Entity
Along with showing the metadata of an entity (name, description, tags, properties, etc.), one of the requirements for the home page is to also show a brief 'status' for every entity, which is a summary of statistics and metrics. For each entity type, the status should surface:
Artifact: # apps, # extensions, # plugins
Application: Total # programs, # Running, # Stopped
Dataset: Read Rate, Write Rate, # apps using it
Stream: Read Rate, Write Rate, # apps connected to it, # stream views created
Stream View: Read Rate, Write Rate, # apps connected to it
This information will not be surfaced from the metadata system. The UI will potentially have to make separate calls to:
1. Metrics APIs, for the Read Rate and Write Rate
2. The Usage Registry, for the apps using datasets, streams, and stream views
3. App Fabric APIs, for the remaining information
For 2 and 3, an alternative could be to provide a UI-only (undocumented) batch endpoint.
Dataset Types in Metadata System
Currently, the Metadata System only supports artifacts, applications, programs, datasets, streams and stream views as entities. Is support for dataset types and modules necessary for 4.0?