Posted by Jacky Li on Oct 20, 2017; 8:57am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/Discussion-Carbon-Store-abstraction-tp24337p24429.html
Hi All,
To provide a clear API and avoid cyclic dependencies between modules, the overall design will look like the following diagram:
Hive-integration ---\
                     \
Spark-integration ----+--> carbondata-store ---------------> carbondata-metadata
                             |                                  ^    ^    ^
                             |                                  |    |    |
                             +--> carbondata-table ------------/    |    |
                             +--> carbondata-core ------------------/    |
                             +--> carbondata-processing ----------------/
There are three new modules:
1. carbondata-store: Its main purpose is to provide a public interface to all integration modules. It is a very thin module.
2. carbondata-table: It implements the interfaces defined in carbondata-store and provides table-level abstractions like schema, segment, etc.
3. carbondata-metadata: It holds all metadata classes that need to be shared across modules, such as the TableInfo object.
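
If it helps to visualize, the new modules would sit in the source tree next to the existing ones roughly like this (a sketch only; the actual directory names and layout are up for discussion):

carbondata/
  carbondata-metadata/      <- new: shared metadata classes (e.g. TableInfo)
  carbondata-store/         <- new: thin public API for integration modules
  carbondata-table/         <- new: table-level implementation
  carbondata-core/
  carbondata-processing/
  integration/spark/        <- depends only on carbondata-store (+ metadata)
  integration/hive/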
In order to provide a clean API, ONLY carbondata-store and carbondata-metadata should provide public APIs. Other modules, like carbondata-table and carbondata-processing, should not expose any public classes or methods; they just implement the interfaces defined by carbondata-store.
This also means that if we find public classes or methods in carbondata-core, carbondata-table, or carbondata-processing, they should be refactored and moved to either carbondata-metadata or carbondata-store.
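
To make the visibility rule concrete, here is a minimal sketch (package and class names are hypothetical, not existing CarbonData classes): the interface is public in carbondata-store, while the implementation in carbondata-table has default (package-private) visibility and would be wired in through a factory or service loader, so integration modules cannot compile against it.

// File in carbondata-store: public, integration modules compile against this.
package org.apache.carbondata.store;

public interface SegmentManager {
  String openSegment(String tablePath);
}

// File in carbondata-table: implements the store interface, but the class is
// package-private, so it never leaks into the public API surface.
package org.apache.carbondata.table;

class SegmentManagerImpl implements org.apache.carbondata.store.SegmentManager {
  @Override
  public String openSegment(String tablePath) {
    // allocate a new segment id and record it in the table status (omitted)
    throw new UnsupportedOperationException("sketch only");
  }
}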
The public API provided by carbondata-store should include the following (a rough interface sketch follows this list):

Table management:
- Initialize and persist table metadata when the integration module creates a table. Currently, the metadata includes TableInfo. The table path should be specified by the integration module.
- Delete metadata and data in the table path when the integration module drops a table.
- Retrieve TableInfo from a table path.
- Check whether a table exists.
- Alter metadata in TableInfo.

Segment management (segments are operated on in a transactional way):
- Open a new segment when the integration module loads new data.
- Commit the segment when the data operation finishes successfully.
- Close the segment when the data operation fails.
- Delete a segment when the integration module drops it.
- Retrieve segment information for a given segmentId.

Compaction management:
- A compaction policy for deciding whether compaction should be carried out.

Data operation (carbondata-store provides map functions in a map-reduce manner):
- Data loading map function.
- Delete segment map function.
- Other operations that involve map-side work (basically, the internalCompute function in every RDD in the current Spark integration module).
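
As a rough sketch of what such a facade could look like in Java (all names and signatures here are assumptions for discussion, not a committed API; TableInfo is assumed to move to carbondata-metadata, and SegmentInfo and RowBatch are hypothetical placeholder types):

package org.apache.carbondata.store;

import java.util.Iterator;

// Illustrative facade over the operations listed above.
public interface CarbonStore {

  // ---- Table management ----
  void createTable(TableInfo tableInfo, String tablePath);  // persist metadata at tablePath
  void dropTable(String tablePath);                         // delete metadata and data
  TableInfo getTable(String tablePath);                     // read TableInfo back
  boolean tableExists(String tablePath);
  void alterTable(String tablePath, TableInfo updated);

  // ---- Segment management (transactional) ----
  String openSegment(String tablePath);                     // returns a new segmentId
  void commitSegment(String tablePath, String segmentId);   // on success
  void closeSegment(String tablePath, String segmentId);    // on failure
  void dropSegment(String tablePath, String segmentId);
  SegmentInfo getSegment(String tablePath, String segmentId);

  // ---- Compaction management ----
  boolean shouldCompact(String tablePath);                  // compaction policy decision

  // ---- Data operation (map functions, driven by the compute engine) ----
  void loadData(String tablePath, String segmentId, Iterator<RowBatch> input);
}

The integration module would then drive the transactional segment lifecycle itself: openSegment before a load, loadData inside each map task, and commitSegment or closeSegment depending on the outcome.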
This is the current idea; please advise.
Regards,
Jacky Li
> On Oct 20, 2017, at 3:31 PM, Liang Chen <[hidden email]> wrote:
>
> Hi
>
> Thank you for starting this discussion. I agree; to expose a clear
> interface to users, some optimization work is needed.
>
> Can you list more details about your proposal? For example: which classes
> you propose to move to the carbon store module, and which APIs you propose
> to create and expose to users.
> I suggest we discuss and confirm your proposal on the dev list first, then
> start creating sub-tasks in JIRA.
>
> Regards
> Liang
>
>
> Jacky Li wrote
>> Hi community,
>>
>> I am proposing to create a carbondata-store module to abstract the carbon
>> store concept. The reasons are:
>>
>> 1. Initially, carbon was designed as a file format; as it evolved to
>> provide more features, more and more functionality was implemented in the
>> Spark integration module. However, as the community integrates more and
>> more compute frameworks with carbon, this functionality is duplicated
>> across the integration layers. Ideally, it can be unified and provided in
>> one place.
>>
>> 2. The current user-facing interface of carbondata is SQL, but the
>> interface for developers who want to integrate a compute engine is not
>> very clear.
>>
>> 3. Carbon supports many SQL commands, but they are implemented through
>> Spark RDDs only, so they are not sharable across compute frameworks.
>>
>> For these reasons, and for the long-term future of carbondata, I think it
>> is better to abstract the interface for compute engine integration into a
>> new module called carbondata-store. It can wrap all store-level
>> functionality above the file format in a module independent of any compute
>> engine, so that every integration module can depend on it and duplicated
>> code is removed.
>>
>> This is a continuous, long-term effort. If you agree, I will break this
>> work into subtasks and start by creating JIRA issues.
>>
>> Regards,
>> Jacky Li
>