[jira] [Updated] (CARBONDATA-3557) Support write Flink streaming data to Carbon



Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhi Liu updated CARBONDATA-3557:
--------------------------------
    Description:
Sometimes, users need to write Flink streaming data to Carbon with high concurrency and high throughput.

The write process is:
 # Write the Flink streaming data to the local file system of the Flink task node, using Flink's StreamingFileSink and the Carbon SDK;
 # Copy the local carbon data files to the carbon data store, such as HDFS or S3;
 # Generate and write a segment file to ${tablePath}/load_details;

Run the "alter table ${tableName} collect segments" command on the server to compact the segment files in ${tablePath}/load_details, then move the compacted segment file to ${tablePath}/Metadata/Segments/, and finally update the table status file.
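The three steps above can be sketched with plain file operations. This is only a minimal illustration of the staging flow; all class, path, and file names here are hypothetical, and a real implementation would write through Flink's StreamingFileSink with a Carbon SDK writer and copy to HDFS/S3 rather than a local directory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the write flow: (1) stage a data file on the task
// node's local file system, (2) copy it into the table's store directory
// (standing in for HDFS/S3), (3) record a segment file under
// ${tablePath}/load_details that references the copied data file.
public class CollectSegmentsSketch {

    public static Path writeFlow(Path localDir, Path tablePath) throws IOException {
        // Step 1: the Flink task writes a carbon data file locally
        // (in production: StreamingFileSink + Carbon SDK writer).
        Files.createDirectories(localDir);
        Path dataFile = localDir.resolve("part-0-0.carbondata");
        Files.write(dataFile, "stub carbon data".getBytes());

        // Step 2: copy the local carbon data file into the table's store
        // (in production: an HDFS or S3 path).
        Path storeDir = tablePath.resolve("stage_data");
        Files.createDirectories(storeDir);
        Files.copy(dataFile, storeDir.resolve(dataFile.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);

        // Step 3: write a segment file under ${tablePath}/load_details,
        // listing the staged data file so a later "collect segments"
        // command can compact it into Metadata/Segments/.
        Path loadDetails = tablePath.resolve("load_details");
        Files.createDirectories(loadDetails);
        Path segmentFile = loadDetails.resolve("segment_0");
        Files.write(segmentFile, dataFile.getFileName().toString().getBytes());
        return segmentFile;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("carbon");
        Path segment = writeFlow(tmp.resolve("local"), tmp.resolve("table"));
        System.out.println(Files.exists(segment)); // prints "true"
    }
}
```

Because the segment metadata lands in load_details rather than directly in the table status, many concurrent Flink writers can stage data without contending on the status file; only the later "collect segments" command touches it.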

  was:
Sometimes, users need to write Flink streaming data to Carbon with high concurrency and high throughput.

 


> Support write Flink streaming data to Carbon
> --------------------------------------------
>
>                 Key: CARBONDATA-3557
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3557
>             Project: CarbonData
>          Issue Type: New Feature
>          Components: spark-integration
>            Reporter: Zhi Liu
>            Priority: Major
>             Fix For: 2.0.0
>
>          Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Sometimes, users need to write Flink streaming data to Carbon with high concurrency and high throughput.
> The write process is:
>  # Write the Flink streaming data to the local file system of the Flink task node, using Flink's StreamingFileSink and the Carbon SDK;
>  # Copy the local carbon data files to the carbon data store, such as HDFS or S3;
>  # Generate and write a segment file to ${tablePath}/load_details;
> Run the "alter table ${tableName} collect segments" command on the server to compact the segment files in ${tablePath}/load_details, then move the compacted segment file to ${tablePath}/Metadata/Segments/, and finally update the table status file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)