
Re: DDL for CarbonData table backup and recovery (new feature)

Posted by mohdshahidkhan on Nov 27, 2017; 5:51am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/DDL-for-CarbonData-table-backup-and-recovery-new-feature-tp27854p28290.html

Thanks for the clarification, Naresh.
Please find my answers below.

Actually, if the export command runs on a CarbonData table, we can just zip
the actual table folder and the associated aggregate table folders into the
user-specified location. It does not export metadata.
Copying data from one cluster to another will still remain the same in your
approach as well.
Agreed, we don't want to export the data; the user simply has the tables from
the previous cluster and wants to use them, so to do that they have to
register them with Hive.
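
As a rough sketch of that registration step, assuming Spark SQL is the entry
point (the SparkSession setup, table name, columns, and path below are
illustrative stand-ins, not the final DDL):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("RegisterCarbonTable")
  .enableHiveSupport()
  .getOrCreate()

// Register the copied table folder with Hive by creating the table
// definition over the existing tablePath.
spark.sql(
  """CREATE TABLE db1.t1 (id int, name string)
    |USING CARBONDATA
    |OPTIONS (tableName "t1", dbName "db1",
    |  tablePath "/user/warehouse/db1/t1")""".stripMargin)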

After copying data into the new cluster, how do we synchronize incremental
loads or schema evolution from the old cluster to the new cluster? Do we need
to drop the table in the new cluster, copy the data from the old cluster to
the new cluster, and recreate the table again?
A. Sync from old to new is not in scope.

I think creating a CarbonData table also requires the schema information to
be passed:
CREATE TABLE $dbName.$tbName (${ fields.map(f => f.rawSchema).mkString(",") })
USING CARBONDATA
OPTIONS (tableName "$tbName", dbName "$dbName", tablePath "$tablePath")
A. Agreed, we will take the same.
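
For illustration, here is how that interpolated template expands; the Field
case class, column definitions, and path below are assumptions standing in
for the real schema objects:

case class Field(rawSchema: String) // stand-in for the real field type

val dbName = "db1"
val tbName = "t1"
val tablePath = "/user/warehouse/db1/t1"
val fields = Seq(Field("id int"), Field("name string"))

val ddl =
  s"""CREATE TABLE $dbName.$tbName (${ fields.map(f => f.rawSchema).mkString(",") })
     |USING CARBONDATA
     |OPTIONS (tableName "$tbName", dbName "$dbName", tablePath "$tablePath")""".stripMargin

// ddl now holds:
// CREATE TABLE db1.t1 (id int,name string)
// USING CARBONDATA
// OPTIONS (tableName "t1", dbName "db1", tablePath "/user/warehouse/db1/t1")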



