Hi, I deployed CarbonData on YARN and tried to convert JSON data into CarbonData format using carbon-spark-sql.
When I run the following CREATE TABLE command:
create table q3_carbon (cookie string, freq int, key bigint) STORED BY 'carbondata';
it returns successfully.
Then I run:
show create table q3_carbon;
it returns:
CREATE EXTERNAL TABLE `q3_carbon`(
`col` array<string> COMMENT 'from deserializer')
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe'
WITH SERDEPROPERTIES (
'tableName'='default.q3_carbon',
'tablePath'='hdfs://###/Opt/CarbonStore/default/q3_carbon')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
LOCATION
'hdfs://###:8020/apps/hive/warehouse/q3_carbon'
TBLPROPERTIES (
'spark.sql.sources.provider'='carbondata',
'transient_lastDdlTime'='1476953210')
The column schema does not match my CREATE TABLE command.
Then I tried to run the conversion SQL:
insert overwrite TABLE q3_carbon select cookie, freq, key from jsonTable;
It shows the following error message:
"Error in query: unresolved operator 'InsertIntoTable Relation[cookie#25,freq#26L,key#27L] CarbonDatasourceRelation(`default`.`q3_carbon`,None), Map(), true, false;
------------------------------
I'm using CarbonData 0.2 on Spark 1.6.1 and Hadoop 2.7.1.
The data in table jsonTable looks like the following:
{"cookie":"FFFFFB438C406B85C41456D80CD2F75E","freq":2,"key":3158409}