Re: CarbonData CSV save cannot find method com.univocity.parsers.csv.CsvWriterSettings.setQuoteEscapingEnabled


Re: CarbonData CSV save cannot find method com.univocity.parsers.csv.CsvWriterSettings.setQuoteEscapingEnabled

Liang Chen-2
OK, thanks for your feedback.

Please modify the pom file under the processing module, and see whether it works in 1.2.0:

<dependency>
      <groupId>com.univocity</groupId>
      <artifactId>univocity-parsers</artifactId>
      <version>2.2.1</version>
</dependency>
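
After rebuilding, a quick runtime check (a minimal sketch, not from any
official guide) can confirm which univocity-parsers jar Spark actually
resolves on the classpath:

// In spark-shell: print the jar that provides CsvWriterSettings, so a
// leftover univocity-parsers-1.5.6.jar is easy to spot.
println(classOf[com.univocity.parsers.csv.CsvWriterSettings]
  .getProtectionDomain.getCodeSource.getLocation)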

Regards
Liang

2018-01-25 11:56 GMT+08:00 Luo Colin <[hidden email]>:

> Chenliang,
>
>
>
>        Environment: Apache Spark 2.1, CarbonData 1.2, Java
>
>
>
>    Calling .format("csv").save throws java.lang.NoSuchMethodError:
> com.univocity.parsers.csv.CsvWriterSettings.setQuoteEscapingEnabled. Inspection shows that
> the univocity-parsers-1.5.6.jar bundled with CarbonData 1.2 has no setQuoteEscapingEnabled
> method, while the univocity-parsers-2.2.1.jar in CarbonData 1.3 does.
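>
> A minimal repro sketch (illustrative; df is any Spark DataFrame, and any
> DataFrame CSV save goes through univocity's CsvWriterSettings):
>
> // Illustrative: triggers the CSV write path that calls setQuoteEscapingEnabled.
> df.write.format("csv").save("/tmp/out")
> // fails with NoSuchMethodError while univocity-parsers 1.5.6 is on the classpath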
>
>
>
> Best Regards
>
>
>
> Colin
>
Re: Partitioned table: after loading data and running an update, the data is deleted

Liang Chen-2
Hi,

Please send all questions to the carbondata mailing list: you can send mail
to [hidden email] and follow the guide to join.

Currently, "PARTITIONED BY (city string)" and "TBLPROPERTIES
('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='2')" are not supported together.
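
As an illustrative workaround (a sketch only, reusing the columns of the
test3 table in your mail below), the table can be created without combining
the two clauses, e.g. by dropping the hash-partition properties:

// Sketch: same schema as test3 below, but without the
// PARTITION_TYPE/NUM_PARTITIONS table properties.
carbon.sql("CREATE TABLE IF NOT EXISTS test3(id string, name string, " +
  "age Int, city string) STORED BY 'carbondata'")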

Regards
Liang

2018-02-02 10:23 GMT+08:00 Luo Colin <[hidden email]>:

> Mr. Chen,
>
>         The table works fine when it is not partitioned. After partitioning it, I load data and then run an update statement, and the result is that the data gets deleted. This is really killing me. How should I handle this?
>
>
> scala> carbon.sql("CREATE TABLE IF NOT EXISTS test3(id string,name
> string,age Int) PARTITIONED BY (city string) STORED BY 'carbondata'
> TBLPROPERTIES ('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='2')")
> 18/02/02 10:04:19 AUDIT CreateTable: [ubuntu][bigdata][Thread-1]Creating
> Table with Database name [default] and Table name [test3]
> 18/02/02 10:04:19 WARN HiveExternalCatalog: Couldn't find corresponding
> Hive SerDe for data source provider org.apache.spark.sql.CarbonSource.
> Persisting data source table `default`.`test3` into Hive metastore in Spark
> SQL specific format, which is NOT compatible with Hive.
> 18/02/02 10:04:19 AUDIT CreateTable: [ubuntu][bigdata][Thread-1]Table
> created with Database name [default] and Table name [test3]
> res11: org.apache.spark.sql.DataFrame = []
>
> scala> carbon.sql("LOAD DATA INPATH 'demo.csv'INTO TABLE test3")
> 18/02/02 10:04:48 AUDIT CarbonDataRDDFactory$:
> [ubuntu][bigdata][Thread-1]Data load request has been received for table
> default.test3
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: main sort scope is set to
> LOCAL_SORT
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-7 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-7 batch sort size is set to 0
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-7 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-8 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-8 batch sort size is set to 0
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-8 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-7 Error occurs while creating dirs: /tmp/376530432901513_0/default
> /test3/Fact/Part0/Segment_0/0
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-7 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 WARN MeasureFieldConverterImpl: pool-117-thread-1 Cant
> not convert value to Numeric type value. Value considered as null.
> 18/02/02 10:04:48 WARN MeasureFieldConverterImpl: pool-117-thread-1 Cant
> not convert value to Numeric type value. Value considered as null.
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-8 Error occurs while creating dirs: /tmp/376530441791900_1/default
> /test3/Fact/Part0/Segment_0/1
> 18/02/02 10:04:48 WARN CarbonDataProcessorUtil: Executor task launch
> worker-8 sort scope is set to LOCAL_SORT
> 18/02/02 10:04:48 ERROR DataLoadExecutor: Executor task launch worker-8
> Data Load is partially success for table test3
> 18/02/02 10:04:48 AUDIT CarbonDataRDDFactory$:
> [ubuntu][bigdata][Thread-1]Data load is successful for default.test3
> res12: org.apache.spark.sql.DataFrame = []
>
>
> scala> carbon.sql("SELECT * FROM test3").show()
> +---+-----+----+--------+
> | id| name| age|    city|
> +---+-----+----+--------+
> |  1|david|null|shenzhen|
> |  2|eason|null|shenzhen|
> |  3|jarry|  35|   wuhan|
> +---+-----+----+--------+
>
>
> scala> carbon.sql("update test3 set (age) = (100) where id='3'").show()
> 18/02/02 10:06:55 AUDIT deleteExecution$: [ubuntu][bigdata][Thread-1]Delete
> data operation is successful for default.test3
> 18/02/02 10:06:55 AUDIT CarbonDataRDDFactory$:
> [ubuntu][bigdata][Thread-1]Data load request has been received for table
> default.test3
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: main sort scope is set to
> LOCAL_SORT
> 18/02/02 10:06:55 WARN Executor: 1 block locks were not released by TID =
> 37:
> [rdd_94_0]
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: [Executor task launch
> worker-10][partitionID:test3;queryID:376657526113286] sort scope is set
> to LOCAL_SORT
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: [Executor task launch
> worker-10][partitionID:test3;queryID:376657526113286] batch sort size is
> set to 0
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: [Executor task launch
> worker-10][partitionID:test3;queryID:376657526113286] sort scope is set
> to LOCAL_SORT
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: [Executor task launch
> worker-10][partitionID:test3;queryID:376657526113286] Error occurs while
> creating dirs: /tmp/376657803010017_2/default/test3/Fact/Part0/Segment_0/2
> 18/02/02 10:06:55 WARN CarbonDataProcessorUtil: [Executor task launch
> worker-10][partitionID:test3;queryID:376657526113286] sort scope is set
> to LOCAL_SORT
> 18/02/02 10:06:55 AUDIT CarbonDataRDDFactory$:
> [ubuntu][bigdata][Thread-1]Data update is successful for default.test3
> ++
> ||
> ++
> ++
>
>
> scala> carbon.sql("SELECT * FROM test3").show()
> +---+-----+----+--------+
> | id| name| age|    city|
> +---+-----+----+--------+
> |  1|david|null|shenzhen|
> |  2|eason|null|shenzhen|
> +---+-----+----+--------+
>
>
> Colin
> ------------------------------
> *From:* Luo Colin <[hidden email]>
> *Sent:* January 25, 2018 15:44
> *To:* Liang Chen
> *Subject:* Re: CarbonData CSV save cannot find method com.univocity.parsers.csv.CsvWriterSettings.setQuoteEscapingEnabled
>
>
> I tried univocity-parsers 2.2.1 yesterday and it works.
>
> Thanks
>
>
>
> Sent from Mail for Windows 10 <https://go.microsoft.com/fwlink/?LinkId=550986>
>