Posted by
Jocean shi on
Mar 26, 2019; 4:34am
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/LONG-STRING-COLUMNS-dont-t-have-effect-tp76493p76496.html
The CarbonData version is 1.5.2. The error message is:
Caused by: org.apache.carbondata.streaming.CarbonStreamException: Task failed while writing rows
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:361)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:264)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:263)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    ... 3 more
Caused by: java.lang.Exception: Dataload failed, String length cannot exceed 32000 characters
    at org.apache.carbondata.streaming.parser.FieldConverter$.objectToString(FieldConverter.scala:53)
    at org.apache.carbondata.streaming.parser.RowStreamParserImp$$anonfun$parserRow$1.apply(RowStreamParserImp.scala:63)
    at org.apache.carbondata.streaming.parser.RowStreamParserImp$$anonfun$parserRow$1.apply(RowStreamParserImp.scala:62)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.carbondata.streaming.parser.RowStreamParserImp.parserRow(RowStreamParserImp.scala:62)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:374)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:368)
    at org.apache.carbondata.streaming.segment.StreamSegment.appendBatchData(StreamSegment.java:294)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:352)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:342)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:342)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
    at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:354)
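For context, this is how the table was set up. LONG_STRING_COLUMNS is declared as a table property at CREATE TABLE time; a minimal sketch of the DDL (the table and column names here are stand-ins for the real schema, and the syntax follows the CarbonData documentation):

```sql
-- Hypothetical example: declare a streaming CarbonData table whose
-- 'content' column should be allowed to exceed the 32000-character
-- limit via the LONG_STRING_COLUMNS table property.
CREATE TABLE long_string_demo (
  id INT,
  content STRING
)
STORED AS carbondata
TBLPROPERTIES (
  'LONG_STRING_COLUMNS' = 'content',
  'streaming' = 'true'
);
```

Even with this property set, rows written through the streaming sink still hit the 32000-character check in FieldConverter.objectToString, as the stack trace above shows.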
xm_zzc <[hidden email]> wrote on Tue, Mar 26, 2019, at 11:19 AM:
> Can you give us the detailed error message? What is the carbondata version
> you used?
>
>
>
> --
> Sent from:
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/