[jira] [Resolved] (CARBONDATA-2979) select count fails when carbondata file is written through SDK and read through sparkfileformat for complex datatype map(struct->array->map)


Akash R Nilugal (Jira)

     [ https://issues.apache.org/jira/browse/CARBONDATA-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala resolved CARBONDATA-2979.
-----------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.5.0

> select count fails when carbondata file is written through SDK and read through sparkfileformat for complex datatype map(struct->array->map)
> --------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-2979
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2979
>             Project: CarbonData
>          Issue Type: Bug
>          Components: file-format
>    Affects Versions: 1.5.0
>            Reporter: Rahul Singha
>            Assignee: Manish Gupta
>            Priority: Minor
>             Fix For: 1.5.0
>
>         Attachments: MapSchema_15_int.avsc
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> *Steps:*
> Create the carbondata and carbonindex files using the SDK.
> Place the files in an HDFS location.
> Read the files through the Spark file format (a reproduction sketch follows the quoted report below):
> create table schema15_int using carbon location 'hdfs://hacluster/user/rahul/map/mapschema15_int';
> select count(*) from schema15_int;
> *Actual Result:*
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 24.0 failed 4 times, most recent failure: Lost task 0.3 in stage 24.0 (TID 34, BLR1000014238, executor 3): java.io.IOException: All the files doesn't have same schema. Unsupported operation on nonTransactional table. Check logs.
>  at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.updateColumns(AbstractQueryExecutor.java:276)
>  at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getDataBlocks(AbstractQueryExecutor.java:234)
>  at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.initQuery(AbstractQueryExecutor.java:141)
>  at org.apache.carbondata.core.scan.executor.impl.AbstractQueryExecutor.getBlockExecutionInfos(AbstractQueryExecutor.java:401)
>  at org.apache.carbondata.core.scan.executor.impl.VectorDetailQueryExecutor.execute(VectorDetailQueryExecutor.java:44)
>  at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.initialize(VectorizedCarbonRecordReader.java:143)
>  at org.apache.spark.sql.carbondata.execution.datasources.SparkCarbonFileFormat$$anonfun$buildReaderWithPartitionValues$2.apply(SparkCarbonFileFormat.scala:395)
>  at org.apache.spark.sql.carbondata.execution.datasources.SparkCarbonFileFormat$$anonfun$buildReaderWithPartitionValues$2.apply(SparkCarbonFileFormat.scala:361)
>  at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:124)
>  at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:174)
>  at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105)
>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
>  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
>  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>  at org.apache.spark.scheduler.Task.run(Task.scala:108)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
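
For reference, a minimal sketch of the SDK write step from the reproduction, assuming the 1.5.x builder API (CarbonWriter.builder().outputPath(...).buildWriterForAvroInput(...)); the local schema path and the record population are illustrative, not taken from the report, and the attached MapSchema_15_int.avsc defines the actual nested map(struct->array->map) fields:

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.carbondata.sdk.file.CarbonWriter;

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class WriteMapSchemaExample {
      public static void main(String[] args) throws Exception {
        // Parse the attached Avro schema (path here is illustrative).
        String avroSchemaJson =
            new String(Files.readAllBytes(Paths.get("MapSchema_15_int.avsc")));
        Schema avroSchema = new Schema.Parser().parse(avroSchemaJson);

        // Build a writer that emits carbondata/carbonindex files at the
        // location later pointed to by the "create table ... location" DDL.
        CarbonWriter writer = CarbonWriter.builder()
            .outputPath("hdfs://hacluster/user/rahul/map/mapschema15_int")
            .buildWriterForAvroInput(avroSchema);

        GenericData.Record record = new GenericData.Record(avroSchema);
        // ... populate the nested fields per MapSchema_15_int.avsc
        //     (schema contents are not reproduced in the report) ...
        writer.write(record);
        writer.close();
      }
    }

The quoted "create table ... using carbon location ..." statement then reads those files through SparkCarbonFileFormat, and the select count(*) triggers the schema-mismatch failure shown in the stack trace above.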



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)