anubhav tarar created CARBONDATA-1437:
-----------------------------------------
Summary: Wrong Exception Message When Number of Buckets is Specified as Zero
Key: CARBONDATA-1437
URL: https://issues.apache.org/jira/browse/CARBONDATA-1437
Project: CarbonData
Issue Type: Bug
Components: spark-integration
Affects Versions: 1.2.0
Environment: Spark 2.1, Hadoop 2.7
Reporter: anubhav tarar
Assignee: anubhav tarar
Priority: Trivial
Steps to reproduce:
0: jdbc:hive2://localhost:10000> CREATE TABLE uniqData_t17(ID Int, date Timestamp, country String,name String, phonetype String, serialname String, salary Int)
0: jdbc:hive2://localhost:10000> STORED BY 'CARBONDATA' TBLPROPERTIES('bucketnumber'='0', 'bucketcolumns'='name','DICTIONARY_INCLUDE'='NAME');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (0.501 seconds)
0: jdbc:hive2://localhost:10000> load data inpath 'hdfs://localhost:54310/dataDiff1.csv' into table uniqData_t17 OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','FILEHEADER'='ID,date,country,name,phonetype,serialname,salary');
Error: java.lang.Exception: DataLoad failure (state=,code=0)
Logs:
17/08/31 12:17:07 WARN CarbonDataProcessorUtil: [Executor task launch worker-9][partitionID:default_uniqdata_t17_578e819e-bec8-49e5-a292-890db623e116] sort scope is set to LOCAL_SORT
17/08/31 12:17:07 ERROR DataLoadExecutor: [Executor task launch worker-9][partitionID:default_uniqdata_t17_578e819e-bec8-49e5-a292-890db623e116] Data Loading failed for table uniqdata_t17
java.lang.ArithmeticException: / by zero
at org.apache.carbondata.processing.newflow.sort.impl.ParallelReadMergeSorterWithBucketingImpl.initialize(ParallelReadMergeSorterWithBucketingImpl.java:78)
It should give a meaningful exception instead, such as "number of buckets cannot be zero".
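A minimal sketch of the kind of up-front check that would avoid the ArithmeticException; the class and method names below are hypothetical and do not reflect the actual CarbonData validation path, they only illustrate failing fast with a descriptive message when 'bucketnumber' is not positive.

// Hypothetical sketch only, not the real CarbonData code path.
public final class BucketNumberValidation {

  // Parses the 'bucketnumber' table property and rejects zero or negative
  // values with a clear message, instead of letting a later division by the
  // bucket count fail with "/ by zero" during data load.
  static int parseBucketNumber(String bucketNumberProperty) {
    int bucketNumber = Integer.parseInt(bucketNumberProperty.trim());
    if (bucketNumber <= 0) {
      throw new IllegalArgumentException(
          "Invalid table property 'bucketnumber'=" + bucketNumber
              + ": number of buckets must be greater than zero");
    }
    return bucketNumber;
  }

  public static void main(String[] args) {
    // Reproduces the reported scenario: TBLPROPERTIES('bucketnumber'='0')
    parseBucketNumber("0"); // throws IllegalArgumentException with a meaningful message
  }
}

Such a check could be applied when the table properties are parsed at CREATE TABLE time, so the user sees the error immediately rather than at load time.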