[ https://issues.apache.org/jira/browse/CARBONDATA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16566207#comment-16566207 ]

xuchuanyin edited comment on CARBONDATA-2802 at 8/2/18 1:32 AM:
----------------------------------------------------------------

The error is not due to the BloomFilter datamap. We can reproduce it this way:

1. create the base table;
2. load data;
3. create an index datamap;
4. create a pre-aggregate datamap;
5. query on the pre-aggregate datamap;
6. clear the datamaps for the table (this is the step that triggers the problem).

In test code, we can call the following (see the sketches after the issue body below for an end-to-end version):

```
import org.apache.spark.sql.CarbonEnv
import org.apache.carbondata.core.datamap.DataMapStoreManager

val carbonTable = CarbonEnv.getCarbonTable(Some("default"), "table")(sparkSession)
val tableIdentifier = carbonTable.getAbsoluteTableIdentifier
DataMapStoreManager.getInstance().clearDataMaps(tableIdentifier)
```

If we skip step 3 or step 5, the result is OK.

> Creation of BloomFilter datamap is failing after IUD, compaction, and pre-aggregate datamap creation
> ----------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-2802
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2802
>             Project: CarbonData
>          Issue Type: Bug
>          Components: other
>    Affects Versions: 1.4.1
>        Environment: Spark 2.2
>            Reporter: Rahul Singha
>            Priority: Minor
>              Labels: bloom-filter
>
> *Steps:*
> 1. CREATE TABLE uniqdata(CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,36),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format';
> 2. LOAD DATA INPATH 'hdfs://hacluster/user/rahul/2000_UniqData.csv' into table uniqdata OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> 3. update uniqdata set (active_emui_version) = ('ACTIVE_EMUI_VERSION_00001') where cust_id = 9000;
> 4. delete from uniqdata where cust_id = 9000;
> 5. insert into uniqdata select 9000,'CUST_NAME_00000','ACTIVE_EMUI_VERSION_00000','1970-01-01 01:00:03.0','1970-01-01 02:00:03.0',123372036854,-223372036854,12345678901.1234000000,22345678901.1234000000,1.12345674897976E10, -1.12345674897976E10,1;
> 6. alter table uniqdata compact 'major';
> 7. create datamap uniqdata_agg on table uniqdata using 'preaggregate' as select cust_name, avg(cust_id) from uniqdata group by cust_id, cust_name;
> 8. CREATE DATAMAP bloom_dob ON TABLE uniqdata USING 'bloomfilter' DMPROPERTIES ('INDEX_COLUMNS' = 'dob', 'BLOOM_SIZE'='640000', 'BLOOM_FPP'='0.00001');
>
> *Actual output:*
> 0: jdbc:hive2://ha-cluster/default> CREATE DATAMAP bloom_dob ON TABLE uniqdata USING 'bloomfilter' DMPROPERTIES ('INDEX_COLUMNS' = 'dob', 'BLOOM_SIZE'='640000', 'BLOOM_FPP'='0.00001');
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 199.0 failed 4 times, most recent failure: Lost task 1.3 in stage 199.0 (TID 484, BLR1000025336, executor 182): java.io.InvalidClassException: scala.collection.convert.Wrappers$MutableSetWrapper; no valid constructor
>     at java.io.ObjectStreamClass$ExceptionInfo.newInvalidClassException(ObjectStreamClass.java:157)
>     at java.io.ObjectStreamClass.checkDeserialize(ObjectStreamClass.java:862)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2041)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>     at java.util.ArrayList.readObject(ArrayList.java:797)
>     at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2176)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
>     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
>     at org.apache.spark.scheduler.Task.run(Task.scala:108)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Driver stacktrace: (state=,code=0)
>
> *Expected output:*
> Datamap creation should succeed.
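For completeness, the six steps from the comment can be wired into one self-contained program. This is a minimal sketch against the CarbonData 1.4.x APIs used in the snippet above; the table name `maintable`, the datamap names `dm_bloom`/`dm_agg`, the column set, and the `getOrCreateCarbonSession()` bootstrap are illustrative assumptions, not taken from the report.

```
import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.CarbonSession._

import org.apache.carbondata.core.datamap.DataMapStoreManager

object ClearDataMapsRepro {
  def main(args: Array[String]): Unit = {
    // CarbonSession is the 1.4.x entry point; master and app name are placeholders.
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("CARBONDATA-2802-repro")
      .getOrCreateCarbonSession()

    // 1. create the base table (schema is a placeholder, not the one from the report)
    spark.sql("CREATE TABLE maintable(id INT, name STRING, city STRING) STORED BY 'carbondata'")
    // 2. load data
    spark.sql("INSERT INTO maintable SELECT 1, 'a', 'shenzhen'")
    // 3. create an index datamap
    spark.sql("CREATE DATAMAP dm_bloom ON TABLE maintable USING 'bloomfilter' " +
      "DMPROPERTIES('INDEX_COLUMNS'='name')")
    // 4. create a pre-aggregate datamap
    spark.sql("CREATE DATAMAP dm_agg ON TABLE maintable USING 'preaggregate' " +
      "AS SELECT city, COUNT(id) FROM maintable GROUP BY city")
    // 5. run a query that gets rewritten to hit the pre-aggregate datamap
    spark.sql("SELECT city, COUNT(id) FROM maintable GROUP BY city").collect()

    // 6. clear the datamaps for the table -- per the comment, this is what breaks things
    val carbonTable = CarbonEnv.getCarbonTable(Some("default"), "maintable")(spark)
    DataMapStoreManager.getInstance().clearDataMaps(carbonTable.getAbsoluteTableIdentifier)

    // Per the report, a later query or datamap creation on this table now fails with
    // java.io.InvalidClassException: scala.collection.convert.Wrappers$MutableSetWrapper
    spark.sql("SELECT * FROM maintable WHERE name = 'a'").show()
  }
}
```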
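Separately, the `java.io.InvalidClassException: scala.collection.convert.Wrappers$MutableSetWrapper; no valid constructor` in the trace is the stock JDK failure for deserializing an object whose first non-serializable superclass has no no-arg constructor. On the Scala 2.11 line that Spark 2.2 ships with, a mutable Scala set converted with `asJava` produces exactly such a wrapper: it can be written but not read back. A minimal, CarbonData-independent sketch, assuming Scala 2.11 (later Scala releases made these wrappers deserializable):

```
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

import scala.collection.JavaConverters._
import scala.collection.mutable

object MutableSetWrapperRoundTrip {
  def main(args: Array[String]): Unit = {
    // asJava on a mutable Scala set yields scala.collection.convert.Wrappers$MutableSetWrapper.
    val wrapped: java.util.Set[Int] = mutable.Set(1, 2, 3).asJava

    // Serializing succeeds: the wrapper class itself is Serializable...
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(wrapped)
    out.close()

    // ...but deserializing fails, because the wrapper's non-serializable superclass
    // lacks a no-arg constructor. This mirrors the executor-side failure in the trace,
    // where Spark's JavaSerializer reads the task back.
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    in.readObject() // throws InvalidClassException: ...Wrappers$MutableSetWrapper; no valid constructor
  }
}
```

If this reading is right, the failure points at a converted mutable set left in the serialized task state after `clearDataMaps`, rather than at the BloomFilter datamap itself, which is consistent with the comment above.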