GitHub user jackylk opened a pull request:
https://github.com/apache/incubator-carbondata/pull/433

[WIP] Fix test case failures for -Pno-kettle and -Pspark-2.0

There are test cases failing for -Pno-kettle and -Pspark-2.0 because of the big decimal compression feature.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/433.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #433

----
commit 1f6d217ca8aa15dce7d4afa329e00d60326792a1
Author: jackylk <[hidden email]>
Date: 2016-12-13T16:28:37Z

    fix testcase
----
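For anyone wanting to reproduce the failures locally, a sketch of exercising the two profiles named above with a standard Maven build; the exact profile combination may differ in your environment:

```
# Run the build and tests against the Spark 2.0 profile
mvn clean verify -Pspark-2.0

# Run the build and tests against the no-kettle profile
mvn clean verify -Pno-kettle
```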
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Build failed with Spark 1.5.2. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/163/
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

I have checked out the code and switched to branch spark2 (Scala 2.10.4). When I ran `mvn -DskipTests -Pspark-2.0 compile`, I got this error:

```
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/nathan/workspace/carbondata/core/src/main/java/org/apache/carbondata/scan/complextypes/StructQueryType.java:[32,49] cannot find symbol
  symbol:   class GenericInternalRowWithSchema
  location: package org.apache.spark.sql.catalyst.expressions
[ERROR] /Users/user/workspace/carbondata/core/src/main/java/org/apache/carbondata/scan/complextypes/StructQueryType.java:[182,16] cannot find symbol
  symbol:   class GenericInternalRowWithSchema
  location: class org.apache.carbondata.scan.complextypes.StructQueryType
[INFO] 2 errors
```

There is no `org.apache.spark.sql.catalyst.expressions.GenericInternalRowWithSchema` in Spark 2. How can I fix it?
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

There is a class named `org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema` in Spark 2.
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Can you use `mvn clean verify -Pspark-2.0`? I think maybe your local Maven repository is not cleaned.
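A minimal sketch of the suggested clean rebuild, assuming Maven's default local repository layout under ~/.m2 (the repository path is an assumption, not something stated in the thread):

```
# Drop previously installed CarbonData artifacts from the local Maven repository
# (path assumes the default ~/.m2 layout), then rebuild with the Spark 2.0 profile
rm -rf ~/.m2/repository/org/apache/carbondata
mvn clean verify -Pspark-2.0
```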
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

I have switched to master, and it works now. The dependency in `org.apache.carbondata.scan.complextypes.StructQueryType` has changed to `import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;`.

Thank you, Jacky.
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Build failed with Spark 1.5.2. Please check CI: http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/179/
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Hi Jacky, on branch master the command `mvn -DskipTests -Pspark-2.0 -Dhadoop.version=2.6.0 clean package` ran successfully, but how can I run CarbonData with Spark 2? I edited `bin/carbon-spark-shell` to set `ASSEMBLY_DIR="$CARBON_SOURCE/assembly/target/scala-2.11"`, and the command printed:

```
/root/spark/bin/spark-submit --name Carbon Spark shell --class org.apache.spark.repl.carbon.Main /root/carbondata/assembly/target/scala-2.11/carbondata_2.11-1.0.0-incubating-SNAPSHOT-shade-hadoop2.6.0.jar

java.lang.ClassNotFoundException: org.apache.spark.repl.carbon.Main
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:693)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
```
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

@littleJava Yes, I think carbon-spark-shell will not work with a carbon jar built with -Pspark-2.0, and it is actually not needed. You can use the spark-shell or spark-sql tool with the carbon jar directly.
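A sketch of using a stock Spark 2 shell with the carbon jar directly, reusing the assembly jar path from the previous comment; the Spark installation path is only illustrative:

```
# Start a plain spark-shell with the CarbonData assembly jar on the classpath.
# Adjust the Spark home and jar path to your environment.
/root/spark/bin/spark-shell \
  --jars /root/carbondata/assembly/target/scala-2.11/carbondata_2.11-1.0.0-incubating-SNAPSHOT-shade-hadoop2.6.0.jar
```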
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Hi, I ran `org.apache.spark.sql.examples.CarbonExample` with spark-submit on Spark 2.0.2 and got this exception:

```
AUDIT 15-12 16:04:24,691 - [spark-host][root][Thread-1]Table created with Database name [default] and Table name [carbon_table]
WARN  15-12 16:04:24,691 - Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.CarbonSource. Persisting data source relation `carbon_table` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
Exception in thread "main" java.lang.UnsupportedOperationException: loadTable is not implemented
	at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.loadTable(InMemoryCatalog.scala:290)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadTable(SessionCatalog.scala:297)
	at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:335)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
	at org.apache.spark.sql.examples.CarbonExample$.main(Carbon.scala:105)
	at org.apache.spark.sql.examples.CarbonExample.main(Carbon.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
```
Github user littleJava commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

Another question: is Hive optional? I am just testing CarbonData with Spark 2 in local driver mode. Also, what is the role of Derby, and how can I use it? Thank you!

By the way, a new version of carbondata-format-1.0.0-incubating-SNAPSHOT should be uploaded.
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

@littleJava Yes, you can test using Spark local mode. In cluster mode, Hive is not optional, as the carbon-spark integration stores the metadata in the Hive metastore. The carbondata-format-1.0.0-incubating-SNAPSHOT jar is updated, thanks.
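To connect the Derby question to this answer: when Spark's Hive support is used without an external metastore configured, it falls back to an embedded Derby metastore (a local metastore_db directory). A sketch for local experiments, reusing the assembly jar path from earlier in the thread as an illustrative assumption:

```
# spark-sql goes through Spark's Hive support; with no hive-site.xml configured it
# falls back to an embedded Derby metastore (created as ./metastore_db).
/root/spark/bin/spark-sql \
  --jars /root/carbondata/assembly/target/scala-2.11/carbondata_2.11-1.0.0-incubating-SNAPSHOT-shade-hadoop2.6.0.jar
```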
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/433

This PR is a duplicate of #449, closing it.
Github user jackylk closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/433