How to compile CarbonData 1.5.1 with Spark 2.3.1


xm_zzc
Hi:
  The steps to compile CarbonData 1.5.1 with Spark 2.3.1 are as follows:
  1. Overwrite the Spark 2.3 copy of CarbonDataSourceScan.scala with the common 2.1/2.2 version:
     cp -f integration/spark2/src/main/commonTo2.1And2.2/org/apache/spark/sql/execution/strategy/CarbonDataSourceScan.scala integration/spark2/src/main/spark2.3/org/apache/spark/sql/execution/strategy/CarbonDataSourceScan.scala

  2. Edit integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/bigdecimal/TestBigDecimal.scala:
     Line 48: change 'salary decimal(30, 10))' to 'salary decimal(27, 10))'.
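     As a reminder of what the two numbers in decimal(27, 10) mean (the post does not say why the precision must shrink from 30 to 27), here is a small illustration; the sample value below is made up and is not from the CarbonData test suite:

```java
import java.math.BigDecimal;

public class DecimalSketch {
    // Hypothetical sample value: 17 digits before the point + 10 after
    // = 27 significant digits, matching the edited decimal(27, 10) column.
    static final BigDecimal SAMPLE =
        new BigDecimal("12345678901234567.1234567890");

    public static void main(String[] args) {
        // precision = total number of significant digits (27),
        // scale = number of digits after the decimal point (10).
        System.out.println("precision=" + SAMPLE.precision()
            + " scale=" + SAMPLE.scale());
    }
}
```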

  3. Edit integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala:
     1) Line 297: change 'classOf[Seq[String]],' to 'classOf[Seq[Attribute]],'.
     2) Replace lines 299-301 with the single line 'method.invoke(dataSourceObj, mode, query, query.output, physicalPlan)'.
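     Step 3 adjusts a reflective method lookup so the parameter list matches the method signature in Spark 2.3.1. A minimal sketch of the same lookup-and-invoke pattern, using a hypothetical toy class instead of Spark's real DataSource (the class, method body, and argument values below are illustrations, not Spark code):

```java
import java.lang.reflect.Method;
import java.util.List;

public class ReflectiveInvokeSketch {
    // Hypothetical stand-in for Spark's DataSource, for illustration only.
    public static class DataSourceLike {
        public String writeAndRead(String mode, String query,
                                   List<String> output, String physicalPlan) {
            return mode + " -> " + query
                + "(" + String.join(",", output) + ") via " + physicalPlan;
        }
    }

    public static String invokeViaReflection() throws Exception {
        DataSourceLike dataSourceObj = new DataSourceLike();
        // Look the method up by name and erased parameter types, as
        // CarbonReflectionUtils does; a generic Seq/List parameter erases
        // to its raw type, so the element type does not enter the lookup.
        Method method = dataSourceObj.getClass().getMethod("writeAndRead",
            String.class, String.class, List.class, String.class);
        // Mirrors the fixed call shape:
        // method.invoke(dataSourceObj, mode, query, query.output, physicalPlan)
        return (String) method.invoke(dataSourceObj,
            "Overwrite", "query", List.of("col1", "col2"), "physicalPlan");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeViaReflection());
    }
}
```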

  4. Compile with the following command:
     mvn -DskipTests -Pspark-2.3 -Phadoop-2.8 -Pbuild-with-format -Pmv -Dspark.version=2.3.1 -Dhadoop.version=2.6.0-cdh5.8.3 clean package
     The build completes successfully with these settings.

  You can refer to PR #2779: https://github.com/apache/carbondata/pull/2779



