LONG_STRING_COLUMNS doesn't take effect

LONG_STRING_COLUMNS doesn't take effect

Jocean shi
The parameter LONG_STRING_COLUMNS doesn't take effect.
I tried it two ways.

First, via table properties:

TBLPROPERTIES ('LONG_STRING_COLUMNS'='col1,col2')

Second, via the DataFrame writer option (note the writer is obtained with df.write):

df.write.format("carbondata")
  .option("tableName", "carbonTable")
  .option("long_string_columns", "col1, col2")
  .save()
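
For reference, a minimal end-to-end sketch of the table-property route (table and column names are placeholders, and it assumes a CarbonData-enabled SparkSession named spark; the STORED AS carbondata syntax follows the 1.5.x docs, older releases used STORED BY 'carbondata'):

// Hypothetical table: col1/col2 are declared as long string columns,
// so batch loads may exceed the default 32000-character string limit.
spark.sql(
  """CREATE TABLE long_string_example (
    |  id INT,
    |  col1 STRING,
    |  col2 STRING
    |)
    |STORED AS carbondata
    |TBLPROPERTIES ('LONG_STRING_COLUMNS'='col1,col2')""".stripMargin)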

Re: LONG_STRING_COLUMNS doesn't take effect

xm_zzc
Can you give us the detailed error message? Which CarbonData version are you using?




Re: LONG_STRING_COLUMNS doesn't take effect

Jocean shi
The version is 1.5.2.
Error message:
Caused by: org.apache.carbondata.streaming.CarbonStreamException: Task failed while writing rows
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:361)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:264)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:263)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
... 3 more
Caused by: java.lang.Exception: Dataload failed, String length cannot exceed 32000 characters
at org.apache.carbondata.streaming.parser.FieldConverter$.objectToString(FieldConverter.scala:53)
at org.apache.carbondata.streaming.parser.RowStreamParserImp$$anonfun$parserRow$1.apply(RowStreamParserImp.scala:63)
at org.apache.carbondata.streaming.parser.RowStreamParserImp$$anonfun$parserRow$1.apply(RowStreamParserImp.scala:62)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.carbondata.streaming.parser.RowStreamParserImp.parserRow(RowStreamParserImp.scala:62)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:374)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:368)
at org.apache.carbondata.streaming.segment.StreamSegment.appendBatchData(StreamSegment.java:294)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:352)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:342)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:342)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415)
at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:354)



Re: LONG_STRING_COLUMNS doesn't take effect

xm_zzc
Are you writing into a streaming table? Writing long strings into a streaming table is not supported yet.
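
If the streaming sink is the blocker, one possible workaround (a sketch only, with placeholder names, assuming a DataFrame df whose col1/col2 carry the long values) is to land those rows with a plain batch write into a non-streaming table, where long_string_columns should be honored:

// Batch (non-streaming) write; the streaming parser is what enforces
// the 32000-character limit in the trace above.
df.write
  .format("carbondata")
  .option("tableName", "long_string_example")
  .option("long_string_columns", "col1,col2")
  .mode("append")
  .save()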




Re: LONG_STRING_COLUMNS doesn't take effect

Jocean shi
I see.
Another question: why can't CarbonData be debugged on a Windows system? That is a pity.

Best
Jocean.shi


Re: LONG_STRING_COLUMNS doesn't take effect

xm_zzc
Generally we use Linux as the development environment, but it should also work on Windows.
Can you show the detailed problem you hit on Windows?
@xuchuanyin, please help.




Re: LONG_STRING_COLUMNS doesn't take effect

Jocean shi
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F E:\tmp\hive
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:659)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.externalCatalog(CarbonSessionState.scala:232)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog$lzycompute(CarbonSessionState.scala:219)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog(CarbonSessionState.scala:217)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.analyzer(CarbonSessionState.scala:244)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:423)
at example.spark.CarbondataStreamingConfigTest$.main(CarbondataStreamingConfigTest.scala:37)
at example.spark.CarbondataStreamingConfigTest.main(CarbondataStreamingConfigTest.scala)
;
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.externalCatalog(CarbonSessionState.scala:232)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog$lzycompute(CarbonSessionState.scala:219)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog(CarbonSessionState.scala:217)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.analyzer(CarbonSessionState.scala:244)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:423)
at example.spark.CarbondataStreamingConfigTest$.main(CarbondataStreamingConfigTest.scala:37)
at example.spark.CarbondataStreamingConfigTest.main(CarbondataStreamingConfigTest.scala)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F E:\tmp\hive
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:659)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.externalCatalog(CarbonSessionState.scala:232)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog$lzycompute(CarbonSessionState.scala:219)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog(CarbonSessionState.scala:217)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.analyzer(CarbonSessionState.scala:244)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:423)
at example.spark.CarbondataStreamingConfigTest$.main(CarbondataStreamingConfigTest.scala:37)
at example.spark.CarbondataStreamingConfigTest.main(CarbondataStreamingConfigTest.scala)

at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
... 18 more
Caused by: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F E:\tmp\hive
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:659)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.externalCatalog(CarbonSessionState.scala:232)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog$lzycompute(CarbonSessionState.scala:219)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.catalog(CarbonSessionState.scala:217)
at org.apache.spark.sql.hive.CarbonSessionStateBuilder.analyzer(CarbonSessionState.scala:244)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:423)
at example.spark.CarbondataStreamingConfigTest$.main(CarbondataStreamingConfigTest.scala:37)
at example.spark.CarbondataStreamingConfigTest.main(CarbondataStreamingConfigTest.scala)

at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:699)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 33 more



Re: LONG_STRING_COLUMNS doesn't take effect

Jocean shi
Another bug, about paths: CarbonData builds HDFS paths using File.separator, but File.separator is "\" on Windows, so the generated paths are wrong there.
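
To illustrate the general fix (a sketch, not the actual CarbonData patch): Hadoop/HDFS paths always use "/", so path strings should be joined with a literal "/" or with org.apache.hadoop.fs.Path instead of the platform-dependent java.io.File.separator. The tablePath below is a placeholder:

import org.apache.hadoop.fs.Path

val tablePath = "hdfs://nn:8020/user/warehouse/carbon_table" // placeholder

// Broken on Windows: java.io.File.separator is "\", which produces an
// invalid HDFS path like .../carbon_table\Metadata.
val broken = tablePath + java.io.File.separator + "Metadata"

// Portable: hard-code "/" for Hadoop paths ...
val fixed = tablePath + "/" + "Metadata"
// ... or let Hadoop's Path join and normalize the components.
val alsoFixed = new Path(tablePath, "Metadata").toString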
