GitHub user mohammadshahidkhan opened a pull request:
https://github.com/apache/carbondata/pull/1834

[CARBONDATA-2056] Hadoop Configuration with access key and secret key should be passed while creating InputStream of distributed carbon file.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [X] Any interfaces changed? None
- [X] Any backward compatibility impacted? None
- [X] Document update required? None
- [X] Testing done
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
  Not possible to add test case.
- [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mohammadshahidkhan/incubator-carbondata AccessAndSecretKeyProblem

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1834.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1834

----

commit 07a78effbdefcc78d4cda83282015a8b97f28471
Author: mohammadshahidkhan <mohdshahidkhan1987@...>
Date: 2018-01-18T16:05:30Z

    [CARBONDATA-2056] Hadoop Configuration with access key and secret key should be passed while creating InputStream of distributed carbon file.

---
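For context, the PR title concerns S3-style credentials carried on the Hadoop Configuration. A hypothetical core-site.xml fragment using the standard hadoop-aws (s3a) property names would look like this (placeholder values; this fragment is illustrative and not part of the PR):

```xml
<!-- Illustrative only: standard s3a credential properties -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```

When such a Configuration is populated on the driver but a fresh, default Configuration is created on the code path that opens the InputStream, these properties are silently absent — which is the failure mode this PR addresses.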
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1834 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1731/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1834 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2967/ ---
Github user mohammadshahidkhan commented on the issue:
https://github.com/apache/carbondata/pull/1834 retest SDV please ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1834 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3207/ ---
Github user mohammadshahidkhan commented on the issue:
https://github.com/apache/carbondata/pull/1834 The failed test case is random (unrelated to this change). ---
Github user mohammadshahidkhan commented on the issue:
https://github.com/apache/carbondata/pull/1834 retest SDV please ---
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1834#discussion_r164779536

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/filesystem/AbstractDFSCarbonFile.java ---
@@ -299,11 +299,11 @@ public boolean delete() {
   }

   @Override public DataInputStream getDataInputStream(String path, FileFactory.FileType fileType,
-      int bufferSize, String compressor) throws IOException {
+      int bufferSize, Configuration configuration, String compressor) throws IOException {
--- End diff --

Handling this only in getDataInputStream is not sufficient:
1) All file operations should use the configuration passed through the constructor.
2) All connecting flows from Spark into CarbonData file operations should pass the Hadoop configuration along. Ex: InputFormats and OutputFormats should comply with the configuration being passed.
3) RDDs involving file operations, like DataloadRdd, MergeRdd and ScanRdd, should pass the conf to the executors and on to the file operations. Ex: refer to Spark's NewHadoopRDD, which broadcasts the conf.

---
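Point (1) of the review above can be sketched in plain Java. This is a hypothetical, simplified stand-in — the class name ConfiguredCarbonFile, the Map used in place of Hadoop's Configuration, and the in-memory byte array used in place of the remote file are all illustrative, not CarbonData APIs. The idea it demonstrates: the configuration is captured once in the constructor, so every file operation (not only getDataInputStream) sees the same credentials.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.Map;

// Hypothetical stand-in for a distributed Carbon file: the configuration is
// captured once, in the constructor, and reused by every file operation.
class ConfiguredCarbonFile {
    private final Map<String, String> conf; // stands in for org.apache.hadoop.conf.Configuration
    private final byte[] contents;          // stands in for the remote file's bytes

    ConfiguredCarbonFile(Map<String, String> conf, byte[] contents) {
        this.conf = conf;
        this.contents = contents;
    }

    // Every operation consults the stored configuration, so credentials set
    // by the caller (e.g. fs.s3a.access.key) are never lost along the way.
    DataInputStream getDataInputStream() throws IOException {
        requireCredentials();
        return new DataInputStream(new ByteArrayInputStream(contents));
    }

    boolean exists() {
        requireCredentials();
        return contents != null;
    }

    private void requireCredentials() {
        if (!conf.containsKey("fs.s3a.access.key") || !conf.containsKey("fs.s3a.secret.key")) {
            throw new IllegalStateException("credentials missing from configuration");
        }
    }
}
```

In the real codebase the stored Configuration would be handed to FileSystem.get(...), which is how the access key and secret key reach the remote filesystem; the point here is only the constructor-injection pattern the reviewer asks for.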
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1834 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3232/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1834 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3356/ ---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/1834 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2119/ ---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/1834 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3273/ ---
Github user mohammadshahidkhan commented on the issue:
https://github.com/apache/carbondata/pull/1834 Closing this PR; as per @ravipesala, this fix will be handled for all the I/O operations together with the S3 support implementation. ---
Github user mohammadshahidkhan closed the pull request at:
https://github.com/apache/carbondata/pull/1834 ---