GitHub user ajantha-bhat opened a pull request:
https://github.com/apache/carbondata/pull/2672

[HOTFIX] Improve SDK multi-thread performance

Changes in this PR: currently, writing rows to each writer iterator does not happen concurrently; this can also be made concurrent. Also, for Avro, sdkUserCore can be used in the input processor step.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed? NA
- [ ] Any backward compatibility impacted? NA
- [ ] Document update required? NA
- [ ] Testing done? Done; UT already added.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ajantha-bhat/carbondata unmanaged_table

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2672.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2672

----

commit 8d786f3f1b1221bae77cd93256c7dd03a24e5acc
Author: ajantha-bhat <ajanthabhat@...>
Date: 2018-08-29T17:41:09Z

    [HOTFIX] improve sdk multi-thread performance

----

---
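The concurrent dispatch this PR describes (each write call routed round-robin to one of several writer iterators) can be sketched as below. This is an illustrative stand-alone snippet, not the actual CarbonData code; `RoundRobinIndex` and `slots` are hypothetical names.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: round-robin load balancing across N writer iterators.
// A shared AtomicLong lets many threads pick a slot without locking the whole writer.
public class RoundRobinIndex {
    private final AtomicLong counter = new AtomicLong(0);
    private final int slots;

    public RoundRobinIndex(int slots) {
        this.slots = slots;
    }

    // incrementAndGet() returns 1 on the first call, so indices cycle 1, 2, ..., 0, 1, ...
    // (long % int) always fits in an int, so the cast is safe.
    public int next() {
        return (int) (counter.incrementAndGet() % slots);
    }
}
```

Each SDK user thread would call `next()` to pick an iterator, spreading writes evenly without a global lock.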
Github user ajantha-bhat commented on the issue:
https://github.com/apache/carbondata/pull/2672

@gvramana, @ravipesala please review

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2672

SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6467/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8154/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/83/

---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2672

SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6479/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8168/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/97/

---
Github user jackylk commented on the issue:
https://github.com/apache/carbondata/pull/2672

Can you fill in the PR description template, @ajantha-bhat?

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r213934443

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;
--- End diff --

Please add a comment for this counter.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r213934934

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;

     CarbonMultiRecordWriter(CarbonOutputIteratorWrapper[] iterators,
         DataLoadExecutor dataLoadExecutor, CarbonLoadModel loadModel, Future future,
         ExecutorService executorService) {
       super(null, dataLoadExecutor, loadModel, future, executorService);
       this.iterators = iterators;
+      counter = new AtomicLong(0);
     }

-    @Override public synchronized void write(NullWritable aVoid, ObjectArrayWritable objects)
+    @Override public void write(NullWritable aVoid, ObjectArrayWritable objects)
         throws InterruptedException {
-      iterators[counter].write(objects.get());
-      if (++counter == iterators.length) {
-        //round robin reset
-        counter = 0;
+      int hash = (int) (counter.incrementAndGet() % iterators.length);
--- End diff --

Why not make counter an AtomicInteger? Then there is no need to type cast to int.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r213935140

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;

     CarbonMultiRecordWriter(CarbonOutputIteratorWrapper[] iterators,
         DataLoadExecutor dataLoadExecutor, CarbonLoadModel loadModel, Future future,
         ExecutorService executorService) {
       super(null, dataLoadExecutor, loadModel, future, executorService);
       this.iterators = iterators;
+      counter = new AtomicLong(0);
     }

-    @Override public synchronized void write(NullWritable aVoid, ObjectArrayWritable objects)
+    @Override public void write(NullWritable aVoid, ObjectArrayWritable objects)
         throws InterruptedException {
-      iterators[counter].write(objects.get());
-      if (++counter == iterators.length) {
-        //round robin reset
-        counter = 0;
+      int hash = (int) (counter.incrementAndGet() % iterators.length);
--- End diff --

Rename `hash` to `iteratorNum`.

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r213935430

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/steps/InputProcessorStepWithNoConverterImpl.java ---
@@ -64,10 +63,13 @@
     private Map<Integer, GenericDataType> dataFieldsWithComplexDataType;

+    private short sdkUserCore;
--- End diff --

What does this variable mean? Please add a comment, and maybe change it to a more meaningful name.

---
Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r214313919

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;

     CarbonMultiRecordWriter(CarbonOutputIteratorWrapper[] iterators,
         DataLoadExecutor dataLoadExecutor, CarbonLoadModel loadModel, Future future,
         ExecutorService executorService) {
       super(null, dataLoadExecutor, loadModel, future, executorService);
       this.iterators = iterators;
+      counter = new AtomicLong(0);
     }

-    @Override public synchronized void write(NullWritable aVoid, ObjectArrayWritable objects)
+    @Override public void write(NullWritable aVoid, ObjectArrayWritable objects)
         throws InterruptedException {
-      iterators[counter].write(objects.get());
-      if (++counter == iterators.length) {
-        //round robin reset
-        counter = 0;
+      int hash = (int) (counter.incrementAndGet() % iterators.length);
--- End diff --

If we make it an integer and write is called for more than INT_MAX records, the modulo will give negative results, so a long is needed for a very large record count; hence long. But long % int is always within the int range, so the cast is safe. See https://stackoverflow.com/questions/7262133/will-a-long-int-will-always-fit-into-an-int

---
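The overflow concern discussed above can be checked with a small stand-alone snippet (illustrative only, not CarbonData code): an int counter that wraps past Integer.MAX_VALUE yields a negative remainder in Java, which would be an invalid array index, whereas a long counter does not wrap at that point and its remainder always fits in an int.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int intCounter = Integer.MAX_VALUE;
        intCounter++; // silently wraps to Integer.MIN_VALUE
        // In Java, the sign of a % b follows a, so the remainder is negative here:
        System.out.println(intCounter % 3); // -2, unusable as an array index

        long longCounter = (long) Integer.MAX_VALUE + 1; // no wrap with a long
        System.out.println((int) (longCounter % 3)); // 2; |long % 3| < 3, so the cast is safe
    }
}
```

This is exactly why keeping the counter as a long (and casting the remainder) is safe, while an int counter would eventually index out of bounds.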
Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r214314587

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;

     CarbonMultiRecordWriter(CarbonOutputIteratorWrapper[] iterators,
         DataLoadExecutor dataLoadExecutor, CarbonLoadModel loadModel, Future future,
         ExecutorService executorService) {
       super(null, dataLoadExecutor, loadModel, future, executorService);
       this.iterators = iterators;
+      counter = new AtomicLong(0);
     }

-    @Override public synchronized void write(NullWritable aVoid, ObjectArrayWritable objects)
+    @Override public void write(NullWritable aVoid, ObjectArrayWritable objects)
         throws InterruptedException {
-      iterators[counter].write(objects.get());
-      if (++counter == iterators.length) {
-        //round robin reset
-        counter = 0;
+      int hash = (int) (counter.incrementAndGet() % iterators.length);
--- End diff --

Done.

---
Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r214314596

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,29 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    private AtomicLong counter;
--- End diff --

Done.

---
Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r214318648

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/steps/InputProcessorStepWithNoConverterImpl.java ---
@@ -64,10 +63,13 @@
     private Map<Integer, GenericDataType> dataFieldsWithComplexDataType;

+    private short sdkUserCore;
--- End diff --

Done.

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8206/

---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2672

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/135/

---
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2672#discussion_r214509355

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java ---
@@ -460,27 +461,32 @@ public CarbonLoadModel getLoadModel() {
     private CarbonOutputIteratorWrapper[] iterators;

-    private int counter;
+    // keep counts of number of writes called
+    // and it is used to load balance each write call to one iterator.
+    private AtomicLong counter;

     CarbonMultiRecordWriter(CarbonOutputIteratorWrapper[] iterators,
         DataLoadExecutor dataLoadExecutor, CarbonLoadModel loadModel, Future future,
         ExecutorService executorService) {
       super(null, dataLoadExecutor, loadModel, future, executorService);
       this.iterators = iterators;
+      counter = new AtomicLong(0);
     }

-    @Override public synchronized void write(NullWritable aVoid, ObjectArrayWritable objects)
+    @Override public void write(NullWritable aVoid, ObjectArrayWritable objects)
         throws InterruptedException {
-      iterators[counter].write(objects.get());
-      if (++counter == iterators.length) {
-        //round robin reset
-        counter = 0;
+      int iteratorLength = iterators.length;
+      int iteratorNum = (int) (counter.incrementAndGet() % iteratorLength);
+      synchronized (iterators[iteratorNum]) {
--- End diff --

If we want to let each SDK user thread write to its own iterator, why not create multiple CarbonWriters (each with one iterator) which share one SortDataRows, and let the user call each CarbonWriter to write? Then no synchronization is required.

---
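The design being reviewed here, with a per-iterator lock instead of a synchronized write method, can be sketched as follows. This is a hypothetical stand-alone illustration where `StringBuilder` stands in for `CarbonOutputIteratorWrapper`; it is not the actual implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: writes are load-balanced round-robin over the iterators, and only the
// chosen iterator is locked, so writes to different iterators can run concurrently.
public class MultiWriterSketch {
    private final StringBuilder[] iterators; // stand-in for CarbonOutputIteratorWrapper[]
    private final AtomicLong counter = new AtomicLong(0);

    public MultiWriterSketch(int n) {
        iterators = new StringBuilder[n];
        for (int i = 0; i < n; i++) {
            iterators[i] = new StringBuilder();
        }
    }

    public void write(String row) {
        // pick an iterator round-robin; indices cycle 1, 2, ..., 0, 1, ...
        int iteratorNum = (int) (counter.incrementAndGet() % iterators.length);
        synchronized (iterators[iteratorNum]) { // lock one iterator, not the whole writer
            iterators[iteratorNum].append(row).append(';');
        }
    }

    public String contents(int i) {
        return iterators[i].toString();
    }
}
```

jackylk's alternative above (one CarbonWriter per user thread, all sharing a single SortDataRows) would remove even this per-iterator lock, at the cost of a different writer API for SDK users.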