GitHub user kumarvishal09 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/470

[WIP] Fixed performance issue for dictionary loading during decoder

**Problem** Dictionary loading in the Carbon decoder is currently slow because the cache's `get` method is called separately for each dictionary column.

**Solution** Call the `getAll` API to load the dictionaries concurrently in a single batch.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kumarvishal09/incubator-carbondata DictionaryLoadingPerformanceIssue

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/470.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #470

----

commit 44abae45a5860e6bb955cead21591afe346ed193
Author: kumarvishal <[hidden email]>
Date:   2016-12-27T02:52:50Z

    Fixed Performance issue for dictionary loading during decoder

----
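The idea is easier to see in isolation. Below is a minimal, self-contained Scala sketch of the change described above: replace one `get` call per dictionary column with a single `getAll` call over the whole batch of keys. The `Cache` trait here is an illustrative stand-in, not CarbonData's actual cache interface.

    // Minimal sketch of the PR's idea (illustrative stand-ins, not CarbonData's
    // actual Cache interface): one getAll() batch instead of one get() per key.
    trait Cache[K, V] {
      def get(key: K): V                // one (potentially slow) lookup per call
      def getAll(keys: Seq[K]): Seq[V]  // loads the whole batch, concurrently inside the cache
    }

    object DictionaryLoadingSketch {
      // Before: sequential per-column loading, which is what made the decoder slow.
      def loadOneByOne[K, V](cache: Cache[K, V], keys: Seq[K]): Seq[V] =
        keys.map(cache.get)

      // After: a single batch call, letting the cache load all dictionaries concurrently.
      def loadAsBatch[K, V](cache: Cache[K, V], keys: Seq[K]): Seq[V] =
        cache.getAll(keys)
    }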
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/332/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/360/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/361/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/362/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/509/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/510/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.5.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/517/
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r95131638

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
-        try {
-          cache.get(new DictionaryColumnUniqueIdentifier(
-            atiMap(f._1).getCarbonTableIdentifier,
-            f._2, f._3))
-        } catch {
-          case _: Throwable => null
-        }
+        new DictionaryColumnUniqueIdentifier(
+          atiMap(f._1).getCarbonTableIdentifier,
+          f._2, f._3)
       } else {
         null
       }
     }
-    dicts
+    try {
+      val noDictionaryIndexes = new java.util.ArrayList[Int]()
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (x._1 == null) {
--- End diff --

can you use (columnId, index) instead of x?
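For illustration, the renaming the reviewer asks for could look like the self-contained sketch below (the names are hypothetical; this is not the code that was eventually committed):

    import java.util

    // Illustrative sketch of the naming suggestion above: destructure the
    // zipWithIndex tuple as (columnId, index) instead of referring to x._1 / x._2.
    object ZipWithIndexNamingSketch {
      def noDictionaryIndexes[A](columnIds: Seq[A]): util.List[Int] = {
        val indexes = new util.ArrayList[Int]()
        columnIds.zipWithIndex.foreach { case (columnId, index) =>
          if (columnId == null) {
            indexes.add(index)   // record positions of non-dictionary columns
          }
        }
        indexes
      }
    }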
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r95131758

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
--- End diff --

please avoid using f, use a more meaningful tuple with names
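A small sketch of what "a more meaningful tuple with names" could look like; the field names tableName, columnId and dataType are assumptions about what the tuple elements hold, not confirmed by the PR:

    // Illustrative only: destructure the tuple instead of using f._1 / f._2 / f._3.
    object MeaningfulTupleNames {
      def dictionaryColumnLabels(columns: Seq[(String, String, String)]): Seq[String] =
        columns.map { case (tableName, columnId, dataType) =>
          // columnId == null marks a non-dictionary column, as in the diff above
          if (columnId != null) s"$tableName.$columnId ($dataType)" else "<no dictionary>"
        }
    }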
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r95131812

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
-        try {
-          cache.get(new DictionaryColumnUniqueIdentifier(
-            atiMap(f._1).getCarbonTableIdentifier,
-            f._2, f._3))
-        } catch {
-          case _: Throwable => null
-        }
+        new DictionaryColumnUniqueIdentifier(
+          atiMap(f._1).getCarbonTableIdentifier,
+          f._2, f._3)
       } else {
         null
       }
     }
-    dicts
+    try {
+      val noDictionaryIndexes = new java.util.ArrayList[Int]()
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (x._1 == null) {
+          noDictionaryIndexes.add(x._2)
+        }
+      }
+      val dict = cache.getAll(dictionaryColumnIds.filter(_ != null).toSeq.asJava);
+      val finalDict = new java.util.ArrayList[Dictionary]()
+      var dictIndex: Int = 0
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
--- End diff --

same as previous comment
Github user jackylk commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r95131917

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
-        try {
-          cache.get(new DictionaryColumnUniqueIdentifier(
-            atiMap(f._1).getCarbonTableIdentifier,
-            f._2, f._3))
-        } catch {
-          case _: Throwable => null
-        }
+        new DictionaryColumnUniqueIdentifier(
+          atiMap(f._1).getCarbonTableIdentifier,
+          f._2, f._3)
       } else {
         null
       }
     }
-    dicts
+    try {
+      val noDictionaryIndexes = new java.util.ArrayList[Int]()
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (x._1 == null) {
+          noDictionaryIndexes.add(x._2)
+        }
+      }
+      val dict = cache.getAll(dictionaryColumnIds.filter(_ != null).toSeq.asJava);
+      val finalDict = new java.util.ArrayList[Dictionary]()
+      var dictIndex: Int = 0
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (!noDictionaryIndexes.contains(x._2)) {
+          finalDict.add(dict.get(dictIndex))
+          dictIndex += 1
+        } else {
+          finalDict.add(null)
--- End diff --

why adding null?
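One plausible reading of the diff itself: getAll is called only for the non-null identifiers, so the null placeholders keep the result list index-aligned with the original column list (null meaning "no dictionary for this column"). A minimal sketch of that alignment step, with hypothetical names:

    // Hedged sketch of what the null placeholders appear to achieve: keep the output
    // positionally aligned with the input keys, so a column's dictionary can still be
    // looked up by its original index (null = non-dictionary column).
    object AlignBatchResultsSketch {
      def align[A, B >: Null](keys: Seq[A], loaded: Seq[B]): Seq[B] = {
        val it = loaded.iterator
        keys.map { key =>
          if (key == null) null   // non-dictionary column: keep the slot empty
          else it.next()          // dictionary column: take the next batch-loaded value
        }
      }
    }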
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r103383136

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
-        try {
-          cache.get(new DictionaryColumnUniqueIdentifier(
-            atiMap(f._1).getCarbonTableIdentifier,
-            f._2, f._3))
-        } catch {
-          case _: Throwable => null
-        }
+        new DictionaryColumnUniqueIdentifier(
+          atiMap(f._1).getCarbonTableIdentifier,
+          f._2, f._3)
       } else {
         null
       }
     }
-    dicts
+    try {
+      val noDictionaryIndexes = new java.util.ArrayList[Int]()
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (x._1 == null) {
+          noDictionaryIndexes.add(x._2)
+        }
+      }
+      val dict = cache.getAll(dictionaryColumnIds.filter(_ != null).toSeq.asJava);
+      val finalDict = new java.util.ArrayList[Dictionary]()
+      var dictIndex: Int = 0
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
--- End diff --

ok
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r103383147

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
-        try {
-          cache.get(new DictionaryColumnUniqueIdentifier(
-            atiMap(f._1).getCarbonTableIdentifier,
-            f._2, f._3))
-        } catch {
-          case _: Throwable => null
-        }
+        new DictionaryColumnUniqueIdentifier(
+          atiMap(f._1).getCarbonTableIdentifier,
+          f._2, f._3)
       } else {
         null
       }
     }
-    dicts
+    try {
+      val noDictionaryIndexes = new java.util.ArrayList[Int]()
+      dictionaryColumnIds.zipWithIndex.foreach { x =>
+        if (x._1 == null) {
--- End diff --

ok
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/470#discussion_r103383163

--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonDictionaryDecoder.scala ---
@@ -224,21 +226,39 @@ case class CarbonDictionaryDecoder(
   }
 
   private def getDictionary(atiMap: Map[String, AbsoluteTableIdentifier],
-      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
-    val dicts: Seq[Dictionary] = getDictionaryColumnIds.map { f =>
+      cache: Cache[DictionaryColumnUniqueIdentifier, Dictionary]) = {
+    val dictionaryColumnIds = getDictionaryColumnIds.map { f =>
       if (f._2 != null) {
--- End diff --

ok
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Failed with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/975/
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 test this please
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 Build Success with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/997/
Github user jackylk commented on the issue:
https://github.com/apache/incubator-carbondata/pull/470 LGTM
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/470