
Re: Question

Posted by ravipesala on Jun 20, 2017; 3:11pm
URL: http://apache-carbondata-dev-mailing-list-archive.168.s1.nabble.com/Question-tp15666p15676.html

Hi,

It is because the compaction flow reuses the query flow: it queries the data
from the segments that need to be compacted and sends it for merge sort. So
the writer step receives Spark row data, which is why the measure arrives as
a Spark Decimal during compaction.
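
For reference, here is a minimal sketch of that conversion in plain Java
(assuming Spark's org.apache.spark.sql.types.Decimal API; the scale-prefixed
layout in bigDecimalToBytes below is only an illustration of what
DataTypeUtil.bigDecimalToByte might produce, not its exact format):

import java.math.BigDecimal;
import org.apache.spark.sql.types.Decimal;

public class DecimalToBytesSketch {

  // Hypothetical stand-in for DataTypeUtil.bigDecimalToByte: prefix the
  // unscaled value with its scale so the decimal can be rebuilt later.
  static byte[] bigDecimalToBytes(BigDecimal value) {
    byte[] unscaled = value.unscaledValue().toByteArray();
    byte[] encoded = new byte[unscaled.length + 1];
    encoded[0] = (byte) value.scale();
    System.arraycopy(unscaled, 0, encoded, 1, unscaled.length);
    return encoded;
  }

  public static void main(String[] args) {
    // In the compaction flow the measure arrives wrapped in Spark's Decimal,
    // because the rows come out of the query path rather than the load path.
    Decimal sparkDecimal = Decimal.apply(new BigDecimal("123.45"));

    // Unwrap to java.math.BigDecimal, as the handler does ...
    BigDecimal javaDecimal = sparkDecimal.toJavaBigDecimal();

    // ... then serialize to the byte array form the writer step expects.
    byte[] bytes = bigDecimalToBytes(javaDecimal);
    System.out.println("scale=" + bytes[0] + ", encoded length=" + bytes.length);
  }
}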

Regards,
Ravindra.

On 20 June 2017 at 16:06, Lu Cao <[hidden email]> wrote:

> Hi dev,
> Does anyone know why the decimal type in the compaction flow is processed
> as below in CarbonFactDataHandlerColumnar?
> I can't understand it from the comments.
>
> // convert measure columns
> for (int i = 0; i < type.length; i++) {
>   Object value = rows[i];
>
>   // in compaction flow the measure with decimal type will come as spark decimal.
>   // need to convert it to byte array.
>   if (type[i] == DataType.DECIMAL && compactionFlow) {
>     BigDecimal bigDecimal = ((Decimal) rows[i]).toJavaBigDecimal();
>     value = DataTypeUtil.bigDecimalToByte(bigDecimal);
>   }
>   measurePage[i].putData(rowId, value);
> }
>
>
> Thanks!
> Lionel
>



--
Thanks & Regards,
Ravi