Hi,
By simple insert, do you mean "insert by values"? I don't think this will
be used frequently in a real data pipeline. Ideally, insert will be used
for inserting from another table or an external table.
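To make the distinction concrete, here is a minimal Spark SQL sketch of the
two cases (the table names are just placeholders, not from the proposal):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("insert-example")
      .enableHiveSupport()
      .getOrCreate()

    // Insert by values: a single hard-coded row (the case this proposal targets)
    spark.sql("INSERT INTO target_table VALUES (1, 'sample')")

    // Insert-select from another table: the common pattern in real pipelines
    spark.sql("INSERT INTO target_table SELECT * FROM source_table")
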
Just for a one-row insert (or insert by values), I don't think we need to
avoid the Spark RDD flow. Also, based on your design, using the SDK to
write a transactional table segment brings the extra overhead of creating
metadata files manually. Considering the scope of the changes and their
value addition in a real-world scenario,
-1 from my side for this requirement.
Thanks,
Ajantha