Channel: SCN : Unanswered Discussions - Data Services and Data Quality

Setting the default commit size



Hi,

 

We are loading high volumes of data into HANA using the bulk loader option, and we're finding that the optimal commit size differs between environments. The problem is that we can't set a different value in the target table options per environment, because that would be a code change. There is an option to set the commit size to 'default', but I can't find where this default value is defined. I tried changing the commit size setting in the datastore, but it had no effect.

 

It must be possible, because when I set a target table's commit size to 'default' and run the dataflow, one environment uses 10000 and another uses 1000:

 

BLKLOAD: HANA table <DH_TEST>, type <Column store>, commit size <10000>, auto correct load <no>, update method <update>, update rows <no>, delete rows <no>.

 

BLKLOAD: HANA table <DH_TEST>, type <Column store>, commit size <1000>, auto correct load <no>, update method <update>, update rows <no>, delete rows <no>.
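As a side note, a quick way to confirm which commit size each environment actually applied is to scan the job trace log for the BLKLOAD lines. A minimal sketch (the sample lines are copied from the log output above; the regex is an assumption based on that format, not an official DS log specification):

```python
import re

# Sample BLKLOAD trace lines, as captured from the two environments above.
log_lines = [
    "BLKLOAD: HANA table <DH_TEST>, type <Column store>, commit size <10000>, "
    "auto correct load <no>, update method <update>, update rows <no>, delete rows <no>.",
    "BLKLOAD: HANA table <DH_TEST>, type <Column store>, commit size <1000>, "
    "auto correct load <no>, update method <update>, update rows <no>, delete rows <no>.",
]

# Extract the table name and the commit size from each BLKLOAD line.
pattern = re.compile(r"BLKLOAD: HANA table <([^>]+)>.*commit size <(\d+)>")

for line in log_lines:
    match = pattern.search(line)
    if match:
        table, commit_size = match.group(1), int(match.group(2))
        print(f"{table}: commit size {commit_size}")
```

Running this over a full trace log file would show at a glance which default each environment resolved to.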

 

We're using:

DS - 14.2.1.622

HANA - 1.00.85

 

Thanks

Dan

