Teradata JDBC Error 1338 after a certain number of batch executions
I am trying to insert 3.8M records into a Teradata table containing 14 columns, in batches of 10,000 records, using the RJDBC package in R.
It always crashes after inserting 380,000 records, i.e. at the 39th batch execution.
This is the error I get:
Error in .jcall(ps, "[I", "executeBatch") :
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.10.00.22] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
I've tried switching to a different set of rows from the same data frame, and the behavior remains the same: it crashes at the 39th batch execution.
Any idea what this Error 1338 is, and what could be the fix? Also, it mentions getNextException, but how do I use that with R?
The approach I'm using is similar to this one: https://developer.teradata.com/blog/ulrich/2013/11/a-wider-test-case-on-r-jdbc-fastload
Solution 1:[1]
To get the initial exception object, you need to use .jgetEx, and then you can call the getNextException method on it to retrieve the root cause.
See this link on the Teradata developer forum for a complete example.
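For illustration, here is a minimal R sketch of that pattern, assuming `ps` is the Java PreparedStatement driving the batch insert (as in the fastload blog post linked in the question). Calling executeBatch with check = FALSE keeps the Java exception pending so .jgetEx can fetch it, and getNextException then walks down to the underlying cause:

```r
library(rJava)

# Run the batch without converting a Java exception into an R error,
# so the exception object stays available to .jgetEx afterwards.
.jcall(ps, "[I", "executeBatch", check = FALSE)

ex <- .jgetEx(clear = TRUE)   # the BatchUpdateException, or NULL if the batch succeeded
while (!is.jnull(ex)) {
  cat(.jcall(ex, "S", "getMessage"), "\n")
  # getNextException() is declared on java.sql.SQLException and returns the
  # next, more specific exception in the chain (null at the end of the chain)
  ex <- .jcall(ex, "Ljava/sql/SQLException;", "getNextException")
}
```

The last message printed is usually the Teradata error that actually aborted the batch.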
Typical causes of an exception during a batch insert are:
- "value related": invalid values, excessive precision of numeric data
- "size related" : the inserted rows create a "no more room in database" error
Since you say that you've tested different datasets and they all fail at the same batch execution, this is very likely a "no more room in database" error.
However, this doesn't necessarily mean that the space granted to you is full: it might be that the table has a poorly chosen Primary Index (or the data has an error that produces duplicate primary index values), resulting in a very skewed distribution. In this case, you would be wasting a lot of space that could be reclaimed by defining a more apt primary index (or by using a NOPI table, if you're using it as some sort of staging table).
Here you can find a discussion on this "space waste" topic with lots of useful queries to diagnose the issue.
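If skew is the suspect, one way to check from the same R session is to query DBC.TableSizeV, which reports permanent space per AMP (Vproc). This is only a hedged sketch: `conn` is assumed to be an existing RJDBC connection, and 'your_database' is a placeholder for the database that holds the target table.

```r
library(RJDBC)

# Compare the largest per-AMP footprint of each table with the average one;
# a large gap means the rows are concentrated on a few AMPs (skewed PI).
skew <- dbGetQuery(conn, "
  SELECT TableName,
         SUM(CurrentPerm)                                             AS TotalPerm,
         MAX(CurrentPerm)                                             AS MaxAmpPerm,
         AVG(CurrentPerm)                                             AS AvgAmpPerm,
         (1 - AVG(CurrentPerm) / NULLIFZERO(MAX(CurrentPerm))) * 100  AS SkewPct
  FROM   DBC.TableSizeV
  WHERE  DatabaseName = 'your_database'
  GROUP  BY TableName
  ORDER  BY SkewPct DESC")

head(skew)
```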
Solution 2:[2]
The same happened to me: I was trying to insert 27.4M records into a Teradata table containing 8 columns, and I got the same error at the 5.35M row.
The only thing that worked for me was to make a commit every 1M records.
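A minimal sketch of what that could look like with RJDBC and rJava, assuming a non-FastLoad connection `conn` with autocommit turned off, a data frame `df` to load, and an existing table target_table with matching columns (all names are placeholders, and every value is bound as a string for simplicity):

```r
library(RJDBC)
library(rJava)

batch_size   <- 10000L
commit_every <- 1000000L

jc <- conn@jc                                   # underlying java.sql.Connection
.jcall(jc, "V", "setAutoCommit", FALSE)

sql <- paste0("INSERT INTO target_table VALUES (",
              paste(rep("?", ncol(df)), collapse = ","), ")")
ps  <- .jcall(jc, "Ljava/sql/PreparedStatement;", "prepareStatement", sql)

rows_since_commit <- 0L
for (start in seq(1L, nrow(df), by = batch_size)) {
  end <- min(start + batch_size - 1L, nrow(df))
  for (i in start:end) {
    for (j in seq_len(ncol(df)))
      .jcall(ps, "V", "setString", j, as.character(df[i, j]))  # bind each column
    .jcall(ps, "V", "addBatch")
  }
  .jcall(ps, "[I", "executeBatch")
  rows_since_commit <- rows_since_commit + (end - start + 1L)
  if (rows_since_commit >= commit_every) {      # commit every 1M rows, as above
    .jcall(jc, "V", "commit")
    rows_since_commit <- 0L
  }
}
.jcall(jc, "V", "commit")                       # final commit for the remainder
.jcall(ps, "V", "close")
```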
Solution 3:[3]
A few notes here.
- The first is GC. It's not an issue in a typical Java program, but under a load of millions of records, try calling Thread.sleep occasionally for a second or so; this gives the JVM a chance to run GC (see the sketch after this list). I found some race conditions in the Teradata JDBC driver version 17.00.00.03: for performance reasons it reuses old objects holding a weak reference to something waiting to be collected. When working with FastLoad or MultiLoad, this becomes crucial.
- Ensure you have enough space in your database. This can also cause this exception.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | FTello31 |
| Solution 3 | msangel |