Datometry Hyper-Q implements configurable, adaptive bulk load error handling that lets users strike the right balance between performance and error granularity when migrating to modern MPP data warehouse systems. This article provides background on bulk loading and on error handling in bulk load operations, describes operational best practices for bulk load error handling in deployments of Hyper-Q with a modern Data Warehouse system, and explains the available configuration options and their impact. Throughout this article, MLoad is used as a representative example of a bulk load client application and SQL Data Warehouse as a representative example of a modern Data Warehouse system.
What is the difference between Teradata and modern Data Warehouse systems?
Teradata performs bulk load processing and error handling at a per-row level and individually records every row that could not be loaded successfully. This includes both the acquisition of data from the bulk load source into the database as well as the processing of all loaded tuples through the application query, which allows for powerful transformations and application strategies (UPDATE, INSERT, UPSERT).
Modern data warehouse systems separate data transformation from data loading to achieve best possible throughput. The bulk load client utilities for modern data warehouse systems focus on the acquisition of data. Data is applied to the target table directly and without any transformation.
How does Hyper-Q support existing bulk load scripts that combine acquisition and application?
Users leverage the powerful flexibility and expressiveness of Teradata bulk load scripting languages in their ETL/ELT setup. Hyper-Q enables the use of the existing bulk load scripts and the application logic they implement against the new, modern Data Warehouse system by handling the acquisition and application steps expressed in the script. It leverages bulk load mechanisms native to the modern Data Warehouse system to efficiently load data into a work table in the warehouse. Once all data has been acquired, Hyper-Q performs the transformations by running database queries against the work table and the target table. This includes, for example, the emulation of UPSERT processing on target systems without native support for UPSERTs.
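To make the application step concrete, the following is a minimal sketch of how an UPSERT can be emulated with two plain SQL statements against a work table and a target table. This is illustrative Python/SQLite code, not Hyper-Q's actual SQL; the table and column names are made up:

```python
# Sketch: emulate UPSERT as UPDATE-then-INSERT against a work table.
# Hypothetical schema: target(k, v) and work(k, v); not Hyper-Q internals.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (k INTEGER PRIMARY KEY, v TEXT)")
cur.execute("CREATE TABLE work (k INTEGER, v TEXT)")
cur.execute("INSERT INTO target VALUES (1, 'old')")
cur.executemany("INSERT INTO work VALUES (?, ?)", [(1, 'new'), (2, 'fresh')])

# Step 1: UPDATE target rows that have a matching row in the work table.
cur.execute("""
    UPDATE target SET v = (SELECT w.v FROM work w WHERE w.k = target.k)
    WHERE k IN (SELECT k FROM work)
""")
# Step 2: INSERT work rows that have no match in the target table.
cur.execute("""
    INSERT INTO target SELECT k, v FROM work w
    WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.k = w.k)
""")
rows = sorted(cur.execute("SELECT k, v FROM target").fetchall())
# rows == [(1, 'new'), (2, 'fresh')]
```

Running both statements in sequence yields the UPSERT result: existing key 1 is updated, new key 2 is inserted.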
How does Hyper-Q perform bulk load error handling?
Acquisition phase
During the acquisition phase, Hyper-Q utilizes the native bulk load client of the target system, configured to write an error to a log file or error table (depending on the client used) for any row that could not be successfully transferred to the work table. For each batch of rows processed, Hyper-Q records an entry in the error table specified by the MLoad job. This entry contains details on how to locate and interpret the client's error log file or table.
Application phase
During the application phase, Hyper-Q utilizes regular SQL statements to perform the transformations on the data. As with any other SQL request, if any row fails with an error, the whole statement fails. For example, if any row processed in a SELECT statement fails with a division by zero, the whole statement fails. This means that by default, if any row fails during the application phase for any reason, no rows will be loaded into the target table. Hyper-Q records an entry in the error table specified by the MLoad job, detailing the error that occurred.
Decoding entries in the error table
The entries recorded in the MLoad error table can be decoded by running the following query against SQL DW directly:
SELECT CONVERT(varchar(max), CONVERT(varbinary(max), '0x' + CONVERT(varchar(max), hostdata, 0), 1), 0) FROM <errortable>;
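The query unwraps the hex digits stored in the hostdata column back into readable text. The same final decoding step can be sketched client-side; the sample value below is made up for illustration, assuming the error text is stored as single-byte-encoded characters:

```python
# Client-side sketch of the decode step: a hostdata value rendered as a
# string yields hex digits, which decode to the readable error message.
# hostdata_hex is a hypothetical sample value, not taken from a real job.
hostdata_hex = "4661696c656420746f2072756e207175657279"
decoded = bytes.fromhex(hostdata_hex).decode("latin-1")
# decoded == "Failed to run query"
```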
How can I get more granular details similar to Teradata bulk loading?
Hyper-Q provides the option of advanced adaptive error handling (AAEH) during bulk load processing to improve the granularity of error handling during the application phase. When enabled, Hyper-Q automatically retries the application of smaller chunks of the work table, up to a configurable error row count. It employs a depth-first strategy that repeatedly breaks the work table into progressively smaller chunks, as needed, to apply all rows to the target table that can be applied successfully.
What configuration options does Hyper-Q offer for bulk loading?
Hyper-Q recognizes the gateway configuration option bulk_load_retry_max_error_rows to configure AAEH.
- When the option is not present or set to 0, AAEH is disabled.
- When the option is set to a value greater than 0, AAEH is enabled. The value controls when AAEH aborts processing, expressed as the number of individual rows that failed to be applied. A value of 1 means that as soon as the granularity of a single row has been reached and that row causes the transformation query to fail, AAEH stops processing any further rows from the work table that have not yet been applied.
What is the recommendation for configuring AAEH?
A user's specific requirements for performance and error granularity factor into determining the best configuration for error handling. As a general principle, since the potential cost increases with the configured maximum error row count, the value should be set as low as possible to limit the performance impact when invalid data is present. There are four typical scenarios:
- The objective is to get the best possible performance and data is expected to be clean or data issues can be identified and fixed without having to have the specific offending row identified by the system. In this case, AAEH should be disabled.
- Identical to Scenario 1 above with the difference that the first offending row must be identified explicitly by the system. In this case, AAEH should be enabled and configured with a maximum error row count of 1.
- All valid data must be loaded into the target table. In this case, AAEH should be enabled and configured with a maximum error row count that is larger than the expected failing rows per load job.
- All invalid data needs to be identified at row granularity. In this case, AAEH should be enabled and configured with a maximum error row count that is larger than the expected failing rows per load job.
Note that Scenarios 3 and 4 should only be considered when there are very few invalid rows in the input data, in particular if the application query is non-trivial and costly to execute. For Scenario 2, too, the cost of running the application query must be considered: it will be executed at least log2(n) times for a work table of n rows.
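To put a rough number on the Scenario 2 cost, the at-least-log2(n) bound can be evaluated for typical load sizes. This is a sketch of the lower bound stated above, not a precise model of Hyper-Q's retry count:

```python
# Rough cost estimate for Scenario 2 (maximum error row count = 1):
# with a depth-first binary split, isolating the first failing row in an
# n-row work table takes at least on the order of log2(n) executions of
# the application query.
import math

def min_query_executions(n):
    """Lower bound on application-query runs to isolate one failing row."""
    return math.ceil(math.log2(n))

# For example, a 1,000,000-row load implies at least ~20 executions of a
# potentially expensive application query.
```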
Examples
The following examples illustrate the effect of enabling advanced adaptive error handling.
Setup
All examples use the following MLoad script and data file. The example loads a series of strings into a column of type DATE. The data file contains strings representing calendar dates in ISO 8601 format as well as invalid strings, such as "loremipsum" and three date strings with a month value exceeding 12. The transformation that is performed when applying the data to the target table is casting the string to a date in the INSERT statement.
MLoad script:
.LOGTABLE demo.ML_mytable;
.LOGON ***/***,***;
.BEGIN IMPORT MLOAD
TABLES demo.mytable
WORKTABLES demo.WT_mytable
ERRORTABLES demo.ET_mytable
demo.UV_mytable
SESSIONS 5;
.LAYOUT DATAIN_LAYOUT;
.FIELD mydate 1 CHAR(10);
.FILLER EOL_PAD 11 CHAR( 1);
.DML LABEL INSERT_DML;
INSERT INTO demo.mytable (
mydate = :mydate
);
.IMPORT INFILE mytable.dat
FORMAT UNFORMAT
LAYOUT DATAIN_LAYOUT
APPLY INSERT_DML;
.END MLOAD;
.LOGOFF &SYSRC;
Data file mytable.dat:
2000-01-01
2000-02-02
2000-03-03
2000-04-04
2000-05-05
2000-06-06
2000-07-07
loremipsum
2000-08-08
2000-09-09
2000-10-10
2000-11-11
2000-12-12
2000-13-13
2000-14-14
2000-15-15
Example 1: AAEH disabled
When advanced adaptive error handling is disabled, a single attempt is made to apply the transformations to all of the data in the work table and store the results in the target table. If any row fails, which is the case in this example, no data is inserted into the target table, as the MLoad output shows:
**** 07:59:52 UTY0818 Statistics for table demo.mytable:
Inserts: 0
Updates: 0
Deletes: 0
MLoad output further indicates that the error table demo.ET_mytable contains one row:
**** 08:00:11 UTY0825 Error table statistics for:
Target table 1: demo.mytable
Number of Rows Error Table Name
==================== ==================================================
1 demo.ET_mytable
0 demo.UV_mytable
The decoded entry in the error table reads:
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC
Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Conversion failed when converting date and/or
time from character string. SQLSTATE IS: 22007, Backend Error Code:
241, SQL State: 22007, target schema: 'demo', target table: 'mytable',
work table schema: 'demo', work table : 'wt_mytable_18121971', rownumbers
: 'N/A' |
The entry shows the error message retrieved from the downstream database ("Conversion failed when converting date and/or time from character string."), the schema and object names of both the target table as well as the internal work table, and the row numbers for the tuples from the work table processed in this request. Here, "N/A" signifies that row numbers are not available as the whole table was run through the application query.
Example 2: AAEH enabled with maximum error row count set to 1
When advanced adaptive error handling is enabled, repeated attempts are made to successfully insert valid data by breaking the data in the work table into smaller pieces and retrying the application query. AAEH's configuration option (maximum error row count) is set to 1, which causes AAEH to stop processing after the first failing row (row 8 "loremipsum" in the input file). With this setting, 7 valid rows have been successfully inserted into the target table per MLoad output:
**** 08:39:36 UTY0818 Statistics for table demo.mytable:
Inserts: 7
Updates: 0
Deletes: 0
After this run, the target table contains the seven valid dates (2000-01-01 through 2000-07-07).
MLoad output further indicates that the error table demo.ET_mytable contains three rows:
**** 08:39:55 UTY0825 Error table statistics for:
Target table 1: demo.mytable
Number of Rows Error Table Name
==================== ==================================================
3 demo.ET_mytable
0 demo.UV_mytable
The decoded entries in the error table read:
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC
Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for
SQL Server][SQL Server]Conversion failed when converting date and/or
time from character string. SQLSTATE IS: 22007, Backend Error Code:
241, SQL State: 22007, target schema: 'demo', target table: 'mytable',
work table schema: 'demo', work table : 'wt_mytable_26958950', rownumbers
: '8' |
MLOAD retry error: maximum number of errors reached, target schema:
'demo', target table: 'mytable', work table schema: 'demo', work
table : 'wt_mytable_26958950', rownumbers : '9' |
MLOAD retry error: maximum number of errors reached, target schema:
'demo', target table: 'mytable', work table schema: 'demo', work
table : 'wt_mytable_26958950', rownumbers : '(10, 17)' |
The first entry shows the error message retrieved from the downstream database ("Conversion failed when converting date and/or time from character string."), the schema and object names of both the target table as well as the internal work table, and the row number (8) for the first tuple from the work table that failed. Since the maximum error row count is set to 1 there are no further entries in the table pertaining to failed rows.
The second and third entries show which chunks of the work table were not retried because the maximum number of errors had been reached: row number 9, which would have been retried next as its own chunk, and the chunk containing rows 10 through 17.
Processing explained
After the initial, default attempt of processing all rows from the work table failed, AAEH starts. Its depth-first strategy repeatedly breaks the work table into progressively smaller chunks. In this example, it processed the following ranges of rows through the transformation query:
Query count | Rows | Result | Row error count |
1 | 1-9 | fail | 0 |
2 | 1-5 | success | 0 |
3 | 6-9 | fail | 0 |
4 | 6-7 | success | 0 |
5 | 8-9 | fail | 0 |
6 | 8 | fail | 1 - threshold reached, end processing |
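The retry sequence above can be reproduced with a small simulation of the depth-first strategy. This is an illustrative sketch, not Hyper-Q source code; the ceiling-midpoint split and the processing order are assumptions inferred from the tables in these examples:

```python
# Sketch of AAEH's depth-first retry strategy (illustrative, not Hyper-Q
# source). Assumption: chunks split at the ceiling midpoint.

def apply_chunk(rows, bad_rows):
    """Simulate the transformation query: fails if any row in the chunk is bad."""
    return not any(r in bad_rows for r in rows)

def aaeh(rows, bad_rows, max_error_rows):
    applied, failed, skipped = [], [], []
    errors = 0

    def process(chunk):
        nonlocal errors
        if errors >= max_error_rows:   # threshold reached: record, do not retry
            skipped.append(chunk)
            return
        if apply_chunk(chunk, bad_rows):
            applied.extend(chunk)
            return
        if len(chunk) == 1:            # single-row granularity: count the error
            failed.extend(chunk)
            errors += 1
            return
        mid = (len(chunk) + 1) // 2    # depth-first: first half, then second
        process(chunk[:mid])
        process(chunk[mid:])

    process(rows)
    return applied, failed, skipped

# Work-table rows 1-17 with the invalid rows from the example data file,
# maximum error row count 1:
applied, failed, skipped = aaeh(list(range(1, 18)), {8, 14, 15, 16}, 1)
# applied -> rows 1-7; failed -> [8]; skipped -> [9] and rows 10-17
```

With a maximum error row count of 1, the simulation applies rows 1 through 7, flags row 8, and skips row 9 and the chunk of rows 10 through 17, matching the error-table entries shown above.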
Example 3: AAEH enabled with maximum error row count set to 10
When advanced adaptive error handling is enabled, repeated attempts are made to successfully insert valid data by breaking the data in the work table into smaller pieces and retrying the application query. AAEH's configuration option (maximum error row count) is set to 10, which causes AAEH to process all rows (4 error rows in the input file). With this setting, all 12 valid rows have been successfully inserted into the target table per MLoad output:
**** 09:12:37 UTY0818 Statistics for table demo.mytable:
Inserts: 12
Updates: 0
Deletes: 0
After this run, the target table contains the twelve valid dates (2000-01-01 through 2000-12-12).
MLoad output further indicates that the error table demo.ET_mytable contains four rows:
**** 09:12:54 UTY0825 Error table statistics for:
Target table 1: demo.mytable
Number of Rows Error Table Name
==================== ==================================================
4 demo.ET_mytable
0 demo.UV_mytable
The decoded entries in the error table read:
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Conversion failed when converting date and/or time from character string. SQLSTATE IS: 22007, Backend Error Code: 241, SQL State: 22007, target schema: 'demo', target table: 'mytable', work table schema: 'demo', work table : 'wt_mytable_39475909', rownumbers : '16' |
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Conversion failed when converting date and/or time from character string. SQLSTATE IS: 22007, Backend Error Code: 241, SQL State: 22007, target schema: 'demo', target table: 'mytable', work table schema: 'demo', work table : 'wt_mytable_39475909', rownumbers : '8' |
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Conversion failed when converting date and/or time from character string. SQLSTATE IS: 22007, Backend Error Code: 241, SQL State: 22007, target schema: 'demo', target table: 'mytable', work table schema: 'demo', work table : 'wt_mytable_39475909', rownumbers : '14' |
Failed to run query on the underlying database: Error Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server][Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Conversion failed when converting date and/or time from character string. SQLSTATE IS: 22007, Backend Error Code: 241, SQL State: 22007, target schema: 'demo', target table: 'mytable', work table schema: 'demo', work table : 'wt_mytable_39475909', rownumbers : '15' |
The table contains one entry for each failing row. Each entry shows the error message retrieved from the downstream database ("Conversion failed when converting date and/or time from character string."), the schema and object names of both the target table as well as the internal work table, and the row number.
Processing explained
After the initial, default attempt of processing all rows from the work table failed, AAEH starts. Its depth-first strategy repeatedly breaks the work table into progressively smaller chunks. In this example, it processed the following ranges of rows through the transformation query:
Query count | Rows | Result | Row error count |
1 | 1-9 | fail | 0 |
2 | 1-5 | success | 0 |
3 | 6-9 | fail | 0 |
4 | 6-7 | success | 0 |
5 | 8-9 | fail | 0 |
6 | 8 | fail | 1 |
7 | 9 | success | 1 |
8 | 10-17 | fail | 1 |
9 | 10-13 | success | 1 |
10 | 14-17 | fail | 1 |
11 | 14-15 | fail | 1 |
12 | 14 | fail | 2 |
13 | 15 | fail | 3 |
14 | 16-17 | fail | 3 |
15 | 16 | fail | 4 |
16 | 17 | success | 4 - end of work table reached (input file and work table contain 16 data rows; row 17 is a sentinel) |