
Alteryx Designer Desktop Discussions


DataWrap2ODBC Error - Mapped column not part of INSERT statement

Bilbottom
7 - Meteor

I've set up a table in our Redshift database whose first 2 columns are keys and whose last 3 columns are data. I'm trying to populate the 3 data columns from Alteryx using an Output tool, expecting the first 2 columns to auto-populate.

 

The DDL for the table is:

 

CREATE TABLE table_name
(
    key INTEGER IDENTITY(1,1) PRIMARY KEY,
    parent_key INTEGER DEFAULT -1 REFERENCES table_name(key),
    file_name VARCHAR(48) NOT NULL,
    file_datetime TIMESTAMP NOT NULL,
    created_by VARCHAR(32)
);

 

 

The expectation is that there will be several cases where the record will have no parent_key, hence the DEFAULT -1.
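
 

For example, a record with no parent should be loadable by omitting key and parent_key entirely and letting IDENTITY and the DEFAULT fill them in (a sketch with sample values):

 

INSERT INTO table_name (file_name, file_datetime, created_by)
VALUES ('my_file.csv', '2021-10-01 12:00:00', 'bilbottom');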

 

The workflow I'm trying to run creates a stream with just the 3 columns file_name, file_datetime, and created_by, their data types set to match what the database expects:

 

[screenshot: Bilbottom_0-1633109254381.png]

 

This is then pushed into an Output tool where Output Options is set to Append Existing and the Append Field Map is set to map those 3 fields from the input stream to their corresponding Redshift columns:

 

[screenshot: Bilbottom_1-1633109431128.png]

 

[screenshot: Bilbottom_2-1633109459670.png]

 

However, despite mapping the columns explicitly, the Output tool is not including file_name in the INSERT statement and consequently returns the following error:

 

DataWrap2ODBC::SendBatch: [Amazon][Amazon Redshift] (30) Error occurred while trying to execute a query:

[SQLState XX000] ERROR:  Cannot insert a NULL value into column file_name
DETAIL:
  -----------------------------------------------
  error:  Cannot insert a NULL value into column file_name
  code:      8007
  context:   query execution
  query:     18004363
  location:  column:3
  process:   query2_56_18004363 [pid=31420]
  -----------------------------------------------

Insert into "schema_name"."table_name"("file_datetime","created_by") Values (?,?)

 

Just to test it, I dropped the NOT NULL constraint from the table and the upload then ran without error, but no value was passed into file_name since the above query ran as-is.
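
 

A quick check along these lines confirmed it (a sketch, not the exact query I ran):

 

SELECT key, parent_key, file_name
FROM schema_name.table_name
WHERE file_name IS NULL;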

 

I also tried using the In-DB tools instead, with a Data Stream In to a temporary table followed by a Write Data In-DB using the Append Existing option, but that also failed because it tries to INSERT the whole temp table, which is missing the key columns.
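
 

What I'd want that step to run instead is an explicit-column INSERT from the staging table, along the lines of (assuming the temporary table is called temp_table):

 

INSERT INTO schema_name.table_name (file_name, file_datetime, created_by)
SELECT file_name, file_datetime, created_by
FROM temp_table;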

 

How can I get the Output tool to include the file_name column in its INSERT query? All I want to run is:

 

INSERT INTO schema_name.table_name(file_name, file_datetime, created_by) VALUES (?, ?, ?)

 

 

4 REPLIES
danilang
19 - Altair

Hi @Bilbottom,

 

Since the query ran correctly when you dropped the NOT NULL constraint from the DB table, it looks like your workflow is trying to push NULL values into some of the file_name records. Check the values going into the Output Data tool for NULL values in file_name.

 

Dan

Bilbottom
7 - Meteor

Hi @danilang, thanks for the response.

 

I wasn't clear enough in my original post -- I've confirmed that the file_name column has no NULL values (I'm only pushing a single record up at the moment). Even when the query ran with the NOT NULL constraint dropped, it still just INSERTed a NULL value for file_name in the DB, despite there being a value in the input stream.

 

This is because the Output tool is running an INSERT statement that omits the file_name column, and I have no idea how to force it to include file_name when I've already explicitly mapped it in the tool configuration.

Treyson
13 - Pulsar

Did you end up figuring this out @Bilbottom?

Treyson Marks
Senior Analytics Engineer
Bilbottom
7 - Meteor

I did not, so I used Python instead :P
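
 

For anyone who lands here, a minimal sketch of that kind of Python fallback, assuming pyodbc against the same Redshift ODBC DSN (hypothetical connection details, not my exact script):

 

import pyodbc

# Hypothetical DSN and credentials
conn = pyodbc.connect("DSN=Redshift;UID=user;PWD=secret")
cursor = conn.cursor()

# Sample record matching the three mapped columns
rows = [("my_file.csv", "2021-10-01 12:00:00", "bilbottom")]

# Listing the columns explicitly means key and parent_key fall back to
# their IDENTITY and DEFAULT -1 values
cursor.executemany(
    "INSERT INTO schema_name.table_name (file_name, file_datetime, created_by) "
    "VALUES (?, ?, ?)",
    rows,
)
conn.commit()
conn.close()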
