
Alteryx Designer Desktop Discussions

Find answers, ask questions, and share expertise about Alteryx Designer Desktop and Intelligence Suite.

"Write Data in DB" tool is changing from update rows to create table when saved

Lokesh123
5 - Atom

In-Database: the "Write Data In-DB" tool is changing from Update Rows to Create Table when saved. This was working previously.

If I set Update Rows, map the fields, and run locally on my machine, it works. After saving to either my computer or the server and reopening it, the tool changes to Create Table.

 

Any idea why it's changing? I appreciate any help or tips.

10 REPLIES
TimothyH
Alteryx

Hi @Lokesh123, is it possible that the workflow is processing the data and changing something about the schema that isn't recognized in the target table? A change in a data type could cause this.

dpage_dte
6 - Meteoroid

Experiencing a similar issue with Write Data In-DB on both 2024.1 and 2024.2; the tool frequently "resets", though in my case it is with the Merge Tables option.

apathetichell
19 - Altair

What database are you writing to? Is it Databricks?

dpage_dte
6 - Meteoroid

In my case, yes: Azure Databricks with the Simba Spark ODBC driver (v2.9). It seems to happen more often after a failed run; all of the merge mapping is lost, or the tool is no longer set to merge mode at all.

apathetichell
19 - Altair

@dpage_dte - background here: this is a relatively new option (Merge in the Databricks write), and my hunch is that when the Merge fails, the fallback behavior is to reset the tool rather than capture the error and keep the configuration.

-

1) If you want to see why it's failing, turn on logging at the ODBC level.

2) Reach out to the Product team in addition to the help team to talk about the behavior. Because this is a new feature and because Databricks is sexy, they will want to hear about it. Paging @renat_isch ... I don't think you manage this side of the Databricks product, but you may be able to point this in the right direction.

dpage_dte
6 - Meteoroid

Thank you for the response and the context; that was the impression I got as well. I was able to turn on the ODBC logging, and it seems there's no way to simulate the INSERT portion of the MERGE/UPSERT with an auto-generated identity column (the Alteryx merge passes NULL if I omit the identity column), so the tool probably won't work for my use case anyway.
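
To illustrate what I'm seeing (a rough sketch only; the table and column names here are invented, not my actual schema):

    -- Hypothetical target table with an auto-generated identity column
    CREATE TABLE target_db.dim_customer (
      customer_sk   BIGINT GENERATED ALWAYS AS IDENTITY,
      customer_id   STRING,
      customer_name STRING
    ) USING DELTA;

    -- A tool-generated merge maps every target column, so the INSERT
    -- branch supplies a value (NULL) for customer_sk, and Databricks
    -- rejects explicit values for a GENERATED ALWAYS column:
    MERGE INTO target_db.dim_customer AS t
    USING staging_db.stg_customer AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET t.customer_name = s.customer_name
    WHEN NOT MATCHED THEN
      INSERT (customer_sk, customer_id, customer_name)
      VALUES (NULL, s.customer_id, s.customer_name);  -- fails on the identity column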

 

I'll let my CSM know what I'm seeing, happy to support product however needed. 

 

In the meantime, is there a way I can run an arbitrary SQL statement in-DB instead? I can stage the data in a temp table using a Data Stream In tool and then run the MERGE myself with the correct UPDATE and INSERT conditions.
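
Something like this is what I have in mind (untested sketch, same invented names as above, and assuming the runtime will generate identity values through a MERGE), leaving the identity column out of the INSERT list entirely:

    -- Hand-written merge from the staged table (hypothetical names)
    MERGE INTO target_db.dim_customer AS t
    USING staging_db.stg_customer AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN
      UPDATE SET t.customer_name = s.customer_name
    WHEN NOT MATCHED THEN
      -- customer_sk omitted so Databricks assigns it
      INSERT (customer_id, customer_name)
      VALUES (s.customer_id, s.customer_name);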

apathetichell
19 - Altair

You can do what you want, but you'll need a workaround. Short version: there is no SQL tool in-DB. I've mentioned numerous times that this would be awesome... but there are tools you can use to execute SQL: Dynamic Input, Input Data, and Output Data (via their Pre SQL/Post SQL options). My recommendation would be to use some kind of batch macro to write your data (to control order of operations), and then have one of those tools execute your SQL after the data has been written. Note: you do not actually need to write or read any data here; a dummy table or single row will do. You only need the Pre SQL/Post SQL component of the tool to pass your SQL to your connection, something like the sketch below.
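
For example, the Post SQL on the dummy output could carry something like this (untested sketch with placeholder names; split it into separate statements if your driver won't accept multiple statements in one Post SQL):

    -- Runs after the dummy write completes: merge the staged rows,
    -- then clean up the staging table.
    MERGE INTO target_db.dim_customer AS t
    USING staging_db.stg_customer AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET t.customer_name = s.customer_name
    WHEN NOT MATCHED THEN INSERT (customer_id, customer_name)
      VALUES (s.customer_id, s.customer_name);

    DROP TABLE IF EXISTS staging_db.stg_customer;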

dpage_dte
6 - Meteoroid

Thank you for the response, again. Is this the workaround you describe? Post SQL for In Database tools - Alteryx Community

 

I'll give it a try. I had avoided it because it seemed a little counterintuitive to pull back out of the In-DB stream just to run the SQL, and as you noted, it makes sequencing multiple streams a little difficult. I'll happily vote for any suggestion to run SQL directly. It seems like the Apache Spark Code tool could probably do it, but I'm hesitant to add that complexity for a single quick statement.

apathetichell
19 - Altair

Yes, that works. If you have to control order of processing (i.e., your Post SQL is executing too early), you can try a canvas/batch macro that does your in-DB work, then an action on the canvas or a second macro to run your Post SQL.

 

Your other alternative is Run Command/CLI. I never got the Apache Spark Code tool to do what I wanted or needed and kind of gave up on it.
